
windows - Spark 2.0: Relative path in absolute URI (spark-warehouse)

I'm trying to migrate from Spark 1.6.1 to Spark 2.0.0 and I am getting a weird error when trying to read a CSV file into Spark SQL. Previously, when I would read a file from local disk in pyspark, I would do:

Spark 1.6

# sqlContext is provided by the pyspark shell in 1.6; CSV support
# comes from the external spark-csv package.
df = sqlContext.read \
        .format('com.databricks.spark.csv') \
        .option('header', 'true') \
        .load('file:///C:/path/to/my/file.csv', schema=mySchema)

In the latest release I think it should look like this:

Spark 2.0

from pyspark.sql import SparkSession

# Spark 2.0 unifies the entry point in SparkSession and ships the
# CSV reader built in.
spark = SparkSession.builder \
           .master('local[*]') \
           .appName('My App') \
           .getOrCreate()

df = spark.read \
        .format('csv') \
        .option('header', 'true') \
        .load('file:///C:/path/to/my/file.csv', schema=mySchema)

But no matter how I adjust the path, I keep getting this error:

IllegalArgumentException: 'java.net.URISyntaxException: Relative path in 
absolute URI: file:/C:/path//to/my/file/spark-warehouse'

I'm not sure whether this is a Windows-specific issue or something I'm missing. I was excited that the spark-csv package now ships with Spark out of the box, but I can't seem to get it to read any of my local files anymore. Any ideas?



1 Reply


Digging around in the latest Spark documentation, I noticed a new configuration setting that I hadn't seen before:

spark.sql.warehouse.dir

So I went ahead and added this setting when I set up my SparkSession:

spark = SparkSession.builder \
           .master('local[*]') \
           .appName('My App') \
           .config('spark.sql.warehouse.dir', 'file:///C:/path/to/my/') \
           .getOrCreate()

That seems to set the working directory, and then I can just feed my filename directly into the csv reader:

df = spark.read \
        .format('csv') \
        .option('header', 'true') \
        .load('file.csv', schema=mySchema)

Once I set the warehouse directory, Spark was able to locate all of my files, and my app now finishes successfully. The amazing thing is that it runs about 20 times faster than it did in Spark 1.6, so they really have done some impressive work optimizing their SQL engine. Spark it up!
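If you want to confirm the setting actually took effect, you can read it back from the session's runtime config. A minimal sketch, assuming the SparkSession built above; the printed path is just the placeholder used here:

# Read back the effective warehouse location from the running session.
# Assumes the `spark` session configured above; the path is a placeholder.
print(spark.conf.get('spark.sql.warehouse.dir'))
# -> file:///C:/path/to/my/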

