0 votes
2.0k views
in Technique by (71.8m points)

scala - Load Spark data locally Incomplete HDFS URI

I've run into a problem loading a local CSV file when running through SBT. I've written a Spark program in Scala (using Eclipse) that reads the following file:

val searches = sc.textFile("hdfs:///data/searches")

This works fine on HDFS, but for debugging purposes I would like to load this file from a local directory, which I have set up inside the project directory.

So I tried the following:

val searches = sc.textFile("file:///data/searches")
val searches = sc.textFile("./data/searches")
val searches = sc.textFile("/data/searches")

None of these lets me read the file locally, and all of them return this error when run from SBT:

Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs:/data/pages
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
at org.apache.spark.rdd.RDD.count(RDD.scala:904)
at com.user.Result$.get(SparkData.scala:200)
at com.user.StreamingApp$.main(SprayHerokuExample.scala:35)
at com.user.StreamingApp.main(SprayHerokuExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

In the error report, com.user.Result$.get(SparkData.scala:200) is the line where sc.textFile is called. It seems to assume the Hadoop environment by default. Is there anything I can do to read this file locally?
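One way to see why a scheme-less path is being treated as HDFS is to inspect (or, purely for debugging, override) the default filesystem in the SparkContext's Hadoop configuration. This is only a hedged sketch of that idea, not something from the original post; the paths and values are illustrative:

// Show which filesystem scheme-less paths are resolved against
println(sc.hadoopConfiguration.get("fs.defaultFS"))   // e.g. "hdfs://master:9000" when HDFS is the default

// For local debugging only (assumption, not the poster's actual setup): force the local filesystem
sc.hadoopConfiguration.set("fs.defaultFS", "file:///")
val searches = sc.textFile("/data/searches")           // now resolved against the local filesystem (path is illustrative)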

Edit: for running locally, I've reconfigured build.sbt with:

submit <<= inputTask { (argTask: TaskKey[Seq[String]]) => {
  (argTask, mainClass in Compile, assemblyOutputPath in assembly, sparkHome) map {
    (args, main, jar, sparkHome) => {
      args match {
        case List(output) => {
          // Launch the assembled jar through spark-submit with a local master
          val sparkCmd = sparkHome + "/bin/spark-submit"
          Process(
            sparkCmd :: "--class" :: main.get :: "--master" :: "local[4]" ::
            jar.getPath :: "local[4]" :: output :: Nil) !
        }
        case _ => Process("echo" :: "Usage" :: Nil) !
      }
    }
  }
}}

The submit command is what I use to run the code.
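For reference, an invocation of this task from the sbt shell would look roughly like the following (the output argument below is just a placeholder path, not from the original post):

> submit /tmp/searches-output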

Solution found: it turns out that file:///path/ is the correct way to do it, but in my case only the full path worked, i.e. home/projects/data/searches; just data/searches did not (despite running from under the home/projects directory).
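In code, the working form is presumably along these lines (the absolute path below is assumed from the description above; adjust it to wherever the project actually lives):

val searches = sc.textFile("file:///home/projects/data/searches")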


1 Reply

0 votes
by (71.8m points)

Use the full HDFS URI, including the host and port:

val searches = sc.textFile("hdfs://host:port_no/data/searches")

Defaults:

host: master
port_no: 9000
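With those defaults filled in, the call would look like this (host and port here are only the stated defaults; your cluster's configured namenode address may differ):

val searches = sc.textFile("hdfs://master:9000/data/searches")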
