Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others

0 votes
304 views
in Technique by (71.8m points)

python - What is the Spark DataFrame method `toPandas` actually doing?

I'm a beginner with the Spark DataFrame API.

I use this code to load a tab-separated CSV into a Spark DataFrame:

from pyspark.sql.types import StructType, StructField, StringType

lines = sc.textFile('tail5.csv')
parts = lines.map(lambda l: l.strip().split('\t'))  # tab-separated
fnames = *some name list*
schemaData = StructType([StructField(fname, StringType(), True) for fname in fnames])
ddf = sqlContext.createDataFrame(parts, schemaData)

Suppose I create a DataFrame with Spark from new files and convert it to pandas using the built-in method toPandas().

  • Does it store the Pandas object in local memory?
  • Is the Pandas low-level computation all handled by Spark?
  • Does it expose all pandas DataFrame functionality? (I guess yes)
  • Can I convert it with toPandas and just be done with it, without touching the DataFrame API much?

1 Reply

0 votes
by (71.8m points)

Using spark to read in a CSV file to pandas is quite a roundabout method for achieving the end goal of reading a CSV file into memory.

It seems like you might be misunderstanding the use cases of the technologies in play here.

Spark is for distributed computing (though it can be used locally). It's generally far too heavyweight to be used for simply reading in a CSV file.

In your example, the sc.textFile method will simply give you a Spark RDD that is effectively a list of text lines. This likely isn't what you want. No type inference is performed, so if you want to sum a column of numbers in your CSV file, you won't be able to, because they are still strings as far as Spark is concerned.
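The strings-only pitfall can be shown in miniature with plain Python, no Spark required:

```python
# Each CSV line parsed with split() yields strings, not numbers.
row = '10\t20'.split('\t')          # ['10', '20']

# "Summing" the raw values concatenates instead of adding:
wrong = row[0] + row[1]             # '1020'

# An explicit cast is needed before arithmetic works:
right = int(row[0]) + int(row[1])   # 30
```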

Just use pandas.read_csv and read the whole CSV into memory. Pandas will automatically infer the type of each column; the textFile approach shown above does not.
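A minimal sketch of the pandas-only approach. The file contents and column names here are made up for illustration (an io.StringIO stands in for the tab-separated tail5.csv from the question):

```python
import io
import pandas as pd

# Simulate a small tab-separated file like the one in the question.
data = io.StringIO("a\t1\nb\t2\n")

# read_csv parses the whole file into memory and infers column types.
df = pd.read_csv(data, sep='\t', names=['name', 'value'])

# Because 'value' was inferred as an integer column, arithmetic works
# immediately -- no manual casting from strings.
total = df['value'].sum()  # 3
```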

Now to answer your questions:

Does it store the Pandas object in local memory?

Yes. toPandas() will convert the Spark DataFrame into a Pandas DataFrame, which is of course in memory.

Is the Pandas low-level computation all handled by Spark?

No. Pandas runs its own computations; there's no interplay between Spark and pandas, just some API compatibility.

Does it expose all pandas DataFrame functionality?

No. For example, Series objects have an interpolate method which isn't available on PySpark Column objects. There are many methods and functions in the pandas API that are not in the PySpark API.
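As a concrete illustration, Series.interpolate fills gaps in numeric data, a pandas feature with no direct equivalent on a PySpark Column (the sample values are invented):

```python
import numpy as np
import pandas as pd

# A Series with a missing value in the middle.
s = pd.Series([1.0, np.nan, 3.0])

# Linear interpolation (the default) fills the gap from its neighbours.
filled = s.interpolate()  # 1.0, 2.0, 3.0
```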

Can I convert it with toPandas and just be done with it, without touching the DataFrame API much?

Absolutely. In fact, you probably shouldn't use Spark at all in this case: pandas.read_csv will likely handle your use case unless you're working with a huge amount of data.

Try to solve your problem with simple, low-tech, easy-to-understand libraries, and only go to something more complicated as you need it. Many times, you won't need the more complex technology.


