Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


python - Read JSON file as PySpark DataFrame using PySpark?

How can I read the following JSON structure into a Spark DataFrame using PySpark?

My JSON structure:

{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}

I have tried:

df = spark.read.json('simple.json')

I want the output to have a, b, c as columns and the values as the respective rows.

Thanks.



1 Reply


JSON string as a variable

If you have the JSON string in a variable, then you can do:

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
rddjson = sc.parallelize([simple_json])
df = sqlContext.read.json(rddjson)

from pyspark.sql import functions as F
df.select(F.explode(df.results).alias('results')).select('results.*').show(truncate=False)

which will give you

+---+---+----+
|a  |b  |c   |
+---+---+----+
|1  |2  |name|
|2  |5  |foo |
+---+---+----+
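For intuition, the explode + select('results.*') step is just flattening the nested array into one row per element. A plain-Python sketch of the same transformation (using only the standard json module, not Spark) on the asker's string:

```python
import json

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'

# json.loads gives a dict whose 'results' key holds a list of records
parsed = json.loads(simple_json)

# "explode" the array: one output row per element of the list,
# then "select" the struct fields a, b, c as columns
rows = [(rec['a'], rec['b'], rec['c']) for rec in parsed['results']]
print(rows)  # [(1, 2, 'name'), (2, 5, 'foo')]
```

This is exactly the shape of the DataFrame shown above, just as a list of tuples.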

JSON strings as separate lines in a file (sparkContext and sqlContext)

If you have JSON strings as separate lines in a file, then you can read them with sparkContext into an RDD[String] as above, and the rest of the process is the same as above:

rddjson = sc.textFile('/home/anahcolus/IdeaProjects/pythonSpark/test.csv')
df = sqlContext.read.json(rddjson)
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)

JSON strings as separate lines in a file (sqlContext only)

If you have JSON strings as separate lines in a file, then you can use sqlContext alone, but the process is more involved because you have to define the schema yourself:

df = sqlContext.read.text('path to the file')

from pyspark.sql import functions as F
from pyspark.sql import types as T

# schema mirroring the nesting of the data: results is an array of structs
schema = T.StructType([
    T.StructField('results', T.ArrayType(T.StructType([
        T.StructField('a', T.IntegerType()),
        T.StructField('b', T.IntegerType()),
        T.StructField('c', T.StringType())
    ])))
])

df = df.select(F.from_json(df.value, schema).alias('results'))
df.select(F.explode(df['results.results']).alias('results')).select('results.*').show(truncate=False)

which should give you the same result as above.
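This approach assumes each line of the file is itself a complete JSON document matching the declared schema, since from_json is applied row by row. A plain-Python sanity check of that line-per-document layout (the file here is a temporary stand-in for the example's path; field names are the example's):

```python
import json
import tempfile

# Write a small line-delimited JSON file, one complete document per line
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    f.write('{"results":[{"a":1,"b":2,"c":"name"}]}\n')
    f.write('{"results":[{"a":2,"b":5,"c":"foo"}]}\n')
    path = f.name

# Each line must parse independently -- the property the read.text +
# from_json pipeline above relies on
rows = []
with open(path) as f:
    for line in f:
        doc = json.loads(line)
        rows.extend((r['a'], r['b'], r['c']) for r in doc['results'])

print(rows)  # [(1, 2, 'name'), (2, 5, 'foo')]
```

If a document spans multiple lines instead, this per-line parse fails; in that case Spark's reader would need the multiLine option rather than the schemes shown here.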

I hope the answer is helpful.

