
apache spark - Pivoting ArrayType columns in pyspark

I have a PySpark DataFrame with the following structure:

+----------+-------------------+-----------------------------------+------------------+
|      date|         numeric_id|                     feature_column|              city|
+----------+-------------------+-----------------------------------+------------------+
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|            Berlin|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|              Rome|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|           NewYork|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|           Beijing|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|            Berlin|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|              Rome|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|           NewYork|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|           Beijing|
+----------+-------------------+-----------------------------------+------------------+

I want to pivot the DataFrame so that each feature_column x city combination becomes a new column, grouped by date and numeric_id. The output DataFrame should look like:

+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
|      date|   numeric_id| feature_column_Berlin| feature_column_Rome| feature_column_NewYork|feature_column_Beijing|
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
|2017-08-01|   2343434545|  [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...   |[0.0, 0.0, 0.0, 0...  |
|2019-12-01|   6455534545|  [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...   |[0.0, 0.0, 0.0, 0...  |
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+

This is different from the existing question on pivoting strings (Pivot String column on Pyspark Dataframe), since I am dealing with ArrayType columns. It might be easier to implement in Pandas, but handling ArrayType columns there would be tricky, so I am curious how to do it with Spark SQL. Any suggestions?

Question from: https://stackoverflow.com/questions/65912677/pivoting-arraytype-columns-in-pyspark


1 Reply

One way to do it (shown here in Scala; a PySpark sketch of the same approach follows below):

// First, create sample data matching the question's schema and load it into a DataFrame.
// (Run e.g. in spark-shell, where the implicits needed by toDF are pre-imported.)
import org.apache.spark.sql.functions._

val df = Seq(
  ("2017-08-01", "2343434545", Array(0.0, 0.0, 0.0, 0.0), "Berlin"),
  ("2017-08-01", "2343434545", Array(0.0, 0.0, 0.0, 0.0), "Rome"),
  ("2017-08-01", "2343434545", Array(0.0, 0.0, 0.0, 0.0), "NewYork"),
  ("2017-08-01", "2343434545", Array(0.0, 0.0, 0.0, 0.0), "Beijing"),
  ("2019-12-01", "6455534545", Array(0.0, 0.0, 0.0, 0.0), "Berlin"),
  ("2019-12-01", "6455534545", Array(0.0, 0.0, 0.0, 0.0), "Rome"),
  ("2019-12-01", "6455534545", Array(0.0, 0.0, 0.0, 0.0), "NewYork"),
  ("2019-12-01", "6455534545", Array(0.0, 0.0, 0.0, 0.0), "Beijing")
).toDF("date", "numeric_id", "feature_column", "city")

// Pivot on city, then give each pivoted column the feature_column_ prefix.
df.groupBy("date", "numeric_id")
  .pivot("city")
  .agg(collect_list("feature_column"))
  .withColumnRenamed("Beijing", "feature_column_Beijing")
  .withColumnRenamed("Berlin", "feature_column_Berlin")
  .withColumnRenamed("NewYork", "feature_column_NewYork")
  .withColumnRenamed("Rome", "feature_column_Rome")
  .show()

Running this gives one row per (date, numeric_id), with one feature_column_* array column per city:

(screenshot of the pivoted DataFrame omitted)
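Since the question asks for PySpark, here is a minimal sketch of the same approach in PySpark. It assumes an active SparkSession bound to the name spark (as in the pyspark shell); the final select with aliases is a stand-in for the chain of withColumnRenamed calls, prefixing every pivoted city column in one pass:

from pyspark.sql import functions as F

# Assumes `spark` is an active SparkSession (pre-defined in the pyspark shell).
data = [
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "Rome"),
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "NewYork"),
    ("2017-08-01", "2343434545", [0.0, 0.0, 0.0, 0.0], "Beijing"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "Rome"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "NewYork"),
    ("2019-12-01", "6455534545", [0.0, 0.0, 0.0, 0.0], "Beijing"),
]
df = spark.createDataFrame(data, ["date", "numeric_id", "feature_column", "city"])

# Same approach as the Scala answer: pivot on city, collecting the feature arrays.
pivoted = (
    df.groupBy("date", "numeric_id")
      .pivot("city")
      .agg(F.collect_list("feature_column"))
)

# Prefix every pivoted city column in one pass instead of chaining withColumnRenamed.
key_cols = ["date", "numeric_id"]
pivoted = pivoted.select(
    *key_cols,
    *[F.col(c).alias(f"feature_column_{c}") for c in pivoted.columns if c not in key_cols],
)
pivoted.show(truncate=False)

Renaming via a single select keeps the code independent of the particular set of cities, so new city values need no extra code.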


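One caveat: collect_list applied to an ArrayType column wraps each value, so every pivoted cell comes back as a single-element array of arrays, e.g. [[0.0, 0.0, 0.0, 0.0]] rather than [0.0, 0.0, 0.0, 0.0]. If you want the flat arrays back, a sketch reusing the pivoted DataFrame from above (element_at uses 1-based indexing; alternatively, aggregate with F.first instead of F.collect_list, since each group here holds exactly one row):

# Unwrap the single-element outer array in every feature_column_* cell.
flat = pivoted.select(
    "date",
    "numeric_id",
    *[F.element_at(F.col(c), 1).alias(c)
      for c in pivoted.columns if c.startswith("feature_column_")],
)
flat.show(truncate=False)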