apache spark - Apply a transformation to multiple columns of a PySpark DataFrame

Suppose I have the following spark-dataframe:

+-----+-------+
| word|  label|
+-----+-------+
|  red|  color|
|  red|  color|
| blue|  color|
| blue|feeling|
|happy|feeling|
+-----+-------+

Which can be created using the following code:

sample_df = spark.createDataFrame([
        ('red', 'color'),
        ('red', 'color'),
        ('blue', 'color'),
        ('blue', 'feeling'),
        ('happy', 'feeling')
    ],
    ('word', 'label')
)

I can perform a groupBy() to get the counts of each word-label pair:

sample_df = sample_df.groupBy('word', 'label').count()
#+-----+-------+-----+
#| word|  label|count|
#+-----+-------+-----+
#| blue|  color|    1|
#| blue|feeling|    1|
#|  red|  color|    2|
#|happy|feeling|    1|
#+-----+-------+-----+

And then pivot() and sum() to get the label counts as columns:

import pyspark.sql.functions as f
sample_df = sample_df.groupBy('word').pivot('label').agg(f.sum('count')).na.fill(0)
#+-----+-----+-------+
#| word|color|feeling|
#+-----+-----+-------+
#|  red|    2|      0|
#|happy|    0|      1|
#| blue|    1|      1|
#+-----+-----+-------+

What is the best way to transform this dataframe such that each row is divided by the total for that row?

# Desired output
+-----+-----+-------+
| word|color|feeling|
+-----+-----+-------+
|  red|  1.0|    0.0|
|happy|  0.0|    1.0|
| blue|  0.5|    0.5|
+-----+-----+-------+

One way to achieve this result is to use Python's built-in sum (NOT pyspark.sql.functions.sum) to get the row-wise total and then call withColumn() for each label:

labels = ['color', 'feeling']

(sample_df
    .withColumn('total', sum([f.col(x) for x in labels]))  # built-in sum over Column expressions
    .withColumn('color', f.col('color') / f.col('total'))
    .withColumn('feeling', f.col('feeling') / f.col('total'))
    .select('word', 'color', 'feeling')
    .show())

But there has to be a better way than enumerating each of the possible columns.

More generally, my question is:

How can I apply an arbitrary transformation that is a function of the current row to multiple columns simultaneously?

1 Reply


Found an answer on this Medium post.

First make a column for the total (as above), then use the * operator to unpack a list comprehension over the labels in select():

labels = ['color', 'feeling']
sample_df = sample_df.withColumn('total', sum([f.col(x) for x in labels]))

sample_df.select(
    'word', *[(f.col(col_name)/f.col('total')).alias(col_name) for col_name in labels]
).show()
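
This produces the desired output shown in the question (row order may vary depending on the pivot):

#+-----+-----+-------+
#| word|color|feeling|
#+-----+-----+-------+
#|  red|  1.0|    0.0|
#|happy|  0.0|    1.0|
#| blue|  0.5|    0.5|
#+-----+-----+-------+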

The linked post also shows how to generalize this approach to arbitrary transformations.
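
As a rough sketch of that generalization (the helper name apply_to_columns and the lambda below are my own, not taken from the post), any column expression can be applied to every target column in a single select():

import pyspark.sql.functions as f

# Hypothetical helper: apply `transform` (a function mapping a Column to a new
# Column, possibly referencing other columns of the same row) to each column
# in `cols`, keeping the columns in `keep` unchanged.
def apply_to_columns(df, cols, transform, keep=('word',)):
    return df.select(
        *keep,
        *[transform(f.col(c)).alias(c) for c in cols]
    )

# Example: the same row normalization as above, assuming the 'total' column
# added earlier is still present on sample_df.
labels = ['color', 'feeling']
apply_to_columns(sample_df, labels, lambda c: c / f.col('total')).show()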

