
python - replace column values in spark dataframe based on dictionary similar to np.where

My data frame looks like -

no          city         amount   
1           Kenora        56%
2           Sudbury       23%
3           Kenora        71%
4           Sudbury       41%
5           Kenora        33%
6           Niagara       22%
7           Hamilton      88%

It consists of 92M records. I want my data frame to look like -

no          city         amount      new_city
1           Kenora        56%           X
2           Sudbury       23%           Sudbury
3           Kenora        71%           X
4           Sudbury       41%           Sudbury
5           Kenora        33%           X
6           Niagara       22%           X
7           Hamilton      88%           Hamilton

Using Python (pandas with np.where) I can manage it, but I am not getting any results in PySpark. Any help?
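For reference, a minimal pandas/np.where sketch of what I mean (the pdf frame and the exact np.where call below are illustrative, not my production code):

import numpy as np
import pandas as pd

#sample data matching the frame above
pdf = pd.DataFrame({
    'no': [1, 2, 3, 4, 5, 6, 7],
    'city': ['Kenora', 'Sudbury', 'Kenora', 'Sudbury', 'Kenora', 'Niagara', 'Hamilton'],
    'amount': ['56%', '23%', '71%', '41%', '33%', '22%', '88%'],
})

city_dict = {'Kenora': 'X', 'Niagara': 'X'}

#if the city is a key of the dictionary, use the mapped value ('X'),
#otherwise keep the original city
pdf['new_city'] = np.where(pdf['city'].isin(list(city_dict)), 'X', pdf['city'])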

This is what I have done so far in PySpark -

from itertools import chain
from pyspark.sql.functions import create_map, lit

#create dictionary
city_dict = {'Kenora': 'X', 'Niagara': 'X'}

#build a map expression from the dictionary
mapping_expr = create_map([lit(x) for x in chain(*city_dict.items())])

#lookup and replace
df = df.withColumn('new_city', mapping_expr[df['city']])

But it gives me wrong results:

df.groupBy('new_city').count().show()

new_city    count
   X          2
  null        3

Why does it give me null values?


1 Reply


The problem is that mapping_expr will return null for any city that is not contained in city_dict. A quick fix is to use coalesce to return the city if the mapping_expr returns a null value:

from pyspark.sql.functions import coalesce

#lookup and replace 
df1 = df.withColumn('new_city', coalesce(mapping_expr[df['city']], df['city']))
df1.show()
#+---+--------+------+--------+
#| no|    city|amount|new_city|
#+---+--------+------+--------+
#|  1|  Kenora|   56%|       X|
#|  2| Sudbury|   23%| Sudbury|
#|  3|  Kenora|   71%|       X|
#|  4| Sudbury|   41%| Sudbury|
#|  5|  Kenora|   33%|       X|
#|  6| Niagara|   22%|       X|
#|  7|Hamilton|   88%|Hamilton|
#+---+--------+------+--------+

df1.groupBy('new_city').count().show()
#+--------+-----+
#|new_city|count|
#+--------+-----+
#|       X|    4|
#|Hamilton|    1|
#| Sudbury|    2|
#+--------+-----+

The above method will fail, however, if one of the replacement values is null.
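To see why, here is a small illustration (the None replacement value is hypothetical, not part of the original city_dict): coalesce cannot distinguish a null replacement from a missing key, so it falls back to the original city instead of producing the intended null:

from pyspark.sql.functions import coalesce, lit

#Hypothetical: suppose city_dict mapped 'Niagara' to None.
#mapping_expr[df['city']] would then be null for Niagara rows, and
#coalesce(null, city) simply returns the original city ('Niagara')
#rather than the intended null replacement.
df.select(df['city'], coalesce(lit(None), df['city']).alias('new_city')).show()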

In this case, an easier alternative may be to use pyspark.sql.DataFrame.replace():

First use withColumn to create new_city as a copy of the values from the city column.

df.withColumn("new_city", df["city"]) \
    .replace(to_replace=list(city_dict.keys()), value=list(city_dict.values()), subset="new_city") \
    .groupBy('new_city').count().show()
#+--------+-----+
#|new_city|count|
#+--------+-----+
#|       X|    4|
#|Hamilton|    1|
#| Sudbury|    2|
#+--------+-----+
