
apache spark - PySpark: How to fillna values in dataframe for specific columns?

I have the following sample DataFrame:

a    | b    | c

1    | 2    | 4
0    | null | null
null | 3    | 4

And I want to replace the null values only in the first two columns, "a" and "b", to get:

a    | b    | c

1    | 2    | 4
0    | 0    | null
0    | 3    | 4

Here is the code to create the sample DataFrame:

rdd = sc.parallelize([(1,2,4), (0,None,None), (None,3,4)])
df2 = sqlContext.createDataFrame(rdd, ["a", "b", "c"])
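(For reference: on newer Spark releases, SparkSession replaces sqlContext. A minimal sketch of the same setup, assuming Spark 2.x+ and not part of the original snippet:)

from pyspark.sql import SparkSession

# Build (or reuse) a session and create the same sample DataFrame
spark = SparkSession.builder.getOrCreate()
df2 = spark.createDataFrame(
    [(1, 2, 4), (0, None, None), (None, 3, 4)],
    ["a", "b", "c"],
)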

I know how to replace all null values using:

df2 = df2.fillna(0)

And when I try this, I lose the third column:

df2 = df2.select(df2.columns[0:2]).fillna(0)

1 Reply

df.fillna(0, subset=['a', 'b'])

fillna has a subset parameter that lets you choose which columns to fill; it is available as long as your Spark version is 1.3.1 or higher.
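A quick sketch against the sample DataFrame from the question (assuming Spark 1.3.1+, where subset is available):

# Fill nulls with 0 only in columns "a" and "b"; nulls in "c" are left alone
df2 = df2.fillna(0, subset=['a', 'b'])
df2.show()
# Expected rows:
#   a=1, b=2, c=4
#   a=0, b=0, c=null
#   a=0, b=3, c=4

An equivalent form is to pass a dict mapping column names to fill values, e.g. df2.fillna({'a': 0, 'b': 0}), which likewise leaves column "c" untouched.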

