
apache spark sql - How to get all columns in PySpark with selectExpr

I have the following code:

from pyspark.sql import functions as F

df = df1.withColumn(
    'idx',
    F.coalesce(
        # Smallest 1-based position of any separator; each F.when() is NULL
        # when its separator is absent, and F.least() skips NULLs.
        F.least(*[F.when(F.instr('Title_lower_case', s) != 0, F.instr('Title_lower_case', s))
                  for s in ['/', ' / ', '/ ', ' /', ' & ', '& ', ' &', '&', '.', '-', ' - ', '- ', ' -']]),
        # No separator found: fall back to the whole string.
        F.length('Title_lower_case') + 1
    )
).selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) AS Title_lower_case')

This truncates the title at the first occurrence of any of the listed separators ('/', '&', '.', '-', and their space-padded variants). The problem is that the code returns only the Title_lower_case column, whereas I need the resulting DataFrame to keep all of the original columns.
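
For concreteness, a tiny runnable illustration of that behavior; the sample rows are hypothetical, only df1 and the column names mirror the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame(
    [(1, 'foo bar / baz', 10), (2, 'plain title', 5)],
    ['id', 'Title_lower_case', 'count'])

# Feeding this df1 through the pipeline above gives:
#   'foo bar / baz' -> 'foo bar'      (cut at the first separator, then trimmed)
#   'plain title'   -> 'plain title'  (no separator, whole string kept)
# ...and the result contains ONLY Title_lower_case; id and count are gone.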

Is it correct to simply add the other columns to .selectExpr? For example:

df.selectExpr('trim(substring(Title_lower_case, 1, idx - 1)) AS Title_lower_case', 'id', 'count')
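
If the goal is to rewrite Title_lower_case in place while keeping every other column, one alternative is to skip selectExpr entirely: overwrite the column with withColumn and then drop the helper idx. This is only a sketch; df1, idx, and the column names come from the question above, and the shortened separator list is an assumption explained in the comment:

from pyspark.sql import functions as F

# Assumption: bare separators only. Every padded variant (' / ', '- ', ...)
# contains a bare one, and the trailing trim() removes the leftover space,
# so this shorter list should give the same final strings as the full one.
seps = ['/', '&', '.', '-']

df = (
    df1
    .withColumn('idx', F.coalesce(
        F.least(*[F.when(F.instr('Title_lower_case', s) != 0,
                         F.instr('Title_lower_case', s)) for s in seps]),
        F.length('Title_lower_case') + 1))
    # Overwrite the column in place; id, count, and every other column survive.
    .withColumn('Title_lower_case',
                F.expr('trim(substring(Title_lower_case, 1, idx - 1))'))
    .drop('idx')
)

That said, listing the extra columns in .selectExpr as proposed above should also work; it just means naming every column you want to keep, which withColumn avoids.
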
Question from: https://stackoverflow.com/questions/65896980/how-to-get-all-column-in-pyspark-code-with-selectexpr


1 Reply

Waiting for answers.

...