
apache spark - Drop consecutive duplicates in a pyspark dataframe

Given a dataframe like:

## +---+---+
## | id|num|
## +---+---+
## |  2|3.0|
## |  3|6.0|
## |  3|2.0|
## |  3|1.0|
## |  2|9.0|
## |  4|7.0|
## +---+---+

I want to remove the consecutive duplicates and obtain:

## +---+---+
## | id|num|
## +---+---+
## |  2|3.0|
## |  3|6.0|
## |  2|9.0|
## |  4|7.0|
## +---+---+

I found ways of doing this in Pandas, but nothing for PySpark.
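(For reference, one common Pandas idiom for this compares each value with the previous row via shift; a minimal sketch, assuming the same columns as the example above:

import pandas as pd

df = pd.DataFrame({"id": [2, 3, 3, 3, 2, 4],
                   "num": [3.0, 6.0, 2.0, 1.0, 9.0, 7.0]})
deduped = df[df["id"] != df["id"].shift()]  # keep a row only when its id differs from the previous row's id

The question is how to express the same "compare with the previous row" logic in PySpark.)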


1 Reply


The following should do what you want, though there may still be room for optimization:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lag, monotonically_increasing_id, when
from pyspark.sql.window import Window as W

spark = SparkSession.builder.getOrCreate()
test_df = spark.createDataFrame([
    (2, 3.0), (3, 6.0), (3, 2.0), (3, 1.0), (2, 9.0), (4, 7.0)
    ], ("id", "num"))
test_df = test_df.withColumn("idx", monotonically_increasing_id())  # temporary ID, because a window needs an ordering
w = W.orderBy("idx")
get_last = when(lag("id", 1).over(w) == col("id"), False).otherwise(True)  # False if the previous row has the same id
test_df.withColumn("changed", get_last).filter(col("changed")).select("id", "num").show()  # keep only rows where the id changed

Output:

+---+---+
| id|num|
+---+---+
|  2|3.0|
|  3|6.0|
|  2|9.0|
|  4|7.0|
+---+---+
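Two caveats worth noting. First, W.orderBy("idx") without a partitionBy clause moves all rows to a single partition (Spark logs a warning to that effect), so this is fine for modest data but will not scale well. Second, monotonically_increasing_id() reflects the DataFrame's current partition layout, which matches the input order for a freshly created DataFrame but is not a guaranteed "original order" after shuffles. A minimal variant of the same idea, under the same assumptions (test_df and w as defined above), that skips the boolean helper column:

(test_df
    .withColumn("prev_id", lag("id", 1).over(w))                      # materialize the window result first,
    .filter(col("prev_id").isNull() | (col("prev_id") != col("id")))  # since window expressions cannot appear directly in filter()
    .select("id", "num")
    .show())

The isNull() check keeps the first row, which has no predecessor; the original when(...).otherwise(True) handles that case the same way, because NULL == id evaluates to NULL and falls through to otherwise(True).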

