
Scala Spark reverse grouping of groupBy

I'm trying to reverse (flatten out) the grouping created on an RDD in Scala, as in this example: https://backtobazics.com/big-data/spark/apache-spark-groupby-example/

Basically what I have is a key-value pair where the value is a list, and I want to flatten that out. I can't figure out how to go about it. I'm thinking the answer must lie in flatMap somehow, but I can't figure out the syntax. Can anybody point me in the right direction, please?
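For concreteness, here is a plain-Scala sketch of the shape I mean (the data and names are made up):

// grouped data: each key maps to a list of values
val grouped = Map("a" -> List(1, 2), "b" -> List(3))
// flattened back out to one (key, value) pair per element
val flattened = grouped.toList.flatMap { case (k, vs) => vs.map(v => (k, v)) }
// flattened: List((a,1), (a,2), (b,3))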


1 Reply


You should provide some code so your question can be answered precisely, but here is how you can flatten a groupBy by leveraging flatMap (the snippet below is similar to the "Spark groupBy Example Using Scala" you linked). For now, I assume you are working with an RDD of strings.

val v = Array("foo", "bar", "foobarz")
val rdd: org.apache.spark.rdd.RDD[String] = sc.parallelize(v)
// your grouping function goes here; this one groups identical strings together
val kvRDD: org.apache.spark.rdd.RDD[(String, Iterable[String])] = rdd.groupBy(x => x)
// if you explicitly want to keep the key and generate an RDD of tuples
val pairRDD: org.apache.spark.rdd.RDD[(String, String)] = kvRDD.flatMap { case (k, vs) => vs.map(i => (k, i)) }
// or if you just want to undo the grouping without preserving the key
val origRDD: org.apache.spark.rdd.RDD[String] = kvRDD.flatMap { case (_, vs) => vs }
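
If you would rather not pattern match yourself, the same two results can also be obtained through Spark's pair-RDD helpers (a minimal sketch, assuming the same kvRDD as above):

// flatMapValues flattens each Iterable while keeping its key, giving RDD[(String, String)]
val pairRDD2: org.apache.spark.rdd.RDD[(String, String)] = kvRDD.flatMapValues(vs => vs)
// values drops the keys; flatMap then unrolls each Iterable into individual strings
val origRDD2: org.apache.spark.rdd.RDD[String] = kvRDD.values.flatMap(vs => vs)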

