AFAIK you need to call withColumn
twice (once for each new column). But if your UDF is computationally expensive, you can avoid calling it twice by storing the "complex" result in a temporary column and then "unpacking" that result, e.g. using the apply
method of Column (which gives access to the array elements). Note that sometimes it's necessary to cache the intermediate result (to prevent the UDF from being called twice per row during unpacking), sometimes it isn't; this seems to depend on how Spark optimizes the plan:
import org.apache.spark.sql.functions.{col, udf}
import spark.implicits._ // for toDF; assumes a SparkSession named spark

// UDF that computes both results once and packs them into an array
val myUDf = udf((s: String) => Array(s.toUpperCase(), s.toLowerCase()))

val df = sc.parallelize(Seq("Peter", "John")).toDF("name")

val newDf = df
  .withColumn("udfResult", myUDf(col("name"))).cache // cache so the UDF runs only once per row
  .withColumn("uppercaseColumn", col("udfResult")(0)) // apply(0): first array element
  .withColumn("lowercaseColumn", col("udfResult")(1)) // apply(1): second array element
  .drop("udfResult")
newDf.show()
gives
+-----+---------------+---------------+
| name|uppercaseColumn|lowercaseColumn|
+-----+---------------+---------------+
|Peter| PETER| peter|
| John| JOHN| john|
+-----+---------------+---------------+
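If you want to check whether the cache is actually needed in your case, one way is to count UDF invocations with an accumulator. This is just a diagnostic sketch (the names callCount, countingUDf and probed are made up here), and accumulator updates from transformations are not exactly-once, so treat the count as an indication rather than a guarantee:

// Diagnostic sketch: count how often the UDF body actually runs
val callCount = sc.longAccumulator("udfCalls")
val countingUDf = udf((s: String) => {
  callCount.add(1)
  Array(s.toUpperCase(), s.toLowerCase())
})
val probed = df
  .withColumn("udfResult", countingUDf(col("name")))
  .withColumn("uppercaseColumn", col("udfResult")(0))
  .withColumn("lowercaseColumn", col("udfResult")(1))
probed.collect()
// With 2 input rows: a value of 2 means the UDF ran once per row,
// 4 means it ran twice per row (then caching the intermediate column helps)
println(callCount.value)

You can also inspect the physical plan with probed.explain() to see where the UDF appears.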
With a UDF returning a tuple (which Spark encodes as a struct column with fields _1 and _2), the unpacking would look like this:
// Tuple variant of the UDF; _1 holds the uppercase value, _2 the lowercase one
val myTupleUDf = udf((s: String) => (s.toUpperCase(), s.toLowerCase()))

val newDf = df
  .withColumn("udfResult", myTupleUDf(col("name"))).cache
  .withColumn("uppercaseColumn", col("udfResult._1"))
  .withColumn("lowercaseColumn", col("udfResult._2"))
  .drop("udfResult")