In Scala I can do get(#) or getAs[Type](#) to get values out of a DataFrame. How should I do it in PySpark?
I have a DataFrame with two columns: item (string) and salesNum (integer). I do a groupBy and mean to get the mean of those numbers, like this:
saleDF.groupBy("salesNum").mean().collect()
and it works. Now I have the mean in a DataFrame with a single value.
How can I get that value out of the DataFrame as a plain Python float?