The following example shows that, when using dplyr with sparklyr, you can't compute the number of distinct values as a per-row column without first aggregating the rows. Is there a workaround that doesn't break the chain of commands? More generally, how can you use SQL-like window functions on sparklyr data frames?
## Generating a data set
```r
library(sparklyr)
library(dplyr)

set.seed(.328)
df <- data.frame(
  ids = floor(runif(10, 1, 10)),
  cats = sample(letters[1:3], 10, replace = TRUE),
  vals = rnorm(10)
)
```
## Copying to Spark
```r
sc <- spark_connect(master = "local")
df.spark <- copy_to(sc, df, "df_spark", overwrite = TRUE)
df.spark
# Source: table<df_spark> [?? x 3]
# Database: spark_connection
#     ids cats        vals
#   <dbl> <chr>      <dbl>
#       9 a      0.7635935
#       3 a     -0.7990092
#       4 a     -1.1476570
#       6 c     -0.2894616
#       9 b     -0.2992151
#       2 c     -0.4115108
#       9 b      0.2522234
#       9 c     -0.8919211
#       6 c      0.4356833
#       6 b     -1.2375384
# # ... with more rows
```
```r
# using the regular data frame: n_distinct() works as a window function
df %>% mutate(n_ids = n_distinct(ids))
#   ids cats       vals n_ids
#     9 a     0.7635935     5
#     3 a    -0.7990092     5
#     4 a    -1.1476570     5
#     6 c    -0.2894616     5
#     9 b    -0.2992151     5
#     2 c    -0.4115108     5
#     9 b     0.2522234     5
#     9 c    -0.8919211     5
#     6 c     0.4356833     5
#     6 b    -1.2375384     5
```
```r
# using the sparklyr data frame: the same call fails
df.spark %>% mutate(n_ids = n_distinct(ids))
# Error: Window function `distinct()` is not supported by this database
```
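One possible workaround (a sketch, not an official sparklyr recipe): sparklyr forwards function calls it doesn't recognise straight through to Spark SQL, and Spark's `approx_count_distinct` is permitted in a window context, so an *approximate* distinct count can stay inside the pipe. For an exact count, you can compute the aggregate separately and join it back onto every row via a constant dummy key.

```r
# Sketch, assuming the `sc` connection and `df.spark` table from above.
# approx_count_distinct() is not a dplyr verb; sparklyr passes it through
# to Spark SQL, where it works as a window function (the count is approximate).
df.spark %>%
  mutate(n_ids = approx_count_distinct(ids))

# Exact alternative: aggregate with n_distinct() (which translates to
# COUNT(DISTINCT ids) in a GROUP BY context), then join the single-row
# result back onto every row using a constant dummy column.
df.spark %>%
  mutate(dummy = 1) %>%
  left_join(
    df.spark %>%
      summarise(n_ids = n_distinct(ids)) %>%
      mutate(dummy = 1),
    by = "dummy"
  ) %>%
  select(-dummy)
```

The join version keeps the chain of commands intact at the cost of an extra shuffle; the `approx_count_distinct` version is cheaper but only approximate.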