First, I would really avoid using coalesce, as it is often pushed up further in the chain of transformations and may destroy the parallelism of your job (I asked about this issue here: Coalesce reduces parallelism of entire stage (spark)).
Writing 1 file per parquet-partition is relatively easy (see Spark dataframe write method writing many small files):
data.repartition($"key").write.partitionBy("key").parquet("/location")
If you want to set an arbitrary number of files (or files which all have roughly the same size), you need to further repartition your data using another attribute (I cannot tell you what this might be in your case):
data.repartition($"key",$"another_key").write.partitionBy("key").parquet("/location")
another_key could be another attribute of your dataset, or a derived attribute created with some modulo or rounding operations on existing attributes. You could even use window functions with row_number over key and then round it with something like
data.repartition($"key", floor($"row_number" / N) * N).write.partitionBy("key").parquet("/location")
This would put N records into each parquet file.
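To make the row_number variant concrete, here is a minimal sketch, assuming a bucket size N and an id column to order by within each key (both are placeholders for illustration):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, floor, row_number}

val N = 10000  // assumed target number of records per file

// Number the rows within each key, then derive a bucket so that every
// N consecutive rows of a key land in the same shuffle partition.
// Note: the helper columns stay in the output unless you drop them.
val withBucket = data
  .withColumn("row_number", row_number().over(Window.partitionBy("key").orderBy("id")))
  .withColumn("bucket", floor(col("row_number") / N))

withBucket
  .repartition(col("key"), col("bucket"))
  .write
  .partitionBy("key")
  .parquet("/location")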
Using orderBy
You can also control the number of files without repartitioning by ordering your dataframe accordingly:
data.orderBy($"key").write.partitionBy("key").parquet("/location")
This will lead to a total of (at least, but not much more than) spark.sql.shuffle.partitions files across all partitions (by default 200). It's even beneficial to add a second ordering column after $"key", as parquet preserves the order of the dataframe and writes its min/max statistics accordingly. For example, you can order by an ID:
data.orderBy($"key",$"id").write.partitionBy("key").parquet("/location")
This will not change the number of files, but it will improve the performance when you query your parquet files for a given key and id. See e.g. https://www.slideshare.net/RyanBlue3/parquet-performance-tuning-the-missing-guide and https://db-blog.web.cern.ch/blog/luca-canali/2017-06-diving-spark-and-parquet-workloads-example
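If you want more or fewer files with this approach, one knob is the shuffle partition count set before writing; a minimal sketch (assuming the usual spark session variable, 400 is just an example value):
// Raise or lower the upper bound on the number of output files,
// then write with an orderBy as above.
spark.conf.set("spark.sql.shuffle.partitions", "400")
data.orderBy($"key", $"id").write.partitionBy("key").parquet("/location")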
Spark 2.2+
From Spark 2.2 on, you can also play with the new option maxRecordsPerFile to limit the number of records per file if your files are too large. You will still get at least N files if you have N partitions, but you can split the file written by one partition (task) into smaller chunks:
df.write
.option("maxRecordsPerFile", 10000)
...
See e.g. http://www.gatorsmile.io/anticipated-feature-in-spark-2-2-max-records-written-per-file/ and spark write to disk with N files less than N partitions
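For completeness, a minimal sketch of a full write combining maxRecordsPerFile with the per-partition repartition from above (the 10000 threshold and the path are just placeholders):
data.repartition($"key")
  .write
  .option("maxRecordsPerFile", 10000)  // each task splits its output into files of at most 10000 records
  .partitionBy("key")
  .parquet("/location")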