Since Spark accepts Hadoop input files, splittability is determined by the compression codec. Of the common codecs, only bzip2-compressed files are splittable; files compressed with zlib, gzip, LZO, LZ4 or Snappy are not splittable (but see EDIT 2 below regarding LZO).
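A quick way to see the effect (a minimal sketch; the paths are hypothetical): a non-splittable gzip file always comes back as a single partition no matter how many you ask for, while a bzip2 file can be split to honor the request.

import org.apache.spark.api.java.JavaRDD;

// gzip is not splittable, so the 8 requested partitions collapse to 1
JavaRDD<String> gz = javaSparkContext.textFile("hdfs:///data/input.gz", 8);
System.out.println(gz.getNumPartitions()); // 1

// bzip2 is splittable, so Spark can actually honor the request
JavaRDD<String> bz = javaSparkContext.textFile("hdfs:///data/input.bz2", 8);
System.out.println(bz.getNumPartitions()); // up to 8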
Regarding your question on partitioning: partitioning does not depend on the file format you use. It depends on the content of the file, i.e. the values of the partition column (a date column, for example), as sketched below.
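A minimal sketch, assuming Spark 2.x and a dataset that has a date column (the paths and column name are hypothetical):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("partition-demo").getOrCreate();
Dataset<Row> events = spark.read().json("hdfs:///data/events");

// One output directory per distinct value of the "date" column,
// regardless of how the input files were compressed.
events.write().partitionBy("date").parquet("hdfs:///data/events_by_date");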
EDIT 1:
Have a look at this SE question and this working code for reading a zip file with Spark.
import java.util.List;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;

// Read every file under the given path as one (fileName, content) record.
JavaPairRDD<String, String> fileNameContentsRDD = javaSparkContext.wholeTextFiles(args[0]);

// Count the lines of each file and format the result as "fileName: count".
JavaRDD<String> lineCounts = fileNameContentsRDD.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> fileNameContent) throws Exception {
        String content = fileNameContent._2();
        int numLines = content.split("[\r\n]+").length;
        return fileNameContent._1() + ": " + numLines;
    }
});
List<String> output = lineCounts.collect();
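Note that wholeTextFiles reads each file's entire contents into a single record, so this approach suits many small files rather than one very large archive.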
EDIT 2:
LZO files can be splittable:

LZO files can be split as long as the splits occur on block boundaries

In practice this means the .lzo file needs to be indexed first (e.g. with the hadoop-lzo indexer) so the input format knows where those boundaries are.
Refer to this article for more details.
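A hedged sketch of reading such a file, assuming the twitter/hadoop-lzo library is on the classpath and the .lzo file has already been indexed (the path is hypothetical):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.api.java.JavaPairRDD;
import com.hadoop.mapreduce.LzoTextInputFormat;

// Each record is (byte offset, line of text), as with a plain text file,
// but splits can now land on the indexed LZO block boundaries.
JavaPairRDD<LongWritable, Text> lzoLines = javaSparkContext.newAPIHadoopFile(
        "hdfs:///data/input.lzo",
        LzoTextInputFormat.class, LongWritable.class, Text.class,
        javaSparkContext.hadoopConfiguration());
System.out.println(lzoLines.count());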