
mapreduce - About Hadoop/HDFS file splitting

I want to confirm the following. Please verify whether these points are correct:

  1. As per my understanding, when we copy a file into HDFS, that is the point at which the file (assuming its size > 64 MB = HDFS block size) is split into multiple chunks, and each chunk is stored on a different data node.

  2. File contents are already split into chunks when the file is copied into HDFS, and that file split does not happen at the time of running the map job. Map tasks are only scheduled so that they work on each chunk of max. size 64 MB with data locality (i.e. a map task runs on the node that contains that data/chunk).

  3. File splitting also happens if the file is compressed (gzipped), but MR ensures that each such file is processed by just one mapper; i.e. MR will collect all the chunks of the gzip file lying on other data nodes and give them all to a single mapper (see the codec check sketched after this list).

  4. The same thing as above will happen if we define isSplitable() to return false, i.e. all the chunks of a file will be processed by one mapper running on one machine. MR will read all the chunks of the file from the different data nodes and make them available to a single mapper.
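As a side note on point 3, whether a file can be split is decided by the input format by inspecting the file's compression codec. A minimal sketch of that check, assuming Hadoop's standard codec API and a hypothetical /data/input.txt.gz path (this mirrors what TextInputFormat.isSplitable() does internally):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittabilityCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Resolve the codec from the file extension (.gz -> GzipCodec)
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        CompressionCodec codec = factory.getCodec(new Path("/data/input.txt.gz"));
        // Uncompressed files are splittable; compressed files only if the
        // codec implements SplittableCompressionCodec (gzip does not)
        boolean splittable =
                codec == null || codec instanceof SplittableCompressionCodec;
        System.out.println("splittable = " + splittable); // false for gzip
    }
}
```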



1 Reply


David's answer pretty much hits the nail on the head; I am just elaborating on it here.

There are two distinct concepts at work here, and each is handled by a different entity in the Hadoop framework.

Firstly --

1) Dividing a file into blocks -- When a file is written into HDFS, HDFS divides the file into blocks and takes care of their replication. This is done once (mostly), and the blocks are then available to all MR jobs running on the cluster. The block size is a cluster-wide configuration.
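For illustration, the block size can also be overridden per file at write time through the FileSystem API. A minimal sketch, assuming a 128 MB block size and a hypothetical /data/input.txt path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Block size is fixed per file when it is written; here we override
        // the cluster default (dfs.blocksize) with 128 MB for this one file
        FSDataOutputStream out = fs.create(
                new Path("/data/input.txt"),
                true,                // overwrite
                4096,                // io buffer size
                (short) 3,           // replication factor
                128L * 1024 * 1024); // block size in bytes
        out.writeUTF("hello hdfs");
        out.close();
    }
}
```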

Secondly --

2) Splitting a file into input splits -- When an input path is passed to an MR job, the job uses that path, along with the input format configured for it, to divide the files specified by the path into splits; each split is processed by a map task. The input splits are calculated by the input format each time a job is executed.
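To make this concrete, here is a minimal sketch of steering the split calculation from the job driver; the bounds shown (64 MB / 128 MB) and the /data/input path are assumptions for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitConfigExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-config-example");
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/data/input"));
        // FileInputFormat computes each split size as
        //   max(minSplitSize, min(maxSplitSize, blockSize))
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
    }
}
```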

Now, once we have this under our belt, we can see that the isSplitable() method comes under the second category.
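If you want the behavior from point 4 of the question, you can subclass an input format and override isSplitable(). A minimal sketch with a hypothetical class name:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hypothetical input format that makes every file a single split,
// so each file is handled by exactly one mapper
public class WholeFileTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}
```

Registering it with job.setInputFormatClass(WholeFileTextInputFormat.class) then gives one map task per file, regardless of how many HDFS blocks the file occupies.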

To really nail this down, have a look at the HDFS write data flow (concept 1):

[Figure: HDFS write data flow]

The second step in the diagram is probably where the split into blocks happens; note that this has nothing to do with the running of an MR job.

Now have a look at the execution steps of an MR job:

[Figure: MR job execution steps]

Here the first step is the calculation of the input splits via the InputFormat configured for the job.

A lot of your confusion stems from conflating these two concepts; I hope this makes it a little clearer.

