Could you please help me with the following issue?
Goal
Read a file on the client side (in the browser, via JavaScript and HTML5 APIs) line by line, without loading the whole file into memory.
Scenario
I'm working on a web page that should parse files on the client side. Currently, I'm reading the file as described in this article.
HTML:
<input type="file" id="files" name="files[]" />
JavaScript:
$("#files").on('change', function (evt) {
    // creating FileReader
    var reader = new FileReader();
    // assigning handler
    reader.onloadend = function (evt) {
        var lines = evt.target.result.split(/\r?\n/);
        lines.forEach(function (line) {
            parseLine(line);
        });
    };
    // getting File instance
    var file = evt.target.files[0];
    // start reading
    reader.readAsText(file);
});
The problem is that FileReader reads the whole file at once, which crashes the tab for big files (size >= 300 MB). Using reader.onprogress
doesn't solve the problem, as it just keeps appending to the result until it hits the memory limit.
Reinventing the wheel
I've done some research on the internet and found no simple way to do this (there are a bunch of articles describing this exact functionality, but on the server side for node.js).
The only way I see to solve it is the following:
- Split the file into chunks (via the
File.slice(startByte, endByte)
method)
- Find the last newline character ('\n') in that chunk
- Read that chunk except the part after the last newline character, convert it to a string, and split it by lines
- Read the next chunk starting from the last newline character found in step 2
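The steps above can be sketched roughly as follows. This is a minimal illustration, not a tested implementation: CHUNK_SIZE, readLines, and onLine are names I made up, and it assumes chunk boundaries never split a multi-byte UTF-8 character (for full UTF-8 safety you would decode with a streaming TextDecoder instead).

```javascript
// Assumed chunk size; tune for your workload.
const CHUNK_SIZE = 1024 * 1024; // 1 MB

// Reads `file` (a File or Blob) line by line, invoking `onLine` for each
// complete line, without holding the whole file in memory at once.
async function readLines(file, onLine, chunkSize = CHUNK_SIZE) {
    let offset = 0;
    let remainder = ''; // partial line carried over from the previous chunk
    while (offset < file.size) {
        // Blob.slice(start, end) returns a view of the bytes, not a copy read into memory.
        const chunk = file.slice(offset, offset + chunkSize);
        const text = remainder + await chunk.text();
        const lines = text.split(/\r?\n/);
        // The last element may be an incomplete line cut off mid-chunk;
        // hold it back and prepend it to the next chunk.
        remainder = lines.pop();
        lines.forEach(onLine);
        offset += chunkSize;
    }
    if (remainder.length > 0) onLine(remainder); // flush the trailing line
}
```

In the change handler above, you would then call something like `readLines(evt.target.files[0], parseLine)` instead of `reader.readAsText(file)`.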
But I'd rather use something that already exists, to avoid entropy growth.