I am creating a subroutine that:
(1) Parses a CSV file;
(2) Checks that every row in that file has the expected number of columns, and croaks if the column count is invalid.
When the number of rows ranges from thousands to millions, what do you think is the most efficient way to do this?
Right now, I'm trying out these implementations.
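For context, each implementation below is meant to end up as the body of a small validation subroutine, roughly like this (the subroutine name, the example arguments, and the call are placeholders, not part of the benchmark):

use Carp qw(croak);
use English qw(-no_match_vars);    # provides $OS_ERROR

sub check_column_count {
    my ( $file, $min_cols_no ) = @_;
    # ... one of the implementations below goes here ...
    return 1;
}

check_column_count( 'input.csv', 10 );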
(1) Basic file parser
open my $in_fh, '<', $file
    or croak "Cannot open '$file': $OS_ERROR";
my $row_no = 0;
while ( my $row = <$in_fh> ) {
    my @values = split q{,}, $row;
    ++$row_no;
    if ( scalar @values < $min_cols_no ) {
        croak "Invalid file format. File '$file' does not have '$min_cols_no' columns in line '$row_no'.";
    }
}
close $in_fh
    or croak "Cannot close '$file': $OS_ERROR";
(2) Using Text::CSV_XS (bind_columns and csv->getline)
my $csv = Text::CSV_XS->new()
    or croak "Cannot use CSV: " . Text::CSV_XS->error_diag();
open my $in_fh, '<', $file
    or croak "Cannot open '$file': $OS_ERROR";
my $row_no = 1;
# The first line is treated as the header; bind each column name to a hash slot.
my @cols = @{ $csv->getline($in_fh) };
my $row = {};
$csv->bind_columns( \@{$row}{@cols} );
while ( $csv->getline($in_fh) ) {
    ++$row_no;
    if ( scalar keys %$row < $min_cols_no ) {
        croak "Invalid file format. File '$file' does not have '$min_cols_no' columns in line '$row_no'.";
    }
}
$csv->eof or $csv->error_diag();
close $in_fh
    or croak "Cannot close '$file': $OS_ERROR";
(3) Using Text::CSV_XS (csv->parse)
my $csv = Text::CSV_XS->new()
    or croak "Cannot use CSV: " . Text::CSV_XS->error_diag();
open my $in_fh, '<', $file
    or croak "Cannot open '$file': $OS_ERROR";
my $row_no = 0;
while (<$in_fh>) {
    $csv->parse($_);
    ++$row_no;
    if ( scalar $csv->fields < $min_cols_no ) {
        croak "Invalid file format. File '$file' does not have '$min_cols_no' columns in line '$row_no'.";
    }
}
$csv->eof or $csv->error_diag();
close $in_fh
    or croak "Cannot close '$file': $OS_ERROR";
(4) Using Parse::CSV
use Parse::CSV;

my $simple = Parse::CSV->new(
    file => $file,
);
my $row_no = 0;
while ( my $array_ref = $simple->fetch ) {
    ++$row_no;
    if ( scalar @$array_ref < $min_cols_no ) {
        croak "Invalid file format. File '$file' does not have '$min_cols_no' columns in line '$row_no'.";
    }
}
I benchmarked them using the Benchmark module.
use Benchmark qw(timeit timestr timediff :hireswallclock);
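Each number was taken along these lines (run_implementation_1 here is just a placeholder for one of the snippets above wrapped in a sub):

# Rough sketch of the timing call; run_implementation_1 is a placeholder name.
my $t = timeit( 1, sub { run_implementation_1( $file, $min_cols_no ) } );
print 'Implementation 1: ', timestr($t), "\n";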
And these are the numbers (in seconds) that I got:
1,000-line file:
Implementation 1: 0.0016
Implementation 2: 0.0025
Implementation 3: 0.0050
Implementation 4: 0.0097
10,000-line file:
Implementation 1: 0.0204
Implementation 2: 0.0244
Implementation 3: 0.0523
Implementation 4: 0.1050
1,500,000-line file:
Implementation 1: 1.8697
Implementation 2: 3.1913
Implementation 3: 7.8475
Implementation 4: 15.6274
Given these numbers, I would conclude that the simple parser is the fastest, but from what I have read in various sources, Text::CSV_XS should be the fastest.
Can someone enlighten me on this? Is there something wrong with how I am using the modules? Thanks a lot for your help!