Before using R, I used quite a bit of Perl. I relied heavily on hashes there, and hash lookups are generally regarded as fast in Perl.
For example, the following code populates a hash with up to 10000 key/value pairs, where the keys are random three-letter strings and the values are random integers. It then does 10000 random lookups in that hash.
#!/usr/bin/perl -w
use strict;

my @letters = ('a'..'z');
print @letters . "\n";    # prints the number of letters (26)

# Populate a hash with up to 10000 key/value pairs:
# keys are random three-letter strings, values are random integers.
my %testHash;
for(my $i = 0; $i < 10000; $i++) {
    my $r1 = int(rand(26));
    my $r2 = int(rand(26));
    my $r3 = int(rand(26));
    my $key = $letters[$r1] . $letters[$r2] . $letters[$r3];
    my $value = int(rand(1000));
    $testHash{$key} = $value;
}

# Do 10000 random lookups in the hash.
my @keyArray = keys(%testHash);
my $keyLen = scalar @keyArray;
for(my $j = 0; $j < 10000; $j++) {
    my $key = $keyArray[int(rand($keyLen))];
    my $lookupValue = $testHash{$key};
    print "key " . $key . " Lookup $lookupValue\n";
}
Now, increasingly, I want a hash-like data structure in R. The following is my attempt at equivalent R code:
testHash <- list()

# Populate a named list with up to 10000 key/value pairs:
# keys are random three-letter strings, values are random integers.
for(i in 1:10000) {
    key.tmp <- letters[floor(26*runif(3)) + 1]    # three random letters (+1 for 1-based indexing)
    key <- capture.output(cat(key.tmp, sep=""))   # collapse them into one string
    value <- floor(1000*runif(1))
    testHash[[key]] <- value
}

# Do 10000 random lookups by name.
keyArray <- names(testHash)
keyLen <- length(keyArray)
for(j in 1:10000) {
    key <- keyArray[floor(keyLen*runif(1)) + 1]
    lookupValue <- testHash[[key]]
    print(paste("key", key, "Lookup", lookupValue))
}
The two programs seem to be doing equivalent things. However, the Perl one is much faster:
> time ./perlHashTest.pl
real    0m4.346s
user    0m0.110s
sys     0m0.100s
Comparing to R:
> time R CMD BATCH RHashTest.R
real    0m8.210s
user    0m7.630s
sys     0m0.200s
What explains the discrepancy? Are lookups in R lists just not good?
Increasing to a 100,000-entry list and 100,000 lookups only exaggerates the discrepancy. Is there a better alternative to the native list() for a hash data structure in R?
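One alternative I have seen mentioned is to use an environment as the hash, since new.env(hash = TRUE) is backed by a real hash table. A minimal sketch of that idea (the names are mine, and I have not benchmarked it against the code above):

# Sketch: an environment used as a hash table (not benchmarked here).
testEnv <- new.env(hash = TRUE)

assign("abc", 42L, envir = testEnv)     # insert a key/value pair
get("abc", envir = testEnv)             # look up a value: 42
exists("abc", envir = testEnv)          # check whether a key is present: TRUE
ls(testEnv)                             # list all keys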