I want to test the performance of MongoDB, so I chose workloadc, which is a read-only workload, and loaded about 10 GB of data.
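For context, the load phase was run with a command along these lines (the recordcount shown here is only illustrative, not my exact value):
./bin/ycsb load mongodb -s -P workloads/workloadc -p recordcount=10000000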
First, I ran without setting the 'target' parameter. The test command is:
./bin/ycsb run mongodb -s -P workloads/workloadc
The result is:
[READ], AverageLatency(us), 135.2952
[READ], MinLatency(us), 84
[READ], MaxLatency(us), 75455
[READ], 95thPercentileLatency(us), 196
[READ], 99thPercentileLatency(us), 271
[READ], Return=OK, 300000
Second, I set the 'target' parameter to 4000. The test command is:
./bin/ycsb run mongodb -s -P workloads/workloadc -target 4000
The result is:
[READ], AverageLatency(us), 221.37833987374168
[READ], MinLatency(us), 87
[READ], MaxLatency(us), 98239
[READ], 95thPercentileLatency(us), 403
[READ], 99thPercentileLatency(us), 459
[READ], Return=OK, 117220
The average read latency of the first experiment is 135 us, but in the second experiment it is 221 us. The latency becomes much higher when I set 'target' to 1000 or smaller. Why does the read latency increase when I limit the throughput? I expected it to become lower.
Note: I run these experiments on a machine with a 32-core CPU and 32 GB of DRAM. Although my data set is about 10 GB and could be cached entirely in DRAM, I evict it from the page cache with echo 1 > /proc/sys/vm/drop_caches before each experiment.
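Spelled out, each experiment looks roughly like this (running sync first is optional; it is commonly recommended so that dirty pages are written back before the cache is dropped):
sync
echo 1 > /proc/sys/vm/drop_caches
./bin/ycsb run mongodb -s -P workloads/workloadc -target 4000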
Is there something wrong with my procedure, or is there something wrong with the time statistics in the source code?
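One check I am considering, if I understand the option correctly (I have not verified this yet): YCSB's measurement.interval property can report latency measured from each operation's intended (scheduled) start time in addition to its actual start time, so the two can be compared under throttling:
./bin/ycsb run mongodb -s -P workloads/workloadc -target 4000 -p measurement.interval=both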