Why is :memory: in SQLite so slow?
I've been trying to see whether there are any performance improvements to be gained by using in-memory SQLite versus disk-based SQLite. Basically, I'd like to trade startup time and memory for extremely rapid queries that never hit the disk during the course of the application.
However, the following benchmark gives me only a 1.5x speedup. Here, I'm generating 1M rows of random data and loading them into both a disk-based and a memory-based version of the same table. I then run random queries against both dbs, returning result sets of roughly 300k rows. I expected the memory-based version to be considerably faster, but as mentioned I'm only getting a 1.5x speedup.
I experimented with several other sizes of dbs and query sets; the advantage of :memory: does seem to grow as the number of rows in the db increases. I'm not sure why the advantage is so small, though I have a few hypotheses:
- the table used isn't big enough (in rows) to make :memory: a huge winner
- more joins/tables would make the :memory: advantage more apparent
- there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark
- there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did set the journaling PRAGMAs to keep everything in memory); the sanity-check sketch after this list is one way I could probe this
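For the last two hypotheses, the quickest check I can think of (untested, so treat it as a sketch) is to read the PRAGMAs back to confirm they actually took effect, and to shrink SQLite's page cache so the disk connection can't serve everything from its own cache:
# Sanity-check sketch: confirm the speed/safety pragmas took effect,
# then shrink the page cache so the disk db can't answer purely from cache.
# The pragma names are standard SQLite; the cache value is just a guess.
print conn_disk.execute('PRAGMA journal_mode;').fetchone()  # expect ('memory',)
print conn_disk.execute('PRAGMA temp_store;').fetchone()    # expect (2,), i.e. MEMORY
conn_disk.execute('PRAGMA cache_size=100;')  # ~100 pages instead of the default 2000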
Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark:
==> sqlite_memory_vs_disk_benchmark.py <==
#!/usr/bin/env python
"""Attempt to see whether :memory: offers significant performance benefits.
"""
import os
import time
import sqlite3
import numpy as np
def load_mat(conn, mat):
    c = conn.cursor()
    # Try to avoid hitting disk, trading safety for speed.
    # http://stackoverflow.com/questions/304393
    c.execute('PRAGMA temp_store=MEMORY;')
    c.execute('PRAGMA journal_mode=MEMORY;')
    # Make a demo table
    c.execute('create table if not exists demo (id1 int, id2 int, val real);')
    c.execute('create index id1_index on demo (id1);')
    c.execute('create index id2_index on demo (id2);')
    for row in mat:
        c.execute('insert into demo values(?,?,?);', (row[0], row[1], row[2]))
    conn.commit()
def querytime(conn, query):
    # Time a single query, forcing full materialization with fetchall()
    start = time.time()
    foo = conn.execute(query).fetchall()
    diff = time.time() - start
    return diff
#1) Build some fake data with 3 columns: int, int, float
nn = 1000000 #numrows
cmax = 700 #num uniques in 1st col
gmax = 5000 #num uniques in 2nd col
mat = np.zeros((nn,3),dtype='object')
mat[:,0] = np.random.randint(0,cmax,nn)
mat[:,1] = np.random.randint(0,gmax,nn)
mat[:,2] = np.random.uniform(0,1,nn)
#2) Load it into both dbs & build indices
try: os.unlink('foo.sqlite')
except OSError: pass
conn_mem = sqlite3.connect(":memory:")
conn_disk = sqlite3.connect('foo.sqlite')
load_mat(conn_mem,mat)
load_mat(conn_disk,mat)
del mat
#3) Execute a series of random queries and time each one on both dbs
numqs = 10
numqrows = 300000 #max number of ids of each kind
results = np.zeros((numqs,3))
for qq in range(numqs):
    qsize = np.random.randint(1, numqrows)  # scalar number of ids to query
    id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize])  # ensure uniqueness of ids queried
    id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize])
    id1s = ','.join([str(xx) for xx in id1a])
    id2s = ','.join([str(xx) for xx in id2a])
    query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s, id2s)
    results[qq, 0] = round(querytime(conn_disk, query), 4)
    results[qq, 1] = round(querytime(conn_mem, query), 4)
    results[qq, 2] = int(qsize)
#4) Now look at the results
print " disk | memory | qsize"
print "-----------------------"
for row in results:
print "%.4f | %.4f | %d" % (row[0],row[1],row[2])
Here are the results. Note that disk takes about 1.5x as long as memory across a fairly wide range of query sizes.
[ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py
disk | memory | qsize
-----------------------
9.0332 | 6.8100 | 12630
9.0905 | 6.6953 | 5894
9.0078 | 6.8384 | 17798
9.1179 | 6.7673 | 60850
9.0629 | 6.8355 | 94854
8.9688 | 6.8093 | 17940
9.0785 | 6.6993 | 58003
9.0309 | 6.8257 | 85663
9.1423 | 6.7411 | 66047
9.1814 | 6.9794 | 11345
Shouldn't RAM be almost instant relative to disk? What's going wrong here?
Edit
Some good suggestions here.
I guess the main take-home point for me is that **there's probably no way to make :memory: absolutely faster, but there is a way to make disk access relatively slower.**
In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g., because the cache_size pragma is too big, or because I'm not doing any writes). I'll mess around with those parameters and post my findings when I get a chance.
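Concretely, here's the kind of thing I plan to try on the disk side (the pragmas are standard SQLite, but the specific values are first guesses):
# Sketch of making disk access realistically slow; values are untested first guesses.
conn_disk.execute('PRAGMA cache_size=100;')       # tiny page cache, forcing real reads
conn_disk.execute('PRAGMA journal_mode=DELETE;')  # turn journaling back on
conn_disk.execute('insert into demo values (1, 1, 0.5);')
conn_disk.commit()  # include at least one real write in the mix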
That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears...
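For reference, this is what I mean by jacking those up (both pragmas are real; the page counts here are arbitrary):
# Cache-size sketch: enlarge the page cache on the in-memory connection.
# cache_size is per-connection; default_cache_size is persisted in the db itself.
conn_mem.execute('PRAGMA cache_size=100000;')
conn_mem.execute('PRAGMA default_cache_size=100000;')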
See Question&Answers more detail:
os