PyTables and NumPy are the way to go.
PyTables will store the data on disk in HDF5 format, with optional compression. My datasets often get 10x compression, which is handy when dealing with tens or hundreds of millions of rows. It's also very fast; my 5-year-old laptop can crunch through data doing SQL-like GROUP BY aggregation at 1,000,000 rows/second. Not bad for a Python-based solution!
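Compression is set per table through a Filters object when the table is created. Here is a minimal write-side sketch, assuming a hypothetical ticks table; the schema, file name, and compressor choice are all illustrative:

    import tables

    # Hypothetical schema for illustration; pos= pins the column order.
    class Tick(tables.IsDescription):
        symbol = tables.StringCol(8, pos=0)
        price = tables.Float64Col(pos=1)
        volume = tables.Int64Col(pos=2)

    # complevel ranges 0-9; 'blosc' and 'zlib' are common complib choices.
    filters = tables.Filters(complevel=5, complib='blosc')

    with tables.open_file('data.h5', mode='w') as h5:
        table = h5.create_table('/', 'ticks', Tick, filters=filters)
        # Rows are appended as tuples matching the column order above.
        table.append([(b'AAPL', 150.0, 100), (b'MSFT', 300.0, 50)])
        table.flush()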
Getting the data back out as a NumPy recarray is as simple as:
    data = table[row_from:row_to]
The HDF library takes care of reading in the relevant chunks of data and converting to NumPy.
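Put together, a read-side sketch might look like this, reusing the hypothetical data.h5 file from above; the slice comes back as a NumPy structured array, and read_where shows PyTables' in-kernel query style for filtering without loading the whole table into memory:

    import tables

    with tables.open_file('data.h5', mode='r') as h5:
        table = h5.root.ticks
        # Slicing reads only the chunks covering these rows.
        data = table[0:1_000_000]
        print(data['price'].mean())
        # In-kernel query: the condition is evaluated chunk by chunk on disk.
        heavy = table.read_where('volume > 75')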