I have a huge numpy 3D tensor stored in a binary .npy file on disk, which I normally read with np.load. On loading it, I quickly end up using most of my memory.
Luckily, at every run of the program, I only require a certain slice of the huge tensor. The slice is of a fixed size and its dimensions are provided from an external module.
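For context, this is roughly what I do today; the filename and slice bounds below are placeholders for illustration (in reality the bounds come from the external module):

```python
import numpy as np

# Current approach: np.load pulls the entire tensor into RAM,
# even though only a small fixed-size slice is needed per run.
tensor = np.load("huge_tensor.npy")  # hypothetical filename

# Slice bounds are provided externally; these values are just placeholders.
i0, i1, j0, j1, k0, k1 = 10, 20, 0, 64, 0, 64
block = tensor[i0:i1, j0:j1, k0:k1]
```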
What's the best way to do this? The only approach I could come up with is storing the numpy matrix in a MySQL database, but I'm sure there are much better / easier ways. I'd also be happy to build my 3D tensor file differently if that helps.
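For reference, the file is currently produced with a plain np.save; the shape and dtype below are placeholders, not my real data:

```python
import numpy as np

# Hypothetical shape/dtype; the real tensor is much larger.
big = np.random.rand(500, 1000, 1000).astype(np.float32)

# This is the .npy file I later read back with np.load.
np.save("huge_tensor.npy", big)
```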
Does the answer change if my tensor is sparse in nature?