
python - Keras: release memory after finishing the training process

I built a CNN-based autoencoder model using Keras. After training finishes, at least 1/3 of my laptop's 64 GB of RAM is still occupied, and the same is true for the GPU memory. I have not found a good way to release it; the only thing that works is closing the Anaconda Prompt window and the Jupyter notebook. Does anyone have a good suggestion? Thanks!



1 Reply


Releasing RAM

To release RAM, simply del the variables that reference your model and training data, as @nuric suggested in the comments.
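A minimal sketch of what that looks like in practice (model, x_train, and history are placeholder names for whatever your script defined; calling gc.collect() afterwards is an optional extra nudge to Python's garbage collector, not something the answer requires):

import gc

del model      # drop the reference to the trained autoencoder
del x_train    # drop large training arrays you no longer need
del history    # drop the History object returned by fit()
gc.collect()   # ask the garbage collector to reclaim the memory now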

Releasing GPU memory

This is a little trickier than releasing RAM. A commonly suggested approach (assuming you are using Keras) is the following:

from keras import backend as K
K.clear_session()

However, the above code does not work for everyone; even combining it with del model may still leave the GPU memory allocated.
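For reference, here is a sketch of the teardown sequence that is usually tried first (model is again a placeholder name; the ordering of del, gc.collect(), and K.clear_session() is conventional and not guaranteed to free the GPU memory):

import gc
from keras import backend as K

del model          # remove the Python reference to the trained model
gc.collect()       # reclaim Python-side memory
K.clear_session()  # destroy the TF graph/session that Keras is holding on to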

If that method does not work for you, try the following (you need to install the numba library first):

from numba import cuda

cuda.select_device(0)  # select the GPU whose memory you want to release
cuda.close()           # destroy the CUDA context and free the GPU memory

The reason this works: TensorFlow only allocates memory on the GPU, while CUDA is the layer that actually manages that memory.

If CUDA still refuses to release the GPU memory after you have cleared the graph with K.clear_session(), you can use the numba cuda module to take direct control of CUDA and free the GPU memory.
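Putting the two steps together, a rough sketch of a cleanup helper based on this answer (the function name release_gpu_memory and the default device index 0 are assumptions; note that cuda.close() destroys the CUDA context, so the current process generally cannot use that GPU again without reinitializing it):

import gc
from keras import backend as K
from numba import cuda

def release_gpu_memory(device_id=0):
    """Clear the Keras session, then close the CUDA context on the given GPU."""
    K.clear_session()              # drop the TF graph/session held by Keras
    gc.collect()                   # reclaim Python-side objects first
    cuda.select_device(device_id)  # select the GPU whose memory should be released
    cuda.close()                   # destroy the CUDA context, freeing its memory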

