I have developed a Flask API that executes a long-running function as a background task via a ThreadPoolExecutor. The function consumes a considerable amount of memory (up to 8 GB). While monitoring the memory used by the process, we noticed that it does not release all of its memory when the task completes. The application also behaves differently when running locally versus as a cloud application.
- When testing the application locally, the memory usage comes down as soon as a new request is received, i.e. if the first thread has already completed, the memory associated with it is freed when the next request arrives, and the new request then proceeds. This appears to be expected behavior, as described in this post. (Ref. https://github.com/dchevell/flask-executor/issues/6)
- As a Cloud Foundry application, each new request adds memory, i.e. the memory associated with completed threads is never released.
Sample code:
```python
import datetime
from concurrent.futures import ThreadPoolExecutor

from flask import Flask, request

# max_workers must be given a concrete value
executor = ThreadPoolExecutor(max_workers=2)
app = Flask(__name__)

@app.route("/migrate", methods=['POST'])
def migrate():
    try:
        startTime = datetime.datetime.now()
        # Derive a job ID from the timestamp by stripping separators
        jobId = str(startTime).replace('-', '').replace(':', '').replace(' ', '').replace('.', '')
        inputJSON = request.get_json()
        executor.submit(wrapper, args)  # wrapper and args are defined elsewhere
        return "Job has been started successfully. Job ID: " + jobId
    except Exception as e:
        print("Unable to start job. Exception has occurred:", e)
        return "Unable to start job. Exception has occurred.", 500
```
As the memory allocated in the cloud is limited, we need to release the memory as soon as a thread completes processing. Please help with the correct way to achieve this.
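For context, one commonly suggested workaround for this class of problem: CPython's allocator does not always return freed memory to the operating system, so memory freed inside a long-lived thread can remain held by the process. Running the heavy job in a worker *process* instead of a thread sidesteps this, because all memory is reclaimed by the OS when the worker process exits. Below is a minimal, hedged sketch using ProcessPoolExecutor; `heavy_job` is a hypothetical stand-in for the real `wrapper` function, not code from the question.

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_job(n):
    # Hypothetical stand-in for the real `wrapper`: allocates a large
    # structure, computes a result, and returns it.
    data = list(range(n))
    return sum(data)

def run_in_subprocess(n):
    # Each submitted job runs in a separate worker process. When the pool
    # shuts down, the worker processes exit and the OS reclaims every byte
    # they allocated, unlike a thread inside the main process.
    with ProcessPoolExecutor(max_workers=1) as pool:
        return pool.submit(heavy_job, n).result()

if __name__ == "__main__":
    print(run_in_subprocess(1_000_000))
```

In the Flask handler this would mean submitting `wrapper` to a ProcessPoolExecutor rather than a ThreadPoolExecutor; note that arguments and return values must then be picklable, which may or may not fit the real workload.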
Thanks in advance.
question from:
https://stackoverflow.com/questions/65841070/flask-api-is-not-releasing-memory