It's not clear to me how connections pools work, and how to properly use them. I was hoping someone could elaborate. I've sketched out my use case below:
settings.py:

    import redis

    def get_redis_connection():
        return redis.StrictRedis(host='localhost', port=6379, db=0)
task1.py:

    import settings

    connection = settings.get_redis_connection()

    def do_something1():
        return connection.hgetall(...)
task2.py:

    import settings

    connection = settings.get_redis_connection()

    def do_something2():
        return connection.hgetall(...)
etc.
Basically I have a settings.py file that hands out Redis connections, and several task files that each fetch a connection and then run operations against it. So each task file holds its own Redis client (which I presume is expensive). What's the best way to optimize this? Is it possible to use a connection pool for this example, and is there a more efficient way to set up this pattern?
For our system, we have over a dozen task files following this same pattern, and I've noticed our requests slowing down.
Thanks