I'm trying to keep multiple celery queues with different tasks and workers in the same redis database. Really just a convenience issue of only wanting one redis server rather than two on my machine.
I followed the Celery tutorial docs verbatim, as that was the only way I could get it to work. Now when I try to duplicate everything with slightly tweaked names/queues, it keeps erroring out.
Note - I'm newish to Python and Celery, which is obviously part of the problem. I'm not sure which instances of "task/tasks" are just names I chose versus reserved words.
My condensed version of docs:
Run celery -A tasks worker to spawn the workers.
tasks.py contains my task code, with celery = Celery('tasks', broker='redis://localhost') to connect to Celery, and @task() above the functions I want to delay.
Within my program for queueing tasks...
from tasks import do_work
do_work.delay()
So given all of the above, what are the steps I need to take to turn this into two types of tasks that run independently on separate queues and workers? For example, blue_tasks and red_tasks?
I've tried changing all instances of tasks to blue_tasks or red_tasks. However, when I queue blue_tasks, the workers I started for red_tasks pick them up and try to work on them.
I read about default queues and such, so I tried this code, which didn't work:

from kombu import Exchange, Queue

CELERY_DEFAULT_QUEUE = 'red'
CELERY_QUEUES = (
    Queue('red', Exchange('red'), routing_key='red'),
)
As a side note, I don't understand why a bare celery worker errors out with Celery attempting to connect to a default amqp instance, while celery -A tasks worker tells Celery to connect to Redis. What task code is celery worker attempting to run on the worker if nothing has been specified?