A Docker implementation of Celery running with Flask, managed by supervisord.
A walkthrough of this setup is documented in this Medium article.
Celery Scheduler lets you set up a powerful, distributed and fuss-free application task scheduler. Once set up on a server, it reliably runs scheduled tasks at regular, defined intervals.
All you need to do is define your task method and its schedule, and Celery Scheduler handles the rest.
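For instance, the bundled print_hello task (used in the examples later in this README) boils down to a task method like the following sketch; the exact app wiring in the repo may differ:

from celery import Celery

# Broker URL assumes the Redis instance on the default port 6379,
# as shown in the process list later in this README.
celery = Celery('app', broker='redis://localhost:6379/0')

@celery.task
def print_hello():
    # Matches the "Hello" line in the worker.log example below.
    print('Hello')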
Some interesting uses for your own task scheduler include:
- Home automation (IoT projects)
- Data workflow management for business intelligence (BI)
- Triggering email campaigns
- Any other routine, periodic tasks
This is a scheduler application powered by Celery, running on Flask, a minimal Python web framework.
The application is process-managed by Supervisord, which manages the Celery task workers, celerybeat (the scheduler process) and Redis (the message broker).
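As a rough sketch, the supervisord configuration declares one program per process, along the lines of the following (reconstructed from the process list and log paths shown later in this README; the repo's actual supervisord.conf may differ):

[program:redis]
command=/home/ubuntu/celery-scheduler/redis-3.2.1/src/redis-server
stdout_logfile=/var/log/redis/redis.log

[program:celerybeat]
; <app> is a placeholder; the app module name is truncated in the ps output below
command=/home/ubuntu/.virtualenvs/celery_env/bin/celery beat -A <app>
stdout_logfile=/var/log/celery/beat.log

[program:celery]
command=/home/ubuntu/.virtualenvs/celery_env/bin/celery worker -A <app>
stdout_logfile=/var/log/celery/worker.log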
Deployment is handled through Docker, which isolates the application environment so the application runs the same way locally, in staging and on a production server.
This setup is built for deployment with Docker. You may also choose to run this setup without Docker, but no script is provided for that; the setup steps can be inferred from the given Dockerfile.
Deployment with Docker is recommended for consistency of application environment.
- Clone the repository
cd ~
git clone https://github.com/channeng/celery-scheduler.git
cd celery-scheduler
- Install Docker
- Mac or Windows
- Ubuntu server
- To install Docker on Ubuntu, you may run the provided install script:
sudo bash scripts/startup/ubuntu_docker_setup.sh
Note: You may need to prefix the following docker commands with sudo if Docker was set up to run as root.
- Build docker image
docker build -t celery-scheduler .
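(Optional) You can confirm the image was built by listing it:
docker images celery-scheduler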
- (Optional) Stop any containers running from an existing celery-scheduler image
docker stop $(docker ps -f ancestor=celery-scheduler --format "{{.ID}}")
- Run supervisord in a docker container (publishes container port 80 on host port 3020 and runs detached)
docker run -p 3020:80 -d celery-scheduler /usr/bin/supervisord --nodaemon
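The container should now show up in the running list (reusing the ancestor filter from the commands above):
docker ps -f ancestor=celery-scheduler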
- Enter the bash terminal of the running Docker container
docker exec -i -t $(docker ps -f ancestor=celery-scheduler --format "{{.ID}}") /bin/bash
- Check that all required processes are running:
ps aux
You should see something like the following:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 2.5 56492 12728 ? Ss Oct21 0:05 /usr/bin/python /usr/bin/supervisord --nodaemon
root 7 0.0 0.5 31468 2616 ? Sl Oct21 0:30 /home/ubuntu/celery-scheduler/redis-3.2.1/src/redis-server *:6379
root 8 0.0 8.3 98060 41404 ? S Oct21 0:00 /home/ubuntu/.virtualenvs/celery_env/bin/python2.7 /home/ubuntu/.virtualenvs/celery_env/bin/celery beat -A ap
root 9 0.1 8.4 91652 41900 ? S Oct21 0:42 /home/ubuntu/.virtualenvs/celery_env/bin/python2.7 /home/ubuntu/.virtualenvs/celery_env/bin/celery worker -A
root 20 0.0 9.3 99540 46820 ? S Oct21 0:00 /home/ubuntu/.virtualenvs/celery_env/bin/python2.7 /home/ubuntu/.virtualenvs/celery_env/bin/celery worker -A
- Retrieving logs:
tail /var/log/redis/redis.log
tail /var/log/celery/beat.log
tail /var/log/celery/worker.log
tail /var/log/supervisor/supervisord.log
- If successfully deployed, the supervisor logs should display:
INFO success: redis entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
INFO success: celerybeat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
INFO success: celery entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
- You should also see the task print_hello running every minute in worker.log
tail -f /var/log/celery/worker.log
Output:
[2017-10-22 03:18:00,050: INFO/MainProcess] Received task: app.tasks.test.print_hello[aa1b7700-1665-4751-ada2-35aba5670d40]
[2017-10-22 03:18:00,051: INFO/ForkPoolWorker-1] app.tasks.test.print_hello[aa1b7700-1665-4751-ada2-35aba5670d40]: Hello
[2017-10-22 03:18:00,052: INFO/ForkPoolWorker-1] Task app.tasks.test.print_hello[aa1b7700-1665-4751-ada2-35aba5670d40] succeeded in 0.000455291003163s: None
- Task scripts should be written and stored in app/tasks.
- Update celeryconfig.py for new tasks and their trigger times (see the celeryconfig.py sketch below).
- Remember to rebuild the docker image after adding new tasks.
- Update manage.py with a manager command for the task, so it can be triggered manually (see the manage.py sketch below).
- To run a task manually:
python manage.py <manager_command>
- Eg.
source /home/ubuntu/.virtualenvs/celery_env/bin/activate
python manage.py test
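As a rough illustration, a celeryconfig.py schedule entry for the bundled print_hello task might look like the following sketch; the entry name is illustrative, and the actual file in this repo may differ:

from celery.schedules import crontab

# Sketch of a celerybeat schedule entry (entry name is illustrative).
CELERYBEAT_SCHEDULE = {
    'print-hello-every-minute': {
        # Task path matches the print_hello task shown in worker.log above.
        'task': 'app.tasks.test.print_hello',
        # Trigger every minute, as in the example output above.
        'schedule': crontab(minute='*'),
    },
}

Similarly, a manager command in manage.py could be sketched as follows, assuming Flask-Script is the manager library (an assumption; the repo's actual manage.py may be wired differently):

from flask_script import Manager  # assumption: Flask-Script provides the manager
from app import app               # hypothetical import; depends on the repo layout
from app.tasks.test import print_hello

manager = Manager(app)

@manager.command
def test():
    # Queue print_hello for immediate execution via the Redis broker.
    print_hello.delay()

if __name__ == '__main__':
    manager.run()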
To stop all processes managed by supervisord, run the following from within the container:
supervisorctl stop all
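A couple of other supervisorctl commands that may be handy (program names taken from the supervisor logs above):
supervisorctl status
supervisorctl restart celery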
Feel free to submit Pull Requests. For any other enquiries, you may contact me at channeng@gmail.com.