This document describes the configuration options available.
If you’re using the default loader, you must create the celeryconfig.py module and make sure it is available on the Python path.
This is an example configuration file to get you started. It should contain all you need to run a basic Celery set-up.
# List of modules to import when celery starts.
CELERY_IMPORTS = ("myapp.tasks", )
## Result store settings.
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"
## Broker settings.
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
## Worker settings
## If you're doing mostly I/O you can have more processes,
## but if mostly spending CPU, try to keep it close to the
## number of CPUs on your machine. If not set, the number of CPUs/cores
## available will be used.
CELERYD_CONCURRENCY = 10
# CELERYD_LOG_FILE = "celeryd.log"
# CELERYD_LOG_LEVEL = "INFO"
CELERYD_CONCURRENCY: The number of concurrent worker processes executing tasks simultaneously.
Defaults to the number of CPUs/cores available.
CELERYD_PREFETCH_MULTIPLIER: How many messages to prefetch at a time, multiplied by the number of concurrent processes. The default is 4 (four messages for each process). The default setting is usually a good choice; however, if you have very long running tasks waiting in the queue and you have to start the workers, note that the first worker to start will initially receive four times the number of messages it can process at once, so the tasks may not be fairly distributed among the workers.
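If your tasks are long running, a common mitigation is to lower the multiplier so each process reserves only one message at a time. A minimal sketch (the value is illustrative):
CELERYD_PREFETCH_MULTIPLIER = 1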
CELERY_RESULT_BACKEND: The backend used to store task results (tombstones). Can be one of the following:
database: Use a relational database supported by SQLAlchemy. See Database backend settings.
cache: Use memcached to store the results. See Cache backend settings.
mongodb: Use MongoDB to store the results. See MongoDB backend settings.
redis: Use Redis to store the results. See Redis backend settings.
tyrant: Use Tokyo Tyrant to store the results. See Tokyo Tyrant backend settings.
amqp: Send results back as AMQP messages. See AMQP backend settings.
Please see Supported Databases for a table of supported databases. To use this backend you need to configure it with a Connection String; some examples include:
# sqlite (filename)
CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"
# mysql
CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"
# postgresql
CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"
# oracle
CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"
See Connection String for more information about connection strings.
To specify additional SQLAlchemy database engine options you can use the CELERY_RESULT_ENGINE_OPTIONS setting:
# echo enables verbose logging from SQLAlchemy.
CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
Example configuration:
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "mysql://user:password@host/dbname"
CELERY_AMQP_TASK_RESULT_EXPIRES: The time in seconds after which the task result queues should expire.
Note
AMQP result expiration requires RabbitMQ versions 2.1.0 and higher.
CELERY_RESULT_EXCHANGE: Name of the exchange to publish results in. Default is "celeryresults".
CELERY_RESULT_EXCHANGE_TYPE: The exchange type of the result exchange. Default is to use a direct exchange.
CELERY_RESULT_SERIALIZER: Result message serialization format. Default is "pickle". See Serializers.
CELERY_RESULT_PERSISTENT: If set to True, result messages will be persistent. This means the messages will not be lost after a broker restart. The default is for the results to be transient.
Example configuration:
CELERY_RESULT_BACKEND = "amqp"
CELERY_AMQP_TASK_RESULT_EXPIRES = 18000  # 5 hours.
Note
The cache backend supports the pylibmc and python-memcached libraries. The latter is used only if pylibmc is not installed.
Using a single memcached server:
CELERY_CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
Using multiple memcached servers:
CELERY_RESULT_BACKEND = "cache"
CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'
You can set pylibmc options using the CELERY_CACHE_BACKEND_OPTIONS setting:
CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
                                "behaviors": {"tcp_nodelay": True}}
Note
The Tokyo Tyrant backend requires the pytyrant library: http://pypi.python.org/pypi/pytyrant/
This backend requires the following configuration directives to be set:
TT_HOST: Host name of the Tokyo Tyrant server.
TT_PORT: The port the Tokyo Tyrant server is listening to.
Example configuration:
CELERY_RESULT_BACKEND = "tyrant"
TT_HOST = "localhost"
TT_PORT = 1978
Note
The Redis backend requires the redis library: http://pypi.python.org/pypi/redis/0.5.5
To install the redis package use pip or easy_install:
$ pip install redis
This backend requires the following configuration directives to be set.
REDIS_HOST: Host name of the Redis database server, e.g. "localhost".
REDIS_PORT: Port of the Redis database server, e.g. 6379.
REDIS_DB: Database number to use. Default is 0.
REDIS_PASSWORD: Password used to connect to the database.
Example configuration:
CELERY_RESULT_BACKEND = "redis"
REDIS_HOST = "localhost"
REDIS_PORT = 6379
REDIS_DB = 0
REDIS_CONNECT_RETRY = True
Note
The MongoDB backend requires the pymongo library: http://github.com/mongodb/mongo-python-driver/tree/master
CELERY_MONGODB_BACKEND_SETTINGS: This is a dict supporting the following keys:
host: Host name of the MongoDB server. Defaults to "localhost".
port: The port the MongoDB server is listening to. Defaults to 27017.
user: User name to authenticate to the MongoDB server as (optional).
password: Password to authenticate to the MongoDB server (optional).
database: The database name to connect to. Defaults to "celery".
taskmeta_collection: The collection name to store task meta data. Defaults to "celery_taskmeta".
Example configuration:
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "192.168.1.100",
    "port": 30000,
    "database": "mydb",
    "taskmeta_collection": "my_taskmeta_collection",
}
CELERY_QUEUES: The mapping of queues the worker consumes from. This is a dictionary of queue name/options. See Routing Tasks for more information.
The default is a queue/exchange/binding key of "celery", with exchange type direct.
You don’t have to care about this unless you want custom routing facilities.
CELERY_ROUTES: A list of routers, or a single router, used to route tasks to queues. When deciding the final destination of a task the routers are consulted in order. See Routers for more information.
CELERY_CREATE_MISSING_QUEUES: If enabled (default), any queues specified that are not defined in CELERY_QUEUES will be automatically created. See Automatic routing.
CELERY_DEFAULT_QUEUE: The queue used by default, if no custom queue is specified. This queue must be listed in CELERY_QUEUES. The default is: celery.
CELERY_DEFAULT_EXCHANGE: Name of the default exchange to use when no custom exchange is specified. The default is: celery.
CELERY_DEFAULT_EXCHANGE_TYPE: Default exchange type used when no custom exchange is specified. The default is: direct.
CELERY_DEFAULT_ROUTING_KEY: The default routing key used when sending tasks. The default is: celery.
CELERY_DEFAULT_DELIVERY_MODE: Can be transient or persistent. The default is to send persistent messages.
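As a sketch of custom routing under these settings (the "feeds" queue and the feeds.tasks.import_feed task name are hypothetical):
CELERY_DEFAULT_QUEUE = "celery"
CELERY_QUEUES = {
    "celery": {"exchange": "celery",
               "exchange_type": "direct",
               "binding_key": "celery"},
    "feeds": {"exchange": "feeds",
              "exchange_type": "direct",
              "binding_key": "feeds"},
}
# Route one task to the extra queue; everything else uses the default.
CELERY_ROUTES = {"feeds.tasks.import_feed": {"queue": "feeds"}}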
BROKER_BACKEND: The messaging backend to use. Default is "amqplib".
BROKER_HOST: Hostname of the broker.
BROKER_PORT: Custom port of the broker. Default is to use the default port for the selected backend.
BROKER_USER: Username to connect as.
BROKER_PASSWORD: Password to connect with.
BROKER_VHOST: Virtual host. Default is "/".
BROKER_USE_SSL: Use SSL to connect to the broker. Off by default. This may not be supported by all transports.
BROKER_CONNECTION_TIMEOUT: The timeout in seconds before we give up establishing a connection to the AMQP server. Default is 4 seconds.
BROKER_CONNECTION_RETRY: Automatically try to re-establish the connection to the AMQP broker if lost.
The time between retries is increased for each retry, and is not exhausted before BROKER_CONNECTION_MAX_RETRIES is exceeded.
This behavior is on by default.
BROKER_CONNECTION_MAX_RETRIES: Maximum number of retries before we give up re-establishing a connection to the AMQP broker.
If this is set to 0 or None, we will retry forever.
Default is 100 retries.
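A sketch combining the connection settings above (the values are illustrative):
BROKER_CONNECTION_TIMEOUT = 10     # give up connecting after 10 seconds
BROKER_CONNECTION_RETRY = True     # re-establish the connection if lost
BROKER_CONNECTION_MAX_RETRIES = 0  # 0 (or None) means retry forever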
CELERY_ALWAYS_EAGER: If this is True, all tasks will be executed locally by blocking until the task returns. apply_async and Task.delay will return an EagerResult, which emulates the behavior of AsyncResult except that the result has already been evaluated.
Tasks will never be sent to the queue, but will be executed locally instead.
CELERY_EAGER_PROPAGATES_EXCEPTIONS: If this is True, eagerly executed tasks (applied using apply, or with CELERY_ALWAYS_EAGER on) will raise exceptions.
It's the same as always running apply with throw=True.
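These two settings are often combined in test configurations so tasks run inline and failures surface immediately. A sketch (the add task is hypothetical):
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

# With the above, delay() runs the task in the current process:
# result = add.delay(2, 2)   # returns an EagerResult
# result.get()               # 4, already evaluated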
CELERY_IGNORE_RESULT: Whether to store the task return values or not (tombstones). If you still want to store errors, just not successful return values, you can set CELERY_STORE_ERRORS_EVEN_IF_IGNORED.
CELERY_TASK_RESULT_EXPIRES: Time (in seconds, or a timedelta object) after which stored task tombstones will be deleted.
A built-in periodic task will delete the results after this time (celery.task.builtins.backend_cleanup).
Note
For the moment this only works with the database, cache, redis and MongoDB backends. For the AMQP backend see CELERY_AMQP_TASK_RESULT_EXPIRES.
When using the database or MongoDB backends, celerybeat must be running for the results to be expired.
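For example, to expire results after one day (a sketch; either form below should be equivalent):
from datetime import timedelta

CELERY_TASK_RESULT_EXPIRES = timedelta(days=1)  # or the number of seconds: 86400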
CELERY_MAX_CACHED_RESULTS: Total number of results to store before results are evicted from the result cache. The default is 5000.
CELERY_TRACK_STARTED: If True the task will report its status as "started" when the task is executed by a worker. The default value is False, as the normal behavior is not to report that level of granularity: tasks are either pending, finished, or waiting to be retried. Having a "started" state can be useful when there are long running tasks and there is a need to report which task is currently running.
CELERY_TASK_SERIALIZER: A string identifying the default serialization method to use. Can be pickle (default), json, yaml, or any custom serialization methods that have been registered with carrot.serialization.registry.
See also: Serializers.
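For instance, to use JSON instead of pickle (task arguments must then be JSON-serializable):
CELERY_TASK_SERIALIZER = "json"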
CELERY_DEFAULT_RATE_LIMIT: The global default rate limit for tasks.
This value is used for tasks that do not have a custom rate limit set. The default is no rate limit.
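Rate limits are given as a string of the form "count/unit". A sketch setting a global default of 100 tasks per minute:
CELERY_DEFAULT_RATE_LIMIT = "100/m"   # also e.g. "10/s" or "50/h"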
CELERY_DISABLE_RATE_LIMITS: Disable all rate limits, even if tasks have explicit rate limits set.
CELERY_ACKS_LATE: Late ack means the task messages will be acknowledged after the task has been executed, not just before, which is the default behavior.
CELERY_IMPORTS: A sequence of modules to import when the celery daemon starts.
This is used to specify the task modules to import, but also to import signal handlers and additional remote control commands, etc.
CELERYD_MAX_TASKS_PER_CHILD: Maximum number of tasks a pool worker process can execute before it is replaced with a new one. Default is no limit.
CELERYD_TASK_TIME_LIMIT: Task hard time limit in seconds. The worker processing the task will be killed and replaced with a new one when this is exceeded.
CELERYD_TASK_SOFT_TIME_LIMIT: Task soft time limit in seconds.
The SoftTimeLimitExceeded exception will be raised when this is exceeded. The task can catch this to e.g. clean up before the hard time limit comes.
Example:
from celery.decorators import task
from celery.exceptions import SoftTimeLimitExceeded

@task()
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        cleanup_in_a_hurry()
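The matching settings might look like this (the values are illustrative):
CELERYD_TASK_SOFT_TIME_LIMIT = 60   # SoftTimeLimitExceeded raised after 60 seconds
CELERYD_TASK_TIME_LIMIT = 120       # the worker process is killed after 120 seconds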
CELERY_STORE_ERRORS_EVEN_IF_IGNORED: If set, the worker stores all task errors in the result store even if Task.ignore_result is on.
CELERYD_STATE_DB: Name of the file used to store persistent worker state (like revoked tasks). Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).
Can also be set via the --statedb argument to celeryd.
Not enabled by default.
CELERYD_ETA_SCHEDULER_PRECISION: Set the maximum time in seconds that the ETA scheduler can sleep between rechecking the schedule. Default is 1 second.
Setting this value to 1 second means the scheduler's precision will be 1 second. If you need near millisecond precision you can set this to 0.1.
CELERY_SEND_TASK_ERROR_EMAILS: The default value for the Task.send_error_emails attribute, which if set to True means errors occurring during task execution will be sent to ADMINS by e-mail.
CELERY_TASK_ERROR_WHITELIST: A white list of exceptions to send error e-mails for.
ADMINS: A list of (name, email_address) tuples for the administrators that should receive error e-mails.
SERVER_EMAIL: The e-mail address this worker sends e-mails from. Default is celery@localhost.
EMAIL_HOST: The mail server to use. Default is "localhost".
EMAIL_HOST_USER: User name (if required) to log on to the mail server with.
EMAIL_HOST_PASSWORD: Password (if required) to log on to the mail server with.
EMAIL_PORT: The port the mail server is listening on. Default is 25.
This configuration enables the sending of error e-mails to george@vandelay.com and kosmo@vandelay.com:
# Enables error e-mails.
CELERY_SEND_TASK_ERROR_EMAILS = True
# Name and e-mail addresses of recipients
ADMINS = (
    ("George Costanza", "george@vandelay.com"),
    ("Cosmo Kramer", "kosmo@vandelay.com"),
)
# E-mail address used as sender (From field).
SERVER_EMAIL = "no-reply@vandelay.com"
# Mailserver configuration
EMAIL_HOST = "mail.vandelay.com"
EMAIL_PORT = 25
# EMAIL_HOST_USER = "servers"
# EMAIL_HOST_PASSWORD = "s3cr3t"
EMAIL_TIMEOUT = 2 # two seconds is the default
CELERY_SEND_EVENTS: Send events so the worker can be monitored by tools like celerymon.
CELERY_EVENT_QUEUE: Name of the queue to consume event messages from. Default is "celeryevent".
CELERY_EVENT_EXCHANGE: Name of the exchange to send event messages to. Default is "celeryevent".
CELERY_EVENT_EXCHANGE_TYPE: The exchange type of the event exchange. Default is to use a "direct" exchange.
CELERY_EVENT_ROUTING_KEY: Routing key used when sending event messages. Default is "celeryevent".
CELERY_EVENT_SERIALIZER: Message serialization format used when sending event messages. Default is "json". See Serializers.
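A sketch enabling events so celerymon can monitor the workers, leaving the queue and exchange names at their defaults:
CELERY_SEND_EVENTS = True
CELERY_EVENT_SERIALIZER = "json"   # the default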
CELERY_BROADCAST_QUEUE: Name prefix for the queue used when listening for broadcast messages. The worker's host name will be appended to the prefix to create the final queue name.
Default is "celeryctl".
CELERY_BROADCAST_EXCHANGE: Name of the exchange used for broadcast messages.
Default is "celeryctl".
CELERY_BROADCAST_EXCHANGE_TYPE: Exchange type used for broadcast messages. Default is "fanout".
CELERYD_LOG_FILE: The default file name the worker daemon logs messages to. Can be overridden using the --logfile option to celeryd.
The default is None (stderr).
CELERYD_LOG_LEVEL: Worker log level, can be one of DEBUG, INFO, WARNING, ERROR or CRITICAL.
Can also be set via the --loglevel argument to celeryd.
See the logging module for more information.
CELERYD_LOG_FORMAT: The format to use for log messages.
Default is [%(asctime)s: %(levelname)s/%(processName)s] %(message)s
See the Python logging module for more information about log formats.
CELERYD_TASK_LOG_FORMAT: The format to use for log messages logged in tasks.
Default is:
[%(asctime)s: %(levelname)s/%(processName)s]
[%(task_name)s(%(task_id)s)] %(message)s
See the Python logging module for more information about log formats.
CELERY_REDIRECT_STDOUTS: If enabled, stdout and stderr will be redirected to the current logger.
Enabled by default. Used by celeryd and celerybeat.
CELERY_REDIRECT_STDOUTS_LEVEL: The log level that output to stdout and stderr is logged as. Can be one of DEBUG, INFO, WARNING, ERROR or CRITICAL.
Default is WARNING.
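A sketch of a logging setup combining these settings (the values are illustrative):
CELERYD_LOG_FILE = "celeryd.log"
CELERYD_LOG_LEVEL = "INFO"
CELERY_REDIRECT_STDOUTS = True
CELERY_REDIRECT_STDOUTS_LEVEL = "WARNING"   # the default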
CELERYD_POOL: Name of the task pool class used by the worker. Default is celery.concurrency.processes.TaskPool.
CELERYD_LISTENER: Name of the listener class used by the worker. Default is celery.worker.listener.CarrotListener.
CELERYD_MEDIATOR: Name of the mediator class used by the worker. Default is celery.worker.controllers.Mediator.
CELERYD_ETA_SCHEDULER: Name of the ETA scheduler class used by the worker. Default is celery.worker.controllers.ScheduleController.
CELERYBEAT_SCHEDULE: The periodic task schedule used by celerybeat. See Entries.
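A minimal sketch of a schedule entry (the tasks.add task and its arguments are hypothetical):
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "add-every-30-seconds": {
        "task": "tasks.add",
        "schedule": timedelta(seconds=30),
        "args": (16, 16),
    },
}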
CELERYBEAT_SCHEDULER: The default scheduler class. Default is "celery.beat.PersistentScheduler".
Can also be set via the -S argument to celerybeat.
CELERYBEAT_SCHEDULE_FILENAME: Name of the file used by PersistentScheduler to store the last run times of periodic tasks. Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).
Can also be set via the --schedule argument to celerybeat.
CELERYBEAT_MAX_LOOP_INTERVAL: The maximum number of seconds celerybeat can sleep between checking the schedule. Default is 300 seconds (5 minutes).
CELERYBEAT_LOG_FILE: The default file name to log messages to. Can be overridden using the --logfile option to celerybeat.
The default is None (stderr).
CELERYBEAT_LOG_LEVEL: Logging level. Can be any of DEBUG, INFO, WARNING, ERROR, or CRITICAL.
Can also be set via the --loglevel argument to celerybeat.
See the logging module for more information.
CELERYMON_LOG_FILE: The default file name to log messages to. Can be overridden using the --logfile argument to celerymon.
The default is None (stderr).
CELERYMON_LOG_LEVEL: Logging level. Can be any of DEBUG, INFO, WARNING, ERROR, or CRITICAL.
See the logging module for more information.