
Configuration and defaults

This document describes the configuration options available.

If you’re using the default loader, you must create the celeryconfig.py module and make sure it is available on the Python path.

Example configuration file

This is an example configuration file to get you started. It should contain all you need to run a basic Celery set-up.

# List of modules to import when celery starts.
CELERY_IMPORTS = ("myapp.tasks", )

## Result store settings.
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"

## Broker settings.
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_VHOST = "/"
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"

## Worker settings
## If you're doing mostly I/O you can have more processes,
## but if mostly spending CPU, try to keep it close to the
## number of CPUs on your machine. If not set, the number of CPUs/cores
## available will be used.
CELERYD_CONCURRENCY = 10
# CELERYD_LOG_FILE = "celeryd.log"
# CELERYD_LOG_LEVEL = "INFO"

Configuration Directives

Concurrency settings

CELERYD_CONCURRENCY

The number of concurrent worker processes executing tasks simultaneously.

Defaults to the number of CPUs/cores available.

CELERYD_PREFETCH_MULTIPLIER

How many messages to prefetch at a time, multiplied by the number of concurrent processes. The default is 4 (four messages for each process). The default setting is usually a good choice; however, if you have very long running tasks waiting in the queue and you have to start the workers, note that the first worker to start will initially receive four times the number of messages, so the tasks may not be fairly distributed among the workers.
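
For example, a sketch of a set-up for long running tasks, keeping only one reserved message per process so tasks are distributed more evenly (the values are illustrative):

CELERYD_CONCURRENCY = 8
# Reserve one message per process at a time instead of the
# default four, at the cost of some messaging overhead.
CELERYD_PREFETCH_MULTIPLIER = 1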

Task result backend settings

CELERY_RESULT_BACKEND

The backend used to store task results (tombstones). Can be one of the following:

  • database

    Use a relational database supported by SQLAlchemy. See Database backend settings.

  • amqp

    Send results back as AMQP messages. See AMQP backend settings.

  • cache

    Use memcached to store the results. See Cache backend settings.

  • tyrant

    Use Tokyo Tyrant to store the results. See Tokyo Tyrant backend settings.

  • redis

    Use Redis to store the results. See Redis backend settings.

  • mongodb

    Use MongoDB to store the results. See MongoDB backend settings.

Database backend settings

CELERY_RESULT_DBURI

Please see Supported Databases for a table of the databases supported. To use this backend you need to configure it with a Connection String; some examples include:

# sqlite (filename)
CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"

# mysql
CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"

# postgresql
CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"

# oracle
CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"

See Connection String for more information about connection strings.

CELERY_RESULT_ENGINE_OPTIONS

To specify additional SQLAlchemy database engine options you can use the CELERY_RESULT_ENGINE_OPTIONS setting:

# echo enables verbose logging from SQLAlchemy.
CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}

Example configuration

CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "mysql://user:password@host/dbname"

AMQP backend settings

CELERY_AMQP_TASK_RESULT_EXPIRES

The time in seconds after which the task result queues expire.

Note

AMQP result expiration requires RabbitMQ versions 2.1.0 and higher.

CELERY_RESULT_EXCHANGE

Name of the exchange to publish results in. Default is "celeryresults".

CELERY_RESULT_EXCHANGE_TYPE

The exchange type of the result exchange. Default is to use a direct exchange.

CELERY_RESULT_SERIALIZER

Result message serialization format. Default is "pickle". See Serializers.

CELERY_RESULT_PERSISTENT

If set to True, result messages will be persistent. This means the messages will not be lost after a broker restart. The default is for the results to be transient.

Example configuration

CELERY_RESULT_BACKEND = "amqp"
CELERY_AMQP_TASK_RESULT_EXPIRES = 18000  # 5 hours.

Cache backend settings

Note

The cache backend supports the pylibmc and python-memcached libraries. The latter is used only if pylibmc is not installed.

CELERY_CACHE_BACKEND

Using a single memcached server:

CELERY_CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

CELERY_RESULT_BACKEND = "cache"
CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

CELERY_CACHE_BACKEND_OPTIONS

You can set pylibmc options using the CELERY_CACHE_BACKEND_OPTIONS setting:

CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
                                "behaviors": {"tcp_nodelay": True}}

Tokyo Tyrant backend settings

Note

The Tokyo Tyrant backend requires the pytyrant library: http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration directives to be set:

TT_HOST

Host name of the Tokyo Tyrant server.

TT_PORT

The port the Tokyo Tyrant server is listening on.

Example configuration

CELERY_RESULT_BACKEND = "tyrant"
TT_HOST = "localhost"
TT_PORT = 1978

Redis backend settings

Note

The Redis backend requires the redis library: http://pypi.python.org/pypi/redis/0.5.5

To install the redis package use pip or easy_install:

$ pip install redis

This backend requires the following configuration directives to be set.

REDIS_HOST

Host name of the Redis database server, e.g. “localhost”.

REDIS_PORT

Port of the Redis database server, e.g. 6379.

REDIS_DB

Database number to use. Default is 0.

REDIS_PASSWORD

Password used to connect to the database.

Example configuration

CELERY_RESULT_BACKEND = "redis"
REDIS_HOST = "localhost"
REDIS_PORT = 6379
REDIS_DB = 0
REDIS_CONNECT_RETRY = True

MongoDB backend settings

Note

The MongoDB backend requires the pymongo library: http://github.com/mongodb/mongo-python-driver/tree/master

CELERY_MONGODB_BACKEND_SETTINGS

This is a dict supporting the following keys:

  • host

    Host name of the MongoDB server. Defaults to “localhost”.

  • port

    The port the MongoDB server is listening on. Defaults to 27017.

  • user

    User name to authenticate to the MongoDB server as (optional).

  • password

    Password to authenticate to the MongoDB server (optional).

  • database

    The database name to connect to. Defaults to “celery”.

  • taskmeta_collection

    The collection name to store task meta data. Defaults to “celery_taskmeta”.

Example configuration

CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "192.168.1.100",
    "port": 30000,
    "database": "mydb",
    "taskmeta_collection": "my_taskmeta_collection",
}

Message Routing

CELERY_QUEUES

The mapping of queues the worker consumes from. This is a dictionary of queue name/options. See Routing Tasks for more information.

The default is a queue/exchange/binding key of "celery", with exchange type direct.

You don’t have to care about this unless you want custom routing facilities.
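
Written out explicitly, the default described above corresponds to this mapping (a sketch using the queue options format from Routing Tasks):

CELERY_QUEUES = {
    "celery": {
        "exchange": "celery",
        "exchange_type": "direct",
        "binding_key": "celery",
    },
}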

CELERY_ROUTES

A list of routers, or a single router used to route tasks to queues. When deciding the final destination of a task the routers are consulted in order. See Routers for more information.
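
For illustration, a route map sending one hypothetical task to a custom queue (the task and queue names are made up):

CELERY_ROUTES = {"myapp.tasks.import_feed": {"queue": "feeds"}}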

CELERY_CREATE_MISSING_QUEUES

If enabled (default), any queue specified that is not defined in CELERY_QUEUES will be created automatically. See Automatic routing.

CELERY_DEFAULT_QUEUE

The queue used by default, if no custom queue is specified. This queue must be listed in CELERY_QUEUES. The default is: celery.

CELERY_DEFAULT_EXCHANGE

Name of the default exchange to use when no custom exchange is specified. The default is: celery.

CELERY_DEFAULT_EXCHANGE_TYPE

Default exchange type used when no custom exchange is specified. The default is: direct.

CELERY_DEFAULT_ROUTING_KEY

The default routing key used when sending tasks. The default is: celery.

CELERY_DEFAULT_DELIVERY_MODE

Can be transient or persistent. The default is to send persistent messages.
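
Taken together, the defaults listed above are equivalent to this explicit configuration:

CELERY_DEFAULT_QUEUE = "celery"
CELERY_DEFAULT_EXCHANGE = "celery"
CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
CELERY_DEFAULT_ROUTING_KEY = "celery"
CELERY_DEFAULT_DELIVERY_MODE = "persistent"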

Broker Settings

BROKER_BACKEND

The messaging backend to use. Default is "amqplib".

BROKER_HOST

Hostname of the broker.

BROKER_PORT

Custom port of the broker. Default is to use the default port for the selected backend.

BROKER_USER

Username to connect as.

BROKER_PASSWORD

Password to connect with.

BROKER_VHOST

Virtual host. Default is "/".

BROKER_USE_SSL

Use SSL to connect to the broker. Off by default. This may not be supported by all transports.

BROKER_CONNECTION_TIMEOUT

The default timeout in seconds before we give up establishing a connection to the AMQP server. Default is 4 seconds.

BROKER_CONNECTION_RETRY

Automatically try to re-establish the connection to the AMQP broker if lost.

The time between retries is increased for each retry, and retries are not exhausted before CELERY_BROKER_CONNECTION_MAX_RETRIES is exceeded.

This behavior is on by default.

CELERY_BROKER_CONNECTION_MAX_RETRIES

Maximum number of retries before we give up re-establishing a connection to the AMQP broker.

If this is set to 0 or None, we will retry forever.

Default is 100 retries.
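
A sketch combining the connection settings above; the timeout and retry flag show the documented defaults, while the retry count is set to retry forever:

BROKER_CONNECTION_TIMEOUT = 4
BROKER_CONNECTION_RETRY = True
CELERY_BROKER_CONNECTION_MAX_RETRIES = 0  # 0 or None means retry forever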

Task execution settings

CELERY_ALWAYS_EAGER

If this is True, all tasks will be executed locally by blocking until the task returns. apply_async and Task.delay will return an EagerResult instance, which emulates the API and behavior of AsyncResult, except the result is already evaluated.

Tasks will never be sent to the queue, but executed locally instead.
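
A minimal sketch of how this behaves, e.g. in a test configuration (the add task is hypothetical):

# In celeryconfig.py:
CELERY_ALWAYS_EAGER = True

# Elsewhere, e.g. in a test:
from celery.decorators import task

@task()
def add(x, y):
    return x + y

result = add.delay(4, 4)   # executed locally, never sent to the queue
assert result.result == 8  # an EagerResult; already evaluated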

CELERY_EAGER_PROPAGATES_EXCEPTIONS

If this is True, eagerly executed tasks (using .apply, or with CELERY_ALWAYS_EAGER on) will raise exceptions.

It’s the same as always running apply with throw=True.

CELERY_IGNORE_RESULT

Whether to store the task return values or not (tombstones). If you still want to store errors, just not successful return values, you can set CELERY_STORE_ERRORS_EVEN_IF_IGNORED.
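
For example, to skip storing return values while still recording errors:

CELERY_IGNORE_RESULT = True
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True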

CELERY_TASK_RESULT_EXPIRES

Time (in seconds, or a timedelta object) after which stored task tombstones will be deleted.

A built-in periodic task will delete the results after this time (celery.task.builtins.backend_cleanup).
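
Both forms are accepted, for example:

from datetime import timedelta

CELERY_TASK_RESULT_EXPIRES = timedelta(hours=6)
# or, equivalently, in seconds:
# CELERY_TASK_RESULT_EXPIRES = 6 * 60 * 60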

Note

For the moment this only works with the database, cache, redis and MongoDB backends. For the AMQP backend see CELERY_AMQP_TASK_RESULT_EXPIRES.

When using the database or MongoDB backends, celerybeat must be running for the results to be expired.

CELERY_MAX_CACHED_RESULTS

Total number of results to store before results are evicted from the result cache. The default is 5000.

CELERY_TRACK_STARTED

If True, the task will report its status as “started” when the task is executed by a worker. The default value is False, as the normal behavior is to not report that level of granularity: tasks are either pending, finished, or waiting to be retried. Having a “started” state can be useful when there are long-running tasks and there is a need to report which task is currently running.

CELERY_TASK_SERIALIZER

A string identifying the default serialization method to use. Can be pickle (default), json, yaml, or any custom serialization methods that have been registered with carrot.serialization.registry.

See also

Serializers.

CELERY_DEFAULT_RATE_LIMIT

The global default rate limit for tasks.

This value is used for tasks that do not have a custom rate limit set. The default is no rate limit.
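
Rate limits are given as strings in tasks-per-time-unit form, e.g. "100/s", "100/m" or "100/h". For example:

# At most ten tasks per minute for tasks without their own rate limit.
CELERY_DEFAULT_RATE_LIMIT = "10/m"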

CELERY_DISABLE_RATE_LIMITS

Disable all rate limits, even if tasks have explicit rate limits set.

CELERY_ACKS_LATE

Late ack means the task messages will be acknowledged after the task has been executed, not just before (the default behavior).

Worker: celeryd

CELERY_IMPORTS

A sequence of modules to import when the celery daemon starts.

This is used to specify the task modules to import, but also to import signal handlers and additional remote control commands, etc.

CELERYD_MAX_TASKS_PER_CHILD

Maximum number of tasks a pool worker process can execute before it’s replaced with a new one. Default is no limit.
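
For example, to replace each pool process after it has executed one hundred tasks:

CELERYD_MAX_TASKS_PER_CHILD = 100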

CELERYD_TASK_TIME_LIMIT

Task hard time limit in seconds. The worker processing the task will be killed and replaced with a new one when this is exceeded.

CELERYD_TASK_SOFT_TIME_LIMIT

Task soft time limit in seconds.

The SoftTimeLimitExceeded exception will be raised when this is exceeded. The task can catch this to e.g. clean up before the hard time limit comes.
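
A sketch of the two limits used together (the values are illustrative); the soft limit gives the task a chance to clean up before the hard limit kills the worker process:

CELERYD_TASK_SOFT_TIME_LIMIT = 60   # SoftTimeLimitExceeded raised after 60s
CELERYD_TASK_TIME_LIMIT = 120       # the worker process is killed after 120s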

Example:

from celery.decorators import task
from celery.exceptions import SoftTimeLimitExceeded

@task()
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        # The soft time limit was hit: clean up quickly before the
        # hard time limit kills the worker process.
        cleanup_in_a_hurry()

CELERY_STORE_ERRORS_EVEN_IF_IGNORED

If set, the worker stores all task errors in the result store even if Task.ignore_result is on.

CELERYD_STATE_DB

Name of the file used to store persistent worker state (like revoked tasks). Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the --statedb argument to celeryd.

Not enabled by default.

CELERYD_ETA_SCHEDULER_PRECISION

Set the maximum time in seconds that the ETA scheduler can sleep between rechecking the schedule. Default is 1 second.

Setting this value to 1 second means the scheduler’s precision will be 1 second. If you need near-millisecond precision you can set this to 0.1.

Error E-Mails

CELERY_SEND_TASK_ERROR_EMAILS

The default value for the Task.send_error_emails attribute, which, if set to True, means errors occurring during task execution will be sent to ADMINS by e-mail.

CELERY_TASK_ERROR_WHITELIST

A white list of exceptions to send error e-mails for.

ADMINS

List of (name, email_address) tuples for the administrators that should receive error e-mails.

SERVER_EMAIL

The e-mail address this worker sends e-mails from. Default is celery@localhost.

MAIL_HOST

The mail server to use. Default is "localhost".

MAIL_HOST_USER

User name (if required) to log on to the mail server with.

MAIL_HOST_PASSWORD

Password (if required) to log on to the mail server with.

MAIL_PORT

The port the mail server is listening on. Default is 25.

Example E-Mail configuration

This configuration enables the sending of error e-mails to george@vandelay.com and kramer@vandelay.com:

# Enables error e-mails.
CELERY_SEND_TASK_ERROR_EMAILS = True

# Name and e-mail addresses of recipients
ADMINS = (
    ("George Costanza", "george@vandelay.com"),
    ("Cosmo Kramer", "kosmo@vandelay.com"),
)

# E-mail address used as sender (From field).
SERVER_EMAIL = "no-reply@vandelay.com"

# Mail server configuration
MAIL_HOST = "mail.vandelay.com"
MAIL_PORT = 25
# MAIL_HOST_USER = "servers"
# MAIL_HOST_PASSWORD = "s3cr3t"

Events

CELERY_SEND_EVENTS

Send events so the worker can be monitored by tools like celerymon.

CELERY_EVENT_QUEUE

Name of the queue to consume event messages from. Default is "celeryevent".

CELERY_EVENT_EXCHANGE

Name of the exchange to send event messages to. Default is "celeryevent".

CELERY_EVENT_EXCHANGE_TYPE

The exchange type of the event exchange. Default is to use a "direct" exchange.

CELERY_EVENT_ROUTING_KEY

Routing key used when sending event messages. Default is "celeryevent".

CELERY_EVENT_SERIALIZER

Message serialization format used when sending event messages. Default is "json". See Serializers.
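
A sketch enabling events, with the documented defaults spelled out:

CELERY_SEND_EVENTS = True
CELERY_EVENT_EXCHANGE = "celeryevent"
CELERY_EVENT_EXCHANGE_TYPE = "direct"
CELERY_EVENT_ROUTING_KEY = "celeryevent"
CELERY_EVENT_SERIALIZER = "json"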

Broadcast Commands

CELERY_BROADCAST_QUEUE

Name prefix for the queue used when listening for broadcast messages. The worker’s host name will be appended to the prefix to create the final queue name.

Default is "celeryctl".

CELERY_BROADCAST_EXCHANGE

Name of the exchange used for broadcast messages.

Default is "celeryctl".

CELERY_BROADCAST_EXCHANGE_TYPE

Exchange type used for broadcast messages. Default is "fanout".

Logging

CELERYD_LOG_FILE

The default file name the worker daemon logs messages to. Can be overridden using the --logfile option to celeryd.

The default is None (stderr).

CELERYD_LOG_LEVEL

Worker log level, can be one of DEBUG, INFO, WARNING, ERROR or CRITICAL.

Can also be set via the --loglevel argument to celeryd.

See the logging module for more information.

CELERYD_LOG_FORMAT

The format to use for log messages.

Default is [%(asctime)s: %(levelname)s/%(processName)s] %(message)s

See the Python logging module for more information about log formats.

CELERYD_TASK_LOG_FORMAT

The format to use for log messages logged in tasks.

Default is:

[%(asctime)s: %(levelname)s/%(processName)s]
    [%(task_name)s(%(task_id)s)] %(message)s

See the Python logging module for more information about log formats.

CELERY_REDIRECT_STDOUTS

If enabled, stdout and stderr will be redirected to the current logger.

Enabled by default. Used by celeryd and celerybeat.

CELERY_REDIRECT_STDOUTS_LEVEL

The log level that redirected stdout and stderr output is logged at. Can be one of DEBUG, INFO, WARNING, ERROR or CRITICAL.

Default is WARNING.

Custom Component Classes (advanced)

CELERYD_POOL

Name of the task pool class used by the worker. Default is celery.concurrency.processes.TaskPool.

CELERYD_LISTENER

Name of the listener class used by the worker. Default is celery.worker.listener.CarrotListener.

CELERYD_MEDIATOR

Name of the mediator class used by the worker. Default is celery.worker.controllers.Mediator.

CELERYD_ETA_SCHEDULER

Name of the ETA scheduler class used by the worker. Default is celery.worker.controllers.ScheduleController.

Periodic Task Server: celerybeat

CELERYBEAT_SCHEDULE

The periodic task schedule used by celerybeat. See Entries.
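
For illustration, a schedule with a single entry running a hypothetical tasks.add every 30 seconds with arguments (16, 16):

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "add-every-30-seconds": {
        "task": "tasks.add",
        "schedule": timedelta(seconds=30),
        "args": (16, 16),
    },
}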

CELERYBEAT_SCHEDULER

The default scheduler class. Default is "celery.beat.PersistentScheduler".

Can also be set via the -S argument to celerybeat.

CELERYBEAT_SCHEDULE_FILENAME

Name of the file used by PersistentScheduler to store the last run times of periodic tasks. Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the --schedule argument to celerybeat.

CELERYBEAT_MAX_LOOP_INTERVAL

The maximum number of seconds celerybeat can sleep between checking the schedule. Default is 300 seconds (5 minutes).

CELERYBEAT_LOG_FILE

The default file name to log messages to. Can be overridden using the --logfile option to celerybeat.

The default is None (stderr).

CELERYBEAT_LOG_LEVEL

Logging level. Can be any of DEBUG, INFO, WARNING, ERROR, or CRITICAL.

Can also be set via the --loglevel argument to celerybeat.

See the logging module for more information.

Monitor Server: celerymon

CELERYMON_LOG_FILE

The default file name to log messages to. Can be overridden using the --logfile argument to celerymon.

The default is None (stderr).

CELERYMON_LOG_LEVEL

Logging level. Can be any of DEBUG, INFO, WARNING, ERROR, or CRITICAL.

See the logging module for more information.
