This document describes the current stable version of Celery (4.4).

celery.bin.worker

Program used to start a Celery worker instance.

The celery worker command (previously known as celeryd).

See also

See Preload Options.

-c, --concurrency

Number of child processes processing the queue. The default is the number of CPUs available on your system.

-P, --pool

Pool implementation:

prefork (default), eventlet, gevent or solo.

-n, --hostname

Set a custom hostname (e.g., ‘w1@%%h’). Expansions: %%h (hostname), %%n (name), and %%d (domain).
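As an illustration of how these variables behave, here is a hypothetical helper (not Celery's actual expansion code, which lives in its node-name utilities). Note that the doubled `%%` is option-help escaping; on the command line a single `%` is used.

```python
import socket

def expand_nodename(template, name="w1"):
    """Rough sketch of node-name expansion; illustrative only."""
    fqdn = socket.gethostname()
    domain = fqdn.partition(".")[2]
    return (template
            .replace("%h", fqdn)      # %h: the hostname
            .replace("%n", name)      # %n: the node name
            .replace("%d", domain))   # %d: the domain part, if any

print(expand_nodename("w1@%h"))  # e.g. 'w1@myhost.example.com'
```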

-B, --beat

Also run the celery beat periodic task scheduler. Please note that there must only be one instance of this service.

Note

-B is meant for development use. In production environments, start celery beat separately.

-Q, --queues

Comma-separated list of queues to enable for this worker. By default all configured queues are enabled. Example: -Q video,image

-X, --exclude-queues

Comma-separated list of queues to disable for this worker. By default all configured queues are enabled. Example: -X video,image
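For illustration, a comma-separated value such as `video,image` splits into individual queue names roughly like this (a sketch, not Celery's actual option parsing):

```python
def parse_queue_list(value):
    # Split a '-Q'/'-X' style value like 'video,image' into queue names,
    # ignoring empty entries and surrounding whitespace.
    return [q.strip() for q in value.split(",") if q.strip()]

print(parse_queue_list("video,image"))  # ['video', 'image']
```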

-I, --include

Comma-separated list of additional modules to import. Example: -I foo.tasks,bar.tasks

-s, --schedule

Path to the schedule database if running with the -B option. Defaults to celerybeat-schedule. The extension “.db” may be appended to the filename.

-O

Apply optimization profile. Supported: default, fair

--prefetch-multiplier

Set custom prefetch multiplier value for this worker instance.
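The number of messages a worker reserves at once is its concurrency multiplied by the prefetch multiplier (Celery's default multiplier is 4). A quick sketch of the arithmetic:

```python
def reserved_messages(concurrency, prefetch_multiplier=4):
    # Total messages prefetched = child processes * multiplier.
    # 4 is Celery's default worker_prefetch_multiplier.
    return concurrency * prefetch_multiplier

print(reserved_messages(8))     # 32
print(reserved_messages(8, 1))  # 8: reserve one task per process at a time
```

Setting the multiplier to 1 is a common choice for long-running tasks, so idle workers can pick up waiting messages instead of one worker hoarding them.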

--scheduler

Scheduler class to use. Default: celery.beat.PersistentScheduler.

-S, --statedb

Path to the state database. The extension ‘.db’ may be appended to the filename. Default: {default}

-E, --task-events

Send task-related events that can be captured by monitors like celery events, celerymon, and others.

--without-gossip

Don’t subscribe to other workers’ events.

--without-mingle

Don’t synchronize with other workers at start-up.

--without-heartbeat

Don’t send event heartbeats.

--heartbeat-interval

Interval in seconds at which to send the worker heartbeat.

--purge

Purges all waiting tasks before the daemon is started. WARNING: This is unrecoverable, and the tasks will be deleted from the messaging server.

--time-limit

Enables a hard time limit (in seconds, int/float) for tasks.

--soft-time-limit

Enables a soft time limit (in seconds, int/float) for tasks.

--max-tasks-per-child

Maximum number of tasks a pool worker can execute before it’s terminated and replaced by a new worker.

--max-memory-per-child

Maximum amount of resident memory, in KiB, that may be consumed by a child process before it will be replaced by a new one. If a single task causes a child process to exceed this limit, the task will be completed and the child process will be replaced afterwards. Default: no limit.
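Because the limit is given in KiB, translating from a more familiar unit is an easy place to slip up; for example, a 200 MiB cap (values here are illustrative):

```python
def mib_to_kib(mebibytes):
    # --max-memory-per-child takes KiB: 200 MiB -> 204800 KiB.
    return mebibytes * 1024

print(f"--max-memory-per-child={mib_to_kib(200)}")  # --max-memory-per-child=204800
```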

--autoscale

Enable autoscaling by providing max_concurrency, min_concurrency. Example:

--autoscale=10,3

(always keep 3 processes, but grow to 10 if necessary)
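Conceptually, autoscaling keeps the pool size clamped between the two bounds while tracking demand. A simplified model (not Celery's actual autoscaler, which reacts to queue activity over time):

```python
def autoscale_target(pending_tasks, min_concurrency=3, max_concurrency=10):
    """Simplified model of --autoscale=10,3: never fewer than 3 processes,
    never more than 10, growing toward current demand."""
    return max(min_concurrency, min(max_concurrency, pending_tasks))

print(autoscale_target(0))   # 3  (idle: shrink to the minimum)
print(autoscale_target(7))   # 7
print(autoscale_target(50))  # 10 (busy: capped at the maximum)
```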

--detach

Start worker as a background process.

-f, --logfile

Path to log file. If no logfile is specified, stderr is used.

-l, --loglevel

Logging level, choose between DEBUG, INFO, WARNING, ERROR, CRITICAL, or FATAL.

--pidfile

Optional file used to store the process pid.

The program won’t start if this file already exists and the pid is still alive.

--uid

User ID or name of the user to run as after detaching.

--gid

Group ID or name of the main group to change to after detaching.

--umask

Effective umask(1) (in octal) of the process after detaching. Inherits the umask(1) of the parent process by default.
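To see what an octal umask means in practice: a umask of 0o027 (an illustrative value) strips group-write and all world permissions from files the process creates. A quick check of the arithmetic in Python:

```python
import os

# A file opened with mode 0o666 under umask 0o027 ends up with
# permissions 0o666 & ~0o027 == 0o640 (rw-r-----).
old = os.umask(0o027)   # set the umask, remembering the previous one
try:
    effective = 0o666 & ~0o027
finally:
    os.umask(old)       # restore the original umask
print(oct(effective))   # 0o640
```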

--workdir

Optional directory to change to after detaching.

--executable

Executable to use for the detached process.

class celery.bin.worker.worker(app=None, get_app=None, no_color=False, stdout=None, stderr=None, quiet=False, on_error=None, on_usage_error=None)[source]

Start worker instance.

Examples

$ celery worker --app=proj -l info
$ celery worker -A proj -l info -Q hipri,lopri

$ celery worker -A proj --concurrency=4
$ celery worker -A proj --concurrency=1000 -P eventlet
$ celery worker --autoscale=10,0
add_arguments(parser)[source]
enable_config_from_cmdline = True
maybe_detach(argv, dopts=None)[source]
namespace = 'worker'
removed_flags = {'--force-execv', '--no-execv'}
run(hostname=None, pool_cls=None, app=None, uid=None, gid=None, loglevel=None, logfile=None, pidfile=None, statedb=None, **kwargs)[source]
run_from_argv(prog_name, argv=None, command=None)[source]
supports_args = False
with_pool_option(argv)[source]

Return tuple of (short_opts, long_opts).

Returns a value only if the command supports a pool argument; it’s used to monkey-patch eventlet/gevent environments as early as possible.

Example

>>> has_pool_option = (['-P'], ['--pool'])
celery.bin.worker.main(app=None)[source]

Start worker.