This document describes Celery 2.4.

celery.concurrency.processes.pool

class celery.concurrency.processes.pool.ApplyResult(cache, callback, accept_callback=None, timeout_callback=None, error_callback=None, soft_timeout=None, timeout=None)
accepted()
get(timeout=None)
ready()
successful()
wait(timeout=None)
worker_pids()
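
An ApplyResult is what Pool.apply_async() (documented below) returns; it mirrors the result objects of the stdlib multiprocessing pool. A minimal consumption sketch, with an arbitrary pool size and timeout:

>>> from celery.concurrency.processes.pool import Pool
>>> pool = Pool(processes=2)
>>> result = pool.apply_async(pow, (2, 10))   # an ApplyResult instance
>>> result.get(timeout=10)                    # block until the value arrives
1024
>>> result.ready()
True
>>> result.successful()                       # no exception was raised by pow()
True
>>> pool.close(); pool.join()
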
celery.concurrency.processes.pool.DynamicPool

alias of Pool

class celery.concurrency.processes.pool.IMapIterator(cache)
next(timeout=None)
class celery.concurrency.processes.pool.IMapUnorderedIterator(cache)
class celery.concurrency.processes.pool.LaxBoundedSemaphore(value=1, verbose=None)

Semaphore that ensures the number of releases never exceeds the number of acquires, but silently ignores release() calls that would push the value past its initial bound (see the sketch below).

clear()
release()
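
A behavioural sketch of the description above, assuming clear() restores the counter to its initial value; the release() past the bound is silently ignored:

>>> from celery.concurrency.processes.pool import LaxBoundedSemaphore
>>> sem = LaxBoundedSemaphore(2)
>>> held = sem.acquire()    # first slot taken
>>> held = sem.acquire()    # second slot taken, counter is now 0
>>> sem.release()           # counter back to 1
>>> sem.release()           # counter back to the initial 2
>>> sem.release()           # would exceed the bound: ignored
>>> sem.clear()             # restore the counter to its initial value (a no-op here)
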
class celery.concurrency.processes.pool.MapResult(cache, chunksize, length, callback)
accepted()
worker_pids()
exception celery.concurrency.processes.pool.MaybeEncodingError(exc, value)

Wraps an unpickleable object together with the error raised while pickling it.

class celery.concurrency.processes.pool.Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None, timeout=None, soft_timeout=None)

Class which supports an async version of the apply() builtin
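
A construction sketch using the documented keyword arguments, assuming initializer and maxtasksperchild follow the stdlib pool semantics (the initializer runs once in each worker process; maxtasksperchild recycles a worker after it has executed that many jobs):

>>> from celery.concurrency.processes.pool import Pool
>>> def setup_worker():
...     pass    # hypothetical per-worker initialisation (open connections, seed RNGs, ...)
>>> pool = Pool(processes=4,
...             initializer=setup_worker,
...             maxtasksperchild=100)
>>> pool.apply(pow, (3, 3))    # synchronous, see Pool.apply() below
27
>>> pool.terminate()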

class Process(group=None, target=None, name=None, args=(), kwargs={})

Process objects represent activity that is run in a separate process

The class is analogous to threading.Thread

authkey
daemon

Return whether process is a daemon

exitcode

Return exit code of process or None if it has yet to stop

ident

Return identifier (PID) of process or None if it has yet to start

is_alive()

Return whether process is alive

join(timeout=None)

Wait until child process terminates

name
pid

Return identifier (PID) of process or None if it has yet to start

run()

Method to be run in the sub-process; can be overridden in a subclass (see the sketch following this class listing)

start()

Start child process

terminate()

Terminate process; sends SIGTERM signal or uses TerminateProcess()
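
As with threading.Thread, behaviour can be customised either by passing a target callable or by overriding run() in a subclass. A minimal sketch; the Greeter class is purely illustrative and assumes a Unix fork-based start:

>>> import os
>>> from celery.concurrency.processes.pool import Pool
>>> class Greeter(Pool.Process):
...     def run(self):
...         print("hello from child pid %d" % os.getpid())
>>> child = Greeter()
>>> child.start()             # spawns the child, which calls run()
>>> child.join(timeout=10)
>>> child.exitcode            # 0 after a clean exit
0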

class Pool.ResultHandler(outqueue, get, cache, poll, join_exited_workers, putlock)
body()
exception Pool.SoftTimeLimitExceeded

The soft time limit has been exceeded. This exception is raised to give the task a chance to clean up.
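
The exception appears to be delivered to the running job by soft_timeout_sighandler() (listed at the end of this module) and is also importable as celery.exceptions.SoftTimeLimitExceeded. A sketch of the intended pattern, assuming soft_timeout and timeout are the soft and hard limits in seconds; the sleep stands in for real work, and the job cleans up and re-raises so the hard limit can still terminate it:

>>> import time
>>> from celery.concurrency.processes.pool import Pool
>>> from celery.exceptions import SoftTimeLimitExceeded
>>> def slow_job(seconds):
...     try:
...         time.sleep(seconds)        # stands in for long-running work
...         return "done"
...     except SoftTimeLimitExceeded:
...         print("soft limit hit, cleaning up")
...         raise
>>> pool = Pool(processes=1)
>>> res = pool.apply_async(slow_job, (30,), soft_timeout=1, timeout=5)
>>> pool.close(); pool.join()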

class Pool.Supervisor(pool)
body()
class Pool.TaskHandler(taskqueue, put, outqueue, pool)
body()
class Pool.TimeoutHandler(processes, cache, t_soft, t_hard)
body()
Pool.apply(func, args=(), kwds={})

Equivalent of apply() builtin

Pool.apply_async(func, args=(), kwds={}, callback=None, accept_callback=None, timeout_callback=None, waitforslot=False, error_callback=None, soft_timeout=None, timeout=None)

Asynchronous equivalent of apply() builtin.

The callback is called when the function's return value is ready. The accept callback is called when the job has been accepted for execution.

Simplified, the flow is like this:

>>> if accept_callback:
...     accept_callback()
>>> retval = func(*args, **kwds)
>>> if callback:
...     callback(retval)
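
A concrete sketch of the same flow; the exact arguments passed to the accept callback are not documented here, so it accepts anything, and both callbacks fire asynchronously from the pool's internal threads, so their output may interleave with the prompt:

>>> from celery.concurrency.processes.pool import Pool
>>> def on_accepted(*args, **kwargs):
...     print("job accepted: %r %r" % (args, kwargs))
>>> def on_ready(retval):
...     print("result ready: %r" % (retval,))
>>> pool = Pool(processes=2)
>>> res = pool.apply_async(pow, (2, 8),
...                        callback=on_ready,
...                        accept_callback=on_accepted)
>>> res.get(timeout=10)
256
>>> pool.close(); pool.join()
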
Pool.close()
Pool.grow(n=1)
Pool.imap(func, iterable, chunksize=1)

Equivalent of itertools.imap() – can be MUCH slower than Pool.map()

Pool.imap_unordered(func, iterable, chunksize=1)

Like the imap() method, but the ordering of results is arbitrary (the variants are contrasted in the sketch after map_async() below)

Pool.join()
Pool.map(func, iterable, chunksize=None)

Equivalent of map() builtin

Pool.map_async(func, iterable, chunksize=None, callback=None)

Asynchronous equivalent of map() builtin
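
A short sketch contrasting the variants: map() blocks and returns a list, map_async() returns a result object, imap() yields results lazily in input order, and imap_unordered() yields them as they complete:

>>> from celery.concurrency.processes.pool import Pool
>>> pool = Pool(processes=2)
>>> pool.map(abs, [-1, -2, -3])
[1, 2, 3]
>>> pool.map_async(abs, [-1, -2, -3]).get(timeout=10)
[1, 2, 3]
>>> list(pool.imap(abs, [-1, -2, -3]))
[1, 2, 3]
>>> sorted(pool.imap_unordered(abs, [-1, -2, -3]))
[1, 2, 3]
>>> pool.close(); pool.join()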

Pool.shrink(n=1)
Pool.terminate()
class celery.concurrency.processes.pool.PoolThread(*args, **kwargs)
close()
run()
terminate()
class celery.concurrency.processes.pool.ResultHandler(outqueue, get, cache, poll, join_exited_workers, putlock)
body()
class celery.concurrency.processes.pool.Supervisor(pool)
body()
class celery.concurrency.processes.pool.TaskHandler(taskqueue, put, outqueue, pool)
body()
class celery.concurrency.processes.pool.ThreadPool(processes=None, initializer=None, initargs=())
class DummyProcess(group=None, target=None, name=None, args=(), kwargs={})
exitcode
start()
ThreadPool.Process

alias of DummyProcess
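
ThreadPool exposes the same interface as Pool but, as the DummyProcess alias suggests, runs jobs in threads of the calling process rather than in child processes. A minimal sketch:

>>> from celery.concurrency.processes.pool import ThreadPool
>>> tpool = ThreadPool(processes=4)
>>> tpool.map(len, ["a", "bb", "ccc"])
[1, 2, 3]
>>> tpool.close(); tpool.join()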

class celery.concurrency.processes.pool.TimeoutHandler(processes, cache, t_soft, t_hard)
body()
exception celery.concurrency.processes.pool.WorkersJoined

All workers have terminated.

celery.concurrency.processes.pool.error(msg, *args, **kwargs)
celery.concurrency.processes.pool.mapstar(args)
celery.concurrency.processes.pool.soft_timeout_sighandler(signum, frame)
celery.concurrency.processes.pool.worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None)