Releases: coleifer/huey

2.3.1

04 Mar 15:15
  • Add SIGNAL_INTERRUPTED, sent when a task is interrupted because the consumer exits abruptly.
  • Use the Huey.create_consumer() API within the Django management command, to allow Django users to customize the creation of the Consumer instance.

2.3.0

04 Mar 15:14
  • Use monotonic clock for timing operations within the consumer.
  • Ensure the file-lock's internal state is cleaned up when the lock is released.
  • Support passing around TaskException as a pickled value.
  • Set the multiprocessing mode to "fork" on macOS with Python 3.8 or newer.
  • Added option to enforce FIFO behavior when using Sqlite as storage.
  • Added the on_shutdown handler to djhuey namespace.
  • Ensure exception is set on AsyncResult in mini-huey.

2.2.0

23 Feb 15:39
  • Fix task repr (refs #460).
  • Add the task-id to the metadata for task exceptions (refs #461).
  • Ensure the database connection is not closed when using the call_local method of the Django helper extension's db_periodic_task().
  • Allow pickle protocol to be explicitly configured in serializer parameters.
  • Add FileHuey and a full FileStorage implementation.
  • Add shutdown() hook, which will be run in the context of the worker threads/processes during shutdown. This hook can be used to clean-up shared or global resources, for example.
  • Allow pipelines to be chained together. Additionally, support chaining task instances.

2.1.3

16 Oct 14:24
  • Fix semantics of SIGNAL_COMPLETE so that it is not sent until the result is ready.
  • Use classes for the specific Huey implementations (e.g. RedisHuey) so that it is easier to subclass / extend. Previously we just used a partial application of the constructor, which could be confusing.
  • Fix shutdown logic in consumer when using multiprocess worker model. Previously the consumer would perform a "graceful" shutdown, even when an immediate shutdown was requested (SIGTERM). Also cleans up the signal-handling code and ensures that interrupted tasks log a warning properly to indicate they were interrupted.

2.1.2

04 Sep 15:26
  • Allow the AsyncResult object used in MiniHuey to support __call__(), which blocks and resolves the task result.
  • When running the django run_huey management command, the huey loggers will not be configured if another logging handler is already registered to the huey namespace.
  • Added an experimental contrib storage engine using Kyoto Tycoon (http://fallabs.com/kyototycoon), which supports task priority and optional automatic result expiration. Requires the ukt (https://github.com/coleifer/ukt) Python package and a custom Kyoto Tycoon Lua script.
  • Allow the Sqlite storage engine busy timeout to be configured when instantiating SqliteHuey.

2.1.1

07 Aug 15:58
  • Ensure that task()-decorated functions retain their docstrings.
  • Fix logger setup so that the consumer log configuration is only applied to the huey namespace, rather than the root logger.
  • Expose result, signal and disconnect_signal in the Django huey extension.
  • Add SignedSerializer, which signs and validates task messages.
  • Refactor the SqliteStorage so that it can be more easily extended to support other databases.

2.1.0

06 Jun 13:37
  • Added a new contrib module, sql_huey, which uses peewee (https://github.com/coleifer/peewee) to provide a storage layer using any of the supported databases (SQLite, MySQL or PostgreSQL).
  • Added RedisExpireHuey, which modifies the usual Redis result storage logic to use an expire time for task result values. A consequence of this is that this storage implementation must keep all result keys at the top-level Redis keyspace. There are also some small changes to the storage APIs, but these should only affect maintainers of alternative storage layers.
  • Also added a PriorityRedisExpireHuey which combines the priority-queue support from PriorityRedisHuey with the result-store expiration mechanism of RedisExpireHuey.
  • Fix gzip compatibility issue when using Python 2.x.
  • Add option to Huey to use zlib as the compression method instead of gzip.
  • Added FileStorageMethods storage mixin, which uses the filesystem for task result-store APIs (put, peek, pop).
  • The storage-specific Huey implementations (e.g. RedisHuey) are no longer subclasses, but instead are partial applications of the Huey constructor.

2.0.1

03 Apr 15:26
  • Small fixes; fixed a typo in the exception class caught by the scheduler.

2.0.0

02 Apr 02:23

This section describes the changes in the 2.0.0 release. A detailed list of
changes can be found here: https://huey.readthedocs.io/en/latest/changes.html

Overview of changes:

  • always_eager mode has been renamed to immediate mode. Unlike previous
    versions, immediate mode involves the same code paths used by the consumer
    process. This makes it easier to test features like task revocation and task
    scheduling without needing to run a dedicated consumer process. Immediate
    mode uses an in-memory storage layer by default, but can be configured to use
    "live" storage like Redis or Sqlite.
  • The events stream API has been removed in favor of a simpler callback-driven
    signals API. These callbacks are executed synchronously within the huey
    consumer process.
  • A new serialization format is used in 2.0.0, however consumers running 2.0
    will continue to be able to read and deserialize messages enqueued by Huey
    version 1.11.0 for backwards compatibility.
  • Support for task priorities.
  • New Serializer abstraction allows users to customize the serialization
    format used when reading and writing tasks.
  • Huey consumer and scheduler can be more easily run within the application
    process, if you prefer not to run a separate consumer process.
  • Tasks can now specify an on_error handler, in addition to the
    previously-supported on_complete handler.
  • Task pipelines return a special ResultGroup object which simplifies reading
    the results of a sequence of task executions.
  • SqliteHuey has been promoted out of contrib, onto an equal footing with
    RedisHuey. To simplify deployment, the dependency on
    peewee was removed and the Sqlite
    storage engine uses the Python sqlite3 driver directly.

1.11.0

16 Feb 21:31

Backwards-incompatible changes

Previously, it was possible for certain tasks to be silently ignored if a task with that name already existed in the registry. To fix this, I have made two changes:

  1. The task-name, when serialized, now consists of the task module and the name of the decorated function. So, "queue_task_foo" becomes "myapp.tasks.foo".
  2. An exception will be raised when attempting to register a task function with the same module + name.

Together, these changes are intended to fix problems described in #386.

Because these changes will impact the serialization (and deserialization) of messages, it is important that you consume all tasks (including scheduled tasks) before upgrading.

Always-eager mode changes

In order to provide a more consistent API, tasks enqueued using always_eager mode will now return a dummy TaskResultWrapper implementation that wraps the return value of the task. This change is designed to provide the same API for reading task result values, regardless of whether you are using always-eager mode or not.

Previously, tasks executed with always_eager would return the Python value directly from the task. When using Huey with the consumer, though, task results are not available immediately, so a special wrapper TaskResultWrapper is returned, which provides helper methods for retrieving the return value of the task. Going forward, always_eager tasks will return EagerTaskResultWrapper, which implements the same get() API that is typically used to retrieve task return values.