bull vs agenda vs kue vs bree vs bee-queue vs node-resque
Node.js Job Queues Comparison
What Are Node.js Job Queues?

Job queue libraries in Node.js are designed to handle asynchronous tasks and background jobs efficiently. They allow developers to manage and schedule tasks that can be processed outside the main application flow, improving performance and user experience. These libraries provide features such as delayed job execution, concurrency control, and job prioritization, making them essential for applications that require task scheduling and background processing.

Stat Detail

| Package | Weekly Downloads | Stars | Size | Open Issues | Last Publish | License |
| :--- | ---: | ---: | ---: | ---: | :--- | :--- |
| bull | 906,758 | 15,809 | 309 kB | 150 | 3 months ago | MIT |
| agenda | 122,536 | 9,481 | 353 kB | 350 | - | MIT |
| kue | 22,875 | 9,459 | - | 287 | 8 years ago | MIT |
| bree | 21,543 | 3,116 | 90.5 kB | 30 | 7 months ago | MIT |
| bee-queue | 19,160 | 3,901 | 106 kB | 46 | a year ago | MIT |
| node-resque | 13,031 | 1,387 | 705 kB | 19 | 2 months ago | Apache-2.0 |
Feature Comparison: bull vs agenda vs kue vs bree vs bee-queue vs node-resque

Storage Backend

  • bull:

    Bull also uses Redis as its backend, offering advanced features like job events and delayed jobs, making it a powerful choice for complex job processing needs.

  • agenda:

    Agenda uses MongoDB as its storage backend, allowing it to leverage MongoDB's capabilities for persistence and querying. This makes it suitable for applications already using MongoDB.

  • kue:

    Kue uses Redis for job storage and provides a UI for monitoring jobs, making it suitable for applications that require visibility into job processing.

  • bree:

    Bree does not require a database; it uses Node.js's native worker threads for job execution, making it lightweight and easy to set up without external dependencies.

  • bee-queue:

    Bee-Queue is built on Redis, providing a fast and efficient in-memory data structure store. It is optimized for performance and designed for high throughput.

  • node-resque:

    Node-Resque uses Redis for job storage and keeps its data format compatible with Ruby's Resque, so Node.js and Ruby workers can share the same queues.

Concurrency Control

  • bull:

    Bull offers advanced concurrency control features, allowing you to set the number of concurrent jobs processed, making it suitable for applications with high job volumes.

  • agenda:

    Agenda schedules jobs at specific intervals and supports per-job concurrency limits through its job definition options, but its concurrency controls are coarser than Bull's, making it less suitable for high-concurrency scenarios.

  • kue:

    Kue provides basic concurrency control but is limited compared to Bull, making it less ideal for applications that require fine-tuned concurrency management.

  • bree:

    Bree utilizes worker threads to handle job concurrency, allowing multiple jobs to run in parallel without blocking the main thread, enhancing performance.

  • bee-queue:

    Bee-Queue supports concurrency by allowing multiple workers to process jobs simultaneously, making it ideal for high-throughput applications.

  • node-resque:

    Node-Resque supports concurrency by allowing multiple workers to process jobs from the queue, providing flexibility in how jobs are handled.
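
At its core, the concurrency cap all of these libraries implement is a small promise pool: at most `limit` jobs run at once, and the rest wait their turn. The sketch below is illustrative plain JavaScript, not any library's API.

```javascript
// Run async jobs with at most `limit` in flight at a time.
// Illustrative sketch of the concurrency-cap idea, not a library API.
async function runWithConcurrency(jobs, limit) {
  const results = [];
  let next = 0;

  async function worker() {
    // Each worker pulls the next unclaimed job until none remain.
    while (next < jobs.length) {
      const i = next++; // claim an index synchronously before awaiting
      results[i] = await jobs[i]();
    }
  }

  // Spawn `limit` workers (or fewer, if there are fewer jobs).
  const workers = Array.from(
    { length: Math.min(limit, jobs.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```

Bull's `queue.process(5, fn)` and Bee-Queue's multiple workers apply the same cap, but across processes and machines, with Redis coordinating which worker claims which job.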

Job Scheduling

  • bull:

    Bull supports delayed jobs and scheduling, allowing you to set specific execution times for jobs, making it a versatile choice for various scheduling needs.

  • agenda:

    Agenda excels in job scheduling, allowing for recurring jobs and flexible scheduling options, making it suitable for time-based tasks.

  • kue:

    Kue provides scheduling capabilities but is more focused on job management and monitoring than on advanced scheduling features.

  • bree:

    Bree allows for job scheduling using cron-like syntax, making it easy to set up recurring tasks with minimal configuration.

  • bee-queue:

    Bee-Queue focuses on job processing rather than scheduling, making it less suitable for applications that require complex scheduling capabilities.

  • node-resque:

    Node-Resque allows for job scheduling but is primarily focused on processing jobs rather than complex scheduling.
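
Whatever the syntax (cron expressions in Bree and Bull, human-readable intervals in Agenda), a schedule like "every day at 3:15" ultimately reduces to computing the next run time. A plain-JavaScript sketch of that date math, not any library's implementation:

```javascript
// Compute the next daily run at hour:minute after `from`.
// Illustrative date math, not any library's scheduler.
function nextDailyRun(hour, minute, from = new Date()) {
  const next = new Date(from);
  next.setHours(hour, minute, 0, 0);
  if (next <= from) {
    next.setDate(next.getDate() + 1); // that time already passed today
  }
  return next;
}

// From noon on Jan 1, the next 3:15 run falls on Jan 2.
console.log(nextDailyRun(3, 15, new Date(2024, 0, 1, 12, 0)).toString());
```

A real scheduler layers persistence on top of this, so that a restart between now and the computed run time does not lose the job.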

Monitoring and UI

  • bull:

    Bull provides a powerful UI for monitoring jobs, allowing developers to visualize job statuses, retries, and failures, making it ideal for applications that require job visibility.

  • agenda:

    Agenda does not provide a built-in UI for monitoring jobs, requiring additional tools for visibility into job processing.

  • kue:

    Kue includes a built-in UI for monitoring job progress, making it easy to manage and visualize job queues and their statuses.

  • bree:

    Bree does not include a UI for monitoring jobs, focusing instead on performance and simplicity in job execution.

  • bee-queue:

    Bee-Queue offers a simple API for job management but lacks a built-in UI for monitoring jobs, which may require external solutions.

  • node-resque:

    Node-Resque does not provide a built-in UI but can be integrated with external monitoring tools for job visibility.

Ease of Use

  • bull:

    Bull has a steeper learning curve due to its advanced features, but it offers extensive documentation to help developers get started.

  • agenda:

    Agenda is straightforward to set up and use, especially for developers familiar with MongoDB, making it a good choice for simple scheduling needs.

  • kue:

    Kue is user-friendly with a clear API and built-in UI, making it easy to manage jobs, but it may require more setup compared to simpler libraries.

  • bree:

    Bree is designed to be easy to use, leveraging native Node.js features, making it suitable for developers looking for a modern job scheduler without complex dependencies.

  • bee-queue:

    Bee-Queue has a simple API that is easy to understand, making it accessible for developers looking for a lightweight job queue solution.

  • node-resque:

    Node-Resque is relatively easy to use, especially for those familiar with Resque, but may require additional setup for different backends.

How to Choose: bull vs agenda vs kue vs bree vs bee-queue vs node-resque
  • bull:

    Choose Bull if you require a robust job queue with advanced features such as rate limiting, retries, and job prioritization. It is well-suited for complex applications that need a powerful and flexible job processing solution.

  • agenda:

    Choose Agenda if you need a simple, MongoDB-backed job scheduler that integrates well with Mongoose and supports recurring jobs. It is suitable for applications that require a straightforward scheduling mechanism without complex setup.

  • kue:

    Use Kue if you need a feature-rich job queue that provides a user interface for monitoring jobs. It is beneficial for applications that require visibility into job processing and management.

  • bree:

    Select Bree if you want a modern job scheduler that supports worker threads and is built on top of Node.js's native features. It is perfect for applications that need to run jobs concurrently without relying heavily on external services.

  • bee-queue:

    Opt for Bee-Queue if you need a fast and lightweight job queue that is optimized for Redis. It is ideal for applications that prioritize performance and require a simple API for managing jobs with minimal overhead.

  • node-resque:

    Select Node-Resque if you are looking for a Redis-backed job queue inspired by, and interoperable with, Ruby's Resque. It is suitable for polyglot systems where Node.js and Ruby workers need to share the same queues.

README for bull



The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


🚀 Sponsors 🚀

Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings important advantages over Redis™, such as massively better performance (by utilizing all available CPU cores) and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.

📻 News and updates

Bull is currently in maintenance mode; we are only fixing bugs. For new features, check BullMQ, a modern rewrite in TypeScript. You are still very welcome to use Bull, a safe, battle-tested library, if it suits your needs.

Follow me on Twitter for other important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian Autodesk Mozilla Nest Salesforce


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • [x] Minimal CPU usage due to a polling-free design.
  • [x] Robust design based on Redis.
  • [x] Delayed jobs.
  • [x] Schedule and repeat jobs according to a cron specification.
  • [x] Rate limiter for jobs.
  • [x] Retries.
  • [x] Priority.
  • [x] Concurrency.
  • [x] Pause/resume—globally or locally.
  • [x] Multiple job types per queue.
  • [x] Threaded (sandboxed) processing functions.
  • [x] Automatic recovery from process crashes.

And coming up on the roadmap...

  • [ ] Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • [ ] Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

| Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda |
| :------------------------ | :-------------: | :-------------: | :-------------: | :---: | :------: | :----: |
| Backend | redis | redis | redis | redis | redis | mongo |
| Observables | ✓ | | | | | |
| Group Rate Limit | ✓ | | | | | |
| Group Support | ✓ | | | | | |
| Batches Support | ✓ | | | | | |
| Parent/Child Dependencies | ✓ | ✓ | | | | |
| Priorities | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Delayed jobs | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Global events | ✓ | ✓ | ✓ | ✓ | | |
| Rate Limiter | ✓ | ✓ | ✓ | | | |
| Pause/Resume | ✓ | ✓ | ✓ | ✓ | | |
| Sandboxed worker | ✓ | ✓ | ✓ | | | |
| Repeatable jobs | ✓ | ✓ | ✓ | | | ✓ |
| Atomic ops | ✓ | ✓ | ✓ | | ✓ | |
| Persistence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| UI | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, whether code fixes, new features, or doc improvements. Code formatting is enforced by Prettier. For commits, please follow the Conventional Commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work

  return Promise.resolve(result);
};

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, verify your cron expressions with a cron expression generator.

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function (job, result) {
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
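
Connection reuse is done through the `createClient` option, which lets several queues share the same Redis clients. A hedged configuration sketch, assuming ioredis and a local Redis server; note that the blocking `bclient` type must stay exclusive per queue and cannot be shared:

```javascript
const Queue = require('bull');
const Redis = require('ioredis');

// Two shared connections for all queues in this process.
const client = new Redis();
const subscriber = new Redis();

const opts = {
  createClient(type) {
    switch (type) {
      case 'client':
        return client;      // shared: normal commands
      case 'subscriber':
        return subscriber;  // shared: pub/sub events
      default:
        return new Redis(); // 'bclient': blocking, must NOT be shared
    }
  },
};

const userJohn = new Queue('john', opts);
const userLisa = new Queue('lisa', opts);
```

With this shape, N queues cost two shared connections plus one blocking connection per processing queue, instead of three connections each.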

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
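
These knobs live in the queue's advanced settings. A hedged configuration sketch (a Redis server is assumed; the values shown are the defaults, so only change them deliberately):

```javascript
const Queue = require('bull');

const paymentsQueue = new Queue('payments', {
  settings: {
    lockDuration: 30000,  // ms a job's lock is held before it can be considered stalled
    lockRenewTime: 15000, // how often the lock is renewed, usually lockDuration / 2
    maxStalledCount: 1,   // how many times a stalled job may be restarted
  },
});
```

Raising lockDuration tolerates longer event-loop pauses in your processor, at the cost of taking longer to notice a genuinely dead worker.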