bull vs agenda vs bee-queue vs bree vs kue vs node-resque
Node.js task queue libraries
Similar npm packages: bull, agenda, bee-queue, bree, kue, node-resque

Task queue libraries handle asynchronous tasks and job scheduling: developers push tasks onto a queue, and worker processes execute them asynchronously later. These libraries provide reliable task management, scheduling, and retry mechanisms, making them a good fit for applications with large numbers of background tasks. They help improve application performance and responsiveness while ensuring ordered execution and retries on failure.
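The pattern all of these libraries implement can be sketched with a minimal in-memory queue: producers add jobs, a worker processes them, and failures are retried up to a limit. This toy version (all names hypothetical) is synchronous and keeps everything in memory, whereas the real libraries persist jobs in Redis or MongoDB so they survive process crashes:

```javascript
// Toy task queue: enqueue, process, retry on failure (illustrative only)
class TinyQueue {
  constructor(maxRetries = 2) {
    this.jobs = [];               // pending jobs
    this.maxRetries = maxRetries;
    this.results = [];            // completed or permanently failed jobs
  }

  add(data) {
    this.jobs.push({ data, attempts: 0 });
  }

  process(handler) {
    while (this.jobs.length > 0) {
      const job = this.jobs.shift();
      job.attempts += 1;
      try {
        this.results.push(handler(job));
      } catch (err) {
        if (job.attempts <= this.maxRetries) {
          this.jobs.push(job);    // retry later
        } else {
          this.results.push({ failed: job.data, error: err.message });
        }
      }
    }
  }
}

const queue = new TinyQueue();
queue.add({ n: 1 });
queue.add({ n: 2 });

queue.process((job) => {
  // fail job 2 on its first attempt to show the retry path
  if (job.data.n === 2 && job.attempts === 1) throw new Error('transient');
  return { ok: job.data.n };
});
// queue.results is now [{ ok: 1 }, { ok: 2 }]: job 2 succeeded on retry
```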

npm download trends (3 years)
GitHub Stars ranking

Statistics
| Package | Downloads | Stars | Size | Issues | Last publish | License |
|---|---|---|---|---|---|---|
| bull | 1,172,478 | 16,191 | 309 kB | 148 | 1 year ago | MIT |
| agenda | 143,422 | 9,585 | 353 kB | 355 | - | MIT |
| bee-queue | 38,527 | 3,996 | 107 kB | 36 | 2 days ago | MIT |
| bree | 31,680 | 3,220 | 90.4 kB | 29 | 1 day ago | MIT |
| kue | 19,830 | 9,457 | - | 288 | 9 years ago | MIT |
| node-resque | 11,987 | 1,405 | 705 kB | 16 | 8 months ago | Apache-2.0 |
Feature comparison: bull vs agenda vs bee-queue vs bree vs kue vs node-resque

Task scheduling

  • bull:

    Bull provides powerful scheduling with support for delayed and repeatable jobs; it can handle complex job flows and suits applications that need high reliability.

  • agenda:

    Agenda offers strong scheduling with timed and recurring jobs, and its cron-expression support allows flexible schedules, making it a great fit for periodically executed tasks.

  • bee-queue:

    Bee-Queue focuses on processing jobs quickly. It does not support complex scheduling, but its high performance makes it excel at short, simple jobs.

  • bree:

    Bree supports cron expressions, so scheduled jobs are easy to set up; it suits scenarios that need periodic execution.

  • kue:

    Kue provides simple scheduling with delayed jobs and priority settings, suiting applications that want visual management.

  • node-resque:

    Node-Resque supports several scheduling approaches and integrates with Redis, fitting distributed task processing that needs high availability.
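The two scheduling styles above reduce to different primitives: delay-based queues take a relative offset in milliseconds, while cron-based schedulers re-derive the next run from an expression. A delayed job therefore boils down to date arithmetic. As a sketch (`delayUntil` is a hypothetical helper, shown with a fixed "now" for clarity):

```javascript
// Hypothetical helper: milliseconds until the next occurrence of hh:mm
function delayUntil(now, hour, minute) {
  const target = new Date(now);
  target.setHours(hour, minute, 0, 0);
  if (target <= now) target.setDate(target.getDate() + 1); // already passed today
  return target.getTime() - now.getTime();
}

const now = new Date(2024, 0, 1, 12, 0, 0); // Jan 1, 12:00 local time
const delayMs = delayUntil(now, 3, 15);     // next 03:15 is on Jan 2
// delayMs === 54900000 (15 h 15 min)

// A Bull-style enqueue would then look like:
//   queue.add(jobData, { delay: delayMs });
```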

Performance

  • bull:

    Bull delivers high-performance job processing and handles complex job flows, suiting applications that need high reliability.

  • agenda:

    Agenda's performance depends on MongoDB; it handles a modest volume of frequent jobs well but can hit limits under high concurrency.

  • bee-queue:

    Bee-Queue is known for high performance: it enqueues and dequeues quickly, making it ideal for large volumes of short jobs and high-concurrency scenarios.

  • bree:

    Bree performs well for scheduled work and manages its worker processes effectively.

  • kue:

    Kue's performance is fairly stable; it suits medium-scale workloads and offers a good monitoring UI.

  • node-resque:

    Node-Resque's performance depends on Redis; it suits scenarios that need high availability and distributed processing.

Ease of use

  • bull:

    Bull offers rich functionality and a flexible API, suiting developers who need complex job management, though its learning curve is relatively steep.

  • agenda:

    Agenda's simple API is easy to integrate into existing applications and friendly to beginners.

  • bee-queue:

    Bee-Queue's simple design makes it easy to pick up, good for rapid development and deployment.

  • bree:

    Bree provides an intuitive, easy-to-use API for developers who want scheduled jobs up and running quickly.

  • kue:

    Kue combines an easy-to-use API with a visual UI, suiting developers who need to monitor job status.

  • node-resque:

    Node-Resque's simple API integrates easily with Redis, suiting developers who need high availability.

Monitoring and management

  • bull:

    Bull provides strong monitoring and management: job status and progress can be viewed in real time, fitting applications that need detailed monitoring.

  • agenda:

    Agenda has no built-in monitoring tools; developers must implement monitoring themselves.

  • bee-queue:

    Bee-Queue offers simple monitoring, though less extensive than the other libraries.

  • bree:

    Bree offers basic job management and monitoring, adequate for simple scheduling.

  • kue:

    Kue ships a rich monitoring UI that shows job status and history in real time, fitting applications that need visual management.

  • node-resque:

    Node-Resque provides basic monitoring and suits developers integrating with Redis.

Community support

  • bull:

    Bull has strong community support and rich documentation, good for developers who want to dig deep.

  • agenda:

    Agenda has an active community and thorough documentation for developers to consult and learn from.

  • bee-queue:

    Bee-Queue's community is relatively small, but its documentation is clear and easy to follow.

  • bree:

    Bree's community is growing and its documentation keeps improving; it is approachable for newcomers.

  • kue:

    Kue's documentation is detailed and easy to get started with, though the project has not seen a release in years.

  • node-resque:

    Node-Resque has good community support and suits developers integrating with Redis.

How to choose: bull vs agenda vs bee-queue vs bree vs kue vs node-resque
  • bull:

    Choose Bull if you need a powerful task queue with priorities, delayed jobs, and repeatable jobs that can handle complex job flows. It suits scenarios demanding high reliability and scalability.

  • agenda:

    Choose Agenda if you need a MongoDB-backed job scheduler with timed and recurring jobs that integrates easily into an existing application. It suits scenarios that need flexible scheduling.

  • bee-queue:

    Choose Bee-Queue if you need a lightweight, high-performance queue, especially for processing many short jobs quickly. Its simple design suits high-concurrency scenarios.

  • bree:

    Choose Bree if you want a simple, easy-to-use scheduling library with cron-expression support and straightforward worker-process management. It suits applications built around scheduled jobs.

  • kue:

    Choose Kue if you want an easy-to-use task queue with a rich UI for monitoring job status; it suits applications that need visual management.

  • node-resque:

    Choose Node-Resque if you need a Redis-compatible task queue with support for multiple backends and worker-process management; it suits high-availability, distributed processing.

bull's README



The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


🚀 Sponsors 🚀

Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings important advantages over Redis™, such as massively better performance (by utilizing all available CPU cores) and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.

📻 News and updates

Bull is currently in maintenance mode; we are only fixing bugs. For new features, check out BullMQ, a modern rewrite in TypeScript. You are still very welcome to use Bull if it suits your needs: it is a safe, battle-tested library.

Follow me on Twitter for other important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian, Autodesk, Mozilla, Nest, Salesforce


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume—globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.

And coming up on the roadmap...

  • Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

| Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda |
|---|---|---|---|---|---|---|
| Backend | redis | redis | redis | redis | redis | mongo |
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |

Features compared across these libraries: Observables, Group Rate Limit, Group Support, Batches Support, Parent/Child Dependencies, Priorities, Concurrency, Delayed jobs, Global events, Rate Limiter, Pause/Resume, Sandboxed worker, Repeatable jobs, Atomic ops, Persistence, UI.

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, whether code fixes, new features, or doc improvements. Code formatting is enforced by Prettier. For commits, please follow the Conventional Commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work with job.data here, then resolve with its result
  const result = { processed: job.id };
  return Promise.resolve(result);
};

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, check your expressions here to verify they are correct: cron expression generator
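A five-field expression such as '15 3 * * *' reads minute, hour, day-of-month, month, day-of-week. As a sketch of how such expressions are evaluated (real cron parsers also handle ranges, lists, and step values), here is a toy matcher for the plain numeric/`*` subset:

```javascript
// Toy matcher for five-field cron expressions with only numbers and '*'
function cronMatches(expr, date) {
  const fields = expr.trim().split(/\s+/); // [minute, hour, dom, month, dow]
  const actual = [
    date.getMinutes(),
    date.getHours(),
    date.getDate(),
    date.getMonth() + 1, // cron months are 1-12
    date.getDay(),       // 0 = Sunday
  ];
  return fields.every((f, i) => f === '*' || Number(f) === actual[i]);
}

cronMatches('15 3 * * *', new Date(2024, 0, 1, 3, 15)); // true: 03:15, any day
cronMatches('15 3 * * *', new Date(2024, 0, 1, 4, 15)); // false: wrong hour
```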

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

queue.on('completed', function (job, result) {
  // Job completed with output result!
});

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
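One way to reuse connections is Bull's createClient option: pass a factory when constructing each queue, share the 'client' and 'subscriber' connections across queues, and always create blocking 'bclient' connections fresh. Sketched here with stub objects standing in for ioredis instances:

```javascript
let opened = 0;
const makeConnection = () => ({ id: ++opened }); // stub for `new Redis(...)`

const shared = {};
function createClient(type) {
  if (type === 'bclient') return makeConnection(); // blocking client: never share
  if (!shared[type]) shared[type] = makeConnection();
  return shared[type]; // reuse 'client' and 'subscriber' across queues
}

// Each queue asks the factory for all three connection types, roughly as in:
//   new Queue('video transcoding', { createClient })
//   new Queue('audio transcoding', { createClient })
for (const name of ['video', 'audio']) {
  createClient('client');
  createClient('subscriber');
  createClient('bclient');
}
// opened === 4: one shared client, one shared subscriber, two bclients
```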

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).
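The timing above can be made concrete with Bull's defaults (lockDuration of 30000 ms, lockRenewTime of half that); the numbers are the documented defaults, while the helper below is a hypothetical sketch:

```javascript
const lockDuration = 30000;             // ms a lock stays valid (Bull default)
const lockRenewTime = lockDuration / 2; // renewal attempted every 15000 ms

// Worst case: the event loop blocks immediately after a renewal. The lock
// then expires unless the processor yields again within lockDuration.
function stallsIfBlockedFor(blockedMs) {
  return blockedMs > lockDuration;
}

stallsIfBlockedFor(20000); // false: the next renewal still lands in time
stallsIfBlockedFor(45000); // true: lock expired, job restarts and runs twice
```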

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).