The fastest, most reliable, Redis-based distributed queue for Node.
Carefully written for rock solid stability and atomicity.
Follow @manast for *important* Bull/BullMQ/BullMQ-Pro news and updates!
You can find tutorials and news in this blog: https://blog.taskforce.sh/
Do you need to work with BullMQ on platforms other than Node.js? If so, check out the BullMQ Proxy.
Supercharge your queues with a professional front end:
Sign up at Taskforce.sh
| Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively better performance by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ. |
BullMQ is used by many notable organizations.
Install:
$ yarn add bullmq
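Or, using npm instead of yarn:

```bash
$ npm install bullmq
```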
Add jobs to the queue:
import { Queue } from 'bullmq';
const queue = new Queue('Paint');
queue.add('cars', { color: 'blue' });
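Jobs also accept an optional third argument with options such as a delay, retry attempts, and backoff. A minimal sketch (the option values below are only illustrative):

```ts
import { Queue } from 'bullmq';

const queue = new Queue('Paint');

// Start this job after 5 seconds, and retry it up to 3 times with
// exponential backoff if the processor throws.
await queue.add(
  'cars',
  { color: 'red' },
  { delay: 5000, attempts: 3, backoff: { type: 'exponential', delay: 1000 } },
);
```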
Process the jobs in your workers:
import { Worker } from 'bullmq';
const worker = new Worker('Paint', async job => {
if (job.name === 'cars') {
await paintCar(job.data.color);
}
});
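Workers also accept an options argument; for example, you can process several jobs concurrently and point the worker at a specific Redis instance. A minimal sketch (the host, port, and paintCar stub below are assumptions for illustration):

```ts
import { Worker } from 'bullmq';

// Hypothetical paint function standing in for the example above.
const paintCar = async (color: string) => console.log(`painting car ${color}`);

const worker = new Worker(
  'Paint',
  async job => {
    if (job.name === 'cars') {
      await paintCar(job.data.color);
    }
  },
  {
    concurrency: 5, // process up to 5 jobs in parallel
    connection: { host: 'localhost', port: 6379 },
  },
);

// On shutdown, close the worker gracefully so active jobs can finish.
process.on('SIGTERM', async () => {
  await worker.close();
});
```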
Listen for completed and failed jobs:
import { QueueEvents } from 'bullmq';
const queueEvents = new QueueEvents('Paint');
queueEvents.on('completed', ({ jobId }) => {
console.log('done painting');
});
queueEvents.on(
'failed',
({ jobId, failedReason }: { jobId: string; failedReason: string }) => {
console.error('error painting', failedReason);
},
);
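If you need the result of one particular job, you can also await it directly by combining the Queue and QueueEvents instances. A minimal sketch:

```ts
import { Queue, QueueEvents } from 'bullmq';

const queue = new Queue('Paint');
const queueEvents = new QueueEvents('Paint');

const job = await queue.add('cars', { color: 'green' });

// Resolves with the processor's return value once this job completes,
// or rejects if the job fails.
const result = await job.waitUntilFinished(queueEvents);
console.log('done painting', result);
```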
This is just scratching the surface; check out all the features and more in the official documentation.
Since there are a few job queue solutions, here is a table comparing them:
| Feature                   |   BullMQ-Pro    |     BullMQ      |      Bull       |  Kue  |   Bee    | Agenda |
| :------------------------ | :-------------: | :-------------: | :-------------: | :---: | :------: | :----: |
| Backend                   |      redis      |      redis      |      redis      | redis |  redis   | mongo  |
| Observables               |        ✓        |                 |                 |       |          |        |
| Group Rate Limit          |        ✓        |                 |                 |       |          |        |
| Group Support             |        ✓        |                 |                 |       |          |        |
| Batches Support           |        ✓        |                 |                 |       |          |        |
| Parent/Child Dependencies |        ✓        |        ✓        |                 |       |          |        |
| Debouncing                |        ✓        |        ✓        |        ✓        |       |          |        |
| Priorities                |        ✓        |        ✓        |        ✓        |   ✓   |          |   ✓    |
| Concurrency               |        ✓        |        ✓        |        ✓        |   ✓   |    ✓     |   ✓    |
| Delayed jobs              |        ✓        |        ✓        |        ✓        |   ✓   |          |   ✓    |
| Global events             |        ✓        |        ✓        |        ✓        |   ✓   |          |        |
| Rate Limiter              |        ✓        |        ✓        |        ✓        |       |          |        |
| Pause/Resume              |        ✓        |        ✓        |        ✓        |   ✓   |          |        |
| Sandboxed worker          |        ✓        |        ✓        |        ✓        |       |          |        |
| Repeatable jobs           |        ✓        |        ✓        |        ✓        |       |          |   ✓    |
| Atomic ops                |        ✓        |        ✓        |        ✓        |       |    ✓     |        |
| Persistence               |        ✓        |        ✓        |        ✓        |   ✓   |    ✓     |   ✓    |
| UI                        |        ✓        |        ✓        |        ✓        |   ✓   |          |   ✓    |
| Optimized for             | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs  | Messages |  Jobs  |
Fork the repo, make some changes, and submit a pull request! The contributing doc has more details.
Thanks to all the contributors who made this library possible, and a special mention to Leon van Kammen, who kindly donated his npm bullmq repo.