bull and pg-boss are both robust job-queue libraries for Node.js that enable reliable background processing backed by persistent storage. They let you offload time-consuming work (sending welcome emails after sign-up, resizing uploaded images, syncing data with third-party APIs) into asynchronous jobs that can be retried, delayed, or scheduled without blocking the main application thread. While both solve the same problem, they take very different paths to get there: they differ significantly in storage engine, API design, and operational trade-offs. Let's compare them head-to-head on real engineering concerns.
bull uses Redis as its backing store.
// bull: Connects to Redis
import Queue from 'bull';

const emailQueue = new Queue('email', {
  redis: {
    host: 'localhost',
    port: 6379
  }
});
pg-boss uses PostgreSQL. Jobs are stored in ordinary tables (pgboss.job, etc.), which you can inspect directly.
// pg-boss: Connects to PostgreSQL
import PgBoss from 'pg-boss';

const boss = new PgBoss({
  connectionString: 'postgres://user:pass@localhost/mydb'
});
await boss.start();
💡 Key implication: If your app already uses Postgres and you don’t have Redis, adding pg-boss avoids operational overhead. If you already run Redis for caching or sessions, bull integrates cleanly.
bull uses an event-driven model with callbacks and async/await support.
// bull: Add a job
await emailQueue.add('welcome', { userId: 123 });

// Define the processor elsewhere
emailQueue.process('welcome', async (job) => {
  await sendEmail(job.data.userId);
});
pg-boss uses a promise-based subscription model: you register a handler with .work(), and the handler returns a promise.
// pg-boss: Publish and process a job
await boss.publish('welcome', { userId: 123 });

boss.work('welcome', async (job) => {
  await sendEmail(job.data.userId);
});
Both approaches are clean, but pg-boss’s API feels more aligned with modern async/await patterns out of the box.
bull offers fine-grained retry control.
// bull: Retry with exponential backoff
await emailQueue.add('welcome', { userId: 123 }, {
  attempts: 3,
  backoff: {
    type: 'exponential',
    delay: 1000
  }
});
pg-boss also supports retries with similar flexibility, via the retryLimit and retryBackoff options.
// pg-boss: Retry configuration
await boss.publish('welcome', { userId: 123 }, {
  retryLimit: 3,
  retryBackoff: true // enables exponential backoff
});
Both handle failures well, but bull provides slightly more customization (e.g., custom backoff functions).
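To make the retry schedule concrete, here is a small sketch of the delays an exponential backoff with a 1000 ms base produces. It mirrors the usual base * 2^(attempt - 1) formula; treat it as an illustration of the configuration above, not the library's exact internals:

```javascript
// Illustration only: models the delay schedule of an exponential
// backoff with a fixed base, as in the configs above.
function backoffDelays(attempts, baseMs) {
  const delays = [];
  for (let attempt = 1; attempt < attempts; attempt++) {
    // Delay before retry N doubles each time: base * 2^(N-1)
    delays.push(baseMs * 2 ** (attempt - 1));
  }
  return delays;
}

console.log(backoffDelays(4, 1000)); // [ 1000, 2000, 4000 ]
```

With attempts: 3 and a 1000 ms base, a job that keeps failing waits 1 s, then 2 s, before giving up.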
bull supports job delays and cron-like scheduling via repeat.
// bull: Delayed job
await emailQueue.add('reminder', {}, { delay: 5 * 60 * 1000 }); // 5 min

// Recurring job
await emailQueue.add('cleanup', {}, {
  repeat: { cron: '0 2 * * *' } // daily at 2am
});
pg-boss also supports delays and cron scheduling: startAfter for one-time delays, and a cron option for recurring jobs.
// pg-boss: Delayed job
await boss.publish('reminder', {}, { startAfter: '5 minutes' });

// Recurring job
await boss.subscribe('cleanup', { cron: '0 2 * * *' }, async () => {
  // cleanup logic
});
Note: pg-boss accepts human-readable time strings ('5 minutes') in addition to milliseconds, which can improve readability.
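As an illustration of what those human-readable strings mean, here is a toy converter from a "&lt;n&gt; &lt;unit&gt;" string to milliseconds. This is not pg-boss's actual parser (the library accepts richer formats); it only shows the idea:

```javascript
// Toy duration parser; NOT pg-boss's implementation, just the idea.
const UNIT_MS = { second: 1000, minute: 60_000, hour: 3_600_000 };

function toMillis(text) {
  const match = /^(\d+)\s+(second|minute|hour)s?$/.exec(text.trim());
  if (!match) throw new Error(`unrecognized duration: ${text}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}

console.log(toMillis('5 minutes')); // 300000
```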
This is where the storage engine really matters.
bull (Redis) cannot participate in database transactions.
pg-boss (PostgreSQL) supports transactional job creation.
// pg-boss: Enqueue job inside DB transaction
await sql.begin(async (tx) => {
  await tx`INSERT INTO users (name) VALUES (${name})`;
  await boss.insertJob(tx, 'welcome', { name });
});
✅ This is a major advantage for pg-boss in systems where data consistency is non-negotiable.
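A toy model in plain JavaScript (no real database or queue; every name here is invented for illustration) shows why this matters: a job enqueued in a separate store survives a rollback, while a job row living in the same store rolls back with the rest of the transaction.

```javascript
// Toy model of the dual-write problem. Not real bull/pg-boss code.
function runTransaction(store, work) {
  const snapshot = JSON.parse(JSON.stringify(store));
  try {
    work();
    return true;
  } catch (err) {
    Object.assign(store, snapshot); // roll back everything in this store
    return false;
  }
}

const db = { users: [], jobs: [] }; // jobs in the same store (pg-boss style)
const external = { jobs: [] };      // jobs in a separate store (bull style)

runTransaction(db, () => {
  db.users.push('alice');
  external.jobs.push('welcome-alice'); // enqueued outside the transaction
  db.jobs.push('welcome-alice');       // enqueued inside the transaction
  throw new Error('insert failed');    // transaction aborts
});

console.log(db.users.length);    // 0 - user insert rolled back
console.log(db.jobs.length);     // 0 - in-store job rolled back too
console.log(external.jobs.length); // 1 - orphaned job in the external queue
```

The orphaned job would fire a welcome email for a user that was never created; that is exactly the failure mode transactional enqueueing closes off.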
bull has Bull Board, a popular UI dashboard for monitoring queues, inspecting jobs, and retrying failures.
// bull: Add Bull Board
import { createBullBoard } from '@bull-board/api';
import { BullAdapter } from '@bull-board/api/bullAdapter';

createBullBoard({
  queues: [new BullAdapter(emailQueue)]
});
pg-boss does not include a built-in UI, but since jobs live in Postgres tables, you can query job state with plain SQL, point your existing database dashboards at those tables, or build a small admin view on top of them.
While less turnkey than Bull Board, this approach gives you full control and integrates with existing database observability practices.
bull: Supports per-queue and per-job rate limiting (e.g., max 10 jobs per second).
const queue = new Queue('api-call', {
  limiter: { max: 10, duration: 1000 }
});
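The { max: 10, duration: 1000 } option expresses a windowed limit: at most 10 jobs may start per 1000 ms. As a rough illustration of that semantics (not bull's internals, which enforce the limit inside Redis), a fixed-window check looks like this:

```javascript
// Toy fixed-window rate limiter mirroring { max, duration }.
// Illustration only; bull enforces its limiter inside Redis.
function makeLimiter(max, durationMs) {
  let windowStart = 0;
  let count = 0;
  return function tryAcquire(nowMs) {
    if (nowMs - windowStart >= durationMs) {
      windowStart = nowMs; // start a new window
      count = 0;
    }
    if (count < max) {
      count++;
      return true; // job may start
    }
    return false;  // over the limit; job waits for the next window
  };
}

const allow = makeLimiter(10, 1000);
let allowed = 0;
for (let t = 0; t < 15; t++) {
  if (allow(0)) allowed++; // 15 attempts at the same instant
}
console.log(allowed); // 10 - the other 5 wait for the next window
```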
pg-boss: No built-in rate limiting. You’d need to implement it yourself or use external tools.

bull: Assign priority levels to jobs (1 is the highest priority; larger numbers run later); higher-priority jobs jump the queue.

await queue.add('urgent', {}, { priority: 1 });

pg-boss: Jobs can also carry a numeric priority (higher numbers are worked first), though bull offers more tooling around it.

bull: Workers can report progress back to the job (e.g., “50% complete”).

processor: async (job) => {
  job.progress(50);
}

pg-boss: No native progress tracking. You’d store progress in your own table.

| Concern | bull (Redis) | pg-boss (PostgreSQL) |
|---|---|---|
| Infrastructure | Requires Redis + Postgres | Only Postgres |
| Throughput | Very high (Redis is fast) | High, but bound by Postgres I/O |
| Data Consistency | Eventual (no cross-store transactions) | Strong (ACID within Postgres) |
| Debugging | Requires Redis CLI or Bull Board | Plain SQL queries |
| Learning Curve | Moderate (Redis concepts + Bull API) | Low (just Postgres + familiar JS) |
Both libraries are mature, well-maintained, and production-ready. The decision often boils down to your existing data infrastructure and consistency requirements. If you’re all-in on Postgres and value transactional safety, pg-boss is a natural fit. If you need Redis anyway or require advanced queue features, bull delivers power and polish. Neither library is abandoned, though note that bull itself is in maintenance mode (bug fixes only), with new feature work happening in its successor, BullMQ.
Choose bull if you're already using Redis in your stack or need high-throughput job processing with advanced features like rate limiting, prioritized queues, and real-time progress tracking. It’s ideal for systems where low-latency job dispatch and rich observability (via Bull Board) are critical, and you’re comfortable managing Redis as a dependency.
Choose pg-boss if your application already relies on PostgreSQL and you want to avoid introducing Redis just for queuing. It’s well-suited for teams that prefer keeping infrastructure simple by using a single database, and who value strong transactional guarantees—especially when job creation must be atomic with other database operations.
The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.
Bull is currently in maintenance mode; we are only fixing bugs. For new features, check BullMQ, a modern rewrite in TypeScript. You are still very welcome to use Bull, a safe, battle-tested library, if it suits your needs.
You can find tutorials and news in this blog: https://blog.taskforce.sh/
Bull is popular among large and small organizations.
There are a few third-party UIs that you can use for monitoring, with versions available for BullMQ, Bull v3, and Bull <= v2.
Since there are a few job queue solutions, here is a table comparing them:
| Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda |
|---|---|---|---|---|---|---|
| Backend | redis | redis | redis | redis | redis | mongo |
| Observables | ✓ | | | | | |
| Group Rate Limit | ✓ | | | | | |
| Group Support | ✓ | | | | | |
| Batches Support | ✓ | | | | | |
| Parent/Child Dependencies | ✓ | ✓ | | | | |
| Priorities | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Delayed jobs | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Global events | ✓ | ✓ | ✓ | ✓ | | |
| Rate Limiter | ✓ | ✓ | ✓ | | | |
| Pause/Resume | ✓ | ✓ | ✓ | ✓ | | |
| Sandboxed worker | ✓ | ✓ | ✓ | | | |
| Repeatable jobs | ✓ | ✓ | ✓ | | | ✓ |
| Atomic ops | ✓ | ✓ | ✓ | | ✓ | |
| Persistence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| UI | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |
npm install bull --save
or
yarn add bull
Requirements: Bull requires a Redis version greater than or equal to 2.8.18.
npm install @types/bull --save-dev
yarn add --dev @types/bull
Definitions are currently maintained in the DefinitelyTyped repo.
We welcome all types of contributions, either code fixes, new features or doc improvements. Code formatting is enforced by prettier. For commits please follow conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.
const Queue = require('bull');
const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');
videoQueue.process(function (job, done) {
  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});
videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });
Alternatively, you can return promises instead of using the done callback:
videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});
The process function can also be run in a separate process. This has several advantages: the process is sandboxed, so a crash does not affect the worker; you can run blocking code without stalling the queue; and multi-core CPUs are better utilized.
In order to use this feature just create a separate file with the processor:
// processor.js
module.exports = function (job) {
  // Do some heavy work
  return Promise.resolve(result);
};
And define the processor like this:
// Single process:
queue.process('/path/to/my/processor.js');
// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');
// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');
A job can be added to a queue and processed repeatedly according to a cron specification:
paymentsQueue.process(function (job) {
  // Check payments
});
// Repeat payment job once every day at 3:15 (am)
paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });
As a tip, check your expressions here to verify they are correct: cron expression generator
A queue can be paused and resumed globally (pass true to pause processing for
just this worker):
queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
});
A queue emits some useful events, for example...
queue.on('completed', function (job, result) {
  // Job completed with output result!
});
For more information on events, including the full list of events that are fired, check out the Events reference
Queues are cheap, so if you need many of them just create new ones with different names:
const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.
However every queue instance will require new redis connections, check how to reuse connections or you can also use named processors to achieve a similar result.
NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.
Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:
const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}
For the full documentation, check out the reference and common patterns:
If you see anything that could use more docs, please submit a pull request!
The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.
When a worker is processing a job it will keep the job "locked" so other workers can't process it.
It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled -
and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval
lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed,
the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

1. The Node process running your job processor unexpectedly terminates.
2. Your job processor was too CPU-intensive and stalled the Node event loop, so Bull couldn't renew the job lock. You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop, or by passing a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.
As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
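The timing rules above boil down to simple arithmetic. A small sketch, using the parameter names from the text (the concrete values are example assumptions, not Bull's defaults):

```javascript
// Sketch of the stall condition described above: a job counts as
// stalled if its lock was not renewed within lockDuration.
function isStalled(lastRenewalMs, nowMs, lockDurationMs) {
  return nowMs - lastRenewalMs > lockDurationMs;
}

const lockDuration = 30_000;            // example value, in ms
const lockRenewTime = lockDuration / 2; // renewal interval: half the duration

// Renewals arriving on schedule: never stalled.
console.log(isStalled(0, lockRenewTime, lockDuration)); // false

// Event loop blocked past lockDuration: the job is treated as stalled
// and may be picked up (and double-processed) by another worker.
console.log(isStalled(0, 31_000, lockDuration)); // true
```

This is why a processor that blocks the event loop for longer than lockDuration gets double-processed even though it never crashed.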