bull vs pg-boss
Managing Background Jobs and Task Queues in Node.js


bull and pg-boss are both robust job queue libraries for Node.js that enable reliable background processing using persistent storage. They allow developers to offload time-consuming tasks—like sending emails, processing files, or syncing data—into asynchronous jobs that can be retried, delayed, or scheduled without blocking the main application thread. While both solve similar problems, they differ significantly in their underlying storage engines, API design, and operational trade-offs.

Statistics (npm downloads over the past 3 years, GitHub stars)

Package  | Downloads | Stars  | Size   | Open Issues | Last Publish | License
bull     | 1,356,724 | 16,240 | 309 kB | 146         | 1 year ago   | MIT
pg-boss  | 324,185   | 3,267  | 268 kB | 36          | 9 days ago   | MIT

Bull vs pg-boss: Choosing the Right Job Queue for Your Node.js App

Both bull and pg-boss help you run background jobs reliably in Node.js applications—think sending welcome emails after sign-up, resizing uploaded images, or syncing data with third-party APIs. But they take very different paths to get there. Let’s compare them head-to-head based on real engineering concerns.

🗄️ Storage Engine: Redis vs PostgreSQL

bull uses Redis as its backing store.

  • Redis is fast and memory-optimized, making job enqueue/dequeue operations extremely quick.
  • Requires running and maintaining a separate Redis instance alongside your database.
// bull: Connects to Redis
import Queue from 'bull';

const emailQueue = new Queue('email', {
  redis: {
    host: 'localhost',
    port: 6379
  }
});

pg-boss uses PostgreSQL.

  • Leverages your existing Postgres database—no extra service needed.
  • Job data lives in regular Postgres tables (pgboss.job, etc.), which you can inspect directly.
// pg-boss: Connects to PostgreSQL
import PgBoss from 'pg-boss';

const boss = new PgBoss({
  connectionString: 'postgres://user:pass@localhost/mydb'
});
await boss.start();

💡 Key implication: If your app already uses Postgres and you don’t have Redis, adding pg-boss avoids operational overhead. If you already run Redis for caching or sessions, bull integrates cleanly.

⚙️ Job Creation: Callbacks vs Promises

bull uses an event-driven model with callbacks and async/await support.

  • You register processors on the queue itself, separate from job creation.
  • Job data is passed as a plain object.
// bull: Add a job
await emailQueue.add('welcome', { userId: 123 });

// Define the processor elsewhere (named processors match the job name)
emailQueue.process('welcome', async (job) => {
  await sendEmail(job.data.userId);
});

pg-boss uses a promise-based subscription model.

  • You register handlers with .work() (older versions used .subscribe()), and the handler returns a promise.
  • Jobs are enqueued with .send() (called .publish() in pg-boss v7 and earlier).
// pg-boss: Send and process a job
await boss.send('welcome', { userId: 123 });

await boss.work('welcome', async (job) => {
  await sendEmail(job.data.userId);
});

Both approaches are clean, but pg-boss’s API feels more aligned with modern async/await patterns out of the box.

🔁 Retries and Failure Handling

bull offers fine-grained retry control.

  • Configure per-job or per-queue retry attempts, backoff strategies (exponential, fixed), and custom retry logic.
// bull: Retry with exponential backoff
await emailQueue.add('welcome', { userId: 123 }, {
  attempts: 3,
  backoff: {
    type: 'exponential',
    delay: 1000
  }
});

pg-boss also supports retries with similar flexibility.

  • Uses retryLimit and retryBackoff options.
  • Once the retry limit is exhausted, jobs are marked failed (and in recent versions can be routed to a dead-letter queue).
// pg-boss: Retry configuration
await boss.send('welcome', { userId: 123 }, {
  retryLimit: 3,
  retryBackoff: true // enables exponential backoff
});

Both handle failures well, but bull provides slightly more customization (e.g., custom backoff functions).
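To make the retry options above concrete, the delay schedule an exponential backoff produces can be sketched in plain JavaScript (this helper is illustrative, not part of either library):

```javascript
// Illustrative exponential backoff schedule with a base delay.
// attempt is 1-based: attempt 1 is the first retry.
function exponentialBackoff(baseDelayMs, attempt) {
  return baseDelayMs * Math.pow(2, attempt - 1);
}

// With a 1000 ms base delay (as in the bull example above):
const schedule = [1, 2, 3].map((n) => exponentialBackoff(1000, n));
// schedule is [1000, 2000, 4000]
```

Exact formulas differ slightly between libraries and versions, but the doubling shape is the point: each failed attempt waits twice as long as the previous one.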

🕒 Scheduling and Delays

bull supports job delays and cron-like scheduling via repeat.

  • Delay a job by milliseconds.
  • Schedule recurring jobs using cron syntax.
// bull: Delayed job
await emailQueue.add('reminder', {}, { delay: 5 * 60 * 1000 }); // 5 min

// Recurring job
await emailQueue.add('cleanup', {}, {
  repeat: { cron: '0 2 * * *' } // daily at 2am
});

pg-boss also supports delays and cron scheduling.

  • Use startAfter for one-time delays: a number of seconds, an ISO 8601 date string, or a Date.
  • Use schedule() plus a worker for recurring jobs.
// pg-boss: Delayed job
await boss.send('reminder', {}, { startAfter: 300 }); // 5 minutes, in seconds

// Recurring job
await boss.schedule('cleanup', '0 2 * * *'); // daily at 2am
await boss.work('cleanup', async () => {
  // cleanup logic
});

Note: a numeric startAfter is interpreted as seconds, not milliseconds (an easy off-by-1000 mistake when porting from bull's millisecond delay option).
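To keep the units straight, small hypothetical helpers can build startAfter values explicitly (these are illustrative, not pg-boss APIs):

```javascript
// Hypothetical helpers for building pg-boss startAfter values.
// A numeric startAfter is seconds; a Date is also accepted.
function startAfterSeconds(minutes) {
  return minutes * 60; // e.g. 5 minutes -> 300
}

function startAfterDate(minutes, now = new Date()) {
  return new Date(now.getTime() + minutes * 60 * 1000);
}
```

Usage would then read as `{ startAfter: startAfterSeconds(5) }`, which makes the intent obvious at the call site.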

🔄 Transactional Guarantees

This is where the storage engine really matters.

bull (Redis) cannot participate in database transactions.

  • If you create a user in Postgres and then add a “send welcome email” job to Redis, a crash between those steps leaves your system inconsistent.
  • You’d need two-phase commit or idempotency to recover.
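A common mitigation is an idempotency key: the worker records which keys it has already handled and skips redeliveries. A minimal in-memory sketch (a real system would persist the keys in Postgres or Redis):

```javascript
// Minimal idempotency sketch: process each key at most once.
// In production the "processed" set would live in a durable store.
function createIdempotentHandler(handler) {
  const processed = new Set();
  return function handle(key, payload) {
    if (processed.has(key)) return 'skipped';
    const result = handler(payload);
    processed.add(key);
    return result;
  };
}

const sendWelcome = createIdempotentHandler((user) => `sent to ${user}`);
sendWelcome('user:123', 'alice'); // first delivery: sends the email
sendWelcome('user:123', 'alice'); // redelivery: returns 'skipped'
```

With this in place, a job that runs twice after a crash does no harm, which is usually cheaper than trying to make enqueueing itself transactional across two stores.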

pg-boss (PostgreSQL) supports transactional job creation.

  • You can insert a job inside the same Postgres transaction as your business data.
  • Ensures job and data are either both committed or both rolled back.
// pg-boss: enqueue a job inside a DB transaction (sketch; recent pg-boss
// versions accept a `db` option that runs the insert on your tx client)
await client.query('BEGIN');
await client.query('INSERT INTO users (name) VALUES ($1)', [name]);
await boss.send('welcome', { name }, {
  db: { executeSql: (text, values) => client.query(text, values) }
});
await client.query('COMMIT');

✅ This is a major advantage for pg-boss in systems where data consistency is non-negotiable.

📊 Observability and Monitoring

bull has Bull Board, a popular UI dashboard for monitoring queues, inspecting jobs, and retrying failures.

  • Easy to plug in with Express or Fastify.
  • Shows real-time stats, job history, and error logs.
// bull: Add Bull Board (recent @bull-board versions also need a server adapter)
import { createBullBoard } from '@bull-board/api';
import { BullAdapter } from '@bull-board/api/bullAdapter';
import { ExpressAdapter } from '@bull-board/express';

const serverAdapter = new ExpressAdapter();
createBullBoard({
  queues: [new BullAdapter(emailQueue)],
  serverAdapter
});
// app.use('/admin/queues', serverAdapter.getRouter());

pg-boss does not include a built-in UI, but since jobs live in Postgres tables, you can:

  • Query job status directly with SQL.
  • Build custom dashboards using standard Postgres tooling.
  • Use third-party tools that monitor Postgres.

While less turnkey than Bull Board, this approach gives you full control and integrates with existing database observability practices.
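For instance, a status overview is one GROUP BY away (the SQL below assumes pg-boss's default pgboss schema; the summarizing helper is illustrative, not a pg-boss API):

```javascript
// Rows as returned by e.g.:
//   SELECT name, state, count(*)::int AS count
//   FROM pgboss.job GROUP BY name, state;
// Shape them into { jobName: { state: count } } for a dashboard.
function summarizeJobStates(rows) {
  const summary = {};
  for (const { name, state, count } of rows) {
    (summary[name] ??= {})[state] = count;
  }
  return summary;
}

const summary = summarizeJobStates([
  { name: 'welcome', state: 'completed', count: 40 },
  { name: 'welcome', state: 'failed', count: 2 }
]);
// summary.welcome -> { completed: 40, failed: 2 }
```

The same query works from psql, Grafana, or any Postgres-aware tool, which is exactly the appeal of this approach.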

🧩 Advanced Features

Rate Limiting

  • bull: Supports per-queue and per-job rate limiting (e.g., max 10 jobs per second).
    const queue = new Queue('api-call', {
      limiter: { max: 10, duration: 1000 }
    });
    
  • pg-boss: No built-in rate limiting. You’d need to implement it yourself or use external tools.
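If you do need rate limiting with pg-boss, a fixed-window limiter is one simple approach. The sketch below is in-process only, so it does not coordinate across multiple worker processes:

```javascript
// Fixed-window rate limiter sketch: allow at most maxPerWindow calls
// per windowMs. Takes `now` explicitly so behavior is deterministic.
function createRateLimiter(maxPerWindow, windowMs) {
  let windowStart = -Infinity;
  let count = 0;
  return function tryAcquire(now) {
    if (now - windowStart >= windowMs) {
      windowStart = now; // start a fresh window
      count = 0;
    }
    if (count < maxPerWindow) {
      count += 1;
      return true; // proceed with the job
    }
    return false; // defer or re-queue the job
  };
}

const allow = createRateLimiter(10, 1000); // mirrors bull's { max: 10, duration: 1000 }
```

A worker would call `allow(Date.now())` before processing and re-enqueue the job with a delay when it returns false.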

Job Priorities

  • bull: Assign a priority when adding a job; 1 is the highest priority, and lower numbers are processed first.
    await queue.add('urgent', {}, { priority: 1 });
    
  • pg-boss: Supports a numeric priority option on send(); jobs with higher priority values are fetched first.

Job Progress Tracking

  • bull: Workers can report progress back to the job (e.g., “50% complete”).
    queue.process(async (job) => {
      await job.progress(50);
    });
    
  • pg-boss: No native progress tracking. You’d store progress in your own table.
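A sketch of that "own table" approach, modeled here with an in-memory Map (in practice update/get would be an UPDATE/SELECT against a Postgres table keyed by job id; this is not a pg-boss feature):

```javascript
// Minimal progress store sketch, clamping values to 0-100.
function createProgressStore() {
  const progress = new Map();
  return {
    update(jobId, percent) {
      progress.set(jobId, Math.min(100, Math.max(0, percent)));
    },
    get(jobId) {
      return progress.get(jobId) ?? 0; // unknown jobs report 0%
    }
  };
}

const progressStore = createProgressStore();
progressStore.update('job-1', 50);
// progressStore.get('job-1') -> 50
```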

🛠️ Operational Considerations

Concern          | bull (Redis)                           | pg-boss (PostgreSQL)
Infrastructure   | Requires Redis + Postgres              | Only Postgres
Throughput       | Very high (Redis is fast)              | High, but bound by Postgres I/O
Data Consistency | Eventual (no cross-store transactions) | Strong (ACID within Postgres)
Debugging        | Requires Redis CLI or Bull Board       | Plain SQL queries
Learning Curve   | Moderate (Redis concepts + Bull API)   | Low (just Postgres + familiar JS)

🎯 When to Pick Which

Choose bull if:

  • You already run Redis.
  • You need high job throughput (thousands per second).
  • You rely on features like rate limiting or real-time progress tracking.
  • You want a ready-made UI (Bull Board) for ops visibility.

Choose pg-boss if:

  • You only use Postgres and want to avoid adding Redis.
  • Job creation must be atomic with your business data.
  • Your team prefers SQL-based debugging and monitoring.
  • Your workload is moderate (hundreds of jobs per second is plenty).

💡 Final Thought

Both libraries are mature, well-maintained, and production-ready. The decision often boils down to your existing data infrastructure and consistency requirements. If you’re all-in on Postgres and value transactional safety, pg-boss is a natural fit. If you need Redis anyway or require advanced queue features, bull delivers power and polish. Neither is deprecated—both are actively developed as of 2024.

How to Choose: bull vs pg-boss

  • bull:

    Choose bull if you're already using Redis in your stack or need high-throughput job processing with advanced features like rate limiting, prioritized queues, and real-time progress tracking. It’s ideal for systems where low-latency job dispatch and rich observability (via Bull Board) are critical, and you’re comfortable managing Redis as a dependency.

  • pg-boss:

    Choose pg-boss if your application already relies on PostgreSQL and you want to avoid introducing Redis just for queuing. It’s well-suited for teams that prefer keeping infrastructure simple by using a single database, and who value strong transactional guarantees—especially when job creation must be atomic with other database operations.

bull's README




The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


🚀 Sponsors 🚀

Dragonfly Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™ such as massively better performance, by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.

📻 News and updates

Bull is currently in maintenance mode, we are only fixing bugs. For new features check BullMQ, a modern rewritten implementation in Typescript. You are still very welcome to use Bull if it suits your needs, which is a safe, battle tested library.

Follow me on Twitter for other important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian Autodesk Mozilla Nest Salesforce


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume—globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.

And coming up on the roadmap...

  • Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

Feature       | BullMQ-Pro      | BullMQ          | Bull            | Kue   | Bee      | Agenda
Backend       | redis           | redis           | redis           | redis | redis    | mongo
Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs  | Messages | Jobs

The comparison also covers: Observables, Group Rate Limit, Group Support, Batches Support, Parent/Child Dependencies, Priorities, Concurrency, Delayed jobs, Global events, Rate Limiter, Pause/Resume, Sandboxed worker, Repeatable jobs, Atomic ops, Persistence, and UI.

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, either code fixes, new features or doc improvements. Code formatting is enforced by prettier. For commits please follow conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Less connections to redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, check your expressions here to verify they are correct: cron expression generator

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function (job, result) {
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However every queue instance will require new redis connections, check how to reuse connections or you can also use named processors to achieve a similar result.

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    };
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
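The knobs described above are passed via the settings object when constructing a Queue. A sketch of typical values follows; the numbers mirror the commonly documented defaults, but verify them against the Reference:

```javascript
// Sketch of Bull's stall-related settings (assumed defaults; check the
// Reference for authoritative values). Passed as: new Queue('video', { settings })
const settings = {
  lockDuration: 30000,    // ms a worker may hold a job lock before renewal
  lockRenewTime: 15000,   // renewal interval, usually lockDuration / 2
  stalledInterval: 30000, // how often workers check for stalled jobs
  maxStalledCount: 1      // max times a stalled job is restarted
};
```

Raising lockDuration tolerates longer event-loop blockage at the cost of slower detection of genuinely stalled jobs, which is exactly the tradeoff described above.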