bull, cluster, comlink, pm2, threads, and web-worker are all tools that help JavaScript applications handle concurrency, parallelism, or process isolation, but they solve very different problems in distinct environments. bull is a Redis-backed queue system for deferring and distributing work in Node.js. cluster and pm2 manage multiple Node.js processes to utilize multi-core systems: cluster is a built-in Node module, while pm2 is a production process manager with clustering, monitoring, and restart capabilities. For off-main-thread execution, web-worker provides a minimal wrapper to create Web Workers in browsers or Node (via worker_threads), threads offers a higher-level API for shared-memory threading in Node.js on top of worker_threads, and comlink simplifies communication between main threads and workers by abstracting postMessage into async/await-friendly proxies. These packages span backend job processing, process orchestration, and frontend/browser-based parallelism.
JavaScript is single-threaded by design, but real-world applications often need to do more than one thing at a time—whether it’s handling thousands of requests, processing large datasets, or keeping the UI smooth during heavy computation. The packages bull, cluster, comlink, pm2, threads, and web-worker all address this challenge, but they operate in different layers (backend vs frontend), different environments (Node.js vs browser), and solve different problems (queuing vs process management vs worker communication). Let’s break down how and when to use each.
Before comparing APIs, it’s crucial to understand what problem each tool is designed for:
- `bull`: A job queue system. It defers work to be processed later, possibly by other processes or machines, using Redis as a durable message store.
- `cluster`: A process forking utility. It lets a single Node.js app spawn multiple OS processes to handle incoming network traffic across CPU cores.
- `pm2`: A production process manager. It runs, monitors, clusters, and restarts Node.js apps automatically, with CLI and programmatic APIs.
- `web-worker`: A cross-environment worker factory. It creates Web Workers in browsers and worker_threads in Node.js using the same API.
- `threads`: A high-level threading library for Node.js. It wraps worker_threads with promises, shared memory support, and easy data transfer.
- `comlink`: A communication abstraction for workers. It turns postMessage into async/await calls, making workers feel like local modules.

These aren't interchangeable; they belong to different categories. Confusing them leads to architectural mistakes (e.g., trying to use bull for UI responsiveness).
| Package | Browser | Node.js | Requires Redis? |
|---|---|---|---|
| bull | ❌ | ✅ | ✅ |
| cluster | ❌ | ✅ (core) | ❌ |
| pm2 | ❌ | ✅ | ❌ |
| web-worker | ✅ | ✅* | ❌ |
| threads | ❌ | ✅ | ❌ |
| comlink | ✅ | ✅* | ❌ |
* web-worker and comlink work in Node.js only when paired with a worker_threads polyfill or bundler config that maps Web Worker APIs to Node equivalents.
💡 Key insight: If you're building a frontend app, your options are limited to `web-worker` and `comlink`. If you're in Node.js, all six are technically available, but only some make sense for your use case.
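Whether you are in a Node or browser environment can be checked at runtime. This is a minimal, illustrative sketch of the kind of detection a cross-environment worker wrapper has to perform internally; it is not web-worker's actual source:

```javascript
// Sketch: detect Node vs a browser-like environment. A cross-environment
// wrapper uses a check like this to decide between worker_threads and
// the native Web Worker API.
const isNode =
  typeof process !== 'undefined' &&
  process.versions != null &&
  process.versions.node != null;

const environment = isNode ? 'node' : 'browser';
```

In practice you rarely write this yourself; that is precisely the boilerplate web-worker exists to hide.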
Let’s see how each package starts its core unit of work.
bull: Enqueue a Job

```js
// producer.js
import Queue from 'bull';

const emailQueue = new Queue('emails', 'redis://127.0.0.1:6379');

// Add a job to the queue
await emailQueue.add({ to: 'user@example.com', subject: 'Welcome!' });
```

```js
// processor.js
import Queue from 'bull';

const emailQueue = new Queue('emails', 'redis://127.0.0.1:6379');

emailQueue.process(async (job) => {
  const { to, subject } = job.data;
  await sendEmail(to, subject); // actual work happens here
});
```
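bull can also retry failed jobs for you: the `add()` call accepts job options such as `attempts` and `backoff` (e.g. `emailQueue.add(data, { attempts: 3, backoff: { type: 'exponential', delay: 5000 } })`). The schedule below is a plausible sketch of an exponential backoff curve, not necessarily bull's exact internal formula:

```javascript
// Sketch: an exponential backoff schedule. delayMs is the base delay,
// attempt is 1-based. Bull's built-in strategy may differ in detail.
function backoffDelay(delayMs, attempt) {
  return delayMs * 2 ** (attempt - 1);
}

// First three retry delays for a 5s base: 5s, 10s, 20s
const delays = [1, 2, 3].map((n) => backoffDelay(5000, n));
```

The point of exponential backoff is to give a struggling downstream service (here, an email provider) progressively more breathing room between retries.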
cluster: Fork Child Processes

```js
// server.js
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';

if (cluster.isPrimary) {
  const numCPUs = availableParallelism();
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork(); // spawns a new worker process
  }
} else {
  // Worker process: start HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello from worker!');
  }).listen(8000);
}
```
pm2: Start and Cluster an App

```js
// app.js
import http from 'node:http';

http.createServer((req, res) => {
  res.writeHead(200);
  res.end('Hello from PM2!');
}).listen(8000);
```

Then run via CLI:

```sh
# Start with 4 clustered instances
pm2 start app.js -i 4
```

Or programmatically:

```js
// launcher.js
import pm2 from 'pm2';

pm2.connect((err) => {
  if (err) throw err;
  pm2.start({
    script: 'app.js',
    instances: 4,
    exec_mode: 'cluster'
  }, (err, apps) => {
    if (err) console.error(err);
    pm2.disconnect();
  });
});
```
web-worker: Create a Worker

```js
// main.js
import Worker from 'web-worker'; // default export, not a named one

const worker = new Worker(new URL('./worker.js', import.meta.url));
worker.postMessage('Hello');
worker.onmessage = (e) => console.log(e.data); // 'World'
```

```js
// worker.js
self.onmessage = (e) => {
  self.postMessage('World');
};
```
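With raw postMessage, correlating a request to its response is on you. The sketch below is a hypothetical illustration of that bookkeeping (the `promisify` helper and the `{ id, payload }` message shape are inventions for this example, not part of any of these packages); it works over any object exposing a `postMessage`/`onmessage` pair:

```javascript
// Sketch: the request/response plumbing comlink automates, written by hand.
// workerLike is anything with postMessage() and an onmessage property.
function promisify(workerLike) {
  let nextId = 0;
  const pending = new Map();

  // Route each reply back to the promise that asked for it
  workerLike.onmessage = (e) => {
    const { id, result } = e.data;
    pending.get(id)?.(result);
    pending.delete(id);
  };

  // Each call gets a unique id so replies can arrive out of order
  return (payload) =>
    new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      workerLike.postMessage({ id, payload });
    });
}
```

Multiply this by every method and every error path, and the appeal of a higher-level wrapper like comlink becomes obvious.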
threads: Spawn a Thread

```js
// main.js
import { spawn, Thread, Worker } from 'threads';

const hasher = await spawn(new Worker('./hasher'));
const hash = await hasher.sha256('secret');
await Thread.terminate(hasher);
```

```js
// hasher.js
import { createHash } from 'node:crypto';
import { expose } from 'threads/worker';

expose({
  sha256(input) {
    // compute the hash with Node's crypto module
    return createHash('sha256').update(input).digest('hex');
  }
});
```
comlink: Proxy a Worker

```js
// main.js
import * as Comlink from 'comlink';

const worker = new Worker(new URL('./api.js', import.meta.url));
const api = Comlink.wrap(worker);
const result = await api.expensiveCalculation(42); // feels local!
```

```js
// api.js
import * as Comlink from 'comlink';

const obj = {
  async expensiveCalculation(n) {
    // heavy CPU work...
    return n * n;
  }
};

Comlink.expose(obj);
```
How you pass data between units of work varies dramatically:
- `bull`: Jobs carry JSON-serializable data. Workers pull jobs from Redis.
- `cluster`: No direct communication. Use IPC (`process.send()`) sparingly, or external stores (Redis, DB) for coordination.
- `pm2`: Similar to cluster; relies on external coordination or PM2's built-in pub/sub (`pm2.sendDataToProcessId()`).
- `web-worker`: Raw postMessage with structured cloning (no shared memory by default).
- `threads`: Supports structured cloning and SharedArrayBuffer for zero-copy data sharing.
- `comlink`: Abstracts postMessage into async method calls; no manual event handling.

With threads, you can share memory:
```js
// main.js
import { Pool, spawn, Worker } from 'threads';

const pool = Pool(() => spawn(new Worker('./processor')));

const buffer = new SharedArrayBuffer(1024);
const view = new Uint8Array(buffer);
// ... fill buffer ...

// A SharedArrayBuffer is shared by reference (no copy, no Transfer needed);
// pool.queue() takes a task function that receives the worker's API
const result = await pool.queue((worker) => worker.process(buffer));
```
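Shared memory only pays off if concurrent access is coordinated, and that is what the Atomics API provides. A single-threaded sketch of the calls each side of a main-thread/worker pair would make against the same buffer:

```javascript
// A 4-byte shared counter. In real use, the main thread and the worker
// would each hold an Int32Array view over the same SharedArrayBuffer.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

// Atomics.add is safe even if another thread increments concurrently
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 1);

const value = Atomics.load(counter, 0); // reads 2
```

Plain reads and writes on a shared view can tear or race; routing every access through `Atomics` (and using `Atomics.wait`/`Atomics.notify` for signaling) is what makes SharedArrayBuffer usable across threads.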
With comlink, you still pay structured-clone serialization cost unless you mark data as transferable. The simplest route is `Comlink.transfer`; for custom types you can also register a handler via `Comlink.transferHandlers`:

```js
// main.js
import * as Comlink from 'comlink';

const buffer = new ArrayBuffer(1024);
// Transfer ownership of the buffer to the worker instead of copying it
const result = await api.process(Comlink.transfer(buffer, [buffer]));
```
web-worker gives you no help—you must manage Transferable objects manually.
- `bull`: Provides job states (waiting, active, completed, failed), retries, backoff strategies, and pause/resume. Monitor via Bull Board or custom UIs.
- `cluster`: No built-in monitoring. If a worker dies, you must detect and respawn it yourself.
- `pm2`: Built-in uptime tracking, log streaming, health checks, auto-restart on crash, and graceful reloads (`pm2 reload`).
- `web-worker` / `threads` / `comlink`: No lifecycle management beyond `.terminate()`. You handle errors, restarts, and resource cleanup.

For production Node.js services, pm2 is often preferred over raw cluster because it solves operational headaches out of the box.
Scenario: You receive a user signup request and need to send a welcome email without blocking the response.
Use: `bull`.

Scenario: Your Express app is CPU-bound and you want to use all cores on a single machine.
Use: `pm2` (for production) or `cluster` (for minimal setups). pm2 adds restarts, logs, and zero-downtime deploys, which are critical for production.

Scenario: You let users upload photos and apply filters without freezing the UI.
Use: `comlink` + `web-worker`. comlink makes calling worker functions feel natural, and web-worker ensures the same code works in dev (browser) and test (Node via JSDOM or similar).

Scenario: You have a 1GB CSV and want to split parsing across threads to speed it up.
Use: `threads`. It supports SharedArrayBuffer for efficient data sharing and has a clean promise API; web-worker would require manual message handling.

Common mistakes to avoid:

- Using `bull` for frontend tasks: bull requires Redis and runs in Node.js; it won't help with browser UI jank.
- Using `cluster` when your workload is I/O-bound: if your app is CPU-heavy per request (e.g., real-time analytics), cluster helps. But if it's mostly waiting on DB calls, async/await is sufficient.
- Assuming `comlink` eliminates serialization cost: it hides postMessage, but data is still copied unless you use transferables.
- Using `pm2` in serverless environments: platforms like AWS Lambda manage scaling for you; pm2 adds unnecessary overhead.

| Package | Primary Use Case | Environment | Parallelism Type | Data Sharing | Production Ready? |
|---|---|---|---|---|---|
| bull | Job queueing | Node.js | Distributed (Redis) | JSON jobs | ✅ |
| cluster | Multi-process HTTP servers | Node.js | Process-based | IPC (limited) | ⚠️ (basic) |
| pm2 | Process management & clustering | Node.js | Process-based | External only | ✅ |
| web-worker | Cross-env worker creation | Browser/Node | Thread-based | Structured clone | ✅ |
| threads | High-level threading | Node.js | Thread-based | SharedArrayBuffer | ✅ |
| comlink | Worker RPC abstraction | Browser/Node | Thread-based | Structured clone* | ✅ |
* With manual setup for transferables.
Ask yourself these questions:

- Do you need to defer work, or run it in parallel? Deferral calls for queuing (`bull`); parallel execution calls for `threads` or `cluster`.
- Do you need operational features like restarts, logs, and monitoring? Use `pm2` for production ops; `cluster` for simple cases.
- How much data moves between units of work? Large datasets favor `threads` with shared memory; small messages work with any.

These tools aren't competitors; they're specialists. The right choice depends entirely on your runtime environment, workload type, and operational requirements.
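Purely as an illustration, that checklist collapses into a small decision function. The input names and rules below are this article's simplification, not an official taxonomy:

```javascript
// Sketch: map checklist answers to a package recommendation.
function recommend({ environment, deferWork, needsOps, largeSharedData }) {
  if (environment === 'browser') return 'comlink + web-worker'; // only browser options
  if (deferWork) return 'bull';          // queuing semantics, durable via Redis
  if (largeSharedData) return 'threads'; // SharedArrayBuffer, promise API
  return needsOps ? 'pm2' : 'cluster';   // process-level parallelism
}
```

Real decisions are fuzzier (you might run bull workers under pm2, for instance), but the ordering of the checks mirrors how the constraints dominate each other.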
Choose bull when you need reliable, persistent background job processing in Node.js with features like retries, prioritization, rate limiting, and delayed jobs. It’s ideal for decoupling time-consuming tasks (e.g., sending emails, processing uploads) from your main request flow using Redis as a message broker. Avoid it if you don’t already use Redis or if your workload doesn’t require queuing semantics.
Choose cluster when you’re running a Node.js server on a multi-core machine and want to fork child processes to share a single port without external dependencies. It’s part of Node’s core, so it’s lightweight and integrates directly with your app. However, it lacks advanced features like zero-downtime reloads, health monitoring, or log aggregation—use it only for basic horizontal scaling within a single host.
Choose comlink when you’re working with Web Workers (in browsers or Node via worker_threads) and want to eliminate the boilerplate of postMessage-based communication. It lets you call worker functions as if they were local async methods, making complex worker interactions feel natural. It’s especially valuable in frontend apps that offload heavy computation (e.g., image processing, data parsing) to keep the UI responsive.
Choose pm2 when you need a full-featured, production-grade process manager for Node.js applications. It handles clustering, automatic restarts, logging, monitoring, and zero-downtime reloads out of the box. Use it for deploying and maintaining long-running services where reliability, observability, and operational ease matter more than minimal footprint.
Choose threads when you’re in a Node.js environment and need true parallelism using worker_threads with a clean, promise-based API. It simplifies sharing memory via SharedArrayBuffer and transferring data between threads. Prefer it over raw worker_threads when you want ergonomic thread management without dealing with low-level message passing, but note it’s Node-only and not suitable for browser contexts.
Choose web-worker when you need a simple, cross-environment way to spawn Web Workers that works consistently in both browsers and Node.js (via worker_threads). It abstracts environment differences so you can write worker code once. Use it for lightweight offloading of CPU-intensive tasks without the overhead of higher-level abstractions, but be prepared to handle postMessage communication manually.
The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.
Sponsors · Features · UIs · Install · Quick Guide · Documentation
Check the new Guide!
Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings important advantages over Redis™, such as massively better performance by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.
Bull is currently in maintenance mode, we are only fixing bugs. For new features check BullMQ, a modern rewritten implementation in Typescript. You are still very welcome to use Bull if it suits your needs, which is a safe, battle tested library.
Follow me on Twitter for other important news and updates.
You can find tutorials and news in this blog: https://blog.taskforce.sh/
Bull is popular among large and small organizations, like the following ones:
Supercharge your queues with a professional front end:
Sign up at Taskforce.sh
And coming up on the roadmap...
There are a few third-party UIs that you can use for monitoring:
BullMQ
Bull v3
Bull <= v2
Since there are a few job queue solutions, here is a table comparing them:
| Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda |
|---|---|---|---|---|---|---|
| Backend | redis | redis | redis | redis | redis | mongo |
| Observables | ✓ | | | | | |
| Group Rate Limit | ✓ | | | | | |
| Group Support | ✓ | | | | | |
| Batches Support | ✓ | | | | | |
| Parent/Child Dependencies | ✓ | ✓ | | | | |
| Priorities | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Delayed jobs | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Global events | ✓ | ✓ | ✓ | ✓ | | |
| Rate Limiter | ✓ | ✓ | ✓ | | | |
| Pause/Resume | ✓ | ✓ | ✓ | ✓ | | |
| Sandboxed worker | ✓ | ✓ | ✓ | | | |
| Repeatable jobs | ✓ | ✓ | ✓ | | | ✓ |
| Atomic ops | ✓ | ✓ | ✓ | | ✓ | |
| Persistence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| UI | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |
```sh
npm install bull --save
```

or

```sh
yarn add bull
```

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

```sh
npm install @types/bull --save-dev
```

or

```sh
yarn add --dev @types/bull
```
Definitions are currently maintained in the DefinitelyTyped repo.
We welcome all types of contributions, either code fixes, new features or doc improvements. Code formatting is enforced by prettier. For commits please follow conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.
```js
const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {
  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });
```
Alternatively, you can return promises instead of using the done callback:
```js
videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});
```
The process function can also be run in a separate process. This has several advantages:

- The process is sandboxed so if it crashes it does not affect the worker.
- You can run blocking code without affecting the queue (jobs will not stall).
- Much better utilization of multi-core CPUs.
- Fewer connections to Redis.
In order to use this feature just create a separate file with the processor:
```js
// processor.js
module.exports = function (job) {
  // Do some heavy work
  return Promise.resolve(result);
};
```
And define the processor like this:
```js
// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');
```
A job can be added to a queue and processed repeatedly according to a cron specification:
```js
paymentsQueue.process(function (job) {
  // Check payments
});

// Repeat payment job once every day at 3:15 (am)
paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });
```
As a tip, check your expressions here to verify they are correct: cron expression generator
A queue can be paused and resumed globally (pass true to pause processing for
just this worker):
```js
queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
});
```
A queue emits some useful events, for example...
```js
queue.on('completed', function (job, result) {
  // Job completed with output result!
});
```
For more information on events, including the full list of events that are fired, check out the Events reference
Queues are cheap, so if you need many of them just create new ones with different names:
```js
const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.
```
However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.
Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:
```js
const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}
```
For the full documentation, check out the reference and common patterns:
If you see anything that could use more docs, please submit a pull request!
The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.
When a worker is processing a job it will keep the job "locked" so other workers can't process it.
It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled -
and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval
lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed,
the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

1. The Node process running your job processor unexpectedly terminates.
2. Your job processor is too CPU-intensive and stalls the Node event loop, so Bull cannot renew the job lock. You can fix this by breaking your job processor into smaller parts so that no single part can block the event loop, or by passing a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.
As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
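For reference, the locking knobs discussed above live in the queue's advanced settings. The values below are the commonly cited defaults, shown here as a plain object for illustration; verify them against Bull's reference documentation before relying on them:

```javascript
// Assumed defaults, for illustration; check Bull's AdvancedSettings docs.
const settings = {
  lockDuration: 30000,    // ms a worker may hold a job lock before it must renew
  lockRenewTime: 15000,   // renewal interval, typically lockDuration / 2
  stalledInterval: 30000, // how often (ms) workers check for stalled jobs
  maxStalledCount: 1,     // times a stalled job may be auto-restarted
};
```

Raising `lockDuration` tolerates longer event-loop pauses at the cost of slower detection of genuinely dead workers; the two values trade off directly.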