bull, piscina, and workerpool are all Node.js libraries designed to manage work distribution across multiple threads or processes, but they serve different architectural purposes. bull is a robust queue system built on Redis that enables job scheduling, retries, and distributed processing. piscina is a modern, high-performance worker thread pool implementation that simplifies running JavaScript tasks in parallel using Node.js Worker Threads. workerpool provides a flexible abstraction for executing functions in child processes or worker threads, supporting both local and remote (cluster-based) execution strategies.
Node.js runs JavaScript in a single thread, which means heavy computations can block your event loop and degrade responsiveness. To avoid this, developers use libraries like bull, piscina, and workerpool to move work off the main thread. But these tools solve different problems: one is a job queue, another is a worker thread pool, and the third is a generic parallel execution helper. Let's break down how they work and when to use each.
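To make the blocking problem concrete, here is a minimal sketch (`busyWork` is a hypothetical stand-in for any CPU-bound task, not part of any of these libraries): a 0 ms timer cannot fire until the synchronous loop returns.

```js
// busyWork is a hypothetical stand-in for any CPU-bound task.
function busyWork(ms) {
  const end = Date.now() + ms;
  let iterations = 0;
  while (Date.now() < end) iterations++; // hot loop: the event loop is blocked
  return iterations;
}

setTimeout(() => console.log('timer finally fired'), 0);
busyWork(200); // the 0 ms timer cannot run until this synchronous call returns
```

Everything queued on the event loop, including HTTP request handlers, waits behind that loop; this is what all three libraries help you avoid.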
## bull: A Redis-Based Job Queue

bull isn't about parallelism per se; it's about reliable, asynchronous job processing. It stores jobs in Redis, allowing them to persist across restarts, be shared across services, and support advanced features like retries, delays, and priorities.
```js
// bull: enqueue a job
const Queue = require('bull');
const emailQueue = new Queue('email');

// Add a job to the queue
await emailQueue.add({ to: 'user@example.com', subject: 'Welcome!' });

// Process jobs (can run in a separate process or server)
emailQueue.process(async (job) => {
  await sendEmail(job.data.to, job.data.subject);
});
```
✅ Use bull when you care about durability, distribution, and workflow control, not just speed.
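For illustration, here is a sketch of the per-job options bull accepts as the second argument to `queue.add` (`delay`, `priority`, `attempts`, `backoff`, and `removeOnComplete` are documented options; the values here are arbitrary):

```js
// Per-job options passed as the second argument to queue.add(data, opts).
const jobOpts = {
  delay: 60000,                                  // wait 60s before the job can run
  priority: 1,                                   // 1 is the highest priority
  attempts: 3,                                   // retry up to 3 times on failure
  backoff: { type: 'exponential', delay: 1000 }, // 1s, 2s, 4s between attempts
  removeOnComplete: true,                        // drop the job from Redis when done
};

// Usage: await emailQueue.add({ to: 'user@example.com' }, jobOpts);
module.exports = jobOpts;
```

These knobs are what separate a job queue from a thread pool: none of them exist in piscina or workerpool.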
## piscina: High-Performance Worker Thread Pool

piscina gives you a ready-to-use pool of Node.js Worker Threads with minimal overhead. It's built for CPU-heavy tasks that need to run fast within a single application instance.
```js
// piscina: run a task in a worker thread
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({ filename: path.resolve(__dirname, 'worker.js') });

// worker.js exports a function:
// export async function processData(data) { /* ... */ }

const result = await piscina.run({ input: 'large dataset' }, { name: 'processData' });
```
✅ Use piscina when you need low-latency, in-process parallelism for tasks like hashing, compression, or math-heavy operations.
## workerpool: Flexible Task Execution in Workers or Processes

workerpool abstracts away whether you're using child processes or worker threads, letting you call functions as if they were local, even though they run in another thread or process.
```js
// workerpool: execute a function in a worker
const workerpool = require('workerpool');
const pool = workerpool.pool(__dirname + '/mathWorker.js');

// mathWorker.js registers its functions with the pool:
// const workerpool = require('workerpool');
// workerpool.worker({ square: (n) => n * n });

const result = await pool.exec('square', [5]);
// result === 25
```
✅ Use workerpool when you want a simple, unified API for offloading work without worrying about the underlying concurrency model.
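workerpool can also serialize a plain function and send it to a worker via `pool.exec(fn, args)`, with the caveat that the function must be self-contained (no closed-over variables), since it is stringified and rebuilt inside the worker. A sketch (`add` is a hypothetical example function):

```js
// The task must not reference outer-scope variables: workerpool
// stringifies it and reconstructs it inside the worker.
function add(a, b) {
  return a + b;
}

// Usage (assumes workerpool is installed):
// const pool = require('workerpool').pool();
// const result = await pool.exec(add, [3, 4]); // resolves to 7
module.exports = add;
```

This dynamic mode is convenient for one-off tasks, at the cost of serializing the function on every call.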
| Package | Concurrency Model | Shared Memory? | Cross-Process? |
|---|---|---|---|
| bull | Distributed (via Redis) | ❌ | ✅ |
| piscina | Worker Threads (in-process) | ✅ (via transferable objects) | ❌ |
| workerpool | Worker Threads or Child Processes | ⚠️ (depends on mode) | ✅ (with cluster) |
bull assumes workers may live on different machines. Communication happens through Redis, so there is no shared memory. piscina uses Worker Threads, which can move data between threads via ArrayBuffer transfers: very fast, but limited to one machine. workerpool lets you choose: type: 'thread' for Worker Threads, or type: 'process' for full isolation (slower, but safer for unstable code).

Only bull offers persistent job storage. If your app crashes:

- bull: Jobs stay in Redis and resume when the worker restarts.
- piscina/workerpool: In-flight tasks are lost. No retry, no history.

This makes bull essential for critical background work (e.g., payment processing), while the others suit ephemeral, best-effort tasks (e.g., real-time analytics).
bull uses an event-driven, queue-based model:

```js
queue.on('completed', (job) => console.log('Done:', job.id));
queue.on('failed', (job, err) => console.error('Failed:', err));
```
You add jobs and listen for outcomes, which is great for long-running or batched work.
piscina uses async/await with named exports:

```js
// Main thread
const result = await piscina.run(data, { name: 'transform' });

// Worker file must export `transform`
export async function transform(data) { return data.map(x => x * 2); }
```
Clean and direct: it feels like calling a local function.
workerpool uses dynamic function invocation:

```js
// Call any exported function by name
const result = await pool.exec('myFunction', [arg1, arg2]);
```
More flexible than piscina (no need to pre-declare entry points), but slightly more runtime overhead.
bull: Built-in retry mechanisms, failure tracking, and a queryable set of failed jobs. Retry behavior is configured per job when it is added:

```js
queue.add(data, { attempts: 3, backoff: { type: 'exponential', delay: 1000 } });
```
piscina: Errors bubble up as rejected promises. You handle them like any async call.

```js
try {
  await piscina.run(...);
} catch (err) {
  // Handle worker error
}
```
workerpool: Same as piscina; errors reject the promise.
If you need automatic retries, only bull provides that out of the box.
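If you do need retries with piscina or workerpool, a small wrapper gets you most of the way. This is a hypothetical helper, not part of either library:

```js
// Minimal retry helper for promise-returning tasks (hypothetical;
// bull's attempts/backoff options do this for you, these pools do not).
async function withRetry(task, attempts = 3) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastErr = err; // remember the failure and try again
    }
  }
  throw lastErr; // all attempts exhausted
}

// Usage: const result = await withRetry(() => piscina.run(data), 3);
module.exports = withRetry;
```

Note what this does not give you: retries do not survive a process crash, because the retry state lives in memory rather than in Redis.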
Typical scenarios:

- Sending emails: use bull; enqueue email jobs and let workers handle delivery with retry logic.
- Processing video frames: use piscina; spin up a thread pool and pass frames via SharedArrayBuffer for zero-copy.
- Running unstable scripts: use workerpool with type: 'process', so each script runs in its own process.

And when not to reach for each:

- Avoid bull for short-lived, non-critical tasks; Redis adds operational complexity you may not need.
- Avoid piscina if your task involves heavy I/O (like file reads); threads won't help much, so consider clustering instead.
- Avoid workerpool if you need fine-grained control over thread lifecycle or maximum performance; piscina is leaner and faster.

| Feature | bull | piscina | workerpool |
|---|---|---|---|
| Primary Use Case | Distributed job queue | In-process CPU parallelism | Generic offloading |
| Persistence | ✅ (Redis) | ❌ | ❌ |
| Retry Logic | ✅ Built-in | ❌ Manual | ❌ Manual |
| Concurrency Model | Distributed workers | Worker Threads | Threads or Processes |
| API Style | Event-driven queue | Promise + named exports | Dynamic function calls |
| Cross-Machine | ✅ | ❌ | ✅ (with cluster mode) |
| Best For | Email, reports, ETL | Math, crypto, parsing | Safe script execution |
All three are mature, actively maintained, and solve real problems, but they're not interchangeable. Choose based on whether you need reliability (bull), raw speed (piscina), or flexibility (workerpool).
Choose bull when you need durable, persistent job queues with features like delayed jobs, priority levels, rate limiting, and retry logic, especially in distributed systems where multiple services must coordinate background work. It's ideal for email sending, image processing pipelines, or any task that must survive process restarts and scale across machines.

Choose piscina when you need maximum performance for CPU-bound tasks within a single Node.js instance and want a clean, promise-based API over Worker Threads. It's well-suited for real-time data transformation, cryptographic operations, or parsing large JSON payloads without blocking the main thread.

Choose workerpool when you need flexibility to run tasks in either child processes or worker threads and want a simple function-call abstraction without managing thread lifecycle manually. It works well for moderate parallelism needs in monolithic apps where you don't require Redis-backed persistence or advanced queue semantics.
The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.
Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively better performance by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.
Bull is currently in maintenance mode; we are only fixing bugs. For new features, check BullMQ, a modern rewritten implementation in TypeScript. You are still very welcome to use Bull if it suits your needs, as it is a safe, battle-tested library.
Follow me on Twitter for other important news and updates.
You can find tutorials and news in this blog: https://blog.taskforce.sh/
Bull is popular among both large and small organizations.
Supercharge your queues with a professional front end:
Sign up at Taskforce.sh
There are a few third-party UIs that you can use for monitoring:

- BullMQ
- Bull v3
- Bull <= v2
Since there are a few job queue solutions, here is a table comparing them:
| Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda |
|---|---|---|---|---|---|---|
| Backend | redis | redis | redis | redis | redis | mongo |
| Observables | ✓ | | | | | |
| Group Rate Limit | ✓ | | | | | |
| Group Support | ✓ | | | | | |
| Batches Support | ✓ | | | | | |
| Parent/Child Dependencies | ✓ | ✓ | | | | |
| Priorities | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Delayed jobs | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Global events | ✓ | ✓ | ✓ | ✓ | | |
| Rate Limiter | ✓ | ✓ | ✓ | | | |
| Pause/Resume | ✓ | ✓ | ✓ | ✓ | | |
| Sandboxed worker | ✓ | ✓ | ✓ | | | |
| Repeatable jobs | ✓ | ✓ | ✓ | | | ✓ |
| Atomic ops | ✓ | ✓ | ✓ | | ✓ | |
| Persistence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| UI | ✓ | ✓ | ✓ | ✓ | | ✓ |
| Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs |
```bash
npm install bull --save
```

or

```bash
yarn add bull
```
Requirements: Bull requires a Redis version greater than or equal to 2.8.18.
```bash
npm install @types/bull --save-dev
```

or

```bash
yarn add --dev @types/bull
```
Definitions are currently maintained in the DefinitelyTyped repo.
We welcome all types of contributions, either code fixes, new features or doc improvements. Code formatting is enforced by prettier. For commits please follow conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.
```js
const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {
  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });
```
Alternatively, you can return promises instead of using the done callback:
```js
videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});
```
The process function can also be run in a separate process. This has several advantages:

- The process is sandboxed, so if it crashes it does not affect the worker.
- You can run blocking code without affecting the queue (jobs will not stall).
- Much better utilization of multi-core CPUs.
- Fewer connections to Redis.
In order to use this feature just create a separate file with the processor:
```js
// processor.js
module.exports = function (job) {
  // Do some heavy work
  return Promise.resolve(result);
};
```
And define the processor like this:
```js
// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');
```
A job can be added to a queue and processed repeatedly according to a cron specification:
```js
paymentsQueue.process(function (job) {
  // Check payments
});

// Repeat payment job once every day at 3:15 (am)
paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });
```
As a tip, check your expressions here to verify they are correct: cron expression generator
A queue can be paused and resumed globally (pass true to pause processing for
just this worker):
```js
queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
});
```
A queue emits some useful events, for example...
```js
queue.on('completed', function (job, result) {
  // Job completed with output result!
});
```
For more information on events, including the full list of events that are fired, check out the Events reference
Queues are cheap, so if you need many of them just create new ones with different names:
```js
const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
// ...
```
However, every queue instance will require new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
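A sketch of the documented createClient option for sharing connections across queues (this assumes ioredis is installed; per Bull's docs, the 'bclient' connection type must not be shared):

```js
const Redis = require('ioredis');

// Two shared connections reused by every queue in this process
const client = new Redis();
const subscriber = new Redis();

const opts = {
  createClient(type) {
    switch (type) {
      case 'client':
        return client;      // shared
      case 'subscriber':
        return subscriber;  // shared
      default:
        return new Redis(); // 'bclient' requires a dedicated connection
    }
  },
};

const userJohn = new Queue('john', opts);
const userLisa = new Queue('lisa', opts);
```

With this pattern, N queues cost two shared connections plus one blocking connection each, instead of three connections per queue.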
NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.
Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:
```js
const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}
```
For the full documentation, check out the reference and common patterns:
If you see anything that could use more docs, please submit a pull request!
The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.
When a worker is processing a job it will keep the job "locked" so other workers can't process it.
It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled -
and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval
lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed,
the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

1. The Node process running your job processor unexpectedly terminates.
2. Your job processor was too CPU-intensive and stalled the Node event loop, so Bull could not renew the job lock. You can fix this by breaking your job processor into smaller parts so that no single part blocks the event loop, or by increasing the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.
As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
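Putting the above together, a sketch of wiring the stalled event into monitoring, alongside the lockDuration and maxStalledCount settings discussed above (the queue name and values here are illustrative):

```js
const Queue = require('bull');

const queue = new Queue('critical-work', {
  settings: {
    lockDuration: 30000,   // ms a job lock is held before it can be considered stalled
    maxStalledCount: 1,    // how many times a stalled job may be restarted
  },
});

// A stalled job is likely being double-processed: surface it loudly.
queue.on('stalled', (job) => {
  console.error(`job ${job.id} stalled and may run more than once`);
});
```

Because processing is at-least-once, job handlers for critical work should also be idempotent, so a double-processed job does not cause a double side effect.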