bull vs piscina vs workerpool
Offloading CPU-Intensive Tasks in Node.js Applications

bull, piscina, and workerpool are all Node.js libraries designed to manage work distribution across multiple threads or processes, but they serve different architectural purposes. bull is a robust queue system built on Redis that enables job scheduling, retries, and distributed processing. piscina is a modern, high-performance worker thread pool implementation that simplifies running JavaScript tasks in parallel using Node.js Worker Threads. workerpool provides a flexible abstraction for executing functions in worker threads or child processes, and can offload both dedicated worker scripts and dynamically defined functions.

Stat Detail

Package | Stars | Size | Open Issues | Last Publish | License
--- | --- | --- | --- | --- | ---
bull | 16,242 | 309 kB | 149 | a year ago | MIT
piscina | 5,123 | 406 kB | 12 | 6 months ago | MIT
workerpool | 2,298 | 618 kB | 35 | 15 days ago | Apache-2.0

Offloading CPU-Intensive Work: bull vs piscina vs workerpool

Node.js runs JavaScript in a single thread, which means heavy computations can block your event loop and degrade responsiveness. To avoid this, developers use libraries like bull, piscina, and workerpool to move work off the main thread. But these tools solve different problems: one is a job queue, another is a worker thread pool, and the third is a generic parallel execution helper. Let's break down how they work and when to use each.

🧠 Core Purpose: What Problem Does Each Solve?

bull: A Redis-Based Job Queue

bull isn't about parallelism per se; it's about reliable, asynchronous job processing. It stores jobs in Redis, allowing them to persist across restarts, be shared across services, and support advanced features like retries, delays, and priorities.

// bull: enqueue a job
const Queue = require('bull');
const emailQueue = new Queue('email');

// Add a job to the queue
await emailQueue.add({ to: 'user@example.com', subject: 'Welcome!' });

// Process jobs (can run in a separate process or server)
emailQueue.process(async (job) => {
  await sendEmail(job.data.to, job.data.subject);
});
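
The same add() call accepts job options covering the delays, priorities, and retries mentioned above. A minimal sketch (the specific values are illustrative):

// Deliver in one minute, at high priority, with up to 3 attempts
await emailQueue.add(
  { to: 'user@example.com', subject: 'Welcome!' },
  { delay: 60000, priority: 1, attempts: 3, backoff: { type: 'exponential', delay: 5000 } }
);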

✅ Use bull when you care about durability, distribution, and workflow control, not just speed.

piscina: High-Performance Worker Thread Pool

piscina gives you a ready-to-use pool of Node.js Worker Threads with minimal overhead. It's built for CPU-heavy tasks that need to run fast within a single application instance.

// piscina: run a task in a worker thread
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({ filename: path.resolve(__dirname, 'worker.js') });

// worker.js exports a function
// export async function processData(data) { /* ... */ }

const result = await piscina.run({ input: 'large dataset' }, { name: 'processData' });

✅ Use piscina when you need low-latency, in-process parallelism for tasks like hashing, compression, or math-heavy operations.

workerpool: Flexible Task Execution in Workers or Processes

workerpool abstracts away whether you're using child processes or worker threads, letting you call functions as if they were local, even though they actually run in a separate worker.

// workerpool: execute a function in a worker
const workerpool = require('workerpool');

// mathWorker.js registers the functions it exposes:
// const workerpool = require('workerpool');
// function square(n) { return n * n; }
// workerpool.worker({ square });

const pool = workerpool.pool(__dirname + '/mathWorker.js');

const result = await pool.exec('square', [5]);
// result === 25

✅ Use workerpool when you want a simple, unified API for offloading work without worrying about the underlying concurrency model.

⚙️ Concurrency Model: Threads, Processes, or Distributed?

Package | Concurrency Model | Shared Memory? | Cross-Process?
--- | --- | --- | ---
bull | Distributed (via Redis) | ❌ | ✅
piscina | Worker Threads (in-process) | ✅ (via transferable objects) | ❌
workerpool | Worker Threads or Child Processes | ⚠️ (depends on mode) | ✅ (in process mode)
  • bull assumes workers may live on different machines. Communication happens through Redis, so no shared memory.
  • piscina uses Worker Threads, which can share memory via SharedArrayBuffer and move large buffers with zero-copy ArrayBuffer transfers; very fast, but limited to one machine.
  • workerpool lets you choose: workerType: 'thread' for Worker Threads, or workerType: 'process' for full isolation (slower, but safer for unstable code).
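
For example, workerpool selects the backing model when the pool is created; a minimal sketch (the worker filename is illustrative):

// workerpool: choose threads or child processes at pool creation
const workerpool = require('workerpool');

// Worker Threads: lowest overhead, shares the Node.js process
const threadPool = workerpool.pool(__dirname + '/mathWorker.js', { workerType: 'thread' });

// Child processes: full isolation; a crash in the worker cannot take down the main process
const processPool = workerpool.pool(__dirname + '/mathWorker.js', { workerType: 'process' });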

💾 Persistence & Reliability

Only bull offers persistent job storage. If your app crashes:

  • With bull: Jobs stay in Redis and resume when the worker restarts.
  • With piscina/workerpool: In-flight tasks are lost. No retry, no history.

This makes bull essential for critical background work (e.g., payment processing), while the others suit ephemeral, best-effort tasks (e.g., real-time analytics).
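
For example, a producer and a worker can live in separate processes that start and stop independently; jobs added while no worker is running simply wait in Redis. A minimal sketch (the queue name, payload, and chargeCustomer helper are illustrative):

// producer.js: enqueue and exit; the job is persisted in Redis
const Queue = require('bull');
const payments = new Queue('payments');
await payments.add({ orderId: 42, amount: 1999 });

// worker.js: can be started (or restarted) later and will pick up
// any jobs that were enqueued while it was down
const paymentsWorker = new Queue('payments');
paymentsWorker.process(async (job) => {
  await chargeCustomer(job.data); // hypothetical helper
});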

📦 API Style: Promises, Events, or Function Calls?

bull uses an event-driven, queue-based model:

queue.on('completed', (job) => console.log('Done:', job.id));
queue.on('failed', (job, err) => console.error('Failed:', err));

You add jobs and listen for outcomes; great for long-running or batched work.

piscina uses async/await with named exports:

// Main thread
const result = await piscina.run(data, { name: 'transform' });

// Worker file must export `transform`
export async function transform(data) { return data.map(x => x * 2); }

Clean and direct: it feels like calling a local function.

workerpool uses dynamic function invocation:

// Call any exported function by name
const result = await pool.exec('myFunction', [arg1, arg2]);

More flexible than piscina (no need to pre-declare entry points), but slightly more runtime overhead.
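
workerpool can even offload a self-contained function without any worker file; the function is serialized and evaluated inside the worker. A minimal sketch:

// Offload an inline function; it is stringified, so it must not
// close over variables from the surrounding scope
const workerpool = require('workerpool');
const pool = workerpool.pool();

function heavySum(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += Math.sqrt(i);
  return total;
}

const result = await pool.exec(heavySum, [1e7]);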

🛠️ Error Handling

  • bull: Built-in retries, backoff strategies, and a persistent record of failed jobs. Retry behavior is configured per job when it is added, not in the processor:

    queue.add(data, { attempts: 3, backoff: { type: 'exponential', delay: 1000 } });
    queue.process(async (job) => { /* ... */ });
    
  • piscina: Errors bubble up as rejected promises. You handle them like any async call.

    try {
      await piscina.run(...);
    } catch (err) {
      // Handle worker error
    }
    
  • workerpool: Same as piscina; errors reject the promise.

If you need automatic retries, only bull provides that out of the box.
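
If you do need retries with piscina or workerpool, a small wrapper is usually enough. A hypothetical helper (not part of either library):

// Hypothetical retry helper for piscina/workerpool tasks
async function runWithRetry(task, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastError = err; // optionally add a backoff delay here
    }
  }
  throw lastError;
}

// Usage with piscina (the worker export name is illustrative)
const result = await runWithRetry(() => piscina.run(data, { name: 'transform' }));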

🌐 Real-World Scenarios

Scenario 1: Sending Transactional Emails

  • Need: Guaranteed delivery, retries on failure, scale across servers.
  • ✅ Use bull: enqueue email jobs and let workers handle delivery with retry logic.

Scenario 2: Real-Time Video Frame Processing

  • Need: Low-latency, high-throughput computation on each frame.
  • ✅ Use piscina: spin up a thread pool and pass frames via SharedArrayBuffer for zero-copy access.
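
A rough sketch of that pattern, assuming a frameWorker.js that exports a processFrame function (both names are illustrative); a SharedArrayBuffer is shared with the worker rather than copied:

// Main thread: allocate one shared buffer and hand it to the pool
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({ filename: path.resolve(__dirname, 'frameWorker.js') });

const frame = new SharedArrayBuffer(1920 * 1080 * 4); // one RGBA frame
const pixels = new Uint8ClampedArray(frame);
// ...fill `pixels` with the current frame...

await piscina.run({ frame }, { name: 'processFrame' });

// frameWorker.js (illustrative):
// export function processFrame({ frame }) {
//   const pixels = new Uint8ClampedArray(frame); // same memory as the main thread
//   // ...process pixels in place...
// }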

Scenario 3: Running User-Defined Scripts Safely

  • Need: Isolate untrusted code, prevent main thread crashes.
  • ✅ Use workerpool with workerType: 'process' so each script runs in its own isolated process.

📉 When Not to Use Each

  • Don't use bull for short-lived, non-critical tasks; Redis adds operational complexity you may not need.
  • Don't use piscina if your task involves heavy I/O (like file reads); threads won't help much there, so consider clustering instead.
  • Don't use workerpool if you need fine-grained control over thread lifecycle or maximum performance; piscina is leaner and faster.

📊 Summary Table

Feature | bull | piscina | workerpool
--- | --- | --- | ---
Primary Use Case | Distributed job queue | In-process CPU parallelism | Generic offloading
Persistence | ✅ (Redis) | ❌ | ❌
Retry Logic | ✅ Built-in | ❌ Manual | ❌ Manual
Concurrency Model | Distributed workers | Worker Threads | Threads or Processes
API Style | Event-driven queue | Promise + named exports | Dynamic function calls
Cross-Machine | ✅ | ❌ | ❌
Best For | Email, reports, ETL | Math, crypto, parsing | Safe script execution

💡 Final Guidance

  • If your task must not be lost and might run on multiple servers, go with bull.
  • If you're maximizing CPU usage on one machine and need speed, pick piscina.
  • If you want a simple way to run functions off-thread without committing to threads or processes, use workerpool.

All three are mature, actively maintained, and solve real problems, but they're not interchangeable. Choose based on whether you need reliability, raw speed, or flexibility.

How to Choose: bull vs piscina vs workerpool

  • bull:

    Choose bull when you need durable, persistent job queues with features like delayed jobs, priority levels, rate limiting, and retry logic, especially in distributed systems where multiple services must coordinate background work. It's ideal for email sending, image processing pipelines, or any task that must survive process restarts and scale across machines.

  • piscina:

    Choose piscina when you need maximum performance for CPU-bound tasks within a single Node.js instance and want a clean, promise-based API over Worker Threads. It's well-suited for real-time data transformation, cryptographic operations, or parsing large JSON payloads without blocking the main thread.

  • workerpool:

    Choose workerpool when you need flexibility to run tasks in either child processes or worker threads and want a simple function-call abstraction without managing thread lifecycle manually. It works well for moderate parallelism needs in monolithic apps where you don't require Redis-backed persistence or advanced queue semantics.

README for bull




The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


🚀 Sponsors 🚀

Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively better performance by utilizing all available CPU cores and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.

📻 News and updates

Bull is currently in maintenance mode; we are only fixing bugs. For new features, check out BullMQ, a modern rewrite in TypeScript. You are still very welcome to use Bull if it suits your needs: it is a safe, battle-tested library.

Follow me on Twitter for other important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian Autodesk Mozilla Nest Salesforce


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume, globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.

And coming up on the roadmap...

  • Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

Feature | BullMQ-Pro | BullMQ | Bull | Kue | Bee | Agenda
--- | --- | --- | --- | --- | --- | ---
Backend | redis | redis | redis | redis | redis | mongo
Observables | ✓ | | | | |
Group Rate Limit | ✓ | | | | |
Group Support | ✓ | | | | |
Batches Support | ✓ | | | | |
Parent/Child Dependencies | ✓ | ✓ | | | |
Priorities | ✓ | ✓ | ✓ | ✓ | | ✓
Concurrency | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Delayed jobs | ✓ | ✓ | ✓ | ✓ | | ✓
Global events | ✓ | ✓ | ✓ | ✓ | |
Rate Limiter | ✓ | ✓ | ✓ | | |
Pause/Resume | ✓ | ✓ | ✓ | ✓ | |
Sandboxed worker | ✓ | ✓ | ✓ | | |
Repeatable jobs | ✓ | ✓ | ✓ | | | ✓
Atomic ops | ✓ | ✓ | ✓ | | ✓ |
Persistence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
UI | ✓ | ✓ | ✓ | ✓ | | ✓
Optimized for | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, either code fixes, new features or doc improvements. Code formatting is enforced by prettier. For commits please follow conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, check your expressions here to verify they are correct: cron expression generator

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function (job, result) {
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
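
Bull's queue options accept a createClient factory for exactly this purpose; a minimal sketch using ioredis:

// Reuse two shared ioredis connections across many queues
const Queue = require('bull');
const Redis = require('ioredis');

const client = new Redis();
const subscriber = new Redis();

const opts = {
  createClient(type) {
    switch (type) {
      case 'client':
        return client;
      case 'subscriber':
        return subscriber;
      default:
        // 'bclient' connections block, so each queue gets its own
        return new Redis();
    }
  }
};

const userJohn = new Queue('john', opts);
const userLisa = new Queue('lisa', opts);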

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    };
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide: Your starting point for developing with Bull.
  • Reference: Reference document with all objects and methods available.
  • Patterns: A set of examples for common patterns.
  • License: The Bull license; it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
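
In practice, that means listening for the stalled event and, if your processors legitimately run long, adjusting the lock settings when the queue is created. A minimal sketch:

// Log stalled jobs and give long-running processors a larger lock window
const Queue = require('bull');

const queue = new Queue('video transcoding', {
  settings: {
    lockDuration: 60000,  // ms; default is 30000
    maxStalledCount: 1    // how many times a stalled job may be restarted
  }
});

queue.on('stalled', (job) => {
  console.error('Job stalled and may be double-processed:', job.id);
});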