bull vs cluster vs comlink vs pm2 vs threads vs web-worker
Managing Concurrency and Parallelism in JavaScript Applications

bull, cluster, comlink, pm2, threads, and web-worker are all tools that help JavaScript applications handle concurrency, parallelism, or process isolation—but they solve very different problems in distinct environments. bull is a Redis-backed queue system for deferring and distributing work in Node.js. cluster and pm2 manage multiple Node.js processes to utilize multi-core systems—cluster is a built-in Node module, while pm2 is a production process manager with clustering, monitoring, and restart capabilities. For off-main-thread execution, comlink, threads, and web-worker are the relevant tools: web-worker provides a minimal wrapper to create Web Workers in browsers or Node (via worker_threads), threads offers a higher-level API for shared-memory threading in Node.js using worker_threads, and comlink simplifies communication between main threads and workers by abstracting postMessage into async/await-friendly proxies. These packages span backend job processing, process orchestration, and frontend/browser-based parallelism.

Stat Detail

Package    | Downloads | Stars  | Size    | Issues | Publish      | License
bull       | 1,174,095 | 16,241 | 309 kB  | 146    | a year ago   | MIT
cluster    | 0         | 2,289  | -       | 64     | 14 years ago | -
comlink    | 0         | 12,600 | 252 kB  | 117    | a year ago   | Apache-2.0
pm2        | 0         | 42,994 | 838 kB  | 1,094  | 4 months ago | AGPL-3.0
threads    | 0         | 3,528  | -       | 125    | 4 years ago  | MIT
web-worker | 0         | 1,174  | 31.1 kB | 15     | a year ago   | Apache-2.0

Managing Concurrency and Parallelism in JavaScript: bull vs cluster vs comlink vs pm2 vs threads vs web-worker

JavaScript is single-threaded by design, but real-world applications often need to do more than one thing at a time—whether it’s handling thousands of requests, processing large datasets, or keeping the UI smooth during heavy computation. The packages bull, cluster, comlink, pm2, threads, and web-worker all address this challenge, but they operate in different layers (backend vs frontend), different environments (Node.js vs browser), and solve different problems (queuing vs process management vs worker communication). Let’s break down how and when to use each.

🧩 Core Problem Domains: What Each Package Actually Solves

Before comparing APIs, it’s crucial to understand what problem each tool is designed for:

  • bull: A job queue system. It defers work to be processed later, possibly by other processes or machines, using Redis as a durable message store.
  • cluster: A process forking utility. It lets a single Node.js app spawn multiple OS processes to handle incoming network traffic across CPU cores.
  • pm2: A production process manager. It runs, monitors, clusters, and restarts Node.js apps automatically, with CLI and programmatic APIs.
  • web-worker: A cross-environment worker factory. It creates Web Workers in browsers and worker_threads in Node.js using the same API.
  • threads: A high-level threading library for Node.js. It wraps worker_threads with promises, shared memory support, and easy data transfer.
  • comlink: A communication abstraction for workers. It turns postMessage into async/await calls, making workers feel like local modules.

These aren’t interchangeable—they belong to different categories. Confusing them leads to architectural mistakes (e.g., trying to use bull for UI responsiveness).

🖥️ Environment Compatibility: Where Each Package Runs

Package    | Browser | Node.js   | Requires Redis?
bull       | ❌      | ✅        | ✅
cluster    | ❌      | ✅ (core) | ❌
pm2        | ❌      | ✅        | ❌
web-worker | ✅      | ✅*       | ❌
threads    | ❌      | ✅        | ❌
comlink    | ✅      | ✅*       | ❌

* web-worker natively maps to worker_threads in Node.js; comlink works in Node.js when paired with its node adapter (nodeEndpoint) or a bundler config that maps Web Worker APIs to Node equivalents.

💡 Key insight: If you’re building a frontend app, your options are limited to web-worker and comlink. If you’re in Node.js, all six are technically available—but only some make sense for your use case.

⚙️ Basic Usage: Spawning Work or Workers

Let’s see how each package starts its core unit of work.

bull: Enqueue a Job

// producer.js
import Queue from 'bull';

const emailQueue = new Queue('emails', 'redis://127.0.0.1:6379');

// Add a job to the queue
await emailQueue.add({ to: 'user@example.com', subject: 'Welcome!' });
// processor.js
import Queue from 'bull';

const emailQueue = new Queue('emails', 'redis://127.0.0.1:6379');

// Pull jobs from Redis and process them
emailQueue.process(async (job) => {
  const { to, subject } = job.data;
  await sendEmail(to, subject); // actual work happens here
});

cluster: Fork Child Processes

// server.js
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';

if (cluster.isPrimary) {
  const numCPUs = availableParallelism();
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork(); // spawns a new worker process
  }
} else {
  // Worker process: start HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello from worker!');
  }).listen(8000);
}

pm2: Start and Cluster an App

// app.js
import http from 'node:http';

http.createServer((req, res) => {
  res.writeHead(200);
  res.end('Hello from PM2!');
}).listen(8000);

Then run via CLI:

# Start with 4 clustered instances
pm2 start app.js -i 4

Or programmatically:

// launcher.js
import pm2 from 'pm2';

pm2.connect((err) => {
  if (err) throw err;
  pm2.start({
    script: 'app.js',
    instances: 4,
    exec_mode: 'cluster'
  }, (err, apps) => {
    pm2.disconnect(); // always release the pm2 connection
    if (err) throw err;
  });
});

web-worker: Create a Worker

// main.js
import Worker from 'web-worker'; // default export

const worker = new Worker(new URL('./worker.js', import.meta.url));
worker.postMessage('Hello');
worker.onmessage = (e) => console.log(e.data); // 'World'
// worker.js
self.onmessage = (e) => {
  self.postMessage('World');
};

threads: Spawn a Thread

// main.js
import { spawn, Thread, Worker } from 'threads';

const hasher = await spawn(new Worker('./hasher'));
const hash = await hasher.sha256('secret');
await Thread.terminate(hasher);
// hasher.js
import { createHash } from 'node:crypto';
import { expose } from 'threads/worker';

expose({
  sha256(input) {
    // compute the hash with Node's built-in crypto module
    return createHash('sha256').update(input).digest('hex');
  }
});

comlink: Proxy a Worker

// main.js
import * as Comlink from 'comlink';

const worker = new Worker(new URL('./api.js', import.meta.url), { type: 'module' });
const api = Comlink.wrap(worker);
const result = await api.expensiveCalculation(42); // feels local!
// api.js
import * as Comlink from 'comlink';

const obj = {
  async expensiveCalculation(n) {
    // heavy CPU work...
    return n * n;
  }
};

Comlink.expose(obj);
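
One comlink capability worth knowing: plain functions can't be structured-cloned, so Comlink.proxy() wraps a callback in a handle the worker can invoke across the boundary. A minimal sketch, assuming expensiveCalculation were extended to accept a progress callback:

// main.js
import * as Comlink from 'comlink';

const worker = new Worker(new URL('./api.js', import.meta.url), { type: 'module' });
const api = Comlink.wrap(worker);

// proxy() sends a handle instead of copying the function, so the worker
// can call our callback across the thread boundary
await api.expensiveCalculation(42, Comlink.proxy((pct) => {
  console.log(`progress: ${pct}%`);
}));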

🔁 Communication Patterns: How Data Moves

How you pass data between units of work varies dramatically:

  • bull: Jobs carry JSON-serializable data. Workers pull jobs from Redis.
  • cluster: No high-level messaging layer. Use the built-in IPC channel (process.send()) sparingly, or external stores (Redis, DB) for coordination; see the sketch after this list.
  • pm2: Similar to cluster; relies on external coordination or PM2’s built-in pub/sub (pm2.sendDataToProcessId()).
  • web-worker: Raw postMessage with structured cloning (no shared memory by default).
  • threads: Supports structured cloning and SharedArrayBuffer for zero-copy data sharing.
  • comlink: Abstracts postMessage into async method calls—no manual event handling.
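
For cluster (and pm2's cluster mode), that IPC channel is the only direct line between primary and workers; a minimal sketch:

// ipc.js
import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();
  worker.on('message', (msg) => console.log('from worker:', msg));
  worker.send({ cmd: 'start' }); // serialized over the IPC channel
} else {
  process.on('message', (msg) => {
    if (msg.cmd === 'start') process.send({ status: 'ready' });
  });
}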

Example: Passing Large Data

With threads, you can share memory:

// main.js
import { Pool, spawn, Transfer, Worker } from 'threads';

const pool = Pool(() => spawn(new Worker('./processor')));

// A SharedArrayBuffer is shared across threads as-is; no copying involved
const buffer = new SharedArrayBuffer(1024);
const view = new Uint8Array(buffer);
// ... fill view ...

const result = await pool.queue((processor) => processor.process(buffer));
// (For a regular ArrayBuffer you would pass Transfer(buf) to move ownership
// instead of copying.)

await pool.terminate();

With comlink, a large ArrayBuffer is still copied by structured cloning unless you register a transfer handler that moves it instead:

// main.js
import * as Comlink from 'comlink';

// Register the same handler on both the main thread and the worker side
Comlink.transferHandlers.set('arraybuffer', {
  canHandle: (obj) => obj instanceof ArrayBuffer,
  serialize: (obj) => [obj, [obj]], // returns [value, transferables]
  deserialize: (data) => data
});

const result = await api.process(buffer); // now transfers ownership instead of copying

web-worker gives you no help here; you must list Transferable objects yourself via postMessage's second argument.
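
A minimal sketch (the worker filename is illustrative):

// main.js
const data = new Float64Array(1_000_000);
const worker = new Worker(new URL('./crunch.js', import.meta.url));

// Listing data.buffer as a transferable moves ownership instead of copying;
// the buffer becomes unusable (detached) on this side afterwards
worker.postMessage({ buffer: data.buffer }, [data.buffer]);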

🛠️ Operational Concerns: Monitoring, Reliability, and Lifecycle

  • bull: Provides job states (waiting, active, completed, failed), retries, backoff strategies, and pause/resume (retry options are sketched after this list). Monitor via Bull Board or custom UIs.
  • cluster: No built-in monitoring. If a worker dies, you must detect and respawn it yourself.
  • pm2: Built-in uptime tracking, log streaming, health checks, auto-restart on crash, and graceful reloads (pm2 reload).
  • web-worker / threads / comlink: No lifecycle management beyond .terminate(). You handle errors, restarts, and resource cleanup.
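
For bull, reliability is configured per job; for example, retries with exponential backoff on the emailQueue from earlier (a minimal sketch):

// Retry up to 5 times, waiting 1s, 2s, 4s, ... between attempts
await emailQueue.add(
  { to: 'user@example.com', subject: 'Welcome!' },
  { attempts: 5, backoff: { type: 'exponential', delay: 1000 } }
);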

For production Node.js services, pm2 is often preferred over raw cluster because it solves operational headaches out of the box.
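
Day-to-day pm2 operations are CLI-driven; a few representative commands:

# Zero-downtime reload of all instances
pm2 reload app

# Tail logs and open the live dashboard
pm2 logs app
pm2 monit

# Persist the process list so it survives a reboot
pm2 save
pm2 startup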

🌐 Real-World Scenarios: Which Tool Fits?

Scenario 1: Background Email Sending in a Node.js API

You receive a user signup request and need to send a welcome email without blocking the response.

  • Best choice: bull
  • Why? Emails can fail and need retries. You want to decouple the web server from the SMTP client. Redis ensures durability.

Scenario 2: Scaling a Node.js HTTP Server Across 8 Cores

Your Express app is CPU-bound and you want to use all cores on a single machine.

  • Best choice: pm2 (for production) or cluster (for minimal setups)
  • Why? Both fork processes, but pm2 adds restarts, logs, and zero-downtime deploys—critical for production.
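
If you choose raw cluster, crash recovery is on you; a minimal respawn sketch:

// server.js
import cluster from 'node:cluster';
import { availableParallelism } from 'node:os';

if (cluster.isPrimary) {
  for (let i = 0; i < availableParallelism(); i++) cluster.fork();

  cluster.on('exit', (worker, code, signal) => {
    console.error(`worker ${worker.process.pid} died (${signal ?? code}); respawning`);
    cluster.fork(); // naive respawn; pm2 adds backoff, limits, and logs
  });
} else {
  // worker: start the HTTP server here, as in the earlier cluster example
}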

Scenario 3: Running Image Processing in a React App

You let users upload photos and apply filters without freezing the UI.

  • Best choice: comlink + web-worker
  • Why? comlink makes calling worker functions feel natural. web-worker lets the same code run in the browser and in Node-based test environments (via worker_threads).

Scenario 4: Parallel CSV Parsing in a Node.js CLI Tool

You have a 1GB CSV and want to split parsing across threads to speed it up.

  • Best choice: threads
  • Why? It supports SharedArrayBuffer for efficient data sharing and has a clean promise API. web-worker would require manual message handling.

⚠️ Common Pitfalls and Misuses

  • Using bull for frontend tasks: bull requires Redis and runs in Node.js—it won’t help with browser UI jank.
  • Using cluster for I/O-bound work: If your app is CPU-heavy per request (e.g., real-time analytics), cluster helps. But if it's mostly waiting on DB calls, a single process with async/await is usually sufficient.
  • Assuming comlink eliminates serialization cost: It hides postMessage, but data is still copied unless you use transferables.
  • Running pm2 in serverless environments: Platforms like AWS Lambda manage scaling for you—pm2 adds unnecessary overhead.

📊 Summary Table

Package    | Primary Use Case                | Environment  | Parallelism Type    | Data Sharing      | Production Ready?
bull       | Job queueing                    | Node.js      | Distributed (Redis) | JSON jobs         | ✅
cluster    | Multi-process HTTP servers      | Node.js      | Process-based       | IPC (limited)     | ⚠️ (basic)
pm2        | Process management & clustering | Node.js      | Process-based       | External only     | ✅
web-worker | Cross-env worker creation       | Browser/Node | Thread-based        | Structured clone  | ✅
threads    | High-level threading            | Node.js      | Thread-based        | SharedArrayBuffer | ✅
comlink    | Worker RPC abstraction          | Browser/Node | Thread-based        | Structured clone* | ✅

* With manual setup for transferables.

💡 Final Guidance

Ask yourself these questions:

  1. Am I in the browser or Node.js? → Eliminates half the options immediately.
  2. Do I need to defer work or just run it faster? → Queues (bull) vs parallelism (threads, cluster).
  3. Is this for development convenience or production resilience? → pm2 for production ops; cluster for simple cases.
  4. How much data am I moving? → Large buffers favor threads with shared memory; small messages work with any.

These tools aren’t competitors—they’re specialists. The right choice depends entirely on your runtime environment, workload type, and operational requirements.

How to Choose: bull vs cluster vs comlink vs pm2 vs threads vs web-worker

  • bull:

    Choose bull when you need reliable, persistent background job processing in Node.js with features like retries, prioritization, rate limiting, and delayed jobs. It’s ideal for decoupling time-consuming tasks (e.g., sending emails, processing uploads) from your main request flow using Redis as a message broker. Avoid it if you don’t already use Redis or if your workload doesn’t require queuing semantics.

  • cluster:

    Choose cluster when you’re running a Node.js server on a multi-core machine and want to fork child processes to share a single port without external dependencies. It’s part of Node’s core, so it’s lightweight and integrates directly with your app. However, it lacks advanced features like zero-downtime reloads, health monitoring, or log aggregation—use it only for basic horizontal scaling within a single host.

  • comlink:

    Choose comlink when you’re working with Web Workers (in browsers or Node via worker_threads) and want to eliminate the boilerplate of postMessage-based communication. It lets you call worker functions as if they were local async methods, making complex worker interactions feel natural. It’s especially valuable in frontend apps that offload heavy computation (e.g., image processing, data parsing) to keep the UI responsive.

  • pm2:

    Choose pm2 when you need a full-featured, production-grade process manager for Node.js applications. It handles clustering, automatic restarts, logging, monitoring, and zero-downtime reloads out of the box. Use it for deploying and maintaining long-running services where reliability, observability, and operational ease matter more than minimal footprint.

  • threads:

    Choose threads when you’re in a Node.js environment and need true parallelism using worker_threads with a clean, promise-based API. It simplifies sharing memory via SharedArrayBuffer and transferring data between threads. Prefer it over raw worker_threads when you want ergonomic thread management without dealing with low-level message passing, but note it’s Node-only and not suitable for browser contexts.

  • web-worker:

    Choose web-worker when you need a simple, cross-environment way to spawn Web Workers that works consistently in both browsers and Node.js (via worker_threads). It abstracts environment differences so you can write worker code once. Use it for lightweight offloading of CPU-intensive tasks without the overhead of higher-level abstractions, but be prepared to handle postMessage communication manually.

README for bull




The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


🚀 Sponsors 🚀

Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively better performance by utilizing all available CPU cores, and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.

📻 News and updates

Bull is currently in maintenance mode; we are only fixing bugs. For new features, check BullMQ, a modern rewritten implementation in TypeScript. You are still very welcome to use Bull if it suits your needs; it is a safe, battle-tested library.

Follow me on Twitter for other important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian Autodesk Mozilla Nest Salesforce


Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume—globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.

And coming up on the roadmap...

  • Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • Parent-child jobs relationships.
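
Several of the features above are plain queue or job options; for instance, a rate-limited queue (a minimal sketch):

const Queue = require('bull');

// Process at most 100 jobs per 10 seconds on this queue
const limitedQueue = new Queue('emails', {
  limiter: { max: 100, duration: 10000 }
});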

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

Feature                   | BullMQ-Pro      | BullMQ          | Bull            | Kue   | Bee      | Agenda
Backend                   | redis           | redis           | redis           | redis | redis    | mongo
Observables               | ✓               |                 |                 |       |          |
Group Rate Limit          | ✓               |                 |                 |       |          |
Group Support             | ✓               |                 |                 |       |          |
Batches Support           | ✓               |                 |                 |       |          |
Parent/Child Dependencies | ✓               | ✓               |                 |       |          |
Priorities                | ✓               | ✓               | ✓               | ✓     |          | ✓
Concurrency               | ✓               | ✓               | ✓               | ✓     | ✓        | ✓
Delayed jobs              | ✓               | ✓               | ✓               | ✓     |          | ✓
Global events             | ✓               | ✓               | ✓               | ✓     |          |
Rate Limiter              | ✓               | ✓               | ✓               |       |          |
Pause/Resume              | ✓               | ✓               | ✓               | ✓     |          |
Sandboxed worker          | ✓               | ✓               | ✓               |       |          |
Repeatable jobs           | ✓               | ✓               | ✓               |       |          | ✓
Atomic ops                | ✓               | ✓               | ✓               |       | ✓        |
Persistence               | ✓               | ✓               | ✓               | ✓     | ✓        | ✓
UI                        | ✓               | ✓               | ✓               | ✓     |          |
Optimized for             | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs  | Messages | Jobs

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, whether code fixes, new features, or doc improvements. Code formatting is enforced by prettier. For commits, please follow the conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, check your expressions here to verify they are correct: cron expression generator

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function (job, result) {
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
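
One documented way to reuse connections is the createClient queue option; a sketch using ioredis:

const Queue = require('bull');
const Redis = require('ioredis');

const client = new Redis();
const subscriber = new Redis();

const opts = {
  createClient(type) {
    switch (type) {
      case 'client': return client;         // shared connection for normal commands
      case 'subscriber': return subscriber; // shared connection for pub/sub
      default: return new Redis();          // 'bclient' needs a dedicated blocking connection
    }
  }
};

const userJohn = new Queue('john', opts);
const userLisa = new Queue('lisa', opts);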

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    };
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
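
A minimal sketch of guarding against stalled jobs, with illustrative settings values:

const Queue = require('bull');

const queue = new Queue('payments', {
  settings: {
    lockDuration: 30000,  // ms a job may run before it is considered stalled
    maxStalledCount: 1    // how many times a stalled job may be restarted
  }
});

// Stalled jobs are likely being double-processed; log them to your error monitoring
queue.on('stalled', function (job) {
  console.error('job ' + job.id + ' stalled and will be retried');
});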