bull, piscina, and workerpool are all Node.js libraries designed to distribute work across multiple threads or processes, but they serve different architectural purposes. bull is a robust queue system built on Redis that enables job scheduling, retries, and distributed processing. piscina is a modern, high-performance worker thread pool implementation that simplifies running JavaScript tasks in parallel using Node.js Worker Threads. workerpool provides a flexible abstraction for executing functions in worker threads or child processes, in both Node.js and the browser.
Node.js runs JavaScript in a single thread, which means heavy computations can block your event loop and degrade responsiveness. To avoid this, developers use libraries like bull, piscina, and workerpool to move work off the main thread. But these tools solve different problems: one is a job queue, another is a worker thread pool, and the third is a generic parallel execution helper. Let's break down how they work and when to use each.
bull: A Redis-Based Job Queue

bull isn't about parallelism per se; it's about reliable, asynchronous job processing. It stores jobs in Redis, allowing them to persist across restarts, be shared across services, and support advanced features like retries, delays, and priorities.
// bull: enqueue a job
const Queue = require('bull');
const emailQueue = new Queue('email');
// Add a job to the queue
await emailQueue.add({ to: 'user@example.com', subject: 'Welcome!' });
// Process jobs (can run in a separate process or server)
emailQueue.process(async (job) => {
await sendEmail(job.data.to, job.data.subject);
});
✅ Use bull when you care about durability, distribution, and workflow control, not just speed.
piscina: High-Performance Worker Thread Pool

piscina gives you a ready-to-use pool of Node.js Worker Threads with minimal overhead. It's built for CPU-heavy tasks that need to run fast within a single application instance.
// piscina: run a task in a worker thread
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({ filename: path.resolve(__dirname, 'worker.js') });
// worker.js exports a function
// export async function processData(data) { /* ... */ }
const result = await piscina.run({ input: 'large dataset' }, { name: 'processData' });
✅ Use piscina when you need low-latency, in-process parallelism for tasks like hashing, compression, or math-heavy operations.
workerpool: Flexible Task Execution in Workers or Processes

workerpool abstracts away whether you're using child processes or worker threads, letting you call functions as if they were local, even though they actually run in a separate thread or process.
// workerpool: execute a function registered in a worker file
const workerpool = require('workerpool');
// 'mathWorker.js' registers its functions:
// workerpool.worker({ square: (n) => n * n });
const pool = workerpool.pool(__dirname + '/mathWorker.js');
const result = await pool.exec('square', [5]);
// result === 25
✅ Use workerpool when you want a simple, unified API for offloading work without worrying about the underlying concurrency model.
| Package | Concurrency Model | Shared Memory? | Cross-Process? |
|---|---|---|---|
| bull | Distributed (via Redis) | ❌ | ✅ |
| piscina | Worker Threads (in-process) | ✅ (via transferable objects) | ❌ |
| workerpool | Worker Threads or Child Processes | ⚠️ (depends on mode) | ✅ (with child processes) |
- bull assumes workers may live on different machines. Communication happens through Redis, so no shared memory.
- piscina uses Worker Threads, which can share memory via ArrayBuffer transfers: very fast, but limited to one machine.
- workerpool lets you choose: workerType: 'thread' for Worker Threads, or workerType: 'process' for full isolation (slower, but safer for unstable code).

Only bull offers persistent job storage. If your app crashes:
- bull: Jobs stay in Redis and resume when the worker restarts.
- piscina/workerpool: In-flight tasks are lost. No retry, no history.

This makes bull essential for critical background work (e.g., payment processing), while the others suit ephemeral, best-effort tasks (e.g., real-time analytics).
bull uses an event-driven, queue-based model:

queue.on('completed', (job) => console.log('Done:', job.id));
queue.on('failed', (job, err) => console.error('Failed:', err));
You add jobs and listen for outcomes, which is great for long-running or batched work.
piscina uses async/await with named exports:

// Main thread
const result = await piscina.run(data, { name: 'transform' });
// Worker file must export `transform`
export async function transform(data) { return data.map(x => x * 2); }
Clean and direct: it feels like calling a local function.
workerpool uses dynamic function invocation:

// Call any exported function by name
const result = await pool.exec('myFunction', [arg1, arg2]);
More flexible than piscina (no need to pre-declare entry points), but slightly more runtime overhead.
bull: Built-in retry mechanisms, failure tracking, and dead-letter queues. Retry options are set when adding a job:

await queue.add(jobData, { attempts: 3, backoff: { type: 'exponential', delay: 1000 } });
piscina: Errors bubble up as rejected promises. You handle them like any async call.
try {
await piscina.run(...);
} catch (err) {
// Handle worker error
}
workerpool: Errors reject the promise, just like piscina.
If you need automatic retries, only bull provides that out of the box.
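Since piscina and workerpool leave retries to you, the usual workaround is a hand-rolled wrapper. A minimal sketch (the helper name and backoff constants are my own, not part of either library):

```javascript
// Retry an async task with exponential backoff: 100 ms, 200 ms, 400 ms, ...
async function retryWithBackoff(task, attempts = 3, baseDelayMs = 100) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await task();
    } catch (err) {
      if (attempt === attempts - 1) throw err; // out of attempts: give up
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage sketch: wrap any promise-returning call,
// e.g. retryWithBackoff(() => pool.exec('square', [5]))
```

Unlike bull, this gives you no persistence: if the process dies mid-retry, the work is gone.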
Typical scenarios:

- bull: enqueue email jobs and let workers handle delivery with retry logic.
- piscina: spin up a thread pool and pass frames via SharedArrayBuffer for zero-copy transfers.
- workerpool with workerType: 'process': run each script in its own isolated process.

When not to use them:

- Avoid bull for short-lived, non-critical tasks; Redis adds operational complexity you may not need.
- Avoid piscina if your task involves heavy I/O (like file reads); threads won't help much, so consider clustering instead.
- Avoid workerpool if you need fine-grained control over thread lifecycle or maximum performance; piscina is leaner and faster.

| Feature | bull | piscina | workerpool |
|---|---|---|---|
| Primary Use Case | Distributed job queue | In-process CPU parallelism | Generic offloading |
| Persistence | ✅ (Redis) | ❌ | ❌ |
| Retry Logic | ✅ Built-in | ❌ Manual | ❌ Manual |
| Concurrency Model | Distributed workers | Worker Threads | Threads or Processes |
| API Style | Event-driven queue | Promise + named exports | Dynamic function calls |
| Cross-Machine | ✅ | ❌ | ❌ |
| Best For | Email, reports, ETL | Math, crypto, parsing | Safe script execution |
Need durability? Reach for bull. Need raw speed? Reach for piscina. Need flexibility? Reach for workerpool. All three are mature, actively maintained, and solve real problems, but they're not interchangeable. Choose based on whether you need reliability, raw speed, or flexibility.
Choose workerpool when you need flexibility to run tasks in either child processes or worker threads and want a simple function-call abstraction without managing thread lifecycle manually. It works well for moderate parallelism needs in monolithic apps where you don't require Redis-backed persistence or advanced queue semantics.
Choose piscina when you need maximum performance for CPU-bound tasks within a single Node.js instance and want a clean, promise-based API over Worker Threads. It's well-suited for real-time data transformation, cryptographic operations, or parsing large JSON payloads without blocking the main thread.
Choose bull when you need durable, persistent job queues with features like delayed jobs, priority levels, rate limiting, and retry logic, especially in distributed systems where multiple services must coordinate background work. It's ideal for email sending, image processing pipelines, or any task that must survive process restarts and scale across machines.
workerpool offers an easy way to create a pool of workers for both dynamically offloading computations as well as managing a pool of dedicated workers. workerpool basically implements a thread pool pattern. There is a pool of workers to execute tasks. New tasks are put in a queue. A worker executes one task at a time, and once finished, picks a new task from the queue. Workers can be accessed via a natural, promise based proxy, as if they are available straight in the main application.
workerpool runs on Node.js and in the browser.
JavaScript is based upon a single event loop which handles one event at a time. Jeremy Epstein explains this clearly:
> In Node.js everything runs in parallel, except your code. What this means is that all I/O code that you write in Node.js is non-blocking, while (conversely) all non-I/O code that you write in Node.js is blocking.
This means that CPU-heavy tasks will block other tasks from being executed. In a browser environment, the browser will not react to user events like a mouse click while executing a CPU-intensive task (the browser "hangs"). In a node.js server, the server will not respond to any new request while executing a single, heavy request.
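The blocking behavior is easy to demonstrate with pure Node.js, no libraries needed: a zero-delay timer cannot fire until the synchronous loop below returns control to the event loop.

```javascript
// A 0 ms timer is scheduled first, but it cannot run until the
// synchronous busy-wait below releases the event loop.
const start = Date.now();

setTimeout(() => {
  console.log('timer fired after', Date.now() - start, 'ms'); // >= 200, not 0
}, 0);

while (Date.now() - start < 200) {
  // busy wait: simulates a CPU-bound task hogging the main thread
}
console.log('busy loop done');
```

The same starvation hits HTTP request handlers, WebSocket messages, and stream callbacks while the loop runs.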
For front-end processes, this is not a desired situation. Therefore, CPU intensive tasks should be offloaded from the main event loop onto dedicated workers. In a browser environment, Web Workers can be used. In node.js, child processes and worker_threads are available. An application should be split in separate, decoupled parts, which can run independent of each other in a parallelized way. Effectively, this results in an architecture which achieves concurrency by means of isolated processes and message passing.
Install via npm:
npm install workerpool
To load workerpool in a node.js application (both main application as well as workers):
const workerpool = require('workerpool');
To load workerpool in the browser:
<script src="workerpool.js"></script>
To load workerpool in a web worker in the browser:
importScripts('workerpool.js');
Setting up the workerpool with React or webpack5 requires additional configuration steps, as outlined in the webpack5 section.
In the following example there is a function add, which is offloaded dynamically to a worker to be executed for a given set of arguments.
myApp.js
const workerpool = require('workerpool');
const pool = workerpool.pool();
function add(a, b) {
return a + b;
}
pool
.exec(add, [3, 4])
.then(function (result) {
console.log('result', result); // outputs 7
})
.catch(function (err) {
console.error(err);
})
.then(function () {
pool.terminate(); // terminate all workers when done
});
Note that both function and arguments must be static and stringifiable, as they need to be sent to the worker in a serialized form. In case of large functions or function arguments, the overhead of sending the data to the worker can be significant.
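The serialization requirement can be illustrated with plain JavaScript. Roughly, the function's source travels as a string and is rebuilt on the other side, so any closed-over variable is lost (this is a simplified model of the mechanism, not workerpool's actual code):

```javascript
function add(a, b) {
  return a + b;
}

// Roughly what happens: the function travels as text...
const source = add.toString();

// ...and is reconstructed in the worker, with no access to the sender's scope.
const rebuilt = new Function('return (' + source + ')')();
console.log(rebuilt(3, 4)); // 7

// A closure would break: a function like `(b) => offset + b` rebuilt this way
// throws "offset is not defined", because `offset` lived in the old scope.
```

This is why the offloaded function must be static and self-contained.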
A dedicated worker can be created in a separate script, and then used via a worker pool.
myWorker.js
const workerpool = require('workerpool');
// a deliberately inefficient implementation of the fibonacci sequence
function fibonacci(n) {
if (n < 2) return n;
return fibonacci(n - 2) + fibonacci(n - 1);
}
// create a worker and register public functions
workerpool.worker({
fibonacci: fibonacci,
});
This worker can be used by a worker pool:
myApp.js
const workerpool = require('workerpool');
// create a worker pool using an external worker script
const pool = workerpool.pool(__dirname + '/myWorker.js');
// run registered functions on the worker via exec
pool
.exec('fibonacci', [10])
.then(function (result) {
console.log('Result: ' + result); // outputs 55
})
.catch(function (err) {
console.error(err);
})
.then(function () {
pool.terminate(); // terminate all workers when done
});
// or run registered functions on the worker via a proxy:
pool
.proxy()
.then(function (worker) {
return worker.fibonacci(10);
})
.then(function (result) {
console.log('Result: ' + result); // outputs 55
})
.catch(function (err) {
console.error(err);
})
.then(function () {
pool.terminate(); // terminate all workers when done
});
A worker can also be initialized asynchronously:
myAsyncWorker.js
define(['workerpool/dist/workerpool'], function (workerpool) {
// a deliberately inefficient implementation of the fibonacci sequence
function fibonacci(n) {
if (n < 2) return n;
return fibonacci(n - 2) + fibonacci(n - 1);
}
// create a worker and register public functions
workerpool.worker({
fibonacci: fibonacci,
});
});
Examples are available in the examples directory:
https://github.com/josdejong/workerpool/tree/master/examples
The API of workerpool consists of two parts: a function workerpool.pool to create a worker pool, and a function workerpool.worker to create a worker.
A workerpool can be created using the function workerpool.pool:
workerpool.pool([script: string] [, options: Object]) : Pool
When a script argument is provided, the provided script will be started as a dedicated worker. When no script argument is provided, a default worker is started which can be used to offload functions dynamically via Pool.exec. Note that on node.js, script must be an absolute file path like __dirname + '/myWorker.js'. In a browser environment, script can also be a data URL like 'data:application/javascript;base64,...'. This allows embedding the bundled code of a worker in your main application. See examples/embeddedWorker for a demo.
The following options are available:
- minWorkers: number | 'max'. The minimum number of workers that must be initialized and kept available. Setting this to 'max' will create maxWorkers default workers (see below).
- maxWorkers: number. The default number of maxWorkers is the number of CPUs minus one. When the number of CPUs could not be determined (for example in older browsers), maxWorkers is set to 3.
- maxQueueSize: number. The maximum number of tasks allowed to be queued. Can be used to prevent running out of memory. If the maximum is exceeded, adding a new task will throw an error. The default value is Infinity.
- workerType: 'auto' | 'web' | 'process' | 'thread'.
  - 'auto' (default): workerpool will automatically pick a suitable type of worker. In a browser environment, 'web' will be used. In a node.js environment, worker_threads will be used if available (Node.js >= 11.7.0), else child_process will be used.
  - 'web': a Web Worker will be used. Only available in a browser environment.
  - 'process': child_process will be used. Only available in a node.js environment.
  - 'thread': worker_threads will be used. If worker_threads are not available, an error is thrown. Only available in a node.js environment.
- workerTerminateTimeout: number. The timeout in milliseconds to wait for a worker to clean up its resources on termination before stopping it forcefully. Default value is 1000.
- abortListenerTimeout: number. The timeout in milliseconds to wait for abort listeners before stopping the worker forcefully, triggering cleanup. Default value is 1000.
- forkArgs: String[]. For the 'process' worker type. An array passed as args to child_process.fork.
- forkOpts: Object. For the 'process' worker type. An object passed as options to child_process.fork. See the nodejs documentation for available options.
- workerOpts: Object. For the 'web' worker type. An object passed to the constructor of the web worker. See the WorkerOptions specification for available options.
- workerThreadOpts: Object. For the 'thread' worker type. An object passed to worker_threads.options. See the nodejs documentation for available options.
- onCreateWorker: Function. A callback that is called whenever a worker is being created. It can be used to allocate resources for each worker, for example. The callback is passed an object with the following properties:
  - forkArgs: String[]: the forkArgs option of this pool
  - forkOpts: Object: the forkOpts option of this pool
  - workerOpts: Object: the workerOpts option of this pool
  - script: string: the script option of this pool
  Optionally, this callback can return an object containing one or more of the above properties. The provided properties will be used to override the pool properties for the worker being created.
- onTerminateWorker: Function. A callback that is called whenever a worker is being terminated. It can be used to release resources that might have been allocated for this specific worker. The callback is passed an object as described for onCreateWorker, with each property set to the value for the worker being terminated.
- emitStdStreams: boolean. For the 'process' or 'thread' worker type. If true, the worker will emit stdout and stderr events instead of passing them through to the parent streams. Default value is false.

Important note on 'workerType': when sending and receiving primitive data types (plain JSON) from and to a worker, the different worker types ('web', 'process', 'thread') can be used interchangeably. However, when using more advanced data types like buffers, the API and returned results can vary. In these cases, it is best not to use the 'auto' setting but to have a fixed 'workerType' and good unit testing in place.
A worker pool contains the following functions:
Pool.exec(method: Function | string, params: Array | null [, options: Object]) : Promise<any, Error>
Execute a function on a worker with given arguments.
- When method is a string, a method with this name must exist at the worker and must be registered to make it accessible via the pool. The function will be executed on the worker with the given parameters.
- When method is a function, the provided function will be stringified, sent to the worker, and executed there with the provided parameters. The provided function must be static; it must not depend on variables in a surrounding scope.

Pool.proxy() : Promise<Object, Error>
Create a proxy for the worker pool. The proxy contains a proxy for all methods available on the worker. All methods return promises resolving the methods result.
Pool.stats() : Object
Retrieve statistics on workers, and active and pending tasks.
Returns an object containing the following properties:
{
totalWorkers: 0,
busyWorkers: 0,
idleWorkers: 0,
pendingTasks: 0,
activeTasks: 0
}
Pool.terminate([force: boolean [, timeout: number]]) : Promise<void, Error>
If parameter force is false (default), workers will finish the tasks they are working on before terminating themselves. Any pending tasks will be rejected with an error 'Pool terminated'. When force is true, all workers are terminated immediately without finishing running tasks. If timeout is provided, worker will be forced to terminate when the timeout expires and the worker has not finished.
The function Pool.exec and the proxy functions all return a Promise. The promise has the following functions available:
- Promise.then(fn: Function<result: any>) : Promise<any, Error>
- Promise.catch(fn: Function<error: Error>) : Promise<any, Error>
- Promise.finally(fn: Function<void>). Runs when the promise resolves or rejects.
- Promise.cancel() : Promise<any, Error>. Cancels a running task; the promise is rejected with a Promise.CancellationError.
- Promise.timeout(delay: number) : Promise<any, Error>. Cancels the task if it is not resolved or rejected within delay milliseconds; the promise is rejected with a Promise.TimeoutError.

Example usage:
const workerpool = require('workerpool');
function add(a, b) {
return a + b;
}
const pool1 = workerpool.pool();
// offload a function to a worker
pool1
.exec(add, [2, 4])
.then(function (result) {
console.log(result); // will output 6
})
.catch(function (err) {
console.error(err);
});
// create a dedicated worker
const pool2 = workerpool.pool(__dirname + '/myWorker.js');
// assuming myWorker.js contains a function 'fibonacci'
pool2
.exec('fibonacci', [10])
.then(function (result) {
console.log(result); // will output 55
})
.catch(function (err) {
console.error(err);
});
// send a transferable object to the worker
// assuming myWorker.js contains a function 'sum'
const toTransfer = new Uint8Array(2).map((_v, i) => i)
pool2
.exec('sum', [toTransfer], { transfer: [toTransfer.buffer] })
.then(function (result) {
console.log(result); // will output 3
})
.catch(function (err) {
console.error(err);
});
// create a proxy to myWorker.js
pool2
.proxy()
.then(function (myWorker) {
return myWorker.fibonacci(10);
})
.then(function (result) {
console.log(result); // will output 55
})
.catch(function (err) {
console.error(err);
});
// create a pool with a specified maximum number of workers
const pool3 = workerpool.pool({ maxWorkers: 7 });
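The Promise.timeout behavior described above can be approximated for plain promises with Promise.race. A sketch (note this only rejects the promise; unlike workerpool's built-in timeout, it cannot terminate the worker):

```javascript
// Reject a promise if it does not settle within `delayMs`.
function withTimeout(promise, delayMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('TimeoutError')), delayMs);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch: withTimeout(pool.exec('fibonacci', [40]), 1000)
//   .catch((err) => console.error(err.message)); // 'TimeoutError' if too slow
```

Because the worker keeps running after a bare race loses, preferring the library's Promise.timeout is usually the right call inside a pool.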
A worker is constructed as:
workerpool.worker([methods: Object<String, Function>] [, options: Object]) : void
Argument methods is optional and can be an object with functions available in the worker. Registered functions will be available via the worker pool.
The following options are available:
- onTerminate: ([code: number]) => Promise<void> | void. A callback that is called whenever a worker is being terminated. It can be used to release resources that might have been allocated for this specific worker. The difference with the pool's onTerminateWorker is that this callback runs in the worker context, while onTerminateWorker is executed on the main thread.

Example usage:
// file myWorker.js
const workerpool = require('workerpool');
function add(a, b) {
return a + b;
}
function multiply(a, b) {
return a * b;
}
// create a worker and register functions
workerpool.worker({
add: add,
multiply: multiply,
});
Asynchronous results can be handled by returning a Promise from a function in the worker:
// file myWorker.js
const workerpool = require('workerpool');
function timeout(delay) {
return new Promise(function (resolve, reject) {
setTimeout(resolve, delay);
});
}
// create a worker and register functions
workerpool.worker({
timeout: timeout,
});
Transferable objects can be sent back to the pool using Transfer helper class:
// file myWorker.js
const workerpool = require('workerpool');
function array(size) {
var array = new Uint8Array(size).map((_v, i) => i);
return new workerpool.Transfer(array, [array.buffer]);
}
// create a worker and register functions
workerpool.worker({
array: array,
});
You can send data back from workers to the pool while the task is being executed using the workerEmit function:
workerEmit(payload: any) : unknown
This function only works inside a worker and during a task.
Example:
// file myWorker.js
const workerpool = require('workerpool');
function eventExample(delay) {
workerpool.workerEmit({
status: 'in_progress',
});
workerpool.workerEmit({
status: 'complete',
});
return true;
}
// create a worker and register functions
workerpool.worker({
eventExample: eventExample,
});
To receive those events, you can use the on option of the pool exec method:
pool.exec('eventExample', [], {
on: function (payload) {
if (payload.status === 'in_progress') {
console.log('In progress...');
} else if (payload.status === 'complete') {
console.log('Done!');
}
},
});
Workers have access to a worker API which contains the following methods:

- emit: (payload: unknown | Transfer): void
- addAbortListener: (listener: () => Promise<void>): void

Worker termination may be recoverable through abort listeners, which are registered through worker.addAbortListener. If all registered listeners resolve, then the worker will not be terminated, allowing for worker reuse in some cases.
NOTE: For operations to successfully clean up, a worker implementation should be async. If the worker thread is blocked, then the worker will be killed.
function asyncTimeout() {
var me = this;
return new Promise(function (resolve) {
let timeout = setTimeout(() => {
resolve();
}, 5000);
// Register a listener which will resolve before the time out
// above triggers.
me.worker.addAbortListener(async function () {
clearTimeout(timeout);
resolve();
});
});
}
// create a worker and register public functions
workerpool.worker(
{
asyncTimeout: asyncTimeout,
},
{
abortListenerTimeout: 1000
}
);
Events may also be emitted from the worker API through worker.emit:
// file myWorker.js
const workerpool = require('workerpool');
function eventExample(delay) {
this.worker.emit({
status: "in_progress",
});
workerpool.workerEmit({
status: 'complete',
});
return true;
}
// create a worker and register functions
workerpool.worker({
eventExample: eventExample,
});
Following properties are available for convenience:

- map, reduce, forEach, filter, some, every, ...

First clone the project from GitHub:
git clone git://github.com/josdejong/workerpool.git
cd workerpool
Install the project dependencies:
npm install
Then, the project can be built by executing the build script via npm:
npm run build
This will build the library workerpool.js and workerpool.min.js from the source files and put them in the folder dist.
To execute tests for the library, install the project dependencies once:
npm install
Then, the tests can be executed:
npm test
To test code coverage of the tests:
npm run coverage
To see the coverage results, open the generated report in your browser:
./coverage/index.html
To publish a new release:

- Update the version number in package.json, and run npm install to update it in package-lock.json too.
- Publish to npm via npm publish.
- Add a git tag for the release:

git tag v1.2.3
git push --tags
Copyright (C) 2014-2025 Jos de Jong (wjosdejong@gmail.com)
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.