workerpool vs piscina vs threads
Worker Thread Management for CPU-Intensive Tasks in Node.js

piscina, threads, and workerpool are all libraries designed to simplify the use of worker threads in Node.js for offloading CPU-intensive tasks from the main event loop. They abstract away the low-level complexity of the built-in worker_threads module, offering higher-level APIs for task queuing, resource pooling, and communication between threads. While they share a common goal — improving performance by parallelizing work — they differ significantly in architecture, API design, runtime assumptions, and suitability for specific deployment environments.

Stat Detail (npm weekly downloads, GitHub stars)

| Package | Weekly Downloads | GitHub Stars | Size | Open Issues | Last Publish | License |
|---|---|---|---|---|---|---|
| workerpool | 11,467,326 | 2,264 | 591 kB | 33 | 14 days ago | Apache-2.0 |
| piscina | 5,812,557 | 4,971 | 406 kB | 15 | a month ago | MIT |
| threads | 309,887 | 3,512 | - | 125 | 4 years ago | MIT |

Piscina vs Threads vs Workerpool: Choosing the Right Tool for Offloading Work in Node.js

When your Node.js app hits CPU bottlenecks — say, parsing large JSON files, running simulations, or encrypting data — you’ll want to move that work off the main thread. All three libraries (piscina, threads, and workerpool) help you do that, but they take very different approaches. Let’s break down how they work, where they shine, and what trade-offs you’ll face.

🧵 Core Architecture: Threads vs Processes vs Abstraction Layers

piscina is built exclusively on Node.js's native worker_threads. It creates a fixed or dynamically sized pool of reusable threads and routes tasks to them efficiently. There's no process spawning: just lightweight threads that exchange data over MessagePort-style messaging and can share memory via SharedArrayBuffer.

// piscina: Simple async task dispatch
const path = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({ filename: path.resolve(__dirname, 'worker.js') });

// In worker.js
module.exports = ({ data }) => {
  return heavyComputation(data);
};

// Main thread
const result = await piscina.run({ data: input });

threads wraps both browser Worker and Node.js Worker under one API. It uses a proxy-based system to expose worker functions as if they were local, handling serialization automatically. This lets you write nearly identical code for web and Node environments.

// threads: Unified worker API
import { spawn, Thread, Worker } from "threads";

const worker = await spawn(new Worker('./worker'));
const result = await worker.doHeavyWork(input);
await Thread.terminate(worker);

// worker.js
import { expose } from "threads/worker";
expose({
  doHeavyWork(input) {
    return heavyComputation(input);
  }
});

workerpool supports both child_process (forked processes) and worker_threads via a single interface. Its default 'auto' mode picks worker_threads when available and falls back to child_process; you can also force process-based workers for stronger isolation, which helps if your tasks might crash or leak memory.

// workerpool: Task by function name
const path = require('path');
const workerpool = require('workerpool');
const pool = workerpool.pool(path.resolve(__dirname, 'worker.js'));

// In worker.js
const workerpool = require('workerpool');

function heavyComputation(input) {
  return /* ... */;
}

// register the function so the pool can call it by name
workerpool.worker({ heavyComputation });

// Main thread
const result = await pool.exec('heavyComputation', [input]);

⚙️ Task Dispatching: How You Send Work to Workers

piscina uses a simple run() method that passes a single argument to the worker’s default export. You can also pass transferable objects (like ArrayBuffer) for zero-copy data movement — critical for large payloads.

// Transfer ArrayBuffer without copying
const buffer = new ArrayBuffer(1024 * 1024);
const result = await piscina.run({ buffer }, { transferList: [buffer] });

threads requires you to explicitly expose functions in the worker using expose(). The main thread gets a proxy object with those methods. Arguments and return values are automatically serialized via the structured clone algorithm (or a polyfill), which works well for plain data but can't handle functions or other non-cloneable values.

workerpool relies on string-based function names. You call pool.exec('functionName', args), and the worker must export a function with that exact name. This is flexible but error-prone — typos cause runtime failures, and refactoring is harder.

📦 Error Handling and Debugging

piscina propagates worker errors directly as rejected promises. Stack traces include both main and worker context when source maps are enabled. Crashed workers are automatically replaced, keeping the pool alive.

threads also uses promise rejections for errors, but because of its proxy layer, debugging can feel indirect. You’ll often need to attach .catch() to individual calls or use global error listeners.

workerpool throws errors as Promise rejections too, but if a worker process crashes entirely, the pool may stall unless you configure timeouts or restart policies. Debugging is more opaque due to inter-process boundaries.
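To make this concrete, here is a rough sketch reusing the pools and worker files from the earlier examples: worker errors surface as ordinary promise rejections, and workerpool's documented timeout helper guards against stalled tasks.

// piscina: a thrown error in the worker rejects the run() promise
try {
  const result = await piscina.run({ data: input });
  console.log(result);
} catch (err) {
  console.error('worker failed:', err);
}

// workerpool: reject the task if it has not finished within 5 seconds
pool
  .exec('heavyComputation', [input])
  .timeout(5000)
  .catch((err) => console.error('worker failed or timed out:', err));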

🌐 Environment Support: Where Can You Run It?

  • piscina: Node.js only. No browser support. Requires Node 12+ (but realistically 14+ for full features).
  • threads: Works in both modern browsers (via Web Workers) and Node.js (via Worker Threads). Great for libraries targeting multiple runtimes.
  • workerpool: Runs in Node.js and, unlike piscina, also in the browser via Web Workers. Supports older Node versions (back to ~8.x) thanks to its child_process fallback, but lacks modern JS ergonomics.

🔁 Resource Management: Pooling and Lifecycle

piscina gives you precise control: set minThreads, maxThreads, idleTimeout, and even custom task queues. Idle threads shut down after timeout, freeing memory.
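As a minimal sketch of such a bounded pool (option names as documented by piscina; the limits here are placeholders):

const path = require('path');
const Piscina = require('piscina');

const piscina = new Piscina({
  filename: path.resolve(__dirname, 'worker.js'),
  minThreads: 2,       // keep two threads warm
  maxThreads: 8,       // never grow beyond eight threads
  idleTimeout: 30000,  // shut down idle threads after 30 seconds
});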

threads largely leaves lifecycle management to you: each spawn() creates a new worker, and you must terminate() it yourself or risk leaks. It does ship a Pool helper, but you still size it and terminate it explicitly (see the sketch below).
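A minimal sketch of that Pool helper, assuming a worker module that exposes doHeavyWork as in the earlier example (API names per the threads.js docs):

import { spawn, Pool, Worker } from "threads";

// a pool of 4 workers, each created lazily via spawn()
const pool = Pool(() => spawn(new Worker('./worker')), 4);

const result = await pool.queue((worker) => worker.doHeavyWork(input));

await pool.terminate(); // shut the pool down when done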

workerpool includes dynamic scaling: it spins up new workers under load and kills idle ones. When it runs with process-based workers, though, startup latency is higher than with thread-based pools.
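A short sketch of that configuration, using options from workerpool's own docs (the path and limits are placeholders):

const path = require('path');
const workerpool = require('workerpool');

const pool = workerpool.pool(path.resolve(__dirname, 'worker.js'), {
  minWorkers: 1,        // keep one worker warm
  maxWorkers: 4,        // scale up to four workers under load
  workerType: 'thread', // force worker_threads instead of child_process
});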

🚫 What Each Library Doesn’t Do Well

  • piscina won’t help you if you need browser compatibility or process-level isolation. It’s laser-focused on Node.js thread performance.
  • threads adds abstraction overhead. If you’re only targeting Node.js and need every millisecond of performance, the proxy layer and auto-serialization may slow you down.
  • workerpool feels dated. Its callback-heavy roots show in places, and the dual process/thread model complicates configuration. Not ideal for greenfield, latency-sensitive apps.

✅ When to Pick Which

| Scenario | Best Choice |
|---|---|
| High-throughput Node.js API doing image resizing or data crunching | piscina |
| Shared library used in both browser and Node.js (e.g., a math toolkit) | threads |
| Legacy system needing crash-resistant workers with minimal code changes | workerpool |
| Need to pass huge binary buffers without copying | piscina (via transferList) |
| Want simple, named function calls without bundler config | workerpool |

💡 Final Recommendation

For most new Node.js projects dealing with CPU-heavy tasks, piscina is the strongest choice — it’s fast, lean, and built for the modern Node.js threading model. Reach for threads only if you truly need cross-platform worker code. Consider workerpool mainly for maintenance scenarios or when process isolation is non-negotiable.

Remember: none of these solve I/O-bound problems. If your bottleneck is database queries or HTTP calls, stick to async/await on the main thread — workers won’t help there.

How to Choose: workerpool vs piscina vs threads
  • workerpool:

    Choose workerpool if you need flexible worker management with support for both child processes and worker threads in Node.js, or if you’re maintaining legacy systems where process-based isolation is preferred. It offers dynamic scaling of workers and supports function registration by name, which simplifies task dispatching. Avoid it for latency-sensitive applications due to its heavier messaging layer and lack of modern async/await ergonomics.

  • piscina:

    Choose piscina if you're building a high-performance Node.js backend (especially with frameworks like Fastify or Express) and need fine-grained control over thread pools with minimal overhead. It’s optimized for low-latency, high-throughput scenarios and integrates cleanly with async/await patterns. Its tight coupling to Node.js makes it unsuitable for browser environments, but ideal for server-side compute-heavy workloads like image processing, data transformation, or cryptographic operations.

  • threads:

    Choose threads if you aim for code that can run consistently across both browser Web Workers and Node.js Worker Threads using a unified API. It provides a clean, promise-based interface and automatic serialization, making it easier to write isomorphic worker logic. However, this abstraction adds slight overhead compared to more native solutions, so it’s best when cross-environment compatibility outweighs raw performance needs.

README for workerpool

workerpool


workerpool offers an easy way to create a pool of workers for both dynamically offloading computations as well as managing a pool of dedicated workers. workerpool basically implements a thread pool pattern. There is a pool of workers to execute tasks. New tasks are put in a queue. A worker executes one task at a time, and once finished, picks a new task from the queue. Workers can be accessed via a natural, promise based proxy, as if they are available straight in the main application.

workerpool runs on Node.js and in the browser.

Features

  • Easy to use
  • Runs in the browser and on node.js
  • Dynamically offload functions to a worker
  • Access workers via a proxy
  • Cancel running tasks
  • Set a timeout on tasks
  • Handles crashed workers
  • Small: 9 kB minified and gzipped
  • Supports transferable objects (only for web workers and worker_threads)

Why

JavaScript is based upon a single event loop which handles one event at a time. Jeremy Epstein explains this clearly:

In Node.js everything runs in parallel, except your code. What this means is that all I/O code that you write in Node.js is non-blocking, while (conversely) all non-I/O code that you write in Node.js is blocking.

This means that CPU heavy tasks will block other tasks from being executed. In case of a browser environment, the browser will not react to user events like a mouse click while executing a CPU intensive task (the browser "hangs"). In case of a node.js server, the server will not respond to any new request while executing a single, heavy request.

For front-end processes, this is not a desired situation. Therefore, CPU intensive tasks should be offloaded from the main event loop onto dedicated workers. In a browser environment, Web Workers can be used. In node.js, child processes and worker_threads are available. An application should be split in separate, decoupled parts, which can run independent of each other in a parallelized way. Effectively, this results in an architecture which achieves concurrency by means of isolated processes and message passing.

Install

Install via npm:

npm install workerpool

Load

To load workerpool in a node.js application (both main application as well as workers):

const workerpool = require('workerpool');

To load workerpool in the browser:

<script src="workerpool.js"></script>

To load workerpool in a web worker in the browser:

importScripts('workerpool.js');

Setting up the workerpool with React or webpack5 requires additional configuration steps, as outlined in the webpack5 section.

Use

Offload functions dynamically

In the following example there is a function add, which is offloaded dynamically to a worker to be executed for a given set of arguments.

myApp.js

const workerpool = require('workerpool');
const pool = workerpool.pool();

function add(a, b) {
  return a + b;
}

pool
  .exec(add, [3, 4])
  .then(function (result) {
    console.log('result', result); // outputs 7
  })
  .catch(function (err) {
    console.error(err);
  })
  .then(function () {
    pool.terminate(); // terminate all workers when done
  });

Note that both function and arguments must be static and stringifiable, as they need to be sent to the worker in a serialized form. In case of large functions or function arguments, the overhead of sending the data to the worker can be significant.
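To illustrate the restriction with a hypothetical example (not from the docs): a function that closes over a variable in the main script fails once it runs in the worker, because only the function's source text is transferred.

const workerpool = require('workerpool');
const pool = workerpool.pool();

const factor = 2;

function scale(x) {
  return x * factor; // fails in the worker: `factor` is not part of the stringified function
}

pool
  .exec(scale, [21])
  .catch((err) => console.error(err)); // rejects with an error from the worker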

Dedicated workers

A dedicated worker can be created in a separate script, and then used via a worker pool.

myWorker.js

const workerpool = require('workerpool');

// a deliberately inefficient implementation of the fibonacci sequence
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 2) + fibonacci(n - 1);
}

// create a worker and register public functions
workerpool.worker({
  fibonacci: fibonacci,
});

This worker can be used by a worker pool:

myApp.js

const workerpool = require('workerpool');

// create a worker pool using an external worker script
const pool = workerpool.pool(__dirname + '/myWorker.js');

// run registered functions on the worker via exec
pool
  .exec('fibonacci', [10])
  .then(function (result) {
    console.log('Result: ' + result); // outputs 55
  })
  .catch(function (err) {
    console.error(err);
  })
  .then(function () {
    pool.terminate(); // terminate all workers when done
  });

// or run registered functions on the worker via a proxy:
pool
  .proxy()
  .then(function (worker) {
    return worker.fibonacci(10);
  })
  .then(function (result) {
    console.log('Result: ' + result); // outputs 55
  })
  .catch(function (err) {
    console.error(err);
  })
  .then(function () {
    pool.terminate(); // terminate all workers when done
  });

Workers can also be initialized asynchronously:

myAsyncWorker.js

define(['workerpool/dist/workerpool'], function (workerpool) {
  // a deliberately inefficient implementation of the fibonacci sequence
  function fibonacci(n) {
    if (n < 2) return n;
    return fibonacci(n - 2) + fibonacci(n - 1);
  }

  // create a worker and register public functions
  workerpool.worker({
    fibonacci: fibonacci,
  });
});

Examples

Examples are available in the examples directory:

https://github.com/josdejong/workerpool/tree/master/examples

API

The API of workerpool consists of two parts: a function workerpool.pool to create a worker pool, and a function workerpool.worker to create a worker.

pool

A workerpool can be created using the function workerpool.pool:

workerpool.pool([script: string] [, options: Object]) : Pool

When a script argument is provided, the provided script will be started as a dedicated worker. When no script argument is provided, a default worker is started which can be used to offload functions dynamically via Pool.exec. Note that on node.js, script must be an absolute file path like __dirname + '/myWorker.js'. In a browser environment, script can also be a data URL like 'data:application/javascript;base64,...'. This allows embedding the bundled code of a worker in your main application. See examples/embeddedWorker for a demo.

The following options are available:

  • minWorkers: number | 'max'. The minimum number of workers that must be initialized and kept available. Setting this to 'max' will create maxWorkers default workers (see below).
  • maxWorkers: number. The default number of maxWorkers is the number of CPUs minus one. When the number of CPUs cannot be determined (for example in older browsers), maxWorkers is set to 3.
  • maxQueueSize: number. The maximum number of tasks allowed to be queued. Can be used to prevent running out of memory. If the maximum is exceeded, adding a new task will throw an error. The default value is Infinity.
  • workerType: 'auto' | 'web' | 'process' | 'thread'.
    • In case of 'auto' (default), workerpool will automatically pick a suitable type of worker: when in a browser environment, 'web' will be used. When in a node.js environment, worker_threads will be used if available (Node.js >= 11.7.0), else child_process will be used.
    • In case of 'web', a Web Worker will be used. Only available in a browser environment.
    • In case of 'process', child_process will be used. Only available in a node.js environment.
    • In case of 'thread', worker_threads will be used. If worker_threads are not available, an error is thrown. Only available in a node.js environment.
  • workerTerminateTimeout: number. The timeout in milliseconds to wait for a worker to clean up its resources on termination before stopping it forcefully. Default value is 1000.
  • abortListenerTimeout: number. The timeout in milliseconds to wait for abort listeners before stopping the worker forcefully and triggering cleanup. Default value is 1000.
  • forkArgs: String[]. For process worker type. An array passed as args to child_process.fork
  • forkOpts: Object. For process worker type. An object passed as options to child_process.fork. See nodejs documentation for available options.
  • workerOpts: Object. For web worker type. An object passed to the constructor of the web worker. See WorkerOptions specification for available options.
  • workerThreadOpts: Object. For thread worker type. An object passed as options to the worker_threads constructor. See nodejs documentation for available options.
  • onCreateWorker: Function. A callback that is called whenever a worker is being created. It can be used to allocate resources for each worker for example. The callback is passed as argument an object with the following properties:
    • forkArgs: String[]: the forkArgs option of this pool
    • forkOpts: Object: the forkOpts option of this pool
    • workerOpts: Object: the workerOpts option of this pool
    • script: string: the script option of this pool
    Optionally, this callback can return an object containing one or more of the above properties. The provided properties will be used to override the pool properties for the worker being created.
  • onTerminateWorker: Function. A callback that is called whenever a worker is being terminated. It can be used to release resources that might have been allocated for this specific worker. The callback is passed as argument an object as described for onCreateWorker, with each property set to the value for the worker being terminated.
  • emitStdStreams: boolean. For process or thread worker type. If true, the worker will emit stdout and stderr events instead of passing them through to the parent streams. Default value is false.

Important note on 'workerType': when sending and receiving primitive data types (plain JSON) from and to a worker, the different worker types ('web', 'process', 'thread') can be used interchangeably. However, when using more advanced data types like buffers, the API and returned results can vary. In these cases, it is best not to use the 'auto' setting but have a fixed 'workerType' and good unit testing in place.
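As a brief sketch of how several of these options combine (the script path is a placeholder; the option names are the ones listed above):

const workerpool = require('workerpool');

const pool = workerpool.pool(__dirname + '/myWorker.js', {
  minWorkers: 2,                // always keep two workers ready
  maxWorkers: 4,                // upper bound on the pool size
  workerType: 'thread',         // pin the worker type instead of relying on 'auto'
  workerTerminateTimeout: 2000, // give workers 2 seconds to clean up on terminate
});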

A worker pool contains the following functions:

  • Pool.exec(method: Function | string, params: Array | null [, options: Object]) : Promise<any, Error>
    Execute a function on a worker with given arguments.

    • When method is a string, a method with this name must exist on the worker and must be registered to make it accessible via the pool. The function will be executed on the worker with the given parameters.
    • When method is a function, the provided function fn will be stringified, sent to the worker, and executed there with the provided parameters. The provided function must be static; it must not depend on variables in a surrounding scope.
    • The following options are available:
      • on: (payload: any) => void. An event listener, to handle events sent by the worker for this execution. See Events for more details.
      • transfer: Object[]. A list of transferable objects to send to the worker. Not supported by process worker type. See example for usage.
  • Pool.proxy() : Promise<Object, Error>
    Create a proxy for the worker pool. The proxy contains a proxy for all methods available on the worker. All methods return promises resolving to the method's result.

  • Pool.stats() : Object
    Retrieve statistics on workers, and active and pending tasks.

    Returns an object containing the following properties:

    {
      totalWorkers: 0,
      busyWorkers: 0,
      idleWorkers: 0,
      pendingTasks: 0,
      activeTasks: 0
    }
    
  • Pool.terminate([force: boolean [, timeout: number]]) : Promise<void, Error>

    If parameter force is false (default), workers will finish the tasks they are working on before terminating themselves. Any pending tasks will be rejected with an error 'Pool terminated'. When force is true, all workers are terminated immediately without finishing running tasks. If timeout is provided, worker will be forced to terminate when the timeout expires and the worker has not finished.

The function Pool.exec and the proxy functions all return a Promise. The promise has the following functions available:

  • Promise.then(fn: Function<result: any>) : Promise<any, Error>
    Get the result of the promise once resolved.
  • Promise.catch(fn: Function<error: Error>) : Promise<any, Error>
    Get the error of the promise when rejected.
  • Promise.finally(fn: Function<void>)
    Logic to run when the Promise either resolves or rejects
  • Promise.cancel() : Promise<any, Error>
    A running task can be cancelled. The worker executing the task is forced to terminate immediately. The promise will be rejected with a Promise.CancellationError.
  • Promise.timeout(delay: number) : Promise<any, Error>
    Cancel a running task when it is not resolved or rejected within the given delay in milliseconds. The timer starts when the task is actually started, not when the task is created and queued. The worker executing the task is forced to terminate immediately. The promise will be rejected with a Promise.TimeoutError.

Example usage:

const workerpool = require('workerpool');

function add(a, b) {
  return a + b;
}

const pool1 = workerpool.pool();

// offload a function to a worker
pool1
  .exec(add, [2, 4])
  .then(function (result) {
    console.log(result); // will output 6
  })
  .catch(function (err) {
    console.error(err);
  });

// create a dedicated worker
const pool2 = workerpool.pool(__dirname + '/myWorker.js');

// assuming myWorker.js contains a function 'fibonacci'
pool2
  .exec('fibonacci', [10])
  .then(function (result) {
    console.log(result); // will output 55
  })
  .catch(function (err) {
    console.error(err);
  });

// send a transferable object to the worker
// assuming myWorker.js contains a function 'sum'
const toTransfer = new Uint8Array(2).map((_v, i) => i)
pool2
  .exec('sum', [toTransfer], { transfer: [toTransfer.buffer] })
  .then(function (result) {
    console.log(result); // will output 1 (sum of [0, 1])
  })
  .catch(function (err) {
    console.error(err);
  });

// create a proxy to myWorker.js
pool2
  .proxy()
  .then(function (myWorker) {
    return myWorker.fibonacci(10);
  })
  .then(function (result) {
    console.log(result); // will output 55
  })
  .catch(function (err) {
    console.error(err);
  });

// create a pool with a specified maximum number of workers
const pool3 = workerpool.pool({ maxWorkers: 7 });
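Building on the pools above, a short sketch of cancellation, timeouts, and graceful termination (the fibonacci worker is the one from the dedicated-worker example):

// cancel a long-running task; its promise rejects with Promise.CancellationError
const running = pool2.exec('fibonacci', [40]);
setTimeout(() => running.cancel(), 100);
running.catch((err) => console.error(err));

// reject the task if it does not finish within 5 seconds (Promise.TimeoutError)
pool2
  .exec('fibonacci', [40])
  .timeout(5000)
  .catch((err) => console.error(err));

// let workers finish their current task, but force-terminate after 2 seconds
pool2.terminate(false, 2000).then(() => console.log('pool terminated'));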

worker

A worker is constructed as:

workerpool.worker([methods: Object<String, Function>] [, options: Object]) : void

Argument methods is optional and can be an object with functions available in the worker. Registered functions will be available via the worker pool.

The following options are available:

  • onTerminate: ([code: number]) => Promise<void> | void. A callback that is called whenever a worker is being terminated. It can be used to release resources that might have been allocated for this specific worker. The difference with pool's onTerminateWorker is that this callback runs in the worker context, while onTerminateWorker is executed on the main thread.

Example usage:

// file myWorker.js
const workerpool = require('workerpool');

function add(a, b) {
  return a + b;
}

function multiply(a, b) {
  return a * b;
}

// create a worker and register functions
workerpool.worker({
  add: add,
  multiply: multiply,
});

Asynchronous results can be handled by returning a Promise from a function in the worker:

// file myWorker.js
const workerpool = require('workerpool');

function timeout(delay) {
  return new Promise(function (resolve, reject) {
    setTimeout(resolve, delay);
  });
}

// create a worker and register functions
workerpool.worker({
  timeout: timeout,
});

Transferable objects can be sent back to the pool using Transfer helper class:

// file myWorker.js
const workerpool = require('workerpool');

function array(size) {
  var array = new Uint8Array(size).map((_v, i) => i);
  return new workerpool.Transfer(array, [array.buffer]);
}

// create a worker and register functions
workerpool.worker({
  array: array,
});

Events

You can send data back from workers to the pool while the task is being executed using the workerEmit function:

workerEmit(payload: any) : unknown

This function only works inside a worker and during a task.

Example:

// file myWorker.js
const workerpool = require('workerpool');

function eventExample(delay) {
  workerpool.workerEmit({
    status: 'in_progress',
  });

  workerpool.workerEmit({
    status: 'complete',
  });

  return true;
}

// create a worker and register functions
workerpool.worker({
  eventExample: eventExample,
});

To receive those events, you can use the on option of the pool exec method:

pool.exec('eventExample', [], {
  on: function (payload) {
    if (payload.status === 'in_progress') {
      console.log('In progress...');
    } else if (payload.status === 'complete') {
      console.log('Done!');
    }
  },
});

Worker API

Workers have access to a worker api which contains the following methods

  • emit: (payload: unknown | Transfer): void
  • addAbortListener: (listener: () => Promise<void>): void

addAbortListener

Worker termination may be recoverable through abort listeners which are registered through worker.addAbortListener. If all registered listeners resolve then the worker will not be terminated, allowing for worker reuse in some cases.

NOTE: For operations to successfully clean up, a worker implementation should be async. If the worker thread is blocked, then the worker will be killed.

function asyncTimeout() {
  var me = this;
  return new Promise(function (resolve) {
    let timeout = setTimeout(() => {
        resolve();
    }, 5000);

    // Register a listener which will resolve before the time out
    // above triggers.
    me.worker.addAbortListener(async function () {
        clearTimeout(timeout);
        resolve();
    });
  });
}

// create a worker and register public functions
workerpool.worker(
  {
    asyncTimeout: asyncTimeout,
  },
  {
    abortListenerTimeout: 1000
  }
);

emit

Events may also be emitted from the worker api through worker.emit

// file myWorker.js
const workerpool = require('workerpool');

function eventExample(delay) {
  this.worker.emit({
    status: "in_progress",
  });
  workerpool.workerEmit({
    status: 'complete',
  });

  return true;
}

// create a worker and register functions
workerpool.worker({
  eventExample: eventExample,
});

Utilities

The following properties are available on the workerpool module for convenience (see the example below):

  • platform: The JavaScript platform. Either node or browser
  • isMainThread: Whether the code is running in the main thread or in a worker
  • cpus: The number of CPUs/cores available
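A tiny usage sketch:

const workerpool = require('workerpool');

console.log(workerpool.platform);     // 'node' or 'browser'
console.log(workerpool.isMainThread); // false when running inside a worker
console.log(workerpool.cpus);         // number of CPUs/cores available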

Roadmap

  • Implement functions for parallel processing: map, reduce, forEach, filter, some, every, ...
  • Implement graceful degradation on old browsers not supporting webworkers: fallback to processing tasks in the main application.
  • Implement session support: be able to handle a series of related tasks by a single worker, which can keep a state for the session.

Related libraries

Build

First clone the project from github:

git clone git://github.com/josdejong/workerpool.git
cd workerpool

Install the project dependencies:

npm install

Then, the project can be built by executing the build script via npm:

npm run build

This will build the library workerpool.js and workerpool.min.js from the source files and put them in the folder dist.

Test

To execute tests for the library, install the project dependencies once:

npm install

Then, the tests can be executed:

npm test

To test code coverage of the tests:

npm run coverage

To see the coverage results, open the generated report in your browser:

./coverage/index.html

Publish

  • Describe changes in HISTORY.md.
  • Update version in package.json, run npm install to update it in package-lock.json too.
  • Push to GitHub.
  • Deploy to npm via npm publish.
  • Add a git tag with the version number like:
    git tag v1.2.3
    git push --tags
    

License

Copyright (C) 2014-2025 Jos de Jong wjosdejong@gmail.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.