p-throttle vs p-limit vs p-queue
Concurrency and Rate Limiting in JavaScript

p-limit, p-queue, and p-throttle are utilities for controlling asynchronous execution in Node.js and browser environments. p-limit restricts the number of promises running concurrently. p-queue provides a priority queue with concurrency limits, timeouts, and state management. p-throttle limits the rate of function execution over a specific time interval. Together, they help prevent resource exhaustion, API rate limit violations, and performance bottlenecks.

Charts (omitted): npm weekly downloads trend (3 years) and GitHub stars ranking.

Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
| --- | --- | --- | --- | --- | --- | --- |
| p-throttle | 3,253,887 | 518 | 21.6 kB | 0 | 5 months ago | MIT |
| p-limit | 0 | 2,843 | 14.9 kB | 1 | 3 months ago | MIT |
| p-queue | 0 | 4,186 | 80.5 kB | 6 | 13 days ago | MIT |

Concurrency and Rate Limiting: p-limit vs p-queue vs p-throttle

When building robust JavaScript applications, uncontrolled asynchronous operations can lead to crashed servers, tripped API rate limits, or sluggish UIs. The p-* ecosystem offers three distinct tools to manage this flow: p-limit, p-queue, and p-throttle. While they all wrap async functions, they solve different problems. Let's compare how they handle execution control.

🚦 Execution Control Strategy: Count vs Time

The core difference lies in what they limit: simultaneous active tasks or the frequency of tasks over time.

p-limit restricts the number of promises that are active at the same time.

  • Once the limit is reached, new tasks wait in a hidden queue.
  • As soon as one finishes, the next starts.
import pLimit from 'p-limit';

const limit = pLimit(2); // Max 2 concurrent

const tasks = [1, 2, 3, 4].map(id => 
  limit(() => fetch(`/api/item/${id}`))
);

await Promise.all(tasks);
// Only 2 fetches happen at once
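Under the hood, this kind of limiter is just a counter plus a FIFO of deferred task starts. Here is a minimal sketch of the idea, with a hypothetical `createLimit` helper; it is illustrative only, not p-limit's actual source:

```javascript
// Minimal sketch of a concurrency limiter: a counter plus a FIFO of
// deferred task starts. Illustrative only -- not p-limit's source.
function createLimit(concurrency) {
  let active = 0;
  const queue = [];

  const pump = () => {
    // Start queued tasks while free slots remain
    while (active < concurrency && queue.length > 0) {
      active++;
      queue.shift()();
    }
  };

  return fn => new Promise((resolve, reject) => {
    queue.push(() => {
      Promise.resolve()
        .then(fn)
        .then(resolve, reject)
        .finally(() => {
          active--;
          pump(); // a slot freed up; start the next waiter
        });
    });
    pump();
  });
}

// Usage: only two of the four tasks are in flight at any moment
const limit = createLimit(2);
const slow = id => new Promise(resolve => setTimeout(() => resolve(id), 50));
const results = Promise.all([1, 2, 3, 4].map(id => limit(() => slow(id))));
results.then(ids => console.log(ids)); // [ 1, 2, 3, 4 ]
```

Note that `Promise.all` preserves input order in the result regardless of the order in which the limited tasks actually finished.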

p-queue also restricts concurrency but wraps it in a managed queue object.

  • It tracks pending, active, and completed tasks.
  • Useful when you need to know when the queue is empty.
import PQueue from 'p-queue';

const queue = new PQueue({ concurrency: 2 });

[1, 2, 3, 4].forEach(id => {
  queue.add(() => fetch(`/api/item/${id}`));
});

await queue.onIdle();
// Waits until all tasks are done
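Conceptually, an `onIdle`-style wait only needs a pending counter and a promise that resolves when the counter returns to zero. A rough sketch of that mechanism, using a hypothetical `createIdleTracker` (not p-queue's implementation):

```javascript
// Sketch of the mechanism behind an onIdle()-style wait: count pending
// tasks and resolve waiters when the count drops to zero.
// Illustrative only -- not p-queue's implementation.
function createIdleTracker() {
  let pending = 0;
  let waiters = [];

  return {
    track(promise) {
      pending++;
      promise.finally(() => {
        pending--;
        if (pending === 0) {
          for (const resolve of waiters) resolve();
          waiters = [];
        }
      });
      return promise;
    },
    onIdle() {
      return pending === 0
        ? Promise.resolve()
        : new Promise(resolve => waiters.push(resolve));
    },
  };
}

// Usage: wait for a batch of fire-and-forget tasks to drain
const tracker = createIdleTracker();
for (const ms of [10, 20, 30]) {
  tracker.track(new Promise(resolve => setTimeout(resolve, ms)));
}
tracker.onIdle().then(() => console.log('all done'));
```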

p-throttle restricts how often a function can run over a time interval.

  • It does not care about concurrency count, but rather time spacing.
  • Ideal for APIs with "X requests per second" rules.
import pThrottle from 'p-throttle';

const throttle = pThrottle({
  limit: 2,
  interval: 1000 // at most 2 calls per 1000ms
});

const fetchItem = throttle(id => fetch(`/api/item/${id}`));

const tasks = [1, 2, 3, 4].map(id => fetchItem(id));

await Promise.all(tasks);
// Calls are spaced out over time
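Time-based limiting can be sketched with a fixed window: count the starts in the current interval and make late callers sleep until the window rolls over. The `createThrottle` helper below is hypothetical and deliberately naive; p-throttle's real algorithm is more refined:

```javascript
// Sketch of windowed rate limiting: allow `limit` starts per `interval`,
// and make extra callers wait for the next window. Illustrative only --
// not p-throttle's implementation.
function createThrottle({ limit, interval }) {
  let windowStart = 0;
  let count = 0;

  return fn => async (...args) => {
    for (;;) {
      const now = Date.now();
      if (now - windowStart >= interval) {
        windowStart = now; // a fresh window opens
        count = 0;
      }
      if (count < limit) {
        count++;
        return fn(...args);
      }
      // Window is full: sleep until it rolls over, then re-check
      await new Promise(resolve => setTimeout(resolve, windowStart + interval - now));
    }
  };
}

// Usage: 2 calls run immediately, the other 2 about a second later
const throttle = createThrottle({ limit: 2, interval: 1000 });
const log = throttle(id => console.log(id, new Date().toISOString()));
for (const id of [1, 2, 3, 4]) log(id);
```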

πŸ—„οΈ Queue & State Management: Stateful vs Stateless

Managing the lifecycle of tasks often requires state. p-queue is stateful, while the others are stateless wrappers.

p-limit is stateless.

  • It returns a function wrapper.
  • You cannot pause or clear the internal queue directly.
  • To stop, you must manage external flags.
import pLimit from 'p-limit';

const limit = pLimit(1);
let shouldStop = false;

const task = async () => {
  if (shouldStop) return;
  return limit(() => doWork());
};
// No built-in pause method

p-queue is stateful.

  • It exposes methods to pause, resume, and clear the queue.
  • Essential for long-running workers or UI interactions.
import PQueue from 'p-queue';

const queue = new PQueue();

queue.add(() => doWork());

queue.pause(); // Stops processing new items
queue.clear(); // Removes pending items
queue.start(); // Resumes processing
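The pause/clear/start trio is easy to picture as a flag plus a task array. Here is a toy pausable queue with concurrency 1, using a hypothetical `TinyQueue` class; it sketches the control surface, not p-queue's real internals:

```javascript
// Toy pausable FIFO queue with concurrency 1 -- a sketch of the control
// surface p-queue offers, not its implementation.
class TinyQueue {
  #tasks = [];
  #paused = false;
  #running = false;

  add(fn) {
    return new Promise((resolve, reject) => {
      this.#tasks.push(() => fn().then(resolve, reject));
      this.#run();
    });
  }

  pause() { this.#paused = true; }               // stop starting new items
  clear() { this.#tasks.length = 0; }            // drop pending items
  start() { this.#paused = false; this.#run(); } // resume processing

  async #run() {
    if (this.#running) return;
    this.#running = true;
    while (!this.#paused && this.#tasks.length > 0) {
      await this.#tasks.shift()();
    }
    this.#running = false;
  }
}

// Usage
const queue = new TinyQueue();
queue.add(async () => console.log('first'));
queue.pause();
queue.add(async () => console.log('never runs'));
queue.clear(); // the second task is dropped
queue.start();
```

One caveat of `clear()` in this sketch: the promises returned by `add()` for dropped tasks never settle, so callers should not await them unconditionally.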

p-throttle is stateless.

  • It maintains internal timers but exposes no queue controls.
  • You cannot drain or pause the throttle itself.
import pThrottle from 'p-throttle';

const throttle = pThrottle({ limit: 1, interval: 1000 });
const throttled = throttle(() => doWork());

// No pause or clear methods are available
await throttled();

πŸ“‰ Priority and Ordering

In complex systems, not all tasks are equal. Some need to jump the line.

p-limit processes tasks in FIFO (First-In-First-Out) order.

  • No priority support.
  • Simple and predictable.
import pLimit from 'p-limit';

const limit = pLimit(1);

// First added runs first
limit(() => console.log('A'));
limit(() => console.log('B'));

p-queue supports priority levels.

  • Higher numbers mean higher priority.
  • Critical tasks can bypass less important ones.
import PQueue from 'p-queue';

const queue = new PQueue();

queue.add(() => console.log('Low'), { priority: 0 });
queue.add(() => console.log('High'), { priority: 10 });
// If both are still queued, 'High' runs before 'Low'
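Priority scheduling reduces to keeping the pending list sorted: insert each task after the last entry whose priority is at least as high, so ties stay FIFO. A small sketch with a hypothetical `addWithPriority` helper (not p-queue's code):

```javascript
// Sketch of priority insertion: higher numbers run first, and ties keep
// FIFO order. Illustrative only -- not p-queue's implementation.
const pending = [];

function addWithPriority(task, priority = 0) {
  const index = pending.findIndex(entry => entry.priority < priority);
  const entry = { task, priority };
  if (index === -1) {
    pending.push(entry); // lowest (or equal) priority goes to the back
  } else {
    pending.splice(index, 0, entry); // jump ahead of lower-priority work
  }
}

addWithPriority(() => 'low', 0);
addWithPriority(() => 'high', 10);
addWithPriority(() => 'mid', 5);
console.log(pending.map(entry => entry.task())); // [ 'high', 'mid', 'low' ]
```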

p-throttle processes tasks in FIFO order.

  • No priority support.
  • Focuses strictly on time spacing.
import pThrottle from 'p-throttle';

const throttle = pThrottle({ limit: 1, interval: 1000 });
const log = throttle(message => console.log(message));

// First called runs first (when the interval allows)
await log('A');
await log('B');

⚠️ Error Handling & Resilience

How failures affect the flow differs slightly, though all propagate rejections.

p-limit propagates errors directly.

  • If a task fails, the returned promise rejects.
  • Other queued tasks continue normally.
import pLimit from 'p-limit';

const limit = pLimit(1);

try {
  await limit(() => Promise.reject(new Error('Fail')));
} catch (err) {
  console.error(err); // Caught here
}
// Next task in queue still runs

p-queue propagates errors but emits events.

  • You can listen for error events globally.
  • Useful for logging without wrapping every task.
import PQueue from 'p-queue';

const queue = new PQueue();

queue.on('error', err => console.error(err));

queue.add(() => Promise.reject(new Error('Fail'))).catch(() => {});
// The listener logs the error; the add() promise also rejects, so catch
// it to avoid an unhandled rejection. The queue continues either way.

p-throttle propagates errors directly.

  • Similar to p-limit, rejection is handled by the caller.
  • Throttling continues regardless of success/failure.
import pThrottle from 'p-throttle';

const throttle = pThrottle({ limit: 1, interval: 1000 });
const failing = throttle(() => Promise.reject(new Error('Fail')));

try {
  await failing();
} catch (err) {
  console.error(err);
}
// Throttle timer continues

πŸ“Š Summary: Key Differences

| Feature | p-limit | p-queue | p-throttle |
| --- | --- | --- | --- |
| Primary Goal | Limit concurrent count | Manage task queue | Limit rate over time |
| State | Stateless | Stateful | Stateless |
| Priority | ❌ No | ✅ Yes | ❌ No |
| Pause/Resume | ❌ No | ✅ Yes | ❌ No |
| Config | concurrency | concurrency, timeout | limit, interval |
| Best For | Simple concurrency | Complex workflows | API rate limits |

πŸ’‘ The Big Picture

p-limit is the lightweight choice πŸͺΆ. Use it when you just need to stop your code from opening 1000 database connections at once. It adds almost no overhead and requires minimal setup.

p-queue is the command center πŸŽ›οΈ. Use it when you need visibility and control. If you need to pause a batch job, prioritize urgent tasks, or wait for everything to finish before shutting down, this is the tool.

p-throttle is the metronome 🎡. Use it when time is the constraint. If an API says "100 requests per minute," p-limit might still burst 100 requests in one second. p-throttle ensures they are spread out correctly.

Final Thought: While p-queue can technically replace p-limit (by setting concurrency), it brings extra weight. Conversely, p-throttle solves a problem the others cannot (time-based rate limiting). Choose the tool that matches your constraint: count, control, or time.

How to Choose: p-throttle vs p-limit vs p-queue

  • p-throttle:

    Choose p-throttle when you need to enforce a rate limit over time, such as adhering to third-party API constraints (e.g., 5 requests per second). Use this when the timing of execution matters more than the strict concurrency count.

  • p-limit:

    Choose p-limit when you need a lightweight, stateless solution to restrict the number of concurrent operations, such as limiting file uploads or database connections. It is ideal for simple scripts where you do not need to manage a queue state, pause execution, or prioritize tasks.

  • p-queue:

    Choose p-queue for complex workflows requiring task prioritization, pause/resume capabilities, or completion events. It is best suited for batch processing jobs, workers that need to drain completely before shutdown, or scenarios where tasks arrive dynamically and need ordering.

README for p-throttle

p-throttle

Throttle promise-returning & async functions

Also works with normal functions.

It rate-limits function calls without discarding them, making it ideal for external API interactions where avoiding call loss is crucial. All calls are queued and executedβ€”the last call is guaranteed to run with its original context and arguments preserved.

Install

npm install p-throttle

Browser

This package works in the browser with modern browsers that support WeakRef and FinalizationRegistry (Chrome 84+, Firefox 79+, Safari 14.1+, Edge 84+).

Usage

This calls the function at most twice per second:

import pThrottle from 'p-throttle';

const now = Date.now();

const throttle = pThrottle({
	limit: 2,
	interval: 1000
});

const throttled = throttle(async index => {
	const secDiff = ((Date.now() - now) / 1000).toFixed();
	return `${index}: ${secDiff}s`;
});

for (let index = 1; index <= 6; index++) {
	(async () => {
		console.log(await throttled(index));
	})();
}
//=> 1: 0s
//=> 2: 0s
//=> 3: 1s
//=> 4: 1s
//=> 5: 2s
//=> 6: 2s

API

pThrottle(options)

Returns a throttle function.

options

Type: object

Both the limit and interval options must be specified.

limit

Type: number

The maximum number of calls within an interval.

interval

Type: number

The timespan for limit in milliseconds.

strict

Type: boolean
Default: false

Use a strict, more resource-intensive, throttling algorithm. The default algorithm uses a windowed approach that will work correctly in most cases, limiting the total number of calls at the specified limit per interval window. The strict algorithm throttles each call individually, ensuring the limit is not exceeded for any interval.
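To illustrate the strict idea, here is a sliding-window sketch: remember the start times of the last `limit` executions and delay a new call until the oldest one leaves the interval. The `strictThrottle` helper is hypothetical, a concept demo rather than p-throttle's actual algorithm:

```javascript
// Sketch of strict-style throttling: keep the timestamps of recent starts
// and ensure no sliding interval ever contains more than `limit` of them.
// Illustrative only -- not p-throttle's algorithm as implemented.
function strictThrottle({ limit, interval }, fn) {
  const starts = []; // start times of recent executions, oldest first

  return async (...args) => {
    for (;;) {
      const now = Date.now();
      // Forget starts that are already older than one interval
      while (starts.length > 0 && now - starts[0] >= interval) {
        starts.shift();
      }
      if (starts.length < limit) {
        starts.push(now);
        return fn(...args);
      }
      // Wait until the oldest start ages out of the sliding window
      await new Promise(resolve => setTimeout(resolve, starts[0] + interval - now));
    }
  };
}

// Usage: no 1000ms span ever contains more than 2 executions
const run = strictThrottle({ limit: 2, interval: 1000 }, id => console.log(id));
for (const id of [1, 2, 3, 4]) run(id);
```

The windowed default only resets a counter at interval boundaries, so a burst at the end of one window plus a burst at the start of the next can briefly exceed the limit within a sliding interval; the strict approach above rules that out.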

signal

Type: AbortSignal

Abort pending executions. When aborted, all unresolved promises are rejected with signal.reason.

import pThrottle from 'p-throttle';

const controller = new AbortController();

const throttle = pThrottle({
	limit: 2,
	interval: 1000,
	signal: controller.signal
});

const throttled = throttle(() => {
	console.log('Executing...');
});

await throttled();
await throttled();
controller.abort('aborted');
await throttled();
//=> Executing...
//=> Executing...
//=> Promise rejected with reason `aborted`

onDelay

Type: Function

Get notified when function calls are delayed due to exceeding the limit of allowed calls within the given interval. The delayed call arguments are passed to the onDelay callback.

Can be useful for monitoring the throttling efficiency.

In the following example, the third call gets delayed and triggers the onDelay callback:

import pThrottle from 'p-throttle';

const throttle = pThrottle({
	limit: 2,
	interval: 1000,
	onDelay: (a, b) => {
		console.log(`Reached interval limit, call is delayed for ${a} ${b}`);
	},
});

const throttled = throttle((a, b) => {
	console.log(`Executing with ${a} ${b}...`);
});

await throttled(1, 2);
await throttled(3, 4);
await throttled(5, 6);
//=> Executing with 1 2...
//=> Executing with 3 4...
//=> Reached interval limit, call is delayed for 5 6
//=> Executing with 5 6...

weight

Type: Function

Calculate the weight/cost of each function call based on its arguments.

The weight determines how much of the limit is consumed by each call. This is useful for rate limiting APIs that use point-based or cost-based limits, where different operations consume different amounts of the quota.

By default, each call has a weight of 1.

In the following example, queries with different numbers of tables consume different amounts of the rate limit:

import pThrottle from 'p-throttle';

// Storyblok GraphQL API: 100 points per second
// Each query costs 1 point for the connection plus 1 point per table
const throttle = pThrottle({
	limit: 100,
	interval: 1000,
	weight: numberOfTables => 1 + numberOfTables
});

const fetchData = throttle(numberOfTables => {
	// Fetch GraphQL data
	return fetch('...');
});

await fetchData(1); // Costs 2 points
await fetchData(3); // Costs 4 points

throttle(function_)

Returns a throttled version of function_.

function_

Type: Function

A promise-returning/async function or a normal function.

throttledFn.isEnabled

Type: boolean
Default: true

Whether future function calls should be throttled and count towards throttling thresholds.

throttledFn.queueSize

Type: number

The number of queued items waiting to be executed.

This can be useful for implementing queue management strategies, such as using a fallback when the queue is too full.

import pThrottle from 'p-throttle';

const throttle = pThrottle({limit: 1, interval: 1000});

const accurateData = throttle(() => fetch('https://accurate-api.example.com'));
const roughData = () => fetch('https://rough-api.example.com');

async function getData() {
	if (accurateData.queueSize >= 3) {
		return roughData(); // Queue full, use fallback
	}

	return accurateData();
}

Related

  • p-debounce - Debounce promise-returning & async functions
  • p-limit - Run multiple promise-returning & async functions with limited concurrency
  • p-memoize - Memoize promise-returning & async functions
  • More…