p-limit, p-throttle, and limiter are utilities for controlling how asynchronous tasks execute in JavaScript applications. p-limit restricts the number of promises running at the same time (concurrency). p-throttle limits how many times a function can run over a specific time period (rate limiting). limiter provides a generic token bucket algorithm for rate limiting, often used in server-side contexts. While all three manage flow, they solve different problems regarding parallelism and time-based constraints.
When building robust JavaScript applications, you often need to control how asynchronous tasks execute. Running too many tasks at once can crash a browser or overload a server. Calling an API too fast can get your IP banned. The p-limit, p-throttle, and limiter packages solve these problems, but they work differently. Let's compare how they handle flow control.
p-limit focuses on concurrency. It ensures only a specific number of promises run at the same time. Once one finishes, the next in line starts.
```js
import pLimit from 'p-limit';

const limit = pLimit(2); // Only 2 at once

const jobs = [
  limit(() => fetch('/api/1')),
  limit(() => fetch('/api/2')),
  limit(() => fetch('/api/3')) // Waits for a free slot
];
await Promise.all(jobs);
```
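Under the hood, a concurrency limiter like this boils down to a counter and a queue: start a task when a slot is free, otherwise park it until one opens. A simplified sketch in that spirit (illustrative only, not p-limit's actual implementation):

```js
// Minimal concurrency limiter sketch (not the real p-limit code)
function makeLimit(concurrency) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= concurrency || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => {
      active--;
      next(); // A slot opened; start the next queued task
    });
  };
  return fn =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
}
```

The real package adds extras such as `activeCount`, `pendingCount`, and argument passing, but the slot-and-queue mechanic is the core idea.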
p-throttle focuses on time-based rate limiting. It ensures a function only runs a certain number of times within a time window.
```js
import pThrottle from 'p-throttle';

const throttle = pThrottle({ limit: 2, interval: 1000 }); // 2 per second
const throttledFetch = throttle(url => fetch(url)); // Wrap once, call many times

const jobs = [
  throttledFetch('/api/1'),
  throttledFetch('/api/2'),
  throttledFetch('/api/3') // Waits for the next time window
];
await Promise.all(jobs);
```
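A throttle like this can be modeled by remembering the timestamps of recent executions and delaying any call that would exceed the limit within the window. A simplified sketch (illustrative only, not p-throttle's actual implementation):

```js
// Minimal sliding-window throttle sketch (not the real p-throttle code)
function makeThrottle(limit, intervalMs) {
  const stamps = []; // start times of recent executions
  return fn => (...args) =>
    new Promise(resolve => {
      const attempt = () => {
        const now = Date.now();
        // Forget executions that have left the window
        while (stamps.length && now - stamps[0] >= intervalMs) stamps.shift();
        if (stamps.length < limit) {
          stamps.push(now);
          resolve(fn(...args));
        } else {
          // Retry once the oldest execution ages out of the window
          setTimeout(attempt, intervalMs - (now - stamps[0]));
        }
      };
      attempt();
    });
}
```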
limiter focuses on token buckets. It removes tokens from a bucket that refills over time. If no tokens are available, you wait or fail.
```js
import { RateLimiter } from 'limiter';

const limiter = new RateLimiter({ tokensPerInterval: 2, interval: 'second' });

// removeTokens resolves once a token is available
const remaining = await limiter.removeTokens(1);
fetch('/api/1');
```
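The token bucket itself is a simple idea: the bucket refills at a steady rate, each task spends a token, and an empty bucket means waiting or failing. A stripped-down synchronous model (illustrative only; limiter's real implementation adds named intervals, async waiting, and hierarchical buckets):

```js
// Minimal token bucket sketch (not the real limiter code)
class SimpleBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.tokens = capacity; // start full
    this.refillPerSec = refillPerSec;
    this.last = Date.now();
  }

  tryRemove(n = 1) {
    const now = Date.now();
    // Refill in proportion to elapsed time, capped at capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= n) {
      this.tokens -= n;
      return true;
    }
    return false;
  }
}
```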
Modern JavaScript relies heavily on Promises and async/await. How each package fits into this workflow matters for code clarity.
p-limit is built for Promises. It returns a Promise that resolves when the task completes. No wrapping needed.
```js
// p-limit: Direct Promise support
const limit = pLimit(5);
const result = await limit(() => asyncTask());
```
p-throttle is also built for Promises. The throttled function returns a Promise directly.
```js
// p-throttle: The wrapped function returns a Promise
const throttle = pThrottle({ limit: 1, interval: 1000 });
const throttledTask = throttle(asyncTask);
const result = await throttledTask();
```
limiter was callback-based in its 1.x releases, which meant wrapping calls in a Promise by hand. Since 2.x, removeTokens returns a Promise, so it works with async/await directly.

```js
// limiter (2.x): removeTokens returns a Promise
const limiter = new RateLimiter({ tokensPerInterval: 1, interval: "second" });
await limiter.removeTokens(1);
await asyncTask();
```
Simple tasks need simple config. Complex systems need fine control. Here is how much setup each requires.
p-limit requires just one number: the concurrency limit. It is the simplest option.
```js
// p-limit: Single integer config
const limit = pLimit(10);
```
p-throttle requires two numbers: the limit count and the time interval in milliseconds.
```js
// p-throttle: Limit and interval config
const throttle = pThrottle({ limit: 10, interval: 1000 });
```
limiter offers richer configuration: token bucket size, refill rate, hierarchical (parent) buckets, and a fireImmediately mode. It is more verbose.
```js
// limiter: Token bucket config
const limiter = new RateLimiter({
  tokensPerInterval: 10,
  interval: "second",
  fireImmediately: false
});
```
Sometimes you need to stop tasks or check status. Each package handles control differently.
p-limit tracks active and pending counts. You can check how many tasks are running.
```js
// p-limit: Check active count
console.log(limit.activeCount); // Running now
console.log(limit.pendingCount); // Waiting
```
p-throttle lets you toggle throttling via the isEnabled property on the wrapped function, and abort queued calls. Useful for pausing limits.

```js
// p-throttle: Toggle control (throttledTask is a function returned by throttle(fn))
throttledTask.isEnabled = false; // Run without the limit
throttledTask.isEnabled = true;  // Re-apply the limit
throttledTask.abort();           // Reject all queued calls
```
limiter allows you to try removing tokens without waiting. If none are available, it returns false immediately.
```js
// limiter: Try without waiting
const hasTokens = limiter.tryRemoveTokens(1);
if (hasTokens) {
  // Run task
}
```
You have 100 images to upload. Uploading all at once freezes the browser.
This calls for p-limit:

```js
// p-limit: Concurrency control
const limit = pLimit(5);
const uploads = images.map(img => limit(() => upload(img)));
await Promise.all(uploads);
```
You are calling a free API that allows 10 requests per second. You must not exceed this.
This calls for p-throttle:

```js
// p-throttle: Rate control
const throttle = pThrottle({ limit: 10, interval: 1000 });
const throttledGet = throttle(id => api.get(id));
const requests = ids.map(id => throttledGet(id));
await Promise.all(requests);
```
You are maintaining a Node.js service that needs a token bucket with explicit control over refill rate and burst capacity.

This calls for limiter:

```js
// limiter: Algorithm control
const limiter = new RateLimiter({ tokensPerInterval: 5, interval: "second" });
await limiter.removeTokens(1); // Resolves once a token is free
next(); // Proceed to handler
```
While they differ in mechanism, these packages share common goals and traits.
```js
// All prevent this:
// while (true) { fetch('/api'); } // Crashes or gets banned

// All handle queuing internally:
// Task 1 runs -> Task 2 waits -> Task 3 waits

// All turn chaotic calls into ordered flow:
// await controlledTask(); // Predictable timing
```
| Feature | p-limit | p-throttle | limiter |
|---|---|---|---|
| Primary Goal | Concurrency Control | Time-based Rate Limit | Token Bucket Algorithm |
| Input Config | Integer (count) | Object (limit, interval) | Object (tokens, interval) |
| Async Style | Promise-native | Promise-native | Promise-based (2.x+) |
| Best For | Parallel batches | API rate limits | Server-side logic |
| Queue Style | Wait for slot | Wait for time | Wait for token |
p-limit is your go-to for parallelism. Use it when you have a lot of work but limited resources (like network connections). It keeps your app smooth without slowing down unnecessarily.
p-throttle is your go-to for compliance. Use it when external rules dictate how fast you can go (like API rate limits). It keeps you safe from bans and errors.
limiter is your go-to for control. Use it when you need a full token bucket implementation with features like burst capacity or byte-level throttling. It offers depth but requires more setup.
Final Thought: For modern frontend development, prefer p-limit and p-throttle. They fit naturally into async/await code. Reach for limiter when you need its token bucket primitives, such as configurable burst capacity or byte-level throttling.
Choose limiter if you require a full token bucket implementation with fine-grained control over bucket size and refill rates. It is well suited to server-side applications such as API clients and crawlers. Note that releases before 2.x exposed a callback API rather than Promises, which required extra wrapping in modern async code.
Choose p-limit when you need to control concurrency, such as limiting simultaneous API calls to prevent browser freezing or server overload. It is ideal for scenarios where you have many tasks but only want a fixed number running at once, like uploading multiple files in parallel batches. This package is Promise-native and integrates seamlessly with async/await workflows.
Choose p-throttle when you need to enforce a rate limit based on time, such as respecting an API constraint of 10 requests per second. It is best for situations where tasks must be spaced out over time rather than just limited by parallel count. Like p-limit, it is designed for modern Promise-based code and offers simple enable/disable controls.
Provides a generic rate limiter for the web and node.js. Useful for API clients, web crawling, or other tasks that need to be throttled. Two classes are exposed, RateLimiter and TokenBucket. TokenBucket provides a lower level interface to rate limiting with a configurable burst rate and drip rate. RateLimiter sits on top of the token bucket and adds a restriction on the maximum number of tokens that can be removed each interval to comply with common API restrictions such as "150 requests per hour maximum".
```sh
yarn add limiter
```
A simple example allowing 150 requests per hour:
```js
import { RateLimiter } from "limiter";

// Allow 150 requests per hour (the Twitter search limit). Also understands
// 'second', 'minute', 'day', or a number of milliseconds
const limiter = new RateLimiter({ tokensPerInterval: 150, interval: "hour" });

async function sendRequest() {
  // This call will throw if we request more than the maximum number of requests
  // that were set in the constructor.
  // remainingRequests tells us how many additional requests could be sent
  // right this moment
  const remainingRequests = await limiter.removeTokens(1);
  callMyRequestSendingFunction(...);
}
```
Another example allowing one message to be sent every 250ms:
```js
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });

async function sendMessage() {
  const remainingMessages = await limiter.removeTokens(1);
  callMyMessageSendingFunction(...);
}
```
The default behaviour is to wait for the duration of the rate limiting that's currently in effect before the promise is resolved, but if you pass in `fireImmediately: true`, the promise will be resolved immediately with `remainingRequests` set to -1:
import { RateLimiter } from "limiter";
const limiter = new RateLimiter({
tokensPerInterval: 150,
interval: "hour",
fireImmediately: true
});
async function requestHandler(request, response) {
// Immediately send 429 header to client when rate limiting is in effect
const remainingRequests = await limiter.removeTokens(1);
if (remainingRequests < 0) {
response.writeHead(429, {'Content-Type': 'text/plain;charset=UTF-8'});
response.end('429 Too Many Requests - your IP is being rate limited');
} else {
callMyMessageSendingFunction(...);
}
}
A synchronous method, tryRemoveTokens(), is available in both RateLimiter and TokenBucket. This will return immediately with a boolean value indicating if the token removal was successful.
```js
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 10, interval: "second" });

if (limiter.tryRemoveTokens(5)) {
  console.log('Tokens removed');
} else {
  console.log('No tokens removed');
}
```
To get the number of remaining tokens outside the removeTokens promise, simply use the getTokensRemaining method.
```js
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });

// Prints 1 since we did not remove a token and our number of tokens per
// interval is 1
console.log(limiter.getTokensRemaining());
```
Using the token bucket directly to throttle at the byte level:
```js
import { TokenBucket } from "limiter";

const BURST_RATE = 1024 * 1024 * 150; // 150 MB/sec burst rate
const FILL_RATE = 1024 * 1024 * 50; // 50 MB/sec sustained rate

// We could also pass a parent token bucket in to create a hierarchical token
// bucket
const bucket = new TokenBucket({
  bucketSize: BURST_RATE,
  tokensPerInterval: FILL_RATE,
  interval: "second"
});

async function handleData(myData) {
  await bucket.removeTokens(myData.byteLength);
  sendMyData(myData);
}
```
Both the token bucket and rate limiter should be used with a message queue or some way of preventing multiple simultaneous calls to removeTokens(). Otherwise, earlier messages may get held up for long periods of time if more recent messages are continually draining the token bucket. This can lead to out of order messages or the appearance of "lost" messages under heavy load.
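One lightweight way to enforce that ordering, short of a full message queue, is to funnel every acquisition through a single promise chain so calls are granted strictly in arrival order. A sketch of that pattern (the wrapped task is a placeholder for any removeTokens-style async call):

```js
// Serialize async calls on one promise chain (FIFO ordering)
function makeSerializer() {
  let tail = Promise.resolve();
  return function run(task) {
    const result = tail.then(() => task());
    // Keep the chain alive even if a task rejects
    tail = result.catch(() => {});
    return result;
  };
}

// Usage sketch: route every acquisition through the serializer
// const serialize = makeSerializer();
// await serialize(() => limiter.removeTokens(1));
```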
MIT License