limiter vs p-limit vs p-throttle
Node.js rate-limiting libraries

Rate-limiting libraries control how often a resource is accessed, ensuring the system is not overloaded under high concurrency. They are useful for API requests, database operations, and any other work whose call rate must be capped. Each library implements a different strategy, helping developers keep the system stable while making good use of its resources.


Statistics

| Package    | Downloads  | Stars | Size    | Issues | Last publish | License |
| ---------- | ---------- | ----- | ------- | ------ | ------------ | ------- |
| limiter    | 11,188,316 | 1,558 | 158 kB  | 14     | 1 year ago   | MIT     |
| p-limit    | 0          | 2,807 | 14.9 kB | 0      | 1 month ago  | MIT     |
| p-throttle | 0          | 513   | 21.6 kB | 0      | 4 months ago | MIT     |

Feature comparison: limiter vs p-limit vs p-throttle

Use cases

  • limiter:

    limiter suits straightforward rate-limiting needs, such as capping the frequency of API requests. Its flexible configuration options make it a good fit for small projects or rapid prototyping.

  • p-limit:

    p-limit is primarily for capping the number of Promises executing concurrently, which suits workloads with many asynchronous operations, such as batch request processing or bulk file reads.

  • p-throttle:

    p-throttle suits scenarios where a function's call rate must be limited, such as handling user input or scroll events, so that high-frequency triggers do not degrade performance.
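The concurrency cap that p-limit provides can be illustrated with a minimal hand-rolled sketch of the same pattern. This is not p-limit's source, just the idea behind it; the real library is used as `const limit = pLimit(2); limit(() => doWork())`:

```javascript
// Minimal sketch of the pattern p-limit implements: run at most
// `concurrency` of the queued async tasks at once.
function pLimitSketch(concurrency) {
  let active = 0;
  const queue = [];

  const next = () => {
    if (active >= concurrency || queue.length === 0) return;
    active += 1;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => {
      active -= 1;
      next(); // a slot freed up; start the next queued task
    });
  };

  // Wrap an async task so it waits for a free slot before running.
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
}

// Five tasks, but never more than two in flight at once.
const limit = pLimitSketch(2);
let running = 0;
let peak = 0;

const task = (id) => async () => {
  running += 1;
  peak = Math.max(peak, running);
  await new Promise((r) => setTimeout(r, 20));
  running -= 1;
  return id;
};

const results = Promise.all([1, 2, 3, 4, 5].map((id) => limit(task(id))));
results.then((ids) => console.log(ids, "peak concurrency:", peak));
```

Results arrive in input order because Promise.all preserves positions, even though tasks 3–5 wait for a free slot.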

Configuration flexibility

  • limiter:

    limiter offers a range of configuration options, letting you tailor the rate-limiting strategy (time window, maximum request count, and so on) when flexibility matters.

  • p-limit:

    p-limit's configuration is deliberately minimal, centering on the concurrency cap, which makes it quick to set up for basic concurrency control.

  • p-throttle:

    p-throttle exposes interval-based configuration, letting you set how often the wrapped function may run, which suits scenarios that need fine-grained control.
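The interval-based configuration described above can be sketched with a hand-rolled equivalent. The `{ limit, interval }` shape mirrors p-throttle's options, but `throttleSketch` below is an illustrative stand-in, not the library itself:

```javascript
// Sketch of interval-based throttling: at most `limit` calls per
// `interval` milliseconds; extra calls are delayed, not dropped.
function throttleSketch({ limit, interval }) {
  const timestamps = []; // start times of calls inside the current window

  return (fn) => (...args) =>
    new Promise((resolve) => {
      const run = () => {
        const now = Date.now();
        // Discard timestamps that have fallen out of the window.
        while (timestamps.length && now - timestamps[0] >= interval) {
          timestamps.shift();
        }
        if (timestamps.length < limit) {
          timestamps.push(now);
          resolve(fn(...args));
        } else {
          // Retry when the oldest call in the window expires.
          setTimeout(run, interval - (now - timestamps[0]));
        }
      };
      run();
    });
}

// At most 2 calls per 100 ms: calls 3 and 4 wait for the next window.
const throttled = throttleSketch({ limit: 2, interval: 100 })((id) => id);
const started = Date.now();
const throttledResults = Promise.all([1, 2, 3, 4].map(throttled));
throttledResults.then((ids) =>
  console.log(ids, `elapsed ~${Date.now() - started} ms`)
);
```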

Performance impact

  • limiter:

    limiter may introduce some latency while requests wait for tokens, especially under high concurrency, but the overhead is usually manageable and acceptable for most applications.

  • p-limit:

    p-limit keeps throughput steady by capping concurrency, preventing overload and maintaining good response times when processing large batches of asynchronous operations.

  • p-throttle:

    p-throttle reduces overhead by capping how often a function runs, which keeps the system stable when an operation is triggered frequently.

Learning curve

  • limiter:

    limiter has a gentle learning curve and is easy to pick up, suiting beginners and rapid development.

  • p-limit:

    p-limit is simple to use, especially for developers already comfortable with Promises, who can master it quickly.

  • p-throttle:

    p-throttle's concept is simple and intuitive, particularly when applied to event handling, making it approachable for any developer.

Community support

  • limiter:

    limiter has a relatively small community; it works well for simple projects but may lack documentation and examples for more complex scenarios.

  • p-limit:

    p-limit has a solid user base, with reasonably rich documentation and examples, suiting small-to-medium projects.

  • p-throttle:

    p-throttle also enjoys good community support, with detailed documentation and usage examples covering a wide range of scenarios.

How to choose: limiter vs p-limit vs p-throttle

  • limiter:

    Choose limiter if you want a simple, easy-to-use rate-limiting tool for basic needs without complex configuration. It supports several rate-limiting strategies and fits small projects or rapid prototyping.

  • p-limit:

    Choose p-limit if you need to cap concurrency while processing Promises. It is ideal for running large numbers of asynchronous tasks without exceeding a specified concurrency limit.

  • p-throttle:

    Choose p-throttle if you need to cap how often a function is called within a given interval. It suits scenarios that control event-trigger frequency, such as scrolling or window resizing.

limiter's README

limiter


Provides a generic rate limiter for the web and node.js. Useful for API clients, web crawling, or other tasks that need to be throttled. Two classes are exposed, RateLimiter and TokenBucket. TokenBucket provides a lower level interface to rate limiting with a configurable burst rate and drip rate. RateLimiter sits on top of the token bucket and adds a restriction on the maximum number of tokens that can be removed each interval to comply with common API restrictions such as "150 requests per hour maximum".

Installation

yarn add limiter

Usage

A simple example allowing 150 requests per hour:

import { RateLimiter } from "limiter";

// Allow 150 requests per hour (the Twitter search limit). Also understands
// 'second', 'minute', 'day', or a number of milliseconds
const limiter = new RateLimiter({ tokensPerInterval: 150, interval: "hour" });

async function sendRequest() {
  // This call will throw if we request more than the maximum number of requests
  // that were set in the constructor
  // remainingRequests tells us how many additional requests could be sent
  // right this moment
  const remainingRequests = await limiter.removeTokens(1);
  callMyRequestSendingFunction(...);
}

Another example allowing one message to be sent every 250ms:

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });

async function sendMessage() {
  const remainingMessages = await limiter.removeTokens(1);
  callMyMessageSendingFunction(...);
}

The default behaviour is to wait for the duration of the rate limiting that's currently in effect before the promise is resolved, but if you pass in "fireImmediately": true, the promise will be resolved immediately with remainingRequests set to -1:

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({
  tokensPerInterval: 150,
  interval: "hour",
  fireImmediately: true
});

async function requestHandler(request, response) {
  // Immediately send 429 header to client when rate limiting is in effect
  const remainingRequests = await limiter.removeTokens(1);
  if (remainingRequests < 0) {
    response.writeHead(429, {'Content-Type': 'text/plain;charset=UTF-8'});
    response.end('429 Too Many Requests - your IP is being rate limited');
  } else {
    callMyMessageSendingFunction(...);
  }
}

A synchronous method, tryRemoveTokens(), is available in both RateLimiter and TokenBucket. This will return immediately with a boolean value indicating if the token removal was successful.

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 10, interval: "second" });

if (limiter.tryRemoveTokens(5))
  console.log('Tokens removed');
else
  console.log('No tokens removed');

To get the number of remaining tokens outside the removeTokens promise, simply use the getTokensRemaining method.

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });

// Prints 1 since we did not remove a token and our number of tokens per
// interval is 1
console.log(limiter.getTokensRemaining());

Using the token bucket directly to throttle at the byte level:

import { TokenBucket } from "limiter";

const BURST_RATE = 1024 * 1024 * 150; // 150 MB/sec burst rate
const FILL_RATE = 1024 * 1024 * 50; // 50 MB/sec sustained rate

// We could also pass a parent token bucket in to create a hierarchical token
// bucket
// bucketSize, tokensPerInterval, interval
const bucket = new TokenBucket({
  bucketSize: BURST_RATE,
  tokensPerInterval: FILL_RATE,
  interval: "second"
});

async function handleData(myData) {
  await bucket.removeTokens(myData.byteLength);
  sendMyData(myData);
}

Additional Notes

Both the token bucket and rate limiter should be used with a message queue or some way of preventing multiple simultaneous calls to removeTokens(). Otherwise, earlier messages may get held up for long periods of time if more recent messages are continually draining the token bucket. This can lead to out of order messages or the appearance of "lost" messages under heavy load.
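One lightweight way to meet that requirement, sketched here as an illustrative pattern rather than part of the limiter API, is to chain every caller onto a single promise so the wrapped function runs strictly one at a time, in FIFO order:

```javascript
// Serialize an async function: each call waits for the previous one to
// settle before starting, preserving submission order.
function serialized(fn) {
  let tail = Promise.resolve();
  return (...args) => {
    const result = tail.then(() => fn(...args));
    // Keep the chain alive even if a call rejects.
    tail = result.catch(() => {});
    return result;
  };
}

// Demo with a stub sender whose random delay would otherwise reorder
// messages; serialization keeps them FIFO. In real use the stub body
// would first `await limiter.removeTokens(1)` (hypothetical usage).
const order = [];
const send = serialized(async (msg) => {
  await new Promise((r) => setTimeout(r, Math.random() * 10));
  order.push(msg);
  return msg;
});

const allSent = Promise.all(["a", "b", "c"].map(send));
allSent.then(() => console.log(order)); // messages arrive in FIFO order
```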

License

MIT License