@alloc/quick-lru vs lru-cache vs quick-lru
In-Memory Caching Strategies in JavaScript

LRU (Least Recently Used) caches are essential for managing memory in JavaScript applications by automatically removing old data when limits are reached. lru-cache is the feature-rich standard for Node.js environments, offering time-to-live (TTL) and size-based eviction. quick-lru provides a minimal, fast implementation focused on entry count limits without extra overhead. @alloc/quick-lru is a specialized fork of quick-lru, often used in specific tooling ecosystems to ensure dependency consistency or apply targeted patches.

Stat Detail

Package          | Downloads | Stars | Size    | Issues | Publish      | License
@alloc/quick-lru | 0         | 757   | –       | 0      | 5 years ago  | MIT
lru-cache        | 0         | 5,872 | 1.78 MB | 1      | 18 days ago  | BlueOak-1.0.0
quick-lru        | 0         | 757   | 20.4 kB | 0      | 7 months ago | MIT

In-Memory Caching Strategies: lru-cache vs quick-lru vs @alloc/quick-lru

When building high-performance JavaScript applications, managing memory effectively is crucial. Caching allows you to store expensive computations or API responses for quick retrieval. However, unbounded caches lead to memory leaks. This is where LRU (Least Recently Used) caches come in — they automatically remove the oldest items when the cache reaches its limit. Let's compare the three main options available in the npm ecosystem.
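Before diving in, the core mechanic is worth making concrete. The sketch below is a deliberately naive LRU built on a plain Map (which iterates in insertion order); it is illustrative only, not how any of these packages is actually implemented:

```javascript
// Naive LRU sketch: a Map iterates in insertion order, so the first key
// is always the least recently used one.
class TinyLRU {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    this.map.delete(key);      // refresh position if the key already exists
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      const oldest = this.map.keys().next().value; // least recently used
      this.map.delete(oldest);
    }
  }
}

const lru = new TinyLRU(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touching 'a' makes 'b' the eviction candidate
lru.set('c', 3); // evicts 'b', not 'a'
console.log([...lru.map.keys()]); // ['a', 'c']
```

Note that a read changes eviction order, not just a write; that recency update is what distinguishes LRU from a plain FIFO buffer.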

🏗️ Initialization and Configuration

The setup process varies significantly between the feature-heavy lru-cache and the minimal quick-lru variants.

lru-cache offers extensive configuration options, including max entries, max size in bytes, and time-to-live (TTL).

// lru-cache
import { LRUCache } from 'lru-cache';

const cache = new LRUCache({
  max: 500, // Max number of items
  maxSize: 1024 * 1024, // Max size in bytes
  ttl: 1000 * 60 * 5, // Time to live in ms (5 minutes)
  allowStale: false
});

quick-lru keeps configuration minimal: a required maxSize (the number of items), plus a few optional extras such as maxAge and onEviction.

// quick-lru
import QuickLRU from 'quick-lru';

const cache = new QuickLRU({
  maxSize: 500 // Max number of items only
});

@alloc/quick-lru mirrors the quick-lru API as it is a fork, maintaining the same simple configuration.

// @alloc/quick-lru
import QuickLRU from '@alloc/quick-lru';

const cache = new QuickLRU({
  maxSize: 500 // Max number of items only
});

📥 Setting and Retrieving Data

All three packages support basic set and get operations, but lru-cache provides additional metadata handling.

lru-cache allows you to set values with specific TTL overrides and retrieve metadata.

// lru-cache
cache.set('key', 'value', { ttl: 10000 }); // Override TTL for this item
const value = cache.get('key');
const info = cache.info('key'); // Get metadata such as remaining TTL and size

quick-lru uses standard map-like methods without extra options.

// quick-lru
cache.set('key', 'value');
const value = cache.get('key');

@alloc/quick-lru functions identically to quick-lru for basic operations.

// @alloc/quick-lru
cache.set('key', 'value');
const value = cache.get('key');

⏳ Time-To-Live (TTL) and Expiration

This is a major differentiator. lru-cache has rich TTL support, with per-item overrides, stale-value handling, and optional automatic purging. The quick-lru variants offer only a lazily enforced maxAge (expired items are purged on the next read or write), so if you need dependable time-based expiration, lru-cache is the stronger choice.

lru-cache has built-in TTL support. You can set a default TTL or override it per item.

// lru-cache
const cache = new LRUCache({ max: 100, ttl: 5000 }); // bound entry count too; TTL alone can grow unbounded
cache.set('temp', 'data'); // Expires in 5 seconds
// Check if valid
if (cache.has('temp')) { /* ... */ }

quick-lru has no proactive expiration, but recent versions accept a maxAge option: expired items are evicted lazily, on the next read or write.

// quick-lru
const cache = new QuickLRU({ maxSize: 100, maxAge: 5000 });
cache.set('temp', 'data'); // evicted lazily once 5 seconds have passed

@alloc/quick-lru documents the same maxAge option in its README (reproduced below), including per-item overrides via the set() options argument.

// @alloc/quick-lru
const cache = new QuickLRU({ maxSize: 100, maxAge: 5000 });
cache.set('temp', 'data', {maxAge: 1000}); // per-item override

🗑️ Eviction and Size Management

How the cache decides what to remove matters for memory safety.

lru-cache can evict based on entry count or total byte size. It also supports fetchMethod for automatic background updates.

// lru-cache
const cache = new LRUCache({
  maxSize: 1024,
  sizeCalculation: (value) => value.length // Evict based on bytes
});
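fetchMethod is essentially an automated cache-miss handler wired into cache.fetch(key). As a rough illustration of the pattern it takes care of (a hand-rolled sketch, not lru-cache's actual internals), note how caching the promise itself coalesces concurrent loads:

```javascript
// Sketch of the cache-miss -> async load -> store pattern that
// lru-cache's fetchMethod automates (illustrative; no eviction, no TTL).
const store = new Map();

async function fetchThrough(key, loader) {
  if (store.has(key)) return store.get(key); // hit: return the cached promise
  const pending = loader(key);               // miss: start the async load
  store.set(key, pending);                   // caching the promise means
  return pending;                            // concurrent callers share one load
}

// Usage: two concurrent calls trigger only one load.
let loads = 0;
const loader = async (key) => { loads++; return key.toUpperCase(); };

Promise.all([
  fetchThrough('a', loader),
  fetchThrough('a', loader),
]).then(([x, y]) => {
  console.log(x, y, loads); // 'A' 'A' 1
});
```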

quick-lru evicts strictly by entry count, discarding the least recently used entries once the limit is exceeded.

// quick-lru
const cache = new QuickLRU({ maxSize: 100 });
// Exceeding 100 entries discards the least recently used ones

@alloc/quick-lru follows the same entry-count eviction strategy as quick-lru.

// @alloc/quick-lru
const cache = new QuickLRU({ maxSize: 100 });
// Exceeding 100 entries discards the least recently used ones

🛠️ Maintenance and Ecosystem

Trust and longevity are key when selecting infrastructure libraries.

lru-cache is maintained by Isaac Z. Schlueter (the creator of npm) and is depended on throughout the npm CLI and countless popular packages. It is highly stable and receives regular updates for performance and security.

quick-lru is maintained by Sindre Sorhus. It is stable but receives fewer updates as it is feature-complete for its minimal scope.

@alloc/quick-lru is a fork published under the @alloc npm scope. While useful for specific dependency trees, forks can lag behind upstream updates, so check repository activity before relying on it for critical paths.

📊 Summary Table

Feature       | lru-cache                    | quick-lru             | @alloc/quick-lru
Primary Use   | Server-side, complex caching | Frontend, simple maps | Specific ecosystems
TTL Support   | ✅ Yes (native, rich)         | ⚠️ Lazy maxAge only    | ⚠️ Lazy maxAge only
Max Size      | Entries & bytes              | Entries only          | Entries only
Async Fetch   | ✅ Yes (fetchMethod)          | ❌ No                  | ❌ No
Bundle Weight | Heavier                      | Lightweight           | Lightweight
Maintenance   | High                         | Stable                | Variable (fork)

💡 Final Recommendation

For most server-side Node.js applications, lru-cache is the clear winner. Its support for TTL and byte-size calculation prevents memory leaks in long-running processes better than simple entry limits. It is the robust choice for production systems.

For frontend applications, bundlers, or CLI tools where bundle size matters and caching logic is simple, quick-lru is excellent. It does one thing and does it well without bloating your build.

Use @alloc/quick-lru only if you have a specific requirement to match an existing dependency tree or if you are working within a toolchain that explicitly requires this fork. Otherwise, prefer the upstream quick-lru for better long-term support.

Bottom Line: If you need dependable time-based expiration or byte-size limits, pick lru-cache. If you just need a fast map with an entry limit, pick quick-lru.

How to Choose: @alloc/quick-lru vs lru-cache vs quick-lru

  • @alloc/quick-lru:

    Choose @alloc/quick-lru if you are working within a specific ecosystem or monorepo that relies on this fork for dependency resolution or consistency. It is functionally similar to quick-lru but may include specific patches required by certain build tools. Verify that the fork is actively maintained relative to the upstream quick-lru package before adopting it for long-term projects.

  • lru-cache:

    Choose lru-cache if you need robust features like time-to-live (TTL), stale value handling, or async fetching methods. It is the industry standard for Node.js server-side caching where memory management and expiration policies are critical. This package is ideal when you need to cache data based on byte size or time rather than just entry count.

  • quick-lru:

    Choose quick-lru if you need a lightweight, synchronous cache with minimal dependencies and overhead. It is perfect for frontend bundlers, CLI tools, or simple in-memory maps where you only need to limit the number of stored items. Avoid this if you require TTL or complex eviction strategies, as it focuses on speed and simplicity.

README for @alloc/quick-lru

quick-lru

Simple “Least Recently Used” (LRU) cache

Useful when you need to cache something and limit memory usage.

Inspired by the hashlru algorithm, but instead uses Map to support keys of any type, not just strings, and values can be undefined.

Install

$ npm install quick-lru

Usage

const QuickLRU = require('quick-lru');

const lru = new QuickLRU({maxSize: 1000});

lru.set('🦄', '🌈');

lru.has('🦄');
//=> true

lru.get('🦄');
//=> '🌈'

API

new QuickLRU(options?)

Returns a new instance.

options

Type: object

maxSize

Required
Type: number

The maximum number of items before evicting the least recently used items.

maxAge

Type: number
Default: Infinity

The maximum number of milliseconds an item should remain in cache. By default maxAge will be Infinity, which means that items will never expire.

Lazy expiration happens upon the next write or read call.

Individual expiration of an item can be specified by the set(key, value, options) method.

onEviction

Optional
Type: (key, value) => void

Called right before an item is evicted from the cache.

Useful for side effects or for items like object URLs that need explicit cleanup (revokeObjectURL).
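As a rough sketch of the contract (using a stand-in capped cache, not this package's internals), the callback receives the evicted key and value, giving you a cleanup hook:

```javascript
// Stand-in illustrating the onEviction contract: the callback fires
// with (key, value) right before an entry is dropped.
function createCappedCache(maxSize, onEviction) {
  const map = new Map();
  return {
    set(key, value) {
      map.delete(key);
      map.set(key, value);
      if (map.size > maxSize) {
        const [oldKey, oldValue] = map.entries().next().value;
        onEviction(oldKey, oldValue); // cleanup hook, e.g. revokeObjectURL
        map.delete(oldKey);
      }
    },
    get: (key) => map.get(key),
  };
}

const evicted = [];
const cache = createCappedCache(1, (key, value) => evicted.push(key));
cache.set('first', 'blob:a');
cache.set('second', 'blob:b'); // evicts 'first', invoking the callback
console.log(evicted); // ['first']
```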

Instance

The instance is iterable so you can use it directly in a for…of loop.

Both key and value can be of any type.

.set(key, value, options?)

Set an item. Returns the instance.

Individual expiration of an item can be specified with the maxAge option. If not specified, the global maxAge value will be used in case it is specified on the constructor, otherwise the item will never expire.

.get(key)

Get an item.

.has(key)

Check if an item exists.

.peek(key)

Get an item without marking it as recently used.
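The practical difference from .get() is that recency is untouched, so a peeked item remains an eviction candidate. Sketched with a recency-ordered Map (illustrative, not the package's internals):

```javascript
// get vs peek on a recency-ordered Map: get refreshes the key's position,
// peek reads without touching recency.
const map = new Map([['a', 1], ['b', 2]]);

const peek = (key) => map.get(key); // read-only: order unchanged

const get = (key) => {              // read + mark as most recently used
  const value = map.get(key);
  map.delete(key);
  map.set(key, value);
  return value;
};

peek('a');
console.log([...map.keys()]); // ['a', 'b'] — unchanged
get('a');
console.log([...map.keys()]); // ['b', 'a'] — 'a' is now most recent
```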

.delete(key)

Delete an item.

Returns true if the item is removed or false if the item doesn't exist.

.clear()

Delete all items.

.resize(maxSize)

Update the maxSize, discarding items as necessary. Insertion order is mostly preserved, though this is not a strong guarantee.

Useful for on-the-fly tuning of cache sizes in live systems.

.keys()

Iterable for all the keys.

.values()

Iterable for all the values.

.entriesAscending()

Iterable for all entries, starting with the oldest (ascending in recency).

.entriesDescending()

Iterable for all entries, starting with the newest (descending in recency).
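The two orders can be pictured with a plain Map held in recency order, oldest entry first (a sketch of the ordering, not the package's implementation):

```javascript
// Recency-ordered entries: ascending = oldest first, descending = newest first.
const recency = new Map([['old', 1], ['mid', 2], ['new', 3]]);

const ascending = [...recency.entries()];            // like entriesAscending()
const descending = [...recency.entries()].reverse(); // like entriesDescending()

console.log(ascending.map(([k]) => k));  // ['old', 'mid', 'new']
console.log(descending.map(([k]) => k)); // ['new', 'mid', 'old']
```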

.size

The stored item count.


Get professional support for this package with a Tidelift subscription
Tidelift helps make open source sustainable for maintainers while giving companies
assurances about security, maintenance, and licensing for their dependencies.