lru-cache, memory-cache, node-cache, and quick-lru are all in-memory caching solutions for Node.js, but they serve different architectural needs. lru-cache is a strict Least Recently Used (LRU) implementation ideal for limiting memory usage based on access patterns. node-cache focuses on Time-To-Live (TTL) expiration with event hooks, suitable for general purpose data caching. quick-lru is a minimal, fast LRU cache without dependencies, often used in simpler contexts or browsers. memory-cache is a legacy package often found in older tutorials but lacks active maintenance. Choosing the right one depends on whether you need strict LRU eviction, TTL-based expiration, or long-term stability.
When building Node.js applications, in-memory caching is a common strategy to reduce database load, speed up API responses, or store temporary computation results. The packages lru-cache, memory-cache, node-cache, and quick-lru all solve this problem, but they approach it differently. Some focus on strict Least Recently Used (LRU) eviction, while others prioritize Time-To-Live (TTL) expiration. Let's break down how they work and when to use each.
The most important difference is how these packages decide when to remove data.
lru-cache strictly follows the Least Recently Used algorithm.
// lru-cache: Strict LRU with optional TTL
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 100, ttl: 1000 * 60 * 5 });
node-cache focuses on Time-To-Live (TTL).
It supports a maxKeys option, but eviction is not strictly LRU.
// node-cache: TTL focused
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300, checkperiod: 120 });
quick-lru is a simple count-based LRU.
// quick-lru: Simple count-based LRU
import QuickLRU from 'quick-lru';
const cache = new QuickLRU({ maxSize: 100 });
memory-cache primarily uses TTL-based expiration, similar to node-cache but with fewer features.
// memory-cache: TTL based (Legacy)
// memory-cache exports a ready-to-use singleton, not a constructor
const cache = require('memory-cache');
All four packages allow you to store and retrieve values, but the method names and units differ.
lru-cache uses set and get.
TTL can be passed per set() call or in the constructor.
// lru-cache
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 100 });
cache.set('key', 'value', { ttl: 5000 });
const value = cache.get('key');
node-cache uses set and get.
// node-cache
const NodeCache = require('node-cache');
const cache = new NodeCache();
cache.set('key', 'value', 100); // 100 seconds
const value = cache.get('key');
quick-lru uses set and get.
// quick-lru
import QuickLRU from 'quick-lru';
const cache = new QuickLRU({ maxSize: 100 });
cache.set('key', 'value');
const value = cache.get('key');
memory-cache uses put and get.
// memory-cache
// memory-cache exports a singleton (use .Cache for separate instances)
const cache = require('memory-cache');
cache.put('key', 'value', 5000); // 5000 milliseconds
const value = cache.get('key');
Expiration logic varies significantly, especially between milliseconds and seconds.
lru-cache handles TTL in milliseconds.
// lru-cache: TTL in ms
cache.set('session', data, { ttl: 60000 }); // 1 minute
node-cache handles TTL in seconds.
// node-cache: TTL in seconds
cache.set('session', data, 60); // 1 minute
quick-lru has no TTL by default; newer versions accept an optional maxAge in milliseconds.
// quick-lru: no TTL by default (optional maxAge in newer versions)
cache.set('session', data); // stays until evicted by maxSize
memory-cache handles TTL in milliseconds.
The unit is the same as lru-cache, but it is less configurable.
// memory-cache: TTL in ms
cache.put('session', data, 60000); // 1 minute
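Because of this split between seconds, milliseconds, and no TTL at all, mixing backends is an easy source of off-by-1000 bugs. Below is a minimal sketch of a normalizing wrapper; the setWithTtlMs helper and backend parameter are hypothetical, not part of any of these packages:
// Hypothetical wrapper: callers always pass TTL in milliseconds
function setWithTtlMs(cache, backend, key, value, ttlMs) {
  if (backend === 'node-cache') {
    cache.set(key, value, ttlMs / 1000); // node-cache expects seconds
  } else if (backend === 'lru-cache') {
    cache.set(key, value, { ttl: ttlMs }); // lru-cache expects milliseconds
  } else if (backend === 'memory-cache') {
    cache.put(key, value, ttlMs); // memory-cache expects milliseconds
  }
}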
Maintenance status is a critical factor for production applications.
lru-cache is actively maintained.
// lru-cache: Modern ESM import
import { LRUCache } from 'lru-cache';
node-cache is actively maintained.
// node-cache: CommonJS require
const NodeCache = require('node-cache');
quick-lru is actively maintained.
// quick-lru: Modern ESM import
import QuickLRU from 'quick-lru';
memory-cache is unmaintained.
// memory-cache: Legacy warning
// ⚠️ Not recommended for new development
const MemoryCache = require('memory-cache');
You need to cache API responses for 5 minutes to reduce external calls.
Recommended: node-cache or lru-cache.
// node-cache example
if (!cache.has(url)) {
  const res = await fetch(url);
  cache.set(url, await res.json(), 300); // 5 minutes (TTL in seconds)
}
const data = cache.get(url);
You are caching file transformations in a build tool. Memory must not explode.
Recommended: lru-cache or quick-lru; quick-lru is lighter.
// quick-lru example
const cache = new QuickLRU({ maxSize: 1000 });
cache.set(filePath, transformedContent);
Temporary user sessions that must expire automatically.
Recommended: node-cache or lru-cache.
// lru-cache example
const cache = new LRUCache({ max: 10000, ttl: 1000 * 60 * 30 }); // 30 minutes; max bounds item count, since ttl alone leaves the cache unbounded
cache.set(userId, sessionData);
Maintaining an old Express app that already uses a specific cache.
Recommended: memory-cache (only if it is already in use).
// memory-cache example
// Only for legacy maintenance
cache.put('temp', value, 1000);
| Feature | lru-cache | node-cache | quick-lru | memory-cache |
|---|---|---|---|---|
| Strategy | Strict LRU + TTL | TTL + Max Keys | Strict LRU (Count) | TTL |
| TTL Unit | Milliseconds | Seconds | None (optional maxAge, ms) | Milliseconds |
| Maintenance | ✅ Active | ✅ Active | ✅ Active | ❌ Unmaintained |
| Dependencies | None | None | None | None |
| Module Type | Hybrid (CJS/ESM) | CJS/ESM | ESM | CJS |
| Events | Limited | ✅ Rich (del, expired) | None | None |
For most modern Node.js applications, lru-cache is the safest and most powerful choice. It balances strict memory management with TTL support and is actively maintained. Use node-cache if you prefer TTL in seconds and need event hooks for cache misses or expirations. quick-lru is excellent for lightweight tools or browser-based caching where you just need to limit item count. Avoid memory-cache in any new project due to its lack of maintenance and potential security risks.
Choose based on whether you need time-based expiration (node-cache, lru-cache) or strict memory limits (lru-cache, quick-lru), and always prioritize actively maintained packages.
Choose lru-cache if you need a robust, strictly enforced LRU algorithm with advanced features like async fetching and size calculation. It is the industry standard for high-performance caching where memory limits must be respected based on usage frequency. Ideal for build tools, complex server-side caching, and scenarios requiring precise control over eviction.
Avoid memory-cache for new projects as it is unmaintained and has not seen updates in years. It may still work for simple key-value storage in legacy systems, but it lacks modern features and security patches. Only use this if you are maintaining an older codebase that already depends on it and refactoring is not an option.
Choose node-cache if your primary requirement is Time-To-Live (TTL) expiration rather than strict LRU eviction. It offers useful events like del and expired, making it great for session storage or temporary data that must vanish after a set time. It is a solid choice for general-purpose caching where simplicity and TTL are more important than access-pattern optimization.
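To illustrate those hooks, here is a sketch of session cleanup using node-cache's documented del and expired events (the logging is illustrative):
const NodeCache = require('node-cache');
const sessions = new NodeCache({ stdTTL: 1800, checkperiod: 60 });
// fires whenever a key is deleted explicitly
sessions.on('del', (key, value) => {
  console.log(`session ${key} removed`);
});
// fires when the periodic check finds a key past its TTL
sessions.on('expired', (key, value) => {
  console.log(`session ${key} expired`);
});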
Choose quick-lru if you need a lightweight, dependency-free LRU cache for simple use cases or browser environments. It is extremely fast and easy to set up, but apart from an optional maxAge it lacks TTL handling and advanced features. Perfect for CLI tools, small utilities, or frontend build processes where you just need to limit the number of cached items by count.
A cache object that deletes the least-recently-used items.
Specify a max number of the most recently used items that you want to keep, and this cache will keep that many of the most recently accessed items.
This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.
As of version 7, this is one of the most performant LRU implementations available in JavaScript, and supports a wide diversity of use cases. However, note that using some of the features will necessarily impact performance, by causing the cache to have to do more work. See the "Performance" section below.
npm install lru-cache --save
// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'
// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does. Most of them can be
// overridden for specific items in get()/set()
const options = {
max: 500,
// for use with tracking overall storage size
maxSize: 5000,
sizeCalculation: (value, key) => {
return 1
},
// for use when you need to clean up something when objects
// are evicted from the cache
dispose: (value, key, reason) => {
freeFromMemoryOrWhatever(value)
},
// for use when you need to know that an item is being inserted
// note that this does NOT allow you to prevent the insertion,
// it just allows you to know about it.
onInsert: (value, key, reason) => {
logInsertionOrWhatever(key, value)
},
// how long to live in ms
ttl: 1000 * 60 * 5,
// return stale items before removing from cache?
allowStale: false,
updateAgeOnGet: false,
updateAgeOnHas: false,
// async method to use for cache.fetch(), for
// stale-while-revalidate type of behavior
fetchMethod: async (key, staleValue, { options, signal, context }) => {},
}
const cache = new LRUCache(options)
cache.set('key', 'value')
cache.get('key') // "value"
// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)
cache.clear() // empty the cache
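If a fetchMethod was provided in the options, cache.fetch() wraps get() with async loading; a sketch, assuming fetchMethod resolves the value for a missing key:
// fetch() returns the cached value if present and fresh;
// otherwise it awaits fetchMethod and caches the result.
// With allowStale: true, a stale value is returned immediately
// while fetchMethod refreshes it in the background.
const value = await cache.fetch('key')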
If you put more stuff in the cache, then less recently used items will fall out. That's what an LRU cache is.
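A minimal sketch of that eviction order:
const lru = new LRUCache({ max: 2 })
lru.set('a', 1)
lru.set('b', 2)
lru.get('a') // touch 'a'; 'b' is now the least recently used
lru.set('c', 3) // cache is full, so 'b' is evicted
lru.has('b') // false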
For full description of the API and all options, please see the LRUCache typedocs
This implementation aims to be as flexible as possible, within the limits of safe memory consumption and optimal performance.
At initial object creation, storage is allocated for max items. If max is set to zero, then some performance is lost, and item count is unbounded. Either maxSize or ttl must be set if max is not specified.
If maxSize is set, then this creates a safe limit on the maximum storage consumed, but without the performance benefits of pre-allocation. When maxSize is set, every item must provide a size, either via the sizeCalculation method provided to the constructor, or via a size or sizeCalculation option provided to cache.set(). The size of every item must be a positive integer.
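For example, sizes might be byte counts; a sketch with arbitrary limits:
const byteCache = new LRUCache({
  maxSize: 1024 * 1024, // total budget of ~1MiB
  sizeCalculation: value => Buffer.byteLength(value, 'utf8'),
})
byteCache.set('small', 'hello') // size 5
byteCache.set('big', 'x'.repeat(100), { size: 100 }) // explicit per-item size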
If neither max nor maxSize are set, then ttl tracking must be enabled. Note that, even when tracking item ttl, items are not preemptively deleted when they become stale, unless ttlAutopurge is enabled. Instead, they are only purged the next time the key is requested. Thus, if ttlAutopurge, max, and maxSize are all not set, then the cache will potentially grow unbounded.
In this case, a warning is printed to standard error. Future versions may require the use of ttlAutopurge if max and maxSize are not specified.
If you truly wish to use a cache that is bound only by TTL expiration, consider using a Map object, and calling setTimeout to delete entries when they expire. It will perform much better than an LRU cache.
Here is an implementation you may use, under the same license as this package:
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
data: new Map(),
timers: new Map(),
set: (k, v, ttl) => {
if (cache.timers.has(k)) {
clearTimeout(cache.timers.get(k))
}
cache.timers.set(
k,
setTimeout(() => cache.delete(k), ttl),
)
cache.data.set(k, v)
},
get: k => cache.data.get(k),
has: k => cache.data.has(k),
delete: k => {
if (cache.timers.has(k)) {
clearTimeout(cache.timers.get(k))
}
cache.timers.delete(k)
return cache.data.delete(k)
},
clear: () => {
cache.data.clear()
for (const v of cache.timers.values()) {
clearTimeout(v)
}
cache.timers.clear()
},
}
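Usage mirrors the packages above, with TTL in milliseconds:
cache.set('greeting', 'hello', 1000) // deleted after 1 second
cache.get('greeting') // 'hello'
cache.delete('greeting') // also clears the pending timer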
If that isn't to your liking, check out @isaacs/ttlcache.
This cache never stores undefined values, as undefined is used internally in a few places to indicate that a key is not in the cache.
You may call cache.set(key, undefined), but this is just an alias for cache.delete(key). Note that this has the effect that cache.has(key) will return false after setting it to undefined.
cache.set(myKey, undefined)
cache.has(myKey) // false!
If you need to track undefined values, and still note that the key is in the cache, an easy workaround is to use a sigil object of your own.
import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
const v = cache.get(key)
return v === undefinedValue ? undefined : v
}
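With this wrapper the key stays visible even though the caller sees undefined:
mySet('maybe', undefined) // stores the sigil instead of undefined
cache.has('maybe') // true: the key is still tracked
myGet('maybe') // undefined, as the caller expects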
As of January 2022, version 7 of this library is one of the most performant LRU cache implementations in JavaScript.
Benchmarks can be extremely difficult to get right. In particular, the performance of set/get/delete operations on objects will vary wildly depending on the type of key used. V8 is highly optimized for objects with keys that are short strings, especially integer numeric strings. Thus any benchmark which tests solely using numbers as keys will tend to find that an object-based approach performs the best.
Note that coercing anything to strings to use as object keys is unsafe, unless you can be 100% certain that no other type of value will be used. For example:
const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]
set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
Also beware of "Just So" stories regarding performance. Garbage collection of large (especially: deep) object graphs can be incredibly costly, with several "tipping points" where it increases exponentially. As a result, putting that off until later can make it much worse, and less predictable. If a library performs well, but only in a scenario where the object graph is kept shallow, then that won't help you if you are using large objects as keys.
In general, when attempting to use a library to improve performance (such as a cache like this one), it's best to choose an option that will perform well in the sorts of scenarios where you'll actually use it.
This library is optimized for repeated gets and minimizing eviction time, since that is the expected need of a LRU. Set operations are somewhat slower on average than a few other options, in part because of that optimization. It is assumed that you'll be caching some costly operation, ideally as rarely as possible, so optimizing set over get would be unwise.
If performance matters to you:
If it's at all possible to use small integer values as keys, and you can guarantee that no other types of values will be used as keys, then do that, and use a cache such as lru-fast, or mnemonist's LRUCache which uses an Object as its data store.
Failing that, if at all possible, use short non-numeric strings (ie, less than 256 characters) as your keys, and use mnemonist's LRUCache.
If the types of your keys will be anything else, especially long strings, strings that look like floats, objects, or some mix of types, or if you aren't sure, then this library will work well for you.
If you do not need the features that this library provides (like asynchronous fetching, a variety of TTL staleness options, and so on), then mnemonist's LRUMap is a very good option, and just slightly faster than this module (since it does considerably less).
Do not use a dispose function, size tracking, or especially ttl behavior, unless absolutely needed. These features are convenient, and necessary in some use cases, and every attempt has been made to make the performance impact minimal, but it isn't nothing.
When writing tests that involve TTL-related functionality, note that this module creates an internal reference to the global performance or Date objects at import time. If you import it statically at the top level, those references cannot be mocked or overridden in your test environment.
To avoid this, dynamically import the package within your tests so that the references are captured after your mocks are applied. For example:
// ❌ Not recommended
import { LRUCache } from 'lru-cache'
// mocking timers, e.g. jest.useFakeTimers()
// ✅ Recommended for TTL tests
// mocking timers, e.g. jest.useFakeTimers()
const { LRUCache } = await import('lru-cache')
This ensures that your mocked timers or time sources are respected when testing TTL behavior.
Additionally, you can pass in a perf option when creating your LRUCache instance. This option accepts any object with a now method that returns a number.
For example, this would be a very bare-bones time-mocking system you could use in your tests, without any particular test framework:
import { LRUCache } from 'lru-cache'
let myClockTime = 0
const cache = new LRUCache<string, string>({
max: 10,
ttl: 1000,
perf: {
now: () => myClockTime,
},
})
// run tests, updating myClockTime as needed
This library changed to a different algorithm and internal data structure in version 7, yielding significantly better performance, albeit with some subtle changes as a result.
If you were relying on the internals of LRUCache in version 6 or before, it probably will not work in version 7 and above.
- The fetchContext option was renamed to context, and may no longer be set on the cache instance itself.
- Keys and values must not be null or undefined.
- A minified export is available at 'lru-cache/min', for both CJS and MJS builds.
- The cache.fetch() return type is now Promise<V | undefined> instead of Promise<V | void>. This is an irrelevant change practically speaking, but can require changes for TypeScript users.

For more info, see the change log.