These libraries provide caching solutions for Node.js and frontend build environments, ranging from simple key-value stores to HTTP request interceptors. lru-cache, node-cache, and memory-cache offer direct in-memory storage with varying features for expiration and limits. cache-manager acts as an abstraction layer to swap storage backends without changing code. cacheable-request and axios-cache-adapter focus on caching HTTP responses transparently, reducing network load for API calls.
When building server-side rendered apps or Node.js backends, caching reduces load times and server stress. The packages listed here fall into three groups: raw storage engines (lru-cache, node-cache, memory-cache), an abstraction layer (cache-manager), and HTTP-specific tools (cacheable-request, axios-cache-adapter). Let's break down how they handle data, expiration, and integration.
The core job of any cache is to save and fetch values. The API style varies from synchronous to asynchronous, and from simple objects to specialized classes.
lru-cache uses a class-based approach with strict memory limits.
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 500 });
cache.set('key', 'value');
const val = cache.get('key');
node-cache offers a simple synchronous API with event support.
const NodeCache = require('node-cache');
const cache = new NodeCache();
cache.set('key', 'value');
const val = cache.get('key');
memory-cache provides a very basic static interface.
const cache = require('memory-cache');
cache.put('key', 'value');
const val = cache.get('key');
cache-manager uses an asynchronous promise-based API.
import { caching } from 'cache-manager';
const cache = await caching('memory', { ttl: 5000 });
await cache.set('key', 'value');
const val = await cache.get('key');
cacheable-request wraps HTTP methods directly.
const http = require('http');
const CacheableRequest = require('cacheable-request');
const cr = new CacheableRequest(http.request);
const cacheReq = cr('http://example.com', res => { /* fresh or cached response */ });
cacheReq.on('request', req => req.end());
axios-cache-adapter integrates into the axios adapter chain.
import axios from 'axios';
import { setupCache } from 'axios-cache-adapter';
const cache = setupCache({ maxAge: 15 * 60 * 1000 });
const api = axios.create({ adapter: cache.adapter });
Controlling how long data lives is critical to prevent stale data or memory leaks. Some packages use Time-To-Live (TTL), while others use count limits.
lru-cache focuses on item count and optional TTL.
// Limits to 100 items, auto-deletes old ones
const cache = new LRUCache({ max: 100, ttl: 1000 * 60 });
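The eviction policy lru-cache implements can be illustrated with a plain Map, whose insertion order makes LRU bookkeeping simple. This is a conceptual sketch, not the library's actual implementation:

```javascript
// Minimal LRU sketch: a Map preserves insertion order, so the first
// key is always the least recently used one.
class TinyLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    // Re-insert to mark the key as most recently used.
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (first in insertion order).
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

Reading a key promotes it, so under pressure the cache drops whatever has gone longest without access, which is exactly why an LRU bound prevents unbounded memory growth.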
node-cache allows global or per-key TTL.
// Global 60s TTL
const cache = new NodeCache({ stdTTL: 60 });
// Per-key override
cache.set('key', 'value', 120);
memory-cache takes an optional per-put TTL in milliseconds; entries stored without one never expire.
// TTL in milliseconds; omit it and the entry lives forever
cache.put('key', 'value', 5000);
cache-manager defines TTL during setup or call.
// Setup TTL
const cache = await caching('memory', { ttl: 5000 });
// Or per set
await cache.set('key', 'value', 3000);
cacheable-request respects HTTP headers primarily.
// Uses Cache-Control headers from the server to decide freshness
const cacheReq = cr('http://example.com', res => { /* ... */ });
axios-cache-adapter sets maxAge in configuration.
// Cache for 15 minutes
const cache = setupCache({ maxAge: 15 * 60 * 1000 });
Some tools cache any JavaScript value, while others specialize in network responses. Mixing these up can lead to architecture issues.
lru-cache, node-cache, and memory-cache store any data type. You must manually manage when to invalidate data after an API call.
// Manual HTTP caching logic
const data = await fetchAPI();
cache.set('api-result', data);
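That manual logic is usually factored into a small cache-aside helper. A minimal sketch using a plain Map, where the loader parameter stands in for any async call such as the fetchAPI above:

```javascript
// Cache-aside sketch: return a cached value while fresh, otherwise
// call the loader and store the result with an expiry timestamp.
const store = new Map();

async function getOrFetch(key, loader, ttlMs) {
  const entry = store.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.value; // cache hit, still fresh
  }
  const value = await loader(); // cache miss or stale: refetch
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

Centralizing the check-then-fetch logic in one place keeps invalidation rules consistent across every call site.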
cache-manager stores any data but often pairs with HTTP logic.
// Wrap function to cache result
const user = await cache.wrap('user-1', fetchUser);
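Beyond get-or-compute, wrap-style helpers commonly coalesce concurrent calls for the same key, so a burst of simultaneous requests triggers only one backend fetch. A minimal sketch of that deduplication (not cache-manager's actual implementation):

```javascript
// Deduplicate concurrent lookups: callers asking for the same key
// while a fetch is in flight share one promise instead of each
// hitting the backend.
const inFlight = new Map();

function dedupe(key, loader) {
  if (inFlight.has(key)) return inFlight.get(key);
  const promise = loader().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

Without this, a cache miss under high traffic can stampede the origin with identical requests.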
cacheable-request and axios-cache-adapter handle the network layer directly. They skip the request if a valid cache exists.
// cacheable-request: automatic based on URL and response headers
const cacheReq = cr('http://api.com/data', res => { /* ... */ });
// axios-cache-adapter: Automatic based on config
const response = await api.get('/data');
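What "HTTP aware" means in practice is that freshness comes from response headers rather than a hand-picked TTL. A simplified sketch of a max-age check (real libraries implement much more of RFC 7234, including ETag revalidation and Vary):

```javascript
// Decide whether a stored response is still fresh based on its
// Cache-Control header and its age in seconds.
function isFresh(cacheControl, ageSeconds) {
  if (/no-store|no-cache/.test(cacheControl)) return false;
  const match = /max-age=(\d+)/.exec(cacheControl);
  if (!match) return false;
  return ageSeconds < Number(match[1]);
}
```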
Choosing a library involves checking if it is still safe to use. Some packages are legacy or have newer successors.
memory-cache is simple but lacks recent updates. It does not support events or advanced TTL features found in node-cache. Use it only for quick scripts.
// Limited feature set
const val = cache.get('key');
axios-cache-adapter has seen less activity compared to axios-cache-interceptor. For new projects, evaluate if the interceptor fits better.
// Legacy adapter pattern
axios.create({ adapter: cache.adapter });
lru-cache, node-cache, and cache-manager are actively maintained. They receive security patches and feature updates regularly.
// Safe for production
const cache = new LRUCache({ max: 100 });
cacheable-request is stable and follows HTTP standards closely.
// Standard compliant
const cr = new CacheableRequest(http.request);
| Package | Type | Async | TTL Support | Max Items | HTTP Aware |
|---|---|---|---|---|---|
| lru-cache | Storage | No | Yes | Yes | No |
| node-cache | Storage | No | Yes | Optional (maxKeys) | No |
| memory-cache | Storage | No | Yes | No | No |
| cache-manager | Abstraction | Yes | Yes | Depends on store | No |
| cacheable-request | HTTP | Yes | Header-based | No | Yes |
| axios-cache-adapter | HTTP | Yes | Config-based | No | Yes |
For general server-side caching where you control the data, node-cache offers the best balance of features and stability. If you need strict memory limits to prevent crashes, lru-cache is the industry standard.
For HTTP requests, cacheable-request is ideal when you work with the native http module or got, while axios-cache-adapter works for axios users (though check whether axios-cache-interceptor suits your needs better). Use cache-manager if you plan to switch to Redis later. Avoid memory-cache for critical production systems due to limited features.
Choose memory-cache only for simple prototypes or scripts where you need basic key-value storage with minimal setup. For production systems, prefer node-cache for better event support and maintenance.
Choose axios-cache-adapter if you are already using axios and need a quick way to cache HTTP responses without changing your request logic. It bridges axios with storage engines like cache-manager. Note that newer projects might prefer axios-cache-interceptor for active maintenance.
Choose cache-manager if you need a unified interface that allows switching between memory, Redis, or file stores later. It is ideal for applications that might scale from single-server memory to distributed caching without rewriting core logic.
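The payoff of such an abstraction is that application code depends only on an async get/set contract, so moving from memory to Redis means swapping one factory. A minimal sketch of the idea (the store shape and getUserName helper here are illustrative, not cache-manager's API):

```javascript
// Any backend honoring this async get/set contract can be dropped in:
// a Redis-backed factory would expose the same two methods.
function createMemoryStore() {
  const map = new Map();
  return {
    async get(key) { return map.get(key); },
    async set(key, value) { map.set(key, value); },
  };
}

// Application code never changes when the store does.
async function getUserName(store, id) {
  const cached = await store.get(`user:${id}`);
  if (cached) return cached;
  const name = `user-${id}`; // placeholder for a real database lookup
  await store.set(`user:${id}`, name);
  return name;
}
```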
Choose cacheable-request if you are using the native http module or got and want transparent HTTP caching based on standard headers. It works well for low-level network tools where you need strict adherence to HTTP caching rules.
Choose lru-cache if you need a fast, dependency-free Least Recently Used cache with precise control over memory limits. It is the standard choice for internal data structures where you must prevent memory leaks.
Choose node-cache if you need a robust in-memory cache with event emitters, flexible TTL per key, and a stable API. It is the go-to for general-purpose server-side caching without external dependencies.
A simple in-memory cache for node.js
npm install memory-cache --save
var cache = require('memory-cache');
// now just use the cache
cache.put('foo', 'bar');
console.log(cache.get('foo'));
// that wasn't too interesting, here's the good part
cache.put('houdini', 'disappear', 100, function(key, value) {
console.log(key + ' did ' + value);
}); // Time in ms
console.log('Houdini will now ' + cache.get('houdini'));
setTimeout(function() {
console.log('Houdini is ' + cache.get('houdini'));
}, 200);
// create new cache instance
var newCache = new cache.Cache();
newCache.put('foo', 'newbaz');
setTimeout(function() {
console.log('foo in old cache is ' + cache.get('foo'));
console.log('foo in new cache is ' + newCache.get('foo'));
}, 200);
which should print
bar
Houdini will now disappear
houdini did disappear
Houdini is null
foo in old cache is bar
foo in new cache is newbaz
put(key, value, time, timeoutCallback): the optional timeoutCallback is a function(key, value) fired (via setTimeout) when the entry expires.
get(key): returns null if the key is missing or has expired.
memsize(): will usually == size() unless a setTimeout removal went wrong.
importJson(json, options): merges the data from a previous call to export into the cache; entries that existed before the import will remain in the cache; duplicate keys are overwritten unless skipDuplicates is true. options:
skipDuplicates: If true, any duplicate keys will be ignored when importing them. Defaults to false.
require('cache') would return the default instance of Cache; require('cache').Cache is the actual class.