cache-manager vs cacheable-request vs lru-cache vs memory-cache vs quick-lru
Caching Libraries in Node.js

Caching libraries in Node.js are essential tools that help improve application performance by storing frequently accessed data in memory or other storage mediums. These libraries reduce the need for repeated data retrieval from slower sources, such as databases or external APIs, thereby enhancing response times and overall user experience. Each library offers unique features and use cases, catering to different caching needs, from simple in-memory caching to more complex strategies involving multiple storage backends.


Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
| --- | --- | --- | --- | --- | --- | --- |
| cache-manager | 3,170,636 | 1,970 | 52.2 kB | 0 | 3 months ago | MIT |
| cacheable-request | 0 | 1,970 | 79.5 kB | 0 | 3 months ago | MIT |
| lru-cache | 0 | 5,858 | 844 kB | 3 | 20 days ago | BlueOak-1.0.0 |
| memory-cache | 0 | 1,602 | - | 32 | 9 years ago | BSD-2-Clause |
| quick-lru | 0 | 754 | 20.4 kB | 1 | 6 months ago | MIT |

Feature Comparison: cache-manager vs cacheable-request vs lru-cache vs memory-cache vs quick-lru

Caching Strategy

  • cache-manager:

    Cache-manager supports multiple caching strategies and backends, allowing you to choose the best fit for your application's needs. It abstracts the caching logic, enabling easy switching between different storage solutions without changing the application code.

  • cacheable-request:

    Cacheable-request focuses specifically on HTTP caching, implementing the HTTP caching specification to ensure that responses are cached based on their headers. This makes it suitable for web applications that interact heavily with APIs and need to optimize network calls.

  • lru-cache:

    Lru-cache implements the Least Recently Used (LRU) caching strategy, which evicts the least recently accessed items when the cache reaches its limit. This is effective for managing memory usage while keeping frequently accessed data readily available.

  • memory-cache:

    Memory-cache provides a simple key-value store for caching data in memory. It does not implement any eviction strategy, making it suitable for applications where data size is manageable and the overhead of managing cache is not required.

  • quick-lru:

    Quick-lru is designed for high performance with a minimal memory footprint. It uses a simple LRU eviction strategy, ensuring that the most frequently accessed items are kept in memory while older items are discarded efficiently.
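
The LRU strategy used by lru-cache and quick-lru can be illustrated with a minimal sketch built on a JavaScript Map, whose insertion order makes the first key the least recently used. This is an illustration of the eviction rule, not either library's actual implementation:

```javascript
// Minimal LRU cache sketch: a Map keeps insertion order, so the first
// key is always the least recently used one.
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert the key so it becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}

const lru = new LRUCache(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a' so 'b' becomes least recently used
lru.set('c', 3); // evicts 'b'
console.log(lru.get('b')); // undefined
console.log(lru.get('a')); // 1
```

Both libraries expose far richer options (max size, TTL, size calculation), but the touch-on-read and evict-the-oldest behavior above is the core of the strategy.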

Performance

  • cache-manager:

    Cache-manager is designed to be performant across various backends, leveraging the strengths of each backend to optimize data retrieval and storage. Its ability to switch backends allows developers to choose the most efficient storage solution for their specific use case.

  • cacheable-request:

    Cacheable-request is optimized for reducing network calls by caching HTTP responses, which can significantly improve performance in applications that rely on external APIs. By caching responses, it minimizes the latency associated with network requests.

  • lru-cache:

    Lru-cache is highly efficient in terms of speed and memory usage, making it suitable for applications that require rapid access to cached data. Its LRU strategy ensures that the cache remains performant even under heavy load.

  • memory-cache:

    Memory-cache is straightforward and fast, providing quick access to cached data without the overhead of complex eviction policies. It is suitable for applications with low to moderate caching needs where simplicity is key.

  • quick-lru:

    Quick-lru is optimized for speed, making it one of the fastest LRU cache implementations available. It is particularly useful in scenarios where performance is critical, such as real-time applications or high-frequency data access.

Ease of Use

  • cache-manager:

    Cache-manager offers a simple and consistent API that abstracts the complexities of different caching backends, making it easy to implement and manage caching in your application. Its flexibility allows for quick integration with minimal setup.

  • cacheable-request:

    Cacheable-request is easy to use, requiring minimal configuration to start caching HTTP requests. Its automatic handling of request and response caching simplifies the process for developers, allowing them to focus on application logic.

  • lru-cache:

    Lru-cache has a straightforward API that is easy to understand and implement. It requires only a few lines of code to set up, making it accessible for developers looking for a quick caching solution.

  • memory-cache:

    Memory-cache is extremely easy to use, with a simple key-value interface that allows developers to cache data with minimal effort. It is ideal for those who need a quick and uncomplicated caching mechanism.

  • quick-lru:

    Quick-lru provides a simple API that is easy to integrate into existing applications. Its lightweight nature makes it a good choice for developers who want a fast and efficient caching solution without unnecessary complexity.

Scalability

  • cache-manager:

    Cache-manager is highly scalable due to its support for various backends, including distributed caching solutions like Redis. This makes it suitable for applications that need to scale horizontally across multiple servers or instances.

  • cacheable-request:

    Cacheable-request is primarily focused on HTTP caching and may not scale as well in scenarios requiring distributed caching. However, it excels in optimizing API calls within a single instance or server environment.

  • lru-cache:

    Lru-cache is best suited for applications with limited memory requirements, as it operates in-memory. While it can handle a reasonable amount of data, it may not be the best choice for large-scale applications that require distributed caching.

  • memory-cache:

    Memory-cache is not inherently scalable, as it stores data in memory on a single instance. It is best for small to medium applications where data size is manageable and does not require distribution across multiple servers.

  • quick-lru:

    Quick-lru is efficient for in-memory caching but is limited in scalability due to its single-instance nature. It is ideal for applications that need fast access to frequently used data without the need for distributed caching.

Eviction Policy

  • cache-manager:

    Cache-manager allows for customizable eviction policies depending on the backend used. This flexibility lets developers choose the most appropriate strategy for their caching needs, whether it be time-based expiration or size limits.

  • cacheable-request:

    Cacheable-request does not implement its own eviction policy, as it relies on HTTP caching headers to determine cache validity. This means that the eviction strategy is dictated by the server's response rather than the library itself.

  • lru-cache:

    Lru-cache uses the LRU eviction policy, which ensures that the least recently accessed items are removed first when the cache reaches its limit. This is effective for keeping frequently accessed data in memory while managing memory usage efficiently.

  • memory-cache:

    Memory-cache does not implement any eviction policy, meaning that cached items will remain in memory until the application is terminated or the cache is manually cleared. This simplicity can be a drawback in larger applications with high memory usage.

  • quick-lru:

    Quick-lru employs the LRU eviction policy, ensuring that the most relevant data remains cached while older data is discarded. This makes it suitable for applications with dynamic data access patterns.
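
Because cacheable-request's eviction is driven by HTTP response headers rather than by the library itself, the core freshness decision can be sketched as a small check against the Cache-Control max-age directive. Note that isFresh is a hypothetical helper for illustration, not part of cacheable-request's API:

```javascript
// Sketch: is a cached HTTP response still fresh? ageSeconds is how long
// ago the response was received; the freshness lifetime comes from max-age.
function isFresh(cacheControl, ageSeconds) {
  const header = cacheControl ?? '';
  if (/no-store|no-cache/.test(header)) return false; // never serve from cache
  const match = /max-age=(\d+)/.exec(header);
  if (!match) return false; // no freshness info: treat as stale
  return ageSeconds < Number(match[1]);
}

console.log(isFresh('public, max-age=60', 30)); // true
console.log(isFresh('public, max-age=60', 90)); // false
console.log(isFresh('no-store', 0));            // false
```

The real HTTP caching rules (RFC 9111) also cover Expires, ETag revalidation, and Vary, but the expired-entries-simply-become-stale behavior above is why no separate eviction policy is needed.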

How to Choose: cache-manager vs cacheable-request vs lru-cache vs memory-cache vs quick-lru

  • cache-manager:

    Choose cache-manager if you need a versatile caching solution that supports multiple backends (like Redis, Memcached, etc.) and offers a consistent API for managing cache across different storage systems. It's ideal for applications that require a unified caching strategy with the ability to switch backends easily.

  • cacheable-request:

    Select cacheable-request if your primary focus is on caching HTTP requests and responses. It is particularly useful for applications that make frequent API calls, as it automatically handles the caching of requests and responses, reducing redundant network traffic and improving performance.

  • lru-cache:

    Opt for lru-cache when you need a simple and efficient in-memory cache that implements the Least Recently Used (LRU) eviction policy. This is suitable for scenarios where memory is limited, and you want to ensure that the most frequently accessed items remain in cache while older items are removed.

  • memory-cache:

    Use memory-cache for a straightforward in-memory caching solution that is easy to set up and use. It’s best for applications that require a simple caching mechanism without the overhead of more complex features, making it ideal for small to medium-sized applications.

  • quick-lru:

    Choose quick-lru if you need a lightweight and fast LRU cache implementation. It is optimized for performance and is particularly effective in scenarios where speed is critical, such as high-frequency data access patterns.

README for cache-manager


cache-manager


Simple and fast Node.js caching module.

A cache module for Node.js that allows easy wrapping of functions in cache, tiered caches, and a consistent interface.

  • Made with TypeScript and compatible with ESModules.
  • Easy way to wrap any function in cache, with support for refreshing expiring cache keys in the background.
  • Tiered caches -- data gets stored in each cache and fetched from the highest priority cache(s) first.
  • nonBlocking option that optimizes how the system handles multiple stores.
  • Use with any Keyv compatible storage adapter.
  • 100% test coverage via vitest.

We moved to using Keyv, which is more actively maintained and has a larger community.

A special thanks to Tim Phan who took cache-manager v5 and ported it to Keyv which is the foundation of v6. 🎉 Another special thanks to Doug Ayers who wrote promise-coalesce which was used in v5 and now embedded in v6.
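
Promise coalescing means that concurrent requests for the same key share a single in-flight promise instead of each invoking the worker function. A minimal sketch of the idea follows; this is an illustration, not the promise-coalesce API:

```javascript
// Coalesce concurrent async calls per key: the first call starts the
// work, later callers receive the same pending promise.
function coalesce(fn) {
  const inflight = new Map();
  return (key) => {
    if (inflight.has(key)) return inflight.get(key);
    const promise = Promise.resolve()
      .then(() => fn(key))
      .finally(() => inflight.delete(key)); // allow fresh calls afterwards
    inflight.set(key, promise);
    return promise;
  };
}

let calls = 0;
const load = coalesce(async (key) => {
  calls += 1;
  return key.toUpperCase();
});

const p1 = load('a');
const p2 = load('a'); // same in-flight promise, fn is not invoked again
console.log(p1 === p2); // true
```

This pattern is what prevents a "thundering herd" of identical backend lookups when many requests miss the cache at the same moment.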

Migration from v6 to v7

v7 has only one breaking change: when there is no data to return, the return value is now undefined instead of null. This aligns with the Keyv API and makes it more consistent with the rest of the methods. Below is an example of how to migrate from v6 to v7:

import { createCache } from 'cache-manager';

const cache = createCache();
const result = await cache.get('key');
// result will be undefined if the key is not found or expired
console.log(result); // undefined
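
One practical consequence: code that defaults a missing value with the nullish coalescing operator works unchanged across the migration, since ?? treats null (v6) and undefined (v7) the same way. A small illustration:

```javascript
// Both the v6 miss value (null) and the v7 miss value (undefined)
// are "nullish", so ?? supplies the fallback in either case.
const missV6 = null;
const missV7 = undefined;

const a = missV6 ?? 'fallback';
const b = missV7 ?? 'fallback';

console.log(a); // 'fallback'
console.log(b); // 'fallback'

// A strict comparison against null, however, breaks in v7:
console.log(missV7 === null); // false
```

Code that checks `result === null` to detect a cache miss is the case that needs updating.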

Migration from v5 to v6

v6 is a major update with breaking changes, primarily around the storage adapters. We have moved to using Keyv, which is more actively maintained and has a larger community. Below are the changes you need to make to migrate from v5 to v6. In v5, memoryStore was used to create a memory store; in v6 you can use any storage adapter that Keyv supports. Below is an example of how to migrate from v5 to v6:

import { createCache, memoryStore } from 'cache-manager';

// Create memory cache synchronously
const memoryCache = createCache(memoryStore({
  max: 100,
  ttl: 10 * 1000 /*milliseconds*/,
}));

In v6 you can use any storage adapter that Keyv supports. Below is an example of using the in memory store with Keyv:

import { createCache } from 'cache-manager';

const cache = createCache();

If you would like to do multiple stores you can do the following:

import { createCache } from 'cache-manager';
import { createKeyv } from 'cacheable';
import { createKeyv as createKeyvRedis } from '@keyv/redis';

const memoryStore = createKeyv();
const redisStore = createKeyvRedis('redis://user:pass@localhost:6379');

const cache = createCache({
  stores: [memoryStore, redisStore],
});

When doing in-memory caching, if you get errors on Symbol values, or objects such as Uint8Array come back wrong, set the serialize and deserialize options on Keyv to undefined, as Keyv otherwise attempts JSON serialization.

import { createCache } from "cache-manager";
import { Keyv } from "keyv";

const keyv = new Keyv();
keyv.serialize = undefined;
keyv.deserialize = undefined;

const memoryCache = createCache({
	stores: [keyv],
});

The other option is to set the serialization to something that is not JSON.stringify. You can read more about it here: https://keyv.org/docs/keyv/#custom-serializers

If you would like a more robust in memory storage adapter you can use CacheableMemory from Cacheable. Below is an example of how to migrate from v5 to v6 using CacheableMemory:

import { createCache } from 'cache-manager';
import { createKeyv } from 'cacheable';

const cache = createCache({
  stores: [createKeyv({ ttl: 60000, lruSize: 5000 })],
});

To learn more about CacheableMemory please visit: http://cacheable.org/docs/cacheable/#cacheablememory---in-memory-cache

If you are still wanting to use the legacy storage adapters you can use the KeyvAdapter to wrap the storage adapter. Below is an example of how to migrate from v5 to v6 using cache-manager-redis-yet by going to Using Legacy Storage Adapters.

If you are looking for older documentation you can find it here:


Installation

npm install cache-manager

By default, everything is stored in memory. You can optionally install a storage adapter; choose one from any of the storage adapters supported by Keyv:

npm install @keyv/redis
npm install @keyv/memcache
npm install @keyv/mongo
npm install @keyv/sqlite
npm install @keyv/postgres
npm install @keyv/mysql
npm install @keyv/etcd

In addition, Keyv supports other storage adapters such as lru-cache and CacheableMemory from Cacheable (more examples below). Please read the Keyv documentation for more information.

Quick start

import { Keyv } from 'keyv';
import { createCache } from 'cache-manager';

// Memory store by default
const cache = createCache()

// Single store which is in memory
const cache = createCache({
  stores: [new Keyv()],
})

Here is an example of doing layer 1 and layer 2 caching with the in-memory being CacheableMemory from Cacheable and the second layer being @keyv/redis:

import { Keyv } from 'keyv';
import KeyvRedis from '@keyv/redis';
import { CacheableMemory } from 'cacheable';
import { createCache } from 'cache-manager';

// Multiple stores
const cache = createCache({
  stores: [
    //  High performance in-memory cache with LRU and TTL
    new Keyv({
      store: new CacheableMemory({ ttl: 60000, lruSize: 5000 }),
    }),

    //  Redis Store
    new Keyv({
      store: new KeyvRedis('redis://user:pass@localhost:6379'),
    }),
  ],
})

Once it is created, you can use the cache object to set, get, delete, and wrap functions in cache.


// With default ttl and refreshThreshold
const cache = createCache({
  ttl: 10000,
  refreshThreshold: 3000,
})

await cache.set('foo', 'bar')
// => bar

await cache.get('foo')
// => bar

await cache.del('foo')
// => true

await cache.get('foo')
// => undefined

await cache.wrap('key', () => 'value')
// => value

Using CacheableMemory or lru-cache as storage adapter

Because we are using Keyv, you can use any storage adapter that Keyv supports such as lru-cache or CacheableMemory from Cacheable. Below is an example of using CacheableMemory:

In this example we are using CacheableMemory from Cacheable, which is a fast in-memory cache that supports LRU and TTL expiration.

import { createCache } from 'cache-manager';
import { Keyv } from 'keyv';
import { KeyvCacheableMemory } from 'cacheable';

const store = new KeyvCacheableMemory({ ttl: 60000, lruSize: 5000 });
const keyv = new Keyv({ store });
const cache = createCache({ stores: [keyv] });

Here is an example using lru-cache:

import { createCache } from 'cache-manager';
import { Keyv } from 'keyv';
import { LRUCache } from 'lru-cache';

const keyv = new Keyv({ store: new LRUCache({ max: 5000, ttl: 60000 }) });
const cache = createCache({ stores: [keyv] });

Options

  • stores?: Keyv[]

    List of Keyv instances. Please refer to the Keyv documentation for more information.

  • ttl?: number - Default time to live in milliseconds.

    The time to live in milliseconds. This is the maximum amount of time that an item can be in the cache before it is removed.

  • refreshThreshold?: number | (value:T) => number - Default refreshThreshold in milliseconds. You can also provide a function that will return the refreshThreshold based on the value.

    If the remaining TTL is less than refreshThreshold, the system will update the value asynchronously in background.

  • refreshAllStores?: boolean - Default false

    If set to true, the system will update the value of all stores when the refreshThreshold is met. Otherwise, it will only update from the top to the store that triggered the refresh.

  • nonBlocking?: boolean - Default false

    If set to true, the system will not block when multiple stores are used. Here is how it affects the type of functions:

    • set and mset - will not wait for all stores to finish.
    • get and mget - will return the first (fastest) value found.
    • del and mdel - will not wait for all stores to finish.
    • clear - will not wait for all stores to finish.
    • wrap - will do the same as get and set (return the first value found and not wait for all stores to finish).
  • cacheId?: string - Defaults to random string

    Unique identifier for the cache instance. This is primarily used to avoid conflicts when using wrap with multiple cache instances.
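
The refreshThreshold rule described above can be sketched as a simple predicate over the remaining TTL. Note that shouldRefresh is an illustrative helper, not cache-manager's internal code; all times are in milliseconds:

```javascript
// A background refresh fires only while the key is still alive and its
// remaining TTL has dropped below the refresh threshold.
function shouldRefresh(expiresAt, now, refreshThreshold) {
  const remainingTtl = expiresAt - now;
  return remainingTtl > 0 && remainingTtl < refreshThreshold;
}

console.log(shouldRefresh(10_000, 8_000, 3_000));  // true  (2s left < 3s threshold)
console.log(shouldRefresh(10_000, 5_000, 3_000));  // false (5s left, still fresh)
console.log(shouldRefresh(10_000, 11_000, 3_000)); // false (already expired)
```

This is why the refresh mechanism never triggers for keys without a ttl: with no expiration time there is no remaining TTL to fall below the threshold.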

Methods

set

set(key, value, [ttl]): Promise<value>

Sets a key/value pair. It is possible to define a ttl (in milliseconds). An error will be thrown on any failure.

await cache.set('key-1', 'value 1')

// expires after 5 seconds
await cache.set('key 2', 'value 2', 5000)

See unit tests in test/set.test.ts for more information.

mset

mset(keys: [ { key, value, ttl } ]): Promise<true>

Sets multiple key/value pairs. It is possible to define a ttl (in milliseconds). An error will be thrown on any failure.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2', ttl: 5000 },
]);

get

get(key): Promise<value>

Gets a saved value from the cache. Returns undefined if the key is not found or has expired; otherwise returns the stored value.

await cache.set('key', 'value')

await cache.get('key')
// => value

await cache.get('foo')
// => undefined

See unit tests in test/get.test.ts for more information.

mget

mget(keys: [key]): Promise<value[]>

Gets multiple saved values from the cache. Returns undefined for any key that is not found or has expired; otherwise returns the stored value.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2' },
]);

await cache.mget(['key-1', 'key-2', 'key-3'])
// => ['value 1', 'value 2', undefined]

ttl

ttl(key): Promise<number | null>

Gets the expiration time of a key in milliseconds. Returns null if the key is not found or has expired.

await cache.set('key', 'value', 1000); // expires after 1 second

await cache.ttl('key'); // => the expiration time in milliseconds

await cache.ttl('foo'); // => null

See unit tests in test/ttl.test.ts for more information.

del

del(key): Promise<true>

Deletes a key. An error will be thrown on any failure.

await cache.set('key', 'value')

await cache.get('key')
// => value

await cache.del('key')

await cache.get('key')
// => undefined

See unit tests in test/del.test.ts for more information.

mdel

mdel(keys: [key]): Promise<true>

Deletes multiple keys. An error will be thrown on any failure.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2' },
]);

await cache.mdel(['key-1', 'key-2'])

clear

clear(): Promise<true>

Flushes all data. An error will be thrown on any failure.

await cache.set('key-1', 'value 1')
await cache.set('key-2', 'value 2')

await cache.get('key-1')
// => value 1
await cache.get('key-2')
// => value 2

await cache.clear()

await cache.get('key-1')
// => undefined
await cache.get('key-2')
// => undefined

See unit tests in test/clear.test.ts for more information.

wrap

wrap(key, fn: async () => value, [ttl], [refreshThreshold]): Promise<value>

Alternatively, with optional parameters as options object supporting a raw parameter:

wrap(key, fn: async () => value, { ttl?: number, refreshThreshold?: number, raw?: true }): Promise<value>

Wraps a function in cache. The first time the function is run, its results are stored in cache so subsequent calls retrieve from cache instead of calling the function.

If refreshThreshold is set and the remaining TTL is less than refreshThreshold, the system will update the value asynchronously. In the meantime, the system will return the old value until expiration. You can also provide a function that will return the refreshThreshold based on the value (value:T) => number.

If the object format for the optional parameters is used, an additional raw parameter can be applied, changing the function return type to raw data including expiration timestamp as { value: [data], expires: [timestamp] }.

await cache.wrap('key', () => 1, 5000, 3000)
// call function then save the result to cache
// =>  1

await cache.wrap('key', () => 2, 5000, 3000)
// return data from cache, function will not be called again
// => 1

await cache.wrap('key', () => 2, { ttl: 5000, refreshThreshold: 3000, raw: true })
// returns raw data including expiration timestamp
// => { value: 1, expires: [timestamp] }

// wait 3 seconds
await sleep(3000)

await cache.wrap('key', () => 2, 5000, 3000)
// return data from cache, call function in background and save the result to cache
// =>  1

await cache.wrap('key', () => 3, 5000, 3000)
// return data from cache, function will not be called
// =>  2

await cache.wrap('key', () => 4, 5000, () => 3000);
// return data from cache, function will not be called
// =>  2

await cache.wrap('error', () => {
  throw new Error('failed')
})
// => rejects with Error('failed')

NOTES:

  • The store that will be checked for refresh is the one where the key will be found first (highest priority).
  • If the threshold is low and the worker function is slow, the key may expire and you may encounter a race condition when updating values.
  • If no ttl is set for the key, the refresh mechanism will not be triggered.

See unit tests in test/wrap.test.ts for more information.

disconnect

disconnect(): Promise<void>

Will disconnect from the relevant store(s). It is highly recommended to call this when using a Keyv storage adapter that requires a disconnect. When to disconnect differs per storage adapter; with @keyv/redis, for example, you should disconnect only when you are completely done with the cache.

await cache.disconnect();

See unit tests in test/disconnect.test.ts for more information.

Properties

cacheId

cacheId(): string

Returns the cache instance id. This is primarily used to avoid conflicts when using wrap with multiple cache instances.

stores

stores(): Keyv[]

Returns the list of Keyv instances. This can be used to get the list of stores and then use the Keyv API to interact with the store directly.

const cache = createCache({cacheId: 'my-cache-id'});
cache.cacheId(); // => 'my-cache-id'

See unit tests in test/cache-id.test.ts for more information.

Events

set

Fired when a key has been added or changed.

cache.on('set', ({ key, value, error }) => {
	// ... do something ...
})

del

Fired when a key has been removed manually.

cache.on('del', ({ key, error }) => {
	// ... do something ...
})

clear

Fired when the cache has been flushed.

cache.on('clear', (error) => {
  if (error) {
    // ... do something ...
  }
})

refresh

Fired when the cache has been refreshed in the background.

cache.on('refresh', ({ key, value, error }) => {
  if (error) {
    // ... do something ...
  }
})

See unit tests in test/events.test.ts for more information.

Doing Iteration on Stores

You can use the stores method to get the list of stores and then use the Keyv API to interact with the store directly. Below is an example of iterating over all stores and getting all keys:

import Keyv from 'keyv';
import { createKeyv } from '@keyv/redis';
import { createCache } from 'cache-manager';

const keyv = new Keyv();
const keyvRedis = createKeyv('redis://user:pass@localhost:6379');

const cache = createCache({
  stores: [keyv, keyvRedis],
});

// add some data
await cache.set('key-1', 'value 1');
await cache.set('key-2', 'value 2');

// get the store you want to iterate over. In this example we are using the second store (redis)
const store = cache.stores[1];

if(store?.iterator) {
  for await (const [key, value] of store.iterator({})) {
    console.log(key, value);
  }
}

WARNING: Be careful when using iterator, as it can cause major performance issues depending on the amount of data being retrieved. Also, not all storage adapters support iterator, so check the documentation for the storage adapter you are using.

Update on redis and ioredis Support

We will not be supporting cache-manager-ioredis-yet or cache-manager-redis-yet in the future as we have moved to using Keyv as the storage adapter @keyv/redis.

Using Legacy Storage Adapters

There are many storage adapters built for cache-manager and because of that we wanted to provide a way to use them with KeyvAdapter. Below is an example of using cache-manager-redis-yet:

import { createCache, KeyvAdapter } from 'cache-manager';
import { Keyv } from 'keyv';
import { redisStore } from 'cache-manager-redis-yet';

const adapter = new KeyvAdapter( await redisStore() );
const keyv = new Keyv({ store: adapter });
const cache = createCache({ stores: [keyv]});

This adapter allows you to use any legacy storage adapter. If you run into issues, make sure the adapter follows the CacheManagerStore interface.

Contribute

If you would like to contribute to the project, please read how to contribute here CONTRIBUTING.md.

License

MIT © Jared Wray