cache-manager vs memcached vs memjs vs node-cache
Caching Libraries in Node.js

Caching libraries in Node.js improve application performance by temporarily storing data in memory or other fast storage systems, reducing the need to repeatedly fetch it from slower sources such as databases or external APIs. These libraries provide various caching strategies, such as in-memory caching and distributed caching, and support different storage backends like Redis and Memcached. By using them, developers can achieve faster response times, reduced server load, and better scalability.

cache-manager is a versatile caching library that supports multiple storage backends, including in-memory, Redis, and Memcached. It provides a unified API for managing caches and integrates easily with various storage systems.

memcached is a client library for interacting with Memcached servers, a high-performance, distributed memory caching system. It is designed for fast, scalable caching and suits applications that require low-latency data access.

memjs is a lightweight Memcached client for Node.js that focuses on simplicity and performance. It provides a minimalistic API for interacting with Memcached servers and is ideal for applications that need a fast, efficient caching solution with minimal overhead.

node-cache is an in-memory caching library for Node.js that provides a simple and efficient way to store data temporarily. It is designed for quick access to cached data within a single application instance and supports features like TTL (time-to-live) for automatic cache expiration.

Stat Detail

Package         Stars   Size      Open Issues   Last Publish    License
cache-manager   1,969   52.2 kB   0             3 months ago    MIT
memcached       1,316   -         133           10 years ago    MIT
memjs           206     88.5 kB   29            2 years ago     MIT
node-cache      2,375   -         77            6 years ago     MIT

Feature Comparison: cache-manager vs memcached vs memjs vs node-cache

Caching Strategy

  • cache-manager:

    cache-manager supports multiple caching strategies, including in-memory, Redis, and Memcached. It allows for hierarchical caching, where data can be cached at different levels, providing flexibility in how data is stored and retrieved.

  • memcached:

    memcached focuses on distributed caching, where data is stored across multiple servers to provide scalability and fault tolerance. It is designed for high-performance caching with low latency.

  • memjs:

    memjs is a simple Memcached client that supports distributed caching through Memcached servers. It provides a lightweight interface for storing and retrieving data from a distributed cache.

  • node-cache:

    node-cache provides in-memory caching within a single application instance. It does not support distributed caching, making it suitable for applications that only need to cache data locally.
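
The tiered behavior cache-manager offers can be sketched in a few lines of plain JavaScript. This is an illustration of the idea only, not cache-manager's internals: reads hit the fastest layer first, fall back to the slower layer, and back-fill on a hit.

```javascript
// Two-level cache sketch: `l1` is the fast in-memory layer, `l2` stands in
// for a slower backend such as Redis or Memcached.
const l1 = new Map();
const l2 = new Map();

function tieredSet(key, value) {
  // Write through to every layer so they stay consistent.
  l1.set(key, value);
  l2.set(key, value);
}

function tieredGet(key) {
  if (l1.has(key)) return l1.get(key); // fastest layer wins
  if (l2.has(key)) {
    const value = l2.get(key);
    l1.set(key, value); // promote to the faster layer for next time
    return value;
  }
  return undefined; // miss in every layer
}
```

Even if the in-memory layer is emptied (for example, one process restarts), the next read repopulates it from the slower shared layer.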

Data Expiration

  • cache-manager:

    cache-manager allows for setting TTL (time-to-live) values for cached data, enabling automatic expiration and eviction of stale data. It supports configurable expiration times for different cache stores.

  • memcached:

    memcached supports TTL for cached data, allowing items to expire after a specified period. This helps manage cache size and ensures that outdated data is automatically removed.

  • memjs:

    memjs supports TTL for cached items, allowing data to expire after a set time. This feature helps manage cache lifecycle and ensures that stale data is evicted automatically.

  • node-cache:

    node-cache provides TTL support for cached data, allowing items to expire and be removed from the cache after a specified duration. It also supports manual cache invalidation.
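
All four libraries implement TTL in essentially the same way: each entry records an absolute expiry time, and expired entries are treated as missing. A minimal sketch of that mechanism (not any of these libraries' actual internals; the injectable `now` clock exists purely to make the behavior testable):

```javascript
// Minimal TTL cache sketch: entries carry an absolute expiry timestamp and
// are lazily evicted when read after that time.
function createTtlCache(now = () => Date.now()) {
  const entries = new Map();
  return {
    set(key, value, ttlMs) {
      entries.set(key, { value, expires: now() + ttlMs });
    },
    get(key) {
      const entry = entries.get(key);
      if (entry === undefined) return undefined;
      if (now() >= entry.expires) {
        entries.delete(key); // stale: evict on read
        return undefined;
      }
      return entry.value;
    },
  };
}
```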

Scalability

  • cache-manager:

    cache-manager scalability depends on the underlying storage backend. When used with distributed stores like Redis or Memcached, it can scale to handle large amounts of data and traffic.

  • memcached:

    memcached is designed for high scalability, allowing data to be distributed across multiple servers. This makes it suitable for applications with high traffic and large datasets.

  • memjs:

    memjs inherits the scalability features of Memcached, allowing data to be distributed across multiple servers for high availability and performance.

  • node-cache:

    node-cache is limited to in-memory caching within a single application instance, making it less scalable for distributed applications. It is best suited for small to medium-sized applications.

Ease of Integration

  • cache-manager:

    cache-manager provides a unified API for integrating with various caching backends, making it easy to switch between different storage systems. It also supports custom stores, allowing developers to extend its functionality.

  • memcached:

    memcached requires setting up Memcached servers and configuring the client to connect to them. Integration is straightforward but requires managing the Memcached infrastructure.

  • memjs:

    memjs is easy to integrate with Node.js applications, providing a simple API for connecting to Memcached servers. It is lightweight and requires minimal configuration.

  • node-cache:

    node-cache is simple to integrate into Node.js applications, requiring no external dependencies or infrastructure. It provides a straightforward API for caching data in memory.

Code Examples

  • cache-manager:

    Example of using cache-manager (v6+), which stores in memory by default

    const { createCache } = require('cache-manager');

    // In-memory cache by default; add Keyv storage adapters
    // (e.g. @keyv/redis) via the `stores` option for other backends.
    const cache = createCache({ ttl: 60 * 1000 /* milliseconds */ });

    (async () => {
      // Set and get cache (per-entry TTL in milliseconds)
      await cache.set('key', 'value', 10 * 1000);
      const result = await cache.get('key');
      console.log('Cached Value:', result);
    })();
    
  • memcached:

    Example of using memcached client

    const Memcached = require('memcached');
    const memcached = new Memcached('localhost:11211');
    
    // Set a value
    memcached.set('key', 'value', 10, (err) => {
      if (err) throw err;
      // Get the value
      memcached.get('key', (err, data) => {
        if (err) throw err;
        console.log('Cached Value:', data);
      });
    });
    
  • memjs:

    Example of using memjs client

    const memjs = require('memjs');
    const client = memjs.Client.create();
    
    // Set a value
    client.set('key', 'value', { expires: 10 }, (err) => {
      if (err) throw err;
      // Get the value
      client.get('key', (err, value) => {
        if (err) throw err;
        console.log('Cached Value:', value.toString());
      });
    });
    
  • node-cache:

    Example of using node-cache

    const NodeCache = require('node-cache');
    const myCache = new NodeCache();
    
    // Set a value
    myCache.set('key', 'value', 10); // TTL is 10 seconds
    
    // Get the value
    const value = myCache.get('key');
    console.log('Cached Value:', value);
    

How to Choose: cache-manager vs memcached vs memjs vs node-cache

  • cache-manager:

    Choose cache-manager if you need a flexible caching solution that supports multiple storage backends and allows for easy integration with various systems. It is ideal for applications that require a unified caching interface and the ability to switch between different storage types.

  • memcached:

    Choose memcached if you need a high-performance, distributed caching solution for applications that require low-latency data access. It is suitable for large-scale applications that need to share cached data across multiple servers.

  • memjs:

    Choose memjs if you need a lightweight and simple Memcached client for Node.js that prioritizes performance and ease of use. It is ideal for applications that require a fast caching solution with minimal configuration.

  • node-cache:

    Choose node-cache if you need a straightforward in-memory caching solution for a single application instance. It is best suited for applications that require quick access to cached data without the need for distributed caching or external storage.

README for cache-manager

cache-manager

Simple and fast NodeJS caching module.

A cache module for NodeJS that allows easy wrapping of functions in cache, tiered caches, and a consistent interface.

  • Made with Typescript and compatible with ESModules.
  • Easy way to wrap any function in cache, with support for refreshing expiring cache keys in the background.
  • Tiered caches -- data gets stored in each cache and fetched from the highest priority cache(s) first.
  • nonBlocking option that optimizes how the system handles multiple stores.
  • Use with any Keyv compatible storage adapter.
  • 100% test coverage via vitest.

We moved to using Keyv, which is more actively maintained and has a larger community.

A special thanks to Tim Phan who took cache-manager v5 and ported it to Keyv which is the foundation of v6. 🎉 Another special thanks to Doug Ayers who wrote promise-coalesce which was used in v5 and now embedded in v6.

Migration from v6 to v7

v7 has only one breaking change which is changing the return type from null to undefined when there is no data to return. This is to align with the Keyv API and to make it more consistent with the rest of the methods. Below is an example of how to migrate from v6 to v7:

import { createCache } from 'cache-manager';

const cache = createCache();
const result = await cache.get('key');
// result will be undefined if the key is not found or expired
console.log(result); // undefined

Migration from v5 to v6

v6 is a major update and has breaking changes, primarily around the storage adapters. We have moved to using Keyv, which is more actively maintained and has a larger community. Below are the changes you need to make to migrate from v5 to v6. In v5, memoryStore was used to create a memory store; in v6 you can use any storage adapter that Keyv supports. Below is an example of how to migrate from v5 to v6:

import { createCache, memoryStore } from 'cache-manager';

// Create memory cache synchronously
const memoryCache = createCache(memoryStore({
  max: 100,
  ttl: 10 * 1000 /*milliseconds*/,
}));

In v6 you can use any storage adapter that Keyv supports. Below is an example of using the in memory store with Keyv:

import { createCache } from 'cache-manager';

const cache = createCache();

If you would like to do multiple stores you can do the following:

import { createCache } from 'cache-manager';
import { createKeyv } from 'cacheable';
import { createKeyv as createKeyvRedis } from '@keyv/redis';

const memoryStore = createKeyv();
const redisStore = createKeyvRedis('redis://user:pass@localhost:6379');

const cache = createCache({
  stores: [memoryStore, redisStore],
});

When caching in memory, if you get errors on Symbol values, or objects such as Uint8Array come back malformed, set the serialization and deserialization options in Keyv to undefined, since it will otherwise attempt JSON serialization.

import { createCache } from "cache-manager";
import { Keyv } from "keyv";

const keyv = new Keyv();
keyv.serialize = undefined;
keyv.deserialize = undefined;

const memoryCache = createCache({
	stores: [keyv],
});

The other option is to set the serialization to something that is not JSON.stringify. You can read more about it here: https://keyv.org/docs/keyv/#custom-serializers

If you would like a more robust in memory storage adapter you can use CacheableMemory from Cacheable. Below is an example of how to migrate from v5 to v6 using CacheableMemory:

import { createCache } from 'cache-manager';
import { createKeyv } from 'cacheable';

const cache = createCache({
  stores: [createKeyv({ ttl: 60000, lruSize: 5000 })],
});

To learn more about CacheableMemory please visit: http://cacheable.org/docs/cacheable/#cacheablememory---in-memory-cache

If you would still like to use the legacy storage adapters, you can wrap them with KeyvAdapter. For an example of migrating from v5 to v6 using cache-manager-redis-yet, see Using Legacy Storage Adapters below.


Installation

npm install cache-manager

By default, everything is stored in memory. You can optionally install a storage adapter; choose any of the storage adapters supported by Keyv:

npm install @keyv/redis
npm install @keyv/memcache
npm install @keyv/mongo
npm install @keyv/sqlite
npm install @keyv/postgres
npm install @keyv/mysql
npm install @keyv/etcd

In addition, Keyv supports other storage adapters such as lru-cache and CacheableMemory from Cacheable (more examples below). Please read the Keyv documentation for more information.

Quick start

import { Keyv } from 'keyv';
import { createCache } from 'cache-manager';

// Memory store by default
const cache = createCache()

// Single store which is in memory
const cache = createCache({
  stores: [new Keyv()],
})

Here is an example of doing layer 1 and layer 2 caching with the in-memory being CacheableMemory from Cacheable and the second layer being @keyv/redis:

import { Keyv } from 'keyv';
import KeyvRedis from '@keyv/redis';
import { CacheableMemory } from 'cacheable';
import { createCache } from 'cache-manager';

// Multiple stores
const cache = createCache({
  stores: [
    //  High performance in-memory cache with LRU and TTL
    new Keyv({
      store: new CacheableMemory({ ttl: 60000, lruSize: 5000 }),
    }),

    //  Redis Store
    new Keyv({
      store: new KeyvRedis('redis://user:pass@localhost:6379'),
    }),
  ],
})

Once it is created, you can use the cache object to set, get, delete, and wrap functions in cache.


// With default ttl and refreshThreshold
const cache = createCache({
  ttl: 10000,
  refreshThreshold: 3000,
})

await cache.set('foo', 'bar')
// => bar

await cache.get('foo')
// => bar

await cache.del('foo')
// => true

await cache.get('foo')
// => undefined

await cache.wrap('key', () => 'value')
// => value

Using CacheableMemory or lru-cache as storage adapter

Because we are using Keyv, you can use any storage adapter that Keyv supports such as lru-cache or CacheableMemory from Cacheable. Below is an example of using CacheableMemory:

In this example we are using CacheableMemory from Cacheable, which is a fast in-memory cache that supports LRU and TTL expiration.

import { createCache } from 'cache-manager';
import { Keyv } from 'keyv';
import { KeyvCacheableMemory } from 'cacheable';

const store = new KeyvCacheableMemory({ ttl: 60000, lruSize: 5000 });
const keyv = new Keyv({ store });
const cache = createCache({ stores: [keyv] });

Here is an example using lru-cache:

import { createCache } from 'cache-manager';
import { Keyv } from 'keyv';
import { LRUCache } from 'lru-cache';

const keyv = new Keyv({ store: new LRUCache({ max: 5000, ttl: 60000 }) });
const cache = createCache({ stores: [keyv] });

Options

  • stores?: Keyv[]

    List of Keyv instances. Please refer to the Keyv documentation for more information.

  • ttl?: number - Default time to live in milliseconds.

    The time to live in milliseconds. This is the maximum amount of time that an item can be in the cache before it is removed.

  • refreshThreshold?: number | (value:T) => number - Default refreshThreshold in milliseconds. You can also provide a function that will return the refreshThreshold based on the value.

    If the remaining TTL is less than refreshThreshold, the system will update the value asynchronously in background.

  • refreshAllStores?: boolean - Default false

    If set to true, the system will update the value of all stores when the refreshThreshold is met. Otherwise, it will only update from the top to the store that triggered the refresh.

  • nonBlocking?: boolean - Default false

    If set to true, the system will not block when multiple stores are used. Here is how it affects the type of functions:

    • set and mset - will not wait for all stores to finish.
    • get and mget - will return the first (fastest) value found.
    • del and mdel - will not wait for all stores to finish.
    • clear - will not wait for all stores to finish.
    • wrap - will do the same as get and set (return the first value found and not wait for all stores to finish).
  • cacheId?: string - Defaults to random string

    Unique identifier for the cache instance. This is primarily used to not have conflicts when using wrap with multiple cache instances.
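
The interaction between ttl and refreshThreshold boils down to one comparison. The following is a sketch of the rule described above, not cache-manager's actual code:

```javascript
// Refresh in the background only while the entry is still alive but its
// remaining TTL has dropped below the threshold.
function shouldRefresh(expiresAt, refreshThreshold, now = Date.now()) {
  const remaining = expiresAt - now;
  return remaining > 0 && remaining < refreshThreshold;
}
```

So with ttl: 10000 and refreshThreshold: 3000, a value is served straight from cache for the first 7 seconds, served from cache while being refreshed in the background for the last 3, and recomputed on demand once it has fully expired.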

Methods

set

set(key, value, [ttl]): Promise<value>

Sets a key/value pair. Optionally pass a ttl (in milliseconds). An error will be thrown on any failure.

await cache.set('key-1', 'value 1')

// expires after 5 seconds
await cache.set('key 2', 'value 2', 5000)

See unit tests in test/set.test.ts for more information.

mset

mset(keys: [ { key, value, ttl } ]): Promise<true>

Sets multiple key/value pairs. Optionally pass a ttl (in milliseconds) per entry. An error will be thrown on any failure.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2', ttl: 5000 },
]);

get

get(key): Promise<value>

Gets a saved value from the cache. Returns undefined if the key is not found or has expired; otherwise returns the stored value.

await cache.set('key', 'value')

await cache.get('key')
// => value

await cache.get('foo')
// => undefined

See unit tests in test/get.test.ts for more information.

mget

mget(keys: [key]): Promise<value[]>

Gets multiple saved values from the cache. For each key that is not found or has expired, undefined is returned in its place.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2' },
]);

await cache.mget(['key-1', 'key-2', 'key-3'])
// => ['value 1', 'value 2', undefined]

ttl

ttl(key): Promise<number | null>

Gets the expiration time of a key in milliseconds. Returns null if the key is not found or has expired.

await cache.set('key', 'value', 1000); // expires after 1 second

await cache.ttl('key'); // => the expiration time in milliseconds

await cache.get('foo'); // => undefined

See unit tests in test/ttl.test.ts for more information.

del

del(key): Promise<true>

Deletes a key. An error will be thrown on any failure.

await cache.set('key', 'value')

await cache.get('key')
// => value

await cache.del('key')

await cache.get('key')
// => undefined

See unit tests in test/del.test.ts for more information.

mdel

mdel(keys: [key]): Promise<true>

Deletes multiple keys. An error will be thrown on any failure.

await cache.mset([
  { key: 'key-1', value: 'value 1' },
  { key: 'key-2', value: 'value 2' },
]);

await cache.mdel(['key-1', 'key-2'])

clear

clear(): Promise<true>

Flushes all data. An error will be thrown on any failure.

await cache.set('key-1', 'value 1')
await cache.set('key-2', 'value 2')

await cache.get('key-1')
// => value 1
await cache.get('key-2')
// => value 2

await cache.clear()

await cache.get('key-1')
// => undefined
await cache.get('key-2')
// => undefined

See unit tests in test/clear.test.ts for more information.

wrap

wrap(key, fn: async () => value, [ttl], [refreshThreshold]): Promise<value>

Alternatively, the optional parameters can be passed as an options object, which also supports a raw parameter:

wrap(key, fn: async () => value, { ttl?: number, refreshThreshold?: number, raw?: true }): Promise<value>

Wraps a function in cache. The first time the function is run, its results are stored in cache so subsequent calls retrieve from cache instead of calling the function.

If refreshThreshold is set and the remaining TTL is less than refreshThreshold, the system will update the value asynchronously. In the meantime, the system will return the old value until expiration. You can also provide a function that will return the refreshThreshold based on the value (value:T) => number.

If the object format for the optional parameters is used, an additional raw parameter can be applied, changing the function return type to raw data including expiration timestamp as { value: [data], expires: [timestamp] }.

await cache.wrap('key', () => 1, 5000, 3000)
// call function then save the result to cache
// =>  1

await cache.wrap('key', () => 2, 5000, 3000)
// return data from cache, function will not be called again
// => 1

await cache.wrap('key', () => 2, { ttl: 5000, refreshThreshold: 3000, raw: true })
// returns raw data including expiration timestamp
// => { value: 1, expires: [timestamp] }

// wait 3 seconds
await sleep(3000)

await cache.wrap('key', () => 2, 5000, 3000)
// return data from cache, call function in background and save the result to cache
// =>  1

await cache.wrap('key', () => 3, 5000, 3000)
// return data from cache, function will not be called
// =>  2

await cache.wrap('key', () => 4, 5000, () => 3000);
// return data from cache, function will not be called
// =>  2

await cache.wrap('error', () => {
  throw new Error('failed')
})
// => error

NOTES:

  • The store that will be checked for refresh is the one where the key will be found first (highest priority).
  • If the threshold is low and the worker function is slow, the key may expire and you may encounter a race condition when updating values.
  • If no ttl is set for the key, the refresh mechanism will not be triggered.

See unit tests in test/wrap.test.ts for more information.

disconnect

disconnect(): Promise<void>

Will disconnect from the relevant store(s). It is highly recommended when using a Keyv storage adapter that requires an explicit disconnect. The right time to disconnect differs per adapter; with @keyv/redis, for example, disconnect only when you are completely done with the cache.

await cache.disconnect();

See unit tests in test/disconnect.test.ts for more information.

Properties

cacheId

cacheId(): string

Returns the cache instance id. This is primarily used to avoid conflicts when using wrap with multiple cache instances.

const cache = createCache({ cacheId: 'my-cache-id' });
cache.cacheId(); // => 'my-cache-id'

See unit tests in test/cache-id.test.ts for more information.

stores

stores(): Keyv[]

Returns the list of Keyv instances. This can be used to get the stores and then use the Keyv API to interact with a store directly.

Events

set

Fired when a key has been added or changed.

cache.on('set', ({ key, value, error }) => {
	// ... do something ...
})

del

Fired when a key has been removed manually.

cache.on('del', ({ key, error }) => {
	// ... do something ...
})

clear

Fired when the cache has been flushed.

cache.on('clear', (error) => {
  if (error) {
    // ... do something ...
  }
})

refresh

Fired when the cache has been refreshed in the background.

cache.on('refresh', ({ key, value, error }) => {
  if (error) {
    // ... do something ...
  }
})

See unit tests in test/events.test.ts for more information.

Doing Iteration on Stores

You can use the stores method to get the list of stores and then use the Keyv API to interact with the store directly. Below is an example of iterating over all stores and getting all keys:

import Keyv from 'keyv';
import { createKeyv } from '@keyv/redis';
import { createCache } from 'cache-manager';

const keyv = new Keyv();
const keyvRedis = createKeyv('redis://user:pass@localhost:6379');

const cache = createCache({
  stores: [keyv, keyvRedis],
});

// add some data
await cache.set('key-1', 'value 1');
await cache.set('key-2', 'value 2');

// get the store you want to iterate over. In this example we are using the second store (redis)
const store = cache.stores[1];

if(store?.iterator) {
  for await (const [key, value] of store.iterator({})) {
    console.log(key, value);
  }
}

WARNING: Be careful when using iterator, as it can cause major performance issues depending on the amount of data being retrieved. Also, not all storage adapters support iterator, so check the documentation for the storage adapter you are using.

Update on redis and ioredis Support

We will not be supporting cache-manager-ioredis-yet or cache-manager-redis-yet in the future as we have moved to using Keyv as the storage adapter @keyv/redis.

Using Legacy Storage Adapters

There are many storage adapters built for cache-manager and because of that we wanted to provide a way to use them with KeyvAdapter. Below is an example of using cache-manager-redis-yet:

import { createCache, KeyvAdapter } from 'cache-manager';
import { Keyv } from 'keyv';
import { redisStore } from 'cache-manager-redis-yet';

const adapter = new KeyvAdapter( await redisStore() );
const keyv = new Keyv({ store: adapter });
const cache = createCache({ stores: [keyv]});

This adapter lets you plug in any legacy storage adapter. If you run into issues, verify that the adapter follows the CacheManagerStore interface.

Contribute

If you would like to contribute to the project, please read how to contribute here CONTRIBUTING.md.

License

MIT © Jared Wray