Node.js Caching Libraries Comparison
lru-cache vs quick-lru vs node-cache vs memory-cache
What are Node.js Caching Libraries?

Caching libraries in Node.js improve application performance by temporarily storing data for quick access, reducing repeated retrieval from slower sources such as databases or APIs. They help optimize resource usage, improve response times, and provide a smoother user experience. Each library has different features and trade-offs, so understanding those differences is important when choosing the right one for your application.
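
To make this concrete, here is a minimal cache-aside sketch using lru-cache. The option values are illustrative, and fetchUserFromDb is a hypothetical stand-in for any slow data source.

import { LRUCache } from 'lru-cache'

// cap the cache at 1,000 entries, each valid for one minute
const cache = new LRUCache({ max: 1000, ttl: 1000 * 60 })

async function getUser(id) {
  const cached = cache.get(id)
  if (cached !== undefined) return cached // fast path: serve from memory
  const user = await fetchUserFromDb(id)  // slow path: hypothetical database call
  cache.set(id, user)
  return user
}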

Stat Detail

Package      | Downloads   | Stars | Size    | Open Issues | Last Publish | License
lru-cache    | 179,983,886 | 5,399 | 808 kB  | 8           | 2 months ago | ISC
quick-lru    | 21,688,831  | 680   | 15.1 kB | 4           | a year ago   | MIT
node-cache   | 3,440,318   | 2,290 | -       | 74          | 4 years ago  | MIT
memory-cache | 829,841     | 1,593 | -       | 32          | 8 years ago  | BSD-2-Clause

Feature Comparison: lru-cache vs quick-lru vs node-cache vs memory-cache

Eviction Policy

  • lru-cache:

    lru-cache implements a Least Recently Used (LRU) eviction policy: when the cache reaches its maximum size, it automatically removes the least recently accessed items to make room for new entries (see the sketch after this list). This is particularly useful when you want to keep frequently accessed data readily available while managing memory efficiently.

  • quick-lru:

    quick-lru uses an LRU eviction policy similar to lru-cache, but it is optimized for speed and performance. It efficiently manages memory by evicting the least recently used items, making it ideal for high-performance applications.

  • node-cache:

    node-cache supports TTL (time-to-live) expiration, allowing you to set a lifespan for each cache entry. After the specified time, the entry is automatically removed, which helps manage memory usage effectively and ensures that stale data does not persist in the cache.

  • memory-cache:

    memory-cache does not implement any eviction policy; it simply stores items in memory until the process ends. This makes it suitable for temporary caching but can lead to memory issues if not managed properly, as it does not automatically remove old entries.
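
A minimal sketch of the LRU eviction described above, using lru-cache with an illustratively small limit of two entries:

import { LRUCache } from 'lru-cache'

const cache = new LRUCache({ max: 2 })
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')    // touching 'a' makes 'b' the least recently used entry
cache.set('c', 3) // the cache is full, so 'b' is evicted
cache.get('b')    // undefined
cache.get('a')    // 1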

Performance

  • lru-cache:

    lru-cache is designed for high performance, providing fast access times thanks to its efficient data structure. It is particularly effective when cache hits are frequent, as it minimizes the overhead of managing the cache. A rough way to compare the libraries yourself is sketched after this list.

  • quick-lru:

    quick-lru is optimized for speed, ensuring that cache operations are performed with minimal latency. It is particularly beneficial in applications where performance is critical, and it can handle high-throughput scenarios effectively.

  • node-cache:

    node-cache provides good performance with additional features like TTL, making it suitable for applications that require a balance between performance and cache management capabilities. However, it may introduce some overhead compared to simpler libraries.

  • memory-cache:

    memory-cache offers decent performance for simple caching needs, but it may not be as fast as other libraries when handling large datasets or high concurrency due to its lack of advanced features.
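
As a rough starting point for testing these claims against your own workload, here is an illustrative micro-benchmark sketch. It is not a rigorous methodology; as the lru-cache README notes below, results vary wildly with key types and access patterns.

import { performance } from 'node:perf_hooks'
import { LRUCache } from 'lru-cache'

const cache = new LRUCache({ max: 10_000 })
const start = performance.now()
for (let i = 0; i < 1_000_000; i++) {
  const key = `key-${i % 10_000}` // short string keys; numeric keys would skew results
  cache.set(key, i)
  cache.get(key)
}
console.log(`${(performance.now() - start).toFixed(1)} ms for 1M set+get pairs`)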

API Simplicity

  • lru-cache:

    lru-cache offers a straightforward API that is easy to understand and use. It provides essential methods for setting, getting, and deleting cache entries, making it user-friendly for developers. A side-by-side sketch of all four APIs follows this list.

  • quick-lru:

    quick-lru features a minimalistic API that focuses on performance while still providing essential LRU cache functionalities. Its simplicity makes it easy to integrate into existing applications.

  • node-cache:

    node-cache provides a more comprehensive API with additional features like TTL management and cache statistics. While it may have a steeper learning curve than simpler libraries, it offers greater control over caching behavior.

  • memory-cache:

    memory-cache has a very simple API with basic methods for setting and getting cache entries. Its minimalistic design is ideal for quick implementations without the need for complex configurations.
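
The basic operations of all four libraries, side by side (values are illustrative; in practice each library would be imported in its own module, and recent versions of quick-lru are ESM-only):

import { LRUCache } from 'lru-cache'
import QuickLRU from 'quick-lru'
import NodeCache from 'node-cache'
import mcache from 'memory-cache'

const lru = new LRUCache({ max: 500 })      // named export, options object
lru.set('k', 'v'); lru.get('k')             // 'v'

const qlru = new QuickLRU({ maxSize: 500 }) // default export, maxSize is required
qlru.set('k', 'v'); qlru.get('k')           // 'v'

const nc = new NodeCache({ stdTTL: 100 })   // default TTL in seconds
nc.set('k', 'v', 60); nc.get('k')           // optional per-entry TTL override

mcache.put('k', 'v', 60_000); mcache.get('k') // one shared cache, TTL in milliseconds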

Use Cases

  • lru-cache:

    lru-cache is well-suited for applications that require efficient caching of frequently accessed data, such as web applications, APIs, and microservices where memory management is crucial.

  • quick-lru:

    quick-lru is perfect for high-performance applications that demand fast caching solutions, such as gaming servers or real-time data processing systems where speed is critical.

  • node-cache:

    node-cache is great for applications that need more control over cache expiration and management, such as web servers or applications with dynamic data that can become stale over time.

  • memory-cache:

    memory-cache is ideal for lightweight applications or scripts where caching needs are minimal and simplicity is preferred, such as small utilities or one-off scripts.

Memory Management

  • lru-cache:

    lru-cache efficiently manages memory by evicting the least recently used items when the cache limit is reached, preventing memory overflow and ensuring optimal resource usage.

  • quick-lru:

    quick-lru maintains efficient memory usage through its LRU policy, ensuring that the cache remains within specified limits while providing quick access to frequently used items.

  • node-cache:

    node-cache provides TTL management, allowing for better memory control by automatically removing stale entries and thus optimizing memory usage over time (see the sketch after this list).

  • memory-cache:

    memory-cache does not manage memory actively, which can lead to potential memory leaks if not monitored, as it retains all cached items until the process ends.
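
For example, node-cache's constructor accepts stdTTL (a default per-entry lifetime) and checkperiod (how often its internal timer scans for expired keys); the values below are illustrative:

import NodeCache from 'node-cache'

// stdTTL: default entry lifetime in seconds; checkperiod: scan interval in seconds
const cache = new NodeCache({ stdTTL: 600, checkperiod: 120 })

cache.set('user:42', { name: 'Ada' }) // expires after the default 600 s
cache.set('session', 'abc123', 60)    // per-entry override: 60 s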

How to Choose: lru-cache vs quick-lru vs node-cache vs memory-cache
  • lru-cache:

    Choose lru-cache if you need a simple and efficient Least Recently Used (LRU) cache implementation. It is ideal for scenarios where you want to limit memory usage and automatically evict the least recently accessed items when the cache reaches its size limit.

  • quick-lru:

    Use quick-lru for a fast and efficient LRU cache that is optimized for performance. It is particularly useful in high-throughput applications where speed is crucial, and you want a minimalistic API with excellent performance characteristics.

  • node-cache:

    Opt for node-cache if you need a more feature-rich caching solution with built-in support for time-to-live (TTL) expiration, cache statistics, and expiration events. It is great for applications that require more control over cache management and expiration.

  • memory-cache:

    Select memory-cache for straightforward in-memory caching without complex features. It is suitable for lightweight applications where simplicity is key, and you do not require advanced eviction policies or expiration capabilities.

README for lru-cache

lru-cache

A cache object that deletes the least-recently-used items.

Specify a max number of the most recently used items that you want to keep, and this cache will keep that many of the most recently accessed items.

This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.

As of version 7, this is one of the most performant LRU implementations available in JavaScript, and supports a wide diversity of use cases. However, note that using some of the features will necessarily impact performance, by causing the cache to have to do more work. See the "Performance" section below.

Installation

npm install lru-cache --save

Usage

// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'

// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does.  Most of them can be
// overridden for specific items in get()/set()
const options = {
  max: 500,

  // for use with tracking overall storage size
  maxSize: 5000,
  sizeCalculation: (value, key) => {
    return 1
  },

  // for use when you need to clean up something when objects
  // are evicted from the cache
  dispose: (value, key) => {
    freeFromMemoryOrWhatever(value)
  },

  // how long to live in ms
  ttl: 1000 * 60 * 5,

  // return stale items before removing from cache?
  allowStale: false,

  updateAgeOnGet: false,
  updateAgeOnHas: false,

  // async method to use for cache.fetch(), for
  // stale-while-revalidate type of behavior
  fetchMethod: async (
    key,
    staleValue,
    { options, signal, context }
  ) => {},
}

const cache = new LRUCache(options)

cache.set('key', 'value')
cache.get('key') // "value"

// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)

cache.clear() // empty the cache

If you put more stuff in the cache, then less recently used items will fall out. That's what an LRU cache is.

For full description of the API and all options, please see the LRUCache typedocs

Storage Bounds Safety

This implementation aims to be as flexible as possible, within the limits of safe memory consumption and optimal performance.

At initial object creation, storage is allocated for max items. If max is set to zero, then some performance is lost, and item count is unbounded. Either maxSize or ttl must be set if max is not specified.

If maxSize is set, then this creates a safe limit on the maximum storage consumed, but without the performance benefits of pre-allocation. When maxSize is set, every item must provide a size, either via the sizeCalculation method provided to the constructor, or via a size or sizeCalculation option provided to cache.set(). The size of every item must be a positive integer.
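
A minimal sketch of size-tracked bounding (option values are illustrative):

import { LRUCache } from 'lru-cache'

const cache = new LRUCache({
  maxSize: 5000,
  // called for each entry unless a size is supplied at set() time;
  // must return a positive integer
  sizeCalculation: (value, key) => value.length,
})

cache.set('greeting', 'hello world')    // size computed as 11
cache.set('pi', '3.14159', { size: 7 }) // size supplied explicitly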

If neither max nor maxSize are set, then ttl tracking must be enabled. Note that, even when tracking item ttl, items are not preemptively deleted when they become stale, unless ttlAutopurge is enabled. Instead, they are only purged the next time the key is requested. Thus, if ttlAutopurge, max, and maxSize are all not set, then the cache will potentially grow unbounded.

In this case, a warning is printed to standard error. Future versions may require the use of ttlAutopurge if max and maxSize are not specified.
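
Opting in to proactive deletion of stale entries looks like this (values illustrative):

import { LRUCache } from 'lru-cache'

const cache = new LRUCache({
  ttl: 1000 * 60,     // entries become stale after one minute
  ttlAutopurge: true, // delete entries as they expire, keeping the cache bounded
})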

If you truly wish to use a cache that is bound only by TTL expiration, consider using a Map object, and calling setTimeout to delete entries when they expire. It will perform much better than an LRU cache.

Here is an implementation you may use, under the same license as this package:

// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(
      k,
      setTimeout(() => cache.delete(k), ttl)
    )
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  },
}

If that isn't to your liking, check out @isaacs/ttlcache.

Storing Undefined Values

This cache never stores undefined values, as undefined is used internally in a few places to indicate that a key is not in the cache.

You may call cache.set(key, undefined), but this is just an alias for cache.delete(key). Note that this has the effect that cache.has(key) will return false after setting it to undefined.

cache.set(myKey, undefined)
cache.has(myKey) // false!

If you need to track undefined values, and still note that the key is in the cache, an easy workaround is to use a sigil object of your own.

import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
  cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
  const v = cache.get(key)
  return v === undefinedValue ? undefined : v
}

Performance

As of January 2022, version 7 of this library is one of the most performant LRU cache implementations in JavaScript.

Benchmarks can be extremely difficult to get right. In particular, the performance of set/get/delete operations on objects will vary wildly depending on the type of key used. V8 is highly optimized for objects with keys that are short strings, especially integer numeric strings. Thus any benchmark which tests solely using numbers as keys will tend to find that an object-based approach performs the best.

Note that coercing anything to strings to use as object keys is unsafe, unless you can be 100% certain that no other type of value will be used. For example:

const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]

set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
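// both keys coerce to the string '[object Object]', so the second
// set() overwrites the first: get({}) now returns 'oopsie'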

Also beware of "Just So" stories regarding performance. Garbage collection of large (especially: deep) object graphs can be incredibly costly, with several "tipping points" where it increases exponentially. As a result, putting that off until later can make it much worse, and less predictable. If a library performs well, but only in a scenario where the object graph is kept shallow, then that won't help you if you are using large objects as keys.

In general, when attempting to use a library to improve performance (such as a cache like this one), it's best to choose an option that will perform well in the sorts of scenarios where you'll actually use it.

This library is optimized for repeated gets and minimizing eviction time, since that is the expected need of an LRU cache. Set operations are somewhat slower on average than a few other options, in part because of that optimization. It is assumed that you'll be caching some costly operation, ideally as rarely as possible, so optimizing set over get would be unwise.

If performance matters to you:

  1. If it's at all possible to use small integer values as keys, and you can guarantee that no other types of values will be used as keys, then do that, and use a cache such as lru-fast, or mnemonist's LRUCache which uses an Object as its data store.

  2. Failing that, if at all possible, use short non-numeric strings (ie, less than 256 characters) as your keys, and use mnemonist's LRUCache.

  3. If the types of your keys will be anything else, especially long strings, strings that look like floats, objects, or some mix of types, or if you aren't sure, then this library will work well for you.

    If you do not need the features that this library provides (like asynchronous fetching, a variety of TTL staleness options, and so on), then mnemonist's LRUMap is a very good option, and just slightly faster than this module (since it does considerably less).

  4. Do not use a dispose function, size tracking, or especially ttl behavior, unless absolutely needed. These features are convenient, and necessary in some use cases, and every attempt has been made to make the performance impact minimal, but it isn't nothing.

Breaking Changes in Version 7

This library changed to a different algorithm and internal data structure in version 7, yielding significantly better performance, albeit with some subtle changes as a result.

If you were relying on the internals of LRUCache in version 6 or before, it probably will not work in version 7 and above.

Breaking Changes in Version 8

  • The fetchContext option was renamed to context, and may no longer be set on the cache instance itself.
  • Rewritten in TypeScript, so pretty much all the types moved around a lot.
  • The AbortController/AbortSignal polyfill was removed. For this reason, Node version 16.14.0 or higher is now required.
  • Internal properties were moved to actual private class properties.
  • Keys and values must not be null or undefined.
  • Minified export available at 'lru-cache/min', for both CJS and MJS builds.

Breaking Changes in Version 9

  • Named export only, no default export.
  • AbortController polyfill returned, albeit with a warning when used.

Breaking Changes in Version 10

  • cache.fetch() return type is now Promise<V | undefined> instead of Promise<V | void>. This is an irrelevant change practically speaking, but can require changes for TypeScript users.

For more info, see the change log.