lru-cache vs node-cache vs @isaacs/ttlcache vs memory-cache
Node.js Caching Libraries Comparison
What Are Node.js Caching Libraries?

Caching libraries in Node.js are designed to temporarily store data in memory to improve application performance by reducing the time it takes to access frequently used data. These libraries implement various caching strategies, allowing developers to optimize data retrieval and manage memory effectively. By leveraging caching, applications can handle higher loads and provide faster responses, especially for read-heavy workloads. Each library has its own unique features and use cases, making it essential to choose the right one based on specific project requirements.

Stat Detail
Package             Weekly Downloads   Stars   Size      Issues   Last Publish    License
lru-cache           195,830,890        5,474   808 kB    11       4 months ago    ISC
node-cache          3,966,565          2,305   -         74       5 years ago     MIT
@isaacs/ttlcache    3,317,517          176     25.2 kB   6        2 years ago     ISC
memory-cache        878,338            1,600   -         32       8 years ago     BSD-2-Clause
Feature Comparison: lru-cache vs node-cache vs @isaacs/ttlcache vs memory-cache

Expiration Management

  • lru-cache:

lru-cache is primarily a size-bounded cache: when the cache reaches its limit, the least recently used items are evicted. It also supports an optional TTL per cache or per entry, but it makes no strong TTL guarantees; by default, expired items are not pruned preemptively but are treated as missing and deleted when next fetched.

  • node-cache:

node-cache supports TTL for cache entries, allowing developers to set expiration times, and it automatically cleans up expired items on a configurable interval. This makes it a robust choice for applications that need both caching and expiration management; a short TTL sketch follows this list.

  • @isaacs/ttlcache:

    @isaacs/ttlcache provides built-in TTL management for each cache entry, allowing developers to specify how long data should remain in the cache before it is automatically removed. This feature is crucial for maintaining data freshness and preventing stale data from being served.

  • memory-cache:

memory-cache offers only minimal expiration support: an optional timeout can be passed to put(), but entries without one remain until explicitly removed, and there is no automatic cleanup or size-based eviction. This can lead to stale data or unbounded growth if not managed carefully, making it less suitable for dynamic data scenarios.
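
To make these behaviors concrete, here is a minimal sketch of per-entry TTLs in node-cache and @isaacs/ttlcache. The keys and values are placeholders; note that node-cache counts TTLs in seconds, while @isaacs/ttlcache counts milliseconds.

const NodeCache = require('node-cache')
const TTLCache = require('@isaacs/ttlcache')

// node-cache: TTLs in seconds; checkperiod sets the interval (also in
// seconds) at which expired entries are automatically cleaned up
const nodeCache = new NodeCache({ stdTTL: 60, checkperiod: 120 })
nodeCache.set('user:1', { name: 'Ada' })     // expires after the default 60s
nodeCache.set('user:2', { name: 'Lin' }, 10) // per-entry override: 10s

// @isaacs/ttlcache: TTLs in milliseconds, with a per-entry override
const ttlCache = new TTLCache({ max: 1000, ttl: 60_000 })
ttlCache.set('user:1', { name: 'Ada' })                  // expires after 60s
ttlCache.set('user:2', { name: 'Lin' }, { ttl: 10_000 }) // expires after 10s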

Data Structure

  • lru-cache:

lru-cache implements a least-recently-used eviction algorithm: it maintains a fixed-size cache and evicts the least recently accessed items first. This structure is beneficial for applications with predictable access patterns; a minimal sketch of the idea follows this list.

  • node-cache:

node-cache uses a straightforward key-value store, with added features for TTL and automatic cleanup. This makes it versatile for various caching needs while maintaining simplicity.

  • @isaacs/ttlcache:

    @isaacs/ttlcache uses a simple key-value store structure, allowing for easy retrieval and storage of data. It is optimized for performance with a focus on TTL management, making it efficient for transient data.

  • memory-cache:

    memory-cache provides a straightforward key-value store without any eviction policy, making it simple to use but potentially leading to memory bloat if not managed carefully.
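
To illustrate the difference between a plain key-value store and an LRU structure, here is a minimal sketch of the LRU idea built on a Map, whose keys iterate in insertion order. This is not how lru-cache is implemented internally, only a demonstration of the eviction behavior described above.

class TinyLRU {
  constructor(max) {
    this.max = max
    this.map = new Map()
  }
  get(key) {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)
    // re-insert so this key becomes the most recently used
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.max) {
      // the first key in iteration order is the least recently used
      this.map.delete(this.map.keys().next().value)
    }
  }
}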

Complexity and Learning Curve

  • lru-cache:

    lru-cache has a simple API but requires understanding of LRU caching principles. It is suitable for developers who need efficient memory management but may require additional effort to implement effectively.

  • node-cache:

    node-cache strikes a balance between simplicity and functionality. It is easy to learn for developers familiar with caching concepts, while also offering advanced features for more complex use cases.

  • @isaacs/ttlcache:

    @isaacs/ttlcache is relatively easy to use, with a straightforward API focused on TTL management. Developers familiar with basic caching concepts will find it intuitive to implement.

  • memory-cache:

    memory-cache is very easy to use, making it ideal for beginners or quick prototypes. However, its simplicity comes at the cost of advanced features, which may limit its use in complex applications.

Performance

  • lru-cache:

    lru-cache is highly performant due to its LRU eviction strategy, ensuring that the most relevant data remains in memory while older, less-used data is removed. This makes it ideal for applications with limited memory resources.

  • node-cache:

    node-cache provides solid performance with its TTL and cleanup features, making it a reliable choice for applications that require both speed and effective memory management.

  • @isaacs/ttlcache:

    @isaacs/ttlcache is optimized for performance with efficient TTL management, making it suitable for applications that require quick access to frequently used data while ensuring data freshness.

  • memory-cache:

memory-cache offers good performance for small-scale applications, but because it has no size-based eviction policy, it may suffer from memory bloat if entries are not removed or given timeouts.

Use Cases

  • lru-cache:

    lru-cache is ideal for applications with predictable access patterns, such as caching database query results or frequently accessed files where memory efficiency is a concern.

  • node-cache:

    node-cache is versatile and can be used in various scenarios, including caching computed results, API responses, or any data that benefits from TTL and automatic cleanup.

  • @isaacs/ttlcache:

@isaacs/ttlcache is best suited for applications where data freshness is critical, such as caching API responses or session data that should expire after a certain time; a sketch follows this list.

  • memory-cache:

    memory-cache is suitable for lightweight applications or prototyping where caching needs are minimal and simplicity is preferred.
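
As an example of the freshness-oriented use case, here is a sketch of caching API responses with @isaacs/ttlcache, assuming a global fetch (Node 18+); the endpoint URL and key scheme are placeholders.

const TTLCache = require('@isaacs/ttlcache')

// responses expire after 30 seconds, so nothing is served stale for longer
const responses = new TTLCache({ max: 500, ttl: 30_000 })

async function getUser(id) {
  const key = `user:${id}`
  if (responses.has(key)) return responses.get(key)
  const res = await fetch(`https://api.example.com/users/${id}`) // placeholder
  const data = await res.json()
  responses.set(key, data)
  return data
}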

How to Choose: lru-cache vs node-cache vs @isaacs/ttlcache vs memory-cache
  • lru-cache:

    Select lru-cache if you require a simple and efficient least-recently-used (LRU) caching mechanism. It is suitable for applications where memory usage is a concern, as it automatically removes the least recently accessed items when the cache reaches its limit.

  • node-cache:

    Use node-cache if you want a feature-rich caching library that supports TTL, automatic cleanup, and a simple API. It is suitable for applications that require more control over cache management and the ability to store complex data types.

  • @isaacs/ttlcache:

    Choose @isaacs/ttlcache if you need a cache with time-to-live (TTL) expiration for each entry, allowing for automatic cleanup of stale data. This is ideal for scenarios where data freshness is critical and you want to avoid manual cache management.

  • memory-cache:

    Opt for memory-cache if you need a straightforward in-memory caching solution without complex features. It is easy to use and best for lightweight applications or prototyping where advanced caching strategies are not necessary.

README for lru-cache

lru-cache

A cache object that deletes the least-recently-used items.

Specify a max number of the most recently used items that you want to keep, and this cache will keep that many of the most recently accessed items.

This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.

As of version 7, this is one of the most performant LRU implementations available in JavaScript, and supports a wide diversity of use cases. However, note that using some of the features will necessarily impact performance, by causing the cache to have to do more work. See the "Performance" section below.

Installation

npm install lru-cache --save

Usage

// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'

// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does.  Most of them can be
// overridden for specific items in get()/set()
const options = {
  max: 500,

  // for use with tracking overall storage size
  maxSize: 5000,
  sizeCalculation: (value, key) => {
    return 1
  },

  // for use when you need to clean up something when objects
  // are evicted from the cache
  dispose: (value, key) => {
    freeFromMemoryOrWhatever(value)
  },

  // how long to live in ms
  ttl: 1000 * 60 * 5,

  // return stale items before removing from cache?
  allowStale: false,

  updateAgeOnGet: false,
  updateAgeOnHas: false,

  // async method to use for cache.fetch(), for
  // stale-while-revalidate type of behavior
  fetchMethod: async (
    key,
    staleValue,
    { options, signal, context }
  ) => {},
}

const cache = new LRUCache(options)

cache.set('key', 'value')
cache.get('key') // "value"

// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)

cache.clear() // empty the cache
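
The fetchMethod option shown above enables asynchronous, stale-while-revalidate-style loading via cache.fetch(). Here is a minimal sketch of memoizing API calls with it; the endpoint URL is a placeholder.

const users = new LRUCache({
  max: 100,
  ttl: 1000 * 60,
  fetchMethod: async (id, staleValue, { signal }) => {
    // the provided AbortSignal lets the cache abort a superseded fetch
    const res = await fetch(`https://api.example.com/users/${id}`, { signal })
    return res.json()
  },
})

// resolves from cache if fresh, otherwise runs fetchMethod; concurrent
// fetch() calls for the same key share a single load
const user = await users.fetch('123')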

If you put more stuff in the cache, then less recently used items will fall out. That's what an LRU cache is.

For full description of the API and all options, please see the LRUCache typedocs

Storage Bounds Safety

This implementation aims to be as flexible as possible, within the limits of safe memory consumption and optimal performance.

At initial object creation, storage is allocated for max items. If max is set to zero, then some performance is lost, and item count is unbounded. Either maxSize or ttl must be set if max is not specified.

If maxSize is set, then this creates a safe limit on the maximum storage consumed, but without the performance benefits of pre-allocation. When maxSize is set, every item must provide a size, either via the sizeCalculation method provided to the constructor, or via a size or sizeCalculation option provided to cache.set(). The size of every item must be a positive integer.

If neither max nor maxSize are set, then ttl tracking must be enabled. Note that, even when tracking item ttl, items are not preemptively deleted when they become stale, unless ttlAutopurge is enabled. Instead, they are only purged the next time the key is requested. Thus, if ttlAutopurge, max, and maxSize are all not set, then the cache will potentially grow unbounded.

In this case, a warning is printed to standard error. Future versions may require the use of ttlAutopurge if max and maxSize are not specified.
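
Putting those rules together, a cache is safely bounded when at least one of the following is configured. This is a sketch of the three options described above:

import { LRUCache } from 'lru-cache'

// 1) bounded by item count: storage is preallocated up front
const byCount = new LRUCache({ max: 1000 })

// 2) bounded by total calculated size: every entry needs a positive
//    integer size, here derived from string length
const bySize = new LRUCache({
  maxSize: 50_000,
  sizeCalculation: value => value.length,
})

// 3) bounded only by TTL: enable ttlAutopurge, or stale entries are only
//    purged when their keys are requested and the cache may grow unbounded
const byTTL = new LRUCache({ ttl: 60_000, ttlAutopurge: true })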

If you truly wish to use a cache that is bound only by TTL expiration, consider using a Map object, and calling setTimeout to delete entries when they expire. It will perform much better than an LRU cache.

Here is an implementation you may use, under the same license as this package:

// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(
      k,
      setTimeout(() => cache.delete(k), ttl)
    )
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  },
}
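
For example, the sketch above can be exercised like this:

cache.set('greeting', 'hello', 1000) // deleted one second after set
cache.get('greeting')                // 'hello'
setTimeout(() => {
  console.log(cache.has('greeting')) // false; the timer removed it
}, 1500)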

If that isn't to your liking, check out @isaacs/ttlcache.

Storing Undefined Values

This cache never stores undefined values, as undefined is used internally in a few places to indicate that a key is not in the cache.

You may call cache.set(key, undefined), but this is just an alias for cache.delete(key). Note that this has the effect that cache.has(key) will return false after setting it to undefined.

cache.set(myKey, undefined)
cache.has(myKey) // false!

If you need to track undefined values, and still note that the key is in the cache, an easy workaround is to use a sigil object of your own.

import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache({ max: 1000 }) // any safely bounded options
const mySet = (key, value) =>
  cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
  const v = cache.get(key)
  return v === undefinedValue ? undefined : v
}

Performance

As of January 2022, version 7 of this library is one of the most performant LRU cache implementations in JavaScript.

Benchmarks can be extremely difficult to get right. In particular, the performance of set/get/delete operations on objects will vary wildly depending on the type of key used. V8 is highly optimized for objects with keys that are short strings, especially integer numeric strings. Thus any benchmark which tests solely using numbers as keys will tend to find that an object-based approach performs the best.

Note that coercing anything to strings to use as object keys is unsafe, unless you can be 100% certain that no other type of value will be used. For example:

const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]

set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
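// both keys coerce to the string '[object Object]', so the second set()
// clobbers the first: get({}) now returns 'oopsie'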

Also beware of "Just So" stories regarding performance. Garbage collection of large (especially: deep) object graphs can be incredibly costly, with several "tipping points" where it increases exponentially. As a result, putting that off until later can make it much worse, and less predictable. If a library performs well, but only in a scenario where the object graph is kept shallow, then that won't help you if you are using large objects as keys.

In general, when attempting to use a library to improve performance (such as a cache like this one), it's best to choose an option that will perform well in the sorts of scenarios where you'll actually use it.

This library is optimized for repeated gets and minimizing eviction time, since that is the expected need of an LRU. Set operations are somewhat slower on average than a few other options, in part because of that optimization. It is assumed that you'll be caching some costly operation, ideally as rarely as possible, so optimizing set over get would be unwise.

If performance matters to you:

  1. If it's at all possible to use small integer values as keys, and you can guarantee that no other types of values will be used as keys, then do that, and use a cache such as lru-fast, or mnemonist's LRUCache which uses an Object as its data store.

  2. Failing that, if at all possible, use short non-numeric strings (ie, less than 256 characters) as your keys, and use mnemonist's LRUCache.

  3. If the types of your keys will be anything else, especially long strings, strings that look like floats, objects, or some mix of types, or if you aren't sure, then this library will work well for you.

    If you do not need the features that this library provides (like asynchronous fetching, a variety of TTL staleness options, and so on), then mnemonist's LRUMap is a very good option, and just slightly faster than this module (since it does considerably less).

  4. Do not use a dispose function, size tracking, or especially ttl behavior, unless absolutely needed. These features are convenient, and necessary in some use cases, and every attempt has been made to make the performance impact minimal, but it isn't nothing.

Breaking Changes in Version 7

This library changed to a different algorithm and internal data structure in version 7, yielding significantly better performance, albeit with some subtle changes as a result.

If you were relying on the internals of LRUCache in version 6 or before, it probably will not work in version 7 and above.

Breaking Changes in Version 8

  • The fetchContext option was renamed to context, and may no longer be set on the cache instance itself.
  • Rewritten in TypeScript, so pretty much all the types moved around a lot.
  • The AbortController/AbortSignal polyfill was removed. For this reason, Node version 16.14.0 or higher is now required.
  • Internal properties were moved to actual private class properties.
  • Keys and values must not be null or undefined.
  • Minified export available at 'lru-cache/min', for both CJS and MJS builds.

Breaking Changes in Version 9

  • Named export only, no default export.
  • AbortController polyfill returned, albeit with a warning when used.

Breaking Changes in Version 10

  • cache.fetch() return type is now Promise<V | undefined> instead of Promise<V | void>. This is an irrelevant change practically speaking, but can require changes for TypeScript users.

For more info, see the change log.