lzutf8 vs compression vs lz-string vs lz4 vs pako
Client-Side and Server-Side Data Compression Libraries in JavaScript

compression, lz-string, lz4, lzutf8, and pako are JavaScript libraries focused on data compression, but they serve very different contexts and use cases. compression is an Express middleware for Node.js servers that automatically compresses HTTP responses using gzip or deflate. The other four are client-side compatible libraries: lz-string specializes in compressing JavaScript strings into compact representations suitable for localStorage or URL parameters; lz4 provides extremely fast compression/decompression of binary data using the LZ4 algorithm; lzutf8 offers UTF-8-aware compression with support for multiple output encodings; and pako is a full zlib reimplementation that supports deflate, inflate, gzip, and gunzip operations, making it ideal for interoperability with server-side compressed data. Each library targets specific data types (strings vs. binary), performance profiles (speed vs. ratio), and compatibility requirements.


Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
| --- | --- | --- | --- | --- | --- | --- |
| lzutf8 | 122,387 | 328 | 149 kB | 15 | - | MIT |
| compression | 0 | 2,802 | 27.7 kB | 29 | 8 months ago | MIT |
| lz-string | 0 | 4,397 | 176 kB | 55 | 3 years ago | MIT |
| lz4 | 0 | 443 | - | 41 | 5 years ago | MIT |
| pako | 0 | 6,055 | 1.64 MB | 27 | 3 years ago | (MIT AND Zlib) |

Compression in the Browser: A Practical Guide to lz-string, lz4, lzutf8, pako, and compression

When you need to shrink data before sending it over the network or storing it locally in the browser, choosing the right compression library matters. The packages compression, lz-string, lz4, lzutf8, and pako all offer ways to compress and decompress data—but they target different use cases, algorithms, and environments. Let’s cut through the noise and see how they really stack up for frontend developers.

⚠️ First Things First: Is compression Even for the Browser?

The compression package is not a client-side library—it’s a Node.js middleware designed for Express apps. It automatically compresses HTTP responses (like HTML, JSON, or JS bundles) using gzip or deflate based on the client’s Accept-Encoding header.

// Node.js only – never use this in frontend code
const compression = require('compression');
const express = require('express');

const app = express();
app.use(compression()); // Compresses outgoing responses

If you’re building a frontend app and import compression, it won’t work in the browser—it relies on Node.js streams and HTTP modules. Do not use compression in client-side code. For frontend compression, stick with the other four packages.

🔤 String vs Binary: What Kind of Data Are You Compressing?

This is the biggest decision point. Some libraries work best with strings (like JSON or user input), while others handle raw binary data (like ArrayBuffer or Uint8Array). Mixing them up leads to bugs or bloated output.
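As a quick sketch of the bridge between the two worlds: the standard TextEncoder/TextDecoder APIs convert between JavaScript strings and UTF-8 bytes, and are the safe way to feed string data into a binary compressor.

```javascript
// Convert a string to UTF-8 bytes before handing it to a binary compressor,
// and back to a string after decompression.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

const json = JSON.stringify({ user: 'alice' });
const bytes = encoder.encode(json);      // Uint8Array of UTF-8 bytes
const roundTrip = decoder.decode(bytes); // identical to the original string
```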

lz-string: Built for JavaScript Strings

lz-string is purpose-built for compressing JavaScript strings (UTF-16). Depending on the method used, it outputs a compact UTF-16 string, a Base64 or URI-safe string, or a Uint8Array. It’s simple, widely used, and works great for localStorage or URL parameters.

import { compress, decompress } from 'lz-string';

const original = '{"user":"alice","prefs":{"theme":"dark"}}';
const compressed = compress(original); // Compact UTF-16 string (use compressToUTF16 for localStorage safety)
const restored = decompress(compressed);
console.assert(restored === original);

It also offers compressToBase64() and compressToEncodedURIComponent() for safe embedding in URLs or cookies.
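The reason a dedicated URI-safe method matters: percent-escaping a plain Base64 string inflates the characters `+`, `/`, and `=` to three characters each, which compressToEncodedURIComponent avoids by using a URL-safe alphabet. A quick demonstration with the standard encodeURIComponent:

```javascript
// Percent-escaping a Base64 string grows it: '+', '/', and '=' each become
// three characters, eating into the compression gains.
const b64 = 'a+b/c=';
const escaped = encodeURIComponent(b64); // "a%2Bb%2Fc%3D" – 6 chars became 12
```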

lzutf8: UTF-8 Aware Compression

Unlike lz-string, lzutf8 treats input as UTF-8 encoded text, which aligns with how data is typically serialized in transit (e.g., the output of JSON.stringify() is encoded as UTF-8 when sent over HTTP). It can compress strings or Uint8Array input and supports multiple output formats.

import * as lzutf8 from 'lzutf8';

const original = '{"score":987654321}';
const compressed = lzutf8.compress(original, { outputEncoding: 'ByteArray' }); // Uint8Array
const restored = lzutf8.decompress(compressed, { inputEncoding: 'ByteArray', outputEncoding: 'String' });

This makes it more interoperable with backend systems that expect standard UTF-8 byte streams.

lz4: Speed Over Ratio

lz4 implements the LZ4 algorithm, which prioritizes extremely fast compression and decompression at the cost of lower compression ratios. It works exclusively with binary data (Buffer in Node.js, Uint8Array in browsers).

import lz4 from 'lz4'; // the npm "lz4" package exposes encode()/decode(), not compress()/decompress()

const original = Buffer.from('{"fast":true}', 'utf8'); // lz4 operates on Buffers
const compressed = lz4.encode(original);
const restored = lz4.decode(compressed).toString('utf8');

Best when you need to compress large volumes of data quickly (e.g., real-time logs or game state snapshots) and bandwidth isn’t your main bottleneck.

pako: Full zlib Compatibility

pako is a pure-JavaScript reimplementation of zlib, supporting both deflate/inflate and gzip/gunzip. It’s ideal when you need compatibility with server-side gzip or existing zlib-compressed data.

import { deflate, inflate } from 'pako';

const original = new TextEncoder().encode('{"zlib":true}');
const compressed = deflate(original); // Uint8Array
const restored = new TextDecoder().decode(inflate(compressed));

You can also pass strings directly — pako encodes string input as UTF-8, and inflate can decode back to a string:

const compressed2 = deflate('{"hello":"world"}'); // Uint8Array
const restoredStr = inflate(compressed2, { to: 'string' });

Use pako if your backend sends gzip-compressed payloads you need to decompress in-browser, or if you must produce zlib-compatible output.

📊 Algorithm Comparison: Speed, Ratio, and Use Case Fit

| Package | Algorithm | Best Input Type | Output Type | Browser Safe? | Typical Use Case |
| --- | --- | --- | --- | --- | --- |
| compression | gzip/deflate | HTTP response body | Compressed stream | ❌ (Node-only) | Server middleware |
| lz-string | LZ-based | JavaScript string | String / Uint8Array | ✅ | localStorage, URL params |
| lzutf8 | LZ-based | UTF-8 string | Uint8Array / Base64 | ✅ | Cross-platform text sync |
| lz4 | LZ4 | Uint8Array | Uint8Array | ✅ | High-speed binary compression |
| pako | zlib (deflate) | Uint8Array / string | Uint8Array / string | ✅ | Gzip/zlib compatibility |

🧪 Real-World Scenarios

Scenario 1: Saving User Preferences to localStorage

You’ve got a settings object you want to persist without hitting storage limits.

  • Best choice: lz-string
  • Why? Simple string-in/string-out API, minimal setup.
localStorage.setItem('prefs', LZString.compress(JSON.stringify(prefs)));
const prefs = JSON.parse(LZString.decompress(localStorage.getItem('prefs')));

Scenario 2: Syncing Large Text Documents Between Client and Server

Your app edits markdown files and needs efficient sync with a Python backend that uses zlib.

  • Best choice: pako
  • Why? Ensures compatibility with server-side deflate.
// Client
const payload = pako.deflate(JSON.stringify(doc)); // zlib-format Uint8Array
fetch('/save', { method: 'POST', body: payload });

// Server (Python)
# data = zlib.decompress(request.body)

Scenario 3: Compressing Real-Time Game State for WebSockets

You’re sending player positions 30 times/sec and need minimal CPU overhead.

  • Best choice: lz4
  • Why? Fastest round-trip time, even if payload is larger.
const stateBytes = encodeGameState(); // your serializer, returning a Buffer
const compressed = lz4.encode(stateBytes);
socket.send(compressed);

Scenario 4: Embedding Compressed Data in a URL

You want to share a pre-filled form via a link.

  • Best choice: lz-string with compressToEncodedURIComponent
  • Why? Produces URL-safe strings out of the box.
const url = `/editor?data=${LZString.compressToEncodedURIComponent(JSON.stringify(form))}`;

🛑 Common Pitfalls to Avoid

  • Don’t use compression in the browser. It’s a server-only Express middleware.
  • Don’t mix string and binary APIs. lz4 and pako’s binary mode expect Buffer/Uint8Array input—encode strings to UTF-8 bytes yourself (e.g. with TextEncoder) instead of relying on implicit conversion.
  • Avoid double-encoding. If your server already gzips responses (which it should), compressing JSON again with pako usually wastes CPU for little gain.
  • Test decompression edge cases. Always verify that compressed data can be fully restored—corruption risks increase with aggressive compression.

💡 Final Recommendation

  • Need simple string compression for client-side storage? → lz-string
  • Need UTF-8 fidelity and cross-platform compatibility? → lzutf8
  • Need maximum speed for binary data? → lz4
  • Need zlib/gzip compatibility? → pako
  • Building a Node.js server? → compression (but never in frontend code)

Choose based on your data type, performance constraints, and interoperability needs—not just compression ratio. In most frontend scenarios, lz-string or pako will cover 90% of use cases cleanly and reliably.

How to Choose: lzutf8 vs compression vs lz-string vs lz4 vs pako

  • lzutf8:

    Choose lzutf8 when you require strict UTF-8 encoding compliance and need to interoperate with systems that expect standard UTF-8 byte streams. It supports multiple output formats (Base64, ByteArray) and handles both string and binary inputs gracefully.

  • compression:

    Choose compression only if you're building a Node.js Express server and need automatic HTTP response compression (gzip/deflate) based on client capabilities. Never use it in frontend/browser code—it relies on Node.js-specific APIs and will not work in the browser.

  • lz-string:

    Choose lz-string when you need to compress JavaScript strings (like JSON) for client-side storage in localStorage, sessionStorage, or URL parameters. It's simple, battle-tested, and provides convenient methods like compressToEncodedURIComponent() for safe URL embedding.

  • lz4:

    Choose lz4 when you need maximum compression/decompression speed for binary data (e.g., real-time game state, telemetry) and can accept lower compression ratios. It works exclusively with Uint8Array and is ideal when CPU time is more constrained than bandwidth.

  • pako:

    Choose pako when you need zlib/gzip compatibility—either to decompress server-sent gzip payloads in the browser or to produce deflate-compressed data that backends can natively handle. It supports both string and binary modes and is the go-to for cross-platform zlib interoperability.

README for lzutf8

LZ-UTF8


Note: this library is significantly out-of-date and would require a full rewrite to catch up with recent technologies like JS modules and WebAssembly, and for better compatibility with modern frameworks like Angular and React. The design and documentation were mostly written in 2014, before any of these were relevant. Unfortunately, the library is no longer maintained, and a rewrite is unlikely to take place in the foreseeable future.

LZ-UTF8 is a string compression library and format. It is an extension to the UTF-8 character encoding, augmenting the UTF-8 bytestream with optional compression based on the LZ77 algorithm. Some of its properties:

  • Compresses strings only. Doesn't support arbitrary byte sequences.
  • Strongly optimized for speed, both in the choice of algorithm and its implementation. Approximate measurements using a low-end desktop and 1MB strings: 3-14MB/s compression, 20-120MB/s decompression (detailed benchmarks and comparisons to other JavaScript libraries can be found in the technical paper). Due to the focus on time efficiency, the resulting compression ratio can be significantly lower than that of more size-efficient algorithms like LZW + entropy coding.
  • Byte-level superset of UTF-8. Any valid UTF-8 bytestream is also a valid LZ-UTF8 stream (but not vice versa). This special property allows both compressed and plain UTF-8 streams to be freely concatenated and decompressed as a single unit (or with any arbitrary partitioning). Some possible applications:
    • Sending static pre-compressed data followed by dynamically generated uncompressed data from a server (and possibly appending a compressed static "footer", or repeating the process several times).
    • Appending both uncompressed/compressed data to a compressed log file/journal without needing to rewrite it.
    • Joining multiple source files, where some are possibly pre-compressed, and serving them as a single concatenated file without additional processing.
  • Patent free (all relevant patents have long expired).

Javascript implementation:

  • Tested on most popular browsers and platforms: Node.js 4+, Chrome, Firefox, Opera, Edge, IE10+ (IE8 and IE9 may work with a typed array polyfill), Android 4+, Safari 5+.
  • Allows compressed data to be efficiently packed in plain Javascript UTF-16 strings (see the "BinaryString" encoding described later in this document) when binary storage is not available or desired (e.g. when using LocalStorage or older IndexedDB).
  • Can operate asynchronously, both in Node.js and in the browser. Uses web workers when available (and takes advantage of transferable objects if supported) and falls back to async iterations when not.
  • Supports Node.js streams.
  • Written in TypeScript.


Getting started

Node.js:

npm install lzutf8
var LZUTF8 = require('lzutf8');

Browser:

<script id="lzutf8" src="https://cdn.jsdelivr.net/npm/lzutf8/build/production/lzutf8.js"></script>

or the minified version:

<script id="lzutf8" src="https://cdn.jsdelivr.net/npm/lzutf8/build/production/lzutf8.min.js"></script>

To reference a particular version, use the following pattern, where x.x.x should be replaced with the exact version number (e.g. 0.4.6):

<script id="lzutf8" src="https://unpkg.com/lzutf8@x.x.x/build/production/lzutf8.min.js"></script>

note: the id attribute and its exact value are necessary for the library to make use of web workers.

Type Identifier Strings

"ByteArray" - An array of bytes. As of 0.3.2, always a Uint8Array. In versions up to 0.2.3 the type was determined by the platform (Array for browsers that don't support typed arrays, Uint8Array for supporting browsers and Buffer for Node.js).

IE8/9 support was dropped in 0.3.0, though these browsers can still be used with a typed array polyfill.

"Buffer" - A Node.js Buffer object.

"StorageBinaryString" - A string containing compacted binary data encoded to fit in valid UTF-16 strings. Note that the older, deprecated "BinaryString" encoding is still supported internally but has been removed from this document. More details are included further in this document.

"Base64" - A base 64 string.

Core Methods

LZUTF8.compress(..)

var output = LZUTF8.compress(input, [options]);

Compresses the given input data.

input can be either a String or UTF-8 bytes stored in a Uint8Array or Buffer

options (optional): an object that may have any of the properties:

  • outputEncoding: "ByteArray" (default), "Buffer", "StorageBinaryString" or "Base64"

returns: compressed data encoded as specified by outputEncoding, or as ByteArray if not specified.

LZUTF8.decompress(..)

var output = LZUTF8.decompress(input, [options]);

Decompresses the given compressed data.

input: can be either a Uint8Array, Buffer or String (where encoding scheme is then specified in inputEncoding)

options (optional): an object that may have the properties:

  • inputEncoding: "ByteArray" (default), "StorageBinaryString" or "Base64"
  • outputEncoding: "String" (default), "ByteArray" or "Buffer" to return UTF-8 bytes

returns: decompressed data encoded as specified by outputEncoding, or as String if not specified.

Asynchronous Methods

LZUTF8.compressAsync(..)

LZUTF8.compressAsync(input, [options], callback);

Asynchronously compresses the given input data.

input can be either a String, or UTF-8 bytes stored in an Uint8Array or Buffer.

options (optional): an object that may have any of the properties:

  • outputEncoding: "ByteArray" (default), "Buffer", "StorageBinaryString" or "Base64"
  • useWebWorker: true (default) would use a web worker if available. false would use iterated yielding instead.

callback: a user-defined callback function accepting a first argument containing the resulting compressed data as specified by outputEncoding (or ByteArray if not specified) and a possible second parameter containing an Error object.

On error: invokes the callback with a first argument of undefined and a second one containing the Error object.

Example:

LZUTF8.compressAsync(input, {outputEncoding: "StorageBinaryString"}, function (result, error) {
    if (error === undefined)
        console.log("Data successfully compressed and encoded to " + result.length + " characters");
    else
        console.log("Compression error: " + error.message);
});

LZUTF8.decompressAsync(..)

LZUTF8.decompressAsync(input, [options], callback);

Asynchronously decompresses the given compressed input.

input: can be either a Uint8Array, Buffer or String (where encoding is set with inputEncoding).

options (optional): an object that may have the properties:

  • inputEncoding: "ByteArray" (default), "StorageBinaryString" or "Base64"
  • outputEncoding: "String" (default), "ByteArray" or "Buffer" to return UTF-8 bytes.
  • useWebWorker: true (default) would use a web worker if available. false would use incremental yielding instead.

callback: a user-defined callback function accepting a first argument containing the resulting decompressed data as specified by outputEncoding and a possible second parameter containing an Error object.

On error: invokes the callback with a first argument of undefined and a second one containing the Error object.

Example:

LZUTF8.decompressAsync(input, {inputEncoding: "StorageBinaryString", outputEncoding: "ByteArray"}, function (result, error) {
    if (error === undefined)
        console.log("Data successfully decompressed to " + result.length + " UTF-8 bytes");
    else
        console.log("Decompression error: " + error.message);
});

General notes on async operations

Web workers are available if supported by the browser and the library's script source is referenced in the document with a script tag having id of "lzutf8" (its src attribute is then used as the source URI for the web worker). In cases where a script tag is not available (such as when the script is dynamically loaded or bundled with other scripts) the value of LZUTF8.WebWorker.scriptURI may alternatively be set before the first async method call.

Workers are optimized for various input and output encoding schemes, so only the minimal amount of work is done in the main Javascript thread. Internally, conversion to or from various encodings is performed within the worker itself, reducing delays and allowing greater parallelization. Additionally, if transferable objects are supported by the browser, binary arrays will be transferred virtually instantly to and from the worker.

Only one worker instance is spawned per page - multiple operations are processed sequentially.

In case a worker is not available (such as in Node.js, IE8, IE9, Android browser < 4.4) or not desired, the library will iteratively process 64KB blocks, yielding to the event loop whenever a 20ms interval has elapsed. Note: in this execution method, parallel operations are not guaranteed to complete in their initiation order.

Lower-level Methods

LZUTF8.Compressor

var compressor = new LZUTF8.Compressor();

Creates a compressor object. Can be used to incrementally compress a multi-part stream of data.

returns: a new LZUTF8.Compressor object

LZUTF8.Compressor.compressBlock(..)

var compressor = new LZUTF8.Compressor();
var compressedBlock = compressor.compressBlock(input);

Compresses the given input UTF-8 block.

input can be either a String, or UTF-8 bytes stored in a Uint8Array or Buffer

returns: compressed bytes as ByteArray

This can be used to incrementally create a single compressed stream. For example:

var compressor = new LZUTF8.Compressor();
var compressedBlock1 = compressor.compressBlock(block1);
var compressedBlock2 = compressor.compressBlock(block2);
var compressedBlock3 = compressor.compressBlock(block3);
..

LZUTF8.Decompressor

var decompressor = new LZUTF8.Decompressor();

Creates a decompressor object. Can be used to incrementally decompress a multi-part stream of data.

returns: a new LZUTF8.Decompressor object

LZUTF8.Decompressor.decompressBlock(..)

var decompressor = new LZUTF8.Decompressor();
var decompressedBlock = decompressor.decompressBlock(input);

Decompresses the given block of compressed bytes.

input can be either a Uint8Array or Buffer

returns: decompressed UTF-8 bytes as ByteArray

Remarks: will always return the longest valid UTF-8 stream of bytes possible from the given input block. Incomplete input or output byte sequences will be prepended to the next block.
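This mirrors how streaming UTF-8 decoding works in general: a multi-byte character split across two chunks must be held back until its remaining bytes arrive, which is what the standard TextDecoder does in streaming mode.

```javascript
// A 2-byte character ('é') split across chunks is withheld by the streaming
// decoder until the second chunk completes it.
const decoder = new TextDecoder('utf-8');
const bytes = new TextEncoder().encode('héllo');
const part1 = bytes.slice(0, 2); // 'h' plus the first byte of 'é'
const part2 = bytes.slice(2);

const text = decoder.decode(part1, { stream: true }) // yields only "h"
           + decoder.decode(part2);                  // yields "éllo"
```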

Note: This can be used to incrementally decompress a single compressed stream. For example:

var decompressor = new LZUTF8.Decompressor();
var decompressedBlock1 = decompressor.decompressBlock(block1);
var decompressedBlock2 = decompressor.decompressBlock(block2);
var decompressedBlock3 = decompressor.decompressBlock(block3);
..

LZUTF8.Decompressor.decompressBlockToString(..)

var decompressor = new LZUTF8.Decompressor();
var decompressedBlockAsString = decompressor.decompressBlockToString(input);

Decompresses the given block of compressed bytes and converts the result to a String.

input can be either a Uint8Array or Buffer

returns: decompressed String

Remarks: will always return the longest valid string possible from the given input block. Incomplete input or output byte sequences will be prepended to the next block.

Node.js only methods

LZUTF8.createCompressionStream()

var compressionStream = LZUTF8.createCompressionStream();

Creates a compression stream. The stream will accept both Buffers and Strings in any encoding supported by Node.js (e.g. utf8, utf16le, ucs2, base64, hex, binary, etc.) and return Buffers.

example:

var sourceReadStream = fs.createReadStream("content.txt");
var destWriteStream = fs.createWriteStream("content.txt.lzutf8");
var compressionStream = LZUTF8.createCompressionStream();

sourceReadStream.pipe(compressionStream).pipe(destWriteStream);

On error: emits an error event with the Error object as parameter.

LZUTF8.createDecompressionStream()

var decompressionStream = LZUTF8.createDecompressionStream();

Creates a decompression stream. The stream will accept and return Buffers.

On error: emits an error event with the Error object as parameter.

Character encoding methods

LZUTF8.encodeUTF8(..)

var output = LZUTF8.encodeUTF8(input);

Encodes a string to UTF-8.

input as String

returns: encoded bytes as ByteArray

LZUTF8.decodeUTF8(..)

var outputString = LZUTF8.decodeUTF8(input);

Decodes UTF-8 bytes to a String.

input as either a Uint8Array or Buffer

returns: decoded bytes as String

LZUTF8.encodeBase64(..)

var outputString = LZUTF8.encodeBase64(bytes);

Encodes bytes to a Base64 string.

input as either a Uint8Array or Buffer

returns: resulting Base64 string.

remarks: Maps every 3 consecutive input bytes to 4 output characters of the set A-Z,a-z,0-9,+,/ (a total of 64 characters). Increases stored byte size to 133.33% of original (when stored as ASCII or UTF-8) or 266% (stored as UTF-16).
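The 133.33% figure follows directly from the 3-bytes-to-4-characters mapping, which is easy to verify with Node's built-in Buffer:

```javascript
// Base64 maps each 3-byte group to 4 characters, a 4/3 (≈133.33%) expansion.
const three = Buffer.from([0x4c, 0x5a, 0x55]); // 3 bytes
const fourChars = three.toString('base64');    // "TFpV" – 4 characters

const thirty = Buffer.alloc(30);
const ratio = thirty.toString('base64').length / thirty.length; // ≈ 1.333
```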

LZUTF8.decodeBase64(..)

var output = LZUTF8.decodeBase64(input);

Decodes a Base64 string to bytes.

input as String

returns: decoded bytes as ByteArray

remarks: the decoder cannot decode concatenated Base64 strings. Although it is possible to add this capability to the JS version, compatibility with other decoders (such as the Node.js decoder) prevents this feature from being added.

LZUTF8.encodeStorageBinaryString(..)

Note: the older BinaryString encoding has been deprecated due to a compatibility issue with the IE browser's LocalStorage/SessionStorage implementation. This newer version works around that issue by avoiding the 0 codepoint.

var outputString = LZUTF8.encodeStorageBinaryString(input);

Encodes binary bytes to a valid UTF-16 string.

input as either a Uint8Array or Buffer

returns: String

remarks: To comply with the UTF-16 standard, it only uses the bottom 15 bits of each character, effectively mapping every 15 input bits to a single 16-bit output character. This increases the stored byte size to 106.66% of the original.
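To illustrate the arithmetic, here is a sketch of the 15-bit packing idea only — not the actual LZ-UTF8 StorageBinaryString wire format, which additionally avoids the 0 codepoint and appends end markers:

```javascript
// Pack every 15 input bits into one 16-bit code unit: 16/15 ≈ 106.67% expansion.
function packTo15Bit(bytes) {
  let acc = 0, accBits = 0;
  const codeUnits = [];
  for (const b of bytes) {
    acc = (acc << 8) | b;
    accBits += 8;
    if (accBits >= 15) {
      accBits -= 15;
      codeUnits.push((acc >>> accBits) & 0x7fff);
    }
  }
  if (accBits > 0) codeUnits.push((acc << (15 - accBits)) & 0x7fff); // pad last unit
  return String.fromCharCode(...codeUnits);
}

function unpackFrom15Bit(str, byteLength) {
  let acc = 0, accBits = 0, i = 0;
  const bytes = new Uint8Array(byteLength);
  for (let j = 0; j < str.length; j++) {
    acc = (acc << 15) | str.charCodeAt(j);
    accBits += 15;
    while (accBits >= 8 && i < byteLength) {
      accBits -= 8;
      bytes[i++] = (acc >>> accBits) & 0xff;
    }
  }
  return bytes;
}

// 30 bytes = 240 bits -> 16 code units -> 32 bytes as UTF-16 (106.67%)
```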

LZUTF8.decodeStorageBinaryString(..)

Note: the older BinaryString encoding has been deprecated due to a compatibility issue with the IE browser's LocalStorage/SessionStorage implementation. This newer version works around that issue by avoiding the 0 codepoint.

var output = LZUTF8.decodeStorageBinaryString(input);

Decodes a binary string.

input as String

returns: decoded bytes as ByteArray

remarks: Multiple binary strings may be freely concatenated and decoded as a single string. This is made possible by ending every sequence with a special marker (char code 32768 for an even-length sequence and 32769 for an odd-length sequence).

Release history

  • 0.1.x: Initial release.
  • 0.2.x: Added async error handling. Added support for TextEncoder and TextDecoder when available.
  • 0.3.x: Removed support to IE8/9. Removed support for plain Array inputs. All "ByteArray" outputs are now Uint8Array objects. A separate "Buffer" encoding setting can be used to return Buffer objects.
  • 0.4.x: Major code restructuring. Removed support for versions of Node.js prior to 4.0.
  • 0.5.x: Added the "StorageBinaryString" encoding.

License

Copyright (c) 2014-2018, Rotem Dan <rotemdan@gmail.com>.

Source code and documentation are available under the MIT license.