adm-zip, archiver, jszip, yazl, zip-lib, and zip-stream are JavaScript libraries for creating, reading, and manipulating ZIP archives. They differ significantly in their architecture: some are designed for in-browser use, others for Node.js; some support streaming for memory efficiency, while others load entire archives into memory; and not all support both creation and extraction. These differences make each library better suited for specific scenarios, such as serving dynamic ZIPs from a server, processing large files without memory spikes, or enabling ZIP manipulation directly in a web application.
When you need to create or extract ZIP files in a JavaScript environment — whether in Node.js or the browser — choosing the right library can make a big difference in performance, memory use, and code clarity. The six packages under review (adm-zip, archiver, jszip, yazl, zip-lib, and zip-stream) all handle ZIP operations but with very different design goals, APIs, and trade-offs. Let’s break down what each does well and where it falls short.
### adm-zip: Full-featured ZIP reader/writer for Node.js

adm-zip supports both reading and writing ZIP files, including extracting entries to disk and adding files from buffers or paths. It loads the entire archive into memory, which keeps the API simple but makes it unsuitable for large files.
```javascript
const AdmZip = require('adm-zip');

// Read and extract
const zip = new AdmZip('./archive.zip');
zip.extractAllTo('./output/');

// Add a file and write the result
zip.addFile('new.txt', Buffer.from('Hello'));
zip.writeZip('./updated.zip');
```
### archiver: High-level streaming ZIP (and TAR) creator

archiver is built on streams and excels at creating ZIPs incrementally without loading everything into memory. It supports compression, directory recursion, and piping to any writable stream (like HTTP responses). However, it cannot read or extract ZIPs.
```javascript
const fs = require('fs');
const archiver = require('archiver');

const archive = archiver('zip');
const output = fs.createWriteStream('archive.zip');

archive.pipe(output);
archive.file('file.txt', { name: 'file.txt' });
archive.finalize();
```
### jszip: Browser-first ZIP manipulation with limited Node support

jszip works well in browsers and can load, modify, and generate ZIPs from memory. In Node.js, it requires additional setup (e.g., using fs.promises to read files as buffers). It doesn't support streaming and holds everything in RAM.
```javascript
const fs = require('fs');
const JSZip = require('jszip');

// Load, add an entry, and write back (inside an async function)
const zip = await JSZip.loadAsync(fs.readFileSync('input.zip'));
zip.file('new.txt', 'Hello');
const buffer = await zip.generateAsync({ type: 'nodebuffer' });
fs.writeFileSync('output.zip', buffer);
```
### yazl: Low-level, streaming ZIP generator only

yazl (Yet Another Zip Library) is a minimal, streaming-only ZIP writer. It gives fine control over entry metadata and compression but provides no extraction capability. Ideal when you need predictable memory usage and full control over the ZIP structure.
```javascript
const fs = require('fs');
const yazl = require('yazl');

const zipfile = new yazl.ZipFile();
zipfile.addBuffer(Buffer.from('Hello'), 'hello.txt');
zipfile.outputStream.pipe(fs.createWriteStream('out.zip'));
zipfile.end(); // no more entries will be added
```
### zip-lib: Simple promise-based ZIP utility (Node.js only)

zip-lib wraps lower-level libraries to offer a clean promise API for basic tasks like compressing directories or decompressing archives. It's easy to use but lacks advanced features like streaming or fine-grained entry control.
```javascript
const zl = require('zip-lib');

// Compress a folder (inside an async function)
await zl.archiveFolder('./folder', './archive.zip');

// Extract
await zl.extract('./archive.zip', './output');
```
### zip-stream: Barebones streaming ZIP core (used by archiver)

zip-stream is the underlying engine that powers archiver's ZIP functionality. It's a transform stream that converts input entries into ZIP format. You typically won't use it directly unless you're building your own archiving tool.
```javascript
const fs = require('fs');
const ZipStream = require('zip-stream');

const zip = new ZipStream();
const output = fs.createWriteStream('out.zip');
zip.pipe(output);

// entry() takes a callback; queue the next entry (or finish) inside it
zip.entry('Hello', { name: 'hello.txt' }, (err) => {
  if (err) throw err;
  zip.finish();
});
```
### How they compare

- **Memory model:** archiver, yazl, and zip-stream use streams and scale to large datasets. adm-zip, jszip, and zip-lib load everything into memory — avoid them for files >100MB.
- **Extraction:** adm-zip, jszip, and zip-lib can extract ZIPs. If you need to read archives, rule out archiver, yazl, and zip-stream.
- **Environment:** jszip is designed for the browser. The others are Node.js-only (they rely on fs, stream, or native modules).

**Serving a dynamic ZIP from a server?** Use archiver — it streams directly to the HTTP response without buffering.
```javascript
const archiver = require('archiver');
// assumes an existing Express `app`

app.get('/download', (req, res) => {
  res.setHeader('Content-Type', 'application/zip');
  const archive = archiver('zip');
  archive.pipe(res);
  archive.file('report.pdf', { name: 'report.pdf' });
  archive.finalize();
});
```
**Working with ZIPs client-side?** Use jszip — it runs in the browser and handles in-memory archives cleanly.
```javascript
// In the browser, reading a ZIP from a File input
const arrayBuffer = await file.arrayBuffer();
const zip = await JSZip.loadAsync(arrayBuffer);
const text = await zip.file('readme.txt').async('text');
```
**Creating very large archives?** Use yazl or archiver — both stream and avoid memory spikes.
**Quick zipping in a script?** Use zip-lib — its promise API keeps your script concise.
### Maintenance status

As of 2024:

- adm-zip is actively maintained but has known security issues in older versions; always use the latest release.
- zip-lib appears minimally maintained but still functional for basic tasks.
- yazl and zip-stream are low-level tools best used indirectly via archiver unless you need their specific control.

### At a glance

| Package | Create ZIP | Extract ZIP | Streaming | Browser | Best For |
|---|---|---|---|---|---|
| adm-zip | ✅ | ✅ | ❌ | ❌ | Simple Node scripts needing full read/write |
| archiver | ✅ | ❌ | ✅ | ❌ | Server-side dynamic ZIP generation |
| jszip | ✅ | ✅ | ❌ | ✅ | Browser-based ZIP manipulation |
| yazl | ✅ | ❌ | ✅ | ❌ | Low-level, high-control ZIP writing |
| zip-lib | ✅ | ✅ | ❌ | ❌ | Quick CLI zipping/unzipping |
| zip-stream | ✅ | ❌ | ✅ | ❌ | Building custom archiving pipelines |
### Recommendations

- For most Node.js server work, reach for archiver — it's robust, streaming, and well-documented.
- In the browser, jszip is your only realistic choice.
- Avoid the in-memory libraries (adm-zip, jszip, zip-lib) when handling user-uploaded or large files.
- Don't use yazl or zip-stream directly unless you're replacing archiver for a specific reason (e.g., bundle size or custom logic).

Choose based on your environment (Node vs browser), data size (small vs large), and whether you need to read, write, or both.
Choose archiver when you need to generate ZIP files dynamically in Node.js with streaming support—ideal for web servers that must pipe ZIPs directly to HTTP responses without buffering everything in memory. Note that it cannot extract or read existing ZIPs.
Choose zip-stream only if you're building your own archiving pipeline and need the raw streaming ZIP engine that powers archiver. For almost all practical purposes, archiver is a more complete and user-friendly alternative.
Choose jszip for browser-based applications where users need to upload, inspect, or download ZIP files entirely client-side. It also works in Node.js but requires manual file I/O and isn't suitable for large files due to its in-memory model.
Choose adm-zip if you're working in a Node.js environment and need a straightforward, synchronous API for both reading and writing small ZIP files entirely in memory. Avoid it for large archives or streaming scenarios due to its memory footprint.
Choose yazl only if you require fine-grained, low-level control over ZIP creation in Node.js with streaming output and want to avoid higher-level abstractions. It’s a good fit for custom tooling but overkill for typical use cases already covered by archiver.
Choose zip-lib for simple command-line or scripting tasks in Node.js where you need a clean promise-based API to compress folders or extract archives quickly. It’s not suitable for streaming, large files, or browser environments.
### Appendix: archiver quick start

archiver describes itself as "a streaming interface for archive generation." Visit the API documentation for a list of all methods available.

```shell
npm install archiver --save
```
```javascript
// require modules
const fs = require('fs');
const archiver = require('archiver');

// create a file to stream archive data to.
const output = fs.createWriteStream(__dirname + '/example.zip');
const archive = archiver('zip', {
  zlib: { level: 9 } // Sets the compression level.
});

// listen for all archive data to be written
// 'close' event is fired only when a file descriptor is involved
output.on('close', function() {
  console.log(archive.pointer() + ' total bytes');
  console.log('archiver has been finalized and the output file descriptor has closed.');
});

// This event is fired when the data source is drained no matter what was the data source.
// It is not part of this library but rather from the NodeJS Stream API.
// @see: https://nodejs.org/api/stream.html#stream_event_end
output.on('end', function() {
  console.log('Data has been drained');
});

// good practice to catch warnings (ie stat failures and other non-blocking errors)
archive.on('warning', function(err) {
  if (err.code === 'ENOENT') {
    // log warning
  } else {
    // throw error
    throw err;
  }
});

// good practice to catch this error explicitly
archive.on('error', function(err) {
  throw err;
});

// pipe archive data to the file
archive.pipe(output);

// append a file from stream
const file1 = __dirname + '/file1.txt';
archive.append(fs.createReadStream(file1), { name: 'file1.txt' });

// append a file from string
archive.append('string cheese!', { name: 'file2.txt' });

// append a file from buffer
const buffer3 = Buffer.from('buff it!');
archive.append(buffer3, { name: 'file3.txt' });

// append a file
archive.file('file1.txt', { name: 'file4.txt' });

// append files from a sub-directory and naming it `new-subdir` within the archive
archive.directory('subdir/', 'new-subdir');

// append files from a sub-directory, putting its contents at the root of archive
archive.directory('subdir/', false);

// append files from a glob pattern
archive.glob('file*.txt', { cwd: __dirname });

// finalize the archive (ie we are done appending files but streams have to finish yet)
// 'close', 'end' or 'finish' may be fired right after calling this method so register to them beforehand
archive.finalize();
```
Archiver ships with out-of-the-box support for TAR and ZIP archives. You can register additional formats with registerFormat, and check whether a format is already registered with isRegisteredFormat.