csv-parser, csv-writer, fast-csv, and papaparse are npm packages for handling CSV (Comma-Separated Values) data in JavaScript applications. csv-parser focuses exclusively on parsing CSV data in Node.js using streams. csv-writer provides CSV writing capabilities for Node.js, converting JavaScript objects or arrays into properly formatted CSV output. fast-csv offers a comprehensive solution for both parsing and writing CSV in Node.js with strong streaming support. papaparse is a versatile library that works in both browser and Node.js environments, supporting CSV parsing from files or strings and generating CSV output, with features tailored for frontend use cases like file uploads and chunked processing.
When building web or Node.js applications that handle tabular data, you’ll often need to read from or write to CSV files. The four packages — csv-parser, csv-writer, fast-csv, and papaparse — each solve parts of this problem, but with different scopes, environments, and design philosophies. Let’s compare them in real-world engineering terms.
csv-parser is a Node.js-only stream-based parser. It reads CSV from a readable stream (like a file) and emits JavaScript objects row by row. It does not support writing CSV, and it cannot run in the browser.
// csv-parser: Node.js stream parsing
const csv = require('csv-parser');
const fs = require('fs');
fs.createReadStream('data.csv')
.pipe(csv())
.on('data', (row) => console.log(row))
.on('end', () => console.log('Finished'));
csv-writer is a Node.js-only writer. It takes arrays or objects and writes them to a CSV file. It has no parsing capability, and like csv-parser, it only works in Node.js.
// csv-writer: Node.js writing
const createCsvWriter = require('csv-writer').createObjectCsvWriter;
const csvWriter = createCsvWriter({
path: 'out.csv',
header: [{id: 'name', title: 'NAME'}, {id: 'age', title: 'AGE'}]
});
// writeRecords returns a promise; await it inside an async function,
// or chain .then() as here (CommonJS has no top-level await)
csvWriter.writeRecords([{name: 'Alice', age: 30}])
  .then(() => console.log('CSV written'));
fast-csv supports both parsing and writing, and works in Node.js only. It uses streams for memory efficiency and offers extensive formatting and parsing options.
// fast-csv: parse
const fs = require('fs');
const csv = require('fast-csv');
fs.createReadStream('in.csv')
.pipe(csv.parse({ headers: true }))
.on('data', console.log);
// fast-csv: write
const ws = fs.createWriteStream('out.csv');
csv.write([{ name: 'Bob', age: 25 }], { headers: true }).pipe(ws);
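Beyond the manual stream pipeline above, fast-csv also ships convenience helpers; here is a minimal sketch using its documented writeToPath helper (the file name and data are illustrative):
// fast-csv: write rows straight to a file path, skipping manual stream setup
const csv = require('fast-csv');
csv.writeToPath('out.csv', [{ name: 'Bob', age: 25 }], { headers: true })
  .on('error', (err) => console.error(err))
  .on('finish', () => console.log('Done writing'));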
papaparse is the only package that works in both browser and Node.js. It can parse CSV strings or files (including File objects from <input>), and optionally generate CSV strings (but not write directly to disk). It’s designed first and foremost for frontend use.
// papaparse: browser parsing
Papa.parse(fileInput.files[0], {
header: true,
complete: (results) => console.log(results.data)
});
// papaparse: string-to-CSV
const csvString = Papa.unparse([
{ name: 'Charlie', age: 40 }
]);
This is the biggest architectural split:

- Browser + Node.js: papaparse.
- Node.js only: csv-parser, csv-writer, and fast-csv.

If your app runs in the browser (e.g., uploading a CSV and previewing it), you must use papaparse. None of the others will work; they rely on Node.js streams or fs.
In Node.js, all four can technically be used, but note that csv-parser and csv-writer are single-purpose: one parses, the other writes. You’d need both if you’re doing round-trip processing.
Streaming (memory-efficient):
- csv-parser: streams only.
- fast-csv: streams for both read and write.

In-memory (simpler API):

- papaparse: loads entire input into memory (unless using worker mode in the browser).
- csv-writer: writes records in batches but doesn't expose a streaming interface.

For large files (>100 MB), streaming is essential to avoid crashing your process or browser tab. In Node.js, prefer fast-csv or the csv-parser + csv-writer combo for large datasets. In the browser, papaparse supports chunked parsing to mitigate memory pressure:
// papaparse: chunked parsing for large files
Papa.parse(file, {
header: true,
chunk: (results) => {
// Process a batch of rows
console.log(results.data);
},
complete: () => console.log('Done')
});
All packages support common CSV dialects (quotes, delimiters, escaping), but their APIs differ.
Custom delimiter example:
// csv-parser
fs.createReadStream('data.tsv').pipe(csv({ separator: '\t' }));
// csv-writer
createObjectCsvWriter({
path: 'out.tsv',
fieldDelimiter: '\t',
header: [...]
});
// fast-csv
csv.parse({ delimiter: '\t', headers: true });
// papaparse
Papa.parse(input, { delimiter: '\t', header: true });
Header handling:

- fast-csv and papaparse allow renaming or transforming headers during parse/write (see the sketch after this list).
- csv-writer requires explicit header definition when writing objects.
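As a brief illustration of header transforms, here is a minimal sketch; it assumes fast-csv's headers option accepting a transform function and papaparse's transformHeader callback (both documented options; the lowercasing is illustrative):
// fast-csv: normalize headers while parsing
csv.parse({ headers: (headers) => headers.map((h) => h.toLowerCase()) });
// papaparse: transform each header name during parse
Papa.parse(input, {
  header: true,
  transformHeader: (h) => h.toLowerCase()
});
Suppose you need to read a CSV, uppercase all names, and write it back.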
In Node.js with fast-csv (most efficient):
const fs = require('fs');
const csv = require('fast-csv');
fs.createReadStream('in.csv')
.pipe(csv.parse({ headers: true }))
.transform(row => ({ ...row, name: row.name.toUpperCase() }))
.pipe(csv.format({ headers: true }))
.pipe(fs.createWriteStream('out.csv'));
In Node.js with csv-parser + csv-writer:
const csvParser = require('csv-parser');
const createCsvWriter = require('csv-writer').createObjectCsvWriter;
const fs = require('fs');
const results = [];
fs.createReadStream('in.csv')
.pipe(csvParser())
.on('data', row => results.push({ ...row, name: row.name.toUpperCase() }))
.on('end', async () => {
const writer = createCsvWriter({
path: 'out.csv',
header: Object.keys(results[0]).map(k => ({ id: k, title: k.toUpperCase() }))
});
await writer.writeRecords(results);
});
In browser with papaparse:
// Parse uploaded file
Papa.parse(file, {
header: true,
complete: (results) => {
const transformed = results.data.map(r => ({
...r,
name: r.name.toUpperCase()
}));
const csvString = Papa.unparse(transformed);
// Trigger download
const blob = new Blob([csvString], { type: 'text/csv' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'out.csv';
a.click();
}
});
As of the latest official sources:
- csv-parser: actively maintained. No deprecation notice.
- csv-writer: actively maintained. No deprecation notice.
- fast-csv: actively maintained. Regular releases.
- papaparse: actively maintained. Widely used in frontend projects.

None are deprecated. All are safe for production use in their intended environments.
| Package | Parse? | Write? | Browser? | Node.js? | Streaming? |
|---|---|---|---|---|---|
| csv-parser | ✅ | ❌ | ❌ | ✅ | ✅ |
| csv-writer | ❌ | ✅ | ❌ | ✅ | ❌ (batch) |
| fast-csv | ✅ | ✅ | ❌ | ✅ | ✅ |
| papaparse | ✅ | ✅* | ✅ | ✅ | ❌ (chunked) |
* papaparse generates CSV strings, not files — you handle file writing yourself.
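In Node.js, pairing Papa.unparse with the standard fs module closes that gap; a minimal sketch (the file name is illustrative):
// papaparse in Node.js: generate the CSV string, then persist it yourself
const fs = require('fs');
const Papa = require('papaparse');

const csvString = Papa.unparse([{ name: 'Dana', age: 35 }]);
fs.writeFileSync('out.csv', csvString);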
- For browser apps: papaparse. It's the only choice that works reliably in browsers.
- For Node.js read-and-write pipelines: fast-csv, for its unified streaming API for both read and write.
- For Node.js parsing only: csv-parser is lightweight and focused.
- For Node.js writing only: csv-writer has a clean, declarative API for object-to-CSV conversion.

Don't combine csv-parser and csv-writer unless you specifically need their simplicity; fast-csv usually covers both needs more cohesively in Node.js. And never try to use the Node.js-only packages in the browser; they will fail silently or throw obscure errors.
Choose papaparse if your application runs in the browser or needs to handle CSV files uploaded by users, as it's the only option that works reliably across both browser and Node.js environments. It excels at parsing File objects, supports chunked processing for large files, and can generate CSV strings for download, making it the go-to for frontend-heavy CSV workflows.
Choose fast-csv if you're in a Node.js environment and need a full-featured, streaming-capable solution for both parsing and writing CSV with consistent APIs. It's well-suited for processing large datasets efficiently and offers extensive configuration for formatting, escaping, and transformation without switching between multiple libraries.
Choose csv-parser if you're working exclusively in Node.js and need a lightweight, stream-based CSV parser with minimal dependencies. It's ideal for reading large CSV files efficiently without loading everything into memory, but remember it cannot write CSV or run in the browser.
Choose csv-writer if your Node.js application only needs to generate CSV files from JavaScript data structures and you want a simple, declarative API for defining headers and records. It doesn't support parsing or browser environments, so pair it with a parser like csv-parser if you need both directions.
Papa Parse is the fastest in-browser CSV (or delimited text) parser for JavaScript. It is reliable and correct according to RFC 4180, and its features include header row support, configurable delimiters, chunked and streamed parsing of large files, reverse parsing (data back to CSV), and reading files straight from <input type="file"> elements. Papa Parse has no dependencies - not even jQuery.
papaparse is available on npm. It can be installed with the following command:
npm install papaparse
If you don't want to use npm, papaparse.min.js can be downloaded to your project source.
import Papa from 'papaparse';
Papa.parse(file, config);
const csv = Papa.unparse(data[, config]);
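As a concrete example of that unparse signature, a minimal sketch (delimiter is a documented config option; the data is illustrative):
import Papa from 'papaparse';

// Convert an array of objects to a semicolon-delimited CSV string
const csvText = Papa.unparse([{ name: 'Eve', age: 28 }], { delimiter: ';' });
console.log(csvText); // "name;age\r\nEve;28" (CRLF is papaparse's default newline)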
To learn how to use Papa Parse, see the documentation on the website. The website is hosted on GitHub Pages; its content is also included in the docs folder of this repository. If you want to contribute to it, just clone the master branch of this repository and open a pull request.
Papa Parse can parse a Readable Stream instead of a File when used in Node.js environments (in addition to plain strings). In this mode, encoding must, if specified, be a Node-supported character encoding. The Papa.LocalChunkSize, Papa.RemoteChunkSize, download, withCredentials and worker config options are unavailable.
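For example, a minimal sketch of this mode (the file name is illustrative; step and complete remain available here):
const fs = require('fs');
const Papa = require('papaparse');

// Hand Papa.parse a Readable Stream instead of a File
Papa.parse(fs.createReadStream('data.csv'), {
  header: true,
  step: (result) => console.log(result.data), // one parsed row at a time
  complete: () => console.log('Stream fully parsed')
});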
Papa Parse can also parse in a Node streaming style, which makes .pipe available. Simply pipe the Readable Stream to the stream returned from Papa.parse(Papa.NODE_STREAM_INPUT, options). The Papa.LocalChunkSize, Papa.RemoteChunkSize, download, withCredentials, worker, step, and complete config options are unavailable. To register a callback with the stream to process data, use the data event like so: stream.on('data', callback), and to signal the end of stream, use the 'end' event like so: stream.on('end', callback).
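A minimal sketch of that pipe style (the file name is illustrative):
const fs = require('fs');
const Papa = require('papaparse');

// Papa.parse returns a duplex stream when given NODE_STREAM_INPUT
const parseStream = Papa.parse(Papa.NODE_STREAM_INPUT, { header: true });
fs.createReadStream('data.csv').pipe(parseStream);

parseStream.on('data', (row) => console.log(row)); // each parsed row
parseStream.on('end', () => console.log('All rows processed'));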
For usage instructions, see the homepage and, for more detail, the documentation.
Papa Parse is under test. Download this repository, run npm install, then npm test to run the tests.
To discuss a new feature or ask a question, open an issue. To fix a bug, submit a pull request to be credited with the contributors! Remember, a pull request, with test, is best. You may also discuss on Twitter with #PapaParse or directly to me, @mholt6.
If you contribute a patch, ensure the test suite runs correctly. We run continuous integration on each pull request and will not accept a patch that breaks the tests.