csv-parser, csv-writer, fast-csv, and papaparse are npm packages for handling CSV (Comma-Separated Values) data in JavaScript applications. csv-parser focuses exclusively on parsing CSV data in Node.js using streams. csv-writer provides CSV writing capabilities for Node.js, converting JavaScript objects or arrays into properly formatted CSV output. fast-csv offers a comprehensive solution for both parsing and writing CSV in Node.js with strong streaming support. papaparse is a versatile library that works in both browser and Node.js environments, supporting CSV parsing from files or strings and generating CSV output, with features tailored for frontend use cases like file uploads and chunked processing.
When building web or Node.js applications that handle tabular data, you'll often need to read from or write to CSV files. The four packages (csv-parser, csv-writer, fast-csv, and papaparse) each solve part of this problem, but with different scopes, environments, and design philosophies. Let's compare them in real-world engineering terms.
csv-parser is a Node.js-only stream-based parser. It reads CSV from a readable stream (like a file) and emits JavaScript objects row by row. It does not support writing CSV, and it cannot run in the browser.
// csv-parser: Node.js stream parsing
const csv = require('csv-parser');
const fs = require('fs');
fs.createReadStream('data.csv')
.pipe(csv())
.on('data', (row) => console.log(row))
.on('end', () => console.log('Finished'));
csv-writer is a Node.js-only writer. It takes arrays or objects and writes them to a CSV file. It has no parsing capability, and like csv-parser, it only works in Node.js.
// csv-writer: Node.js writing
const createCsvWriter = require('csv-writer').createObjectCsvWriter;
const csvWriter = createCsvWriter({
path: 'out.csv',
header: [{id: 'name', title: 'NAME'}, {id: 'age', title: 'AGE'}]
});
await csvWriter.writeRecords([{name: 'Alice', age: 30}]);
fast-csv supports both parsing and writing, and works in Node.js only. It uses streams for memory efficiency and offers extensive formatting and parsing options.
// fast-csv: parse
const fs = require('fs');
const csv = require('fast-csv');
fs.createReadStream('in.csv')
.pipe(csv.parse({ headers: true }))
.on('data', console.log);
// fast-csv: write
const ws = fs.createWriteStream('out.csv');
csv.write([{ name: 'Bob', age: 25 }], { headers: true }).pipe(ws);
papaparse is the only package of the four that works in both browser and Node.js. It can parse CSV strings or files (including File objects from <input>), and can also generate CSV strings (though it does not write directly to disk). It's designed first and foremost for frontend use.
// papaparse: browser parsing
Papa.parse(fileInput.files[0], {
header: true,
complete: (results) => console.log(results.data)
});
// papaparse: string-to-CSV
const csvString = Papa.unparse([
{ name: 'Charlie', age: 40 }
]);
This is the biggest architectural split:
Browser-capable: papaparse. Node.js-only: csv-parser, csv-writer, and fast-csv. If your app runs in the browser (e.g., uploading a CSV and previewing it), you must use papaparse. None of the others will work; they rely on Node.js streams or fs.
In Node.js, all four can technically be used, but note that csv-parser and csv-writer are single-purpose: one parses, the other writes. You'd need both if you're doing round-trip processing.
Streaming (memory-efficient):
- csv-parser: streams only.
- fast-csv: streams for both read and write.

In-memory (simpler API):
- papaparse: loads entire input into memory (unless using worker or chunk mode in the browser).
- csv-writer: writes records in batches but doesn't expose a streaming interface.

For large files (>100 MB), streaming is essential to avoid crashing your process or browser tab. In Node.js, prefer fast-csv or the csv-parser + csv-writer combo for large datasets. In the browser, papaparse supports chunked parsing to mitigate memory pressure:
// papaparse: chunked parsing for large files
Papa.parse(file, {
header: true,
chunk: (results) => {
// Process a batch of rows
console.log(results.data);
},
complete: () => console.log('Done')
});
All packages support common CSV dialects (quotes, delimiters, escaping), but their APIs differ.
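Under the hood, all four follow the same RFC 4180-style quoting convention: a field containing the delimiter, a quote character, or a newline gets wrapped in quotes, and embedded quotes are doubled. A minimal sketch of that rule (illustrative only, not code from any of the packages):

```javascript
// RFC 4180-style field quoting: the convention the four libraries share.
// Illustrative sketch, not any library's actual implementation.
function quoteField(value, delimiter = ',') {
  const s = String(value);
  // Quote only when the field contains the delimiter, a quote, or a newline.
  if (s.includes(delimiter) || s.includes('"') || s.includes('\n')) {
    return '"' + s.replace(/"/g, '""') + '"'; // embedded quotes are doubled
  }
  return s;
}

console.log(quoteField('plain'));    // plain
console.log(quoteField('a,b'));      // "a,b"
console.log(quoteField('say "hi"')); // "say ""hi"""
```

Knowing this rule helps when debugging dialect mismatches: if a library's output looks "double-quoted", it is usually just this escaping at work.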
Custom delimiter example:
// csv-parser
fs.createReadStream('data.tsv').pipe(csv({ separator: '\t' }));
// csv-writer
createObjectCsvWriter({
path: 'out.tsv',
fieldDelimiter: '\t',
header: [...]
});
// fast-csv
csv.parse({ delimiter: '\t', headers: true });
// papaparse
Papa.parse(input, { delimiter: '\t', header: true });
Header handling:
- fast-csv and papaparse allow renaming or transforming headers during parse/write.
- csv-writer requires explicit header definition when writing objects.

Suppose you need to read a CSV, uppercase all names, and write it back.
In Node.js with fast-csv (most efficient):
const fs = require('fs');
const csv = require('fast-csv');
fs.createReadStream('in.csv')
.pipe(csv.parse({ headers: true }))
.transform(row => ({ ...row, name: row.name.toUpperCase() }))
.pipe(csv.format({ headers: true }))
.pipe(fs.createWriteStream('out.csv'));
In Node.js with csv-parser + csv-writer:
const csvParser = require('csv-parser');
const createCsvWriter = require('csv-writer').createObjectCsvWriter;
const fs = require('fs');
const results = [];
fs.createReadStream('in.csv')
.pipe(csvParser())
.on('data', row => results.push({ ...row, name: row.name.toUpperCase() }))
.on('end', async () => {
const writer = createCsvWriter({
path: 'out.csv',
header: Object.keys(results[0]).map(k => ({ id: k, title: k.toUpperCase() }))
});
await writer.writeRecords(results);
});
In browser with papaparse:
// Parse uploaded file
Papa.parse(file, {
header: true,
complete: (results) => {
const transformed = results.data.map(r => ({
...r,
name: r.name.toUpperCase()
}));
const csvString = Papa.unparse(transformed);
// Trigger download
const blob = new Blob([csvString], { type: 'text/csv' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'out.csv';
a.click();
}
});
As of the latest official sources:
- csv-parser: Actively maintained. No deprecation notice.
- csv-writer: Actively maintained. No deprecation notice.
- fast-csv: Actively maintained. Regular releases.
- papaparse: Actively maintained. Widely used in frontend projects.

None are deprecated. All are safe for production use in their intended environments.
| Package | Parse? | Write? | Browser? | Node.js? | Streaming? |
|---|---|---|---|---|---|
| csv-parser | ✅ | ❌ | ❌ | ✅ | ✅ |
| csv-writer | ❌ | ✅ | ❌ | ✅ | ❌ (batch) |
| fast-csv | ✅ | ✅ | ❌ | ✅ | ✅ |
| papaparse | ✅ | ✅* | ✅ | ✅ | ✅ (chunked) |
* papaparse generates CSV strings, not files; you handle file writing yourself.
- In the browser: papaparse. It's the only choice that works reliably in browsers.
- In Node.js for both reading and writing: fast-csv, for its unified streaming API.
- Parsing only in Node.js: csv-parser is lightweight and focused.
- Writing only in Node.js: csv-writer has a clean, declarative API for object-to-CSV conversion.

Don't combine csv-parser and csv-writer unless you specifically need their simplicity; fast-csv usually covers both needs more cohesively in Node.js. And never try to use the Node.js-only packages in the browser; they will fail silently or throw obscure errors.
Choose csv-parser if you're working exclusively in Node.js and need a lightweight, stream-based CSV parser with minimal dependencies. It's ideal for reading large CSV files efficiently without loading everything into memory, but remember it cannot write CSV or run in the browser.
Choose csv-writer if your Node.js application only needs to generate CSV files from JavaScript data structures and you want a simple, declarative API for defining headers and records. It doesn't support parsing or browser environments, so pair it with a parser like csv-parser if you need both directions.
Choose fast-csv if you're in a Node.js environment and need a full-featured, streaming-capable solution for both parsing and writing CSV with consistent APIs. It's well-suited for processing large datasets efficiently and offers extensive configuration for formatting, escaping, and transformation without switching between multiple libraries.
Choose papaparse if your application runs in the browser or needs to handle CSV files uploaded by users, as it's the only option that works reliably across both browser and Node.js environments. It excels at parsing File objects, supports chunked processing for large files, and can generate CSV strings for download, making it the go-to for frontend-heavy CSV workflows.
Streaming CSV parser that aims for maximum speed as well as compatibility with the csv-spectrum CSV acid test suite.
csv-parser can convert CSV into JSON at a rate of around 90,000 rows per
second. Performance varies with the data used; try bin/bench.js <your file>
to benchmark your data.
csv-parser can be used in the browser with browserify.
neat-csv can be used if a Promise
based interface to csv-parser is needed.
Note: This module requires Node v8.16.0 or higher.
⚡️ csv-parser is greased-lightning fast

$ npm run bench
| Filename | Rows Parsed | Duration |
|---|---|---|
| backtick.csv | 2 | 3.5ms |
| bad-data.csv | 3 | 0.55ms |
| basic.csv | 1 | 0.26ms |
| comma-in-quote.csv | 1 | 0.29ms |
| comment.csv | 2 | 0.40ms |
| empty-columns.csv | 1 | 0.40ms |
| escape-quotes.csv | 3 | 0.38ms |
| geojson.csv | 3 | 0.46ms |
| large-dataset.csv | 7268 | 73ms |
| newlines.csv | 3 | 0.35ms |
| no-headers.csv | 3 | 0.26ms |
| option-comment.csv | 2 | 0.24ms |
| option-escape.csv | 3 | 0.25ms |
| option-maxRowBytes.csv | 4577 | 39ms |
| option-newline.csv | 0 | 0.47ms |
| option-quote-escape.csv | 3 | 0.33ms |
| option-quote-many.csv | 3 | 0.38ms |
| option-quote.csv | 2 | 0.22ms |
| quotes+newlines.csv | 3 | 0.20ms |
| strict.csv | 3 | 0.22ms |
| latin.csv | 2 | 0.38ms |
| mac-newlines.csv | 2 | 0.28ms |
| utf16-big.csv | 2 | 0.33ms |
| utf16.csv | 2 | 0.26ms |
| utf8.csv | 2 | 0.24ms |
Using npm:
$ npm install csv-parser
Using yarn:
$ yarn add csv-parser
To use the module, create a readable stream to a desired CSV file, instantiate
csv, and pipe the stream to csv.
Suppose you have a CSV file data.csv which contains the data:
NAME,AGE
Daffy Duck,24
Bugs Bunny,22
It could then be parsed, and results shown like so:
const csv = require('csv-parser')
const fs = require('fs')
const results = [];
fs.createReadStream('data.csv')
.pipe(csv())
.on('data', (data) => results.push(data))
.on('end', () => {
console.log(results);
// [
// { NAME: 'Daffy Duck', AGE: '24' },
// { NAME: 'Bugs Bunny', AGE: '22' }
// ]
});
To specify options for csv, pass an object argument to the function. For
example:
csv({ separator: '\t' });
Returns: Array[Object]
Type: Object
As an alternative to passing an options object, you may pass an Array[String]
which specifies the headers to use. For example:
csv(['Name', 'Age']);
If you need to specify options and headers, please use the object notation
with the headers property as shown below.
Type: String
Default: "
A single-character string used to specify the character used to escape strings in a CSV row.
Type: Array[String] | Boolean
Specifies the headers to use. Headers define the property key for each value in
a CSV row. If no headers option is provided, csv-parser will use the first
line in a CSV file as the header specification.
If false, specifies that the first row in a data file does not contain
headers, and instructs the parser to use the column index as the key for each column.
Using headers: false with the same data.csv example from above would yield:
[
{ '0': 'Daffy Duck', '1': '24' },
{ '0': 'Bugs Bunny', '1': '22' }
]
Note: If using the headers for an operation on a file which contains headers on the first line, specify skipLines: 1 to skip over the row, or the headers row will appear as normal row data. Alternatively, use the mapHeaders option to manipulate existing headers in that scenario.
Type: Function
A function that can be used to modify the values of each header. Return a String to modify the header. Return null to remove the header, and its column, from the results.
csv({
mapHeaders: ({ header, index }) => header.toLowerCase()
})
header String The current column header.
index Number The current column index.
Type: Function
A function that can be used to modify the content of each column. The return value will replace the current column content.
csv({
mapValues: ({ header, index, value }) => value.toLowerCase()
})
header String The current column header.
index Number The current column index.
value String The current column value (or content).
Type: String
Default: \n
Specifies a single-character string to denote the end of a line in a CSV file.
Type: String
Default: "
Specifies a single-character string to denote a quoted string.
Type: Boolean
If true, instructs the parser not to decode UTF-8 strings.
Type: String
Default: ,
Specifies a single-character string to use as the column separator for each row.
Type: Boolean | String
Default: false
Instructs the parser to ignore lines which represent comments in a CSV file. Since there is no specification that dictates what a CSV comment looks like, comments should be considered non-standard. The "most common" character used to signify a comment in a CSV file is "#". If this option is set to true, lines which begin with # will be skipped. If a custom character is needed to denote a commented line, this option may be set to a string which represents the leading character(s) signifying a comment line.
Type: Number
Default: 0
Specifies the number of lines at the beginning of a data file that the parser should skip over, prior to parsing headers.
Type: Number
Default: Number.MAX_SAFE_INTEGER
Maximum number of bytes per row. An error is thrown if a line exceeds this value. The default value is approximately 8 petabytes.
Type: Boolean
Default: false
If true, instructs the parser that the number of columns in each row must match
the number of headers specified; an exception is thrown otherwise.
If false, the headers are mapped to the column index:
- Fewer columns than headers: any missing column in the middle will result in a wrong property mapping!
- More columns than headers: the additional columns will create "_" + index properties, e.g. "_10": "value".
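The non-strict fallback described above can be sketched as follows — extra cells past the last header get "_" + index keys, and strict mode throws on a length mismatch (illustrative, not the module's code):

```javascript
// Illustrative row mapping: headers first, then "_" + index keys for any
// extra columns; strict mode rejects rows whose length differs.
function mapRow(headers, cells, strict = false) {
  if (strict && cells.length !== headers.length) {
    throw new RangeError(
      `Row has ${cells.length} columns, expected ${headers.length}`
    );
  }
  const row = {};
  cells.forEach((cell, i) => {
    row[i < headers.length ? headers[i] : '_' + i] = cell;
  });
  return row;
}

console.log(mapRow(['name', 'age'], ['Daffy Duck', '24', 'extra']));
// { name: 'Daffy Duck', age: '24', _2: 'extra' }
```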
Type: Boolean
Default: false
If true, instructs the parser to emit each row with a byteOffset property.
The byteOffset represents the offset in bytes of the beginning of the parsed row in the original stream.
This changes the stream's output format to { byteOffset, row }.
The following events are emitted during parsing:
data: Emitted for each row of data parsed, with the notable exception of the header row. Please see Usage for an example.
headers: Emitted after the header row is parsed. The first parameter of the event
callback is an Array[String] containing the header names.
fs.createReadStream('data.csv')
.pipe(csv())
.on('headers', (headers) => {
console.log(`First header: ${headers[0]}`)
})
Events available on Node built-in
Readable Streams
are also emitted. The end event should be used to detect the end of parsing.
This module also provides a CLI which will convert CSV to newline-delimited JSON. The following CLI flags can be used to control how input is parsed:
Usage: csv-parser [filename?] [options]
--escape,-e Set the escape character (defaults to quote value)
--headers,-h Explicitly specify csv headers as a comma separated list
--help Show this help
--output,-o Set output file. Defaults to stdout
--quote,-q Set the quote character ('"' by default)
--remove Remove columns from output by header name
--separator,-s Set the separator character ("," by default)
--skipComments,-c Skip CSV comments that begin with '#'. Set a value to change the comment character.
--skipLines,-l Set the number of lines to skip to before parsing headers
--strict Require column length match headers length
--version,-v Print out the installed version
For example, to parse a TSV file:
cat data.tsv | csv-parser -s $'\t'
Users may encounter issues with the encoding of a CSV file. Transcoding the source stream can be done neatly with a transcoding module, or with native iconv if part
of a pipeline.
Some CSV files may be generated with, or contain a leading Byte Order Mark. This may cause issues parsing headers and/or data from your file. From Wikipedia:
The Unicode Standard permits the BOM in UTF-8, but does not require nor recommend its use. Byte order has no meaning in UTF-8.
To use this module with a file containing a BOM, please use a module like strip-bom-stream in your pipeline:
const fs = require('fs');
const csv = require('csv-parser');
const stripBom = require('strip-bom-stream');
fs.createReadStream('data.csv')
.pipe(stripBom())
.pipe(csv())
...
When using the CLI, the BOM can be removed by first running:
$ sed $'s/\xEF\xBB\xBF//g' data.csv
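Programmatically, the same strip can be done on a Buffer with Node built-ins before the data reaches the parser (a sketch independent of strip-bom-stream):

```javascript
// The UTF-8 BOM is the three-byte sequence EF BB BF at the start of a file.
// This sketch removes it from a Buffer using only Node built-ins.
function stripBom(buf) {
  if (buf.length >= 3 && buf[0] === 0xef && buf[1] === 0xbb && buf[2] === 0xbf) {
    return buf.subarray(3); // drop the BOM, keep the rest
  }
  return buf;
}

const withBom = Buffer.concat([
  Buffer.from([0xef, 0xbb, 0xbf]),
  Buffer.from('NAME,AGE'),
]);
console.log(stripBom(withBom).toString('utf8')); // NAME,AGE
```

For streamed input, strip-bom-stream (shown above) applies the same check to the first chunk of the stream.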