node-fetch vs axios vs got vs request vs cheerio vs puppeteer vs selenium-webdriver vs scrapingbee
Web Scraping and HTTP Client Libraries Comparison
What's Web Scraping and HTTP Client Libraries?

These libraries cover the core tasks of web data retrieval: making HTTP requests and extracting content from the responses. They give developers tools to call web APIs, parse and manipulate HTML documents, and automate browser tasks. Each library has its own strengths and trade-offs, making it suited to different web scraping and data-retrieval scenarios.

Stat Detail
| Package | Downloads | Stars | Size | Issues | Last Publish | License |
| ------- | --------- | ----- | ---- | ------ | ------------ | ------- |
| node-fetch | 63,611,767 | 8,838 | 107 kB | 219 | 2 years ago | MIT |
| axios | 60,747,731 | 106,823 | 2.16 MB | 681 | 16 days ago | MIT |
| got | 23,588,489 | 14,579 | 242 kB | 127 | a month ago | MIT |
| request | 14,458,130 | 25,668 | - | 134 | 5 years ago | Apache-2.0 |
| cheerio | 10,418,065 | 29,417 | 1.25 MB | 52 | 9 months ago | MIT |
| puppeteer | 4,666,690 | 90,616 | 362 kB | 265 | 2 days ago | Apache-2.0 |
| selenium-webdriver | 1,661,921 | 32,272 | 18 MB | 245 | 8 days ago | Apache-2.0 |
| scrapingbee | 12,618 | 7 | 27.1 kB | 2 | 8 months ago | ISC |
Feature Comparison: node-fetch vs axios vs got vs request vs cheerio vs puppeteer vs selenium-webdriver vs scrapingbee

Ease of Use

  • node-fetch:

    Node-fetch mimics the Fetch API found in browsers, making it easy for developers familiar with client-side JavaScript to make HTTP requests in Node.js.

  • axios:

    Axios provides a simple and intuitive API for making HTTP requests, with built-in support for promises and async/await syntax, making it easy to handle asynchronous operations (see the side-by-side sketch after this list).

  • got:

    Got has a modern and user-friendly API that simplifies HTTP requests. It includes features like automatic retries and hooks, making it easy to customize request behavior.

  • request:

    Request has a simple API for making HTTP requests, but it is less feature-rich compared to newer libraries. It's easy to use for basic tasks but lacks modern features.

  • cheerio:

    Cheerio offers a jQuery-like syntax that makes it easy to traverse and manipulate the DOM, allowing developers to quickly extract data from HTML documents without complex parsing logic.

  • puppeteer:

    Puppeteer provides a high-level API that abstracts away the complexities of browser automation, allowing developers to focus on writing scripts without dealing with low-level browser details.

  • selenium-webdriver:

    Selenium WebDriver provides a comprehensive API for browser automation, but it can be more complex to set up and use compared to other libraries.

  • scrapingbee:

    ScrapingBee offers a straightforward API for web scraping, allowing developers to focus on data extraction without managing infrastructure or handling proxies.
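
To make the differences concrete, here is the same GET request in the three general-purpose HTTP clients above. This is a minimal sketch, assuming Node.js with ESM and using httpbin.org as a placeholder endpoint:

import fetch from 'node-fetch';
import axios from 'axios';
import got from 'got';

// node-fetch: browser-style Fetch API; you parse the body yourself
const fetchData = await (await fetch('https://httpbin.org/get')).json();

// axios: resolves to a response object with the parsed body on `data`
const {data: axiosData} = await axios.get('https://httpbin.org/get');

// got: calling .json() on the returned promise parses the body for you
const gotData = await got('https://httpbin.org/get').json();

console.log(fetchData.url, axiosData.url, gotData.url);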

Performance

  • node-fetch:

    Node-fetch is lightweight and performs well for making simple HTTP requests, but it may not have the advanced features of other libraries.

  • axios:

    Axios is optimized for performance with features like request cancellation and automatic JSON data transformation, ensuring efficient data handling.

  • got:

    Got is designed for performance, with support for streams and efficient handling of large payloads, making it suitable for high-throughput applications (a streaming sketch follows this list).

  • request:

    Request is not as performant as newer libraries and may struggle with large payloads or high concurrency due to its older architecture.

  • cheerio:

    Cheerio is lightweight and fast, making it suitable for parsing large HTML documents quickly without the overhead of a full browser environment.

  • puppeteer:

    Puppeteer can be resource-intensive due to its headless browser nature, but it excels in tasks that require rendering and interaction with web pages.

  • selenium-webdriver:

    Selenium WebDriver can be slower due to its reliance on browser interactions, but it is powerful for tasks that require full browser capabilities.

  • scrapingbee:

    ScrapingBee is optimized for web scraping tasks, handling proxies and rendering efficiently, which can improve performance compared to self-hosted solutions.
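
To illustrate the streaming point above: got can pipe a large response straight to disk without buffering it in memory. A minimal sketch, assuming got v12+ and a placeholder URL:

import {createWriteStream} from 'node:fs';
import {pipeline} from 'node:stream/promises';
import got from 'got';

// got.stream() returns a stream instead of buffering the whole body
await pipeline(
	got.stream('https://example.com/large-file.bin'),
	createWriteStream('./large-file.bin')
);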

Community Support

  • node-fetch:

    Node-fetch is popular among developers familiar with the Fetch API, and it has a supportive community, though smaller than that of Axios.

  • axios:

    Axios has a large and active community, with extensive documentation and numerous tutorials available, making it easy to find help and resources.

  • got:

    Got has a growing community and is well-documented, providing examples and support for developers looking to implement advanced HTTP features.

  • request:

    Request had a large community, but since it is deprecated, support is dwindling, and developers are encouraged to migrate to alternatives.

  • cheerio:

    Cheerio is widely used in the web scraping community, with good documentation and community support, although it may not be as extensive as larger libraries.

  • puppeteer:

    Puppeteer has a strong community and is actively maintained, with plenty of resources and examples available for browser automation tasks.

  • selenium-webdriver:

    Selenium WebDriver has a vast community and extensive documentation, making it a reliable choice for browser automation and testing.

  • scrapingbee:

    ScrapingBee has a dedicated support team and documentation, but its community is smaller compared to open-source libraries.

Flexibility

  • node-fetch:

    Node-fetch is straightforward and flexible, allowing developers to use it in various scenarios without much overhead.

  • axios:

    Axios is flexible and can be easily integrated into various frameworks and libraries, making it suitable for a wide range of applications.

  • got:

    Got offers a high degree of flexibility with features like hooks and extensible instances, allowing developers to customize request handling extensively (see the hooks sketch after this list).

  • request:

    Request is flexible for basic HTTP requests but lacks the advanced features and customization options of newer libraries.

  • cheerio:

    Cheerio is designed specifically for server-side DOM manipulation, providing flexibility in how developers can extract and manipulate data from HTML.

  • puppeteer:

    Puppeteer provides flexibility in automating browser tasks, allowing developers to script complex interactions and workflows with ease.

  • selenium-webdriver:

    Selenium WebDriver is highly flexible and supports multiple programming languages and browsers, making it suitable for a wide range of automation tasks.

  • scrapingbee:

    ScrapingBee offers flexibility in how developers can scrape data, with options for handling proxies and rendering, but it is a managed service with some limitations.
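
As an example of that flexibility, got's hooks can modify every request made by an extended client. A minimal sketch; the x-request-id header is purely illustrative:

import {randomUUID} from 'node:crypto';
import got from 'got';

// got.extend() creates a client whose beforeRequest hook tags each request
const client = got.extend({
	hooks: {
		beforeRequest: [
			options => {
				options.headers['x-request-id'] = randomUUID();
			}
		]
	}
});

const data = await client('https://httpbin.org/get').json();
console.log(data.headers);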

Error Handling

  • node-fetch:

    Node-fetch provides basic error handling for network errors, but developers need to implement additional logic for handling HTTP response errors.

  • axios:

    Axios provides built-in error handling for HTTP requests, rejecting on 4xx/5xx responses so developers can catch and manage errors in a consistent manner (see the sketch after this list).

  • got:

    Got has robust error handling features, including automatic retries and detailed error messages, making it easier to manage failed requests.

  • request:

    Request has basic error handling capabilities, but it is less sophisticated compared to newer libraries, making it harder to manage complex error scenarios.

  • cheerio:

    Cheerio does not handle errors related to HTTP requests, as it is focused on DOM manipulation, so developers must manage request errors separately.

  • puppeteer:

    Puppeteer includes error handling for browser automation tasks, allowing developers to catch exceptions and manage timeouts effectively.

  • selenium-webdriver:

    Selenium WebDriver provides comprehensive error handling for browser interactions, allowing developers to catch and manage exceptions during automation.

  • scrapingbee:

    ScrapingBee handles many common scraping errors internally, providing a simpler experience for developers, but custom error handling may be limited.
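
To illustrate the contrast: axios rejects the promise on 4xx/5xx responses and attaches the response to the error, whereas fetch-style clients resolve and leave the status check to you. A minimal sketch against a placeholder endpoint:

import axios from 'axios';

try {
	await axios.get('https://httpbin.org/status/404');
} catch (error) {
	if (axios.isAxiosError(error) && error.response) {
		// The server responded with a non-2xx status
		console.error(`HTTP ${error.response.status}`);
	} else {
		// Network failure, timeout, bad request config, etc.
		console.error(error.message);
	}
}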

How to Choose: node-fetch vs axios vs got vs request vs cheerio vs puppeteer vs selenium-webdriver vs scrapingbee
  • node-fetch:

    Use Node-fetch for a lightweight and simple implementation of the Fetch API in Node.js. It is a good choice if you want a familiar API similar to the browser's Fetch API for making HTTP requests.

  • axios:

    Choose Axios for its simplicity and ease of use when making HTTP requests. It supports promises and is widely adopted in the community, making it a great choice for projects that require straightforward API interactions.

  • got:

    Opt for Got if you require a powerful and flexible HTTP request library with built-in support for retries, streams, and advanced features like hooks. It is suitable for more complex HTTP interactions and offers a modern API.

  • request:

    Select Request if you are working on legacy projects or require a simple way to make HTTP requests. However, note that it is deprecated, and alternatives like Axios or Got are recommended for new projects.

  • cheerio:

    Select Cheerio if you need to parse and manipulate HTML on the server side. It provides a jQuery-like syntax for traversing and manipulating the DOM, making it ideal for web scraping tasks where you need to extract data from HTML documents (a short scraping sketch follows this section).

  • puppeteer:

    Choose Puppeteer when you need to control a headless browser for tasks like web scraping, automated testing, or generating screenshots and PDFs. It provides a high-level API to interact with Chrome or Chromium, allowing for complex interactions with web pages.

  • selenium-webdriver:

    Choose Selenium WebDriver for comprehensive browser automation and testing. It supports multiple browsers and programming languages, making it ideal for complex web scraping tasks that require user interactions.

  • scrapingbee:

    Opt for ScrapingBee if you want a managed web scraping service that handles proxies, headless browsers, and CAPTCHA solving. It is suitable for developers who want to focus on data extraction without worrying about infrastructure.
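
A common pairing from the list above is a plain HTTP client for fetching plus Cheerio for extraction. A minimal sketch, assuming Node.js 18+ (for the built-in global fetch) and a placeholder URL:

import * as cheerio from 'cheerio';

const html = await (await fetch('https://example.com/')).text();
const $ = cheerio.load(html);

// jQuery-like traversal: print every link's href and text
$('a').each((_, el) => {
	console.log($(el).attr('href'), '-', $(el).text().trim());
});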

README for node-fetch
Node Fetch

A light-weight module that brings Fetch API to Node.js.


Consider supporting us on our Open Collective.

You might be looking for the v2 docs

Motivation

Instead of implementing XMLHttpRequest in Node.js to run browser-specific Fetch polyfill, why not go from native http to fetch API directly? Hence, node-fetch, minimal code for a window.fetch compatible API on Node.js runtime.

See Jason Miller's isomorphic-unfetch or Leonardo Quixada's cross-fetch for isomorphic usage (exports node-fetch for server-side, whatwg-fetch for client-side).

Features

  • Stay consistent with window.fetch API.
  • Make conscious trade-off when following WHATWG fetch spec and stream spec implementation details, document known differences.
  • Use native promise and async functions.
  • Use native Node streams for body, on both request and response.
  • Decode content encoding (gzip/deflate/brotli) properly, and convert string output (such as res.text() and res.json()) to UTF-8 automatically.
  • Useful extensions such as redirect limit, response size limit, explicit errors for troubleshooting.

Difference from client-side fetch

  • See the documented known differences.
  • If you happen to use a missing feature that window.fetch offers, feel free to open an issue.
  • Pull requests are welcomed too!

Installation

Current stable release (3.x) requires at least Node.js 12.20.0.

npm install node-fetch

Loading and configuring the module

ES Modules (ESM)

import fetch from 'node-fetch';

CommonJS

node-fetch from v3 is an ESM-only module - you are not able to import it with require().

If you cannot switch to ESM, please use v2 which remains compatible with CommonJS. Critical bug fixes will continue to be published for v2.

npm install node-fetch@2

Alternatively, you can use the async import() function from CommonJS to load node-fetch asynchronously:

// mod.cjs
const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args));

Providing global access

To use fetch() without importing it, you can patch the global object in node:

// fetch-polyfill.js
import fetch, {
  Blob,
  blobFrom,
  blobFromSync,
  File,
  fileFrom,
  fileFromSync,
  FormData,
  Headers,
  Request,
  Response,
} from 'node-fetch'

if (!globalThis.fetch) {
  globalThis.fetch = fetch
  globalThis.Headers = Headers
  globalThis.Request = Request
  globalThis.Response = Response
}

// index.js
import './fetch-polyfill'

// ...

Upgrading

Using an old version of node-fetch? Check out the upgrade guides in the repository.

Common Usage

NOTE: The documentation below is up-to-date with 3.x releases, if you are using an older version, please check how to upgrade.

Plain text or HTML

import fetch from 'node-fetch';

const response = await fetch('https://github.com/');
const body = await response.text();

console.log(body);

JSON

import fetch from 'node-fetch';

const response = await fetch('https://api.github.com/users/github');
const data = await response.json();

console.log(data);

Simple Post

import fetch from 'node-fetch';

const response = await fetch('https://httpbin.org/post', {method: 'POST', body: 'a=1'});
const data = await response.json();

console.log(data);

Post with JSON

import fetch from 'node-fetch';

const body = {a: 1};

const response = await fetch('https://httpbin.org/post', {
	method: 'post',
	body: JSON.stringify(body),
	headers: {'Content-Type': 'application/json'}
});
const data = await response.json();

console.log(data);

Post with form parameters

URLSearchParams is available on the global object in Node.js as of v10.0.0. See official documentation for more usage methods.

NOTE: The Content-Type header is only set automatically to x-www-form-urlencoded when an instance of URLSearchParams is given as such:

import fetch from 'node-fetch';

const params = new URLSearchParams();
params.append('a', 1);

const response = await fetch('https://httpbin.org/post', {method: 'POST', body: params});
const data = await response.json();

console.log(data);

Handling exceptions

NOTE: 3xx-5xx responses are NOT exceptions and should be handled in then(); see the next section.

Wrapping the fetch function into a try/catch block will catch all exceptions, such as errors originating from node core libraries, like network errors, and operational errors which are instances of FetchError. See the error handling document for more details.

import fetch from 'node-fetch';

try {
	await fetch('https://domain.invalid/');
} catch (error) {
	console.log(error);
}

Handling client and server errors

It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:

import fetch from 'node-fetch';

class HTTPResponseError extends Error {
	constructor(response) {
		super(`HTTP Error Response: ${response.status} ${response.statusText}`);
		this.response = response;
	}
}

const checkStatus = response => {
	if (response.ok) {
		// response.status >= 200 && response.status < 300
		return response;
	} else {
		throw new HTTPResponseError(response);
	}
}

const response = await fetch('https://httpbin.org/status/400');

try {
	checkStatus(response);
} catch (error) {
	console.error(error);

	const errorBody = await error.response.text();
	console.error(`Error body: ${errorBody}`);
}

Handling cookies

Cookies are not stored by default. However, cookies can be extracted and passed by manipulating request and response headers. See Extract Set-Cookie Header for details.
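
A minimal sketch of that approach, using the node-fetch-only Headers.raw() extension (described below) to read Set-Cookie values and echo them back on a follow-up request; the URLs are placeholders:

import fetch from 'node-fetch';

const first = await fetch('https://example.com/login');

// raw() returns arrays of values; 'set-cookie' may be absent
const setCookies = first.headers.raw()['set-cookie'] || [];

// Keep each cookie's name=value pair, dropping attributes after ';'
const cookie = setCookies.map(c => c.split(';')[0]).join('; ');

const second = await fetch('https://example.com/profile', {
	headers: {cookie}
});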

Advanced Usage

Streams

The "Node.js way" is to use streams when possible. You can pipe res.body to another stream. This example uses stream.pipeline to attach stream error handlers and wait for the download to complete.

import {createWriteStream} from 'node:fs';
import {pipeline} from 'node:stream';
import {promisify} from 'node:util'
import fetch from 'node-fetch';

const streamPipeline = promisify(pipeline);

const response = await fetch('https://github.githubassets.com/images/modules/logos_page/Octocat.png');

if (!response.ok) throw new Error(`unexpected response ${response.statusText}`);

await streamPipeline(response.body, createWriteStream('./octocat.png'));

In Node.js 14 you can also use async iterators to read body; however, be careful to catch errors -- the longer a response runs, the more likely it is to encounter an error.

import fetch from 'node-fetch';

const response = await fetch('https://httpbin.org/stream/3');

try {
	for await (const chunk of response.body) {
		console.dir(JSON.parse(chunk.toString()));
	}
} catch (err) {
	console.error(err.stack);
}

In Node.js 12 you can also use async iterators to read the body; however, async iterators with streams did not mature until Node.js 14, so you need to do some extra work to ensure you handle errors directly from the stream and wait for the response to fully close.

import fetch from 'node-fetch';

const read = async body => {
	let error;
	body.on('error', err => {
		error = err;
	});

	for await (const chunk of body) {
		console.dir(JSON.parse(chunk.toString()));
	}

	return new Promise((resolve, reject) => {
		body.on('close', () => {
			error ? reject(error) : resolve();
		});
	});
};

try {
	const response = await fetch('https://httpbin.org/stream/3');
	await read(response.body);
} catch (err) {
	console.error(err.stack);
}

Accessing Headers and other Metadata

import fetch from 'node-fetch';

const response = await fetch('https://github.com/');

console.log(response.ok);
console.log(response.status);
console.log(response.statusText);
console.log(response.headers.raw());
console.log(response.headers.get('content-type'));

Extract Set-Cookie Header

Unlike browsers, you can access raw Set-Cookie headers manually using Headers.raw(). This is a node-fetch only API.

import fetch from 'node-fetch';

const response = await fetch('https://example.com');

// Returns an array of values, instead of a string of comma-separated values
console.log(response.headers.raw()['set-cookie']);

Post data using a file

import fetch, {
  Blob,
  blobFrom,
  blobFromSync,
  File,
  fileFrom,
  fileFromSync,
} from 'node-fetch'

const mimetype = 'text/plain'
const blob = fileFromSync('./input.txt', mimetype)
const url = 'https://httpbin.org/post'

const response = await fetch(url, { method: 'POST', body: blob })
const data = await response.json()

console.log(data)

node-fetch comes with a spec-compliant FormData implementation for posting multipart/form-data payloads:

import fetch, { FormData, File, fileFrom } from 'node-fetch'

const httpbin = 'https://httpbin.org/post'
const formData = new FormData()
const binary = new Uint8Array([ 97, 98, 99 ])
const abc = new File([binary], 'abc.txt', { type: 'text/plain' })

formData.set('greeting', 'Hello, world!')
formData.set('file-upload', abc, 'new name.txt')

const response = await fetch(httpbin, { method: 'POST', body: formData })
const data = await response.json()

console.log(data)

If you for some reason need to post a stream coming from an arbitrary source, you can append a Blob or a File look-a-like item.

The minimum requirement is that it has:

  1. A Symbol.toStringTag getter or property that is either Blob or File
  2. A known size.
  3. And either a stream() method or an arrayBuffer() method that returns an ArrayBuffer.

The stream() method must return an (async) iterable object that yields Uint8Array (or Buffer), so Node.js Readable streams and WHATWG streams work just fine.

formData.append('upload', {
	[Symbol.toStringTag]: 'Blob',
	size: 3,
	*stream() {
		yield new Uint8Array([97, 98, 99])
	},
	arrayBuffer() {
		return new Uint8Array([97, 98, 99]).buffer
	}
}, 'abc.txt')

Request cancellation with AbortSignal

You may cancel requests with AbortController. A suggested implementation is abort-controller.

An example of timing out a request after 150ms could be achieved as the following:

import fetch, { AbortError } from 'node-fetch';

// AbortController was added in node v14.17.0 globally
const AbortController = globalThis.AbortController || (await import('abort-controller')).default

const controller = new AbortController();
const timeout = setTimeout(() => {
	controller.abort();
}, 150);

try {
	const response = await fetch('https://example.com', {signal: controller.signal});
	const data = await response.json();
} catch (error) {
	if (error instanceof AbortError) {
		console.log('request was aborted');
	}
} finally {
	clearTimeout(timeout);
}

See test cases for more examples.

API

fetch(url[, options])

  • url A string representing the URL for fetching
  • options Options for the HTTP(S) request
  • Returns: Promise<Response>

Perform an HTTP(S) fetch.

url should be an absolute URL, such as https://example.com/. A path-relative URL (/file/under/root) or protocol-relative URL (//can-be-http-or-https.com/) will result in a rejected Promise.

Options

The default values are shown after each option key.

{
	// These properties are part of the Fetch Standard
	method: 'GET',
	headers: {},            // Request headers. Format is identical to that accepted by the Headers constructor (see below)
	body: null,             // Request body. can be null, or a Node.js Readable stream
	redirect: 'follow',     // Set to `manual` to extract redirect headers, `error` to reject redirect
	signal: null,           // Pass an instance of AbortSignal to optionally abort requests

	// The following properties are node-fetch extensions
	follow: 20,             // maximum redirect count. 0 to not follow redirect
	compress: true,         // support gzip/deflate content encoding. false to disable
	size: 0,                // maximum response body size in bytes. 0 to disable
	agent: null,            // http(s).Agent instance or function that returns an instance (see below)
	highWaterMark: 16384,   // the maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource.
	insecureHTTPParser: false	// Use an insecure HTTP parser that accepts invalid HTTP headers when `true`.
}

Default Headers

If no values are set, the following request headers will be sent automatically:

| Header | Value |
| ------------------- | ------------------------------------------------------ |
| Accept-Encoding | gzip, deflate, br (when options.compress === true) |
| Accept | */* |
| Content-Length | (automatically calculated, if possible) |
| Host | (host and port information from the target URI) |
| Transfer-Encoding | chunked (when req.body is a stream) |
| User-Agent | node-fetch |

Note: when body is a Stream, Content-Length is not set automatically.

Custom Agent

The agent option allows you to specify networking-related options which are out of the scope of Fetch, including but not limited to the following:

  • Support self-signed certificate
  • Use only IPv4 or IPv6
  • Custom DNS Lookup

See http.Agent for more information.

If no agent is specified, the default agent provided by Node.js is used. Note that this changed in Node.js 19 to have keepalive true by default. If you wish to enable keepalive in an earlier version of Node.js, you can override the agent as per the following code sample.

In addition, the agent option accepts a function that returns an http(s).Agent instance given the current URL; this is useful during a redirection chain across the HTTP and HTTPS protocols.

import http from 'node:http';
import https from 'node:https';

const httpAgent = new http.Agent({
	keepAlive: true
});
const httpsAgent = new https.Agent({
	keepAlive: true
});

const options = {
	agent: function(_parsedURL) {
		if (_parsedURL.protocol == 'http:') {
			return httpAgent;
		} else {
			return httpsAgent;
		}
	}
};

Custom highWaterMark

Streams in Node.js have a smaller internal buffer size (16 kB, aka highWaterMark) than client-side browsers (>1 MB, not consistent across browsers). Because of that, when you are writing an isomorphic app and using res.clone(), it will hang with large responses in Node.

The recommended way to fix this problem is to resolve cloned response in parallel:

import fetch from 'node-fetch';

const response = await fetch('https://example.com');
const r1 = response.clone();

const results = await Promise.all([response.json(), r1.text()]);

console.log(results[0]);
console.log(results[1]);

If for some reason you don't like the solution above, since 3.x you are able to modify the highWaterMark option:

import fetch from 'node-fetch';

const response = await fetch('https://example.com', {
	// About 1MB
	highWaterMark: 1024 * 1024
});

const result = await response.clone().arrayBuffer();
console.dir(result);

Insecure HTTP Parser

Passed through to the insecureHTTPParser option on http(s).request. See http.request for more information.
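
A one-line sketch of enabling it per request (only sensible for servers you trust that emit malformed headers; the URL is a placeholder):

import fetch from 'node-fetch';

// Tolerate invalid HTTP headers from a known-misbehaving server
const response = await fetch('https://legacy.example.com/', {
	insecureHTTPParser: true
});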

Manual Redirect

The redirect: 'manual' option for node-fetch is different from the browser & specification, which results in an opaque-redirect filtered response. node-fetch gives you the typical basic filtered response instead.

import fetch from 'node-fetch';

const response = await fetch('https://httpbin.org/status/301', { redirect: 'manual' });

if (response.status === 301 || response.status === 302) {
	const locationURL = new URL(response.headers.get('location'), response.url);
	const response2 = await fetch(locationURL, { redirect: 'manual' });
	console.dir(response2);
}

Class: Request

An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the Body interface.

Due to the nature of Node.js, the following properties are not implemented at this moment:

  • type
  • destination
  • mode
  • credentials
  • cache
  • integrity
  • keepalive

The following node-fetch extension properties are provided:

  • follow
  • compress
  • counter
  • agent
  • highWaterMark

See options for exact meaning of these extensions.

new Request(input[, options])

(spec-compliant)

  • input A string representing a URL, or another Request (which will be cloned)
  • options Options for the HTTP(S) request

Constructs a new Request object. The constructor is identical to that in the browser.

In most cases, directly fetch(url, options) is simpler than creating a Request object.
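
A short sketch of constructing a Request and passing it to fetch (the URL is a placeholder):

import fetch, {Request} from 'node-fetch';

const request = new Request('https://httpbin.org/post', {
	method: 'POST',
	body: JSON.stringify({a: 1}),
	headers: {'Content-Type': 'application/json'}
});

// fetch() accepts a Request in place of a URL string
const response = await fetch(request);
console.log(response.status);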

Class: Response

An HTTP(S) response. This class implements the Body interface.

The following properties are not implemented in node-fetch at this moment:

  • trailer

new Response([body[, options]])

(spec-compliant)

Constructs a new Response object. The constructor is identical to that in the browser.

Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a Response directly.

response.ok

(spec-compliant)

Convenience property representing if the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300.

response.redirected

(spec-compliant)

Convenience property representing if the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0.

response.type

(deviation from spec)

Convenience property representing the response's type. node-fetch only supports 'default' and 'error' and does not make use of filtered responses.

Class: Headers

This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the Fetch Standard are implemented.

new Headers([init])

(spec-compliant)

  • init Optional argument to pre-fill the Headers object

Construct a new Headers object. init can be either null, a Headers object, a key-value map object, or any iterable object.

// Example adapted from https://fetch.spec.whatwg.org/#example-headers-class
import {Headers} from 'node-fetch';

const meta = {
	'Content-Type': 'text/xml'
};
const headers = new Headers(meta);

// The above is equivalent to
const meta2 = [['Content-Type', 'text/xml']];
const headers2 = new Headers(meta2);

// You can in fact use any iterable object, like a Map or even another Headers
const meta3 = new Map();
meta3.set('Content-Type', 'text/xml');
const headers3 = new Headers(meta3);
const copyOfHeaders = new Headers(headers3);

Interface: Body

Body is an abstract interface with methods that are applicable to both Request and Response classes.

body.body

(deviation from spec)

Data are encapsulated in the Body object. Note that while the Fetch Standard requires the property to always be a WHATWG ReadableStream, in node-fetch it is a Node.js Readable stream.

body.bodyUsed

(spec-compliant)

  • Boolean

A boolean property for if this body has been consumed. Per the specs, a consumed body cannot be used again.
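
A small sketch of the consumed-body rule:

import fetch from 'node-fetch';

const response = await fetch('https://example.com/');

console.log(response.bodyUsed); // false
await response.text(); // consumes the body
console.log(response.bodyUsed); // true

// A second read rejects, because a consumed body cannot be used again
await response.text().catch(error => console.error(error.message));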

body.arrayBuffer()

body.formData()

body.blob()

body.json()

body.text()

fetch comes with methods to parse multipart/form-data payloads as well as x-www-form-urlencoded bodies using .formData(). This comes from the idea that a Service Worker can intercept such messages before they are sent to the server and alter them. It is useful for anybody building a server, as you can use it to parse and consume payloads.

Code example
import http from 'node:http'
import { Response } from 'node-fetch'

http.createServer(async function (req, res) {
  const formData = await new Response(req, {
    headers: req.headers // Pass along the boundary value
  }).formData()
  const allFields = [...formData]

  const file = formData.get('uploaded-files')
  const arrayBuffer = await file.arrayBuffer()
  const text = await file.text()
  const whatwgReadableStream = file.stream()

  // Other ways to consume the request could be:
  const json = await new Response(req).json()
  const text2 = await new Response(req).text()
  const arrayBuffer2 = await new Response(req).arrayBuffer()
  const blob = await new Response(req, {
    headers: req.headers // So that `type` inherits `Content-Type`
  }).blob()
})

Class: FetchError

(node-fetch extension)

An operational error in the fetching process. See ERROR-HANDLING.md for more info.
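
A sketch of distinguishing operational errors, building on the try/catch example above:

import fetch, {FetchError} from 'node-fetch';

try {
	await fetch('https://domain.invalid/');
} catch (error) {
	if (error instanceof FetchError) {
		// e.g. type 'system', code 'ENOTFOUND' for a DNS failure
		console.log(error.type, error.code);
	} else {
		throw error;
	}
}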

Class: AbortError

(node-fetch extension)

An Error thrown when the request is aborted in response to an AbortSignal's abort event. It has a name property of AbortError. See ERROR-HANDLING.md for more info.

TypeScript

Since 3.x, types are bundled with node-fetch, so you don't need to install any additional packages.

For older versions please use the type definitions from DefinitelyTyped:

npm install --save-dev @types/node-fetch@2.x

Acknowledgement

Thanks to github/fetch for providing a solid implementation reference.

Team

David Frank, Jimmy Wärting, Antoni Kepinski, Richie Bendall, Gregor Martynus

Former

License

MIT