These libraries serve different purposes in web development, particularly making HTTP requests and scraping web content. They give developers tools to interact with web APIs, manipulate HTML documents, and automate browsers. Each has its own strengths, making it suited to different web-scraping and data-retrieval scenarios.
Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
| --- | --- | --- | --- | --- | --- | --- |
| axios | 0 | 108,625 | 2.42 MB | 348 | 11 days ago | MIT |
| cheerio | 0 | 30,174 | 1.01 MB | 35 | 2 months ago | MIT |
| got | 0 | 14,876 | 304 kB | 1 | 2 months ago | MIT |
| node-fetch | 0 | 8,859 | 107 kB | 237 | 3 years ago | MIT |
| puppeteer | 0 | 93,776 | 63 kB | 289 | 6 hours ago | Apache-2.0 |
| request | 0 | 25,600 | - | 142 | 6 years ago | Apache-2.0 |
| scrapingbee | 0 | 10 | 101 kB | 1 | 2 months ago | ISC |
| selenium-webdriver | 0 | 34,104 | 17.9 MB | 204 | 19 days ago | Apache-2.0 |
Feature Comparison: axios vs cheerio vs got vs node-fetch vs puppeteer vs request vs scrapingbee vs selenium-webdriver
Ease of Use
axios:
Axios provides a simple and intuitive API for making HTTP requests, with built-in support for promises and async/await syntax, making it easy to handle asynchronous operations.
cheerio:
Cheerio offers a jQuery-like syntax that makes it easy to traverse and manipulate the DOM, allowing developers to quickly extract data from HTML documents without complex parsing logic.
got:
Got has a modern and user-friendly API that simplifies HTTP requests. It includes features like automatic retries and hooks, making it easy to customize request behavior.
node-fetch:
Node-fetch mimics the Fetch API found in browsers, making it easy for developers familiar with client-side JavaScript to make HTTP requests in Node.js.
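The interface looks the same as in the browser; the sketch below uses the Fetch API shape that node-fetch provides (Node 18+ also ships a compatible global `fetch`):

```javascript
// Fetch API sketch: node-fetch mirrors this interface in Node.js.
async function getJson(url) {
  const res = await fetch(url);
  if (!res.ok) {
    // fetch does NOT reject on HTTP error statuses; check res.ok yourself.
    throw new Error(`HTTP ${res.status}`);
  }
  return res.json();
}
```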
puppeteer:
Puppeteer provides a high-level API that abstracts away the complexities of browser automation, allowing developers to focus on writing scripts without dealing with low-level browser details.
request:
Request has a simple API for making HTTP requests, but it is less feature-rich than newer libraries. It is easy to use for basic tasks but lacks modern features.
scrapingbee:
ScrapingBee offers a straightforward API for web scraping, allowing developers to focus on data extraction without managing infrastructure or handling proxies.
selenium-webdriver:
Selenium WebDriver provides a comprehensive API for browser automation, but it can be more complex to set up and use compared to other libraries.
Performance
axios:
Axios is optimized for performance with features like request cancellation and automatic JSON data transformation, ensuring efficient data handling.
cheerio:
Cheerio is lightweight and fast, making it suitable for parsing large HTML documents quickly without the overhead of a full browser environment.
got:
Got is designed for performance, with support for streams and efficient handling of large payloads, making it suitable for high-throughput applications.
node-fetch:
Node-fetch is lightweight and performs well for making simple HTTP requests, but it may not have the advanced features of other libraries.
puppeteer:
Puppeteer can be resource-intensive due to its headless browser nature, but it excels in tasks that require rendering and interaction with web pages.
request:
Request is not as performant as newer libraries and may struggle with large payloads or high concurrency due to its older architecture.
scrapingbee:
ScrapingBee is optimized for web scraping tasks, handling proxies and rendering efficiently, which can improve performance compared to self-hosted solutions.
selenium-webdriver:
Selenium WebDriver can be slower due to its reliance on browser interactions, but it is powerful for tasks that require full browser capabilities.
Community Support
axios:
Axios has a large and active community, with extensive documentation and numerous tutorials available, making it easy to find help and resources.
cheerio:
Cheerio is widely used in the web scraping community, with good documentation and community support, although its community is not as extensive as those of larger libraries.
got:
Got has a growing community and is well-documented, providing examples and support for developers looking to implement advanced HTTP features.
node-fetch:
Node-fetch is popular among developers familiar with the Fetch API, and it has a supportive community, although it is not as large as Axios's.
puppeteer:
Puppeteer has a strong community and is actively maintained, with plenty of resources and examples available for browser automation tasks.
request:
Request had a large community, but since it is deprecated, support is dwindling, and developers are encouraged to migrate to alternatives.
scrapingbee:
ScrapingBee has a dedicated support team and documentation, but its community is smaller compared to open-source libraries.
selenium-webdriver:
Selenium WebDriver has a vast community and extensive documentation, making it a reliable choice for browser automation and testing.
Flexibility
axios:
Axios is flexible and can be easily integrated into various frameworks and libraries, making it suitable for a wide range of applications.
cheerio:
Cheerio is designed specifically for server-side DOM manipulation, providing flexibility in how developers can extract and manipulate data from HTML.
got:
Got offers a high degree of flexibility with features like hooks and middleware, allowing developers to customize request handling extensively.
node-fetch:
Node-fetch is straightforward and flexible, allowing developers to use it in various scenarios without much overhead.
puppeteer:
Puppeteer provides flexibility in automating browser tasks, allowing developers to script complex interactions and workflows with ease.
request:
Request is flexible for basic HTTP requests but lacks the advanced features and customization options of newer libraries.
scrapingbee:
ScrapingBee offers flexibility in how developers can scrape data, with options for handling proxies and rendering, but it is a managed service with some limitations.
selenium-webdriver:
Selenium WebDriver is highly flexible and supports multiple programming languages and browsers, making it suitable for a wide range of automation tasks.
Error Handling
axios:
Axios provides built-in error handling for HTTP requests, allowing developers to easily catch and manage errors in a consistent manner.
cheerio:
Cheerio does not handle errors related to HTTP requests, as it is focused on DOM manipulation, so developers must manage request errors separately.
got:
Got has robust error handling features, including automatic retries and detailed error messages, making it easier to manage failed requests.
node-fetch:
Node-fetch provides basic error handling for network errors, but developers need to implement additional logic for handling HTTP response errors.
puppeteer:
Puppeteer includes error handling for browser automation tasks, allowing developers to catch exceptions and manage timeouts effectively.
request:
Request has basic error handling capabilities, but it is less sophisticated than newer libraries, making complex error scenarios harder to manage.
scrapingbee:
ScrapingBee handles many common scraping errors internally, providing a simpler experience for developers, but custom error handling may be limited.
selenium-webdriver:
Selenium WebDriver provides comprehensive error handling for browser interactions, allowing developers to catch and manage exceptions during automation.
How to Choose: axios vs cheerio vs got vs node-fetch vs puppeteer vs request vs scrapingbee vs selenium-webdriver
axios:
Choose Axios for its simplicity and ease of use when making HTTP requests. It supports promises and is widely adopted in the community, making it a great choice for projects that require straightforward API interactions.
cheerio:
Select Cheerio if you need to parse and manipulate HTML on the server side. It provides a jQuery-like syntax for traversing and manipulating the DOM, making it ideal for web scraping tasks where you need to extract data from HTML documents.
got:
Opt for Got if you require a powerful and flexible HTTP request library with built-in support for retries, streams, and advanced features like hooks. It is suitable for more complex HTTP interactions and offers a modern API.
node-fetch:
Use Node-fetch for a lightweight and simple implementation of the Fetch API in Node.js. It is a good choice if you want a familiar API similar to the browser's Fetch API for making HTTP requests.
puppeteer:
Choose Puppeteer when you need to control a headless browser for tasks like web scraping, automated testing, or generating screenshots and PDFs. It provides a high-level API to interact with Chrome or Chromium, allowing for complex interactions with web pages.
request:
Select Request if you are working on legacy projects or require a simple way to make HTTP requests. However, note that it is deprecated, and alternatives like Axios or Got are recommended for new projects.
scrapingbee:
Opt for ScrapingBee if you want a managed web scraping service that handles proxies, headless browsers, and CAPTCHA solving. It is suitable for developers who want to focus on data extraction without worrying about infrastructure.
selenium-webdriver:
Choose Selenium WebDriver for comprehensive browser automation and testing. It supports multiple browsers and programming languages, making it ideal for complex web scraping tasks that require user interactions.
Popular Comparisons
Similar Npm Packages to axios
axios is a popular promise-based HTTP client for both the browser and Node.js. It simplifies making HTTP requests and handling responses, providing a clean and intuitive API. Axios supports features such as request and response interception, automatic JSON data transformation, and the ability to cancel requests. It is widely used in web applications for interacting with RESTful APIs and is known for its ease of use and flexibility. However, there are several alternatives to axios that developers may consider based on their specific needs:
node-fetch is a lightweight module that brings window.fetch to Node.js. It is a simple and minimalistic implementation of the Fetch API, allowing developers to make HTTP requests using a familiar API. Node-fetch is particularly useful for server-side applications or when working with APIs in a Node.js environment. It supports promises and is a great choice for those who prefer the Fetch API's syntax and behavior over traditional XMLHttpRequest or other libraries.
request was once one of the most popular HTTP request libraries for Node.js. It provided a simple and flexible API for making HTTP requests and handling responses. However, it has been deprecated and is no longer actively maintained. While it may still be found in legacy projects, developers are encouraged to use more modern alternatives like axios or node-fetch for new applications.
superagent is another powerful HTTP request library for Node.js and browsers. It offers a flexible and expressive API for making HTTP requests, supporting features like chaining, automatic content type handling, and file uploads. Superagent is particularly useful for developers who need a more feature-rich alternative to axios or want to work with a library that provides a more expressive syntax.
cheerio is a fast, flexible, and lean implementation of core jQuery designed specifically for the server. It allows developers to manipulate and traverse HTML and XML documents easily, making it a popular choice for web scraping and server-side DOM manipulation in Node.js applications. Cheerio provides a familiar jQuery-like syntax, enabling developers to work with the document structure in a straightforward and efficient manner.
While Cheerio is a powerful tool for parsing and manipulating HTML, there are several alternatives in the ecosystem that offer different features and capabilities. Here are a few notable alternatives:
htmlparser2 is a forgiving HTML/XML parser that is designed to be fast and efficient. It can handle malformed HTML and provides a simple API for parsing and manipulating documents. Unlike Cheerio, which focuses on jQuery-like manipulation, htmlparser2 is more low-level, allowing developers to work directly with the parsing process. This makes it a great choice for projects that require fine-grained control over parsing and processing HTML or XML data.
jsdom is a powerful library that simulates a web browser's environment in Node.js, allowing developers to manipulate the DOM as if they were working in a browser. It provides a complete implementation of the DOM and HTML standards, making it suitable for testing and server-side rendering of web applications. While Cheerio is lightweight and focused on parsing, jsdom offers a more comprehensive solution for developers who need a full DOM environment for their applications.
parse5 is an HTML parser that adheres closely to the HTML5 specification. It is designed to be fast and efficient, providing a simple API for parsing HTML documents. parse5 is particularly useful for projects that require strict adherence to HTML5 standards and need to handle complex HTML structures. While it does not provide the same manipulation capabilities as Cheerio, it excels in parsing accuracy and performance.
got is a popular HTTP request library for Node.js, designed to simplify the process of making HTTP requests while providing a rich set of features. It supports promises and async/await syntax, making it easy to work with asynchronous code. got is known for its performance, ease of use, and built-in support for advanced features like retries, timeouts, and streaming. It is an excellent choice for developers looking to make HTTP requests in a Node.js environment.
While got is a powerful option, there are several alternatives available that also provide robust HTTP request capabilities. Here are a few notable ones:
axios is a widely-used promise-based HTTP client for both the browser and Node.js. It offers a simple API and supports features such as interceptors, request cancellation, and automatic JSON data transformation. axios is particularly popular for frontend applications but can also be used effectively in server-side code. If you need a versatile HTTP client that works seamlessly in both environments, axios is a great choice.
node-fetch is a lightweight module that brings the fetch API to Node.js. It is designed to be a minimalistic and straightforward implementation of the fetch API, making it easy for developers familiar with the browser's fetch to make HTTP requests in a Node.js environment. If you prefer a simple and native-like API for making requests, node-fetch is an excellent alternative.
request was once a popular HTTP request library for Node.js, known for its simplicity and ease of use. However, it has been deprecated in favor of more modern alternatives like got and axios. While it is still available, it is recommended to use more actively maintained libraries for new projects.
node-fetch is a lightweight module that brings window.fetch to Node.js, allowing developers to make HTTP requests in a simple and familiar way. It is particularly useful for server-side applications that need to interact with APIs or fetch resources over the network. While node-fetch is a popular choice for making HTTP requests in Node.js, there are several alternatives that also provide robust solutions. Here are a few noteworthy alternatives:
axios is a widely-used promise-based HTTP client for both the browser and Node.js. It offers a rich feature set, including automatic JSON data transformation, request and response interceptors, and the ability to cancel requests. Axios is known for its simplicity and ease of use, making it a go-to choice for many developers when it comes to making HTTP requests. If you need a powerful and flexible HTTP client that works seamlessly in both environments, axios is an excellent option.
got is another popular HTTP request library for Node.js that focuses on simplicity and performance. It provides a rich set of features, including retries, timeouts, and streaming support. Got is designed to be a more modern and user-friendly alternative to the built-in http module in Node.js, making it a great choice for developers who want a straightforward and efficient way to handle HTTP requests. If you are building a Node.js application and need a feature-rich HTTP client, got is worth considering.
isomorphic-fetch is a library that provides a universal fetch API for both the browser and Node.js. It is built on top of the native fetch API and aims to provide a consistent interface across different environments. This makes it a suitable choice for applications that need to run both on the server and the client without changing the code for making HTTP requests. If you are looking for a solution that works seamlessly in both environments, isomorphic-fetch can be a good fit.
puppeteer is a popular Node.js library that provides a high-level API for controlling headless Chrome or Chromium browsers. It is widely used for web scraping, automated testing, and generating screenshots or PDFs of web pages. Puppeteer allows developers to interact with web pages programmatically, making it an essential tool for tasks that require browser automation. While Puppeteer is a powerful solution, there are several alternatives that also offer browser automation capabilities. Here are a few notable options:
nightmare is a high-level browser automation library for Node.js that is designed for simplicity and ease of use. It provides a straightforward API for performing tasks such as clicking buttons, filling out forms, and navigating web pages. Nightmare is particularly well-suited for developers who need a quick and easy way to automate browser interactions without the complexity of more extensive frameworks. However, it may not be as feature-rich or performant as Puppeteer for more demanding tasks.
playwright is a newer library developed by Microsoft that offers a powerful and flexible API for browser automation. It supports multiple browsers, including Chromium, Firefox, and WebKit, allowing developers to write cross-browser tests and automation scripts. Playwright provides advanced features such as auto-waiting, intercepting network requests, and handling multiple pages or contexts, making it a robust choice for complex automation scenarios. If you require cross-browser support and advanced capabilities, Playwright is an excellent alternative to Puppeteer.
selenium-webdriver is part of the Selenium project, which has been a long-standing solution for browser automation. It provides a comprehensive API for controlling various browsers and is widely used for automated testing of web applications. Selenium supports multiple programming languages and browser drivers, making it a versatile choice for teams working in diverse environments. However, it may have a steeper learning curve compared to Puppeteer and other modern libraries, and its setup can be more complex.
request is a popular HTTP client library for Node.js that simplifies making HTTP requests. It provides a straightforward API for sending requests and handling responses, making it a go-to choice for many developers. However, as the JavaScript ecosystem evolves, several alternatives have emerged that offer similar or enhanced functionality. Here are a few notable alternatives:
axios is a promise-based HTTP client for both the browser and Node.js. It has gained popularity due to its ease of use and powerful features, such as interceptors, automatic JSON data transformation, and the ability to cancel requests. Axios is particularly well-suited for applications that require a robust and flexible HTTP client, making it a great alternative to request for modern web development.
got is a lightweight and powerful HTTP request library for Node.js. It is designed to be simple and efficient, providing a rich set of features such as retries, timeouts, and streaming support. Got is particularly well-suited for applications that require high performance and flexibility in handling HTTP requests, making it a strong contender against request.
node-fetch is a lightweight module that brings window.fetch to Node.js, allowing developers to use the Fetch API in their server-side applications. It is designed to be simple and minimalistic, making it a great choice for those who prefer the Fetch API's syntax and behavior. Node-fetch is an excellent alternative for developers looking for a modern and straightforward way to make HTTP requests in Node.js.
scrapingbee is a web scraping API that simplifies the process of extracting data from websites. It handles the complexities of web scraping, such as managing proxies, headless browsers, and CAPTCHAs, allowing developers to focus on data extraction without worrying about the underlying infrastructure. ScrapingBee is particularly useful for those who want to scrape data quickly and efficiently without managing their own scraping servers.
While ScrapingBee offers a robust solution for web scraping, there are several alternatives that developers can consider:
axios is a promise-based HTTP client for the browser and Node.js. It is widely used for making HTTP requests and can be easily integrated with web scraping tasks. While axios does not provide scraping capabilities out of the box, it can be used in conjunction with other libraries to fetch HTML content from web pages, which can then be parsed for data extraction.
cheerio is a fast, flexible, and lean implementation of jQuery for the server. It allows developers to parse and manipulate HTML and XML documents easily. Cheerio is often used alongside axios or node-fetch to scrape and extract data from web pages after fetching the HTML content.
got is a powerful and flexible HTTP request library for Node.js. It provides a simple API for making HTTP requests and supports features like retries, streams, and hooks. Like axios, got can be used for fetching web pages, which can then be processed with libraries like cheerio for data extraction.
node-fetch is a lightweight module that brings window.fetch to Node.js. It is a simple way to make HTTP requests and can be used in web scraping tasks to retrieve HTML content from web pages. Once the content is fetched, developers can utilize cheerio or other parsing libraries to extract the required data.
puppeteer is a Node library that provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. Puppeteer is particularly useful for scraping dynamic websites that rely on JavaScript for rendering content. It allows developers to automate browser actions, making it easier to extract data from complex web pages.
request was a popular HTTP request library for Node.js, but it has been deprecated. While it is still used in some projects, developers are encouraged to use alternatives like axios or got for making HTTP requests in web scraping tasks.
selenium-webdriver is a powerful tool for automating web browsers. It can be used for web scraping, especially for sites that require interaction or are heavily reliant on JavaScript. Selenium allows developers to simulate user actions in a browser, making it suitable for scraping dynamic content.
selenium-webdriver is a popular library for automating web browsers. It provides a robust framework for writing tests and automating user interactions with web applications. Selenium WebDriver supports multiple programming languages and allows developers to control browsers programmatically, making it an essential tool for end-to-end testing. While Selenium WebDriver is widely used, there are several alternatives that offer different features and capabilities. Here are a few notable options:
nightwatch is an end-to-end testing framework that uses the Selenium WebDriver API. It is designed for easy setup and execution of tests, providing a simple syntax for writing tests in JavaScript. Nightwatch comes with built-in support for running tests in various browsers and can be easily integrated with CI/CD pipelines. If you are looking for a straightforward testing framework that leverages Selenium while providing additional features like automatic waiting and easy configuration, Nightwatch is a great choice.
puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium browsers. Unlike Selenium, which supports multiple browsers, Puppeteer is specifically designed for Chrome and offers powerful features such as taking screenshots, generating PDFs, and scraping web pages. If your primary focus is on testing or automating tasks in Chrome, Puppeteer is an excellent option due to its speed and ease of use.
webdriverio is a custom implementation of Selenium's WebDriver API that is designed to be simple and easy to use. It supports both WebDriver and DevTools protocols, allowing for flexible testing across different browsers. WebdriverIO offers a rich set of features, including built-in support for various testing frameworks, plugins, and a powerful assertion library. If you want a versatile testing framework that can work with both Selenium and modern browser automation protocols, WebdriverIO is a strong candidate.
For some bundlers and some ES6 linters you may need to do the following:
import { default as axios } from "axios";
If importing the module fails in a custom or legacy environment, you can try importing the package build directly:
The available instance methods are listed below. The specified config will be merged with the instance config.
axios#request(config)
axios#get(url[, config])
axios#delete(url[, config])
axios#head(url[, config])
axios#options(url[, config])
axios#post(url[, data[, config]])
axios#put(url[, data[, config]])
axios#patch(url[, data[, config]])
axios#getUri([config])
Request Config
These are the available config options for making requests. Only the url is required. Requests will default to GET if method is not specified.
{
  // `url` is the server URL that will be used for the request
  url: '/user',

  // `method` is the request method to be used when making the request
  method: 'get', // default

  // `baseURL` will be prepended to `url` unless `url` is absolute and the option `allowAbsoluteUrls` is set to true.
  // It can be convenient to set `baseURL` for an instance of axios to pass relative URLs
  // to the methods of that instance.
  baseURL: 'https://some-domain.com/api/',

  // `allowAbsoluteUrls` determines whether or not absolute URLs will override a configured `baseUrl`.
  // When set to true (default), absolute values for `url` will override `baseUrl`.
  // When set to false, absolute values for `url` will always be prepended by `baseUrl`.
  allowAbsoluteUrls: true,

  // `transformRequest` allows changes to the request data before it is sent to the server
  // This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE'
  // The last function in the array must return a string or an instance of Buffer, ArrayBuffer,
  // FormData or Stream
  // You may modify the headers object.
  transformRequest: [function (data, headers) {
    // Do whatever you want to transform the data
    return data;
  }],

  // `transformResponse` allows changes to the response data to be made before
  // it is passed to then/catch
  transformResponse: [function (data) {
    // Do whatever you want to transform the data
    return data;
  }],

  // `headers` are custom headers to be sent
  headers: {'X-Requested-With': 'XMLHttpRequest'},

  // `params` are the URL parameters to be sent with the request
  // Must be a plain object or a URLSearchParams object
  params: {
    ID: 12345
  },

  // `paramsSerializer` is an optional config that allows you to customize serializing `params`.
  paramsSerializer: {
    // Custom encoder function which sends key/value pairs in an iterative fashion.
    encode?: (param: string): string => { /* Do custom operations here and return transformed string */ },

    // Custom serializer function for the entire parameter. Allows the user to mimic pre 1.x behaviour.
    serialize?: (params: Record<string, any>, options?: ParamsSerializerOptions ),

    // Configuration for formatting array indexes in the params.
    indexes: false // Three available options: (1) indexes: null (leads to no brackets), (2) (default) indexes: false (leads to empty brackets), (3) indexes: true (leads to brackets with indexes).
  },

  // `data` is the data to be sent as the request body
  // Only applicable for request methods 'PUT', 'POST', 'DELETE', and 'PATCH'
  // When no `transformRequest` is set, it must be of one of the following types:
  // - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams
  // - Browser only: FormData, File, Blob
  // - Node only: Stream, Buffer, FormData (form-data package)
  data: {
    firstName: 'Fred'
  },

  // syntax alternative to send data into the body
  // method post
  // only the value is sent, not the key
  data: 'Country=Brasil&City=Belo Horizonte',

  // `timeout` specifies the number of milliseconds before the request times out.
  // If the request takes longer than `timeout`, the request will be aborted.
  timeout: 1000, // default is `0` (no timeout)

  // `withCredentials` indicates whether or not cross-site Access-Control requests
  // should be made using credentials
  withCredentials: false, // default

  // `adapter` allows custom handling of requests which makes testing easier.
  // Return a promise and supply a valid response (see lib/adapters/README.md)
  adapter: function (config) {
    /* ... */
  },
  // Also, you can set the name of the built-in adapter, or provide an array with their names
  // to choose the first available in the environment
  adapter: 'xhr', // 'fetch' | 'http' | ['xhr', 'http', 'fetch']

  // `auth` indicates that HTTP Basic auth should be used, and supplies credentials.
  // This will set an `Authorization` header, overwriting any existing
  // `Authorization` custom headers you have set using `headers`.
  // Please note that only HTTP Basic auth is configurable through this parameter.
  // For Bearer tokens and such, use `Authorization` custom headers instead.
  auth: {
    username: 'janedoe',
    password: 's00pers3cret'
  },

  // `responseType` indicates the type of data that the server will respond with
  // options are: 'arraybuffer', 'document', 'json', 'text', 'stream'
  // browser only: 'blob'
  responseType: 'json', // default

  // `responseEncoding` indicates encoding to use for decoding responses (Node.js only)
  // Note: Ignored for `responseType` of 'stream' or client-side requests
  // options are: 'ascii', 'ASCII', 'ansi', 'ANSI', 'binary', 'BINARY', 'base64', 'BASE64', 'base64url',
  // 'BASE64URL', 'hex', 'HEX', 'latin1', 'LATIN1', 'ucs-2', 'UCS-2', 'ucs2', 'UCS2', 'utf-8', 'UTF-8',
  // 'utf8', 'UTF8', 'utf16le', 'UTF16LE'
  responseEncoding: 'utf8', // default

  // `xsrfCookieName` is the name of the cookie to use as a value for the xsrf token
  xsrfCookieName: 'XSRF-TOKEN', // default

  // `xsrfHeaderName` is the name of the http header that carries the xsrf token value
  xsrfHeaderName: 'X-XSRF-TOKEN', // default

  // `undefined` (default) - set XSRF header only for the same origin requests
  withXSRFToken: boolean | undefined | ((config: InternalAxiosRequestConfig) => boolean | undefined),

  // `onUploadProgress` allows handling of progress events for uploads
  // browser & node.js
  onUploadProgress: function ({loaded, total, progress, bytes, estimated, rate, upload = true}) {
    // Do whatever you want with the Axios progress event
  },

  // `onDownloadProgress` allows handling of progress events for downloads
  // browser & node.js
  onDownloadProgress: function ({loaded, total, progress, bytes, estimated, rate, download = true}) {
    // Do whatever you want with the Axios progress event
  },

  // `maxContentLength` defines the max size of the http response content in bytes allowed in node.js
  maxContentLength: 2000,

  // `maxBodyLength` (Node only option) defines the max size of the http request content in bytes allowed
  maxBodyLength: 2000,

  // `validateStatus` defines whether to resolve or reject the promise for a given
  // HTTP response status code. If `validateStatus` returns `true` (or is set to `null`
  // or `undefined`), the promise will be resolved; otherwise, the promise will be
  // rejected.
  validateStatus: function (status) {
    return status >= 200 && status < 300; // default
  },

  // `maxRedirects` defines the maximum number of redirects to follow in node.js.
  // If set to 0, no redirects will be followed.
  maxRedirects: 21, // default

  // `beforeRedirect` defines a function that will be called before redirect.
  // Use this to adjust the request options upon redirecting,
  // to inspect the latest response headers,
  // or to cancel the request by throwing an error
  // If maxRedirects is set to 0, `beforeRedirect` is not used.
  beforeRedirect: (options, { headers }) => {
    if (options.hostname === "example.com") {
      options.auth = "user:password";
    }
  },

  // `socketPath` defines a UNIX Socket to be used in node.js.
  // e.g. '/var/run/docker.sock' to send requests to the docker daemon.
  // Only either `socketPath` or `proxy` can be specified.
  // If both are specified, `socketPath` is used.
  socketPath: null, // default

  // `transport` determines the transport method that will be used to make the request.
  // If defined, it will be used. Otherwise, if `maxRedirects` is 0,
  // the default `http` or `https` library will be used, depending on the protocol specified in `protocol`.
  // Otherwise, the `httpFollow` or `httpsFollow` library will be used, again depending on the protocol,
  // which can handle redirects.
  transport: undefined, // default

  // `httpAgent` and `httpsAgent` define a custom agent to be used when performing http
  // and https requests, respectively, in node.js. This allows options to be added like
  // `keepAlive` that are not enabled by default before Node.js v19.0.0. After Node.js
  // v19.0.0, you no longer need to customize the agent to enable `keepAlive` because
  // `http.globalAgent` has `keepAlive` enabled by default.
httpAgent: new http.Agent({ keepAlive: true }),
httpsAgent: new https.Agent({ keepAlive: true }),
// `proxy` defines the hostname, port, and protocol of the proxy server.
// You can also define your proxy using the conventional `http_proxy` and
// `https_proxy` environment variables. If you are using environment variables
// for your proxy configuration, you can also define a `no_proxy` environment
// variable as a comma-separated list of domains that should not be proxied.
// Use `false` to disable proxies, ignoring environment variables.
// `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and
// supplies credentials.
// This will set a `Proxy-Authorization` header, overwriting any existing
// `Proxy-Authorization` custom headers you have set using `headers`.
// If the proxy server uses HTTPS, then you must set the protocol to `https`.
proxy: {
protocol: 'https',
host: '127.0.0.1',
// hostname: '127.0.0.1' // Takes precedence over 'host' if both are defined
port: 9000,
auth: {
username: 'mikeymike',
password: 'rapunz3l'
}
},
// `cancelToken` specifies a cancel token that can be used to cancel the request
// (see Cancellation section below for details)
cancelToken: new CancelToken(function (cancel) {
}),
// an alternative way to cancel Axios requests using AbortController
signal: new AbortController().signal,
// `decompress` indicates whether or not the response body should be decompressed
// automatically. If set to `true` will also remove the 'content-encoding' header
// from the responses objects of all decompressed responses
// - Node only (XHR cannot turn off decompression)
decompress: true, // default
// `insecureHTTPParser` boolean.
// Indicates whether to use an insecure HTTP parser that accepts invalid HTTP headers.
// This may allow interoperability with non-conformant HTTP implementations.
// Using the insecure parser should be avoided.
// see options https://nodejs.org/dist/latest-v12.x/docs/api/http.html#http_http_request_url_options_callback
// see also https://nodejs.org/en/blog/vulnerability/february-2020-security-releases/#strict-http-header-parsing-none
insecureHTTPParser: undefined, // default
// transitional options for backward compatibility that may be removed in the newer versions
transitional: {
// silent JSON parsing mode
// `true` - ignore JSON parsing errors and set response.data to null if parsing failed (old behaviour)
// `false` - throw SyntaxError if JSON parsing failed (Note: responseType must be set to 'json')
silentJSONParsing: true, // default value for the current Axios version
// try to parse the response string as JSON even if `responseType` is not 'json'
forcedJSONParsing: true,
// throw ETIMEDOUT error instead of generic ECONNABORTED on request timeouts
clarifyTimeoutError: false,
// use the legacy interceptor request/response ordering
legacyInterceptorReqResOrdering: true, // default
},
env: {
// The FormData class to be used to automatically serialize the payload into a FormData object
FormData: window?.FormData || global?.FormData
},
formSerializer: {
visitor: (value, key, path, helpers) => {}, // custom visitor function to serialize form values
dots: boolean, // use dots instead of brackets format
metaTokens: boolean, // keep special endings like {} in the parameter key
indexes: boolean, // array index format: null - no brackets, false - empty brackets, true - brackets with indexes
},
// http adapter only (node.js)
maxRate: [
100 * 1024, // 100KB/s upload limit,
100 * 1024 // 100KB/s download limit
]
}
Response Schema
The response to a request contains the following information.
{
// `data` is the response that was provided by the server
data: {},
// `status` is the HTTP status code from the server response
status: 200,
// `statusText` is the HTTP status message from the server response
statusText: 'OK',
// `headers` the HTTP headers that the server responded with
// All header names are lowercase and can be accessed using the bracket notation.
// Example: `response.headers['content-type']`
headers: {},
// `config` is the config that was provided to `axios` for the request
config: {},
// `request` is the request that generated this response
// It is the last ClientRequest instance in node.js (in redirects)
// and an XMLHttpRequest instance in the browser
request: {}
}
When using then, you will receive the response as follows:
When using catch, or passing a rejection callback as second parameter of then, the response will be available through the error object as explained in the Handling Errors section.
Config Defaults
You can specify config defaults that will be applied to every request.
Global axios defaults
axios.defaults.baseURL = "https://api.example.com";
// Important: If axios is used with multiple domains, the AUTH_TOKEN will be sent to all of them.
// See below for an example using Custom instance defaults instead.
axios.defaults.headers.common["Authorization"] = AUTH_TOKEN;
axios.defaults.headers.post["Content-Type"] =
"application/x-www-form-urlencoded";
Custom instance defaults
// Set config defaults when creating the instance
const instance = axios.create({
baseURL: "https://api.example.com",
});
// Alter defaults after instance has been created
instance.defaults.headers.common["Authorization"] = AUTH_TOKEN;
Config order of precedence
Config will be merged with an order of precedence. The order is library defaults found in lib/defaults/index.js, then defaults property of the instance, and finally config argument for the request. The latter will take precedence over the former. Here's an example.
// Create an instance using the config defaults provided by the library
// At this point the timeout config value is `0` as is the default for the library
const instance = axios.create();
// Override timeout default for the library
// Now all requests using this instance will wait 2.5 seconds before timing out
instance.defaults.timeout = 2500;
// Override timeout for this request as it's known to take a long time
instance.get("/longRequest", {
timeout: 5000,
});
Interceptors
You can intercept requests or responses before they are handled by then or catch
(that is, before methods like .get() or .post() resolve their promises, or before an await resumes).
const instance = axios.create();
// Add a request interceptor
instance.interceptors.request.use(
function (config) {
// Do something before the request is sent
return config;
},
function (error) {
// Do something with the request error
return Promise.reject(error);
},
);
// Add a response interceptor
instance.interceptors.response.use(
function (response) {
// Any status code that lies within the range of 2xx causes this function to trigger
// Do something with response data
return response;
},
function (error) {
// Any status codes that fall outside the range of 2xx cause this function to trigger
// Do something with response error
return Promise.reject(error);
},
);
If you need to remove an interceptor later, you can eject it.
When you add request interceptors, they are presumed to be asynchronous by default. This can cause a delay
in the execution of your axios request when the main thread is blocked (a promise is created under the hood for
the interceptor and your request gets put at the bottom of the call stack). If your request interceptors are synchronous you can add a flag
to the options object that will tell axios to run the code synchronously and avoid any delays in request execution.
axios.interceptors.request.use(
function (config) {
config.headers.test = "I am only a header!";
return config;
},
null,
{ synchronous: true },
);
If you want to execute a particular interceptor based on a runtime check,
you can add a runWhen function to the options object. The request interceptor is skipped whenever
runWhen returns false. The function is called with the config
object (and you can bind your own arguments to it as well). This can be handy when you have an
asynchronous request interceptor that only needs to run at certain times.
function onGetCall(config) {
return config.method === "get";
}
axios.interceptors.request.use(
function (config) {
config.headers.test = "special get headers";
return config;
},
null,
{ runWhen: onGetCall },
);
Note: The options parameter (with the synchronous and runWhen properties) is currently only supported for request interceptors.
Interceptor Execution Order
Important: Interceptors have different execution orders depending on their type!
Request interceptors are executed in reverse order (LIFO - Last In, First Out). This means the last interceptor added is executed first.
Response interceptors are executed in the order they were added (FIFO - First In, First Out). This means the first interceptor added is executed first.
Error Types
There are many different axios error messages; they provide basic information about the specifics of the error and where debugging opportunities may lie.
The general structure of axios errors is as follows:
message: A quick summary of the error and the status it failed with.
name: Where the error originated. For axios, it will always be 'AxiosError'.
stack: The stack trace of the error.
config: The axios config object for the request, including any instance configuration defined by the user when the request was made.
code: An axios-identified error code. The table below lists the specific internal axios error codes.
status: The HTTP response status code. See here for common HTTP response status code meanings.
Below is a list of the axios-identified error codes:
ERR_BAD_OPTION_VALUE: Invalid value provided in the axios configuration.
ERR_BAD_OPTION: Invalid option provided in the axios configuration.
ERR_NOT_SUPPORT: Feature or method not supported in the current axios environment.
ERR_DEPRECATED: Deprecated feature or method used in axios.
ERR_INVALID_URL: Invalid URL provided for the axios request.
ECONNABORTED: Typically indicates that the request timed out (unless transitional.clarifyTimeoutError is set) or was aborted by the browser or one of its plugins.
ERR_CANCELED: The request was canceled explicitly by the user using an AbortSignal (or a CancelToken).
ETIMEDOUT: Request timed out after exceeding the configured axios time limit. transitional.clarifyTimeoutError must be set to true; otherwise a generic ECONNABORTED error is thrown instead.
ERR_NETWORK: Network-related issue. In the browser, this error can also be caused by a CORS or Mixed Content policy violation. For security reasons the browser does not let JS code see the real cause, so check the console.
ERR_FR_TOO_MANY_REDIRECTS: Request was redirected too many times, exceeding the max redirects specified in the axios configuration.
ERR_BAD_RESPONSE: Response cannot be parsed properly or is in an unexpected format. Usually associated with a 5xx status code.
ERR_BAD_REQUEST: The request has an unexpected format or is missing required parameters. Usually associated with a 4xx status code.
Handling Errors
The default behavior is to reject every response that returns with a status code that falls out of the range of 2xx and treat it as an error.
axios.get("/user/12345").catch(function (error) {
if (error.response) {
// The request was made and the server responded with a status code
// that falls out of the range of 2xx
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
// The request was made but no response was received
// `error.request` is an instance of XMLHttpRequest in the browser and an instance of
// http.ClientRequest in node.js
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error
console.log("Error", error.message);
}
console.log(error.config);
});
Using the validateStatus config option, you can override the default condition (status >= 200 && status < 300) and define HTTP code(s) that should throw an error.
axios.get("/user/12345", {
validateStatus: function (status) {
return status < 500; // Resolve only if the status code is less than 500
},
});
Using toJSON you get an object with more information about the HTTP error.
Cancellation
The CancelToken API described below is deprecated since v0.22.0 and shouldn't be used in new projects; prefer AbortController signals.
You can create a cancel token using the CancelToken.source factory as shown below:
const CancelToken = axios.CancelToken;
const source = CancelToken.source();
axios
.get("/user/12345", {
cancelToken: source.token,
})
.catch(function (thrown) {
if (axios.isCancel(thrown)) {
console.log("Request canceled", thrown.message);
} else {
// handle error
}
});
axios.post(
"/user/12345",
{
name: "new name",
},
{
cancelToken: source.token,
},
);
// cancel the request (the message parameter is optional)
source.cancel("Operation canceled by the user.");
You can also create a cancel token by passing an executor function to the CancelToken constructor:
const CancelToken = axios.CancelToken;
let cancel;
axios.get("/user/12345", {
cancelToken: new CancelToken(function executor(c) {
// An executor function receives a cancel function as a parameter
cancel = c;
}),
});
// cancel the request
cancel();
Note: you can cancel several requests with the same cancel token/abort controller.
If a cancellation token is already cancelled at the moment of starting an Axios request, then the request is cancelled immediately, without any attempts to make a real request.
During the transition period, you can use both cancellation APIs, even for the same request:
Using application/x-www-form-urlencoded format
URLSearchParams
By default, axios serializes JavaScript objects to JSON. To send data in the application/x-www-form-urlencoded format instead, you can use the URLSearchParams API, which is supported in the vast majority of browsers, and Node starting with v10 (released in 2018).
If your backend body parser (like body-parser for Express.js) supports decoding nested objects, you will get the same object on the server side automatically.
const app = express();
app.use(bodyParser.urlencoded({ extended: true })); // support encoded bodies
app.post("/", function (req, res, next) {
// echo body as JSON
res.send(JSON.stringify(req.body));
});
server = app.listen(3000);
Using multipart/form-data format
FormData
To send data as multipart/form-data, pass a FormData instance as the payload.
Setting the Content-Type header is not required, as Axios guesses it based on the payload type.
const formData = new FormData();
formData.append("foo", "bar");
axios.post("https://httpbin.org/post", formData);
In node.js, you can use the form-data library as follows:
const FormData = require("form-data");
const form = new FormData();
form.append("my_field", "my value");
form.append("my_buffer", Buffer.alloc(10));
form.append("my_file", fs.createReadStream("/foo/bar.jpg"));
axios.post("https://example.com", form);
🆕 Automatic serialization to FormData
Starting from v0.27.0, Axios supports automatic object serialization to a FormData object if the request Content-Type
header is set to multipart/form-data.
The following request will submit the data in a FormData format (Browser & Node.js):
Axios FormData serializer supports some special endings to perform the following operations:
{} - serialize the value with JSON.stringify
[] - unwrap the array-like object as separate fields with the same key
Note: unwrap/expand operation will be used by default on arrays and FileList objects
FormData serializer supports additional options via config.formSerializer: object property to handle rare cases:
visitor: Function - user-defined visitor function that will be called recursively to serialize the data object
to a FormData object by following custom rules.
dots: boolean = false - use dot notation instead of brackets to serialize arrays and objects;
metaTokens: boolean = true - add the special ending (e.g. user{}: '{"name": "John"}') in the FormData key.
The back-end body-parser could potentially use this meta-information to automatically parse the value as JSON.
indexes: null|false|true = false - controls how indexes will be added to unwrapped keys of flat array-like objects.
Axios supports the following shortcut methods: postForm, putForm, patchForm
which are just the corresponding http methods with the Content-Type header preset to multipart/form-data.
Sending Blobs/Files as JSON (base64) is not currently supported.
🆕 Progress capturing
Axios supports both browser and node environments to capture request upload/download progress.
The frequency of progress events is limited to at most 3 per second.
await axios.post(url, data, {
onUploadProgress: function (axiosProgressEvent) {
/*{
loaded: number;
total?: number;
progress?: number; // in range [0..1]
bytes: number; // how many bytes have been transferred since the last trigger (delta)
estimated?: number; // estimated time in seconds
rate?: number; // upload speed in bytes
upload: true; // upload sign
}*/
},
onDownloadProgress: function (axiosProgressEvent) {
/*{
loaded: number;
total?: number;
progress?: number;
bytes: number;
estimated?: number;
rate?: number; // download speed in bytes
download: true; // download sign
}*/
},
});
You can also track stream upload/download progress in node.js:
Note:
Capturing FormData upload progress is not currently supported in node.js environments.
⚠️ Warning
It is recommended to disable redirects by setting maxRedirects: 0 to upload the stream in the node.js environment,
as the follow-redirects package will buffer the entire stream in RAM without following the "backpressure" algorithm.
🆕 Rate limiting
Download and upload rate limits can only be set for the http adapter (node.js):
Axios has its own AxiosHeaders class for manipulating headers through a Map-like API that guarantees case-insensitive handling.
Although HTTP header names are case-insensitive, Axios retains the case of the original header for stylistic reasons
and as a workaround for servers that mistakenly treat header names as case-sensitive.
The old approach of directly manipulating the headers object is still available, but deprecated and not recommended.
Working with headers
An AxiosHeaders instance can contain different types of internal values that control setting and merging logic.
The final headers object with string values is obtained by Axios by calling the toJSON method.
Note: By JSON here we mean an object consisting only of string values intended to be sent over the network.
The header value can be one of the following types:
string - normal string value that will be sent to the server
null - skip header when rendering to JSON
false - skip header when rendering to JSON, additionally indicates that set method must be called with rewrite option set to true
to overwrite this value (Axios uses this internally to allow users to opt out of installing certain headers like User-Agent or Content-Type)
undefined - value is not set
Note: The header value is considered set if it is not equal to undefined.
The headers object is always initialized inside interceptors and transformers:
axios.interceptors.request.use((request: InternalAxiosRequestConfig) => {
request.headers.set("My-header", "value");
request.headers.set({
"My-set-header1": "my-set-value1",
"My-set-header2": "my-set-value2",
});
request.headers.set("User-Agent", false); // disable subsequent setting the header by Axios
request.headers.setContentType("text/plain");
request.headers["My-set-header2"] = "newValue"; // direct access is deprecated
return request;
});
You can iterate over an AxiosHeaders instance using a for...of statement:
const headers = new AxiosHeaders({
foo: "1",
bar: "2",
baz: "3",
});
for (const [header, value] of headers) {
console.log(header, value);
}
// foo 1
// bar 2
// baz 3
AxiosHeaders#get(header, parser?)
Returns the internal value of the header. It can take an extra argument to parse the header's value with RegExp.exec,
a matcher function, or the internal key-value parser.
AxiosHeaders#clear(matcher?)
Returns true if at least one header has been cleared.
AxiosHeaders#normalize(format);
If the headers object was changed directly, it can have duplicates with the same name but in different cases.
This method normalizes the headers object by combining duplicate keys into one.
Axios uses this method internally after calling each interceptor.
Set format to true to normalize header names by capitalizing the first letter of each word (cOntEnt-type => Content-Type).
AxiosHeaders#concat(...targets)
Merges the instance with the targets into a new AxiosHeaders instance. If a target is a string, it will be parsed as raw HTTP headers.
Returns a new AxiosHeaders instance.
AxiosHeaders#toJSON(asStrings?)
toJSON(asStrings?: boolean): RawAxiosHeaders;
Resolve all internal header values into a new null prototype object.
Set asStrings to true to resolve arrays as a string containing all elements, separated by commas.
AxiosHeaders.from(thing)
Returns a new AxiosHeaders instance created from the raw headers passed in,
or simply returns the given object if it is already an AxiosHeaders instance.
Fetch adapter
The fetch adapter was introduced in v1.7.0. By default, it is used when the xhr and http adapters are not available in the build
or not supported by the environment.
To make it the default, it must be selected explicitly:
const { data } = await axios.get(url, {
  adapter: "fetch", // the default priority list is ['xhr', 'http', 'fetch']
});
The adapter supports the same functionality as the xhr adapter, including upload and download progress capturing.
Also, it supports additional response types such as stream and formdata (if supported by the environment).
🔥 Custom fetch
Starting from v1.12.0, you can customize the fetch adapter to use a custom fetch API instead of environment globals.
You can pass a custom fetch function, Request, and Response constructors via env config.
This can be helpful in case of custom environments & app frameworks.
Also, when using a custom fetch, you may need to set custom Request and Response too. If you don't set them, global objects will be used.
If your custom fetch API does not have these objects, and the globals are incompatible with your custom fetch,
you must disable their use inside the fetch adapter by passing null.
Note: Setting Request & Response to null will make it impossible for the fetch adapter to capture the upload & download progress.
Basic example:
import customFetchFunction from "customFetchModule";
const instance = axios.create({
adapter: "fetch",
onDownloadProgress(e) {
console.log("downloadProgress", e);
},
env: {
fetch: customFetchFunction,
Request: null, // undefined -> use the global constructor
Response: null,
},
});
🔥 Using with Tauri
A minimal example of setting up Axios for use in a Tauri app with a platform fetch function that ignores CORS policy for requests.
The SvelteKit framework has a custom implementation of the fetch function for server rendering (so-called load functions), and it also uses relative paths,
which are incompatible with the standard URL API. So, Axios must be configured to use the custom fetch API:
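A minimal sketch of such a setup inside a SvelteKit load function (the baseURL and endpoint are hypothetical):

```javascript
// +page.js (SvelteKit): `fetch` here is SvelteKit's server-aware implementation
export async function load({ fetch }) {
  const instance = axios.create({
    adapter: 'fetch',
    baseURL: 'https://api.example.com', // hypothetical; gives relative paths an absolute base
    env: { fetch },                     // use SvelteKit's fetch instead of the global one
  });
  const { data } = await instance.get('/items');
  return { items: data };
}
```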
In version 1.13.0, experimental HTTP2 support was added to the http adapter.
The httpVersion option is now available to select the protocol version used.
Additional native options for the internal session.request() call can be passed via the http2Options config.
This config also includes the custom sessionTimeout parameter, which defaults to 1000ms.
Semver
Since Axios has reached v1.0.0, we will fully embrace semver as per the spec here.
Promises
axios depends on a native ES6 Promise implementation.
If your environment doesn't support ES6 Promises, you can polyfill them.
TypeScript
axios includes TypeScript definitions and a type guard for axios errors.
let user: User | null = null;
try {
const { data } = await axios.get("/user?ID=12345");
user = data.userDetails;
} catch (error) {
if (axios.isAxiosError(error)) {
handleAxiosError(error);
} else {
handleUnexpectedError(error);
}
}
Because axios dual publishes with an ESM default export and a CJS module.exports, there are some caveats.
The recommended setting is to use "moduleResolution": "node16" (this is implied by "module": "node16"). Note that this requires TypeScript 4.7 or greater.
If you use ESM, your settings should be fine.
If you compile TypeScript to CJS and you can't use "moduleResolution": "node16", you have to enable esModuleInterop.
If you use TypeScript to type-check CJS JavaScript code, your only option is to use "moduleResolution": "node16".
You can also create a custom instance with typed interceptors:
axios is heavily inspired by the $http service provided in AngularJS. Ultimately axios is an effort to provide a standalone $http-like service for use outside of AngularJS.