What's Web Development Monitoring and HTTP Client Libraries?
These libraries serve various purposes in web development, focusing on HTTP requests, metrics collection, tracing, and logging. Axios is a promise-based HTTP client for making requests to APIs, while Datadog Metrics and StatsD Client are used for sending application metrics to monitoring services. OpenTracing provides a standard for distributed tracing, and Prom Client is designed for collecting metrics in Prometheus format. Winston is a versatile logging library that supports multiple transports for logging messages, making it easier to manage application logs effectively.
Stat Detail
| Package | Weekly Downloads | Stars | Size | Open Issues | Last Publish | License |
| --- | --- | --- | --- | --- | --- | --- |
| axios | 59,240,620 | 106,342 | 2.13 MB | 661 | 2 months ago | MIT |
| winston | 13,448,574 | 23,289 | 271 kB | 510 | 3 months ago | MIT |
| opentracing | 4,157,100 | 1,091 | 195 kB | 35 | - | Apache-2.0 |
| prom-client | 3,125,865 | 3,208 | 126 kB | 118 | 8 months ago | Apache-2.0 |
| datadog-metrics | 1,587,985 | 141 | 93.9 kB | 7 | 2 months ago | MIT |
| statsd-client | 40,098 | 171 | - | 0 | 4 years ago | MIT |
Feature Comparison: axios vs winston vs opentracing vs prom-client vs datadog-metrics vs statsd-client
HTTP Request Handling
axios:
Axios simplifies the process of making HTTP requests with its promise-based API. It supports request and response interceptors, allowing you to modify requests before they are sent and responses before they are handled. This makes it easy to handle errors and add authentication tokens automatically.
winston:
Winston does not handle HTTP requests but is a logging library that allows you to log messages from your application. It supports various logging levels and can log to multiple transports, such as console, files, or external services.
opentracing:
OpenTracing is not an HTTP client but provides a framework for tracing requests as they flow through different services. It allows you to instrument your HTTP requests to capture timing and context, which is crucial for understanding performance in distributed systems.
prom-client:
Prom Client is focused on exposing metrics rather than making HTTP requests. It provides an API for defining and collecting metrics that can be scraped by Prometheus, allowing you to monitor your application's performance and health.
datadog-metrics:
Datadog Metrics does not handle HTTP requests directly but is used to send metrics data to the Datadog service. It provides a simple API for recording metrics, which can be sent over HTTP to the Datadog platform for monitoring and analysis.
statsd-client:
Similar to Prom Client, StatsD Client is not for making HTTP requests but for sending metrics to a StatsD server. It provides a simple interface for counting events, timing operations, and recording gauges, which can be aggregated and visualized.
Metrics Collection
axios:
Axios does not collect metrics by itself, but you can integrate it with other libraries to track request performance, such as measuring response times and success rates for API calls.
winston:
Winston does not collect metrics but focuses on logging application events. However, you can log metrics-related information, such as error counts or performance logs, which can be useful for monitoring application behavior.
opentracing:
OpenTracing facilitates metrics collection by allowing you to trace requests across services. It helps in gathering performance metrics related to request latency and service interactions, which can be analyzed to improve system performance.
prom-client:
Prom Client is built for metrics collection, allowing you to define custom metrics and expose them in a format that Prometheus can scrape. It supports various metric types, including counters, gauges, and histograms, enabling comprehensive monitoring (a minimal sketch follows this list).
datadog-metrics:
Datadog Metrics is specifically designed for collecting and sending application metrics to the Datadog service. It can track custom metrics, such as request counts, error rates, and performance indicators, providing insights into application health.
statsd-client:
StatsD Client is designed for metrics collection, enabling you to send various types of metrics to a StatsD server. It aggregates data over time and provides a way to visualize application performance and usage statistics.
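As a rough illustration of the metric types described above, here is a minimal prom-client sketch; the metric name, labels, and handler are hypothetical and only meant to show the general API shape.
const client = require('prom-client');
// Hypothetical counter tracking handled HTTP requests
const httpRequests = new client.Counter({
  name: 'app_http_requests_total',
  help: 'Total number of HTTP requests handled',
  labelNames: ['method', 'status']
});
// Somewhere inside a request handler
httpRequests.inc({ method: 'GET', status: '200' });
// Expose all registered metrics in the text format Prometheus scrapes
async function metricsHandler(req, res) {
  res.setHeader('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
}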
Integration and Ecosystem
axios:
Axios has a rich ecosystem and can be easily integrated with various frameworks like React, Vue, and Angular. It supports interceptors and request cancellation, making it versatile for different use cases in web applications.
winston:
Winston has a flexible architecture that allows it to integrate with various logging transports and formats. It can log to files, databases, and external services, making it adaptable to different logging requirements.
opentracing:
OpenTracing is designed to be vendor-neutral and can integrate with various tracing backends. It provides a consistent API for tracing across different services, making it easier to adopt distributed tracing in microservices architectures.
prom-client:
Prom Client is designed for use with Prometheus and integrates well with the Prometheus ecosystem. It allows you to expose metrics in a format that Prometheus can scrape, making it easy to monitor applications in a cloud-native environment.
datadog-metrics:
Datadog Metrics integrates seamlessly with the Datadog monitoring platform, allowing you to visualize and analyze your application's performance metrics in real-time. It is part of a larger ecosystem that includes logging and tracing capabilities.
statsd-client:
StatsD Client integrates with StatsD servers and can be used with various visualization tools like Grafana. It is lightweight and easy to set up, making it suitable for applications that need quick metrics reporting.
Error Handling
axios:
Axios provides built-in error handling capabilities, allowing you to catch and manage errors from HTTP requests easily. You can define interceptors to handle errors globally or locally, making it easier to implement consistent error handling across your application.
winston:
Winston provides robust error handling for logging, allowing you to log error messages with different severity levels. You can configure it to log errors to different transports, ensuring that critical errors are captured and monitored.
opentracing:
OpenTracing helps in error handling by allowing you to trace errors across services. You can capture error events and their context, which is useful for diagnosing issues in distributed systems and improving overall reliability.
prom-client:
Prom Client does not handle errors but can be used to track error metrics, such as the number of failed requests or application errors. This information can be vital for monitoring application health and performance.
datadog-metrics:
Datadog Metrics does not handle errors directly but allows you to track error metrics, such as error rates and counts, which can be monitored in the Datadog dashboard. This helps you identify and address issues in your application.
statsd-client:
StatsD Client allows you to send error metrics to a StatsD server, which can be visualized and monitored. You can track error rates and other performance-related metrics to identify issues in your application.
Logging Capabilities
axios:
Axios does not provide logging capabilities directly, but you can implement logging for HTTP requests and responses using interceptors to capture and log relevant information, such as request URLs and response statuses.
winston:
Winston excels in logging capabilities, supporting multiple logging levels and formats. It allows you to log messages to various transports, making it easy to manage application logs and monitor application behavior (a short sketch follows this list).
opentracing:
OpenTracing is not a logging library but can be used in conjunction with logging to provide context for logs. By correlating logs with trace data, you can gain deeper insights into application behavior and performance.
prom-client:
Prom Client does not handle logging but focuses on metrics collection. You can use it alongside logging libraries to provide a complete monitoring solution that includes both metrics and logs.
datadog-metrics:
Datadog Metrics does not handle logging but focuses on metrics collection. However, you can combine it with logging libraries to provide a comprehensive monitoring solution that includes both metrics and logs.
statsd-client:
StatsD Client is focused on metrics collection and does not provide logging capabilities. It can be used with logging libraries to enhance application monitoring by combining metrics and logs.
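To make the levels and transports mentioned above concrete, here is a minimal winston sketch; the file name and log messages are illustrative.
const winston = require('winston');
// Log JSON to the console, and log errors additionally to a file
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'error.log', level: 'error' })
  ]
});
logger.info('service started');
logger.error('something went wrong', { code: 'E_EXAMPLE' });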
How to Choose: axios vs winston vs opentracing vs prom-client vs datadog-metrics vs statsd-client
axios:
Choose Axios if you need a simple and powerful HTTP client that supports promises and is easy to integrate with modern JavaScript frameworks. It is ideal for making API requests and handling responses in a clean and efficient manner.
winston:
Select Winston if you need a flexible logging library that can handle multiple transports and formats. It is ideal for applications that require structured logging and the ability to log to various destinations like files, databases, or external services.
opentracing:
Opt for OpenTracing if you are implementing distributed tracing in your microservices architecture. It provides a vendor-neutral API for tracing requests across different services, helping to diagnose performance bottlenecks and latency issues.
prom-client:
Use Prom Client if you are working with Prometheus for monitoring and need to expose application metrics in a format that Prometheus can scrape. It is particularly useful for applications that require real-time monitoring and alerting based on metrics.
datadog-metrics:
Select Datadog Metrics if you are using the Datadog monitoring platform and need to send custom application metrics. It is best suited for applications that require detailed performance monitoring and analytics.
statsd-client:
Choose StatsD Client if you want to send metrics to a StatsD server for aggregation and visualization. It is a good choice for applications that need lightweight metrics reporting without the overhead of a full monitoring solution.
Similar Npm Packages to axios
axios is a popular promise-based HTTP client for both the browser and Node.js. It simplifies making HTTP requests and handling responses, providing a clean and intuitive API. Axios supports features such as request and response interception, automatic JSON data transformation, and the ability to cancel requests. It is widely used in web applications for interacting with RESTful APIs and is known for its ease of use and flexibility. However, there are several alternatives to axios that developers may consider based on their specific needs:
node-fetch is a lightweight module that brings window.fetch to Node.js. It is a simple and minimalistic implementation of the Fetch API, allowing developers to make HTTP requests using a familiar API. Node-fetch is particularly useful for server-side applications or when working with APIs in a Node.js environment. It supports promises and is a great choice for those who prefer the Fetch API's syntax and behavior over traditional XMLHttpRequest or other libraries.
request was once one of the most popular HTTP request libraries for Node.js. It provided a simple and flexible API for making HTTP requests and handling responses. However, it has been deprecated and is no longer actively maintained. While it may still be found in legacy projects, developers are encouraged to use more modern alternatives like axios or node-fetch for new applications.
superagent is another powerful HTTP request library for Node.js and browsers. It offers a flexible and expressive API for making HTTP requests, supporting features like chaining, automatic content type handling, and file uploads. Superagent is particularly useful for developers who need a more feature-rich alternative to axios or want to work with a library that provides a more expressive syntax.
winston is a popular logging library for Node.js applications. It provides a simple and flexible way to log messages, allowing developers to create custom log formats, transports, and levels. With its extensive features, winston is widely used in various applications to manage logging effectively. However, there are several alternatives in the Node.js ecosystem that also provide robust logging solutions. Here are a few notable ones:
bunyan is a simple and fast JSON logging library for Node.js. It is designed to be easy to use and provides a structured logging format, making it suitable for production environments. Bunyan supports log levels, child loggers, and streams, allowing developers to customize how logs are output. If you prefer a logging library that focuses on performance and structured logging, bunyan is an excellent choice.
log4js is a logging library inspired by the popular log4j framework from the Java ecosystem. It offers a variety of appenders for logging to different destinations, such as files, console, and remote servers. Log4js supports log levels, categories, and layouts, providing a comprehensive logging solution. If you're looking for a feature-rich logging library with a familiar API for those coming from Java, log4js is a solid option.
morgan is a middleware for logging HTTP requests in Node.js applications, particularly those built with Express. It provides a simple way to log incoming requests, including details such as the request method, URL, response status, and response time. Morgan is lightweight and easy to integrate into existing applications, making it ideal for developers who need basic request logging without the overhead of a full logging library.
opentracing is a vendor-neutral API for distributed tracing in applications. It provides a standardized way to instrument applications for monitoring and performance analysis, allowing developers to track requests as they flow through various services. This is particularly useful in microservices architectures, where understanding the interactions between services is crucial for diagnosing performance issues and optimizing system behavior. While opentracing offers a robust foundation for distributed tracing, there are several alternatives that cater to specific needs or preferences. Here are a few notable options:
dd-trace is a powerful tracing library developed by Datadog. It provides automatic instrumentation for various frameworks and libraries, making it easy to collect traces without extensive manual setup. dd-trace is particularly beneficial for users already leveraging Datadog for monitoring and observability, as it integrates seamlessly with their platform. If you're looking for a comprehensive solution that simplifies the tracing process and provides rich insights into your application's performance, dd-trace is an excellent choice.
jaeger-client is the official client for Jaeger, an open-source distributed tracing system. It allows developers to instrument their applications and send trace data to a Jaeger backend for visualization and analysis. jaeger-client is ideal for those who prefer an open-source solution and want to leverage the capabilities of Jaeger for monitoring their distributed systems. It provides flexibility and control over the tracing process, making it suitable for various use cases.
prom-client is a client library for Prometheus, a popular monitoring and alerting toolkit. While not a tracing library in the traditional sense, prom-client allows developers to collect metrics and expose them to Prometheus for monitoring purposes. If your focus is on gathering metrics rather than tracing, and you are using Prometheus for observability, prom-client is a valuable tool to consider.
tracer is a lightweight and flexible logging and tracing library for Node.js applications. It allows developers to create trace logs that can be used for debugging and performance analysis. While it may not offer the full capabilities of dedicated tracing systems, tracer is a good option for those looking for a simple and straightforward solution to add tracing capabilities to their applications.
zipkin is another open-source distributed tracing system that helps gather timing data for requests in microservices architectures. It provides a client library for instrumenting applications and sending trace data to a Zipkin server for analysis. If you are looking for an open-source solution similar to Jaeger, zipkin is a solid choice, especially for those who want to visualize and analyze trace data in a user-friendly interface.
prom-client is a popular Node.js client for Prometheus, a powerful monitoring and alerting toolkit. This library allows developers to instrument their applications by exposing metrics in a format that Prometheus can scrape. With prom-client, you can track various metrics such as request counts, response times, and custom application metrics, making it easier to monitor the health and performance of your Node.js applications.
While prom-client is a robust solution for integrating Prometheus metrics into your application, there are several alternatives that also provide similar functionalities. Here are a few noteworthy options:
express-prometheus-middleware is a middleware for Express applications that automatically collects metrics and exposes them for Prometheus. This library simplifies the process of monitoring your Express app by automatically tracking request counts, response times, and other relevant metrics without requiring extensive manual instrumentation. If you are using Express and want a quick way to integrate Prometheus metrics, this middleware is an excellent choice.
prometheus-api-metrics is another middleware designed for Express applications that focuses specifically on API metrics. It provides a simple way to track metrics related to API performance, such as request counts, response times, and error rates. This library is particularly useful for developers who want to monitor the performance of their APIs in a straightforward manner, allowing for easy integration with Prometheus.
prometheus-gc-stats is a specialized library that tracks garbage collection (GC) statistics in Node.js applications and exposes them for Prometheus. This library is particularly useful for monitoring memory usage and performance related to garbage collection, which can be critical for optimizing Node.js applications. If you're concerned about memory management and want to gain insights into your application's garbage collection behavior, prometheus-gc-stats is a valuable tool.
datadog-metrics is a Node.js library that allows developers to send custom metrics to Datadog, a popular monitoring and analytics platform. This package provides a simple and efficient way to track application performance, monitor system health, and gain insights into user behavior by sending real-time metrics to Datadog. While datadog-metrics is a powerful tool for monitoring, there are several alternatives that can also be used for similar purposes. Here are a few noteworthy options:
axios is a promise-based HTTP client for the browser and Node.js. While primarily used for making HTTP requests, it can also be utilized to send metrics to various monitoring services, including Datadog. Its simplicity and ease of use make it a popular choice for developers looking to integrate API calls and metrics reporting into their applications.
opentracing is a vendor-neutral API for distributed tracing. It provides a standard way to instrument applications for tracing requests across microservices. By using opentracing, developers can gain insights into application performance and latency, making it a suitable alternative for monitoring and observability.
prom-client is a Prometheus client for Node.js applications. It allows developers to expose application metrics in a format that can be scraped by Prometheus, a powerful monitoring system. If your infrastructure is built around Prometheus, using prom-client can be an effective way to collect and monitor metrics.
statsd-client is a client for sending metrics to a StatsD server. StatsD is a simple network daemon that listens for statistics, like counters and timers, and sends aggregates to a backend service such as Graphite. If you are using StatsD for monitoring, this client can help you easily send metrics from your applications.
winston is a versatile logging library for Node.js. While its primary purpose is logging, it can also be extended to send metrics to monitoring services. By integrating logging and metrics collection, developers can gain a comprehensive view of application performance and behavior.
For some bundlers and some ES6 linters you may need to do the following:
import { default as axios } from 'axios';
If something goes wrong when trying to import the module into a custom or legacy environment,
you can try importing the module package directly:
Note: CommonJS usage
In order to gain the TypeScript typings (for intellisense / autocomplete) while using CommonJS imports with require(), use the following approach:
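A sketch of that approach, following the axios documentation:
const axios = require('axios').default;
// axios.<method> will now provide autocomplete and parameter typings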
import axios from 'axios';
//const axios = require('axios'); // legacy way
// Make a request for a user with a given ID
axios.get('/user?ID=12345')
.then(function (response) {
// handle success
console.log(response);
})
.catch(function (error) {
// handle error
console.log(error);
})
.finally(function () {
// always executed
});
// Optionally the request above could also be done as
axios.get('/user', {
params: {
ID: 12345
}
})
.then(function (response) {
console.log(response);
})
.catch(function (error) {
console.log(error);
})
.finally(function () {
// always executed
});
// Want to use async/await? Add the `async` keyword to your outer function/method.
async function getUser() {
try {
const response = await axios.get('/user?ID=12345');
console.log(response);
} catch (error) {
console.error(error);
}
}
Note: async/await is part of ECMAScript 2017 and is not supported in Internet
Explorer and older browsers, so use with caution.
The available instance methods are listed below. The specified config will be merged with the instance config.
axios#request(config)
axios#get(url[, config])
axios#delete(url[, config])
axios#head(url[, config])
axios#options(url[, config])
axios#post(url[, data[, config]])
axios#put(url[, data[, config]])
axios#patch(url[, data[, config]])
axios#getUri([config])
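For example, an instance created with its own defaults merges them into every request made through it; the base URL and header below are placeholders.
const instance = axios.create({
  baseURL: 'https://api.example.com',
  headers: { 'X-Custom-Header': 'foobar' }
});
// Resolves against the instance baseURL: GET https://api.example.com/users/42
instance.get('/users/42', { timeout: 2000 })
  .then(function (response) {
    console.log(response.data);
  });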
Request Config
These are the available config options for making requests. Only the url is required. Requests will default to GET if method is not specified.
{
// `url` is the server URL that will be used for the request
url: '/user',
// `method` is the request method to be used when making the request
method: 'get', // default
// `baseURL` will be prepended to `url` unless `url` is absolute.
// It can be convenient to set `baseURL` for an instance of axios to pass relative URLs
// to methods of that instance.
baseURL: 'https://some-domain.com/api/',
// `transformRequest` allows changes to the request data before it is sent to the server
// This is only applicable for request methods 'PUT', 'POST', 'PATCH' and 'DELETE'
// The last function in the array must return a string or an instance of Buffer, ArrayBuffer,
// FormData or Stream
// You may modify the headers object.
transformRequest: [function (data, headers) {
// Do whatever you want to transform the data
return data;
}],
// `transformResponse` allows changes to the response data to be made before
// it is passed to then/catch
transformResponse: [function (data) {
// Do whatever you want to transform the data
return data;
}],
// `headers` are custom headers to be sent
headers: {'X-Requested-With': 'XMLHttpRequest'},
// `params` are the URL parameters to be sent with the request
// Must be a plain object or a URLSearchParams object
params: {
ID: 12345
},
// `paramsSerializer` is an optional config that allows you to customize serializing `params`.
paramsSerializer: {
//Custom encoder function which sends key/value pairs in an iterative fashion.
encode?: (param: string): string => { /* Do custom operations here and return transformed string */ },
// Custom serializer function for the entire parameter. Allows user to mimic pre 1.x behaviour.
serialize?: (params: Record<string, any>, options?: ParamsSerializerOptions ),
//Configuration for formatting array indexes in the params.
indexes: false // Three available options: (1) indexes: null (leads to no brackets), (2) (default) indexes: false (leads to empty brackets), (3) indexes: true (leads to brackets with indexes).
},
// `data` is the data to be sent as the request body
// Only applicable for request methods 'PUT', 'POST', 'DELETE', and 'PATCH'
// When no `transformRequest` is set, must be of one of the following types:
// - string, plain object, ArrayBuffer, ArrayBufferView, URLSearchParams
// - Browser only: FormData, File, Blob
// - Node only: Stream, Buffer, FormData (form-data package)
data: {
firstName: 'Fred'
},
// syntax alternative to send data into the body
// method post
// only the value is sent, not the key
data: 'Country=Brasil&City=Belo Horizonte',
// `timeout` specifies the number of milliseconds before the request times out.
// If the request takes longer than `timeout`, the request will be aborted.
timeout: 1000, // default is `0` (no timeout)
// `withCredentials` indicates whether or not cross-site Access-Control requests
// should be made using credentials
withCredentials: false, // default
// `adapter` allows custom handling of requests which makes testing easier.
// Return a promise and supply a valid response (see lib/adapters/README.md)
adapter: function (config) {
/* ... */
},
// Also, you can set the name of the built-in adapter, or provide an array with their names
// to choose the first available in the environment
adapter: 'xhr', // 'fetch' | 'http' | ['xhr', 'http', 'fetch']
// `auth` indicates that HTTP Basic auth should be used, and supplies credentials.
// This will set an `Authorization` header, overwriting any existing
// `Authorization` custom headers you have set using `headers`.
// Please note that only HTTP Basic auth is configurable through this parameter.
// For Bearer tokens and such, use `Authorization` custom headers instead.
auth: {
username: 'janedoe',
password: 's00pers3cret'
},
// `responseType` indicates the type of data that the server will respond with
// options are: 'arraybuffer', 'document', 'json', 'text', 'stream'
// browser only: 'blob'
responseType: 'json', // default
// `responseEncoding` indicates encoding to use for decoding responses (Node.js only)
// Note: Ignored for `responseType` of 'stream' or client-side requests
// options are: 'ascii', 'ASCII', 'ansi', 'ANSI', 'binary', 'BINARY', 'base64', 'BASE64', 'base64url',
// 'BASE64URL', 'hex', 'HEX', 'latin1', 'LATIN1', 'ucs-2', 'UCS-2', 'ucs2', 'UCS2', 'utf-8', 'UTF-8',
// 'utf8', 'UTF8', 'utf16le', 'UTF16LE'
responseEncoding: 'utf8', // default
// `xsrfCookieName` is the name of the cookie to use as a value for xsrf token
xsrfCookieName: 'XSRF-TOKEN', // default
// `xsrfHeaderName` is the name of the http header that carries the xsrf token value
xsrfHeaderName: 'X-XSRF-TOKEN', // default
// `undefined` (default) - set XSRF header only for the same origin requests
withXSRFToken: boolean | undefined | ((config: InternalAxiosRequestConfig) => boolean | undefined),
// `onUploadProgress` allows handling of progress events for uploads
// browser & node.js
onUploadProgress: function ({loaded, total, progress, bytes, estimated, rate, upload = true}) {
// Do whatever you want with the Axios progress event
},
// `onDownloadProgress` allows handling of progress events for downloads
// browser & node.js
onDownloadProgress: function ({loaded, total, progress, bytes, estimated, rate, download = true}) {
// Do whatever you want with the Axios progress event
},
// `maxContentLength` defines the max size of the http response content in bytes allowed in node.js
maxContentLength: 2000,
// `maxBodyLength` (Node only option) defines the max size of the http request content in bytes allowed
maxBodyLength: 2000,
// `validateStatus` defines whether to resolve or reject the promise for a given
// HTTP response status code. If `validateStatus` returns `true` (or is set to `null`
// or `undefined`), the promise will be resolved; otherwise, the promise will be
// rejected.
validateStatus: function (status) {
return status >= 200 && status < 300; // default
},
// `maxRedirects` defines the maximum number of redirects to follow in node.js.
// If set to 0, no redirects will be followed.
maxRedirects: 21, // default
// `beforeRedirect` defines a function that will be called before redirect.
// Use this to adjust the request options upon redirecting,
// to inspect the latest response headers,
// or to cancel the request by throwing an error
// If maxRedirects is set to 0, `beforeRedirect` is not used.
beforeRedirect: (options, { headers }) => {
if (options.hostname === "example.com") {
options.auth = "user:password";
}
},
// `socketPath` defines a UNIX Socket to be used in node.js.
// e.g. '/var/run/docker.sock' to send requests to the docker daemon.
// Only either `socketPath` or `proxy` can be specified.
// If both are specified, `socketPath` is used.
socketPath: null, // default
// `transport` determines the transport method that will be used to make the request. If defined, it will be used. Otherwise, if `maxRedirects` is 0, the default `http` or `https` library will be used, depending on the protocol specified in `protocol`. Otherwise, the `httpFollow` or `httpsFollow` library will be used, again depending on the protocol, which can handle redirects.
transport: undefined, // default
// `httpAgent` and `httpsAgent` define a custom agent to be used when performing http
// and https requests, respectively, in node.js. This allows options to be added like
// `keepAlive` that are not enabled by default.
httpAgent: new http.Agent({ keepAlive: true }),
httpsAgent: new https.Agent({ keepAlive: true }),
// `proxy` defines the hostname, port, and protocol of the proxy server.
// You can also define your proxy using the conventional `http_proxy` and
// `https_proxy` environment variables. If you are using environment variables
// for your proxy configuration, you can also define a `no_proxy` environment
// variable as a comma-separated list of domains that should not be proxied.
// Use `false` to disable proxies, ignoring environment variables.
// `auth` indicates that HTTP Basic auth should be used to connect to the proxy, and
// supplies credentials.
// This will set a `Proxy-Authorization` header, overwriting any existing
// `Proxy-Authorization` custom headers you have set using `headers`.
// If the proxy server uses HTTPS, then you must set the protocol to `https`.
proxy: {
protocol: 'https',
host: '127.0.0.1',
// hostname: '127.0.0.1' // Takes precedence over 'host' if both are defined
port: 9000,
auth: {
username: 'mikeymike',
password: 'rapunz3l'
}
},
// `cancelToken` specifies a cancel token that can be used to cancel the request
// (see Cancellation section below for details)
cancelToken: new CancelToken(function (cancel) {
}),
// an alternative way to cancel Axios requests using AbortController
signal: new AbortController().signal,
// `decompress` indicates whether or not the response body should be decompressed
// automatically. If set to `true` will also remove the 'content-encoding' header
// from the responses objects of all decompressed responses
// - Node only (XHR cannot turn off decompression)
decompress: true, // default
// `insecureHTTPParser` boolean.
// Indicates whether to use an insecure HTTP parser that accepts invalid HTTP headers.
// This may allow interoperability with non-conformant HTTP implementations.
// Using the insecure parser should be avoided.
// see options https://nodejs.org/dist/latest-v12.x/docs/api/http.html#http_http_request_url_options_callback
// see also https://nodejs.org/en/blog/vulnerability/february-2020-security-releases/#strict-http-header-parsing-none
insecureHTTPParser: undefined, // default
// transitional options for backward compatibility that may be removed in the newer versions
transitional: {
// silent JSON parsing mode
// `true` - ignore JSON parsing errors and set response.data to null if parsing failed (old behaviour)
// `false` - throw SyntaxError if JSON parsing failed (Note: responseType must be set to 'json')
silentJSONParsing: true, // default value for the current Axios version
// try to parse the response string as JSON even if `responseType` is not 'json'
forcedJSONParsing: true,
// throw ETIMEDOUT error instead of generic ECONNABORTED on request timeouts
clarifyTimeoutError: false,
},
env: {
// The FormData class to be used to automatically serialize the payload into a FormData object
FormData: window?.FormData || global?.FormData
},
formSerializer: {
visitor: (value, key, path, helpers) => {}; // custom visitor function to serialize form values
dots: boolean; // use dots instead of brackets format
metaTokens: boolean; // keep special endings like {} in parameter key
indexes: boolean; // array indexes format null - no brackets, false - empty brackets, true - brackets with indexes
},
// http adapter only (node.js)
maxRate: [
100 * 1024, // 100KB/s upload limit,
100 * 1024 // 100KB/s download limit
]
}
Response Schema
The response for a request contains the following information.
{
// `data` is the response that was provided by the server
data: {},
// `status` is the HTTP status code from the server response
status: 200,
// `statusText` is the HTTP status message from the server response
statusText: 'OK',
// `headers` the HTTP headers that the server responded with
// All header names are lowercase and can be accessed using the bracket notation.
// Example: `response.headers['content-type']`
headers: {},
// `config` is the config that was provided to `axios` for the request
config: {},
// `request` is the request that generated this response
// It is the last ClientRequest instance in node.js (in redirects)
// and an XMLHttpRequest instance in the browser
request: {}
}
When using then, you will receive the response as follows:
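For example (the endpoint is illustrative):
axios.get('/user/12345')
  .then(function (response) {
    console.log(response.data);
    console.log(response.status);
    console.log(response.statusText);
    console.log(response.headers);
    console.log(response.config);
  });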
When using catch, or passing a rejection callback as second parameter of then, the response will be available through the error object as explained in the Handling Errors section.
Config Defaults
You can specify config defaults that will be applied to every request.
Global axios defaults
axios.defaults.baseURL = 'https://api.example.com';
// Important: If axios is used with multiple domains, the AUTH_TOKEN will be sent to all of them.
// See below for an example using Custom instance defaults instead.
axios.defaults.headers.common['Authorization'] = AUTH_TOKEN;
axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded';
Custom instance defaults
// Set config defaults when creating the instance
const instance = axios.create({
baseURL: 'https://api.example.com'
});
// Alter defaults after instance has been created
instance.defaults.headers.common['Authorization'] = AUTH_TOKEN;
Config order of precedence
Config will be merged with an order of precedence. The order is library defaults found in lib/defaults.js, then defaults property of the instance, and finally config argument for the request. The latter will take precedence over the former. Here's an example.
// Create an instance using the config defaults provided by the library
// At this point the timeout config value is `0` as is the default for the library
const instance = axios.create();
// Override timeout default for the library
// Now all requests using this instance will wait 2.5 seconds before timing out
instance.defaults.timeout = 2500;
// Override timeout for this request as it's known to take a long time
instance.get('/longRequest', {
timeout: 5000
});
Interceptors
You can intercept requests or responses before they are handled by then or catch.
// Add a request interceptor
axios.interceptors.request.use(function (config) {
// Do something before request is sent
return config;
}, function (error) {
// Do something with request error
return Promise.reject(error);
});
// Add a response interceptor
axios.interceptors.response.use(function (response) {
// Any status code that lies within the range of 2xx causes this function to trigger
// Do something with response data
return response;
}, function (error) {
// Any status code that falls outside the range of 2xx causes this function to trigger
// Do something with response error
return Promise.reject(error);
});
If you need to remove an interceptor later, you can eject it:
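One way to do this is to keep the interceptor id returned by use() and pass it to eject():
const myInterceptor = axios.interceptors.request.use(function (config) {
  // modify the config here if needed
  return config;
});
// Remove the interceptor when it is no longer needed
axios.interceptors.request.eject(myInterceptor);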
When you add request interceptors, they are presumed to be asynchronous by default. This can cause a delay
in the execution of your axios request when the main thread is blocked (a promise is created under the hood for
the interceptor and your request gets put on the bottom of the call stack). If your request interceptors are synchronous you can add a flag
to the options object that will tell axios to run the code synchronously and avoid any delays in request execution.
axios.interceptors.request.use(function (config) {
config.headers.test = 'I am only a header!';
return config;
}, null, { synchronous: true });
If you want to execute a particular interceptor based on a runtime check,
you can add a runWhen function to the options object. The request interceptor will not be executed if runWhen returns false.
The function will be called with the config object (don't forget that you can bind your own arguments to it as well).
This can be handy when you have an asynchronous request interceptor that only needs to run at certain times.
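For instance, an interceptor that should only run for GET requests might look like this (the header name is illustrative):
function onGetCall(config) {
  return config.method === 'get';
}
axios.interceptors.request.use(function (config) {
  config.headers.test = 'special get headers';
  return config;
}, null, { runWhen: onGetCall });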
Many different axios error messages can appear, each providing basic information about the specifics of the error and where opportunities may lie in debugging.
The general structure of axios errors is as follows:
| Property | Definition |
| -------- | ---------- |
| message | A quick summary of the error message and the status it failed with. |
| name | This defines where the error originated from. For axios, it will always be an 'AxiosError'. |
| stack | Provides the stack trace of the error. |
| config | An axios config object with specific instance configurations defined by the user from when the request was made |
| code | Represents an axios identified error. The table below lists out specific definitions for internal axios errors. |
| status | HTTP response status code. |
Below is a list of potential axios identified errors:
| Code | Definition |
| -------- | ---------- |
| ERR_BAD_OPTION_VALUE | Invalid or unsupported value provided in axios configuration. |
| ERR_BAD_OPTION | Invalid option provided in axios configuration. |
| ECONNABORTED | Request timed out due to exceeding timeout specified in axios configuration. |
| ETIMEDOUT | Request timed out due to exceeding default axios timelimit. |
| ERR_NETWORK | Network-related issue. |
| ERR_FR_TOO_MANY_REDIRECTS | Request is redirected too many times; exceeds max redirects specified in axios configuration. |
| ERR_DEPRECATED | Deprecated feature or method used in axios. |
| ERR_BAD_RESPONSE | Response cannot be parsed properly or is in an unexpected format. |
| ERR_BAD_REQUEST | Request has unexpected format or missing required parameters. |
| ERR_CANCELED | Feature or method is canceled explicitly by the user. |
| ERR_NOT_SUPPORT | Feature or method not supported in the current axios environment. |
| ERR_INVALID_URL | Invalid URL provided for axios request. |
Handling Errors
By default, axios rejects every response with a status code that falls outside the range of 2xx and treats it as an error.
axios.get('/user/12345')
.catch(function (error) {
if (error.response) {
// The request was made and the server responded with a status code
// that falls out of the range of 2xx
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
// The request was made but no response was received
// `error.request` is an instance of XMLHttpRequest in the browser and an instance of
// http.ClientRequest in node.js
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error
console.log('Error', error.message);
}
console.log(error.config);
});
Using the validateStatus config option, you can override the default condition (status >= 200 && status < 300) and define HTTP code(s) that should throw an error.
axios.get('/user/12345', {
validateStatus: function (status) {
return status < 500; // Resolve only if the status code is less than 500
}
})
Using toJSON you get an object with more information about the HTTP error.
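For example, inside a catch handler:
axios.get('/user/12345')
  .catch(function (error) {
    console.log(error.toJSON());
  });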
This API has been deprecated since v0.22.0 and shouldn't be used in new projects.
You can create a cancel token using the CancelToken.source factory as shown below:
const CancelToken = axios.CancelToken;
const source = CancelToken.source();
axios.get('/user/12345', {
cancelToken: source.token
}).catch(function (thrown) {
if (axios.isCancel(thrown)) {
console.log('Request canceled', thrown.message);
} else {
// handle error
}
});
axios.post('/user/12345', {
name: 'new name'
}, {
cancelToken: source.token
})
// cancel the request (the message parameter is optional)
source.cancel('Operation canceled by the user.');
You can also create a cancel token by passing an executor function to the CancelToken constructor:
const CancelToken = axios.CancelToken;
let cancel;
axios.get('/user/12345', {
cancelToken: new CancelToken(function executor(c) {
// An executor function receives a cancel function as a parameter
cancel = c;
})
});
// cancel the request
cancel();
Note: you can cancel several requests with the same cancel token/abort controller.
If a cancellation token is already cancelled at the moment of starting an Axios request, then the request is cancelled immediately, without any attempts to make a real request.
During the transition period, you can use both cancellation APIs, even for the same request:
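A sketch of passing both a cancel token and an abort signal to the same request:
const controller = new AbortController();
const source = axios.CancelToken.source();
axios.get('/user/12345', {
  cancelToken: source.token,
  signal: controller.signal
}).catch(function (thrown) {
  if (axios.isCancel(thrown)) {
    console.log('Request canceled', thrown.message);
  } else {
    // handle error
  }
});
// Either call below cancels the request
source.cancel('Operation canceled by the user.');
// controller.abort(); // AbortController does not take a message parameter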
If your backend body parser (like body-parser for Express.js) supports nested object decoding, you will get the same object on the server side automatically:
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.urlencoded({ extended: true })); // support encoded bodies
app.post('/', function (req, res, next) {
  // echo body as JSON
  res.send(JSON.stringify(req.body));
});
var server = app.listen(3000);
Using multipart/form-data format
FormData
To send the data as multipart/form-data you need to pass a FormData instance as the payload.
Setting the Content-Type header is not required as Axios guesses it based on the payload type.
const formData = new FormData();
formData.append('foo', 'bar');
axios.post('https://httpbin.org/post', formData);
In node.js, you can use the form-data library as follows:
const FormData = require('form-data');
const fs = require('fs');
const form = new FormData();
form.append('my_field', 'my value');
form.append('my_buffer', Buffer.alloc(10));
form.append('my_file', fs.createReadStream('/foo/bar.jpg'));
axios.post('https://example.com', form)
🆕 Automatic serialization to FormData
Starting from v0.27.0, Axios supports automatic object serialization to a FormData object if the request Content-Type
header is set to multipart/form-data.
The following request will submit the data in a FormData format (Browser & Node.js):
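Roughly like this, following the pattern in the axios docs (the URL and payload are placeholders):
import axios from 'axios';
axios.post('https://httpbin.org/post', { x: 1 }, {
  headers: {
    'Content-Type': 'multipart/form-data'
  }
}).then(function ({ data }) {
  console.log(data);
});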
Axios FormData serializer supports some special endings to perform the following operations:
{} - serialize the value with JSON.stringify
[] - unwrap the array-like object as separate fields with the same key
Note: unwrap/expand operation will be used by default on arrays and FileList objects
FormData serializer supports additional options via config.formSerializer: object property to handle rare cases:
visitor: Function - user-defined visitor function that will be called recursively to serialize the data object
to a FormData object by following custom rules.
dots: boolean = false - use dot notation instead of brackets to serialize arrays and objects;
metaTokens: boolean = true - add the special ending (e.g. user{}: '{"name": "John"}') in the FormData key.
The back-end body-parser could potentially use this meta-information to automatically parse the value as JSON.
indexes: null|false|true = false - controls how indexes will be added to unwrapped keys of flat array-like objects
Axios supports the following shortcut methods: postForm, putForm, patchForm
which are just the corresponding http methods with the Content-Type header preset to multipart/form-data.
Sending Blobs/Files as JSON (base64) is not currently supported.
🆕 Progress capturing
Axios supports both browser and node environments to capture request upload/download progress.
The frequency of progress events is forced to be limited to 3 times per second.
await axios.post(url, data, {
onUploadProgress: function (axiosProgressEvent) {
/*{
loaded: number;
total?: number;
progress?: number; // in range [0..1]
bytes: number; // how many bytes have been transferred since the last trigger (delta)
estimated?: number; // estimated time in seconds
rate?: number; // upload speed in bytes
upload: true; // upload sign
}*/
},
onDownloadProgress: function (axiosProgressEvent) {
/*{
loaded: number;
total?: number;
progress?: number;
bytes: number;
estimated?: number;
rate?: number; // download speed in bytes
download: true; // download sign
}*/
}
});
You can also track stream upload/download progress in node.js:
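A rough node.js sketch; the file path, URL, and surrounding async function are placeholders for whatever your application uses.
const fs = require('fs');
async function uploadFile() {
  const readableStream = fs.createReadStream('./file.bin'); // example file
  const { size: contentLength } = fs.statSync('./file.bin');
  const { data } = await axios.post('https://example.com/upload', readableStream, {
    onUploadProgress: ({ progress }) => {
      console.log(`upload: ${(progress * 100).toFixed(2)}%`);
    },
    headers: { 'Content-Length': contentLength },
    maxRedirects: 0 // see the warning below about follow-redirects buffering
  });
  return data;
}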
Note:
Capturing FormData upload progress is not currently supported in node.js environments.
⚠️ Warning
It is recommended to disable redirects by setting maxRedirects: 0 to upload the stream in the node.js environment,
as follow-redirects package will buffer the entire stream in RAM without following the "backpressure" algorithm.
🆕 Rate limiting
Download and upload rate limits can only be set for the http adapter (node.js):
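For example, limiting the upload rate with maxRate; the URL and buffer are placeholders, and the await call is assumed to run inside an async function.
const myBuffer = Buffer.alloc(1024 * 1024); // 1MB of placeholder data
const { data } = await axios.post('http://localhost:3000/upload', myBuffer, {
  onUploadProgress: ({ progress, rate }) => {
    console.log(`Upload [${(progress * 100).toFixed(2)}%]: ${(rate / 1024).toFixed(2)}KB/s`);
  },
  maxRate: [100 * 1024] // 100KB/s upload limit
});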
Axios has its own AxiosHeaders class to manipulate headers using a Map-like API that guarantees caseless work.
Although HTTP headers are case-insensitive, Axios will retain the case of the original header for stylistic reasons
and as a workaround for servers that mistakenly treat header names as case-sensitive.
The old approach of directly manipulating the headers object is still available, but deprecated and not recommended for future usage.
Working with headers
An AxiosHeaders object instance can contain different types of internal values that control setting and merging logic.
The final headers object with string values is obtained by Axios by calling the toJSON method.
Note: By JSON here we mean an object consisting only of string values intended to be sent over the network.
The header value can be one of the following types:
string - normal string value that will be sent to the server
null - skip header when rendering to JSON
false - skip header when rendering to JSON, additionally indicates that set method must be called with rewrite option set to true
to overwrite this value (Axios uses this internally to allow users to opt out of installing certain headers like User-Agent or Content-Type)
undefined - value is not set
Note: The header value is considered set if it is not equal to undefined.
The headers object is always initialized inside interceptors and transformers:
axios.interceptors.request.use((request: InternalAxiosRequestConfig) => {
request.headers.set('My-header', 'value');
request.headers.set({
"My-set-header1": "my-set-value1",
"My-set-header2": "my-set-value2"
});
request.headers.set('User-Agent', false); // disable subsequent setting the header by Axios
request.headers.setContentType('text/plain');
request.headers['My-set-header2'] = 'newValue' // direct access is deprecated
return request;
}
);
You can iterate over an AxiosHeaders instance using a for...of statement:
const headers = new AxiosHeaders({
foo: '1',
bar: '2',
baz: '3'
});
for(const [header, value] of headers) {
console.log(header, value);
}
// foo 1
// bar 2
// baz 3
AxiosHeaders#get(header)
Returns the internal value of the header. It can take an extra argument to parse the header's value with RegExp.exec, a matcher function, or the internal key-value parser.
AxiosHeaders#clear(matcher?)
Returns true if at least one header has been cleared.
AxiosHeaders#normalize(format);
If the headers object was changed directly, it can have duplicates with the same name but in different cases.
This method normalizes the headers object by combining duplicate keys into one.
Axios uses this method internally after calling each interceptor.
Set format to true to convert header names to lowercase with capitalized initial letters (cOntEnt-type => Content-Type).
AxiosHeaders#concat(...targets)
Merges the instance with targets into a new AxiosHeaders instance. If a target is a string, it will be parsed as raw HTTP headers.
Returns a new AxiosHeaders instance.
AxiosHeaders#toJSON(asStrings?)
toJSON(asStrings?: boolean): RawAxiosHeaders;
Resolves all internal header values into a new null-prototype object.
Set asStrings to true to resolve arrays as a string containing all elements, separated by commas.
Returns a new AxiosHeaders instance created from the raw headers passed in,
or simply returns the given headers object if it's an AxiosHeaders instance.
Fetch adapter was introduced in v1.7.0. By default, it will be used if xhr and http adapters are not available in the build,
or not supported by the environment.
To use it by default, it must be selected explicitly:
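Selecting it explicitly per request might look like this (the URL is a placeholder, and the await call is assumed to run inside an async function):
const { data } = await axios.get('https://example.com/data', {
  adapter: 'fetch' // by default ['xhr', 'http', 'fetch']
});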
The adapter supports the same functionality as the xhr adapter, including upload and download progress capturing.
Also, it supports additional response types such as stream and formdata (if supported by the environment).
Semver
Until axios reaches a 1.0 release, breaking changes will be released with a new minor version. For example, 0.5.1 and 0.5.4 will have the same API, but 0.6.0 will have breaking changes.
Promises
axios depends on a native ES6 Promise implementation to be supported.
If your environment doesn't support ES6 Promises, you can polyfill.
TypeScript
axios includes TypeScript definitions and a type guard for axios errors.
let user: User = null;
try {
const { data } = await axios.get('/user?ID=12345');
user = data.userDetails;
} catch (error) {
if (axios.isAxiosError(error)) {
handleAxiosError(error);
} else {
handleUnexpectedError(error);
}
}
Because axios dual publishes with an ESM default export and a CJS module.exports, there are some caveats.
The recommended setting is to use "moduleResolution": "node16" (this is implied by "module": "node16"). Note that this requires TypeScript 4.7 or greater.
If you use ESM, your settings should be fine.
If you compile TypeScript to CJS and you can't use "moduleResolution": "node16", you have to enable esModuleInterop.
If you use TypeScript to type check CJS JavaScript code, your only option is to use "moduleResolution": "node16".
Online one-click setup
You can use Gitpod, an online IDE (which is free for open source), for contributing or running the examples online.
axios is heavily inspired by the $http service provided in AngularJS. Ultimately axios is an effort to provide a standalone $http-like service for use outside of AngularJS.