feedparser, rss, and rss-parser are npm packages used for working with RSS and Atom feeds in JavaScript applications, but they serve fundamentally different purposes. feedparser and rss-parser are feed parsers that convert XML feed content into structured JavaScript objects, while rss is a feed generator that creates RSS or Atom XML from JavaScript data. Understanding this distinction is crucial: you cannot parse feeds with rss, nor can you generate feeds with feedparser or rss-parser. Each package offers different APIs—feedparser uses a streaming, event-driven model suitable for large-scale or memory-constrained scenarios; rss-parser provides a modern promise-based interface that simplifies fetching and parsing from URLs or XML strings; and rss uses a builder pattern to construct valid RSS 2.0 or Atom 1.0 feeds for publishing content.
When you need to consume or generate RSS/Atom feeds in a JavaScript application, three npm packages often come up: feedparser, rss, and rss-parser. But they serve very different purposes — two are for parsing feeds, and one is for generating them. Let’s cut through the confusion and compare them based on real-world usage.
First, it’s critical to understand what each package actually does:
- **feedparser**: A streaming XML parser for RSS and Atom feeds. It reads raw XML (as a stream or string) and emits structured JavaScript objects.
- **rss-parser**: A promise-based RSS/Atom feed parser that fetches and parses feeds from URLs or XML strings.
- **rss**: A feed generator, not a parser. It helps you create RSS 2.0 or Atom 1.0 XML from JavaScript objects.

⚠️ Important: You cannot use `rss` to parse incoming feeds. It only builds them. Confusing it with the others is a common mistake.
Let’s look at how each handles its role.
Both feedparser and rss-parser parse RSS/Atom, but their APIs and execution models differ significantly.
### feedparser: Streaming, Event-Driven Parsing

feedparser works with Node.js streams. It’s ideal when you’re dealing with large feeds or want fine-grained control over parsing (e.g., processing items as they arrive).
```js
import FeedParser from 'feedparser';
import { createReadStream } from 'fs';

const parser = new FeedParser();
const stream = createReadStream('feed.xml');
stream.pipe(parser);

parser.on('readable', function () {
  let item;
  while ((item = this.read())) {
    console.log(item.title);
  }
});

parser.on('error', (err) => {
  console.error('Parse error:', err);
});
```
You can also pipe HTTP responses directly:
```js
import https from 'https';
import FeedParser from 'feedparser';

https.get('https://example.com/feed.xml', (res) => {
  const parser = new FeedParser();
  res.pipe(parser);
  // ... handle 'readable' and 'error' events
});
```
This approach is memory-efficient for large feeds but requires managing streams and events.
### rss-parser: Simple Promise-Based API

rss-parser abstracts away streams and gives you a clean async/await interface. It can fetch from a URL or parse a raw XML string.
```js
import Parser from 'rss-parser';

const parser = new Parser();

// Parse from URL
const feed = await parser.parseURL('https://example.com/feed.xml');
console.log(feed.title);
feed.items.forEach(item => console.log(item.title));

// Or parse from XML string
const xmlString = `<rss version="2.0"><channel>...</channel></rss>`;
const feedFromXml = await parser.parseString(xmlString);
```
It normalizes fields across RSS and Atom formats (e.g., pubDate becomes isoDate), which reduces boilerplate.
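For illustration, here is roughly what that normalization amounts to. This is a hand-rolled sketch, not rss-parser's actual code, and the sample item is made up:

```js
// rss-parser keeps the original RSS field and adds a normalized one alongside it.
// This sketch mimics the pubDate -> isoDate mapping by hand.
const rawItem = { title: 'Example', pubDate: 'Mon, 06 Sep 2021 12:00:00 GMT' };

const normalized = {
  ...rawItem,
  isoDate: new Date(rawItem.pubDate).toISOString()
};

console.log(normalized.isoDate); // '2021-09-06T12:00:00.000Z'
```

Your code can then read `item.isoDate` everywhere, regardless of whether the source feed was RSS (with `pubDate`) or Atom (with `updated`).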
### Generating Feeds: Only rss Does This

If you need to create an RSS feed (e.g., for a blog or podcast), rss is your go-to. The other two cannot do this.
```js
import RSS from 'rss';

const feed = new RSS({
  title: 'My Blog',
  description: 'A tech blog',
  feed_url: 'https://example.com/rss.xml',
  site_url: 'https://example.com',
  pubDate: new Date(),
});

feed.item({
  title: 'New Post',
  description: 'Learn about RSS!',
  url: 'https://example.com/post1',
  date: new Date()
});

const xml = feed.xml(); // Returns XML string
```
This is straightforward and widely used in static site generators and CMS backends.
As of the latest checks:
- **feedparser**: Actively maintained. No deprecation notice. Works well in modern Node.js environments.
- **rss-parser**: Actively maintained. Regular updates and issue responses.
- **rss**: Actively maintained. Used by many production systems for feed generation.

None of these packages are deprecated, so all are safe for new projects — as long as you use them for their intended purpose.
❌ Never try to use `rss` to parse a feed — it will not work.
Can you use a parser together with the generator? Yes! A common pattern is to:

1. Use `rss-parser` to fetch and parse a third-party feed.
2. Use `rss` to generate your own aggregated feed.

```js
import Parser from 'rss-parser';
import RSS from 'rss';

const parser = new Parser();
const sourceFeed = await parser.parseURL('https://news.example.com/feed');

const myFeed = new RSS({
  title: 'My Aggregated News',
  site_url: 'https://myapp.com',
  feed_url: 'https://myapp.com/aggregated.rss'
});

sourceFeed.items.slice(0, 10).forEach(item => {
  myFeed.item({
    title: item.title,
    url: item.link,
    date: item.pubDate
  });
});

const xml = myFeed.xml();
```
| Package | Purpose | API Style | Fetches URLs? | Generates Feeds? | Streaming? |
|---|---|---|---|---|---|
| feedparser | Parser | Event-driven | ❌ (use with http) | ❌ | ✅ |
| rss-parser | Parser | Promise-based | ✅ | ❌ | ❌ |
| rss | Generator | Builder pattern | ❌ | ✅ | ❌ |
Don’t pick based on popularity — pick based on what you’re trying to do:
- **Parsing feeds?** Choose between `feedparser` (for performance and control) and `rss-parser` (for simplicity).
- **Generating feeds?** Use `rss` — it’s the standard tool for the job.

Mixing up parsers and generators is the #1 mistake developers make with these packages. Keep their roles clear, and you’ll save hours of debugging.
Choose feedparser if you need a streaming, event-driven parser for RSS/Atom feeds and are working in a Node.js environment where memory efficiency or real-time item processing matters (e.g., aggregating thousands of feeds). It integrates naturally with Node streams but requires handling events and errors manually. Avoid it if you prefer a simpler async/await API or need built-in HTTP fetching.
Choose rss only when you need to generate RSS 2.0 or Atom 1.0 feeds from your own content, such as for blogs, podcasts, or news sites. It cannot parse incoming feeds, so never use it for consumption. It’s the standard choice for feed publishing in Node.js backends and static site generators due to its straightforward builder API.
Choose rss-parser if you want a simple, promise-based API to fetch and parse RSS/Atom feeds from URLs or XML strings with minimal setup. It normalizes feed metadata across formats and is ideal for typical web apps (e.g., Next.js API routes) where you don’t need streaming. Avoid it if you’re processing very large feeds or require fine-grained control over parsing performance.
Feedparser is for parsing RSS, Atom, and RDF feeds in node.js.
It has a couple of features you don't usually see in other feed parsers, such as its handling of relative urls (see the `feedurl` option below).

Install via npm:

```sh
npm install feedparser
```
This example is just to briefly demonstrate basic concepts.
Please also review the complete example for a thorough working example that is a suitable starting point for your app.
```js
var FeedParser = require('feedparser');
var fetch = require('node-fetch'); // for fetching the feed

var req = fetch('http://somefeedurl.xml');
var feedparser = new FeedParser(); // optionally pass an options object (see below)

req.then(function (res) {
  if (res.status !== 200) {
    throw new Error('Bad status code');
  }
  else {
    // The response `body` -- res.body -- is a stream
    res.body.pipe(feedparser);
  }
}, function (err) {
  // handle any request errors
});

feedparser.on('error', function (error) {
  // always handle errors
});

feedparser.on('readable', function () {
  // This is where the action is!
  var stream = this; // `this` is `feedparser`, which is a stream
  var meta = this.meta; // **NOTE** the "meta" is always available in the context of the feedparser instance
  var item;

  while ((item = stream.read())) {
    console.log(item);
  }
});
```
You can also check out this nice working implementation that demonstrates one way to handle all the hard and annoying stuff. :smiley:
- `normalize` - Set to false to override Feedparser's default behavior, which is to parse feeds into an object that contains the generic properties patterned after (although not identical to) the RSS 2.0 format, regardless of the feed's format.

- `addmeta` - Set to false to override Feedparser's default behavior, which is to add the feed's meta information to each article.

- `feedurl` - The url (string) of the feed. FeedParser is very good at resolving relative urls in feeds. But some feeds use relative urls without declaring the `xml:base` attribute any place in the feed. This is perfectly valid, but we don't know the feed's url before we start parsing the feed and trying to resolve those relative urls. If we discover the feed's url, we will go back and resolve the relative urls we've already seen, but this takes a little time (not much). If you want to be sure we never have to re-resolve relative urls (or if FeedParser is failing to properly resolve relative urls), you should set the `feedurl` option. Otherwise, feel free to ignore this option.

- `resume_saxerror` - Set to false to override Feedparser's default behavior, which is to silently handle SAXErrors and then automatically resume parsing. In my experience, SAXErrors are not usually fatal, so this is usually helpful behavior. If you prefer to abort parsing the feed when there's a SAXError, set `resume_saxerror` to false, which will cause the SAXError to be emitted on `error` and abort parsing.
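Taken together, the options above might be passed like this. This is a sketch: the values shown reflect the documented defaults, and the feed url is a placeholder:

```js
// All four documented options; pass any subset to the FeedParser constructor,
// e.g. `new FeedParser(options)`. The url shown is a placeholder.
var options = {
  normalize: true,                   // map feed fields onto generic RSS-2.0-style properties
  addmeta: true,                     // attach the feed's meta to each article
  feedurl: 'http://somefeedurl.xml', // set this when the feed omits xml:base
  resume_saxerror: true              // swallow SAXErrors and keep parsing
};
```

Omitting an option is equivalent to its default, so in practice you only set the ones you want to change.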
See the examples directory.
Feedparser is a transform stream operating in "object mode": XML in -> JavaScript objects out. Each readable chunk is an object representing an article in the feed.
In addition to the standard stream events, Feedparser emits:

- `meta` - called with the feed's meta when it has been parsed.
- `error` - called with an error whenever there is a fatal Feedparser error. SAXErrors are only emitted here when `resume_saxerror` is false; otherwise they are silently collected in `feedparser.errors`.

Feedparser parses each feed into a meta (emitted on the `meta` event) portion and one or more articles (emitted on the `data` event, or available via `read()` after the `readable` event is emitted).
Regardless of the format of the feed, the meta and each article contain a
uniform set of generic properties patterned after (although not identical to)
the RSS 2.0 format, as well as all of the properties originally contained in the
feed. So, for example, an Atom feed may have a meta.description property, but
it will also have a meta['atom:subtitle'] property.
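As a sketch of that shape (a hypothetical parsed meta object written out by hand, not real parser output):

```js
// Hypothetical meta for a parsed Atom feed: the normalized generic property
// and the original namespaced element coexist on the same object.
const meta = {
  description: 'All the latest posts',              // generic, RSS-2.0-style name
  'atom:subtitle': { '#': 'All the latest posts' }  // original Atom element ('#' holds the text)
};

console.log(meta.description === meta['atom:subtitle']['#']); // true
```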
The purpose of the generic properties is to provide the user a uniform interface
for accessing a feed's information without needing to know the feed's format
(i.e., RSS versus Atom) or having to worry about handling the differences
between the formats. However, the original information is also there, in case
you need it. In addition, Feedparser supports some popular namespace extensions
(or portions of them), such as portions of the itunes, media, feedburner
and pheedo extensions. So, for example, if a feed article contains either an
itunes:image or media:thumbnail, the url for that image will be contained in
the article's image.url property.
All generic properties are "pre-initialized" to null (or empty arrays or
objects for certain properties). This should save you from having to do a lot of
checking for undefined, such as, for example, when you are using jade
templates.
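The practical upshot, in a toy sketch (the article shape here is made up for illustration):

```js
// Because generic properties are always present (null, [], or {} rather than
// undefined), consuming code can read them without existence checks.
const article = { title: 'Hello', author: null, categories: [] };

// No `typeof article.author !== 'undefined'` guard needed:
console.log(article.author || 'Anonymous');      // 'Anonymous'
article.categories.forEach(c => console.log(c)); // safe even when empty
```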
In addition, all properties (and namespace prefixes) use only lowercase letters, regardless of how they were capitalized in the original feed. ("xmlUrl" and "pubDate" also are still used to provide backwards compatibility.) This decision places ease-of-use over purity -- hopefully, you will never need to think about whether you should camelCase "pubDate" ever again.
The title and description properties of meta and the title property of
each article have any HTML stripped if you let feedparser normalize the output.
If you really need the HTML in those elements, there are always the originals:
e.g., meta['atom:subtitle']['#'].
Notable generic properties include:

- `image` (an object with `url` and `title` properties)
- `origlink` (when the feed rewrites an article's `link` property, `origlink` contains the original link)
- `permalink` (when an RSS feed has a `guid` field and its `isPermalink` attribute is not set to false, `permalink` contains the value of `guid`)
- `source` (an object with `url` and `title` properties pointing to the original source for an article; see the RSS Spec for an explanation of this element)
- `enclosures` (an array of objects, each with a `url` property and possibly `type` and `length` properties)
- `meta` (the meta for the article's feed, included with each article emission)

View all the contributors.
Although node-feedparser no longer shares any code with node-easyrss, it was
the original inspiration and a starting point.
(The MIT License)
Copyright (c) 2011-2026 Dan MacTough and contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.