chatgpt vs openai

Official vs Community SDKs for OpenAI API Integration in JavaScript
openai is the official JavaScript/TypeScript SDK provided by OpenAI for interacting with their APIs, including ChatGPT, DALL·E, and other models. It offers full coverage of the OpenAI REST API with strong TypeScript support, automatic retries, streaming, and up-to-date features. chatgpt, on the other hand, is a third-party community package that focuses exclusively on the chat completions endpoint (primarily for GPT-3.5 and GPT-4) and provides a simplified, higher-level interface for common conversational use cases. While it abstracts away some complexity, it does not cover the full OpenAI API surface and is not maintained by OpenAI.

Stat Detail

| Package | Downloads | Stars | Size | Issues | Publish | License |
| ------- | --------- | ----- | ---- | ------ | ------- | ------- |
| chatgpt | 0 | 18,129 | 131 kB | 15 | 3 years ago | MIT |
| openai | 0 | 10,664 | 7.42 MB | 149 | 2 days ago | Apache-2.0 |

Official vs Community SDKs: openai vs chatgpt for OpenAI Integration

When integrating OpenAI capabilities into your JavaScript application, you’ll likely encounter two popular npm packages: the official openai SDK and the community-built chatgpt. Despite similar names, they differ significantly in scope, maintenance, and suitability for professional use. Let’s break down what each offers — and when to use which.

🏛️ Package Status and Maintenance

openai is the official SDK published and maintained by OpenAI. It’s actively updated alongside API changes, includes comprehensive TypeScript definitions, and follows semantic versioning. You can trust it to reflect the latest OpenAI features and best practices.

chatgpt is a third-party package created by a community developer. As of mid-2024, its GitHub repository shows limited recent activity, and it explicitly states it’s “not affiliated with OpenAI.” Crucially, it wraps only a subset of the OpenAI API and may not keep pace with new model releases or endpoint changes.

⚠️ Important: The chatgpt package should not be used in new production projects due to its unofficial status and narrow scope. Relying on it introduces risk of breakage when OpenAI updates its API.

🧩 API Coverage and Flexibility

openai gives you full access to every OpenAI endpoint: chat completions, embeddings, image generation (DALL·E), fine-tuning, audio transcription, and more.

// openai: Full API access
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: '...' });

// Chat
await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Embeddings
await openai.embeddings.create({
  model: 'text-embedding-ada-002',
  input: 'The quick brown fox'
});

// Images
await openai.images.generate({
  prompt: 'A red panda wearing sunglasses',
  n: 1,
  size: '1024x1024'
});

chatgpt exposes only the chat completions endpoint, through a simplified ChatGPTAPI abstraction. You can override completion parameters such as temperature and top_p via completionParams, but embeddings, images, and other OpenAI services are out of reach.

// chatgpt: Chat-only, high-level API
import { ChatGPTAPI } from 'chatgpt';

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY });

const res = await api.sendMessage('Hello!', {
  parentMessageId: '...' // optional, for threading replies
});
// Returns { id, text, ... }; pass res.id as parentMessageId to continue the thread

🔌 Streaming Support

Real-time streaming is critical for responsive chat UIs. Here’s how each handles it.

openai provides native, standards-compliant streaming using async iterators:

// openai: Streaming with full control
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content); // or update React state
}

chatgpt also supports streaming, but through a callback-based interface that’s less idiomatic in modern JavaScript:

// chatgpt: Streaming via callback
import { ChatGPTAPI } from 'chatgpt';

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY });
const abortController = new AbortController();

await api.sendMessage('Tell me a story', {
  onProgress: (partialResponse) => {
    console.log(partialResponse.text);
  },
  abortSignal: abortController.signal
});

While functional, this approach doesn’t integrate as cleanly with React’s async patterns or modern async/await workflows.
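If you want the async-iterator ergonomics of the official SDK but are handed a callback-based stream, the gap can be bridged with a small adapter. The sketch below is generic: `start` stands in for any callback-style producer (for example, a thin wrapper you might write around a chatgpt-style onProgress handler); the queue-and-resume logic is the interesting part.

```javascript
// Bridge a callback-style streaming API into an async iterable.
// `start` is any producer that calls onChunk for each partial result
// and onDone exactly once when the stream ends.
function toAsyncIterable(start) {
  const queue = [];
  let pendingResolve = null;
  let finished = false;

  start(
    (chunk) => {
      // Hand the chunk to a waiting consumer, or buffer it.
      if (pendingResolve) {
        pendingResolve({ value: chunk, done: false });
        pendingResolve = null;
      } else {
        queue.push(chunk);
      }
    },
    () => {
      finished = true;
      if (pendingResolve) {
        pendingResolve({ value: undefined, done: true });
        pendingResolve = null;
      }
    }
  );

  return {
    [Symbol.asyncIterator]() {
      return {
        next() {
          // Drain buffered chunks first, then report completion.
          if (queue.length > 0) {
            return Promise.resolve({ value: queue.shift(), done: false });
          }
          if (finished) {
            return Promise.resolve({ value: undefined, done: true });
          }
          // Nothing buffered yet: park until the producer pushes.
          return new Promise((resolve) => {
            pendingResolve = resolve;
          });
        },
      };
    },
  };
}
```

With this in place, a callback-based producer can be consumed with `for await...of`, matching the pattern the official SDK uses natively.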

🧪 Error Handling and Debugging

openai throws well-defined error types (APIError, APIConnectionError, etc.) with clear messages and HTTP status codes:

try {
  await openai.chat.completions.create({ /*...*/ });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error('OpenAI API error:', error.status, error.message);
  }
}

chatgpt wraps errors but doesn’t expose underlying HTTP details, making debugging harder:

// chatgpt: Generic error handling
import { ChatGPTAPI } from 'chatgpt';

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY });

try {
  await api.sendMessage('Hi');
} catch (err) {
  console.error('Chat failed:', err.message); // Less actionable info
}

📦 TypeScript and Developer Experience

openai ships with first-class TypeScript support, including complete type definitions for every parameter, response, and error. Your IDE will autocomplete model names, validate message structures, and catch mistakes at compile time.

chatgpt has basic TypeScript definitions, but they’re less comprehensive and don’t reflect the full range of OpenAI’s schema. You’ll often fall back to any or manual casting.

🔄 Message History Management

One area where chatgpt tries to add value is by managing conversation state automatically:

// chatgpt: Built-in conversation tracking
const res = await api.sendMessage("What's my name?", {
  parentMessageId: previousMessageId // ties this turn to an earlier reply
});
// res.id becomes the parentMessageId for the next turn

However, this is easily replicated with openai using standard patterns:

// openai: Manual but explicit history
let messages = [
  { role: 'system', content: 'You are a helpful assistant.' }
];

function addMessage(role, content) {
  messages.push({ role, content });
}

addMessage('user', 'What’s my name?');
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages
});
addMessage('assistant', response.choices[0].message.content);

The explicit approach gives you full control over token usage, context window limits, and message pruning — critical for production apps.
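The pruning mentioned above can be sketched with a simple size heuristic. The budget below counts characters rather than real tokens (a rough stand-in; a production app would use an actual tokenizer), assumes the first message is the system prompt, and always preserves it:

```javascript
// Keep the system message plus the most recent messages that fit a budget.
// maxChars is a crude character-count stand-in for a real token budget.
function pruneHistory(messages, maxChars) {
  const [system, ...rest] = messages;
  const kept = [];
  let used = system.content.length;

  // Walk backwards from the newest message, keeping whatever still fits.
  for (let i = rest.length - 1; i >= 0; i--) {
    const size = rest[i].content.length;
    if (used + size > maxChars) break;
    kept.unshift(rest[i]);
    used += size;
  }
  return [system, ...kept];
}
```

Calling `pruneHistory(messages, budget)` before each `openai.chat.completions.create` call keeps the request inside the context window while retaining the most recent turns.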

📊 Summary: Key Differences

| Feature | openai | chatgpt |
| ------- | ------ | ------- |
| Maintainer | ✅ Official (OpenAI) | ❌ Community |
| API Coverage | 🌐 Full (chat, embeddings, images, etc.) | 💬 Chat completions only |
| Streaming | ⚡ Async iterator (modern) | 📞 Callback-based |
| TypeScript | 🧠 First-class, complete types | 🧩 Basic, incomplete |
| Error Details | 🛠️ Rich, actionable errors | 📦 Generic messages |
| Production Ready | ✅ Yes | ❌ Not recommended |

💡 Final Recommendation

For any professional frontend or full-stack project, use the openai package. It’s reliable, complete, and future-proof. The minor extra setup for message history is worth the control and stability.

Avoid chatgpt unless you’re building a quick demo and understand the trade-offs. In production code, unofficial wrappers introduce unnecessary risk and technical debt.

Remember: when working with AI APIs, explicit is better than implicit. The official SDK gives you the transparency and flexibility needed to build robust, maintainable applications.

How to Choose: chatgpt vs openai

  • chatgpt:

    Choose chatgpt only if you need a minimal, opinionated wrapper strictly for chat-based interactions and are comfortable relying on a community-maintained package that may lag behind API changes. It’s suitable for simple prototypes or hobby projects where you want to avoid boilerplate for message history management, but avoid it in production systems requiring reliability, full API access, or long-term maintainability.

  • openai:

    Choose openai for any serious project—especially production applications—because it’s the official, fully supported SDK that covers all OpenAI endpoints (chat, embeddings, images, fine-tuning, etc.), includes robust error handling, streaming support, and stays current with API updates. Its TypeScript definitions and consistent API design make it the safe, scalable choice for professional frontend and full-stack development.

README for chatgpt

ChatGPT API

Node.js client for the official ChatGPT API.


Intro

This package is a Node.js wrapper around ChatGPT by OpenAI. TS batteries included. ✨

Example usage

Updates

April 10, 2023

This package now fully supports GPT-4! 🔥

We also just released a TypeScript chatgpt-plugin package which contains helpers and examples to make it as easy as possible to start building your own ChatGPT Plugins in JS/TS. Even if you don't have developer access to ChatGPT Plugins yet, you can still use the chatgpt-plugin repo to get a head start on building your own plugins locally.

If you have access to the gpt-4 model, you can run the following to test out the CLI with GPT-4:

npx chatgpt@latest --model gpt-4 "Hello world"

Using the chatgpt CLI with gpt-4

We still support both the official ChatGPT API and the unofficial proxy API, but we now recommend using the official API since it's significantly more robust and supports GPT-4.

| Method | Free? | Robust? | Quality? |
| ------ | ----- | ------- | -------- |
| ChatGPTAPI | ❌ No | ✅ Yes | ✅ Real ChatGPT models + GPT-4 |
| ChatGPTUnofficialProxyAPI | ✅ Yes | ❌ No | ✅ ChatGPT webapp |

Note: We strongly recommend using ChatGPTAPI since it uses the officially supported API from OpenAI. We will likely remove support for ChatGPTUnofficialProxyAPI in a future release.

  1. ChatGPTAPI - Uses the gpt-3.5-turbo model with the official OpenAI chat completions API (official, robust approach, but it's not free)
  2. ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)

Previous Updates
March 1, 2023

The official OpenAI chat completions API has been released, and it is now the default for this package! 🔥

| Method | Free? | Robust? | Quality? |
| ------ | ----- | ------- | -------- |
| ChatGPTAPI | ❌ No | ✅ Yes | ✅ Real ChatGPT models |
| ChatGPTUnofficialProxyAPI | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |

Note: We strongly recommend using ChatGPTAPI since it uses the officially supported API from OpenAI. We may remove support for ChatGPTUnofficialProxyAPI in a future release.

  1. ChatGPTAPI - Uses the gpt-3.5-turbo model with the official OpenAI chat completions API (official, robust approach, but it's not free)
  2. ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)

Feb 19, 2023

We now provide three ways of accessing the unofficial ChatGPT API, all of which have tradeoffs:

| Method | Free? | Robust? | Quality? |
| ------ | ----- | ------- | -------- |
| ChatGPTAPI | ❌ No | ✅ Yes | ☑️ Mimics ChatGPT |
| ChatGPTUnofficialProxyAPI | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
| ChatGPTAPIBrowser (v3) | ✅ Yes | ❌ No | ✅ Real ChatGPT |

Note: I recommend that you use either ChatGPTAPI or ChatGPTUnofficialProxyAPI.

  1. ChatGPTAPI - (Used to use) text-davinci-003 to mimic ChatGPT via the official OpenAI completions API (most robust approach, but it's not free and doesn't use a model fine-tuned for chat)
  2. ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
  3. ChatGPTAPIBrowser - (deprecated; v3.5.1 of this package) Uses Puppeteer to access the official ChatGPT webapp (uses the real ChatGPT, but very flaky, heavyweight, and error prone)

Feb 5, 2023

OpenAI has disabled the leaked chat model we were previously using, so we're now defaulting to text-davinci-003, which is not free.

We've found several other hidden, fine-tuned chat models, but OpenAI keeps disabling them, so we're searching for alternative workarounds.

Feb 1, 2023

This package no longer requires any browser hacks – it is now using the official OpenAI completions API with a leaked model that ChatGPT uses under the hood. 🔥

import { ChatGPTAPI } from 'chatgpt'

const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY
})

const res = await api.sendMessage('Hello World!')
console.log(res.text)

Please upgrade to chatgpt@latest (at least v4.0.0). The updated version is significantly more lightweight and robust compared with previous versions. You also don't have to worry about IP issues or rate limiting.

Huge shoutout to @waylaidwanderer for discovering the leaked chat model!

If you run into any issues, we do have a pretty active ChatGPT Hackers Discord with over 8k developers from the Node.js & Python communities.

Lastly, please consider starring this repo and following me on Twitter to help support the project.

Thanks && cheers, Travis

CLI

To run the CLI, you'll need an OpenAI API key:

export OPENAI_API_KEY="sk-TODO"
npx chatgpt "your prompt here"

By default, the response is streamed to stdout, the results are stored in a local config file, and every invocation starts a new conversation. You can use -c to continue the previous conversation and --no-stream to disable streaming.

Usage:
  $ chatgpt <prompt>

Commands:
  <prompt>  Ask ChatGPT a question
  rm-cache  Clears the local message cache
  ls-cache  Prints the local message cache path

For more info, run any command with the `--help` flag:
  $ chatgpt --help
  $ chatgpt rm-cache --help
  $ chatgpt ls-cache --help

Options:
  -c, --continue          Continue last conversation (default: false)
  -d, --debug             Enables debug logging (default: false)
  -s, --stream            Streams the response (default: true)
  -s, --store             Enables the local message cache (default: true)
  -t, --timeout           Timeout in milliseconds
  -k, --apiKey            OpenAI API key
  -o, --apiOrg            OpenAI API organization
  -n, --conversationName  Unique name for the conversation
  -h, --help              Display this message
  -v, --version           Display version number

If you have access to the gpt-4 model, you can run the following to test out the CLI with GPT-4:

npx chatgpt@latest --model gpt-4 "Hello world"

Install

npm install chatgpt

Make sure you're using node >= 18 so fetch is available (or node >= 14 if you install a fetch polyfill).

Usage

To use this module from Node.js, you need to pick between two methods:

| Method | Free? | Robust? | Quality? |
| ------ | ----- | ------- | -------- |
| ChatGPTAPI | ❌ No | ✅ Yes | ✅ Real ChatGPT models + GPT-4 |
| ChatGPTUnofficialProxyAPI | ✅ Yes | ❌ No | ✅ Real ChatGPT webapp |

  1. ChatGPTAPI - Uses the gpt-3.5-turbo model with the official OpenAI chat completions API (official, robust approach, but it's not free). You can override the model, completion params, and system message to fully customize your assistant.

  2. ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)

Both approaches have very similar APIs, so it should be simple to swap between them.

Note: We strongly recommend using ChatGPTAPI since it uses the officially supported API from OpenAI and it also supports gpt-4. We will likely remove support for ChatGPTUnofficialProxyAPI in a future release.
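Since the two classes expose a near-identical sendMessage-shaped interface, the choice between them can be isolated behind a small factory. In this sketch the constructors are injected as parameters so the selection logic stays testable with fakes; in a real app they would be ChatGPTAPI and ChatGPTUnofficialProxyAPI imported from 'chatgpt'.

```javascript
// Pick an implementation based on which credential is configured.
// OfficialAPI / ProxyAPI are injected constructors; in practice these
// would be ChatGPTAPI and ChatGPTUnofficialProxyAPI from 'chatgpt'.
function makeApi({ apiKey, accessToken }, { OfficialAPI, ProxyAPI }) {
  if (apiKey) {
    return new OfficialAPI({ apiKey });
  }
  if (accessToken) {
    return new ProxyAPI({ accessToken });
  }
  throw new Error('Set OPENAI_API_KEY or OPENAI_ACCESS_TOKEN');
}
```

Because both implementations accept a message string and resolve to a response with a text field, the rest of the application never needs to know which one was chosen.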

Usage - ChatGPTAPI

Sign up for an OpenAI API key and store it in your environment.

import { ChatGPTAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTAPI({
    apiKey: process.env.OPENAI_API_KEY
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}

You can override the default model (gpt-3.5-turbo) and any OpenAI chat completion params using completionParams:

const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  completionParams: {
    model: 'gpt-4',
    temperature: 0.5,
    top_p: 0.8
  }
})

If you want to track the conversation, you'll need to pass the parentMessageId like this:

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })

// send a message and wait for the response
let res = await api.sendMessage('What is OpenAI?')
console.log(res.text)

// send a follow-up
res = await api.sendMessage('Can you expand on that?', {
  parentMessageId: res.id
})
console.log(res.text)

// send another follow-up
res = await api.sendMessage('What were we talking about?', {
  parentMessageId: res.id
})
console.log(res.text)

You can add streaming via the onProgress handler:

const res = await api.sendMessage('Write a 500 word essay on frogs.', {
  // print the partial response as the AI is "typing"
  onProgress: (partialResponse) => console.log(partialResponse.text)
})

// print the full text at the end
console.log(res.text)

You can add a timeout using the timeoutMs option:

// timeout after 2 minutes (which will also abort the underlying HTTP request)
const response = await api.sendMessage(
  'write me a really really long essay on frogs',
  {
    timeoutMs: 2 * 60 * 1000
  }
)

If you want to see more info about what's actually being sent to OpenAI's chat completions API, set the debug: true option in the ChatGPTAPI constructor:

const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  debug: true
})

We default to a basic systemMessage. You can override this in either the ChatGPTAPI constructor or sendMessage:

const res = await api.sendMessage('what is the answer to the universe?', {
  systemMessage: `You are ChatGPT, a large language model trained by OpenAI. You answer as concisely as possible for each response. If you are generating a list, do not have too many items.
Current date: ${new Date().toISOString()}\n\n`
})

Note that we automatically handle appending the previous messages to the prompt and attempt to optimize for the available tokens (which defaults to 4096).

Usage in CommonJS (Dynamic import)

async function example() {
  // To use ESM in CommonJS, you can use a dynamic import like this:
  const { ChatGPTAPI } = await import('chatgpt')
  // You can also try dynamic importing like this:
  // const importDynamic = new Function('modulePath', 'return import(modulePath)')
  // const { ChatGPTAPI } = await importDynamic('chatgpt')

  const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}

Usage - ChatGPTUnofficialProxyAPI

The API for ChatGPTUnofficialProxyAPI is almost exactly the same. You just need to provide a ChatGPT accessToken instead of an OpenAI API key.

import { ChatGPTUnofficialProxyAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTUnofficialProxyAPI({
    accessToken: process.env.OPENAI_ACCESS_TOKEN
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}

See demos/demo-reverse-proxy for a full example:

npx tsx demos/demo-reverse-proxy.ts

ChatGPTUnofficialProxyAPI messages also contain a conversationId in addition to parentMessageId, since the ChatGPT webapp can't reference messages across different accounts & conversations.

Reverse Proxy

You can override the reverse proxy by passing apiReverseProxyUrl:

const api = new ChatGPTUnofficialProxyAPI({
  accessToken: process.env.OPENAI_ACCESS_TOKEN,
  apiReverseProxyUrl: 'https://your-example-server.com/api/conversation'
})

Known reverse proxies run by community members include:

| Reverse Proxy URL | Author | Rate Limits | Last Checked |
| ----------------- | ------ | ----------- | ------------ |
| https://ai.fakeopen.com/api/conversation | @pengzhile | 5 req / 10 seconds by IP | 4/18/2023 |
| https://api.pawan.krd/backend-api/conversation | @PawanOsman | 50 req / 15 seconds (~3 r/s) | 3/23/2023 |

Note: info on how the reverse proxies work is not being published at this time in order to prevent OpenAI from disabling access.

Access Token

To use ChatGPTUnofficialProxyAPI, you'll need an OpenAI access token from the ChatGPT webapp. Several community libraries take an email and password and return an access token; these libraries work only with email + password accounts (e.g., they do not support accounts where you auth via Microsoft / Google).

Alternatively, you can manually get an accessToken by logging in to the ChatGPT webapp and then opening https://chat.openai.com/api/auth/session, which will return a JSON object containing your accessToken string.

Access tokens last for days.

Note: using a reverse proxy will expose your access token to a third-party. There shouldn't be any adverse effects possible from this, but please consider the risks before using this method.

Docs

See the auto-generated docs for more info on methods and parameters.

Demos

Most of the demos use ChatGPTAPI. It should be pretty easy to convert them to use ChatGPTUnofficialProxyAPI if you'd rather use that approach. The only thing that needs to change is how you initialize the api with an accessToken instead of an apiKey.

To run the included demos:

  1. clone repo
  2. install node deps
  3. set OPENAI_API_KEY in .env

A basic demo is included for testing purposes:

npx tsx demos/demo.ts

A demo showing on progress handler:

npx tsx demos/demo-on-progress.ts

The on progress demo uses the optional onProgress parameter to sendMessage to receive intermediary results as ChatGPT is "typing".

A conversation demo:

npx tsx demos/demo-conversation.ts

A persistence demo shows how to store messages in Redis for persistence:

npx tsx demos/demo-persistence.ts

Any keyv adaptor is supported for persistence, and there are overrides if you'd like to use a different way of storing / retrieving messages.

Note that persisting messages is required for remembering the context of previous conversations beyond the scope of the current Node.js process, since by default we only store messages in memory. Here's an external demo of using a completely custom database solution to persist messages.

Note: Persistence is handled automatically when using ChatGPTUnofficialProxyAPI because it is connecting indirectly to ChatGPT.
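The keyv interface mentioned above is small: async get, set, and delete keyed by message id. A minimal in-memory store with that shape (enough to see what a Redis-backed keyv adapter provides) might look like the sketch below; the class name is hypothetical, and in a real deployment you would hand a proper keyv instance to the package's message-store override instead.

```javascript
// Minimal keyv-shaped store: async get/set/delete keyed by message id.
// A real deployment would swap the Map for a Redis-backed keyv adapter.
class MemoryMessageStore {
  constructor() {
    this.map = new Map();
  }
  async get(key) {
    return this.map.get(key); // undefined when the key is absent
  }
  async set(key, value) {
    this.map.set(key, value);
    return true; // keyv's set resolves truthy on success
  }
  async delete(key) {
    return this.map.delete(key); // true if something was removed
  }
}
```

Anything satisfying this get/set/delete contract can back the conversation history, which is why swapping in Redis or another database is a one-line change.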

Projects

All of these awesome projects are built using the chatgpt package. 🤯

If you create a cool integration, feel free to open a PR and add it to the list.

Compatibility

  • This package is ESM-only.
  • This package supports node >= 14.
  • This module assumes that fetch is installed.
    • In node >= 18, it's installed by default.
    • In node < 18, you need to install a polyfill like unfetch/polyfill (guide) or isomorphic-fetch (guide).
  • If you want to build a website using chatgpt, we recommend using it only from your backend API.

Credits

License

MIT © Travis Fischer

If you found this project interesting, please consider sponsoring me or following me on Twitter.