Introducing OpenAI Streams
OpenAI Streams is a library that wraps the OpenAI API and returns responses as streams. This allows for more efficient and fluid integration, particularly in applications that process and display data in real time.
Key Features
- Stream-Only Responses: OpenAI Streams prioritizes streaming API responses. Traditional non-stream endpoints are returned as a single-chunk stream, so applications can render data the moment it arrives.
- Environment Key Management: The library automatically loads your `OPENAI_API_KEY` from the environment (`process.env`), simplifying key management.
- Type Inference: The single-function design infers parameter types from the specified endpoint, ensuring seamless integration with the various endpoints available in the OpenAI API.
- Support for Modern Environments: Built on `ReadableStream`, OpenAI Streams works across browsers, Edge Runtime, and Node 18+. For other Node versions, a variant that returns a `NodeJS.Readable` is available.
Installation
To add OpenAI Streams to your project, you can use either Yarn or npm. Just run:
yarn add openai-streams
# -or-
npm i --save openai-streams
How to Use
To use OpenAI Streams, make an API call as follows:
await OpenAI(ENDPOINT, PARAMS, OPTIONS);
- Endpoint: for example, 'completions' or 'chat'.
- Params: endpoint parameters such as `max_tokens`, `temperature`, or `messages`.
- Options: includes `apiBase`, `apiKey`, `mode`, and `controller`.
API Key Usage: Set the `OPENAI_API_KEY` environment variable or pass the `{ apiKey }` option directly in your code. This key authenticates your API requests.
Function Call: Call `await OpenAI(endpoint, params, options?)` to trigger the API interaction. Parameter types are inferred from the endpoint you pass, ensuring accurate API calls.
Edge and Browser Implementation
For applications running on Edge or within browsers, stream consumption is supported. Here's a basic implementation:
import { OpenAI } from "openai-streams";

export default async function handler() {
  const stream = await OpenAI("completions", {
    model: "text-davinci-003",
    prompt: "Write a happy sentence.\n\n",
    max_tokens: 100,
  });

  return new Response(stream);
}

export const config = {
  runtime: "edge",
};
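On the client, the handler's Response body can be consumed incrementally with a stream reader. The sketch below shows the pattern; the in-memory stream and its tokens are stand-ins for a real completion response, not actual API output:

```typescript
// Build an in-memory ReadableStream of encoded text chunks, standing in
// for the Response body a client would receive from the handler above.
function makeStream(tokens: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      for (const token of tokens) controller.enqueue(encoder.encode(token));
      controller.close();
    },
  });
}

// Read the stream chunk by chunk, decoding bytes to text as they arrive.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // In a UI you would render each chunk immediately; here we accumulate.
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode();
}

readAll(makeStream(["Hello", ", ", "world", "!"])).then((text) => {
  console.log(text); // "Hello, world!"
});
```

This works unchanged in browsers, Edge Runtime, and Node 18+, since all of them expose the same `ReadableStream`, `TextEncoder`, and `TextDecoder` globals.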
Node.js Integration
If your project setup requires Node.js streams, you can easily integrate OpenAI Streams:
import type { NextApiRequest, NextApiResponse } from "next";
import { OpenAI } from "openai-streams/node";

export default async function test(_: NextApiRequest, res: NextApiResponse) {
  const stream = await OpenAI("completions", {
    model: "text-davinci-003",
    prompt: "Write a happy sentence.\n\n",
    max_tokens: 25,
  });

  stream.pipe(res);
}
ChatGPT API Compatibility
The library also supports the ChatGPT API, making it well suited to interactive chat applications. When using `mode = "tokens"`, the stream yields only the message deltas; for the full event payloads, switch to `mode = "raw"`.
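To see the difference between the modes, consider one parsed event from an OpenAI chat stream: the text delta lives under `choices[0].delta.content`. The event object below is a hand-written stand-in for illustration, not captured API output:

```typescript
// Sketch: one "raw"-mode event resembles the OpenAI chat.completion.chunk
// payload. In "tokens" mode the library yields only the delta text itself.
// This event is a hand-written stand-in, not real API output.
const event = {
  id: "chatcmpl-123",
  object: "chat.completion.chunk",
  choices: [{ index: 0, delta: { content: "Bonjour" }, finish_reason: null }],
};

// Extract the text delta, defaulting to "" for events without content
// (e.g. the initial role-only delta or the final empty delta).
const delta = event.choices[0].delta.content ?? "";
console.log(delta); // "Bonjour"
```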
Example of using the ChatGPT API:
const stream = await OpenAI("chat", {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant that translates English to French.",
    },
    {
      role: "user",
      content: 'Translate the following English text to French: "Hello world!"',
    },
  ],
});
In `tokens` mode, the response arrives as a stream of small text chunks, which makes incremental rendering straightforward.
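Because `tokens` mode carries only the text deltas, a client just appends each decoded chunk to the message so far. A minimal sketch of that accumulation, with invented delta strings standing in for a real stream:

```typescript
// Sketch: accumulating streamed message deltas into the full reply.
// These delta strings are invented stand-ins for what a ChatGPT stream
// would emit in "tokens" mode.
const deltas = ["Bonjour", " le", " monde", " !"];

let message = "";
for (const delta of deltas) {
  message += delta; // append each delta as it arrives
  // ...update the UI with the partial message here...
}

console.log(message); // "Bonjour le monde !"
```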
Additional Notes
When working with streams, consider processing data with asynchronous generators. Constructs like `for await (const chunk of yieldStream(stream))` provide an intuitive pattern for stream handling.
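The idea behind that pattern can be sketched without any helper library: wrap a `ReadableStream` reader in an async generator, and callers get `for await` ergonomics. This is a sketch of the technique under Node 18+ globals, not the yield-stream library's actual implementation:

```typescript
// Sketch: wrap a ReadableStream reader in an async generator so callers
// can iterate chunks with `for await`, similar to a yieldStream helper.
async function* chunks(stream: ReadableStream<Uint8Array>) {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value;
    }
  } finally {
    reader.releaseLock(); // release even if the consumer breaks early
  }
}

// Usage with an in-memory stream standing in for an API response:
const encoder = new TextEncoder();
const demoStream = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const part of ["streamed ", "text"]) {
      controller.enqueue(encoder.encode(part));
    }
    controller.close();
  },
});

async function main() {
  const decoder = new TextDecoder();
  let text = "";
  for await (const chunk of chunks(demoStream)) {
    text += decoder.decode(chunk, { stream: true });
  }
  console.log(text); // "streamed text"
}

main();
```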
With its focus on flexibility and real-time data, OpenAI Streams represents a powerful tool for developers looking to integrate advanced AI capabilities into their applications.