Introduction to async-openai
async-openai is an unofficial Rust library for interfacing with OpenAI's APIs. It aims to provide Rust developers with an easy-to-use, asynchronous way to access OpenAI's features, making integration into Rust projects as seamless as possible.
Overview
At its core, async-openai is built upon the OpenAI OpenAPI specification, ensuring compatibility and expected behavior. The library currently supports several key features:
- Assistants (v2)
- Audio handling
- Batch processing
- Chat interfaces
- Legacy Completions
- Embeddings generation
- File handling
- Fine-Tuning of models
- Image generation and manipulation
- Model management
- Content Moderation
- Realtime API types (currently in Beta)
Some areas, such as Organization Management and Uploads, are not yet available, but they are planned, and the library remains under continual development.
The library also supports SSE (Server-Sent Events) streaming where applicable, and employs an exponential backoff strategy for retries when requests are rate limited. Additionally, it uses an ergonomic builder pattern for constructing request objects, making the code more readable and easier to maintain.
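For example, the Chat API combines the builder pattern with SSE streaming. The following is a minimal, illustrative sketch; the model name and prompt are arbitrary choices, and the exact type paths reflect recent releases of the crate and may differ in other versions.
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // The client reads the API key from the OPENAI_API_KEY environment variable.
    let client = Client::new();

    // Request objects are assembled with the builder pattern.
    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Write a haiku about Rust.")
            .build()?
            .into()])
        .build()?;

    // create_stream returns a stream of partial responses delivered over SSE.
    let mut stream = client.chat().create_stream(request).await?;
    while let Some(result) = stream.next().await {
        for choice in result?.choices {
            if let Some(content) = choice.delta.content {
                print!("{content}");
            }
        }
    }

    Ok(())
}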
Usage
To use async-openai, an OpenAI API key is required. By default, the key is read from the environment variable OPENAI_API_KEY. On macOS/Linux, it can be set with:
export OPENAI_API_KEY='sk-...'
In Windows PowerShell, the equivalent command is:
$Env:OPENAI_API_KEY='sk-...'
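The key can also be supplied programmatically instead of relying on the default environment lookup. The sketch below uses the crate's OpenAIConfig type as exposed in recent versions; the MY_OPENAI_KEY variable is purely illustrative, and real keys should never be hard-coded.
use async_openai::{config::OpenAIConfig, Client};

fn main() {
    // Illustrative source for the key (a custom environment variable here,
    // but it could come from a secrets manager or a config file).
    let api_key = std::env::var("MY_OPENAI_KEY").expect("MY_OPENAI_KEY is not set");

    // Build a client with an explicit configuration instead of the default
    // OPENAI_API_KEY lookup.
    let config = OpenAIConfig::new().with_api_key(api_key);
    let _client = Client::with_config(config);
}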
Developers can look at the examples provided in the library’s GitHub repository for practical implementation scenarios. Comprehensive documentation is also available at docs.rs.
Realtime API
Currently, async-openai implements only the types for the Realtime API, available behind the realtime feature flag. These types may change once OpenAI releases its official specification for them.
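To compile these types, the feature must be enabled in Cargo.toml; the version number below is only a placeholder for whichever release is in use:
[dependencies]
async-openai = { version = "0.27", features = ["realtime"] }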
Image Generation Example
Here's an example of how the library can be used to generate images. It creates a client, builds an image request, and saves the returned images to disk:
use async_openai::{
    types::{CreateImageRequestArgs, ImageSize, ImageResponseFormat},
    Client,
};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Create a client; the API key is read from OPENAI_API_KEY.
    let client = Client::new();

    // Build the image request with the builder pattern.
    let request = CreateImageRequestArgs::default()
        .prompt("cats on sofa and carpet in living room")
        .n(2)
        .response_format(ImageResponseFormat::Url)
        .size(ImageSize::S256x256)
        .user("async-openai")
        .build()?;

    // Send the request and save the returned images to ./data.
    let response = client.images().create(request).await?;
    let paths = response.save("./data").await?;

    paths
        .iter()
        .for_each(|path| println!("Image file path: {}", path.display()));

    Ok(())
}
Contributing
async-openai is open to contributions, welcoming new features, bug fixes, and documentation improvements. Code contributions should adhere to a few guidelines to maintain the project's quality:
- Names and documentation should align with OpenAPI specs.
- Contributions need to be tested, with existing tests adjusted as needed.
- Changes should remain within the scope of official API documents.
- Consistency in code style is essential to ensure a smooth developer experience.
The project observes the Rust Code of Conduct, and contributors should reflect this in their interactions and code submissions.
Complementary Crates
Several additional crates enhance async-openai:
- openai-func-enums: Procedural macros that make it easier to use this library with OpenAI's tool-calling feature, including support for parallel tool calls and more.
- async-openai-wasm: This crate offers support for WebAssembly (WASM).
License
async-openai is available under the MIT license, providing flexibility for both open-source and private use. The full license text can be found in the project's GitHub repository.
This introduction highlights async-openai as a valuable tool for Rust developers seeking to leverage OpenAI's capabilities within their applications. Its ease of use, extensive feature set, and ongoing development make it a significant asset in the Rust ecosystem.