Introduction to openai-api-rust
The openai-api-rust project offers a straightforward Rust client for OpenAI's API. It provides a few user-friendly conveniences but otherwise stays closely aligned with the API's core features, making it a good fit for developers who want to integrate OpenAI's capabilities into their applications with minimal hassle.
Installation
Getting started with openai-api-rust is simple. Developers can add the package to their project using Cargo, the Rust package manager, by running:
$ cargo add openai-api
This appends the dependency to the project's Cargo.toml, after which the client is ready to use. Note that the quickstart below also relies on the Tokio runtime, so the tokio crate needs to be added as a dependency as well.
Quickstart
Using openai-api-rust is quick and straightforward. In a minimal example, developers can set up an asynchronous runtime using the Tokio library and interact with the OpenAI API. Here's how a basic setup looks:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the API key from the OPENAI_SK environment variable.
    let api_token = std::env::var("OPENAI_SK")?;
    let client = openai_api::Client::new(&api_token);
    let prompt = String::from("Once upon a time,");
    // Print the prompt followed by the completion the API returns for it.
    println!(
        "{}{}",
        prompt,
        client.complete_prompt(prompt.as_str()).await?
    );
    Ok(())
}
This example showcases a simple API call: the developer retrieves their API key from the OPENAI_SK environment variable, initializes the client, requests a completion for the given prompt, and prints the prompt together with the generated continuation.
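One small refinement worth considering (a sketch, not something the library provides) is to report a clearer error when OPENAI_SK is unset instead of propagating the raw VarError. The helper name client_from_env below is purely illustrative; Client::new is used exactly as in the quickstart.
// A minimal sketch: client_from_env is an illustrative helper, not part of openai-api-rust.
fn client_from_env() -> Result<openai_api::Client, Box<dyn std::error::Error>> {
    let api_token = std::env::var("OPENAI_SK")
        .map_err(|_| "set the OPENAI_SK environment variable to your OpenAI API key")?;
    // Build the client the same way the quickstart does.
    Ok(openai_api::Client::new(&api_token))
}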
Basic Usage
Creating a Completion
For developers who wish to perform simple demonstrations or debugging tasks, openai-api-rust allows for prompt completions. These completions can then be converted into strings using the Display trait on a Completion object:
let response = client.complete_prompt("Once upon a time").await?;
println!("{}", response);
Beyond basic completions, developers have the flexibility to configure prompts with more precision using the CompletionArgs builder. This provides options to specify different parameters like the engine type, maximum tokens, temperature, and more:
let args = openai_api::api::CompletionArgs::builder()
    .prompt("Once upon a time,")
    .engine("text-davinci-003")
    .max_tokens(20)
    .temperature(0.7)
    .top_p(0.9)
    .stop(vec!["\n".into()]);
let completion = client.complete_prompt(args).await?;
println!("Response: {}", completion.choices[0].text);
println!("Model used: {}", completion.model);
This level of customization allows developers to tailor the responses they need, based on the specific requirements of their application.
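As a quick illustration of that flexibility, the sketch below builds the same arguments with two different temperature settings so their effect can be compared side by side. The helper name compare_temperatures is purely illustrative; the builder methods are used exactly as in the snippet above:
// A sketch: compare_temperatures is an illustrative helper, not part of the library.
async fn compare_temperatures(client: &openai_api::Client) -> Result<(), Box<dyn std::error::Error>> {
    for temperature in [0.2, 0.9] {
        let args = openai_api::api::CompletionArgs::builder()
            .prompt("Once upon a time,")
            .engine("text-davinci-003")
            .max_tokens(20)
            .temperature(temperature)
            .stop(vec!["\n".into()]);
        let completion = client.complete_prompt(args).await?;
        // Lower temperatures tend to produce more predictable continuations.
        println!("temperature {}: {}", temperature, completion);
    }
    Ok(())
}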
For more complex examples and use cases, developers are encouraged to refer to the examples provided in the project's repository under examples/. These can help guide users through more sophisticated implementations and offer insights into leveraging openai-api-rust to its full potential.