Project Introduction: openai-multi-client
`openai-multi-client` is a Python library that simplifies making many requests to the OpenAI API at once. It is particularly useful for projects that send a large number of requests to OpenAI's services: the library runs requests concurrently and automatically retries any that fail, letting developers focus on the API's responses rather than on the mechanics of managing many requests.
Motivation
The library was developed for anyone who needs to run large datasets through powerful language models like those provided by OpenAI. Without it, developers would have to issue requests one by one, which is slow and inefficient. `openai-multi-client` removes this bottleneck by handling many requests at once, significantly reducing total processing time.
Features
- Concurrent Requests: Send multiple API requests at the same time, accelerating processes that rely on these interactions.
- Order Management: Choose whether to maintain or disregard the order of requests and their responses.
- Automatic Retries: Configure the library to automatically retry failed requests, minimizing manual error handling.
- Customizable: Tailor the library to specific needs with custom retry settings and API client testing.
- User-Friendly Interface: Designed for ease of use, ensuring developers can implement it without extensive overhead.
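The automatic-retry behaviour listed above can be pictured as a retry loop with exponential backoff. The sketch below uses only the standard library, and every name in it (`with_retries`, `flaky_call`, `max_retries`, `base_delay`) is hypothetical; it illustrates the general pattern the library automates, not the library's internals.

```python
import time

def with_retries(fn, max_retries=3, base_delay=0.01):
    """Call fn(), retrying on failure with exponential backoff.

    Illustrative only: openai-multi-client performs this kind of
    retry for you; the parameter names here are hypothetical.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

# A call that fails twice, then succeeds
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky_call))  # prints "ok" after two retries
```

The library applies this kind of policy to every API request for you, so transient failures never need hand-written error handling.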
Installation
Installing the `openai-multi-client` library is straightforward:

```shell
pip install openai-multi-client
```
Usage Example
The following Python snippet shows how to use `openai-multi-client`:

```python
from openai_multi_client import OpenAIMultiClient

# Assumes your OpenAI API key is configured (see the section below)
api = OpenAIMultiClient(endpoint="chats", data_template={"model": "gpt-3.5-turbo"})

def make_requests():
    for num in range(1, 10):
        api.request(data={
            "messages": [{
                "role": "user",
                "content": f"Can you tell me what is {num} * {num}?"
            }]
        }, metadata={'num': num})

api.run_request_function(make_requests)

for result in api:
    num = result.metadata['num']
    response = result.response['choices'][0]['message']['content']
    print(f"{num} * {num}:", response)
```
If the order of responses matters, use `OpenAIMultiOrderedClient` instead.
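To see why order management matters, compare collecting concurrent results as they finish versus in submission order. This standard-library sketch (not the library's code) mirrors the difference between the unordered and ordered clients:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    # Later submissions finish sooner, so completion order is reversed
    time.sleep(0.05 * (5 - n))
    return n

with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(work, n) for n in range(1, 6)]

    # Unordered: results arrive as they complete (like OpenAIMultiClient)
    unordered = [f.result() for f in as_completed(futures)]

    # Ordered: results kept in submission order (like OpenAIMultiOrderedClient)
    ordered = [f.result() for f in futures]

print(unordered)  # likely [5, 4, 3, 2, 1] -- completion order
print(ordered)    # [1, 2, 3, 4, 5] -- submission order
```

Unordered delivery lets you process each response as soon as it lands; ordered delivery trades a little latency for results that line up with the requests you sent.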
Configuring API Keys and Endpoints
To integrate the library, configure the OpenAI client with your API key:

```python
import openai

openai.api_key = "your_api_key_here"
```
You can also point the client at a different API base if necessary, for example an Azure OpenAI deployment:

```python
openai.api_base = "azure_openai_api_base_here"
```
Implementation Details
The client supports several endpoints, such as `"completions"`, `"chats"`, and `"embeddings"`, each catering to a different type of request. You can attach custom data and metadata to every request, allowing detailed and precise API interactions.
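One way to picture how the shared `data_template` combines with per-request `data` is a plain dictionary merge, with per-request fields layered on top of the template's defaults. The helper below is a hypothetical illustration of that pattern, not the library's actual implementation:

```python
def build_payload(template, data):
    # Hypothetical sketch: per-request data overrides template keys,
    # so every request inherits shared defaults like the model name.
    merged = dict(template)
    merged.update(data)
    return merged

template = {"model": "gpt-3.5-turbo"}
payload = build_payload(template, {"messages": [{"role": "user", "content": "2 * 2?"}]})
print(payload["model"])       # the default carried over from the template
print("messages" in payload)  # the per-request data merged in
```

This is why the usage example above only passes `messages` per request: the model name comes from the template given at construction time.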
Contribution and Licensing
`openai-multi-client` is open for contributions, welcoming improvements and suggestions via GitHub. The project is released under the MIT License, which permits free use and modification within its terms.
For developers looking to optimize their OpenAI API interactions, this library offers rapid, efficient, and reliable management of multiple concurrent requests.