Introduction to openai-style-api
The openai-style-api project is a versatile tool that streamlines interaction with a variety of large AI models by exposing them through a standardized OpenAI API format. It manages API key redistribution and configurable per-model parameters, letting users concentrate on just the API key and the messages.
Purpose
The primary goal of openai-style-api is to smooth over the discrepancies between different large language models' APIs. By providing a uniform interface based on the OpenAI API format, it simplifies working with these models. It also supports API key redistribution management, so integrating and switching between different AI models remains seamless.
Key Features
- Multi-Model Support: The project supports a wide array of large AI models, including:
  - OpenAI
  - Azure OpenAI
  - Claude API (pending testing)
  - Claude Web (wrapped as an OpenAI-style API)
  - Zhipu AI
  - Kimi
  - Bingchat (Copilot)
  - iFlytek Spark
  - Gemini
  - Tongyi Qianwen

  Support for more models is continuously expanding; Baidu Wenxin Yiyan, among others, is under consideration.
- Stream-Based Invocation: Models can be called in streaming mode, returning output as it is generated (see the sketch after this list).
- Proxy Service Support for OpenAI: Compatible with third-party proxy services such as openai-sb, providing flexibility in deployment.
- Online Configuration Updates: Configurations can be updated online through a web interface at http://0.0.0.0:8090/.
- Load Balancing: A single API key can be mapped to multiple models and invoked in round-robin, random, or parallel mode.
- Routing by Model Name: Requests can be routed to different backends based on the model name, enabling intelligent distribution of requests across models.
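As a quick illustration of streaming through the unified interface, here is a minimal sketch using the official OpenAI Python client. It assumes the service is reachable at http://localhost:8090/v1 and that `my-token-1` is a token defined in `model-config.json`; both values are placeholders.

```python
from openai import OpenAI

# Placeholder values: base_url and api_key must match your deployment
# and the tokens defined in model-config.json.
client = OpenAI(
    base_url="http://localhost:8090/v1",
    api_key="my-token-1",
)

# Stream the response chunk by chunk instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```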
Deployment Options
The project can be deployed in various ways:
- Docker: Create a `model-config.json` file locally based on the provided configuration example, then deploy using the Docker commands sketched below.
- Docker Compose: Clone the project or download the `docker-compose.yml` file, adjust the model configuration path, and deploy the project using Docker Compose.
- Local Deployment: Clone the repository, copy the default configuration to `model-config.json`, modify it as necessary, install the required dependencies via pip, and run the service using a Python script.
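The commands below are a minimal sketch of the three options. The Docker image name, in-container config path, dependency file, and entry-point script are placeholders; check the repository for the exact values.

```bash
# Docker: mount your local model-config.json into the container
# (image name and in-container path are placeholders).
docker run -d --name openai-style-api \
  -p 8090:8090 \
  -v "$(pwd)/model-config.json:/app/model-config.json" \
  <openai-style-api-image>

# Docker Compose: run from the directory containing docker-compose.yml.
docker compose up -d

# Local deployment: run from a clone of the repository.
pip install -r requirements.txt  # dependency file name assumed
python main.py                   # placeholder; use the script named in the repo
```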
Configuration Example
The essential configuration file, `model-config.json`, allows defining multiple model setups. Each configuration specifies a token and a model type, along with model-specific parameters such as the API base, API key, and model settings like temperature.
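For illustration only, an entry might look like the following. The field names here are inferred from the description above (token, type, API base, API key, temperature) and may not match the project's actual schema exactly.

```json
[
  {
    "token": "my-token-1",
    "type": "openai",
    "config": {
      "api_base": "https://api.openai.com/v1",
      "api_key": "sk-xxxxxxxx",
      "model": "gpt-3.5-turbo",
      "temperature": 0.7
    }
  }
]
```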
Usage
Users can interact with the service using curl or the official OpenAI client library; because the service exposes the standard OpenAI API format, existing OpenAI-based code typically only needs its base URL and API key changed.
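For example, a chat completion request via curl might look like this sketch, which uses the standard OpenAI chat-completions path. The host, port, token, and model name are placeholders that must match your deployment and `model-config.json`.

```bash
curl http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-token-1" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```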
Support and Contributions
The project is open to contributions, with users encouraged to report issues or submit pull requests to enhance and update the service. Various open-source projects have influenced and supported the development of this tool.
Acknowledgments
The openai-style-api project extends its gratitude towards the open-source community for providing pivotal resources that have been adapted and incorporated into this tool.
By providing a straightforward, unified API management layer, openai-style-api enables efficient and uniform interaction with numerous AI models, simplifying deployment and management for developers and businesses alike.