Introduction to the openai-api-proxy Project
The openai-api-proxy project is a versatile tool designed to streamline access to OpenAI's API. It achieves this by operating as a proxy that can be easily deployed using a simple Docker command or on cloud functions. This solution is particularly appealing for users who seek a straightforward setup process and efficient integration into their systems.
Deployment Flexibility
One of the outstanding features of this proxy is its flexibility in deployment. Users can choose between deploying it via Docker or cloud functions, making it adaptable to various environments. This versatility ensures that the proxy can operate efficiently, regardless of the infrastructure in place.
Docker Deployment
Deploying the proxy with Docker can be accomplished in one line:
docker run -p 9000:9000 easychen/ai.level06.com:latest
This command starts the proxy and makes it reachable on port 9000 at the host's IP address.
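Once the container is up, a quick sanity check is to issue a standard OpenAI request through it; this is a sketch assuming the proxy runs locally and that a real OpenAI key is substituted:

# List available models through the proxy; a JSON model list confirms forwarding works
curl http://localhost:9000/v1/models \
  -H "Authorization: Bearer sk-..."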
NodeJS Deployment
Alternatively, users can deploy the application with NodeJS in any environment running NodeJS 14 or newer. The steps are to copy app.js and package.json into the target directory, install dependencies with yarn install, and start the service with node app.js.
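Put together, the NodeJS deployment amounts to a few shell commands (a sketch assuming app.js and package.json are already in the current directory):

# Install the dependencies declared in package.json
yarn install

# Start the proxy; it listens on the port given by the PORT environment variable
node app.js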
Key Features
- SSE Streaming: The proxy supports Server-Sent Events (SSE) for real-time data streaming, a crucial feature for applications requiring immediate updates; a streaming sketch follows this list.
- Text Moderation: With built-in text moderation capabilities (requiring Tencent Cloud configuration), the proxy ensures that content adheres to specified guidelines. It provides options for different moderation levels, allowing users to fine-tune the balance between content control and freedom.
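As an illustration of the streaming path, the sketch below requests a streamed chat completion through the proxy using Node's built-in fetch. It assumes Node 18+ run as an ES module, a proxy listening on localhost:9000, and a valid OpenAI key; none of these specifics come from the project itself.

// Request a streamed completion; the proxy relays the Server-Sent Events.
const response = await fetch('http://localhost:9000/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer sk-...', // append ':<PROXY_KEY>' if one is configured
  },
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true, // ask for SSE, which the proxy passes through
  }),
});

// Print chunks as they arrive instead of waiting for the full reply.
const decoder = new TextDecoder();
for await (const chunk of response.body) {
  process.stdout.write(decoder.decode(chunk, { stream: true }));
}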
Environment Configuration
The project offers several environment variables for customization; a configuration sketch follows the list:
- PORT: Specifies the service port.
- PROXY_KEY: An access key for controlling access to the proxy.
- TIMEOUT: Defines the request timeout duration, with a default of 30 seconds.
- TENCENT_CLOUD_* variables: Settings for the Tencent Cloud integration behind text moderation, such as the secret ID, secret key, and region.
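With Docker, these variables can be passed straight to the container at startup; the key value below is an illustrative placeholder, not a project default:

# Require clients to present an access key (value is a placeholder)
docker run -p 9000:9000 \
  -e PROXY_KEY=your_access_key \
  easychen/ai.level06.com:latest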
API Integration
Integrating the proxy with existing applications involves changing the base URL of OpenAI API requests so that they point to the proxy's domain or IP address. If a PROXY_KEY has been configured, clients authenticate by appending it to their OpenAI key after a colon.
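In raw HTTP terms, a request through the proxy looks like the sketch below; the host, OpenAI key, and proxy key are all placeholders:

curl http://localhost:9000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-...:<proxy_key_here>" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'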
Limitations and Considerations
The current setup supports only GET and POST requests and does not handle file-related interfaces. SSE, as noted above, is supported, which keeps the proxy usable for real-time streaming applications.
Example of Client-side Usage
For developers integrating this proxy into an application, a usage example based on the chatgpt package follows:
import * as gpt from 'chatgpt' // the chatgpt npm package

const chatApi = new gpt.ChatGPTAPI({
  apiKey: 'sk.....:<proxy_key_here>', // OpenAI key with PROXY_KEY appended after a colon
  apiBaseUrl: 'http://localhost:9001/v1', // Replace with the proxy's domain/IP and port
});
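With the client configured, calls go through the proxy transparently. For instance, using the chatgpt package's standard sendMessage call:

// Send a prompt through the proxy and print the assistant's reply
const res = await chatApi.sendMessage('Hello World!')
console.log(res.text)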
Acknowledgements
The project draws inspiration from the chatgpt-api project, especially in its implementation of SSE streaming functionalities.
In summary, the openai-api-proxy provides a robust, flexible, and straightforward solution for integrating OpenAI's API into diverse environments, ensuring both security and efficiency.