Gemini-OpenAI-Proxy Project Overview
The Gemini-OpenAI-Proxy is a tool that serves as a bridge between the OpenAI API protocol and Google's Gemini protocol. It allows applications built against the OpenAI API to communicate with Gemini instead, without code changes, and it supports the Chat Completion, Embeddings, and Models endpoints.
Build
To build the Gemini-OpenAI-Proxy, run the following command, which compiles the project into an executable named gemini:
go build -o gemini main.go
The only build requirement is a working Go toolchain.
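Once built, the proxy can be started directly. A minimal smoke test follows; the default listen port of 8080 is an assumption carried over from the Docker examples below, so adjust it if your setup differs:

./gemini
# The proxy should now accept OpenAI-style requests at http://localhost:8080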
Deploy
Deploying the Gemini-OpenAI-Proxy with Docker gives you a portable, self-contained setup. Use either of the following methods:
Command Line Method
Execute this command to deploy the proxy:
docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
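After the container starts, it is worth confirming it is healthy before pointing clients at it. These are standard Docker commands; the container name matches the --name flag above:

docker ps --filter name=gemini   # container should be listed as Up
docker logs gemini               # inspect startup output for errors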
Docker Compose Method
Alternatively, use Docker Compose with the configuration below for deployment:
version: '3'
services:
  gemini:
    container_name: gemini
    environment:
      - GPT_4_VISION_PREVIEW=gemini-1.5-flash-latest
      - DISABLE_MODEL_MAPPING=0
    ports:
      - "8080:8080"
    image: zhu327/gemini-openai-proxy:latest
    restart: unless-stopped
Adjust the port and image version as required to fit your deployment environment.
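Save the configuration as docker-compose.yml and bring the service up (older Docker installations use the standalone docker-compose binary instead of the compose plugin). Note that DISABLE_MODEL_MAPPING=0 keeps the proxy's built-in OpenAI-to-Gemini model mapping active; setting it to 1 reportedly lets clients request Gemini models by their native names.

docker compose up -d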
Usage
The Gemini-OpenAI-Proxy lets any application that can target a custom OpenAI-compatible endpoint use Google Gemini instead. Here's how to use the proxy:
- Set Up OpenAI Endpoint: Configure your application to use a custom OpenAI API endpoint pointing at the proxy. The proxy works with any OpenAI-compatible client (see the environment variable sketch after this list).
- Acquire Google AI Studio API Key: Obtain an API key from Google AI Studio. Use this key in place of your OpenAI API key when talking to the Gemini-OpenAI-Proxy.
- Integrate Proxy into Application: Direct your application's API requests to the proxy by replacing the endpoint URL and supplying the Google AI Studio API key.
- Chat Completion Example:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Say this is a test!"}], "temperature": 0.7}'
- Embeddings Example:
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
  -d '{"model": "text-embedding-ada-002", "input": "This is a test sentence."}'
Consider changing the GPT_4_VISION_PREVIEW environment variable if you need requests for that model alias mapped to a different Gemini model.
- Handle Responses: Process responses the same way you would process responses from OpenAI's own services.
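As an illustration of the endpoint configuration described above, many OpenAI client libraries and tools read their endpoint and key from environment variables. The variable names below are the ones honored by recent official OpenAI SDKs; check your client's documentation if it predates them:

export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="$YOUR_GOOGLE_AI_STUDIO_API_KEY"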
The proxy effectively extends your application's capabilities, letting any OpenAI-compatible client harness Google Gemini models through the OpenAI protocol.
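To see which OpenAI model names the proxy exposes after deployment, the models endpoint can be queried in the usual OpenAI fashion. A quick check; the response is assumed to follow the OpenAI models-list schema:

curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY"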
Compatibility
For compatibility notes, discussion, and issue reports, see the project's GitHub page.
License
Gemini-OpenAI-Proxy is distributed under the MIT License. For more details about its usage and distribution rights, refer to the LICENSE file.