azure-openai-proxy Project Overview
The Azure OpenAI Service Proxy bridges the gap between OpenAI's official API and Microsoft's Azure OpenAI API. It translates requests made in the OpenAI format into Azure OpenAI calls, allowing seamless interaction with all supported models, including GPT-4 and Embeddings. Users can therefore point existing OpenAI-based tools at the Azure OpenAI service without incurring additional costs or making significant changes to their systems.
Key Features of azure-openai-proxy
- Support for All Models: The proxy supports all models, including the latest offerings such as GPT-4, by exposing them through the standard OpenAI request format (see the request sketch below this list).
- Zero-Cost Transition: Tools in the OpenAI ecosystem can reach Azure's offerings at no additional cost, making the transition smooth and hassle-free.
- Verified with Several Projects: Confirmed to work with projects such as chatgpt-web, chatbox, langchain, and ChatGPT-Next-Web.
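Because the proxy exposes the same HTTP surface as the official API, an OpenAI-style request can be pointed at it unchanged. A minimal sketch, assuming the proxy is running locally on port 8080 (the default in the Docker examples below) and that the Azure OpenAI key is sent where an OpenAI key would normally go, as the docker-compose example later suggests:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <Azure OpenAI API Key>" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'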
Getting Started
Key and Endpoint Retrieval
To use Azure OpenAI through this proxy, users need to gather three pieces of information:
- AZURE_OPENAI_ENDPOINT: The endpoint URL shown in the Azure portal or in Azure OpenAI Studio, required for connecting to Azure services.
- AZURE_OPENAI_API_VER: The API version to use, listed in the Azure documentation or in the Studio.
- AZURE_OPENAI_MODEL_MAPPER: A mapping from official OpenAI model names to the models deployed on Azure, in the format openai_model_name=azure_deployment_name.
Example Configuration:
AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo
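Multiple mappings can be supplied in one variable, separated by commas. A sketch, where gpt-4-deploy stands in for a hypothetical Azure deployment name:
AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo,gpt-4=gpt-4-deploy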
Setting Up a Proxy
Users can configure both HTTP and SOCKS5 proxies by setting the appropriate environment variable:
- HTTP proxy: AZURE_OPENAI_HTTP_PROXY=http://127.0.0.1:1087
- SOCKS5 proxy: AZURE_OPENAI_SOCKS_PROXY=socks5://127.0.0.1:1080
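When the proxy itself runs in Docker (next section), note that 127.0.0.1 inside the container refers to the container, not the host. A sketch passing the proxy setting through docker run, assuming Docker Desktop's host.docker.internal alias is available:
docker run -d -p 8080:8080 --name=azure-openai-proxy \
    --env AZURE_OPENAI_ENDPOINT=your_azure_endpoint \
    --env AZURE_OPENAI_API_VER=your_azure_api_ver \
    --env AZURE_OPENAI_MODEL_MAPPER=your_azure_deploy_mapper \
    --env AZURE_OPENAI_HTTP_PROXY=http://host.docker.internal:1087 \
    stulzq/azure-openai-proxy:latest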
Using Docker
Docker provides a straightforward way to run the proxy, with configuration supplied through environment variables or a configuration file. Example docker run command:
docker run -d -p 8080:8080 --name=azure-openai-proxy \
--env AZURE_OPENAI_ENDPOINT=your_azure_endpoint \
--env AZURE_OPENAI_API_VER=your_azure_api_ver \
--env AZURE_OPENAI_MODEL_MAPPER=your_azure_deploy_mapper \
stulzq/azure-openai-proxy:latest
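After the container starts, standard Docker commands confirm it is up and show its startup output:
docker ps --filter name=azure-openai-proxy   # container should be listed as Up
docker logs azure-openai-proxy               # inspect startup output for errors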
Integration with Applications
Azure OpenAI Proxy can be integrated with applications such as ChatGPT-Next-Web and ChatGPT-Web through services defined in a docker-compose.yml file, which declares the services, their dependencies, and the environment needed for smooth operation.
Example for ChatGPT-Next-Web:
version: '3'

services:
  chatgpt-web:
    image: yidadaa/chatgpt-next-web
    ports:
      - 3000:3000
    environment:
      OPENAI_API_KEY: <Azure OpenAI API Key>
      BASE_URL: http://azure-openai:8080
    depends_on:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: <Azure OpenAI API Endpoint>
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge
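Note that BASE_URL points at the Compose service name azure-openai rather than localhost: both containers join the chatgpt-ns bridge network, so the web UI reaches the proxy by service name, and the Azure OpenAI key is supplied where the web app expects an OpenAI key.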
Configuration File Usage
The project also supports YAML configuration files, which can specify a different endpoint and API key for each model. This lets users tailor the proxy to their exact requirements and deployment architecture.
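The project's documentation defines the exact schema; as a rough, illustrative sketch of the idea, a per-model entry could pair an OpenAI model name with its Azure deployment, endpoint, key, and API version (all field names and values here are assumptions, not the authoritative format):
deployment_config:
  - model_name: "gpt-3.5-turbo"                           # name clients request
    deployment_name: "gpt-35-turbo"                       # Azure deployment (hypothetical)
    endpoint: "https://example-eastus.openai.azure.com/"  # per-model endpoint
    api_key: "<Azure OpenAI API Key>"
    api_version: "2023-07-01-preview"
  - model_name: "gpt-4"
    deployment_name: "gpt-4-deploy"                       # hypothetical
    endpoint: "https://example-westus.openai.azure.com/"
    api_key: "<Azure OpenAI API Key>"
    api_version: "2023-07-01-preview"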
Running the Proxy
After configuration, the proxy service can be started with Docker Compose, bringing up the complete stack with minimal manual effort.
docker compose up -d
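Subsequent maintenance uses the usual Compose subcommands:
docker compose ps        # list service status
docker compose pull      # fetch updated images
docker compose up -d     # recreate containers with new images
docker compose down      # stop and remove the stack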
By integrating Azure OpenAI Proxy within their systems, users can leverage the capabilities of Azure's AI services without disrupting existing workflows, benefiting from a unified and efficient API environment.