Introduction to chatgptProxyAPI
chatgptProxyAPI is a project designed to facilitate the use of OpenAI's APIs by employing intermediate services like Cloudflare Workers and Pages to handle requests. The primary aim is to enhance accessibility and manage high traffic loads effectively. Here's an in-depth look at the features and deployment options of chatgptProxyAPI.
Cloudflare Workers for API Relay
The core functionality of chatgptProxyAPI involves using Cloudflare Workers to route requests to api.openai.com. The process involves:
- Creating a Cloudflare Worker.
- Deploying the code from cf_worker.js.
- Binding an unblocked domain name to the Worker.
- Replacing api.openai.com with your own domain name.
For more detailed instructions, you can refer to the tutorial.
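Conceptually, the Worker is a pass-through: it takes a request arriving on your domain, rewrites the host to api.openai.com, and forwards everything else unchanged. Here is a minimal sketch of that idea using the standard Workers module syntax (the project's actual code ships as cf_worker.js, which may differ):

export default {
  async fetch(request) {
    // Rewrite the incoming URL so it points at the upstream API
    const url = new URL(request.url);
    url.host = "api.openai.com";
    // Forward the original method, headers, and body unchanged
    return fetch(new Request(url, request));
  }
};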
Cloudflare Pages for API Relay
Another method provided by the project is using Cloudflare Pages. This approach not only relays requests but also allows you to query the OpenAI API balance.
Deployment Steps:
- Use the project template by clicking Use this template to create a new repository.
- Log into the Cloudflare Dashboard.
- Navigate to Pages > Create a project > Connect to Git.
- Select your repository and follow the defaults in Set up builds and deployments.
- Deploy and acquire your access domain.
To use this setup, replace https://api.openai.com with your Cloudflare Pages domain, e.g. https://xxx.pages.dev.
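In practice, an existing client only needs its base URL changed; the path and everything after it stays the same (xxx.pages.dev below is a placeholder for your actual Pages domain):

// Before: requests go directly to OpenAI
// const baseUrl = "https://api.openai.com";
// After: requests go through your Cloudflare Pages proxy
const baseUrl = "https://xxx.pages.dev";
const url = `${baseUrl}/v1/chat/completions`;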
Check out the full tutorial for more information on this setup.
Standalone API Deployment
For those interested in deploying only the API relay feature, you can follow the instructions.
Docker Deployment
Docker deployment is another option, typically run on a VPS outside certain geographical restrictions. However, SSE (Server-Sent Events) may not be supported in this mode, so it is less advisable for streaming use cases.
Example for deploying via Docker:
docker run -itd --name openaiproxy \
-p 3000:3000 \
--restart=always \
gindex/openaiproxy:latest
The usage would be something like:
curl --location 'http://vpsip:3000/proxy/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Usage
Examples of how to utilize the proxy via different programming languages are provided below:
Using JavaScript with fetch API
const requestOptions = {
  method: 'POST',
  headers: {
    "Authorization": "Bearer sk-xxxxxxxxxxxx",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "hello world"
      }
    ]
  })
};

fetch("https://openai.1rmb.tk/v1/chat/completions", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
Using Python
import requests
import json

url = "https://openai.1rmb.tk/v1/chat/completions"
api_key = 'sk-xxxxxxxxxxxxxxxxxxxx'

headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "hello world"
        }
    ]
}

try:
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()  # raise for 4xx/5xx responses
    data = response.json()
    print(data)
except requests.exceptions.RequestException as e:
    print(f"Request error: {e}")
except json.JSONDecodeError as e:
    print(f"Invalid JSON response: {e}")
Using Node.js with chatgpt API
This example uses the transitive-bullshit/chatgpt-api library:
import { ChatGPTAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTAPI({
    apiKey: "sk-xxxxxxxxxxxxxx",
    apiBaseUrl: "https://openai.1rmb.tk/v1"
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}

example()
Querying API Balance
Users can also query their OpenAI API usage and billing information using this setup. Here's an example of how to achieve this using JavaScript:
const headers = {
  'content-type': 'application/json',
  'Authorization': `Bearer sk-xxxxxxxxxxxxxxxxx`
}

// Format a Date as YYYY-MM-DD for the usage endpoint
const formatDate = (date) => date.toISOString().slice(0, 10)

async function checkBilling() {
  // Check subscription status
  const subscription = await fetch("https://openai.1rmb.tk/v1/dashboard/billing/subscription", {
    method: 'get',
    headers: headers
  })
  const subscriptionData = await subscription.json()
  if (!subscription.ok) {
    return subscriptionData
  }

  // access_until is a Unix timestamp in seconds; Date expects milliseconds
  const endDate = new Date(subscriptionData.access_until * 1000)
  const startDate = new Date(endDate.getTime() - 90 * 24 * 60 * 60 * 1000)

  // Query usage over the last 90 days of the subscription period
  const response = await fetch(`https://openai.1rmb.tk/v1/dashboard/billing/usage?start_date=${formatDate(startDate)}&end_date=${formatDate(endDate)}`, {
    method: 'get',
    headers: headers
  })
  const usageData = await response.json()
  const plan = subscriptionData.plan.id
  console.log(plan, usageData)
}

checkBilling()
Conclusion
chatgptProxyAPI serves a versatile role in managing OpenAI API requests, providing users with various deployment options to suit different needs. Whether through Cloudflare Workers, Pages, or Docker, the project aims to enhance accessibility and reliability for users interacting with OpenAI's services.