OpenOpenAI: An Introduction
OpenOpenAI is an innovative project designed to offer a self-hosted version of OpenAI's stateful Assistants API. This project aims to give users greater control and flexibility by allowing them to run their own OpenAI Assistants that are 100% compatible with the official versions.
Why OpenOpenAI?
The rationale behind OpenOpenAI is to enable users to harness the power of OpenAI Assistants while being able to customize and control their configurations. This compatibility opens up new possibilities such as:
- Utilizing custom models compatible with OpenAI's Assistants.
- Implementing a fully customizable Retrieval-Augmented Generation (RAG).
- Employing custom code interpreters to enhance functionality.
- Enabling self-hosted or on-premise deployments.
- Providing complete control over assistant evaluations and testing.
Considering the potential future growth of OpenAI's "GPT Store," the ability to run and debug OpenAI-compatible Assistants could prove invaluable.
Project Stack
OpenOpenAI utilizes a variety of tools and platforms to deliver its services effectively:
- Postgres: the primary datastore, accessed via Prisma.
- Redis: backs an asynchronous task queue managed with BullMQ.
- S3-Compatible Storage: stores uploaded files.
- Hono: serves the REST API.
- TypeScript: the main programming language.
Development
To get started with developing OpenOpenAI, you'll need Node.js (v18 or higher) and pnpm (v8 or higher). After installing the dependencies and generating the Prisma types, configure environment variables to tailor your setup. The project supports configuration of PostgreSQL, OpenAI API keys, Redis, and S3 storage settings.
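As a sketch, that configuration might live in a .env file along these lines (the variable names here are illustrative assumptions, not taken from the project's docs):

```bash
# Hypothetical .env sketch -- check the project's own docs for the actual names
DATABASE_URL="postgresql://user:password@localhost:5432/openopenai"
OPENAI_API_KEY="sk-..."
REDIS_URL="redis://localhost:6379"
S3_BUCKET="openopenai-files"
S3_ACCESS_KEY_ID="..."
S3_SECRET_ACCESS_KEY="..."
```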
Services
The application comprises two key services:
- A RESTful API server.
- An asynchronous task runner.
Both services can run simultaneously and scale horizontally as needed. For a quick start, run them directly with tsx; for a production-ready setup, transpile the TypeScript to JavaScript first.
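To illustrate why the API server and task runner are split, here is a minimal, dependency-free TypeScript sketch: an in-memory queue stands in for the Redis/BullMQ queue the project actually uses, and all names below are illustrative rather than taken from OpenOpenAI's source.

```typescript
// In-memory stand-in for a Redis-backed BullMQ queue (illustrative only).
type RunTask = { runId: string; threadId: string };

const queue: RunTask[] = [];
const results = new Map<string, string>();

// The API server's job: accept a request, enqueue the work, and return quickly.
function enqueueRun(task: RunTask): void {
  queue.push(task);
}

// The task runner's job: drain the queue and do the slow work asynchronously.
function processQueue(): void {
  let task: RunTask | undefined;
  while ((task = queue.shift()) !== undefined) {
    // In the real system this step would call the model, run tools, etc.
    results.set(task.runId, `completed run for thread ${task.threadId}`);
  }
}

enqueueRun({ runId: "run_1", threadId: "thread_1" });
processQueue();
console.log(results.get("run_1")); // completed run for thread thread_1
```

Because the queue is the only coupling point, each side can be scaled independently, which is what makes the horizontal scaling mentioned above straightforward.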
E2E Examples
OpenOpenAI provides comprehensive end-to-end examples to showcase its functionality:
- Custom Function Example: demonstrates how to integrate a custom function such as `get_weather`.
- Retrieval Tool Example: shows the built-in retrieval tool handling file attachments.
Each example can be run against either the official OpenAI API or a custom OpenOpenAI endpoint with only minimal differences.
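As a sketch of what the custom-function flow looks like on the client side, the TypeScript below resolves a `get_weather` tool call locally. The object shapes mirror the OpenAI tool-call format, but the stubbed weather values and helper names are hypothetical.

```typescript
// Minimal sketch of dispatching an assistant's tool call to a local function.
type ToolCall = { id: string; function: { name: string; arguments: string } };
type ToolOutput = { tool_call_id: string; output: string };

// Stubbed stand-in for the example's get_weather function (values are made up).
function getWeather(location: string): { location: string; temperature: number; unit: string } {
  return { location, temperature: 57, unit: "F" };
}

function resolveToolCall(call: ToolCall): ToolOutput {
  const args = JSON.parse(call.function.arguments);
  switch (call.function.name) {
    case "get_weather":
      return { tool_call_id: call.id, output: JSON.stringify(getWeather(args.location)) };
    default:
      throw new Error(`unknown tool: ${call.function.name}`);
  }
}

const out = resolveToolCall({
  id: "call_1",
  function: { name: "get_weather", arguments: '{"location":"San Francisco"}' },
});
console.log(out.output);
```

In the real flow, the resulting tool outputs would be submitted back to the in-progress run so the assistant can finish its response.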
Server Routes
OpenOpenAI supports a range of API server routes for managing files, assistants, threads, messages, and more. The routes are designed to be compatible with the OpenAI API, providing users a familiar yet flexible interface.
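Because the routes mirror the official API, path shapes are predictable. A small helper for building them might look like this (the shapes follow the OpenAI API; the `/v1` prefix and helper names are assumptions for illustration):

```typescript
// Illustrative helpers for the OpenAI-compatible path shapes OpenOpenAI serves.
const base = "/v1"; // hypothetical prefix, matching the official API's convention

function assistantPath(assistantId: string): string {
  return `${base}/assistants/${assistantId}`;
}

function threadMessagesPath(threadId: string): string {
  return `${base}/threads/${threadId}/messages`;
}

console.log(assistantPath("asst_123"));        // /v1/assistants/asst_123
console.log(threadMessagesPath("thread_123")); // /v1/threads/thread_123/messages
```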
Current Status and Future Tasks
All currently available API routes have been tested and function as expected. Future development tasks include:
- Creating a hosted demo.
- Implementing hosted Redis.
- Enhancing the code interpreter tool functionality.
- Expanding support for non-text files.
License
OpenOpenAI is licensed under the MIT License by Travis Fischer. The project is open-source, and contributions or sponsorships are welcomed.