OpenAOE: An Innovative AI Framework
What is OpenAOE?
OpenAOE takes its name from the game DOTA2, where "Area Of Effect" (AOE) refers to abilities that impact a group of targets within a specific area. In the context of Artificial Intelligence (AI), OpenAOE allows users to obtain simultaneous outputs from multiple large language models (LLMs) with a single prompt. This parallel approach sets it apart from existing chat frameworks, which typically query one model at a time.
The Problem OpenAOE Addresses
Despite the abundance of open-source chat frameworks built on ChatGPT-like technologies, the need for LGC (LLM Group Chat) remained unmet until the emergence of OpenAOE. OpenAOE bridges this gap by enabling LLM researchers, evaluators, engineering developers, and even non-professionals to swiftly access a variety of both commercial and open-source LLMs. It offers both a single-model serial response mode and a multi-model parallel response mode, broadening the accessibility and applicability of LLMs.
Key Benefits of OpenAOE
OpenAOE offers several distinctive advantages:
- Simultaneous Responses: Users can receive responses from multiple LLMs at the same time with a single prompt.
- Access to Commercial LLM APIs: OpenAOE facilitates integration with well-known commercial LLMs, including GPT-3.5, GPT-4, Google PaLM, Minimax, Claude, and Spark. Users can also define and incorporate other large-model APIs, provided the corresponding API keys are prepared beforehand.
- Support for Open-Source LLMs: The framework supports open-source LLM APIs, with the recommendation to use LMDeploy for single-click deployment.
- Comprehensive API and Web Interface: OpenAOE provides robust backend APIs and a web-based user interface to cater to diverse requirements and user needs.
Quick Start Guide
For those eager to explore OpenAOE, there are three main ways to set it up: by using pip, Docker, or directly from the source code.
Run by pip:

- Install:

  ```shell
  pip install -U openaoe
  ```

- Start:

  ```shell
  openaoe -f /path/to/your/config-template.yaml
  ```
Run by Docker:

- Install:

  Option 1: pull the latest Docker image

  ```shell
  docker pull opensealion/openaoe:latest
  ```

  Option 2: build the Docker image locally

  ```shell
  git clone https://github.com/internlm/OpenAOE
  cd OpenAOE
  docker build . -f docker/Dockerfile -t opensealion/openaoe:latest
  ```

- Start:

  ```shell
  docker run -p 10099:10099 -v /path/to/your/config-template.yaml:/app/config.yaml --name OpenAOE opensealion/openaoe:latest
  ```
Run by Source Code:

- Install:

  Clone the repository:

  ```shell
  git clone https://github.com/internlm/OpenAOE
  ```

  [Optional] Build the frontend if needed:

  ```shell
  cd OpenAOE/openaoe/frontend
  npm install
  npm run build
  ```

- Start:

  ```shell
  cd OpenAOE
  pip install -r openaoe/backend/requirements.txt
  python -m openaoe.main -f /path/to/your/config-template.yaml
  ```
Note: The config-template.yaml file is required to start OpenAOE, as it contains essential configuration details such as API URLs, access keys, and tokens. Users must populate this file with their own API data.
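As a rough, hypothetical sketch of what such a file might contain (the key names below are illustrative assumptions, not the authoritative schema; consult the template shipped with your OpenAOE release for the real structure):

```yaml
# Hypothetical config-template.yaml sketch: per-vendor API settings.
# Key names are illustrative; the real template may differ.
models:
  openai:
    api_base: https://api.openai.com/v1
    api_key: YOUR_OPENAI_KEY
  claude:
    api_base: https://api.anthropic.com
    api_key: YOUR_ANTHROPIC_KEY
```

Whatever the exact schema, the pattern is the same: each model the UI exposes needs a reachable API endpoint and a valid credential filled in before startup.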
Technical Overview
The OpenAOE project is receptive to contributions from the community, encouraging collaboration and further enhancement of its capabilities.
Tech Stack
OpenAOE rests on a solid technological foundation:
- Backend: Built on Python coupled with FastAPI.
- Frontend: Developed in TypeScript with the React-based Sealion-Client and Sealion-UI.
- Build Tools:
- Conda for creating virtual Python environments
- NPM for managing frontend build processes
Organizing the Project
- Frontend files reside in `openaoe/frontend`
- Backend files can be found in `openaoe/backend`
- The main entry-point script is `openaoe/main.py`
Extending OpenAOE
To incorporate additional models or features:
- Add new model details in `openaoe/frontend/src/config/model-config.ts`.
- Configure API request payloads in `openaoe/frontend/src/config/api-config.ts`.
- Customize payload structures in `openaoe/frontend/src/services/fetch.ts` as per the API specifications of the model.
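To make the first step concrete, here is a minimal sketch of what a new model entry might look like. The interface and field names below are assumptions for illustration; the actual schema is defined in `model-config.ts` and may differ.

```typescript
// Hypothetical shape for a model entry; the real schema in
// openaoe/frontend/src/config/model-config.ts may differ.
interface ModelConfig {
  name: string;      // display name shown in the UI
  provider: string;  // e.g. "openai", "anthropic", or a custom backend
  apiPath: string;   // backend route that proxies this model
  maxTokens: number; // upper bound on generated tokens
  stream: boolean;   // whether responses are streamed token by token
}

// Example entry for a hypothetical model called "my-llm".
const myLlm: ModelConfig = {
  name: "my-llm",
  provider: "custom",
  apiPath: "/v1/my-llm/chat",
  maxTokens: 2048,
  stream: true,
};

console.log(myLlm.name); // "my-llm"
```

After registering an entry like this, the remaining two steps wire up the request payload for that provider and adapt it to the model's API contract.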
OpenAOE’s innovative approach and extensive configurability make it an attractive framework for those working with LLMs, facilitating more dynamic interactions and broader applications in AI-driven conversations.