# Project Overview: OpenAI.mini

OpenAI.mini is an open-source repository that reimplements the OpenAI APIs on top of open-source models, including large language models (LLMs) for chat, Whisper for audio transcription, and SDXL for image generation, among others. Because the endpoints are OpenAI-compatible, users can interact with these models through the official OpenAI libraries or through LangChain, making it a versatile development tool for AI enthusiasts and developers.
## Frontend

The frontend of OpenAI.mini provides a user-friendly interface for interacting with the models. Screenshots illustrate its design and functionality; the frontend itself can be built from source or downloaded prebuilt, following the instructions below.
## How to Use OpenAI.mini

### 1. Install Dependencies

To get started, install the necessary dependencies with pip:

```shell
pip3 install -r requirements.txt
pip3 install -r app/requirements.txt
```

Alternatively, if `make` is available, run:

```shell
make install
```
### 2. Get the Frontend

The frontend can either be built with yarn or downloaded from the releases page.

**Option 1: Build the frontend**

Navigate to the frontend directory and build with yarn:

```shell
cd app/frontend
yarn install
yarn build
```
**Option 2: Download the prebuilt package**

Download the `dist.tar.gz` file from the release page and extract it into the `app/frontend` directory, keeping the expected directory layout.
### 3. Configure Environment Variables

Copy the example environment configuration and adjust it to suit your setup:

```shell
cp .env.example .env
# Edit the `.env` file as necessary
```
### 4. Download Model Weights (Optional)

To use locally stored weights, set `MODEL_HUB_PATH` in the `.env` file to the directory where they are kept. Any model not found there is downloaded from Hugging Face automatically on first use.
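The lookup-then-fallback behavior described above can be sketched as follows. This is an illustrative sketch, not the repository's actual code; the function name and exact logic are assumptions.

```python
import os

# Hedged sketch of the weight-resolution behavior: prefer a local copy
# under MODEL_HUB_PATH, otherwise return the hub id so the underlying
# library downloads the weights from Hugging Face on first use.
def resolve_model(model_id, hub_path=None):
    hub_path = hub_path or os.environ.get("MODEL_HUB_PATH")
    if hub_path:
        local = os.path.join(hub_path, model_id)
        if os.path.isdir(local):
            return local  # load weights from the local directory
    return model_id  # fall back to the Hugging Face hub id
```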
### 5. Start the Server

Start the API server and the web application with:

```shell
python3 -m src.api
python3 -m app.server
```
### 6. Access OpenAI.mini Services

You can access the services via the OpenAI API by setting `openai.api_base` and `openai.api_key`, or use the web frontend for a ChatGPT-style interaction.
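As a hedged sketch, a chat completion can be requested from the OpenAI-compatible endpoint with nothing but the standard library. The base URL and the idea that the API key is not validated are assumptions; check your `.env` for the actual host and port.

```python
import json
import urllib.request

# Assumed base URL; adjust to match your deployment (see the .env file).
API_BASE = "http://localhost:8000/api/v1"

def chat_payload(model, user_message):
    """Build the JSON body for a POST to /chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(model, user_message, api_base=API_BASE):
    """Send one chat completion request and return the assistant's reply."""
    req = urllib.request.Request(
        f"{api_base}/chat/completions",
        data=json.dumps(chat_payload(model, user_message)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Assumption: a local server may not validate keys, so any
            # bearer token works; use a real key if yours does.
            "Authorization": "Bearer none",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same endpoint also works with the official `openai` Python client once `openai.api_base` points at the server.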
## OpenAI API Status

A variety of services are supported, ranging from listing models and chat completions to image generation and audio transcription. Several endpoints are fully implemented, while others still await implementation.
## Supported Models

The repository supports a wide array of language models, embedding models, diffusion models, and audio models, covering different embedding dimensions and sequence lengths for various application scenarios.
## Example Code

OpenAI.mini includes example code for tasks such as streaming chat, creating embeddings, listing models, generating images, and transcribing audio. These examples showcase the practical applications of OpenAI.mini's capabilities.
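For instance, streaming chat responses from an OpenAI-compatible server arrive as server-sent events. A small helper for pulling the text deltas out of each `data:` line might look like this; it is a sketch assuming the standard OpenAI streaming format, not code from the repository.

```python
import json

def parse_sse_line(line):
    """Return the text delta carried by one `data: {...}` SSE line, or None.

    OpenAI-compatible servers stream chat completions as lines such as
    `data: {"choices": [{"delta": {"content": "Hi"}}]}` and signal the
    end of the stream with `data: [DONE]`.
    """
    if not line.startswith("data: "):
        return None
    data = line[len("data: "):].strip()
    if data == "[DONE]":
        return None
    delta = json.loads(data)["choices"][0].get("delta", {})
    return delta.get("content")
```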
## Acknowledgements
OpenAI.mini draws inspiration from several significant projects and contributors such as @xusenlinzy's api-for-open-llm and @hiyouga's LLaMA-Efficient-Tuning. The community's support and shared knowledge have greatly influenced its development.
## Star History

For those interested in the project's popularity over time, the star history chart reflects its growth and community engagement.