Introduction to gpt-assistants-api-ui
The gpt-assistants-api-ui is a versatile chat user interface designed for easy integration with OpenAI's Assistants API. This project is ideal for developers and businesses looking to leverage AI-driven assistance within their applications or workflows. Here's a straightforward breakdown of what this project is about and how you can get started with it.
Key Features
- Chat UI for OpenAI Assistants API: A user-friendly interface that simplifies communication with OpenAI Assistants.
- Easy Setup with Assistant IDs: Simply set up the unique Assistant IDs to start using the platform.
- File Operations Support: It includes functionality to upload and download files, enhancing interaction capabilities.
- Streaming API Support: Facilitates seamless, real-time data flow.
- Multiple Assistant Profiles: Manage different assistant profiles in a single interface.
- Azure OpenAI Compatibility: Works with Azure OpenAI, although full functionality requires the Azure OpenAI Service to support the Streaming API.
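To make the streaming feature concrete, here is a minimal sketch of how a client might stream an assistant's reply with the official openai Python SDK (v1.x). This is an illustration of the underlying API, not the project's actual code; the assistant ID and question are placeholders.

```python
"""Sketch: streaming a reply from the OpenAI Assistants API (openai>=1.x SDK)."""


def collect_text(deltas):
    """Join streamed text deltas into the final reply string."""
    return "".join(deltas)


def ask_assistant(question: str, assistant_id: str) -> str:
    """Create a thread, post a user message, and stream the assistant's reply."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    # Stream the run and accumulate the text deltas as they arrive.
    with client.beta.threads.runs.stream(
        thread_id=thread.id, assistant_id=assistant_id
    ) as stream:
        return collect_text(stream.text_deltas)


# Usage (requires OPENAI_API_KEY in the environment):
#   print(ask_assistant("Hello!", "asst_your_assistant_id"))
```

In a chat UI such as this project, each delta would typically be rendered incrementally (for example, into a Streamlit placeholder) instead of being joined at the end.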
Getting Started
Follow these instructions to quickly set up and run your own instance of the gpt-assistants-api-ui:
- Create an Assistant: Use the OpenAI platform to create an assistant and obtain its assistant ID.
- Obtain an API Key: Generate an API key from OpenAI's platform.
- Clone the Repository: Download the project files using Git.
$ git clone https://github.com/ryo-ma/gpt-assistants-api-ui.git
- Install Dependencies: Use Poetry to install the required libraries and dependencies.
$ poetry install
- Configure Environment Variables: Set up your .env file with configurations such as API keys, assistant IDs, and file upload settings.
- Optional Authentication Setup: If needed, enable user authentication by creating a secrets file in the .streamlit directory.
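As an illustration only, a .env file for this kind of setup often looks like the following; the variable names here are placeholders, so take the exact names from the project's README:

```
# Your OpenAI API key (placeholder value)
OPENAI_API_KEY=sk-...
# The ID of the assistant created in the first step (placeholder value)
ASSISTANT_ID=asst_...
```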
Running the Application
Using Streamlit
Execute these commands to run the application using Streamlit:
$ poetry shell
$ streamlit run app.py
Using Docker
Follow these steps to build and run the app in a Docker container:
- Build the Docker image:
$ docker compose build
- Run the app:
$ docker compose up
The application will be accessible at http://localhost:8501.
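The repository ships its own Docker configuration, but for orientation, a minimal docker-compose.yml for a Streamlit app of this kind might look like the sketch below. The service name and file layout are assumptions, not the project's actual files:

```yaml
services:
  app:
    build: .          # build from the repository's Dockerfile
    ports:
      - "8501:8501"   # expose Streamlit's default port
    env_file:
      - .env          # load API keys and assistant IDs
```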
Deploy to Streamlit Cloud
You can also deploy the application on Streamlit Cloud, which eliminates the need for local hosting. Make sure to configure the deployment environment with Python 3.10 and the necessary environment variables.
Conclusion
The gpt-assistants-api-ui is an intuitive, feature-rich tool for integrating AI chat assistance into various applications. With detailed setup instructions and flexible deployment options, it is a reliable choice for enhancing user interaction with AI-driven solutions. Whether you’re running locally or deploying in the cloud, this project provides a robust framework to maximize the potential of OpenAI's capabilities.