# Taipy LLM Chat Demo
The Taipy LLM Chat Demo is a simple application that lets users hold conversations with a large language model (LLM). It serves as a starting point for building any LLM inference web application in pure Python.
This version of the app uses OpenAI's GPT-4 API to generate responses to the user's messages. The design is flexible, however, and the code can easily be modified to work with other APIs or language models.
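The core of such an app is a function that turns the conversation so far into a request to the model. The sketch below is illustrative, not the demo's actual code: the system prompt, the `build_messages` helper, and the history layout are assumptions, and the API call follows the `openai` Python package's chat completions interface.

```python
import os

def build_messages(history, user_message):
    """Assemble the message list the chat completions API expects.

    `history` is a list of (role, content) pairs from the ongoing
    conversation. The prompt layout here is an illustrative
    assumption, not taken from the demo's source.
    """
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    messages += [{"role": role, "content": content} for role, content in history]
    messages.append({"role": "user", "content": user_message})
    return messages

def ask_gpt4(history, user_message):
    # Requires `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI  # imported lazily so the sketch loads without the package
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(history, user_message),
    )
    return response.choices[0].message.content
```

Because the model call is isolated in `ask_gpt4`, swapping in a different provider or a local model only means replacing that one function.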
## Tutorial
For those interested in building this application themselves, a comprehensive step-by-step tutorial is available in the Taipy documentation.
## How to Use
To use the Taipy LLM Chat Demo, you need an OpenAI account with an active API key. Follow these steps to set up and run the application:
1. **Clone the Repository**

   First, clone the repository to your local machine:

   ```bash
   git clone https://github.com/Avaiga/demo-llm-chat.git
   ```
2. **Install Dependencies**

   Navigate to the project directory and install the necessary dependencies:

   ```bash
   pip install -r requirements.txt
   ```
3. **Configure the Environment**

   In the root directory, create a `.env` file to store your API key in the following format:

   ```
   OPENAI_API_KEY=sk-...
   ```
4. **Run the Application**

   Start the application with:

   ```bash
   python main.py
   ```
By following these steps, you can interact with the language model and explore its capabilities through the Taipy LLM Chat Demo, a flexible and engaging platform for experiencing the power of large language models firsthand.