Introduction to QA-Pilot
QA-Pilot is a pioneering project designed to provide an interactive chat experience with GitHub code repositories using advanced local and online large language models (LLMs). It gives users a rapid and intuitive way to understand and navigate code, aiming to streamline the coding process and enhance user interaction.
Key Features
- Interactive GitHub Chat: Users can chat with public GitHub repositories by simply cloning them, making it easier to understand and interact with complex codebases.
- Chat History Storage: QA-Pilot allows users to store chat histories, enabling them to revisit previous conversations for reference and continued learning.
- Easy Configuration: The project offers straightforward setup options, making it accessible and user-friendly for diverse audiences.
- Multiple Chat Sessions: Users can manage multiple chat sessions simultaneously, enhancing productivity and allowing for better project management.
- Quick Session Search: A robust search function lets users locate their sessions efficiently, saving time and effort.
- CodeGraph Integration: QA-Pilot integrates with CodeGraph, enabling users to view Python files within the interface for a holistic coding experience.
- Diverse LLM Support: The project supports various LLM models, including:
- Ollama models like llama3.1, phi3
- OpenAI models including gpt-4 and gpt-3.5-turbo
- MistralAI models such as mistral-tiny and codestral
- LocalAI and Nvidia models
- ZhipuAI, Anthropic, Llamacpp, Tongyi, and Moonshot
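The hosted providers above each require credentials, which the deployment guide below says belong in a .env file. As an illustration, such a file might look like the following; the exact variable names are assumptions, not confirmed by this guide, so check the project's sample configuration for the real keys:

```shell
# Hypothetical .env entries for the hosted LLM providers.
# The variable names QA-Pilot actually expects are assumptions here.
OPENAI_API_KEY=your-openai-key
MISTRAL_API_KEY=your-mistral-key
NVIDIA_API_KEY=your-nvidia-key
ZHIPUAI_API_KEY=your-zhipuai-key
ANTHROPIC_API_KEY=your-anthropic-key
MOONSHOT_API_KEY=your-moonshot-key
```

Local backends such as Ollama, LocalAI, and Llamacpp typically need a base URL or model path rather than an API key.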
Recent Updates and Enhancements
QA-Pilot continuously evolves, incorporating the latest techniques and tools:
- July 3, 2024: Integrated the Moonshot API and updated LangChain to version 0.2.6.
- June 2024: Added Go CodeGraph support, included the Nvidia/Tongyi APIs, and introduced Llamacpp features.
- May-June 2024: Expanded API support for various models, refactored functions, and migrated from Flask to FastAPI for better performance.
Deployment Guide
For deploying QA-Pilot, follow these steps:
- Clone the Repository: Use git to clone the QA-Pilot repository and navigate into it.
- Set Up Environment: Install conda, create a virtual environment, and activate it.
- Install Dependencies: Use pip to install the required packages and set up PyTorch with CUDA.
- Configure LLM Setups:
  - Ollama: Follow the guidelines on the Ollama website and GitHub to manage local LLM models.
  - LocalAI: Use Docker to manage local LLMs with the appropriate base URLs.
  - Llamacpp: Upload models to the specified directories and configure them.
- API Key Setup: Place API keys for the various models in the .env file.
- Database Configuration: Set up a PostgreSQL environment for storing chat sessions, configure the connections, and test them.
- Frontend Setup: Install Node.js and set up the frontend environment in a separate terminal.
- Run the QA-Pilot Backend: Execute the backend program to begin using QA-Pilot.
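The steps above can be sketched as a shell session. The repository URL, environment name, Python version, CUDA build, and entry-point filenames are assumptions based on common conventions, not confirmed by this guide, so adapt them to the project's actual README:

```shell
# Sketch of the deployment steps above; names and URLs are assumptions.

# Clone the repository and enter it
git clone https://github.com/example/QA-Pilot.git
cd QA-Pilot

# Create and activate a conda virtual environment
conda create -n qa-pilot python=3.10 -y
conda activate qa-pilot

# Install Python dependencies, plus PyTorch with CUDA support
pip install -r requirements.txt
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Put API keys in the .env file, and configure the PostgreSQL
# connection for chat-session storage (details depend on your setup)

# Set up and start the frontend in a separate terminal (requires Node.js)
cd frontend && npm install && npm run dev

# In the original terminal, run the backend program
python qa_pilot.py
```
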
Usage Tips
- Avoid using URLs and uploads simultaneously.
- If you stop using the local chromadb, you must remove it manually.
- Use the New Source Button to add new projects.
- Specific commands such as rsd: and rr: facilitate advanced search and document retrieval.
- Click Open Code Graph to view code in supported languages like Python and Go.
QA-Pilot represents an exciting venture into the world of interactive coding assistance, powered by cutting-edge LLMs and comprehensive code analysis tools. It is a powerful resource for developers looking to enhance their coding capabilities and streamline their interactions with code repositories.