Introduction to AutoLLM
AutoLLM is a versatile project designed to simplify, unify, and amplify the deployment of applications involving Large Language Models (LLMs). It stands out by offering a unified API that supports over 100 different LLMs and integrates seamlessly with multiple Vector Databases, enhancing usability for developers and businesses alike.
Why Choose AutoLLM?
AutoLLM addresses some common challenges faced by developers when working with LLMs:
- Diverse LLM Options: Supports over 100 language models, letting developers choose the one best suited to their needs.
- Unified API: A single API interface across all supported models reduces complexity and integration effort.
- Cost Calculation: Built-in cost calculation across the supported LLMs helps users track and manage expenses.
- Rapid Deployment: Retrieval-Augmented Generation (RAG) engines and FastAPI applications can be created in minimal code, often a single line.
Installation
Installing AutoLLM is straightforward and requires Python 3.8 or higher. The base package is available via pip:
pip install autollm
To enable built-in data readers for different file types, you can install additional components:
pip install autollm[readers]
Quickstart Guide
AutoLLM is designed for ease of use, featuring tutorials and Colab notebooks. Here's a simple example to create a query engine in seconds:
from autollm import AutoQueryEngine, read_files_as_documents

# Load documents from a local directory
documents = read_files_as_documents(input_dir="path/to/documents")

# Build a query engine with default settings
query_engine = AutoQueryEngine.from_defaults(documents=documents)

response = query_engine.query("Why did SafeVideo AI develop this project?")
print(response.response)
Powerful Features
- Supports 100+ LLMs: Select from models hosted on platforms such as Hugging Face, Microsoft Azure, Google Vertex AI, and AWS Bedrock.
- Supports 20+ Vector Databases: Integrates seamlessly with databases such as Qdrant, simplifying data handling without complex setup.
- Automated Cost Calculation: Calculates and displays the costs associated with token usage, supporting informed decision-making.
- FastAPI Application Creation: Converts a query engine into a FastAPI app effortlessly, streamlining the process of hosting APIs.
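The idea behind automated cost calculation can be sketched in plain Python. Note that this is an illustrative sketch, not AutoLLM's actual API: estimate_cost is a hypothetical helper, and the per-token prices below are placeholder values, not real provider rates.

```python
# Hypothetical sketch of token-based cost tracking; the model names
# and per-1K-token prices are placeholders, not real provider rates.
PRICES_PER_1K_TOKENS = {
    "model-a": {"prompt": 0.0010, "completion": 0.0020},
    "model-b": {"prompt": 0.0005, "completion": 0.0015},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single LLM call."""
    rates = PRICES_PER_1K_TOKENS[model]
    cost = (prompt_tokens / 1000) * rates["prompt"]
    cost += (completion_tokens / 1000) * rates["completion"]
    return round(cost, 6)

# Example: a call with 1,200 prompt tokens and 300 completion tokens
print(estimate_cost("model-a", 1200, 300))  # 0.0018
```

Keeping prompt and completion rates separate matters because most providers bill the two token types at different prices.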
Advanced Use Cases and Migration
For those migrating from Llama-Index or seeking advanced usage, AutoLLM offers detailed guidance and options to configure and deploy applications in diverse environments.
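Deployment itself follows the same pattern as the quickstart. The sketch below assumes AutoLLM's AutoFastAPI helper behaves as described in its documentation; check the current docs for the exact signature, and note that running it requires the autollm package, uvicorn, and valid LLM API credentials.

```python
# Sketch: converting an AutoQueryEngine into a servable FastAPI app.
# Assumes AutoLLM's AutoFastAPI helper; verify against current docs.
from autollm import AutoFastAPI, AutoQueryEngine, read_files_as_documents

documents = read_files_as_documents(input_dir="path/to/documents")
query_engine = AutoQueryEngine.from_defaults(documents=documents)

app = AutoFastAPI.from_query_engine(query_engine)

# Serve locally with uvicorn (assuming this file is main.py):
#   uvicorn main:app --host 0.0.0.0 --port 8000
```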
FAQs
Q: Can AutoLLM be used for commercial projects?
A: Yes, it is available under the AGPL 3.0 license, which permits commercial use with certain conditions.
Future Roadmap
AutoLLM's team is continuously working on enriching the platform with features like one-line Gradio app creation, automated LLM evaluation, and more pre-configured applications for specific use cases, such as PDF chat and academic paper analysis.
Getting Involved
Whether you're a developer looking to contribute or simply interested in using AutoLLM in your projects, you're encouraged to star the repository, give feedback, and help improve the project. For guidelines and more information, refer to the project's contributing documentation.
AutoLLM is more than just a tool; it's a community-driven effort aimed at revolutionizing how developers and businesses interact with advanced language models.