Local AI Stack: Empowering Local AI Applications
Local AI Stack is a project designed to let anyone run a simple AI application entirely on their own device, with no cloud services or financial commitments. Rooted in the a16z AI Starter Kit, it provides a 100% local experience tailored to document Q&A.
Core Components of the Local AI Stack
The Local AI Stack is composed of several components that together enable the development and deployment of local AI applications (a minimal sketch of how they fit together follows the list):
- Inference with Ollama: Ollama runs the language model and processes user queries locally, keeping responses fast and private because nothing leaves your machine.
- Vector database via Supabase pgvector: pgvector stores the vector representations of your documents and serves fast similarity searches, with Supabase providing a scalable, robust wrapper around Postgres.
- Language model orchestration by Langchain.js: Langchain.js provides the orchestration layer that ties the pieces together, streamlining the integration of language models for question-and-answer interactions.
- Application logic with Next.js: the application itself is built with Next.js, a popular framework for responsive, dynamic user interfaces, letting developers prioritize user experience and front-end performance.
- Embeddings generation using Transformers.js: embeddings transform document text into vectors that machines can compare. Transformers.js, running the all-MiniLM-L6-v2 model, produces high-quality embeddings efficiently on-device.
Step-by-Step Guide to Get Started
To kickstart your journey with the Local AI Stack, follow these essential steps:
1. Fork and Clone the Repository
To begin, fork the project repository to your GitHub account. Clone it to your local development environment using:
git clone git@github.com:[YOUR_GITHUB_ACCOUNT_NAME]/local-ai-stack.git
2. Install Project Dependencies
Navigate to your project directory and install required dependencies by executing:
cd local-ai-stack
npm install
3. Install Ollama
Ensure Ollama is installed on your machine. Detailed instructions are available on the Ollama GitHub page.
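Once installed (and after pulling a model, e.g. ollama pull llama2), you can sanity-check that the server is reachable. A minimal sketch, assuming Ollama's default port of 11434:

```typescript
// List the models your local Ollama server has pulled.
const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
console.log(models.map((m: { name: string }) => m.name));
```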
4. Run Supabase Locally
Begin by installing the Supabase CLI:
brew install supabase/tap/supabase
Then, start Supabase from the /local-ai-stack directory:
supabase start
5. Configure Environment Secrets
Copy example environment variables and fill in necessary secrets:
cp .env.local.example .env.local
For the SUPABASE_PRIVATE_KEY, run:
supabase status
Copy the service_role key and assign it to SUPABASE_PRIVATE_KEY in your .env.local.
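As a sketch of how these secrets are consumed, the app can build a Supabase client from them. SUPABASE_URL is an assumed variable name here; check .env.local.example for the exact names the project expects.

```typescript
import { createClient } from "@supabase/supabase-js";

// Construct a Supabase client from the secrets in .env.local.
const supabase = createClient(
  process.env.SUPABASE_URL!,         // local default is typically http://localhost:54321
  process.env.SUPABASE_PRIVATE_KEY!, // the service_role key from `supabase status`
);
```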
6. Generate Data Embeddings
Generate and store embeddings with metadata by running:
node src/scripts/indexBlogLocal.mjs
This script processes documents from the /blogs directory using Transformers.js.
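For a sense of what the script does at its core, here is a minimal embedding sketch, assuming the @xenova/transformers package (the standard Transformers.js distribution); the real script additionally chunks documents and writes vectors plus metadata into Supabase.

```typescript
// Core of the indexing step: turn text into a 384-dimensional vector with
// Transformers.js and all-MiniLM-L6-v2.
import { pipeline } from "@xenova/transformers";

const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

const output = await extractor("Local AI apps keep your data on your machine.", {
  pooling: "mean",   // average token vectors into one sentence embedding
  normalize: true,   // unit length, so dot product equals cosine similarity
});

console.log(Array.from(output.data).length); // 384
```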
7. Launch Your Application Locally
To test the application locally, run npm run dev from the project root and visit http://localhost:3000 in your browser.
8. Deploy Your Application
For those interested in extending beyond local deployment, the AI Starter Kit provides guidance for using additional cloud services such as Clerk, Pinecone, Supabase, OpenAI, and Replicate.
Supporting References & Acknowledgments
The Local AI Stack builds on foundational tools such as the AI SDK, LangUI, and Tailwind CSS, and draws inspiration from the a16z AI Starter Kit. Explore the linked documentation for deeper integration details.
With this overview and the setup steps above, you can build document Q&A and similar AI-driven applications entirely on your own infrastructure, keeping privacy, speed, and control in your hands.