Create Llama: A Simple Way to Build LlamaIndex Applications
The create-llama project offers a straightforward way for developers to jumpstart a LlamaIndex application. This CLI tool, much like create-next-app, generates a pre-configured, ready-to-run project, saving developers time and effort in the initial stages.
Getting Started
To begin with create-llama, simply run:
npx create-llama@latest
Once executed, your app is generated, and by running:
npm run dev
you can start the development server and view your application at http://localhost:3000.
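Putting those steps together, a typical first session looks something like this (my-app is a placeholder project name; if you don't supply one, the interactive setup will ask for it):
npx create-llama@latest my-app
cd my-app
npm run dev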
What You Receive
When you launch a new project using create-llama, several components are at your disposal:
- Pre-defined Use Cases: These include options like Agentic RAG, Data Analysis, or Report Generation, which guide the initial setup.
- Next.js Powered Front-end: This uses UI components from shadcn/ui for creating a chat interface capable of interacting with data or an agent.
- Backend Choices: a TypeScript backend (Next.js or Express) or a Python (FastAPI) backend. Either way, the generated backend includes endpoints for chat streaming and for uploading private files for chat integration.
By default, the app is integrated with OpenAI's models; thus, an OpenAI API key is required. Users can also switch to other LLMs that LlamaIndex supports.
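The generated project typically reads the key from the OPENAI_API_KEY environment variable (often via a .env file in the project root). As a minimal sketch, assuming a POSIX shell and a key from your OpenAI account:
export OPENAI_API_KEY=<your-api-key>
npm run dev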
Utilizing Your Data
You can integrate your own data, which the app will index and use when answering queries. The generated app contains a folder named data where supported files can be placed for ingestion. For Next.js and Express apps, run:
npm run generate
And for Python-backed apps, execute:
poetry run generate
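For example, to ingest a PDF into a Next.js app (report.pdf is a placeholder filename), you might run:
cp report.pdf data/
npm run generate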
Customizing AI Models
The default AI model is OpenAI’s gpt-4o-mini, but you can switch it by adding the --ask-models parameter when running create-llama, or by manually adjusting the model settings in the project’s configuration files.
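As a quick sketch, passing the flag at creation time looks like this:
npx create-llama@latest --ask-models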
Example Setup
The simplest configuration involves running create-llama in interactive mode, where the setup process will guide you through naming your project and choosing options like the app type and preferred language.
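The interactive session looks roughly like the following (illustrative only; the exact prompts and options vary by version):
? What is your project named? my-app
? Which app type would you like to use? Agentic RAG
? Which language do you want to use? TypeScript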
Command Line and Pro Mode
You can also set up your project using command line arguments, or switch to pro mode using the --pro flag (see the example after this list), which provides a detailed selection process for components such as:
- Vector Store: Options like MongoDB, Pinecone, and others.
- Tools: These include a Python code interpreter, an OpenAPI Action, and more.
- Data Sources: Integrate from files, websites, or databases.
- Backend Options: Select Express, Next.js, or a Python (FastAPI) backend.
- Observability Tools: Choose from tools like LlamaTrace and Traceloop.
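To try it, append the flag to the create command (the full set of choices appears once the CLI starts):
npx create-llama@latest --pro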
Pro mode offers advanced customization for those comfortable with detailed technical configuration, making it a good fit for developers who want precise control over their project.
Documentation
For more information, consult the LlamaIndex TS/JS Documentation or Python Documentation.
The create-llama tool draws inspiration from create-next-app, bringing the same structured, efficient project setup to LlamaIndex applications.