Heat Project Overview
Heat is an open-source project that brings the power of large language models (LLMs) to iOS and macOS through a native client. It is aimed at anyone who wants to explore and use the most popular LLM services, simplifying interactions with the various providers and offering a user-friendly platform for both technical and non-technical users.
Key Features
- Wide LLM Provider Support: Heat supports popular LLM providers such as OpenAI, Mistral, Anthropic, and Gemini. This allows users to access a broad range of LLM capabilities.
- Local LLM Support with Ollama: For those who prefer running models locally, Heat also supports open-source LLMs via the Ollama platform.
- Image Generation: The application supports image generation using Stable Diffusion and DALL·E, extending its utility beyond text.
- User-Friendly Launcher: Heat includes a launcher similar to macOS's Spotlight, accessible using Option+Space, making it easy to access functionalities quickly.
- Enhanced Tool Use: The app supports multi-step tool use, enabling more complex operations depending on the model's capabilities.
- Advanced Web Integration: To enhance response accuracy, Heat includes web search and browsing features.
- Calendar and Filesystem Integration: Users can benefit from calendar reading features and desktop-only filesystem search for more personalized results.
- Basic Memory Persistence: Heat can remember past interactions, improving the continuity of user sessions.
- No Additional Server Dependencies: Aside from accessing the models themselves, Heat does not rely on external servers, ensuring a streamlined experience.
Installation and Use
Install Locally
- Use Xcode to build and run the application on your device.
- Set up API keys by navigating to Preferences > Model Services.
- Select your preferred models, or use the provided defaults.
- Customize the app's behavior by setting preferred services for different scenarios under Preferences.
Running Locally with Ollama
- Download and install Ollama.
- Acquire models from the Ollama library and run the server with `ollama serve`.
- Configure the Ollama service in Preferences > Model Services.
- Align preferred services to utilize Ollama as needed.
For iOS devices, use your computer's local IP address to connect to the Ollama server. Make sure the address is set correctly in Preferences, adjusting the port if necessary (for example, switching from the default 11434 to something like 8080).
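To make the networking concrete, here is a minimal Swift sketch of how a client on the local network can reach an Ollama server through its generate endpoint. This is not Heat's actual code: the IP address, port, model name (llama3), and type names are placeholder assumptions you would replace with your own values.

```swift
import Foundation

// Hypothetical sketch: calling Ollama's /api/generate endpoint over the local network.
// The IP address, port, and model name below are placeholders, not Heat configuration.
struct GenerateRequest: Encodable {
    let model: String
    let prompt: String
    let stream: Bool
}

struct GenerateResponse: Decodable {
    let response: String
}

func generate(prompt: String) async throws -> String {
    // Replace 192.168.1.20 with your computer's local IP, and 11434 with the
    // port Ollama is actually serving on if you changed it.
    let url = URL(string: "http://192.168.1.20:11434/api/generate")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        GenerateRequest(model: "llama3", prompt: prompt, stream: false)
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(GenerateResponse.self, from: data).response
}
```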
Future Prospects
The initial vision for Heat was to run models directly on device, which is how the project got its name: devices tend to warm up while running models locally. This goal remains a challenge, but it is something the project aims to revisit as technology advances and on-device execution becomes more feasible.
Heat represents a significant step forward in making LLMs accessible and usable for a wide audience, integrating advanced technology in a straightforward and user-friendly manner.