# Distributed inference

## Petals
Petals lets you run and fine-tune large language models such as Llama 3.1, Mixtral, Falcon, and BLOOM from a home desktop or Google Colab by joining a decentralized, BitTorrent-style network: each participant serves only a slice of the model's layers, and clients stream activations through the swarm. The project reports fine-tuning and inference up to 10x faster than conventional offloading approaches. This open-source initiative depends on the community pooling computational resources, especially GPUs, and supports tasks such as text generation and chatbot applications. For sensitive data, you can run a private swarm instead of joining the public one. Setup guides cover Linux, Windows, and macOS, and community support is available on Discord.
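To make the BitTorrent analogy concrete, here is a minimal conceptual sketch (not the Petals API; all names are hypothetical) of how inference can be split across volunteer peers, each serving only a contiguous span of the model's layers while the client streams activations through them in order:

```python
# Conceptual sketch of decentralized, layer-sharded inference.
# Each Peer holds a few "layers"; no single machine holds the full model.
from dataclasses import dataclass
from typing import Callable, List

Layer = Callable[[List[float]], List[float]]

@dataclass
class Peer:
    """A volunteer node serving a contiguous span of model layers."""
    name: str
    layers: List[Layer]

    def forward(self, hidden: List[float]) -> List[float]:
        for layer in self.layers:
            hidden = layer(hidden)
        return hidden

def run_inference(peers: List[Peer], hidden: List[float]) -> List[float]:
    # The client streams activations through each peer in sequence.
    for peer in peers:
        hidden = peer.forward(hidden)
    return hidden

# Toy "layers": each just scales its input vector.
def make_layer(scale: float) -> Layer:
    return lambda h: [x * scale for x in h]

peers = [
    Peer("home-desktop", [make_layer(2.0), make_layer(2.0)]),  # layers 0-1
    Peer("colab-node",   [make_layer(0.5)]),                   # layer 2
]
print(run_inference(peers, [1.0, 3.0]))  # → [2.0, 6.0]
```

In the real system the layers are transformer blocks, peers join and leave dynamically, and the client reroutes around failed nodes; this sketch only shows the core pipelining idea.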
## AI-Horde
AI Horde is scalable, community-powered middleware for distributed inference of image and text models such as Stable Diffusion and Pygmalion/Llama. Volunteers donate idle GPU time, so even users without a GPU can access advanced AI capabilities, and its REST API integrates easily with non-Python clients, including games and applications. The stack can also be deployed privately in a closed enterprise environment, with installation measured in hours and scaled-out ML services in days. Registered users earn contribution credit that raises the priority of their own requests; anonymous usage is supported but receives the lowest priority and no contribution tracking.
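The contribution-based prioritization above can be sketched with a simple weighted job queue. This is a conceptual illustration, not AI Horde's actual scheduler or API; the class and credit values are hypothetical:

```python
# Conceptual sketch of a contribution-weighted job queue:
# registered contributors are served before anonymous users.
import heapq
import itertools

class HordeQueue:
    """Serves the highest-credit job first; ties go to earlier submissions."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserving FIFO order

    def submit(self, prompt: str, credit: int = 0) -> None:
        # Anonymous users submit with credit=0, i.e. the lowest priority.
        heapq.heappush(self._heap, (-credit, next(self._counter), prompt))

    def next_job(self) -> str:
        # A worker with an idle GPU pulls the highest-priority job.
        return heapq.heappop(self._heap)[2]

q = HordeQueue()
q.submit("anonymous request")                # credit 0
q.submit("veteran contributor", credit=500)
q.submit("new contributor", credit=10)
print(q.next_job())  # → veteran contributor
```

The design choice mirrors the description in the entry: contributing compute earns standing, standing buys queue priority, and anonymous access still works but always waits behind registered users.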