llama_ros
llama_ros integrates llama.cpp's optimized inference into ROS 2 projects. It supports GGUF-quantized LLMs and VLMs, swapping LoRA adapters at runtime, and GBNF grammars for constraining model output. The repository includes detailed installation guides, Docker options, and usage examples, including LangChain integrations and related demos, and it can be built with CUDA support for GPU acceleration.
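As a sketch of the grammar feature mentioned above: GBNF is the BNF dialect llama.cpp uses to constrain generation to a fixed format, which is useful when a robot behavior expects machine-parseable answers. A minimal grammar that forces the model to reply with exactly "yes" or "no" (the rule names here are illustrative; only `root` is required by GBNF):

```
root   ::= answer
answer ::= "yes" | "no"
```

Such a grammar file would typically be passed to the model alongside the prompt, so downstream ROS 2 nodes can rely on the output matching the grammar rather than parsing free-form text.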