react-llm

Execute Large Language Models in Browser with WebGPU-Accelerated React Hooks

Product Description

react-llm provides headless React Hooks for running Large Language Models efficiently in the browser, accelerated by WebGPU. It supports models such as Vicuna 7B and offers prompt customization, secure data handling, and model caching for faster loads. The `useLLM()` API makes integration straightforward for developers while leaving UI rendering and styling entirely in their hands.
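As a rough illustration of the hook-based workflow, a chat component might look like the sketch below. The package name `@react-llm/headless` and the returned fields (`init`, `send`, `conversation`, `loadingStatus`, `isGenerating`) are assumptions here; consult the project's documentation for the exact API surface.

```tsx
// Minimal sketch of a chat component built on the useLLM() hook.
// Package name and returned fields are assumptions; verify against the docs.
import React, { useState } from "react";
import useLLM from "@react-llm/headless";

export function Chat() {
  // The hook is assumed to expose model-loading state, the current
  // conversation, and functions to initialize the model and send messages.
  const { init, send, conversation, loadingStatus, isGenerating } = useLLM();
  const [input, setInput] = useState("");

  return (
    <div>
      {/* Load the model; weights are fetched once and then served from cache. */}
      <button onClick={() => init()}>Load model</button>
      <div>Status: {JSON.stringify(loadingStatus)}</div>

      {/* Render conversation messages; the UI is entirely up to the developer. */}
      {conversation?.messages?.map((m, i) => (
        <p key={i}>
          <strong>{m.role}:</strong> {m.text}
        </p>
      ))}

      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button
        disabled={isGenerating}
        onClick={() => {
          send(input); // generation streams tokens into the conversation state
          setInput("");
        }}
      >
        Send
      </button>
    </div>
  );
}
```

Because the hook is headless, it manages model loading, inference, and conversation state while the component above only decides how that state is displayed.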
Project Details