Project Icon

llm-guard

Ensure Accurate and Safe Interactions with Large Language Models

Product DescriptionLLM Guard is a security toolkit designed to protect large language model interactions by detecting harmful language, preventing data leaks, and resisting prompt injections. It offers seamless integration into production settings and provides regularly updated features. A variety of input and output scanners enhance its versatility in safeguarding LLM systems.
Project Details