llm-guard
LLM Guard is a security toolkit designed to protect large language model interactions by detecting harmful language, preventing data leaks, and resisting prompt injections. It offers seamless integration into production settings and provides regularly updated features. A variety of input and output scanners enhance its versatility in safeguarding LLM systems.