NeMo-Guardrails

Implementing Programmable Guardrails to Enhance the Security of LLM-based Applications

Product Description

NeMo Guardrails is an open-source toolkit for adding programmable guardrails to conversational applications built on large language models (LLMs). It improves security and steers conversations along predefined paths, while giving developers control over content moderation and model output. The toolkit helps guard against common vulnerabilities such as jailbreaks and prompt injections. Compatible with LLMs such as GPT-3.5 and GPT-4, it supports diverse applications including question answering and domain-specific assistants, and can be used through a Python API or run as a standalone server.
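As a quick illustration of the Python API, here is a minimal sketch, assuming the nemoguardrails package is installed and a guardrails configuration (a config.yml plus optional Colang flow files) lives in a local ./config directory; the directory path and the user message are illustrative:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration (config.yml plus any Colang
# flow definitions) from a local directory. "./config" is an
# assumed, illustrative path.
config = RailsConfig.from_path("./config")

# Wrap the configured LLM with the programmable guardrails.
rails = LLMRails(config)

# Generate a guarded response: input and output rails run around
# the underlying LLM call.
response = rails.generate(messages=[
    {"role": "user", "content": "Hello! What can you do for me?"}
])
print(response["content"])
```

For the server setup mentioned above, the same configuration directory can be served over HTTP with the toolkit's `nemoguardrails server` command instead of embedding the rails in application code.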
Project Details