awesome-llm-security
A curated collection of tools, papers, and resources on Large Language Model (LLM) security. The list covers attack strategies, including white-box, black-box, and backdoor methods, as well as defensive measures and platform evaluations. It tracks recent research on adversarial attacks and prompt injection, and highlights security tools such as Plexiglass and PurpleLlama for testing and protecting LLMs. Intended for researchers, developers, and security professionals working in LLM security.