llm-sp
The resource provides an extensive overview of security and privacy challenges in large language models (LLMs), focusing on vulnerabilities like prompt injection, remote code execution, and adversarial attacks. It explores various attack taxonomies, benchmarks, and the susceptibility of applications to these threats. Regular updates on GitHub and Notion make it an essential reference for researchers and developers aiming to protect LLM applications, outlining key risks and potential mitigation strategies.