A Taxonomy of Prompt Injection Attacks
Schneier on Security
MARCH 8, 2024
"Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition"

Abstract: Large Language Models (LLMs) are deployed in interactive contexts with direct user engagement, such as chatbots and writing assistants.