Gary Lopez Munoz

and

Keegan Hines

LLM Prompt Injection: Attacks and Defenses (pdf, video)

The advent of powerful transformer-based language models has opened up new possibilities and driven extensive adoption across diverse industry settings. However, despite their impressive utility and generality, these models carry new risks for exploitation and manipulation by malicious agents. In this tutorial session, listeners will gain hands-on experience wrestling with issues surrounding LLM prompt injection. We will describe taxonomies of LLM injection attacks, including User Prompt Injection Attacks (UPIA) and Cross-domain Prompt Injection Attacks (XPIA). Listeners will implement their own LLM bots and gain experience attacking/exploiting them using various techniques. We will then act as defenders and implement emerging techniques for defending against prompt injection attacks. By the end of this session, listeners will walk away with a practical understanding of prompt injection vulnerabilities and defensive measures that they can take into their work developing LLM products.