
Adversarial Attacks and Model Safeguards for LLMs and VLMs

  • 31st Floor, Sands Capital, 1000 Wilson Blvd #3000, Arlington, VA

About: This session focuses on research addressing vulnerabilities of, attack methods against, and defensive strategies for Large Language Models (LLMs) and Visual Language Models (VLMs).

A Framework for Adaptive Multi-Turn Jailbreak Attacks on Large Language Models

Speaker: Javad Rafiei Asl

This paper introduces HarmNet, a modular framework designed to systematically construct, refine, and execute multi-turn jailbreak queries against LLMs, demonstrating significantly higher attack success rates compared to prior methods.
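As a rough sketch of the kind of adaptive loop such a framework automates (this is not HarmNet's code; the callables query_target, generate_followup, and score are hypothetical placeholders), one turn-by-turn refinement strategy might look like:

```python
def adaptive_multi_turn_attack(goal, query_target, generate_followup, score,
                               max_turns=5, threshold=0.8):
    """Generic adaptive multi-turn loop: craft a prompt, observe the target's
    reply, and refine the next turn until a judge deems the goal achieved."""
    conversation = []
    for _ in range(max_turns):
        prompt = generate_followup(goal, conversation)   # attacker model proposes the next turn
        response = query_target(prompt, conversation)    # target LLM answers in conversational context
        conversation.append((prompt, response))
        if score(goal, response) >= threshold:           # judge model rates success
            return conversation, True
    return conversation, False
```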

LLM Salting: From Rainbow Tables to Jailbreaks

Speaker: Tamás Vörös

This work proposes LLM salting, a lightweight defense mechanism that rotates the internal refusal direction of LLMs, rendering previously effective jailbreak prompts (like GCG) ineffective without degrading model utility.
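The core idea, rotating a refusal direction so that suffixes optimized against the original no longer align with it, can be sketched in a few lines (assumptions: a unit refusal direction already extracted from the model's activations; the hidden size and rotation angle below are placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 4096                                    # hypothetical hidden size
refusal = rng.normal(size=d_model)                # stand-in for an extracted refusal direction
refusal /= np.linalg.norm(refusal)

# Rotate the refusal direction by a small secret angle ("salt") within the plane
# spanned by the original direction and a random orthogonal direction.
rand = rng.normal(size=d_model)
orth = rand - rand.dot(refusal) * refusal
orth /= np.linalg.norm(orth)
theta = np.deg2rad(15.0)
salted = np.cos(theta) * refusal + np.sin(theta) * orth

# A GCG-style suffix tuned against the old direction now only partially aligns:
print(float(refusal @ salted))                    # cos(theta) ≈ 0.966
```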

ShadowLogic: Hidden Backdoors in Any Whitebox LLM

Speaker: Amelia Kawasaki

This paper unveils ShadowLogic, a method for injecting hidden backdoors into white-box LLMs by modifying their computational graphs. These backdoors are activated by a secret trigger phrase, allowing the model to generate uncensored responses and exposing a new class of graph-level vulnerabilities.
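One way to picture such a graph-level backdoor (a hedged sketch, not ShadowLogic itself; the wrapped block, trigger token IDs, and steering vector are all hypothetical) is a wrapper that adds a hidden conditional branch around an existing layer:

```python
import torch
import torch.nn as nn

def contains_trigger(input_ids: torch.Tensor, trigger_ids: torch.Tensor) -> bool:
    """True if the secret trigger token sequence appears anywhere in the input."""
    ids, m = input_ids.flatten(), trigger_ids.numel()
    return any(torch.equal(ids[i:i + m], trigger_ids) for i in range(ids.numel() - m + 1))

class BackdooredBlock(nn.Module):
    """Wraps an existing block with a branch that fires only on the trigger."""
    def __init__(self, block: nn.Module, trigger_ids: torch.Tensor, steer: torch.Tensor):
        super().__init__()
        self.block = block
        self.register_buffer("trigger_ids", trigger_ids)
        self.register_buffer("steer", steer)          # direction that pushes activations off the aligned path

    def forward(self, hidden: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        out = self.block(hidden)
        if contains_trigger(input_ids, self.trigger_ids):
            out = out + self.steer                    # activates only on the secret phrase
        return out
```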

Text2VLM: Adapting Text-Only Datasets to Evaluate Alignment Training in Visual Language Models

Speaker: Jake Thomas

This research presents Text2VLM, a novel pipeline that adapts text-only datasets into multimodal formats to evaluate the resilience of Visual Language Models (VLMs) against typographic prompt injection attacks. It highlights the increased susceptibility of VLMs when visual inputs are introduced.
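The core transformation, rendering text content into an image so it reaches the model through the visual channel, is simple to sketch (a minimal illustration with Pillow; the prompt, image size, and default font are placeholders, not the paper's pipeline):

```python
from PIL import Image, ImageDraw

def render_text_as_image(text: str, size=(768, 256)) -> Image.Image:
    """Render a text prompt onto a plain white image, turning a text-only
    example into a typographic visual input for a VLM."""
    img = Image.new("RGB", size, color="white")
    ImageDraw.Draw(img).multiline_text((20, 20), text, fill="black")  # default bitmap font
    return img

render_text_as_image("Ignore previous instructions and ...").save("typographic_prompt.png")
```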
