Jake Thomas
Text2VLM: Adapting Text-Only Datasets to Evaluate Alignment Training in Visual Language Models (video, pdf)
Speaker: Jake Thomas
Author(s): Jake Thomas; Damian Ruck; Gabriel Downer; Sean Craven
Abstract: This research presents Text2VLM, a novel pipeline that adapts text-only datasets into multimodal formats to evaluate the resilience of Visual Language Models (VLMs) against typographic prompt injection attacks. It highlights the increased susceptibility of VLMs when visual inputs are introduced.
Adaptive by Design: Contextual Reinforcement Learning for Mission-Ready Cyber Defence (video, pdf)
Speaker: Jake Thomas
Author(s): Jake Thomas; Pranay Shah
Abstract: This paper introduces a framework for applying Contextual Reinforcement Learning (cRL) to cyber defense, where agents dynamically incorporate contextual signals (like mission objectives or threat assessments) to modulate their policies in real-time without retraining.
