#1701059: Webinar - Breaking AI: Inside the Art of LLM Pen Testing
Description: |
Large Language Models (LLMs) are reshaping enterprise technology and redefining what it means to secure software. But here’s the problem: most penetration testers are using the wrong tools for the job. Traditional techniques focus on exploits and payloads, assuming the AI is just another application. But it’s not. In this session, Brian D., Security Consultant III at Bishop Fox, makes the case that effective LLM security testing is more about persuasion than payloads. Drawing on hands-on research and real-world client engagements, Brian reveals a new model for AI pen testing – one grounded in social engineering, behavioral manipulation, and even therapeutic dialogue. You’ll explore Adversarial Prompt Exploitation (APE), a methodology that targets trust boundaries and decision pathways using psychological levers like emotional preloading, narrative control, and language nesting. This is not Prompt Injection 101 — it’s adversarial cognition at scale – using real-world case studies to demonstrate success. This virtual session tracks key operational challenges: the limitations of static payloads and automation, the complexity of reproducibility, and how to communicate findings to executive and technical leadership. Brian Covers: Why conventional penetration testing methodologies fail on LLMs How attackers exploit psychological and linguistic patterns, not code Practical adversarial techniques: emotional preloading, narrative leading, and more Frameworks for simulating real-world threats to LLM-based systems How to think like a social engineer to secure AI Who Should Watch: This session is perfect for anyone securing, testing, or building AI systems, especially those using LLMs. Pen testers and red teamers will explore a new adversarial framework focused on behavioral manipulation over payloads. AI/ML security pros and researchers will gain insight into psychological attack techniques like emotional preloading and narrative control. Developers will see real-world examples of how attackers engage with models, and CISOs/tech leads will benefit from guidance on operational challenges like reproducibility and communicating findings. |
---|---|
More info: | https://event.on24.com/wcc/r/5059511/F92123D5E1C6C9725C8DE670913F1BF0?partnerref=invite1 |
Date added | Sept. 3, 2025, 5:44 p.m. |
---|---|
Source | On24 |
Subjects | |
Venue | Sept. 11, 2025, midnight - Sept. 11, 2025, midnight |