#1705859: Model Red-Teaming: Dynamic Security Analysis for LLMs - 29th October 2025
| Description: |
The rise of Large Language Models has many organizations rushing to integrate AI-powered tools into existing products, but they introduce significant new risk. OWASP has recently introduced the LLM Top 10 to highlight these novel threat vectors, including prompt injection and data exfiltration. However, existing AppSec tools are not designed to detect and remediate these vulnerabilities. In particular, static analysis (SAST), one of the most common tools, cannot be used since there is no code: machine-learning models are effectively “black boxes." LLM red-teaming is emerging as a technique to minimize the vulnerabilities associated with LLM adoption, ensure data confidentiality, and verify that safety and ethical guardrails are being applied. It applies tactics associated with penetration testing and dynamic analysis (DAST) of traditional software to the new world of machine-learning models. Join this session for an overview of LLM red-teaming principles including: What are some of the novel threat vectors associated with large language models, and how are these attacks carried out? Why are traditional vulnerability-detection tools (such as SAST and SCA) incapable of detecting the most serious risks in LLMs? How can the principles of traditional dynamic analysis be applied to machine learning models, and what types of new tools are needed? How should organizations begin to approach building an effective program for LLM red-teaming? |
|---|---|
| More info: | https://whitepapers.theregister.com/paper/view/44429/model-red-teaming-dynamic-security-analysis-for-llms?td=m-ev?td=m-ev-2025-10-07&utm_source=events&utm_medium=mailshot&utm_content=webcast |
| Date added | Oct. 7, 2025, 9:54 a.m. |
|---|---|
| Source | The Register |
| Subjects | |
| Venue | Oct. 29, 2025, midnight - Oct. 29, 2025, midnight |
