Red Teaming AI How to Stress Test LLM Integrated Apps Like an Attacker
Автор: DevSecCon
Загружено: 2025-10-22
Просмотров: 211
It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.
Доступные форматы для скачивания:
Скачать видео mp4
-
Информация по загрузке: