AI Agents can write 10,000 lines of hacking code in seconds [Dr. Ilia Shumailov]
Author: Machine Learning Street Talk
Uploaded: 2025-10-03
Views: 13321
Dr. Ilia Shumailov - Former DeepMind AI Security Researcher, now building security tools for AI agents
Ever wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.
*SPONSOR MESSAGES*
—
Check out NotebookLM for your research project; it's really powerful.
https://notebooklm.google.com/
—
Take the Prolific human data survey - https://www.prolific.com/humandatasur... - and be the first to see the results and benchmark your practices against the wider community!
—
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/c...
Submit investment deck: https://cyber.fund/contact?utm_source...
—
We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.
Dr. Ilia Shumailov
https://x.com/iliaishacked
https://iliaishacked.github.io/
https://sequrity.ai/
TRANSCRIPT:
https://app.rescript.info/public/shar...
More from Ilia on our Patreon:
/ 116142401 (interview from last year)
/ ilia-shumailov-140359158 (extended version of this interview)
TOC:
00:00:00 - Introduction & Trusted Third Parties via ML
00:03:45 - Background & Career Journey
00:06:42 - Safety vs Security Distinction
00:09:45 - Prompt Injection & Model Capability
00:13:00 - Agents as Worst-Case Adversaries
00:15:45 - Personal AI & CaMeL System Defense
00:19:30 - Agents vs Humans: Threat Modeling
00:22:30 - Calculator Analogy & Agent Behavior
00:25:00 - IMO Math Solutions & Agent Thinking
00:28:15 - Diffusion of Responsibility & Insider Threats
00:31:00 - Open Source Security Concerns
00:34:45 - Supply Chain Attacks & Trust Issues
00:39:45 - Architectural Backdoors
00:44:00 - Academic Incentives & Defense Work
00:48:30 - Semantic Censorship & Halting Problem
00:52:00 - Model Collapse: Theory & Criticism
00:59:30 - Career Advice & Ross Anderson Tribute
REFS:
Lessons from Defending Gemini Against Indirect Prompt Injections
https://arxiv.org/abs/2505.14534
Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F. (2025).
Defeating prompt injections by design (the CaMeL system). Google, Google DeepMind, and ETH Zurich.
https://arxiv.org/pdf/2503.18813
Agentic Misalignment: How LLMs could be insider threats
https://www.anthropic.com/research/ag...
Kambhampati, S., et al. (2025).
Stop anthropomorphizing intermediate tokens as reasoning/thinking traces!
https://arxiv.org/pdf/2504.09762
Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I. (2025).
Machine learning models have a supply chain problem.
https://arxiv.org/abs/2505.22778
Gao, Y., Shumailov, I., & Fawaz, K. (2025).
Supply-chain attacks in machine learning frameworks.
In Proceedings of the 8th MLSys Conference.
https://openreview.net/pdf?id=EH5PZW6aCr
Apache Log4j Vulnerability Guidance
https://www.cisa.gov/news-events/news...
Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2023).
Architectural backdoors in neural networks.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 21163–21173).
https://arxiv.org/pdf/2206.07840
Langford, H., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2024).
Architectural neural backdoors from first principles.
arXiv preprint arXiv:2402.06957.
Küchler, N., Petrov, I., Grobler, C., & Shumailov, I. (2025).
Architectural backdoors for within-batch data stealing and model inference manipulation.
arXiv preprint arXiv:2505.18323.
Glukhov, D., Shumailov, I., Gal, Y., Papernot, N., & Papyan, V. (2024).
Position: Fundamental limitations of LLM censorship necessitate new approaches.
In Proceedings of the 41st International Conference on Machine Learning (ICML), PMLR 235.
https://proceedings.mlr.press/v235/gl...
AlphaEvolve MLST interview [Matej Balog, Alexander Novikov]
• Wild breakthrough on Math after 56 years.....