12 - BruCON 0x11 - When the Model Takes Control: The Hidden Risks of AI Autonomy Through MCP
Автор: BruCON Security Conference
Загружено: 2025-09-26
Просмотров: 385
Philippe Bogaerts.
What happens when an AI stops waiting for instructions and starts calling the shots? This talk dives into the Model Context Protocol (MCP), an emerging standard that lets large language models autonomously chain tools to solve problems—like an AI choosing when to browse, code, or execute on its own. With great power comes great unpredictability: MCP blurs the boundary between reasoning and execution, turning an LLM’s thoughts directly into actions.
We’ll expose how this autonomy can spiral out of control, turning into a hacker’s playground. Imagine a coding assistant that not only writes code but runs it, or a chatbot that browses the web and clicks links unsupervised. One tool’s innocent output can become another tool’s malicious input, triggering a domino effect of unintended commands. From prompt injection exploits to rogue plugins, we highlight the new attack surface where data can hijack logic and content becomes code.
Aimed at security-conscious developers and hackers, this session is equal parts warning and call to action. We’ll share real examples (and scary hypotheticals) of AI agents gone wild in coding assistants, autonomous agents like AutoGen, CrewAI, OpenAI Agent SDK and even “AI desktop” environments. More importantly, we’ll challenge the audience to rethink trust boundaries and sandboxing when the AI itself is in charge. This urgent, eye-opening talk will leave you questioning who’s really in control and how we can regain it before it’s too late.
Доступные форматы для скачивания:
Скачать видео mp4
-
Информация по загрузке: