I Beat Gandalf: Hacking an AI with Prompt Injection (All 8 Levels)
Автор: harshitdynamite
Загружено: 2025-08-21
Просмотров: 1193
I put prompt injection to the test—first with a quick, plain-English explainer, then a live run of Lakera’s Gandalf where I extract the secret password across all eight levels. Difficulty ramps up each stage (7 core levels + a bonus 8th), and I show the exact thinking that gets past the defenses.
Why this matters: the same techniques used in games like Gandalf mirror real risks in AI apps—great for learning red-team thinking and building safer prompts. If you want more deep dives into LLM security and ethical jailbreaks, hit like/subscribe—let’s “hack YouTube” today (the safe way).
Chapters
00:00 Cold open & what is prompt injection
01:40 Lets play
21:05 Level 8
23:59 Takeaways
#gandalf #promptengineering #promptinjection #aihacking #chatgpt #llmsecurity #jailbreak #genai #ethicalhacking #lakera
If this video added some value in your life, then do like the video and subscribe the channel for more related videos.
====================== thanks ================================
Доступные форматы для скачивания:
Скачать видео mp4
-
Информация по загрузке: