Why AI Fails at Counting Letters (But Solves PhD Math Problems)
Author: Lecture Distilled
Uploaded: 2026-01-13
Why can an AI solve Olympiad-level mathematics but fail to count the Rs in "strawberry"? This video reveals the fundamental constraint that explains these bizarre failures: LLMs have a fixed computational budget per token.
You'll discover why the same model that handles complex physics problems confidently claims 9.11 is larger than 9.9—and how training patterns from Bible verses actually cause this specific error.
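For contrast, the comparison is unambiguous the moment it leaves the model's "head" and runs as code; a quick check in a plain Python interpreter:

    print(9.11 > 9.9)      # False: as numbers, 9.11 is smaller than 9.9
    print(max(9.9, 9.11))  # 9.9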
📚 Key concepts covered:
• Fixed computation per token — Every token gets the same ~100 layers of processing, whether the task is trivial or complex
• Tokenization blindness — Models see tokens, not individual characters, making letter counting fundamentally unreliable
• Chain-of-thought reasoning — Why "think step by step" dramatically improves accuracy by distributing computation
• Training pattern interference — How statistical associations (like Bible verse notation) can override correct reasoning
• Code tools as the solution — Why having models write Python is more reliable than "mental math" (see the short sketch after this list)
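A minimal sketch of the tokenization and code-tool points, assuming Python and the open-source tiktoken library (pip install tiktoken); the exact token split varies by model and encoding:

    import tiktoken

    # The model never sees letters, only integer IDs for multi-character
    # chunks; the split below is specific to the cl100k_base encoding.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])  # chunks such as ['str', 'aw', 'berry']

    # Counting characters is a one-liner in plain Python, which is why
    # letting the model write and run code beats counting "in its head":
    print("strawberry".count("r"))  # 3

The first print shows why per-character questions are hard through the tokenized view; the second shows why tool use sidesteps the problem entirely.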
━━━━━━━━━━━━━━━━━━━━━━━━
🎓 ORIGINAL SOURCE
━━━━━━━━━━━━━━━━━━━━━━━━
This video distills concepts from:
• Deep Dive into LLMs like ChatGPT
Full credit to the original creator for the source material. Please visit the original lecture for the complete, in-depth discussion.
━━━━━━━━━━━━━━━━━━━━━━━━
📖 About Lecture Distilled
━━━━━━━━━━━━━━━━━━━━━━━━
Long lectures. Short videos. Core insights.
We distill lengthy academic lectures into focused concept videos that respect your time while preserving the essential knowledge.
🔗 GitHub: https://github.com/Augustinus12835/au...
#LLM #ArtificialIntelligence #MachineLearning #GPT #Tokenization #AIExplained #DeepLearning #NeuralNetworks