Every AI Breaks — Here’s Where
Author: Trent Slade
Uploaded: 2026-01-18
Everyone asks the same question about AI: Which model is the best?
ChatGPT? Claude? Gemini? Grok?
This episode argues that’s the wrong question.
Instead of ranking models like they’re on a leaderboard, we dive into a paper by Trent Slade, Decision-Boundary Ethnography: Mapping Institutional Failure Signatures in Large Language Models, which proposes a radically different way to understand AI: by studying how each model fails under pressure.
The core idea is simple but powerful. AI systems aren’t neutral tools. They’re sociotechnical artifacts—systems that encode the values, priorities, and fears of the institutions that built them. Those values show up most clearly not when the model is succeeding, but when it’s stressed.
In this deep dive, we explore:
Why ChatGPT compulsively turns everything into action plans
Why Claude defers, hedges, and refuses to commit under pressure
Why Grok smooths disagreements into confident averages
Why Gemini clings to provided documents—even when they’re wrong
Using a method called epistemic stress testing, the paper pushes models past their comfort zones to reveal their “decision boundaries”—the exact points where reasoning stabilizes, defers, averages, or ossifies.
We also break down key concepts like:
Constraint density (why tone doesn’t matter as much as rules)
Failure signatures (predictable ways models break)
Boundary cartography (mapping those breaks so you can use them strategically)
Most importantly, we show how this changes real-world workflows. There’s no single “best” AI—only tools with different failure modes. Once you understand those modes, you can:
Pick the right model for the job
Pair models with opposite weaknesses
Build AI teams that are more robust than any single system
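The pairing idea above can be sketched as toy code. Everything in this sketch is a hypothetical illustration, not from the paper: the function names, the stand-in models, and the confidence numbers are invented purely to show the shape of routing one committal model against one cautious model.

```python
# A minimal sketch of pairing two models with opposite failure modes.
# Both "models" are hypothetical stubs standing in for real API calls.

def planner_model(question: str) -> dict:
    # Stand-in for a model that over-commits: it always produces a plan.
    return {"answer": f"Plan: {question}", "confidence": 0.9}

def hedger_model(question: str) -> dict:
    # Stand-in for a model that under-commits: it defers when unsure.
    return {"answer": None, "confidence": 0.2}

def paired_answer(question: str, threshold: float = 0.5) -> dict:
    """Use the committal model's answer, but let the cautious model veto.

    When the hedging model's confidence falls below the threshold, the
    pair disagrees, so the answer is flagged for human review instead of
    being trusted outright.
    """
    committal = planner_model(question)
    cautious = hedger_model(question)
    needs_review = cautious["confidence"] < threshold
    return {"answer": committal["answer"], "needs_review": needs_review}
```

The design choice here mirrors the episode's point: neither stub is "best" on its own, but their opposite failure signatures (compulsive commitment vs. compulsive deferral) combine into a system that knows when to escalate.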
The future skill isn’t prompt engineering.
It’s AI psychology.
Stop asking which AI is smartest.
Start asking: Where does it break—and can I live with that break today?
Slade, T. (2026). Decision-Boundary Ethnography: Mapping Institutional Failure Signatures in Large Language Models. Zenodo. https://doi.org/10.5281/zenodo.18287394