When AI Writes More Than Humans Can Review
Author: This Dot Media
Uploaded: 2026-01-14
Views: 56
In this episode of The Context Window, Tracy Lee, Ben Lesh, and Brandon Mathis dig into what people miss with Cursor and Claude Code: feeding the tools the right documentation, using multiple agents to review output, and treating the model less like a genius and more like a fast junior developer that needs constant direction and code review. They talk about why tab development still wins in some real-world refactor work, why the future problem is not generating code but reviewing it, and what happens when AI can produce more than humans can realistically validate.
The conversation also explores building agents inside companies, the emerging SDK race between platforms like Vercel and TanStack, why MCP isn’t dead but has growing pains around context bloat, and what enterprise teams can do when they are stuck with slower setups like AWS Bedrock.
What You’ll Learn:
How to stop AI coding tools from producing “confident garbage” by giving them the right context and constraints
Why using Cursor or Claude Code well means supervising an assistant like a fast junior dev and doing real code review again
How to use multiple agents to review the same change so you can spot issues without reading every line manually (see the sketch after this list)
When “tab development” beats full agent mode and how to recognize those refactor style use cases
What’s next in agent building inside companies, including tool-calling workflows, MCP growing pains, and surviving enterprise realities like AWS Bedrock
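The multi-agent review idea is straightforward to prototype. Below is a minimal sketch, assuming the Anthropic TypeScript SDK (@anthropic-ai/sdk); the lens prompts, model id, and reviewDiff helper are illustrative assumptions, not the workflow the hosts describe.

```typescript
// Minimal sketch of "multiple agents review the same change".
// Assumes @anthropic-ai/sdk; prompts, model id, and helper name are illustrative.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Each reviewer gets the identical diff but a different lens.
const lenses = [
  "You are a correctness reviewer. Flag logic bugs and unhandled edge cases.",
  "You are a security reviewer. Flag injection, auth, and data-exposure risks.",
  "You are a maintainability reviewer. Flag naming, duplication, and API drift.",
];

async function reviewDiff(diff: string): Promise<string[]> {
  // Run all reviewers in parallel over the same change.
  const reviews = await Promise.all(
    lenses.map((lens) =>
      client.messages.create({
        model: "claude-sonnet-4-5", // illustrative model id
        max_tokens: 1024,
        system: lens,
        messages: [{ role: "user", content: `Review this diff:\n\n${diff}` }],
      }),
    ),
  );
  // Collect the text blocks from each reviewer's response.
  return reviews.map((r) =>
    r.content
      .flatMap((block) => (block.type === "text" ? [block.text] : []))
      .join("\n"),
  );
}
```

The point of the pattern is triage: instead of reading every generated line yourself, you read where the lenses disagree or overlap, which is where the real issues tend to surface.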
Tracy Lee on LinkedIn: / tracyslee
Ben Lesh on LinkedIn: / blesh
Brandon Mathis on LinkedIn: / mathisbrandon
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: / thisdotlabs
This Dot Labs Facebook: / thisdot
Sponsored by This Dot Labs: https://ai.thisdot.co/