Meta's New AI: Better Than LLMs?
Author: LogicLayers
Uploaded: 2025-12-31
Views: 1830
Meta's AI chief Yann LeCun reportedly plans to leave to build his own startup... but before that, he dropped a new AI architecture that challenges core assumptions behind LLMs.
Meet VL-JEPA: a "non-generative" AI model that learns more like a human. It doesn't predict the next token; it predicts meaning directly in embedding space.
In this video, we visualize exactly how VL-JEPA works, why it reportedly matches or beats traditional benchmarks with roughly 50% less compute, and why Yann LeCun thinks generative AI is a dead end for true intelligence.
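To make the "non-generative" idea concrete, here is a toy sketch of a JEPA-style objective: instead of reconstructing tokens or pixels, a predictor is trained to match the *embedding* of a hidden target produced by a separate target encoder. This is an illustrative assumption-laden simplification, not the paper's actual implementation (all weights and shapes below are made up).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # toy "encoder": a single linear map plus nonlinearity into embedding space
    return np.tanh(x @ W)

d_in, d_emb = 8, 4
W_ctx = rng.normal(size=(d_in, d_emb))    # context encoder weights
W_tgt = rng.normal(size=(d_in, d_emb))    # target encoder weights (EMA/frozen in real JEPA)
W_pred = rng.normal(size=(d_emb, d_emb))  # predictor weights

context = rng.normal(size=(1, d_in))  # visible part of the input
target = rng.normal(size=(1, d_in))   # masked / held-out part

# JEPA-style objective: predict the embedding of the target,
# not its raw pixels or next tokens
z_ctx = encode(context, W_ctx)
z_tgt = encode(target, W_tgt)
z_hat = z_ctx @ W_pred

loss = np.mean((z_hat - z_tgt) ** 2)  # regression in latent space
print(float(loss))
```

The key contrast with an LLM is the loss: a generative model pays a cross-entropy cost over a vocabulary of tokens, while the sketch above pays a regression cost between two abstract representations, so irrelevant surface detail never has to be modeled.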
⏱️ Timestamps
0:00 - Only 17s? The Hook
0:17 - What is VL-JEPA?
0:33 - The Problem with Generative AI (LLMs)
1:05 - Implications for Robotics & Agents
1:21 - Token Flow vs. Semantic Flow
1:59 - How VL-JEPA "Thinks" (Semantic Thinking)
2:57 - The Dot Cloud: Visualizing Meaning
3:40 - Comparison: Cheap AI vs. JEPA
4:26 - Temporal Meaning (Understanding Time)
5:10 - The Architecture Explained (No Decoder?!)
6:01 - Benchmarks: Crushing the Competition
7:53 - Yann LeCun on "World Models"
8:52 - Sonia Joseph: Modeling Causal Dynamics
9:53 - Reddit Reacts & Conclusion
🔗 References & Links
Read the Paper: [Link to VL-JEPA Paper]
Comparison of I-JEPA vs VL-JEPA
Yann LeCun's original tweets
#AI #YannLeCun #MetaAI #MachineLearning #VLJEPA #ArtificialIntelligence #LLM #GenerativeAI #ComputerVision #DeepLearning