29.4% ARC-AGI-2 🤯 (TOP SCORE!) - Jeremy Berman
Author: Machine Learning Street Talk
Uploaded: 2025-09-27
Views: 15793
We need AI systems to synthesise new knowledge, not just compress the data they see.
Jeremy Berman is a research scientist at Reflection AI and the recent winner of the ARC-AGI-2 public leaderboard.
*SPONSOR MESSAGES*
—
Take the Prolific human data survey - https://www.prolific.com/humandatasur... and be the first to see the results and benchmark their practices against the wider community!
—
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/c...
Submit investment deck: https://cyber.fund/contact?utm_source...
—
Imagine trying to teach an AI to think like a human: solving puzzles that are easy for us but stump even the smartest models. Jeremy's evolutionary approach, which evolves natural-language descriptions instead of the Python code his previous version used, landed him at the top with about 30% accuracy on ARC-AGI-2.
We discuss why current AIs are like "stochastic parrots" that memorize but struggle to truly reason or innovate, as well as big ideas like building "knowledge trees" for real understanding, the limits of neural networks versus symbolic systems, and whether we can train models to synthesize new ideas without forgetting everything else.
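The evolutionary approach described above can be sketched as a generic loop: keep a population of candidate solutions, score them, retain the best, and breed mutated variants. In Jeremy's ARC pipeline the candidates would be natural-language transformation descriptions scored by how well they reproduce the training grids; the `fitness` and `mutate` functions below are toy stand-ins for illustration, not his actual code.

```python
import random

def evolve(population, fitness, mutate, generations=200, survivors=3, children=4):
    """Evolutionary loop with elitism: rank candidates, keep the top
    `survivors` as parents, and fill the population with mutated children."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:survivors]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(children)]
    return max(population, key=fitness)

# Toy demo: evolve a string toward a target "rule description".
target = "fill each enclosed region with the border color"

def fitness(s):
    # Count positions matching the target description.
    return sum(a == b for a, b in zip(s, target))

def mutate(s):
    # Pad/trim to target length, then perturb one character.
    chars = list(s.ljust(len(target)))[:len(target)]
    i = random.randrange(len(target))
    chars[i] = target[i] if random.random() < 0.5 else random.choice("abcdefgh ")
    return "".join(chars)

best = evolve(["", "fill regions", "border color"], fitness, mutate)
```

Because the parents survive each generation unchanged, the best fitness in the population can never decrease, which is what makes this kind of search stable even with a noisy mutation operator.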
Jeremy Berman:
https://x.com/jerber888
TRANSCRIPT:
https://app.rescript.info/public/shar...
TOC:
Introduction and Overview [00:00:00]
ARC v1 Solution [00:07:20]
Evolutionary Python Approach [00:08:00]
Trade-offs in Depth vs. Breadth [00:10:33]
ARC v2 Improvements [00:11:45]
Natural Language Shift [00:12:35]
Model Thinking Enhancements [00:13:05]
Neural Networks vs. Symbolism Debate [00:14:24]
Turing Completeness Discussion [00:15:24]
Continual Learning Challenges [00:19:12]
Reasoning and Intelligence [00:29:33]
Knowledge Trees and Synthesis [00:50:15]
Creativity and Invention [00:56:41]
Future Directions and Closing [01:02:30]
REFS:
Jeremy's 2024 article on winning the ARC-AGI-1 public leaderboard
https://jeremyberman.substack.com/p/h...
Getting 50% (SoTA) on ARC-AGI with GPT-4o [Greenblatt]
https://blog.redwoodresearch.org/p/ge...
• Solving Chollet's ARC-AGI with GPT-4o [his MLST interview]
A Thousand Brains: A New Theory of Intelligence [Hawkins]
https://www.amazon.com/Thousand-Brain...
• #59 JEFF HAWKINS - Thousand Brains Theory [MLST interview]
Francois Chollet + Mike Knoop’s lab
https://ndea.com/
On the Measure of Intelligence [Chollet]
https://arxiv.org/abs/1911.01547
On the Biology of a Large Language Model [Anthropic]
https://transformer-circuits.pub/2025...
The ARChitects [won 2024 ARC-AGI-1-private]
• The ARC Prize 2024 Winning Algorithm [Dani...
Connectionism critique 1988 [Fodor/Pylyshyn]
https://uh.edu/~garson/F&P1.PDF
Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley]
https://arxiv.org/pdf/2505.11581
AlphaEvolve interview (also program synthesis)
• Wild breakthrough on Math after 56 years.....
ShinkaEvolve: Evolving New Algorithms with LLMs, Orders of Magnitude More Efficiently [Lange et al]
https://sakana.ai/shinka-evolve/
Deep learning with Python Rev 3 [Chollet] - READ CHAPTER 19 NOW!
https://deeplearningwithpython.io/