29.4% ARC-AGI-2 🤯 (TOP SCORE!) - Jeremy Berman

Author: Machine Learning Street Talk

Uploaded: 2025-09-27

Views: 15793

Description:

We need AI systems to synthesise new knowledge, not just compress the data they see.

Jeremy Berman is a research scientist at Reflection AI and the recent winner of the ARC-AGI v2 public leaderboard.

*SPONSOR MESSAGES*
—
Take the Prolific human data survey - https://www.prolific.com/humandatasur... - and be the first to see the results and benchmark your practices against the wider community!
—
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy.
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) plus OAI, Anthropic, NVDA, and more
Hiring a SF VC Principal: https://talent.cyber.fund/companies/c...
Submit investment deck: https://cyber.fund/contact?utm_source...
—

Imagine trying to teach an AI to think like a human, i.e. to solve puzzles that are easy for us but stump even the smartest models. Jeremy's evolutionary approach, which evolves natural-language descriptions instead of the Python code his previous version evolved, landed him at the top with about 30% accuracy on ARC v2.
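To make the idea concrete, here is a minimal, hedged sketch of an evolutionary loop over candidate rule descriptions. This is not Jeremy's actual system: in his setup an LLM proposes and mutates natural-language solution descriptions and candidates are scored against ARC training pairs, whereas here both steps are stubbed with toy string functions (`mutate`, `fitness`, and the `TARGET` rule are all illustrative inventions) purely to show the select-mutate-rescore shape of the search.

```python
import random

# Toy stand-in for the rule the search is trying to discover.
TARGET = "reflect the grid horizontally then recolor the largest shape"

def mutate(description: str, vocabulary: list[str]) -> str:
    """Stub 'LLM' mutation: swap one word of the description.
    A real system would ask a language model to rewrite the rule."""
    words = description.split()
    i = random.randrange(len(words))
    words[i] = random.choice(vocabulary)
    return " ".join(words)

def fitness(description: str) -> float:
    """Stub scorer: fraction of words matching the target rule.
    A real scorer would apply the description to ARC training grids."""
    target = TARGET.split()
    words = description.split()
    return sum(w == t for w, t in zip(words, target)) / len(target)

def evolve(seed: str, generations: int = 200, pop_size: int = 16) -> str:
    vocab = TARGET.split() + ["rotate", "crop", "tile", "invert"]
    population = [seed] * pop_size
    for _ in range(generations):
        # Elitism: keep the best half, refill with mutated survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors), vocab)
            for _ in range(pop_size - len(survivors))
        ]
        if fitness(population[0]) == 1.0:
            break
    return max(population, key=fitness)

random.seed(0)
seed_rule = "rotate the grid horizontally then invert the largest shape"
best = evolve(seed_rule)
print(best, fitness(best))
```

Because the best candidate always survives each generation, fitness is non-decreasing; the interesting engineering in a real system is entirely inside the two stubbed functions.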

We discuss why current AIs are like "stochastic parrots" that memorize but struggle to truly reason or innovate, as well as big ideas like building "knowledge trees" for real understanding, the limits of neural networks versus symbolic systems, and whether we can train models to synthesize new ideas without forgetting everything else.

Jeremy Berman:
https://x.com/jerber888

TRANSCRIPT:
https://app.rescript.info/public/shar...

TOC:
Introduction and Overview [00:00:00]
ARC v1 Solution [00:07:20]
Evolutionary Python Approach [00:08:00]
Trade-offs in Depth vs. Breadth [00:10:33]
ARC v2 Improvements [00:11:45]
Natural Language Shift [00:12:35]
Model Thinking Enhancements [00:13:05]
Neural Networks vs. Symbolism Debate [00:14:24]
Turing Completeness Discussion [00:15:24]
Continual Learning Challenges [00:19:12]
Reasoning and Intelligence [00:29:33]
Knowledge Trees and Synthesis [00:50:15]
Creativity and Invention [00:56:41]
Future Directions and Closing [01:02:30]

REFS:
Jeremy's 2024 article on winning the ARC-AGI-1 public leaderboard
https://jeremyberman.substack.com/p/h...

Getting 50% (SoTA) on ARC-AGI with GPT-4o [Greenblatt]
https://blog.redwoodresearch.org/p/ge...
• Solving Chollet's ARC-AGI with GPT-4o [his MLST interview]

A Thousand Brains: A New Theory of Intelligence [Hawkins]
https://www.amazon.com/Thousand-Brain...
• #59 JEFF HAWKINS - Thousand Brains Theory [MLST interview]

Francois Chollet + Mike Knoop’s lab
https://ndea.com/

On the Measure of Intelligence [Chollet]
https://arxiv.org/abs/1911.01547

On the Biology of a Large Language Model [Anthropic]
https://transformer-circuits.pub/2025...

The ARChitects [won 2024 ARC-AGI-1 private leaderboard]
• The ARC Prize 2024 Winning Algorithm [Dani...

Connectionism critique 1988 [Fodor/Pylyshyn]
https://uh.edu/~garson/F&P1.PDF

Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley]
https://arxiv.org/pdf/2505.11581

AlphaEvolve interview (also program synthesis)
• Wild breakthrough on Math after 56 years...

ShinkaEvolve: Evolving New Algorithms with LLMs, Orders of Magnitude More Efficiently [Lange et al]
https://sakana.ai/shinka-evolve/

Deep Learning with Python, 3rd Edition [Chollet] - READ CHAPTER 19 NOW!
https://deeplearningwithpython.io/
