Why the 'intelligence explosion' might be too fast to handle | Will MacAskill

Author: 80,000 Hours

Uploaded: 2025-03-11

Views: 146,719

Description:

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, the internet, postmodernism, game theory, genetic engineering, the Big Bang, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years. (https://www.forethought.org/research/...)

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.
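To see how quickly even modest recursive gains compound, here is a minimal toy model (my own illustration, not a model from the paper or the episode; the yearly doubling rate is an assumed parameter):

```python
# Toy model of the compression argument (illustrative assumption, not
# from the paper): once AI can do AI research, the effective research
# speed doubles every year, starting from 1x today's human rate.

def years_of_progress(horizon_years: float, doublings_per_year: float) -> float:
    """Human-equivalent years of research completed over the horizon,
    integrated month by month."""
    steps = 12  # months per year
    total, speed = 0.0, 1.0  # speed is a multiple of today's research rate
    for _ in range(int(horizon_years * steps)):
        total += speed / steps                      # this month's progress
        speed *= 2 ** (doublings_per_year / steps)  # recursive speed-up
    return total

print(round(years_of_progress(6, 1.0)))   # -> 88: nearly a century by year six
print(round(years_of_progress(10, 1.0)))  # -> 1434: many centuries in a decade
```

Under even this single-doubling assumption, a century of progress is banked before year seven, which is the arithmetic behind the "century crammed into a decade" framing.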

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, all while grappling with the news that he’s descended from monkeys and his god doesn’t exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are far more fixed, so there’s real reason to worry about our capacity to make wise choices under that pressure. Will lays out 10 “grand challenges” we’ll need to navigate quickly to keep things from going wrong during this period.

In this wide-ranging conversation with host Rob Wiblin, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare.

Learn more and see the full transcript on the 80,000 Hours website: https://80k.info/wm25

This episode was originally recorded on February 7, 2025.

Chapters:
• Cold open (00:00:00)
• Who’s Will MacAskill? (00:00:43)
• Why Will now just works on AGI (00:01:03)
• Will was wrong(ish) on AI timelines and hinge of history (00:04:21)
• A century of history crammed into a decade (00:09:19)
• Science goes super fast; our institutions don't keep up (00:16:15)
• Is it good or bad for intellectual progress to 10x? (00:21:44)
• An intelligence explosion is not just plausible but likely (00:23:41)
• Intellectual advances outside technology are similarly important (00:30:04)
• Counterarguments to intelligence explosion (00:32:42)
• The three types of intelligence explosion (software, technological, industrial) (00:39:00)
• The industrial intelligence explosion is the most certain and enduring (00:42:01)
• Is a 100x or 1,000x speedup more likely than 10x? (00:53:44)
• The grand superintelligence challenges (00:57:39)
• Grand challenge #1: Many new destructive technologies (01:01:29)
• Grand challenge #2: Seizure of power by a small group (01:09:10)
• Is global lock-in really plausible? (01:11:06)
• Grand challenge #3: Space governance (01:21:50)
• Is space truly defence-dominant? (01:32:19)
• Grand challenge #4: Morally integrating with digital beings (01:36:04)
• Will we ever know if digital minds are happy? (01:45:01)
• “My worry isn't that we won't know; it's that we won't care” (01:50:39)
• Can we get AGI to solve all these issues as early as possible? (01:54:05)
• Politicians have to learn to use AI advisors (02:07:05)
• Ensuring AI makes us smarter decision-makers (02:11:25)
• How listeners can speed up AI epistemic tools (02:15:11)
• AI could become great at forecasting (02:18:54)
• How not to lock in a bad future (02:20:26)
• AI takeover might happen anyway — should we rush to load in our values? (02:32:14)
• ML researchers are feverishly working to destroy their own power (02:41:57)
• We should aim for more than mere survival (02:45:23)
• By default the future is rubbish (02:57:03)
• No easy utopia (03:05:21)
• What levers matter most to utopia (03:15:19)
• Bottom lines from the modelling (03:29:32)
• People distrust utopianism; should they distrust this? (03:33:34)
• What conditions make eventual eutopia likely? (03:38:26)
• The new Forethought Centre for AI Strategy (03:47:15)
• How does Will resist hopelessness? (04:00:42)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

Related videos

Terry Tao: "LLMs Are Simpler Than You Think – The Real Mystery Is Why They Work!"

"We have 900 days left." | Emad Mostaque

Ray Kurzweil: The Singularity Has Started, Merging with AI, Humanity 1000x Smarter by 2045

Ex-Google Officer on AI, Capitalism, and the Future of Humanity

FULL DISCUSSION: Google's Demis Hassabis, Anthropic's Dario Amodei Debate the World After AGI | AI1G

Может ли у ИИ появиться сознание? — Семихатов, Анохин

The Man Warning The West: Trump Is Changing The World Behind The Scenes

Why Objects Don’t Really Exist | Leonard Susskind

Preparing for the intelligence explosion | Will MacAskill | EAG London: 2025

We Let an AI Talk To Another AI. Things Got Really Weird. | Kyle Fish, Anthropic

49 минут, которые ИЗМЕНЯТ ваше понимание Вселенной | Владимир Сурдин

The Graph That Explains Most of Geopolitics Today | Professor Hugh White

Do LLMs Understand? AI Pioneer Yann LeCun Spars with DeepMind’s Adam Brown.

What If You Keep Slowing Down?

The Particle We’ve Been Chasing for 30 Years Might Not Exist

Tech Billionaires Want Us Dead

The REAL Reason AI Can’t Be Stopped Now

AI Safety Expert: Will AI Destroy Humanity?

Роман Ямпольский: развитие ИИ, риски сверх интеллекта, контроль технологий и др.

The 4 Most Plausible AI Takeover Scenarios | Ryan Greenblatt, Chief Scientist at Redwood Research
