Will & Mr Beastly Debate AI Doom
Author: Mr Beastly
Uploaded: 2025-02-27
Views: 56
This is a conversation between myself (Mr Beastly) and Will Petillo ( / @willpetillo1189 ) on Feb 20, 2025. The topic of discussion is the probability of AI driving humanity to extinction.
In this talk, I did not contest the possibility of superhuman AI in the near future, the possibility that its values would be misaligned with those of humanity, or that such a misaligned superintelligence could effectively resist being shut down and otherwise cause havoc. The crux of the disagreement is whether the AI would grow its presence in the world to the point where humans could no longer live in it, or whether the new global equilibrium would leave space for both humans and AI.
Specifically, we focus on four main claims:
1) Whether AI will develop nanotechnology and threaten all life on Earth.
2) Why pausing AI before evidence of harm is unlikely to work.
3) Why AGI companies will be forced to secure the world's open-source software against attacks from super-intelligent AI developed by competitors or rival nations.
4) Why I believe AI will eventually abandon human code and create its own internet.
Will also introduces the idea of developing a "big red button" to stop AI, and of testing this button, even if only for five minutes. He argues that this would be a good way to prepare for a potential AI takeover.
___
Additional Resources:
This Slide Deck:
https://bit.ly/MrBeastly-SlideDeck-An...
https://bit.ly/MrBeastly-Slides_for_C...
Related Articles:
LessWrong: An Alternate History of the Future, 2025-2040, by Mr Beastly, Feb 24, 2025
LessWrong: A History of the Future, 2025-2040, by L Rudolf L, Feb 17, 2025
PauseAI.info: AI models are unpredictable digital brains, by Joep Meindertsma and the PauseAI Community, Mar 5, 2024
https://pauseai.info/digital-brains
Want to chat about these topics IRL?
https://pauseai.info/join
___