The dangerous study of self-modifying AIs

Author: Dr Waku

Uploaded: 2023-11-05

Views: 4063

Description:

In this video we discuss some of the nuts and bolts of AI evolution: how an AI might go about improving itself, and how far away we are from that point. As a model gets better and better at its job, the temptation is to turn it into an agent that can operate independently, without human intervention. This, combined with the single-minded nature of optimization reward functions, makes it easy to accidentally give an AI the opportunity to self-evolve.

Needless to say, kicking off machine evolution by allowing AI to self-evolve would be extremely dangerous for humanity. It's quite likely that we would enter the technological singularity at a severe disadvantage.

We discuss several ways of recursively leveraging LLMs to solve problems more effectively than a zero-shot large language model invocation: chain-of-thought prompting, tree of thoughts, and especially the recent Self-Taught Optimizer (STOP) paper from Microsoft and Stanford. Although this type of research carries an element of danger, at least looking ahead, it is important for understanding how an AI might go about self-improvement.
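The core STOP loop described in the video can be sketched in a few lines. This is a toy illustration, not the paper's code: `llm_propose` is a hypothetical stand-in for a real model call, and `utility` replaces the paper's downstream benchmark with a trivial score, just to make the recursive structure concrete and runnable.

```python
import random

def llm_propose(program: str, seed: int) -> str:
    """Stand-in for an LLM call that proposes a revised program.
    Here it just appends a random tweak so the loop is runnable."""
    random.seed(seed)
    return program + f"  # tweak {random.randint(0, 999)}"

def utility(program: str) -> float:
    """Downstream-task score. In STOP this would run the candidate
    against a benchmark; the toy version scores by length."""
    return float(len(program))

def improve(program: str, n_candidates: int = 4) -> str:
    """One improver step: sample candidate revisions, keep the best."""
    candidates = [llm_propose(program, s) for s in range(n_candidates)]
    return max(candidates + [program], key=utility)

# The recursive twist in STOP: the improver is itself just a program
# string, so the same step can be applied to the improver's own source.
seed_program = "def solve(x): return x"
improved = improve(improve(seed_program))
```

The danger the video highlights lives in that last comment: once `improve` can be pointed at its own source and the utility function is the only check, the loop has no built-in reason to stay within the sandbox its designers intended.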

#ai #agi #research

Chain-of-Thought Prompting
https://www.promptingguide.ai/techniq...

“Recursive self-improvement” (RSI) is one of the oldest ideas in AI
https://twitter.com/ericzelikman/stat...

[paper] Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
https://arxiv.org/abs/2310.02304

[paper] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
https://arxiv.org/abs/2305.10601

0:00 Intro
0:22 Contents
0:27 Part 1: Self-improvement
0:44 Letting the AI act fully autonomously
1:34 What if AI focuses on improving itself?
2:01 Disadvantage going into the singularity
2:50 Book recommendation: AI Apocalypse
3:31 Defining initial self-improvement
4:24 Part 2: The LLM primitive
4:53 Using multiple calls to solve one problem
5:05 Paper: Chain of thoughts
5:52 Backtracking for complex problems
6:30 Paper: Tree of thoughts
6:47 Using LLM to define scaffolding
7:32 Part 3: Playing with fire
7:42 Paper: Self-taught optimizer (STOP)
8:44 Analogy: Programming contests
9:43 Utility function and downstream task
10:07 Algorithms suggested by LLM
10:28 GPT-4 repeatedly makes improver better
11:27 Security issues: circumventing the sandbox
12:40 Reward hacking, normal for AI
13:42 Ethics of self-improvement research
14:24 Responsible disclosure
14:47 Conclusion
15:50 Outro
