
Lecture 06 • Reward Modelling

Author: Meridian Cambridge

Uploaded: 2025-05-11

Views: 263

Description:

This is the sixth lecture in the Language Models and Intelligent Agentic Systems course, run by Meridian Cambridge in collaboration with the Cambridge Centre for Data Driven Discovery (C2D3).

This lecture covers reward modelling. Reward modelling is the problem of training a neural network to output rewards in response to behaviours which accurately capture human preferences over those behaviours. We'll cover the motivation behind reward modelling, the Bradley-Terry loss used to train reward models, how preference data is obtained to train these models, and shortcomings and open problems in current reward modelling.
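The Bradley-Terry loss mentioned above can be sketched in a few lines. This is a minimal NumPy illustration, not the lecture's implementation: it assumes the reward model has already scored each (chosen, rejected) response pair, and shows only the preference loss that those scores would be trained under.

```python
import numpy as np

def bradley_terry_loss(r_chosen, r_rejected):
    """Negative log-likelihood that the chosen response beats the rejected one.

    Under the Bradley-Terry model, P(chosen preferred over rejected) =
    sigmoid(r_chosen - r_rejected), so minimising this loss pushes the
    reward model to score preferred behaviours higher.
    """
    margin = np.asarray(r_chosen) - np.asarray(r_rejected)
    # -log sigmoid(x) written in the numerically stable form log(1 + e^{-x})
    return np.logaddexp(0.0, -margin).mean()

# Toy batch of reward-model scores for three preference pairs (made-up numbers).
chosen = np.array([2.0, 0.5, 1.2])
rejected = np.array([0.1, 0.4, -0.3])
print(bradley_terry_loss(chosen, rejected))
```

Note that the loss depends only on the score margin, not on absolute reward values: adding a constant to every score leaves it unchanged, which is why reward-model scores are only meaningful relative to one another.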

The slides for the lecture can be found here: https://tinyurl.com/LMaIAS

Meridian's Website: https://www.meridiancambridge.org/
Meridian's course webpage: https://www.meridiancambridge.org/lan...
C2D3's course webpage: https://www.c2d3.cam.ac.uk/events/LLM...

Give feedback on this lecture here: https://airtable.com/appjc4HBUw4Ktlij...
Give feedback on the lecture series as a whole here: https://airtable.com/appjc4HBUw4Ktlij...

Related videos

  • Lecture 07 • Agents and Agent Architectures — Meridian Cambridge, 1 month ago
  • Lecture 01 • Introduction to Language Models — Meridian Cambridge, 1 month ago
  • Why Your Coaching Business Needs This in 2025 — Laura Agar, 10 hours ago
  • RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models — IBM Technology, 2 months ago
  • Lecture 11 • Deceptive Alignment and Alignment Faking — Meridian Cambridge, 3 weeks ago
  • THIS is why large language models can understand the world — Algorithmic Simplicity, 2 months ago
  • The war is gaining momentum: casualties, destruction and retaliatory strikes — Сергей Ауслендер, 5 hours ago
  • Lecture 09 • Reward Hacking and Goal Misgeneralisation — Meridian Cambridge, 4 weeks ago
  • Lecture 04 • Post-Training Language Models — Meridian Cambridge, 1 month ago
  • 2015 10 30 Claude Shannon — MIT Video Productions External, 9 years ago

© 2025 dtub. All rights reserved.

  • Contacts
  • About us
  • Privacy policy

Contacts for rights holders: [email protected]