Fine-tuning LLMs on Human Feedback (RLHF + DPO)
Author: Shaw Talebi
Uploaded: 2025-03-03
Views: 19,620
💡 Get 30 (free) AI project ideas: https://30aiprojects.com/
Here, I discuss how to use reinforcement learning to fine-tune LLMs on human feedback (i.e., RLHF) and a more efficient reformulation of it, Direct Preference Optimization (DPO).
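For reference, the DPO objective from [5] can be sketched as below. The symbols are defined here rather than in the video: pi_theta is the model being fine-tuned, pi_ref is the frozen reference model, beta controls how far the policy may drift from the reference, and each training example is a prompt x with a preferred response y_w and a rejected response y_l.

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\ \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[ \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right) \right]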
📰 Read more: https://medium.com/@shawhin/fine-tuni...
Example code: https://github.com/ShawhinT/YouTube-B...
🤗 Dataset: https://huggingface.co/datasets/shawh...
🤗 Fine-tuned Model: https://huggingface.co/shawhin/Qwen2....
References
[1] The Llama 3 Herd of Models, arXiv:2407.21783 [cs.AI]
[2] Training Language Models to Follow Instructions with Human Feedback (InstructGPT), arXiv:2203.02155 [cs.CL]
[3] Proximal Policy Optimization Algorithms, arXiv:1707.06347 [cs.LG]
[4] Deep Dive into LLMs like ChatGPT (YouTube video)
[5] Direct Preference Optimization: Your Language Model Is Secretly a Reward Model, arXiv:2305.18290 [cs.LG]
Intro - 0:00
Base Models - 0:25
InstructGPT - 2:20
RL from Human Feedback (RLHF) - 5:18
Proximal Policy Optimization (PPO) - 9:20
Limitations of RLHF - 10:30
Direct Preference Optimization (DPO) - 11:50
Example: Fine-tuning Qwen on Title Preferences - 14:29
Step 1: Curate preference data - 17:49
Step 2: Fine-tuning with DPO - 20:53
Step 3: Evaluate the fine-tuned model - 25:27
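For anyone following along with the chapters above, here is a minimal sketch of what Steps 1–2 typically look like with Hugging Face's TRL library. The model and dataset names are placeholders (the real ones are in the links above), and DPOTrainer/DPOConfig arguments differ slightly across TRL versions, so treat this as an outline rather than the video's actual code.

# Minimal sketch of DPO fine-tuning (Steps 1-2) using Hugging Face TRL.
# Model and dataset names are placeholders, not the ones from the video.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder Qwen checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Step 1: a preference dataset with "prompt", "chosen", and "rejected" columns,
# where "chosen" is the preferred title and "rejected" the dispreferred one.
preference_data = load_dataset("your-username/title-preferences", split="train")  # placeholder

# Step 2: fine-tune with DPO. beta controls how strongly the policy is kept
# close to the frozen reference model.
training_args = DPOConfig(
    output_dir="qwen-dpo-titles",
    beta=0.1,
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=preference_data,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()

For Step 3, one simple (hedged) evaluation approach is to generate titles from both the base and DPO-tuned models on held-out prompts and compare which output a judge prefers; see the linked example code for the approach actually used in the video.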
Homepage: https://www.shawhintalebi.com/