Turns out Attention wasn't all we needed - How have modern Transformer architectures evolved?
Author: Neural Breakdown with AVB
Uploaded: 2024-12-16
Views: 6547
In this video, we discuss the evolution of the classic neural attention mechanism, from early forms such as Bahdanau Attention to the Self-Attention and Causal Masked Attention introduced in the seminal "Attention Is All You Need" paper. We then cover more advanced variants of Multi-Head Attention, such as Multi-Query Attention (MQA) and Grouped Query Attention (GQA). Along the way, we also discuss important innovations in Transformer and Large Language Model (LLM) architectures, such as KV Caching. The video uses visualizations and graphics to explain these concepts.
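For context on how the attention variants above relate to each other, here is a minimal sketch (not taken from the video) of scaled dot-product attention with a configurable number of KV heads. The function name, tensor shapes, and dimensions are illustrative assumptions: setting n_kv_heads equal to the number of query heads recovers Multi-Head Attention, setting it to 1 gives Multi-Query Attention, and anything in between is Grouped Query Attention.

import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, causal=True):
    # q: (batch, n_heads, seq, head_dim)
    # k, v: (batch, n_kv_heads, seq, head_dim), with n_heads % n_kv_heads == 0
    b, n_heads, t, d = q.shape
    n_kv_heads = k.shape[1]
    group = n_heads // n_kv_heads
    # Repeat each KV head so it is shared by `group` query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / d**0.5        # (b, n_heads, t, t)
    if causal:
        # Causal mask: each position may only attend to itself and earlier tokens.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v             # (b, n_heads, t, d)

# Example: 8 query heads sharing 2 KV heads (GQA with a group size of 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])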
Correction to the slide at 22:03: MHA has high latency (runs slower), while MQA has low latency (runs faster).
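The corrected claim is easier to see with a back-of-the-envelope KV-cache calculation: the cache that must be read on every decoding step scales with the number of KV heads, so MQA moves far less memory per generated token than MHA. The layer count, head dimension, and sequence length below are hypothetical, not taken from the video.

# KV-cache size = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_value
layers, head_dim, seq_len, bytes_fp16 = 32, 128, 4096, 2
for name, kv_heads in [("MHA", 32), ("GQA", 8), ("MQA", 1)]:
    cache_gb = 2 * layers * kv_heads * head_dim * seq_len * bytes_fp16 / 2**30
    print(f"{name}: {cache_gb:.2f} GB per sequence")
# MHA: 2.00 GB, GQA: 0.50 GB, MQA: 0.06 GB -> fewer KV heads means less memory
# traffic at decode time, which is why MQA has lower latency than MHA.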
All the slides, animations, and write-ups used in this video will soon be shared on our Patreon. Go have fun! :)
Join the channel on Patreon to receive updates and get access to bonus content used in all my videos. Here is the link:
/ neuralbreakdownwithavb
Videos you might like:
Attention to Transformers playlist: • Attention to Transformers from zero to her...
50 concepts to know in NLP: • 10 years of NLP history explained in 50 co...
Guide to fine-tuning open source LLMs: • Finetune LLMs to teach them ANYTHING with ...
Generative Language Modeling from scratch: • From Attention to Generative Language Mode...
#deeplearning #machinelearning
Timestamps:
0:00 - Intro
1:15 - Language Modeling and Next Word Prediction
5:22 - Self Attention
10:40 - Causal Masked Attention
14:45 - Multi Headed Attention
16:03 - KV Cache
19:49 - Multi Query Attention
21:43 - Grouped Query Attention