
Stanford CS25: V1 I Mixture of Experts (MoE) paradigm and the Switch Transformer

Author: Stanford Online

Uploaded: 2022-07-14

Views: 39811

Description:

In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
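To make the routing idea concrete, below is a minimal, self-contained sketch of a Switch-style top-1 mixture-of-experts feed-forward layer in PyTorch. This is illustrative only, not the lecture's or the paper's reference implementation: the name SwitchFFN and the sizes d_model, d_ff, and n_experts are assumptions chosen for the example, and it omits production details such as expert capacity limits and the load-balancing auxiliary loss.

# Minimal sketch of Switch-style top-1 MoE routing (assumed names/sizes, not the reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwitchFFN(nn.Module):
    """Feed-forward block where each token is routed to exactly one expert."""

    def __init__(self, d_model: int = 64, d_ff: int = 256, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # produces routing logits per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model), with batch and sequence dimensions flattened together
        probs = F.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
        gate, expert_idx = probs.max(dim=-1)        # top-1: one expert chosen per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i                  # tokens assigned to expert i
            if mask.any():
                # scale by the gate probability so the router still receives gradients
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return out


tokens = torch.randn(8, 64)
layer = SwitchFFN()
print(layer(tokens).shape)  # torch.Size([8, 64]): per-token compute matches one dense FFN

In the actual Switch Transformer, tokens are dispatched to experts in parallel across devices under a capacity constraint rather than with a Python loop; the loop here only keeps the sketch readable.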

Barret Zoph is a research scientist on the Google Brain team. He has worked on a variety of deep learning research topics, including neural architecture search (NAS), data augmentation, semi-supervised learning for computer vision, and model sparsity. Prior to Google Brain, he worked on machine translation at the Information Sciences Institute.

Irwan Bello is a research scientist on the Google Brain team. His research interests primarily lie in modeling, scaling and designing layers that process structured information while trading off scalability and inductive biases.

View the entire CS25 Transformers United playlist: Stanford CS25 - Transformers United

Related videos

Stanford CS25: V1 I DeepMind's Perceiver and Perceiver IO: new data family architecture

Stanford CS25: V4 I Aligning Open Language Models

4 Hours Chopin for Studying, Concentration & Relaxation

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 9 - Recap & Current Trends

December Jazz ~ Positive Coffee Jazz Music & Exquisite Bossa Nova Instrumental for Good Mood

Switch Transformers: Scaling to Trillion-Parameter Models with Simple and Eff...

A Visual Guide to Mixture of Experts (MoE) in LLMs

Stanford CME295 Transformers & LLMs | Autumn 2025 | Lecture 1 - Transformer

Visualizing transformers and attention | Talk for TNG Big Tech Day '24

LLM and GPT - how do large language models work? A visual introduction to transformers

The Turing Lectures: The future of generative AI

Stanford CS25: V5 I Transformers for Video Generation, Andrew Brown of Meta

Stanford AI Club: Jeff Dean on Important AI Trends

What is Mixture of Experts?

Stanford CS25: V5 I Overview of Transformers

Bossa Nova Jazz - Best Bossa Nova Covers 2025 for a Relaxing Vibe

Stanford CS25: V1 I Decision Transformer: Reinforcement Learning via Sequence Modeling

Large language models explained briefly

RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models

Mixtral of Experts (Paper Explained)
