Pixelated Butterfly: Fast Machine Learning with Sparsity - Beidi Chen | Stanford MLSys #49

Author: Stanford MLSys Seminars

Uploaded: 2022-01-06

Views: 5891

Description:

Episode 49 of the Stanford MLSys Seminar Series!

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Speaker: Beidi Chen

Abstract:
Overparameterized neural networks generalize well but are expensive to train. Ideally, one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple and promising approach to achieve this, but there remain challenges as existing methods struggle with accuracy loss, slow training runtime, or difficulty in sparsifying all model components. The core problem is that searching for a sparsity mask over a discrete set of sparse matrices is difficult and expensive. To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices. As butterfly matrices are not hardware efficient, we propose simple variants of butterfly (block and flat) to take advantage of modern hardware. Our method (Pixelated Butterfly) uses a simple fixed sparsity pattern based on flat block butterfly and low-rank matrices to sparsify most network layers (e.g., attention, MLP). We empirically validate that Pixelated Butterfly is 3x faster than butterfly and speeds up training to achieve favorable accuracy--efficiency tradeoffs. On the ImageNet classification and WikiText-103 language modeling tasks, our sparse models train up to 2.5x faster than the dense MLP-Mixer, Vision Transformer, and GPT-2 medium with no drop in accuracy.
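
To make the fixed-sparsity idea above concrete, here is a minimal NumPy sketch (not the authors' implementation; the helper names butterfly_factor_mask and sparse_plus_lowrank are made up for illustration). It builds the support of a single butterfly factor and combines a masked dense weight with a low-rank term, the same "sparse plus low-rank" layer shape the abstract describes. The actual Pixelated Butterfly method uses flat block butterfly patterns with dense blocks sized for GPU tiles and block-sparse kernels rather than a dense mask.

import numpy as np

def butterfly_factor_mask(n, block):
    # 0/1 support of one butterfly factor: block-diagonal with (n // block)
    # blocks, each block a 2x2 grid of diagonal sub-blocks, so 2 nonzeros per row.
    half = block // 2
    sub = np.eye(half, dtype=np.float32)
    blk = np.block([[sub, sub], [sub, sub]])
    mask = np.zeros((n, n), dtype=np.float32)
    for s in range(0, n, block):
        mask[s:s + block, s:s + block] = blk
    return mask

def sparse_plus_lowrank(x, W, mask, U, V):
    # y = x (W * mask) + (x U) V : fixed sparse part plus low-rank part.
    return x @ (W * mask) + (x @ U) @ V

# Toy usage: a 16x16 weight replaced by one butterfly-factor pattern + rank-2 term.
rng = np.random.default_rng(0)
n, rank = 16, 2
mask = butterfly_factor_mask(n, block=4)
W = rng.standard_normal((n, n)).astype(np.float32)
U = rng.standard_normal((n, rank)).astype(np.float32)
V = rng.standard_normal((rank, n)).astype(np.float32)
x = rng.standard_normal((8, n)).astype(np.float32)
y = sparse_plus_lowrank(x, W, mask, U, V)
print(y.shape, int(mask.sum()), "nonzeros out of", n * n)

Because the support is fixed before training, only the nonzero weights and the low-rank factors are learned; no sparsity mask has to be searched over during training, which is what keeps the runtime predictable on hardware.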

Bio:
Beidi Chen is a postdoctoral scholar in the Department of Computer Science at Stanford University, working with Dr. Christopher Ré. Her research focuses on large-scale machine learning and deep learning. Specifically, she designs and optimizes randomized algorithms (algorithm-hardware co-design) to accelerate large machine learning systems for real-world problems. Prior to joining Stanford, she received her Ph.D. in the Department of Computer Science at Rice University, advised by Dr. Anshumali Shrivastava. She received a BS in EECS from UC Berkeley in 2015. She has held internships at Microsoft Research, NVIDIA Research, and Amazon AI. Her work has won Best Paper awards at LISA and IISA. She was selected as a Rising Star in EECS by MIT and UIUC.

--

0:00 Presentation
20:48 Discussion

Stanford MLSys Seminar hosts: Dan Fu, Karan Goel, Fiodar Kazhamiaka, and Piero Molino
Executive Producers: Matei Zaharia, Chris Ré

Twitter:
  / realdanfu
  / krandiash
  / w4nderlus7

--

Check out our website for the schedule: http://mlsys.stanford.edu
Join our mailing list to get weekly updates: https://groups.google.com/forum/#!for...

#machinelearning #ai #artificialintelligence #systems #mlsys #computerscience #stanford #butterfly #pixelatedbutterfly #sparsity #lowrank #optimization #hardware #deeplearning


Related videos

Resource-Efficient Deep Learning Execution - Deepak Narayanan | Stanford MLSys #50
Notes on AI Hardware - Benjamin Spector | Stanford MLSys #88
Monarch Mixer: Making Foundation Models More Efficient - Dan Fu | Stanford MLSys #86
Sparse Neural Networks: From Practice to Theory
Talk: Rethinking Test-Time Scaling Laws (Beidi Chen)
Scientists Just Discovered What Came Before the Big Bang—Here's What It Means
Sparse is Enough in Scaling Transformers (aka Terraformer) | ML Research Paper Explained
Hardware-aware Algorithms for Sequence Modeling - Tri Dao | Stanford MLSys #87
Stanford AI Club: Jeff Dean on Important AI Trends
Sparse Training of Neural Networks Using AC/DC
The Man Behind Google's AI Machine | Demis Hassabis Interview
Stanford CS230 | Autumn 2025 | Lecture 9: Career Advice in AI
EVO: DNA Foundation Models - Eric Nguyen | Stanford MLSys #96
Lecture 1 | Modern Physics: Classical Mechanics (Stanford)
Teaching LLMs to Use Tools at Scale - Shishir Patil | Stanford MLSys #98
How Fine-Tuning Open-Source LLMs Solves the Problem of Adopting GenAI in ...
Stanford CS230 | Autumn 2025 | Lecture 1: Introduction to Deep Learning
Did Terry Tao Solve the $1,000,000 Equation That Breaks the Laws of Physics?
The Next 100x - Gavin Uberti | Stanford MLSys #92
Sparsity in Neural Networks (Brains@Bay Meetup)
