
Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues

Author: AutoML Seminars

Uploaded: 2025-01-09

Views: 615

Description:

Title: Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues

Abstract:

Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers for large language modeling, offering linear scaling with sequence length and improved training efficiency. However, LRNNs struggle to perform state-tracking, which may impair performance on tasks such as code evaluation or tracking a chess game. Even parity, the simplest state-tracking task, which non-linear RNNs like the LSTM handle effectively, cannot be solved by current LRNNs. Recently, Sarrof et al. (2024) demonstrated that the failure of LRNNs like Mamba to solve parity stems from restricting the value range of their diagonal state-transition matrices to [0, 1], and that incorporating negative values can resolve this issue. We extend this result to non-diagonal LRNNs, which have recently shown promise in models such as DeltaNet. We prove that finite-precision LRNNs whose state-transition matrices have only positive eigenvalues cannot solve parity, while complex eigenvalues are needed to count modulo 3. Notably, we also prove that LRNNs can learn any regular language when their state-transition matrices are products of identity-minus-vector-outer-product matrices, each with eigenvalues in the range [−1, 1]. Our empirical results confirm that extending the eigenvalue range of models like Mamba and DeltaNet to include negative values not only enables them to solve parity but also consistently improves their performance on state-tracking tasks. Furthermore, pre-training LRNNs with an extended eigenvalue range for language modeling achieves comparable performance and stability while showing promise on code and math data. Our work enhances the expressivity of modern LRNNs, broadening their applicability without changing the cost of training or inference.
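As a concrete illustration of the parity result (a sketch, not code from the paper): a one-dimensional linear recurrence h_t = a_t · h_{t−1} whose input-dependent transition is −1 on a 1-bit and +1 on a 0-bit tracks parity exactly in its sign, whereas transitions confined to [0, 1] can only shrink the state monotonically and can never flip it:

```python
def parity_via_signed_recurrence(bits):
    """Track parity with a scalar linear RNN whose transition is +/-1.

    h_t = a_t * h_{t-1}, with a_t = -1 for an input bit of 1 and +1 otherwise.
    The sign of the final state encodes the parity of the bit string:
    h_T = (-1)^(number of ones).
    """
    h = 1.0
    for b in bits:
        a = -1.0 if b == 1 else 1.0  # the negative eigenvalue is what enables the flip
        h = a * h
    return 0 if h > 0 else 1

# With transitions restricted to [0, 1], the state can never change sign,
# so no such parity counter exists; this is the restriction the paper lifts.
```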

Speaker: Julien Siems (https://juliensiems.github.io/) and Riccardo Grazzi (https://prolearner.github.io/riccardo...)

Paper: https://arxiv.org/abs/2411.12537
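A quick numerical sanity check of the eigenvalue claim for a single identity-minus-vector-outer-product transition (a sketch; `beta`, `v`, and the dimension below are illustrative choices, not the paper's notation). For A = I − β v vᵀ with a unit vector v, the spectrum is {1 − β} together with 1 on the orthogonal complement, so β ∈ [0, 2] keeps all eigenvalues in [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
v = rng.standard_normal(n)
v /= np.linalg.norm(v)            # unit vector, so v v^T is a rank-1 projector
beta = 1.5                        # beta in [0, 2]  =>  eigenvalues stay in [-1, 1]

A = np.eye(n) - beta * np.outer(v, v)   # generalized Householder-style transition

eigs = np.sort(np.linalg.eigvalsh(A))   # A is symmetric, so eigvalsh applies
# Spectrum: 1 - beta along v (here -0.5), and 1 on the orthogonal complement.
```

Products of such matrices are generally neither diagonal nor symmetric, which is what takes the expressivity result beyond the diagonal case analyzed by Sarrof et al. (2024).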


