Processing Megapixel Images with Deep Attention-Sampling Models

Tags: machine learning, deep learning, research, attention, attention sampling, attention model, attention distribution, megapixel images, large images, artificial intelligence, megapixel mnist, street sign dataset, monte carlo, speed, memory, cnn, convolutional neural networks, limited resources, ai, image recognition, image classifier

Author: Yannic Kilcher

Uploaded: Aug 12, 2019

Views: 3,527

Description:

Current CNNs have to downsample large images before processing them, which can discard a lot of fine detail. This paper proposes attention sampling, which learns to selectively process parts of a large image at full resolution while discarding uninteresting regions. This leads to enormous gains in speed and memory consumption.

https://arxiv.org/abs/1905.03711

Abstract:
Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images.

Authors: Angelos Katharopoulos, François Fleuret
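The core idea in the abstract can be sketched numerically: compute an attention distribution from a low-resolution view, sample a few patch locations from it, run only those full-resolution patches through a feature extractor, and average. The mean of the sampled patch features is an unbiased Monte-Carlo estimate of the attention-weighted average over all patches. Below is a minimal NumPy sketch under assumed interfaces (`attention_fn`, `feature_fn` and the patch-grid layout are hypothetical stand-ins for the paper's learned attention and feature networks):

```python
import numpy as np

def attention_sampling(image, attention_fn, feature_fn,
                       patch=8, n_samples=5, rng=None):
    """Monte-Carlo sketch of attention sampling.

    attention_fn: maps a low-resolution view to positive scores per patch.
    feature_fn:   maps one full-resolution patch to a feature vector.
    Sampling patch indices from the normalized attention scores and
    averaging their features estimates the attention-weighted sum
    over all patches without ever processing the whole image.
    """
    rng = np.random.default_rng(rng)
    H, W = image.shape
    gh, gw = H // patch, W // patch
    # Low-resolution view: average-pool each patch down to one pixel.
    low = image.reshape(gh, patch, gw, patch).mean(axis=(1, 3))
    scores = attention_fn(low).ravel()
    attn = scores / scores.sum()                     # attention distribution
    idx = rng.choice(gh * gw, size=n_samples, p=attn)  # sample locations
    feats = []
    for k in idx:
        i, j = divmod(k, gw)
        p = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]  # full-res patch
        feats.append(feature_fn(p))
    return np.mean(feats, axis=0)                    # Monte-Carlo estimate
```

In the paper both `attention_fn` and `feature_fn` are neural networks trained end-to-end with an unbiased gradient estimator; here they are plain callables so the sampling mechanics stand alone.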


