Optimizing Recommendations with Multi-Armed & Contextual Bandits for Personalized Next Best Actions

Author: WiDS Worldwide

Uploaded: 2025-01-22

Views: 692

Description:

In this WiDS Upskill Workshop, Keerthi Gopalakrishnan explores how Multi-Armed Bandit (MAB) and Contextual Bandit algorithms can optimize online recommendations for next-best-action scenarios. These techniques help balance the trade-off between exploration (trying new recommendations) and exploitation (leveraging successful actions) to drive better personalization and engagement.
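The description does not include code from the workshop itself, so as a minimal illustration of the exploration/exploitation trade-off it refers to, here is an epsilon-greedy sketch in Python. The class name EpsilonGreedyRecommender, the three-arm setup, and the simulated click-through rates are illustrative assumptions, not material from the session:

```python
import random

class EpsilonGreedyRecommender:
    """Minimal epsilon-greedy bandit: explore with probability epsilon,
    otherwise exploit the arm (recommendation) with the best observed mean reward."""

    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self) -> int:
        if random.random() < self.epsilon:               # explore: random recommendation
            return random.randrange(len(self.counts))
        return max(range(len(self.values)),
                   key=self.values.__getitem__)          # exploit: best mean so far

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean update


# Toy simulation: arm 2 has the highest true click-through rate.
true_ctr = [0.02, 0.05, 0.10]
bandit = EpsilonGreedyRecommender(n_arms=3, epsilon=0.1)
for _ in range(5000):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < true_ctr[arm] else 0.0
    bandit.update(arm, reward)
print("estimated CTRs:", [round(v, 3) for v in bandit.values])
```

With a small epsilon the recommender spends most traffic on the best-looking arm while still sampling the others often enough to correct a wrong early estimate.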

Keerthi will break down key MAB concepts (see the code sketch after this list), including:

Epsilon-greedy
Upper Confidence Bound (UCB) & Contextual UCB
Thompson Sampling
Real-world applications in recommendation systems
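The exact formulations covered in the session are not given in this description; the snippet below is a hedged sketch of two of the selection rules named above, UCB1 and Beta-Bernoulli Thompson Sampling, for a toy three-arm setup. The variable names and the Bernoulli click/no-click reward assumption are illustrative:

```python
import math
import random

# Per-arm statistics for Bernoulli rewards (e.g., click / no click).
pulls = [0, 0, 0]       # how many times each arm (recommendation) was shown
successes = [0, 0, 0]   # observed clicks per arm

def ucb1_arm(t: int) -> int:
    """UCB1: mean reward plus an exploration bonus that shrinks as an arm
    is pulled more often. t is the 1-based round index."""
    for a, n in enumerate(pulls):
        if n == 0:               # play every arm once before applying the formula
            return a
    scores = [successes[a] / pulls[a] + math.sqrt(2 * math.log(t) / pulls[a])
              for a in range(len(pulls))]
    return max(range(len(scores)), key=scores.__getitem__)

def thompson_arm() -> int:
    """Thompson Sampling: sample each arm's click rate from its Beta posterior
    and recommend the arm with the highest sampled value."""
    samples = [random.betavariate(1 + successes[a], 1 + pulls[a] - successes[a])
               for a in range(len(pulls))]
    return max(range(len(samples)), key=samples.__getitem__)
```

Contextual variants such as Contextual UCB replace these per-arm averages with a model of reward given a user/context feature vector, so the exploration bonus reflects the model's uncertainty for that particular context.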

This session is ideal for:

Data Scientists and Machine Learning Engineers
Product Managers and Data Strategists
Researchers and Academics

Prior knowledge: A background in supervised learning and evaluation metrics is recommended. Familiarity with online learning or decision-making algorithms is helpful but not required.

Learn more about WiDS Upskill Workshops: https://www.widsworldwide.org/learn/u...

Related videos

WiDS 2025 Global Datathon Workshop #1: Introduction to the Challenge and Dataset
Optimization and Contextual Bandits at Stripe
Multi-Armed Bandits: A Cartoon Introduction - DCBA #1
The Contextual Bandits Problem
LLM fine-tuning or TRAINING a small model? We tested it!
What I actually do as a Data Scientist in the US for $410,000/year
20 artificial intelligence concepts explained in 40 minutes
WiDS 2025 Global Datathon Workshop #2: Dataset Preprocessing and Preparation
Personalizing Explainable Recommendations with Multi-objective Contextual Bandits
System Design Concepts Course and Interview Prep
07 09 Deep Contextual MAB
Multi-Armed Bandit: Data Science Concepts
WiDS Datathon University Program Kickoff 2026
WiDS 2025 Global Datathon Workshop #3: Building & Evaluating a Machine Learning Model
Contextual Bandits
Causal Inference in Business: Real-World Impact | Shreya Bhattacherjee, Walmart Global Tech
What the heck are "contextual bandits"?!
Playlist,,Deep House,Music Played in Louis Vuitton Stores
Arize:Observe - Using Reinforcement Learning Techniques for Recommender Systems
MACHINE LEARNING - EVERYTHING YOU NEED TO KNOW
