Shusen Wang
Staff Engineer @ Meta
RL-1G: Summary
RL-1F: Evaluate Reinforcement Learning
RL-1E: Value Functions
RL-1D: Rewards and Returns
RL-1C: Randomness in MDP, Agent-Environment Interaction
RL-1B: State, Action, Reward, Policy, State Transition
RL-1A: Random Variables, Observations, and Random Samples
Vision Transformer for Image Classification
BERT for Pre-training Transformers
Transformer Model (2/2): Build a Deep Neural Network (1.25x speed recommended)
Transformer Model (1/2): Attention Layers
Self-Attention for RNN (1.25x speed recommended)
Attention for RNN Seq2Seq Models (1.25x speed recommended)
Few-Shot Learning (3/3): Pretraining + Fine-tuning
Few-Shot Learning (2/3): Siamese Networks
17-4: Random Shuffle & Fisher-Yates Algorithm
5-2: Dense Matrices: row-major order, column-major order
5-1: Matrix basics: additions, multiplications, time complexity analysis
17-1: Monte Carlo Algorithms
3-1: Insertion Sort
6-1: Binary Tree Basics
Few-Shot Learning (1/3): Basic Concepts
2-3: Skip List
2-2: Binary Search
2-1: Array, Vector, and List: Comparisons