From FP32 to INT8: Post-Training Quantization Explained in PyTorch
Author: MLWorks
Uploaded: 2025-11-04
Views: 254
Shrink your models and speed up inference — all without retraining! 🚀
In this video, we'll walk step by step through post-training quantization (PTQ) using PyTorch.
You’ll learn:
✅ What PTQ is and why it shrinks models
✅ How to apply PTQ in PyTorch with just a few lines of code (a minimal sketch follows the list below)
✅ Real model size comparison and performance gain demo
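For reference, here is a minimal sketch of dynamic post-training quantization with PyTorch's torch.quantization.quantize_dynamic API. The toy model, layer sizes, and file names are illustrative assumptions, not taken from the video; the video's own demo may differ.

```python
# Minimal dynamic PTQ sketch (assumed toy model, not the video's exact code).
import os
import torch
import torch.nn as nn

# A small FP32 model standing in for the network you want to quantize.
model_fp32 = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model_fp32.eval()

# Dynamic PTQ: Linear weights are converted to INT8 ahead of time;
# activations are quantized on the fly at inference. No retraining needed.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

# Rough size comparison by serializing each state dict to disk.
def size_on_disk(model, path):
    torch.save(model.state_dict(), path)
    size_mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return size_mb

print(f"FP32 model: {size_on_disk(model_fp32, 'fp32.pt'):.2f} MB")
print(f"INT8 model: {size_on_disk(model_int8, 'int8.pt'):.2f} MB")
```

Dynamic quantization is the simplest PTQ path because it needs no calibration data; static PTQ (with observers and a calibration pass) can quantize activations ahead of time as well.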
Perfect for developers and ML engineers looking to optimize deep learning models for edge devices or production deployment.
🔔 Subscribe for more PyTorch, Model Optimization, and MLOps tutorials!
#Quantization #PyTorch #ModelOptimization #DeepLearning #MLOps #PostTrainingQuantization