Learn to Guide Your Diffusion Model (Oct 2025)
Author: AI Papers Slop
Uploaded: 2025-10-03
Views: 36
Title: Learn to Guide Your Diffusion Model (Oct 2025)
Link: http://arxiv.org/abs/2510.00815v1
Date: October 2025
Summary:
This paper introduces a method to learn guidance weights for classifier-free guidance (CFG) in diffusion models. Rather than using a fixed scalar, the learned weights are continuous functions of the conditioning, the diffusion time, and the denoising progress, and are trained by minimizing the distributional mismatch between the true and guided diffusion processes. The method also extends to reward-guided sampling and is evaluated on image generation tasks, where it improves FID and image-prompt alignment.
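To make the core idea concrete, here is a minimal sketch of CFG with a learned, input-dependent guidance weight ω(c, s, t) in place of the usual fixed scalar, plus a generic RBF-kernel MMD² estimator of the kind the "MMD Loss" chapter refers to. All names (GuidanceWeightNet, denoiser, rbf_mmd2) and the exact network/loss shapes are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: classifier-free guidance with a learned guidance weight
# omega(c, s, t) instead of a fixed scalar w. Assumes a denoiser callable
# denoiser(x_t, t, cond_emb) -> predicted noise; names are hypothetical.
import torch
import torch.nn as nn


class GuidanceWeightNet(nn.Module):
    """Small MLP mapping (conditioning embedding, time t, denoising progress s)
    to a per-sample guidance weight."""
    def __init__(self, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cond_dim + 2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cond_emb, t, s):
        # t and s are per-sample scalars in [0, 1], appended as extra features.
        x = torch.cat([cond_emb, t[:, None], s[:, None]], dim=-1)
        return self.mlp(x).squeeze(-1)  # one weight per sample


def guided_noise_pred(denoiser, x_t, t, s, cond_emb, null_emb, omega_net):
    """Standard CFG combination, but with a learned per-sample weight."""
    eps_cond = denoiser(x_t, t, cond_emb)    # conditional prediction
    eps_uncond = denoiser(x_t, t, null_emb)  # unconditional prediction
    w = omega_net(cond_emb, t, s)            # learned omega(c, s, t)
    w = w.view(-1, *([1] * (x_t.dim() - 1))) # broadcast over spatial dims
    return eps_uncond + w * (eps_cond - eps_uncond)


def rbf_mmd2(x, y, sigma: float = 1.0):
    """Biased MMD^2 estimate with an RBF kernel between two sample sets
    (shape [N, D] and [M, D]); a generic distribution-matching loss, not
    necessarily the paper's exact objective."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```

In this reading, the weight network would be trained so that samples from the guided process match samples from the true conditional process under a loss like the MMD above, while the denoiser itself stays fixed.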
Key Topics:
Diffusion Models
Classifier-Free Guidance (CFG)
Guidance Weight Learning
Distribution Matching
Reward-Guided Sampling
Image Generation
Fréchet Inception Distance (FID)
Image-Prompt Alignment
Chapters:
00:00 - Introduction to Guidance in Diffusion Models
00:08 - Learn to Guide: A New Approach
00:21 - The Problem with Fixed Guidance Weights
00:48 - Dynamic Guidance Weights: ω(c, s, t)
01:16 - Self-Consistency for Learning Guidance
01:38 - Experimental Results: Image Generation
02:02 - The Need for Adaptive Guidance
02:33 - Classifier-Free Guidance Explained
03:17 - The Issue of Distributional Alignment
03:43 - Diversity and Stereotypical Outputs
04:18 - Adaptive Guidance: A Solution
04:53 - Learning Dynamic Weights: Consistency
05:30 - Self-Consistency Explained
06:46 - MMD Loss and Time Gaps
07:27 - Training with Large Time Gaps
08:29 - ImageNet Results: Improved FID
09:10 - Class-Specific Guidance
10:11 - Text-to-Image Generation
10:47 - Reward Guidance and CLIP Score
11:25 - Combining Consistency and Rewards
12:17 - Key Takeaways
13:11 - Future Research Directions