Geometry and Topology Seminar
Steering Diffusion Models
Guidance mechanisms enable controllable generation from diffusion models at inference time. Classifier guidance steers sampling using gradients from a noise-aware classifier, offering principled control but requiring a separately trained network. Classifier-free guidance eliminates the external classifier by interpolating conditional and unconditional predictions, yet it requires jointly training both predictions, typically by randomly dropping the conditioning signal during training.
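The interpolation step described above can be sketched in a few lines. This is a minimal illustration with toy scalar inputs, not code from the talk; the function name and numbers are my own:

```python
def cfg_prediction(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one.

    guidance_scale = 1.0 recovers the purely conditional prediction;
    values > 1 strengthen the conditioning signal.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy scalar noise predictions standing in for network outputs.
eps_c, eps_u = 1.5, 0.5
guided = cfg_prediction(eps_c, eps_u, guidance_scale=2.0)
print(guided)  # 0.5 + 2.0 * (1.5 - 0.5) = 2.5
```

At a guidance scale of zero the update falls back to the unconditional prediction, which is why training with conditioning dropout suffices to support the whole family of scales.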
Fine-tuning and Steering of Diffusions with Non-Differentiable Rewards
Abstract: We consider stochastic differential equations that are modified by reward functions or likelihood-based weights in order to promote specific events. This perspective applies both to diffusion-type models used in generative modeling and to SDEs describing physical phenomena such as molecular dynamics or weather. The main emphasis is on rewards that are non-smooth or singular, as they appear in conditioning, threshold objectives, and rare-event simulation.
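One standard way to make reward modification precise (a sketch of the usual formulation, not necessarily the talk's) is to tilt the path measure of the reference SDE by a terminal reward:

```latex
% Tilting of a reference path measure P by a terminal reward r:
\frac{\mathrm{d}P^{r}}{\mathrm{d}P}(X)
  = \frac{\exp\!\big(r(X_T)\big)}{\mathbb{E}_{P}\!\left[\exp\!\big(r(X_T)\big)\right]} .
% For smooth r this is realized by adding the guidance drift
% \nabla_x \log h(t,x), \quad
% h(t,x) = \mathbb{E}\!\left[\exp\!\big(r(X_T)\big)\,\middle|\, X_t = x\right],
% to the reference dynamics (a Doob h-transform).
```

Under this formulation, non-smooth or indicator rewards (as in conditioning and rare-event simulation) make $h$ and its gradient singular, which is exactly the regime the abstract emphasizes.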
Local Theories of Diffusion Model Generalization in High Dimensions
Modern generative diffusion models are distinguished by their ability to generalize, consistently and robustly, in very high-dimensional spaces. They produce a combinatorial explosion of novel images from a relatively small training set, subverting normal concerns about the curse of dimensionality. Yet their generations also sometimes fall short, exhibiting distinctive flaws such as spatial inconsistency (e.g. extra limbs).
Diffusion Models Through the Linear Lens: Exact Analysis of Sampling, Learning, Receptive Fields, and Consistency
Diffusion models are powerful generative systems, yet their internal mechanisms remain difficult to analyze. Taking a physicist's approach, we study the simplest tractable case: a diffusion model with a linear score function.
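A minimal, self-contained sketch of such a linear-score setting (my own toy construction, not code from the talk): for Gaussian data every noised marginal has an exactly linear score, so the probability-flow ODE can be integrated numerically and checked against the known data law.

```python
import math
import random

def linear_score(x, sigma2, s0=1.0):
    # Exact score of the noised marginal N(0, s0^2 + sigma^2):
    # d/dx log p(x) = -x / (s0^2 + sigma^2).
    return -x / (s0 ** 2 + sigma2)

def probability_flow_sample(n=4000, s0=1.0, sigma_max=5.0, steps=500, seed=0):
    """Integrate the variance-exploding probability-flow ODE
    dx/d(sigma^2) = -(1/2) * score(x, sigma^2), from sigma_max down to 0."""
    rng = random.Random(seed)
    # Start from the fully noised marginal N(0, s0^2 + sigma_max^2).
    xs = [rng.gauss(0.0, math.sqrt(s0 ** 2 + sigma_max ** 2)) for _ in range(n)]
    d = sigma_max ** 2 / steps  # step size in sigma^2
    for i in range(steps, 0, -1):
        sigma2 = i * d
        xs = [x + 0.5 * linear_score(x, sigma2, s0) * d for x in xs]
    return xs

samples = probability_flow_sample()
mean = sum(samples) / len(samples)
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))
# With a linear score the sampler should recover the data law N(0, s0^2),
# so std should be close to s0 = 1 up to discretization and sampling error.
```

Because the score is linear, each Euler step is a scalar contraction, which is what makes this case exactly analyzable in the sense the abstract describes.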
Diffusion Model’s Generalization via Data-Dependent Ridge Manifolds
When a diffusion model is not memorizing the training samples, what does it generate, and why? In this talk, I will describe a quantitative framework for understanding the distribution produced by a learned diffusion model through a data-driven geometric object: a log-density ridge manifold of the smoothed training distribution.
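As a rough sketch of the kind of object involved (the standard ridge definition from the density-ridge estimation literature; the talk's precise definition may differ), a $d$-dimensional ridge of the smoothed log-density $\log p_\sigma$ collects the points where the gradient lies in the span of the top Hessian eigenvectors:

```latex
% Let v_1(x), \dots, v_D(x) be eigenvectors of H(x) = \nabla^2 \log p_\sigma(x)
% with eigenvalues \lambda_1(x) \ge \cdots \ge \lambda_D(x). Then
R_d = \bigl\{\, x :\ \langle v_i(x), \nabla \log p_\sigma(x) \rangle = 0
      \ \text{for all } i > d,\ \ \lambda_{d+1}(x) < 0 \,\bigr\}.
```

Intuitively, these are points that are local maxima of $\log p_\sigma$ in the directions transverse to the manifold, which is why the ridge concentrates where a trained model's samples tend to land.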
Harnessing Low-Dimensionality for Generalizable and Trustworthy Generative AI
Abstract: Generative AI has rapidly transformed machine learning, with diffusion and autoregressive models achieving unprecedented performance across vision, language, and scientific discovery. Despite this success, our theoretical understanding still lags far behind practice: why do these models generalize so effectively from finite data in high dimensions?