
Geometry and Topology Seminar

Steering Diffusion Models

Guidance mechanisms enable controllable generation from diffusion models at inference time. Classifier guidance steers sampling using gradients from a noise-aware classifier, offering principled control but requiring a separately trained network. Classifier-free guidance eliminates the external classifier by interpolating between conditional and unconditional predictions, but it requires jointly training both models, typically by randomly dropping the conditioning signal during training.
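The interpolation step of classifier-free guidance can be sketched as follows; this is a minimal illustration, where `eps_cond` and `eps_uncond` stand in for the network's conditional and unconditional noise predictions at one sampling step:

```python
import numpy as np

def cfg_noise_estimate(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: combine the conditional and unconditional
    noise predictions. guidance_scale = 1 recovers the conditional model;
    larger values extrapolate past it, pushing samples toward the condition."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-in predictions for a single sampling step.
eps_c = np.array([0.2, -0.1])
eps_u = np.array([0.0, 0.0])
steered = cfg_noise_estimate(eps_c, eps_u, guidance_scale=2.0)
```

With `guidance_scale > 1` the combined estimate overshoots the conditional prediction, which is what gives the stronger (and sometimes over-saturated) adherence to the condition observed in practice.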

Fine-tuning and Steering of Diffusions with Non-Differentiable Rewards

Abstract: We consider stochastic differential equations that are modified by reward functions or likelihood-based weights in order to promote specific events. This perspective applies both to diffusion-type models used in generative modeling and to SDEs describing physical phenomena such as molecular dynamics or weather. The main emphasis is on rewards that are non-smooth or singular, as they appear in conditioning, threshold objectives, and rare-event simulation.
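One simple instance of the reward-weighting idea, sketched under illustrative assumptions (the SDE, reward, and temperature `lam` are all stand-ins, not the speaker's method): simulate an unmodified SDE, then tilt the resulting path measure by a singular reward, here an indicator of a threshold event, and form self-normalized estimates under the tilted law:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(n_paths, n_steps, dt=0.01, theta=1.0, sigma=1.0):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dX = -theta * X dt + sigma dW, started at X_0 = 0."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

x_T = simulate_ou(n_paths=200_000, n_steps=100)

# Singular (0/1) reward: indicator of a threshold event at the final time.
reward = (x_T > 1.0).astype(float)

# Tilt the unmodified path measure by exp(reward / lam) and form a
# self-normalized estimate of a statistic under the tilted measure.
lam = 0.5
w = np.exp(reward / lam)
tilted_mean = (w * x_T).sum() / w.sum()  # mean of X_T under the tilted law
plain_mean = x_T.mean()
```

Because the tilting upweights exactly the paths that realize the rare threshold event, the tilted mean shifts toward the promoted region; smoother or gradient-based steering replaces this post-hoc reweighting when the reward is differentiable.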

Local Theories of Diffusion Model Generalization in High Dimensions

Modern generative diffusion models are distinguished by their ability to generalize, consistently and robustly, in very high dimensional spaces. They produce a combinatorial explosion of novel images from a relatively small training set, subverting normal concerns about the curse of dimensionality. Yet, their generations also sometimes fall short, exhibiting distinctive flaws such as spatial inconsistency (e.g. excessive limbs).

Harnessing Low-Dimensionality for Generalizable and Trustworthy Generative AI

Abstract: Generative AI has rapidly transformed machine learning, with diffusion and autoregressive models achieving unprecedented performance across vision, language, and scientific discovery. Despite this success, our theoretical understanding still lags far behind practice: why do these models generalize so effectively from finite data in high dimensions?
