Score-based diffusion models have significantly advanced the generation of high-dimensional data across diverse domains by learning a denoising oracle (or score) from datasets. From a Bayesian perspective, these models provide a natural representation of data priors and should also facilitate sampling from related distributions, such as posterior distributions in inverse problems or tilted distributions shaped by additional criteria.
While many heuristic methods exist for such adaptations, they often lack the quantitative guarantees needed in scientific applications. This talk introduces recently developed techniques, grounded in the analysis of stochastic differential equations (SDEs), that allow principled modifications of the initial distribution or the drift to achieve these adaptations. By leveraging the rich information encoded in pretrained score models, the resulting algorithms can substantially enhance classical sampling methods such as Langevin Monte Carlo and Sequential Monte Carlo.
DoMSS Seminar
Monday, September 29
12:00pm MST/AZ
Virtual via Zoom; reach out to Heyrim Cho for the link.
Jiequn Han
Research Scientist
Center for Computational Mathematics
Flatiron Institute