Structure-preserving generative models

Abstract

In this talk, I will discuss how intrinsic structures of probability distributions, such as (approximate) group symmetries, multimodality, and low dimensionality, can be systematically incorporated into generative models to improve data efficiency. In the first part, I will focus on generative adversarial networks (GANs) and explain how group symmetry can be embedded into their architecture. A central theme will be a precise analysis of the resulting reduction in sample complexity, that is, the number of samples required to effectively learn the target distribution. Somewhat surprisingly, the quantitative gains are not always aligned with naive intuition. In the second part, I will turn to score-based diffusion models. I will present a framework in which structural information is integrated into the noising dynamics itself, so that the diffusion process reflects the multimodality, low dimensionality, and approximate symmetries inherent in the data. This modification leads to improved learning efficiency while preserving the flexibility and scalability of diffusion-based approaches. I will also briefly discuss recent extensions of this framework to latent-space formulations.

Bio
https://sites.google.com/view/weizhumath/home

Description

CAM/DoMSS Seminar
Monday, February 9
12:00pm MST/AZ
GWC 487

Speaker

Wei Zhu
Assistant Professor
Georgia Tech
