Paper presented by Jimmie Adriazola

Abstract

Paper: Neural Operator: Learning Maps Between Function Spaces with Applications to PDEs by Kovachki et al. (2023)
Journal: Journal of Machine Learning Research (preprint: arXiv:2108.08481)
Synopsis:
Operator learning is an emerging paradigm in scientific machine learning. Unlike physics-informed neural networks, which solve a fixed PDE instance with given initial/boundary data, neural operators aim to approximate the entire solution map, that is, the map from the function space of inputs to the function space of solutions. This board talk will introduce the central ideas of neural operators as developed by Kovachki and coauthors. I will outline the universal approximation theorem for neural operators, highlight several important architectures (including multipole graph neural operators and Fourier neural operators), and illustrate how they are applied to PDE problems. A brief computer demo on canonical PDE examples will follow, so that you can see how to adapt these ideas to aspects of your own research.
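To make the idea concrete before the talk, here is a minimal sketch of the spectral convolution at the heart of a Fourier neural operator layer: transform the input function to Fourier space, apply a learned linear map to the lowest modes, truncate the rest, and transform back. The per-mode scalar weights here are a simplifying assumption; an actual FNO uses learned complex weight tensors acting across channels, plus a pointwise linear term and a nonlinearity.

```python
import numpy as np

def fourier_layer(v, weights, k_max):
    """Linear part of one Fourier neural operator layer (simplified sketch):
    keep and reweight the lowest k_max Fourier modes, discard the rest."""
    v_hat = np.fft.rfft(v)                      # complex Fourier modes of v
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = weights * v_hat[:k_max]   # mode-wise "learned" multipliers
    return np.fft.irfft(out_hat, n=v.shape[0])  # back to physical space

# Toy check: with identity weights, a low-frequency signal passes through.
n, k_max = 64, 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
v = np.sin(3 * x)                               # mode 3 < k_max, so it is kept
w = np.ones(k_max, dtype=complex)               # stand-in for learned weights
v_out = fourier_layer(v, w, k_max)
print(np.allclose(v_out, v, atol=1e-10))        # True: signal reconstructed
```

Because the learned map acts on Fourier coefficients rather than grid values, the same trained weights can be applied to inputs discretized at different resolutions, which is one of the properties the paper emphasizes.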

Description

AMPD UP (Applied Math Presentations & Discourse for Understanding Papers)
Wednesday, September 3
9:00am AZ/MST
WXLR A202

For more information, visit the website. Reach out to Kimball Johnston with questions.

What is AMPD UP (Applied Math Presentations & Discourse for Understanding Papers)?
It is a reading group for applied mathematicians at ASU. Each week, a different member of the group presents the results of a recent applied math research paper to the rest of the group. That way, each member only has to read one or two of the papers covered but gets to learn about all of them. In addition to spreading knowledge, this helps students become better researchers and presenters.

Speaker

Jimmie Adriazola
NSF MPS-Ascend Postdoctoral Fellow
ASU Presidential Postdoctoral Fellow
School of Mathematical and Statistical Sciences
Arizona State University

Location
WXLR A202