Towards a Theoretical Understanding of Neural Networks

Abstract

What kind of functions do neural networks learn? Why can neural networks perform well in high-dimensional settings? What is the right way to define the “norm” of a neural network?

This talk will answer these questions and provide mathematical explanations for existing design and training strategies that have evolved largely through experiment and practice. These include new insights into the importance of weight decay, linear layers, and skip connections, and a deeper understanding of sparsity and the curse of dimensionality. The theory also suggests new neural network architectures and regularization methods that may lead to improvements in practice. (Joint work with Rahul Parhi).

Description

DoMSS Seminar
Monday, February 28

1:30 pm MST/AZ

Zoom meeting room link: https://asu.zoom.us/j/6871076660

Note: This meeting will be held via Zoom. This semester, we anticipate that some talks will be in person, but most will be via Zoom.

Speaker

Robert Nowak
Nosbusch Professorship in Engineering
University of Wisconsin–Madison

Location
Virtual via Zoom