Optimization Methods for Machine Learning

Abstract

Machine learning (ML) problems are often posed as highly nonlinear, nonconvex unconstrained optimization problems. Methods for solving ML problems based on stochastic gradient descent generally require careful tuning of many hyper-parameters. In this talk we discuss alternative approaches for solving ML problems based on a quasi-Newton trust-region framework that does not require extensive parameter tuning. We will present numerical results that demonstrate the potential of the proposed approaches.
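The abstract does not specify the speaker's implementation, but the general idea of a quasi-Newton trust-region solve can be sketched with SciPy's built-in SR1 quasi-Newton Hessian approximation inside its trust-region solver. The Rosenbrock test function here is an assumed stand-in for a nonconvex ML loss:

```python
import numpy as np
from scipy.optimize import SR1, minimize, rosen, rosen_der

# Nonconvex test problem (Rosenbrock) standing in for an ML loss.
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# Quasi-Newton trust-region solve: SR1 builds an approximate Hessian
# from gradient differences, and the trust-region mechanism adapts the
# step length automatically -- no learning-rate schedule to hand-tune.
res = minimize(rosen, x0, method="trust-constr", jac=rosen_der, hess=SR1())

print(res.x)  # minimizer of the Rosenbrock function is the all-ones vector
```

Note the contrast with stochastic gradient descent: the only inputs are the function, its gradient, and a starting point; step sizes are chosen internally by the trust-region radius update rather than by a user-tuned schedule.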

Note: This meeting will be held via Zoom. This semester we anticipate that some talks will be in person, but most will be via Zoom.
Description

CAM/DoMSS Seminar 
Monday, October 3
1:30 pm MST/AZ 
Virtual via Zoom
https://asu.zoom.us/j/83816961285

Speaker

Roummel Marcia
Professor of Applied Mathematics
School of Natural Sciences
University of California, Merced

Location
Virtual via Zoom