Image recovery for linear inverse problems from multiple measurements: from point estimates to uncertainty quantification 

Abstract

Numerical algorithms for image recovery from linear inverse problems that are designed to recover point estimates typically involve solving a convex optimization problem whose regularization term promotes some prior belief about the underlying image. Compressive sensing (CS) provides a well-known example, in which the regularization term encourages sparsity in a domain where the image is presumed to be sparse (e.g., edges or gradients). Although the CS approach has been used successfully in a broad range of applications, it can be difficult to choose appropriate regularization parameters and an appropriate regularization operator, and the results are not always robust to increased noise or additional undersampling.
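The generic CS formulation alluded to above can be written as follows (the notation is an illustrative assumption, not taken from the talk):

```latex
\hat{x} \;=\; \operatorname*{arg\,min}_{x} \;\frac{1}{2}\,\|Ax - b\|_2^2 \;+\; \lambda\,\|Lx\|_1 ,
```

where $A$ is the forward (measurement) operator, $b$ the noisy data, $L$ a sparsifying transform (e.g., a discrete gradient operator for total variation), and $\lambda > 0$ the regularization parameter whose tuning the abstract notes can be difficult.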

The Bayesian approach casts the inverse problem in terms of random variables and formulates the recovery as the posterior distribution of the unknown, from which it is possible to sample. The main (hypothetical) advantages of the Bayesian approach are that (1) the hierarchical structure of the prior (as compared to the regularization parameter in CS methods) reduces the amount of hand-tuning required, and (2) sampling from the posterior distribution allows for uncertainty quantification, which is especially important in applications where it is crucial to know how reliable the recovery is. However, as with CS methods, it is important to choose a good prior, which is not always easy to do, and the “best” priors sometimes lead to various computational complexities.
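As a minimal sketch of the posterior-sampling idea, consider a toy linear Gaussian model, for which the posterior is available in closed form and sampling directly yields pointwise uncertainty estimates. All names, sizes, and the choice of Gaussian prior below are illustrative assumptions, not the speaker's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: b = A x + noise.
n, m = 8, 12
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
noise_prec = 100.0   # noise precision (assumed known)
prior_prec = 1.0     # precision of a zero-mean Gaussian prior on x
b = A @ x_true + rng.normal(scale=noise_prec ** -0.5, size=m)

# With a Gaussian likelihood and Gaussian prior the posterior is Gaussian:
#   Sigma = (noise_prec * A^T A + prior_prec * I)^{-1}
#   mu    = noise_prec * Sigma * A^T b
Sigma = np.linalg.inv(noise_prec * A.T @ A + prior_prec * np.eye(n))
mu = noise_prec * Sigma @ A.T @ b

# Sampling from the posterior enables uncertainty quantification:
# pointwise standard deviations indicate how reliable the recovery is.
L_chol = np.linalg.cholesky(Sigma)
samples = mu + rng.standard_normal((5000, n)) @ L_chol.T
posterior_std = samples.std(axis=0)
```

For hierarchical or non-Gaussian priors of the kind the abstract hints at, no such closed form exists and one resorts to MCMC-type samplers, which is where the computational complexities arise.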

In some applications we are given multiple measurement vectors (MMV) of noisy observable (indirect) data. This talk discusses how both the CS and Bayesian approaches can exploit this redundant information to improve the accuracy of the approximation, limit the amount of hand-tuning of parameters, and reduce the uncertainty of the sampled posterior mean, all while remaining mindful of computational complexity.
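In the same toy Gaussian setting, one can see how redundant measurements reduce posterior uncertainty: stacking J independent observations multiplies the likelihood precision by J, shrinking the posterior covariance. This is only a sketch of the general principle (the names and model below are assumptions), not the methods presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# MMV setting: J independent noisy measurements of the same unknown,
#   b_j = A x + e_j.
n, m, J = 8, 12, 25
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
noise_prec, prior_prec = 100.0, 1.0
B = A @ x_true + rng.normal(scale=noise_prec ** -0.5, size=(J, m))

# J independent measurements multiply the likelihood precision by J:
#   Sigma_J = (J * noise_prec * A^T A + prior_prec * I)^{-1}
def posterior_cov(num_meas):
    return np.linalg.inv(num_meas * noise_prec * A.T @ A
                         + prior_prec * np.eye(n))

Sigma_J = posterior_cov(J)
mu_J = J * noise_prec * Sigma_J @ A.T @ B.mean(axis=0)

std_1 = np.sqrt(np.diag(posterior_cov(1)))   # uncertainty, 1 measurement
std_J = np.sqrt(np.diag(Sigma_J))            # uncertainty, J measurements
# Posterior uncertainty drops roughly like 1/sqrt(J).
```

The componentwise comparison of `std_J` against `std_1` illustrates the reduced uncertainty of the posterior mean that the abstract attributes to exploiting MMV redundancy.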

This work involves many collaborators: Jan Glaubitz, Jonathan Lindbloom, Theresa Scarnati, Guohui Song, Yao Xiao, and Jack Zhang. 

Description

CAM/DoMSS Seminar
Monday, November 14

1:30 pm

In person:  WXLR A307

Speaker

Anne Gelb
John G. Kemeny Parents Professor
Department of Mathematics
Dartmouth College
