A General Framework for Bounding Approximate Dynamic Programming Schemes

Abstract

For years, there has been interest in approximation methods for solving dynamic programming problems, because of the inherent complexity of computing optimal solutions characterized by Bellman's principle of optimality. A wide range of approximate dynamic programming (ADP) methods now exists, including myopic schemes, roll-out schemes, and reinforcement learning schemes. It is of great interest to guarantee that the performance of an ADP scheme be at least some known fraction, say β, of optimal. In this talk, we introduce a general approach to bounding the performance of ADP methods, in this sense, in the stochastic setting. The approach is based on new results for bounding greedy solutions in string optimization problems, where one must choose a string (ordered set) of actions to maximize an objective function. This bounding technique is inspired by submodularity theory, but submodularity is not required for establishing bounds. Instead, the bounding is based on quantifying certain notions of curvature of string functions; the smaller the curvatures, the better the bound. The key insight is that any ADP scheme is a greedy scheme for some surrogate string objective function whose optimal solution and value coincide with those of the original optimal control problem. The ADP scheme is then amenable to the bounding technique above, and the curvatures of the surrogate objective determine the value β of the bound. The surrogate objective and its curvatures depend on the specific ADP scheme.
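To give a concrete flavor of such guarantees, below is a minimal Python sketch of greedy string optimization paired with a curvature-style bound of the classic form β = (1 − e^{−σ})/σ from the submodularity literature. The action set, the toy objective, and the curvature value σ used here are illustrative assumptions, not quantities derived in the talk.

import math

def greedy_string(actions, f, horizon):
    """Build a string (ordered sequence, repeats allowed) one greedy step at a time."""
    s = []
    for _ in range(horizon):
        s.append(max(actions, key=lambda a: f(s + [a])))
    return s

def curvature_bound(sigma):
    """Classic curvature-based guarantee beta = (1 - e^{-sigma}) / sigma;
    it approaches 1 as sigma -> 0 and equals 1 - 1/e at sigma = 1."""
    return 1.0 if sigma == 0 else (1.0 - math.exp(-sigma)) / sigma

# Toy objective: probability that at least one chosen action succeeds,
# assuming independent success probabilities (monotone, diminishing returns).
probs = {"a1": 0.5, "a2": 0.3, "a3": 0.2}  # hypothetical action success rates
f = lambda s: 1.0 - math.prod(1.0 - probs[a] for a in s)

s = greedy_string(list(probs), f, horizon=2)
beta = curvature_bound(0.6)  # sigma = 0.6 is an assumed curvature, for illustration
print(f"greedy string: {s}, value: {f(s):.3f}, guaranteed fraction beta: {beta:.3f}")

Running this prints the greedy string and the fraction β of the optimal value that greedy is guaranteed to achieve under the assumed curvature; in the framework described above, the same machinery would be applied to the surrogate objective induced by a given ADP scheme.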

This is joint work with Edwin K. P. Chong and Yajing Liu from Colorado State University.

Description

CAM/DoMSS Seminar
Monday, November 21, 1:30 pm
Virtual via Zoom: https://asu.zoom.us/j/83816961285

Speaker

Ali Pezeshki
Professor 
Colorado State University

Location
Virtual via Zoom