We describe research issues surrounding the study of a general class of control strategies called state-dependent action optimization (SDAO), covering both theoretical and applied questions. On the theoretical side, we introduce a framework based on optimal control and show that, under quite general conditions, there exists an optimal solution (policy or controller) that is also an SDAO scheme. This result, not usually stated this way and more commonly known as Bellman's principle of optimality, makes SDAO schemes of interest in a wide range of applications; it underlies, for example, self-driving vehicles and AlphaGo, the Go-playing system that defeated the world's top human players. We then show how results from submodular optimization can be used to bound the performance of SDAO schemes relative to the optimal. Finally, we describe several applications in which such schemes have been deployed and outline some ongoing work.

Host: Anatoly Ziotnik
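To make the Bellman-principle claim concrete, here is a minimal value-iteration sketch on a toy two-state MDP. All of the numbers (transition matrices, rewards, discount factor) are illustrative assumptions, not taken from the talk; the point is that the optimal policy recovered at the end depends only on the current state, i.e. it is an SDAO scheme.

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9

# Toy model (illustrative): P[a][s, s'] = transition probability under
# action a; R[s, a] = immediate reward for taking a in state s.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Iterate the Bellman optimality operator to (numerical) convergence.
V = np.zeros(n_states)
for _ in range(500):
    # Q[s, a] = R[s, a] + gamma * sum_t P[a][s, t] * V[t]
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V = Q.max(axis=1)

# The optimal policy is a function of the state alone -- an SDAO scheme.
policy = Q.argmax(axis=1)
```

Running this, `policy` maps each state to a single best action, with no dependence on time or history, which is exactly the structural property the abstract attributes to Bellman's principle.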
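The submodularity-based performance bounds mentioned above are in the spirit of the classical Nemhauser-Wolsey-Fisher result: for maximizing a monotone submodular function under a cardinality constraint, the greedy scheme achieves at least a (1 - 1/e) fraction of the optimum. The sketch below is a generic illustration of that guarantee on a toy coverage function (the sets and the constraint are invented for the example), not the specific bounds developed in the talk.

```python
from itertools import combinations
from math import e

# Toy "actions" and the elements each one covers (illustrative data).
universe_sets = {
    'a': {1, 2, 3},
    'b': {3, 4},
    'c': {4, 5, 6, 7},
    'd': {1, 7},
}

def coverage(S):
    # Coverage is a canonical monotone submodular function:
    # the number of distinct elements covered by the chosen actions.
    return len(set().union(*(universe_sets[x] for x in S))) if S else 0

def greedy(k):
    # Repeatedly add the action with the largest marginal gain.
    S = []
    for _ in range(k):
        best = max((x for x in universe_sets if x not in S),
                   key=lambda x: coverage(S + [x]) - coverage(S))
        S.append(best)
    return S

k = 2
S = greedy(k)
# Brute-force optimum for comparison (feasible only on toy instances).
opt = max(coverage(list(c)) for c in combinations(universe_sets, k))
assert coverage(S) >= (1 - 1 / e) * opt  # the ~0.632 greedy guarantee
```

Bounds of this flavor are what allow a myopic, state-dependent scheme to be certified against the (generally intractable) optimal policy.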