This talk discusses analytic and learning-inspired state estimation and control techniques for Unmanned Aircraft Systems (UAS) and Multi-Agent Systems (MAS) equipped with onboard visual and inertial sensors. In real-world conditions, sensor measurements exhibit Gaussian noise, and they often fail to produce data or contain spurious information, i.e., outliers or impulsive disturbances. This creates a need for advanced state estimation and control techniques that ensure the effective real-time operation of robotic agents.

The first part of the talk introduces a model-based switching control strategy that enables an original UAS prototype to perform vision-based trajectory-tracking missions even when the visual feedback is unavailable for some intervals of time.

Commercially available ready-to-fly UASs, on the other hand, are gaining popularity for two main reasons: low cost and high computational power. Unfortunately, the manufacturers of such systems rarely disclose the dynamic models and control strategies of their products, a condition that makes them unfit for our feedback control-related research activities. The second part of the talk introduces a system identification technique combined with a robust controller, which allows us to use commercial UASs in our experiments.

The third part of the talk extends these techniques to the domain of MAS. A real-time cooperative vision-based autonomous target-tracking mission is discussed, along with a robust state estimation technique.

The last part of the talk discusses how state estimation and control can benefit from learning-inspired solutions, a current trend across diverse research domains. A performance-guaranteed control strategy inspired by a computational model of the emotional learning process observed in the mammalian limbic system is introduced, along with its application in autonomous systems and MAS.

Host: Anatoly Zlotnik
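To give a flavor of the first part, the sketch below shows one generic way to handle intermittent visual feedback: a linear Kalman filter that keeps predicting through vision dropouts, paired with a controller that switches to a conservative hold mode once the estimate becomes stale. This is a minimal illustration under an assumed double-integrator model, not the speaker's actual design; all matrices, gains, and thresholds are illustrative assumptions.

```python
# Minimal sketch: state estimation and switching control under intermittent
# visual feedback. Assumed 1-D double-integrator dynamics; all values here
# are illustrative, not the prototype's actual model or gains.
import numpy as np

dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])            # vision measures position only
Q = 1e-4 * np.eye(2)                  # process noise covariance (assumed)
R = np.array([[1e-2]])                # measurement noise covariance (assumed)

def kf_step(x, P, u, z):
    """One predict/update cycle; z is None when the vision feed drops out."""
    # Predict through the dynamics regardless of measurement availability.
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    if z is not None:                 # update only when vision data arrives
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, P

K_fb = np.array([[4.0, 2.8]])         # illustrative state-feedback gain
MAX_DROPOUT = 25                      # switch modes after ~0.5 s without vision

def control(x_hat, x_ref, dropout_steps):
    """Switching law: track the reference while the estimate is fresh,
    otherwise command zero input (hover/hold) until vision returns."""
    if dropout_steps < MAX_DROPOUT:
        return -K_fb @ (x_hat - x_ref)
    return np.zeros((1, 1))
```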
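For the second part, black-box identification of a closed-source ready-to-fly UAS could, for instance, take the form of fitting a discrete-time ARX model to logged input-output flight data by least squares. The sketch below assumes that structure; the model order and data handling are placeholders, and the identification technique presented in the talk may differ.

```python
# Minimal sketch: least-squares ARX identification from logged flight data.
# y and u are 1-D numpy arrays of output and input samples; the model order
# (na, nb) is an illustrative assumption.
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Estimate y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
                       + b1*u[k-1] + ... + b_nb*u[k-nb] via least squares."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        # Regressor: past outputs (negated) followed by past inputs.
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]      # AR coefficients, input coefficients
```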
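The robust state estimation mentioned in the third part must cope with the outliers and impulsive disturbances described above. One standard mechanism for this, shown below purely as an illustration (the abstract does not specify the speaker's estimator), is innovation gating: a chi-square test on the normalized innovation that rejects measurements too inconsistent with the prediction.

```python
# Minimal sketch: outlier-robust Kalman update via chi-square innovation
# gating. The gate value is the 99% chi-square quantile for a 1-D
# measurement; H and R follow the notation of the earlier sketch.
import numpy as np

GATE = 6.63  # chi2.ppf(0.99, df=1)

def robust_update(x, P, z, H, R):
    """Apply the Kalman update only if the normalized innovation squared
    (NIS) passes the gate; otherwise treat z as a spurious measurement."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    nis = float(y.T @ np.linalg.inv(S) @ y)
    if nis > GATE:                     # outlier / impulsive disturbance
        return x, P                    # reject: keep the prediction
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```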
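Finally, the learning-inspired strategy in the last part builds on a computational model of emotional learning in the limbic system. A widely cited form of such a model (due to Morén and Balkenius, and underlying BELBIC-style controllers) pairs a fast, monotone amygdala learner with an inhibitory orbitofrontal term. The sketch below implements that generic model only; the gains and signals are illustrative, and the speaker's performance-guaranteed variant is not reproduced here.

```python
# Minimal sketch: a brain-emotional-learning (BELBIC-style) control element.
# s is a sensory input vector; reward is an emotional cue, e.g., a function
# of tracking error. Learning rates alpha and beta are illustrative.
import numpy as np

class EmotionalLearningController:
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.V = np.zeros(n_inputs)    # amygdala weights
        self.W = np.zeros(n_inputs)    # orbitofrontal (inhibitory) weights
        self.alpha, self.beta = alpha, beta

    def step(self, s, reward):
        """One learning/control cycle; returns the model output MO = A - O."""
        A = self.V @ s                 # amygdala output
        O = self.W @ s                 # orbitofrontal output
        # Amygdala learning is monotone: weights never unlearn (max(0, .)).
        self.V += self.alpha * s * max(0.0, reward - A)
        # Orbitofrontal weights track the mismatch and can inhibit A.
        self.W += self.beta * s * (A - O - reward)
        return A - O
```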