The shift in the focus of the computing industry toward learning applications, and the inevitable end of Moore's law ("death by CMOS scaling and heat"), have intensified the search for unconventional substrates on which to build computing systems that can "learn like the human brain." While we have made tremendous progress in realizing intelligence in narrow tasks using large datasets and compute power, artificial general intelligence (AGI) that can rival the human brain in energy efficiency and in performance across many domains remains the holy grail of computing. To move forward, we need a solid theoretical foundation for computing and intelligence, and for what each entails. Addressing these questions will help us identify the optimal devices, architectures, and design techniques for efficiently building the intelligent systems of the future. This talk will review the fundamental ideas and philosophical assumptions that currently underlie computing. The crucial distinctions between our intelligence and that achieved through current computational approaches are a result of these assumptions, and they need to be explored. A number of these issues raise the foundational question: is computing the optimal path to artificial intelligence? Building on these ideas, I will discuss recent results from non-equilibrium thermodynamics that derive certain inference rules by optimizing thermodynamic efficiency, and I will propose an alternative theoretical framework, Thermodynamic Intelligence, that treats intelligence as a physical process and describes it in terms of homeostasis, entropy flow, and energy dissipation. I will also discuss the path forward: the introduction of a new engineering paradigm to realize thermodynamic intelligence, changes to existing design philosophies, and novel substrates that might serve as good testbeds.

Host: Stephan Eidenbenz