The information bottleneck (IB) has been proposed as a principled way to compress a random variable while preserving only the information relevant for predicting another random variable. Recently, the IB has been proposed --- and challenged --- as a theoretical framework for understanding why and how deep learning architectures achieve good performance. I will cover: (1) an introduction to the ideas behind the IB; (2) methods for implementing information-theoretic compression in neural networks, along with some possible applications of such methods; (3) the current status of the IB theory of deep learning; and (4) recently discovered caveats that arise for the IB in machine learning scenarios.

NOTE: Future speaker nominations through the Information Science and Technology Institute (ISTI) are welcome and can be entered at: https://isti-seminar.lanl.gov/app/calendar.

Host: Juston Moore
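For reference, the standard IB objective mentioned in the abstract (due to Tishby, Pereira, and Bialek) can be written as follows; the notation here is the conventional one rather than anything specific to this talk:

\[
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
\]

where $X$ is the variable to be compressed, $Y$ is the variable to be predicted, $T$ is the compressed representation, $I(\cdot\,;\cdot)$ denotes mutual information, and $\beta \ge 0$ sets the trade-off between compression of $X$ and preservation of information about $Y$.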