Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been introduced in recent years. As a consequence of this complexity, however, machine learning essentially acts as a black box as far as users are concerned, making it incredibly difficult to understand, predict, or detect bugs in its behavior. For example, determining when a machine learning model is "good enough" is challenging, since held-out accuracy metrics significantly overestimate real-world performance.

In this talk, I will describe our research on approaches that explain the predictions of any classifier in an interpretable and faithful manner, as well as automated techniques to detect bugs that can occur naturally when a model is deployed. In particular, these methods describe the relationship between the components of an input instance and the classifier's prediction. I will cover several ways of summarizing this relationship: as linear weights, as precise rules, and as counter-examples. I will present experiments that contrast these summaries and evaluate their utility for understanding and debugging black-box machine learning algorithms on tabular, image, text, and graph-completion applications.

This seminar is part of a series on Artificial Intelligence for Computational Science.

Host: Aric Hagberg
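To make the first kind of summary (linear weights) concrete, the sketch below fits a locally weighted linear surrogate around a single prediction of a black-box classifier, in the spirit of LIME-style explanations. The dataset, model, perturbation scheme, kernel width, and function names are illustrative assumptions for this sketch, not the speaker's implementation.

```python
# A minimal LIME-style sketch: summarize one prediction of a black-box
# classifier as locally weighted linear weights. Dataset, model, and
# kernel settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Train an arbitrary "black-box" model on a tabular dataset.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(x, predict_proba, n_samples=2000, kernel_width=None):
    """Fit a locally weighted linear surrogate around instance x.

    Returns per-feature linear weights that approximate the black box
    in the neighborhood of x.
    """
    d = x.shape[0]
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(d)  # heuristic default

    # Perturb the instance with Gaussian noise scaled by feature std.
    scale = X.std(axis=0)
    Z = x + rng.normal(0.0, 1.0, size=(n_samples, d)) * scale

    # Query the black box on the perturbed samples.
    probs = predict_proba(Z)[:, 1]

    # Weight samples by proximity to x (exponential kernel on distance).
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # Fit a weighted linear model; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_

# Explain a single instance and print the most influential features.
x0 = X[0]
coefs = explain_instance(x0, black_box.predict_proba)
top = np.argsort(np.abs(coefs))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {coefs[i]:+.4f}")
```

The surrogate's coefficients indicate which features push the black box's predicted probability up or down in the neighborhood of the chosen instance; the precise-rule and counter-example summaries mentioned in the abstract answer the same question in different forms.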