Title: Mixture of Linear Models Co-supervised by Deep Neural Networks
Abstract: Deep neural networks (DNNs) often achieve state-of-the-art prediction accuracy in many applications. In some areas, however, their use meets resistance because a DNN model is extremely hard to explain. A linear model, e.g., logistic regression, is by contrast usually considered highly interpretable, but its accuracy tends to be low. Our goal is to develop mechanisms for balancing interpretability and accuracy so as to bridge the gap between explainable linear models and black-box models. Specifically, we propose a new mixture of linear models (MLM) for regression or classification, whose estimation is guided by a pre-trained DNN acting as a proxy for the optimal prediction function. We have developed visualization methods and quantitative approaches for interpretation. Experiments show that the new method can trade off interpretability and accuracy; in some examples, MLM achieves accuracy comparable to a DNN while significantly enhancing interpretability. I will also briefly discuss our more recent work on an EM-type algorithm for estimating MLM and its potential to improve logistic regression on small datasets.
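To make the idea concrete, here is a minimal toy sketch of the general scheme the abstract describes: a pre-trained predictor serves as co-supervision, the input space is partitioned into regions, and a separate linear model is fit to the proxy's outputs in each region. Everything here is an illustrative assumption, not the paper's actual algorithm: `dnn_proxy` is a hypothetical stand-in for a pre-trained DNN, the partition is a simple 1-D quantile split rather than the estimation procedure the authors propose, and the gating is hard rather than a learned mixture.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained DNN: any smooth nonlinear predictor.
def dnn_proxy(x):
    return np.sin(2.0 * x) + 0.5 * x

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3.0, 3.0, 400))
y_proxy = dnn_proxy(x)  # "co-supervision" targets taken from the proxy

K = 6  # number of local linear experts (illustrative choice)
edges = np.quantile(x, np.linspace(0.0, 1.0, K + 1))

coefs = []
for k in range(K):
    m = (x >= edges[k]) & (x <= edges[k + 1])
    # Least-squares line fit to the proxy's outputs on this region
    A = np.stack([x[m], np.ones(m.sum())], axis=1)
    w, *_ = np.linalg.lstsq(A, y_proxy[m], rcond=None)
    coefs.append(w)
coefs = np.array(coefs)  # shape (K, 2): slope and intercept per region

def mlm_predict(xq):
    # Hard gating: route each query point to its region's linear model
    k = np.clip(np.searchsorted(edges, xq, side="right") - 1, 0, K - 1)
    w = coefs[k]
    return w[..., 0] * xq + w[..., 1]

xq = np.linspace(-3.0, 3.0, 200)
err = np.max(np.abs(mlm_predict(xq) - dnn_proxy(xq)))
print(f"max |MLM - DNN proxy| on grid: {err:.3f}")
```

Each local model is a plain line, so every prediction can be read off as slope and intercept in its region, which is the interpretability half of the trade-off; accuracy relative to the proxy improves as K grows.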