I am a PhD student working under the supervision of Lili Jiang. My research focuses on explaining machine learning predictions in a human-understandable way. Machine learning algorithms are at the heart of many intelligent decision support systems, from finance to medical diagnosis and manufacturing.
The problem is that if users do not understand the reasoning behind machine-made decisions, they will not trust these systems. Explanations therefore help us evaluate how much we can trust a model and reveal its weaknesses so that it can be improved.