Whereas explanations of natural intelligence in cognitive science center on cognitive models, explanations of artificial intelligence in explainable AI do not. Measured against the standards of the former, the latter appear explanatorily deficient. Indeed, if explainable AI is to achieve its goal of promoting values such as fairness, robustness, and high performance, cognitive models are needed. In this talk, I will motivate the use of cognitive models in explainable AI and discuss the prospects and limitations of some very recent attempts to develop such models.
Carlos Zednik is an Assistant Professor of Philosophy of AI at Eindhoven University of Technology. His work focuses on the explanation of natural and artificial intelligence, using methods from the philosophy of science to better understand the interrelations between AI, neuroscience, and psychology. He is involved in various AI standardization efforts to promote explainable and responsible AI, as well as in the use of AI in higher education.