Professor in Data Science, with emphasis on Data Analysis and Machine Learning. Head of the Explainable AI (XAI) team. Wallenberg AI, Autonomous Systems and Software Program (WASP) professor.
My core research at Umeå University focuses on Explainable Artificial Intelligence (XAI), and notably on so-called "outcome explanation", i.e. explaining and/or justifying results, actions or recommendations made by any kind of AI system, including neural networks (deep or otherwise). A core technology is the Contextual Importance and Utility (CIU) method that I developed during my PhD work (1991-1996).
During 2000-2018, my core research domains were intelligent products, the Internet of Things, Digital Twins (also called Virtual Counterparts or Product Agents) and Systems of Systems. My research on XAI maintains a strong connection with those domains, notably for enabling the use of AI in everyday products and life. Explainability is crucial for ensuring that AI remains "humane", meaning that AI systems should be able to communicate the reasons for their actions and intentions in ways that are understandable to different end-users and appropriate to the real-life situation at hand.