Research group
Artificial intelligence is increasingly used in society, business and industry. Explainable AI is about making the decisions or actions taken by an AI system understandable to the people using it. The European Union, as well as many individual countries, is now moving towards requiring such functionality from AI systems.
An increasing number of functions in our society are now run by AI-based systems, which has raised concerns about how reliable these systems really are. A basic prerequisite for trusting AI systems is the ability to gain insight into their underlying reasoning and to get answers to questions such as "why was this decision made?", "why not?", "what would happen if?" and "why A and not B?"
End users in focus
We believe that current research in explainable AI largely ignores end users and instead focuses on the creators of AI systems. In our research, we aim to develop methods that allow AI systems to justify and explain their decisions and actions in a similar way to how humans explain things to each other. This means enabling a dialogue in which the AI system takes into account people's background knowledge, their capacity to handle different amounts of information at the same time, and their reactions during the dialogue.
Depending on how AI systems have been programmed, or on the data used for their learning, they may contain errors, biases or simply 'opinions' - just like humans. Explainable AI is therefore necessary to truly understand an AI system's reasoning and to judge whether we agree with it or not.
Unique research group
The eXplainable Artificial Intelligence (XAI) team at Umeå University was founded in 2017 by Professor Kary Främling when he took up his WASP professorship in Data Science, specialising in data analysis and machine learning. The XAI focus area is a natural one, as Främling has been a very active and internationally recognised researcher in artificial intelligence since the 1980s, focusing on topics such as neural network learning, multi-criteria decision support and reinforcement learning.
An important part of the team's research is based on the Contextual Importance and Utility (CIU) methodology, which makes it possible to explain and justify the outputs of AI systems in a given situation.
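The core CIU idea can be sketched as follows: for a given instance, Contextual Importance (CI) measures how much of the model's total output range a feature can span when varied in the current context, and Contextual Utility (CU) measures where the current output lies within that context-specific range. The sketch below is a minimal, illustrative implementation under that reading; the function names and sampling approach are assumptions for illustration and do not reproduce the actual CIU library's API.

```python
import numpy as np

def ciu_for_feature(model, instance, feature_idx, feature_range,
                    abs_min, abs_max, n_samples=100):
    """Estimate CI and CU of one feature by sampling its value range
    while keeping the other feature values of `instance` fixed.
    Illustrative sketch only, not the official CIU implementation."""
    samples = np.linspace(feature_range[0], feature_range[1], n_samples)
    outputs = []
    for v in samples:
        x = instance.copy()
        x[feature_idx] = v
        outputs.append(model(x))
    c_min, c_max = min(outputs), max(outputs)
    out = model(instance)
    # Contextual Importance: share of the model's absolute output range
    # that this feature can cover in the current context.
    ci = (c_max - c_min) / (abs_max - abs_min)
    # Contextual Utility: position of the current output within the
    # context-specific range (1 = most favourable feature value).
    cu = (out - c_min) / (c_max - c_min) if c_max > c_min else 0.5
    return ci, cu

# Toy example: a linear scoring model with two inputs in [0, 1].
model = lambda x: 0.7 * x[0] + 0.3 * x[1]
instance = np.array([0.8, 0.2])
ci, cu = ciu_for_feature(model, instance, 0, (0.0, 1.0), 0.0, 1.0)
```

For this toy model the first feature yields CI = 0.7 (its weight relative to the total output range) and CU = 0.8 (the instance's value 0.8 sits high in the favourable direction), which matches the intuition that CIU explains both how important a feature is and how good its current value is.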
Kary Främling is the head of the eXplainable Artificial Intelligence (XAI) team at Umeå University in Sweden. Kary has been an active researcher in Artificial Intelligence since the 1980s, focusing on topics such as neural network learning, multiple criteria decision support and reinforcement learning. His PhD thesis from 1996, entitled "Learning and Explaining Preferences with Neural Networks for Multiple Criteria Decision Making" (in French: "Modélisation et apprentissage des préférences par réseaux de neurones pour l'aide à la décision multicritère"), can be considered one of the first research initiatives to explicitly address the topic of Explainable Artificial Intelligence, including aspects such as confidence, robustness and reliability of the reasoning. He is currently Professor in Data Science at Umeå University, Sweden.
Kary is also the founder and head of the Adaptive Systems of Intelligent Agents team (formerly known as the "Distributed Information Architectures for collaborative LOGistics" team) at Aalto University, Finland. The DIALOG software developed by the team was presumably the first Internet of Things (IoT) implementation worldwide, in 2002. DIALOG has been described in research articles since 2002 (see e.g. FRÄMLING, Kary, HOLMSTRÖM, Jan, ALA-RISKU, Timo, KÄRKKÄINEN, Mikko. Product agents for handling information about physical objects. Report of Laboratory of Information Processing Science series B, TKO-B 153/03, Helsinki University of Technology, 2003. 20 p.).
Kary Främling has 147 published research papers, 2608 citations and an h-index of 22 (source: Google Scholar, 22/10/2018). He is a member of the Editorial Board of the Computers in Industry journal, has reviewed for 29 journals, and has been a member of the program or scientific committees of 32 conferences and workshops.
He was the Guest Editor of the Special Issue on Intelligent Products of the Computers in Industry journal in 2009, together with Prof. Duncan McFarlane from Cambridge University.
Kary has organized 7 special sessions and workshops:
Workshop on "Reinforcement learning in non-stationary environments" of ECML 2005;
Special Session on Closed-loop PLM & Intelligent products at the International Conference on Industrial Engineering and Systems Management, 2011;
Special Session on Intelligent Products at 14th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2012);
Special Session on Intelligent Products in Manufacturing & Services Systems, 2nd International Conference on Communications, Computing and Control Applications (CCCA'2012);
Special Session on Intelligent Products: concepts, deployments and related aspects at 11th IFAC Workshop on Intelligent Manufacturing Systems (IMS'13);
Special Session on Smart City Interoperability and Cross-Platform Implementation at APMS'2018;
Special Session on Technologies and Infrastructures for Smart Grids, Buildings, and Cities at IEEE International Conference on Industrial Informatics (INDIN'19).