The fundamental challenge lies in how intelligent autonomous agents can collaborate with humans in decision-making tasks, and in how priorities can be set among potentially conflicting goals, needs, motivations, preferences and possible actions, for example when healthcare professionals diagnose a patient or select a treatment. This is equally important when a person aims to change unhealthy behaviour, or needs to act in order to reduce risk in a work situation.
The aim of the research project is to develop socially intelligent software agents for human-agent collaboration. To provide socially intelligent systems that humans trust enough to collaborate with, algorithms will be developed for explaining automated learning and reasoning, the values of arguments, and decision outcomes.
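Reasoning about the values of arguments is commonly grounded in abstract argumentation. As an illustration only (the arguments and attack relation below are hypothetical, not taken from the project), a minimal sketch of Dung-style grounded semantics, which collects the arguments that can be defended starting from the empty set:

```python
# Minimal sketch of abstract argumentation: a set of arguments and
# an attack relation; the grounded extension is the least fixed
# point of the characteristic function F(S), where an argument is
# in F(S) if every one of its attackers is attacked by S.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((d, b) in attacks for d in s)
                      for b in attackers[a])}
        if nxt == s:          # fixed point reached
            return s
        s = nxt

# Hypothetical example: a attacks b, and b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # → ['a', 'c']
```

Here `a` is unattacked and therefore accepted, which defeats `b` and thereby reinstates `c`; explanation algorithms of the kind the project targets would make such chains of defence explicit to the user.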
Artificial intelligence-based methods will also be developed for user modelling, user adaptation, and for enabling the system to act in a socially acceptable way tailored to the situation, partly by formalising theories of human behaviour. Methods that handle uncertain and incomplete information, as well as different types of values, norms, and utilities in such situations, will be explored.
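One standard way to combine uncertain information with utilities is maximum-expected-utility decision making. A minimal sketch, with purely illustrative probabilities and utilities (the treatment names and numbers are assumptions, not project data):

```python
# Minimal sketch: choosing among actions by maximum expected utility
# under uncertain outcomes. All numbers below are illustrative.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action's outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical treatment selection: each action maps to a list of
# (probability, utility) pairs over its possible outcomes.
actions = {
    "treatment_A": [(0.7, 10), (0.3, -5)],  # likely helps, some risk
    "treatment_B": [(0.9, 4), (0.1, -1)],   # safer, smaller benefit
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # → treatment_A 5.5
```

In the project's setting, the interesting questions begin where this sketch ends: the probabilities may be unknown or incomplete, and the utilities may encode conflicting values and norms rather than a single scale.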