"False"
Skip to content
printicon
Main menu hidden.

Autonomous Systems' Ability to Understand Their Own Limitations

Research project This project is part of a larger research initiative at Umeå University comprising eight postdoc projects on autonomous systems for the future of industry and society.

Autonomous systems, such as self-driving cars or robots in households and healthcare, are on the verge of becoming integral parts of our lives and society. They will perform tasks in everyday environments that may involve ambiguous or uncertain information, thus challenging their capabilities. One way of resolving these issues is to get help from a human. To do this successfully, the systems first need to understand when such support would benefit them, and then need to be able to request that support in useful ways. Both aspects will be explored in this project for some example tasks.

Head of project

Kai-Florian Richter
Associate professor
E-mail

Project overview

Project period:

2018-02-01 to 2020-01-31

Funding

The Kempe Foundations, 2018-2020: SEK 600,000

Participating departments and units at Umeå University

Department of Computing Science, Faculty of Science and Technology

Research area

Computing science

Project description

Knowing when full autonomy will fail and collaboration with others is needed to successfully execute a task is a fundamental ability for humans, one that ensures efficiency, safety, and even survival. This ability is equally important for autonomous systems, in particular artificial cognitive agents that operate in our public or private spaces, where they will often face ill-defined or ambiguous human requests.

Without this ability, these systems may get lost in their operations. This may be taken quite literally for systems moving freely in our environments, such as autonomous robots or self-driving vehicles, but it is equally problematic for more abstract systems, which may get lost when faced with a multitude of possible actions or decisions.

In order for autonomous systems to know that they need support, usually from a human, they need some way of realizing that their current plan will not result in the intended outcomes, or that they are confused and unable to make reasonable decisions in their current situation. They then need to communicate this insight, along with potential reasons for failure or confusion, to the human operator in adequate ways.

This project will explore mechanisms for the cognitive agents mentioned above to infer that they need human support in their task execution. Its focus will be on principles for interaction between system and human in such situations of need. Potential scenarios include spatial navigation, operations that manipulate equipped indoor spaces (e.g., in the context of eldercare), and more general planning tasks.
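As a minimal sketch of the decision described above, an agent might monitor its own estimate of plan success and of situational ambiguity, and request human support with a stated reason when either crosses a threshold. All names, signals, and thresholds here are illustrative assumptions, not part of the project itself.

```python
from dataclasses import dataclass


@dataclass
class PlanAssessment:
    """Hypothetical self-assessment signals an agent might maintain."""
    expected_success: float  # agent's estimate that the current plan reaches its goal
    ambiguity: float         # how ambiguous the current request or situation appears


def needs_human_support(a: PlanAssessment,
                        success_threshold: float = 0.7,
                        ambiguity_threshold: float = 0.5) -> tuple[bool, str]:
    """Decide whether to ask for help, and give a human-readable reason.

    Thresholds are illustrative; a real agent would calibrate them
    per task and per environment.
    """
    if a.expected_success < success_threshold:
        return True, (f"Current plan unlikely to succeed "
                      f"(estimated {a.expected_success:.0%}); please advise.")
    if a.ambiguity > ambiguity_threshold:
        return True, "The request is ambiguous; please clarify the intended goal."
    return False, "Proceeding autonomously."


ask, reason = needs_human_support(PlanAssessment(expected_success=0.4, ambiguity=0.2))
# ask is True here: the low success estimate triggers a support request,
# and `reason` explains why, so the human can respond usefully.
```

The key point the sketch illustrates is that the agent does not merely signal failure; it communicates a reason, which is what makes the human's support request actionable.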
Latest update: 2018-12-13