Published: 2023-02-22

Intelligent robots don't always have to be more intelligent

NEWS We are heading towards increasingly autonomous systems – computers and robots aimed at mastering human skills, ideally more reliably than we do. But these systems don't always need more "intelligence" to solve a problem. "They just need to understand that they have a problem and that they need help," says Kai-Florian Richter, Associate Professor of Computing Science at Umeå University.

Text: Victoria Skeidsvoll

As household robots, self-driving vehicles and smart apps on our mobile phones are now being more widely introduced, they will soon become part of our everyday lives. However, their progress will depend on both how well the systems work and on the extent to which we humans trust and want to cooperate with them.
"Anyone who has had a conversation with Siri or interacted with a chatbot will have found that the technology is not always so 'smart'. It's often hard for us humans to understand what the system can and can't do, or what we can do with it," says Kai-Florian Richter, Associate Professor at the Department of Computing Science, Umeå University.

Resolve mismatches

In the new research project "Strategies for autonomous systems to resolve ambiguities in human-system interaction", funded with SEK 3 700 000 by the Swedish Research Council, Richter will study how robots and autonomous vehicles can better understand the environment we live in. He seeks to enable a system to resolve mismatches between how the system sees the world and how the user sees it. The goal is to develop a set of strategies for resolving errors and ambiguities in interaction in certain scenarios: for example, looking at a scene again under the assumption that the system missed something the first time (if the human talks about a cup, it should be there somewhere), or asking the user for help in a clarification dialog ("I see three cups, which one do you mean?").
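The two strategies mentioned above can be sketched as a small decision procedure. This is purely illustrative: the project's actual methods are not described in the article, and every name here (`resolve_reference`, the `rescan` callback) is a hypothetical placeholder.

```python
def resolve_reference(requested: str, detected: list[str], rescan) -> str:
    """Resolve a user's object reference against the system's detections.

    requested: the object the user talked about, e.g. "cup"
    detected:  labels of objects the system currently perceives
    rescan:    callback that re-perceives the scene and returns new labels
    """
    matches = [obj for obj in detected if obj == requested]
    if not matches:
        # Strategy 1: look at the scene again, assuming the system
        # missed something the first time around.
        detected = rescan()
        matches = [obj for obj in detected if obj == requested]
    if len(matches) == 1:
        return f"Found the {requested}."
    if len(matches) > 1:
        # Strategy 2: ask the user for help in a clarification dialog.
        return f"I see {len(matches)} {requested}s, which one do you mean?"
    return f"I cannot find a {requested}; can you point me to it?"
```

For instance, if the user asks for a cup that the first perception pass missed, the re-scan may recover it; if several cups are visible, the system falls back to asking a clarifying question instead of guessing.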

Common understanding

The project aims to contribute to the creation of a well-functioning, trustworthy, and user-friendly interaction with autonomous systems. "If robots are to interact with us in different situations, there must be some kind of 'common' understanding. Otherwise, communication becomes really difficult," Richter explains.

Among other things, it is important that the new technology is able to perceive the environment and reason about a given situation – in interaction with humans. This could involve getting the machine to 'map' its perception of the environment to concepts we would use, such as understanding 'left' or 'right'.
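Mapping a machine's metric perception onto human concepts such as 'left' or 'right' can be sketched in a few lines. This is a minimal, assumed example (the function name and the 15-degree threshold are my own, not from the project): it converts an object's offset from the viewer's heading into a qualitative direction term.

```python
import math

def qualitative_direction(dx: float, dy: float) -> str:
    """Map an object's offset from the viewer into a human direction term.

    Coordinates are viewer-centred: +y is straight ahead, +x is to the
    viewer's right. The 15-degree band for 'ahead' is an arbitrary choice.
    """
    angle = math.degrees(math.atan2(dx, dy))  # 0 degrees = straight ahead
    if -15 <= angle <= 15:
        return "ahead"
    return "right" if angle > 0 else "left"
```

An object two metres ahead and one metre to the right would come out as "right"; one directly in front as "ahead".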

"But it could also be more complicated and vague, such as 'over there' or 'take me to the beach'. Which beach? The system should be able to ask you that," says Kai-Florian Richter.

In many respects, past AI research and today's deep machine learning have assumed that the methods for processing and interpreting information need to be improved.
"Instead, it may be that systems don't need to become more 'intelligent' to solve a problem, they just need to understand that they have a problem and need help. It is a different kind of intelligence," says Kai-Florian.

About the researcher

Kai-Florian Richter is Associate Professor at the Department of Computing Science, performing interdisciplinary research at the intersection of artificial intelligence, human-computer interaction, spatial cognition, and geographic information science. Richter leads the Spatial Cognitive Engineering research group. He is also director of studies for doctoral education and deputy head of department.