Research group
We develop algorithms for spatial reasoning and spatial problem solving, as well as algorithms that learn spatial rules through interaction with the surrounding world. We apply these results to intelligent agents in human-robot and human-computer interaction scenarios, aiming both to improve the agents' common sense and problem-solving abilities and to promote spatial skills in humans, such as mental rotation, perspective taking and navigation.
Our research interests lie in reducing the sensory-semantic gap: the gap between the low-level information acquired by digital sensors (e.g. mobile robot sensors, home-automation sensors or smartphone sensors) and the high-level information needed for symbol grounding and for enhancing human-robot interaction (HRI) and human-computer interaction (HCI). To this end, we define spatial reference systems and spatial reasoning models. Recently, we have been exploring automatic Scene Graph Generation (SGG) with Vision-Language Models (VLMs) for symbol grounding and reasoning.
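As a rough illustration of what such a symbolic layer can look like, the sketch below models a scene graph as qualitative spatial triples over detected objects and closes it under inverse relations, a minimal example of the kind of structure an SGG pipeline might extract from a VLM's description of an image. All names, the toy relation set and the example scene are hypothetical, not the group's actual pipeline.

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """One scene-graph edge: (subject, spatial relation, object)."""
    subject: str
    relation: str
    obj: str

# Inverse pairs for a few qualitative spatial relations (illustrative set).
INVERSES = {"left_of": "right_of", "above": "below", "in_front_of": "behind"}
INVERSES.update({v: k for k, v in INVERSES.items()})

def close_under_inverses(graph: set[Triple]) -> set[Triple]:
    """Add the inverse edge for every invertible spatial relation."""
    closed = set(graph)
    for t in graph:
        inv = INVERSES.get(t.relation)
        if inv:
            closed.add(Triple(t.obj, inv, t.subject))
    return closed

# Toy scene a VLM might describe as "a mug to the left of a laptop, on a desk".
scene = {Triple("mug", "left_of", "laptop"), Triple("mug", "on", "desk")}
for t in sorted(close_under_inverses(scene), key=lambda t: t.subject):
    print(f"{t.subject} --{t.relation}--> {t.obj}")

Once the graph is in this symbolic form, spatial reasoning rules (such as the inverse closure above, or transitivity of directional relations) can be applied independently of the sensor or model that produced it.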
We are also interested in bridging the gap between automated reasoning and machine learning, that is, in developing new hybrid Artificial Intelligence (hybrid-AI) techniques that learn spatial reasoning rules through interaction while remaining able to explain those rules and, if needed, rectify them.
Our AI applications are human-centred or keep the human in the loop, mainly in educational scenarios. To this end, we use tools and methods from spatial cognition and other theories from cognitive science to develop systems that are intuitive and that guide users towards improving their spatial skills and creativity.