
All you want to know about Artificial Intelligence.

Welcome to a series of inspiring talks on artificial intelligence by Umeå University researchers and internationally acclaimed experts.

Participating in #frAIday is your opportunity to share your experience and knowledge about artificial intelligence, learn more about the field, discuss a wide range of perspectives on AI and meet new people.

Time: Fridays, 12:15 – 13:00
Place: Online via Zoom, due to new regional restrictions. The event is open to everyone who is interested in AI.

Program

11 December
Toward Human-Centric Trustworthy Systems

Juan Carlos Nieves, Associate Professor at the Department of Computing Science, Umeå University

Abstract

From the human-AI interaction point of view, the most highlighted achievements of Artificial Intelligence (AI) technology are those in which AI-based systems have beaten humans at activities traditionally reserved for humans, e.g. the games of chess and Go. However, there are far fewer success stories in which AI technology has helped people overcome everyday problems, e.g. improving their social skills or quitting smoking. In this talk, we call for a Human-Centric Trustworthy AI technology that aims to help people, not to beat them, in their daily activities.

18 December
Decision Making in Context 

Frank Dignum, Professor at the Department of Computing Science, Umeå University

Abstract 

In people's everyday reasoning and behavior, "context" plays an important role. When I am on holiday I might not look at the price of a drink at the beach, while at home, in the supermarket, I hunt for bargains. Or I might have very environmentally friendly habits at home, having solar panels installed and taking the bike whenever possible, yet travel a lot by plane for work.

So, apparently inconsistent behavior can often be explained by the context.

If we want to model human behavior and interactions, e.g. for social simulations, or if we want to interact robustly with people in different contexts, as in social robotics, we need to account for the context in which the interactions take place. Although there has been quite some research on the use of context in HCI, little fundamental work has been done and the concept of context remains quite elusive.

In this presentation I will discuss some of the important characteristics of contexts and how they can be modeled for our purposes in social simulation and social robotics.

15 January
Contesting Algorithmic Decision-Making 

Andrea Aler Tubella, Postdoctoral Fellow at the Department of Computing Science, Umeå University

Abstract

Contesting a decision that has consequences for individuals or society is a well-established democratic right. Although this right is also explicitly included in the GDPR in reference to automated decision-making, it seems to have received much less attention in the AI literature than, for example, the right to explanation.

Complementing the current attention to fairness, transparency, explainability and accountability for automated decision-making systems, this talk focuses on the right to contest decisions. We will discuss the type of assurances needed in a contesting process when algorithmic black boxes are involved, opening new questions about the interplay of contestability and explainability and proposing some ways forward.

22 January
Applying Cognitive-Affective Models to the Design of Ethical Assistant Agents

Catriona Kennedy, Honorary Research Fellow at the University of Birmingham

Abstract

People do not always make decisions according to their ethical values. For example, hiring decisions can be affected by unconscious bias; people who support environmental sustainability often use cars and short-haul flights because of convenience and time-pressure.

This discrepancy is called the value-action gap and may be caused by social and structural pressures as well as cognitive biases. Current technology tends to widen this gap (e.g. addictive engagement with social media, pressure selling on websites).

Computational models of cognition and affect can provide insights into the value-action gap and how it can be mitigated. Such models include dual process architectures, emotion models and behaviour change theories. In particular, metacognition (“thinking about thinking”) plays an important role in many of these models as a mechanism for self-regulation and for reasoning about mental attitudes, including values.

This talk will give an overview of cognitive-affective models and how they might be applied to the design of assistant agents to help people make decisions according to their values.

29 January
Standardising and Auditing AI 

Andreas Theodorou, Postdoctoral Fellow at the Department of Computing Science, Umeå University

Abstract

Over the last few years we have seen huge growth in the capabilities and applications of Artificial Intelligence (AI). Hardly a day goes by without news about technological advances and the societal impact of the use of AI.

Not only are there great expectations of AI's potential to help solve many current problems and support the well-being of all, there are also growing concerns about AI's impact on societal and human well-being. Standards and ethical guidelines keep coming out from prominent intergovernmental organisations and bodies.

In this talk, Andreas will first give an overview of the current work on AI standards and the challenges faced in producing them. He will also try to motivate PhD/EngD students to participate in the ongoing policymaking discussions instead of being passive observers.

The talk will conclude with a short overview of our work at Umeå University's Responsible AI group on AI Governance.

#frAIday registration form

To participate in the #frAIday seminars, please register. We will send you a link to the Zoom event. Please note that you do not have to register for each event.

The University is a public authority. Messages that you submit here are stored in accordance with Swedish law. Read more at umu.se/en/gdpr about how we process personal data.

For more information, please contact

Tatyana Sarayeva
Coordinator of the WASP-HS program and Responsible AI research group.
E-mail: tatyana.sarayeva@umu.se

Christian Kammler 
Doctoral student, Department of Computing Science 
E-mail: christian.kammler@umu.se

Earlier presentations

"3o years in search for Human-Centric AI and this is what I found"
Helena Lindgren, Professor, Department of Computing Science, Umeå University.

"From Plato to Yoda, training responsible AI designers for on-the-field action" 
Loïs Vanhée, Associate Professor, Computing Science, Umeå University. Download the presentation "From Plato to Yoda" here

"Lies, deceptions and computation"
Hans van Ditmarsch, Senior Researcher at CNRS, the French National Research Organization. Download the presentation "Lies, deceptions and computation" here

"XAI: A New Model and Its Implications for Medical Ethics"
Erik Campano, Doctoral student, Department of Informatics, Umeå University. Download the presentation "AI and informed consent" here

"Research Directions on Data Privacy" 
Vicenç Torra, Professor at the Department of Computing Science, Umeå University. Download the presentation "Research Directions on Data Privacy" here

"Implementing AI in a corporation – the good, the bad and the odd"
Salla Franzen, Chief Data Scientist, SEB. Download the presentation "AI ethics in financial services" here

"Prototyping for Social Simulation"
Maarten Jensen, Doctoral student, Department of Computing Science, Umeå University. The presentation will be available shortly.