Responsible Artificial Intelligence

The research group in Responsible AI was established to study the ethical and societal impact of AI, while supporting policymakers through the development of tools and methodologies to mitigate adverse effects.

We study the ethical and societal impact of AI through the development of tools and methodologies to design, monitor, and deploy trustworthy AI systems and applications.
Our research is not only about the development of intelligent systems, but also about understanding the effects of their deployment on our societies. We are working to ensure the ethical application of Artificial Intelligence (AI), both through public engagement and frequent interaction with policymakers, and by facilitating the engineering of Responsible AI.

Our diverse multidisciplinary research programme aims to give all relevant actors access to the means and tools to develop, deploy, operate, and govern AI systems, while taking ethical, legal, and socio-economic implications into consideration.

Research Topics & Questions

  • AI Governance: Which ethical, legal, and socio-economic issues arise from the activity of autonomous intelligent agents in teams? How can this activity be regulated? What are the moral and legal values we want our systems to adhere to?
  • Systems Engineering of AI: How can we develop intelligent systems with modular, maintainable code? How can we efficiently develop systems that adhere to our moral and legal values?
  • Analysis and formalization of social interaction: The aim is to study the effect of social and organizational structure, taking into account the autonomy and heterogeneity of participants and the societal and legal values that hold in the context. To this end, we are developing formal theories and a computational architecture for agent deliberation based on social practices.
  • Design and evaluation of human-agent teamwork: The central research question is how people interact (negotiate, trust, cooperate) with autonomous cognitive entities in a social setting; the work includes the development of agent-based simulations of complex socio-technical domains.

Current Projects

  • WASP-HS: Funded by the Wallenberg Foundations, WASP-HS is a 660 MSEK project headed by Virginia Dignum here at Umeå. The project aims to study the impact of technology on entrepreneurship and society, and will also collaborate with WASP on the doctoral program, among other things. The program is interdisciplinary, combining humanities and social sciences with technological research to support the recruitment of over 70 PhD students and various supporting researchers.
  • WASP-AI: This project, funded by the Wallenberg AI, Autonomous Systems and Software Program, is focused on researching and addressing the societal, ethical and cultural impact of AI. Within this umbrella, we are developing formal methods to help monitor and design ethically-aligned agents.
  • AI4EU: AI4EU is the European Union’s landmark AI project, which seeks to develop a European AI ecosystem, bringing together the knowledge, algorithms, tools and resources available and making it a compelling solution for users. Involving 80 partners across 21 countries, AI4EU will unify Europe’s Artificial Intelligence community. Our contribution is the design of a development methodology for AI systems. This methodology, to be made available on the platform, enables its users to create systems that not only perform well, but are also in line with European values.
  • HumaneAI: This project aims to create a set of recommended actions presenting the value of the Human-Centered approach, to help all European member states build Human-Centered Artificial Intelligence and achieve the goals set by the European Commission in its European approach to Artificial Intelligence. The HumaneAI Action Plan will be drawn from the newly proposed research roadmap and will provide recommendations to stakeholders in strategic areas that are relevant for Human-Centered Artificial Intelligence.
  • HumaneAI-Net:
    The project brings together top European research centres, universities and key industrial champions into a network of centres of excellence that goes beyond a narrow definition of AI and combines world-leading AI competence with key players in related areas such as HCI, cognitive science, social sciences and complexity science. This is crucial to develop a truly Human-Centric brand of European AI. The aim is to facilitate AI systems that enhance human capabilities and empower individuals and society while respecting human autonomy and self-determination.
  • ASSOCC – Agent-based Social Simulation of the Coronavirus Crisis: Understanding the effectiveness of containment policy responses to the coronavirus pandemic through social simulation and social reporting.
  • Sustainable AI:
    As cases of accidental discrimination and privacy violations grow in number, so do the demands for the ability to control AI in a more responsible way. Today, AI is often integrated without the prerequisites for identifying, measuring and evaluating its implications from a broader ethical perspective. The goal of the project is for the Sustainable AI framework to be applied in companies, start-ups and public authorities as a tool to avoid unintentional ethical pitfalls. The project also aims to make these organisations ready for upcoming regulations on AI and ethics, and to provide them with tools to control AI based on their ethical values.
  • AI, Democracy and Self-determination:
    The project aims to investigate to what extent and how AI can be designed and used to behave in line with ethical principles and social values, as well as the impact of AI systems on self-determination and on democratic processes and values.
  • AI Glass Box:
    AI systems are increasingly expected to act autonomously. Current Machine Learning approaches focus on pattern matching and rely heavily on correlation methods, leading to impenetrable systems that are notoriously difficult to monitor (the so-called black-box algorithms). The project focuses on the development of methods to verify and monitor the ethical behavior of AI systems based on the observation of their input and output behavior according to a continuously evolving societal optimum (a minimal illustrative sketch of this idea follows the project list).
  • Bias free chatbots:
    Humans and society are biased, and this bias is reflected in collected data, which is then used to build Artificial Intelligence (AI) systems such as chatbots.
    That is, bias propagates from humans, through data, to AI systems. Consequently, bias can be detected and mitigated in the AI system, in the data, or directly in human behavior. The aim of this project is to develop computational methods to detect bias directly in human chat behavior. The research insights will help to develop chatbots that detect bias and react appropriately to biased statements by human users, for example by informing or educating the users about the expressed bias. The project is coordinated by GH Solutions AB.
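
The glass-box monitoring idea mentioned above can be made concrete with a small sketch. The following Python fragment is a minimal, hypothetical illustration, not the project's actual method: all names and the toy constraint are assumptions. It shows a runtime monitor that treats the model as opaque and checks only its observed inputs and outputs against explicitly declared constraints.

# A minimal sketch (hypothetical names): a monitor that wraps an opaque model
# and checks every observed (input, output) pair against declared constraints.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class Constraint:
    name: str
    check: Callable[[Any, Any], bool]  # (input, output) -> True if satisfied

@dataclass
class GlassBoxMonitor:
    model: Callable[[Any], Any]                       # the opaque system under observation
    constraints: List[Constraint]
    violations: List[Tuple[str, Any, Any]] = field(default_factory=list)

    def __call__(self, x: Any) -> Any:
        y = self.model(x)                             # observe the output for input x
        for c in self.constraints:
            if not c.check(x, y):                     # record any constraint violation
                self.violations.append((c.name, x, y))
        return y

# Toy usage: a stand-in "black box" loan decision that quietly penalises age,
# and one declared rule the monitor checks purely from inputs and outputs.
def opaque_model(applicant: dict) -> bool:
    return applicant["income"] > 30000 and applicant["age"] <= 65

no_age_penalty = Constraint(
    name="older applicants with sufficient income must not be rejected",
    check=lambda a, decision: decision or a["income"] <= 30000 or a["age"] <= 65,
)

monitor = GlassBoxMonitor(opaque_model, [no_age_penalty])
monitor({"income": 45000, "age": 70})
print(monitor.violations)  # the hidden age penalty shows up as a recorded violation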

Visiting Fellowship Program

The AI research groups at the Department of Computer Science, Umeå University, Sweden, have established a Visiting Fellowship program to foster the development of links with international scholars (at least five years post-doctorate). This Fellowship is an exciting opportunity for scholars to visit the Department, contribute to its academic life and form new collaborations. The focus of the Fellowship is on the impact of Artificial Intelligence in the broadest sense, a key strength and unifying feature of the AI cluster, or on a substantive area of interest within the AI cluster.

Proposals for collaborative research with members of the AI cluster are especially welcome. Applicants are therefore strongly encouraged to interact with members of the cluster prior to submitting their application to the Visiting Fellowship program. We expect to support 3 to 5 fellowships per year.

Applications for the Visiting Fellowship are currently closed. Fellows for Fall 2020/Winter 2021 were announced in May 2020.
Fellows for 2020/2021 are:
• Hans Weigand, The Netherlands (postponed to 2021)
• Rui Prada, Portugal (postponed to 2021)
• Mark Klein, USA
• Hans van Ditmarsch, France
• Aurelie Clodic, France
• Catriona Kennedy, UK
• Stephen Cranefield, New Zealand


News

Unique collaboration on IT and artificial intelligence
Published: 25 Jun, 2020

The initiative Digital Impact North is based on strong growth in AI, autonomous systems and software.

Spin-off awarded for industry-grade physics in Unreal Engine
Published: 24 Jun, 2020

Algoryx has been awarded for its implementation of industry-grade physics simulation in Unreal Engine.

Researcher contributes to Conference on Systems, Man, and Cybernetics
Published: 12 Jun, 2020

Organizing a special session on Data Analytics and Computational Intelligence

Virginia Dignum one of 50 top AI ethics influencers
Published: 31 May, 2020

Virginia Dignum is on the French digital web agency IPFC's list of the top 50 AI Ethics influencers.

Licentiate thesis on socially intelligent systems
Published: 27 May, 2020

Timotheus Kampik has defended his Licentiate Thesis.

Umeå professor on world list

Professor Virginia Dignum's book on AI was ranked as one of the best computer science books of 2019.