Researchers at Umeå University are developing a framework for trustworthy AI. The research project Responsible Artificial INtelligence (RAIN) is supported by a SEK 1,500,000 grant from the Knut and Alice Wallenberg Foundation.
Text: Mikael Hansson
Andreas Theodorou and Virginia Dignum.
The researchers involved in the project are Andreas Theodorou (principal investigator) and Virginia Dignum, both at the Department of Computing Science, Umeå University.
Many guidelines for trustworthy AI have been produced in the past few years. These guidelines, including the ones by the European Commission's High-Level Expert Group, rely on promoting high-level, context-specific values, such as transparency, fairness, and accountability. The interpretation of these values varies from culture to culture and from stakeholder to stakeholder. The multi-interpretability of such terms may prove to be one of the greatest challenges in appropriately regulating intelligent systems and creating actionable policies.
While it may be impossible to find, let alone enforce, universal interpretations of ethical and social values, we can at the very least try to make their interpretations explicit and transparent.
In RAIN, the researchers are developing a structured methodology to enable organisations to move from high-level abstract values to operationalisable requirements. In addition to the methodology, RAIN is looking into the development of a concrete multi-stakeholder assessment framework, along with relevant supporting tools, to enable auditability and compliance checking of existing and upcoming AI systems. Both the methodology and the assessment framework will be field tested by industrial partners and key stakeholders in the Nordic countries.