Using ethics to shape AI in the best interest of humans
Virginia Dignum was inaugurated as Professor of Social and Ethical Artificial Intelligence in 2019.
Text: Jonas Lidström
Image: Mattias Pettersson
As Artificial Intelligence (AI) systems increasingly make decisions that directly affect users and society, many questions arise about the social, economic, political, and ethical impact of these systems. Can machines make moral decisions and deal with moral dilemmas? Which ethical principles should be included in the design of AI systems?
There are many options to consider, but there is no single ‘right’ choice. Methods are needed that ensure the accountability, responsibility and transparency of AI as part of complex socio-technical environments. Optimal AI is not a system that optimises its result while ignoring the context, but one that delivers the most responsible and ethically acceptable result given its context.
Virginia Dignum researches how to develop AI systems that meet their social responsibility, so that their decisions and impact are trustworthy and relevant. This includes developing theories, models and tools that support designers, help them oversee the behaviour of a system and measure its societal impact, and formally verify that the system's behaviour conforms to a set of ethical principles.
Virginia Dignum is Professor of Social and Ethical Artificial Intelligence. She was born in Lisbon in 1964 and received her PhD from Utrecht University in 2004. She is a Fellow of the European Artificial Intelligence Association (EURAI), a member of the European Commission’s High Level Expert Group on AI, and a member of the Executive Board of IEEE’s Initiative on Ethically Aligned Design. She is also a member of the scientific boards of the Delft Design for Values Institute, the Responsible Robotics Foundation, and the ALLAI Alliance.