Research project

The project studies legal issues arising from the increased use of semi-automated decision-making in areas affecting fundamental rights, specifically law enforcement, social welfare systems, and online content moderation.
The project aims to clarify the responsibilities of the human users of such systems, as well as how hybrid systems affect human decision-making and autonomy. This in turn can answer vital questions about how such decision-making systems should be regulated, and how they can be implemented so as not to conflict with legal requirements.
This project aims to generate new knowledge about the legal aspects of the increased use of semi-automated decision-making in areas affecting fundamental rights. In response to growing political calls for digitalization and efficiency, hybrid algorithmic/human decision-making systems and AI decision-support systems have increasingly been deployed in diverse contexts such as social welfare control systems, law enforcement operations, and online content moderation. The legal preconditions for the use of such systems remain largely unexplored, however, and as courts and other legal actors have begun reviewing their implementation, they have increasingly found them lacking in transparency, legality, and proportionality. This project will help mitigate such issues through a cross-disciplinary approach combining legal science and informatics, analyzing in depth both the legal landscape surrounding semi-automated decision-making systems and their practical implementation and interaction with legal rules and principles in specific contexts. This allows for an analysis of the effects of such systems on the legal rights of the individuals subject to their decisions, the legal agency of human decision-makers in such contexts, and how hybrid decision-making could be regulated to better serve fundamental rule-of-law values.