The simple idea behind Humlab’s Tech-breakfast initiative is to create a collaborative space for practical, critical, and theoretical explorations of digital methods and digital technologies. The breakfasts take place every second Thursday between 08.00–10.00, and scholars and students are invited to bring their work and get hands-on support from Humlab staff with expertise in digital methods. Do you have a research problem to which a digital approach might be part of the solution? Are you curious to explore what can be done with free and easy-to-use software?
This time Bram Vaassen from the Department of Historical, Philosophical and Religious Studies will also give a short presentation:
System opacity and user autonomy for AI algorithms

Policy texts (HLEG 2019), academic literature (Floridi et al. 2018), and popularizing books (O’Neil 2016) alike call for increased transparency of automated decision algorithms. However, such texts rarely provide explicit explanations of what is actually objectionable about the opacity of AI decision algorithms. I aim to address this lacuna by arguing that such opacity threatens the autonomy of users by obscuring salient pathways for shaping their lives. My account (i) comes with a firm grounding in several ethical theories, (ii) explains how opacity can undermine user responsibility, (iii) provides a more accurate picture than competitors such as Walmsley (2020), and (iv) addresses recent criticism of demands for transparency (Zerilli 2019). Or so I will argue.