#frAIday - AI, Opacity, and Personal Autonomy

Time: Friday 17 September 2021, 12:15–13:00
Place: Online (Zoom)

Advancements in machine learning have fuelled the popularity of AI decision algorithms for streamlining procedures such as bail hearings (Feller et al., 2016), medical diagnoses (Rajkomar et al., 2018; Esteva et al., 2019), and recruitment (Heilweil, 2019; Van Esch et al., 2019). Academic articles (Floridi et al., 2018), policy texts (HLEG, 2019), and popularizing books (O’Neil, 2016) alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity, as well as recent work on the value of causal explanation (Lombrozo, 2011; Hitchcock, 2012), I raise a moral concern about opaque algorithms that often goes unnoticed: opaque algorithms can undermine users’ autonomy by hiding salient pathways through which they might affect their outcomes. I argue that this concern is distinct from those typically discussed in the literature and that it deserves further attention. I also argue that it can guide us in deciding what degree of transparency should be demanded. Plausibly, the required degree of transparency is attainable without ‘opening the black box’ of machine learning algorithms.

Event type: Seminar
Tatyana Sarayeva