"False"
Skip to content
printicon
Main menu hidden.
Illustration of artificial intelligence and digital twins.

Cross-disciplinary conversations about AI: theoretical and ethical issues

Time Wednesday 26 November until Thursday 27 November 2025, 09:30–17:30
Place HUM.H.119 and HUM.F.200, Humanities Building, Umeå University

Workshop at Umeå University
November 26-27, 2025 

Program

Wednesday, November 26th

Philosophy of AI meets Cognitive Science and Computer Science

Room: HUM.H.119

9.30-10.30 - Linus Holm (Cognitive Science, Umeå): Thinking machines with teleological drives

10.45-11.45 - James Turner (Philosophy, Umeå) with Fabian Hundertmark (Philosophy, Valencia): Do AI Frogs Have Mental Representations?

11.45-13.15 - Lunch break

13.15-15.00 - Iwan Williams (Philosophy, Copenhagen - Higher Seminar in Philosophy): Content determination for AI representations – the disputed roles of structure, selection, and success

15.15-16.15 - Adam Dahlgren (Computer Science, Umeå): Are we building minds or machines?

16.30-17.30 - Dorna Behdadi (Philosophy and TAIGA, Umeå): Making Sense of Mentalizing Behavior in Relation to AI 

Thursday, November 27th

Ethics meets AI in Medicine and Healthcare, funded by the European Union through the AICE project

Room: HUM.H.119

9.30-10.30 - Giorgia Pozzi (Philosophy, Delft - hybrid): Clinician-AI disagreement in medicine: beyond opposition

10.45-11.45 - Joshua Hatherley (Philosophy, Copenhagen): Prescription griefbots? On the moral permissibility of “therapeutic” postmortem avatars in healthcare

11.45-12.45 - Lunch break.

Room: HUM.F.200

12.45-13.45 - Madeleine Hayenhjelm (Philosophy, Umeå): Medical AI through the lens of risk

14.00-15.00 - Pii Telakivi (Philosophy, Helsinki): The Digital Other in AI Therapy

15.15-16.15 - Roundtable conversation: What are the ethical challenges for AI in medicine? Is there something we are missing, exaggerating, or downplaying?

Registration

All interested are welcome to this event, but advance registration is requested. Please register by sending an email to dimitri.mollo@umu.se no later than November 12. Registration is free.

Arranged by

This workshop is arranged by the Department of Historical, Philosophical and Religious Studies at Umeå University, together with the AICE project, and is funded by the European Union.

Organisers

Dimitri Coelho Mollo, Umeå University. 
Madeleine Hayenhjelm, Umeå University.

Abstracts

Linus Holm

Thinking machines with teleological drives

Contemporary data-driven AI systems implementing reinforcement learning have managed impressive feats, such as consistently beating Go champions, and have provided tools for generating roughly intelligible text, images and even video on demand. These architectures are now at a dead end because, essentially, they rest on interpolation of their data, are susceptible to corruption and hallucination, have no way of expanding their ontology or adapting, and their “answers” are natively unexplainable. In my talk, I will sketch out an alternative architecture that resolves these shortcomings and, time permitting, touch on the value of intrinsic motivation as a guide for autonomous, generally intelligent systems.

James Turner and Fabian Hundertmark

Do AI Frogs Have Mental Representations?

The frog that snaps at flies is a canonical example in the study of mental representations. It is assumed that the frog’s capacity to snap at flies is underpinned by its ability to represent them. But how is it that frogs come to represent flies? The answer from teleosemantics is that the frog represents flies because its visual system has the function of representing flies (Millikan 1984; Neander 2017; Shea 2018). We transpose this teleosemantic theory to a minimalist artificial setting—an artificial neural network (ANN) that receives input from a camera and coordinates an artificial “tongue” that snaps at flies—and ask whether function-based accounts can underwrite representational ascriptions in such systems.

The mainstream teleosemantic theory faces difficulties. Functions are standardly thought to be conferred by natural selection—namely, as the result of differential reproduction. But ANNs do not undergo natural selection. Taking inspiration from the Generalised selected-effects (GSE) theory (Garson 2019a; 2019b; Garson & Papineau 2022), we propose that natural selection is not the only form of selection—differential retention is also a form of selection, and can thus confer functions on systems. Since differential retention is the process by which many neuronal structures gain their functions (Garson 2019a), we assess whether differential retention occurs in artificial neural networks such that it confers functions on them. We contend that while differential retention of neuronal weights does seem to occur in ANNs, whether this confers functions on them rests on whether (a) artificial neurones form populations, or (b) the neuronal weights are retained due to their effects on the larger system of which they are a part (the artificial frog). We argue that while (a) seems unlikely, (b) is plausible. Thus, we conclude that it is plausible that ANNs have the kinds of functions necessary for them to have genuine representations.

Iwan Williams

Content determination for AI representations – the disputed roles of structure, selection, and success

What determines the content of internal representations in LLMs and other AI models? In the first part of the talk, I'll argue that structural correspondences between internal activations in LLMs and real-world entities can play a role in grounding representation of those entities. To ground content, however, these correspondences need to be exploited—to play a causal role in explaining the behavioural success of the system. But what fixes the success conditions of an AI model's behaviour? An attractive view appeals to history: successful outcomes are those that have been selected for or stabilised through training. In the second half of the talk, I'll question this final assumption. When explaining an AI system's behaviour, we typically want to understand why it succeeds or fails on our terms, relative to its deployment context, and thus (I'll tentatively suggest) success and failure in this sense are the appropriate explananda for representational explanations in AI.

Adam Dahlgren Lindström

Are we building minds or machines?

Claims that large language models exhibit human-like capabilities such as abstract reasoning often obscure a basic uncertainty: when should we look to our own cognition for inspiration, and when is that a problematic strategy? This presentation examines that question by contrasting LLM training and evaluation with natural cognition. We highlight where deep learning systems replicate familiar cognitive errors, and where their failures betray mechanisms unlike those of any human learner. As a case study using structurally equivalent problem variants, we show that LLM “reasoning” remains strikingly inconsistent despite surface-level fluency. Furthermore, recent multimodal findings indicate that meaningful abstraction requires grounding signals absent from text-only models. Together, these perspectives argue for caution in interpreting LLM behaviour as evidence of emerging minds with abstract reasoning.

Dorna Behdadi

Making Sense of Mentalizing Behavior in Relation to AI 

The rapid development of LLM-based chatbots and other socially responsive machines has been accompanied by increasing reports of users viewing and treating AI systems as if they had thoughts, feelings or personalities, and sometimes even as friends or partners (Skjuve et al., 2021; Döring et al., 2020; Newman, 2014). Yet empirical data reveal a discrepancy between self-reports and behavior: while users explicitly deny attributing mental states to AI, indirect linguistic and behavioral measures suggest otherwise (Spatola & Wudarczyk, 2021; Thellman et al., 2022). Moreover, recent studies indicate that disclosure strategies, such as labeling chatbots with ‘I’m a bot’, have little to no effect on users’ tendency to perceive or treat AI as minded entities or on their susceptibility to AI’s influence (Gallegos et al., preprint; van der Goot et al., 2024; Park et al., 2023). This article considers three explanatory frameworks for this divergence: the distinction between explicit and implicit attitudes, the alief account, and the fictionalist approach to mentalization. Depending on which account we take to best explain these findings, different ethical and practical conclusions may follow as to whether mentalizing behavior toward AI constitutes a problem and how it should be addressed in policy and design.

Giorgia Pozzi

Clinician-AI disagreement in medicine: beyond opposition

As AI becomes increasingly integrated into medical care, questions arise about how clinicians should collaborate with these systems, especially when their recommendations diverge. The literature suggests three main responses: deferring to the AI, overruling it, or treating it as an epistemic equal requiring a second human opinion in cases of disagreement. This talk outlines the limits of these approaches and proposes a more nuanced way of thinking about clinician–AI disagreement. By distinguishing different types of disagreement and considering AI’s specific role in practice, we maintain that a collaborative approach can better navigate the uncertainties of medical decision-making.

Joshua Hatherley

Prescription griefbots? On the moral permissibility of “therapeutic” postmortem avatars in healthcare

Postmortem avatars (PMAs), or “griefbots,” are chatbots designed to mimic the behaviour of specific deceased individuals. This talk aims to initiate discussion and debate as to the moral permissibility of “therapeutic” PMAs, a term I use to refer to PMAs that are (a) recommended, prescribed, mediated, and/or supervised by a healthcare professional; and (b) used with therapeutic goals in mind (e.g. to assist patients in regulating difficult emotions associated with grief, or easing symptoms of grief-related anxiety or depression). Using Kurzweil and Story’s (2025) recently developed theatrical framework for PMAs, I argue for the pro tanto moral permissibility of therapeutic PMAs. In particular, I highlight two specific use-cases in which these systems could, in principle, be used safely, ethically, and to the potential benefit of mourning patients. I conclude by highlighting some unresolved ethical complexities associated with the design and regulation of PMAs that warrant further research from bioethicists and philosophers.

Pii Telakivi

The Digital Other in AI Therapy

Mental health problems are increasing globally, and many are placing their hopes in conversational AI agents used in therapy, i.e., “therapy chatbots”. They can be beneficial for some people, but one worry is that they lack what is often considered the most crucial prerequisite (or ‘common element’) of successful therapy – namely, the therapeutic bond between two autonomous agents, which includes the ability to share and be affected by another’s emotional states (see Wampold 2015). However, having a conversation with an LLM-based therapy chatbot can be phenomenologically experienced as something akin to sociality – as quasi-social (Strasser & Schwitzgebel 2024) or marked by quasi-otherness (Heersmink et al. 2024). I will argue that chatbots designed to act as 'quasi-social interaction partners' raise particularly significant ethical concerns, because when using them it is easy to forget that they are not trustworthy partners capable of taking up the role required for the therapeutic bond.

Event type: Workshop
Contact
Dimitri Coelho Mollo
Madeleine Hayenhjelm