Lies, deception and computation

Time: Friday 6 November 2020, 12:15-13:00
Place: MC323, MIT-building and Zoom

In a dynamic modal logic, a 'lie that p' (where p is some proposition) is an action interpreted as a state transformer relative to the proposition p. These states are pointed Kripke models encoding the agents' uncertainty about their beliefs, and their transformation results in updated beliefs. Lies can be about factual propositions but also about the beliefs of other agents. Deception can be given meaning in terms of protocols consisting of sequences of such actions, in view of realizing an epistemic goal. Agents can have many different roles. Two speaker perspectives are: (obs) an outside observer who is lying to an agent that is modelled in the system, and (ag) an agent who is lying to another agent, where both are modelled in the system. Three addressee perspectives are: the *credulous* agent, who believes everything it is told (even at the price of inconsistency); the *skeptical* agent, who only believes what it is told if that is consistent with its current beliefs; and the *belief revising* agent, who believes everything it is told by consistently revising its current, possibly conflicting, beliefs. Then again, there may be non-addressed agents who perceive the lies and deception but are distinct from the addressee.
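To give a rough flavour of the three addressee perspectives, here is a minimal sketch (not taken from the talk) in which an agent's beliefs are a set of possible worlds and a proposition is the set of worlds where it holds; the names and representation are illustrative assumptions only.

```python
# Illustrative sketch: beliefs and propositions as sets of possible worlds.
# A 'lie that p' reaches the addressee as if it were a truthful announcement of p.

def credulous(beliefs, p):
    # Believes everything it is told, even at the price of inconsistency
    # (the empty set of worlds: no world is considered possible).
    return beliefs & p

def skeptical(beliefs, p):
    # Accepts the announcement only if it is consistent with current beliefs.
    updated = beliefs & p
    return updated if updated else beliefs

def revising(beliefs, p):
    # Incorporates the announcement, revising on conflict by falling
    # back to the p-worlds instead of becoming inconsistent.
    updated = beliefs & p
    return updated if updated else set(p)

# Two worlds: one where p holds, one where it does not.
p = {"w_p"}               # worlds where p is true
beliefs = {"w_not_p"}     # the agent currently believes not-p

credulous(beliefs, p)   # empty set: the agent's beliefs become inconsistent
skeptical(beliefs, p)   # {"w_not_p"}: the lie is rejected
revising(beliefs, p)    # {"w_p"}: beliefs are revised to believe p
```

The point of the contrast: told a lie that contradicts what it believes, the credulous agent ends up with no possible worlds at all, the skeptical agent ignores the lie, and the revising agent gives up its old beliefs in favour of the announced proposition.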

Lying may be costly. Not only in terms of delayed response times in psychological experiments, on which we will not speak, but also in terms of the computational complexity of performing certain tasks while lying or deceiving, or while taking into account that one might be lied to. We do not know of hard results in this area, but there seem to be many interesting open questions. Results on computational complexity seem to be particularly relevant for AI. The issues are not merely the possibility of one lie among many truths coming out of the mouth of a single agent (we recall Ulam games), but also the presence of one unreliable agent among many trustworthy agents (typical in security protocol settings). How to detect lies or liars, and how costly is that? There is ample room for special case studies and benchmarks, such as lying in gossip protocols.
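As a concrete illustration of such costs (a sketch, not part of the talk), consider the Ulam game: find a number in {1..n} by yes/no questions when up to k answers may be lies. Berlekamp's volume bound gives a necessary condition on the number of questions q, namely 2^q >= n * sum over i <= k of C(q, i); the function below just searches for the smallest such q.

```python
from math import comb

def min_questions(n, k):
    """Smallest q satisfying Berlekamp's volume bound
    2^q >= n * sum_{i=0}^{k} C(q, i), a lower bound on the number of
    yes/no questions needed to find a number in {1..n} when the
    responder may lie at most k times."""
    q = 0
    while 2 ** q < n * sum(comb(q, i) for i in range(k + 1)):
        q += 1
    return q

min_questions(10**6, 0)  # 20: classic twenty questions, no lies
min_questions(10**6, 1)  # 25: matches the known answer to Ulam's problem
```

Even a single possible lie already raises the cost from 20 to 25 questions for a million candidates, which hints at the kind of complexity overhead that tolerating unreliable agents can impose.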

Speaker: Hans van Ditmarsch, Senior Researcher, CNRS, the French National Centre for Scientific Research

Read more about the seminar here

Event type: Lecture