"False"
Skip to content
printicon
Main menu hidden.

Large language models, AI risk and AI alignment

Time: Thursday 25 May 2023, 13:15 - 14:15
Place: MIT Ljusgården (MIT-place)

We are at a crucial moment of human history, when we are automating and offloading to machines the one key asset that has brought our tremendous success so far: our intelligence. What could possibly go wrong?

A lot.

But there is hope: the field that has become known as AI alignment aspires to make sure that advanced AI has goals and values aligned with ours and compatible with human flourishing. I will discuss some challenges in AI alignment in the context of the breakneck speed at which large language models, such as ChatGPT, and other AI systems are currently being developed and deployed.

Olle Häggström is Professor of Mathematical Statistics at Chalmers University of Technology and has spent much time studying the risks of new technologies. He is the author of Here Be Dragons: Science, Technology and the Future of Humanity (2015) and Thinking Machines (2021).


This is an event for students, staff and the general public. Welcome!

Event type: Lecture
Contact
Åke Brännström