Explainable Artificial Intelligence 7.5 credits
About the course
This course explores the principles, methods, and applications of Explainable Artificial Intelligence (XAI). As AI systems become more complex and widely used in critical domains such as healthcare, finance, and autonomous systems, understanding their decision-making processes is crucial for transparency, fairness, and trust.
Students will learn about various XAI approaches, including model-specific and model-agnostic techniques, interpretable machine learning models, and post-hoc explanation methods. The course also covers human-centered design for AI explanations and real-world case studies where explainability is essential.
Through hands-on exercises, projects, and discussions, participants will gain practical experience in implementing XAI techniques, evaluating explainability metrics, and assessing the validity, reliability, and usability of the resulting explanations. Particular emphasis is placed on identifying the intended audience and tailoring explanations to different user groups. The course also explores how explanations may need to be adapted to the specific context of use.
Module 1, theory, 4.0 credits.
This module provides a theoretical foundation for Explainable Artificial Intelligence, focusing on its principles, methods, and applications. Through lectures and exercises, students will explore different approaches to explainability, including interpretable models, post-hoc explanation techniques, and human-centered AI design. The module also addresses ethical considerations, regulatory frameworks, and the role of explainability in various application domains.
Various AI, machine learning, and XAI methods will be used. The intention is to make students proficient in applying these methods in real-world settings encountered in industry and society. For this reason, lectures are accompanied by exercises in which students practice applying some of the methods covered in the lectures.
The course mainly uses the Python and R programming languages for its lectures and examples. Students can freely choose which language to use for the exercises.
A key component of the module is the Learning Diary, where students will critically reflect on lecture content, exercises, and key readings. This assessment encourages deeper engagement with the material, allowing students to articulate their understanding, analyze different XAI techniques, and evaluate their practical implications.
Key topics covered are:
- Introduction to Explainable AI: Importance, definitions, and challenges
- Interpretable vs. black-box models
- Model-agnostic explanation methods (e.g., LIME, SHAP, CIU)
- Explainability in deep learning and neural networks
- Human-centered XAI and usability aspects
- Fairness, bias, and ethical considerations in XAI
- Case studies and industry applications
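To illustrate the model-agnostic idea behind methods such as LIME and SHAP, here is a minimal, self-contained sketch (not from the course materials): it treats the predictor as a black box and scores each feature by how much the prediction changes when that feature is replaced with a baseline value. The `black_box` model, the baseline of 0.0, and the function names are illustrative assumptions, not the actual LIME or SHAP algorithms.

```python
def black_box(x):
    # Toy stand-in for an opaque predictor; any callable would do.
    weights = [2.0, -1.0, 0.5]
    return sum(w * v for w, v in zip(weights, x))

def sensitivity(predict, x, baseline=0.0):
    """Model-agnostic sketch: score each feature by the drop in the
    prediction when that feature is replaced with a baseline value.
    Only the predict function's outputs are used, never its internals."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(base_pred - predict(perturbed))
    return scores

print(sensitivity(black_box, [1.0, 1.0, 1.0]))  # one score per feature
```

Real model-agnostic methods refine this occlusion idea: LIME fits a local surrogate model on many such perturbations, and SHAP averages contributions over feature coalitions.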
Module 2, practice, 3.5 credits.
This module focuses on the practical implementation of Explainable Artificial Intelligence through a project carried out in groups of 1-4 students. Project topics and data sets will be provided by the course personnel, but student-proposed topics are encouraged. Each group presents its progress, plans, and open questions to course personnel and fellow students in intermediate "mentoring sessions" and in one final presentation session. Through this mentoring approach, students take an active role in developing an XAI solution, critically assessing its usability, and adapting explanations to different stakeholders.
The purpose of mentoring sessions is to provide constructive feedback and guidance to the students in their learning project. Rather than traditional lectures, students will engage in self-directed learning with support from mentors, who will guide discussions, provide feedback, and help refine project outcomes. The final deliverable is a project report, in which students will document their methodology, justify their design choices, evaluate the effectiveness of their explanations, and reflect on the broader implications of their work.