A Framework for XAI (Explainable AI) in Socio-Technical Systems
Friday 30 September 2022, 12:15 - 13:00
Online via Zoom
The role of explanations in intelligent systems has entered the spotlight in recent years as AI-based solutions appear in an ever-growing set of applications. Data-driven (machine learning) techniques are often cited as examples of how opaque, or black-box, approaches can lead to problems such as bias and a general lack of explainability and interpretability. In reality, however, these properties are difficult to guarantee in general, even for approaches based on formalisms typically considered more amenable to explanation, such as knowledge-based ones.
In this talk, we describe a line of research and development toward building tools that facilitate the implementation of explainable and interpretable hybrid intelligent socio-technical systems, focusing on features that users can leverage to build explanations for their queries. In particular, we discuss HEIST (Hybrid Explainable and Interpretable Socio-Technical systems), a framework for implementing intelligent socio-technical systems that are explainable by design, and study two use cases on social platforms: detecting hate speech and detecting cyberbullying.
Please Register
Please register here, and we will send you a link in good time before the event. #frAIday is open to everyone interested in AI!