Humans and society are biased, and this bias is reflected in collected data, which is then used to build Artificial Intelligence (AI) systems such as chatbots.
That is, bias propagates from humans, through data, to AI systems. Consequently, bias can be detected and mitigated in the AI system, in the data, or directly in human behavior. The aim of this project is to develop computational methods to detect bias directly in human chat behavior. The research insights will help to develop chatbots that detect bias and react appropriately to biased statements by human users, for example by informing or educating users about the bias they express.
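To make the envisioned interaction concrete, the following is a purely illustrative toy sketch, not the project's actual method: a chatbot flags a biased statement with a naive keyword heuristic (the cue list, function names, and reply text are all invented for illustration; a real system would rely on learned computational models) and reacts by informing the user.

```python
# Toy sketch (illustrative only, NOT the project's method): flag a biased
# statement with a naive keyword heuristic and respond by informing the user.

# Hypothetical lexicon of bias-indicating generalizations; a real system
# would use trained models rather than a keyword list.
BIAS_CUES = ["all women", "all men", "those people"]


def detect_bias(message: str) -> bool:
    """Return True if the message contains one of the naive bias cues."""
    lowered = message.lower()
    return any(cue in lowered for cue in BIAS_CUES)


def chatbot_reply(message: str) -> str:
    """Reply normally, or inform the user when bias is detected."""
    if detect_bias(message):
        return ("That sounds like a generalization about a whole group. "
                "Could you rephrase it in terms of specific individuals?")
    return "Thanks, tell me more."


print(chatbot_reply("All women are bad drivers."))
print(chatbot_reply("My colleague drives carefully."))
```

The sketch only illustrates the interaction pattern (detect, then inform or educate); developing robust detection methods is precisely the research question of the project.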
The project is coordinated by GH Solutions AB.