"False"
Skip to content
printicon
Main menu hidden.
Published: 2020-09-24

Researchers on AI for Good

NEWS: With a project that can advance progress on detecting bias in language data, researchers from Umeå University and Uppsala University will participate in the AI for Good Global Summit.

Text: Mikael Hansson

PhD student Hannah Devinney and Associate Professor Henrik Björklund from the Department of Computing Science will, together with Jenny Björklund, associate professor at Uppsala University, present a project on gender bias in machine learning at the international AI for Good Global Summit's Breakthrough Days, 21–30 September.

Hannah Devinney is also affiliated with the Umeå Centre for Gender Studies.

“We get feedback, tips, and other forms of support on our project ideas from the expert panel at AI for Good and others who participate in the workshops,” says Henrik Björklund.

The project to be presented is a collaboration between Umeå University and Uppsala University. Teams from around the world submitted their project ideas, and after evaluation by some of the world's top experts in different fields, only nine were selected to present their proposals during the conference's Breakthrough Days.

After receiving feedback during workshops, the nine teams will present their work from the main stage of the conference. For the joint Umeå-Uppsala project, Hannah Devinney will give the presentation, supported by Henrik Björklund and Jenny Björklund.

The main-stage presentation is scheduled for Monday 28 September, 17:00–18:30 (Swedish local time), and is open for everyone to follow.

Register to follow the presentation.

The project presented by the group from Umeå and Uppsala is entitled “Topic Modeling for Detecting Bias in Language Data”. It brings together methods from gender studies, literary studies, and computer science to investigate how inequality and stereotypical notions are reproduced in text, how they make their way into machine learning algorithms, and how they can be analyzed to minimize inequality in those algorithms.
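The article does not describe the team's actual pipeline, but as an illustrative sketch, topic modeling along these lines can surface how gendered words cluster with occupations or roles in a corpus. The toy corpus, the library choice (scikit-learn), and the reading of the output below are hypothetical, not taken from the researchers' project:

    # Illustrative sketch only: a tiny topic model fitted with scikit-learn's
    # LDA implementation. The toy corpus and the interpretation are
    # hypothetical and not the researchers' actual method.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    corpus = [
        "the nurse comforted her patient with gentle care",
        "the engineer presented his design to the board",
        "she stayed home to care for the children",
        "he negotiated a higher salary for the leadership role",
    ]

    # Bag-of-words counts; gendered pronouns are deliberately kept, since a
    # standard English stop-word list would strip exactly the words a bias
    # analysis needs to see.
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(corpus)

    # Fit a two-topic model and list the top words per topic. In a bias
    # study, one would inspect whether gendered words cluster with
    # particular occupations or roles across topics.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)

    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_words = [terms[j] for j in topic.argsort()[::-1][:5]]
        print(f"Topic {i}: {', '.join(top_words)}")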

The overall goal is to contribute to the creation of more equitable machine learning models.