Gender and Natural Language Processing. I work on methods for reducing algorithmic harms caused by biased language data.
We interact with Natural Language Processing technology every day, whether in forms we see (auto-correct, translation services, search results) or those we don't (social media algorithms, "suggested reading" for news articles). NLP also fuels other "AI" tools, such as CV-screening and loan-approval systems, which can have major effects on our lives.
"Machine learning" methods replicate patterns in human-produced data, but these patterns are often undesireable (stereotypes and other reflections of human prejudice are present both implicitly and explicitly in the language we "show" computers when training these systems). My research is on understanding these biases (with respect to structural power) in the language data used to train NLP models and developing methods to reduce the potential for these models to do harm. Currating training data to better represent marginalized groups is an important first step towards a justice-focused approach to developing and deploying algorithms. You can read more about the EQUITBL project on its page (coming soon).
Dual affiliation with the Department of Computing Science and the Umeå Centre for Gender Studies.
My pronouns are they/them in English (hen/hen in Swedish).