
Hannah Devinney

Gender and Natural Language Processing. I work on methods for reducing algorithmic harms caused by biased language data. My pronouns are they/them and she/her in English.

Contact

Works at

Affiliation
Doctoral student at Department of Computing Science
Location
MIT-huset, Umeå universitet, D415, 901 87 Umeå
Affiliation
Affiliated as doctoral student at Umeå Centre for Gender Studies (UCGS)
Location
Samhällsvetarhuset, Floor 4

We interact with Natural Language Processing technology every day, whether in forms we see (auto-correct, translation services, search results) or forms we don't (social media algorithms, "suggested reading" for news articles). NLP also fuels other "AI" tools, such as sorting CVs or approving loan applications, which can have major effects on our lives.

"Machine learning" methods replicate patterns in human-produced data, but these patterns are often undesireable (stereotypes and other reflections of human prejudice are present both implicitly and explicitly in the language we "show" computers when training these systems). My research is on understanding these biases (with respect to structural power) in the language data used to train NLP models and developing methods to reduce the potential for these models to do harm. Currating training data to better represent marginalized groups is an important first step towards a justice-focused approach to developing and deploying algorithms.
You can read more about the EQUITBL project on its page (coming soon).

Dual affiliation with the Department of Computing Science and the Umeå Centre for Gender Studies (UCGS).

My pronouns are they/them and she/her in English (hen in Swedish; elle in French).

Published: 24 Sep, 2020