"False"
Skip to content
printicon
Main menu hidden.

Image: Petra Wester

NAUSICA: PrivAcy-AWare traNSparent deCIsions group

Research group

We are interested in privacy-aware transparent AI systems.

AI systems are increasingly used to enhance decision making. One of the main building blocks of AI is data. Machine and statistical learning methods are used to extract knowledge from the underlying data in the form of models and inferences. The typical workflow consists of feeding pre-processed data into machine learning algorithms, which transform the data into models; the models are then embedded into AI systems for decision making.

Machine learning and AI have spread into numerous domains where sensitive personal data are collected from users, including healthcare, personal financial services, social networking, e-commerce, location services, and recommender systems.

Data from these domains are continuously collected and analysed to derive useful decisions and inferences. However, the sensitive nature of these data raises privacy concerns that cannot be successfully addressed through naive anonymization alone.
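As an illustration of a mechanism with a formal guarantee, the following is a minimal sketch of the Laplace mechanism of differential privacy, one of the privacy models listed among our keywords below. The dataset, the counting query, and the privacy budget epsilon are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for the given query."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(seed=0)
ages = np.array([34, 45, 29, 62, 51])  # toy sensitive attribute (assumption)

# Counting query: how many people are over 40? Adding or removing one
# person changes the count by at most 1, so the sensitivity is 1.
true_count = int((ages > 40).sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(true_count, round(noisy_count, 2))
```

The noise hides any single individual's contribution while keeping the count useful in aggregate; smaller values of epsilon give stronger privacy at the cost of more noise.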

Not only data, but also models and aggregates can lead to disclosure, as they carry traces of the data used in their computation. Attacks on data (e.g., reidentification and transparency attacks) and on models (e.g., membership attacks and model inversion) have demonstrated the need for appropriate protection mechanisms. Data privacy develops techniques so that data, models, and decisions come with appropriate privacy guarantees.
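To make the model-level risk concrete, here is a minimal sketch of a confidence-thresholding membership inference attack, assuming scikit-learn and synthetic data; the model, threshold, and data are illustrative choices. Overfitted models tend to be more confident on records they were trained on, and that gap is exactly what the attacker exploits.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# A deliberately overfitted model: unpruned trees memorise training records.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

# The attacker only observes the model's confidence on a query record and
# guesses "member of the training set" when it exceeds a threshold.
conf_members = model.predict_proba(X_in).max(axis=1)
conf_nonmembers = model.predict_proba(X_out).max(axis=1)
threshold = 0.8
tpr = (conf_members >= threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers >= threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")  # a large gap signals leakage
```

Protection mechanisms such as differentially private training aim to bound precisely this kind of gap.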

AI systems need to cope with uncertainty in order to be deployed in the real world, where imprecision, vagueness, and randomness are rarely absent. Approximate reasoning studies models of reasoning under uncertainty, such as probability-based, evidence-theory-based, and fuzzy-set-based models.

AI systems, in line with trustworthy AI guidelines, have fairness, accountability, explainability, and transparency as fundamental requirements. These requirements affect the whole process of designing and building AI systems, from data to decisions. Data privacy, machine and statistical learning, and approximate reasoning models are basic components of this process, but they need to be combined to provide a holistic solution.

Our research group is interested in privacy-aware transparent AI systems. We want to understand the fundamental principles that permit us to build these systems, and develop algorithms for this purpose. We focus on data privacy for data processing, privacy-aware machine learning for building models and data analytics, and decision models for making decisions.

Some keywords of our research follow:

Data privacy and machine learning: privacy-aware machine and statistical learning methods, privacy-aware federated learning, disclosure risk assessment, transparency attacks, privacy models (privacy for reidentification, k-anonymity, differential and integral privacy), masking methods, statistical disclosure control.

Approximate reasoning: fuzzy sets and systems, non-additive measures and integrals, aggregation functions, decision making.
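As a small worked example of the last group of keywords, here is a minimal sketch of the discrete Choquet integral with respect to a non-additive (fuzzy) measure; the criteria and measure values are illustrative assumptions, with two criteria modelled as partially redundant.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores (dict name -> value)
    with respect to a non-additive measure mu (dict frozenset -> weight)."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1])  # increasing scores
    total, previous = 0.0, 0.0
    for i, (criterion, value) in enumerate(ordered):
        tail = frozenset(c for c, _ in ordered[i:])  # criteria scoring >= value
        total += (value - previous) * mu[tail]
        previous = value
    return total

# Toy measure over maths (m), physics (p), literature (l): m and p are
# partially redundant, since mu({m, p}) < mu({m}) + mu({p}).
mu = {
    frozenset(): 0.0,
    frozenset({"m"}): 0.45, frozenset({"p"}): 0.45, frozenset({"l"}): 0.3,
    frozenset({"m", "p"}): 0.5, frozenset({"m", "l"}): 0.9, frozenset({"p", "l"}): 0.9,
    frozenset({"m", "p", "l"}): 1.0,
}
print(choquet_integral({"m": 0.9, "p": 0.8, "l": 0.4}, mu))  # science-oriented: 0.645
print(choquet_integral({"m": 0.7, "p": 0.7, "l": 0.7}, mu))  # balanced profile: 0.7
```

With an additive measure the Choquet integral reduces to a weighted mean; non-additive weights are what allow it to model interactions between criteria, here favouring the balanced profile even though both score profiles have the same average.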

The group collaborates with several national and international research groups (e.g., Tamagawa University, Osaka University, and the University of Tsukuba in Japan, Maynooth University in Ireland, and the Autonomous University of Barcelona), and has links with industry and governmental organisations.

Head of research

Vicenç Torra

Overview

Participating departments and units at Umeå University

Department of Computing Science

Research area

Computing science
Young Umeå researchers selected to participate in Nobel Week

25 young researchers have been selected to meet, to be inspired, and to inspire others.

Vicenç Torra new professor in AI and privacy

Vicenç Torra is a new professor at the Department of Computing Science.

Latest update: 2023-09-07