"False"
Skip to content
printicon
Main menu hidden.
Published: 2025-06-02

New AI research secures privacy

NEWS Can we continue to benefit from smarter technologies without giving up our privacy? Sonakshi Garg, a doctoral student at Umeå University, believes the answer is yes. She presents a series of innovative strategies that enable research and development while keeping people's personal data safe. “Privacy is not an obstacle to progress – it is a foundation for building better and more reliable AI,” says Sonakshi Garg.

Every time you open an app, visit the doctor, or make an online purchase, you're generating data. That data feeds the artificial intelligence (AI) systems that help businesses improve services, doctors detect diseases faster, and governments make informed decisions. But as AI becomes more powerful and reliant on personal information, concerns about how our data is being used—and whether it’s being kept safe—are growing louder. At the heart of this tension is a critical question: can we continue to benefit from smarter technology without giving up our privacy?

Sonakshi Garg, a doctoral student at Umeå University, believes the answer is yes. In her groundbreaking thesis, “Bridging AI and Privacy: Solutions for High-Dimensional Data and Foundation Models,” Garg presents a set of innovative strategies that aim to ensure AI can be both intelligent and respectful of personal data. Garg calls this the “privacy paradox”: do we choose strong AI or strong privacy? "We no longer have to choose one or the other – we can have both," argues Sonakshi Garg.

To resolve this, Garg uses manifold learning to simplify high-dimensional data while preserving its meaningful structure. "Imagine unfolding a crumpled map without losing the roads and landmarks – this is what manifold learning does for complicated datasets," says Garg.
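
The article does not name the algorithm Garg builds on, so the sketch below is only an illustration of the general "unfolding" idea, assuming scikit-learn's classic Isomap method and its swiss-roll toy dataset: 3-D points that actually lie on a curled-up 2-D surface are flattened into two dimensions while neighbouring points stay close together.

# Illustrative manifold learning with Isomap (an assumption, not the thesis' specific method)
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# A "crumpled map": 1,000 points in 3-D that really live on a 2-D surface.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Unfold the roll into 2 dimensions, preserving distances measured along the surface.
embedding = Isomap(n_neighbors=10, n_components=2)
X_low = embedding.fit_transform(X)

print(X.shape, "->", X_low.shape)  # (1000, 3) -> (1000, 2)

Whatever concrete method the thesis applies, the goal is the same: a compact representation that privacy techniques can handle without destroying the structure of the data.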

Training AI Without Harm

She also introduces a hybrid privacy model that combines the strengths of two existing approaches, allowing users to better control how much information is protected while preserving more of the data’s usefulness. The model can also produce synthetic data: "It creates highly realistic 'fake' data that behaves like the real thing but doesn’t reveal any actual person’s identity. This means researchers and developers can safely train AI systems without needing to access sensitive data," Garg argues.
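
Neither the article nor the quote spells out the hybrid mechanism, so the following is a minimal, hypothetical sketch of the general idea behind privacy-preserving synthetic data, assuming NumPy and a toy two-column dataset: summary statistics are fitted to the real records, perturbed with Laplace noise in the style of differential privacy, and "fake" records are then sampled from the perturbed model instead of releasing the originals.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these are sensitive records: (age, systolic blood pressure).
real = np.column_stack([rng.normal(45, 12, 500), rng.normal(125, 15, 500)])

# Fit simple summary statistics to the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Perturb the statistics with Laplace noise; epsilon controls the privacy/utility
# trade-off (smaller epsilon = stronger privacy, more noise). The noise scale here
# is purely illustrative, not a properly calibrated sensitivity.
epsilon = 1.0
scale = 1.0 / epsilon
noisy_mean = mean + rng.laplace(0.0, scale, size=mean.shape)
noisy_cov = cov + rng.laplace(0.0, scale, size=cov.shape)
noisy_cov = (noisy_cov + noisy_cov.T) / 2 + 1e-6 * np.eye(2)  # keep it symmetric and well-behaved

# Sample synthetic records that behave like the real data but correspond to no actual person.
synthetic = rng.multivariate_normal(noisy_mean, noisy_cov, size=500)
print(synthetic[:3].round(1))

The actual hybrid model is of course more elaborate; the sketch only shows why noisy statistics plus sampling let developers work with realistic data without ever touching the originals.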

A multi-layered approach to privacy

Finally, she addresses the privacy risks posed by large AI models like GPT and BERT, which can accidentally “memorize” private information. Her method compresses these models to make them smaller and more efficient while adding layers of privacy protection – allowing them to run securely even on personal devices like smartphones. Most importantly, Garg’s research empowers everyday people.

"It proves that it’s possible to benefit from personalized services and smart systems without giving up control over your personal life. Privacy isn’t an obstacle to progress – it’s a foundation for building better, more trustworthy AI.

A bright future

As technology becomes increasingly integrated into our lives, Sonakshi Garg's research provides a much-needed blueprint for a future where AI and privacy can thrive side by side.

"My research is a bold and timely reminder that smart innovation should never come at the expense of human dignity " and with the right tools, it doesn't have to," says Sonakshi.

Further information

Sonakshi Garg
Doctoral student
Email

Breaking the Privacy Paradox: Breakthrough in AI and Data Protection

This thesis addresses the growing tension between the power of AI and the need to protect personal privacy in an age of high-dimensional data. It identifies the weaknesses of existing privacy methods like k-anonymity and differential privacy when used on high-dimensional datasets and proposes improved solutions using manifold learning, synthetic data generation, and privacy-preserving model compression. The research introduces advanced, scalable frameworks that enhance both data utility and privacy. Overall, the thesis offers a well-rounded approach to building ethical, privacy-aware AI systems that are practical for real-world applications.
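
For readers unfamiliar with the baseline methods the thesis builds on, the Laplace mechanism is the textbook form of differential privacy; the small worked example below (assuming NumPy) answers a count query with noise calibrated to the query's sensitivity, so that adding or removing any one person changes the released answer only slightly.

import numpy as np

rng = np.random.default_rng(42)

ages = np.array([34, 29, 51, 47, 62, 38, 45, 55, 41, 36])
true_count = int((ages > 40).sum())  # "How many people are over 40?"

epsilon = 0.5      # privacy budget: smaller means stronger privacy, noisier answers
sensitivity = 1.0  # one person can change a count by at most 1

noisy_count = true_count + rng.laplace(0.0, sensitivity / epsilon)
print(f"true count: {true_count}, released count: {noisy_count:.1f}")

On high-dimensional data, methods like this and k-anonymity degrade quickly, which is exactly the gap the thesis sets out to close.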