Towards next-generation software for securing AI applications

Securing AI Applications

In the AI landscape, securing machine learning models and their data is crucial.
AI requires vast, often sensitive data, bound by privacy and regulatory constraints. Federated learning addresses these challenges by enabling collaborative model training without sharing raw data or moving it off-site.
By integrating federated learning with additional security techniques, we can significantly enhance the protection of AI applications, ensuring data privacy, regulatory compliance, and robust defense against threats.


Leakage profiling and risk oversight for machine learning models

The LeakPro project is building an open-source platform for evaluating the risk of information leakage in machine learning applications. It assesses leakage in trained models, federated learning, and synthetic data, enabling users to test under realistic adversary settings.

Built in collaboration with AstraZeneca, AI Sweden, RISE, Syndata, Sahlgrenska University Hospital, and Region Halland, with Johan Östman as principal investigator.

Learn more


Comprehensive evaluation
Analyzes leakage risks in trained models, federated learning, and synthetic data.
Open source
Developed for the AI ecosystem, providing accessibility to state-of-the-art attacks.
Scalable and modular
Supports integration of new attacks and adapts to evolving threats.
Practical integration
Focuses on organizational integration for real-world application.

Project Portfolio

A selection of our current public cybersecurity projects
An advanced intrusion detection system (IDS) for IoT built on federated learning, improving both security and privacy through decentralized data analysis that never exposes raw data.
Learn more
AI Honeypots
A new approach to AI security by integrating honeypots into federated learning networks to identify unknown threats and use the collected data to create resilient AI solutions.
Learn more
Intelligent security solutions for connected vehicles, focusing on on-vehicle intrusion detection to evaluate risks and identify realistic attack vectors. With Scania CV as the principal coordinator.
Learn more
LeakPro aims to create an open-source platform to assess information leakage risks in machine learning models, federated learning, and synthetic data, testing under realistic adversary settings.
Learn more
Secure Enclaves (TEE)
A solution for mitigating the challenge of protecting and ensuring trusted execution of machine learning on local clients using secure enclaves.
Learn more


Our AI security projects bring together a network of trusted partners and leading experts in the fields of artificial intelligence, machine learning, and cybersecurity. Through strategic collaborations with renowned academic institutions, innovative tech companies, and experienced industry professionals, we leverage cutting-edge research and best practices to develop robust, secure AI solutions. Our partners share our commitment to advancing AI security, ensuring data privacy, and protecting against evolving cyber threats.

RISE leads a national cybersecurity collaboration to drive research and innovation in the industry and public sector, establishing a national node in cybersecurity. In partnership with MSB (The Swedish Civil Contingencies Agency) and NCC-SE (The Swedish National Coordination Centre for cybersecurity research and innovation), RISE is developing Sweden's cybersecurity competence community for the ECCC EU project.
Learn more
The research and educational activities at Uppsala University's Department of Information Technology cover the broad spectrum of security topics involved in safeguarding digital infrastructure through innovative cybersecurity and computer security measures. Their work focuses on preventing malicious disruptions and ensuring the continuity of essential services in an increasingly connected world.
Learn more
Swedish Security & Defence Industry Association (SOFF) is an industry association for Swedish security and defense companies. It represents over 200 companies in civil security, cyber technology, and defense. SOFF actively shapes policies and regulations, and collaborates with government entities, strategic partners, and international organizations like the AeroSpace and Defence Association in Europe (ASD) and NATO Industrial Advisory Group (NIAG).
Learn more

Privacy-Enhancing Technologies

Federated learning is a foundational technology that improves input privacy in distributed data scenarios. It can be complemented by integrating additional privacy-enhancing technologies.
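At its core, federated learning aggregates locally trained model updates instead of raw data. The sketch below shows the federated averaging (FedAvg) step with hypothetical helper names and toy weight vectors; a real deployment would use a framework such as FEDn rather than this minimal loop.

```python
def fedavg(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size.
    The server only ever sees weight vectors, never the clients' raw data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * size / total
    return global_weights

# Three clients report locally trained weights; the larger client counts more.
updates = [[0.2, 0.4], [0.3, 0.5], [0.1, 0.6]]
sizes = [100, 200, 100]
print(fedavg(updates, sizes))  # weighted average of the three updates
```

Weighting by dataset size means a client with more local examples moves the global model proportionally more, which is the standard FedAvg design choice.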

Differential Privacy
Differential privacy protects individual data points by adding calibrated noise, limiting what a model or its updates can reveal about any single record. When combined with federated learning, it masks sensitive information in model updates. For example, it can protect patient data in healthcare models, enabling secure and private data analysis.
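The masking of model updates can be sketched as clipping each update to a bounded L2 norm and then adding Gaussian noise, in the style of DP-SGD. This is an illustrative sketch with hypothetical names, not a production mechanism; real systems also calibrate sigma against a tracked privacy budget (epsilon, delta).

```python
import math
import random

def dp_sanitize(update, clip_norm, sigma, rng=random):
    """Clip an update vector to L2 norm <= clip_norm, then add Gaussian
    noise scaled to the clipping bound (the update's sensitivity)."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]

# A client's raw update is scaled down to the clipping bound, then noised
# before it is sent to the aggregation server.
noisy = dp_sanitize([3.0, 4.0], clip_norm=1.0, sigma=0.1)
```

Clipping first is essential: without a hard bound on any one client's contribution, no finite amount of noise yields a differential-privacy guarantee.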
Secure Multi-Party Computation (SMPC)
SMPC enables secure collaborative computations without data exposure. It uses cryptographic protocols to perform joint computations privately. For instance, it allows financial institutions to train fraud detection models on aggregated data without sharing sensitive information, thus enhancing data security.
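The simplest SMPC building block is additive secret sharing: each party splits its value into random shares that reveal nothing individually and only become meaningful when combined. The toy sketch below uses hypothetical names; real protocols add authentication and protection against malicious parties.

```python
import random

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(secret, n_parties, rng=random):
    """Split an integer into n additive shares summing to the secret mod P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two institutions jointly compute the sum of their private values.
a_shares = share(1200, 2)
b_shares = share(3400, 2)
# Each party locally adds the shares it holds; only these sums are exchanged.
partial = [(a_shares[i] + b_shares[i]) % P for i in range(2)]
assert reconstruct(partial) == 4600  # neither input was ever revealed
```

Because each individual share is uniformly random, a party holding one share of the other's value learns nothing about it; only the final reconstructed sum is disclosed.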
Homomorphic Encryption
Homomorphic encryption allows computations on encrypted data without decryption, maintaining confidentiality throughout processing. It is ideal for cloud-based AI applications, enabling secure data analysis without exposing raw data, thereby preserving data privacy.
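Additively homomorphic schemes such as Paillier make this concrete: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate values it cannot read. The sketch below is a toy Paillier implementation with deliberately tiny, insecure parameters, for illustration only; real deployments use vetted libraries and ~2048-bit moduli.

```python
from math import gcd

p, q = 293, 433          # demo primes; far too small for real use
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m, r):
    """Encrypt m with randomizer r (r must be coprime to n)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(20, 17), enc(22, 31)
assert dec((c1 * c2) % n2) == 42   # the sum was computed on ciphertexts only
```

The server performing the multiplication never holds the private key, so it learns nothing about the operands; only the key holder can decrypt the aggregate.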
Trusted Execution Environments (TEE)
Trusted Execution Environments (TEEs) use secure hardware enclaves to protect data and code during execution. They enable secure AI model deployment on edge devices, such as autonomous vehicles, safeguarding data integrity and confidentiality during computation.


Explore our collection of expert-authored articles on AI security, covering the latest trends, techniques, and best practices. Discover in-depth analyses of AI security challenges and cutting-edge solutions to safeguard AI models and ensure data privacy. Whether you're a researcher, developer, or business leader, our articles provide valuable insights to help you stay ahead in the rapidly evolving landscape of AI security.

The Impact of Backdoor Attacks in Federated Learning
Uncover the impact of backdoor attacks on federated learning AI models and the risks they pose to AI cybersecurity. Our in-depth blog post explores experiments with the MNIST dataset, revealing the challenge of detecting hidden triggers inserted by malicious clients. Discover potential mitigation strategies from recent AI security research and the ongoing challenges in protecting sensitive data and ensuring the robustness of AI applications in decentralized environments.

Learn more
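The trigger-insertion step discussed in the post can be sketched as a malicious client stamping a small patch into its local images and relabeling them to a target class, so the global model learns to associate the trigger with that class. Names and parameters below are hypothetical illustrations, not the post's actual pipeline.

```python
TARGET_CLASS = 7
TRIGGER = 255  # white pixel intensity in an 8-bit grayscale image

def poison(image, label, patch_size=3):
    """Return a poisoned copy of a 28x28 grayscale image (list of rows):
    a bright trigger patch is stamped in the top-left corner and the
    label is replaced with the attacker's target class."""
    poisoned = [row[:] for row in image]
    for r in range(patch_size):
        for c in range(patch_size):
            poisoned[r][c] = TRIGGER
    return poisoned, TARGET_CLASS

# A malicious client poisons part of its local data before training.
clean = [[0] * 28 for _ in range(28)]
img, lbl = poison(clean, 3)
```

The difficulty highlighted in the post is that such triggers barely affect accuracy on clean data, so the backdoor stays hidden from standard validation.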
Input Privacy: Adversarial attacks and their impact on federated model training
Explore the effects of label-flipping attacks, a type of adversarial attack, on federated machine learning models. Our experiments reveal that these AI security threats have a limited impact on the global model's accuracy compared to centralized training, as the federated averaging process helps mitigate the influence of malicious clients. Discover how federated learning can enhance AI privacy and security in decentralized environments.

Learn more
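The dilution effect described above can be illustrated with a toy one-parameter "model": one client flips its labels, but averaging with many honest updates keeps the global update close to the honest value. All names here are hypothetical, and the scalar update stands in for a real gradient.

```python
def flip_labels(labels, src=0, dst=1):
    """Malicious client: relabel every `src` example as `dst`."""
    return [dst if y == src else y for y in labels]

def local_update(labels):
    """Toy update: the fraction of positive labels, standing in for
    the model change a client would report after local training."""
    return sum(labels) / len(labels)

honest = [[0, 1, 0, 1]] * 9                 # nine honest clients
malicious = flip_labels([0, 1, 0, 1])       # one attacker: all labels become 1
updates = [local_update(y) for y in honest] + [local_update(malicious)]
global_update = sum(updates) / len(updates)
print(global_update)  # 0.55, close to the honest value of 0.5
```

With one attacker among ten clients, the aggregate shifts by only 0.05, mirroring the post's finding that federated averaging limits the influence of a small number of malicious participants.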
Email Spam Detection with FEDn and Hugging Face
Discover how our project leverages the Hugging Face 'Transformers' library in FEDn to fine-tune a BERT-tiny model for accurate email spam detection. By utilizing the Enron email dataset and federated learning techniques, we ensure data privacy and security by splitting the dataset between two clients. Our AI model achieves high accuracy (~99%) in just a few rounds of federated training, showcasing the power of secure, decentralized AI applications.

Learn more
Enhancing IoT security with federated learning
Discover how we're revolutionizing IoT cybersecurity by integrating federated learning techniques to create an innovative intrusion detection system (IDS). Our approach enhances IoT privacy and threat detection by leveraging decentralized data analysis without compromising data security. This groundbreaking solution promises a secure, privacy-focused IoT ecosystem. Read our post for more details and follow us for updates on this cutting-edge AI security project.

Learn more
Output Privacy and Federated Machine Learning: Enhancing AI Security and Data Protection
With the rapid advancement of machine learning, ensuring data privacy and AI security has become paramount. Federated machine learning emerges as a groundbreaking approach to address these concerns by decentralizing data and providing innovative solutions to traditional AI challenges. Discover how federated learning works, its benefits for protecting sensitive data, potential risks, and the cutting-edge measures employed to fortify AI privacy and security. Dive into our comprehensive discussion to stay ahead of the curve in the evolving landscape of secure and privacy-preserving AI.

Learn more


Our AI security projects are supported by a network of trusted partners and sponsors. Their commitment and funding enable us to leverage cutting-edge research and best practices to develop secure AI solutions, advancing AI security and ensuring data privacy.