Secure Enclaves (TEE)

Federated machine learning (FedML) promises to be a cornerstone technology in Decentralized AI. A current open challenge in FedML is secure and trusted execution of machine learning code at local clients. This project aims to develop and critically evaluate a pilot solution to this problem based on Trusted Execution Environments.

The challenge

The long-term challenge is to enable secure and privacy-preserving machine learning in the rapidly expanding distributed cloud. Such Decentralized AI is a major current technology trend, driven both by the practical fact that data is increasingly generated at the edge, and by an increased focus on data privacy in AI. Federated Learning (FedML) promises to be a cornerstone technology to address these challenges.

FedML is used when multiple organizations (cross-silo) or edge devices (cross-device) want to collectively train machine learning models, but datasets cannot be centralized, either for privacy reasons (private, sensitive, or regulated data) or for practical reasons (big and fast data at the edge). Instead, model updates are computed locally at each client and combined into a global aggregated model (Fig 1).
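To make the aggregation step concrete, the sketch below shows one round of federated averaging (FedAvg), with a single gradient step on a linear model standing in for the clients' real local training; the function and variable names are illustrative and not taken from any particular FedML framework.

```python
# Minimal sketch of one federated averaging (FedAvg) round. A single
# mean-squared-error gradient step on a linear model stands in for the
# clients' real local training; all names are illustrative.
import numpy as np

def local_update(global_weights, X, y, lr=0.01, epochs=1):
    """Compute a local model update on one client's private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def aggregate(client_weights, client_sizes):
    """Combine local updates into a global model, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One round: data never leaves the clients, only the model updates do.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
updates = [local_update(global_w, X, y) for X, y in clients]
global_w = aggregate(updates, [len(y) for _, y in clients])
```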

FedML provides strong input privacy by enabling training of machine learning models without moving or disclosing data. Much research is currently invested in the security of FedML, for example to guard against attacks on the training data itself with the purpose of biasing the model, so-called data-poisoning attacks, or reverse-engineering of training data using predictions from the federated ML model, so-called inference attacks. These are general problems in adversarial ML and can be mitigated in a federated setting using the same general approaches.

A problem that is specific to FedML, due to its intrinsically distributed nature, is the threat associated with accidental or malicious modification of the client itself (identity), or of the code run by clients when computing model updates (remote computation). If a dishonest member of the federation, an external attacker, or a misconfigured device is able to modify the training code, the consequences can range from mild, such as a reduced convergence rate during training, to backdoor attacks that render the model useless. Solutions that help ensure the veracity of client execution have great potential to drive widespread adoption of federated learning technologies for decentralized AI.

The proposed solution

We believe that rapidly improving technology for Trusted Execution Environments (TEEs), such as Intel Software Guard Extensions (SGX), ARM TrustZone and AMD Secure Encrypted Virtualization (SEV), holds great potential to help overcome these challenges. Such confidential computing is a clear technology trend, heavily pushed by hardware manufacturers and cloud providers, and is rapidly becoming widely accessible and easier to use. In short, TEEs allow user-level code to define private regions of memory, called enclaves, that cannot be read or accessed by any process outside them. Code executing in the enclave is protected from inspection by any process running outside it, including the operating system and hypervisor. This makes it possible to divide a program into a trusted (enclaved) part and an untrusted part. Based on this threat model, we aim to engineer trusted clients for FedML.
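As a rough illustration of this trusted/untrusted split, the sketch below separates a client into an enclaved part that computes and signs the model update, and an untrusted host part that only handles communication. The enclave boundary, the sealed key and the function names are assumptions for illustration; in a real client the trusted part would execute inside a TEE such as SGX, and the signature would be backed by hardware remote attestation rather than a software HMAC.

```python
# Conceptual sketch of the trusted/untrusted split described above. The
# "enclave" here is ordinary Python; in a real client this function would run
# inside a TEE (e.g. SGX), and the key would be sealed to the enclave and tied
# to hardware remote attestation. All names are illustrative assumptions.
import hashlib
import hmac
import numpy as np

ENCLAVE_KEY = b"hypothetical-sealed-key"  # assumption: provisioned only to the enclave

def trusted_local_update(global_weights, X, y, lr=0.01):
    """Trusted (enclaved) part: compute the local update and sign it so the
    aggregator can check it was produced by unmodified training code."""
    w = global_weights - lr * (X.T @ (X @ global_weights - y) / len(y))
    tag = hmac.new(ENCLAVE_KEY, w.tobytes(), hashlib.sha256).hexdigest()
    return w, tag

def untrusted_host(global_weights, X, y):
    """Untrusted part: handles storage and networking, never sees the key."""
    update, tag = trusted_local_update(global_weights, X, y)
    return {"update": update.tolist(), "tag": tag}  # message sent to the aggregator

# The aggregator verifies the tag before folding the update into the global model.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
message = untrusted_host(np.zeros(5), X, y)
```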

We propose to leverage TEEs to ensure the veracity of the model updates computed by clients. The technical challenge is to do this without adding unacceptable computational overhead, since the efficiency of model updates is also of the essence in federated learning.
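A minimal sketch of how such overhead could be quantified is given below, assuming two interchangeable code paths for the local update, one native and one running inside an enclave (for example, the same code under a library OS such as Gramine); the enclave-path function name is hypothetical.

```python
# Sketch of how per-round overhead could be measured, assuming a native local
# update and a (hypothetical) enclave-backed variant with the same signature.
import time

def time_update(update_fn, *args, repeats=10):
    """Return mean wall-clock seconds for one call to a local-update function."""
    start = time.perf_counter()
    for _ in range(repeats):
        update_fn(*args)
    return (time.perf_counter() - start) / repeats

# Usage once both code paths exist (names are hypothetical):
#   native_s  = time_update(native_local_update, global_w, X, y)
#   enclave_s = time_update(enclave_local_update, global_w, X, y)
#   overhead  = enclave_s / native_s   # relative slowdown per training round
```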

The main objective of this project is to develop a pilot implementation of TEE-empowered federated learning and systematically benchmark it across two major machine learning use-case classes, in order to evaluate and reduce the risks associated with implementing it in a production solution for decentralized AI.

Vinnova

This project is supported by Vinnova, Sweden's Innovation Agency. We will continuously post project updates in our social media channels and blog. If you are interested in discussing trusted execution environments and federated learning, please get in touch.

Learn more:

https://www.vinnova.se/en/p/trusted-execution-environments-for-federated-learning/