Federated learning (FL) is used when multiple organizations (cross-silo) or edge devices (cross-device) want to collectively train machine learning models but cannot centralize their datasets, either for privacy reasons (private, sensitive, or regulated data) or for practical reasons (large, fast-moving data at the edge). Instead, model updates are computed locally and combined into a global aggregated model (Fig 1).
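The combination step can be illustrated with federated averaging, a common aggregation scheme in which each client's update is weighted by the size of its local dataset. This is a minimal, self-contained sketch with made-up function names and data, not FEDn's actual API:

```python
# Hypothetical sketch of the aggregation step in federated learning
# (federated averaging). Names and numbers are illustrative only.

def federated_average(client_updates):
    """Combine local model updates into a global model.

    client_updates: list of (weights, n_samples) tuples, where
    weights is a list of floats representing a client's locally
    trained model parameters.
    """
    total = sum(n for _, n in client_updates)
    n_params = len(client_updates[0][0])
    global_weights = [0.0] * n_params
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            # Weight each client's contribution by its share of the data.
            global_weights[i] += w * n / total
    return global_weights

# Three clients with different amounts of local data:
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 100), ([5.0, 6.0], 200)]
print(federated_average(updates))  # -> [3.5, 4.5]
```

Note that only the model parameters and sample counts travel over the network; the raw training data never leaves the client.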
FL provides strong input privacy by enabling machine learning models to be trained without moving or disclosing the data. Much current research addresses security in FL, for example guarding against attacks that manipulate the training data in order to bias the model (data-poisoning attacks), or against reverse-engineering of training data from the federated model's predictions (inference attacks). These are general problems in adversarial ML and can be mitigated with general approaches in the federated setting as well.

Read our paper for a deep dive into scalable federated learning with our open-source framework FEDn.
Andreas Hellander explains the advantages of federated machine learning and what problems it solves.