Scaleout EdgeAI: Federated Learning Across the Edge-Cloud
Scaleout EdgeAI is a Kubernetes-based platform designed to overcome the challenges of developing machine learning at the edge. It enables organizations to leverage the full potential of their distributed data and infrastructure through federated learning, model personalization, and comprehensive edge model management, while maintaining complete data control and privacy. Organizations can streamline deployment across diverse edge environments, from cloud to device edge, with centralized orchestration of the entire ML lifecycle.
The AI Challenge in Edge-Cloud

Traditional machine learning approaches require centralizing data for training, creating significant challenges for edge environments:

  • Network Constraints: Transferring raw data from edge devices is bandwidth-intensive and costly.

  • Real-time Requirements: Edge applications demand low latency and offline capabilities.

  • Data Privacy & Governance: Sensitive data can't leave its source location.

  • Hardware Diversity: Edge environments involve heterogeneous devices with varying capabilities.

Our guide to Edge AI

Bridging the Gap Between Cloud and Edge

Scaleout EdgeAI addresses these challenges through an AI platform that brings training to the data instead of the other way around.

By bridging the gap between cloud and edge, Scaleout EdgeAI enables you to build and deploy machine learning capabilities that were previously impractical or impossible to implement.

Learn more about the FEDn integration
  • Federated Learning Core: Train models across distributed nodes without sharing raw data

  • Edge Model Management: Deploy, monitor, and update models across your edge infrastructure

  • Fleet Learning Capabilities: Implement selective and hierarchical learning strategies

  • Kubernetes Integration: Leverage familiar tools and workflows for deployment and scaling
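At the heart of the federated learning core is the idea that only model parameters travel between nodes, never raw data. As a minimal sketch of the principle (standard FedAvg-style weighted averaging in plain NumPy, not the FEDn API):

```python
import numpy as np

def fedavg(updates):
    """Aggregate client model updates by data-size-weighted averaging.

    updates: list of (weights, n_samples) pairs, where `weights` is a
    list of NumPy arrays (one per model layer). Note that raw training
    data never appears here -- only parameters and sample counts.
    """
    total = sum(n for _, n in updates)
    n_layers = len(updates[0][0])
    return [
        sum(w[i] * (n / total) for w, n in updates)
        for i in range(n_layers)
    ]

# Two simulated edge clients with different amounts of local data
client_a = ([np.array([1.0, 1.0])], 100)   # 100 local samples
client_b = ([np.array([3.0, 3.0])], 300)   # 300 local samples

global_model = fedavg([client_a, client_b])
print(global_model[0])  # weighted toward client_b: [2.5 2.5]
```

Clients with more data pull the global model further toward their local solution, which is why sample counts accompany each update.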

Key Capabilities

Scaleout EdgeAI provides essential capabilities for managing the entire machine learning lifecycle across distributed edge environments. Each capability is designed to address the unique challenges of edge AI deployment while maintaining enterprise-grade reliability, security, and performance.

Federated Learning

  1. Train across distributed infrastructure without sharing raw data
  2. Monitor experiments in real-time with comprehensive analytics dashboards
  3. Track performance metrics across heterogeneous hardware
  4. Support both synchronous and asynchronous training modes
  5. Optimize resource allocation across your edge fleet
  6. Compare performance across different client groups and training rounds
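Item 4 above distinguishes two coordination styles: synchronous rounds wait for every selected client before aggregating, while asynchronous training folds in each update as it arrives. A toy illustration of the difference, using scalar "models" (the staleness-discounted averaging shown for the async case is one common scheme, assumed here for illustration; FEDn's actual aggregators may differ):

```python
def sync_round(updates):
    """Synchronous: wait for all clients, then average their updates."""
    return sum(updates) / len(updates)

def async_apply(global_m, update, staleness, lr=0.5):
    """Asynchronous: fold in each update on arrival, discounting
    updates computed against an older (stale) global model."""
    alpha = lr / (1 + staleness)
    return (1 - alpha) * global_m + alpha * update

g = sync_round([1.0, 2.0, 3.0])       # all three arrive -> g = 2.0
g = async_apply(g, 4.0, staleness=0)  # fresh update, alpha = 0.5 -> 3.0
g = async_apply(g, 0.0, staleness=4)  # stale update, alpha = 0.1 -> 2.7
print(g)
```

Synchronous rounds give cleaner convergence analysis; asynchronous modes keep slow or intermittently connected edge devices from stalling the whole fleet.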

Edge Model Management

  1. Version and distribute models with controlled rollout to edge nodes
  2. Monitor inference performance in real-time across your infrastructure
  3. Personalize models through on-device fine-tuning for local conditions
  4. Secure model endpoints with token-based authentication
  5. Centrally manage model deployment and inference requests
  6. Schedule updates during optimal time windows
  7. Optimize for specific hardware including NVIDIA Jetson AGX and Raspberry Pi 5
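Controlled rollout (item 1) is often implemented by deterministically bucketing nodes, so that a "20% rollout" always selects the same devices and widening it to 50% only adds nodes. A sketch of one common hash-based approach (node names, version strings, and percentages are illustrative, not the platform's API):

```python
import hashlib

def in_rollout(node_id: str, model_version: str, percent: int) -> bool:
    """Deterministically assign a node to a rollout bucket 0-99.

    Hashing node_id together with the model version reshuffles the
    buckets per release, so the same nodes are not always upgraded first.
    """
    digest = hashlib.sha256(f"{node_id}:{model_version}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < percent

fleet = [f"edge-node-{i:03d}" for i in range(200)]
canary = [n for n in fleet if in_rollout(n, "v2.1.0", 20)]
print(f"{len(canary)} of {len(fleet)} nodes receive v2.1.0 first")
```

Because membership depends only on the node ID and version, every component of the system agrees on which nodes are in the canary group without coordination.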

Fleet Learning

  1. Target specific nodes for specialized training tasks
  2. Organize devices into logical hierarchies for efficient training
  3. Manage complex workflows from a single interface
  4. Prioritize contributions from high-value data sources
  5. Apply different strategies to different client groups
  6. Coordinate training across organizational boundaries
  7. Implement governance controls for multi-party collaboration
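Prioritizing contributions from high-value data sources (item 4) amounts to weighted client selection each round. A minimal sketch of weighted sampling without replacement (the client names and scoring weights are invented for illustration; how a deployment scores "value" is a policy decision):

```python
import random

def select_clients(clients, weights, k, seed=None):
    """Sample k distinct clients, favoring higher-weight ones."""
    rng = random.Random(seed)
    chosen = []
    pool = list(zip(clients, weights))
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (client, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(client)
                pool.pop(i)   # without replacement: pick each client once
                break
    return chosen

clients = ["hospital-a", "hospital-b", "clinic-c", "clinic-d"]
weights = [5.0, 4.0, 1.0, 0.5]   # e.g. data volume x label quality
print(select_clients(clients, weights, k=2, seed=42))
```

Applying different weight profiles to different client groups is one way the "different strategies for different groups" capability above can be realized.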

Benefits

Scaleout EdgeAI is built for ML engineers and data scientists to enhance how you develop and deploy edge ML models. The platform enables organizations to train machine learning models on sensitive or distributed data that can't be centralized, accelerating development cycles and improving model quality by leveraging diverse, real-world edge data. It abstracts away infrastructure complexity so teams can focus on ML, providing federated learning, robust MLOps capabilities, secure project collaboration, integration with popular open-source tools, detailed model management, multi-tenant workspaces, cloud-agnostic Kubernetes deployment, and a flexible plugin architecture.
  • Access Previously Unavailable Data: Train on data that can't be centralized due to privacy, size, or regulatory constraints

  • Shorten Development Cycles: Streamline the entire ML workflow from training to deployment and monitoring

  • Improve Model Quality: Create more robust, accurate models by incorporating diverse, real-world data from edge environments

  • Focus on ML, Not Infrastructure: The platform handles the complexities of distributed training and deployment

Federated Learning Integration
The foundation of Scaleout EdgeAI is FEDn, our production-ready federated learning framework that forms the core engine for distributed machine learning capabilities.

Built on a Robust Foundation

  1. Scaleout EdgeAI leverages FEDn's solid federated learning capabilities
  2. The platform extends FEDn with enterprise-grade management and monitoring
  3. All core FEDn capabilities are available through the Scaleout EdgeAI interface
  4. Updates to the open-source framework are continuously integrated into the platform


Technical Overview

FEDn is a modular and distributed framework designed to enable federated learning across heterogeneous computing environments, from cloud infrastructure to edge devices. Its flexible architecture supports scalable model training and aggregation while maintaining control and coordination through a hierarchical system of controllers, combiners, and distributed clients.

Architecture

Tier 1: Control Layer (cloud, near edge)

The global controller serves as the central coordination point, managing service discovery across the system. Model storage is handled through S3-compatible object storage, while a database maintains essential operational data and state information.

Tier 2: Model Layer (cloud, near edge, far edge)

Combiners play a crucial role in model aggregation, with the flexibility to deploy one or multiple instances for optimal load balancing and geographical proximity to client groups. In scenarios involving multiple combiners, a Reducer aggregates the combiner-level models into a single global model.

Tier 3: Client Layer (near edge, far edge, device edge)

Geographically distributed clients train on local data that never leaves the device, sending only model updates to their assigned combiner.
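The three tiers above can be read as two aggregation stages: each combiner averages the updates of its own (typically nearby) clients, and the Reducer merges combiner-level models weighted by how many samples each combiner saw. A plain-Python sketch of the arithmetic with scalar "models" (not FEDn internals):

```python
def weighted_avg(models, counts):
    """Average scalar 'models' weighted by sample counts."""
    total = sum(counts)
    return sum(m * c / total for m, c in zip(models, counts)), total

# Tier 3: client updates grouped by geography, as (model, n_samples)
combiner_eu_clients = [(1.0, 100), (2.0, 100)]
combiner_us_clients = [(4.0, 600)]

# Tier 2: each combiner aggregates only its own clients
eu_model, eu_n = weighted_avg(*zip(*combiner_eu_clients))
us_model, us_n = weighted_avg(*zip(*combiner_us_clients))

# Tier 2 (Reducer): combiner models merge into the global model
global_model, _ = weighted_avg([eu_model, us_model], [eu_n, us_n])
print(global_model)  # same result as averaging all clients directly: 3.375
```

Because the sample counts are carried through both stages, the hierarchy produces exactly the same global model as flat averaging, while each client only ever talks to its nearest combiner.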

Framework Support

FEDn is designed to be ML-framework agnostic, seamlessly supporting major frameworks such as Keras, PyTorch, TensorFlow, Hugging Face, and scikit-learn. Ready-to-use examples are provided, facilitating immediate application across different ML frameworks.

Deployment Options

FEDn Studio provides enterprise-grade deployment flexibility through standard Kubernetes, enabling seamless integration from cloud platforms to on-premise environments.

Software as a Service

Get started quickly with our fully managed cloud solution. Zero server-side DevOps means you can focus entirely on your ML objectives. Perfect for:

  • Rapid prototyping and proof-of-concepts
  • Early-stage projects and pilots
  • Teams focusing on ML development

Self-hosted

Deploy FEDn in your own infrastructure for maximum control. Ideal for organizations requiring:

  • Complete control over infrastructure
  • Enhanced security and data privacy
  • Custom networking and integration requirements

Flexible Client Implementation

FEDn supports multiple client implementation languages: Python (ideal for data scientists and researchers), C++ (for high-performance and resource-constrained environments), and Kotlin (optimized for Android devices). This flexibility enables organizations to implement federated learning across heterogeneous environments, leverage existing expertise, and target specific hardware constraints while maintaining consistent security standards and feature parity across all implementations.

Get started

Begin with the SDK using our quick start guide, or log in to the platform.

Quick Start

Start with the comprehensive Getting Started tutorial.
View Tutorial

Get Access

Sign up or log in to the platform
Get Access

Talk to our engineers

Share your edge AI challenges with our engineering team to explore federated learning solutions. A technical team member will contact you to discuss your specific requirements and implementation options.

Contact Us