
FedML 0.8.0

@fedml-alex fedml-alex released this 23 Mar 12:37
· 7172 commits to master since this release

FedML Open and Collaborative AI Platform

Train, deploy, monitor, and improve machine learning models anywhere (edge or cloud), powered by collaboration on combined data, models, and computing resources.

What's Changed

Feature Overview

  1. Support for MLOps (https://open.fedml.ai)
  2. Multiple scenarios:
  • FedML Octopus: Cross-silo Federated Learning
  • FedML Beehive: Cross-device Federated Learning
  • FedML Parrot: FL Simulation with a single process or distributed computing, enabling smooth migration from research to production
  • FedML Spider: Federated Learning on Web Browsers
  3. Support for any machine learning framework: PyTorch, TensorFlow, JAX (with Haiku), and MXNet
  4. Diverse communication backends (MPI, gRPC, PyTorch RPC, MQTT + S3)
  5. Differential privacy: central DP (CDP) and local DP (LDP)
  6. Attacker (API: fedml.core.FedMLAttacker; README: python/fedml/core/security/readme.md)
  7. Defender (API: fedml.core.FedMLDefender; README: python/fedml/core/security/readme.md)
  8. Secure aggregation (multi-party computation): cross_silo/light_sec_agg_example
  9. Example applications for real-world settings in the FedML/python/app folder
  10. Federated model inference on MLOps (https://open.fedml.ai)
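The distinction between central DP (noise added once by a trusted aggregator) and local DP (each client perturbs its own update before sending it) can be sketched in plain Python. This is an illustrative sketch only; the function names below are not FedML APIs, and FedML's actual DP mechanisms are configured through its own interfaces.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def local_dp(client_values, epsilon, sensitivity, rng):
    """LDP: each client perturbs its own value before it leaves the device."""
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale, rng) for v in client_values]

def central_dp(client_values, epsilon, sensitivity, rng):
    """CDP: a trusted aggregator sums the raw values, then adds noise once."""
    scale = sensitivity / epsilon
    return sum(client_values) + laplace_noise(scale, rng)

rng = random.Random(0)
values = [0.5] * 100  # one scalar update per client
ldp_sum = sum(local_dp(values, epsilon=1.0, sensitivity=1.0, rng=rng))
cdp_sum = central_dp(values, epsilon=1.0, sensitivity=1.0, rng=rng)
# CDP adds a single noise draw, so its aggregate is distorted far less
# than the LDP aggregate, which accumulates noise from every client.
```

The trade-off shown here is the usual one: LDP needs no trusted server but pays in accuracy, while CDP preserves accuracy at the cost of trusting the aggregator.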
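The secure-aggregation example (cross_silo/light_sec_agg_example) builds on masking so that the server only ever sees obscured client updates. The core cancellation idea behind mask-based aggregation can be sketched in pure Python; all names here are illustrative, not FedML APIs, and the real LightSecAgg protocol additionally encodes and shares masks to tolerate client dropouts.

```python
import random

def make_pairwise_masks(num_clients, dim, seed=0):
    """Draw one shared random mask per client pair (i, j), i < j.
    Client i will add the mask, client j will subtract it."""
    rng = random.Random(seed)
    return {
        (i, j): [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        for i in range(num_clients)
        for j in range(i + 1, num_clients)
    }

def mask_update(client_id, update, num_clients, masks):
    """Apply every pairwise mask involving this client to its update."""
    out = list(update)
    for j in range(num_clients):
        if j == client_id:
            continue
        a, b = min(client_id, j), max(client_id, j)
        sign = 1.0 if client_id == a else -1.0
        out = [x + sign * m for x, m in zip(out, masks[(a, b)])]
    return out

num_clients, dim = 4, 3
updates = [[float(c + d) for d in range(dim)] for c in range(num_clients)]
masks = make_pairwise_masks(num_clients, dim)
masked = [mask_update(c, updates[c], num_clients, masks)
          for c in range(num_clients)]
# The server sees only masked updates, yet the masks cancel pairwise,
# so the aggregate equals the true sum of the raw updates.
aggregate = [sum(col) for col in zip(*masked)]
true_sum = [sum(col) for col in zip(*updates)]
```

Because each mask appears once with a plus sign and once with a minus sign, it vanishes from the sum while hiding every individual update from the aggregator.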

For more detailed instructions, please refer to https://doc.fedml.ai/

New Features

  • [Serving] Make the full serving pipeline work: device login, model creation, model packaging, model pushing, model deployment, and model monitoring.
  • [Serving] Enable all three entry points for creating model cards: the trained-model list, the model-card creation web page, and the related fedml model CLI.
  • [OpenSource] Formally release all previously developed components in v0.8.0: training, security, aggregator, communication backends, MQTT optimization, metrics tracing, event tracing, and real-time logs.

Bug Fixes

  • [CoreEngine] Fix a CLI engine error when running simulations.
  • [Serving] Adjust the training code to conform to the ONNX sequence rule.
  • [Serving] Fix a URL error in the model serving platform.

Enhancements

  • [CoreEngine/MLOps][log] Format log timestamps using NTP-synchronized time.
  • [CoreEngine/MLOps] Show a progress bar and the size of the transferred data in the log when the client downloads or uploads the model.
  • [CoreEngine] Optimize client behavior when the network is weak or disconnected.