Distributed Training API and Benchmark on Paddle Fluid


FleetX


Fully utilize your GPU clusters with FleetX for model pre-training.

What is it?

  • FleetX is an out-of-the-box toolkit for training pre-trained models in the cloud. It can be viewed as an extension package of Paddle's high-level distributed training API, paddle.distributed.fleet.
  • Chinese Documentation | Quick Start

Key Features

  • Pre-defined Models for Training
    • Define a commonly used self-supervised model such as BERT-Large or GPT-2 with a single line of code.
  • Friendly to User-defined Dataset
    • Plug in a user-defined dataset and start training with minimal effort.
  • Distributed Training Best Practices
    • Best practices for efficient distributed training are provided.

Community

Slack

To connect with other users and contributors, you are welcome to join our Slack channel.

Feedback

For any feedback or to report a bug, please open a GitHub Issue.

License

Apache 2.0 License
