From 92c66e63da6cc0cb703c2f86e30e49dc8e0e6516 Mon Sep 17 00:00:00 2001
From: Yasir Ali
Date: Sat, 2 Dec 2023 18:53:18 +0900
Subject: [PATCH] fixed minor grammatical and spelling errors

---
 docker/Dockerfile                                  |  4 ++--
 docker/README.md                                   |  8 ++++----
 docs/source/overview.rst                           |  6 +++---
 docs/source/tutorials/dataset_partition.rst        |  2 +-
 .../source/tutorials/distributed_communication.rst | 14 +++++++-------
 docs/source/tutorials/docker_deployment.rst        | 10 +++++-----
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/docker/Dockerfile b/docker/Dockerfile
index 3254b728..e47777de 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -1,7 +1,7 @@
-# This is a example of fedlab installation via Dockerfile
+# This is an example of fedlab installation via Dockerfile
 
 # replace the value of TORCH_CONTAINER with pytorch image that satisfies your cuda version
-# you can finde it in https://hub.docker.com/r/pytorch/pytorch/tags
+# you can find it at https://hub.docker.com/r/pytorch/pytorch/tags
 
 ARG TORCH_CONTAINER=1.5-cuda10.1-cudnn7-runtime
 
 FROM pytorch/pytorch:${TORCH_CONTAINER}
diff --git a/docker/README.md b/docker/README.md
index 9f871dbb..83e1aee6 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -4,11 +4,11 @@
 
 - Step 1
 
-Find an appropriate base pytorch image for your platform from dockerhub https://hub.docker.com/r/pytorch/pytorch/tags. Then, replace the value of TORCH_CONTAINER in demo dockerfile.
+Find an appropriate base PyTorch image for your platform from Docker Hub https://hub.docker.com/r/pytorch/pytorch/tags. Then, replace the value of TORCH_CONTAINER in the demo dockerfile.
 
 - Step 2
 
-To install specific pytorch version, you need to choose a correct install command, which can be find in https://pytorch.org/get-started/previous-versions/. Then, modify the 16-th command in demo dockerfile.
+To install a specific PyTorch version, you need to choose the correct install command, which can be found at https://pytorch.org/get-started/previous-versions/. Then, modify the 16th command in the demo dockerfile.
 
 - Step 3
 
@@ -17,8 +17,8 @@ Build the images for your own platform by running command below.
 
 - Note
 
-Please be sure using "--gpus all" and "--network=host" when start a docker contrainer:
+Please be sure to use "--gpus all" and "--network=host" when starting a docker container:
 
 > $ docker run -itd --gpus all --network=host b23a9c46cd04(image name) /bin/bash
 
-If you are not in China area, it is ok to remove line 11,12 and "-i https://pypi.mirrors.ustc.edu.cn/simple/" in line 19.
+If you are not in China, it is OK to remove lines 11 and 12 and "-i https://pypi.mirrors.ustc.edu.cn/simple/" in line 19.
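Once the image from Step 3 is running with the two flags from the Note, a quick check from inside the container confirms the setup. This is a minimal sketch; the printed version string depends on the base image chosen in Step 1:

.. code-block:: python

    # Run inside the started container: a minimal sanity check.
    import torch

    print(torch.__version__)          # should match the PyTorch version installed in Step 2
    print(torch.cuda.is_available())  # True only if the container was started with "--gpus all"
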
diff --git a/docs/source/overview.rst b/docs/source/overview.rst
index b1b365e7..d2d88d8e 100644
--- a/docs/source/overview.rst
+++ b/docs/source/overview.rst
@@ -8,7 +8,7 @@ Introduction
 
 Federated learning (FL), proposed by Google at the very beginning, is recently a burgeoning research area of machine learning, which aims to protect individual data privacy in distributed machine learning process, especially in finance, smart healthcare and edge computing. Different from traditional data-centered distributed machine learning, participants in FL setting utilize localized data to train local model, then leverages specific strategies with other participants to acquire the final model collaboratively, avoiding direct data sharing behavior.
 
-To relieve the burden of researchers in implementing FL algorithms and emancipate FL scientists from repetitive implementation of basic FL setting, we introduce highly customizable framework **FedLab** in this work. **FedLab** is builded on the top of `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`_ modules and provides the necessary modules for FL simulation, including communication, compression, model optimization, data partition and other functional modules. **FedLab** users can build FL simulation environment with custom modules like playing with LEGO bricks. For better understanding and easy usage, FL algorithm benchmark implemented in **FedLab** are also presented.
+To relieve the burden of researchers in implementing FL algorithms and emancipate FL scientists from repetitive implementation of basic FL settings, we introduce a highly customizable framework, **FedLab**, in this work. **FedLab** is built on top of `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`_ modules and provides the necessary modules for FL simulation, including communication, compression, model optimization, data partition and other functional modules. **FedLab** users can build an FL simulation environment with custom modules like playing with LEGO bricks. For better understanding and easy usage, FL algorithm benchmarks implemented in **FedLab** are also presented.
 
 For more details, please read our `full paper`__.
 
@@ -95,7 +95,7 @@ Experimental Scene
 Standalone
 -----------
 
-**FedLab** implements ``SerialTrainer`` for FL simulation in single system process. ``SerialTrainer`` allows user to simulate a FL system with multiple clients executing one by one in serial in one ``SerialTrainer``. It is designed for simulation in environment with limited computation resources.
+**FedLab** implements ``SerialTrainer`` for FL simulation in a single system process. ``SerialTrainer`` allows users to simulate an FL system with multiple clients executing one by one in serial within one ``SerialTrainer``. It is designed for simulation in environments with limited computational resources.
 
 .. image:: ../imgs/fedlab-SerialTrainer.svg
    :align: center
    :width: 400
 
@@ -108,7 +108,7 @@ Standalone
 Cross-process
 -------------
 
-**FedLab** enables FL simulation tasks to be deployed on multiple processes with correct network configuration (these processes can be run on single or multiple machines). More flexibly in parallel, ``SerialTrainer`` can replace the regular ``Trainer`` directly. Users can balance the calculation burden among processes by choosing different ``Trainer``. In practice, machines with more computation resources can be assigned with more workload of calculation.
+**FedLab** enables FL simulation tasks to be deployed on multiple processes with correct network configuration (these processes can run on a single machine or on multiple machines). For more flexible parallelism, ``SerialTrainer`` can directly replace the regular ``Trainer``. Users can balance the computational burden among processes by choosing different ``Trainer`` types. In practice, machines with more computational resources can be assigned a larger share of the workload.
 
 .. note::
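As a complement to the Standalone hunk above, here is a hedged sketch of what serial single-process simulation looks like in plain PyTorch. The helper names are illustrative only and deliberately avoid FedLab's actual ``SerialTrainer`` API:

.. code-block:: python

    import copy
    import torch
    import torch.nn as nn

    def local_train(global_model, data, target, epochs=1):
        # One simulated client: copy the global model and update it on local data.
        local_model = copy.deepcopy(global_model)
        optimizer = torch.optim.SGD(local_model.parameters(), lr=0.1)
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(local_model(data), target)
            loss.backward()
            optimizer.step()
        return local_model.state_dict()

    def average_weights(state_dicts):
        # FedAvg-style aggregation: element-wise mean over client parameters.
        averaged = copy.deepcopy(state_dicts[0])
        for key in averaged:
            averaged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        return averaged

    global_model = nn.Linear(10, 1)
    clients = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(5)]

    for round_idx in range(3):
        # Clients execute one by one in serial, within a single process.
        updates = [local_train(global_model, x, y) for x, y in clients]
        global_model.load_state_dict(average_weights(updates))
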
diff --git a/docs/source/tutorials/dataset_partition.rst b/docs/source/tutorials/dataset_partition.rst
index c57905d4..f30e9378 100644
--- a/docs/source/tutorials/dataset_partition.rst
+++ b/docs/source/tutorials/dataset_partition.rst
@@ -4,7 +4,7 @@
 Federated Dataset and DataPartitioner
 *************************************
 
-Sophisticated in real world, FL need to handle various kind of data distribution scenarios, including
+In real-world settings, FL needs to handle various kinds of data distribution scenarios, including
 iid and non-iid scenarios. Though there already exists some datasets and partition schemes for published data benchmark, it still can be very messy and hard for researchers to partition datasets according to their specific research problems, and maintain partition results during simulation. FedLab provides :class:`fedlab.utils.dataset.partition.DataPartitioner` that allows you to use pre-partitioned datasets as well as your own data. :class:`DataPartitioner` stores sample indices for each client given a data partition scheme. Also, FedLab provides some extra datasets that are used in current FL researches while not provided by official Pytorch :class:`torchvision.datasets` yet.
diff --git a/docs/source/tutorials/distributed_communication.rst b/docs/source/tutorials/distributed_communication.rst
index 612f35d5..8c173302 100644
--- a/docs/source/tutorials/distributed_communication.rst
+++ b/docs/source/tutorials/distributed_communication.rst
@@ -8,9 +8,9 @@ Distributed Communication
 Initialize distributed network
 ======================================
 
-FedLab uses `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`_ as point-to-point communication tools. The communication backend is Gloo as default. FedLab processes send/receive data through TCP network connection. Here is the details of how to initialize the distributed network.
+FedLab uses `torch.distributed <https://pytorch.org/docs/stable/distributed.html>`_ as its point-to-point communication tool. The communication backend is Gloo by default. FedLab processes send/receive data through TCP network connections. Here are the details of how to initialize the distributed network.
 
-You need to assign right ethernet to :class:`DistNetwork`, making sure ``torch.distributed`` network initialization works. :class:`DistNetwork` is for quickly network configuration, which you can create one as follows:
+You need to assign the right ethernet interface to :class:`DistNetwork` to make sure ``torch.distributed`` network initialization works. :class:`DistNetwork` is for quick network configuration, and you can create one as follows:
 
 .. code-block:: python
 
@@ -29,7 +29,7 @@ You need to assign right ethernet to :class:`DistNetwork`, making sure ``torch.d
 - Make sure ``world_size`` is the same across process.
 - Rank should be different (from ``0`` to ``world_size-1``).
 - world_size = 1 (server) + client number.
-- The ethernet is None as default. torch.distributed will try finding the right ethernet automatically.
+- The ethernet is ``None`` by default. ``torch.distributed`` will try to find the right ethernet interface automatically.
 - The ``ethernet_name`` must be checked (using ``ifconfig``). Otherwise, network initialization would fail.
 
 If the automatically detected interface does not work, users are required to assign a right network interface for Gloo, by assigning in code or setting the environment variables ``GLOO_SOCKET_IFNAME``, for example ``export GLOO_SOCKET_IFNAME=eth0`` or ``os.environ['GLOO_SOCKET_IFNAME'] = "eth0"``.
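Putting the checklist above into code, a minimal server-side sketch might look like this. The import path and method names are assumptions based on recent FedLab releases; adjust them to your installed version:

.. code-block:: python

    import os

    # Assumption: DistNetwork lives in fedlab.core.network in recent releases.
    from fedlab.core.network import DistNetwork

    # Pin Gloo to a specific interface if automatic detection fails.
    os.environ['GLOO_SOCKET_IFNAME'] = "eth0"

    # world_size = 1 server + 1 client; the server conventionally takes rank 0.
    network = DistNetwork(address=("127.0.0.1", 3002),
                          world_size=2,
                          rank=0,
                          ethernet=None)

    network.init_network_connection()   # assumed connection-setup method
    # ... send/recv happens here (see the next hunks of this patch) ...
    network.close_network_connection()  # assumed teardown method
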
@@ -45,7 +45,7 @@ If the automatically detected interface does not work, users are required to ass
 Point-to-point communication
 =============================
 
-In recent update, we hide the communication details from user and provide simple APIs. :class:`DistNetwork` now provies two basic communication APIs: :meth:`send()` and :meth:`recv()`. These APIs suppor flexible pytorch tensor communication.
+In a recent update, we hid the communication details from users and provided simple APIs. :class:`DistNetwork` now provides two basic communication APIs: :meth:`send()` and :meth:`recv()`. These APIs support flexible PyTorch tensor communication.
 
 **Sender process**:
 
@@ -80,7 +80,7 @@ In recent update, we hide the communication details from user and provide simple
 Further understanding of FedLab communication
 ================================================
 
-FedLab pack content into a pre-defined package data structure. :meth:`send()` and :meth:`recv()` are implemented like:
+FedLab packs content into a pre-defined package data structure. :meth:`send()` and :meth:`recv()` are implemented like this:
 
 .. code-block:: python
 
@@ -123,7 +123,7 @@ Currently, you can create a network package from following methods:
     tensor_list = [torch.rand(size) for size in tensor_sizes]
     package = Package(content=tensor_list)
 
-3. append a tensor to exist package
+3. append a tensor to an existing package
 
 .. code-block:: python
 
@@ -133,7 +133,7 @@ Currently, you can create a network package from following methods:
     new_tensor = torch.Tensor(size=(8,))
     package.append_tensor(new_tensor)
 
-4. append a tensor list to exist package
+4. append a tensor list to an existing package
 
 .. code-block:: python
 
diff --git a/docs/source/tutorials/docker_deployment.rst b/docs/source/tutorials/docker_deployment.rst
index 10058073..96deb34c 100644
--- a/docs/source/tutorials/docker_deployment.rst
+++ b/docs/source/tutorials/docker_deployment.rst
@@ -12,7 +12,7 @@ The communication APIs of **FedLab** is built on `torch.distributed
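To tie the distributed_communication changes together, the four package-creation routes listed in the hunks above can be combined into one hedged sketch. The import path and the ``append_tensor_list`` name are assumptions; only ``Package(content=...)`` and ``append_tensor`` appear verbatim in the patch:

.. code-block:: python

    import torch

    # Assumption: import path for Package; check your installed FedLab version.
    from fedlab.core.communicator import Package

    # 1. initialize with a single tensor
    package = Package(content=torch.rand(10))

    # 2. initialize with a tensor list
    tensor_sizes = [3, 5, 8]
    package = Package(content=[torch.rand(size) for size in tensor_sizes])

    # 3. append a tensor to an existing package
    package.append_tensor(torch.rand(8))

    # 4. append a tensor list to an existing package (method name assumed from item 4 above)
    package.append_tensor_list([torch.rand(4), torch.rand(6)])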