[ML-11959] Update API doc in readme (#236)

Check this page
https://github.com/WeichenXu123/spark-deep-learning/tree/update_doc
to see the rendered README.
WeichenXu123 authored Sep 3, 2020
1 parent 8546cc7 commit acad2dc
Showing 10 changed files with 81 additions and 191 deletions.
1 change: 0 additions & 1 deletion .travis.yml
@@ -9,5 +9,4 @@ install:
- pip install sphinx

script:
- pushd docs && make html && popd
- pytest
80 changes: 80 additions & 0 deletions README.md
@@ -13,6 +13,86 @@ Visit databricks doc [HorovodRunner: distributed deep learning with Horovod](htt
To use the previous release that contains Spark Deep Learning Pipelines API, please go to [Spark Packages page](https://spark-packages.org/package/databricks/spark-deep-learning).


## API Documentation

### class sparkdl.HorovodRunner(\*, np, driver_log_verbosity='all')
Bases: `object`

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job.
It makes running Horovod easy on Databricks by managing the cluster setup and integrating with
Spark.
Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version runs the job locally inside the same Python process
and is intended for local development only.

**NOTE**: Horovod is a distributed training framework developed by Uber.

* **Parameters**


* **np** – number of parallel processes to use for the Horovod job.
This argument only takes effect on Databricks Runtime 5.0 ML and above.
It is ignored in the open-source version.
On Databricks, each process will take an available task slot,
which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster.
Accepted values are (a short sketch follows this parameter list):

- If <0, this will spawn `-np` subprocesses on the driver node to run Horovod locally.
Training stdout and stderr messages go to the notebook cell output, and are also
available in driver logs in case the cell output is truncated. This is useful for
debugging and we recommend testing your code under this mode first. However, be
careful of heavy use of the Spark driver on a shared Databricks cluster.
Note that `np < -1` is only supported on Databricks Runtime 5.5 ML and above.
- If >0, this will launch a Spark job with `np` tasks starting all together and run the
Horovod job on the task nodes.
It will wait until `np` task slots are available to launch the job.
If `np` is greater than the total number of task slots on the cluster,
the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr
messages go to the notebook cell output. In the event that the cell output is
truncated, full logs are available in the stderr stream of task 0 under the second
Spark job started by HorovodRunner, which you can find in the Spark UI.
- If 0, this will use all task slots on the cluster to launch the job.

  **Warning:** Setting np=0 is deprecated and will be removed in the next major
  Databricks Runtime release. Choosing np based on the total task slots at runtime is
  unreliable due to dynamic executor registration. Please set the number of parallel
  processes you need explicitly.
* **driver_log_verbosity** – This argument is only available on Databricks Runtime.
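
To make the `np` semantics above concrete, here is a minimal sketch of constructing
the runner. It assumes only that `sparkdl` is importable; the distributed behavior
described in the comments applies on Databricks Runtime ML, per the parameter notes above.

```python
from sparkdl import HorovodRunner

# np = -1: spawn one subprocess on the driver node and run Horovod locally.
# Useful for debugging; testing under this mode first is recommended.
hr_local = HorovodRunner(np=-1)

# np = 2: on Databricks Runtime 5.0 ML and above, launch a Spark job that
# waits for 2 task slots (a GPU on a GPU cluster, a CPU core on a CPU
# cluster). The open-source version ignores np and runs locally instead.
hr_distributed = HorovodRunner(np=2)
```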

#### run(main, \*\*kwargs)
Runs a Horovod training job invoking main(\*\*kwargs).

The open-source version only invokes main(\*\*kwargs) inside the same Python process.
On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the
documented behavior of `np`. Both the main function and the keyword arguments are
serialized using cloudpickle and distributed to cluster workers.


* **Parameters**


* **main** – a Python function that contains the Horovod training code.
The expected signature is def main(\*\*kwargs) or compatible forms.
Because the function gets pickled and distributed to workers,
change global state (e.g., the logging level) inside the function itself,
and be aware of pickling limitations.
Avoid referencing large objects in the function, which might result in large pickled data,
making the job slow to start.


* **kwargs** – keyword arguments passed to the main function at invocation time.



* **Returns**

return value of the main function.
With np>=0, this returns the value from the rank 0 process. Note that the returned
value should be serializable using cloudpickle.
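
A minimal end-to-end sketch of `run()` follows. The training body is hypothetical (it
assumes Horovod with TensorFlow is installed) and only illustrates the documented
contract: worker-side imports and global-state changes go inside `main`, arguments and
return values travel via cloudpickle, and with np>=0 the result comes from the rank 0
process.

```python
from sparkdl import HorovodRunner

def main(learning_rate=0.1):
    # Import Horovod inside the function: main is pickled with cloudpickle
    # and unpickled on each worker, so worker-side setup belongs here.
    import horovod.tensorflow as hvd
    hvd.init()
    # Hypothetical training step: scale the learning rate by the number of
    # workers, a common Horovod pattern, then train the model with it.
    effective_lr = learning_rate * hvd.size()
    # ... build and train the model using effective_lr ...
    return effective_lr  # must be serializable with cloudpickle

hr = HorovodRunner(np=2)
# Keyword arguments are forwarded to main on each worker; with np >= 0 the
# value returned here comes from the rank 0 process.
result = hr.run(main, learning_rate=0.1)
```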


## Releases
Visit [GitHub Release Page](https://github.com/databricks/spark-deep-learning/releases) to check the release notes.

20 changes: 0 additions & 20 deletions docs/Makefile

This file was deleted.

35 changes: 0 additions & 35 deletions docs/make.bat

This file was deleted.

58 changes: 0 additions & 58 deletions docs/source/conf.py

This file was deleted.

20 changes: 0 additions & 20 deletions docs/source/index.rst

This file was deleted.

7 changes: 0 additions & 7 deletions docs/source/modules.rst

This file was deleted.

21 changes: 0 additions & 21 deletions docs/source/sparkdl.horovod.rst

This file was deleted.

18 changes: 0 additions & 18 deletions docs/source/sparkdl.rst

This file was deleted.

12 changes: 1 addition & 11 deletions sparkdl/horovod/runner_base.py
@@ -63,17 +63,7 @@ def __init__(self, *, np, driver_log_verbosity="all"): # pylint: disable=invali
Databricks Runtime release. Choosing np based on the total task slots at runtime is
unreliable due to dynamic executor registration. Please set the number of parallel
processes you need explicitly.
:param driver_log_verbosity: driver log verbosity, "all" (default) or "log_callback_only".
During training, the first worker process will collect logs from all workers.
The training logs are always merged into the first Spark executor's stderr logs.
If driver log verbosity is "all", HorovodRunner streams all logs to the driver and shows
them in the notebook cell output.
However, this might generate an excessive amount of logs during distributed training.
You can turn it off by setting driver log verbosity to "log_callback_only".
In this mode, HorovodRunner will only stream selected logs if you use a HorovodRunner
log callback in the first worker process, e.g.,
:class:`sparkdl.horovod.tensorflow.keras.LogCallback`.
.. warning:: We will switch the default to "log_callback_only" in a future release.
:param driver_log_verbosity: This argument is only available on Databricks Runtime.
"""
self.num_processor = np

