Fix numbering in the docs #578

Merged
merged 2 commits on Apr 9, 2024
1 change: 1 addition & 0 deletions .buildkite/pipeline.yml
@@ -206,6 +206,7 @@ steps:
cuda: "*"
artifact_paths:
- "tutorial_deps/*"
- "docs/build/**/*"
env:
DATADEPS_ALWAYS_ACCEPT: true
JULIA_DEBUG: "Documenter"
4 changes: 2 additions & 2 deletions docs/src/index.md
@@ -5,8 +5,8 @@ layout: home

hero:
name: LuxDL Docs
text: Elegant & Performant Deep Learning in JuliaLang
tagline: A Pure Julia Deep Learning Framework putting Correctness and Performance First
text: Elegant & Performant Scientific Machine Learning in JuliaLang
tagline: A Pure Julia Deep Learning Framework designed for Scientific Machine Learning
actions:
- theme: brand
text: Tutorials
38 changes: 19 additions & 19 deletions docs/src/manual/distributed_utils.md
@@ -10,16 +10,16 @@ DDP Training using `Lux.DistributedUtils` is a spiritual successor to

## Guide to Integrating DistributedUtils into your code

1. Initialize the respective backend with [`DistributedUtils.initialize`](@ref), by passing
in a backend type. It is important that you pass in the type, i.e. `NCCLBackend` and not
the object `NCCLBackend()`.
* Initialize the respective backend with [`DistributedUtils.initialize`](@ref), by passing
in a backend type. It is important that you pass in the type, i.e. `NCCLBackend` and not
the object `NCCLBackend()`.

```julia
DistributedUtils.initialize(NCCLBackend)
```

2. Obtain the backend via [`DistributedUtils.get_distributed_backend`](@ref) by passing in
the type of the backend (same note as last point applies here again).
* Obtain the backend via [`DistributedUtils.get_distributed_backend`](@ref) by passing in
the type of the backend (same note as last point applies here again).

```julia
backend = DistributedUtils.get_distributed_backend(NCCLBackend)
```
@@ -28,36 +28,36 @@ backend = DistributedUtils.get_distributed_backend(NCCLBackend)
It is important that you use this function instead of directly constructing the backend,
since there are certain internal states that need to be synchronized.
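
To make that caveat concrete, a short sketch of the pattern to avoid versus the one to use (assuming the backend type, e.g. `NCCLBackend`, is available):

```julia
# Avoid constructing the backend object yourself; its internal state
# would not be synchronized across processes:
# backend = NCCLBackend()

# Instead, request it through the API by passing the backend *type*:
backend = DistributedUtils.get_distributed_backend(NCCLBackend)
```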

3. Next synchronize the parameters and states of the model. This is done by calling
[`DistributedUtils.synchronize!!`](@ref) with the backend and the respective input.
* Next synchronize the parameters and states of the model. This is done by calling
[`DistributedUtils.synchronize!!`](@ref) with the backend and the respective input.

```julia
ps = DistributedUtils.synchronize!!(backend, ps)
st = DistributedUtils.synchronize!!(backend, st)
```

4. To split the data uniformly across the processes use
[`DistributedUtils.DistributedDataContainer`](@ref). Alternatively, one can manually
split the data. For the provided container to work
[`MLUtils.jl`](https://github.com/JuliaML/MLUtils.jl) must be installed and loaded.
* To split the data uniformly across the processes use
[`DistributedUtils.DistributedDataContainer`](@ref). Alternatively, one can manually
split the data. For the provided container to work
[`MLUtils.jl`](https://github.com/JuliaML/MLUtils.jl) must be installed and loaded.

```julia
data = DistributedUtils.DistributedDataContainer(backend, data)
```

5. Wrap the optimizer in [`DistributedUtils.DistributedOptimizer`](@ref) to ensure that the
optimizer is correctly synchronized across all processes before parameter updates. After
initializing the state of the optimizer, synchronize the state across all processes.
* Wrap the optimizer in [`DistributedUtils.DistributedOptimizer`](@ref) to ensure that the
optimizer is correctly synchronized across all processes before parameter updates. After
initializing the state of the optimizer, synchronize the state across all processes.

```julia
opt = DistributedUtils.DistributedOptimizer(backend, opt)
opt_state = Optimisers.setup(opt, ps)
opt_state = DistributedUtils.synchronize!!(backend, opt_state)
```

6. Finally change all logging and serialization code to trigger on
`local_rank(backend) == 0`. This ensures that only the master process logs and serializes
the model.
* Finally change all logging and serialization code to trigger on
`local_rank(backend) == 0`. This ensures that only the master process logs and serializes
the model.
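
As a minimal sketch of that last step, logging and checkpointing might be gated like this (the `@info` call and `save_checkpoint` are placeholders for whatever your training loop actually does):

```julia
if DistributedUtils.local_rank(backend) == 0
    # Only the rank-0 (master) process logs and serializes the model.
    @info "epoch finished" epoch train_loss
    save_checkpoint(ps, st)   # placeholder for your own serialization routine
end
```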

## [GPU-Aware MPI](@id gpu-aware-mpi)

@@ -108,4 +108,4 @@ And that's pretty much it!
1. Currently we don't run tests with CUDA or ROCm-aware MPI, so use those features at your own
   risk. We are working on adding tests for these features.
2. AMDGPU support is mostly experimental and causes deadlocks in certain situations; this is
   being investigated. If you have a minimal reproducer for this, please open an issue.
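
Putting the steps from this guide together, an end-to-end skeleton could look roughly as follows. This is only a sketch: `model`, `dataset`, and the training step are placeholders, and it assumes the packages providing `NCCLBackend` (MPI.jl and NCCL.jl) are loaded alongside Lux, MLUtils, and Optimisers.

```julia
using Lux, MLUtils, Optimisers, Random
# Assumption: `using MPI, NCCL` has been done so that `NCCLBackend` is available.

DistributedUtils.initialize(NCCLBackend)
backend = DistributedUtils.get_distributed_backend(NCCLBackend)

# `model` is a placeholder for any Lux model.
ps, st = Lux.setup(Random.default_rng(), model)
ps = DistributedUtils.synchronize!!(backend, ps)
st = DistributedUtils.synchronize!!(backend, st)

# `dataset` is a placeholder; the container requires MLUtils to be loaded.
data = DistributedUtils.DistributedDataContainer(backend, dataset)

opt = DistributedUtils.DistributedOptimizer(backend, Adam(3.0f-4))
opt_state = Optimisers.setup(opt, ps)
opt_state = DistributedUtils.synchronize!!(backend, opt_state)

for batch in DataLoader(data; batchsize=32)
    # Compute gradients and update `ps` / `opt_state` here (training step elided).
    if DistributedUtils.local_rank(backend) == 0
        # Only the master process logs and serializes.
    end
end
```
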
18 changes: 9 additions & 9 deletions docs/src/tutorials/index.md
@@ -6,7 +6,7 @@ layout: page
<script setup>
import { VPTeamPage, VPTeamPageTitle, VPTeamMembers, VPTeamPageSection } from 'vitepress/theme'

const githubSvg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 640 512"><path d="M392.8 1.2c-17-4.9-34.7 5-39.6 22l-128 448c-4.9 17 5 34.7 22 39.6s34.7-5 39.6-22l128-448c4.9-17-5-34.7-22-39.6zm80.6 120.1c-12.5 12.5-12.5 32.8 0 45.3L562.7 256l-89.4 89.4c-12.5 12.5-12.5 32.8 0 45.3s32.8 12.5 45.3 0l112-112c12.5-12.5 12.5-32.8 0-45.3l-112-112c-12.5-12.5-32.8-12.5-45.3 0zm-306.7 0c-12.5-12.5-32.8-12.5-45.3 0l-112 112c-12.5 12.5-12.5 32.8 0 45.3l112 112c12.5 12.5 32.8 12.5 45.3 0s12.5-32.8 0-45.3L77.3 256l89.4-89.4c12.5-12.5 12.5-32.8 0-45.3z"/></svg>';
const codeSvg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 640 512"><path d="M392.8 1.2c-17-4.9-34.7 5-39.6 22l-128 448c-4.9 17 5 34.7 22 39.6s34.7-5 39.6-22l128-448c4.9-17-5-34.7-22-39.6zm80.6 120.1c-12.5 12.5-12.5 32.8 0 45.3L562.7 256l-89.4 89.4c-12.5 12.5-12.5 32.8 0 45.3s32.8 12.5 45.3 0l112-112c12.5-12.5 12.5-32.8 0-45.3l-112-112c-12.5-12.5-32.8-12.5-45.3 0zm-306.7 0c-12.5-12.5-32.8-12.5-45.3 0l-112 112c-12.5 12.5-12.5 32.8 0 45.3l112 112c12.5 12.5 32.8 12.5 45.3 0s12.5-32.8 0-45.3L77.3 256l89.4-89.4c12.5-12.5 12.5-32.8 0-45.3z"/></svg>';

const beginners = [
{
@@ -16,7 +16,7 @@ const beginners = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'beginner/1_Basics' }
]
@@ -28,7 +28,7 @@ const beginners = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'beginner/2_PolynomialFitting' }
]
@@ -40,7 +40,7 @@ const beginners = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'beginner/3_SimpleRNN' }
]
@@ -52,7 +52,7 @@ const beginners = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'beginner/4_SimpleChains' }
]
@@ -67,7 +67,7 @@ const intermediate = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'intermediate/1_NeuralODE' }
]
@@ -79,7 +79,7 @@ const intermediate = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'intermediate/2_BayesianNN' }
]
@@ -92,7 +92,7 @@ const intermediate = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'intermediate/3_HyperNet' }
]
@@ -107,7 +107,7 @@ const advanced = [
links: [
{
icon: {
svg: githubSvg,
svg: codeSvg,
},
link: 'advanced/1_GravitationalWaveForm' }
]