
Commit

fix: misc docs issues
avik-pal committed Sep 5, 2024
1 parent 7b96d5d commit 8c964f4
Showing 5 changed files with 14 additions and 8 deletions.
6 changes: 6 additions & 0 deletions docs/src/api/Lux/utilities.md
@@ -119,3 +119,9 @@ StatefulLuxLayer
 @init_fn
 @non_trainable
 ```
+
+## Miscellaneous
+
+```@docs
+Lux.set_dispatch_doctor_preferences!
+```
4 changes: 2 additions & 2 deletions docs/src/introduction/updating_to_v1.md
@@ -108,7 +108,7 @@ abstraction.
 - `Experimental.StatefulLuxLayer` has been moved to [`Lux.StatefulLuxLayer`](@ref).
 - `st_fixed_path` kwarg has been removed from [`Lux.StatefulLuxLayer`](@ref), instead use it
   as `StatefulLuxLayer{st_fixed_path}(...)`.
-- Strings as inputs to [`Experimental.layer_map`](@ref) and
-  [`Experimental.@debug_mode`](@ref) are removed, use `Functors.KeyPath` instead.
+- Strings as inputs to [`Lux.Experimental.layer_map`](@ref) and
+  [`Lux.Experimental.@debug_mode`](@ref) are removed, use `Functors.KeyPath` instead.
 
 ### Breaking Changes (Changes in Defaults)
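
The two migration notes in this hunk can be sketched as follows. This is an illustrative aside, not part of the commit: the layer sizes are made up, and the calls assume the Lux v1 API as described in the notes (old `st_fixed_path` keyword becomes a type parameter; string paths are replaced by `Functors.KeyPath`).

```julia
using Lux, Random, Functors

model = Dense(2 => 3)
ps, st = Lux.setup(Xoshiro(0), model)

# v1: `st_fixed_path` is now a type parameter, not a keyword argument.
smodel = StatefulLuxLayer{true}(model, ps, st)
y = smodel(randn(Xoshiro(0), Float32, 2, 4))  # callable like a plain function

# v1: address nested layers with `Functors.KeyPath` instead of strings,
# e.g. KeyPath(:layer_1, :weight) replaces the old "layer_1.weight".
kp = KeyPath(:layer_1, :weight)
```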
8 changes: 4 additions & 4 deletions docs/src/manual/distributed_utils.md
@@ -87,10 +87,10 @@ And that's pretty much it!
    as input.
 3. We don't automatically determine if the MPI Implementation is CUDA or ROCM aware. See
    [GPU-aware MPI](@ref gpu-aware-mpi-preferences) for more information.
-4. Older [`Lux.gpu`](@ref) implementations used to "just work" with `FluxMPI.jl`. We expect
-   [`gpu_device`](@ref) to continue working as expected, however, we recommend using
-   [`gpu_device`](@ref) after calling [`DistributedUtils.initialize`](@ref) to avoid any
-   mismatch between the device set via `DistributedUtils` and the device stores in
+4. Older (now non-existent) `Lux.gpu` implementations used to "just work" with `FluxMPI.jl`.
+   We expect [`gpu_device`](@ref) to continue working as expected, however, we recommend
+   using [`gpu_device`](@ref) after calling [`DistributedUtils.initialize`](@ref) to avoid
+   any mismatch between the device set via `DistributedUtils` and the device stores in
    `CUDADevice` or `AMDGPUDevice`.
 
 ## Known Shortcomings
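
The recommendation in item 4 (call `gpu_device` only after `DistributedUtils.initialize`) can be sketched as below. This is illustrative, not part of the commit; it assumes the `MPIBackend`/`get_distributed_backend` API from the Lux distributed-utilities manual.

```julia
using Lux, MPI

# Initialize the distributed backend *before* querying the device, so the
# device selected by DistributedUtils matches what gpu_device() later returns.
DistributedUtils.initialize(MPIBackend)
backend = DistributedUtils.get_distributed_backend(MPIBackend)

# Safe now: consistent with the device assignment made during initialize.
dev = gpu_device()
```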
2 changes: 1 addition & 1 deletion docs/src/manual/freezing_model_parameters.md
@@ -13,7 +13,7 @@ We can use [`Lux.Experimental.layer_map`](@ref) and freeze layers if they are of
 `Dense`.
 
 ```@example freezing_model_parameters
-using Lux, Functors, Random
+using Lux, Random
 rng = Xoshiro(0)
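
For context, the file this hunk touches demonstrates freezing `Dense` layers via `layer_map`. A minimal sketch of that pattern, assuming the `layer_map`/`freeze` signatures shown in the Lux freezing manual (the model and frozen parameter names here are assumptions, not taken from the commit):

```julia
using Lux, Random

model = Chain(Dense(2 => 4, tanh), Dense(4 => 1))
rng = Xoshiro(0)
ps, st = Lux.setup(rng, model)

# Freeze every Dense layer's weight and bias; leave other layers untouched.
freeze_dense(d::Dense, ps, st, path) =
    Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
freeze_dense(l, ps, st, path) = (l, ps, st)  # fallback: no change

model_frozen, ps_frozen, st_frozen =
    Lux.Experimental.layer_map(freeze_dense, model, ps, st)
```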
2 changes: 1 addition & 1 deletion test/runtests.jl
@@ -93,7 +93,7 @@ const RETESTITEMS_NWORKERS = parse(
     @info "Running tests for group: [$(i)/$(length(LUX_TEST_GROUP))] $tag"
 
     ReTestItems.runtests(Lux; tags=(tag == "all" ? nothing : [Symbol(tag)]),
-        nworkers=RETESTITEMS_NWORKERS, testitem_timeout=1800, retries=1)
+        nworkers=RETESTITEMS_NWORKERS, testitem_timeout=2400, retries=1)
     end
 end

