From 8c964f4494847a434ff1004a298bc2cd01543493 Mon Sep 17 00:00:00 2001
From: Avik Pal
Date: Wed, 4 Sep 2024 20:31:52 -0400
Subject: [PATCH] fix: misc docs issues

---
 docs/src/api/Lux/utilities.md                | 6 ++++++
 docs/src/introduction/updating_to_v1.md      | 4 ++--
 docs/src/manual/distributed_utils.md         | 8 ++++----
 docs/src/manual/freezing_model_parameters.md | 2 +-
 test/runtests.jl                             | 2 +-
 5 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/docs/src/api/Lux/utilities.md b/docs/src/api/Lux/utilities.md
index 19489e766..744624fa1 100644
--- a/docs/src/api/Lux/utilities.md
+++ b/docs/src/api/Lux/utilities.md
@@ -119,3 +119,9 @@ StatefulLuxLayer
 @init_fn
 @non_trainable
 ```
+
+## Miscellaneous
+
+```@docs
+Lux.set_dispatch_doctor_preferences!
+```
diff --git a/docs/src/introduction/updating_to_v1.md b/docs/src/introduction/updating_to_v1.md
index 72feb2a76..0d0629a8e 100644
--- a/docs/src/introduction/updating_to_v1.md
+++ b/docs/src/introduction/updating_to_v1.md
@@ -108,7 +108,7 @@ abstraction.
 - `Experimental.StatefulLuxLayer` has been moved to [`Lux.StatefulLuxLayer`](@ref).
 - `st_fixed_path` kwarg has been removed from [`Lux.StatefulLuxLayer`](@ref), instead use
   it as `StatefulLuxLayer{st_fixed_path}(...)`.
-- Strings as inputs to [`Experimental.layer_map`](@ref) and
-  [`Experimental.@debug_mode`](@ref) are removed, use `Functors.KeyPath` instead.
+- Strings as inputs to [`Lux.Experimental.layer_map`](@ref) and
+  [`Lux.Experimental.@debug_mode`](@ref) are removed, use `Functors.KeyPath` instead.
 
 ### Breaking Changes (Changes in Defaults)
diff --git a/docs/src/manual/distributed_utils.md b/docs/src/manual/distributed_utils.md
index dbee8ab11..677e47377 100644
--- a/docs/src/manual/distributed_utils.md
+++ b/docs/src/manual/distributed_utils.md
@@ -87,10 +87,10 @@ And that's pretty much it!
    as input.
 3. We don't automatically determine if the MPI Implementation is CUDA or ROCM aware. See
    [GPU-aware MPI](@ref gpu-aware-mpi-preferences) for more information.
-4. Older [`Lux.gpu`](@ref) implementations used to "just work" with `FluxMPI.jl`. We expect
-   [`gpu_device`](@ref) to continue working as expected, however, we recommend using
-   [`gpu_device`](@ref) after calling [`DistributedUtils.initialize`](@ref) to avoid any
-   mismatch between the device set via `DistributedUtils` and the device stores in
+4. Older (now non-existent) `Lux.gpu` implementations used to "just work" with `FluxMPI.jl`.
+   We expect [`gpu_device`](@ref) to continue working as expected; however, we recommend
+   using [`gpu_device`](@ref) after calling [`DistributedUtils.initialize`](@ref) to avoid
+   any mismatch between the device set via `DistributedUtils` and the device stored in
    `CUDADevice` or `AMDGPUDevice`.
 
 ## Known Shortcomings
diff --git a/docs/src/manual/freezing_model_parameters.md b/docs/src/manual/freezing_model_parameters.md
index 0d5258cfa..5f2f4055e 100644
--- a/docs/src/manual/freezing_model_parameters.md
+++ b/docs/src/manual/freezing_model_parameters.md
@@ -13,7 +13,7 @@ We can use [`Lux.Experimental.layer_map`](@ref) and freeze layers if they are of
 `Dense`.
 
 ```@example freezing_model_parameters
-using Lux, Functors, Random
+using Lux, Random
 
 rng = Xoshiro(0)
 
diff --git a/test/runtests.jl b/test/runtests.jl
index 5ac9215c0..a6160b119 100644
--- a/test/runtests.jl
+++ b/test/runtests.jl
@@ -93,7 +93,7 @@ const RETESTITEMS_NWORKERS = parse(
     @info "Running tests for group: [$(i)/$(length(LUX_TEST_GROUP))] $tag"
     ReTestItems.runtests(Lux; tags=(tag == "all" ? nothing : [Symbol(tag)]),
-        nworkers=RETESTITEMS_NWORKERS, testitem_timeout=1800, retries=1)
+        nworkers=RETESTITEMS_NWORKERS, testitem_timeout=2400, retries=1)
     end
 end
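
Note (not part of the patch, ignored by `git am`): since the updating-to-v1 hunk points users at `Functors.KeyPath` as the replacement for string inputs to `layer_map`, here is a minimal sketch of the migrated pattern. The model, layer names, and the `freeze_dense` helper are illustrative assumptions mirroring the freezing-model-parameters example this patch touches, not part of the diff itself.

```julia
# Sketch: post-v1 `Lux.Experimental.layer_map` usage, where the callback
# receives a `Functors.KeyPath` instead of a `String` path (assumed API).
using Lux, Functors, Random

# Illustrative model; any Lux model works here.
model = Chain(Dense(2 => 4, relu), Dense(4 => 1))
ps, st = Lux.setup(Xoshiro(0), model)

# Freeze the parameters of every `Dense` layer; leave other layers alone.
# `path` is a `Functors.KeyPath` identifying the layer's location in `model`.
function freeze_dense(layer::Dense, ps, st, path::KeyPath)
    return Lux.Experimental.freeze(layer, ps, st, (:weight, :bias))
end
freeze_dense(layer, ps, st, path) = (layer, ps, st)

model_frozen, ps_frozen, st_frozen =
    Lux.Experimental.layer_map(freeze_dense, model, ps, st)
```

The dispatch-based `freeze_dense` pair replaces the pre-v1 pattern of matching layers by string name.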