Minor fixes
avik-pal committed Nov 21, 2023
1 parent 1d79acf commit 683ac1b
Showing 9 changed files with 15 additions and 17 deletions.
docs/Project.toml (2 changes: 0 additions & 2 deletions)

@@ -5,7 +5,6 @@ ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
 DataDeps = "124859b0-ceae-595e-8997-d05f6a7a8dfe"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 DiffEqFlux = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
-DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
 Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
@@ -43,7 +42,6 @@ ComponentArrays = "0.13, 0.14, 0.15"
 DataDeps = "0.7"
 DataFrames = "1"
 DiffEqFlux = "3"
-DifferentialEquations = "7.6.0"
 Distances = "0.10.7"
 Distributions = "0.25.78"
 Documenter = "1"
docs/src/examples/GPUs.md (2 changes: 1 addition & 1 deletion)

@@ -8,7 +8,7 @@ For a detailed discussion on how GPUs need to be setup refer to
 [Lux Docs](https://lux.csail.mit.edu/stable/manual/gpu_management).
 
 ```julia
-using DifferentialEquations, Lux, LuxCUDA, SciMLSensitivity, ComponentArrays
+using OrdinaryDiffEq, Lux, LuxCUDA, SciMLSensitivity, ComponentArrays
 using Random
 rng = Random.default_rng()
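Like most hunks in this commit, this one swaps `DifferentialEquations` for `OrdinaryDiffEq`: the tutorials only use explicit solvers such as `Tsit5()`, which `OrdinaryDiffEq` provides directly, so the heavier umbrella package is unnecessary. A minimal sketch of the GPU pattern this tutorial builds on, assuming the `gpu_device` API from the linked Lux manual; the network and time span are illustrative stand-ins, not the file's full listing:

```julia
using OrdinaryDiffEq, Lux, LuxCUDA, SciMLSensitivity, ComponentArrays
using Random

rng = Random.default_rng()
gdev = gpu_device()  # Lux GPU management; selects CUDA when available

# Illustrative network; the tutorial's own model may differ.
model = Chain(Dense(2 => 50, tanh), Dense(50 => 2))
ps, st = Lux.setup(rng, model)
ps = ComponentArray(ps) |> gdev  # move parameters to the GPU
st = st |> gdev
u0 = Float32[2.0; 0.0] |> gdev

dudt(u, p, t) = first(model(u, p, st))
prob = ODEProblem(dudt, u0, (0.0f0, 1.0f0), ps)
sol = solve(prob, Tsit5(); saveat = 0.1f0)  # Tsit5 ships with OrdinaryDiffEq
```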
docs/src/examples/augmented_neural_ode.md (2 changes: 1 addition & 1 deletion)

@@ -3,7 +3,7 @@
 ## Copy-Pasteable Code
 
 ```@example augneuralode_cp
-using DiffEqFlux, DifferentialEquations, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
+using DiffEqFlux, OrdinaryDiffEq, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
 using MLUtils, ComponentArrays
 using Optimization, OptimizationOptimisers, IterTools
docs/src/examples/collocation.md (2 changes: 1 addition & 1 deletion)

@@ -96,7 +96,7 @@ us to get an estimate of the approximate noiseless dynamics:
 
 ```@example collocation
 using ComponentArrays,
-    Lux, DiffEqFlux, Optimization, OptimizationOptimisers, DifferentialEquations, Plots
+    Lux, DiffEqFlux, Optimization, OptimizationOptimisers, OrdinaryDiffEq, Plots
 using Random
 rng = Random.default_rng()
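The collocation tutorial referenced in this hunk smooths noisy observations and estimates their time derivatives without ever solving the ODE. A minimal sketch of the core call, using synthetic stand-in data (the real example uses a Lotka-Volterra trajectory):

```julia
using DiffEqFlux

# Synthetic noisy trajectory: 2 states sampled at 30 time points
tsteps = range(0.0f0, 1.5f0; length = 30)
data = Float32.(vcat(sin.(tsteps)', cos.(tsteps)')) .+ 0.05f0 .* randn(Float32, 2, 30)

# Kernel collocation: derivative estimates and smoothed states, no ODE solve
du_est, u_est = collocate_data(data, tsteps, EpanechnikovKernel())
```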
docs/src/examples/hamiltonian_nn.md (2 changes: 1 addition & 1 deletion)

@@ -92,7 +92,7 @@ dataloader = ncycle(((selectdim(data, 2, ((i - 1) * B + 1):(min(i * B, size(data
 We parameterize the HamiltonianNN with a small MultiLayered Perceptron. HNNs are trained by optimizing the gradients of the Neural Network. Zygote currently doesn't support nesting itself, so we will be using ForwardDiff in the training loop to compute the gradients of the HNN Layer for Optimization.
 
 ```@example hamiltonian
-hnn = HamiltonianNN(Chain(Dense(2 => 64, relu), Dense(64 => 1)); ad - AutoZygote())
+hnn = HamiltonianNN(Chain(Dense(2 => 64, relu), Dense(64 => 1)); ad = AutoZygote())
 ps, st = Lux.setup(Random.default_rng(), hnn)
 ps_c = ps |> ComponentArray
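This hunk is a genuine bug fix rather than a rename: in a Julia call, everything after the `;` must be a keyword assignment (or a splat), so `ad - AutoZygote()` is rejected as invalid keyword syntax instead of passing the AD backend. A minimal illustration with a hypothetical function `f`:

```julia
f(x; ad = nothing) = ad  # hypothetical function taking an `ad` keyword

f(1; ad = :zygote)    # ok: keyword assignment, returns :zygote
# f(1; ad - :zygote)  # error: invalid keyword argument syntax "ad - :zygote"
```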
docs/src/examples/multiple_shooting.md (2 changes: 1 addition & 1 deletion)

@@ -24,7 +24,7 @@ The following is a working demo, using Multiple Shooting:
 
 ```julia
 using ComponentArrays,
-    Lux, DiffEqFlux, Optimization, OptimizationPolyalgorithms, DifferentialEquations, Plots
+    Lux, DiffEqFlux, Optimization, OptimizationPolyalgorithms, OrdinaryDiffEq, Plots
 using DiffEqFlux: group_ranges
 
 using Random
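The demo named in this hunk splits the training trajectory into short overlapping segments and fits them jointly; the `group_ranges` helper imported above computes the segment index ranges. A small illustration (output per its docstring, with consecutive windows sharing endpoints):

```julia
using DiffEqFlux: group_ranges

datasize, groupsize = 30, 5
ranges = group_ranges(datasize, groupsize)
# => [1:5, 5:9, 9:13, ...]: each segment starts where the previous one ends
```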
docs/src/examples/neural_ode.md (6 changes: 3 additions & 3 deletions)

@@ -12,8 +12,8 @@ Before getting to the explanation, here's some code to start with. We will
 follow a full explanation of the definition and training process:
 
 ```@example neuralode_cp
-using ComponentArrays, Lux, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimJL, OptimizationOptimisers, Random, Plots
+using ComponentArrays, Lux, DiffEqFlux, OrdinaryDiffEq, Optimization, OptimizationOptimJL,
+    OptimizationOptimisers, Random, Plots
 rng = Random.default_rng()
 u0 = Float32[2.0; 0.0]
@@ -83,7 +83,7 @@ callback(result_neuralode2.u, loss_neuralode(result_neuralode2.u)...; doplot = t
 Let's get a time series array from a spiral ODE to train against.
 
 ```@example neuralode
-using ComponentArrays, Lux, DiffEqFlux, DifferentialEquations, Optimization,
+using ComponentArrays, Lux, DiffEqFlux, OrdinaryDiffEq, Optimization,
     OptimizationOptimJL, OptimizationOptimisers, Random, Plots
 rng = Random.default_rng()
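Only the `using` lines change here; the tutorial's solver calls are untouched because algorithms such as `Tsit5()` are defined in `OrdinaryDiffEq`. A condensed sketch of the neural ODE setup these imports feed into (layer sizes and time grid abbreviated from the tutorial, so treat them as illustrative):

```julia
using ComponentArrays, Lux, DiffEqFlux, OrdinaryDiffEq, Random

rng = Random.default_rng()
u0 = Float32[2.0; 0.0]
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2]; length = 25)

# Neural network defining the ODE right-hand side
dudt = Chain(x -> x .^ 3, Dense(2 => 50, tanh), Dense(50 => 2))
prob_neuralode = NeuralODE(dudt, tspan, Tsit5(); saveat = tsteps)

ps, st = Lux.setup(rng, prob_neuralode)
pred, _ = prob_neuralode(u0, ComponentArray(ps), st)
```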
docs/src/examples/normalizing_flows.md (12 changes: 6 additions & 6 deletions)

@@ -8,8 +8,8 @@ Before getting to the explanation, here's some code to start with. We will
 follow a full explanation of the definition and training process:
 
 ```@example cnf
-using ComponentArrays, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimisers, OptimizationOptimJL, Distributions, Random
+using ComponentArrays, DiffEqFlux, OrdinaryDiffEq, Optimization, Distributions,
+    Random, OptimizationOptimisers, OptimizationOptimJL
 nn = Chain(Dense(1, 3, tanh), Dense(3, 1, tanh))
 tspan = (0.0f0, 10.0f0)
@@ -37,7 +37,7 @@ adtype = Optimization.AutoForwardDiff()
 optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
 optprob = Optimization.OptimizationProblem(optf, ps)
-res1 = Optimization.solve(optprob, Adam(0.1); maxiters = 20, callback = cb)
+res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 20, callback = cb)
 
 optprob2 = Optimization.OptimizationProblem(optf, res1.u)
 res2 = Optimization.solve(optprob2, Optim.LBFGS(); allow_f_increases = false,
@@ -62,8 +62,8 @@ new_data = rand(ffjord_dist, 100)
 We can use DiffEqFlux.jl to define, train and output the densities computed by CNF layers. In the same way as a neural ODE, the layer takes a neural network that defines its derivative function (see [1] for a reference). A possible way to define a CNF layer, would be:
 
 ```@example cnf2
-using ComponentArray, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimisers, OptimizationOptimJL, Distributions, Random
+using ComponentArrays, DiffEqFlux, OrdinaryDiffEq, Optimization, OptimizationOptimisers,
+    OptimizationOptimJL, Distributions, Random
 nn = Chain(Dense(1, 3, tanh), Dense(3, 1, tanh))
 tspan = (0.0f0, 10.0f0)
@@ -113,7 +113,7 @@ adtype = Optimization.AutoForwardDiff()
 optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
 optprob = Optimization.OptimizationProblem(optf, ps)
-res1 = Optimization.solve(optprob, Adam(0.1); maxiters = 20, callback = cb)
+res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 20, callback = cb)
 ```
 
 We then complete the training using a different optimizer, starting from where `Adam` stopped.
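Besides the import swap, both hunks in this file lower the Adam step size from 0.1 to 0.01. The training recipe the docs use is two-stage: a short, cheap Adam run followed by LBFGS restarted from Adam's result. A minimal sketch of that pattern, assuming `loss`, `ps`, and the callback `cb` defined earlier in the example:

```julia
using Optimization, OptimizationOptimisers, OptimizationOptimJL

adtype = Optimization.AutoForwardDiff()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ps)

# Stage 1: Adam with the corrected, smaller step size
res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 20, callback = cb)

# Stage 2: LBFGS refines locally, restarted from where Adam stopped
optprob2 = Optimization.OptimizationProblem(optf, res1.u)
res2 = Optimization.solve(optprob2, Optim.LBFGS();
    allow_f_increases = false, callback = cb)
```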
docs/src/examples/tensor_layer.md (2 changes: 1 addition & 1 deletion)

@@ -14,7 +14,7 @@ solvers in `DifferentialEquations`:
 
 ```@example tensor
 using ComponentArrays, DiffEqFlux, Optimization, OptimizationOptimisers,
-    DifferentialEquations, LinearAlgebra, Random
+    OrdinaryDiffEq, LinearAlgebra, Random
 k, α, β, γ = 1, 0.1, 0.2, 0.3
 tspan = (0.0, 10.0)
