Commit: Replace ADAM with Adam etc

Signed-off-by: ErikQQY <2283984853@qq.com>
ErikQQY committed Jul 26, 2023
1 parent 94e4092 commit 51ad636
Showing 20 changed files with 73 additions and 73 deletions.
8 changes: 4 additions & 4 deletions docs/src/examples/augmented_neural_ode.md
@@ -78,7 +78,7 @@ cb = function()
end
model, parameters = construct_model(1, 2, 64, 0)
- opt = ADAM(0.005)
+ opt = Adam(0.005)
println("Training Neural ODE")
@@ -89,7 +89,7 @@ end
plt_node = plot_contour(model)
model, parameters = construct_model(1, 2, 64, 1)
- opt = ADAM(5f-3)
+ opt = Adam(5f-3)
println()
println("Training Augmented Neural ODE")
@@ -237,10 +237,10 @@ end

### Optimizer

- We use ADAM as the optimizer with a learning rate of 0.005
+ We use Adam as the optimizer with a learning rate of 0.005

```@example augneuralode
- opt = ADAM(5f-3)
+ opt = Adam(5f-3)
```
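One thing worth flagging in this hunk: the prose above quotes a learning rate of 0.005 while the code writes `5f-3`. These agree, since `5f-3` is simply Julia's `Float32` spelling of 5 × 10⁻³. A quick sanity check in plain Julia (no packages required):

```julia
# `5f-3` is a Float32 scientific-notation literal: 5 × 10^-3 in 32-bit precision,
# i.e. exactly the 0.005 learning rate quoted in the surrounding prose.
@assert 5f-3 == 0.005f0
@assert 5f-3 isa Float32
```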

## Training the Neural ODE
8 changes: 4 additions & 4 deletions docs/src/examples/collocation.md
@@ -55,7 +55,7 @@ adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentArray(pinit))
- result_neuralode = Optimization.solve(optprob, ADAM(0.05), callback = callback, maxiters = 10000)
+ result_neuralode = Optimization.solve(optprob, Adam(0.05), callback = callback, maxiters = 10000)
prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)
nn_sol, st = prob_neuralode(u0, result_neuralode.u, st)
@@ -78,7 +78,7 @@ optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentArray(pinit))
numerical_neuralode = Optimization.solve(optprob,
- ADAM(0.05),
+ Adam(0.05),
callback = callback,
maxiters = 300)
@@ -153,7 +153,7 @@ adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentArray(pinit))
- result_neuralode = Optimization.solve(optprob, ADAM(0.05), callback = callback, maxiters = 10000)
+ result_neuralode = Optimization.solve(optprob, Adam(0.05), callback = callback, maxiters = 10000)
prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)
nn_sol, st = prob_neuralode(u0, result_neuralode.u, st)
@@ -182,7 +182,7 @@ optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentArray(pinit))
numerical_neuralode = Optimization.solve(optprob,
- ADAM(0.05),
+ Adam(0.05),
callback = callback,
maxiters = 300)
4 changes: 2 additions & 2 deletions docs/src/examples/hamiltonian_nn.md
@@ -31,7 +31,7 @@ hnn = HamiltonianNN(Lux.Chain(Lux.Dense(2, 64, relu), Lux.Dense(64, 1)))
ps, st = Lux.setup(Random.default_rng(), hnn)
ps_c = ps |> ComponentArray
- opt = ADAM(0.01f0)
+ opt = Adam(0.01f0)
function loss_function(ps, data, target)
pred, st_ = hnn(data, ps, st)
@@ -90,7 +90,7 @@ hnn = HamiltonianNN(Lux.Chain(Lux.Dense(2, 64, relu), Lux.Dense(64, 1)))
ps, st = Lux.setup(Random.default_rng(), hnn)
ps_c = ps |> ComponentArray
- opt = ADAM(0.01f0)
+ opt = Adam(0.01f0)
function loss_function(ps, data, target)
pred, st_ = hnn(data, ps, st)
6 changes: 3 additions & 3 deletions docs/src/examples/mnist_conv_neural_ode.md
@@ -100,7 +100,7 @@ loss(x, y) = logitcrossentropy(model(x), y)
# burn in loss
loss(img, lab)

- opt = ADAM(0.05)
+ opt = Adam(0.05)
iter = 0

callback() = begin
@@ -332,10 +332,10 @@ loss(img, lab)

#### Optimizer

- `ADAM` is specified here as our optimizer with a **learning rate of 0.05**:
+ `Adam` is specified here as our optimizer with a **learning rate of 0.05**:

```julia
- opt = ADAM(0.05)
+ opt = Adam(0.05)
```

#### CallBack
6 changes: 3 additions & 3 deletions docs/src/examples/mnist_neural_ode.md
@@ -90,7 +90,7 @@ loss(x, y) = logitcrossentropy(model(x), y)
# burn in loss
loss(img, lab)

- opt = ADAM(0.05)
+ opt = Adam(0.05)
iter = 0

callback() = begin
@@ -316,10 +316,10 @@ loss(img, lab)

#### Optimizer

- `ADAM` is specified here as our optimizer with a **learning rate of 0.05**:
+ `Adam` is specified here as our optimizer with a **learning rate of 0.05**:

```julia
- opt = ADAM(0.05)
+ opt = Adam(0.05)
```

#### CallBack
10 changes: 5 additions & 5 deletions docs/src/examples/neural_ode.md
@@ -67,7 +67,7 @@ optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, pinit)
result_neuralode = Optimization.solve(optprob,
- ADAM(0.05),
+ Adam(0.05),
callback = callback,
maxiters = 300)
@@ -169,7 +169,7 @@ callback(pinit, loss_neuralode(pinit)...)

We then train the neural network to learn the ODE.

- Here we showcase starting the optimization with `ADAM` to more quickly find a
+ Here we showcase starting the optimization with `Adam` to more quickly find a
minimum, and then honing in on the minimum by using `LBFGS`. By using the two
together, we can fit the neural ODE in 9 seconds! (Note, the timing
commented out the plotting). You can easily incorporate the procedure below to
@@ -182,20 +182,20 @@ The `x` and `p` variables in the optimization function are different from
the original problem, so `x_optimization` == `p_original`.

```@example neuralode
- # Train using the ADAM optimizer
+ # Train using the Adam optimizer
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, pinit)
result_neuralode = Optimization.solve(optprob,
- ADAM(0.05),
+ Adam(0.05),
callback = callback,
maxiters = 300)
```

We then complete the training using a different optimizer, starting from where
- `ADAM` stopped. We do `allow_f_increases=false` to make the optimization automatically
+ `Adam` stopped. We do `allow_f_increases=false` to make the optimization automatically
halt when near the minimum.

```@example neuralode
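# The remaining lines of this hunk are collapsed in the diff. As a rough,
# illustrative sketch (not the file's literal contents), the LBFGS refinement
# described above typically reuses `optf`, `result_neuralode`, and `callback`
# from earlier in the tutorial, and assumes an Optim-compatible backend
# (e.g. OptimizationOptimJL) is loaded:
optprob2 = Optimization.OptimizationProblem(optf, result_neuralode.u)
result_neuralode2 = Optimization.solve(optprob2, Optim.LBFGS(),
                                        callback = callback,
                                        allow_f_increases = false)
```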
4 changes: 2 additions & 2 deletions docs/src/examples/neural_ode_weather_forecast.md
@@ -14,7 +14,7 @@ using Dates
using Optimization
using ComponentArrays
using Lux
- using DiffEqFlux: NeuralODE, ADAMW, swish
+ using DiffEqFlux: NeuralODE, AdamW, swish
using DifferentialEquations
using CSV
using DataFrames
@@ -193,7 +193,7 @@ function train(t, y, obs_grid, maxiters, lr, rng, p=nothing, state=nothing; kwar
if state === nothing state = state_new end

p, state = train_one_round(
- node, p, state, y, ADAMW(lr), maxiters, rng;
+ node, p, state, y, AdamW(lr), maxiters, rng;
callback=log_results(ps, losses),
kwargs...
)
2 changes: 1 addition & 1 deletion docs/src/examples/neural_sde.md
@@ -161,7 +161,7 @@ smaller `n` and then decrease it after it has had some time to adjust towards
the right mean behavior:

```@example nsde
- opt = ADAM(0.025)
+ opt = Adam(0.025)
# First round of training with n = 10
adtype = Optimization.AutoZygote()
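# The rest of this hunk is collapsed in the diff. As a rough, illustrative
# sketch (not the file's literal contents), the staged training mentioned above
# usually follows the same two-round Optimization.jl pattern used elsewhere in
# these docs; `loss` and `pinit` are assumed from earlier steps, with `loss`
# closing over the current number of trajectories `n`:
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, pinit)
res1 = Optimization.solve(optprob, opt, maxiters = 100)

# Second round: warm-start from `res1.u`, typically with more trajectories
# and/or a smaller learning rate.
optprob2 = Optimization.OptimizationProblem(optf, res1.u)
res2 = Optimization.solve(optprob2, Adam(0.001), maxiters = 100)
```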
8 changes: 4 additions & 4 deletions docs/src/examples/normalizing_flows.md
@@ -38,7 +38,7 @@ optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ffjord_mdl.p)
res1 = Optimization.solve(optprob,
- ADAM(0.1),
+ Adam(0.1),
maxiters = 100,
callback=cb)
@@ -107,20 +107,20 @@ In this example, we wish to choose the parameters of the network such that the l

We then train the neural network to learn the distribution of `x`.

- Here we showcase starting the optimization with `ADAM` to more quickly find a minimum, and then honing in on the minimum by using `LBFGS`.
+ Here we showcase starting the optimization with `Adam` to more quickly find a minimum, and then honing in on the minimum by using `LBFGS`.

```@example cnf2
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ffjord_mdl.p)
res1 = Optimization.solve(optprob,
- ADAM(0.1),
+ Adam(0.1),
maxiters = 100,
callback=cb)
```

- We then complete the training using a different optimizer, starting from where `ADAM` stopped.
+ We then complete the training using a different optimizer, starting from where `Adam` stopped.

```@example cnf2
optprob2 = Optimization.OptimizationProblem(optf, res1.u)
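# The remaining lines of this hunk are collapsed in the diff. As a rough,
# illustrative sketch (not the file's literal contents), the LBFGS refinement
# described above typically looks like the following, reusing `cb` from earlier
# and assuming an Optim-compatible backend (e.g. OptimizationOptimJL) is loaded:
res2 = Optimization.solve(optprob2, Optim.LBFGS(),
                          maxiters = 100,
                          callback = cb,
                          allow_f_increases = false)
```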
6 changes: 3 additions & 3 deletions docs/src/examples/tensor_layer.md
@@ -81,16 +81,16 @@ function callback(θ,l)
end
```

- and we train the network using two rounds of `ADAM`:
+ and we train the network using two rounds of `Adam`:

```@example tensor
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p) -> loss_adjoint(x), adtype)
optprob = Optimization.OptimizationProblem(optf, α)
- res1 = Optimization.solve(optprob, ADAM(0.05), callback = callback, maxiters = 150)
+ res1 = Optimization.solve(optprob, Adam(0.05), callback = callback, maxiters = 150)
optprob2 = Optimization.OptimizationProblem(optf, res1.u)
- res2 = Optimization.solve(optprob2, ADAM(0.001), callback = callback,maxiters = 150)
+ res2 = Optimization.solve(optprob2, Adam(0.001), callback = callback,maxiters = 150)
opt = res2.u
```
