Added Boltz.jl to docs examples and fixed errors building documentation

nicholaskl97 committed Sep 11, 2024
1 parent e227356 · commit a12ed2c
Showing 3 changed files with 6 additions and 9 deletions.
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -1,4 +1,5 @@
 [deps]
+Boltz = "4544d5e4-abc5-4dea-817f-29e4c205d9c8"
 DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
4 changes: 0 additions & 4 deletions docs/src/demos/damped_SHO.md
@@ -88,8 +88,6 @@ prob = discretize(pde_system, discretization)
 ########################## Solve OptimizationProblem ##########################

 res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
-prob = Optimization.remake(prob, u0 = res.u)
-res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)

 ########################### Get numerical functions ##########################
 net = discretization.phi
@@ -206,8 +204,6 @@ prob = discretize(pde_system, discretization)
 using Optimization, OptimizationOptimisers, OptimizationOptimJL
 res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
-prob = Optimization.remake(prob, u0 = res.u)
-res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)
 net = discretization.phi
 θ = res.u.depvar
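For context, the lines deleted in both hunks above implemented a warm-started second optimization stage: run Adam first, then restart the problem from its result with BFGS. A minimal sketch of that pattern on a stand-alone objective (the Rosenbrock function below is purely illustrative; in the demo, `prob` instead comes from `discretize(pde_system, discretization)`):

```julia
using Optimization, OptimizationOptimisers, OptimizationOptimJL

# Stand-in objective so the sketch is self-contained; only the two-stage
# solve pattern below is what the deleted lines implemented.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])

# Stage 1: first-order pass with Adam
res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)

# Stage 2: warm-start quasi-Newton refinement from Adam's result
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)
```

After this commit, the demo keeps only the single Adam stage.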
10 changes: 5 additions & 5 deletions docs/src/demos/policy_search.md
@@ -13,7 +13,7 @@ We'll jointly train a neural controller ``\tau = u \left( \theta, \frac{d\theta}
 ## Copy-Pastable Code

 ```julia
-using NeuralPDE, Lux, ModelingToolkit, NeuralLyapunov
+using NeuralPDE, Lux, Boltz, ModelingToolkit, NeuralLyapunov
 import Optimization, OptimizationOptimisers, OptimizationOptimJL
 using Random

@@ -55,7 +55,7 @@ dim_phi = 3
 dim_u = 1
 dim_output = dim_phi + dim_u
 chain = [Lux.Chain(
-    PeriodicEmbedding([1], [2π]),
+    Boltz.Layers.PeriodicEmbedding([1], [2π]),
     Dense(3, dim_hidden, tanh),
     Dense(dim_hidden, dim_hidden, tanh),
     Dense(dim_hidden, 1)
@@ -179,7 +179,7 @@ Other than that, setting up the neural network using Lux and NeuralPDE training
 For more on that aspect, see the [NeuralPDE documentation](https://docs.sciml.ai/NeuralPDE/stable/).

 ```@example policy_search
-using Lux
+using Lux, Boltz
 # Define neural network discretization
 # We use an input layer that is periodic with period 2π with respect to θ
@@ -189,7 +189,7 @@ dim_phi = 3
 dim_u = 1
 dim_output = dim_phi + dim_u
 chain = [Lux.Chain(
-    PeriodicEmbedding([1], [2π]),
+    Boltz.Layers.PeriodicEmbedding([1], [2π]),
     Dense(3, dim_hidden, tanh),
     Dense(dim_hidden, dim_hidden, tanh),
     Dense(dim_hidden, 1)
@@ -384,7 +384,7 @@ Now, let's simulate the closed-loop dynamics to verify that the controller can g
 First, we'll start at the downward equilibrium:

 ```@example policy_search
-state_order = map(st -> SymbolicUtils.iscall(st) ? operation(st) : st, state_order)
+state_order = map(st -> SymbolicUtils.isterm(st) ? operation(st) : st, state_order)
 state_syms = Symbol.(state_order)
 closed_loop_dynamics = ODEFunction(
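The only behavioral change to the network definition above is the layer's namespace: the bare `PeriodicEmbedding` is now qualified as `Boltz.Layers.PeriodicEmbedding`. A short sketch of the size bookkeeping the demo relies on: embedding the angle coordinate with period 2π expands it into a periodic feature pair, which is why the following `Dense` layer takes 3 inputs for a 2-dimensional state. (The single-vector input and RNG seed below are illustrative, not taken from the demo.)

```julia
using Lux, Boltz, Random

# Embed input dimension 1 (the angle θ) with period 2π; dimension 2
# (the angular velocity) passes through unchanged.
embedding = Boltz.Layers.PeriodicEmbedding([1], [2π])
ps, st = Lux.setup(Xoshiro(0), embedding)

# A 2-dimensional state (θ, ω) maps to a 3-dimensional feature vector,
# matching Dense(3, dim_hidden, tanh) as the next layer in the chain.
y, _ = embedding([0.0, 0.5], ps, st)
```

Because the embedding is 2π-periodic in θ, states that differ by a full revolution produce the same features, which is what the demo's comment about a periodic input layer refers to.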