Wrote documentation website #28

Merged · 34 commits · Jun 13, 2024

Commits
39f244a
Organized documentation
nicholaskl97 May 17, 2024
a67ca6b
Clarified documentation for `get_numerical_lyapunov_function`
nicholaskl97 May 17, 2024
7353da3
Updated documentation on Lyapunov function structures
nicholaskl97 May 17, 2024
e18ec57
Eliminated the grad V field from `NeuralLyapunovStructure`
nicholaskl97 May 17, 2024
67f0520
Added documentation for defining custom Lyapunov function structures
nicholaskl97 May 17, 2024
256deb9
Updated `LyapunovMinimizationCondition` documentation
nicholaskl97 May 21, 2024
d70582c
Added documentation of minimization conditions
nicholaskl97 May 21, 2024
1e4eba9
Refactored `LyapunovDecreaseCondition` and changed meaning of `streng…
nicholaskl97 May 24, 2024
4e95518
Updated decrease documentation
nicholaskl97 May 24, 2024
e1c55fc
Updated documentation for `local_lyapunov`
nicholaskl97 May 24, 2024
421113b
Updated documentation for RoA-aware training
nicholaskl97 May 24, 2024
0314e66
Added warning about symbolic tracing to documentation
nicholaskl97 May 27, 2024
1d6fa44
Added policy search documentation
nicholaskl97 May 27, 2024
ae4da03
Added damped SHO demo to docs
nicholaskl97 May 31, 2024
e036d63
Changed `[0.0, 0.0]` to `fixed_point` in SHO test
nicholaskl97 Jun 5, 2024
ac6460a
Sped up damped SHO test
nicholaskl97 Jun 6, 2024
574f53b
Sped up damped SHO documentation demo
nicholaskl97 Jun 6, 2024
20b725b
Fixed minimization condition documentation not showing when arguments…
nicholaskl97 Jun 6, 2024
c677934
Add default behavior to structure documentation
nicholaskl97 Jun 6, 2024
1b1b2a2
Added RoA estimation demo to documentation
nicholaskl97 Jun 6, 2024
8f40a91
Added link to `make_RoA_aware` docs in RoA estimation demo
nicholaskl97 Jun 6, 2024
44fed18
Fixed typo
nicholaskl97 Jun 6, 2024
af9edf7
Cleaned up test import statements
nicholaskl97 Jun 6, 2024
6660d34
Added setup to damped SHO and RoA estimation demos; cleaned up imports
nicholaskl97 Jun 6, 2024
895ce78
Removed symbolic discretization from demos
nicholaskl97 Jun 6, 2024
07ea727
Replaced `GridTraining` with `QuasiRandomTraining` in inverted pendul…
nicholaskl97 Jun 6, 2024
3116eee
Sped up inverted pendulum ODESystem test
nicholaskl97 Jun 10, 2024
a1135d2
Added policy search demo to docs
nicholaskl97 Jun 10, 2024
c34be64
Sped up damped SHO test by decreasing training iterations
nicholaskl97 Jun 11, 2024
7be0f1b
Increasing iterations in policy search test 2, since it fails on GitH…
nicholaskl97 Jun 11, 2024
2cdcc66
Wrote documentation homepage
nicholaskl97 Jun 12, 2024
8aacd7e
Formatting changes
nicholaskl97 Jun 12, 2024
db9e358
Homogenized and pruned test imports
nicholaskl97 Jun 12, 2024
fc41720
Create Documentation.yml
ChrisRackauckas Jun 13, 2024
18 changes: 18 additions & 0 deletions .github/workflows/Documentation.yml
@@ -0,0 +1,18 @@
name: "Documentation"

on:
  push:
    branches:
      - master
    tags: '*'
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref_name != github.event.repository.default_branch || github.ref != 'refs/tags/v*' }}

jobs:
  build-and-deploy-docs:
    name: "Documentation"
    uses: "SciML/.github/.github/workflows/documentation.yml@v1"
    secrets: "inherit"
8 changes: 8 additions & 0 deletions docs/Project.toml
@@ -1,3 +1,11 @@
[deps]
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
NeuralLyapunov = "61252e1c-87f0-426a-93a7-3cdb1ed5a867"
NeuralPDE = "315f7962-48a3-4962-8226-d0f33b1235f0"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
20 changes: 19 additions & 1 deletion docs/make.jl
@@ -4,6 +4,22 @@ using Documenter
DocMeta.setdocmeta!(
    NeuralLyapunov, :DocTestSetup, :(using NeuralLyapunov); recursive = true)

MANUAL_PAGES = [
    "man.md",
    "man/pdesystem.md",
    "man/minimization.md",
    "man/decrease.md",
    "man/structure.md",
    "man/roa.md",
    "man/policy_search.md",
    "man/local_lyapunov.md"
]
DEMONSTRATION_PAGES = [
    "demos/damped_SHO.md",
    "demos/roa_estimation.md",
    "demos/policy_search.md"
]

makedocs(;
    modules = [NeuralLyapunov],
    authors = "Nicholas Klugman <13633349+nicholaskl97@users.noreply.github.com> and contributors",
@@ -14,7 +30,9 @@ makedocs(;
        assets = String[]
    ),
    pages = [
-       "Home" => "index.md"
+       "Home" => "index.md",
+       "Manual" => vcat(MANUAL_PAGES, hide("man/internals.md")),
+       "Demonstrations" => DEMONSTRATION_PAGES
    ]
)
282 changes: 282 additions & 0 deletions docs/src/demos/damped_SHO.md
@@ -0,0 +1,282 @@
# Damped Simple Harmonic Oscillator

Let's train a neural network to prove the exponential stability of the damped simple harmonic oscillator (SHO).

The damped SHO is represented by the system of differential equations
```math
\begin{align*}
\frac{dx}{dt} &= v \\
\frac{dv}{dt} &= -2 \zeta \omega_0 v - \omega_0^2 x
\end{align*}
```
where ``x`` is the position, ``v`` is the velocity, ``t`` is time, and ``\zeta`` and ``\omega_0`` are the damping ratio and natural frequency, respectively.

We'll consider just the box domain ``x \in [-5, 5], v \in [-2, 2]``.

## Copy-Pastable Code

```julia
using NeuralPDE, Lux, NeuralLyapunov
using Optimization, OptimizationOptimisers, OptimizationOptimJL
using Random

Random.seed!(200)

######################### Define dynamics and domain ##########################

"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
ζ, ω_0 = p
pos = state[1]
vel = state[2]
vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0, 2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [, :ω_0]))

####################### Specify neural Lyapunov problem #######################

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
Dense(dim_state, dim_hidden, tanh),
Dense(dim_hidden, dim_hidden, tanh),
Dense(dim_hidden, 1)
) for _ in 1:dim_output]

# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy)

# Define neural Lyapunov structure
structure = NonnegativeNeuralLyapunov(
dim_output;
δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
# Damped SHO has exponential decrease at a rate of k = ζ * ω_0, so we train to certify that
decrease_condition = ExponentialDecrease(prod(p))

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
structure,
minimization_condition,
decrease_condition
)

############################# Construct PDESystem #############################

@named pde_system = NeuralLyapunovPDESystem(
dynamics,
lb,
ub,
spec;
p = p
)

######################## Construct OptimizationProblem ########################

prob = discretize(pde_system, discretization)

########################## Solve OptimizationProblem ##########################

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)

###################### Get numerical numerical functions ######################
net = discretization.phi
θ = res.u.depvar

V, V̇ = get_numerical_lyapunov_function(
net,
θ,
structure,
f,
fixed_point;
p = p
)
```

## Detailed description

In this example, we set the dynamics up as an `ODEFunction` and use a `SciMLBase.SymbolCache` to tell the resulting `PDESystem` what to call our state and parameter variables.

```@setup SHO
using Random
Random.seed!(200)
```

```@example SHO
using NeuralPDE # for ODEFunction and SciMLBase.SymbolCache
"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
ζ, ω_0 = p
pos = state[1]
vel = state[2]
vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0, 2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [:ζ, :ω_0]))
```
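
As an optional sanity check (not part of the original demo), we can simulate these dynamics with DifferentialEquations.jl, which is already among the documentation dependencies, and confirm that trajectories decay toward the fixed point:

```julia
using DifferentialEquations

# Integrate from an arbitrary initial condition; the damped SHO should
# spiral in toward the origin.
ode_prob = ODEProblem(dynamics, [1.0, 0.0], (0.0, 20.0), p)
sol = solve(ode_prob, Tsit5())
sol.u[end]  # expect a state near [0.0, 0.0]
```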

Setting up the neural network with Lux and the NeuralPDE training strategy is no different from any other physics-informed neural network problem.
For more on that aspect, see the [NeuralPDE documentation](https://docs.sciml.ai/NeuralPDE/stable/).

```@example SHO
using Lux
# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
             Dense(dim_state, dim_hidden, tanh),
             Dense(dim_hidden, dim_hidden, tanh),
             Dense(dim_hidden, 1)
         ) for _ in 1:dim_output]
```

```@example SHO
# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy)
```

We now define our Lyapunov candidate structure along with the form of the Lyapunov conditions we'll be using.

For this example, let's use a Lyapunov candidate
```math
V(x) = \lVert \phi(x) \rVert^2 + \delta \log \left( 1 + \lVert x \rVert^2 \right),
```
which structurally enforces nonnegativity, but doesn't guarantee ``V([0, 0]) = 0``.
We therefore don't need a term in the loss function enforcing ``V(x) > 0 \, \forall x \ne 0``, but we do need something enforcing ``V([0, 0]) = 0``.
So, we use [`DontCheckNonnegativity(check_fixed_point = true)`](@ref).

To train for exponential decrease we use [`ExponentialDecrease`](@ref), but we must specify the rate of exponential decrease, which we know in this case to be ``\zeta \omega_0``.
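
As a brief justification of that rate (a standard linear-systems calculation, not spelled out in the demo itself): the damped SHO is linear, so its characteristic polynomial gives the eigenvalues directly,
```math
\lambda^2 + 2 \zeta \omega_0 \lambda + \omega_0^2 = 0
\quad \implies \quad
\lambda = -\zeta \omega_0 \pm \omega_0 \sqrt{\zeta^2 - 1}.
```
For ``\zeta = 0.5 < 1``, both eigenvalues have real part ``-\zeta \omega_0``, so trajectories decay like ``e^{-\zeta \omega_0 t}``.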

```@example SHO
using NeuralLyapunov
# Define neural Lyapunov structure
structure = NonnegativeNeuralLyapunov(
    dim_output;
    δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)
# Define Lyapunov decrease condition
# Damped SHO has exponential decrease at a rate of k = ζ * ω_0, so we train to certify that rate
decrease_condition = ExponentialDecrease(prod(p))
# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
    structure,
    minimization_condition,
    decrease_condition
)
# Construct PDESystem
@named pde_system = NeuralLyapunovPDESystem(
    dynamics,
    lb,
    ub,
    spec;
    p = p
)
```

Now, we solve the PDESystem using NeuralPDE the same way we would any PINN problem.

```@example SHO
prob = discretize(pde_system, discretization)
using Optimization, OptimizationOptimisers, OptimizationOptimJL
res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 500)
net = discretization.phi
θ = res.u.depvar
```
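
The two-stage scheme, a stochastic Adam pass followed by a quasi-Newton BFGS polish, is a common PINN training pattern. If you want to see where the polish ended up, the final loss is stored on the solution object (a small illustrative check; `objective` is the standard field on an Optimization.jl solution):

```julia
res.objective  # final value of the physics-informed loss after BFGS
```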

We can use the result of the optimization problem to build the Lyapunov candidate as a Julia function.

```@example SHO
V, V̇ = get_numerical_lyapunov_function(
    net,
    θ,
    structure,
    f,
    fixed_point;
    p = p
)
```
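
As a quick spot check (illustrative, not part of the original demo; `V` and `V̇` accept single state vectors, as the later calls on `fixed_point` show), a successful candidate should be positive away from the origin with a negative derivative:

```julia
# Hypothetical sample state [1.0, 0.0]: expect V > 0 and V̇ < 0 here
V([1.0, 0.0]), V̇([1.0, 0.0])
```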

Now let's see how we did.
We'll evaluate both ``V`` and ``\dot{V}`` on a ``101 \times 101`` grid:

```@example SHO
Δx = (ub[1] - lb[1]) / 100
Δv = (ub[2] - lb[2]) / 100
xs = lb[1]:Δx:ub[1]
vs = lb[2]:Δv:ub[2]
states = Iterators.map(collect, Iterators.product(xs, vs))
V_samples = vec(V(hcat(states...)))
V̇_samples = vec(V̇(hcat(states...)))
# Print statistics
V_min, i_min = findmin(V_samples)
state_min = collect(states)[i_min]
V_min, state_min = if V(fixed_point) ≤ V_min
    V(fixed_point), fixed_point
else
    V_min, state_min
end
println("V(0.,0.) = ", V(fixed_point))
println("V ∋ [", V_min, ", ", maximum(V_samples), "]")
println("Minimal sample of V is at ", state_min)
println(
    "V̇ ∋ [",
    minimum(V̇_samples),
    ", ",
    max(V̇(fixed_point), maximum(V̇_samples)),
    "]",
)
```

At least at these validation samples, the conditions that ``\dot{V}`` be negative semi-definite and ``V`` be minimized at the origin are nearly satisfied.

```@example SHO
using Plots
p1 = plot(xs, vs, V_samples, linetype = :contourf, title = "V", xlabel = "x", ylabel = "v");
p1 = scatter!([0], [0], label = "Equilibrium");
p2 = plot(
    xs,
    vs,
    V̇_samples,
    linetype = :contourf,
    title = "V̇",
    xlabel = "x",
    ylabel = "v",
);
p2 = scatter!([0], [0], label = "Equilibrium");
plot(p1, p2)
```

Each sublevel set of ``V`` completely contained in the plot above has been verified as a subset of the region of attraction.
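
If you want a concrete number out of those samples, one rough estimate (a sketch only: it assumes `V_samples` follows the `Iterators.product(xs, vs)` ordering used above, and a full region-of-attraction estimate would also check ``\dot{V} < 0`` on the set, as in the RoA estimation demo) is the minimum of ``V`` over the boundary of the box, since any sublevel set below that value cannot touch the boundary:

```julia
# Reshape the samples back onto the grid: entry [i, j] ↔ (xs[i], vs[j])
V_grid = reshape(V_samples, length(xs), length(vs))
# Minimum of V over the four edges of the box bounds the largest
# sublevel set contained in the plotted domain
c = minimum(vcat(V_grid[1, :], V_grid[end, :], V_grid[:, 1], V_grid[:, end]))
println("Sublevel sets { x : V(x) ≤ c } with c < ", c, " lie inside the domain")
```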