diff --git a/docs/Project.toml b/docs/Project.toml
index 994ec2ee4..9d6f2f299 100644
--- a/docs/Project.toml
+++ b/docs/Project.toml
@@ -29,6 +29,7 @@ SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
 SciMLSensitivity = "1ed8b502-d754-442c-8d5d-10ac956f44a1"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
+Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 
 [compat]
 CSV = "0.10"
@@ -58,3 +59,4 @@ ReverseDiff = "1.14"
 SciMLBase = "1.72"
 SciMLSensitivity = "7.11"
 StochasticDiffEq = "6.56"
+Zygote = "0.6.62"
\ No newline at end of file
diff --git a/docs/src/examples/hamiltonian_nn.md b/docs/src/examples/hamiltonian_nn.md
index d49a6a370..1e3d372d6 100644
--- a/docs/src/examples/hamiltonian_nn.md
+++ b/docs/src/examples/hamiltonian_nn.md
@@ -63,7 +63,7 @@ ylabel!("Momentum (p)")
 
 The HNN predicts the gradients ``(\dot q, \dot p)`` given ``(q, p)``. Hence, we generate the pairs ``(q, p)`` using the equations given at the top. Additionally, to supervise the training, we also generate the gradients. Next, we use Flux DataLoader for automatically batching our dataset.
 
 ```@example hamiltonian
-using Flux, DiffEqFlux, DifferentialEquations, Statistics, Plots, ReverseDiff
+using Flux, DiffEqFlux, DifferentialEquations, Statistics, Plots, ReverseDiff, Random, IterTools, Lux, ComponentArrays, Optimization
 t = range(0.0f0, 1.0f0, length = 1024)
 π_32 = Float32(π)
diff --git a/docs/src/examples/neural_sde.md b/docs/src/examples/neural_sde.md
index e70252c37..b507ff389 100644
--- a/docs/src/examples/neural_sde.md
+++ b/docs/src/examples/neural_sde.md
@@ -81,7 +81,8 @@ diffusion_dudt = Flux.Chain(Flux.Dense(2, 2))
 p2, re2 = Flux.destructure(diffusion_dudt)
 
 neuralsde = NeuralDSDE(drift_dudt, diffusion_dudt, tspan, SOSRI(),
-                       saveat = tsteps, reltol = 1e-1, abstol = 1e-1)
+                       saveat = tsteps, reltol = 1e-1, abstol = 1e-1);
+nothing
 ```
 
 Let's see what that looks like:
diff --git a/docs/src/index.md b/docs/src/index.md
index dcb787273..cd9086074 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -50,7 +50,7 @@ using Flux, Tracker
 x = [0.8; 0.8]
 ann = Chain(Dense(2, 10, tanh), Dense(10, 1))
 p, re = Flux.destructure(ann)
-z = re(Float64(p))
+z = re(Float64.(p))
 ```
 
 While one may think this recreates the neural network to act in `Float64` precision, [it does not](https://github.com/FluxML/Flux.jl/pull/2156)
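A note on the last hunk, for anyone reviewing the `Float64(p)` → `Float64.(p)` change: `p` from `Flux.destructure` is a parameter vector, and `Float64(p)` attempts to construct a single `Float64` from the whole vector, which throws a `MethodError`. The dotted form broadcasts the conversion element-wise. A minimal sketch of the distinction, independent of Flux (the variable names here are illustrative, not from the docs):

```julia
# A Float32 parameter vector, as Flux.destructure would return.
p = Float32[0.5, -1.25, 2.0]

# Float64(p) would throw a MethodError: there is no constructor that
# turns a Vector into a single Float64.

# Float64.(p) broadcasts the constructor over each element instead,
# producing a Vector{Float64} with the same values.
q = Float64.(p)

@show eltype(q)   # Float64
@show q == p      # element-wise equal after promotion
```

Note that, as the linked Flux PR discusses, converting `p` this way still does not by itself make the restructured network compute in `Float64`; the docs text after this hunk makes that point.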