diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 4888898..7a0329f 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.3","generation_timestamp":"2024-05-10T01:25:33","documenter_version":"1.4.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.3","generation_timestamp":"2024-05-17T01:24:53","documenter_version":"1.4.1"}} \ No newline at end of file diff --git a/dev/DeepBSDE/index.html b/dev/DeepBSDE/index.html index d2e861d..d1e64a1 100644 --- a/dev/DeepBSDE/index.html +++ b/dev/DeepBSDE/index.html @@ -64,4 +64,4 @@ trajectories_lower, maxiters_limits ) -

Returns a PIDESolution object.

Arguments:

To use SDE algorithms, use DeepBSDE.

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.

References

+

Returns a PIDESolution object.

Arguments:

To use SDE algorithms, use DeepBSDE.

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.
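
A minimal sketch of this two-network setup (the constructor follows the DeepBSDE signature shown above; the layer sizes and optimizer are illustrative):

using HighDimPDE, Flux

d = 10        # dimension of the problem (illustrative)
hls = d + 50  # hidden layer size (illustrative)

# one network approximates the solution u(t0, x)...
u0 = Flux.Chain(Dense(d, hls, relu), Dense(hls, hls, relu), Dense(hls, 1))
# ...and a second network approximates its scaled gradient σᵀ∇u(t, x)
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu), Dense(hls, hls, relu), Dense(hls, d))

alg = DeepBSDE(u0, σᵀ∇u, opt = Flux.Optimise.Adam(0.01))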

References

diff --git a/dev/DeepSplitting/index.html b/dev/DeepSplitting/index.html index 149d8cf..8dee229 100644 --- a/dev/DeepSplitting/index.html +++ b/dev/DeepSplitting/index.html @@ -20,4 +20,4 @@ cuda_device, verbose_rate ) -> PIDESolution{_A, _B, _C, Vector{_A1}, Vector{Any}, Nothing} where {_A, _B, _C, _A1} -

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas, detailed below: a local Feynman-Kac representation of the PDE over small time intervals, and the reformulation of each local problem as a learning problem.

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial condition $u(0, x) = g(x)$, where $u \colon [0,T] \times \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$, one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t_n, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}.\]

One can therefore use Monte Carlo integration to approximate the expectation:

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + (t_{n+1} - t_n)\, f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \right]\]

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_{n+1}(x)$. It is advised to choose a neural network ${\bf u}_\theta \equiv NN_\theta$, since neural networks are universal approximators.

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ satisfying Eq. (2) over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_{n+1}}) - \left[ f(t_n, X_{T - t_{n}}, {\bf u}_{n}(X_{T - t_{n}}))(t_{n+1} - t_{n}) + {\bf u}_{n}(X_{T - t_{n}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.

Tip

To solve with DeepSplitting, one additionally needs to provide the time step dt to solve.

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ with the argument x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

More generally, one may want to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. HighDimPDE.jl offers this through the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^\xi)ds + \int_0^t\sigma(X_s^\xi)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration.

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo integration points through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).

References

+

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas, detailed below: a local Feynman-Kac representation of the PDE over small time intervals, and the reformulation of each local problem as a learning problem.

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial condition $u(0, x) = g(x)$, where $u \colon [0,T] \times \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$, one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t_n, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}.\]

One can therefore use Monte Carlo integration to approximate the expectation:

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + (t_{n+1} - t_n)\, f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \right]\]
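
For intuition, a generic Monte Carlo sketch of such an expectation (plain Julia, not HighDimPDE.jl's internals):

using Statistics

# Monte Carlo estimate of E[h(X)] from simulated samples of X
mc_estimate(h, samples) = mean(h.(samples))

# example: E[X^2] = 1 for X ~ N(0, 1)
mc_estimate(x -> x^2, randn(100_000))  # ≈ 1.0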

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_{n+1}(x)$. It is advised to choose a neural network ${\bf u}_\theta \equiv NN_\theta$, since neural networks are universal approximators.

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ satisfying Eq. (2) over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_{n+1}}) - \left[ f(t_n, X_{T - t_{n}}, {\bf u}_{n}(X_{T - t_{n}}))(t_{n+1} - t_{n}) + {\bf u}_{n}(X_{T - t_{n}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.
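
As a schematic illustration of one such minimization step (a sketch with Flux, not HighDimPDE.jl's internal training loop; the target array stands in for the bracketed term of the loss above):

using Flux

nn = Chain(Dense(10, 32, tanh), Dense(32, 1))   # candidate u^θ
opt_state = Flux.setup(Adam(1e-2), nn)

X = randn(Float32, 10, 1000)       # batch of particle positions
target = randn(Float32, 1, 1000)   # placeholder for the bracketed target term

grads = Flux.gradient(m -> Flux.mse(m(X), target), nn)
Flux.update!(opt_state, nn, grads[1])   # one stochastic gradient descent step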

Tip

To solve with DeepSplitting, one additionally needs to provide the time step dt to solve.

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ with the argument x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

More generally, one may want to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. HighDimPDE.jl offers this through the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^\xi)ds + \int_0^t\sigma(X_s^\xi)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.
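
For instance, with μ, σ, g, f and tspan as in the pointwise example above, a uniform sampler over $[-1/2, 1/2]^d$ can be constructed as in the getting-started tutorial:

d = 10
x = fill(0f0, d)  # anchor point, still required
x0_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))  # ξ ~ U([-1/2, 1/2]^d)
prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)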

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration.

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo integration points through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).
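
For example, to integrate uniformly over $\Omega = [-1/2, 1/2]^d$ with $K = 10$ points (values illustrative, reusing nn and opt from the call above):

mc_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))  # or, e.g., NormalSampling(1f-1, true)
alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 10)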

References

diff --git a/dev/Feynman_Kac/index.html b/dev/Feynman_Kac/index.html index 0fc32ac..1b2bfec 100644 --- a/dev/Feynman_Kac/index.html +++ b/dev/Feynman_Kac/index.html @@ -7,4 +7,4 @@ v(\tau, x) &= \int_{-\tau}^0 \mathbb{E} \left[ f(X^x_{s + \tau}, v(s + T, X^x_{s + \tau}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= - \int_{\tau}^0 \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= \int_{0}^\tau \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]. -\end{aligned}\]

This leads to the following result.

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial condition $u(0, x) = g(x)$, where $u \colon [0,T] \times \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]

+\end{aligned}\]

This leads to the following result.

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial condition $u(0, x) = g(x)$, where $u \colon [0,T] \times \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]
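
To make the representation concrete, here is a hedged sketch estimating the second term $\mathbb{E}[u(0, X^x_t)]$ of Eq. (3) (i.e. the case $f \equiv 0$) by Euler-Maruyama simulation of the SDE above, for scalar $\mu$ and $\sigma$:

# Monte Carlo estimate of E[g(X_t^x)] via Euler–Maruyama (illustrative sketch)
function mc_terminal_expectation(g, μ, σ, x, t; dt = 1e-2, trajectories = 10_000)
    n = round(Int, t / dt)
    acc = 0.0
    for _ in 1:trajectories
        X = x
        for _ in 1:n
            X += μ(X) * dt + σ(X) * sqrt(dt) * randn()  # one Euler–Maruyama step
        end
        acc += g(X)
    end
    return acc / trajectories
end

# heat equation (μ = 0, σ = 1) with g(x) = x^2 has u(t, x) = x^2 + t
mc_terminal_expectation(x -> x^2, x -> 0.0, x -> 1.0, 0.0, 0.5)  # ≈ 0.5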

diff --git a/dev/MLP/index.html b/dev/MLP/index.html index 1f2f4b5..1597d75 100644 --- a/dev/MLP/index.html +++ b/dev/MLP/index.html @@ -16,4 +16,4 @@ u_L &= \sum_{l=1}^{L-1} \frac{1}{M^{L-l}}\sum_{i=1}^{M^{L-l}} \frac{1}{K}\sum_{j=1}^{K} \bigg[ f(X^{x,(l, i)}_{t - s_{(l, i)}}, Z^{(l,j)}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}), u(T-s_{l,i}, Z^{(l,j)})) + \\ &\qquad \mathbf{1}_\N(l) f(X^{x,(l, i)}_{t - s_{(l, i)}}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}))\bigg] + \frac{1}{M^{L}}\sum_i^{M^{L}} u(0, X^{x,(l, i)}_t)\\ -\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo integration points through the keywords mc_sample and K.

References

+\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo integration points through the keywords mc_sample and K.
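
A sketch of how these keywords enter the MLP constructor (assuming a non-local problem prob and integration bounds a, b as in the tutorials; M and L are the Monte Carlo and level parameters of the algorithm):

alg = MLP(M = 4, L = 4, K = 10, mc_sample = UniformSampling(a, b))
sol = solve(prob, alg, multithreading = true)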

References

diff --git a/dev/NNKolmogorov/index.html b/dev/NNKolmogorov/index.html index 37a8727..3fa7c56 100644 --- a/dev/NNKolmogorov/index.html +++ b/dev/NNKolmogorov/index.html @@ -14,4 +14,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains the solution $u$ of a PDE of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with initial condition given by g(x), or of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by g(x)

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]

+

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains the solution $u$ of a PDE of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with initial condition given by g(x), or of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by g(x)

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]
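
A sketch of the corresponding workflow (mirroring the NNKolmogorov tutorial; the problem prob, dimension d and SDE solver EM() are assumptions here):

using HighDimPDE, Flux, StochasticDiffEq

m = Chain(Dense(d, 32, elu), Dense(32, 1))   # network approximating x ↦ E[g(S_T^x)]
alg = NNKolmogorov(m, Flux.Optimisers.Adam(0.01))
sol = solve(prob, alg, EM(), dt = 0.01, dx = 0.0001, trajectories = 1000)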

diff --git a/dev/NNParamKolmogorov/index.html b/dev/NNParamKolmogorov/index.html index a204184..d4bafd8 100644 --- a/dev/NNParamKolmogorov/index.html +++ b/dev/NNParamKolmogorov/index.html @@ -20,4 +20,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains the solution $u$ of a parametric family of PDEs of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_\mu) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with initial condition given by g(x, γ_phi), or of the form

\[\partial_t u(t,x) = - \mu(t, x, \gamma_\mu) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with terminal condition given by g(x, γ_phi)

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_\phi)]\]

+

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains the solution $u$ of a parametric family of PDEs of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_\mu) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with initial condition given by g(x, γ_phi), or of the form

\[\partial_t u(t,x) = - \mu(t, x, \gamma_\mu) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with terminal condition given by g(x, γ_phi)

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_\phi)]\]
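
Once solved, the returned parametric solution can be queried at any $(t, x, \gamma)$ within the trained ranges; a sketch using the names of the NNParamKolmogorov tutorial (x_test is an illustrative test point):

x_test = randn(Float32, d, 1)  # illustrative test point
t_test = rand(tspan[1]:dt:tspan[2], 1, 1)
p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1)
sol.ufuns(x_test, t_test, p_sigma_test, nothing, nothing)  # γ_mu and γ_phi unused here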

diff --git a/dev/NNStopping/index.html b/dev/NNStopping/index.html index 71046db..d4d0328 100644 --- a/dev/NNStopping/index.html +++ b/dev/NNStopping/index.html @@ -10,4 +10,4 @@ ensemblealg, kwargs... ) -> NamedTuple{(:payoff, :stopping_time), <:Tuple{Any, Any}} -

Returns a NamedTuple with payoff and stopping_time

Arguments:

source

The general idea 💡

Similar to DeepSplitting and DeepBSDE, NNStopping evaluates the PDE through an underlying stochastic differential equation. Consider an obstacle PDE of the form:

\[\max\left\lbrace \partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x),\; g(t,x) - u(t,x) \right\rbrace = 0\]

Such PDEs commonly arise in the pricing of financial instruments that can be exercised before maturity, such as American options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X_t,t)\,dt + \sigma(X_t,t)\, dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup_{\tau}\,\mathbb{E}\left[g(X_\tau, \tau)\right]\]

Where Ï„ is the stopping (exercising) time. The goal is to retrieve both the optimal exercising strategy (Ï„) and the payoff.

We approximate each stopping decision with a neural network, in order to maximize the expected payoff.

+

Returns a NamedTuple with payoff and stopping_time

Arguments:

source

The general idea 💡

Similar to DeepSplitting and DeepBSDE, NNStopping evaluates the PDE through an underlying stochastic differential equation. Consider an obstacle PDE of the form:

\[\max\left\lbrace \partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x),\; g(t,x) - u(t,x) \right\rbrace = 0\]

Such PDEs commonly arise in the pricing of financial instruments that can be exercised before maturity, such as American options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X_t,t)\,dt + \sigma(X_t,t)\, dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup_{\tau}\,\mathbb{E}\left[g(X_\tau, \tau)\right]\]

Where Ï„ is the stopping (exercising) time. The goal is to retrieve both the optimal exercising strategy (Ï„) and the payoff.

We approximate each stopping decision with a neural network, in order to maximize the expected payoff.
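
A sketch of this setup, one network per stopping decision with a sigmoid output (sizes illustrative; cf. the NNStopping tutorial):

using HighDimPDE, Flux

d, N = 3, 50  # dimension and number of time steps (illustrative)
models = [Chain(Dense(d + 1, 32, tanh), Dense(32, 1, sigmoid)) for _ in 1:N]
alg = NNStopping(models, Flux.Optimisers.Adam(0.01))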

diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml index 45c1766..bf75ae9 100644 --- a/dev/assets/Manifest.toml +++ b/dev/assets/Manifest.toml @@ -62,10 +62,10 @@ weakdeps = ["StaticArrays"] AdaptStaticArraysExt = "StaticArrays" [[deps.AliasTables]] -deps = ["Random"] -git-tree-sha1 = "82b912bb5215792fd33df26f407d064d3602af98" +deps = ["PtrArrays", "Random"] +git-tree-sha1 = "9876e1e164b144ca45e9e3198d0b689cadfed9ff" uuid = "66dad0bd-aa9a-41b7-9441-69ab47430ed8" -version = "1.1.2" +version = "1.1.3" [[deps.ArgCheck]] git-tree-sha1 = "a3a402a35a2f7e0b87828ccabbd5ebfbebe356b4" @@ -176,19 +176,20 @@ version = "0.5.0" [[deps.CPUSummary]] deps = ["CpuId", "IfElse", "PrecompileTools", "Static"] -git-tree-sha1 = "601f7e7b3d36f18790e2caf83a882d88e9b71ff1" +git-tree-sha1 = "585a387a490f1c4bd88be67eea15b93da5e85db7" uuid = "2a0fbf3d-bb9c-48f3-b0a9-814d99fd7ab9" -version = "0.2.4" +version = "0.2.5" [[deps.CUDA]] deps = ["AbstractFFTs", "Adapt", "BFloat16s", "CEnum", "CUDA_Driver_jll", "CUDA_Runtime_Discovery", "CUDA_Runtime_jll", "Crayons", "DataFrames", "ExprTools", "GPUArrays", "GPUCompiler", "KernelAbstractions", "LLVM", "LLVMLoopInfo", "LazyArtifacts", "Libdl", "LinearAlgebra", "Logging", "NVTX", "Preferences", "PrettyTables", "Printf", "Random", "Random123", "RandomNumbers", "Reexport", "Requires", "SparseArrays", "StaticArrays", "Statistics"] -git-tree-sha1 = "4e33522a036b39fc6f5cb7447ae3b28eb8fbe99b" +git-tree-sha1 = "fe61a257e94621e25471071ca58d29ea45eef13b" uuid = "052768ef-5323-5732-b1bb-66c8b64840ba" -version = "5.3.3" -weakdeps = ["ChainRulesCore", "SpecialFunctions"] +version = "5.3.4" +weakdeps = ["ChainRulesCore", "EnzymeCore", "SpecialFunctions"] [deps.CUDA.extensions] ChainRulesCoreExt = "ChainRulesCore" + EnzymeCoreExt = "EnzymeCore" SpecialFunctionsExt = "SpecialFunctions" [[deps.CUDA_Driver_jll]] @@ -262,9 +263,9 @@ version = "0.11.5" [[deps.Colors]] deps = ["ColorTypes", "FixedPointNumbers", "Reexport"] -git-tree-sha1 = "fc08e5930ee9a4e03f84bfb5211cb54e7769758a" +git-tree-sha1 = "362a287c3aa50601b0bc359053d5c2468f0e7ce0" uuid = "5ae59095-9a9b-59fe-a467-6f913c188581" -version = "0.12.10" +version = "0.12.11" [[deps.CommonSolve]] git-tree-sha1 = "0eee5eb66b1cf62cd6ad1b460238e60e4b09400c" @@ -376,9 +377,9 @@ version = "1.9.1" [[deps.DiffEqBase]] deps = ["ArrayInterface", "ConcreteStructs", "DataStructures", "DocStringExtensions", "EnumX", "EnzymeCore", "FastBroadcast", "FastClosures", "ForwardDiff", "FunctionWrappers", "FunctionWrappersWrappers", "LinearAlgebra", "Logging", "Markdown", "MuladdMacro", "Parameters", "PreallocationTools", "PrecompileTools", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Static", "StaticArraysCore", "Statistics", "Tricks", "TruncatedStacktraces"] -git-tree-sha1 = "c8b0bdee28a1addddb7ab939365fe6543d7d2d0d" +git-tree-sha1 = "d520d3007de793f4fca16c77a25a9774ebe4ad6d" uuid = "2b5f629d-d688-5b77-993f-72d75c75574e" -version = "6.149.2" +version = "6.150.0" [deps.DiffEqBase.extensions] DiffEqBaseChainRulesCoreExt = "ChainRulesCore" @@ -503,18 +504,20 @@ version = "1.0.4" [[deps.Enzyme]] deps = ["CEnum", "EnzymeCore", "Enzyme_jll", "GPUCompiler", "LLVM", "Libdl", "LinearAlgebra", "ObjectFile", "Preferences", "Printf", "Random"] -git-tree-sha1 = "3fb48f9c18de1993c477457265b85130756746ae" +git-tree-sha1 = "5051d46f8795cd1adb16016f6b7145d42e98297f" uuid = "7da242da-08ed-463a-9acd-ee780be4f1d9" -version = "0.11.20" -weakdeps = ["SpecialFunctions"] +version = "0.12.6" +weakdeps = 
["ChainRulesCore", "SpecialFunctions", "StaticArrays"] [deps.Enzyme.extensions] + EnzymeChainRulesCoreExt = "ChainRulesCore" EnzymeSpecialFunctionsExt = "SpecialFunctions" + EnzymeStaticArraysExt = "StaticArrays" [[deps.EnzymeCore]] -git-tree-sha1 = "1bc328eec34ffd80357f84a84bb30e4374e9bd60" +git-tree-sha1 = "18394bc78ac2814ff38fe5e0c9dc2cd171e2810c" uuid = "f151be2c-9106-41f4-ab19-57ee4f262869" -version = "0.6.6" +version = "0.7.2" weakdeps = ["Adapt"] [deps.EnzymeCore.extensions] @@ -522,9 +525,9 @@ weakdeps = ["Adapt"] [[deps.Enzyme_jll]] deps = ["Artifacts", "JLLWrappers", "LazyArtifacts", "Libdl", "TOML"] -git-tree-sha1 = "32d418c804279c60dd38ac7868126696f3205a4f" +git-tree-sha1 = "117141562896ca38b1a13bc515dfd2728bd86e55" uuid = "7cc45869-7501-5eee-bdea-0790c847d4ef" -version = "0.0.102+0" +version = "0.0.109+0" [[deps.Expat_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] @@ -605,9 +608,9 @@ version = "2.23.1" [[deps.FixedPointNumbers]] deps = ["Statistics"] -git-tree-sha1 = "335bfdceacc84c5cdf16aadc768aa5ddfc5383cc" +git-tree-sha1 = "05882d6995ae5c12bb5f36dd2ed3f61c98cbb172" uuid = "53c48c17-4a7d-5ca2-90c5-79b7896eea93" -version = "0.8.4" +version = "0.8.5" [[deps.Flux]] deps = ["Adapt", "ChainRulesCore", "Compat", "Functors", "LinearAlgebra", "MLUtils", "MacroTools", "NNlib", "OneHotArrays", "Optimisers", "Preferences", "ProgressLogging", "Random", "Reexport", "SparseArrays", "SpecialFunctions", "Statistics", "Zygote"] @@ -678,9 +681,9 @@ version = "0.1.6" [[deps.GPUCompiler]] deps = ["ExprTools", "InteractiveUtils", "LLVM", "Libdl", "Logging", "Scratch", "TimerOutputs", "UUIDs"] -git-tree-sha1 = "a846f297ce9d09ccba02ead0cae70690e072a119" +git-tree-sha1 = "1600477fba37c9fc067b9be21f5e8101f24a8865" uuid = "61eb1bfa-7361-4325-ad38-22787b887f55" -version = "0.25.0" +version = "0.26.4" [[deps.GenericSchur]] deps = ["LinearAlgebra", "Printf"] @@ -834,9 +837,9 @@ version = "0.6.0" [[deps.KernelAbstractions]] deps = ["Adapt", "Atomix", "InteractiveUtils", "LinearAlgebra", "MacroTools", "PrecompileTools", "Requires", "SparseArrays", "StaticArrays", "UUIDs", "UnsafeAtomics", "UnsafeAtomicsLLVM"] -git-tree-sha1 = "ed7167240f40e62d97c1f5f7735dea6de3cc5c49" +git-tree-sha1 = "db02395e4c374030c53dc28f3c1d33dec35f7272" uuid = "63c18a36-062a-441e-b654-da1e3ab1ce7c" -version = "0.9.18" +version = "0.9.19" weakdeps = ["EnzymeCore"] [deps.KernelAbstractions.extensions] @@ -950,9 +953,9 @@ uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" [[deps.LinearSolve]] deps = ["ArrayInterface", "ChainRulesCore", "ConcreteStructs", "DocStringExtensions", "EnumX", "FastLapackInterface", "GPUArraysCore", "InteractiveUtils", "KLU", "Krylov", "LazyArrays", "Libdl", "LinearAlgebra", "MKL_jll", "Markdown", "PrecompileTools", "Preferences", "RecursiveFactorization", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Sparspak", "StaticArraysCore", "UnPack"] -git-tree-sha1 = "c55172df0d19b34db93c410cfcd79dbc3e52ba6f" +git-tree-sha1 = "efd815eaa56c0ffdf86581df5aaefb7e901323a0" uuid = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae" -version = "2.29.1" +version = "2.30.0" [deps.LinearSolve.extensions] LinearSolveBandedMatricesExt = "BandedMatrices" @@ -1108,9 +1111,9 @@ version = "4.5.1" [[deps.NNlib]] deps = ["Adapt", "Atomix", "ChainRulesCore", "GPUArraysCore", "KernelAbstractions", "LinearAlgebra", "Pkg", "Random", "Requires", "Statistics"] -git-tree-sha1 = "5055845dd316575ae2fc1f6dcb3545ff15fe547a" +git-tree-sha1 = "e0cea7ec219ada9ac80ec2e82e374ab2f154ae05" uuid = "872c559c-99b0-510c-b3b7-b6c96a88d5cd" 
-version = "0.9.14" +version = "0.9.16" [deps.NNlib.extensions] NNlibAMDGPUExt = "AMDGPU" @@ -1154,9 +1157,9 @@ version = "1.2.0" [[deps.NonlinearSolve]] deps = ["ADTypes", "ArrayInterface", "ConcreteStructs", "DiffEqBase", "FastBroadcast", "FastClosures", "FiniteDiff", "ForwardDiff", "LazyArrays", "LineSearches", "LinearAlgebra", "LinearSolve", "MaybeInplace", "PrecompileTools", "Preferences", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SimpleNonlinearSolve", "SparseArrays", "SparseDiffTools", "StaticArraysCore", "SymbolicIndexingInterface", "TimerOutputs"] -git-tree-sha1 = "4891b745bd621f88aac661f2504d014931b443ba" +git-tree-sha1 = "dc0d78eeed89323526203b8a11a4fa6cdbe25cd6" uuid = "8913a72c-1f9b-4ce2-8d82-65094dcecaec" -version = "3.10.0" +version = "3.11.0" [deps.NonlinearSolve.extensions] NonlinearSolveBandedMatricesExt = "BandedMatrices" @@ -1251,10 +1254,10 @@ uuid = "bac558e1-5e72-5ebc-8fee-abe8a469f55d" version = "1.6.3" [[deps.OrdinaryDiffEq]] -deps = ["ADTypes", "Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DocStringExtensions", "ExponentialUtilities", "FastBroadcast", "FastClosures", "FillArrays", "FiniteDiff", "ForwardDiff", "FunctionWrappersWrappers", "IfElse", "InteractiveUtils", "LineSearches", "LinearAlgebra", "LinearSolve", "Logging", "MacroTools", "MuladdMacro", "NonlinearSolve", "Polyester", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SimpleNonlinearSolve", "SimpleUnPack", "SparseArrays", "SparseDiffTools", "StaticArrayInterface", "StaticArrays", "TruncatedStacktraces"] -git-tree-sha1 = "cd8c4fb1cc88e65e27f92c7e714afc430cd1debc" +deps = ["ADTypes", "Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DocStringExtensions", "ExponentialUtilities", "FastBroadcast", "FastClosures", "FillArrays", "FiniteDiff", "ForwardDiff", "FunctionWrappersWrappers", "IfElse", "InteractiveUtils", "LineSearches", "LinearAlgebra", "LinearSolve", "Logging", "MacroTools", "MuladdMacro", "NonlinearSolve", "Polyester", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SciMLStructures", "SimpleNonlinearSolve", "SimpleUnPack", "SparseArrays", "SparseDiffTools", "StaticArrayInterface", "StaticArrays", "TruncatedStacktraces"] +git-tree-sha1 = "4cf03bfe9c6159f66b57cda85f169cd0eff0818d" uuid = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed" -version = "6.75.0" +version = "6.76.0" [[deps.PCRE2_jll]] deps = ["Artifacts", "Libdl"] @@ -1298,9 +1301,9 @@ version = "0.4.4" [[deps.Polyester]] deps = ["ArrayInterface", "BitTwiddlingConvenienceFunctions", "CPUSummary", "IfElse", "ManualMemory", "PolyesterWeave", "Requires", "Static", "StaticArrayInterface", "StrideArraysCore", "ThreadingUtilities"] -git-tree-sha1 = "2ba5f33cbb51a85ef58a850749492b08f9bf2193" +git-tree-sha1 = "b3e2bae88cf07baf0a051fe09666b8ef97aefe93" uuid = "f517fe37-dbe3-4b94-8317-1923a5111588" -version = "0.7.13" +version = "0.7.14" [[deps.PolyesterWeave]] deps = ["BitTwiddlingConvenienceFunctions", "CPUSummary", "IfElse", "Static", "ThreadingUtilities"] @@ -1363,6 +1366,11 @@ git-tree-sha1 = "80d919dee55b9c50e8d9e2da5eeafff3fe58b539" uuid = "33c8b6b6-d38a-422a-b730-caa89a2f386c" version = "0.1.4" +[[deps.PtrArrays]] +git-tree-sha1 = "077664975d750757f30e739c870fbbdc01db7913" +uuid = "43287f4e-b6f4-7ad1-bb20-aadabca52c3d" +version = "1.1.0" + [[deps.QuadGK]] deps = ["DataStructures", "LinearAlgebra"] git-tree-sha1 = "9b23c31e76e333e6fb4c1595ae6afa74966a729e" @@ 
-1403,9 +1411,9 @@ version = "1.3.4" [[deps.RecursiveArrayTools]] deps = ["Adapt", "ArrayInterface", "DocStringExtensions", "GPUArraysCore", "IteratorInterfaceExtensions", "LinearAlgebra", "RecipesBase", "SparseArrays", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "f599a896fb28043dd63a4d372231dfcbdd117394" +git-tree-sha1 = "758bc86b90e9fee2edc4af2a750b0d3f2d5c02c5" uuid = "731186ca-8d62-57ce-b412-fbd966d074cd" -version = "3.16.0" +version = "3.19.0" [deps.RecursiveArrayTools.extensions] RecursiveArrayToolsFastBroadcastExt = "FastBroadcast" @@ -1495,9 +1503,9 @@ version = "0.6.42" [[deps.SciMLBase]] deps = ["ADTypes", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", "SciMLOperators", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "397367599b9526a49cc06a4db70835807498b561" +git-tree-sha1 = "265f1a7a804d8093fa0b17e33e45373a77e56ca5" uuid = "0bca4576-84f4-4d90-8ffe-ffa030f20462" -version = "2.36.1" +version = "2.38.0" [deps.SciMLBase.extensions] SciMLBaseChainRulesCoreExt = "ChainRulesCore" @@ -1526,14 +1534,14 @@ version = "0.3.8" [[deps.SciMLSensitivity]] deps = ["ADTypes", "Adapt", "ArrayInterface", "ChainRulesCore", "DiffEqBase", "DiffEqCallbacks", "DiffEqNoiseProcess", "Distributions", "EllipsisNotation", "Enzyme", "FiniteDiff", "ForwardDiff", "FunctionProperties", "FunctionWrappersWrappers", "Functors", "GPUArraysCore", "LinearAlgebra", "LinearSolve", "Markdown", "OrdinaryDiffEq", "Parameters", "PreallocationTools", "QuadGK", "Random", "RandomNumbers", "RecursiveArrayTools", "Reexport", "ReverseDiff", "SciMLBase", "SciMLOperators", "SparseDiffTools", "StaticArrays", "StaticArraysCore", "Statistics", "StochasticDiffEq", "Tracker", "TruncatedStacktraces", "Zygote"] -git-tree-sha1 = "a7f777fff9cc15920e1e6c040c1e25b769760a8e" +git-tree-sha1 = "d3a211a19c01187a2818d581fd24593bd6469255" uuid = "1ed8b502-d754-442c-8d5d-10ac956f44a1" -version = "7.56.2" +version = "7.58.0" [[deps.SciMLStructures]] -git-tree-sha1 = "5833c10ce83d690c124beedfe5f621b50b02ba4d" +git-tree-sha1 = "d778a74df2f64059c38453b34abad1953b2b8722" uuid = "53ae85a6-f571-4167-b2af-e1d143709226" -version = "1.1.0" +version = "1.2.0" [[deps.Scratch]] deps = ["Dates"] @@ -1543,9 +1551,9 @@ version = "1.2.1" [[deps.SentinelArrays]] deps = ["Dates", "Random"] -git-tree-sha1 = "0e7508ff27ba32f26cd459474ca2ede1bc10991f" +git-tree-sha1 = "363c4e82b66be7b9f7c7c7da7478fdae07de44b9" uuid = "91c51154-3ec4-41a3-a24f-3f23e20d615c" -version = "1.4.1" +version = "1.4.2" [[deps.Serialization]] uuid = "9e88b42a-f829-5b0c-bbe9-9e923198166b" @@ -1679,9 +1687,9 @@ weakdeps = ["OffsetArrays", "StaticArrays"] [[deps.StaticArrays]] deps = ["LinearAlgebra", "PrecompileTools", "Random", "StaticArraysCore"] -git-tree-sha1 = "bf074c045d3d5ffd956fa0a461da38a44685d6b2" +git-tree-sha1 = "9ae599cd7529cfce7fea36cf00a62cfc56f0f37c" uuid = "90137ffa-7385-5640-81b9-e52037218182" -version = "1.9.3" +version = "1.9.4" weakdeps = ["ChainRulesCore", "Statistics"] [deps.StaticArrays.extensions] diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html index 50f6447..7a02e4a 100644 --- a/dev/getting_started/index.html +++ b/dev/getting_started/index.html @@ -17,7 +17,7 @@ ## 
Solving with multiple threads sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 0.967727048798521]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann boundary conditions on $\partial \Omega$.

using HighDimPDE
+u(x,t): [1.0, 0.9680111932757011]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann boundary conditions on $\partial \Omega$.

using HighDimPDE
 
 ## Definition of the problem
 d = 10 # dimension of the problem
@@ -35,7 +35,7 @@
 
 sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 1.2232695874016986]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
+u(x,t): [1.0, 1.2239476260731434]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
 using Flux # needed to define the neural network
 
 ## Definition of the problem
@@ -72,11 +72,11 @@
             maxiters = 1000,
             batch_size = 1000)
PIDESolution
 timespan: 0.0:0.09999988228082657:0.49999941140413284
-u(x,t): Float32[1.0, 0.9090527, 0.95790917, 0.99590546, 1.0465239, 1.0918268]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
+u(x,t): Float32[1.0, 0.90499204, 0.94682145, 1.0006437, 1.0455492, 1.0892645]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
             alg, 
             0.1, 
             verbose = true, 
             abstol = 2e-3,
             maxiters = 1000,
             batch_size = 1000,
-            use_cuda=true)
+ use_cuda=true) diff --git a/dev/index.html b/dev/index.html index eef678d..82d37f0 100644 --- a/dev/index.html +++ b/dev/index.html @@ -25,7 +25,7 @@ [1520ce14] AbstractTrees v0.4.5 [7d9f7c33] Accessors v0.1.36 [79e6a3ab] Adapt v4.0.4 - [66dad0bd] AliasTables v1.1.2 + [66dad0bd] AliasTables v1.1.3 [dce04be8] ArgCheck v2.3.0 [ec485272] ArnoldiMethod v0.4.0 [4fba245c] ArrayInterface v7.10.0 @@ -36,8 +36,8 @@ [9718e550] Baselet v0.1.1 [62783981] BitTwiddlingConvenienceFunctions v0.1.5 [fa961155] CEnum v0.5.0 - [2a0fbf3d] CPUSummary v0.2.4 - [052768ef] CUDA v5.3.3 + [2a0fbf3d] CPUSummary v0.2.5 + [052768ef] CUDA v5.3.4 ⌅ [1af6417a] CUDA_Runtime_Discovery v0.2.4 [49dc2e85] Calculus v0.5.1 [7057c7e9] Cassette v0.3.13 @@ -46,7 +46,7 @@ [fb6a15b2] CloseOpenIntervals v0.1.12 [944b1d66] CodecZlib v0.7.4 [3da002f7] ColorTypes v0.11.5 - [5ae59095] Colors v0.12.10 + [5ae59095] Colors v0.12.11 [38540f10] CommonSolve v0.2.4 [bbf7d656] CommonSubexpressions v0.3.0 [34da2185] Compat v4.15.0 @@ -62,7 +62,7 @@ [e2d170a0] DataValueInterfaces v1.0.0 [244e2a9f] DefineSingletons v0.1.2 [8bb1440f] DelimitedFiles v1.9.1 - [2b5f629d] DiffEqBase v6.149.2 + [2b5f629d] DiffEqBase v6.150.0 [459566f4] DiffEqCallbacks v3.6.2 [77a26b50] DiffEqNoiseProcess v5.21.0 [163ba53b] DiffResults v1.1.0 @@ -74,8 +74,8 @@ [fa6b7ba4] DualNumbers v0.6.8 [da5c29d0] EllipsisNotation v1.8.0 [4e289a0a] EnumX v1.0.4 -⌅ [7da242da] Enzyme v0.11.20 -⌅ [f151be2c] EnzymeCore v0.6.6 + [7da242da] Enzyme v0.12.6 + [f151be2c] EnzymeCore v0.7.2 [d4d017d3] ExponentialUtilities v1.26.1 [e2ba6199] ExprTools v0.1.10 [cc61a311] FLoops v0.2.1 @@ -85,7 +85,7 @@ [29a986be] FastLapackInterface v2.0.3 [1a297f60] FillArrays v1.11.0 [6a86dc24] FiniteDiff v2.23.1 - [53c48c17] FixedPointNumbers v0.8.4 + [53c48c17] FixedPointNumbers v0.8.5 [587475ba] Flux v0.14.15 [f6369f11] ForwardDiff v0.10.36 [f62d2435] FunctionProperties v0.1.2 @@ -94,7 +94,7 @@ [d9f16b24] Functors v0.4.10 [0c68f7d7] GPUArrays v10.1.0 [46192b85] GPUArraysCore v0.1.6 -⌅ [61eb1bfa] GPUCompiler v0.25.0 + [61eb1bfa] GPUCompiler v0.26.4 [c145ed77] GenericSchur v0.5.4 [d7ba0133] Git v1.3.1 [86223c79] Graphs v1.11.0 @@ -116,7 +116,7 @@ [b14d175d] JuliaVariables v0.2.4 [ccbc3e58] JumpProcesses v9.11.1 [ef3ab10e] KLU v0.6.0 - [63c18a36] KernelAbstractions v0.9.18 + [63c18a36] KernelAbstractions v0.9.19 [ba0b0d4f] Krylov v0.9.6 ⌅ [929cbde3] LLVM v6.6.3 [8b046642] LLVMLoopInfo v1.0.0 @@ -126,7 +126,7 @@ [5078a376] LazyArrays v1.10.0 [2d8b4e74] LevyArea v1.0.0 [d3d80556] LineSearches v7.2.0 - [7ed4a6bd] LinearSolve v2.29.1 + [7ed4a6bd] LinearSolve v2.30.0 [2ab3a3ac] LogExpFunctions v0.3.27 [bdcacae8] LoopVectorization v0.12.170 [d8e11817] MLStyle v0.4.17 @@ -141,24 +141,24 @@ [46d2c3a1] MuladdMacro v0.2.4 [d41bc354] NLSolversBase v7.8.3 [2774e3e8] NLsolve v4.5.1 - [872c559c] NNlib v0.9.14 + [872c559c] NNlib v0.9.16 [5da4648a] NVTX v0.3.4 [77ba4419] NaNMath v1.0.2 [71a1bf82] NameResolution v0.1.5 - [8913a72c] NonlinearSolve v3.10.0 + [8913a72c] NonlinearSolve v3.11.0 [d8793406] ObjectFile v0.4.1 [6fe1bfb0] OffsetArrays v1.14.0 [0b1bfda6] OneHotArrays v0.2.5 [429524aa] Optim v1.9.4 [3bd65402] Optimisers v0.3.3 [bac558e1] OrderedCollections v1.6.3 - [1dea7af3] OrdinaryDiffEq v6.75.0 + [1dea7af3] OrdinaryDiffEq v6.76.0 [90014a1f] PDMats v0.11.31 [65ce6f38] PackageExtensionCompat v1.0.2 [d96e819e] Parameters v0.12.3 [69de0a69] Parsers v2.8.1 [e409e4f3] PoissonRandom v0.4.4 - [f517fe37] Polyester v0.7.13 + [f517fe37] Polyester v0.7.14 [1d0040c9] PolyesterWeave v0.2.1 [2dfb63ee] PooledArrays v1.4.3 [85a6dd25] 
PositiveFactorizations v0.2.4 @@ -168,12 +168,13 @@ [8162dcfd] PrettyPrint v0.2.0 [08abe8d2] PrettyTables v2.3.1 [33c8b6b6] ProgressLogging v0.1.4 + [43287f4e] PtrArrays v1.1.0 [1fd47b50] QuadGK v2.9.4 [74087812] Random123 v1.7.0 [e6cf234a] RandomNumbers v1.5.3 [c1ae055f] RealDot v0.1.0 [3cdcf5f2] RecipesBase v1.3.4 - [731186ca] RecursiveArrayTools v3.16.0 + [731186ca] RecursiveArrayTools v3.19.0 [f2c3362d] RecursiveFactorization v0.2.23 [189a3867] Reexport v1.2.2 [2792f1a3] RegistryInstances v0.1.0 @@ -184,12 +185,12 @@ [7e49a35a] RuntimeGeneratedFunctions v0.5.13 [94e857df] SIMDTypes v0.1.0 [476501e8] SLEEFPirates v0.6.42 - [0bca4576] SciMLBase v2.36.1 + [0bca4576] SciMLBase v2.38.0 [c0aeaf25] SciMLOperators v0.3.8 - [1ed8b502] SciMLSensitivity v7.56.2 - [53ae85a6] SciMLStructures v1.1.0 + [1ed8b502] SciMLSensitivity v7.58.0 + [53ae85a6] SciMLStructures v1.2.0 [6c6a2e73] Scratch v1.2.1 - [91c51154] SentinelArrays v1.4.1 + [91c51154] SentinelArrays v1.4.2 [efcf1570] Setfield v1.1.1 [605ecd9f] ShowCases v0.1.0 [727e6d20] SimpleNonlinearSolve v1.8.0 @@ -203,7 +204,7 @@ [171d559e] SplittablesBase v0.1.15 [aedffcd0] Static v0.8.10 [0d7ed370] StaticArrayInterface v1.5.0 - [90137ffa] StaticArrays v1.9.3 + [90137ffa] StaticArrays v1.9.4 [1e83bf80] StaticArraysCore v1.4.2 [82ae8749] StatsAPI v1.7.0 [2913bbd2] StatsBase v0.34.3 @@ -235,7 +236,7 @@ [4ee394cb] CUDA_Driver_jll v0.8.1+0 ⌅ [76a88914] CUDA_Runtime_jll v0.12.1+0 [62b44479] CUDNN_jll v9.0.0+1 -⌅ [7cc45869] Enzyme_jll v0.0.102+0 +⌅ [7cc45869] Enzyme_jll v0.0.109+0 [2e619515] Expat_jll v2.6.2+0 [f8c6e375] Git_jll v2.44.0+2 [1d5cc7b8] IntelOpenMP_jll v2024.1.0+0 @@ -296,4 +297,4 @@ [8e850b90] libblastrampoline_jll v5.8.0+1 [8e850ede] nghttp2_jll v1.52.0+1 [3f19e933] p7zip_jll v17.4.0+2 -Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

+Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

diff --git a/dev/problems/index.html b/dev/problems/index.html index ff74b3b..5a2a951 100644 --- a/dev/problems/index.html +++ b/dev/problems/index.html @@ -31,4 +31,4 @@

Defines a Parabolic Partial Differential Equation of the form:

\[\begin{aligned} \frac{du}{dt} &= \tfrac{1}{2} \text{Tr}(\sigma \sigma^T) \Delta u(x, t) + \mu \nabla u(x, t) \\ &\quad + f(x, u(x, t), ( \nabla_x u )(x, t), p, t) -\end{aligned}\]

Arguments

Optional Arguments

source
Note

When choosing to define a PDE using PIDEProblem, note that the function being integrated, f, is a function f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If the PDE has no integral term and the non-linear term f is simply evaluated as f(x, v_x, ∇v_x), then we suggest using ParabolicPDEProblem.

+\end{aligned}\]

Arguments

Optional Arguments

source
Note

When choosing to define a PDE using PIDEProblem, note that the function being integrated, f, is a function f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If the PDE has no integral term and the non-linear term f is simply evaluated as f(x, v_x, ∇v_x), then we suggest using ParabolicPDEProblem.
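
For example, a hedged sketch of the two signatures (bodies illustrative, following the note above):

# non-local term for a PIDEProblem: y is the integration variable
f_pide(x, y, v_x, v_y, ∇v_x, ∇v_y) = max.(v_x, 0f0) .* (1f0 .- max.(v_y, 0f0))

# purely local non-linearity, better expressed as a ParabolicPDEProblem
f_parabolic(x, v_x, ∇v_x) = max.(v_x, 0f0) .* (1f0 .- max.(v_x, 0f0))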

diff --git a/dev/tutorials/deepbsde/index.html b/dev/tutorials/deepbsde/index.html index e5182ca..7414d29 100644 --- a/dev/tutorials/deepbsde/index.html +++ b/dev/tutorials/deepbsde/index.html @@ -67,4 +67,4 @@ Dense(hls,hls,relu), Dense(hls,d)) pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)

And now we solve the PDE. Here, we solve the underlying neural SDE using the Euler-Maruyama SDE solver with dt=0.2, run at most 150 iterations of the optimizer, use 100 SDE solves per loss evaluation (for averaging), and stop if the loss ever goes below 1f-6.

ans = solve(prob, pdealg, verbose=true, maxiters=150, trajectories=100,
-                            alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
+                            alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
diff --git a/dev/tutorials/deepsplitting/index.html b/dev/tutorials/deepsplitting/index.html index 2d5a9f6..0b59f6b 100644 --- a/dev/tutorials/deepsplitting/index.html +++ b/dev/tutorials/deepsplitting/index.html @@ -41,4 +41,4 @@ abstol = 2e-3, maxiters = 1000, batch_size = 1000, - use_cuda=true) + use_cuda=true) diff --git a/dev/tutorials/mlp/index.html b/dev/tutorials/mlp/index.html index 2fd407c..afc05a9 100644 --- a/dev/tutorials/mlp/index.html +++ b/dev/tutorials/mlp/index.html @@ -31,4 +31,4 @@ ## Definition of the algorithm alg = MLP(mc_sample = mc_sample ) -sol = solve(prob, alg, multithreading=true) +sol = solve(prob, alg, multithreading=true) diff --git a/dev/tutorials/nnkolmogorov/index.html b/dev/tutorials/nnkolmogorov/index.html index d7142d8..9165ab0 100644 --- a/dev/tutorials/nnkolmogorov/index.html +++ b/dev/tutorials/nnkolmogorov/index.html @@ -25,4 +25,4 @@ alg = NNKolmogorov(m, opt) m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1)) sol = solve(prob, alg, sdealg, verbose = true, dt = 0.01, - dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) + dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) diff --git a/dev/tutorials/nnparamkolmogorov/index.html b/dev/tutorials/nnparamkolmogorov/index.html index 1366ca2..1a3fe7a 100644 --- a/dev/tutorials/nnparamkolmogorov/index.html +++ b/dev/tutorials/nnparamkolmogorov/index.html @@ -43,4 +43,4 @@ p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1) t_test = rand(tspan[1]:dt:tspan[2], 1, 1) p_mu_test = nothing -p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
+p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
diff --git a/dev/tutorials/nnstopping/index.html b/dev/tutorials/nnstopping/index.html index d510da4..4ce9260 100644 --- a/dev/tutorials/nnstopping/index.html +++ b/dev/tutorials/nnstopping/index.html @@ -21,4 +21,4 @@ for i in 1:N]
Note

The number of models should be equal to the number of time discretization steps.

And finally we define our optimizer and algorithm, and call solve:

opt = Flux.Optimisers.Adam(0.01)
 alg = NNStopping(models, opt)
 
-sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)
+sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)