From 012ecc80948018bdc9c2bd20cdb430855fa699ed Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Fri, 29 Mar 2024 01:24:04 +0000 Subject: [PATCH] build based on cf21830 --- dev/.documenter-siteinfo.json | 2 +- dev/DeepBSDE/index.html | 2 +- dev/DeepSplitting/index.html | 2 +- dev/Feynman_Kac/index.html | 2 +- dev/MLP/index.html | 2 +- dev/NNKolmogorov/index.html | 2 +- dev/NNParamKolmogorov/index.html | 2 +- dev/NNStopping/index.html | 2 +- dev/assets/Manifest.toml | 94 ++++++++++++++-------- dev/getting_started/index.html | 8 +- dev/index.html | 28 ++++--- dev/problems/index.html | 2 +- dev/tutorials/deepbsde/index.html | 2 +- dev/tutorials/deepsplitting/index.html | 2 +- dev/tutorials/mlp/index.html | 2 +- dev/tutorials/nnkolmogorov/index.html | 2 +- dev/tutorials/nnparamkolmogorov/index.html | 2 +- dev/tutorials/nnstopping/index.html | 2 +- 18 files changed, 94 insertions(+), 66 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index f96fe78..60f4020 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-22T01:23:45","documenter_version":"1.3.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-29T01:24:00","documenter_version":"1.3.0"}} \ No newline at end of file diff --git a/dev/DeepBSDE/index.html b/dev/DeepBSDE/index.html index 8f1bb97..61098f2 100644 --- a/dev/DeepBSDE/index.html +++ b/dev/DeepBSDE/index.html @@ -64,4 +64,4 @@ trajectories_lower, maxiters_limits ) -

Returns a PIDESolution object.

Arguments:

To use SDE algorithms, use DeepBSDE

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.

References

+

Returns a PIDESolution object.

Arguments:

To use SDE algorithms, use DeepBSDE

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.
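
As an illustration, here is a minimal sketch of the two networks and the algorithm object, loosely following the DeepBSDE tutorial later in these docs; the dimension, layer sizes, and optimizer keyword are illustrative assumptions, not prescriptions.

using HighDimPDE, Flux

d = 10                                       # spatial dimension (illustrative)
hls = d + 10                                 # hidden layer size (illustrative)
u0 = Flux.Chain(Dense(d, hls, relu),
                Dense(hls, hls, relu),
                Dense(hls, 1))               # approximates the solution u(t0, x)
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, d))             # approximates the gradient term σᵀ∇u(t, x)
alg = DeepBSDE(u0, σᵀ∇u, opt = Flux.Adam(0.01))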

References

diff --git a/dev/DeepSplitting/index.html b/dev/DeepSplitting/index.html index 4058a4f..835f73d 100644 --- a/dev/DeepSplitting/index.html +++ b/dev/DeepSplitting/index.html @@ -20,4 +20,4 @@ cuda_device, verbose_rate ) -> PIDESolution{_A, _B, _C, Vector{_A1}, Vector{Any}, Nothing} where {_A, _B, _C, _A1} -

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas:

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$ one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}.\]

One can therefore use Monte Carlo integrations to approximate the expectations

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + (t_{n+1} - t_n)\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \big] \right]\]

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_n(x)$. It is advised to let this function be a neural network ${\bf u}_\theta \equiv NN_\theta$ as they are universal approximators.

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ satisfying Eq. (2) over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_n}) - \left[ f(t, X_{T - t_{n-1}}, {\bf u}_{n-1}(X_{T - t_{n-1}}))(t_{n} - t_{n-1}) + {\bf u}_{n-1}(X_{T - t_{n-1}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.

Tip

To solve with DeepSplitting, one needs to provide to solve

  • dt
  • batch_size
  • maxiters: the number of iterations for minimizing the loss function
  • abstol: the absolute tolerance for the loss function
  • use_cuda: recommended if you have an Nvidia GPU.

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ through the keyword x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

More generally, one may want to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. This is supported by HighDimPDE.jl through the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration.

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number of Monte Carlo samples $K$ through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).

References

+

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas:

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$ one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}.\]

One can therefore use Monte Carlo integrations to approximate the expectations

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + (t_{n+1} - t_n)\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \big] \right]\]
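
In code, this estimator is just a sample mean over the batch. The following is a minimal sketch with hypothetical names (u_prev stands for $u(t_n, \cdot)$ and x_samples for the simulated $X_{T - t_n}^{(j)}$); it is not the package's internal implementation.

using Statistics

function mc_estimate(f, u_prev, t_prev, t_next, x_samples)
    dt = t_next - t_prev
    # sample mean of u(t_n, X) + dt * f(t_n, X, u(t_n, X)) over the batch
    mean(x -> u_prev(x) + dt * f(t_prev, x, u_prev(x)), x_samples)
end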

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_n(x)$. It is advised to let this function be a neural network ${\bf u}_\theta \equiv NN_\theta$ as they are universal approximators.

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ satisfying Eq. (2) over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_n}) - \left[ f(t, X_{T - t_{n-1}}, {\bf u}_{n-1}(X_{T - t_{n-1}}))(t_{n} - t_{n-1}) + {\bf u}_{n-1}(X_{T - t_{n-1}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.
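
For illustration, a single optimization step on such a loss could look like the following Flux sketch; the names u_next and target, and the mean-squared loss, are assumptions made for exposition, and HighDimPDE.jl performs this internally.

using Flux, Statistics

function train_step!(u_next, opt_state, x_batch, target)
    loss, grads = Flux.withgradient(u_next) do nn
        mean(abs2, vec(nn(x_batch)) .- target)   # discrepancy with the Monte Carlo target
    end
    Flux.update!(opt_state, u_next, grads[1])
    return loss
end
# opt_state = Flux.setup(Flux.Adam(1e-2), u_next) initializes the optimizer state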

Tip

To solve with DeepSplitting, one needs to provide the following arguments to solve (a sketch of such a call is shown after the list):

  • dt
  • batch_size
  • maxiters: the number of iterations for minimizing the loss function
  • abstol: the absolute tolerance for the loss function
  • use_cuda: recommended if you have an Nvidia GPU.
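
A minimal sketch of such a call, with illustrative values taken from the Getting started page (prob, nn, and d are assumed to be defined):

using HighDimPDE, Flux

nn = Flux.Chain(Dense(d, 20, tanh), Dense(20, 1))
alg = DeepSplitting(nn, opt = Flux.Adam(0.01))
sol = solve(prob, alg, 0.1,        # dt
            batch_size = 1000,
            maxiters = 1000,
            abstol = 2e-3,
            use_cuda = false)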

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ through the keyword x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

More generally, one may want to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. This is supported by HighDimPDE.jl through the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.
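
For instance, a sketch on the hypercube $[-1/2, 1/2]^d$, with μ, σ, g, f, and tspan assumed to be defined as on the Getting started page:

using HighDimPDE

d = 10
x0_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))
prob = PIDEProblem(μ, σ, zeros(Float32, d), tspan, g, f; x0_sample = x0_sample)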

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration.

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number of Monte Carlo samples $K$ through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).
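
A sketch of a non-local setup, where the nonlinearity follows the signature described on the Problems page (the trailing p, t arguments, the concrete body of f, and the value of K are illustrative assumptions; nn and d are assumed to be defined):

f(x, y, v_x, v_y, ∇v_x, ∇v_y, p, t) = v_x .* (1f0 .- v_y)     # y is the integration variable
mc_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))
alg = DeepSplitting(nn, opt = Flux.Adam(0.01), mc_sample = mc_sample, K = 10)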

References

diff --git a/dev/Feynman_Kac/index.html b/dev/Feynman_Kac/index.html index ddee92f..9a30c35 100644 --- a/dev/Feynman_Kac/index.html +++ b/dev/Feynman_Kac/index.html @@ -7,4 +7,4 @@ v(\tau, x) &= \int_{-\tau}^0 \mathbb{E} \left[ f(X^x_{s + \tau}, v(s + T, X^x_{s + \tau}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= - \int_{\tau}^0 \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= \int_{0}^\tau \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]. -\end{aligned}\]

This leads to the following result.

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]

+\end{aligned}\]

This leads to the following result.

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]
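
As a sanity check of this representation in the linear case $f \equiv 0$, one can estimate $\mathbb{E}[u(0, X_t^x)]$ by simulating the SDE with the Euler-Maruyama scheme. The following is a toy scalar sketch; the function name, step count, and path count are illustrative assumptions.

using Statistics

function feynman_kac_mc(g, μ, σ, x, t; nsteps = 100, npaths = 10_000)
    dt = t / nsteps
    mean(1:npaths) do _
        X = x
        for _ in 1:nsteps
            X += μ(X) * dt + σ(X) * sqrt(dt) * randn()   # Euler-Maruyama step
        end
        g(X)                                             # u(0, ⋅) = g
    end
end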

diff --git a/dev/MLP/index.html b/dev/MLP/index.html index 1bedd02..6798139 100644 --- a/dev/MLP/index.html +++ b/dev/MLP/index.html @@ -16,4 +16,4 @@ u_L &= \sum_{l=1}^{L-1} \frac{1}{M^{L-l}}\sum_{i=1}^{M^{L-l}} \frac{1}{K}\sum_{j=1}^{K} \bigg[ f(X^{x,(l, i)}_{t - s_{(l, i)}}, Z^{(l,j)}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}), u(T-s_{l,i}, Z^{(l,j)})) + \\ &\qquad \mathbf{1}_\N(l) f(X^{x,(l, i)}_{t - s_{(l, i)}}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}))\bigg] + \frac{1}{M^{L}}\sum_i^{M^{L}} u(0, X^{x,(l, i)}_t)\\ -\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number of Monte Carlo samples $K$ through the keywords mc_sample and K.

  • K characterizes the number of samples for the Monte Carlo approximation of the last term.
  • mc_sample characterizes the distribution of the Z variables.

References

+\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number of Monte Carlo samples $K$ through the keywords mc_sample and K (see the sketch after this list).

  • K characterizes the number of samples for the Monte Carlo approximation of the last term.
  • mc_sample characterizes the distribution of the Z variables.
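
A sketch of an MLP setup for a non-local problem, mirroring the MLP tutorial later in these docs; the levels M and L and the sample count K are illustrative, and d and prob are assumed to be defined.

using HighDimPDE

mc_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))
alg = MLP(M = 4, L = 4, K = 10, mc_sample = mc_sample)
sol = solve(prob, alg, multithreading = true)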

References

diff --git a/dev/NNKolmogorov/index.html b/dev/NNKolmogorov/index.html index 1b8dd2f..abea744 100644 --- a/dev/NNKolmogorov/index.html +++ b/dev/NNKolmogorov/index.html @@ -14,4 +14,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains

  • a terminal solution for forward Kolmogorov equations of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

    with initial condition given by g(x), and

  • an initial condition for backward Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

    with terminal condition given by g(x).

We can use the Feynman-Kac formula: the underlying SDE is

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]

+

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains

  • a terminal solution for forward Kolmogorov equations of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

    with initial condition given by g(x), and

  • an initial condition for backward Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

    with terminal condition given by g(x).

We can use the Feynman-Kac formula: the underlying SDE is

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]
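
A minimal sketch mirroring the NNKolmogorov tutorial at the end of these docs; prob, the dimension d, and the use of EM() as the SDE solver are assumptions taken from that tutorial.

using HighDimPDE, Flux, StochasticDiffEq

m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1))
alg = NNKolmogorov(m, Flux.Adam(0.01))
sol = solve(prob, alg, EM(), verbose = true, dt = 0.01,
    dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300)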

diff --git a/dev/NNParamKolmogorov/index.html b/dev/NNParamKolmogorov/index.html index beda6d0..884906d 100644 --- a/dev/NNParamKolmogorov/index.html +++ b/dev/NNParamKolmogorov/index.html @@ -20,4 +20,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains

  • a terminal solution for forward parametric Kolmogorov equations of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_{\mu}) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_{\sigma}) \Delta_x u(t,x)\]

    with initial condition given by g(x, γ_phi), and

  • an initial condition for backward parametric Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x, \gamma_{\mu}) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x, \gamma_{\sigma}) \Delta_x u(t,x)\]

    with terminal condition given by g(x, γ_phi).

We can use the Feynman-Kac formula: the underlying SDE is

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_{\phi})]\]

+

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains

  • a terminal solution for forward parametric Kolmogorov equations of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_{\mu}) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_{\sigma}) \Delta_x u(t,x)\]

    with initial condition given by g(x, γ_phi), and

  • an initial condition for backward parametric Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x, \gamma_{\mu}) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x, \gamma_{\sigma}) \Delta_x u(t,x)\]

    with terminal condition given by g(x, γ_phi).

We can use the Feynman-Kac formula: the underlying SDE is

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_{\phi})]\]
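
Once trained, the parametric solution can be evaluated at new parameter values. A sketch following the NNParamKolmogorov tutorial at the end of these docs; x_test, t_test, and the parameter test values are assumed to be built as shown there.

sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)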

diff --git a/dev/NNStopping/index.html b/dev/NNStopping/index.html index ded75ca..ae0ae28 100644 --- a/dev/NNStopping/index.html +++ b/dev/NNStopping/index.html @@ -10,4 +10,4 @@ ensemblealg, kwargs... ) -> NamedTuple{(:payoff, :stopping_time), <:Tuple{Any, Any}} -

Returns a NamedTuple with payoff and stopping_time

Arguments:

source

The general idea 💡

Similar to DeepSplitting and DeepBSDE, NNStopping reformulates the PDE in terms of a stochastic differential equation. Consider an obstacle PDE of the form:

\[\max\lbrace\partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) ,\; g(t,x) - u(t,x)\rbrace = 0\]

Such PDEs commonly arise when modeling financial contracts that can be exercised before maturity, such as American options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X,t)dt + \sigma(X,t)\ dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup_{\tau}\mathbb{E}\left[g(X_\tau, \tau)\right]\]

where $\tau$ is the stopping (exercising) time. The goal is to retrieve both the optimal exercising strategy ($\tau$) and the payoff.

We approximate each stopping decision with a neural network, in order to maximise the expected payoff.

+

Returns a NamedTuple with payoff and stopping_time

Arguments:

source

The general idea 💡

Similar to DeepSplitting and DeepBSDE, NNStopping reformulates the PDE in terms of a stochastic differential equation. Consider an obstacle PDE of the form:

\[\max\lbrace\partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) ,\; g(t,x) - u(t,x)\rbrace = 0\]

Such PDEs commonly arise when modeling financial contracts that can be exercised before maturity, such as American options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X,t)dt + \sigma(X,t)\ dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup_{\tau}\mathbb{E}\left[g(X_\tau, \tau)\right]\]

where $\tau$ is the stopping (exercising) time. The goal is to retrieve both the optimal exercising strategy ($\tau$) and the payoff.

We approximate each stopping decision with a neural network, in order to maximise the expected payoff.
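
A sketch of the network setup, mirroring the NNStopping tutorial at the end of these docs: one small network per stopping decision. The dimension d and the number of time steps N are assumed to be defined, and the layer sizes are illustrative.

using HighDimPDE, Flux

models = [Chain(Dense(d + 1, 32, tanh), Dense(32, 1, sigmoid)) for i in 1:N]
alg = NNStopping(models, Flux.Optimisers.Adam(0.01))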

diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml index 83b8009..10426d3 100644 --- a/dev/assets/Manifest.toml +++ b/dev/assets/Manifest.toml @@ -30,6 +30,27 @@ git-tree-sha1 = "2d9c9a55f9c93e8887ad391fbae72f8ef55e1177" uuid = "1520ce14-60c1-5f80-bbc7-55ef81b5835c" version = "0.4.5" +[[deps.Accessors]] +deps = ["CompositionsBase", "ConstructionBase", "Dates", "InverseFunctions", "LinearAlgebra", "MacroTools", "Markdown", "Test"] +git-tree-sha1 = "c0d491ef0b135fd7d63cbc6404286bc633329425" +uuid = "7d9f7c33-5ae7-4f3b-8dc6-eff91059b697" +version = "0.1.36" + + [deps.Accessors.extensions] + AccessorsAxisKeysExt = "AxisKeys" + AccessorsIntervalSetsExt = "IntervalSets" + AccessorsStaticArraysExt = "StaticArrays" + AccessorsStructArraysExt = "StructArrays" + AccessorsUnitfulExt = "Unitful" + + [deps.Accessors.weakdeps] + AxisKeys = "94b1ba4f-4ee9-5380-92f1-94cde586c3c5" + IntervalSets = "8197267c-284f-5f27-9208-e0e47529a953" + Requires = "ae029012-a4dd-5104-9daa-d747884805df" + StaticArrays = "90137ffa-7385-5640-81b9-e52037218182" + StructArrays = "09ab397b-f2b6-538f-b94a-2f83cf4a842a" + Unitful = "1986cc42-f94f-5a68-af5c-568840ba703d" + [[deps.Adapt]] deps = ["LinearAlgebra", "Requires"] git-tree-sha1 = "6a55b747d1812e699320963ffde36f1ebdda4099" @@ -83,9 +104,9 @@ version = "7.9.0" [[deps.ArrayLayouts]] deps = ["FillArrays", "LinearAlgebra"] -git-tree-sha1 = "2aeaeaff72cdedaa0b5f30dfb8c1f16aefdac65d" +git-tree-sha1 = "6404a564c24a994814106c374bec893195e19bac" uuid = "4c555306-a7a7-4459-81d9-ec55ddd5c99a" -version = "1.7.0" +version = "1.8.0" weakdeps = ["SparseArrays"] [deps.ArrayLayouts.extensions] @@ -267,13 +288,11 @@ version = "1.1.0+0" git-tree-sha1 = "802bb88cd69dfd1509f6670416bd4434015693ad" uuid = "a33af91c-f02d-484b-be07-31d278c5ca2b" version = "0.1.2" +weakdeps = ["InverseFunctions"] [deps.CompositionsBase.extensions] CompositionsBaseInverseFunctionsExt = "InverseFunctions" - [deps.CompositionsBase.weakdeps] - InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112" - [[deps.ConcreteStructs]] git-tree-sha1 = "f749037478283d372048690eb3b5f92a79432b34" uuid = "2569d6c7-a4a2-43d3-a901-331e8e4be471" @@ -379,9 +398,9 @@ version = "6.148.0" [[deps.DiffEqCallbacks]] deps = ["DataStructures", "DiffEqBase", "ForwardDiff", "Functors", "LinearAlgebra", "Markdown", "NonlinearSolve", "Parameters", "RecipesBase", "RecursiveArrayTools", "SciMLBase", "StaticArraysCore"] -git-tree-sha1 = "a731383bbafb87d496fb5e66f60c40e4a5f8f726" +git-tree-sha1 = "e73f4d7e780cf78eea9f13dd6eaccb0ef3c6a241" uuid = "459566f4-90b8-5000-8ac3-15dfb0a30def" -version = "3.4.0" +version = "3.4.1" [deps.DiffEqCallbacks.weakdeps] OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed" @@ -657,9 +676,9 @@ version = "0.25.0" [[deps.GenericSchur]] deps = ["LinearAlgebra", "Printf"] -git-tree-sha1 = "fb69b2a645fa69ba5f474af09221b9308b160ce6" +git-tree-sha1 = "af49a0851f8113fcfae2ef5027c6d49d0acec39b" uuid = "c145ed77-6b09-5dd9-b285-bf645a82121e" -version = "0.5.3" +version = "0.5.4" [[deps.Git]] deps = ["Git_jll"] @@ -740,6 +759,16 @@ version = "2024.0.2+0" deps = ["Markdown"] uuid = "b77e0a4c-d291-57a0-90e8-8db25a27a240" +[[deps.InverseFunctions]] +deps = ["Test"] +git-tree-sha1 = "896385798a8d49a255c398bd49162062e4a4c435" +uuid = "3587e190-3f89-42d0-90ee-14403ec27112" +version = "0.1.13" +weakdeps = ["Dates"] + + [deps.InverseFunctions.extensions] + DatesExt = "Dates" + [[deps.InvertedIndices]] git-tree-sha1 = "0dc7b50b8d436461be01300fd8cd45aa0274b038" uuid = "41ab1584-1d38-5bbf-9106-f11c6c58b48f" @@ -780,10 +809,10 
@@ uuid = "b14d175d-62b4-44ba-8fb7-3064adc8c3ec" version = "0.2.4" [[deps.JumpProcesses]] -deps = ["ArrayInterface", "DataStructures", "DiffEqBase", "DocStringExtensions", "FunctionWrappers", "Graphs", "LinearAlgebra", "Markdown", "PoissonRandom", "Random", "RandomNumbers", "RecursiveArrayTools", "Reexport", "SciMLBase", "StaticArrays", "UnPack"] -git-tree-sha1 = "c451feb97251965a9fe40bacd62551a72cc5902c" +deps = ["ArrayInterface", "DataStructures", "DiffEqBase", "DocStringExtensions", "FunctionWrappers", "Graphs", "LinearAlgebra", "Markdown", "PoissonRandom", "Random", "RandomNumbers", "RecursiveArrayTools", "Reexport", "SciMLBase", "StaticArrays", "SymbolicIndexingInterface", "UnPack"] +git-tree-sha1 = "ed08d89318be7d625613f3c435d1f6678fba4850" uuid = "ccbc3e58-028d-4f4c-8cd5-9ae44345cda5" -version = "9.10.1" +version = "9.11.1" weakdeps = ["FastBroadcast"] [deps.JumpProcesses.extensions] @@ -912,10 +941,10 @@ deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"] uuid = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" [[deps.LinearSolve]] -deps = ["ArrayInterface", "ChainRulesCore", "ConcreteStructs", "DocStringExtensions", "EnumX", "FastLapackInterface", "GPUArraysCore", "InteractiveUtils", "KLU", "Krylov", "Libdl", "LinearAlgebra", "MKL_jll", "Markdown", "PrecompileTools", "Preferences", "RecursiveFactorization", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Sparspak", "StaticArraysCore", "UnPack"] -git-tree-sha1 = "73d8f61f8d27f279edfbafc93faaea93ea447e94" +deps = ["ArrayInterface", "ChainRulesCore", "ConcreteStructs", "DocStringExtensions", "EnumX", "FastLapackInterface", "GPUArraysCore", "InteractiveUtils", "KLU", "Krylov", "LazyArrays", "Libdl", "LinearAlgebra", "MKL_jll", "Markdown", "PrecompileTools", "Preferences", "RecursiveFactorization", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Sparspak", "StaticArraysCore", "UnPack"] +git-tree-sha1 = "775e5e5d9ace42ef8deeb236587abc69e70dc455" uuid = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae" -version = "2.27.0" +version = "2.28.0" [deps.LinearSolve.extensions] LinearSolveBandedMatricesExt = "BandedMatrices" @@ -1069,9 +1098,9 @@ version = "4.5.1" [[deps.NNlib]] deps = ["Adapt", "Atomix", "ChainRulesCore", "GPUArraysCore", "KernelAbstractions", "LinearAlgebra", "Pkg", "Random", "Requires", "Statistics"] -git-tree-sha1 = "877f15c331337d54cf24c797d5bcb2e48ce21221" +git-tree-sha1 = "1fa1a14766c60e66ab22e242d45c1857c83a3805" uuid = "872c559c-99b0-510c-b3b7-b6c96a88d5cd" -version = "0.9.12" +version = "0.9.13" [deps.NNlib.extensions] NNlibAMDGPUExt = "AMDGPU" @@ -1115,9 +1144,9 @@ version = "1.2.0" [[deps.NonlinearSolve]] deps = ["ADTypes", "ArrayInterface", "ConcreteStructs", "DiffEqBase", "FastBroadcast", "FastClosures", "FiniteDiff", "ForwardDiff", "LazyArrays", "LineSearches", "LinearAlgebra", "LinearSolve", "MaybeInplace", "PrecompileTools", "Preferences", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SimpleNonlinearSolve", "SparseArrays", "SparseDiffTools", "StaticArraysCore", "TimerOutputs"] -git-tree-sha1 = "d52bac2b94358b4b960cbfb896d5193d67f3ff09" +git-tree-sha1 = "1638addfc31707aea26333ff822afcf9d2e6f7de" uuid = "8913a72c-1f9b-4ce2-8d82-65094dcecaec" -version = "3.8.0" +version = "3.8.3" [deps.NonlinearSolve.extensions] NonlinearSolveBandedMatricesExt = "BandedMatrices" @@ -1364,9 +1393,9 @@ version = "1.3.4" [[deps.RecursiveArrayTools]] deps = ["Adapt", "ArrayInterface", "DocStringExtensions", "GPUArraysCore", "IteratorInterfaceExtensions", "LinearAlgebra", "RecipesBase", 
"SparseArrays", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "a94d22ca9ad49a7a169ecbc5419c59b9793937cc" +git-tree-sha1 = "d8f131090f2e44b145084928856a561c83f43b27" uuid = "731186ca-8d62-57ce-b412-fbd966d074cd" -version = "3.12.0" +version = "3.13.0" [deps.RecursiveArrayTools.extensions] RecursiveArrayToolsFastBroadcastExt = "FastBroadcast" @@ -1456,9 +1485,9 @@ version = "0.6.42" [[deps.SciMLBase]] deps = ["ADTypes", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", "SciMLOperators", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "48f724c6a3355f11dae5f762983073d367c8b934" +git-tree-sha1 = "d15c65e25615272e1b1c5edb1d307484c7942824" uuid = "0bca4576-84f4-4d90-8ffe-ffa030f20462" -version = "2.30.1" +version = "2.31.0" [deps.SciMLBase.extensions] SciMLBaseChainRulesCoreExt = "ChainRulesCore" @@ -1676,15 +1705,12 @@ deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Re git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a" uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c" version = "1.3.1" +weakdeps = ["ChainRulesCore", "InverseFunctions"] [deps.StatsFuns.extensions] StatsFunsChainRulesCoreExt = "ChainRulesCore" StatsFunsInverseFunctionsExt = "InverseFunctions" - [deps.StatsFuns.weakdeps] - ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4" - InverseFunctions = "3587e190-3f89-42d0-90ee-14403ec27112" - [[deps.StochasticDiffEq]] deps = ["Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DiffEqNoiseProcess", "DocStringExtensions", "FiniteDiff", "ForwardDiff", "JumpProcesses", "LevyArea", "LinearAlgebra", "Logging", "MuladdMacro", "NLsolve", "OrdinaryDiffEq", "Random", "RandomNumbers", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SparseArrays", "SparseDiffTools", "StaticArrays", "UnPack"] git-tree-sha1 = "97e5d0b7e5ec2e68eec6626af97c59e9f6b6c3d0" @@ -1732,10 +1758,10 @@ uuid = "bea87d4a-7f5b-5778-9afe-8cc45184846c" version = "7.2.1+1" [[deps.SymbolicIndexingInterface]] -deps = ["MacroTools", "RuntimeGeneratedFunctions"] -git-tree-sha1 = "f7b1fc9fc2bc938436b7684c243be7d317919056" +deps = ["Accessors", "ArrayInterface", "MacroTools", "RuntimeGeneratedFunctions", "StaticArraysCore"] +git-tree-sha1 = "4b7f4c80449d8baae8857d55535033981862619c" uuid = "2efcf032-c050-4f8e-a9bb-153293bab1f5" -version = "0.3.11" +version = "0.3.15" [[deps.TOML]] deps = ["Dates"] @@ -1786,9 +1812,9 @@ weakdeps = ["PDMats"] TrackerPDMatsExt = "PDMats" [[deps.TranscodingStreams]] -git-tree-sha1 = "a09c933bebed12501890d8e92946bbab6a1690f1" +git-tree-sha1 = "71509f04d045ec714c4748c785a59045c3736349" uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa" -version = "0.10.5" +version = "0.10.7" weakdeps = ["Random", "Test"] [deps.TranscodingStreams.extensions] @@ -1816,9 +1842,9 @@ version = "0.4.80" [[deps.TriangularSolve]] deps = ["CloseOpenIntervals", "IfElse", "LayoutPointers", "LinearAlgebra", "LoopVectorization", "Polyester", "Static", "VectorizationBase"] -git-tree-sha1 = "fadebab77bf3ae041f77346dd1c290173da5a443" +git-tree-sha1 = "7ee8ed8904e7dd5d31bb46294ef5644d9e2e44e4" uuid = "d5829a12-d9aa-46ab-831f-fb7c9ab06edf" -version = "0.1.20" +version = "0.1.21" [[deps.Tricks]] git-tree-sha1 = 
"eae1bb484cd63b36999ee58be2de6c178105112f" diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html index 42158a5..d207666 100644 --- a/dev/getting_started/index.html +++ b/dev/getting_started/index.html @@ -17,7 +17,7 @@ ## Solving with multiple threads sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 0.9667212410384998]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann Boundary condition on $\Omega$.

using HighDimPDE
+u(x,t): [1.0, 0.9682420316274496]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann Boundary condition on $\Omega$.

using HighDimPDE
 
 ## Definition of the problem
 d = 10 # dimension of the problem
@@ -35,7 +35,7 @@
 
 sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 1.2285750360496064]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
+u(x,t): [1.0, 1.2244347621507332]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
 using Flux # needed to define the neural network
 
 ## Definition of the problem
@@ -72,11 +72,11 @@
             maxiters = 1000,
             batch_size = 1000)
PIDESolution
 timespan: 0.0:0.09999988228082657:0.49999941140413284
-u(x,t): Float32[1.0, 0.8929079, 0.93796796, 0.9744463, 1.0345025, 1.0775464]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
+u(x,t): Float32[1.0, 0.9062339, 0.9416901, 0.9911105, 1.0241605, 1.0796193]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
             alg, 
             0.1, 
             verbose = true, 
             abstol = 2e-3,
             maxiters = 1000,
             batch_size = 1000,
-            use_cuda=true)
+ use_cuda=true) diff --git a/dev/index.html b/dev/index.html index f38458d..1fc0bfc 100644 --- a/dev/index.html +++ b/dev/index.html @@ -23,11 +23,12 @@ [a4c015fc] ANSIColoredPrinters v0.0.1 [621f4979] AbstractFFTs v1.5.0 [1520ce14] AbstractTrees v0.4.5 + [7d9f7c33] Accessors v0.1.36 [79e6a3ab] Adapt v4.0.4 [dce04be8] ArgCheck v2.3.0 ⌅ [ec485272] ArnoldiMethod v0.2.0 [4fba245c] ArrayInterface v7.9.0 - [4c555306] ArrayLayouts v1.7.0 + [4c555306] ArrayLayouts v1.8.0 [a9b6321e] Atomix v0.1.0 ⌃ [ab4f0b2a] BFloat16s v0.4.2 ⌅ [198e06fe] BangBang v0.3.40 @@ -61,7 +62,7 @@ [244e2a9f] DefineSingletons v0.1.2 [8bb1440f] DelimitedFiles v1.9.1 [2b5f629d] DiffEqBase v6.148.0 - [459566f4] DiffEqCallbacks v3.4.0 + [459566f4] DiffEqCallbacks v3.4.1 [77a26b50] DiffEqNoiseProcess v5.21.0 [163ba53b] DiffResults v1.1.0 [b552c78f] DiffRules v1.15.1 @@ -93,7 +94,7 @@ [0c68f7d7] GPUArrays v10.0.2 [46192b85] GPUArraysCore v0.1.6 ⌅ [61eb1bfa] GPUCompiler v0.25.0 - [c145ed77] GenericSchur v0.5.3 + [c145ed77] GenericSchur v0.5.4 [d7ba0133] Git v1.3.1 [86223c79] Graphs v1.9.0 [57c578d5] HighDimPDE v2.0.0 `~/work/HighDimPDE.jl/HighDimPDE.jl` @@ -105,13 +106,14 @@ [d25df0c9] Inflate v0.1.4 [22cec73e] InitialValues v0.3.1 [842dd82b] InlineStrings v1.4.0 + [3587e190] InverseFunctions v0.1.13 [41ab1584] InvertedIndices v1.3.0 [92d709cd] IrrationalConstants v0.2.2 [82899510] IteratorInterfaceExtensions v1.0.0 [692b3bcd] JLLWrappers v1.5.0 [682c06a0] JSON v0.21.4 [b14d175d] JuliaVariables v0.2.4 - [ccbc3e58] JumpProcesses v9.10.1 + [ccbc3e58] JumpProcesses v9.11.1 [ef3ab10e] KLU v0.6.0 [63c18a36] KernelAbstractions v0.9.18 [ba0b0d4f] Krylov v0.9.5 @@ -123,7 +125,7 @@ [5078a376] LazyArrays v1.8.3 [2d8b4e74] LevyArea v1.0.0 [d3d80556] LineSearches v7.2.0 - [7ed4a6bd] LinearSolve v2.27.0 + [7ed4a6bd] LinearSolve v2.28.0 [2ab3a3ac] LogExpFunctions v0.3.27 [bdcacae8] LoopVectorization v0.12.166 [d8e11817] MLStyle v0.4.17 @@ -138,11 +140,11 @@ [46d2c3a1] MuladdMacro v0.2.4 [d41bc354] NLSolversBase v7.8.3 [2774e3e8] NLsolve v4.5.1 - [872c559c] NNlib v0.9.12 + [872c559c] NNlib v0.9.13 [5da4648a] NVTX v0.3.4 [77ba4419] NaNMath v1.0.2 [71a1bf82] NameResolution v0.1.5 - [8913a72c] NonlinearSolve v3.8.0 + [8913a72c] NonlinearSolve v3.8.3 [d8793406] ObjectFile v0.4.1 [6fe1bfb0] OffsetArrays v1.13.0 [0b1bfda6] OneHotArrays v0.2.5 @@ -170,7 +172,7 @@ [e6cf234a] RandomNumbers v1.5.3 [c1ae055f] RealDot v0.1.0 [3cdcf5f2] RecipesBase v1.3.4 - [731186ca] RecursiveArrayTools v3.12.0 + [731186ca] RecursiveArrayTools v3.13.0 [f2c3362d] RecursiveFactorization v0.2.21 [189a3867] Reexport v1.2.2 [2792f1a3] RegistryInstances v0.1.0 @@ -181,7 +183,7 @@ [7e49a35a] RuntimeGeneratedFunctions v0.5.12 [94e857df] SIMDTypes v0.1.0 [476501e8] SLEEFPirates v0.6.42 - [0bca4576] SciMLBase v2.30.1 + [0bca4576] SciMLBase v2.31.0 [c0aeaf25] SciMLOperators v0.3.8 [1ed8b502] SciMLSensitivity v7.56.2 [53ae85a6] SciMLStructures v1.1.0 @@ -210,15 +212,15 @@ [892a3eda] StringManipulation v0.3.4 [09ab397b] StructArrays v0.6.18 [53d494c1] StructIO v0.3.0 - [2efcf032] SymbolicIndexingInterface v0.3.11 + [2efcf032] SymbolicIndexingInterface v0.3.15 [3783bdb8] TableTraits v1.0.1 [bd369af6] Tables v1.11.1 [8290d209] ThreadingUtilities v0.5.2 [a759f4b9] TimerOutputs v0.5.23 [9f7883ad] Tracker v0.2.33 - [3bb67fe8] TranscodingStreams v0.10.5 + [3bb67fe8] TranscodingStreams v0.10.7 ⌃ [28d57a85] Transducers v0.4.80 - [d5829a12] TriangularSolve v0.1.20 + [d5829a12] TriangularSolve v0.1.21 [410a4b4d] Tricks v0.1.8 [781d530d] TruncatedStacktraces v1.4.0 [3a884ed6] UnPack v1.0.2 @@ 
-292,4 +294,4 @@ [8e850b90] libblastrampoline_jll v5.8.0+1 [8e850ede] nghttp2_jll v1.52.0+1 [3f19e933] p7zip_jll v17.4.0+2 -Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

+Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

diff --git a/dev/problems/index.html b/dev/problems/index.html index f19dff6..dd0c8d4 100644 --- a/dev/problems/index.html +++ b/dev/problems/index.html @@ -31,4 +31,4 @@

Defines a Parabolic Partial Differential Equation of the form:

\[\begin{aligned} \frac{du}{dt} &= \tfrac{1}{2} \text{Tr}(\sigma \sigma^T) \Delta u(x, t) + \mu \nabla u(x, t) \\ &\quad + f(x, u(x, t), ( \nabla_x u )(x, t), p, t) -\end{aligned}\]

Arguments

Optional Arguments

source
Note

While choosing to define a PDE using PIDEProblem, note that the function being integrated, f, is a function of f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If a PDE has no integral and the nonlinear term f is evaluated as f(x, v_x, ∇v_x), then we suggest using ParabolicPDEProblem.

+\end{aligned}\]

Arguments

Optional Arguments

source
Note

While choosing to define a PDE using PIDEProblem, note that the function being integrated, f, is a function of f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If a PDE has no integral and the nonlinear term f is evaluated as f(x, v_x, ∇v_x), then we suggest using ParabolicPDEProblem.
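
As a sketch of the two signatures (argument lists as stated above; the concrete function bodies are illustrative only):

f_pide(x, y, v_x, v_y, ∇v_x, ∇v_y) = v_x .* (1f0 .- v_y)   # non-local: integrated over y → PIDEProblem
f_parabolic(x, v_x, ∇v_x) = v_x .* (1f0 .- v_x)            # local: no integral → ParabolicPDEProblem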

diff --git a/dev/tutorials/deepbsde/index.html b/dev/tutorials/deepbsde/index.html index a64c8ba..1549b99 100644 --- a/dev/tutorials/deepbsde/index.html +++ b/dev/tutorials/deepbsde/index.html @@ -67,4 +67,4 @@ Dense(hls,hls,relu), Dense(hls,d)) pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)

And now we solve the PDE. Here, we say we want to solve the underlying neural SDE using the Euler-Maruyama SDE solver with our chosen dt=0.2, do at most 150 iterations of the optimizer, 100 SDE solves per loss evaluation (for averaging), and stop if the loss ever goes below 1f-6.

ans = solve(prob, pdealg, verbose=true, maxiters=150, trajectories=100,
-                            alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
+ alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
diff --git a/dev/tutorials/deepsplitting/index.html b/dev/tutorials/deepsplitting/index.html index d981341..21076ba 100644 --- a/dev/tutorials/deepsplitting/index.html +++ b/dev/tutorials/deepsplitting/index.html @@ -41,4 +41,4 @@ abstol = 2e-3, maxiters = 1000, batch_size = 1000, - use_cuda=true) + use_cuda=true) diff --git a/dev/tutorials/mlp/index.html b/dev/tutorials/mlp/index.html index bcf26f9..76b6378 100644 --- a/dev/tutorials/mlp/index.html +++ b/dev/tutorials/mlp/index.html @@ -31,4 +31,4 @@ ## Definition of the algorithm alg = MLP(mc_sample = mc_sample ) -sol = solve(prob, alg, multithreading=true) +sol = solve(prob, alg, multithreading=true) diff --git a/dev/tutorials/nnkolmogorov/index.html b/dev/tutorials/nnkolmogorov/index.html index 2b5af7c..2c4f6b6 100644 --- a/dev/tutorials/nnkolmogorov/index.html +++ b/dev/tutorials/nnkolmogorov/index.html @@ -25,4 +25,4 @@ alg = NNKolmogorov(m, opt) m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1)) sol = solve(prob, alg, sdealg, verbose = true, dt = 0.01, - dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) + dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) diff --git a/dev/tutorials/nnparamkolmogorov/index.html b/dev/tutorials/nnparamkolmogorov/index.html index 1a44716..ef3e92f 100644 --- a/dev/tutorials/nnparamkolmogorov/index.html +++ b/dev/tutorials/nnparamkolmogorov/index.html @@ -43,4 +43,4 @@ p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1) t_test = rand(tspan[1]:dt:tspan[2], 1, 1) p_mu_test = nothing -p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
+p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
diff --git a/dev/tutorials/nnstopping/index.html b/dev/tutorials/nnstopping/index.html index e26dc20..dd42313 100644 --- a/dev/tutorials/nnstopping/index.html +++ b/dev/tutorials/nnstopping/index.html @@ -21,4 +21,4 @@ for i in 1:N]
Note

The number of models should be equal to the number of time steps in the discretization.

And finally we define our optimizer and algorithm, and call solve:

opt = Flux.Optimisers.Adam(0.01)
 alg = NNStopping(models, opt)
 
-sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)
+sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)