From 3445f5bd7b1c768d7749544f4bd36e53518b02f9 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Thu, 21 Dec 2023 18:54:12 +0000
Subject: [PATCH] build based on b452fe5

---
 dev/.documenter-siteinfo.json      |  2 +-
 dev/alternatives/index.html        |  2 +-
 dev/api/index.html                 | 42 +++++++++++++++---------------
 dev/debugging/index.html           |  2 +-
 dev/examples/autodiff/index.html   |  2 +-
 dev/examples/basics/index.html     |  2 +-
 dev/examples/controlled/index.html |  2 +-
 dev/examples/interfaces/index.html |  2 +-
 dev/examples/temporal/index.html   |  2 +-
 dev/examples/types/index.html      |  2 +-
 dev/formulas/index.html            |  2 +-
 dev/index.html                     |  2 +-
 12 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 3f8cd44f..56e5b157 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-12T13:45:59","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-21T18:54:09","documenter_version":"1.2.1"}}
\ No newline at end of file

diff --git a/dev/alternatives/index.html b/dev/alternatives/index.html
index 65b9c406..00bf0873 100644
--- a/dev/alternatives/index.html
+++ b/dev/alternatives/index.html
@@ -1,2 +1,2 @@

Alternatives · HiddenMarkovModels.jl

Competitors

Julia

We compare features among the following Julia packages: HMMs.jl (this package), HMMBase.jl and HMMGradients.jl.

We discard MarkovModels.jl because its focus is GPU computation. There are also more generic packages for probabilistic programming, which are able to perform MCMC or variational inference (e.g. Turing.jl), but we leave those aside.

                            HMMs.jl              HMMBase.jl        HMMGradients.jl
Algorithms                  Sim, FB, Vit, BW     Sim, FB, Vit, BW  FB
Observation types           anything             Number / Vector   anything
Observation distributions   DensityInterface.jl  Distributions.jl  manual
Multiple sequences          yes                  no                yes
Priors / structures         possible             no                possible
Temporal dependency         yes                  no                no
Control dependency          yes                  no                no
Number types                anything             Float64           AbstractFloat
Automatic differentiation   yes                  no                yes
Linear algebra              yes                  yes               no
Logarithmic probabilities   halfway              halfway           yes

Sim = Simulation, FB = Forward-Backward, Vit = Viterbi, BW = Baum-Welch

Python

diff --git a/dev/api/index.html b/dev/api/index.html
index b400ecaa..7987c77f 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html
@@ -1,16 +1,16 @@

API reference · HiddenMarkovModels.jl

API reference

HiddenMarkovModels · Module
HiddenMarkovModels

A Julia package for HMM modeling, simulation, inference and learning.

source

Sequence formatting

Most algorithms below ingest the data with three (keyword) arguments: obs_seq, control_seq and seq_ends.

  • If the data consists of a single sequence, obs_seq and control_seq are the corresponding vectors of observations and controls, and you don't need to provide seq_ends.
  • If the data consists of multiple sequences, obs_seq and control_seq are concatenations of several vectors, whose end indices are given by seq_ends. Starting from separate sequences obs_seqs and control_seqs, you can run the following snippet:
obs_seq = reduce(vcat, obs_seqs)
 control_seq = reduce(vcat, control_seqs)
seq_ends = cumsum(length.(obs_seqs))
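
As a concrete illustration, here is a hypothetical two-sequence setup (the data is made up) showing what the concatenated arguments look like:

obs_seqs = [[0.1, -0.2, 0.3], [0.4, -0.5]]           # two observation sequences, lengths 3 and 2
control_seqs = [fill(nothing, 3), fill(nothing, 2)]  # no controls in this sketch

obs_seq = reduce(vcat, obs_seqs)          # 5 observations
control_seq = reduce(vcat, control_seqs)  # 5 controls
seq_ends = cumsum(length.(obs_seqs))      # [3, 5]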

Types

HiddenMarkovModels.HMM · Type
struct HMM{V<:(AbstractVector), M<:(AbstractMatrix), VD<:(AbstractVector)} <: AbstractHMM

Basic implementation of an HMM.

Fields

  • init::AbstractVector: initial state probabilities

  • trans::AbstractMatrix: state transition matrix

  • dists::AbstractVector: observation distributions

source
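
For reference, a two-state instance of this type can be built exactly as in the quick start, with Distributions.jl providing the observation distributions:

using Distributions, HiddenMarkovModels

init = [0.4, 0.6]                    # initial state probabilities
trans = [0.9 0.1; 0.2 0.8]           # state transition matrix
dists = [Normal(-1.0), Normal(1.0)]  # one observation distribution per state
hmm = HMM(init, trans, dists)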

Interface

HiddenMarkovModels.obs_distributions · Function
obs_distributions(hmm)
obs_distributions(hmm, control)

Return a vector of observation distributions, one for each state of hmm (possibly when control is applied).

These distribution objects should implement

  • Random.rand(rng, dist) for sampling
  • DensityInterface.logdensityof(dist, obs) for inference
  • StatsAPI.fit!(dist, obs_seq, weight_seq) for learning
source
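
As an illustration, here is a minimal sketch of a custom observation distribution implementing these three methods. The type MyNormal (a univariate normal with fixed unit variance) is hypothetical and not part of the package:

import Random, DensityInterface, StatsAPI

# Hypothetical univariate normal with fixed unit variance.
mutable struct MyNormal
    μ::Float64
end

Random.rand(rng::Random.AbstractRNG, d::MyNormal) = d.μ + randn(rng)

DensityInterface.logdensityof(d::MyNormal, obs) = -0.5 * (obs - d.μ)^2 - 0.5 * log(2π)

function StatsAPI.fit!(d::MyNormal, obs_seq, weight_seq)
    d.μ = sum(weight_seq .* obs_seq) / sum(weight_seq)  # weighted maximum likelihood estimate of the mean
    return d
end
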

Utils

Base.rand · Function
rand([rng,] hmm, T)
rand([rng,] hmm, control_seq)

Simulate hmm for T time steps, or when the sequence control_seq is applied.

Return a named tuple (; state_seq, obs_seq).

source
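
A short usage sketch, assuming the hmm constructed earlier:

using Random
rng = Random.default_rng()
sim = rand(rng, hmm, 100)  # simulate 100 time steps
sim.state_seq              # vector of state indices
sim.obs_seq                # vector of observations
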
Base.eltype · Function
eltype(hmm, obs, control)

Return a type that can accommodate forward-backward computations for hmm on observations similar to obs.

It is typically a promotion between the element type of the initialization, the element type of the transition matrix, and the type of an observation logdensity evaluated at obs.

source
HiddenMarkovModels.seq_limits · Function
seq_limits(seq_ends, k)

Return a tuple (t1, t2) giving the begin and end indices of subsequence k within a set of sequences ending at seq_ends.

source
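
For instance, with the hypothetical seq_ends = [3, 5] from the sequence formatting section, the two subsequences should be recovered as follows:

using HiddenMarkovModels: seq_limits
seq_limits([3, 5], 1)  # (1, 3)
seq_limits([3, 5], 2)  # (4, 5)
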

Inference

DensityInterface.logdensityof · Function
logdensityof(hmm)

Return the prior loglikelihood associated with the parameters of hmm.

source
logdensityof(hmm, obs_seq; control_seq, seq_ends)

Run the forward algorithm to compute the loglikelihood of obs_seq for hmm, integrating over all possible state sequences.

source
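
A sketch of a call on a single simulated sequence (hmm and sim as above; the default keyword arguments are assumed to cover the single-sequence case):

logL = logdensityof(hmm, sim.obs_seq)  # loglikelihood, integrating over all state sequences
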
logdensityof(hmm, obs_seq, state_seq; control_seq, seq_ends)

Run the forward algorithm to compute the joint loglikelihood of obs_seq and state_seq for hmm.

source
HiddenMarkovModels.forward · Function
forward(hmm, obs_seq; control_seq, seq_ends)

Apply the forward algorithm to infer the current state after sequence obs_seq for hmm.

Return a tuple (storage.α, sum(storage.logL)) where storage is of type ForwardStorage.

source
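
Usage sketch, under the same single-sequence assumptions:

α, logL = forward(hmm, sim.obs_seq)  # filtered marginals of the last state (see ForwardStorage) and total loglikelihood
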
HiddenMarkovModels.viterbi · Function
viterbi(hmm, obs_seq; control_seq, seq_ends)

Apply the Viterbi algorithm to infer the most likely state sequence corresponding to obs_seq for hmm.

Return a tuple (storage.q, sum(storage.logL)) where storage is of type ViterbiStorage.

source
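
Usage sketch, under the same single-sequence assumptions:

best_state_seq, best_logL = viterbi(hmm, sim.obs_seq)  # most likely state sequence and its joint loglikelihood
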
HiddenMarkovModels.forward_backward · Function
forward_backward(hmm, obs_seq; control_seq, seq_ends)

Apply the forward-backward algorithm to infer the posterior state and transition marginals during sequence obs_seq for hmm.

Return a tuple (storage.γ, sum(storage.logL)) where storage is of type ForwardBackwardStorage.

source
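
Usage sketch, under the same single-sequence assumptions:

γ, logL = forward_backward(hmm, sim.obs_seq)  # posterior state marginals γ[i, t] = ℙ(X[t]=i | Y[1:T])
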

Learning

HiddenMarkovModels.baum_welch · Function
baum_welch(
     hmm_guess,
     obs_seq;
     control_seq,
     seq_ends,
     atol,
     max_iterations,
     loglikelihood_increasing
 )

Apply the Baum-Welch algorithm to estimate the parameters of an HMM on obs_seq, starting from hmm_guess.

Return a tuple (hmm_est, loglikelihood_evolution) where hmm_est is the estimated HMM and loglikelihood_evolution is a vector of loglikelihood values, one per iteration of the algorithm.

Keyword arguments

  • atol: minimum loglikelihood increase at an iteration of the algorithm (otherwise the algorithm is deemed to have converged)
  • max_iterations: maximum number of iterations of the algorithm
  • loglikelihood_increasing: whether to throw an error if the loglikelihood decreases
source
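
A fitting sketch on a single simulated sequence, with a hypothetical initial guess and the documented keyword arguments spelled out:

hmm_guess = HMM([0.5, 0.5], [0.8 0.2; 0.2 0.8], [Normal(-0.5), Normal(0.5)])
hmm_est, loglikelihood_evolution = baum_welch(
    hmm_guess, sim.obs_seq;
    atol=1e-5, max_iterations=100, loglikelihood_increasing=true,
)
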
StatsAPI.fit! · Function
fit!(
     hmm::AbstractHMM,
     fb_storage::ForwardBackwardStorage,
     obs_seq::AbstractVector;
     control_seq::AbstractVector,
     seq_ends::AbstractVector{Int},
)

Update hmm in-place based on information generated during forward-backward.

source

In-place versions

Forward

HiddenMarkovModels.ForwardStorage · Type
struct ForwardStorage{R}

Fields

Only the fields with a description are part of the public API.

  • α::Matrix: posterior last state marginals α[i] = ℙ(X[T]=i | Y[1:T])

  • logL::Vector: one loglikelihood per observation sequence

  • B::Matrix

  • c::Vector

source

Viterbi

HiddenMarkovModels.ViterbiStorage · Type
struct ViterbiStorage{R}

Fields

Only the fields with a description are part of the public API.

  • q::Vector{Int64}: most likely state sequence q[t] = argmaxᵢ ℙ(X[t]=i | Y[1:T])

  • logL::Vector: one joint loglikelihood per pair of observation sequence and most likely state sequence

  • logB::Matrix

  • ϕ::Matrix

  • ψ::Matrix{Int64}

source

Forward-backward

HiddenMarkovModels.ForwardBackwardStorage · Type
struct ForwardBackwardStorage{R, M<:AbstractArray{R, 2}}

Fields

Only the fields with a description are part of the public API.

  • γ::Matrix: posterior state marginals γ[i,t] = ℙ(X[t]=i | Y[1:T])

  • ξ::Vector{M} where {R, M<:AbstractMatrix{R}}: posterior transition marginals ξ[t][i,j] = ℙ(X[t]=i, X[t+1]=j | Y[1:T])

  • logL::Vector: one loglikelihood per observation sequence

  • B::Matrix

  • α::Matrix

  • c::Vector

  • β::Matrix

  • Bβ::Matrix

source

Baum-Welch

Misc

HiddenMarkovModels.LightDiagNormal · Type
struct LightDiagNormal{T1, T2, T3, V1<:AbstractArray{T1, 1}, V2<:AbstractArray{T2, 1}, V3<:AbstractArray{T3, 1}}

An HMMs-compatible implementation of a multivariate normal distribution with diagonal covariance, enabling allocation-free in-place estimation.

This is not part of the public API and is expected to change.

Fields

  • μ::AbstractVector: means

  • σ::AbstractVector: standard deviations

  • logσ::AbstractVector: log standard deviations

source
HiddenMarkovModels.LightCategorical · Type
struct LightCategorical{T1, T2, V1<:AbstractArray{T1, 1}, V2<:AbstractArray{T2, 1}}

An HMMs-compatible implementation of a discrete categorical distribution, enabling allocation-free in-place estimation.

This is not part of the public API and is expected to change.

Fields

  • p::AbstractVector: class probabilities

  • logp::AbstractVector: log class probabilities

source
HiddenMarkovModels.fit_in_sequence! · Function
fit_in_sequence!(dists, i, x, w)

Modify the i-th element of dists by fitting it to an observation sequence x with associated weight sequence w.

Default behavior:

fit!(dists[i], x, w)

Override for Distributions.jl (in the package extension)

dists[i] = fit(eltype(dists), x, w)
source

Index


diff --git a/dev/debugging/index.html b/dev/debugging/index.html
index 69bbf8ae..21a57e61 100644
--- a/dev/debugging/index.html
+++ b/dev/debugging/index.html
@@ -1,2 +1,2 @@

Debugging · HiddenMarkovModels.jl

Debugging

Numerical overflow

The most frequent error you will encounter is an OverflowError during inference, telling you that some values are infinite or NaN. This can happen for a variety of reasons, so here are a few leads worth investigating:

  • Increase the duration of the sequence / the number of sequences to get more data
  • Add a prior to your transition matrix / observation distributions to avoid degenerate behavior like zero variance in a Gaussian
  • Reduce the number of states to make every one of them useful
  • Pick a better initialization to start closer to the supposed ground truth
  • Use numerically stable number types (such as LogarithmicNumbers.jl) in strategic places, but beware: these numbers don't play nicely with Distributions.jl, so you may have to roll your own observation distributions.

Performance

If your algorithms are too slow, the general advice always applies:

  • Use BenchmarkTools.jl to establish a baseline (see the sketch after this list)
  • Use profiling to see where you spend most of your time
  • Use JET.jl to track down type instabilities
  • Use AllocCheck.jl to reduce allocations
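
For instance, a minimal benchmarking sketch (here hmm and obs_seq stand for whatever model and data you are debugging):

using BenchmarkTools
@benchmark forward($hmm, $obs_seq)  # baseline timing and allocation count for the forward pass
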
diff --git a/dev/examples/autodiff/index.html b/dev/examples/autodiff/index.html
index e5301a48..aaf20cc1 100644
--- a/dev/examples/autodiff/index.html
+++ b/dev/examples/autodiff/index.html
@@ -41,4 +41,4 @@

    Enzyme.Duplicated(params, params_shadow),
)
grad_e = params_shadow
ComponentVector{Float64}(init = [2.461586483904971, 0.15365406438011528], trans = [13.541230697414639 16.653550286760073; 13.64721549300123 13.472726825544802], means = [-1.5783255775403202, -2.098528923154286])
grad_e ≈ grad_f
true

Gradient methods

Once we have gradients of the loglikelihood, it is a natural idea to perform gradient descent in order to fit the parameters of a custom HMM. However, there are two caveats we must keep in mind.

First, computing a gradient essentially requires running the forward-backward algorithm, which means it is expensive. Given the output of forward-backward, if there is a way to perform a more accurate parameter update (like going straight to the maximum likelihood value), it is probably worth it. That is what we show in the other tutorials with the reimplementation of the fit! method.

Second, HMM parameters live in a constrained space, which calls for a projected gradient descent. Most notably, the transition matrix must be stochastic, and the orthogonal projection onto this set (the Birkhoff polytope) is not easy to obtain.

Still, first-order optimization can be relevant when we lack explicit formulas for maximum likelihood.


This page was generated using Literate.jl.

diff --git a/dev/examples/basics/index.html b/dev/examples/basics/index.html
index 6e204314..62db3426 100644
--- a/dev/examples/basics/index.html
+++ b/dev/examples/basics/index.html
@@ -44,4 +44,4 @@

 0.7  0.3
 0.3  0.7
map(dist -> dist.μ, hcat(obs_distributions(hmm_est_concat), obs_distributions(hmm)))
2×2 Matrix{Vector{Float64}}:
  [-0.924126, -1.03495, -1.03126]  [-1.0, -1.0, -1.0]
 [0.989244, 1.00748, 0.933342]    [1.0, 1.0, 1.0]

This page was generated using Literate.jl.

diff --git a/dev/examples/controlled/index.html b/dev/examples/controlled/index.html
index b58a07b6..5640e509 100644
--- a/dev/examples/controlled/index.html
+++ b/dev/examples/controlled/index.html
@@ -74,4 +74,4 @@

 -0.994459  -1.0
hcat(hmm_est.dist_coeffs[2], hmm.dist_coeffs[2])
3×2 Matrix{Float64}:
  0.994041  1.0
  0.994866  1.0
 1.02258   1.0

This page was generated using Literate.jl.

diff --git a/dev/examples/interfaces/index.html b/dev/examples/interfaces/index.html
index 3d5d51bd..cf520e86 100644
--- a/dev/examples/interfaces/index.html
+++ b/dev/examples/interfaces/index.html
@@ -88,4 +88,4 @@

[:, :, 2] =
 0.591969  0.408031
 0.42277   0.57723

This page was generated using Literate.jl.

diff --git a/dev/examples/temporal/index.html b/dev/examples/temporal/index.html
index 36b4e578..acc697ea 100644
--- a/dev/examples/temporal/index.html
+++ b/dev/examples/temporal/index.html
@@ -99,4 +99,4 @@

 Distributions.Normal{Float64}(μ=-0.953169, σ=0.999476)  …  Distributions.Normal{Float64}(μ=-1.0, σ=1.0)
 Distributions.Normal{Float64}(μ=-1.96275, σ=1.00499)       Distributions.Normal{Float64}(μ=-2.0, σ=1.0)
hcat(obs_distributions(hmm_est, 2), obs_distributions(hmm, 2))
2×2 Matrix{Distributions.Normal{Float64}}:
  Distributions.Normal{Float64}(μ=0.948362, σ=0.976429)  …  Distributions.Normal{Float64}(μ=1.0, σ=1.0)
 Distributions.Normal{Float64}(μ=1.99179, σ=0.994492)      Distributions.Normal{Float64}(μ=2.0, σ=1.0)

This page was generated using Literate.jl.

diff --git a/dev/examples/types/index.html b/dev/examples/types/index.html
index 884d6c9f..b420e265 100644
--- a/dev/examples/types/index.html
+++ b/dev/examples/types/index.html
@@ -27,4 +27,4 @@

 0.215753   ⋅   0.784247
transition_matrix(hmm)
3×3 SparseArrays.SparseMatrixCSC{Float64, Int64} with 6 stored entries:
  0.8  0.2   ⋅ 
   ⋅   0.8  0.2
 0.2   ⋅   0.8

This page was generated using Literate.jl.

diff --git a/dev/formulas/index.html b/dev/formulas/index.html
index bc7330a5..2fd90f36 100644
--- a/dev/formulas/index.html
+++ b/dev/formulas/index.html
@@ -56,4 +56,4 @@

\frac{\partial \log \mathcal{L}}{\partial a_{i,j}} &= \sum_{t=1}^{T-1} \bar{\alpha}_{i,t} \frac{b_{j,t+1}}{m_{t+1}} \bar{\beta}_{j,t+1} \\
\frac{\partial \log \mathcal{L}}{\partial \log b_{j,1}} &= \pi_j \frac{b_{j,1}}{m_1} \bar{\beta}_{j,1} = \frac{\bar{\alpha}_{j,1} \bar{\beta}_{j,1}}{c_1} = \gamma_{j,1} \\
\frac{\partial \log \mathcal{L}}{\partial \log b_{j,t}} &= \sum_{i=1}^N \bar{\alpha}_{i,t-1} a_{i,j} \frac{b_{j,t}}{m_t} \bar{\beta}_{j,t} = \frac{\bar{\alpha}_{j,t} \bar{\beta}_{j,t}}{c_t} = \gamma_{j,t}
\end{align*}\]

Bibliography

diff --git a/dev/index.html b/dev/index.html
index a60775d8..627c802a 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -3,4 +3,4 @@

init = [0.4, 0.6]
trans = [0.9 0.1; 0.2 0.8]
dists = [Normal(-1.0), Normal(1.0)]
hmm = HMM(init, trans, dists)

Take a look at the documentation to know what to do next!
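
As a first experiment, here is a sketch building on the snippet above (the default keyword arguments are assumed to cover the single-sequence case):

sim = rand(hmm, 200)                                    # simulate a state and observation sequence
best_state_seq, _ = viterbi(hmm, sim.obs_seq)           # decode the most likely state sequence
hmm_est, logL_evolution = baum_welch(hmm, sim.obs_seq)  # re-estimate the parameters with Baum-Welch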

Some background

Hidden Markov Models (HMMs) are a widely used modeling framework in signal processing, bioinformatics and plenty of other fields. They explain an observation sequence $(Y_t)$ by assuming the existence of a latent Markovian state sequence $(X_t)$ whose current value determines the distribution of observations. In our framework, both the state and the observation sequence are also allowed to depend on a known control sequence $(U_t)$. Each of the problems below has an efficient solution algorithm which our package implements:

Problem     Goal                                     Algorithm
Evaluation  Likelihood of the observation sequence   Forward
Filtering   Last state marginals                     Forward
Smoothing   All state marginals                      Forward-backward
Decoding    Most likely state sequence               Viterbi
Learning    Maximum likelihood parameter             Baum-Welch

Main features

This package is generic. Observations can be arbitrary Julia objects, not just scalars or arrays. Number types are not restricted to floating point, which enables automatic differentiation. Time-dependent or controlled HMMs are supported out of the box.

This package is fast. All the inference functions have allocation-free versions, which leverage efficient linear algebra subroutines. We will include extensive benchmarks against Julia and Python competitors.

This package is reliable. It gives the same results as the previous reference package up to numerical accuracy. The test suite incorporates quality checks as well as type stability and allocation analysis.

Contributing

If you spot a bug or want to ask about a new feature, please open an issue on the GitHub repository. Once the issue receives positive feedback, feel free to try and fix it with a pull request that follows the BlueStyle guidelines.

Acknowledgements

A big thank you to Maxime Mouchet and Jacob Schreiber, the respective lead devs of alternative packages HMMBase.jl and pomegranate, for their help and advice. Logo by Clément Mantoux based on a portrait of Andrey Markov.
