Hidden Markov Models

A Hidden Markov Model (HMM) is often used for sequential data: the words of a sentence, the sentences of a document, the phonemes of speech, financial market prices over time, and so on. HMMs are typically used when one wants to estimate the value of a hidden variable over time, given imperfect observations.
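
To make this concrete, the sketch below shows one common way a discrete-state, discrete-observation HMM can be parameterized and sampled from, written in Python/NumPy. The weather states, observation symbols, and all probabilities are illustrative assumptions and do not come from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state weather HMM: hidden states {Rainy, Sunny},
# observations {walk, shop, clean}. All numbers are made up for illustration.
pi = np.array([0.6, 0.4])          # initial state distribution
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],     # B[i, k] = P(observation k | state i)
              [0.6, 0.3, 0.1]])

def sample(T):
    """Draw a length-T trajectory: the hidden state evolves as a Markov chain,
    and each observation depends only on the current hidden state."""
    states, obs = [], []
    s = rng.choice(2, p=pi)
    for _ in range(T):
        states.append(s)
        obs.append(rng.choice(3, p=B[s]))
        s = rng.choice(2, p=A[s])
    return states, obs

hidden, observed = sample(10)  # in practice only `observed` would be available
```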

Here we describe the theory behind HMMs, as well as some sample applications. Often, the values of some of the variables connected by an HMM are known, while others must be estimated. Three different algorithms are available for performing different types of inference on HMMs:

  • The Forward-Backward Algorithm: When one wants to estimate a distribution over an individual variable, i.e. Marginal Inference.
  • The Viterbi Algorithm: When one wants to jointly estimate the most likely values of all variables simultaneously, i.e. Maximum A Posteriori (MAP) inference (see the sketch after this list).
  • The Markov Chain Monte Carlo (MCMC) Algorithm: When one wants to estimate confidence intervals on variable values, or when the dimensionality of each variable is very high; in these cases this approximate algorithm may be the best choice.
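
As an illustration of the second option, here is a minimal NumPy sketch of the Viterbi algorithm for a discrete HMM. It is written against the hypothetical weather parameters above and does not reflect this library's actual API.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Return the jointly most likely hidden-state sequence (MAP assignment)
    for a discrete HMM with initial distribution pi, transition matrix A,
    and emission matrix B, given a list of observation indices obs."""
    N, T = len(pi), len(obs)
    log_delta = np.full((T, N), -np.inf)   # best log-probability of any path ending in each state
    backptr = np.zeros((T, N), dtype=int)  # argmax predecessor, used for traceback

    log_delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        # scores[i, j]: best path ending in state i at time t-1, then the
        # transition i -> j, then emitting obs[t] from state j.
        scores = log_delta[t - 1][:, None] + np.log(A) + np.log(B[:, obs[t]])[None, :]
        backptr[t] = scores.argmax(axis=0)
        log_delta[t] = scores.max(axis=0)

    # Trace back from the best final state to recover the full sequence.
    states = np.zeros(T, dtype=int)
    states[-1] = log_delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = backptr[t + 1, states[t + 1]]
    return states.tolist(), log_delta[-1].max()

# Usage with the hypothetical weather HMM defined earlier:
#   path, logp = viterbi([0, 1, 2], pi, A, B)
```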

Theory behind HMMs

Background: Markov Chains

Markov Chain Examples

Hidden Markov Models

Examples

  • Turbo codes
  • Natural Language Processing
  • Part-of-speech tagging
  • Temporal popularity modeling
  • Scene recognition

About

SPIDAL Image Processing and Optimization Library
