Hidden Markov Model with von Mises Emissions
The von Mises distribution (also known as the circular normal distribution or Tikhonov distribution) is a continuous probability distribution on the circle. For multivariate signals, the emissions distribution implemented by this model is a product of univariate von Mises distributions, analogous to a multivariate Gaussian distribution with a diagonal covariance matrix.
This class allows for easy evaluation of, sampling from, and maximum-likelihood estimation of the parameters of an HMM.
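For concreteness, here is a minimal sketch of the per-state emission log-density that the product-of-univariate-densities description above implies. It is illustrative only (the function name and argument shapes are assumptions, not the class's internal API) and uses scipy.stats.vonmises:

```python
import numpy as np
from scipy.stats import vonmises

def vonmises_emission_logpdf(x, means, kappas):
    """Log-density of one observation under one hidden state.

    x, means, kappas : arrays of shape (n_features,). Because the features
    are modeled as independent, the joint density is a product of univariate
    von Mises densities, so the log-density is a sum.
    """
    return np.sum(vonmises.logpdf(x, kappas, loc=means))
```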
Parameters:
    n_states : int
        The number of hidden states in the model.
    random_state : RandomState or an int seed (0 by default)
        Random number generator used for initialization and sampling.
    n_iter : int, optional
        Maximum number of EM iterations to perform during training.
    thresh : float, optional
        Convergence threshold on the log-likelihood improvement between EM iterations.
    params : string, optional
        Controls which parameters are updated during training.
    reversible_type : str
        Method by which reversibility of the transition matrix is enforced.
    init_params : string, optional
        Controls which parameters are initialized prior to training.
Notes
The formulas for the maximization step of the E-M algorithm are adapted from [R25], especially equations (11) and (13).
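As a hedged sketch of the general shape of those updates (the standard weighted von Mises maximum-likelihood equations: a weighted circular mean for each state, and a numerical inversion of the Bessel-function ratio for kappa), assuming angular observations theta and E-step posteriors gamma; the names and shapes here are illustrative, not the class's internals:

```python
import numpy as np
from scipy.special import i0e, i1e
from scipy.optimize import brentq

def vonmises_mstep(theta, gamma):
    """Weighted M-step updates for per-state von Mises parameters.

    theta : (n_samples, n_features) angular observations
    gamma : (n_samples, n_states) posterior state probabilities from the E-step
    """
    w = gamma / gamma.sum(axis=0)                 # per-state normalized weights
    S = np.einsum('tk,tj->kj', w, np.sin(theta))  # weighted sine sums
    C = np.einsum('tk,tj->kj', w, np.cos(theta))  # weighted cosine sums
    means = np.arctan2(S, C)                      # weighted circular means
    R = np.clip(np.hypot(S, C), 1e-6, 1 - 1e-4)   # mean resultant lengths
    # kappa solves A(kappa) = I1(kappa)/I0(kappa) = R; invert numerically.
    # i1e/i0e are exponentially scaled Bessel functions, so their ratio
    # equals I1/I0 without overflowing at large kappa.
    a = lambda k, r: i1e(k) / i0e(k) - r
    kappas = np.array([[brentq(a, 1e-8, 1e6, args=(r,)) for r in row]
                       for row in R])
    return means, kappas
```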
References
[R25] Prati, Andrea, Simone Calderara, and Rita Cucchiara. "Using circular statistics for trajectory shape analysis." Computer Vision and Pattern Recognition, 2008 (CVPR 2008). IEEE, 2008.
[R26] Murray, Richard F., and Yaniv Morgenstern. "Cue combination on the circle and the sphere." Journal of Vision 10.11 (2010).
Attributes

| Attribute | Description |
|---|---|
| transmat_ | Matrix of transition probabilities. |
| startprob_ | Initial state occupation (mixing) probabilities. |
| means_ | Mean parameters for each state. |
| kappas_ | Concentration parameter for each state. |
| n_features | (int) Dimensionality of the emissions. |
Methods

| Method | Description |
|---|---|
| decode(obs[, algorithm]) | Find the most likely state sequence corresponding to obs. |
| eval(X) | |
| fit(obs) | Estimate model parameters. |
| get_params([deep]) | Get parameters for this estimator. |
| overlap_() | Compute the matrix of normalized log overlap integrals between the hidden state distributions. |
| predict(obs[, algorithm]) | Find the most likely state sequence corresponding to obs. |
| predict_proba(obs) | Compute the posterior probability for each state in the model. |
| sample([n, random_state]) | Generate random samples from the model. |
| score(obs) | Compute the log probability under the model. |
| score_samples(obs) | Compute the log probability under the model and compute posteriors. |
| set_params(**params) | Set the parameters of this estimator. |
| summarize() | Return some diagnostic summary statistics about this Markov model. |
| timescales_() | The implied relaxation timescales of the hidden Markov transition matrix. |
means_
Mean parameters for each state.
kappas_
Concentration parameter for each state. If kappa is zero, the distribution is uniform; as kappa grows, the distribution becomes tightly concentrated about the mean.
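To illustrate how kappa controls concentration: the circular variance of a von Mises distribution is 1 - I1(kappa)/I0(kappa), which approaches 1 (uniform) as kappa goes to zero and 0 (a point mass at the mean) as kappa grows. A quick check using scaled Bessel functions:

```python
from scipy.special import i0e, i1e

# Circular variance 1 - I1(kappa)/I0(kappa) for a few concentrations;
# i1e/i0e are scaled Bessel functions, and their ratio equals I1/I0.
for kappa in [1e-6, 1.0, 10.0, 100.0]:
    circ_var = 1 - i1e(kappa) / i0e(kappa)
    print(f"kappa={kappa:>8}: circular variance = {circ_var:.4f}")
```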
fit(obs)
Estimate model parameters.
An initialization step is performed before entering the EM algorithm. If you want to avoid this step, pass a proper init_params keyword argument to the estimator's constructor.
Parameters:
    obs : list
        List of array-like observation sequences, each of shape (n_i, n_features), where n_i is the length of the i-th sequence.
Notes
In general, logprob should be non-decreasing unless aggressive pruning is used. Decreasing logprob is generally a sign of overfitting (e.g. a covariance parameter getting too small). You can fix this by getting more training data, or strengthening the appropriate subclass-specific regularization parameter.
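A minimal usage sketch on synthetic angular data. The import path is a hypothetical assumption (adjust it to wherever VonMisesHMM lives in your install); the rest follows the API documented on this page:

```python
import numpy as np
from mixtape.vmhmm import VonMisesHMM  # hypothetical import path

rng = np.random.RandomState(0)
# Two angular regimes centered at 0 and pi radians, 1-dimensional emissions
theta = np.concatenate([
    rng.vonmises(0.0, 5.0, size=(500, 1)),
    rng.vonmises(np.pi, 5.0, size=(500, 1)),
])

model = VonMisesHMM(n_states=2, n_iter=100)
model.fit([theta])           # fit expects a list of observation sequences
print(model.means_)          # learned circular means
print(model.kappas_)         # learned concentrations
```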
overlap_()
Compute the matrix of normalized log overlap integrals between the hidden state distributions.
Returns:
    noverlap : array, shape=(n_components, n_components)
        Matrix of normalized log overlap integrals.
Notes
The analytic formula used here follows from equation (A4) of [R26].
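As a rough sketch of that analytic formula in the univariate case: the product of two von Mises densities integrates, over the circle, to a ratio of Bessel functions. The standalone function below is illustrative, not the method's internal code, and it omits the normalization that the method applies:

```python
import numpy as np
from scipy.special import i0e

def log_overlap(mu1, k1, mu2, k2):
    """Log of the overlap integral of two univariate von Mises densities.

    Uses the identity that the integral of vm(x; mu1, k1) * vm(x; mu2, k2)
    over the circle equals I0(k12) / (2*pi * I0(k1) * I0(k2)), where
    k12 = sqrt(k1**2 + k2**2 + 2*k1*k2*cos(mu1 - mu2)).
    """
    k12 = np.sqrt(k1 ** 2 + k2 ** 2 + 2 * k1 * k2 * np.cos(mu1 - mu2))
    # log I0(k) == log(i0e(k)) + k; the scaled Bessel function avoids overflow
    log_i0 = lambda k: np.log(i0e(k)) + k
    return log_i0(k12) - np.log(2 * np.pi) - log_i0(k1) - log_i0(k2)
```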
timescales_()
The implied relaxation timescales of the hidden Markov transition matrix.
By diagonalizing the transition matrix, its propagation of an arbitrary initial probability vector can be written as a sum of the eigenvectors of the transition matrix, weighted by per-eigenvector terms that decay exponentially with time. Each of these eigenvectors describes a "dynamical mode" of the transition matrix and has a characteristic timescale, which gives the timescale on which that mode decays towards equilibrium. These timescales are given by \(-1/\log(u_i)\), where the \(u_i\) are the eigenvalues of the transition matrix. In an HMM with N components, the number of non-infinite timescales is N-1, because the stationary distribution of the chain is associated with an eigenvalue of 1 and hence an infinite characteristic timescale. A sketch of this computation appears after the Returns block below.
Returns:
    timescales : array, shape=[n_states-1]
        The implied relaxation timescales of the transition matrix.
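A minimal sketch of that computation, assuming a row-stochastic transition matrix whose non-unit eigenvalues are real and positive (illustrative only):

```python
import numpy as np

def implied_timescales(transmat):
    """Relaxation timescales -1/log(u_i) of a transition matrix."""
    u = np.sort(np.real(np.linalg.eigvals(transmat)))[::-1]
    return -1.0 / np.log(u[1:])   # u[0] == 1 is the stationary eigenvalue

# Example: a nearly-metastable two-state chain
print(implied_timescales(np.array([[0.95, 0.05],
                                   [0.10, 0.90]])))
```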
decode(obs[, algorithm])
Find the most likely state sequence corresponding to obs. Uses the selected algorithm for decoding.
Parameters:
    obs : array_like, shape (n, n_features)
        Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
    algorithm : string, one of the decoder_algorithms
        Decoder algorithm to be used.
Returns:
    logprob : float
        Log probability of the maximum likelihood path through the HMM.
    state_sequence : array_like, shape (n,)
        Index of the most likely states for each observation.
See also
score_samples, score
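A brief usage sketch, continuing the hypothetical fitted model from the fit example above and assuming 'viterbi' is among the available decoder_algorithms:

```python
# Viterbi decoding of a single observation sequence of shape (n, n_features)
logprob, states = model.decode(theta, algorithm='viterbi')
print(logprob)        # joint log probability of the best path
print(states[:10])    # most likely hidden state index per frame
```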
get_params([deep])
Get parameters for this estimator.
Parameters:
    deep : boolean, optional
        If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
predict(obs[, algorithm])
Find the most likely state sequence corresponding to obs.
Parameters:
    obs : array_like, shape (n, n_features)
        Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
Returns:
    state_sequence : array_like, shape (n,)
        Index of the most likely states for each observation.
predict_proba(obs)
Compute the posterior probability for each state in the model.
Parameters:
    obs : array_like, shape (n, n_features)
        Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
Returns:
    T : array-like, shape (n, n_components)
        Posterior probability of each state for each observation.
sample([n, random_state])
Generate random samples from the model.
Parameters:
    n : int
        Number of samples to generate.
    random_state : RandomState or an int seed (0 by default)
        Random number generator; if None, the model's random_state is used.
Returns:
    obs : array_like, length n
        List of samples.
    hidden_states : array_like, length n
        List of hidden states.
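For example, again assuming the hypothetical fitted model from the fit example above:

```python
# Draw 5 observations along with the hidden states that generated them
obs, hidden_states = model.sample(5, random_state=42)
print(len(obs))       # 5 samples, one per time step
print(hidden_states)  # e.g. [0, 0, 1, 1, 1]
```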
score(obs)
Compute the log probability under the model.
Parameters:
    obs : array_like, shape (n, n_features)
        Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
Returns:
    logprob : float
        Log likelihood of obs.
See also
score_samples, decode
score_samples(obs)
Compute the log probability under the model and compute posteriors.
Parameters:
    obs : array_like, shape (n, n_features)
        Sequence of n_features-dimensional data points. Each row corresponds to a single data point.
Returns:
    logprob : float
        Log likelihood of the sequence obs.
    posteriors : array_like, shape (n, n_components)
        Posterior probabilities of each state for each observation.
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Returns:
    self
startprob_
Initial state occupation (mixing) probabilities for each state.
summarize()
Return some diagnostic summary statistics about this Markov model.
transmat_
Matrix of transition probabilities.