# Underlying Algorithms¶

## The Viterbi Algorithm¶

The Viterbi algorithm is an extension of the dynamic-programming idea of backward induction; it is made possible by the Markov property.

Notation can be a little confusing. The number of possible states will be \(S\): 2 for the unfair casino; 3 for the weather (Sunny, Cloudy, Rainy); 8 for the CpG islands.

The transitions between states form an \(S\times S\) stochastic matrix. We then have observables: in the unfair casino, the dice roll; in the weather example, the states of the seaweed (soggy, dry, damp, wet); there may also be sounds that are heard. For these observables we suppose we have a confusion matrix, the matrix of probabilities of seeing a certain observable given a state, e.g. \(P(6 \mid F)=1/6\) or \(P(wet\mid rainy)\). In the specific case of the nucleotides we know for sure that if we see an A, the underlying state can only be A+ or A-; the other probabilities are 0. We saw that we denote the emission probabilities by indexing by the state: \(e_{s}(o)=P(\mbox{see } o\mid \mbox{state is } s)\).

We compute partial probabilities up to a point.

We call \(v_k(i)\) the probability of the most probable path ending in state \(k\) at step \(i\) (i.e., with observation \(x_i\)), computed for all possible states \(k\).

Consider the calculation of \(v_Z(i)\): the most probable path to state \(Z\) at step \(i\) must pass through \(A\), \(B\), or \(C\) at step \(i-1\), so it will be one of:

- ...sequence of states ending in \(A, Z\)
- ...sequence of states ending in \(B, Z\)
- ...sequence of states ending in \(C, Z\)

*Figure: Viterbi Algorithm*

We want the path ending in \((A,Z)\), \((B,Z)\), or \((C,Z)\), with observation \(x_i\) at step \(i\), that has the highest (complete) probability. The path going through \(A\) has probability \(v_A(i-1)\,a_{AZ}\,e_Z(x_i)\).

The other two probabilities are the same with \(A\) replaced by \(B\) or \(C\). We compute these three probabilities and take the maximum:

Step 0, initialization (\(i=0\)): \(v_0(0) = 1\), \(v_k(0) = 0\) for \(k>0\).

Steps \(i=1,\ldots,L\): \(v_l(i)=e_l(x_i)\,\max_k \{v_k(i-1)\,a_{kl}\}\)

* The Viterbi algorithm employs the strategy of dynamic programming.
* Probabilities should be expressed in log space to avoid underflow errors.
* We need a backward pointer to the state \(k^*\) that gave the max in \(v_\ell(i)\). The backward pointer doesn't need the emission probabilities, since for the same state \(\ell\) the emission probability is the same for every candidate \(k\), so the pointer is \(\mathrm{ptr}_i(\ell)=\operatorname{argmax}_k\{v_k(i-1)\,a_{k\ell}\}\).
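The recursion, backpointer, and traceback above can be sketched in a few lines of log-space Python. This is a minimal sketch: the function name, the dictionary-of-dictionaries parameter layout, and the two-state fair/biased-coin numbers below are our own illustrative choices, not from the notes.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state path, computed with the Viterbi recursion in log space."""
    def log(p):  # log(0) -> -inf, so impossible transitions never win the max
        return math.log(p) if p > 0 else float("-inf")
    # initialization: v_k(1) = start_p(k) * e_k(x_1), in logs
    v = [{s: log(start_p[s]) + log(emit_p[s][obs[0]]) for s in states}]
    ptr = []
    for x in obs[1:]:
        scores, back = {}, {}
        for l in states:
            # the backpointer argmax omits the emission term e_l(x),
            # which is the same for every candidate previous state k
            k_star = max(states, key=lambda k: v[-1][k] + log(trans_p[k][l]))
            back[l] = k_star
            scores[l] = v[-1][k_star] + log(trans_p[k_star][l]) + log(emit_p[l][x])
        v.append(scores)
        ptr.append(back)
    # traceback from the best final state
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for back in reversed(ptr):
        path.append(back[path[-1]])
    return list(reversed(path))

# Illustrative two-state example (made-up numbers): F = fair, L = biased
states = ["F", "L"]
start_p = {"F": 0.5, "L": 0.5}
trans_p = {"F": {"F": 0.9, "L": 0.1}, "L": {"F": 0.1, "L": 0.9}}
emit_p = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}
path = viterbi(["H", "H", "H", "T", "T"], states, start_p, trans_p, emit_p)
```

The `log(0) = -inf` guard is what makes the log-space trick safe for transitions with zero probability.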

Actual example: three states of the weather (Sunny, Cloudy, Rainy) with transition matrix

We saw Brittle, Damp, Soggy... What was the weather over those three days? Initialization with the first observation, Brittle:

Step one, state 1 (Sunny):

- \(P(Sunny\mid Sunny) \times P(Damp\mid Sunny) \times 0.378 = 0.5 \times 0.15 \times 0.378 = 0.02835\)
- \(P(Sunny\mid Cloudy) \times P(Damp\mid Sunny) \times 0.0425 = 0.375 \times 0.15 \times 0.0425 \approx 0.00239\)
- \(P(Sunny\mid Rainy) \times P(Damp\mid Sunny) \times 0.01 = 0.125 \times 0.15 \times 0.01 = 0.0001875\)
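The three step-one products can be checked in a few lines. The day-0 values 0.378, 0.0425, and 0.01 are taken from the example's (elided) initialization; 0.15 is \(P(Damp\mid Sunny)\); the first factor in each product is the transition probability into Sunny.

```python
# Step-one Viterbi candidates for state Sunny, reproducing the worked example:
# (transition into Sunny) * P(Damp | Sunny) * (day-0 partial probability)
candidates = {
    "Sunny":  0.5   * 0.15 * 0.378,    # coming from Sunny
    "Cloudy": 0.375 * 0.15 * 0.0425,   # coming from Cloudy
    "Rainy":  0.125 * 0.15 * 0.01,     # coming from Rainy
}
best_prev = max(candidates, key=candidates.get)  # the backpointer for Sunny
```

Coming from Sunny gives the largest product, so the backpointer for Sunny at step one points back to Sunny.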

Step two:

Advantages of Viterbi: the computational complexity is greatly reduced compared with enumerating every path, and it is not deterred by noise in the middle of the sequence, since it looks at the whole path before deciding which is best.

Disadvantages: we don't get to see any slightly suboptimal paths. We will remedy this when we look at the Gibbs sampler.

### Forward Algorithm¶

We have a sequence \(x\) for which we would like to compute the probability \(P(x)=\sum_{\pi} P(x,\pi)\), summing over all possible state paths \(\pi\).

In fact we can compute this with an algorithm similar to Viterbi, where the maximum is replaced by a sum: \(f_l(i)=e_l(x_i)\sum_k f_k(i-1)\,a_{kl}\), where \(f_k(i)\) is the probability of the observations up to and including \(x_i\), with the path ending in state \(k\).

For a sequence of length \(L\): \(P(x)=\sum_k f_k(L)\).
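Replacing the Viterbi max by a sum gives a short implementation. A minimal sketch, with our own function name and dictionary parameter layout; in practice the probabilities should be rescaled (or kept in log space) for long sequences, for the same underflow reason as in Viterbi.

```python
def forward_prob(obs, states, start_p, trans_p, emit_p):
    """P(x) by the forward algorithm: the Viterbi max becomes a sum."""
    # initialization: f_k(1) = start_p(k) * e_k(x_1)
    f = {k: start_p[k] * emit_p[k][obs[0]] for k in states}
    for x in obs[1:]:
        # recursion: f_l(i) = e_l(x_i) * sum_k f_k(i-1) a_kl
        f = {l: emit_p[l][x] * sum(f[k] * trans_p[k][l] for k in states)
             for l in states}
    return sum(f.values())  # termination: P(x) = sum_k f_k(L)
```

A sanity check: summing \(P(x)\) over every possible observation sequence of a fixed length must give 1.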

### Backward Algorithm¶

Suppose we observe a sequence \(x\) and want to evaluate \(P(\pi_i=k\mid x)\). We factor \(P(x,\pi_i=k)=P(x_1\ldots x_i,\pi_i=k)\,P(x_{i+1}\ldots x_L\mid\pi_i=k)\).

The first term is \(f_k(i)\); the second is called \(b_k(i)\) and can be computed backwards from the end of the sequence, in the same way the forward algorithm worked. Then \(P(\pi_i=k\mid x)=f_k(i)\,b_k(i)/P(x)\).
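Combining the forward and backward tables gives the posterior \(P(\pi_i=k\mid x)\) at every position. A minimal unscaled sketch (the function name `posteriors` and parameter layout are ours, so this is only suitable for short sequences):

```python
def posteriors(obs, states, start_p, trans_p, emit_p):
    """P(pi_i = k | x) = f_k(i) * b_k(i) / P(x), via forward and backward tables."""
    n = len(obs)
    # forward table: f_k(i)
    f = [{k: start_p[k] * emit_p[k][obs[0]] for k in states}]
    for i in range(1, n):
        f.append({l: emit_p[l][obs[i]] * sum(f[i - 1][k] * trans_p[k][l]
                                             for k in states) for l in states})
    # backward table: b_k(n) = 1; b_k(i) = sum_l a_kl e_l(x_{i+1}) b_l(i+1)
    b = [None] * n
    b[n - 1] = {k: 1.0 for k in states}
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(trans_p[k][l] * emit_p[l][obs[i + 1]] * b[i + 1][l]
                       for l in states) for k in states}
    px = sum(f[n - 1][k] for k in states)  # P(x) from the forward termination
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(n)]
```

At every position the posterior probabilities over states must sum to 1, which is a useful check on an implementation.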

## Estimation of the Hidden Markov Model¶

### Parameters¶

If a state sequence is known, we have observed matrices of counts: \(A_{kl}\), the number of times state \(l\) followed state \(k\), and \(E_k(b)\), the number of times state \(k\) emitted observable \(b\). Maximum likelihood theory gives as estimates for \(a\) and \(e\): \(\hat{a}_{kl}=A_{kl}/\sum_{l'}A_{kl'}\) and \(\hat{e}_k(b)=E_k(b)/\sum_{b'}E_k(b')\).

We may have to use pseudocounts (equivalent to a Dirichlet prior) if the numbers are small or zero.
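A small sketch of these row-normalized estimators with additive pseudocounts (the name `ml_estimates` and the `pseudo` parameter are ours; `pseudo=1` corresponds to adding one fictitious observation to every cell, i.e. a uniform Dirichlet prior):

```python
def ml_estimates(A, E, pseudo=1.0):
    """ML estimates a_hat, e_hat from count matrices, with additive pseudocounts."""
    def normalize(row):
        total = sum(v + pseudo for v in row.values())
        return {key: (v + pseudo) / total for key, v in row.items()}
    a_hat = {k: normalize(row) for k, row in A.items()}  # a_hat_kl = A_kl / sum_l' A_kl'
    e_hat = {k: normalize(row) for k, row in E.items()}  # e_hat_k(b) = E_k(b) / sum_b' E_k(b')
    return a_hat, e_hat
```

With `pseudo=0` this is the bare maximum-likelihood estimate; with a positive pseudocount, an emission that was never observed still gets a small nonzero probability.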

### Baum-Welch, or EM Specialized to HMMs¶

We want to find the parameters \(\theta\) (the model) that maximize the likelihood \(P(x\mid\theta)\).

Here \(x\) means all the observations, including all the different sequences \(x^j\), \(j=1\ldots n\). We build a sequence of models \(\theta^t\); the missing data are the paths \(\pi\). The EM algorithm calculates \(A_{kl}\) and \(E_k(b)\) as the expected number of times each transition/emission occurs, given the training sequences:

\(P(a_{kl} \mbox{ used at position } i \mbox{ in sequence } x)=P(\pi_i=k,\pi_{i+1}=l\mid x,\theta)=\frac{f_k(i)\,a_{kl}\,e_l(x_{i+1})\,b_l(i+1)}{P(x)}\)

Thus the estimate by expectation (averaging) is \(A_{kl}=\sum_j\frac{1}{P(x^j)}\sum_i f_k^j(i)\,a_{kl}\,e_l(x_{i+1}^j)\,b_l^j(i+1)\).

Initialize at arbitrary parameter values. For each sequence \(j=1\ldots n\):

- Calculate \(f_k(i)\) for sequence \(j\) using the forward algorithm.
- Calculate \(b_k(i)\) for sequence \(j\) using the backward algorithm.
- Add the contribution of sequence \(j\) to \(A\) and \(E\).

Calculate the new model parameters from the new \(A\) and \(E\), then the new log likelihood of the model. Stop when the incremental change is small, or after enough iterations.
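One full EM iteration (forward and backward tables, expected counts, re-estimation) can be sketched as follows. The function name and dictionary parameter layout are our own, and no scaling is applied, so this sketch is only suitable for short training sequences.

```python
def baum_welch_step(seqs, states, start_p, trans_p, emit_p):
    """One Baum-Welch (EM) iteration: expected counts A, E, then re-estimation."""
    A = {k: {l: 0.0 for l in states} for k in states}  # expected transition counts
    E = {k: {} for k in states}                        # expected emission counts
    for x in seqs:
        n = len(x)
        # forward table f_k(i)
        f = [{k: start_p[k] * emit_p[k][x[0]] for k in states}]
        for i in range(1, n):
            f.append({l: emit_p[l][x[i]] * sum(f[i - 1][k] * trans_p[k][l]
                                               for k in states) for l in states})
        # backward table b_k(i)
        b = [None] * n
        b[n - 1] = {k: 1.0 for k in states}
        for i in range(n - 2, -1, -1):
            b[i] = {k: sum(trans_p[k][l] * emit_p[l][x[i + 1]] * b[i + 1][l]
                           for l in states) for k in states}
        px = sum(f[n - 1][k] for k in states)  # P(x)
        # E-step: A_kl += f_k(i) a_kl e_l(x_{i+1}) b_l(i+1) / P(x)
        for i in range(n - 1):
            for k in states:
                for l in states:
                    A[k][l] += (f[i][k] * trans_p[k][l]
                                * emit_p[l][x[i + 1]] * b[i + 1][l] / px)
        # E-step: E_k(b) += f_k(i) b_k(i) / P(x)
        for i in range(n):
            for k in states:
                E[k][x[i]] = E[k].get(x[i], 0.0) + f[i][k] * b[i][k] / px
    # M-step: normalize the expected counts into new parameters
    new_trans = {k: {l: A[k][l] / sum(A[k].values()) for l in states} for k in states}
    new_emit = {k: {s: c / sum(E[k].values()) for s, c in E[k].items()} for k in states}
    return new_trans, new_emit
```

Iterating this step, and monitoring the log likelihood via the forward algorithm, gives the full Baum-Welch procedure described above.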

## Structure of HMMs¶

Prior knowledge should be used as effectively as possible.

### Duration¶

* Length of extent of the model (staying with the same nucleotide frequencies, for instance):

  - Exponentially decaying (geometric) length: \(P(\mbox{k residues}) = (1-p)p^{k-1}\)
  - Defined range of lengths, e.g., a model with a distribution of lengths between 2 and 8
  - Non-geometric length distribution, e.g., an array of \(n\) states

* Silent (or null) states, e.g., Begin and End states:

  - Do not emit any symbols.
  - Are used to simplify the models and reduce the number of transition probabilities to estimate from training data.

No loops consisting entirely of silent states are allowed. All HMM-related algorithms can be easily adjusted to incorporate silent states.
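The geometric duration formula above can be checked numerically: a state with self-transition probability \(p\) is occupied for \(k\) residues with probability \((1-p)p^{k-1}\), giving a mean stay of \(1/(1-p)\). The value \(p = 0.9\) below is an arbitrary illustrative choice.

```python
def length_pmf(k, p=0.9):
    """P(k residues) = (1 - p) p^(k - 1): length of a stay in a state with
    self-transition probability p (k = 1, 2, ...)."""
    return (1 - p) * p ** (k - 1)

# Truncated numerical check of the mean stay length, 1 / (1 - p) = 10 for p = 0.9
mean_length = sum(k * length_pmf(k) for k in range(1, 2000))
```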