# Viterbi algorithm

The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources, and more generally, hidden Markov models. The forward algorithm is a closely related algorithm for computing the probability of a sequence of observed events. These algorithms belong to the realm of information theory.

The algorithm makes a number of assumptions. First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time. Second, these two sequences need to be aligned, and an instance of an observed event needs to correspond to exactly one instance of a hidden event. Third, computing the most likely hidden sequence up to a certain point t must depend only on the observed event at point t, and the most likely sequence at point t − 1. These assumptions are all satisfied in a first-order hidden Markov model.
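
For reference, the recursion these assumptions lead to can be written in the usual first-order HMM notation; the symbols below are the standard textbook ones and do not appear in the code later in this article:

```
V_{1,k} = \pi_k \, b_k(y_1)
V_{t,k} = b_k(y_t) \, \max_{x} \left( a_{x,k} \, V_{t-1,x} \right)
```

Here \pi_k is the probability of starting in state k, a_{x,k} the probability of a transition from state x to state k, b_k(y_t) the probability of observing y_t from state k, and V_{t,k} the probability of the most likely hidden-state sequence that accounts for the first t observations and ends in state k. The Viterbi path is recovered by remembering which x achieves each maximum. (The implementation later in this article arranges the same computation slightly differently, attaching each emission to the source state of a transition.)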

The terms "Viterbi path" and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".

The Viterbi algorithm was conceived by Andrew Viterbi in 1967 as an error-correction scheme for noisy digital communication links, finding universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

## Overview

The assumptions listed above can be elaborated as follows. The Viterbi algorithm operates on a state machine assumption. That is, at any time the system being modeled is in some state. There are a finite number of states, however large, that can be listed. Each state is represented as a node. Multiple sequences of states (paths) can lead to a given state, but one is the most likely path to that state, called the "survivor path". This is a fundamental assumption of the algorithm because the algorithm will examine all possible paths leading to a state and only keep the one most likely. This way the algorithm does not have to keep track of all possible paths, only one per state.

A second key assumption is that a transition from a previous state to a new state is marked by an incremental metric, usually a number. This transition is computed from the event. The third key assumption is that the events are cumulative over a path in some sense, usually additive. So the crux of the algorithm is to keep a number for each state. When an event occurs, the algorithm examines moving forward to a new set of states by combining the metric of a possible previous state with the incremental metric of the transition due to the event, and chooses the best. The incremental metric associated with an event depends on which transitions are possible from the old state to the new state. For example, in data communications, it may be possible to transmit only half the symbols from an odd-numbered state and the other half from an even-numbered state. Additionally, in many cases the state transition graph is not fully connected. A simple example is a car with three states (forward, stop, and reverse) where a transition from forward to reverse is not allowed: it must first enter the stop state. After computing the combinations of incremental metric and state metric, only the best survives and all other paths are discarded (a sketch of this update appears below). There are modifications to the basic algorithm which allow for a forward search in addition to the backward one described here.
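
A minimal sketch of this bookkeeping in Python is given below, using the car example just described. The state names, the allowed transitions, and the incremental metrics for one event are hypothetical values chosen only for illustration, and "best" is taken here to mean the largest cumulative metric.

```
# Sketch of the survivor-path update: each state keeps only its best
# cumulative metric and the path that achieved it.  Transitions absent from
# `transition_metric` are not allowed (e.g. 'forward' -> 'reverse'), so the
# transition graph is not fully connected.
def step(survivors, transition_metric):
    new_survivors = {}
    for (old, new), metric in transition_metric.items():
        candidate = survivors[old][0] + metric        # metrics are additive
        if new not in new_survivors or candidate > new_survivors[new][0]:
            new_survivors[new] = (candidate, survivors[old][1] + [new])
    return new_survivors

# Hypothetical starting metrics and incremental metrics for one observed event:
survivors = {'forward': (0.0, ['forward']),
             'stop':    (0.0, ['stop']),
             'reverse': (0.0, ['reverse'])}
event_metric = {('forward', 'forward'): 2.0, ('forward', 'stop'): 1.0,
                ('stop', 'forward'): 0.5,    ('stop', 'stop'): 1.5,
                ('stop', 'reverse'): 0.5,    ('reverse', 'reverse'): 2.0,
                ('reverse', 'stop'): 1.0}
survivors = step(survivors, event_metric)
print(survivors)
```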

Path history must be stored. In some cases, the search history is complete because the state machine at the encoder starts in a known state and there is sufficient memory to keep all the paths. In other cases, a programmatic solution must be found for limited resources: one example is convolutional encoding, where the decoder must truncate the history at a depth large enough to keep performance to an acceptable level. Although the Viterbi algorithm is very efficient and there are modifications that reduce the computational load, the memory requirements tend to remain constant.
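
As a sketch of the truncation idea (the `truncation_depth` value is hypothetical and must be chosen large enough for the code in use), the `step` sketch above would simply cap each stored survivor path, discarding decisions older than the chosen depth; a real decoder would read a decoded symbol off the old end before discarding it.

```
def truncate(path, truncation_depth=32):
    # Keep only the most recent `truncation_depth` states of a survivor path,
    # so memory use stays constant regardless of how long the input runs.
    return path[-truncation_depth:]
```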

## Example

Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather where Bob lives, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.

Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).

Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be written down in the Python programming language:

```
states = ('Rainy', 'Sunny')

observations = ('walk', 'shop', 'clean')

start_probability = {'Rainy': 0.6, 'Sunny': 0.4}

transition_probability = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}

emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}
```

In this piece of code, `start_probability` represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which (given the transition probabilities) is approximately `{'Rainy': 0.571, 'Sunny': 0.429}`. The `transition_probability` represents how the weather changes from day to day in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. The `emission_probability` represents how likely Bob is to perform a certain activity on each day: if it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk.
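
The equilibrium distribution quoted above can be checked with a few lines of code: repeatedly applying the transition probabilities to any initial distribution converges to it. The `stationary_distribution` helper below is written only for this check and is not part of the running example.

```
def stationary_distribution(trans_p, states, iterations=1000):
    # Power iteration: start from a uniform distribution and apply the
    # transition probabilities repeatedly until the distribution stops changing.
    dist = {s: 1.0 / len(states) for s in states}
    for _ in range(iterations):
        dist = {s: sum(dist[r] * trans_p[r][s] for r in states) for s in states}
    return dist

print(stationary_distribution(transition_probability, states))
# -> approximately {'Rainy': 0.571, 'Sunny': 0.429}
```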

Alice talks to Bob three days in a row and discovers that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. Alice has two questions: What is the overall probability of this sequence of observations? And what is the most likely sequence of rainy/sunny days that would explain these observations? The first question is answered by the forward algorithm; the second question is answered by the Viterbi algorithm. These two algorithms are structurally so similar (in fact, they are both instances of the same abstract algorithm) that they can be implemented in a single function:

```
def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    T = {}
    for state in states:
        #           prob.           V. path  V. prob.
        T[state] = (start_p[state], [state], start_p[state])
    for output in obs:
        U = {}
        for next_state in states:
            total = 0
            argmax = None
            valmax = 0
            for source_state in states:
                (prob, v_path, v_prob) = T[source_state]
                p = emit_p[source_state][output] * trans_p[source_state][next_state]
                prob *= p
                v_prob *= p
                total += prob
                if v_prob > valmax:
                    argmax = v_path + [next_state]
                    valmax = v_prob
            U[next_state] = (total, argmax, valmax)
        T = U
    # apply sum/max to the final states:
    total = 0
    argmax = None
    valmax = 0
    for state in states:
        (prob, v_path, v_prob) = T[state]
        total += prob
        if v_prob > valmax:
            argmax = v_path
            valmax = v_prob
    return (total, argmax, valmax)
```

The function `forward_viterbi` takes the following arguments: `obs` is the sequence of observations, e.g. `['walk', 'shop', 'clean']`; `states` is the set of hidden states; `start_p` is the start probability; `trans_p` are the transition probabilities; and `emit_p` are the emission probabilities.

The algorithm works on the mappings `T` and `U`. Each is a mapping from a state to a triple `(prob, v_path, v_prob)`, where `prob` is the total probability of all paths from the start to the current state (constrained by the observations), `v_path` is the Viterbi path up to the current state, and `v_prob` is the probability of the Viterbi path up to the current state. The mapping `T` holds this information for a given point t in time, and the main loop constructs `U`, which holds similar information for time t+1. Because of the Markov property, information about any point in time prior to t is not needed.

The algorithm begins by initializing `T` to the start probabilities: the total probability of a state is just its start probability, the Viterbi path to a start state is the singleton path consisting only of that state, and the probability of the Viterbi path equals the start probability.
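
With the example parameters above, the initialization therefore leaves `T` holding:

```
# (prob, v_path, v_prob) for each state, before any observation is processed:
T = {'Rainy': (0.6, ['Rainy'], 0.6),
     'Sunny': (0.4, ['Sunny'], 0.4)}
```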

The main loop considers the observations from `obs` in sequence. Its loop invariant is that `T` contains the correct information up to, but excluding, the point in time of the current observation. The algorithm then computes the triple `(prob, v_path, v_prob)` for each possible next state. The total probability of a given next state, `total`, is obtained by adding up the probabilities of all paths reaching that state. More precisely, the algorithm iterates over all possible source states. For each source state, `T` holds the total probability of all paths to that state. This probability is then multiplied by the emission probability of the current observation and the transition probability from the source state to the next state, and the resulting probability `prob` is added to `total`.

The probability of the Viterbi path is computed in a similar fashion, except that instead of adding across all paths one performs a discrete maximization. Initially the maximum value `valmax` is zero. For each source state, the probability of the Viterbi path to that state is known; it too is multiplied by the emission and transition probabilities, and it replaces `valmax` if it is greater than the current value. The Viterbi path itself is computed as the corresponding argmax of that maximization, by extending the Viterbi path that leads to the maximizing source state with the next state. The triple `(prob, v_path, v_prob)` computed in this fashion is stored in `U`, and once `U` has been computed for all possible next states, it replaces `T`, ensuring that the loop invariant holds at the end of the iteration.
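
For the running example, the first iteration of the main loop (observation `'walk'`) turns the initial `T` into the following mapping; the numbers can be verified by hand from the parameters given earlier.

```
T = {
    'Rainy': (0.138, ['Sunny', 'Rainy'], 0.096),
    #   total:  0.6*(0.1*0.7) + 0.4*(0.6*0.4) = 0.042 + 0.096
    #   best:   starting in 'Sunny' gives 0.4*(0.6*0.4) = 0.096
    'Sunny': (0.162, ['Sunny', 'Sunny'], 0.144),
    #   total:  0.6*(0.1*0.3) + 0.4*(0.6*0.6) = 0.018 + 0.144
    #   best:   starting in 'Sunny' gives 0.4*(0.6*0.6) = 0.144
}
```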

At the end, another summation and maximization are performed over the final states (this could also be done inside the main loop by adding a pseudo-observation after the last real observation).

In the running example, the forward/Viterbi algorithm is used as follows:

```
def example():
    return forward_viterbi(observations,
                           states,
                           start_probability,
                           transition_probability,
                           emission_probability)

print(example())
```

This reveals that the total probability of `['walk', 'shop', 'clean']` is 0.033612 and that the Viterbi path is `['Sunny', 'Rainy', 'Rainy', 'Rainy']`, with probability 0.009408. The Viterbi path contains four states because the third observation was generated by the third state and a transition to the fourth state. In other words, given the observed activities, it was most likely sunny when Bob went for a walk, and then it started to rain the next day and kept on raining.

When implementing this algorithm, note that many languages use floating-point arithmetic; because the probabilities involved become very small, the computation may underflow. A common technique to avoid this is to take the logarithm of the probabilities and use it throughout the computation, the same technique used in the logarithmic number system. Once the algorithm has terminated, an accurate value can be obtained by performing the appropriate exponentiation.
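
A minimal illustration of the problem and the remedy (a sketch, not a drop-in replacement for `forward_viterbi`): a long product of small probabilities underflows to 0.0 in floating point, while the sum of their logarithms stays representable.

```
import math

probs = [0.006] * 150          # a long product of small probabilities

product = 1.0
for p in probs:
    product *= p
print(product)                 # 0.0 -- the product has underflowed

log_product = sum(math.log(p) for p in probs)
print(log_product)             # about -767.4, still usable for comparing paths
```

Note that the maximization used for the Viterbi probability carries over to log space directly (the largest logarithm corresponds to the largest probability), while the summation used for the total probability requires the log-sum-exp technique instead.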

To implement this algorithm so that it accepts any number of states, observations, and probabilities entered from the keyboard, replace the parameter declarations above with the following:

```
states = []
number_of_states = int(input('How many states? Enter an integer greater than 1: '))
for index in range(number_of_states):
    states.append(input('Give a name to state number ' + str(index) + ': '))

observations = []
number_of_observations = int(input('How many observations? Enter an integer greater than 1: '))
for index in range(number_of_observations):
    observations.append(input('Give a name to observation number ' + str(index) + ': '))

start_probability = {}
for state in states:
    start_probability[state] = float(input('Give a value for the start probability of the state ' + state + ': '))

transition_probability = {}
for initial_state in states:
    transition_probability[initial_state] = {}
    for final_state in states:
        transition_probability[initial_state][final_state] = float(
            input('Give a value for the transition probability from the state '
                  + initial_state + ' to the state ' + final_state + ': '))

emission_probability = {}
for state in states:
    emission_probability[state] = {}
    for observation in observations:
        emission_probability[state][observation] = float(
            input('Give a value for the emission probability of the observation '
                  + observation + ' in the state ' + state + ': '))
```

## Extensions

The algorithm known as iterative Viterbi decoding finds the subsequence of an observation that best matches (on average) a given HMM. It works by iteratively invoking a modified Viterbi algorithm, re-estimating the score for a filler until convergence.

An alternative algorithm, the Lazy Viterbi algorithm, has also been proposed. It works by not expanding any node until it actually needs to, and usually manages to do much less work (in software) than the ordinary Viterbi algorithm for the same result; however, it is not as easy to parallelize in hardware.