Dirichlet distribution

Several images of the probability density of the Dirichlet distribution when K=3 for various parameter vectors α. Clockwise from top left: α=(6, 2, 2), (3, 7, 5), (6, 2, 6), (2, 3, 4).

In probability and statistics, the Dirichlet distribution (after Johann Peter Gustav Lejeune Dirichlet), often denoted Dir(α), is a family of continuous multivariate probability distributions parametrized by a vector α of positive reals. It is the multivariate generalization of the beta distribution and the conjugate prior of the categorical and multinomial distributions in Bayesian statistics. That is, its probability density function returns the belief that the probabilities of K rival events are xi, given that each event has been observed αi − 1 times.

Probability density function

Illustrating how the log of the density function changes when K=3 as we change the vector α from α=(0.3, 0.3, 0.3) to (2.0, 2.0, 2.0), keeping all the individual αi's equal to each other.

The Dirichlet distribution of order K ≥ 2 with parameters α1, ..., αK > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space RK–1 given by

f(x_1,\dots, x_{K-1}; \alpha_1,\dots, \alpha_K) = \frac{1}{\mathrm{B}(\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}

for all x1, ..., xK–1 > 0 satisfying x1 + ... + xK–1 < 1, where xK is an abbreviation for 1 – x1 – ... – xK–1. The density is zero outside this open (K − 1)-dimensional simplex.

The normalizing constant is the multinomial beta function, which can be expressed in terms of the gamma function:

\mathrm{B}(\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)},\qquad\alpha=(\alpha_1,\dots,\alpha_K).
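The density is easy to evaluate numerically from these two formulas. Below is a minimal sketch in Python (assuming NumPy and SciPy are available; the helper name dirichlet_logpdf is ours, not a library routine):

    import numpy as np
    from scipy.special import gammaln

    def dirichlet_logpdf(x, alpha):
        """Log-density of Dir(alpha) at a point x = (x_1, ..., x_K) on the open simplex."""
        x = np.asarray(x, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        # log B(alpha) = sum_i log Gamma(alpha_i) - log Gamma(sum_i alpha_i)
        log_b = np.sum(gammaln(alpha)) - gammaln(alpha.sum())
        return np.sum((alpha - 1.0) * np.log(x)) - log_b

    # Density of Dir(6, 2, 2) at (0.5, 0.25, 0.25); scipy.stats.dirichlet.pdf gives the same value
    print(np.exp(dirichlet_logpdf([0.5, 0.25, 0.25], [6, 2, 2])))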

Properties

Let X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha), meaning that the first K – 1 components have the above density and

X_K=1-X_1-\cdots-X_{K-1}.

Define \textstyle\alpha_0 = \sum_{i=1}^K\alpha_i. Then

 \mathrm{E}[X_i] = \frac{\alpha_i}{\alpha_0},
\mathrm{Var}[X_i] = \frac{\alpha_i (\alpha_0-\alpha_i)}{\alpha_0^2 (\alpha_0+1)},

in fact, the marginals are Beta distributions:

X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).

Furthermore,

\mathrm{Cov}[X_i, X_j] = \frac{- \alpha_i \alpha_j}{\alpha_0^2 (\alpha_0+1)} \qquad (i \ne j).

The mode of the distribution is the vector (x1, ..., xK) with

 x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \quad \alpha_i > 1.
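As an illustration, the mean, variance, and mode can be computed directly from α; a short sketch assuming NumPy (the helper name is ours):

    import numpy as np

    def dirichlet_moments(alpha):
        """Mean, variance, and (when defined) mode of Dir(alpha)."""
        alpha = np.asarray(alpha, dtype=float)
        a0 = alpha.sum()
        mean = alpha / a0
        var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1.0))
        # The mode exists only when every alpha_i > 1
        mode = (alpha - 1.0) / (a0 - len(alpha)) if np.all(alpha > 1) else None
        return mean, var, mode

    mean, var, mode = dirichlet_moments([6, 2, 2])
    print(mean)   # [0.6 0.2 0.2]
    print(mode)   # [0.714... 0.142... 0.142...]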

The Dirichlet distribution is conjugate to the multinomial distribution in the following sense: if

\beta \mid X \sim \operatorname{Mult}(n, X), \qquad \beta = (\beta_1, \ldots, \beta_K),

where βi is the number of occurrences of i in a sample of n points from the discrete distribution on {1, ..., K} defined by X, then

X \mid \beta \sim \operatorname{Dir}(\alpha + \beta).

This relationship is used in Bayesian statistics to estimate the hidden parameters, X, of a discrete probability distribution given a collection of n samples. Intuitively, if the prior is represented as Dir(α), then Dir(α + β) is the posterior following a sequence of observations with histogram β.
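For example, the update is a one-line addition of the count histogram to the prior parameters (a sketch assuming NumPy; the numbers are illustrative):

    import numpy as np

    # Prior Dir(1, 1, 1) over a three-outcome discrete distribution
    alpha_prior = np.array([1.0, 1.0, 1.0])

    # Histogram beta of n = 20 observed outcomes, e.g. 10 of type 1, 3 of type 2, 7 of type 3
    beta = np.array([10, 3, 7])

    alpha_posterior = alpha_prior + beta            # posterior is Dir(alpha + beta)
    print(alpha_posterior)                           # [11.  4.  8.]
    print(alpha_posterior / alpha_posterior.sum())   # posterior mean of the hidden probabilities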

Entropy

If X is a Dir(α) random variable, then we can use the exponential family differential identities to get an analytic expression for the expectation of log Xi:

\mathrm{E}[\log X_i] = \psi(\alpha_i) - \psi(\alpha_0),

where ψ is the digamma function. This yields the following formula for the information entropy of X:

 H(X) = \log \mathrm{B}(\alpha) + (\alpha_0-K)\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j-1)\psi(\alpha_j)
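Both expressions can be evaluated with the digamma and log-gamma functions; a minimal sketch assuming SciPy (the helper name is ours):

    import numpy as np
    from scipy.special import digamma, gammaln

    def dirichlet_entropy(alpha):
        """Differential entropy of Dir(alpha), from the formula above."""
        alpha = np.asarray(alpha, dtype=float)
        a0, K = alpha.sum(), len(alpha)
        log_b = np.sum(gammaln(alpha)) - gammaln(a0)
        return log_b + (a0 - K) * digamma(a0) - np.sum((alpha - 1.0) * digamma(alpha))

    alpha = np.array([6.0, 2.0, 2.0])
    print(digamma(alpha) - digamma(alpha.sum()))   # E[log X_i] for each component
    print(dirichlet_entropy(alpha))                # should match scipy.stats.dirichlet(alpha).entropy()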

Aggregation

If X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_K), then X' = (X_1, \ldots, X_i + X_j, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_i+\alpha_j,\ldots,\alpha_K). This aggregation property may be used to derive the marginal distribution of Xi mentioned above.
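A quick Monte Carlo check of this property, purely illustrative and assuming NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.dirichlet([6.0, 2.0, 2.0], size=100_000)

    # Aggregating the last two coordinates should give (X_1, X_2 + X_3) ~ Dir(6, 4),
    # so X_2 + X_3 is marginally Beta(4, 6) with mean 4/10 = 0.4.
    agg = samples[:, 1] + samples[:, 2]
    print(agg.mean())   # approximately 0.4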

Neutrality

(main article: neutral vector).

If X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha), then the vector X is said to be neutral[1] in the sense that X1 is independent of X_2/(1-X_1),X_3/(1-X_1),\ldots,X_K/(1-X_1) and similarly for X_2,\ldots,X_{K-1}.

Observe that any permutation of X is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).

Related distributions

  • If, for i\in\{1,2,\ldots,K\},
Y_i\sim\operatorname{Gamma}(\textrm{shape}=\alpha_i,\textrm{scale}=\theta)\text{ independently},
then
V=\sum_{i=1}^K Y_i\sim\operatorname{Gamma}(\textrm{shape}=\sum_{i=1}^K\alpha_i,\textrm{scale}=\theta),
and
X = (X_1,\ldots,X_K) = (Y_1/V,\ldots,Y_K/V)\sim \operatorname{Dir}(\alpha_1,\ldots,\alpha_K).
Though the Xi are not independent of one another, they can be seen to be generated from a set of K independent gamma random variables. Unfortunately, since the sum V is lost in forming X, it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful in proofs about properties of the Dirichlet distribution.
The following is a derivation of the Dirichlet distribution from the gamma distribution.
Let Yi, i = 1, 2, ..., K, be independent random variables, each gamma-distributed with the same scale parameter θ:
Y_i\sim\operatorname{Gamma}(\textrm{shape}=\alpha_i,\textrm{scale}=\theta).
Then the joint density of Y1, ..., YK is
 f(Y_1, Y_2, \cdots, Y_{K};\alpha_1, \alpha_2, \cdots, \alpha_K, \theta) =  \prod_{i=1}^K Y_i^{\alpha_i-1} \frac{e^{-Y_i/\theta}}{\theta^{\alpha_i} \Gamma(\alpha_i)}.
Change variables by setting

\gamma = \sum_{i=1}^K Y_i , \quad
X_1 = \frac{Y_1}{\sum_{i=1}^K Y_i} , \quad
X_2 = \frac{Y_2}{\sum_{i=1}^K Y_i} , \quad
\cdots , \quad
X_{K-1} = \frac{Y_{K-1}}{\sum_{i=1}^K Y_i}.
It follows that

Y_1 = \gamma X_1 , \quad
Y_2 = \gamma X_2 , \quad
\cdots , \quad
Y_{K-1} = \gamma X_{K-1} , \quad
Y_K = \gamma (1-X_1-X_2 \cdots - X_{K-1}).
The Jacobian determinant of this transformation is

\det\left( \frac{\partial(Y_1, Y_2, \cdots, Y_K)}{\partial(\gamma, X_1, X_2, \cdots, X_{K-1})} \right)
         =\begin{vmatrix}
                 X_1                     & \gamma  & 0        & \cdots & 0             \\
                 X_2                     & 0       & \gamma   & \cdots & 0             \\
                 \vdots                  & \vdots  & \vdots   & \ddots &  \vdots       \\
                 X_{K-1}                 & 0       & 0        & \cdots & \gamma        \\
                 1-\sum_{i=1}^{K-1}X_{i} & -\gamma & -\gamma  &\cdots  & -\gamma       \\
               \end{vmatrix},
whose absolute value is \gamma^{K-1}. Hence

dY_1 dY_2 \cdots dY_K = \gamma^{K-1}\, d\gamma\, dX_1 \cdots dX_{K-1}.
So,
 f(X_1, X_2, \cdots, X_{K-1}, \gamma;\alpha_1, \alpha_2, \cdots, \alpha_K, \theta)

=  \frac{e^{-\gamma/\theta}}{\theta^{\sum_{i=1}^K\alpha_i}\prod_{i=1}^K \Gamma(\alpha_i)} \gamma^{\sum_{i=1}^K\alpha_i-1} X_1^{\alpha_1-1} X_2^{\alpha_2-1} \cdots X_{K-1}^{\alpha_{K-1}-1} (1-\sum_{i=1}^{K-1}X_{i}) ^{\alpha_K-1}.
Integrating out γ yields the Dirichlet distribution:

\int_0^{\infty} f(X_1, X_2, \cdots, X_{K-1}, \gamma;\alpha_1, \alpha_2, \cdots, \alpha_K, \theta)\, d\gamma

= \frac{1}{\prod_{i=1}^K \Gamma(\alpha_i)}  X_1^{\alpha_1-1} X_2^{\alpha_2-1} \cdots X_{K-1}^{\alpha_{K-1}-1} \Bigl(1-\sum_{i=1}^{K-1}X_{i}\Bigr)^{\alpha_K-1} \int_0^{\infty} \frac{e^{-\gamma/\theta}}{\theta^{\sum_{i=1}^K\alpha_i}}\, \gamma^{\sum_{i=1}^K\alpha_i-1}\, d\gamma.

By the normalization of the gamma density,

\int_0^{\infty} \frac{e^{-\gamma/\theta}}{\theta^{\sum_{i=1}^K\alpha_i}}\, \gamma^{\sum_{i=1}^K\alpha_i-1}\, d\gamma = \Gamma\Bigl(\sum_{i=1}^K \alpha_i\Bigr).
Finally, we obtain the Dirichlet distribution
(X_1,\ldots,X_K) \sim \operatorname{Dir}(\alpha_1,\ldots,\alpha_K),
where X_K = 1 - X_1 - X_2 - \cdots - X_{K-1}.
  • Multinomial opinions in subjective logic are equivalent to Dirichlet distributions.

Random number generation

Gamma distribution

A fast method to sample a random vector x=(x_1, \ldots, x_K) from the K-dimensional Dirichlet distribution with parameters (\alpha_1, \ldots, \alpha_K) follows immediately from this connection. First, draw K independent random samples y_1, \ldots, y_K from gamma distributions each with density

 \textrm{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1} \; e^{-y_i}}{\Gamma (\alpha_i)}, \!

and then set

x_i = y_i/\sum_{j=1}^K y_j. \!
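A sketch of this procedure in Python, assuming NumPy (note that NumPy's random generator also provides a built-in dirichlet method):

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_dirichlet_gamma(alpha):
        """One draw from Dir(alpha): normalize independent Gamma(alpha_i, 1) variates."""
        y = rng.gamma(shape=np.asarray(alpha, dtype=float), scale=1.0)
        return y / y.sum()

    x = sample_dirichlet_gamma([6, 2, 2])
    print(x, x.sum())              # a point on the simplex; the components sum to 1
    # Equivalent built-in: rng.dirichlet([6, 2, 2])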

Marginal beta distributions

A less efficient algorithm[2] relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate x1 from a \textrm{Beta}(\alpha_1, \sum_{i=2}^K \alpha_i) distribution. Then simulate x_2, \ldots, x_{K-1} in order, as follows. For j=2, \ldots, K-1, simulate φj from a \textrm{Beta}(\alpha_j, \sum_{i=j+1}^K \alpha_i) distribution, and let x_j=(1-\sum_{i=1}^{j-1} x_i)\phi_j. Finally, set x_K=1-\sum_{i=1}^{K-1} x_i.
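The same algorithm, sketched in Python with NumPy (the function name is ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_dirichlet_beta(alpha):
        """One draw from Dir(alpha) using the marginal/conditional beta construction."""
        alpha = np.asarray(alpha, dtype=float)
        K = len(alpha)
        x = np.empty(K)
        remaining = 1.0                      # length of the simplex still unassigned
        for j in range(K - 1):
            phi = rng.beta(alpha[j], alpha[j + 1:].sum())
            x[j] = remaining * phi
            remaining -= x[j]
        x[K - 1] = remaining                 # x_K = 1 - sum of the others
        return x

    print(sample_dirichlet_beta([6, 2, 2]))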

Intuitive interpretations of the parameters

String cutting

One example use of the Dirichlet distribution is cutting strings (each of initial length 1.0) into K pieces with different lengths, where each piece has a designated mean length but some variation is allowed in the relative sizes of the pieces. The αi/α0 values specify the mean lengths of the cut pieces of string resulting from the distribution, and the variance around each mean varies inversely with α0.

Example of Dirichlet(1/2,1/3,1/6) distribution
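As an illustration of the inverse relationship between α0 and the variability, the sketch below (assuming NumPy; the concentration values are arbitrary) scales the same mean lengths (1/2, 1/3, 1/6) by increasing amounts:

    import numpy as np

    rng = np.random.default_rng(1)
    mean_lengths = np.array([1/2, 1/3, 1/6])

    for scale in (3, 30, 300):               # alpha_0 = 3, 30, 300
        pieces = rng.dirichlet(scale * mean_lengths, size=10_000)
        print(scale, pieces.mean(axis=0).round(3), pieces.std(axis=0).round(3))
        # mean lengths stay near (1/2, 1/3, 1/6); the spread shrinks as alpha_0 grows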

Pólya urn

Consider an urn containing balls of K different colors. Initially, the urn contains α1 balls of color 1, α2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn with another ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as \operatorname{Dir}(\alpha_1,\ldots,\alpha_K).[3]

Note that each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls. This "diminishing returns" effect can also help explain how large α values yield Dirichlet distributions with most of the probability mass concentrated around a single point on the simplex.
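A single run of the urn scheme can be simulated directly; a sketch assuming NumPy (the final proportions are one realization of approximately Dir(2, 3, 4)):

    import numpy as np

    rng = np.random.default_rng(7)
    counts = np.array([2.0, 3.0, 4.0])   # initial balls per color, i.e. alpha = (2, 3, 4)

    for _ in range(100_000):
        # Draw a ball in proportion to the current contents, then return it with one extra of its color
        color = rng.choice(len(counts), p=counts / counts.sum())
        counts[color] += 1

    print(counts / counts.sum())         # one realization of the limiting color proportions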

References

  1. ^ Connor, Robert J.; Mosimann, James E. (1969). "Concepts of Independence for Proportions with a Generalization of the Dirichlet Distribution". Journal of the American Statistical Association 64 (325): 194–206. doi:10.2307/2283728.
  2. ^ Gelman, A.; Carlin, J. B.; Stern, H. S.; Rubin, D. B. (2003). Bayesian Data Analysis (2nd ed.). p. 582. ISBN 1-58488-388-X.
  3. ^ Blackwell, David; MacQueen, James B. (1973). "Ferguson Distributions Via Pólya Urn Schemes". The Annals of Statistics 1 (2): 353–355. doi:10.1214/aos/1176342372.
