# Beta distribution

| Quantity | Expression |
| --- | --- |
| Parameters | $\alpha > 0$ shape (real), $\beta > 0$ shape (real) |
| Support | $x \in [0, 1]$ |
| PDF | $\frac{x^{\alpha-1}(1-x)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}$ |
| CDF | $I_x(\alpha,\beta)$ |
| Mean | $\frac{\alpha}{\alpha+\beta}$ |
| Mode | $\frac{\alpha-1}{\alpha+\beta-2}$ for $\alpha > 1,\ \beta > 1$ |
| Variance | $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$ |
| Skewness | $\frac{2\,(\beta-\alpha)\sqrt{\alpha+\beta+1}}{(\alpha+\beta+2)\sqrt{\alpha\beta}}$ |
| Excess kurtosis | see text |
| Entropy | see text |
| MGF | $1 +\sum_{k=1}^{\infty} \left( \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} \right) \frac{t^k}{k!}$ |
| Characteristic function | ${}_1F_1(\alpha;\ \alpha+\beta;\ i\,t)$ |

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] parameterized by two positive shape parameters, typically denoted by α and β. It is the special case of the Dirichlet distribution with only two parameters. Since the Dirichlet distribution is the conjugate prior of the multinomial distribution, the beta distribution is the conjugate prior of the binomial distribution. In Bayesian statistics, it can be seen as the posterior distribution of the parameter p of a binomial distribution after observing α − 1 independent events with probability p and β − 1 with probability 1 − p, if the prior distribution of p was uniform.

## Characterization

### Probability density function

The probability density function of the beta distribution is $f(x;\alpha,\beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{\int_0^1 u^{\alpha-1} (1-u)^{\beta-1}\, du} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} = \frac{1}{\mathrm{B}(\alpha,\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}$

where Γ is the gamma function. The beta function, B, appears as a normalization constant to ensure that the total probability integrates to unity.
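As a quick numerical illustration of the gamma-function form (a minimal sketch assuming NumPy and SciPy are available; the shape values are arbitrary):

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0                     # arbitrary example shape parameters
x = np.linspace(0.01, 0.99, 5)

# f(x; a, b) = Gamma(a + b) / (Gamma(a) Gamma(b)) * x^(a-1) * (1-x)^(b-1)
pdf_manual = gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)
print(np.allclose(pdf_manual, beta_dist.pdf(x, a, b)))  # True
```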

### Cumulative distribution function

The cumulative distribution function is $F(x;\alpha,\beta) = \frac{\mathrm{B}_x(\alpha,\beta)}{\mathrm{B}(\alpha,\beta)} = I_x(\alpha,\beta)$

where $\mathrm{B}_x(\alpha,\beta)$ is the incomplete beta function and $I_x(\alpha,\beta)$ is the regularized incomplete beta function.
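Numerically, the regularized incomplete beta function is available directly (a sketch assuming SciPy; the parameters are illustrative):

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0
x = np.linspace(0.0, 1.0, 11)
# scipy.special.betainc is the *regularized* incomplete beta function I_x(a, b)
print(np.allclose(betainc(a, b, x), beta_dist.cdf(x, a, b)))  # True
```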

## Properties

### Moments

The expected value, second moment, and variance of a beta random variable X with parameters α and β are given by the formulae: \begin{align} \operatorname{E}(X) &= \frac{\alpha}{\alpha+\beta} \\ \operatorname{E}(X^2) &= \frac{\alpha (\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)} \\ \operatorname{Var}(X) &= \frac{\alpha \beta}{(\alpha+\beta)^2(\alpha+\beta+1)} \end{align}

The skewness is $\frac{2 (\beta - \alpha) \sqrt{\alpha + \beta + 1}}{(\alpha + \beta + 2) \sqrt{\alpha \beta}}.$

The excess kurtosis is $6\,\frac{\alpha^3-\alpha^2(2\beta-1)+\beta^2(\beta+1)-2\alpha\beta(\beta+2)}{\alpha \beta (\alpha+\beta+2) (\alpha+\beta+3)}.$

In general, the kth raw moment is given by $\operatorname{E}(X^k) = \frac{\operatorname{B}(\alpha+k, \beta)}{\operatorname{B}(\alpha,\beta)},$

which can also be written in a recursive form as $\operatorname{E}(X^k) = \frac{\alpha+k-1}{\alpha+\beta+k-1}\operatorname{E}(X^{k-1}).$
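The recursion is convenient computationally. A minimal sketch (assuming SciPy; the example parameters are arbitrary):

```python
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0
m = 1.0                                     # E[X^0] = 1 starts the recursion
for k in range(1, 5):
    m *= (a + k - 1) / (a + b + k - 1)      # E[X^k] from E[X^(k-1)]
    print(k, m, beta_dist.moment(k, a, b))  # the two columns agree
```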

### Quantities of information

Given two beta-distributed random variables, X ~ Beta(α, β) and Y ~ Beta(α′, β′), the information entropy of X is $H(X) = \ln\mathrm{B}(\alpha,\beta)-(\alpha-1)\psi(\alpha)-(\beta-1)\psi(\beta)+(\alpha+\beta-2)\psi(\alpha+\beta)$

where ψ is the digamma function.
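A minimal numerical check of this closed form (assuming SciPy; the example parameters are arbitrary):

```python
from scipy.special import betaln, digamma as psi
from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0
# H(X) = ln B(a, b) - (a-1) psi(a) - (b-1) psi(b) + (a+b-2) psi(a+b)
H = betaln(a, b) - (a - 1) * psi(a) - (b - 1) * psi(b) + (a + b - 2) * psi(a + b)
print(H, beta_dist.entropy(a, b))  # the two values agree
```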

The cross entropy is $H(X,Y) = \ln\mathrm{B}(\alpha',\beta')-(\alpha'-1)\psi(\alpha)-(\beta'-1)\psi(\beta)+(\alpha'+\beta'-2)\psi(\alpha+\beta).\,$

It follows that the Kullback-Leibler divergence between these two beta distributions is $D_{\mathrm{KL}}(X,Y) = \ln\frac{\mathrm{B}(\alpha',\beta')} {\mathrm{B}(\alpha,\beta)} - (\alpha'-\alpha)\psi(\alpha) - (\beta'-\beta)\psi(\beta) + (\alpha'-\alpha+\beta'-\beta)\psi(\alpha+\beta).$
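As a numerical sanity check (a sketch assuming SciPy; the parameter values are arbitrary), the closed form can be compared against direct numerical integration of $\int p \ln(p/q)$:

```python
from scipy.special import betaln, digamma as psi
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

a, b = 2.0, 5.0      # parameters of X
ap, bp = 3.0, 3.0    # parameters of Y

kl_closed = (betaln(ap, bp) - betaln(a, b)
             - (ap - a) * psi(a) - (bp - b) * psi(b)
             + (ap - a + bp - b) * psi(a + b))

# numerical check: KL(X || Y) = integral of p(x) [log p(x) - log q(x)] over (0, 1)
kl_numeric, _ = quad(
    lambda x: beta_dist.pdf(x, a, b)
    * (beta_dist.logpdf(x, a, b) - beta_dist.logpdf(x, ap, bp)),
    0, 1)
print(kl_closed, kl_numeric)  # the two values agree
```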

### Shapes

The beta density function can take on different shapes depending on the values of the two parameters:

• $\alpha = 1,\ \beta = 1$ is the uniform [0,1] distribution
• $\alpha < 1,\ \beta < 1$ is U-shaped (red plot)
• $\alpha < 1,\ \beta \geq 1$ or $\alpha = 1,\ \beta > 1$ is strictly decreasing (blue plot)
  • $\alpha = 1,\ \beta > 2$ is strictly convex
  • $\alpha = 1,\ \beta = 2$ is a straight line
  • $\alpha = 1,\ 1 < \beta < 2$ is strictly concave
• $\alpha = 1,\ \beta < 1$ or $\alpha > 1,\ \beta \leq 1$ is strictly increasing (green plot)
  • $\alpha > 2,\ \beta = 1$ is strictly convex
  • $\alpha = 2,\ \beta = 1$ is a straight line
  • $1 < \alpha < 2,\ \beta = 1$ is strictly concave
• $\alpha > 1,\ \beta > 1$ is unimodal (purple & black plots)

Moreover, if α = β then the density function is symmetric about 1/2 (red & purple plots).
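These regimes can be visualized by plotting a few densities (a minimal sketch assuming NumPy, SciPy, and matplotlib; the parameter pairs are illustrative choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta as beta_dist

x = np.linspace(0.001, 0.999, 500)
cases = {                          # one (alpha, beta) pair per regime above
    "U-shaped (0.5, 0.5)": (0.5, 0.5),
    "decreasing (1, 3)": (1, 3),
    "increasing (3, 1)": (3, 1),
    "uniform (1, 1)": (1, 1),
    "unimodal (2, 5)": (2, 5),
    "symmetric (4, 4)": (4, 4),
}
for label, (a, b) in cases.items():
    plt.plot(x, beta_dist.pdf(x, a, b), label=label)
plt.xlabel("x")
plt.ylabel("f(x; a, b)")
plt.legend()
plt.show()
```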

## Parameter estimation

Let $\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i$

be the sample mean and $v = \frac{1}{N}\sum_{i=1}^N (x_i - \bar{x})^2$

be the sample variance. The method-of-moments estimates of the parameters are $\hat{\alpha} = \bar{x} \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right),$ $\hat{\beta} = (1-\bar{x}) \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right),$ which are positive provided that $v < \bar{x}(1 - \bar{x})$.

If the distribution is required over an interval other than [0, 1], say $[\ell,h]$, then replace $\bar{x}$ with $\frac{(\bar{x}-\ell)}{(h-\ell)} ,$ and $\ v$ with $\frac{v}{(h-\ell)^2}$ in the above equations.
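A minimal sketch of these estimators (assuming NumPy; `beta_mom` is a hypothetical helper name, and its rescaling arguments follow the interval substitution above):

```python
import numpy as np

def beta_mom(samples, l=0.0, h=1.0):
    """Method-of-moments estimates of (alpha, beta) for data on [l, h]."""
    x = (np.asarray(samples) - l) / (h - l)  # map the data onto [0, 1]
    xbar, v = x.mean(), x.var()              # 1/N sample variance, as above
    common = xbar * (1 - xbar) / v - 1       # positive only if v < xbar(1 - xbar)
    return xbar * common, (1 - xbar) * common

rng = np.random.default_rng(0)
data = rng.beta(2.0, 5.0, size=100_000)      # synthetic data with known shapes
print(beta_mom(data))                        # approximately (2.0, 5.0)
```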

## Related distributions

• If X has a beta distribution, then T=X/(1-X) has a "beta distribution of the second kind", also called the beta prime distribution.
• The connection with the binomial distribution is mentioned below.
• The Beta(1,1) distribution is identical to the standard uniform distribution.
• If X and Y are independently distributed Gamma(α, θ) and Gamma(β, θ) respectively, then X / (X + Y) is distributed Beta(α, β); see the simulation sketch after this list.
• If X and Y are independently distributed Beta(α,β) and F(2β,2α) (Snedecor's F distribution with 2β and 2α degrees of freedom), then Pr(X ≤ α/(α+xβ)) = Pr(Y > x) for all x > 0.
• The beta distribution is a special case of the Dirichlet distribution for only two parameters.
• The Kumaraswamy distribution resembles the beta distribution, but has a cdf and quantile function that are available in closed form.
• If $X \sim {\rm U}(0, 1]$ has a uniform distribution, then $X^2 \sim {\rm Beta}(1/2, 1)$, or, in the four-parameter case, $X^2 \sim {\rm Beta}(0, 1, 1/2, 1)$, which is a special case of the beta distribution called the power-function distribution.
• Binomial opinions in subjective logic are equivalent to Beta distributions.
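As referenced in the list above, the gamma-ratio property is easy to check by simulation (a sketch assuming NumPy and SciPy; all parameter values are illustrative):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
a, b, theta = 2.0, 5.0, 3.0
X = rng.gamma(a, theta, size=100_000)    # Gamma(alpha, theta)
Y = rng.gamma(b, theta, size=100_000)    # Gamma(beta, theta), independent
ratio = X / (X + Y)

# Kolmogorov-Smirnov test against Beta(a, b): a large p-value is consistent
print(kstest(ratio, "beta", args=(a, b)))
```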

## Applications

Beta(i, j) with integer values of i and j is the distribution of the i-th order statistic (the i-th smallest value) of a sample of i + j − 1 independent random variables uniformly distributed between 0 and 1. The cumulative probability from 0 to x is thus the probability that the i-th smallest value is less than x; in other words, it is the probability that at least i of the random variables are less than x, a probability given by summing over the binomial distribution with its p parameter set to x. This shows the intimate connection between the beta distribution and the binomial distribution.
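A minimal numerical check of this identity (assuming SciPy; i, j, and x are illustrative):

```python
from scipy.stats import beta as beta_dist, binom

i, j, x = 3, 5, 0.4                # illustrative values
n = i + j - 1                      # size of the uniform sample
lhs = beta_dist.cdf(x, i, j)       # P(i-th smallest value < x)
rhs = binom.sf(i - 1, n, x)        # P(at least i of the n uniforms are < x)
print(lhs, rhs)                    # the two values agree
```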

Beta distributions are used extensively in Bayesian statistics, since they provide a family of conjugate prior distributions for binomial (including Bernoulli) and geometric distributions. The Beta(0,0) distribution is an improper prior that is sometimes used to represent ignorance of parameter values.

The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method (CPM) and other project management / control systems to describe the time to completion of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution: \begin{align} \mu(X) & {} = \frac{a + 4b + c}{6} \\ \sigma(X) & {} = \frac{c-a}{6} \end{align}

where a is the minimum, c is the maximum, and b is the most likely value.

These approximations are exact only for particular values of α and β, specifically when $\alpha = 3 - \sqrt{2}$ and $\beta = 3 + \sqrt{2}$, or vice versa. For most other beta distributions they are notably poor approximations, exhibiting average errors of 40% in the mean and 549% in the variance.
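A small sketch verifying the exactness claim for these shape values (assuming NumPy; the interval endpoints are illustrative):

```python
import numpy as np

alpha, beta_ = 3 - np.sqrt(2), 3 + np.sqrt(2)
a, c = 10.0, 40.0                            # illustrative minimum and maximum

# exact mean and standard deviation of Beta(alpha, beta) rescaled to [a, c]
mean = a + (c - a) * alpha / (alpha + beta_)
std = (c - a) * np.sqrt(alpha * beta_ / ((alpha + beta_) ** 2 * (alpha + beta_ + 1)))

# PERT shortcuts, taking the most likely value b as the mode of that beta
b = a + (c - a) * (alpha - 1) / (alpha + beta_ - 2)
print(mean, (a + 4 * b + c) / 6)             # identical
print(std, (c - a) / 6)                      # identical: alpha*beta = 7, so var = 1/36
```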

### Information theory

As an example of the beta distribution in information theory, consider the information-theoretic performance analysis of a communication system with a sensor array, where the distribution of the inner product of two random vectors is frequently needed. Assume that s and v are unit vectors drawn independently and isotropically from the (M − 1)-dimensional nullspace of h, where s, v, and h lie in $\mathbb{C}^M$ and the elements of h are i.i.d. complex Gaussian random variables. Then the squared magnitude of their inner product, $|s^{H}v|^2$, is Beta(1, M − 2) distributed.
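A Monte Carlo sketch of this claim (assuming NumPy and SciPy; M and the trial count are illustrative, and the isotropic draws are implemented via an orthonormal nullspace basis):

```python
import numpy as np
from scipy.linalg import null_space
from scipy.stats import kstest

rng = np.random.default_rng(0)
M, trials = 6, 5_000

def iso_unit(dim):
    """Isotropic unit vector in C^dim (normalized complex Gaussian)."""
    g = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return g / np.linalg.norm(g)

vals = np.empty(trials)
for t in range(trials):
    h = rng.normal(size=M) + 1j * rng.normal(size=M)  # i.i.d. complex Gaussian
    B = null_space(h.conj()[None, :])   # orthonormal basis of h's nullspace, (M, M-1)
    s = B @ iso_unit(M - 1)             # isotropic unit vectors in that nullspace
    v = B @ iso_unit(M - 1)
    vals[t] = abs(np.vdot(s, v)) ** 2   # |s^H v|^2

# compare against Beta(1, M - 2): a large p-value is consistent with the claim
print(kstest(vals, "beta", args=(1, M - 2)))
```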

### Rule of succession

A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, p should be estimated as $\frac{s+1}{n+2}$: this is the mean of the posterior distribution, Beta(s + 1, n − s + 1), obtained by starting from a uniform prior and observing s successes in n trials.
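A minimal sketch of this computation (assuming SciPy; the counts are illustrative):

```python
# Rule of succession as a posterior mean: a uniform Beta(1, 1) prior updated
# with s successes in n trials gives a Beta(s + 1, n - s + 1) posterior.
from scipy.stats import beta as beta_dist

s, n = 7, 10                                # illustrative counts
posterior = beta_dist(s + 1, n - s + 1)
print(posterior.mean(), (s + 1) / (n + 2))  # both 0.6666...
```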

## Four parameters

A beta distribution with the two shape parameters α and β is supported on the range [0,1]. It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum and maximum values of the distribution.
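A four-parameter beta is a location-scale transform of the standard one: if X ~ Beta(α, β) and the support is to be [a, c], then Y = a + (c − a)X. A minimal sketch (assuming SciPy; the bounds and shapes are illustrative) using scipy.stats.beta's loc and scale arguments:

```python
from scipy.stats import beta as beta_dist

alpha, b = 2.0, 5.0
a, c = 10.0, 40.0                            # illustrative minimum and maximum
Y = beta_dist(alpha, b, loc=a, scale=c - a)  # Y = a + (c - a) X
print(Y.support(), Y.mean())                 # (10.0, 40.0) and 10 + 30 * 2/7
```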