# Beta distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval (0, 1), parameterized by two positive shape parameters, typically denoted by α and β. The beta distribution is suited to the statistical modelling of proportions in applications where values equal to 0 or 1 do not occur. One theoretical case where the beta distribution arises is as the distribution of the ratio formed by one random variable having a gamma distribution divided by the sum of it and another independent random variable also having a gamma distribution with the same scale parameter (but possibly a different shape parameter).

The usual formulation of the beta distribution is also known as the beta distribution of the first kind, whereas beta distribution of the second kind is an alternative name for the beta prime distribution.

## Characterization

### Probability density function

The probability density function of the beta distribution is:

\begin{align} f(x;\alpha,\beta) & = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{\int_0^1 u^{\alpha-1} (1-u)^{\beta-1}\, du} \\[6pt] & = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} \\[6pt] & = \frac{1}{\mathrm{B}(\alpha,\beta)}\, x ^{\alpha-1}(1-x)^{\beta-1} \end{align}

where $$\Gamma(z)$$ is the gamma function. The beta function, B, appears as a normalization constant to ensure that the total probability integrates to unity.
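The density is straightforward to evaluate from the gamma-function form; the following sketch uses only Python's standard library (the function name `beta_pdf` is illustrative, not a library API):

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x in (0, 1), using the gamma-function form of 1/B(a, b)."""
    norm = gamma(a + b) / (gamma(a) * gamma(b))  # 1 / B(a, b)
    return norm * x ** (a - 1) * (1 - x) ** (b - 1)

print(beta_pdf(0.5, 2.0, 2.0))   # 1.5: the symmetric Beta(2, 2) density peaks at x = 0.5
```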

A random variable X that is beta-distributed with shape parameters α and β is denoted

$$X \sim \textrm{Be}(\alpha, \beta)$$

### Cumulative distribution function

The cumulative distribution function is

$$F(x;\alpha,\beta) = \frac{\mathrm{B}_x(\alpha,\beta)}{\mathrm{B}(\alpha,\beta)} = I_x(\alpha,\beta) \!$$

where $$\mathrm{B}_x(\alpha,\beta)$$ is the incomplete beta function and $$I_x(\alpha,\beta)$$ is the regularized incomplete beta function.
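The CDF generally has no elementary closed form, but it is easy to approximate numerically; the sketch below (illustrative names, standard library only) evaluates $$I_x(\alpha,\beta)$$ with a midpoint rule, which avoids the interval endpoints and so also tolerates $$\alpha < 1$$ or $$\beta < 1$$, at reduced accuracy:

```python
from math import gamma

def beta_cdf(x, a, b, steps=20_000):
    """Approximate the regularized incomplete beta function I_x(a, b) by midpoint integration."""
    norm = gamma(a + b) / (gamma(a) * gamma(b))  # 1 / B(a, b)
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h          # midpoint of the i-th panel, strictly inside (0, x)
        total += t ** (a - 1) * (1 - t) ** (b - 1)
    return norm * total * h
```

For example, by symmetry the Beta(2, 2) distribution has CDF 1/2 at x = 1/2, and Beta(1, 1) reduces to the uniform CDF.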

## Properties

The mode of a beta-distributed random variable X with parameters α > 1 and β > 1 is:

$$\frac{\alpha - 1}{\alpha + \beta - 2}$$[1]

The expected value (mean) ($$\mu$$) and variance (second central moment) of a beta-distributed random variable X with parameters α and β are:

\begin{align} \mu &= \operatorname{E}(X) &&= \frac{\alpha}{\alpha + \beta} \\ \operatorname{Var}(X) &= \operatorname{E}(X - \mu)^2 &&= \frac{\alpha \beta}{(\alpha + \beta)^2(\alpha + \beta + 1)} \end{align}

The skewness is

$$\frac{\operatorname{E}(X - \mu)^3}{[\operatorname{E}(X - \mu)^2]^{3/2}} = \frac{2 (\beta - \alpha) \sqrt{\alpha + \beta + 1} } {(\alpha + \beta + 2) \sqrt{\alpha \beta}}$$

The kurtosis excess is:

$$\frac{\operatorname{E}(X - \mu)^4}{[\operatorname{E}(X - \mu)^2]^{2}}-3 = \frac{6[\alpha^3-\alpha^2(2\beta - 1) + \beta^2(\beta + 1) - 2\alpha\beta(\beta + 2)]} {\alpha \beta (\alpha + \beta + 2) (\alpha + \beta + 3)}$$

or:

$$\frac{6[(\alpha - \beta)^2 (\alpha +\beta + 1) - \alpha \beta (\alpha + \beta + 2)]} {\alpha \beta (\alpha + \beta + 2) (\alpha + \beta + 3)}$$

In general, the kth raw moment is given by

$$\operatorname{E}(X^k) = \frac{\operatorname{B}(\alpha + k, \beta)}{\operatorname{B}(\alpha,\beta)} = \frac{(\alpha)_{k}}{(\alpha + \beta)_{k}}$$

where $$(x)_{k}$$ is the Pochhammer symbol denoting the rising factorial. The raw moments can also be written in recursive form as

$$\operatorname{E}(X^k) = \frac{\alpha + k - 1}{\alpha + \beta + k - 1}\operatorname{E}(X^{k - 1})$$
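This recursion gives a simple way to compute raw moments without evaluating beta functions; a small Python sketch (the function name is illustrative), starting from $$\operatorname{E}(X^0) = 1$$:

```python
def beta_raw_moment(k, a, b):
    """E[X^k] for X ~ Beta(a, b), built up from E[X^0] = 1 by the recursion above."""
    m = 1.0
    for j in range(1, k + 1):
        m *= (a + j - 1.0) / (a + b + j - 1.0)
    return m

mean = beta_raw_moment(1, 2.0, 2.0)                  # alpha / (alpha + beta) = 0.5
variance = beta_raw_moment(2, 2.0, 2.0) - mean ** 2  # 0.05, matching the variance formula
```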

One can also show that

$$\operatorname{E}(\log X) = \psi(\alpha) - \psi(\alpha + \beta)$$

## Quantities of information

Given two beta distributed random variables, X ~ Beta(α, β) and Y ~ Beta(α', β'), the differential entropy of X is [2]

\begin{align} h(X) &= \ln\mathrm{B}(\alpha,\beta)-(\alpha-1)\psi(\alpha)-(\beta-1)\psi(\beta)+(\alpha+\beta-2)\psi(\alpha+\beta) \end{align}

where $$\psi$$ is the digamma function.

The cross entropy is

$$H(X,Y) = \ln\mathrm{B}(\alpha',\beta')-(\alpha'-1)\psi(\alpha)-(\beta'-1)\psi(\beta)+(\alpha'+\beta'-2)\psi(\alpha+\beta).\,$$

It follows that the Kullback–Leibler divergence between these two beta distributions is

$$D_{\mathrm{KL}}(X,Y) = \ln\frac{\mathrm{B}(\alpha',\beta')} {\mathrm{B}(\alpha,\beta)} - (\alpha'-\alpha)\psi(\alpha) - (\beta'-\beta)\psi(\beta) + (\alpha'-\alpha+\beta'-\beta)\psi(\alpha+\beta).$$
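These closed forms can be evaluated with nothing beyond the standard library if the digamma function is implemented directly; the sketch below (function names are illustrative) computes $$\psi$$ via the recurrence $$\psi(x) = \psi(x+1) - 1/x$$ and an asymptotic series, then evaluates the Kullback–Leibler divergence:

```python
from math import lgamma, log

def digamma(x):
    """psi(x) for x > 0: shift the argument above 6 with psi(x) = psi(x+1) - 1/x,
    then apply the asymptotic series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + ..."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def log_beta(a, b):
    """ln B(a, b), computed stably via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def kl_beta(a, b, a2, b2):
    """D_KL(Beta(a, b) || Beta(a2, b2)) from the closed form above."""
    return (log_beta(a2, b2) - log_beta(a, b)
            - (a2 - a) * digamma(a)
            - (b2 - b) * digamma(b)
            + (a2 - a + b2 - b) * digamma(a + b))
```

As expected of a KL divergence, the result is 0 when the two distributions coincide and is not symmetric in its arguments.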

## Shapes

The beta density function can take on different shapes depending on the values of the two parameters:

- $$\alpha = 1,\ \beta = 1$$ is the uniform [0,1] distribution
- $$\alpha < 1,\ \beta < 1$$ is U-shaped (blue plot)
  - $$\alpha = \tfrac{1}{2},\ \beta = \tfrac{1}{2}$$ is the arcsine distribution
- $$\alpha < 1,\ \beta \geq 1$$ or $$\alpha = 1,\ \beta > 1$$ is strictly decreasing (red plot)
  - $$\alpha = 1,\ \beta > 2$$ is strictly convex
  - $$\alpha = 1,\ \beta = 2$$ is a straight line
  - $$\alpha = 1,\ 1 < \beta < 2$$ is strictly concave
- $$\alpha = 1,\ \beta < 1$$ or $$\alpha > 1,\ \beta \leq 1$$ is strictly increasing (green plot)
  - $$\alpha > 2,\ \beta = 1$$ is strictly convex
  - $$\alpha = 2,\ \beta = 1$$ is a straight line
  - $$1 < \alpha < 2,\ \beta = 1$$ is strictly concave
- $$\alpha > 1,\ \beta > 1$$ is unimodal (magenta & cyan plots)

Moreover, if $$\alpha = \beta$$ then the density function is symmetric about 1/2 (blue & teal plots).

## Parameter estimation

Let

$$\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i$$

be the sample mean and

$$v = \frac{1}{N-1}\sum_{i=1}^N (x_i - \bar{x})^2$$

be the sample variance. The method-of-moments estimates of the parameters are

$$\hat{\alpha} = \bar{x} \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right),$$

$$\hat{\beta} = (1-\bar{x}) \left(\frac{\bar{x} (1 - \bar{x})}{v} - 1 \right).$$

When the distribution is required over an interval other than [0, 1], say $$\scriptstyle [\ell,h]$$ , then replace $$\bar{x}$$ with $$\frac{\bar{x}-\ell}{h-\ell}$$ , and $$\ v$$ with $$\frac{v}{(h-\ell)^2}$$ in the above equations.[3][4]

There is no closed form for the maximum likelihood estimates of the parameters; they must be computed numerically.

## Generating beta-distributed random variates

If X and Y are independent, with $$X \sim \Gamma(\alpha, \theta)$$ and $$Y \sim \Gamma(\beta, \theta)$$, then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\alpha, \beta)$$, so one algorithm for generating beta variates is to generate $$\tfrac{X}{X+Y}$$, where X is a gamma variate with parameters $$(\alpha, 1)$$ and Y is an independent gamma variate with parameters $$(\beta, 1)$$.[5]

Also, the kth order statistic of n uniformly distributed variates is $${\rm Beta}(k, n+1-k)$$, so an alternative if $$\alpha$$ and $$\beta$$ are small integers is to generate $$\alpha + \beta - 1$$ uniform variates and take the $$\alpha$$-th smallest.[6]
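Both recipes can be sketched with Python's standard library, whose `random.gammavariate` supplies the needed gamma variates (function names are illustrative):

```python
import random

def beta_via_gamma(a, b):
    """Draw from Beta(a, b) as X/(X+Y) with X ~ Gamma(a, 1), Y ~ Gamma(b, 1) independent."""
    x = random.gammavariate(a, 1.0)
    y = random.gammavariate(b, 1.0)
    return x / (x + y)

def beta_via_order_statistic(a, b):
    """For small integer a, b: the a-th smallest of a + b - 1 uniforms is Beta(a, b)."""
    u = sorted(random.random() for _ in range(a + b - 1))
    return u[a - 1]
```

A quick check: both samplers should give an empirical mean near $$\alpha/(\alpha+\beta)$$.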
## Related distributions

### Transformations

- If $$X \sim {\rm Beta}(a, b)$$, then $$1-X \sim {\rm Beta}(b, a)$$.
- If $$X \sim {\rm Beta}(a,b)$$, then $$\tfrac{X}{1-X} \sim {\rm BetaPrime}(a,b)$$, the beta prime distribution, also called the "beta distribution of the second kind".
- If $$X \sim {\rm Beta}(\tfrac{n}{2}, \tfrac{m}{2})$$, then $$\tfrac{mX}{n(1-X)} \sim F(n,m)$$ (assuming n > 0 and m > 0).
- If $$X \sim {\rm Beta}\left(1+\lambda\tfrac{c-min}{max-min},\ 1+\lambda\tfrac{max-c}{max-min}\right)$$, then $$min+X(max-min) \sim {\rm PERT}(min,max,c,\lambda)$$, where PERT denotes a distribution used in PERT analysis. Usually $$\lambda=4$$, which approximates the shape of a normal distribution; this parametrization gives a better fit than the classical one[7].
- If $$X \sim {\rm Beta}(1, \beta)$$, then X has a Kumaraswamy distribution with parameters $$(1,\beta)$$.
- If $$X \sim {\rm Beta}(\alpha, 1)$$, then X has a Kumaraswamy distribution with parameters $$(\alpha,1)$$.
- If $$X \sim {\rm Beta}(\alpha, 1)$$, then $$-\ln(X) \sim {\rm Exponential}(\alpha)$$.

### Special and limiting cases

- $${\rm Beta}(1, 1) \sim {\rm U}(0,1)$$, the standard uniform distribution.
- If $$X \sim {\rm Beta}(\tfrac{3}{2}, \tfrac{3}{2})$$ and $$r>0$$, then $$2rX-r \sim$$ Wigner semicircle distribution.
- $${\rm Beta}(\tfrac{1}{2},\tfrac{1}{2})$$ is the arcsine distribution, and is the Jeffreys prior for a proportion.
- $$\lim_{n \to \infty} n{\rm Beta}(1,n) = {\rm Exp}(1)$$, the exponential distribution.
- $$\lim_{n \to \infty} n{\rm Beta}(k,n) = {\rm Gamma}(k,1)$$, the gamma distribution.

### Derived from other distributions

- The kth order statistic of a sample of size n from the uniform distribution is a beta random variable, $$U_{(k)} \sim B(k,n+1-k)$$.[6]
- If $$X \sim \Gamma(\alpha, \theta)$$ and $$Y \sim \Gamma(\beta, \theta)$$ are independent, then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\alpha, \beta)$$.
- If $$X \sim \chi^2(\alpha)$$ and $$Y \sim \chi^2(\beta)$$ are independent, then $$\tfrac{X}{X+Y} \sim {\rm Beta}(\tfrac{\alpha}{2}, \tfrac{\beta}{2})$$.
- If $$X \sim \operatorname{Unif}(0,1)$$ and $$\alpha>0$$, then $$X^{1/\alpha}\sim\operatorname{Beta}(\alpha, 1)$$.
- If $$X \sim {\rm U}(0, 1]$$, then $$X^2 \sim {\rm Beta}(\tfrac{1}{2},1)$$, a special case of the beta distribution called the power-function distribution.

### Combination with other distributions

- If $$X \sim {\rm Beta}(\alpha, \beta)$$ and $$Y \sim F(2\beta,2\alpha)$$, then $$\Pr(X \leq \tfrac{\alpha}{\alpha+\beta x}) = \Pr(Y \geq x)$$ for all $$x > 0$$.

### Compounding with other distributions

- If $$p \sim \mathrm{Beta}(\alpha,\beta)$$ and $$X \sim \operatorname{Bin}(k,p)$$, then X has a beta-binomial distribution.
- If $$p \sim \mathrm{Beta}(\alpha,\beta)$$ and $$X \sim \operatorname{NB}(r,p)$$, then X has a beta negative binomial distribution.

### Generalisations

- The Dirichlet distribution is a multivariate generalization of the beta distribution. Univariate marginals of the Dirichlet distribution have a beta distribution.
- The beta distribution is a special case of the Pearson type I distribution.
- $${\rm Beta}(\alpha, \beta) = \lim_{\delta \to 0}{\rm NonCentralBeta}(\alpha,\beta,\delta)$$, where NonCentralBeta is the noncentral beta distribution.

### Other

- Binomial opinions in subjective logic are equivalent to beta distributions.

## Applications

### Order statistics

Main article: Order statistic

The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution.[6] This result is summarized as:

$$U_{(k)} \sim B(k,n+1-k).$$

From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived.[6]
### Rule of succession

Main article: Rule of succession

A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with success probability p, the estimate of p should be $$\frac{s+1}{n+2}$$. This estimate is the expected value of the posterior distribution over p, namely Beta(s + 1, n − s + 1), which is given by Bayes' rule if one assumes a uniform prior over p (i.e., Beta(1, 1)) and then observes that p generated s successes in n trials.
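The rule amounts to one line of arithmetic; a sketch (the function name is illustrative):

```python
def rule_of_succession(s, n):
    """Laplace's estimate (s + 1)/(n + 2): the mean of the posterior Beta(s + 1, n - s + 1)."""
    return (s + 1) / (n + 2)

print(rule_of_succession(9, 10))   # 10/12, i.e. about 0.833 rather than the raw frequency 0.9
```

Note that with no data at all (s = 0, n = 0) the rule returns 1/2, the mean of the uniform prior.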
### Bayesian inference

Main article: Bayesian inference

Beta distributions are used extensively in Bayesian inference, since beta distributions provide a family of conjugate prior distributions for binomial (including Bernoulli) and geometric distributions. The Beta(0,0) distribution is an improper prior and is sometimes used to represent ignorance of parameter values.

The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of an unknown probability value — typically, as the prior distribution over a probability parameter, such as the probability of success in a binomial distribution or Bernoulli distribution. In fact, the beta distribution is the conjugate prior of the binomial distribution and Bernoulli distribution.

The beta distribution is the special case of the Dirichlet distribution with only two parameters, and the beta is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial distribution and categorical distribution.

In Bayesian inference, the beta distribution can be derived as the posterior probability of the parameter p of a binomial distribution after observing α − 1 successes (with probability p of success) and β − 1 failures (with probability 1 − p of failure). Another way to express this is that placing a prior distribution of Beta(α, β) on the parameter p of a binomial distribution is equivalent to adding α pseudo-observations of "success" and β pseudo-observations of "failure" to the actual number of successes and failures observed, then estimating the parameter p by the proportion of successes over both real and pseudo-observations. If α and β are greater than 0, this has the effect of smoothing the distribution of the parameters by ensuring that some positive probability mass is assigned to all parameters even when no actual observations corresponding to those parameters are observed. Values of α and β less than 1 favor sparsity, i.e. distributions where the parameter p is close to either 0 or 1. In effect, α and β, when operating together, function as a concentration parameter; see that article for more details.
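The pseudo-observation view corresponds to a one-line conjugate update; a minimal sketch (function names are illustrative):

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior plus binomial data gives a Beta posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior point estimate of p: the mean of Beta(alpha, beta)."""
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1); observe 7 successes and 3 failures.
a_post, b_post = beta_binomial_update(1, 1, 7, 3)
print(beta_mean(a_post, b_post))   # 8/12, the smoothed proportion of successes
```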

The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, critical path method (CPM) and other project management / control systems to describe the time to completion of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution:

\begin{align} \mu(X) & {} = \frac{a + 4b + c}{6} \\ \sigma(X) & {} = \frac{c-a}{6} \end{align}

where a is the minimum, c is the maximum, and b is the most likely value.
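The shorthand computations are simple enough to sketch directly (illustrative names; as above, a is the minimum, b the most likely value, and c the maximum):

```python
def pert_estimates(a, b, c):
    """PERT shorthand: mean (a + 4b + c)/6 and standard deviation (c - a)/6."""
    return (a + 4 * b + c) / 6, (c - a) / 6

mu, sigma = pert_estimates(2.0, 4.0, 8.0)   # optimistic 2, most likely 4, pessimistic 8
```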

Using this set of approximations is known as three-point estimation. The approximations are exact only for particular values of α and β, specifically when[8]:

$$\alpha = 3 - \sqrt2 \,$$
$$\beta = 3 + \sqrt2 \,$$

or vice versa.

These are notably poor approximations for most other beta distributions, exhibiting average errors of 40% in the mean and 549% in the variance.[9][10][11]
## Alternative parameterizations

### Mean and sample size

The beta distribution may also be reparameterized in terms of its mean μ (0 ≤ μ ≤ 1) and sample size ν = α + β (ν > 0). This is useful in Bayesian parameter estimation if one wants to place an unbiased (uniform) prior over the mean. For example, one may administer a test to a number of individuals. If it is assumed that each person's score (0 ≤ θ ≤ 1) is drawn from a population-level Beta distribution, then an important statistic is the mean of this population-level distribution. The mean and sample size parameters are related to the shape parameters α and β via[12]

\begin{align} \alpha & {} = \mu \nu ,\\ \beta & {} = (1 - \mu) \nu . \end{align}

Under this parameterization, one can place a uniform prior over the mean, and a vague prior (such as an exponential or gamma distribution) over the positive reals for the sample size.
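The conversion between the two parameterizations is a pair of one-line formulas; a sketch (function names are illustrative):

```python
def shape_from_mean_size(mu, nu):
    """(mu, nu) -> (alpha, beta) via alpha = mu * nu, beta = (1 - mu) * nu."""
    return mu * nu, (1.0 - mu) * nu

def mean_size_from_shape(alpha, beta):
    """(alpha, beta) -> (mu, nu), the inverse map."""
    nu = alpha + beta
    return alpha / nu, nu
```

The two maps invert each other exactly: for example, mean 0.25 with sample size 8 corresponds to Beta(2, 6).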

The Balding–Nichols model is a similar two-parameter reparameterization of the beta distribution.

### Four parameters

A beta distribution with the two shape parameters α and β is supported on the range [0,1]. It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum and maximum values of the distribution.[13]

The probability density function of the four-parameter beta distribution is given by

$$f(y; \alpha, \beta, a, b) = \frac{1}{B(\alpha, \beta)} \frac{ (y-a)^{\alpha-1} (b-y)^{\beta-1} }{(b-a)^{\alpha+\beta-1}} .$$

The mean, mode, and variance of the four-parameter beta distribution are:

$$\text{mean} = \frac{\alpha b+ \beta a}{\alpha+\beta}$$

$$\text{mode} = \frac{(\alpha-1) b+(\beta-1) a}{\alpha+\beta-2} \qquad \text{for } \alpha>1,\ \beta>1$$

$$\text{variance} = \frac{\alpha\beta (b-a)^2}{(\alpha+\beta)^2(\alpha+\beta+1)}$$

The standard form can be obtained by letting

$$x = \frac{y-a}{b-a}.$$
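The four-parameter density follows from the standard one via this substitution; a sketch using only the standard library (the function name is illustrative, with α, β the shape parameters and a, b the support endpoints as above):

```python
from math import gamma

def beta4_pdf(y, alpha, beta, a, b):
    """Density of the four-parameter beta distribution on [a, b] at y,
    obtained from the standard density via x = (y - a)/(b - a)."""
    norm = gamma(alpha + beta) / (gamma(alpha) * gamma(beta))  # 1 / B(alpha, beta)
    return (norm * (y - a) ** (alpha - 1) * (b - y) ** (beta - 1)
            / (b - a) ** (alpha + beta - 1))
```

With a = 0 and b = 1 this reduces to the standard beta density; stretching the support by a factor of ten divides the density by ten.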

## References

1. Johnson, Norman L., Samuel Kotz, and N. Balakrishnan (1995). Continuous Univariate Distributions, Vol. 2. Wiley. ISBN 978-0-471-58494-0.
2. Verdugo Lazo, A. C. G. and Rathie, P. N. (1978). "On the entropy of continuous probability distributions". IEEE Trans. Inf. Theory, IT-24: 120–122.
3. Engineering Statistics Handbook.
4. Brighton Webs Ltd. Data & Analysis Services for Industry & Education.
5. van der Waerden, B. L. Mathematical Statistics. Springer. ISBN 978-3-540-04507-6.
6. David, H. A. and Nagaraja, H. N. (2003). Order Statistics (3rd edition). Wiley, New Jersey. p. 458. ISBN 0-471-38926-9.
7. Herrerías-Velasco, José Manuel, Herrerías-Pleguezuelo, Rafael, and van Dorp, Johan René (2011). "Revisiting the PERT mean and variance". European Journal of Operational Research 210: 448–451.
8. Grubbs, Frank E. (1962). "Attempts to Validate Certain PERT Statistics or 'Picking on PERT'". Operations Research 10(6): 912–915.
9. Keefer, Donald L. and Verdini, William A. (1993). "Better Estimation of PERT Activity Time Parameters". Management Science 39(9): 1086–1091.
10. Keefer, Donald L. and Bodily, Samuel E. (1983). "Three-point Approximations for Continuous Random Variables". Management Science 29(5): 595–609.
11. DRMI Newsletter, Issue 12, April 8, 2005.
12. Kruschke, J. (2011). Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press / Elsevier. p. 83.
13. Beta4 distribution.