In statistical mechanics, Bose–Einstein statistics (or more colloquially B–E statistics) determines the statistical distribution of identical indistinguishable bosons over the energy states in thermal equilibrium. It is named after Satyendra Nath Bose and Albert Einstein.

Concept

At low temperatures, bosons behave differently from fermions (which obey Fermi–Dirac statistics) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to a special state of matter, the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies \( \frac{N}{V} \ge n_q \), where N is the number of particles, V is the volume, and nq is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are touching but not overlapping. Fermi–Dirac statistics apply to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics apply to bosons. Because the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein statistics reduce to Maxwell–Boltzmann statistics at high temperature or at low concentration.
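The quantum-concentration criterion above can be illustrated numerically. The following Python sketch (an illustrative aside, not part of the original text; the rubidium-87 mass and the two temperatures are assumed example values) computes the thermal de Broglie wavelength \( \lambda = h/\sqrt{2\pi m k T} \) and the quantum concentration \( n_q = \lambda^{-3} \):

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

def thermal_de_broglie_wavelength(m, T):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*k*T)."""
    return h / math.sqrt(2 * math.pi * m * k * T)

def quantum_concentration(m, T):
    """n_q = 1 / lambda^3: one particle per cube of side lambda."""
    return thermal_de_broglie_wavelength(m, T) ** -3

# Assumed example: a rubidium-87 atom (m ~ 1.44e-25 kg)
m_rb = 1.443e-25
for T in (300.0, 1e-6):           # room temperature vs ~1 microkelvin
    print(f"T = {T:g} K: n_q = {quantum_concentration(m_rb, T):.3e} /m^3")
```

At room temperature the computed n_q vastly exceeds typical dilute-gas densities, so such a gas sits in the classical limit; at microkelvin temperatures n_q drops enough that achievable densities can satisfy N/V ≥ n_q, which is why Bose–Einstein condensation requires ultracold gases.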

Bosons, unlike fermions, are not subject to the Pauli exclusion principle: an unlimited number of particles may occupy the same state at the same time. This explains why, at low temperatures, bosons can behave very differently from fermions; all the particles will tend to congregate at the same lowest-energy state, forming what is known as a Bose–Einstein condensate.

B–E statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25.

The expected number of particles in an energy state i for B–E statistics is

\( n_i = \frac{g_i}{e^{(\varepsilon_i-\mu)/kT}-1} \)

with \( \varepsilon_i > \mu \), and where \( n_i \) is the number of particles in state i, \( g_i \) is the degeneracy of state i, \( \varepsilon_i \) is the energy of the ith state, \( \mu \) is the chemical potential, k is the Boltzmann constant, and T is the absolute temperature.

This reduces to the Rayleigh–Jeans law for \( kT \gg \varepsilon_i-\mu \), namely \( n_i \approx \frac{g_i kT}{\varepsilon_i-\mu} \).
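The distribution and its high-temperature limit can be compared directly. The Python sketch below is an illustrative aside, not part of the original text; the energy, chemical potential, and temperature values are arbitrary assumed units:

```python
import math

def bose_einstein(eps, mu, kT, g=1.0):
    """Expected occupation n_i = g / (exp((eps - mu)/kT) - 1), valid for eps > mu."""
    return g / math.expm1((eps - mu) / kT)   # expm1(x) = exp(x) - 1, accurate for small x

# For kT >> eps - mu the occupation approaches the Rayleigh-Jeans form g*kT/(eps - mu)
eps, mu, kT = 1.0, 0.0, 100.0          # arbitrary units with eps - mu = 1
exact = bose_einstein(eps, mu, kT)
rayleigh_jeans = kT / (eps - mu)
print(exact, rayleigh_jeans)           # the two agree to better than 1%
```

At the opposite extreme, \( kT \ll \varepsilon_i-\mu \), the same function falls off exponentially, recovering the Maxwell–Boltzmann tail.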
History

While presenting a lecture at the University of Dhaka on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose, a Bengali scientist, intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experiment. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with experiment (he later adapted this lecture into a short article called Planck's Law and the Hypothesis of Light Quanta).[1][2] The error was a simple mistake, similar to arguing that flipping two fair coins will produce two heads one-third of the time, that would appear obviously wrong to anyone with a basic understanding of statistics. However, since the results it predicted agreed with experiment, Bose realized it might not be a mistake at all. He was the first to take the position that the Maxwell–Boltzmann distribution does not hold for microscopic particles, for which fluctuations due to Heisenberg's uncertainty principle are significant. Thus he stressed the probability of finding particles in cells of phase space, each state having volume h³, and discarded the distinct positions and momenta of the particles.

Physics journals refused to publish Bose's paper. Various editors ignored his findings, contending that he had presented them with a simple mistake. Discouraged, he wrote to Albert Einstein, who immediately agreed with him. His theory finally achieved respect when Einstein sent his own paper in support of Bose's to Zeitschrift für Physik, asking that they be published together. This was done in 1924. Bose had earlier translated Einstein's theory of General Relativity from German to English.

The reason Bose's "mistake" produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal energy as being two distinct identifiable photons. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third (tail-head = head-tail). Bose's "error" is now called Bose–Einstein statistics.
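The two counting rules can be contrasted in a few lines of Python (an illustrative aside, not part of the original text): enumerating ordered outcomes of two distinguishable coins gives a probability of 1/4 for two heads, while weighting each unordered multiset of outcomes equally, as Bose's counting does for photons, gives 1/3:

```python
from itertools import product, combinations_with_replacement

# Distinguishable coins: each ordered sequence is a distinct, equally likely microstate.
ordered = list(product("HT", repeat=2))                   # HH, HT, TH, TT -> 4 states
p_two_heads_classical = ordered.count(("H", "H")) / len(ordered)

# "Bosonic" coins: only the multiset of outcomes counts, and each
# multiset is taken to be equally likely (Bose's counting).
multisets = list(combinations_with_replacement("HT", 2))  # HH, HT, TT -> 3 states
p_two_heads_bose = multisets.count(("H", "H")) / len(multisets)

print(p_two_heads_classical, p_two_heads_bose)            # 0.25 vs 1/3
```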

Einstein adopted the idea and extended it to atoms. This led to the prediction of the phenomenon now known as the Bose–Einstein condensate, a dense collection of bosons (particles with integer spin, named after Bose), which was demonstrated experimentally in 1995.
A derivation of the Bose–Einstein distribution

Suppose we have a number of energy levels, labeled by index \( \displaystyle i \), each level having energy \( \displaystyle \varepsilon_i \) and containing a total of \( \displaystyle n_i \) particles. Suppose each level contains \( \displaystyle g_i \) distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy. The value of \( \displaystyle g_i \) associated with level \( \displaystyle i \) is called the "degeneracy" of that energy level. Any number of bosons can occupy the same sublevel.

Let \( \displaystyle w(n,g) \) be the number of ways of distributing \( \displaystyle n \) particles among the \( \displaystyle g \) sublevels of an energy level. There is only one way of distributing \( \displaystyle n \) particles in one sublevel, therefore \( \displaystyle w(n,1)=1 \). It is easy to see that there are \( \displaystyle (n+1) \) ways of distributing \( \displaystyle n \) particles in two sublevels, which we will write as:

\( w(n,2)=\frac{(n+1)!}{n!1!}. \)

With a little thought (see Notes below) it can be seen that the number of ways of distributing \( \displaystyle n \) particles in three sublevels is

\( w(n,3) = w(n,2) + w(n-1,2) + \cdots + w(1,2) + w(0,2) \)

so that

\( w(n,3)=\sum_{k=0}^n w(n-k,2) = \sum_{k=0}^n\frac{(n-k+1)!}{(n-k)!1!}=\frac{(n+2)!}{n!2!} \)

where we have used the following theorem involving binomial coefficients:

\( \sum_{k=0}^n\frac{(k+a)!}{k!a!}=\frac{(n+a+1)!}{n!(a+1)!}. \)

Continuing this process, we can see that \( \displaystyle w(n,g) \) is just a binomial coefficient (See Notes below)

\( w(n,g)=\frac{(n+g-1)!}{n!(g-1)!}. \)
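This closed form can be checked by brute force. The sketch below (an illustrative aside, not part of the original text) counts all occupation-number tuples \( (n_1, \dots, n_g) \) summing to n and compares the count with the binomial-coefficient formula:

```python
from itertools import product
from math import comb

def w_closed_form(n, g):
    """w(n, g) = (n + g - 1)! / (n! (g-1)!), as a binomial coefficient."""
    return comb(n + g - 1, n)

def w_brute_force(n, g):
    """Count occupation-number tuples (n_1, ..., n_g) with n_1 + ... + n_g = n."""
    return sum(1 for occ in product(range(n + 1), repeat=g) if sum(occ) == n)

for n in range(6):
    for g in range(1, 5):
        assert w_closed_form(n, g) == w_brute_force(n, g)
print(w_closed_form(2, 3))   # 6: two particles in three sublevels
```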

For example, the population numbers for two particles in three sublevels are 200, 110, 101, 020, 011, or 002, for a total of six, which equals 4!/(2!2!). The number of ways that a set of occupation numbers \( n_i \) can be realized is the product of the ways that each individual energy level can be populated:

\( W = \prod_i w(n_i,g_i) = \prod_i \frac{(n_i+g_i-1)!}{n_i!(g_i-1)!} \approx\prod_i \frac{(n_i+g_i)!}{n_i!(g_i-1)!} \)

where the approximation assumes that \( n_i \gg 1 \).

Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of \( \displaystyle n_i \) for which W is maximised, subject to the constraint that there be a fixed total number of particles and a fixed total energy. The maxima of \( \displaystyle W \) and \( \displaystyle \ln(W) \) occur at the same values of \( \displaystyle n_i \) and, since it is easier to accomplish mathematically, we will maximise the latter function instead. We constrain our solution using Lagrange multipliers, forming the function:

\( f(n_i)=\ln(W)+\alpha(N-\sum n_i)+\beta(E-\sum n_i \varepsilon_i) \)

Using the \( n_i \gg 1 \) approximation and using Stirling's approximation for the factorials \( \left(x!\approx x^x\,e^{-x}\,\sqrt{2\pi x}\right) \) gives

\( f(n_i)=\sum_i (n_i + g_i) \ln(n_i + g_i) - n_i \ln(n_i) +\alpha\left(N-\sum n_i\right)+\beta\left(E-\sum n_i \varepsilon_i\right)+K. \)

where K is the sum of a number of terms which are not functions of the \( n_i \). Taking the derivative with respect to \( \displaystyle n_i \), setting the result to zero, and solving for \( \displaystyle n_i \) yields the Bose–Einstein population numbers:

\( n_i = \frac{g_i}{e^{\alpha+\beta \varepsilon_i}-1}. \)

By a process similar to that outlined in the Maxwell-Boltzmann statistics article, it can be seen that:

\( d\ln W=\alpha\,dN+\beta\,dE \)

which, using Boltzmann's famous relationship \( S=k\,\ln W \), becomes a statement of the second law of thermodynamics at constant volume; it follows that \( \beta = \frac{1}{kT} \) and \( \alpha = - \frac{\mu}{kT} \), where S is the entropy, \( \mu \) is the chemical potential, k is Boltzmann's constant, and T is the temperature, so that finally:

\( n_i = \frac{g_i}{e^{(\varepsilon_i-\mu)/kT}-1}. \)

Note that the above formula is sometimes written:

\( n_i = \frac{g_i}{e^{\varepsilon_i/kT}/z-1}, \)

where \( \displaystyle z=\exp(\mu/kT) \) is the absolute activity.
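The equivalence of the two forms is easy to verify numerically. In the Python sketch below (an illustrative aside, not part of the original text; the numeric values are arbitrary assumed inputs), both expressions give the same occupation number:

```python
import math

def n_be(eps, mu, kT, g=1.0):
    """n_i = g / (exp((eps - mu)/kT) - 1)."""
    return g / math.expm1((eps - mu) / kT)

def n_be_activity(eps, mu, kT, g=1.0):
    """The same occupation written with the absolute activity z = exp(mu/kT)."""
    z = math.exp(mu / kT)
    return g / (math.exp(eps / kT) / z - 1.0)

eps, mu, kT = 2.0, -0.5, 1.0      # arbitrary assumed units with eps > mu
print(n_be(eps, mu, kT), n_be_activity(eps, mu, kT))   # identical up to rounding
```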
Notes

A much simpler way to think of the Bose–Einstein distribution function is to consider that the n particles are denoted by identical balls and the g shells are marked by g − 1 line partitions. The permutations of these n balls and g − 1 partitions then give the different ways of arranging bosons among the sublevels.

For example, with n = 3 particles and g = 3 shells, so that g − 1 = 2, the arrangements may look like

|..|. or ||... or |.|.. etc.

Hence the required count is the number of distinct permutations of n + (g − 1) objects, of which n are identical balls and g − 1 are identical partitions:

\( \frac{(n+g-1)!}{n!\,(g-1)!} \)

Alternatively:

The purpose of these notes is to clarify some aspects of the derivation of the Bose–Einstein (B–E) distribution for beginners. The enumeration of cases (or ways) in the B–E distribution can be recast as follows. Consider a game of dice throwing in which there are \( \displaystyle n \) dice, each die taking values in the set \( \displaystyle \left\{ 1, \dots, g \right\} \), for \( g \ge 1 \). The constraint of the game is that the value \( \displaystyle m_i \) of die \( \displaystyle i \) has to be greater than or equal to the value \( \displaystyle m_{i-1} \) of die \( \displaystyle (i-1) \), i.e., \( m_i \ge m_{i-1} \). Thus a valid sequence of die throws can be described by an n-tuple \( \displaystyle \left( m_1 , m_2 , \dots , m_n \right) \) such that \( m_i \ge m_{i-1} \). Let \( \displaystyle S(n,g) \) denote the set of these valid n-tuples:

\( S(n,g) = \Big\{ \left( m_1 , m_2 , \dots , m_n \right) \Big| \Big. m_i \ge m_{i-1} , m_i \in \left\{ 1, \dots, g \right\} , \forall i = 1, \dots , n \Big\}. \)
(1)

Then the quantity \( \displaystyle w(n,g) \) (defined above as the number of ways to distribute \( \displaystyle n \) particles among the \( \displaystyle g \) sublevels of an energy level) is the cardinality of \( \displaystyle S(n,g) \), i.e., the number of elements (or valid n-tuples) in \( \displaystyle S(n,g) \). Thus the problem of finding an expression for \( \displaystyle w(n,g) \) becomes the problem of counting the elements in \( \displaystyle S(n,g) \).

Example n = 4, g = 3:

\( S(4,3) = \left\{ \underbrace{(1111), (1112), (1113)}_{(a)}, \underbrace{(1122), (1123), (1133)}_{(b)}, \underbrace{(1222), (1223), (1233), (1333)}_{(c)}, \right. \)

\( \left. \underbrace{(2222), (2223), (2233), (2333), (3333)}_{(d)} \right\} \)

\( \displaystyle w(4,3) = 15 \) (there are \( \displaystyle 15\) elements in \( \displaystyle S(4,3) \) )

Subset \( \displaystyle (a) \) is obtained by fixing all indices \( \displaystyle m_i \) to \( \displaystyle 1 \), except for the last index, \( \displaystyle m_n \), which is incremented from \( \displaystyle 1 \) to \( \displaystyle g=3 \). Subset \( \displaystyle (b) \) is obtained by fixing \( \displaystyle m_1 = m_2 = 1 \), and incrementing \( \displaystyle m_3 \) from \( \displaystyle 2 \) to \( \displaystyle g=3 \). Due to the constraint \( \displaystyle m_i \ge m_{i-1} \) on the indices in \( \displaystyle S(n,g) \), the index \( \displaystyle m_4 \) must automatically take values in \( \displaystyle \left\{ 2, 3 \right\} \). The construction of subsets \( \displaystyle (c) \) and \( \displaystyle (d) \) follows in the same manner.

Each element of \( \displaystyle S(4,3) \) can be thought of as a multiset of cardinality \( \displaystyle n=4 \); the elements of such a multiset are taken from the set \( \displaystyle \left\{ 1, 2, 3 \right\} \) of cardinality \( \displaystyle g=3 \), and the number of such multisets is the multiset coefficient

\( \displaystyle \left\langle \begin{matrix} 3 \\ 4 \end{matrix} \right\rangle = {3 + 4 - 1 \choose 3-1} = {3 + 4 - 1 \choose 4} = \frac {6!} {4! 2!} = 15 \)
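This count can be reproduced directly, since the non-decreasing n-tuples in \( \displaystyle S(n,g) \) are exactly what Python's itertools calls combinations with replacement (an illustrative aside, not part of the original text):

```python
from itertools import combinations_with_replacement
from math import comb

# Non-decreasing 4-tuples over {1, 2, 3} are exactly the elements of S(4, 3).
S_4_3 = list(combinations_with_replacement((1, 2, 3), 4))
print(len(S_4_3))                          # 15 elements, as enumerated above
assert len(S_4_3) == comb(3 + 4 - 1, 4)    # the multiset coefficient <3 over 4>
```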


More generally, each element of \( \displaystyle S(n,g) \) is a multiset of cardinality \( \displaystyle n \) (number of dice) with elements taken from the set \( \displaystyle \left\{ 1, \dots, g \right\} \) of cardinality \( \displaystyle g \) (number of possible values of each die), and the number of such multisets, i.e., \( \displaystyle w(n,g) \) is the multiset coefficient

\( \displaystyle w(n,g) = \left\langle \begin{matrix} g \\ n \end{matrix} \right\rangle = {g + n - 1 \choose g-1} = {g + n - 1 \choose n} = \frac{(g + n - 1)!} {n! (g-1)!} \)
(2)

which is exactly the same as the formula for \( \displaystyle w(n,g) \), as derived above with the aid of a theorem involving binomial coefficients, namely

\( \sum_{k=0}^n\frac{(k+a)!}{k!a!}=\frac{(n+a+1)!}{n!(a+1)!}. \)
(3)

To understand the decomposition

\( \displaystyle w(n,g) = \sum_{k=0}^{n} w(n-k, g-1) = w(n, g-1) + w(n-1, g-1) + \cdots + w(1, g-1) + w(0, g-1) \)
(4)

or for example, \( \displaystyle n=4 \) and \( \displaystyle g=3 \)

\( \displaystyle w(4,3) = w(4,2) + w(3,2) + w(2,2) + w(1,2) + w(0,2), \)

let us rearrange the elements of \( \displaystyle S(4,3) \) as follows

\( S(4,3) = \left\{ \underbrace{ (1111), (1112), (1122), (1222), (2222) }_{(\alpha)}, \underbrace{ (111{\underset{=}{3}}), (112{\underset{=}{3}}), (122{\underset{=}{3}}), (222{\underset{=}{3}}) }_{(\beta)}, \right. \)

\( \left. \underbrace{ (11{\underset{==}{33}}), (12{\underset{==}{33}}), (22{\underset{==}{33}}) }_{(\gamma)}, \underbrace{ (1{\underset{===}{333}}), (2{\underset{===}{333}}) }_{(\delta)} \underbrace{ ({\underset{====}{3333}}) }_{(\omega)} \right\}. \)

Clearly, the subset \( \displaystyle (\alpha) \) of \( \displaystyle S(4,3) \) is the same as the set

\( \displaystyle S(4,2) = \left\{ (1111), (1112), (1122), (1222), (2222) \right\} \).

By deleting the index \( \displaystyle m_4=3 \) (shown with double underline) in the subset \( \displaystyle (\beta) \) of \( \displaystyle S(4,3) \), one obtains the set

\( \displaystyle S(3,2) = \left\{ (111), (112), (122), (222) \right\} .\)

In other words, there is a one-to-one correspondence between the subset \( \displaystyle (\beta) \) of \( \displaystyle S(4,3) \) and the set \( \displaystyle S(3,2) \). We write

\( \displaystyle (\beta) \longleftrightarrow S(3,2) \).

Similarly, it is easy to see that

\( \displaystyle (\gamma) \longleftrightarrow S(2,2) = \left\{ (11), (12), (22) \right\} \)
\( \displaystyle (\delta) \longleftrightarrow S(1,2) = \left\{ (1), (2) \right\} \)
\( \displaystyle (\omega) \longleftrightarrow S(0,2) = \varnothing \) (the empty set).

Thus we can write

\( \displaystyle S(4,3) = \bigcup_{k=0}^{4} S(4-k,2) \)

or more generally,

\( \displaystyle S(n,g) = \bigcup_{k=0}^{n} S(n-k,g-1) ; \)
(5)

and since the sets

\( \displaystyle S(i,g-1) \ , \ {\rm for} \ i = 0, \dots , n \)

are non-intersecting, we thus have

\( \displaystyle w(n,g) = \sum_{k=0}^{n} w(n-k,g-1) , \)
(6)

with the convention that

\( \displaystyle w(0,g) = 1 \ , \forall g \ , {\rm and} \ w(n,1) = 1 \ , \forall n .\)

(7)

Continuing the process, we arrive at the following formula

\( \displaystyle w(n,g) = \sum_{k_1=0}^{n} \sum_{k_2=0}^{n-k_1} w(n - k_1 - k_2, g-2) = \sum_{k_1=0}^{n} \sum_{k_2=0}^{n-k_1} \cdots \sum_{k_{g-1}=0}^{n-\sum_{j=1}^{g-2} k_j} w\left(n - \sum_{i=1}^{g-1} k_i, 1\right). \)

Using the second convention in (7) above, we obtain the formula

\( \displaystyle w(n,g) = \sum_{k_1=0}^{n} \sum_{k_2=0}^{n-k_1} \cdots \sum_{k_{g-1}=0}^{n-\sum_{j=1}^{g-2} k_j} 1, \)
(8)

keeping in mind that, for \( \displaystyle p \) a constant independent of \( \displaystyle k \), we have

\( \displaystyle \sum_{k=0}^{q} p = (q+1)\, p . \)
(9)

It can then be verified that (8) and (2) give the same result for \( \displaystyle w(4,3) \), \( \displaystyle w(3,3) \), \( \displaystyle w(3,2) \), etc.
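As a concrete check (an illustrative aside, not part of the original text), the recursion (6), together with the base case w(n, 1) = 1 noted earlier, can be compared against the closed form (2) in a few lines of Python:

```python
from math import comb

def w_closed(n, g):
    """Closed form (2): w(n, g) = (n + g - 1)! / (n! (g - 1)!)."""
    return comb(n + g - 1, n)

def w_recursive(n, g):
    """Recursion (6): w(n, g) = sum over k of w(n - k, g - 1), with w(n, 1) = 1."""
    if g == 1:
        return 1
    return sum(w_recursive(n - k, g - 1) for k in range(n + 1))

# The recursion and the closed form agree for all small cases.
for n in range(7):
    for g in range(1, 5):
        assert w_closed(n, g) == w_recursive(n, g)
print(w_recursive(4, 3), w_recursive(3, 3), w_recursive(3, 2))   # 15 10 4
```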
Interdisciplinary applications

Viewed as a pure probability distribution, the Bose–Einstein distribution has found application in other fields:

In recent years, Bose–Einstein statistics has also been used as a method for term weighting in information retrieval. The method is one of a collection of DFR ("Divergence From Randomness") models, the basic notion being that Bose–Einstein statistics may be a useful indicator in cases where a particular term and a particular document have a significant relationship that would not have occurred purely by chance. Source code for implementing this model is available from the Terrier project at the University of Glasgow.

Main article: Bose–Einstein condensation (network theory)
The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system’s constituents. Despite their irreversible and nonequilibrium nature, these networks follow Bose statistics and can undergo Bose–Einstein condensation. Addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the “first-mover-advantage,” “fit-get-rich (FGR),” and “winner-takes-all” phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.[3]

See also

Bose–Einstein correlations
Boson
Higgs boson
Maxwell–Boltzmann statistics
Fermi–Dirac statistics
Parastatistics
Planck's law of black body radiation

Notes

^ See p. 14, note 3, of the Ph.D. thesis entitled Bose–Einstein condensation: analysis of problems and rigorous results, presented by Alessandro Michelangeli to the International School for Advanced Studies, Mathematical Physics Sector, October 2007. See: http://digitallibrary.sissa.it/handle/1963/5272?show=full, and download from http://digitallibrary.sissa.it/handle/1963/5272
^ To download the Bose paper, see: http://www.condmat.uni-oldenburg.de/TeachingSP/bose.ps
^ Bianconi, G.; Barabási, A.-L. (2001). "Bose–Einstein Condensation in Complex Networks." Phys. Rev. Lett. 86: 5632–35.
