Sylvester's determinant theorem

In matrix theory, Sylvester's determinant theorem is a theorem useful for evaluating certain types of determinants. It is named after James Joseph Sylvester, who stated this theorem without proof in 1851.[1]

The theorem states that if A, B are matrices of size p × n and n × p respectively, then

\( \det(I_p + AB) = \det(I_n + BA),\ \)

where \( I_a \) denotes the identity matrix of order a.[2][3]

This can be seen for square invertible A by conjugating \( I + AB \) by \( A^{-1} \), since \( A^{-1}(I + AB)A = I + BA \); the identity then extends to arbitrary square matrices by density of the invertible matrices, and to arbitrary rectangular matrices by padding A and B with zero rows or columns as necessary.
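
The identity is easy to check numerically. The following sketch (assuming NumPy; the sizes and random entries are arbitrary illustrative choices, not from the source) compares the two determinants for a rectangular pair:

import numpy as np

# Numerical sanity check of det(I_p + AB) = det(I_n + BA) for a random
# rectangular pair. The sizes p, n and the random entries are arbitrary.
rng = np.random.default_rng(0)
p, n = 3, 5
A = rng.standard_normal((p, n))          # p x n
B = rng.standard_normal((n, p))          # n x p

lhs = np.linalg.det(np.eye(p) + A @ B)   # p x p determinant
rhs = np.linalg.det(np.eye(n) + B @ A)   # n x n determinant
assert np.isclose(lhs, rhs)
print(lhs, rhs)                          # the two values agree up to rounding error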

It is closely related to the matrix determinant lemma and its generalization. It is the determinant analogue of the Woodbury matrix identity for matrix inverses.
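
For example (a standard special case, restated here for illustration), taking p = 1 with A a row vector \( v^T \) and B a column vector \( u \) gives

\( \det(I_n + u v^T) = 1 + v^T u, \)

which is the matrix determinant lemma for a rank-one update of the identity matrix.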


Proof

The theorem may be proven as follows.[4] Let M be the matrix comprising the four blocks \( -A \), \( B \), \( I_n \), and \( I_p \):

\( M = \begin{pmatrix}I_p & -A \\ B & I_n \end{pmatrix} . \)

Block LU decomposition of M yields

\( M = \begin{pmatrix}I_p & 0 \\ B & I_n \end{pmatrix} \begin{pmatrix}I_p & -A \\ 0 & I_n + B A \end{pmatrix} \)

from which

\( \det(M) = \det(I_n + B A) \)

follows. Decomposing M into an upper and a lower block triangular matrix instead,

\( M = \begin{pmatrix}I_p + A B & -A \\ 0 & I_n \end{pmatrix} \begin{pmatrix}I_p & 0 \\ B & I_n \end{pmatrix}, \)

yields

\( \det(M) = \det(I_p + A B). \)

This proves

\( \det(I_n + B A) = \det(I_p + A B). \)
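
Both factorizations can be checked directly. The sketch below (assuming NumPy; the block sizes are arbitrary) builds M, confirms that each factorization reproduces it, and compares the resulting determinants:

import numpy as np

# Verify the two block factorizations of M = [[I_p, -A], [B, I_n]] used in
# the proof, and that det(M) equals both det(I_n + BA) and det(I_p + AB).
rng = np.random.default_rng(1)
p, n = 2, 4
A = rng.standard_normal((p, n))
B = rng.standard_normal((n, p))
Ip, In = np.eye(p), np.eye(n)
Zpn, Znp = np.zeros((p, n)), np.zeros((n, p))

M = np.block([[Ip, -A], [B, In]])

# Lower x upper factorization: the upper factor is block triangular with
# diagonal blocks I_p and I_n + BA, so det(M) = det(I_n + BA).
lower = np.block([[Ip, Zpn], [B, In]])
upper = np.block([[Ip, -A], [Znp, In + B @ A]])
assert np.allclose(lower @ upper, M)

# Upper x lower factorization: the upper factor has diagonal blocks
# I_p + AB and I_n, so det(M) = det(I_p + AB).
upper2 = np.block([[Ip + A @ B, -A], [Znp, In]])
lower2 = np.block([[Ip, Zpn], [B, In]])
assert np.allclose(upper2 @ lower2, M)

assert np.isclose(np.linalg.det(M), np.linalg.det(In + B @ A))
assert np.isclose(np.linalg.det(M), np.linalg.det(Ip + A @ B))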

Applications

This theorem is useful in developing a Bayes estimator for multivariate Gaussian distributions.

The identity also finds applications in random matrix theory by relating determinants of large matrices to determinants of smaller ones.[5]
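
As a rough illustration of this size reduction (a sketch assuming NumPy; the sizes are hypothetical), the identity lets an n × n determinant of the form \( \det(I_n + BA) \) be computed from a much smaller p × p determinant when p ≪ n:

import numpy as np

# When p is much smaller than n, det(I_n + BA) (an n x n determinant) can be
# evaluated instead as the p x p determinant det(I_p + AB).
rng = np.random.default_rng(2)
p, n = 2, 1000
A = rng.standard_normal((p, n))
B = rng.standard_normal((n, p))

small = np.linalg.det(np.eye(p) + A @ B)   # forming AB costs O(p^2 n); tiny determinant
large = np.linalg.det(np.eye(n) + B @ A)   # O(n^3) determinant of a large matrix
assert np.isclose(small, large)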


References

Sylvester, James Joseph (1851). "On the relation between the minor determinants of linearly equivalent quadratic functions". Philosophical Magazine 1: 295–305. Cited in Akritas, A. G.; Akritas, E. K.; Malaschonok, G. I. (1996). "Various proofs of Sylvester's (determinant) identity". Mathematics and Computers in Simulation 42 (4–6): 585. doi:10.1016/S0378-4754(96)00035-3.
Harville, David A. (2008). Matrix Algebra from a Statistician's Perspective. Berlin: Springer. ISBN 0-387-78356-3. p. 416.
Weisstein, Eric W. "Sylvester's Determinant Identity". MathWorld – A Wolfram Web Resource. Retrieved 2012-03-03.
Pozrikidis, C. (2014). An Introduction to Grids, Graphs, and Networks. Oxford University Press. p. 271. ISBN 9780199996735.
Tao, Terence (2010). "The mesoscopic structure of GUE eigenvalues". http://terrytao.wordpress.com/2010/12/17/the-mesoscopic-structure-of-gue-eigenvalues/

