Formulas for the determinant, characteristic polynomial and inverses


Calculating determinants, characteristic polynomials and inverses is generally expensive, but it doesn't have to be that expensive under specific circumstances.

Important: the normal form for a polynomial when trying to find roots is to divide by the leading coefficient to produce a monic polynomial. Almost ubiquitously, the characteristic polynomial is described as $\det(A - \lambda I_n)$, but in this case, the coefficient of the leading term is $(-1)^n$. By finding $\det(\lambda I_n - A)$ instead, the leading term is $\lambda^n$.
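
For example, taking $n = 3$ and $A = I_3$ makes the sign difference concrete:

$\det(A - \lambda I_3) = (1 - \lambda)^3 = -\lambda^3 + 3\lambda^2 - 3\lambda + 1, \qquad \det(\lambda I_3 - A) = (\lambda - 1)^3 = \lambda^3 - 3\lambda^2 + 3\lambda - 1.$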

  1. general matrices
  2. 2×2 matrices
  3. 3×3 matrices
  4. 4×4 and higher matrices
  5. diagonal, upper-triangular and lower-triangular matrices
  6. tri-diagonal matrices
  7. elementary matrices
  8. isometric (orthogonal and unitary) matrices
  9. block-diagonal, block–upper-triangular and block–lower-triangular matrices
  10. projections, perpendicular components and reflections
  11. Vandermonde matrices
  12. the exponential of a matrix
  13. nilpotent matrices

1. General matrices

Given $A, B : F^n \rightarrow F^n$, then $\det(AB) = \det(A)\det(B)$.

If $A$ is block diagonal with blocks $A_{1,1}$ and $A_{2,2}$ being square matrices with dimensions $n_1 + n_2 = n$, respectively, on the diagonal, then $\det(A) = \det(A_{1,1})\det(A_{2,2})$. This, of course, generalizes if $A$ is comprised of many blocks on its diagonal.

If $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then $\det(A) = \prod_{k=1}^n \lambda_k$ and $\operatorname{tr}(A) = \sum_{k=1}^n \lambda_k$.
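
For example, the upper-triangular matrix $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ has eigenvalues $2$ and $3$, and indeed $\det(A) = 2 \cdot 3 - 1 \cdot 0 = 6 = 2 \cdot 3$ and $\operatorname{tr}(A) = 2 + 3 = 5$.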

2. 2×2 matrices

These formulas are easy to memorize.

2.1 Determinant of a 2×2 matrix

The determinant of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ can be calculated as $ad - bc$.

2.1.1 Self-adjoint (symmetric or conjugate symmetric)

The determinant of the self-adjoint matrix $\begin{pmatrix} a & b \\ \overline{b} & d \end{pmatrix}$ can be calculated as $ad - |b|^2$ where $a$ and $d$ are real. This is, of course, real even if $b$ is complex, because both eigenvalues are real and the determinant is the product of the eigenvalues.

2.1.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The determinant of the skew-adjoint matrix $\begin{pmatrix} aj & b \\ -\overline{b} & dj \end{pmatrix}$ can be calculated as $-ad + |b|^2$ where $a$ and $d$ are real. This, too, is real even if $b$ is complex, because both eigenvalues are imaginary and the determinant is the product of the eigenvalues.

2.2 Characteristic polynomial of a 2×2 matrix

The characteristic polynomial of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $(\lambda - a)(\lambda - d) - bc = \lambda^2 - (a+d)\lambda + ad - bc$, where you will observe that this equals $\lambda^2 - (a+d)\lambda + \det(A)$: the determinant is zero if and only if there is a zero eigenvalue.

2.2.1 Self-adjoint (symmetric or conjugate symmetric)

The characteristic polynomial of the self-adjoint matrix $\begin{pmatrix} a & b \\ \overline{b} & d \end{pmatrix}$, with $a$ and $d$ real, is $(\lambda - a)(\lambda - d) - |b|^2 = \lambda^2 - (a+d)\lambda + ad - |b|^2$. You will observe that all coefficients are real, but also that the discriminant $(a+d)^2 - 4(ad - |b|^2) = a^2 + 2ad + d^2 - 4ad + 4|b|^2$ simplifies to $a^2 - 2ad + d^2 + 4|b|^2 = (a-d)^2 + 4|b|^2$, which is non-negative for all possible values of real $a$ and $d$ and complex $b$, meaning that the quadratic must always have real roots because $\frac{a+d}{2}$ is also real.

2.2.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The characteristic polynomial of the skew-adjoint matrix $\begin{pmatrix} aj & b \\ -\overline{b} & dj \end{pmatrix}$, with $a$ and $d$ real, is $(\lambda - aj)(\lambda - dj) + |b|^2 = \lambda^2 - (a+d)j\lambda - ad + |b|^2$. You will observe that the discriminant $((a+d)j)^2 - 4(|b|^2 - ad) = -a^2 - 2ad - d^2 + 4ad - 4|b|^2$ simplifies to $-a^2 + 2ad - d^2 - 4|b|^2 = -\left((a-d)^2 + 4|b|^2\right)$, which is non-positive for all possible values of real $a$ and $d$ and complex $b$, meaning that the quadratic must always have imaginary roots because $\frac{(a+d)j}{2}$ is also imaginary.

Of course, the characteristic polynomial of a diagonal, upper-triangular or lower-triangular matrix (as above) is just $(\lambda - a)(\lambda - d)$.

2.3 Inverse of a 2×2 matrix

The inverse of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $\frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$, which exists if and only if the determinant $ad - bc$ is non-zero.
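
As a minimal sketch of this formula in code, assuming plain 2-d arrays and an illustrative function name (neither comes from elsewhere in this document):

    // Invert a 2 x 2 matrix using the formula above; returns 'false'
    // (leaving 'E' untouched) when the determinant is zero.
    // Comparing 'det' with exactly 0.0 is a simplification; a practical
    // implementation would compare against a tolerance.
    bool inverse_2x2( double const A[2][2], double E[2][2] ) {
        double const det{ A[0][0]*A[1][1] - A[0][1]*A[1][0] };

        if ( det == 0.0 ) {
            return false;
        }

        E[0][0] =  A[1][1]/det;
        E[0][1] = -A[0][1]/det;
        E[1][0] = -A[1][0]/det;
        E[1][1] =  A[0][0]/det;

        return true;
    }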

2.3.1 Self-adjoint (symmetric or conjugate symmetric)

The inverse of the self-adjoint matrix $\begin{pmatrix} a & b \\ \overline{b} & d \end{pmatrix}$, with $a$ and $d$ real, is $\frac{1}{ad - |b|^2}\begin{pmatrix} d & -b \\ -\overline{b} & a \end{pmatrix}$, which exists if and only if the determinant $ad - |b|^2$ is non-zero.

2.3.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The inverse of the skew-adjoint matrix $\begin{pmatrix} aj & b \\ -\overline{b} & dj \end{pmatrix}$, with $a$ and $d$ real, is $\frac{1}{|b|^2 - ad}\begin{pmatrix} dj & -b \\ \overline{b} & aj \end{pmatrix}$, which exists if and only if the determinant $|b|^2 - ad$ is non-zero.

3. 3×3 matrices

3.1 Determinant of a 3×3 matrix

The determinant of the matrix $\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ can be calculated by writing out the matrix twice, side by side:

$\begin{pmatrix} a & b & c & a & b & c \\ d & e & f & d & e & f \\ g & h & i & g & h & i \end{pmatrix}$

and then multiplying the three diagonals starting with the first $a$, $b$ and $c$ on the left to get $aei$, $bfg$ and $cdh$, then multiplying the three anti-diagonals starting with the second $a$, $b$ and $c$ on the right to get $afh$, $bdi$ and $ceg$, and then adding the first and subtracting the second to get $aei + bfg + cdh - afh - bdi - ceg$.
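
The rule is short enough to write out directly in code; a minimal sketch with an illustrative function name and plain 2-d array storage:

    // The rule of Sarrus for a 3 x 3 matrix: three diagonal products
    // added, three anti-diagonal products subtracted.
    double det_3x3( double const A[3][3] ) {
        return A[0][0]*A[1][1]*A[2][2]    // aei
             + A[0][1]*A[1][2]*A[2][0]    // bfg
             + A[0][2]*A[1][0]*A[2][1]    // cdh
             - A[0][0]*A[1][2]*A[2][1]    // afh
             - A[0][1]*A[1][0]*A[2][2]    // bdi
             - A[0][2]*A[1][1]*A[2][0];   // ceg
    }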

3.1.1 Self-adjoint (symmetric or conjugate symmetric)

The determinant of the self-adjoint matrix $\begin{pmatrix} a & b & c \\ \overline{b} & e & f \\ \overline{c} & \overline{f} & i \end{pmatrix}$, where $a$, $e$ and $i$ are real, simplifies to $aei + 2\operatorname{Re}(b\overline{c}f) - (a|f|^2 + e|c|^2 + i|b|^2)$, which you will note must be real, as the determinant is the product of the eigenvalues, and all eigenvalues of a self-adjoint operator are real.

3.1.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The determinant of the skew-adjoint matrix $\begin{pmatrix} aj & b & c \\ -\overline{b} & ej & f \\ -\overline{c} & -\overline{f} & ij \end{pmatrix}$, where $a$, $e$ and $i$ are real, simplifies to $-aeij - 2\operatorname{Im}(b\overline{c}f)j + (a|f|^2 + e|c|^2 + i|b|^2)j$, which you will note must be imaginary, as the determinant is the product of the eigenvalues, and all eigenvalues of a skew-adjoint operator are imaginary.

3.2 Characteristic polynomial of a 3×3 matrix

The characteristic polynomial of the matrix $\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ is $(\lambda - a)(\lambda - e)(\lambda - i) - fh(\lambda - a) - cg(\lambda - e) - bd(\lambda - i) - bfg - cdh$, where you will observe that this equals $\lambda^3 - (a+e+i)\lambda^2 + (ae + ei + ia - bd - cg - fh)\lambda - \det(A)$: the determinant is zero if and only if there is a zero eigenvalue.

3.2.1 Self-adjoint (symmetric or conjugate symmetric)

The characteristic polynomial of the self-adjoint matrix $\begin{pmatrix} a & b & c \\ \overline{b} & e & f \\ \overline{c} & \overline{f} & i \end{pmatrix}$, where $a$, $e$ and $i$ are real, is $(\lambda - a)(\lambda - e)(\lambda - i) - |f|^2(\lambda - a) - |c|^2(\lambda - e) - |b|^2(\lambda - i) - 2\operatorname{Re}(b\overline{c}f)$, which expands to $\lambda^3 - (a+e+i)\lambda^2 + (ae + ei + ia - |b|^2 - |c|^2 - |f|^2)\lambda - \det(A)$.

3.2.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The characteristic polynomial of the skew-adjoint matrix $\begin{pmatrix} aj & b & c \\ -\overline{b} & ej & f \\ -\overline{c} & -\overline{f} & ij \end{pmatrix}$, where $a$, $e$ and $i$ are real, is $(\lambda - aj)(\lambda - ej)(\lambda - ij) + |f|^2(\lambda - aj) + |c|^2(\lambda - ej) + |b|^2(\lambda - ij) + 2\operatorname{Im}(b\overline{c}f)j$, which expands to $\lambda^3 - (a+e+i)j\lambda^2 + (|b|^2 + |c|^2 + |f|^2 - ae - ei - ia)\lambda - \det(A)$.

4. 4×4 and higher matrices

These are algorithms that must be implemented or executed by hand.

4.1 Determinant of a 4×4 matrix and higher

Never use cofactor expansion to find the determinant of a 4×4 matrix. Instead, note that while Gaussian elimination uses only row swaps, in calculating the determinant you may also use column swaps. You can use this to simplify the process of reducing the matrix to an upper-triangular or lower-triangular matrix, in which case the determinant will be the product of the diagonal entries multiplied by $-1$ raised to the power equal to the number of row and column swaps. Remember to only add multiples of one row onto another or multiples of one column onto another, and do not multiply any row or column by a scalar. There is no situation where cofactor expansion is more efficient.

One may claim that cofactor expansion is more efficient in specific circumstances, such as when one can expand along a row or column containing many zeros, but similarly, we can swap that row or column into the appropriate location when reducing to an upper- or lower-triangular matrix.

For example, to calculate the determinant of the matrix $\begin{pmatrix} 3 & 0 & -1 & 1 & 2 \\ 1 & 0 & 3 & 1 & 1 \\ 3 & 0 & 2 & 0 & -2 \\ 1 & 2 & 2 & 2 & 2 \\ 2 & 0 & 0 & 0 & 1 \end{pmatrix}$,
start by swapping Rows 1 and 4 and Columns 1 and 2 to get $\begin{pmatrix} 2 & 1 & 2 & 2 & 2 \\ 0 & 1 & 3 & 1 & 1 \\ 0 & 3 & 2 & 0 & -2 \\ 0 & 3 & -1 & 1 & 2 \\ 0 & 2 & 0 & 0 & 1 \end{pmatrix}$,
next, swap Columns 2 and 4 to get $\begin{pmatrix} 2 & 2 & 2 & 1 & 2 \\ 0 & 1 & 3 & 1 & 1 \\ 0 & 0 & 2 & 3 & -2 \\ 0 & 1 & -1 & 3 & 2 \\ 0 & 0 & 0 & 2 & 1 \end{pmatrix}$,
and then add $-1$ times Row 2 to Row 4 to get $\begin{pmatrix} 2 & 2 & 2 & 1 & 2 \\ 0 & 1 & 3 & 1 & 1 \\ 0 & 0 & 2 & 3 & -2 \\ 0 & 0 & -4 & 2 & 1 \\ 0 & 0 & 0 & 2 & 1 \end{pmatrix}$,
and then add $2$ times Row 3 to Row 4 to get $\begin{pmatrix} 2 & 2 & 2 & 1 & 2 \\ 0 & 1 & 3 & 1 & 1 \\ 0 & 0 & 2 & 3 & -2 \\ 0 & 0 & 0 & 8 & -3 \\ 0 & 0 & 0 & 2 & 1 \end{pmatrix}$,
and now the determinant is $(-1)^3 \cdot 2 \cdot 1 \cdot 2 \cdot (8 \cdot 1 - 2 \cdot (-3)) = -56$, where $3$ is the number of row or column swaps we made and that last factor is the determinant of the bottom-right 2×2 block (later, we will see that the determinant of a block–upper-triangular matrix is the product of the determinants of the diagonal blocks).
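
In code, the same reduction can be sketched with the matrix class assumed elsewhere in this document, with the additional assumptions that the class can be copied and that its operator() returns a reference so that entries can be swapped and overwritten; this version uses only row swaps, since column swaps are mostly a convenience when working by hand:

    #include <cmath>     // for std::abs
    #include <utility>   // for std::swap

    double det( matrix A, unsigned int n ) {   // 'A' is passed by value and destroyed
        double result{ 1.0 };

        for ( unsigned int k{ 0 }; k < n; ++k ) {
            // Find the largest entry in magnitude on or below the diagonal in column 'k'
            unsigned int p{ k };

            for ( unsigned int i{ k + 1 }; i < n; ++i ) {
                if ( std::abs( A(i, k) ) > std::abs( A(p, k) ) ) {
                    p = i;
                }
            }

            if ( A(p, k) == 0.0 ) {
                return 0.0;                    // The matrix is singular
            }

            if ( p != k ) {
                for ( unsigned int j{ k }; j < n; ++j ) {
                    std::swap( A(k, j), A(p, j) );
                }

                result = -result;              // Each swap flips the sign of the determinant
            }

            // Add multiples of row 'k' onto the rows below it
            for ( unsigned int i{ k + 1 }; i < n; ++i ) {
                double const multiplier{ A(i, k)/A(k, k) };

                for ( unsigned int j{ k }; j < n; ++j ) {
                    A(i, j) -= multiplier*A(k, j);
                }
            }

            result *= A(k, k);
        }

        return result;
    }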

4.2 Characteristic polynomial of a 4×4 matrix

Because of all of the "nice" 4×4 orthogonal matrices, it is sometimes useful to use such matrices in performing eigendecompositions; however, finding the characteristic polynomial $\det(\lambda I_4 - A)$ is often difficult.

5. Diagonal, upper-triangular or lower-triangular matrices

A matrix $A$ is "diagonal" if $a_{i,j} = 0$ whenever $i \neq j$. A matrix $A$ is "upper-triangular" if $a_{i,j} = 0$ whenever $j < i$, and a matrix $A$ is "lower-triangular" if $a_{i,j} = 0$ whenever $i < j$. The following, respectively, are examples of such matrices when they are square:

$\begin{pmatrix} 7 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, $\begin{pmatrix} 2 & 1 & 5 & 2 \\ 0 & 7 & 8 & 1 \\ 0 & 0 & 8 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ and $\begin{pmatrix} 2 & 0 & 0 & 0 \\ 5 & 3 & 0 & 0 \\ 3 & 7 & 8 & 0 \\ 5 & 2 & 5 & 5 \end{pmatrix}$.

A matrix need not be square to be diagonal, upper-triangular or lower-triangular, but the determinant, characteristic polynomial and inverse are only defined when the matrix represents a linear operator (that is, when it is a square matrix).

5.1 Determinant of a diagonal, upper-triangular or lower-triangular matrix

The determinant of a diagonal, upper-triangular or lower-triangular n×n matrix $A$ is the product of the diagonal entries: $\det(A) = \prod_{k=1}^n a_{k,k}$.
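
In code, this is a single loop over the diagonal; a minimal sketch using the matrix class assumed elsewhere in this document (the function name is illustrative):

    double det_triangular( matrix const &A, unsigned int n ) {
        double result{ 1.0 };

        for ( unsigned int k{ 0 }; k < n; ++k ) {
            result *= A(k, k);   // multiply the diagonal entries
        }

        return result;
    }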

5.2 Characteristic polynomial of a diagonal, upper-triangular or lower-triangular matrix

The characteristic polynomial of a diagonal, upper-triangular or lower-triangular n×n matrix $A$ is the product $\det(\lambda I_n - A) = \prod_{k=1}^n (\lambda - a_{k,k})$.

5.3 Inverses of a diagonal, upper-triangular or lower-triangular matrix

A diagonal, upper-triangular or lower-triangular matrix is invertible if and only if all diagonal entries are non-zero.

The inverse of a diagonal matrix $A$ is a diagonal matrix where each entry on the diagonal is the reciprocal of the corresponding entry in $A$.

The inverse of an upper-triangular matrix is an upper-triangular matrix, and can be calculated iteratively as follows. Given the upper-triangular matrix $A$, let $E$ be its inverse.

  1. Set all the entries of $E$ to zero.
  2. Assign $e_{i,i} \leftarrow \frac{1}{a_{i,i}}$ for each $i = 1, \ldots, n$.
  3. Starting from the second row from the bottom, so $i = n-1$, and going up to the top ($i = 1$),
    starting from $j = i+1$ and going up to $n$, calculate the sum of products $sp_{i,j}$ of the $i$th row of $A$ and the $j$th column of $E$, so $sp_{i,j} = \sum_{k=1}^n a_{i,k} e_{k,j}$, and set $e_{i,j} \leftarrow -e_{i,i} \, sp_{i,j}$.

You will note that most of the products are $0$, so we can simplify $sp_{i,j} = \sum_{k=1}^n a_{i,k} e_{k,j}$ to $sp_{i,j} = \sum_{k=i+1}^{j} a_{i,k} e_{k,j}$. This still, however, requires $\frac{n^3}{6} + O(n^2)$ multiplications and additions. To give an example:

If $A = \begin{pmatrix} 2 & 5 & -7 & 4 \\ 0 & 10 & 3 & -2 \\ 0 & 0 & 5 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, then we start with $E = \begin{pmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, and iterate:

First, $3 \cdot 1 = 3$, so $E = \begin{pmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 0.2 & -0.6 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.

Next, $3 \cdot 0.2 = 0.6$ and $3 \cdot (-0.6) + (-2)(1) = -3.8$, so $E = \begin{pmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.1 & -0.06 & 0.38 \\ 0 & 0 & 0.2 & -0.6 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.

Finally, $5 \cdot 0.1 = 0.5$, $5 \cdot (-0.06) + (-7) \cdot 0.2 = -1.7$, and $5 \cdot 0.38 + (-7) \cdot (-0.6) + 4 \cdot 1 = 10.1$, so $E = \begin{pmatrix} 0.5 & -0.25 & 0.85 & -5.05 \\ 0 & 0.1 & -0.06 & 0.38 \\ 0 & 0 & 0.2 & -0.6 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.

Here is an implementation:

    // Assume 'A' is an n x n instance of a matrix class
    // that overloads the member operator:
    //     double operator()( unsigned int i, unsigned int j );
    // Assume 'A' is upper triangular
    matrix E{ n }; // Create an n x n zero-matrix

    for ( unsigned int i{ 0 }; i < n; ++i ) {
        E(i, i) = 1.0/A(i, i);
    }

    // Iterate from the second row from the bottom up to the top;
    // when 'i' wraps around past zero, 'i < n' fails and the loop stops
    for ( unsigned int i{ n - 2 }; i < n; --i ) {
        for ( unsigned int j{ i + 1 }; j < n; ++j ) {
            double sp{ 0.0 };

            // sp = sum of a(i,k) e(k,j) for k = i + 1, ..., j
            for ( unsigned int k{ i + 1 }; k <= j; ++k ) {
                sp += A(i, k)*E(k, j);
            }

            E(i, j) = -E(i, i)*sp;
        }
    }

The inverse of a lower-triangular matrix is a lower-triangular matrix, and can similarly be calculated iteratively as follows. Given the lower-triangular matrix $A$, let $E$ be its inverse.

  1. Set all the entries of $E$ to zero.
  2. Assign $e_{i,i} \leftarrow \frac{1}{a_{i,i}}$ for each $i = 1, \ldots, n$.
  3. Starting from the second row, so $i = 2$, and going down to the bottom ($i = n$),
    starting from $j = i-1$ and going down to $1$, calculate the sum of products $sp_{i,j}$ of the $i$th row of $A$ and the $j$th column of $E$, so $sp_{i,j} = \sum_{k=1}^n a_{i,k} e_{k,j}$, and set $e_{i,j} \leftarrow -e_{i,i} \, sp_{i,j}$.

You will note that most of the products are $0$, so we can simplify $sp_{i,j} = \sum_{k=1}^n a_{i,k} e_{k,j}$ to $sp_{i,j} = \sum_{k=j}^{i-1} a_{i,k} e_{k,j}$. This still, however, requires $\frac{n^3}{6} + O(n^2)$ multiplications and additions.

Here is an implementation:

    // Assume 'A' is an n x n instance of a matrix class
    // that overloads the member operator:
    //     double operator()( unsigned int i, unsigned int j );
    // Assume 'A' is lower triangular
    matrix E{ n }; // Create an n x n zero-matrix

    for ( unsigned int i{ 0 }; i < n; ++i ) {
        E(i, i) = 1.0/A(i, i);
    }

    // Iterate from the second row down to the bottom
    for ( unsigned int i{ 1 }; i < n; ++i ) {
        // When 'j' wraps around past zero, 'j < n' fails and the loop stops
        for ( unsigned int j{ i - 1 }; j < n; --j ) {
            double sp{ 0.0 };

            // sp = sum of a(i,k) e(k,j) for k = j, ..., i - 1
            for ( unsigned int k{ j }; k < i; ++k ) {
                sp += A(i, k)*E(k, j);
            }

            E(i, j) = -E(i, i)*sp;
        }
    }

Calculating the inverse of an upper- or lower-triangular matrix is more numerically stable than calculating the inverse of an arbitrary matrix, unless the diagonal entries are much smaller in magnitude than the off-diagonal entries.

6. Tri-diagonal matrices

6.1 Determinant of a tri-diagonal matrix

The determinant of a tri-diagonal n×n matrix A may be found through a recursive formula:

  1. Set $d_0 \leftarrow 1$.
  2. Set $d_1 \leftarrow a_{1,1}$.
  3. For $k$ running from $2$ to $n$,
    set $d_k \leftarrow a_{k,k} d_{k-1} - a_{k,k-1} a_{k-1,k} d_{k-2}$.

where $d_n$ is the determinant. Most efficiently, this can be implemented in C++ as

double det( tri_diagonal const &A, unsigned int n ) {
    double previous{ 1.0 };      // d_0 = 1
    double current{ A(0, 0) };   // d_1; assume operator()( unsigned int, unsigned int ) is defined

    for ( unsigned int k{ 1 }; k < n; ++k ) {
        double next{ A(k, k)*current - A(k - 1, k)*A(k, k - 1)*previous };
        previous = current;
        current = next;
    }

    return current;
}

6.1.1 Self-adjoint (symmetric or conjugate symmetric)

The determinant of a self-adjoint tri-diagonal n×n matrix $A$, where each diagonal entry $a_{k,k}$ is real, may be found through a recursive formula:

  1. Set $d_0 \leftarrow 1$.
  2. Set $d_1 \leftarrow a_{1,1}$.
  3. For $k$ running from $2$ to $n$,
    set $d_k \leftarrow a_{k,k} d_{k-1} - |a_{k-1,k}|^2 d_{k-2}$.

where $d_n$ is the determinant. It should be clear that the determinant is real.

6.1.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The determinant of a skew-adjoint tri-diagonal n×n matrix $A$, where each diagonal entry $a_{k,k} j$ is imaginary ($a_{k,k}$ real), may be found through a recursive formula:

  1. Set $d_0 \leftarrow 1$.
  2. Set $d_1 \leftarrow a_{1,1} j$.
  3. For $k$ running from $2$ to $n$,
    set $d_k \leftarrow a_{k,k} j \, d_{k-1} + |a_{k-1,k}|^2 d_{k-2}$.

where $d_n$ is the determinant. It should be clear that the determinant alternates between being real and imaginary depending on the parity of the dimension $n$.

6.2 Characteristic polynomial of a tri-diagonal matrix

The characteristic polynomial $\det(\lambda I_n - A)$ of a tri-diagonal n×n matrix $A$ may be found through a recursive formula:

  1. Set $p_0 \leftarrow 1$.
  2. Set $p_1 \leftarrow \lambda - a_{1,1}$.
  3. For $k$ running from $2$ to $n$,
    set $p_k \leftarrow (\lambda - a_{k,k}) p_{k-1} - a_{k,k-1} a_{k-1,k} p_{k-2}$.

where $p_n$ is the characteristic polynomial.
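
The same recurrence can be carried out numerically on coefficient vectors. The following is a minimal sketch, where char_poly, the std::vector<double> representation (coefficients stored in ascending order of powers) and the extra dimension parameter are assumptions layered on the tri_diagonal class used above:

    #include <cstddef>   // for std::size_t
    #include <vector>

    std::vector<double> char_poly( tri_diagonal const &A, unsigned int n ) {
        std::vector<double> previous{ 1.0 };              // p_0 = 1
        std::vector<double> current{ -A(0, 0), 1.0 };     // p_1 = lambda - a_{1,1}

        for ( unsigned int k{ 1 }; k < n; ++k ) {
            // next = (lambda - a_{k,k}) p_{k-1} - a_{k,k-1} a_{k-1,k} p_{k-2}
            std::vector<double> next( current.size() + 1, 0.0 );

            for ( std::size_t m{ 0 }; m < current.size(); ++m ) {
                next[m + 1] += current[m];                // the lambda * p_{k-1} term
                next[m]     -= A(k, k)*current[m];        // the -a_{k,k} p_{k-1} term
            }

            double const offdiag{ A(k, k - 1)*A(k - 1, k) };

            for ( std::size_t m{ 0 }; m < previous.size(); ++m ) {
                next[m] -= offdiag*previous[m];           // the off-diagonal term
            }

            previous = current;
            current  = next;
        }

        return current;   // the coefficients of p_n, with leading coefficient 1
    }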

6.2.1 Self-adjoint (symmetric or conjugate symmetric)

The characteristic polynomial of a self-adjoint tri-diagonal n×n matrix $A$, where each diagonal entry $a_{k,k}$ is real, may be found through a recursive formula:

  1. Set $p_0 \leftarrow 1$.
  2. Set $p_1 \leftarrow \lambda - a_{1,1}$.
  3. For $k$ running from $2$ to $n$,
    set $p_k \leftarrow (\lambda - a_{k,k}) p_{k-1} - |a_{k-1,k}|^2 p_{k-2}$.

where $p_n$ is the characteristic polynomial. It should be clear that all of the coefficients are real.

6.2.2 Skew-adjoint (skew-symmetric or conjugate skew-symmetric)

The characteristic polynomial of a skew-adjoint tri-diagonal n×n matrix $A$, where each diagonal entry $a_{k,k} j$ is imaginary ($a_{k,k}$ real), may be found through a recursive formula:

  1. Set $p_0 \leftarrow 1$.
  2. Set $p_1 \leftarrow \lambda - a_{1,1} j$.
  3. For $k$ running from $2$ to $n$,
    set $p_k \leftarrow (\lambda - a_{k,k} j) p_{k-1} + |a_{k-1,k}|^2 p_{k-2}$.

where $p_n$ is the characteristic polynomial.

6.3 Inverse of a tri-diagonal matrix

Unfortunately, the inverse of a tri-diagonal matrix is almost certainly dense, so you would not want to calculate the inverse. However, if the matrix is diagonally dominant, solving a system of linear equations using Gaussian elimination followed by back substitution allows for very fast $O(n)$ algorithms:

    // Assume 'A' is a matrix class that overloads the member operator
    //     double operator()( unsigned int i, unsigned int j );
    // Assume 'x' and 'b' are arrays of capacity N

    // Gaussian elimination
    //   - We assume A is diagonally dominant,
    //     so there will be no reason to swap rows
    for ( unsigned int k{ 1 }; k < N; ++k ) {
        double multiplier{ A(k, k-1)/A(k-1, k-1) };
        A(k, k) = A(k, k) - multiplier*A(k-1, k);
        b[k]    = b[k]    - multiplier*b[k-1];
    }

    // Back substitution
    x[N - 1] = b[N - 1]/A(N-1, N-1);

    // When 'k' wraps around past zero, 'k < N' fails and the loop stops
    for ( unsigned int k{ N - 2 }; k < N; --k ) {
        x[k] = (b[k] - A(k, k+1)*x[k+1])/A(k,k);
    }

7. Elementary matrices

7.1 Determinant of an elementary matrix

If $R(\theta) = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$ is the rotation matrix, then:

Note: $E \leftarrow R_{i,j}(\theta)$ rotates the unit vector $\hat{e}_i$ in the direction of $\hat{e}_j$, just as, in the $x$-$y$ plane, multiplying $R(\theta)\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos(\theta) \\ \sin(\theta) \end{pmatrix}$ rotates this vector in the direction of $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

7.2 Characteristic polynomial of an elementary matrix

7.3 Inverse of an elementary matrix

8. Isometric matrices

8.1 Determinant of an isometric matrix

All eigenvalues $\lambda$ of an isometric matrix (either orthogonal or unitary) have the property that $|\lambda| = 1$. For an orthogonal matrix $A$, because all complex roots come in complex-conjugate pairs, the determinant must be $\det(A) = \pm 1$. For a unitary matrix $A$, $|\det(A)| = 1$.

8.2 Characteristic polynomial of an isometric matrix

There are no algorithms to more easily find the characteristic polynomial of an isometric matrix than for a general matrix.

8.3 Inverse of an isometric matrix

The inverse of an orthogonal matrix is its transpose. The inverse of a unitary matrix is its conjugate transpose.

9. Block-diagonal, block–upper-triangular and block–lower-triangular matrices

A matrix $A$ is "block diagonal" if there are two square matrices $A_{1,1}$ and $A_{2,2}$ such that $A_{1,1}$ is $n_1 \times n_1$ and $A_{2,2}$ is $n_2 \times n_2$ where $n = n_1 + n_2$ and $A = \begin{pmatrix} A_{1,1} & O_{n_1,n_2} \\ O_{n_2,n_1} & A_{2,2} \end{pmatrix}$, where $O_{m,n}$ is an $m \times n$ matrix of zeros.

As the matrices $A_{1,1}$ and $A_{2,2}$ may themselves be block diagonal, it may be possible to represent a matrix $A$ with $m$ blocks, the $k$th being $n_k \times n_k$, where $n_1 + \cdots + n_m = n$ and $A = \begin{pmatrix} A_{1,1} & O_{n_1,n_2} & \cdots & O_{n_1,n_m} \\ O_{n_2,n_1} & A_{2,2} & \cdots & O_{n_2,n_m} \\ \vdots & \vdots & \ddots & \vdots \\ O_{n_m,n_1} & O_{n_m,n_2} & \cdots & A_{m,m} \end{pmatrix}$.

The block $A_{i,j}$ is the $n_i \times n_j$ matrix occupying the entries in the matrix that intersect the rows of $A_{i,i}$ and the columns of $A_{j,j}$.

A matrix $A$ with diagonal blocks $A_{1,1}, \ldots, A_{m,m}$ is "block upper-triangular" if $A_{i,j} = O_{n_i,n_j}$ whenever $j < i$, and a matrix $A$ is "block lower-triangular" if $A_{i,j} = O_{n_i,n_j}$ whenever $i < j$.

9.1 Determinant of a block-diagonal, block–upper-triangular or block–lower-triangular matrix

The determinant of a block-diagonal, block–upper-triangular or block–lower-triangular matrix is equal to the product of the determinants of the diagonal blocks: $\det(A) = \prod_{k=1}^m \det(A_{k,k}) = \det(A_{1,1}) \det(A_{2,2}) \cdots \det(A_{m,m})$.

9.2 Characteristic polynomial of a block-diagonal matrix

The characteristic polynomial of a block-diagonal, block–upper-triangular or block–lower-triangular matrix is equal to the product of the characteristic polynomials of the diagonal blocks: $\det(\lambda I_n - A) = \prod_{k=1}^m \det(\lambda I_{n_k} - A_{k,k}) = \det(\lambda I_{n_1} - A_{1,1}) \det(\lambda I_{n_2} - A_{2,2}) \cdots \det(\lambda I_{n_m} - A_{m,m})$.

Consequently, the eigenvalues of a block-diagonal matrix are the collection of all the eigenvalues of the individual blocks, and if $u_{k,\ell}$ is an $n_k$-dimensional eigenvector corresponding to the eigenvalue $\lambda_{k,\ell}$ of the $n_k \times n_k$ matrix $A_{k,k}$, then $v_{k,\ell} = \begin{pmatrix} \mathbf{0}_{n_1} \\ \vdots \\ \mathbf{0}_{n_{k-1}} \\ u_{k,\ell} \\ \mathbf{0}_{n_{k+1}} \\ \vdots \\ \mathbf{0}_{n_m} \end{pmatrix}$ is an $n$-dimensional eigenvector of $A$ corresponding to $\lambda_{k,\ell}$.

9.3 Inverse of a block-diagonal matrix

A block-diagonal, block–upper-triangular or block–lower-triangular matrix is invertible if and only if all of the diagonal blocks $A_{1,1}, \ldots, A_{m,m}$ are invertible.

The inverse of a block-diagonal matrix $A$ is that matrix with the inverses of the blocks on the diagonal: $A^{-1} = \begin{pmatrix} A_{1,1}^{-1} & O_{n_1,n_2} & \cdots & O_{n_1,n_m} \\ O_{n_2,n_1} & A_{2,2}^{-1} & \cdots & O_{n_2,n_m} \\ \vdots & \vdots & \ddots & \vdots \\ O_{n_m,n_1} & O_{n_m,n_2} & \cdots & A_{m,m}^{-1} \end{pmatrix}$.

The inverse of a block–upper-triangular matrix is a block–upper-triangular matrix, and can be calculated as follows. Given the block–upper-triangular matrix $A$, let $E$ be its inverse.

  1. Set all the blocks of $E$ to zero matrices.
  2. Assign $E_{i,i} \leftarrow A_{i,i}^{-1}$ for each $i = 1, \ldots, m$.
  3. Starting from the second block row from the bottom, so $i = m-1$, and going up to the top ($i = 1$),
    starting from $j = i+1$ and going up to $m$, calculate the sum of products $SP_{i,j} = \sum_{k=1}^m A_{i,k} E_{k,j}$, and set $E_{i,j} \leftarrow -E_{i,i} \, SP_{i,j}$.

You will note that most of the products are matrices times a zero matrix, so we can simplify $SP_{i,j} = \sum_{k=1}^m A_{i,k} E_{k,j}$ to $SP_{i,j} = \sum_{k=i+1}^{j} A_{i,k} E_{k,j}$.

The inverse of a block–lower-triangular matrix is a block–lower-triangular matrix, and can be calculated as follows. Given the block–lower-triangular matrix $A$, let $E$ be its inverse.

  1. Set all the blocks of $E$ to zero matrices.
  2. Assign $E_{i,i} \leftarrow A_{i,i}^{-1}$ for each $i = 1, \ldots, m$.
  3. Starting from the second block row, so $i = 2$, and going down to the bottom ($i = m$),
    starting from $j = i-1$ and going down to $1$, calculate the sum of products $SP_{i,j} = \sum_{k=1}^m A_{i,k} E_{k,j}$, and set $E_{i,j} \leftarrow -E_{i,i} \, SP_{i,j}$.

You will again note that most of the products are matrices times a zero matrix, so we can simplify $SP_{i,j} = \sum_{k=1}^m A_{i,k} E_{k,j}$ to $SP_{i,j} = \sum_{k=j}^{i-1} A_{i,k} E_{k,j}$.

10. Projections and reflections

10.1 The projection

The projection of a vector $u$ onto a vector $v$ is that scalar multiple of $v$ that is closest to $u$ using the 2-norm. The formula is $\operatorname{proj}_v(u) = \frac{\langle v, u \rangle}{\langle v, v \rangle} v$. Because the projection is linear, $\operatorname{proj}_v(\alpha_1 u_1 + \alpha_2 u_2) = \alpha_1 \operatorname{proj}_v(u_1) + \alpha_2 \operatorname{proj}_v(u_2)$, there must be a matrix $P_v$ such that $\operatorname{proj}_v(u) = P_v u$.

This matrix is $P_v = \frac{1}{\langle v, v \rangle} |v\rangle\langle v|$, or a scalar times the "outer product". The inner product reduces two vectors to a scalar value, while the outer product converts two vectors into a linear map (matrix). The outer product of an $m$-dimensional real vector $u$ and an $n$-dimensional real vector $v$ is the $m \times n$ matrix $|u\rangle\langle v| = \begin{pmatrix} v_1 u & v_2 u & \cdots & v_n u \end{pmatrix}$.
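
As a minimal sketch for a real vector, $P_v$ can be built directly as the scaled outer product; the function name and the array storage for $v$ are illustrative, while the matrix class is the one assumed elsewhere in this document:

    // Build P_v = (v v^T)/<v, v> for a non-zero real vector 'v'
    // stored in an array of capacity 'n'.
    matrix projection( double const v[], unsigned int n ) {
        double vv{ 0.0 };   // <v, v>

        for ( unsigned int k{ 0 }; k < n; ++k ) {
            vv += v[k]*v[k];
        }

        matrix P{ n };      // an n x n zero-matrix, as before

        for ( unsigned int i{ 0 }; i < n; ++i ) {
            for ( unsigned int j{ 0 }; j < n; ++j ) {
                P(i, j) = v[i]*v[j]/vv;   // the outer product, scaled by 1/<v, v>
            }
        }

        return P;
    }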

If the dimension of the vector space is greater than $1$, then there is at least one vector $u$ perpendicular to $v$, and thus $P_v u = \mathbf{0}_n$, so $0$ is an eigenvalue of $P_v$, and so the determinant is $0$.

The only vectors left unchanged by a projection are multiples of $v$, and the vectors that are mapped to the zero vector are those perpendicular to $v$. Thus, there is one eigenvalue $1$ and $n-1$ eigenvalues equal to $0$, so the characteristic polynomial must be $(\lambda - 1)\lambda^{n-1}$.

Except in the trivial case when the dimension is n=1, there is no inverse.

The compact eigendecomposition and SVD is $P_v = U (1) U^*$ where $U = \begin{pmatrix} \frac{v}{\|v\|_2} \end{pmatrix}$, which you will note is equivalent to our original representation $\frac{\langle v, u \rangle}{\langle v, v \rangle} v$, as the denominator is $\|v\|_2^2$.

The projection operator $P_v$ has properties including being

  1. self-adjoint, so $P_v^* = P_v$,
  2. idempotent, so $P_v^2 = P_v$,
  3. of rank one, so $\operatorname{rank}(P_v) = 1$,
  4. singular, so $\det(P_v) = 0$, and
  5. positive semidefinite, so $\langle u, P_v u \rangle \geq 0$, so $P_v u$ never points in the opposite direction of $u$.

10.2 The perpendicular component

The perpendicular component of $u$ with respect to $v$ is $\operatorname{perp}_v(u) = u - \operatorname{proj}_v(u)$, and thus its matrix may be seen as being equal to $I_n - P_v$. Now, $v$ is mapped to the zero vector, while all vectors orthogonal to $v$ are left unchanged. Thus, the determinant is still zero, and the characteristic polynomial is $(\lambda - 1)^{n-1}\lambda$. There is, of course, no inverse.

The perpendicular component operator $I_n - P_v$ has properties including being

  1. self-adjoint, so $(I_n - P_v)^* = I_n - P_v$,
  2. idempotent, so $(I_n - P_v)^2 = I_n - P_v$,
  3. of rank $n-1$, so $\operatorname{rank}(I_n - P_v) = n - 1$,
  4. singular, so $\det(I_n - P_v) = 0$, and
  5. positive semidefinite, so $\langle u, (I_n - P_v) u \rangle \geq 0$, so $(I_n - P_v) u$ never points in the opposite direction of $u$.

10.3 The reflection

The Householder reflection is a reflection through the $(n-1)$-dimensional hyperplane defined by all vectors that are perpendicular to a given vector $v$. The reflection is defined as $u - 2\operatorname{proj}_v(u)$, and so is also the reflection of $u$ through its perpendicular component $\operatorname{perp}_v(u)$. Thus, this operator is equal to $I_n - 2P_v$.

The Householder reflection operator $I_n - 2P_v$ has properties including being

  1. self-adjoint, so $(I_n - 2P_v)^* = I_n - 2P_v$,
  2. of full rank, so $\operatorname{rank}(I_n - 2P_v) = n$, and it is therefore invertible,
  3. self-inverse, so $(I_n - 2P_v)^{-1} = I_n - 2P_v$,
  4. orientation-reversing, so $\det(I_n - 2P_v) = -1$, and
  5. isometric, so $\|(I_n - 2P_v) u\|_2 = \|u\|_2$ for all vectors $u$.
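
In practice, a Householder reflection is applied to a vector without ever forming the matrix $I_n - 2P_v$, using only $O(n)$ operations; a minimal sketch for real vectors, with illustrative names and array storage:

    // Overwrite 'u' with (I - 2 P_v) u = u - 2 proj_v(u) for a
    // non-zero real vector 'v'; both arrays have capacity 'n'.
    void householder_apply( double u[], double const v[], unsigned int n ) {
        double vv{ 0.0 };   // <v, v>
        double vu{ 0.0 };   // <v, u>

        for ( unsigned int k{ 0 }; k < n; ++k ) {
            vv += v[k]*v[k];
            vu += v[k]*u[k];
        }

        double const scale{ 2.0*vu/vv };

        for ( unsigned int k{ 0 }; k < n; ++k ) {
            u[k] -= scale*v[k];
        }
    }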

11. Vandermonde matrices

Given $n$ $x$-values, $x_1$ through $x_n$, the associated Vandermonde matrix is $V = \begin{pmatrix} x_1^{n-1} & x_1^{n-2} & \cdots & x_1^2 & x_1 & 1 \\ x_2^{n-1} & x_2^{n-2} & \cdots & x_2^2 & x_2 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ x_{n-1}^{n-1} & x_{n-1}^{n-2} & \cdots & x_{n-1}^2 & x_{n-1} & 1 \\ x_n^{n-1} & x_n^{n-2} & \cdots & x_n^2 & x_n & 1 \end{pmatrix}$, although, in general, this is a small matrix, such as a 3×3 or 4×4: $V = \begin{pmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{pmatrix}$ or $V = \begin{pmatrix} x_1^3 & x_1^2 & x_1 & 1 \\ x_2^3 & x_2^2 & x_2 & 1 \\ x_3^3 & x_3^2 & x_3 & 1 \\ x_4^3 & x_4^2 & x_4 & 1 \end{pmatrix}$, respectively.

The determinant of this matrix is $\prod_{i<j} (x_i - x_j)$, and therefore, as this is the product of all possible differences between the $x$-values, it is non-zero so long as all the $x$-values are different.
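
This product is straightforward to evaluate directly in code; a minimal sketch with an illustrative function name and array storage:

    // The determinant of the Vandermonde matrix on x[0], ..., x[n-1],
    // computed as the product of all differences x[i] - x[j] with i < j.
    double vandermonde_det( double const x[], unsigned int n ) {
        double result{ 1.0 };

        for ( unsigned int i{ 0 }; i < n; ++i ) {
            for ( unsigned int j{ i + 1 }; j < n; ++j ) {
                result *= x[i] - x[j];
            }
        }

        return result;
    }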

This matrix does not have an easy inverse; however, if the $x$-values are equally spaced, the inverse can be calculated. For example, if the $x$-values are $0, 1, 2, \ldots$, then we have $V^{-1} = \begin{pmatrix} -1 & 1 \\ 1 & 0 \end{pmatrix}$, $V^{-1} = \frac{1}{2}\begin{pmatrix} 1 & -2 & 1 \\ -3 & 4 & -1 \\ 2 & 0 & 0 \end{pmatrix}$, $V^{-1} = \frac{1}{6}\begin{pmatrix} -1 & 3 & -3 & 1 \\ 6 & -15 & 12 & -3 \\ -11 & 18 & -9 & 2 \\ 6 & 0 & 0 & 0 \end{pmatrix}$ or $V^{-1} = \frac{1}{24}\begin{pmatrix} 1 & -4 & 6 & -4 & 1 \\ -10 & 36 & -48 & 28 & -6 \\ 35 & -104 & 114 & -56 & 11 \\ -50 & 96 & -72 & 32 & -6 \\ 24 & 0 & 0 & 0 & 0 \end{pmatrix}$.

12. Exponential of a matrix

Given a square matrix $A$, then $e^A = I_n + A + \frac{A^2}{2} + \frac{A^3}{6} + \frac{A^4}{24} + \cdots = \sum_{k=0}^\infty \frac{A^k}{k!}$. Also, $\det(e^A) = e^{\operatorname{tr}(A)}$, and thus $e^A$ is always invertible. If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$, then $e^{\lambda_1}, \ldots, e^{\lambda_n}$ are the eigenvalues of $e^A$.
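
As a sketch only, the series can be truncated to approximate $e^A$ for a small matrix; the fixed dimension N, the function name and the number of terms kept are all illustrative assumptions, and a practical implementation would use scaling and squaring rather than summing the raw series:

    constexpr unsigned int N{ 3 };

    // Approximate e^A by summing the first 'terms' terms of the series.
    void exp_matrix( double const A[N][N], double E[N][N], unsigned int terms ) {
        double term[N][N]{};   // the current term A^k / k!, starting with k = 0

        for ( unsigned int i{ 0 }; i < N; ++i ) {
            for ( unsigned int j{ 0 }; j < N; ++j ) {
                term[i][j] = (i == j) ? 1.0 : 0.0;   // A^0 / 0! = I
                E[i][j]    = term[i][j];
            }
        }

        for ( unsigned int k{ 1 }; k < terms; ++k ) {
            double next[N][N]{};   // next = term * A / k

            for ( unsigned int i{ 0 }; i < N; ++i ) {
                for ( unsigned int j{ 0 }; j < N; ++j ) {
                    for ( unsigned int m{ 0 }; m < N; ++m ) {
                        next[i][j] += term[i][m]*A[m][j];
                    }

                    next[i][j] /= k;
                }
            }

            for ( unsigned int i{ 0 }; i < N; ++i ) {
                for ( unsigned int j{ 0 }; j < N; ++j ) {
                    term[i][j]  = next[i][j];
                    E[i][j]    += term[i][j];
                }
            }
        }
    }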

13. Nilpotent matrices

A matrix $A$ is nilpotent if $A^n = O$ for some integer $n \geq 1$. Here, nil means "nothing" or "zero" and potens means "powerful" or "having power".

Because $\det(A^k) = \det(A)^k$, if $A^n = O$ then $\det(A^n) = \det(O) = 0$, but $\det(A^n) = \det(A)^n$, and therefore, combining these two, $\det(A) = 0$.
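
For example, $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $A^2 = O$, and indeed $\det(A) = 0 \cdot 0 - 1 \cdot 0 = 0$, with characteristic polynomial $\lambda^2$.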