# Introduction

In this topic, we see that under certain circumstances, we may factor a matrix M in the form M = LLT. Such a decomposition is called a Cholesky decomposition. It requires half the memory and half the number of operations of a PLU decomposition, but it may only be applied in restricted circumstances, namely when the matrix M is real, symmetric, and positive definite.

# Interactive Maplet

A Cholesky Decomposition Tutorial

# Solving Systems of Linear Equations

Suppose that the n × n matrix M defined by a system of linear equations is both real and symmetric. The symmetry suggests that we can store the matrix in half the memory required by a full non-symmetric matrix of the same size. Consequently, one may suspect that it is also possible to write M = LLT instead of writing M = LU, as we saw in the topic on the LU decomposition.

First, if the matrix M is 1 × 1, then M contains a single entry, and we can write m1, 1 = l1, 1^2, so that l1, 1 = sqrt( m1, 1 ).

This immediately suggests that if we want to keep L real, then m1, 1 ≥ 0.

Let us take a more general lower triangular matrix, for example, the 4 × 4 matrix:

Calculating LLT, we get:

By inspection, we see that this matrix is symmetric. Thus, if we wanted to write a general symmetric matrix M as LLT, from the first column, we get that:

1. l1, 1 = sqrt( m1, 1 )
2. l2, 1 = m2, 1/l1, 1
3. l3, 1 = m3, 1/l1, 1
4. l4, 1 = m4, 1/l1, 1

Having calculated these values from the entries of the matrix M, we may go to the second column, and we note that, because we have already solved for the entries of the form li, 1, we may continue to solve:

1. l2, 2 = sqrt( m2, 2 - l2, 1^2 )
2. l3, 2 = (m3, 2 - l2, 1 · l3, 1)/l2, 2
3. l4, 2 = (m4, 2 - l2, 1 · l4, 1)/l2, 2

Having solved these three, we find that we can solve for l3, 3 and l4, 3:

1. l3, 3 = sqrt( m3, 3 - l3, 1^2 - l3, 2^2 )
2. l4, 3 = (m4, 3 - l3, 1 · l4, 1 - l3, 2 · l4, 2)/l3, 3

and thus, finally we may solve for the last entry:

1. l4, 4 = sqrt( m4, 4 - l4, 1^2 - l4, 2^2 - l4, 3^2 )
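As a sketch, these column-by-column formulas can be evaluated directly in Python; the matrix values are taken from Example 3 later in this topic, and any real, symmetric, positive-definite 4 × 4 matrix would do:

```python
import math

# A 4x4 real, symmetric, positive-definite matrix (from Example 3 of this topic).
M = [[ 9.0,   0.6,  -0.3,   1.5],
     [ 0.6,  16.04,  1.18, -1.5],
     [-0.3,   1.18,  4.1,  -0.57],
     [ 1.5,  -1.5,  -0.57, 25.45]]

L = [[0.0]*4 for _ in range(4)]

# First column: l11 = sqrt(m11), then li1 = mi1 / l11.
L[0][0] = math.sqrt(M[0][0])
L[1][0] = M[1][0] / L[0][0]
L[2][0] = M[2][0] / L[0][0]
L[3][0] = M[3][0] / L[0][0]

# Second column, reusing the first-column entries just found.
L[1][1] = math.sqrt(M[1][1] - L[1][0]**2)
L[2][1] = (M[2][1] - L[1][0]*L[2][0]) / L[1][1]
L[3][1] = (M[3][1] - L[1][0]*L[3][0]) / L[1][1]

# Third column.
L[2][2] = math.sqrt(M[2][2] - L[2][0]**2 - L[2][1]**2)
L[3][2] = (M[3][2] - L[2][0]*L[3][0] - L[2][1]*L[3][1]) / L[2][2]

# Final entry.
L[3][3] = math.sqrt(M[3][3] - L[3][0]**2 - L[3][1]**2 - L[3][2]**2)
```

Multiplying L by its transpose reproduces M to rounding error.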

This may seem exceptionally complex, but by using dot products, we can simplify this algorithm significantly, as is covered in the howto.

## Three Observations

How can we ensure that the arguments of all of the square roots are positive? Without proof, we will state that the Cholesky decomposition is real if the matrix M is positive definite.

Solving a problem Mx = b where M is real and positive definite may be reduced to finding the Cholesky decomposition, setting y = LTx, solving Ly = b for y, and then solving LTx = y for x.

We have not discussed pivoting. If the matrix is diagonally dominant, then pivoting is not required for the PLU decomposition, and consequently, it is not required for the Cholesky decomposition, either.

# Problem

Given the n × n real, symmetric, and diagonally dominant matrix M, find a decomposition M = LLT where L is a real lower-triangular matrix. Next, given a column vector b, solve the system of linear equations defined by M x = b for the vector of unknowns x.

# Assumptions

We will assume that M is real, symmetric, and diagonally dominant, and consequently, it must be invertible.

# Cholesky Decomposition

Start with the candidate matrix L = 0n, n where 0m, n is the m × n zero matrix.

Then, for i = 1, 2, ..., n do:

 Subtract from mi, i the dot product of the ith row of L (as constructed so far) with itself and set li, i to be the square root of this result. For j = i + 1, ..., n, subtract from mj, i the dot product of the ith and jth rows of L (as constructed so far) and set lj, i to be this result divided by li, i.

This is so simple to program in Matlab that we should cover it here:

```M = [5 1.2 0.3 -0.6; 1.2 6 -0.4 0.9; 0.3 -0.4 8 1.7; -0.6 0.9 1.7 10];
n = length( M );
L = zeros( n, n );

for i = 1:n
    L(i, i) = sqrt( M(i, i) - L(i, :)*L(i, :)' );

    for j = (i + 1):n
        L(j, i) = ( M(j, i) - L(i, :)*L(j, :)' )/L(i, i);
    end
end
```

to get the result:

```>> format long
>> L
L =

2.236067977499790   0.000000000000000   0.000000000000000   0.000000000000000
0.536656314599949   2.389979079406345   0.000000000000000   0.000000000000000
0.134164078649987  -0.197491268466351   2.818332343581848   0.000000000000000
-0.268328157299975   0.436823907370487   0.646577012719190   3.052723872310221
```
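For comparison, the same loop translates almost line for line into Python. This sketch uses only the standard library and the matrix from the Matlab session above:

```python
import math

def cholesky(M):
    """Return lower-triangular L with L L^T = M, following the
    column-by-column algorithm described above."""
    n = len(M)
    L = [[0.0]*n for _ in range(n)]
    for i in range(n):
        # Diagonal entry: subtract the dot product of row i of L with itself.
        L[i][i] = math.sqrt(M[i][i] - sum(L[i][k]**2 for k in range(i)))
        for j in range(i + 1, n):
            # Below the diagonal: dot product of rows i and j of L.
            L[j][i] = (M[j][i] - sum(L[i][k]*L[j][k] for k in range(i))) / L[i][i]
    return L

M = [[ 5.0,  1.2,  0.3, -0.6],
     [ 1.2,  6.0, -0.4,  0.9],
     [ 0.3, -0.4,  8.0,  1.7],
     [-0.6,  0.9,  1.7, 10.0]]
L = cholesky(M)
# L[0][0] is sqrt(5) = 2.2360679..., matching the Matlab output above.
```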

# Solving with Forward and Backward Substitution

Use forward substitution to solve for y in Ly = b. Then, having found y, use backward substitution to solve for x in LTx = y.
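Both substitutions can be sketched in Python; the helper names and the 3 × 3 example here are illustrative, not from the text:

```python
def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L."""
    n = len(b)
    y = [0.0]*n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k]*y[k] for k in range(i))) / L[i][i]
    return y

def backward_substitution(U, y):
    """Solve U x = y for upper-triangular U."""
    n = len(y)
    x = [0.0]*n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k]*x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# With M = L L^T, solve M x = b via L y = b, then L^T x = y.
L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 2.0, 4.0]]
LT = [[L[j][i] for j in range(3)] for i in range(3)]  # transpose of L
b = [6.0, 15.0, 16.0]

y = forward_substitution(L, b)
x = backward_substitution(LT, y)
# x works out to [1.0, 1.0, 0.5]
```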

# Error Analysis

The error analysis for the Cholesky decomposition is similar to that for the PLU decomposition, which we will look at when we look at matrix and vector norms.

# Example 1

Find the Cholesky decomposition of the matrix M = (mi, j):

To begin, we note that M is real, symmetric, and diagonally dominant, and therefore positive definite, and thus a real Cholesky decomposition exists.

To do this, we first set up the matrix L = 03, 3 (the 3 × 3 zero matrix):

For the first column, we may make the following observation: the first entry l1, 1 is the square root of m1, 1:

Next, for all other entries in the first column, li, 1 = mi, 1/l1, 1:

Next, we go to the 2nd column: we subtract the dot product of the 2nd row of L with itself from the entry m2, 2 and set l2, 2 to the square root of this result:

For the 3rd row of the 2nd column, we subtract the dot product of the 2nd and 3rd rows of L from m3, 2 and set l3, 2 to this result divided by l2, 2:

Finally, to complete our Cholesky decomposition, we subtract the dot product of the 3rd row of L with itself from the entry m3, 3 and set l3, 3 to the square root of this result:

This is the Cholesky decomposition of M, and a quick test shows that LLT = M.

# Example 2

Use the Cholesky decomposition from Example 1 to solve Mx = b for x when b = (55, -19, 114)T.

We rewrite Mx = b as LLTx = b and let LTx = y.

First we solve Ly = b using forward substitution to get y = (11, -2, 14)T. Next, using this, we solve LTx = y using backward substitution to get x = (1, -2, 2)T.

# Example 3

Find the Cholesky decomposition of the matrix M = (mi, j):

To begin, we note that M is real, symmetric, and diagonally dominant, and therefore positive definite, and thus a real Cholesky decomposition exists.

Set L to be the 4 × 4 zero matrix. Starting with the first column, set l1, 1 to be the square root of m1, 1 = 9, and then set li, 1 = mi, 1/l1, 1:

Next, for the 2nd column, we subtract off the dot product of the 2nd row of L with itself from m2, 2 and set l2, 2 to be the square root of this result:

For the entry l3, 2, we subtract off the dot product of rows 3 and 2 of L from m3, 2 and divide this by l2, 2. Similarly, for the entry l4, 2, we subtract off the dot product of rows 4 and 2 of L from m4, 2 and divide this by l2, 2:

Next, for the 3rd column, we subtract off the dot product of the 3rd row of L with itself from m3, 3 and set l3, 3 to be the square root of this result:

For the entry l4, 3, we subtract off the dot product of rows 4 and 3 of L from m4, 3 and divide this by l3, 3:

Finally, for the 4th column, we subtract off the dot product of the 4th row of L with itself from m4, 4 and set l4, 4 to be the square root of this result:

In Matlab, we see that LLT = M:

```>> L = [3 0 0 0; 0.2 4 0 0; -0.1 0.3 2 0; 0.5 -0.4 -0.2 5]
L =

3.00000   0.00000   0.00000   0.00000
0.20000   4.00000   0.00000   0.00000
-0.10000   0.30000   2.00000   0.00000
0.50000  -0.40000  -0.20000   5.00000

>> L * L'
ans =

9.0000   0.6000  -0.3000   1.5000
0.6000  16.0400   1.1800  -1.5000
-0.3000   1.1800   4.1000  -0.5700
1.5000  -1.5000  -0.5700  25.4500
```

# Example 4

Use the Cholesky decomposition from Example 3 to solve Mx = b for x when b = (2.49, 0.566, 0.787, -2.209)T.

We rewrite Mx = b as LLTx = b and let LTx = y.

First we solve Ly = b using forward substitution to get y = (0.83, 0.1, 0.42, -0.5)T. Next, using this, we solve LTx = y using backward substitution to get x = (0.3, 0, 0.2, -0.1)T.
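These two solves can be checked in Python with the L from Example 3 (a sketch using plain lists and the substitution loops written out):

```python
# L from Example 3 and the right-hand side b from this example.
L = [[ 3.0,  0.0,  0.0, 0.0],
     [ 0.2,  4.0,  0.0, 0.0],
     [-0.1,  0.3,  2.0, 0.0],
     [ 0.5, -0.4, -0.2, 5.0]]
b = [2.49, 0.566, 0.787, -2.209]
n = 4

# Forward substitution: L y = b.
y = [0.0]*n
for i in range(n):
    y[i] = (b[i] - sum(L[i][k]*y[k] for k in range(i))) / L[i][i]

# Backward substitution: L^T x = y (note L^T[i][k] = L[k][i]).
x = [0.0]*n
for i in reversed(range(n)):
    x[i] = (y[i] - sum(L[k][i]*x[k] for k in range(i + 1, n))) / L[i][i]

# y recovers (0.83, 0.1, 0.42, -0.5) and x recovers (0.3, 0, 0.2, -0.1),
# as stated above, to rounding error.
```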

# Question 1

Find the Cholesky decomposition of the matrix M:

```L = [7 0 0 0; 2 5 0 0; -1 -2 6 0; 1 0 -3 5]
```

# Question 2

Using the Cholesky decomposition of M from Question 1, use backward and forward substitution to solve Mx = b where b = (14, -46, 36, -32)T.

Answer: x = (1, -2, 0, -1)T.

# Question 3

Find the Cholesky decomposition of the matrix M:

```L = [2 0 0 0;0.2 1 0 0;0.4 -0.2 3 0;-0.1 0.3 0.5 2]
```

# Question 4

Using the Cholesky decomposition of M from Question 3, use backward and forward substitution to solve Mx = b where b = (-0.2, -0.32, 13.52, 14.17)T.

Answer: x = (0, -1, 1, 3)T.

# Applications to Engineering

The conductance matrix formed by a circuit is positive definite, as are the normal-equation matrices that arise in least-squares linear regression.
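As an illustration of the least-squares case, the sketch below fits a line through four points by forming the normal equations (A^T A) c = A^T b and factoring the 2 × 2 matrix A^T A with Cholesky; the data points are made up for this example:

```python
import math

# Least-squares fit of y ≈ c0 + c1*t; the data lie exactly on y = 1 + 2t.
t = [0.0, 1.0, 2.0, 3.0]
b = [1.0, 3.0, 5.0, 7.0]
A = [[1.0, ti] for ti in t]

# Normal equations: M = A^T A (symmetric positive definite when the
# columns of A are independent) and rhs = A^T b.
M = [[sum(A[k][i]*A[k][j] for k in range(4)) for j in range(2)] for i in range(2)]
rhs = [sum(A[k][i]*b[k] for k in range(4)) for i in range(2)]

# Cholesky factor of the 2x2 matrix M, then two triangular solves.
L = [[math.sqrt(M[0][0]), 0.0], [0.0, 0.0]]
L[1][0] = M[1][0] / L[0][0]
L[1][1] = math.sqrt(M[1][1] - L[1][0]**2)

y0 = rhs[0] / L[0][0]                 # forward substitution
y1 = (rhs[1] - L[1][0]*y0) / L[1][1]
c1 = y1 / L[1][1]                     # backward substitution
c0 = (y0 - L[1][0]*c1) / L[0][0]
# c0, c1 recover the intercept 1 and slope 2 (to rounding error).
```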

# Matlab

The following Matlab code finds the Cholesky decomposition of the matrix M:

```M = [0.9 0.06 -0.39 -0.24; 0.06 1.604 0.134 0.464; -0.39 0.134 2.685 0.802; -0.24 0.464 0.802 1.977];
L = chol( M )'    % chol returns an upper triangular matrix
```

To do this manually, consider:

```M = [0.9 0.06 -0.39 -0.24; 0.06 1.604 0.134 0.464; -0.39 0.134 2.685 0.802; -0.24 0.464 0.802 1.977]
n = length( M );
L = zeros( n, n );

for i = 1:n
    L(i, i) = sqrt( M(i, i) - L(i, :)*L(i, :)' )

    for j = (i + 1):n
        L(j, i) = ( M(j, i) - L(i, :)*L(j, :)' )/L(i, i)
    end
end
```

To solve the system M x = b where

```b = [0.063, -0.6358, 0.5937, -0.1907]'
```

let Matlab use forward and backward substitution as follows:

```y = L \ b
x = L' \ y
```

# Maple

The following commands in Maple find the Cholesky decomposition of the given matrix M:

```M := Matrix([[9, 0.6, -0.3, 1.5], [0.6, 16.04, 1.1800, -1.5], [-0.3, 1.18, 4.1, -0.57], [1.5, -1.5, -0.57, 25.45]] );
LinearAlgebra:-LUDecomposition( M, method = Cholesky );
```

For more help on either of these routines or on the LinearAlgebra package, enter:

```?LinearAlgebra
?LinearAlgebra,LUDecomposition
?LinearAlgebra,LinearSolve
```