
# Introduction

The Gauss-Seidel method is a technical improvement over the Jacobi
method. In a nutshell, given **x**_{n} = (*x*_{n, i}), the entries
of **x**_{n + 1} = (*x*_{n + 1, i}) are calculated one at a time, so
if you have already calculated *x*_{n + 1, 1},
why continue to use *x*_{n, 1} when calculating
*x*_{n + 1, 2}?


# Theory

For the Jacobi method applied to a system of *n* linear equations defined
by **Mx** = **b**, we start with an initial vector
**x**_{0} and iterate the function:

f(**x**) = **D**^{-1}(**b** − **M**_{off}**x**)

To begin, let **x**_{k} = (*x*_{k, i}),
**D** = (*d*_{i, j}),
**b** = (*b*_{i}), and let
**M**_{off;k,:} represent the *k*th row of **M**_{off}.

Because **D**^{-1} is a diagonal matrix, we are, in essence,
performing the following *n* operations:

*x*_{k + 1,1} = *d*_{1,1}^{-1} ⋅ (*b*_{1} − **M**_{off;1,:} ⋅ **x**_{k})

*x*_{k + 1,2} = *d*_{2,2}^{-1} ⋅ (*b*_{2} − **M**_{off;2,:} ⋅ **x**_{k})

⋮

*x*_{k + 1,n} = *d*_{n,n}^{-1} ⋅ (*b*_{n} − **M**_{off;n,:} ⋅ **x**_{k})

Consider the last entry: even though we have calculated almost
all the new entries of **x**_{k + 1}, we are still
using **x**_{k} to calculate the last entry. If the
sequence is converging, would it not be better to use the updated entries?

Thus, the Gauss-Seidel method is a refinement of the Jacobi method
which may be stated as follows:

To calculate **x**_{k + 1}, first let
**x**_{k + 1} = **x**_{k} and
then update this vector with each calculation:

*x*_{k + 1,1} = *d*_{1,1}^{-1} ⋅ (*b*_{1} − **M**_{off;1,:} ⋅ **x**_{k + 1})

*x*_{k + 1,2} = *d*_{2,2}^{-1} ⋅ (*b*_{2} − **M**_{off;2,:} ⋅ **x**_{k + 1})

⋮

*x*_{k + 1,n} = *d*_{n,n}^{-1} ⋅ (*b*_{n} − **M**_{off;n,:} ⋅ **x**_{k + 1})
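
In code, the only difference from the Jacobi method is that the update reads
from the vector currently being overwritten. The following Matlab fragment is
a minimal sketch of one sweep of each method, assuming the vectors
invD = diag(M).^-1 and x and the matrix Moff = M - diag(diag(M)) have already
been defined:

% One Jacobi sweep: every new entry is computed from the old iterate
xnew = invD.*(b - Moff*x);

% One Gauss-Seidel sweep: entries of x updated earlier in the sweep
% are reused immediately by the rows that follow
for j = 1:length( x )
    x(j) = invD(j)*(b(j) - Moff(j,:)*x);
end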

# HOWTO

# Problem

Given a system of linear equations **Mx** = **b**, find
a solution **x** iteratively.

# Assumptions

This technique assumes that we already have a reasonable
approximation of the solution and that the system is too
large to be solved using standard PLU techniques.

# Tools

We will use iteration and matrix operations.

# Initial Requirements

There are two requirements. First, it is necessary that the diagonal entries
of the matrix **M** are all non-zero. Also, for this technique to be reasonably
efficient (especially when we deal with 1000 × 1000 matrices) we must begin
with a reasonably good approximate solution **x**_{0}.

Write **M** = **D** + **M**_{off}, where
**D** = (*d*_{i,j}) is a diagonal matrix and **M**_{off} consists
of all the off-diagonal entries of **M**.
We represent the *k*th row of **M**_{off} by **M**_{off;k,:}.
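
For example, this splitting may be computed in Matlab as follows (a sketch,
assuming the matrix M is already defined):

d    = diag( M );              % column vector of the diagonal entries of M
D    = diag( d );              % the diagonal matrix D
Moff = M - D;                  % the off-diagonal entries of M
% the kth row M_off;k,: is then Moff(k, :)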

# Iteration Process

Given **x**_{k}, let
**x**_{k + 1} = **x**_{k}. Then,
update the individual entries of **x**_{k + 1} = (*x*_{k + 1, i}) one
entry at a time, as follows:

*x*_{k + 1,1} = *d*_{1,1}^{-1} ⋅ (*b*_{1} − **M**_{off;1,:} ⋅ **x**_{k + 1})

*x*_{k + 1,2} = *d*_{2,2}^{-1} ⋅ (*b*_{2} − **M**_{off;2,:} ⋅ **x**_{k + 1})

⋮

*x*_{k + 1,n} = *d*_{n,n}^{-1} ⋅ (*b*_{n} − **M**_{off;n,:} ⋅ **x**_{k + 1})

# Halting Conditions

There are two conditions
which may cause the iteration process to halt:

- We halt if the step between successive vectors is sufficiently small, that is,
  ||**x**_{k + 1} − **x**_{k}||_{2} < ε_{step}.
- If we have iterated some maximum number of times, say *N*, and have not met
  Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that **x**_{k + 1}
is our approximation of the solution to **Mx** = **b**.

If we halt due to Condition 2, we may either choose a different
initial approximation **x**_{0}, or state that a solution may not exist.
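
A Matlab sketch of the process above, including both halting conditions,
follows; the names eps_step (for ε_{step}) and N are assumptions matching the
quantities above, and M, b, and the initial approximation x are taken as given:

invd = diag( M ).^-1;                % requires all diagonal entries to be non-zero
Moff = M - diag( diag( M ) );

converged = false;
for k = 1:N
    xold = x;
    for j = 1:length( x )
        x(j) = invd(j)*(b(j) - Moff(j,:)*x);
    end
    if norm( x - xold ) < eps_step   % Condition 1: the step is sufficiently small
        converged = true;
        break;
    end
end
if ~converged                        % Condition 2: N iterations without convergence
    error( 'a solution was not found' );
end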

# Error Analysis

The rate of convergence is generally faster than that of the Jacobi
method: for many matrices, the spectral radius of the Gauss-Seidel
iteration matrix is approximately the square of that of the Jacobi
iteration matrix, roughly halving the number of iterations required.
Convergence is, however, still limited by the condition number of **M**.
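
This can be made concrete by comparing the spectral radii of the two
iteration matrices, since each iteration multiplies the error by
approximately that factor. The following Matlab sketch assumes a square
matrix M and uses the standard splitting into strictly lower, diagonal,
and strictly upper parts:

D = diag( diag( M ) );
L = tril( M, -1 );                       % strictly lower triangular part of M
U = triu( M,  1 );                       % strictly upper triangular part of M

rho_jacobi       = max( abs( eig( -D \ (L + U) ) ) )
rho_gauss_seidel = max( abs( eig( -(D + L) \ U ) ) )
% The smaller the spectral radius, the faster the convergence;
% for many matrices rho_gauss_seidel is close to rho_jacobi^2.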

# Examples

# Example 1

Use the Gauss-Seidel method to find a solution to the linear system
defined by:

We rewrite this system as:

Thus, we start with an initial vector, say (0, 0, 0)^{T}, and iterate
(using Matlab) until ε_{step} < 0.1:


**x**_{0} = (0, 0, 0)^{T}

**x**_{1} = (0.4, -0.31429, -0.10952)^{T}

||**x**_{1} - **x**_{0}||_{2} = 0.52036

**x**_{2} = (0.50381, -0.31184, -0.12519)^{T}

||**x**_{2} - **x**_{1}||_{2} = 0.10501

**x**_{3} = (0.49970, -0.30336, -0.11886)^{T}

||**x**_{3} - **x**_{2}||_{2} = 0.011356

Thus, after 3 iterations we have converged for ε_{step} < 0.1, and we have
almost converged for the stricter criterion ε_{step} < 0.01; reaching that
criterion took significantly longer (11 iterations) with the Jacobi method.

Figure 1. A logarithmic plot of the step sizes, ||**x**_{n} − **x**_{n − 1}||_{2}.

Compare this with the rate of decrease of the Jacobi method
for the same problem.

# Example 2

Suppose you solved the system given in Example 1 to full precision
using PLU decomposition together with forward and backward substitution
to get (0.49807, -0.30502, -0.11969)^{T}. Use the Gauss-Seidel
method to find the solution to the perturbed system

5.02*x*_{1} + 2.01*x*_{2} − 0.98*x*_{3} = 2.05

3.03*x*_{1} + 6.95*x*_{2} + 3.04*x*_{3} = −1.02

1.01*x*_{1} − 3.99*x*_{2} + 5.98*x*_{3} = 0.98

iterating until ε_{step} < 0.001.

As we already have a solution to a problem which is *close* to this problem,
we should use it as an initial value:
**x**_{0} = (0.49807, -0.30502, -0.11969)^{T}.

Setting this up in Matlab, we have:

>> M = [5.02 2.01 -0.98; 3.03 6.95 3.04; 1.01 -3.99 5.98];
>> b = [2.05 -1.02 0.98]';
>> D = diag( M ).^-1
D =
0.19920
0.14388
0.16722
>> Moff = M - diag( diag( M ) )
Moff =
0.00000 2.01000 -0.98000
3.03000 0.00000 3.04000
1.01000 -3.99000 0.00000
>> x = [0.49807, -0.30502, -0.11969]'
x =
0.49807
-0.30502
-0.11969
>> for i = 1:100
xold = x;
for j = 1:3
x(j) = D(j)*(b(j) - Moff(j, :)*x);
end
if norm( x - xold ) < 0.001
break;
end
end
>> i % number of iterations: 5 for Jacobi method
i = 4
>> x
x =
0.50775
-0.31143
-0.12967

Note that the number of iterations to achieve much greater
accuracy is significantly reduced by using a solution which is
already known to be close to the expected solution. The actual
solution is (0.50774, -0.31141, -0.12966)^{T}.

# Example 3

Compare the total number of mathematical operations used in
Example 2 with the equivalent work for performing Gaussian
elimination and backward substitution:

Gauss-Seidel:

3 ⋅ (6 + 3) = 27 multiplications and divisions

3 ⋅ 3 = 9 additions and subtractions

Gaussian elimination and backward substitution:

17 multiplications and divisions

11 additions and subtractions

Thus, for such a small example, the Gauss-Seidel method requires little
extra work over Gaussian elimination and backward substitution. As *n*
grows, however, the costs separate, since each Gauss-Seidel iteration is
only O(*n*^{2}) while Gaussian elimination is O(*n*^{3}).

# Questions

# Question 1

Use two iterations of the Gauss-Seidel method to find a solution to the
system of linear equations defined by:

starting with **x**_{0} = (0, 0, 0)^{T}.

Unlike the Jacobi method, you will have to use a calculator for even
this simple example:

((0.5, 1, -0.125)^{T} and (0.5625, 0.9375, -0.15625)^{T})

# Question 2

For the same problem in Question 1, start with the initial vector
**x**_{0} = (0.5, 1, 0)^{T}.

There should be no change from the result in Question 1.

# Question 3

Use Matlab to find a solution to the system of linear equations defined by:

M = ones( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'

starting with the initial vector which is the solution to:

M = eye( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'

using an ε_{step} value of 1e-5.

(After 8 steps, the entries of **x** are:

-0.180776784448946
-0.049578900803678
0.110758490690075
0.119522515813433
0.212816021680441
0.323071929317335
0.505976635944002
0.396699200503048
0.504137539195519
0.503585834506798

.)

# Applications to Engineering

The Gauss-Seidel method is a technical improvement which speeds the
convergence of the Jacobi method.

# Matlab

The following Matlab code splits a matrix into its diagonal and
off-diagonal components and performs up to 100 iterations of the
Gauss-Seidel method, halting as soon as ε_{step} < 1e-5:

>> M = rand( 5, 5 ) + 2*eye( 5 )
M =
2.99886 0.20112 0.33823 0.02418 0.48099
0.13534 2.93193 0.66292 0.61406 0.65023
0.95706 0.87389 2.11674 0.83306 0.04143
0.97267 0.67210 0.04181 2.21591 0.96350
0.24198 0.69004 0.46666 0.06648 2.97124
>> b = rand( 5, 1 )
b =
0.34267
0.94002
0.46967
0.10680
0.61100
>> invD = diag( M ).^-1
invD =
0.33346
0.34107
0.47243
0.45128
0.33656
>> Moff = M - diag( diag( M ) )
Moff =
0.00000 0.20112 0.33823 0.02418 0.48099
0.13534 0.00000 0.66292 0.61406 0.65023
0.95706 0.87389 0.00000 0.83306 0.04143
0.97267 0.67210 0.04181 0.00000 0.96350
0.24198 0.69004 0.46666 0.06648 0.00000
>> x = rand( 5, 1 )
x =
0.89940
0.67897
0.11489
0.77440
0.18706
>> for i=1:100
xold = x;
for j=1:length(x)
x(j) = invD(j)*(b(j) - Moff(j,:)*x);
end
if ( norm( x - xold ) < 1e-5 )
break;
end
end
>> i
i = 10
>> format long
>> M \ b
ans =
0.063615084366386
0.290206822372779
0.118603231123152
-0.120917750880683
0.117138426324804
>> x
x =
0.063615007133803
0.290204859470108
0.118601471784080
-0.120917488549852
0.117139158929149

### Matlab Function

The following is a reasonably efficient implementation of the
Gauss-Seidel method.

function x = gauss_seidel( M, b, N, e )
    % Solve Mx = b using at most N Gauss-Seidel iterations, halting
    % when the infinity norm of the step is less than e

    % The diagonal entries of M and their inverses
    n = length( b );
    d = diag( M );
    if ~all( d )
        error( 'at least one diagonal entry is zero' );
    end
    invd = d.^-1;

    % Matrix of the off-diagonal entries of M
    Moff = M - diag( d );

    % Use D^(-1)*b as the first approximation to x
    invdb = invd.*b;
    x = invdb;

    % Iterate x = D^(-1)*(b - Moff*x), updating one entry of x at a time
    for k = 1:N
        xprev = x;
        for i = 1:n
            x(i) = invdb(i) - invd(i).*(Moff(i,:)*x);
        end
        if norm( x - xprev, inf ) < e
            return;
        end
    end

    error( 'the method did not converge' );
end

# Maple

Maple is not focused on numerical linear algebra, and therefore no
equivalent implementation is given here.