# Introduction

The Gauss-Seidel method is a technical improvement over the Jacobi method. In a nutshell, given $x_n = (x_{n,i})$, each entry of $x_{n+1} = (x_{n+1,i})$ is calculated separately, so if we have already calculated $x_{n+1,1}$, why continue to use $x_{n,1}$ when calculating $x_{n+1,2}$?

# Background

Useful background for this topic includes:

# Theory

For the Jacobi method applied to a system of $n$ linear equations defined by $Mx = b$, we start with an initial vector $x_0$ and iterate the function:

$$f(x) = D^{-1}\,(b - M_{\mathrm{off}}\,x)$$

To begin, let $x_k = (x_{k,i})$, $D = (d_{i,j})$, $b = (b_i)$, and let $M_{k,:}$ represent the $k$th row of $M$.

Because the matrix $D^{-1}$ is diagonal, in essence we are calculating the following $n$ operations:

$$
\begin{aligned}
x_{k+1,1} &= d_{1,1}^{-1}\,(b_1 - M_{\mathrm{off};1,:}\,x_k) \\
x_{k+1,2} &= d_{2,2}^{-1}\,(b_2 - M_{\mathrm{off};2,:}\,x_k) \\
&\;\;\vdots \\
x_{k+1,n} &= d_{n,n}^{-1}\,(b_n - M_{\mathrm{off};n,:}\,x_k)
\end{aligned}
$$

Consider the last entry: even though we have calculated almost all the new entries of $x_{k+1}$, we are still using $x_k$ to calculate the last entry. If the sequence is converging, would it not be better to use the updated entries?

Thus, the Gauss-Seidel method is a refinement of the Jacobi method which may be stated as follows:

To calculate $x_{k+1}$, first let $x_{k+1} = x_k$ and then update this vector with each calculation:

$$
\begin{aligned}
x_{k+1,1} &= d_{1,1}^{-1}\,(b_1 - M_{\mathrm{off};1,:}\,x_{k+1}) \\
x_{k+1,2} &= d_{2,2}^{-1}\,(b_2 - M_{\mathrm{off};2,:}\,x_{k+1}) \\
&\;\;\vdots \\
x_{k+1,n} &= d_{n,n}^{-1}\,(b_n - M_{\mathrm{off};n,:}\,x_{k+1})
\end{aligned}
$$
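To make the contrast concrete, here is a minimal sketch in Python with NumPy (Matlab sessions appear later in these notes; this sketch is equivalent). The 3 × 3 system is hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical diagonally dominant system M x = b.
M = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

d = np.diag(M)           # diagonal entries d_{i,i}
Moff = M - np.diag(d)    # off-diagonal part M_off

x = np.zeros(3)          # x_k

# One Jacobi sweep: every new entry is computed from the old vector x.
x_jacobi = (b - Moff @ x) / d

# One Gauss-Seidel sweep: each entry overwrites the working vector
# immediately, so later rows already see the updated entries.
x_gs = x.copy()
for i in range(len(b)):
    x_gs[i] = (b[i] - Moff[i, :] @ x_gs) / d[i]
```

Starting from the zero vector, the first entry agrees for both methods, but from the second entry onward Gauss-Seidel uses the freshly computed values.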

# Problem

Given a system of linear equations Mx = b, find a solution x iteratively.

# Assumptions

This technique assumes that we already have a reasonable approximation of the solution and that the system is too large to be solved using standard PLU techniques.

# Tools

We will use iteration and matrix operations.

# Initial Requirements

There are two requirements. First, it is necessary that the diagonal entries of the matrix $M$ are all non-zero. Also, for this technique to be reasonably efficient (especially when we deal with $1000 \times 1000$ matrices), we must begin with a reasonably good approximate solution $x_0$.

Write $M = D + M_{\mathrm{off}}$, where $D = (d_{i,j})$ is a diagonal matrix and $M_{\mathrm{off}}$ consists of all the off-diagonal entries of $M$. We represent the $k$th row of $M_{\mathrm{off}}$ by $M_{\mathrm{off};k,:}$.
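This splitting is easy to sketch in code; here is a Python/NumPy version, with an arbitrary 3 × 3 matrix chosen only for illustration:

```python
import numpy as np

M = np.array([[5.0, 2.0, -1.0],
              [3.0, 7.0,  3.0],
              [1.0, -4.0, 6.0]])

d = np.diag(M)        # vector of diagonal entries d_{i,i}
D = np.diag(d)        # the diagonal matrix D
Moff = M - D          # M_off: all off-diagonal entries of M

# Sanity checks: M_off has a zero diagonal, and M = D + M_off.
assert (np.diag(Moff) == 0).all()
assert (D + Moff == M).all()
```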

# Iteration Process

Given $x_k$, let $x_{k+1} = x_k$. Then update the individual entries of $x_{k+1} = (x_{k+1,i})$ one entry at a time, as follows:

$$
\begin{aligned}
x_{k+1,1} &= d_{1,1}^{-1}\,(b_1 - M_{\mathrm{off};1,:}\,x_{k+1}) \\
x_{k+1,2} &= d_{2,2}^{-1}\,(b_2 - M_{\mathrm{off};2,:}\,x_{k+1}) \\
&\;\;\vdots \\
x_{k+1,n} &= d_{n,n}^{-1}\,(b_n - M_{\mathrm{off};n,:}\,x_{k+1})
\end{aligned}
$$

# Halting Conditions

There are two conditions which may cause the iteration process to halt:

1. We halt if the step between successive vectors is sufficiently small, that is, $\|x_{k+1} - x_k\|_2 < \varepsilon_{\mathrm{step}}$.
2. If we have iterated some maximum number of times, say $N$, and have not met Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that $x_{k+1}$ is our approximation of the solution to $Mx = b$.

If we halt due to Condition 2, we may either choose a different initial approximation x0, or state that a solution may not exist.
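Putting the iteration process and both halting conditions together, a minimal sketch in Python with NumPy (the 2 × 2 system, tolerance, and iteration cap are illustrative assumptions, not values from the text):

```python
import numpy as np

def gauss_seidel(M, b, x0, eps_step=1e-5, N=100):
    """Iterate x <- D^{-1}(b - M_off x), updating entries in place."""
    d = np.diag(M)
    if np.any(d == 0):
        raise ValueError("a diagonal entry of M is zero")
    Moff = M - np.diag(d)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(N):
        x_old = x.copy()
        for i in range(len(b)):
            x[i] = (b[i] - Moff[i, :] @ x) / d[i]
        # Condition 1: the step between successive vectors is small.
        if np.linalg.norm(x - x_old) < eps_step:
            return x
    # Condition 2: N iterations passed without meeting Condition 1.
    raise RuntimeError("no solution found after N iterations")

# Illustrative diagonally dominant system; exact solution is (1/6, 1/3).
M = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(M, b, np.zeros(2))
```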

# Error Analysis

The rate of convergence is faster than that of the Jacobi method, but it is still limited by the condition number of the matrix $M$.

# Example 1

Use the Gauss-Seidel method to find a solution to the linear system defined by:

We rewrite this system as:

Thus, we start with an arbitrary vector, say $(0, 0, 0)^T$, and iterate (using Matlab) until $\|x_{k+1} - x_k\|_2 < \varepsilon_{\mathrm{step}} = 0.1$:

$x_0 = (0, 0, 0)^T$
$x_1 = (0.4, -0.31429, -0.10952)^T$
$\|x_1 - x_0\|_2 = 0.52036$
$x_2 = (0.50381, -0.31184, -0.12519)^T$
$\|x_2 - x_1\|_2 = 0.10501$
$x_3 = (0.49970, -0.30336, -0.11886)^T$
$\|x_3 - x_2\|_2 = 0.011356$

Thus, after 3 iterations we have converged; indeed, we have almost converged under the stricter criterion $\varepsilon_{\mathrm{step}} = 0.01$, which took significantly longer (11 iterations) when using the Jacobi method.

Figure 1. A logarithmic plot of the step sizes $\|x_n - x_{n-1}\|_2$.

Compare this with the rate of decrease of the Jacobi method for the same problem.

# Example 2

Suppose you solved the system given in Example 1 to full precision using PLU decomposition together with forward and backward substitution to get $(0.49807, -0.30502, -0.11969)^T$. Use the Gauss-Seidel method to find the solution to:

iterating until $\|x_{k+1} - x_k\|_2 < \varepsilon_{\mathrm{step}} = 0.001$.

As we already have a solution to a problem which is close to this problem, we should use it as an initial value: $x_0 = (0.49807, -0.30502, -0.11969)^T$.

Setting this up in Matlab, we have:

```matlab
>> M = [5.02 2.01 -0.98; 3.03 6.95 3.04; 1.01 -3.99 5.98];
>> b = [2.05 -1.02 0.98]';
>> D = diag( M ).^-1
D =

0.19920
0.14388
0.16722

>> Moff = M - diag( diag( M ) )
Moff =

0.00000   2.01000  -0.98000
3.03000   0.00000   3.04000
1.01000  -3.99000   0.00000

>> x = [0.49807, -0.30502, -0.11969]'
x =

0.49807
-0.30502
-0.11969

>> for i = 1:100
xold = x;

for j = 1:3
x(j) = D(j)*(b(j) - Moff(j, :)*x);
end

if norm( x - xold ) < 0.001
break;
end
end
>> i         % number of iterations: 5 for Jacobi method
i = 4
>> x
x =

0.50775
-0.31143
-0.12967
```

Note that the number of iterations to achieve much greater accuracy is significantly reduced by using a solution which is already known to be close to the expected solution. The actual solution is (0.50774, -0.31141, -0.12966)T.
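The session above can also be reproduced outside Matlab; the following Python/NumPy sketch uses the same matrix, right-hand side, warm-start vector, and tolerance:

```python
import numpy as np

M = np.array([[5.02, 2.01, -0.98],
              [3.03, 6.95,  3.04],
              [1.01, -3.99,  5.98]])
b = np.array([2.05, -1.02, 0.98])

d = np.diag(M)           # diagonal entries of M
Moff = M - np.diag(d)    # off-diagonal part of M

# Warm start: the full-precision solution of the nearby system.
x = np.array([0.49807, -0.30502, -0.11969])

for k in range(1, 101):
    x_old = x.copy()
    for j in range(3):
        x[j] = (b[j] - Moff[j, :] @ x) / d[j]
    if np.linalg.norm(x - x_old) < 0.001:
        break
```

As in the Matlab session, only a handful of iterations are needed from this warm start.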

# Example 3

Compare the total number of mathematical operations used in Example 2 with the equivalent work for performing Gaussian elimination and backward substitution:

Gauss-Seidel method:
3 ⋅ (6 + 3) = 27 multiplications and divisions
3 ⋅ 3 = 9 additions and subtractions

Gaussian elimination and backward substitution:
17 multiplications and divisions

Thus, for such a small example, the Gauss-Seidel method requires little extra work over Gaussian elimination and backward substitution. As $n$ grows, however, the comparison shifts in favour of iteration: each Gauss-Seidel iteration is still only O(n²) operations, whereas Gaussian elimination requires O(n³).
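As a quick arithmetic check of the multiplication count (a small Python sketch; the factor of 3 iterations is taken from the tally above):

```python
# For an n x n system, each Gauss-Seidel iteration costs, per row,
# (n - 1) multiplications for the off-diagonal dot product plus one
# multiplication by the inverse diagonal entry.
n = 3
per_iteration = n * (n - 1) + n   # 6 + 3 = 9 multiplications/divisions
iterations = 3
total = iterations * per_iteration
print(total)   # prints 27
```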

# Question 1

Use two iterations of the Gauss-Seidel method to find a solution to the system of linear equations defined by:

starting with x0 = (0, 0, 0)T.

Unlike with the Jacobi method, you will have to use a calculator for even this simple example.

(Answers: $(0.5, 1, -0.125)^T$ and $(0.5625, 0.9375, -0.15625)^T$.)

# Question 2

For the same problem in Question 1, start with the initial vector $x_0 = (0.5, 1, 0)^T$.

There should be no change from the result in Question 1.

# Question 3

Use Matlab to find a solution to the system of linear equations defined by:

```matlab
M = ones( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'
```

starting with the initial vector which is the solution to:

```matlab
M = eye( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'
```

using an εstep value of 1e-5.

(After 8 steps, the entries of x are:

```matlab
-0.180776784448946
-0.049578900803678
0.110758490690075
0.119522515813433
0.212816021680441
0.323071929317335
0.505976635944002
0.396699200503048
0.504137539195519
0.503585834506798
```

.)

# Applications to Engineering

The Gauss-Seidel method is a technical improvement which speeds the convergence of the Jacobi method.

# Matlab

The following Matlab code splits a matrix into its diagonal and off-diagonal components and performs up to 100 iterations of the Gauss-Seidel method, halting early once the step size drops below $\varepsilon_{\mathrm{step}} =$ 1e-5:

```matlab
>>  M = rand( 5, 5 ) + 2*eye( 5 )
M =

2.99886  0.20112  0.33823  0.02418  0.48099
0.13534  2.93193  0.66292  0.61406  0.65023
0.95706  0.87389  2.11674  0.83306  0.04143
0.97267  0.67210  0.04181  2.21591  0.96350
0.24198  0.69004  0.46666  0.06648  2.97124

>>  b = rand( 5, 1 )
b =

0.34267
0.94002
0.46967
0.10680
0.61100

>> invD = diag( M ).^-1
invD =

0.33346
0.34107
0.47243
0.45128
0.33656

>> Moff = M - diag( diag( M ) )
Moff =

0.00000  0.20112  0.33823  0.02418  0.48099
0.13534  0.00000  0.66292  0.61406  0.65023
0.95706  0.87389  0.00000  0.83306  0.04143
0.97267  0.67210  0.04181  0.00000  0.96350
0.24198  0.69004  0.46666  0.06648  0.00000

>> x = rand( 5, 1 )
x =

0.89940
0.67897
0.11489
0.77440
0.18706

>> for i=1:100
xold = x;

for j=1:length(x)
x(j) = invD(j)*(b(j) - Moff(j,:)*x);
end

if ( norm( x - xold ) < 1e-5 )
break;
end
end
>> i
i = 10
>> format long
>> M \ b
ans =

0.063615084366386
0.290206822372779
0.118603231123152
-0.120917750880683
0.117138426324804

>> x
x =

0.063615007133803
0.290204859470108
0.118601471784080
-0.120917488549852
0.117139158929149
```

### Matlab Function

The following is a reasonably efficient implementation of the Gauss-Seidel method.

```matlab
function x = gauss_seidel( M, b, N, e )
% Solve Mx = b
% The diagonal entries of M and their inverses
n = length( b );
d = diag( M );

if ~all( d )
error 'at least one diagonal entry is zero';
end

invd = d.^-1;
% Matrix of off-diagonal entries of M
Moff = M - diag( d );

% Use d.^-1*b as the first approximation to x
invdb = invd.*b;
x = invdb;

%              -1
% Iterate x = D  (b - M   *x)
%                      off
for k = 1:N
xprev = x;

for i = 1:n
x(i) = invdb(i) - invd(i).*(Moff(i,:)*x);
end

if norm( x - xprev, inf ) < e
return;
end
end

error 'the method did not converge';
end
```

# Maple

Maple's strengths lie in symbolic rather than large-scale numerical linear algebra, and therefore no equivalent implementation is given here.