4.9 The Jacobi Method

Introduction

The PLU decomposition allows us to solve a system of linear equations. However, in some circumstances, for example, if we know a solution for a similar problem, it would be beneficial if we could use an iterative method to find a solution.

Background

Useful background for this topic includes fixed-point iteration, matrix-vector multiplication, and vector norms.

Theory

Suppose we are trying to solve a system of linear equations Mx = b. If we assume that the diagonal entries are non-zero (true, for example, if the matrix M is positive definite), then we may rewrite this equation as:

Dx + M_off x = b

where D is the diagonal matrix containing the diagonal entries of M and M_off contains the off-diagonal entries of M. Because all the diagonal entries of D are non-zero, the inverse D^-1 is simply the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D. Thus, we may bring the off-diagonal entries to the right-hand side and multiply by D^-1:

x = D^-1 (b - M_off x)

You will recall from the class on iteration that we now have an equation of the form x = f(x), except that in this case the argument is a vector. Thus, one method of solving such a problem is to start with an initial vector x_0 and repeatedly apply f until successive approximations are sufficiently close.
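
As a minimal Matlab sketch of one such step (the variable names M, b and x are illustrative assumptions, not part of the original discussion):

invD = 1 ./ diag( M );          % reciprocals of the diagonal entries, i.e., D^-1
Moff = M - diag( diag( M ) );   % the off-diagonal entries of M
x = invD .* (b - Moff*x);       % one application of x = D^-1 (b - M_off x)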

HOWTO

Problem

Given a system of linear equations Mx = b, find a solution x iteratively.

Assumptions

This technique assumes that we already have a reasonable approximation of the solution and that the system is too large to be solved using standard PLU techniques.

Tools

We will use iteration and matrix operations.

Initial Requirements

There are two requirements. First, it is necessary that the diagonal entries of the matrix M are all non-zero. Second, for this technique to be reasonably efficient (especially when we deal with, say, 1000 × 1000 matrices), we must begin with a reasonably good approximate solution x_0.

Write M = D + M_off, where D is a diagonal matrix and M_off consists of all the off-diagonal entries of M. The inverse D^-1 of the matrix D is simply the diagonal matrix whose diagonal entries are the reciprocals of the diagonal entries of D.

Iteration Process

Given x_k, define x_{k+1} = D^-1 (b - M_off x_k).

Halting Conditions

There are two conditions which may cause the iteration process to halt:

  1. We halt if the step between successive vectors is sufficiently small, that is, if ||x_{k+1} - x_k||_2 < ε_step.
  2. If we have iterated some maximum number of times, say N, and have not met Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that x_{k+1} is our approximation of the solution to Mx = b.

If we halt due to Condition 2, we may either choose a different initial approximation x_0, or state that a solution may not exist.
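
A minimal Matlab sketch of this iteration, assuming M, b, an initial approximation x0, a maximum number of iterations N and a tolerance eps_step are already defined, might look as follows:

invD = diag( M ).^-1;               % D^-1 stored as a column of reciprocals
Moff = M - diag( diag( M ) );       % off-diagonal entries of M
x = x0;
for k = 1:N
    xnew = invD .* (b - Moff*x);    % x_{k+1} = D^-1 (b - M_off x_k)
    if norm( xnew - x ) < eps_step  % Condition 1: the step is sufficiently small
        x = xnew;
        break;
    end
    x = xnew;
end
% If the loop runs all N times without the step becoming small,
% Condition 2 applies and no solution has been found.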

Error Analysis

The Jacobi method will converge for diagonally dominant matrices; however, the rate of convergence depends on the norm of the matrix |||D^-1 M_off|||.

For a matrix which is strictly diagonally dominant, each row of D^-1 M_off has entries whose absolute values sum to less than one, so |||D^-1 M_off|||_∞ < 1 and the iteration converges.
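
For instance, this norm may be checked directly in Matlab for the matrix used in Example 2 below (a quick sketch using the infinity norm):

M = [5.02 2.01 -0.98; 3.03 6.95 3.04; 1.01 -3.99 5.98];
D = diag( diag( M ) );           % diagonal part of M
Moff = M - D;                    % off-diagonal part of M
norm( D \ Moff, inf )            % approximately 0.87 < 1, so the iteration converges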

Examples

Example 1

Use the Jacobi method to find a solution to the linear system Mx = b defined by:

   [ 5   2  -1 ] [ x1 ]   [  2 ]
   [ 3   7   3 ] [ x2 ] = [ -1 ]
   [ 1  -4   6 ] [ x3 ]   [  1 ]

We rewrite this system in the form x = D^-1 (b - M_off x):

   [ x1 ]   [ 1/5   0    0  ] ( [  2 ]   [ 0   2  -1 ] [ x1 ] )
   [ x2 ] = [  0   1/7   0  ] ( [ -1 ] - [ 3   0   3 ] [ x2 ] )
   [ x3 ]   [  0    0   1/6 ] ( [  1 ]   [ 1  -4   0 ] [ x3 ] )

We start with an initial vector, say x_0 = (0, 0, 0)^T, and iterate (using Matlab) until the step size is less than ε_step = 0.1:

x_0 = (0, 0, 0)^T
x_1 = (0.4, -0.14286, 0.16667)^T
||x_1 - x_0||_2 = 0.45627
x_2 = (0.49048, -0.38571, 0.00476)^T
||x_2 - x_1||_2 = 0.30558
x_3 = (0.55524, -0.35510, -0.17222)^T
||x_3 - x_2||_2 = 0.19093
x_4 = (0.50760, -0.30701, -0.16261)^T
||x_4 - x_3||_2 = 0.068376

Thus, after 4 iterations, we have converged to within ε_step = 0.1. Figure 1 shows a logarithmic plot of the step size with respect to the number of iterations. It takes 8 iterations to reduce the step size to less than 0.01 and 11 iterations to reduce it to less than 0.001.

Figure 1. A logarithmic plot of the step sizes ||x_n - x_{n-1}||_2.
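
These iterates may be reproduced with a few lines of Matlab (a sketch assuming the system as written above):

M = [5 2 -1; 3 7 3; 1 -4 6];
b = [2 -1 1]';
invD = diag( M ).^-1;            % D^-1 as a column of reciprocals
Moff = M - diag( diag( M ) );    % off-diagonal entries
x = [0 0 0]';
for k = 1:4
    x = invD .* (b - Moff*x)     % displays x_1 through x_4 as listed above
end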

Example 2

Suppose you solved the system given in Example 1 to full precision using PLU decomposition together with forward and backward substitution to get (0.49807, -0.30502, -0.11969)^T. Use the Jacobi method to find the solution to the nearby system Mx = b defined by:

   [ 5.02   2.01  -0.98 ] [ x1 ]   [  2.05 ]
   [ 3.03   6.95   3.04 ] [ x2 ] = [ -1.02 ]
   [ 1.01  -3.99   5.98 ] [ x3 ]   [  0.98 ]

iterating until the step size is less than ε_step = 0.001.

As we already have a solution to a nearby problem, we should use it as an initial value: x_0 = (0.49807, -0.30502, -0.11969)^T.

Setting this up in Matlab, we have:

>> M = [5.02 2.01 -0.98; 3.03 6.95 3.04; 1.01 -3.99 5.98];
>> b = [2.05 -1.02 0.98]';                                
>> D = diag( M ).^-1
D =

  0.19920
  0.14388
  0.16722

>> Moff = M - diag( diag( M ) )
Moff =

   0.00000   2.01000  -0.98000
   3.03000   0.00000   3.04000
   1.01000  -3.99000   0.00000

>> x = [0.49807, -0.30502, -0.11969]'
x =

   0.49807
  -0.30502
  -0.11969

>> for i=1:100
    xn = D.*(b - Moff*x);         % element-wise multiplication
    if norm( x - xn ) < 0.001 
       break;
    end
    x = xn;
end
>> i         % number of iterations
i = 5
>> x                                                                           
x =

   0.50761
  -0.31105
  -0.13017

Note that the number of iterations needed to achieve much greater accuracy is significantly reduced by using a solution which is already known to be close to the expected solution. The actual solution is (0.50774, -0.31141, -0.12966)^T.

Example 3

Compare the total number of mathematical operations used in Example 2 with the equivalent work for performing Gaussian elimination and backward substitution:

Jacobi:
5 ⋅ (6 + 3) = 45 multiplications and divisions
5 ⋅ 3 = 15 additions and subtractions

Gaussian elimination and backward substitution:
17 multiplications and divisions
11 additions and subtractions

Thus, for such a small example, it would be cheaper to use Gaussian elimination and backward substitution; however, the number of multiplications and divisions for Gaussian elimination grows as O(n^3), whereas each iteration of the Jacobi method requires only one matrix-vector multiplication and is therefore O(n^2).
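
To see why this matters for large systems, we can evaluate these counts for, say, n = 1000 (a rough sketch; the Gaussian elimination formula reproduces the count of 17 above when n = 3):

n = 1000;
n^3/3 + n^2 - n/3   % multiplications and divisions for Gaussian elimination, roughly 3.3e8
n^2                 % multiplications for one Jacobi iteration (matrix-vector product), 1e6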

Questions

Question 1

Use two iterations of the Jacobi method to find a solution to the system of linear equations defined by:

starting with x_0 = (0, 0, 0)^T.

((0.5, 1, -0.25)^T and (0.625, 0.875, -0.125)^T)

Question 2

For the same problem in Question 1, start with the initial vector x_0 = (0.5, 1, 0)^T.

((0.5, 1, -0.125)^T and (0.5625, 0.9375, -0.125)^T.)

Question 3

Use Matlab to find a solution to the system of linear equations defined by:

M = ones( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'

starting with the initial vector which is the solution to:

M = eye( 10 ) + diag( [8 9 5 13 12 11 9 14 13 15] )
b = (1:10)'

using an ε_step value of 1e-5.

(After 57 steps, the entries of x are:

  -0.180774470140078
  -0.049577264091354
   0.110760489689348
   0.119523511986568
   0.212817121006387
   0.323073201594078
   0.505978291464202
   0.396700416747984
   0.504138896601953
   0.503587066159041

.)
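
One possible Matlab setup for this question is sketched below (following the loop used in the Examples; the initial vector is the solution of the second, purely diagonal system):

d = [8 9 5 13 12 11 9 14 13 15]';
M = ones( 10 ) + diag( d );
b = (1:10)';
x = b ./ (1 + d);                  % solves ( eye(10) + diag(d) ) x = b
invD = diag( M ).^-1;
Moff = M - diag( diag( M ) );
for i = 1:1000
    xnew = invD .* (b - Moff*x);
    if norm( x - xnew ) < 1e-5     % ε_step = 1e-5
        break;
    end
    x = xnew;
end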

Applications to Engineering

Efficient methods of solving systems of linear equations, especially when approximate solutions are already known, significantly reduce the amount of computation required.

Matlab

The following Matlab code splits a matrix into its diagonal and off-diagonal components and performs up to 100 iterations of the Jacobi method, or until ε_step < 1e-5:

>> M = rand( 5, 5 ) + 2*eye( 5 )
M =

  2.55509  0.85822  0.92679  0.23732  0.17628
  0.38558  2.55646  0.40726  0.88843  0.89019
  0.44825  0.37596  2.21124  0.61538  0.61754
  0.53346  0.29330  0.23489  2.51195  0.57504
  0.33918  0.58728  0.93330  0.45368  2.98108

>> b = rand( 5, 1 )
b =

  0.66360
  0.04431
  0.00939
  0.48425
  0.66024

>> invD = diag( M ).^-1
invD =

  0.39138
  0.39117
  0.45224
  0.39810
  0.33545

>> Moff = M - diag( diag( M ) )
Moff =

  0.00000  0.85822  0.92679  0.23732  0.17628
  0.38558  0.00000  0.40726  0.88843  0.89019
  0.44825  0.37596  0.00000  0.61538  0.61754
  0.53346  0.29330  0.23489  0.00000  0.57504
  0.33918  0.58728  0.93330  0.45368  0.00000

>> x = rand( 5, 1 )
x =

  0.21771
  0.08108
  0.65971
  0.68427
  0.72107

>> for i=1:100
    xnew = invD .* (b - Moff*x);

    if ( norm( x - xnew ) < 1e-5 )
       break;
    end

    x = xnew;
end
>> i               % number of iterations
i = 71
>> format long
>> M \ b           % the exact solution
ans =

   0.325181051208077
  -0.126451868227150
  -0.133124301983727
   0.096833762777846
   0.236329707577718

>> xnew            % our approximate solution
xnew =

   0.325178965574161
  -0.126454043358870
  -0.133126373431652
   0.096832201775605
   0.236327862519188

Matlab Function

The following is a reasonably efficient implementation of the Jacobi method.

function x = jacobi( M, b, N, e )
	% Solve Mx = b
	% The diagonal entries of M and their inverses
	d = diag( M );

	if ~all( d ) 
		error 'at least one diagonal entry is zero';
	end

	invd = d.^-1;
	% Matrix of off-diagonal entries of M
	Moff = M - diag( d );

	% Use d.^-1*b as the first approximation to x
	invdb = invd.*b;
	x = invdb;

	%              -1
	% Iterate x = D  (b - M   *x)
	%                      off
	for k = 1:N
		xprev = x;
		x = invdb - invd.*(Moff*x);

		if norm( x - xprev, inf ) < e
			return;
		end
	end

	error 'the method did not converge';
end
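
For example, this function might be applied to the system from Example 2 as follows (a usage sketch):

>> M = [5.02 2.01 -0.98; 3.03 6.95 3.04; 1.01 -3.99 5.98];
>> b = [2.05 -1.02 0.98]';
>> x = jacobi( M, b, 100, 1e-5 )   % converges toward (0.50774, -0.31141, -0.12966)'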

Maple

Maple does not have a numerical linear algebra package, and therefore there is no equivalent functionality.