
# Introduction

There are numerous numerical techniques for finding eigenvalues
and eigenvectors. In this topic, we will look at an elegant method
of finding the eigenvalue of a matrix which has all positive
eigenvalues. This is our first example of a numerical technique
which is based on iteration.

# Background

Useful background for this topic includes:

# References

- Bradie, Section 4.1, The Power Method, p.265.
- Mathews, Section 11.2, The Power Method, p.598.
- James [AMEM], Section 6.5.1 The Power Method, p.446.

# Theory

Suppose **v** is an eigenvector of a diagonalizable *n* × *n* matrix **M**. Then
**Mv** = *λ***v** for some value *λ*. Thus, given any real number *c* > 0,
we may calculate:

(**M**/*c*)^{k} **v** = (*λ*/*c*)^{k} **v**

If |*λ*/*c*| < 1 then the limit as *k* → ∞ is the zero vector **0**, and if
*λ* = *c*, this limit is **v**.

Thus, given a diagonalizable *n* × *n* matrix **M** with real
eigenvalues, let *λ*_{1} ≥ *λ*_{2} ≥ ⋅⋅⋅
≥ *λ*_{n} be the eigenvalues ordered from largest
to smallest, where we assume *λ*_{1} > |*λ*_{i}| for *i* ≥ 2,
and let **v**_{1}, ..., **v**_{n} be the corresponding
eigenvectors. Because the eigenvectors of a diagonalizable matrix form a basis,
a random vector **u** may be written as:

**u** = *u*_{1} **v**_{1} + *u*_{2} **v**_{2} + ⋅⋅⋅ + *u*_{n} **v**_{n}

where, if the vector is truly random, *u*_{1} ≠ 0.

Multiplying by **M**/λ_{1} yields:

(**M**/λ_{1}) **u** = (*u*_{1}/*λ*_{1}) **Mv**_{1} + (*u*_{2}/*λ*_{1}) **Mv**_{2} + ⋅⋅⋅ + (*u*_{n}/*λ*_{1}) **Mv**_{n}

= *u*_{1} (*λ*_{1}/*λ*_{1}) **v**_{1} + *u*_{2} (*λ*_{2}/*λ*_{1}) **v**_{2} + ⋅⋅⋅ + *u*_{n} (*λ*_{n}/*λ*_{1}) **v**_{n}
and hence, multiplying **u** by (**M**/λ_{1})^{k} yields:

(**M**/λ_{1})^{k} **u** = *u*_{1} (*λ*_{1}/*λ*_{1})^{k} **v**_{1} + *u*_{2} (*λ*_{2}/*λ*_{1})^{k} **v**_{2} + ⋅⋅⋅ + *u*_{n} (*λ*_{n}/*λ*_{1})^{k} **v**_{n}
The first ratio is 1 and 1^{k} = 1, while all other ratios are less than one
in absolute value; therefore all the remaining terms go to zero, and thus

(**M**/λ_{1})^{k} **u** → *u*_{1} **v**_{1} as *k* → ∞

Thus, this sequence approaches a multiple of the dominant eigenvector **v**_{1}. Now, we use
the property of the norm that:

||α **v**_{1}||_{2} = |α| ||**v**_{1}||_{2}
for any scalar α to find the largest eigenvalue as follows:

Because we assumed that λ_{1} > 0, we simply calculate
*λ*_{1} = ||**M**(*u*_{1}**v**_{1})||_{2}/||*u*_{1}**v**_{1}||_{2}.
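As a quick numerical illustration of this argument (a Python sketch rather than the Matlab used elsewhere in this topic; the 2 × 2 matrix is an arbitrary example with eigenvalues 3 and 1 and dominant eigenvector (1, 1)^{T}/√2):

```python
# Repeatedly applying M to a random vector aligns it with the dominant
# eigenvector; ||M w|| / ||w|| then recovers the largest eigenvalue.
M = [[2.0, 1.0], [1.0, 2.0]]

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def norm(x):
    return sum(t * t for t in x) ** 0.5

u = [0.7, -0.2]                  # a "random" start with u_1 != 0
for k in range(50):
    u = mat_vec(M, u)
    n = norm(u)
    u = [t / n for t in u]       # rescale so the entries stay bounded

lam = norm(mat_vec(M, u)) / norm(u)
print(lam)                       # approximately 3
```

After 50 iterations the vector is essentially parallel to (1, 1)^{T}, and the ratio of norms recovers the dominant eigenvalue 3.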

The HOWTO below describes the numerical technique for finding this largest eigenvalue and its
corresponding eigenvector.

# HOWTO

# Problem

Given a matrix **M** with real
coefficients, find the maximum eigenvalue.

# Assumptions

We assume that the largest eigenvalue is real and positive and that
the matrix is diagonalizable.

# Tools

We will use basic linear algebra and iteration.

# Initial Requirements

We begin with a random unit vector **v**_{0}. There are
certain vectors for which this method will not converge to the
largest eigenvalue; however, the probability of choosing such
a vector randomly is zero.

# Iteration Process

Given the vector **v**_{k}, calculate
**v**_{k + 1} = **Mv**_{k}/||**v**_{k}||_{2} where ||**v**_{k}||_{2} is the Euclidean
norm of **v**_{k}.

# Halting Conditions

There are two conditions
which may cause the iteration process to halt:

- We halt if the difference between successive vectors is sufficiently small, that is, if ||**v**_{k + 1} - **v**_{k}||_{2} < ε_{step}.
- If we have iterated some maximum number of times, say *N*, and have not met Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that ||**v**_{k + 1}||_{2}
is our approximation to the eigenvalue and **v**_{k + 1} is
our approximation to the corresponding eigenvector.

If we halt due to Condition 2, then either the largest eigenvalue is complex
or the two largest eigenvalues are very close in absolute value.
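The iteration and both halting conditions can be sketched as follows (an illustrative Python translation; the helper names are our own, and the Matlab section of this topic gives the original implementation):

```python
def mat_vec(A, x):
    """Matrix-vector product for plain Python lists."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def norm(x):
    """Euclidean (2-)norm."""
    return sum(t * t for t in x) ** 0.5

def power_method(M, v0, eps_step=1e-10, N=100):
    """Estimate the largest eigenvalue of M and its eigenvector.

    Halts when successive vectors agree to within eps_step (Condition 1)
    or raises an error after N iterations (Condition 2)."""
    v = [t / norm(v0) for t in v0]           # begin with a unit vector
    for _ in range(N):
        nv = norm(v)
        w = [t / nv for t in mat_vec(M, v)]  # v_{k+1} = M v_k / ||v_k||
        if norm([a - b for a, b in zip(w, v)]) < eps_step:
            return norm(w), w                # eigenvalue, eigenvector
        v = w
    raise ArithmeticError("method failed to converge")

# The matrix from Example 1 of this topic:
lam, v = power_method([[8.0, 3.0, -4.0], [3.0, 7.0, 2.0], [-4.0, 2.0, 9.0]],
                      [1.0, 2.0, 3.0], 1e-10, 200)
```

For Example 1's matrix this returns an eigenvalue estimate of approximately 12.59, matching the Matlab transcript there.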

# Error Analysis

You will note that with every iteration, we divided the
vector by its norm. In theory, we could simply
continue multiplying **v**_{0} by higher and higher
powers of **M** and look at the relative norms; however, because we
are using the `double` format to store numbers, and the
growth in the magnitude is exponential, we would
very quickly either overflow or underflow.

For example, if the resistors in a circuit are on the
order of 1 kΩ, then a matrix of resistances may have
a norm on the order of 1000, so the iterates can grow by a factor
of roughly 1000 per iteration; after about 100 iterations,
1000^{k} exceeds the largest representable `double`
(approximately 1.8 × 10^{308}) and an overflow occurs.
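This is easy to demonstrate directly (a pure-Python sketch; Python's `float` is the same IEEE double-precision format as Matlab's `double`):

```python
# Without the rescaling step, the iterates grow like lambda_1^k.  With a
# growth factor of 1000 per iteration (the 1 kOhm example above), a
# double reaches infinity once 1000^k exceeds about 1.8e308.
x = 1.0
steps = 0
while x != float("inf"):
    x *= 1000.0       # one un-normalized iteration's worth of growth
    steps += 1
print(steps)          # 103: overflow after roughly one hundred steps
```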

# Examples

# Example 1

Approximate the maximum eigenvalue of the matrix **M** defined in the code below with the
halting value ε_{step} = 1e-10 and *N* = 100.

Adapting the code on the Matlab page:

>> format long
>> M = [8 3 -4; 3 7 2; -4 2 9];
>> v = rand( 3, 1 );
>> oldnorm = Inf; % set old norm to infinity
>> for i=1:100
v = M * v / norm(v);
newnorm = norm(v);
if ( abs( newnorm - oldnorm ) < 1e-10 )
break;
elseif ( i == 100 )
error( 'method failed to converge' );
else
oldnorm = newnorm;
end
end
>> newnorm
newnorm = 12.5899223267361
>> i % check to see the number of iterations, this may differ
i = 51

# Example 2

Show, using Matlab, that the maximum eigenvalue of the matrix [0 -1; -1 1] is φ, the golden ratio.

>> M = [0 -1; -1 1];
>> v = rand( 2, 1 );
>> for i=1:100
vold = v;
v = M*v / norm( v );
if ( abs( norm( v ) - norm( vold ) ) < 1e-5 )
break;
elseif ( i == 100 )
error( 'method failed to converge' );
end
end
>> i % the number of iterations
i = 9
>> v % the eigenvector
v =
-0.851283180611599
1.375989714216622
>> norm( v ) % the eigenvalue
ans = 1.61803298706242

# Example 3

Plot the sequence of approximations of the maximum eigenvector for the matrix

starting with the initial vector **v**_{0} = (-3.1, 5.2)^{T}

These are shown in Figure 1. The sequence of approximations is shaded from blue to red. The last plotted red vector is quite close to the actual eigenvector
of 9.54 ⋅ (0.763, 0.646)^{T} (9.54 being the corresponding eigenvalue).

Figure 1. The sequence of approximations of the maximum eigenvector
with the initial vector **v** = (-3.1, 5.2)^{T} in black.

# Example 4

Plot the sequence of approximations of the maximum eigenvector for the matrix

starting with the initial vector **v**_{0} = (-2.1, 3.5, 0.9)^{T}

These are shown in Figure 2. The sequence of approximations is shaded from blue to red. The last plotted red vector is quite close to the actual eigenvector
of 10.1 ⋅ (0.746, 0.423, -0.515)^{T} (10.1 being the corresponding eigenvalue).

Figure 2. The sequence of approximations of the maximum eigenvector
with the initial vector **v** = (-2.1, 3.5, 0.9)^{T} in black.

# Questions

# Question 1

Use Matlab and the technique described in this topic to find the largest
eigenvalue of the matrix

with ε_{step} = 1e-10 and *N* = 100 and starting with
the *random* vector (4/5, 3/5, 0)^{T}.

(13.7851650423974)

# Question 2

Assuming that `eig` returns the correct eigenvalue, what is
the relative error of the approximation found in Question 1?

(4.8e-13)

# Question 3

Show what happens to this algorithm if the *random* vector is
the *k*th eigenvector.

(The technique converges to the corresponding eigenvalue as opposed to the maximum
eigenvalue.)
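This behaviour is easy to verify (a Python sketch; the diagonal matrix is an arbitrary example chosen so that the eigenvectors are the coordinate axes):

```python
# If v0 is exactly an eigenvector, every iterate stays on the same
# eigenvector, so the method reports that eigenvalue rather than the
# largest one.  Here the eigenvalues are 5 and 2.
M = [[5.0, 0.0], [0.0, 2.0]]
v = [0.0, 1.0]                    # the eigenvector for lambda = 2
for _ in range(50):
    n = (v[0] * v[0] + v[1] * v[1]) ** 0.5
    v = [(M[i][0] * v[0] + M[i][1] * v[1]) / n for i in range(2)]
lam = (v[0] * v[0] + v[1] * v[1]) ** 0.5
print(lam)                        # 2.0, not the dominant eigenvalue 5
```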

# Applications to Engineering

The maximum eigenvalue is used in finding the *condition
number* of a matrix. The condition number bounds how much relative errors
in the data may be amplified in the computed solution, and therefore indicates
how accurate a solution found using numerical techniques can be.
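As a sketch of the connection (Python; for a symmetric positive-definite matrix the 2-norm condition number is λ_{max}/λ_{min}, and the power method applied to **M** and to **M**^{-1} estimates the two factors; the 2 × 2 matrix and helper names are our own illustration):

```python
# kappa(M) = lambda_max / lambda_min for symmetric positive-definite M.
# The power method on M gives lambda_max; on M^(-1) it gives 1/lambda_min.
M = [[4.0, 1.0], [1.0, 3.0]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]

def dominant_eig(A, steps=200):
    """Power iteration returning the dominant eigenvalue of A."""
    v = [1.0, 0.3]                       # any start with u_1 != 0
    for _ in range(steps):
        n = (v[0] * v[0] + v[1] * v[1]) ** 0.5
        v = [(A[i][0] * v[0] + A[i][1] * v[1]) / n for i in range(2)]
    return (v[0] * v[0] + v[1] * v[1]) ** 0.5

kappa = dominant_eig(M) * dominant_eig(Minv)   # lambda_max / lambda_min
```

Here the eigenvalues of **M** are (7 ± √5)/2, so κ = (7 + √5)/(7 − √5) ≈ 1.94.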

# Matlab

The `eig` function returns the eigenvalues
of a matrix, and thus, the following code works:

>> M = rand( 5, 5 ); % create a random matrix
>> M = M' * M; % make it positive definite
>> max( eig( M ) ) % use the built-in functions

The technique described in this topic is given below
with ε_{step} = 1e-5 and *N* = 100:

>> v = rand( 5, 1 );
>> v = v / norm(v); % normalize it
>> oldnorm = Inf; % set old norm to infinity
>> for i=1:100
v = M * v;
newnorm = norm(v);
if ( abs( newnorm - oldnorm ) < 1e-5 )
break;
elseif ( i == 100 )
error( 'method failed to converge' );
else
v = v/newnorm;
oldnorm = newnorm;
end
end
>> newnorm

# Maple

The eigenvalues of a matrix may be found with the Eigenvalues command:

M := <<-6, 12, 2.4, 0>|<-1, 2, 10.4, 1>|<3.25, 1, -1.8, 14.8>|<10.25, 0, 2, 1.2>>;
LinearAlgebra[Eigenvalues]( M );

For more help on either of these routines or on the LinearAlgebra
package, enter:

?LinearAlgebra
?LinearAlgebra,Eigenvalues