# Introduction

The first tool used almost universally in numerical
methods (I say almost because the very first topic we see does not
use it...) is to take an approximation of an answer and
apply some algorithm which (hopefully) produces a better
approximation. This process of taking one approximation and
creating a better one is called *iteration*.

## Background

None.

# Theory

Given a function f(*x*), if we start with an initial value
*x*_{0} and continue to calculate
f(*x*_{0}),
f(f(*x*_{0})),
f(f(f(*x*_{0}))),
f(f(f(f(*x*_{0})))), such a process is called iteration. That
is, we are starting with an initial value and applying the function f over
and over again. In general, we start with a value *x*_{0} and
define *x*_{k} = f(*x*_{k − 1})
for *k* ≥ 1.
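
This definition can be sketched directly in code (Python is used here for
illustration, though the computational examples in this document are written in Matlab):

```python
import math

def iterate(f, x0, n):
    # Return the list [x_0, x_1, ..., x_n] where x_k = f(x_{k-1}).
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# For example, iterate cos (in radians) starting from 0:
xs = iterate(math.cos, 0.0, 10)
```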

## Experiment

If you take any scientific calculator, start with 0, calculate cos(0),
and then repeatedly calculate the cosine of the previous answer, the
values appear to converge:

- If the calculator is in degrees, the sequence
doesn't change after four iterations: 0, 1, 0.99984770, 0.99984774, 0.99984774, and
- If the calculator is in radians, the first ten
terms are 0, 1, 0.54030231, 0.85755321, 0.65428979, 0.79348036, 0.70136877, 0.76395969, 0.72210242, 0.75041777.
This appears to be moving closer to a value between 0.73 and 0.74. After 47 iterations, however,
the iterates begin to alternate between 0.73908514 and 0.73908513.

The calculator used stores eight decimal digits of precision.

In the first case, the iterates converged quickly to a specific value; in the
second, the iterates also settle down, but with only eight digits stored, the
sequence ends by alternating between two neighbouring values.
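
The radian experiment is easy to reproduce in software; a Python sketch
(double precision here, rather than the calculator's eight digits):

```python
import math

x = 0.0
for _ in range(100):
    x = math.cos(x)
# x settles at the solution of x = cos(x), approximately 0.7390851332;
# rounded to eight digits this gives the pair 0.73908514 / 0.73908513 seen above.
```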

# Process

Given a function f(*x*), we start with an initial value
*x*_{0} and define *x*_{k} = f(*x*_{k − 1})
for *k* = 1, 2, 3, ... . With an appropriate choice of function and
initial value,
the sequence will converge, that is, the iterates *x*_{k} will
appear to approach some fixed value.

This is the basis of most
numerical methods: we begin with an initial approximation to
a solution for a given problem and we define some method of taking
an approximation and (hopefully) making a better approximation. Numerical
methods are not exact: in some cases, the sequence will appear to
converge but in reality, no solution exists, and in other cases,
it may not converge even if a solution does exist.

# Examples

You have already seen Newton's method in your other classes. Given a
function g(*x*) for which we wish to find a root, Newton's method says to
start with some initial approximation *x*_{0} and let

*x*_{k} = *x*_{k − 1} - g(*x*_{k − 1})/g^{(1)}(*x*_{k − 1})

for *k* ≥ 1. Thus, the iterated function is f(*x*) = *x* - g(*x*)/g^{(1)}(*x*).
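
As a sketch of Newton's method viewed as iteration (g(*x*) = *x*² - 2 is
chosen here purely for illustration; its positive root is √2):

```python
def f(x):
    # Newton iteration function for g(x) = x^2 - 2, with g'(x) = 2x:
    # f(x) = x - g(x)/g'(x)
    return x - (x*x - 2.0)/(2.0*x)

x = 1.0
for _ in range(6):
    x = f(x)
# x is now the square root of 2 to machine precision
```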

Another interesting example is the Verhulst discrete dynamical system. For a given
value of *c* ∈ [0, 4], define f(*x*) = *cx*(1 - *x*).
Starting with *x*_{0} = 0.5, you will note that iterating this
function converges for values of *c* ∈ [0, 3], but does not converge for values
of *c* ∈ (3, 4).
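
This behaviour can be checked numerically; a Python sketch (the iteration
count of 1000 is an arbitrary choice):

```python
def verhulst(c, x0=0.5, n=1000):
    # Iterate f(x) = c*x*(1 - x) a total of n times starting from x0.
    x = x0
    for _ in range(n):
        x = c*x*(1.0 - x)
    return x

a = verhulst(2.5)          # converges to the fixed point 1 - 1/c = 0.6
b = verhulst(3.5)
b_next = 3.5*b*(1.0 - b)   # for c = 3.5, successive iterates keep changing
```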

# Halting Conditions

As important as iteration is for numerical methods, it is equally important
to know when to stop iterating. Any conditions which are used to indicate
that we should stop iterating are called *halting conditions*. For any
numerical method which uses iteration, we will specify two types
of halting conditions:

- Halting conditions which indicate success, and
- Halting conditions which indicate failure.

The first indicates that the last iterate is sufficiently good for our
purposes (based on our requirements), that is, the sequence of iterates has converged.

The second indicates that the numerical method failed to converge
for whatever reason.

The specific conditions depend on the numerical method, but in general,
all numerical methods which use iteration will have the condition:

Halt if we have iterated *N* times.

where *N* is some large number, say 100, after which we indicate
that the numerical method failed to converge or we start with a different
initial condition.
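
The two kinds of halting conditions can be sketched as follows (the tolerance
`eps`, the cap *N* = 100, and the choice of success test are assumptions for
illustration, not requirements fixed by the text):

```python
import math

def solve(f, x0, eps=1e-10, N=100):
    # Iterate x = f(x); succeed when successive iterates agree to within eps,
    # fail if N iterations pass without that happening.
    x = x0
    for _ in range(N):
        x_new = f(x)
        if abs(x_new - x) < eps:
            return x_new, True      # success: the iterates have converged
        x = x_new
    return x, False                 # failure: too many iterations

root, ok = solve(math.cos, 0.0)                   # converges
_, ok2 = solve(lambda t: 3.5*t*(1.0 - t), 0.5)    # never settles down
```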

Note: if a numerical method fails to converge, we cannot say that
no solution exists, only that we failed to find a solution for the
given initial conditions. Similarly, even when a numerical method appears
to converge, we can never be 100% certain that it has converged to a
valid answer.

# HOWTO

Given a function f(x) and an initial value *x*_{0},
define *x*_{n + 1} = f(*x*_{n}) and
start with *n* = 0, 1, 2, 3, ..., in that order.

In some examples in this class, we will see cases where we require
two initial values *x*_{0} and *x*_{1} and
based on two values, we calculate the next. In this case, we
define *x*_{n + 1} = f(*x*_{n}, *x*_{n - 1}) and
start with *n* = 1, 2, 3, 4, ..., in that order.

For example, the Fibonacci sequence defines f(x, y) = x + y and requires that
*x*_{0} = *x*_{1} = 1. The examples we see in class
will (hopefully) converge rather than diverge.
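
The Fibonacci example above can be sketched as (Python for illustration):

```python
def f(x, y):
    return x + y    # the Fibonacci rule: each term is the sum of the previous two

x = [1, 1]          # x_0 = x_1 = 1
for n in range(1, 10):
    x.append(f(x[n], x[n - 1]))
# x now holds 1, 1, 2, 3, 5, 8, 13, ...
```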

In two examples in this class, we will see cases where we require
three initial values *x*_{0}, *x*_{1}, and *x*_{2} and
based on three values, we calculate the next. In this case, we
define *x*_{n + 1} = f(*x*_{n}, *x*_{n - 1}, *x*_{n - 2}) and
start with *n* = 2, 3, 4, 5, ..., in that order.
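
A three-initial-value iteration has the same shape; the averaging rule below
is a made-up illustration, not one of the methods from the course:

```python
def f(x, y, z):
    return (x + y + z)/3.0    # hypothetical rule: average of the last three terms

x = [0.0, 0.0, 1.0]           # x_0, x_1, x_2
for n in range(2, 60):
    x.append(f(x[n], x[n - 1], x[n - 2]))
# the iterates converge (for these starting values, to 0.5)
```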

# Examples

# Example 1

Iterate the function f(*x*) = sin(*x*) + 1 ten times starting with
*x*_{0} = 1.0.

For interest, observe how the leading digits stop changing from one
iterate to the next.

- 1.84147098480790
- 1.96359072454183
- 1.92384305244206
- 1.93832363904163
- 1.93321865653551
- 1.93504075438949
- 1.93439319553526
- 1.93462368817282
- 1.93454169134957
- 1.93457086707749
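
This list can be reproduced with a short Python loop (the document's Matlab
loops work the same way):

```python
import math

x = 1.0
for _ in range(10):
    x = math.sin(x) + 1.0
# x matches the tenth iterate listed above, 1.93457086707749
```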

# Example 2

What happens if you iterate the two functions
f(*x*) = 2.5*x*(1 - *x*) and
f(*x*) = 3.5*x*(1 - *x*) one hundred times starting with
0.5?

Using the Matlab loop:

```matlab
format long
x = 0.5;
for i = 1:100
    x = 2.5*x*(1 - x)
end
x = 0.5;
for i = 1:100
    x = 3.5*x*(1 - x)
end
```

The first converges to the value 0.6, while the second seems
to jump between the four values 0.874997263602464, 0.382819683017324, 0.826940706591439, 0.500884210307218,
in that order.
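
A Python equivalent of the Matlab loops above, with a check of the four-cycle
(which of the four values appears first depends on where the loop stops):

```python
x = 0.5
for _ in range(100):
    x = 2.5*x*(1.0 - x)    # converges to 0.6

y = 0.5
for _ in range(100):
    y = 3.5*y*(1.0 - y)    # lands on a cycle of four values
cycle = []
for _ in range(4):
    y = 3.5*y*(1.0 - y)
    cycle.append(y)
```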

# Example 3

What happens if you iterate f(*x*) = 3*x*(1 - *x*) one million times starting with
0.5?

Using the Matlab loop:

```matlab
format long
x = 0.5;
for i = 1:1000000
    x = 3*x*(1 - x)
end
```

you will note that it appears to converge, very slowly, to 2/3 = 0.666...;
indeed, after half a million iterations, only the first three digits have
converged.
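
The slow convergence can be confirmed in Python (a million iterations of this
loop runs in well under a second):

```python
x = 0.5
for _ in range(1000000):
    x = 3.0*x*(1.0 - x)
err = abs(x - 2.0/3.0)
# even after a million iterations the error is still only roughly 1e-4
```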

# Example 4

Iterate f(*x*) = *x* - (sin(*x*) + 0.5)/cos(*x*) (Newton's method
applied to sin(x) + 0.5) 10 times starting with *x*_{0} = 0.5 and then again
with *x*_{0} = 1.5.

A relatively small change in the starting point (from 0.5 to 1.5) results in a
significant change in the value to which the sequence of iterates converges.

Starting with *x*_{0} = 0.5:

- -0.616049453506065
- -0.520707051869702
- -0.523596373742976
- -0.523598775596633
- -0.523598775598299
- -0.523598775598299
- -0.523598775598299
- -0.523598775598299
- -0.523598775598299
- -0.523598775598299

Starting with *x*_{0} = 1.5:

- -19.6698363986567
- -19.3306403815145
- -19.3726702222728
- -19.3731546294373
- -19.3731546971371
- -19.3731546971371
- -19.3731546971371
- -19.3731546971371
- -19.3731546971371
- -19.3731546971371
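
Both runs can be reproduced in Python; note that the two limits are -π/6 and
-π/6 - 6π, both of which are roots of sin(*x*) + 0.5:

```python
import math

def f(x):
    # Newton's method applied to sin(x) + 0.5
    return x - (math.sin(x) + 0.5)/math.cos(x)

a, b = 0.5, 1.5
for _ in range(10):
    a, b = f(a), f(b)
# a converges to -pi/6 and b to -pi/6 - 6*pi, both zeros of sin(x) + 0.5
```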

# Example 5

Using Matlab, iterate Newton's method applied to sin(x) + 1.5 twenty times starting with
*x*_{0} = 0.5 and then again with *x*_{0} = 0.5 + 0.5j (a complex
number).

It does not appear to converge in the first case (if you iterate 6529 times, you will
see that it jumps from -4349.53509470918 to 3247.61943813042), but it does converge in the
second case, to a complex value: all iterates after the seventh are unchanged, and the real
part looks a lot like -π/2:

Starting with *x*_{0} = 0.5:

- -1.75554338083061
- 1.05895399897153
- -3.78367498980600
- -1.16288061149179
- -2.63012253179485
- -1.47128050883193
- -6.55370898769264
- -7.83299928321013
- -31.6747905750827
- -32.9616855234576
- -52.9464646048922
- -51.7681580995132
- -59.1478258954813
- -57.9991630501425
- -62.2256682605214
- -64.7441235592290
- -63.0786200963634
- -64.3735777788401
- -81.5894415822197
- -83.1880350967653

Starting with *x*_{0} = 0.5 + 0.5j:

- -1.328864756739268 - 0.423824587674631i
- -1.97201798562560 - 1.06659751314751i
- -1.641590659008061 - 0.892315968025513i
- -1.563753275970338 - 0.961894124817246i
- -1.570801135365454 - 0.962390519304598i
- -1.570796326581148 - 0.962423650840042i
- -1.570796326794897 - 0.962423650119207i
- -1.570796326794897 - 0.962423650119207i
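
The complex run can be reproduced using Python's `cmath` module; the limit is
-π/2 - i·arccosh(1.5), since sin(-π/2 - i·t) = -cosh(t):

```python
import cmath
import math

def f(z):
    # Newton's method applied to sin(z) + 1.5, using complex arithmetic
    return z - (cmath.sin(z) + 1.5)/cmath.cos(z)

z = 0.5 + 0.5j
for _ in range(20):
    z = f(z)
# z is a complex root of sin(z) + 1.5: real part -pi/2,
# imaginary part -acosh(1.5) = -0.962423650119...
```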

# Questions

# Question 1

Which converges faster? Iterating f(*x*) = sin(*x*) or f(*x*) = cos(*x*)? Is the difference significant?

# Question 2

The ancient Greeks knew that if they iterated
f(*x*) = (*x*^{2} + 2)/(2 *x*),
that it would converge to a number of significance. What number is that, and
can it only converge to that one number?
Can you suggest why it converges to the numbers you found?

# Matlab

We may iterate a function as follows in Matlab:

```matlab
x = 0.5
for i = 1:100
    x = x - (sin(x) + 0.5)/cos(x)
end
```

You may replace the `x - (sin(x) + 0.5)/cos(x)` with whatever function
you wish to iterate.

# Maple

We may iterate a function as follows in Maple:

```maple
f := x -> x - (sin(x) + 0.5)/cos(x);
x[0] := 0.5;
for i from 0 to 100 do
    x[i + 1] := f(x[i]);
end do;
```

You may replace the `x - (sin(x) + 0.5)/cos(x)` with whatever function
you wish to iterate.