# Introduction

Newton's method is a technique for finding a root of a
scalar-valued function f(*x*) of a single variable *x*.
It has rapid convergence properties, but requires that the model
provide the derivative. These notes extend the technique to systems
of *n* equations in *n* unknowns.

# Background

Useful background for this topic includes:

- Newton's method in one dimension
- Partial derivatives and the Jacobian
- Solving systems of linear equations

# Theory

# Assumptions

This method is based on the assumption that, locally, any function with
continuous partial derivatives looks like an *n*-dimensional
plane (a hyperplane). For example, Figure 1 zooms in on a function of two
variables with continuous partial derivatives. While the function looks reasonably
*wild*, as you zoom in on the red dot, the function looks more and
more like a flat plane.

Figure 1. Zoom on a function of two variables.

Similar to Newton's method in one dimension, we will assume that each of the
*n* functions which make up **f**(**x**) has continuous
partial derivatives which we can compute.

# Partial Derivation

To be filled later.
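
In the meantime, the update rule used in the HOWTO below can be motivated by a
first-order Taylor expansion about the current approximation (a sketch, under
the smoothness assumptions above, not a full derivation). If **r** is a root,
then

```latex
\mathbf{0} = \mathbf{f}(\mathbf{r}) \approx \mathbf{f}(\mathbf{x}_n)
  + \mathbf{J}(\mathbf{f})(\mathbf{x}_n)\,(\mathbf{r} - \mathbf{x}_n)
```

Writing **Δx**_{n} = **r** - **x**_{n} and solving the linearized equation
gives the system **J**(**f**)(**x**_{n}) **Δx**_{n} = -**f**(**x**_{n}),
which is exactly the system solved at each iteration.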

# HOWTO

# Problem

Given *n* functions, each of *n* variables (given by a vector **x**),
f_{i}(**x**), find a vector
**r** (called a root) such that **f**(**r**) = **0**.

We will write the *n* functions as a vector-valued function
**f**(**x**).

# Assumptions

We will assume that each of the functions f_{i}(**x**)
is continuous and has continuous partial derivatives.

# Tools

We will use sampling, partial derivatives, iteration, and solving systems
of linear equations. Information
about the partial derivatives is derived from the model.

# Initial Requirements

We have an initial approximation **x**_{0} of the root.

We have calculated the Jacobian **J**(**f**)(**x**).

# Iteration Process

Given the approximation **x**_{n}, we solve
the system of linear equations defined by

**J**(**f**)(**x**_{n}) **Δx**_{n} = -**f**(**x**_{n})

for **Δx**_{n} and then set:

**x**_{n + 1} = **x**_{n} + **Δx**_{n}

# Halting Conditions

There are three conditions
which may cause the iteration process to halt (where the norm used may be
any of the 1, 2, or infinity norms):

- We halt if both of the following conditions are met:
  - The step between successive iterates is sufficiently small,
    ||**x**_{n + 1} - **x**_{n}|| < ε_{step}, and
  - The function evaluated at the point **x**_{n + 1} is
    sufficiently small, ||**f**(**x**_{n + 1})|| < ε_{abs}.
- If the Jacobian is singular, that is, its determinant
  |**J**(**f**)(**x**_{n})| = 0, the iteration process fails and we halt.
- If we have iterated some maximum number of times, say *N*, and
  have not met Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that **x**_{n + 1} is our
approximation to the root.

If we halt due to either Condition 2 or 3, we may either choose a different
initial approximation **x**_{0}, or state that a solution may not exist.
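
The iteration and halting conditions above can be sketched in Python for the
two-dimensional case. The test system at the bottom is an illustrative
assumption, not part of these notes, and the 2 × 2 linear system is solved by
Cramer's rule rather than a general solver:

```python
def newton2d(f, J, x0, eps_step=1e-10, eps_abs=1e-10, N=100):
    """Newton's method for a system of two equations in two unknowns.

    f(x, y) returns (f1, f2); J(x, y) returns the Jacobian as
    ((df1/dx, df1/dy), (df2/dx, df2/dy)).
    Returns the approximate root, or None if the iteration fails.
    """
    x, y = x0
    for _ in range(N):
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        if det == 0.0:                          # Condition 2: singular Jacobian
            return None
        f1, f2 = f(x, y)
        # Solve J * (dx, dy) = -(f1, f2) by Cramer's rule (2x2 system only).
        dx = (-f1 * d + b * f2) / det
        dy = (-a * f2 + f1 * c) / det
        x, y = x + dx, y + dy
        r1, r2 = f(x, y)
        # Condition 1: both the step and the residual are small (1 norms).
        if abs(dx) + abs(dy) < eps_step and abs(r1) + abs(r2) < eps_abs:
            return (x, y)
    return None                                 # Condition 3: too many iterations

# Hypothetical test system: f(x, y) = (x^2 + y^2 - 2, x - y), a root at (1, 1).
root = newton2d(lambda x, y: (x * x + y * y - 2.0, x - y),
                lambda x, y: ((2.0 * x, 2.0 * y), (1.0, -1.0)),
                (2.0, 0.0))
```

For systems larger than 2 × 2 the step **Δx**_{n} would instead be found with a
general linear solver (for example, Gaussian elimination with partial pivoting).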

# Error Analysis

The error analysis is beyond the scope of this course, but is similar
to that of Newton's method in one dimension,
that is, it is O(*h*^{2}) where *h* is the magnitude of
the error of our current approximation.

# Examples

# Example 1

A Maple worksheet showing Newton's method in 2 dimensions is given here.
If Maple does not automatically open, save this file on your desktop, start Maple, and open the
file. If you want to view this
worksheet in HTML, click here.

Note: if you download the worksheet and run Maple, you will be able to select the images and
rotate them. This would help in seeing the points and understanding how the approximations converge. Maple
is available on all Engineering computers.

# Example 2

Approximate a root of the function
**f**(**x**) defined by:

Let our initial approximation be **x**_{0} = (0, 0)^{T}.

To visualize this, we have two functions (the first red, the second blue),
each of two variables, defining two surfaces. We want to find a point where
both functions are simultaneously zero. Thus, we are looking for either of
the two black points in Figure 1. The two
roots are (0.849308816, 0.9927594664)^{T} and (-0.6397934171, -0.1918694996)^{T}.

Figure 1. The two functions in Example 2, the roots (black) and our initial approximation (0, 0)^{T} (cyan).

Next, we calculate the Jacobian:

The first column of the Jacobian contains the partial derivatives of
**f**(**x**) with respect to *x*, and the second column the partial
derivatives with respect to *y*. We will continue iterating
until ||**Δx**_{n}||_{1} < ε_{step} = 0.01
and
||**f**(**x**_{n + 1})||_{1} < ε_{abs} = 0.01. Recall that the 1 norm of a vector is the sum of the absolute
values of its entries.

Rather than tabulating the results, we will explicitly compute each iteration:

### Calculating **x**_{1}

Evaluating both the Jacobian **J**(**f**)(**x**_{0}) and
-**f**(**x**_{0}), we get the system:

Solving this for **Δx**_{0}, we get
**Δx**_{0} = (-1.0, -0.66667)^{T}, and
therefore our next approximation is:

**x**_{1} = **x**_{0} + **Δx**_{0} = (-1.0, -0.66667)^{T}

We note that, using the 1 norm, ||**Δx**_{0}||_{1} = 1.6667
and ||**f**(**x**_{1})||_{1} = 5.7778.

### Calculating **x**_{2}

Evaluating both the Jacobian **J**(**f**)(**x**_{1}) and
-**f**(**x**_{1}), we get the system:

Solving this for **Δx**_{1}, we get
**Δx**_{1} = (0.12483, 0.55851)^{T}, and
therefore our next approximation is:

**x**_{2} = **x**_{1} + **Δx**_{1} = (-0.87517, -0.10816)^{T}

We note that, using the 1 norm, ||**Δx**_{1}||_{1} = 0.68334
and ||**f**(**x**_{2})||_{1} = 1.3102.

### Calculating **x**_{3}

Evaluating both the Jacobian **J**(**f**)(**x**_{2}) and
-**f**(**x**_{2}), we get the system:

Solving this for **Δx**_{2}, we get
**Δx**_{2} = (0.20229, -0.079792)^{T}, and
therefore our next approximation is:

**x**_{3} = **x**_{2} + **Δx**_{2} = (-0.67288, -0.18795)^{T}

We note that, using the 1 norm, ||**Δx**_{2}||_{1} = 0.28208
and ||**f**(**x**_{3})||_{1} = 0.1892.

### Calculating **x**_{4}

Evaluating both the Jacobian **J**(**f**)(**x**_{3}) and
-**f**(**x**_{3}), we get the system:

Solving this for **Δx**_{3}, we get
**Δx**_{3} = (0.032496, -0.0040448)^{T}, and
therefore our next approximation is:

**x**_{4} = **x**_{3} + **Δx**_{3} = (-0.64038, -0.19199)^{T}

We note that, using the 1 norm, ||**Δx**_{3}||_{1} = 0.036541
and ||**f**(**x**_{4})||_{1} = 0.0042.

### Calculating **x**_{5}

Evaluating both the Jacobian **J**(**f**)(**x**_{4}) and
-**f**(**x**_{4}), we get the system:

Solving this for **Δx**_{4}, we get
**Δx**_{4} = (0.00056451, 0.00016391)^{T}, and
therefore our next approximation is:

**x**_{5} = **x**_{4} + **Δx**_{4} = (-0.63982, -0.19183)^{T}

We note that, using the 1 norm, ||**Δx**_{4}||_{1} = 0.0042
and ||**f**(**x**_{5})||_{1} = 0.0001.

### Final Solution

We may therefore stop and we use **x**_{5} = (-0.63982, -0.19183)^{T}
as our approximation to the root. The actual root is
(-0.6397934171, -0.1918694996)^{T}, and thus we have a reasonable approximation.

To find the other root, we would have to choose a different initial point **x**_{0}.

# Questions

# Question 1

Perform three iterations to approximate a root of the system described by

**f**(**x**) = (*x*^{2} + *y*^{2} - 3, -2 *x*^{2} - ½ *y*^{2} + 2)^{T}
starting with the point **x**_{0} = (1, 1)^{T}.

Answer: **x**_{1} = (0.6666667, 1.833333)^{T},
**x**_{2} = (0.5833333, 1.643939)^{T} and
**x**_{3} = (0.5773810, 1.633030)^{T}. Incidentally,
this is closest to the root (0.5773502693, 1.632993162)^{T} (there
are four altogether).
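
The three iterations can be checked with a short script. The `step` helper
below is a hypothetical illustration (not part of these notes): it evaluates
**f** and the hand-computed Jacobian of the Question 1 system, then solves the
2 × 2 Newton system by Cramer's rule:

```python
def step(x, y):
    """One Newton step for f(x) = (x^2 + y^2 - 3, -2 x^2 - y^2/2 + 2)^T."""
    f1 = x * x + y * y - 3.0
    f2 = -2.0 * x * x - 0.5 * y * y + 2.0
    a, b = 2.0 * x, 2.0 * y        # d f1/dx, d f1/dy
    c, d = -4.0 * x, -y            # d f2/dx, d f2/dy
    det = a * d - b * c
    # Solve J * (dx, dy) = -(f1, f2) by Cramer's rule and take the step.
    return (x + (-f1 * d + b * f2) / det,
            y + (-a * f2 + f1 * c) / det)

x, y = 1.0, 1.0                    # x_0 = (1, 1)^T
for _ in range(3):
    x, y = step(x, y)
# (x, y) is now approximately (0.5773810, 1.633030)
```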

# Question 2

What happens if you start with **x**_{0} = (0, 0)^{T}
in Question 1?

# Question 3

Perform three iterations to approximate a root of the system described by

**f**(**x**) = (*x*^{2} - *xy* + *y*^{2} - 3, *x* + *y* - *xy*)^{T}
starting with the point **x**_{0} = (-1.5, 0.5)^{T}.

Answer: **x**_{1} = (-1.375, 0.575)^{T},
**x**_{2} = (-1.36921, 0.577912)^{T} and
**x**_{3} = (-1.36921, 0.577918)^{T}. Incidentally,
this is closest to the root (-1.369205407, 0.577917560)^{T} (there
are four altogether).
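
As in Question 1, the iterates can be checked numerically. The `step` helper
is a hypothetical illustration using the hand-computed Jacobian of the
Question 3 system and Cramer's rule for the 2 × 2 Newton system:

```python
def step(x, y):
    """One Newton step for f(x) = (x^2 - xy + y^2 - 3, x + y - xy)^T."""
    f1 = x * x - x * y + y * y - 3.0
    f2 = x + y - x * y
    a, b = 2.0 * x - y, -x + 2.0 * y   # d f1/dx, d f1/dy
    c, d = 1.0 - y, 1.0 - x            # d f2/dx, d f2/dy
    det = a * d - b * c
    # Solve J * (dx, dy) = -(f1, f2) by Cramer's rule and take the step.
    return (x + (-f1 * d + b * f2) / det,
            y + (-a * f2 + f1 * c) / det)

x, y = -1.5, 0.5                       # x_0 = (-1.5, 0.5)^T
for _ in range(3):
    x, y = step(x, y)
# (x, y) is now approximately (-1.36921, 0.577918)
```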

# Question 4

From the symmetry in the function given in Question 3, can you guess another
root?

Answer: By symmetry, (0.577918, -1.36921)^{T} would approximate
another root because if you swap all *x*s and *y*s in the function
you get the same expression.

# Applications to Engineering

No applications yet.

# Matlab

To be filled later.

# Maple

You can download this Maple worksheet
to visualize Newton's method in two dimensions. Save the file
as `newton2d.mws` and open it with whichever version of
Maple is running on your machine (probably Maple 9).