# Introduction

Given three points which approximate a minimum, we can find the quadratic function interpolating those points and take the minimum of that quadratic as our next approximation to the minimum.

# Background

Useful background for this topic includes:


# Theory

Suppose we are trying to find the minimum of a function f(x) and we have three initial approximations to that minimum, x₁, x₂, and x₃. For example, consider the plot in Figure 1.

Figure 1. Three approximations to the minimum of cos(x) near x = π.

We can find an interpolating quadratic function

ax² + bx + c

which passes through these three points. Finding the minimum of this interpolating quadratic function is equivalent to finding a root of the derivative of the interpolating quadratic, that is, solving:

2ax + b = 0

which gives x = −b/(2a). This is shown in Figure 2.

Figure 2. The interpolating quadratic function and its minimum.

Note that there is no possibility of subtractive cancellation in our formula, and therefore we do not have to deal with shifting as we did with Müller's method.

We may check to ensure that we are approaching a minimum by checking that the leading coefficient of the interpolating polynomial, a, is positive.
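As a sanity check of the formulas above, the following Python sketch (the examples later in these notes use Matlab; the helper name quad_vertex and the sample points 2.8, 3.1, 3.4 are illustrative choices, not from the original) fits the interpolating quadratic through three samples of cos(x) near π and confirms that a > 0 before accepting the vertex −b/(2a):

```python
import math

def quad_vertex(x0, y0, x1, y1, x2, y2):
    # Lagrange-form coefficients of the quadratic ax^2 + bx + c
    # through (x0, y0), (x1, y1), (x2, y2).
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return -b / (2 * a), a    # vertex and leading coefficient

# Three approximations to the minimum of cos(x) near pi (illustrative values)
xs = [2.8, 3.1, 3.4]
ys = [math.cos(x) for x in xs]
vertex, a = quad_vertex(xs[0], ys[0], xs[1], ys[1], xs[2], ys[2])
print(a > 0, vertex)   # a is positive and the vertex lies close to pi
```

Since a > 0, the parabola opens upward and its vertex is a reasonable next approximation to the minimum.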

# Problem

Given a function of one variable, f(x), find a local minimum of that function.

# Assumptions

We will assume that the function is continuous and has a continuous derivative.

# Tools

We will use sampling, interpolation, and iteration.

# Initial Requirements

We have three initial approximations x₀, x₁, and x₂ of the minimum.

# Iteration Process

Given the last three approximations xₙ₋₃, xₙ₋₂, and xₙ₋₁, let

ax² + bx + c

be the quadratic polynomial which interpolates these three points, and set xₙ = −b/(2a).

# Halting Conditions

There are three conditions which may cause the iteration process to halt:

1. We halt if both of the following conditions are met:
• The step between successive iterates is sufficiently small, |xₙ − xₙ₋₁| < εstep, and
• The difference between successive function values is sufficiently small, |f(xₙ) − f(xₙ₋₁)| < εabs.
2. We halt if the interpolating quadratic function degenerates to a straight line, that is, if a = 0, since the formula −b/(2a) is then undefined.
3. If we have iterated some maximum number of times, say N, and have not met Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that xₙ is our approximation to the minimum.

If we halt due to either Condition 2 or 3, we may either choose different initial approximations x₀, x₁, and x₂, or state that a solution may not exist.
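The full iteration with all three halting conditions can be sketched in Python as follows (the function names, default tolerances, and starting points are illustrative assumptions; a Matlab implementation appears later in these notes):

```python
import math

def quad_coeffs(x0, y0, x1, y1, x2, y2):
    # Leading coefficient a and linear coefficient b of the
    # interpolating quadratic ax^2 + bx + c through three points.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return a, b

def quadratic_min(f, x0, x1, x2, N=100, eps_step=1e-6, eps_abs=1e-10):
    for _ in range(N):
        a, b = quad_coeffs(x0, f(x0), x1, f(x1), x2, f(x2))
        if a == 0.0:
            # Condition 2: the interpolant degenerated to a straight line
            raise ValueError("interpolating quadratic is a straight line")
        xn = -b / (2.0 * a)    # vertex of the parabola
        # Condition 1: both the step and the change in f are small
        if abs(xn - x2) < eps_step and abs(f(xn) - f(x2)) < eps_abs:
            return xn
        x0, x1, x2 = x1, x2, xn    # keep the three newest approximations
    # Condition 3: iterated N times without meeting Condition 1
    raise RuntimeError("no convergence after N iterations")

xmin = quadratic_min(math.cos, 2.8, 3.1, 3.4)
print(xmin)   # close to pi
```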

# Error Analysis

The error may be derived from the error for Newton's method, except that the coefficient is now −½ f⁽³⁾(ξ)/f⁽²⁾(xₙ₋₁).

# Example 1

Perform two steps of quadratic optimization on the function f(x) = x²(x − 2) starting with the points x₀ = 2, x₁ = 1, and x₂ = 1.5. Find the actual minimum.

The polynomial interpolating (2, 0), (1, -1), (1.5, -1.125) is 2.5x² - 6.5x + 3, which has a minimum at x = 13/10 = 1.3. Figure 1 shows the function, the three initial points in red, the interpolating polynomial in blue and the minimum of the interpolating polynomial as a blue point.

Figure 1. First iteration of quadratic optimization showing the points and interpolating quadratic polynomial.

The polynomial interpolating (1, -1), (1.5, -1.125), (1.3, -1.183) is 1.8x² - 4.75x + 1.95, which has a minimum at x = 1.319444…. Figure 2 shows the second iteration.

Figure 2. Second iteration of quadratic optimization showing the points and interpolating quadratic polynomial.

The actual minimum is at x = 4/3, as can be found by differentiating the function, equating to zero, and choosing the appropriate root.
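The two steps above are easy to reproduce programmatically. A minimal Python sketch (the helper quad_vertex is illustrative, not part of the original notes):

```python
def quad_vertex(x0, y0, x1, y1, x2, y2):
    # Vertex -b/(2a) of the quadratic through three points.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return -b / (2 * a)

f = lambda x: x ** 2 * (x - 2)

x3 = quad_vertex(2.0, f(2.0), 1.0, f(1.0), 1.5, f(1.5))   # first step
x4 = quad_vertex(1.0, f(1.0), 1.5, f(1.5), x3, f(x3))     # second step
print(x3, x4)   # 1.3 and 1.319444..., approaching the true minimum 4/3
```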

# Example 2

Use quadratic optimization to find a minimum of the function f(x) = sin(x) starting with the points x₀ = 4, x₁ = 4.1, x₂ = 4.2. Use εstep = εabs = 0.00001.

Using polyfit:

```
>> x = [4, 4.1, 4.2];
>> y = sin(x);
>> polyfit( x, y, 2 );
>> x = [x(2), x(3), -ans(2)/2/ans(1)]
x =
4.10000  4.20000  4.80190

>> [abs( x(3) - x(2) ), abs( sin( x(3) ) - sin( x(2) ) )]
ans =
0.60190  0.12442

>> y = sin(x);
>> polyfit( x, y, 2 );
>> x = [x(2), x(3), -ans(2)/2/ans(1)]
x =
4.20000  4.80190  4.72330

>> [abs( x(3) - x(2) ), abs( sin( x(3) ) - sin( x(2) ) )]
ans =
0.0785975  0.0039435

>> y = sin(x);
>> polyfit( x, y, 2 );
>> x = [x(2), x(3), -ans(2)/2/ans(1)]
x =
4.80190  4.72330  4.71149

>> [abs( x(3) - x(2) ), abs( sin( x(3) ) - sin( x(2) ) )]
ans =
0.0118044  0.0000591

>> y = sin(x);
>> polyfit( x, y, 2 );
>> x = [x(2), x(3), -ans(2)/2/ans(1)]
x =
4.72330  4.71149  4.71239

>> [abs( x(3) - x(2) ), abs( sin( x(3) ) - sin( x(2) ) )]
ans =
0.000891892  0.000000401

>> y = sin(x);
>> polyfit( x, y, 2 );
>> format long
>> x = [x(2), x(3), -ans(2)/2/ans(1)]
x =
4.711493374629565  4.712385266255900  4.712388984477041

>> [abs( x(3) - x(2) ), abs( sin( x(3) ) - sin( x(2) ) )]
ans =
0.000003718221140758260  0.000000000006897371563
```

Thus, after five iterations, both criteria are met and we halt and use 4.712388984477041 as our approximation.

Notice that the iteration does not begin to converge rapidly until the fourth iteration, the first calculation performed using none of the initial points.
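The transcript above can be reproduced without polyfit by computing the vertex of the interpolating parabola directly. The following Python sketch (the helper name and loop bound are illustrative assumptions) performs the same iterations with the same halting test:

```python
import math

def quad_vertex(x0, y0, x1, y1, x2, y2):
    # Vertex -b/(2a) of the interpolating quadratic through three points.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return -b / (2 * a)

eps_step = eps_abs = 0.00001
x0, x1, x2 = 4.0, 4.1, 4.2
for iterations in range(1, 50):
    xn = quad_vertex(x0, math.sin(x0), x1, math.sin(x1), x2, math.sin(x2))
    # Halt when both the step and the change in sin are small enough
    if abs(xn - x2) < eps_step and abs(math.sin(xn) - math.sin(x2)) < eps_abs:
        break
    x0, x1, x2 = x1, x2, xn

print(iterations, xn)   # halts after 5 iterations, near 3*pi/2
```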

# Example 3

Use the power transformation to estimate the rate of convergence of quadratic optimization using the points from Example 2, but continuing the iteration process.

The absolute errors (including one extra point) are

```
0.0895063054005023
0.0109088347322741
0.0008956057551250
0.0000037141287894
0.0000000040923513
```

Because we want to say that the errors satisfy hᵢ ≈ a hᵢ₋₁^1.5, we need to fit consecutive pairs of the errors:

(0.0895063054005023, 0.0109088347322741)
(0.0109088347322741, 0.0008956057551250)
(0.0008956057551250, 0.0000037141287894)
(0.0000037141287894, 0.0000000040923513)

Using the power transformation, we find:

```
>> h = [0.0895063054005023, 0.0109088347322741, 0.0008956057551250, 0.0000037141287894]';
>> E = [0.0109088347322741, 0.0008956057551250, 0.0000037141287894, 0.0000000040923513]';
>> V = [log(h), h.^0];
>> c = V \ log( E )
c =

1.496787356515466
-0.939795357615119

>> exp( c( 2 ) )
ans = 0.390707782550363
```

This suggests that the error of quadratic optimization behaves approximately like 0.39 h^1.497. To see that this is reasonable, we plot the log of the h values versus the log of the E values in Figure 3 and note that they are approximately linear.

Figure 3. Plot of the logs of consecutive pairs of errors.
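The same fit can be done with an ordinary least-squares line through the points (log hᵢ, log Eᵢ). The Python sketch below uses the standard closed-form simple-regression formulas, which are equivalent to the Matlab V \ log( E ) solve:

```python
import math

h = [0.0895063054005023, 0.0109088347322741, 0.0008956057551250, 0.0000037141287894]
E = [0.0109088347322741, 0.0008956057551250, 0.0000037141287894, 0.0000000040923513]

# Fit log(E) = p * log(h) + q by least squares
X = [math.log(v) for v in h]
Y = [math.log(v) for v in E]
n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
p = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / \
    sum((x - xbar) ** 2 for x in X)
q = ybar - p * xbar
print(p, math.exp(q))   # approximately 1.4968 and 0.3907
```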

# Question 1

Find a minimum of f(x) = x² starting with the points x₀ = 1, x₁ = 0.9, and x₂ = 0.8.

Answer: the first iteration yields x₃ = 0.

# Question 2

Find a minimum of f(x) = x⁴ starting with the points x₀ = 1, x₁ = 0.9, and x₂ = 0.8 and iterating for four steps.

Answer: 0.5969199179, 0.5019420748, 0.4117297701, 0.3309782326.
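These four iterates can be checked with a short Python sketch (the helper quad_vertex is an illustrative name, not from the original notes):

```python
def quad_vertex(x0, y0, x1, y1, x2, y2):
    # Vertex -b/(2a) of the quadratic through three points.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return -b / (2 * a)

f = lambda x: x ** 4
pts = [1.0, 0.9, 0.8]
iterates = []
for _ in range(4):
    xn = quad_vertex(pts[0], f(pts[0]), pts[1], f(pts[1]), pts[2], f(pts[2]))
    iterates.append(xn)
    pts = [pts[1], pts[2], xn]    # keep the three newest points
print(iterates)   # approximately 0.5969, 0.5019, 0.4117, 0.3310
```

Note how slowly the iterates creep toward the minimum at 0: here f″(0) = 0, so the interpolating parabolas flatten out near the minimum.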

# Question 3

Find a minimum of f(x) = (x − 1)x(x + 1) starting with the points x₀ = 1.5, x₁ = 1, and x₂ = 0.5 and using εstep = εabs = 0.1.

Answer: after two steps, x₄ = 39/68 ≈ 0.5735294.
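The answer can be verified in Python (again, quad_vertex is an illustrative helper):

```python
def quad_vertex(x0, y0, x1, y1, x2, y2):
    # Vertex -b/(2a) of the quadratic through three points.
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    return -b / (2 * a)

f = lambda x: (x - 1) * x * (x + 1)

x3 = quad_vertex(1.5, f(1.5), 1.0, f(1.0), 0.5, f(0.5))   # first step
x4 = quad_vertex(1.0, f(1.0), 0.5, f(0.5), x3, f(x3))     # second step
print(x3, x4)   # 0.625 and 39/68 = 0.5735294...
```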


# Matlab

Assume a file f.m with contents

```
function y = f(x)
    y = cos(x);
```

exists. Then the following function will find an extreme point when called as quadratic( @f, x1, x2, x3, N, eps_step, eps_abs ):

```
function x = quadratic( f, x1, x2, x3, N, eps_step, eps_abs )
    x = [x1 x2 x3]';

    for i = 1:N
        % Fit the interpolating quadratic p(1)*x^2 + p(2)*x + p(3)
        V = [x.^2 x ones( size( x ) )];
        p = V \ f(x);

        % Discard the oldest point and append the vertex -b/(2a)
        x = [x(2) x(3) -p(2)/(2*p(1))]';

        if ( abs( x(2) - x(3) ) < eps_step && abs( f(x(2)) - f(x(3)) ) < eps_abs )
            x = x(3);
            return;
        end
    end

    error( 'the method did not converge' );
```

# Maple

Finding a minimum in Maple is quite straightforward: