Topic 10.4: Secant Method (Error Analysis)


The error analysis for the secant method is much more complex than that for the false-position method because both end points are continuously being updated, and therefore the iterations will not fall into the O(h) trap of the false-position method. It can be shown (though this is beyond the scope of this course) that the rate of convergence at a simple root is better than linear but poorer than the quadratic convergence of Newton's method (Pizer, 1975), and is given by the golden ratio

    φ = (1 + √5)/2 ≈ 1.6180339887

and thus it can be shown that the rate of convergence is O(h^1.618).
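
To make the update concrete, the following Python function is a minimal sketch of the secant iteration (the name secant and the parameters tol and max_iter are our own illustrative choices, not part of these notes):

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        # Iterate x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)) until two
        # successive approximations agree to within tol.
        for _ in range(max_iter):
            f0, f1 = f(x0), f(x1)
            if f1 == f0:                # secant line is horizontal; stop
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        return x1

    # A root of p(x) = x^2 + x starting from x0 = 1, x1 = 0.5:
    print(secant(lambda x: x*x + x, 1.0, 0.5))   # prints a value near 0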

Example

To demonstrate this rate of convergence, we will take the quadratic polynomial with a simple root at 0: p(x) = x^2 + x. Starting with the points x_0 = 1 and x_1 = 0.5, we get the sequence of approximations

1.0
5.0e-1
2.0e-1
5.8824e-2
9.3458e-3
5.1467e-4
4.7630e-6
2.4501e-9
1.1670e-14
2.8592e-23
3.3367e-37
9.5402e-60
3.1832e-96
3.0369e-155
9.6672e-251

These are shown in Figure 1.

Figure 1. The function and sequence of approximations.
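
Because p(x), x_0, and x_1 are all rational, every secant iterate is rational, so the sequence above can be regenerated without any rounding error using exact rational arithmetic. The following Python sketch (our own reconstruction, not code from these notes) prints the fifteen approximations, displayed in full double precision rather than the five digits shown above:

    from fractions import Fraction

    def p(x):
        return x * x + x                  # p(x) = x^2 + x, simple root at 0

    x0, x1 = Fraction(1), Fraction(1, 2)  # x_0 = 1 and x_1 = 0.5, exactly
    print(float(x0))
    print(float(x1))
    for _ in range(13):                   # thirteen further approximations
        x2 = x1 - p(x1) * (x1 - x0) / (p(x1) - p(x0))
        print(float(x2))
        x0, x1 = x1, x2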

In this example, because the root is at the origin, each approximation equals the error of that approximation, i.e., h_k = x_k. Figure 2 shows a plot of the error h_k versus the error h_{k+1} of the next approximation. The decrease in error is neither linear nor quadratic.

Figure 2. The plot of the errors h_k versus h_{k+1}.

Because the claim is that the error decreases according to O(h^p) for some real p, we can apply the power transformation: if h_{k+1} ≈ C h_k^p, then taking logarithms gives ln(h_{k+1}) ≈ ln(C) + p ln(h_k), so the points (ln(h_k), ln(h_{k+1})) should lie near a straight line whose slope is p. The best-fitting least-squares line is shown in Figure 3.

Figure 3. The plot of the errors ln(h_k) versus ln(h_{k+1}) and the least-squares linear polynomial.

The best-fitting least-squares line is ln(h_{k+1}) ≈ -0.15403 + 1.61734 ln(h_k), and, from the previous section, the linear coefficient is the power of the function shown in Figure 2. Note that 1.61734 is already close to 1.61803398875; if you ignore the first few terms and find the best-fitting least-squares line through the last five errors, you get the even better approximation -0.00009577622657 + 1.618033658 ln(h_k). While this example is not a proof, it lends credibility to the original claim.
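
The slope of this least-squares line can be checked directly from the tabulated errors. The following sketch uses numpy.polyfit, our own choice of tool, to fit a line through the points (ln(h_k), ln(h_{k+1})); the computed slopes should be close to the values quoted above, up to the five-digit rounding of the tabulated errors:

    import numpy as np

    # The errors h_k = x_k tabulated above.
    h = np.array([1.0, 5.0e-1, 2.0e-1, 5.8824e-2, 9.3458e-3, 5.1467e-4,
                  4.7630e-6, 2.4501e-9, 1.1670e-14, 2.8592e-23, 3.3367e-37,
                  9.5402e-60, 3.1832e-96, 3.0369e-155, 9.6672e-251])

    x = np.log(h[:-1])                    # ln(h_k)
    y = np.log(h[1:])                     # ln(h_{k+1})

    slope, intercept = np.polyfit(x, y, 1)     # line through all fourteen pairs
    print(slope, intercept)                    # slope near 1.617

    slope5, _ = np.polyfit(x[-5:], y[-5:], 1)  # the last five pairs only
    print(slope5)                              # slope near 1.6180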

Copyright ©2005 by Douglas Wilhelm Harder. All rights reserved.