The secant method is a technique for finding the root of a
scalar-valued function f(x) of a single variable x when
no information about the derivative exists. It is similar in many ways
to the false-position method, but trades the possibility of non-convergence
for faster convergence.
If we do not have the derivative of the function, we must resort to
interpolation when trying to find a root.
We will assume that f(x) is a real-valued function of a single variable x.
Suppose we have two approximations xa and xb to
a root r of f(x),
that is, f(r) = 0. Figure 1 shows a function with a root and
an approximation to that root.
Figure 1. A function f(x), a root r, and an approximation to that root xa.
Because f(x) is continuous, and in this case, differentiable, we could
approximate the function by interpolating the two points
(xa, f(xa)) and (xb, f(xb)).
This interpolating line is shown in Figure 2.
Figure 2. The secant line interpolating the points (xa, f(xa)) and (xb, f(xb)).
The formula for this line may be deduced quite easily: using Lagrange, the
interpolating polynomial is:

p(x) = f(xa) (x - xb)/(xa - xb) + f(xb) (x - xa)/(xb - xa)

Finding the root of this line is simply a matter of equating the above
line to zero and solving for x:

f(xa) (x - xb)/(xa - xb) + f(xb) (x - xa)/(xb - xa) = 0

Multiplying through by xb - xa, we get:

-f(xa) (x - xb) + f(xb) (x - xa) = 0

and solving for x, we get:

x = (f(xb) xa - f(xa) xb)/(f(xb) - f(xa)) = xb - f(xb) (xb - xa)/(f(xb) - f(xa))
Thus, this value of x should be a better approximation of the root than either xa or xb.
You should recognize this as the same formula we used for the false-position method. Note, however,
that an approximation of the derivative at xb is given by (f(xb) - f(xa))/(xb - xa), and
therefore, with a little algebra, the above equation can be written as:

x = xb - f(xb) / ((f(xb) - f(xa))/(xb - xa))

which is similar to Newton's method, but uses this approximation of the derivative rather than the exact value
of the derivative.
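To see this equivalence concretely, both forms of the step can be written in a few lines of Python (the function names and the test function are ours, chosen for illustration):

```python
def secant_step(f, xa, xb):
    """Root of the line interpolating (xa, f(xa)) and (xb, f(xb))."""
    return xb - f(xb) * (xb - xa) / (f(xb) - f(xa))

def newton_style_step(f, xa, xb):
    """The same step written as Newton's method, with the derivative
    at xb replaced by the secant slope (f(xb) - f(xa))/(xb - xa)."""
    slope = (f(xb) - f(xa)) / (xb - xa)
    return xb - f(xb) / slope

f = lambda x: x * x - 2.0              # simple test function; root at sqrt(2)
print(secant_step(f, 1.0, 2.0))        # 1.3333... (both forms agree)
print(newton_style_step(f, 1.0, 2.0))
```

With xa = 1 and xb = 2, both calls return the same value, 4/3, which is indeed a better approximation of the root sqrt(2) than either starting point.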
Given a function of one variable, f(x), find a value
r (called a root) such that f(r) = 0.
We will assume that the function f(x) is continuous. For the technique
to converge, it is useful that the function also be differentiable, but this is
only necessary for the error analysis.
We will use sampling, interpolation, and iteration.
We have two initial approximations x0 and x1 of the root.
It would be useful if |f(x0)| > |f(x1)|, that is,
the second point is a better approximation of the root.
Given the approximations xn − 1 and xn, the next approximation
xn + 1 is defined to be:

xn + 1 = xn - f(xn) (xn - xn − 1)/(f(xn) - f(xn − 1))
There are three conditions
which may cause the iteration process to halt (these are the same as the halting conditions for Newton's method):
- We halt if both of the following conditions are met:
- The step between successive iterates is sufficiently small,
|xn + 1 - xn| < εstep, and
- The function evaluated at the point xn + 1 is
sufficiently small, |f(xn + 1)| < εabs.
- If the denominator f(xn) - f(xn − 1) is zero, the
iteration process fails (division by zero) and we halt.
- If we have iterated some maximum number of times, say N, and
have not met Condition 1, we halt and indicate that a solution was not found.
If we halt due to Condition 1, we state that xn + 1 is our
approximation to the root.
If we halt due to either Condition 2 or 3, we may either choose different
initial approximations x0 and x1, or state that a solution may not exist.
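Putting the iteration and the three halting conditions together, a sketch in Python might look like the following (eps_step, eps_abs, and N stand in for εstep, εabs, and the maximum number of iterations; the names are ours):

```python
def secant(f, x0, x1, eps_step=1e-10, eps_abs=1e-10, N=100):
    """Secant method with the three halting conditions described above."""
    for _ in range(N):
        denom = f(x1) - f(x0)
        if denom == 0.0:
            # Condition 2: division by zero -- the iteration fails.
            raise ZeroDivisionError("f(x1) - f(x0) is zero")
        x2 = x1 - f(x1) * (x1 - x0) / denom
        # Condition 1: small step and small function value -- success.
        if abs(x2 - x1) < eps_step and abs(f(x2)) < eps_abs:
            return x2
        x0, x1 = x1, x2
    # Condition 3: too many iterations -- no solution found.
    raise RuntimeError("no solution found after {} iterations".format(N))

print(secant(lambda x: x * x - 2.0, 1.0, 2.0))  # approximately 1.41421356...
```

With f(x) = x^2 - 2 and initial approximations 1 and 2, the iteration returns an approximation to sqrt(2) after a handful of steps.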
The error analysis for the secant method is much more complex
than that for the false-position method, because both end points
are continuously being updated, and therefore, the iterations
will not fall into the O(h) trap of the false-position method.
It can be shown (but beyond the scope of this course) that
the rate of convergence at a simple root is better than linear, but poorer
than the quadratic convergence of Newton's method (Pizer, 1975);
the order of convergence is given by the golden ratio, (1 + √5)/2 ≈ 1.618.
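This order can be observed numerically: estimating p ≈ log(en + 1/en)/log(en/en − 1) from successive errors en = |xn - r| gives values that oscillate around 1.618 before rounding error takes over. A rough sketch (the test function and starting points are our own choice):

```python
import math

f = lambda x: x * x - 2.0
r = math.sqrt(2.0)                    # the exact simple root

xs = [2.0, 1.0]                       # two initial approximations
for _ in range(5):                    # a few secant steps
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

errs = [abs(x - r) for x in xs]       # errors e_n = |x_n - r|

# Estimate the order of convergence from consecutive error ratios.
orders = [math.log(e2 / e1) / math.log(e1 / e0)
          for e0, e1, e2 in zip(errs, errs[1:], errs[2:])]
print(orders)                         # later estimates approach the golden ratio
```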