
Laboratory 5

This laboratory will investigate two topics:

  1. The double-precision floating-point format (MATH 211), and
  2. Newton Interpolation and Horner's Rule (MATH 215).

5.1 The Double-Precision Floating-Point Format

The IEEE Standard for Floating-point Arithmetic (IEEE 754-2008) is a ubiquitous standard for numerical computations. The first version, IEEE 754-1985, was the collaborative work of numerous individuals from many different companies with many potentially conflicting interests and loyalties; however, led by Dr. William Kahan, the committee was able to produce a remarkable document. In a 1998 interview, Dr. Kahan spoke about some of the issues which had to be dealt with in creating the original standard.

In the MATH 211 Laboratory 1, you looked at a decimal floating-point representation in Figure 1.


Figure 1. The six-decimal-digit floating-point representation.

In this Laboratory, we will look at the double-precision floating-point representation which is implemented as a double in the C, C#, C++, and Java programming languages.

In this laboratory, you may need the following table of hexadecimal-to-binary conversions:

Hexadecimal Digit    Binary Equivalent
0                    0000
1                    0001
2                    0010
3                    0011
4                    0100
5                    0101
6                    0110
7                    0111
8                    1000
9                    1001
a                    1010
b                    1011
c                    1100
d                    1101
e                    1110
f                    1111

5.1.1 Required Background

Students must be aware of binary and hexadecimal numbers taught in ECE 124 and the decimal floating-point format taught in the MATH 211 Laboratory 1.

5.1.2 The Double-precision Floating-point Representation

The double-precision floating-point representation, or double, derives its name from the fact that each value occupies 8 bytes, twice the 4 bytes used by the single-precision floating-point representation, or float. It has long been recognized that, for most computations, the single-precision representation does not store sufficient precision. Unless you are sure that a single-precision floating-point number is appropriate, use double.

Like the representation given in the MATH 211 Laboratory 1, the 8 bytes = 8×8 bits = 64 bits are divided into a single sign bit, 11 bits for the exponent, and the remaining 52 bits for the mantissa. This is shown in Figure 2.


Figure 2. The double-precision floating-point format.

Any 64-bit number may be written as a 16-hexadecimal digit number. Therefore, when a double is written as a hexadecimal number, the first three hexadecimal digits represent the sign bit and the exponent. Figures 3 and 4 represent the interpretation of these as numbers.


Figure 3. The double-precision floating-point representation using 64 bits.


Figure 4. The double-precision floating-point representation using 16 hexadecimal digits.

Matlab will display the underlying representation of a double by using format hex, for example:

format long
1
       1
-2
      -2
pi
       3.14159265358979
format hex
1
      3ff0000000000000
-2
      c000000000000000
pi
      400921fb54442d18
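The same bit patterns may be inspected outside of Matlab. The following Python sketch (the helper name double_to_hex is ours) packs a double into its 8 bytes and displays them as 16 hexadecimal digits, mirroring format hex:

```python
import math
import struct

def double_to_hex(value):
    # Pack the double into 8 big-endian bytes and display them as
    # 16 hexadecimal digits, just as Matlab's 'format hex' does.
    return struct.pack('>d', value).hex()

print(double_to_hex(1.0))      # 3ff0000000000000
print(double_to_hex(-2.0))     # c000000000000000
print(double_to_hex(math.pi))  # 400921fb54442d18
```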

5.1.2.1 Sign Bit

To begin, the first bit is the sign bit: 0 represents positive numbers and 1 represents negative numbers. Therefore, in the hexadecimal representation, if the first three hexadecimal digits are between 800 and fff, the number is negative.

Unless you are absolutely sure that the precision of a double is not needed, do not use float.

5.1.2.2 Exponent

The next 11 bits represent the exponent. Thus, the exponent can be a value between tex:$$00000000000_2$$ and tex:$$11111111111_2$$, or tex:$$0_{10}$$ and tex:$$2047_{10}$$. As with the decimal format, the double format uses a bias: based on the first three hexadecimal digits of 1, i.e., 3ff, it is apparent that the bias is tex:$$01111111111_2$$ or tex:$$1023_{10}$$.

Two exponents are reserved for special representations: tex:$$00000000000_2 = 0_{10}$$ and tex:$$11111111111_2 = 2047_{10}$$. Thus, the exponent represents numbers in the range tex:$$2^{1 - 1023} = 2^{-1022}$$ to tex:$$2^{2046 - 1023} = 2^{1023}$$, which is from approximately tex:$$2.2251 \times 10^{-308}$$ to tex:$$8.9885 \times 10^{307}$$.

5.1.2.3 The Mantissa

Recall that the decimal floating-point format requires that the first digit of the mantissa be non-zero; this was required to ensure a unique representation. With binary numbers, the only non-zero bit is 1, and therefore it is unnecessary to store this leading bit. Therefore, the mantissas of the three numbers 1, -2, and π are tex:$$1.0000000000000_{16}$$, tex:$$1.0000000000000_{16}$$, and tex:$$1.921fb54442d18_{16}$$, respectively. In binary, the last mantissa is tex:$$1.1001001000011111101101010100010001000010110100011000_2$$.
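To see the three fields directly, one can mask out the sign, exponent, and mantissa bits and reassemble the value. A Python sketch (the variable names are ours), assuming the IEEE 754 layout described above:

```python
import math
import struct

# Reinterpret the 64 bits of pi as an unsigned integer.
bits = struct.unpack('>Q', struct.pack('>d', math.pi))[0]

sign     = bits >> 63                # 1 bit
exponent = (bits >> 52) & 0x7ff      # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)    # 52 bits, with an implied leading 1

# Reassemble: (-1)^sign * 1.mantissa * 2^(exponent - 1023)
value = (-1)**sign * (1 + mantissa / 2**52) * 2**(exponent - 1023)
print(value == math.pi)  # True
```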

5.1.2.4 Zero and Denormalized Numbers

Previously, it was commented that the exponent tex:$$00000000000_2$$ is reserved for special representations. This occurs when the first three hexadecimal digits are tex:$$000_{16}$$ or tex:$$800_{16}$$. In this case, if the mantissa is tex:$${\rm XXXXXXXXXXXXX}_{16}$$, it represents the number tex:$$0.{\rm XXXXXXXXXXXXX}_{16} \times 2^{-1022}$$.

For example, the smallest normalized number occurs when the exponent field is 001, i.e., tex:$$2^{1 - 1023} = 2^{-1022}$$, as is demonstrated here:

format hex
2^(-1022)
  ans =
     0010000000000000

For any number smaller than this, we use denormalized numbers:

2^-1023
  ans =
     0008000000000000

ans/2
  ans =
     0004000000000000

The first number is tex:$$0.1_2 \times 2^{-1022} = 1.0_2 \times 2^{-1023}$$ while the second number is tex:$$0.01_2 \times 2^{-1022} = 1.0_2 \times 2^{-1024}$$.

For example, to interpret the number 0000580000000000, we note the first three hexadecimal digits are 000 and consequently, this is a denormalized number. The mantissa is 0580000000000 which represents the binary number 000001011000000...0. Interpreting this as a floating point number, we have tex:$$0.000001011_2 \times 2^{-1022}$$. You should recall the following values:

Binary                   Decimal
tex:$$100$$              tex:$$4$$
tex:$$10$$               tex:$$2$$
tex:$$1$$                tex:$$1$$
tex:$$0.1$$              tex:$$\frac{1}{2^1} = 0.5$$
tex:$$0.01$$             tex:$$\frac{1}{2^2} = 0.25$$
tex:$$0.001$$            tex:$$\frac{1}{2^3} = 0.125$$
tex:$$0.0001$$           tex:$$\frac{1}{2^4} = 0.0625$$
tex:$$0.00001$$          tex:$$\frac{1}{2^5} = 0.03125$$
tex:$$0.000001$$         tex:$$\frac{1}{2^6} = 0.015625$$
tex:$$0.0000001$$        tex:$$\frac{1}{2^7} = 0.0078125$$
tex:$$0.00000001$$       tex:$$\frac{1}{2^8} = 0.00390625$$
tex:$$0.000000001$$      tex:$$\frac{1}{2^9} = 0.001953125$$

In binary, the mantissa tex:$$0.000001011_2$$ represents tex:$$0.015625 + 0.00390625 + 0.001953125 = 0.021484375$$. If we enter this into Matlab, we get the desired result:

0.021484375 * 2^-1022
  ans =
     0000580000000000
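The same decoding can be checked outside of Matlab; a Python sketch, reinterpreting the 16 hexadecimal digits as a double:

```python
import struct

# Interpret the hexadecimal representation 0000580000000000 as a double.
value = struct.unpack('>d', bytes.fromhex('0000580000000000'))[0]

# The leading 000 marks a denormalized number, so the value is
# 0.000001011_2 x 2^(-1022) = 0.021484375 x 2^(-1022).
print(value == 0.021484375 * 2**-1022)  # True
```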

The smallest possible number which can be represented is 0000000000000001 which is tex:$$0.000\cdots0001 \times 2^{-1022}$$ with 51 zeros following the radix point. That is, it is the number tex:$$2^{-52} \times 2^{-1022} = 2^{-1074}$$. We can verify this in Matlab:

2^-1074
  ans =
     0000000000000001

ans / 2
  ans =
     0000000000000000

The last answer is, of course, the representation of 0.
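This smallest denormalized number can also be verified in Python, which uses the same IEEE 754 doubles:

```python
import struct

# 0000000000000001 is the smallest positive denormalized double, 2^(-1074).
tiny = struct.unpack('>d', bytes.fromhex('0000000000000001'))[0]
print(tiny == 2**-1074)  # True

# Halving it underflows: the result rounds to zero.
print(tiny / 2 == 0.0)   # True
```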

5.1.2.5 Infinity and Not-a-Number

The other reserved exponent is the largest: tex:$$11111111111_2$$, i.e., the first three hexadecimal digits are 7ff or fff. If the number is plus or minus infinity, the mantissa is zero:

format hex
Inf
  ans =
     7ff0000000000000

-Inf
  ans =
     fff0000000000000

This does not represent the mathematical infinity; rather, it represents any floating-point value whose magnitude would be tex:$$2^{1024}$$ or larger:

format
2^1023
  ans =
    8.9885e+307

2^1024
  ans =
     Inf

The properties are as expected:

Inf + 10
  ans =
     Inf

1/0
  ans =
     Inf

-3 * Inf
  ans =
    -Inf

1/Inf
  ans =
       0

There are some mathematical operations which are not defined. For example, what is Inf - Inf or 0/0 or even 0 * Inf? It is not reasonable to represent any of these as tex:$$\pm \infty$$, and therefore there is one final number: NaN or not-a-number. This is what you would have referred to in high-school as undefined:

0/0
  ans =
     NaN

Inf - Inf
  ans =
     NaN

Inf * 0
  ans =
     NaN

format hex
NaN
  ans =
     fff8000000000000

As you can see in the last example, the representation is close to that of -Inf, except that the mantissa is non-zero: it has a leading 1 bit followed by zeros.

NaN has one particularly interesting property according to the IEEE 754 standard: when compared to itself, it must return false:

3 == 3
  ans =
     1

Inf == Inf
  ans =
     1
  
pi == pi
  ans =
     1

NaN == NaN
  ans =
     0

Thus, to detect NaN, one must make an explicit function call:

isnan( [0 1 -1 Inf -Inf NaN pi] )
  ans =
     0   0   0   0   0   1   0
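The same IEEE 754 behaviour carries over to other languages. A Python sketch (note that Python raises ZeroDivisionError for division by zero rather than returning Inf, so math.inf is used directly here):

```python
import math

nan = float('nan')

# NaN never compares equal to anything, including itself.
print(nan == nan)       # False
print(math.isnan(nan))  # True

# Applying the test across a list mirrors isnan() on a Matlab vector.
values = [0, 1, -1, math.inf, -math.inf, nan, math.pi]
print([math.isnan(v) for v in values])
# [False, False, False, False, False, True, False]
```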

5.1.2.6 Comparisons

A double-precision floating-point number is positive if the first hexadecimal digit is at most 7 and negative otherwise.

Two double precision floating-point numbers are equal if and only if all the bits are equal.

Because of denormalized numbers, two doubles x and y are equal if and only if (x - y) == 0 evaluates to true (1). For example, both tex:$$\sqrt{2}$$ and tex:$$\sqrt{8}/2$$ represent the same number and evaluate to the same value:

format
(sqrt(2) - sqrt(8)/2) == 0
  ans =
       1
format hex
sqrt(2)
  ans =
     3ff6a09e667f3bcd

sqrt(8)/2
  ans =
     3ff6a09e667f3bcd

On the other hand, while tex:$$\sqrt{18}/3$$ should also equal tex:$$\sqrt{2}$$, due to numerical error, they are not precisely the same:

format
(sqrt(2) - sqrt(18)/3) == 0
  ans =
       0
format hex
sqrt(2)
  ans =
     3ff6a09e667f3bcd

sqrt(18)/3
  ans =
     3ff6a09e667f3bcc
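These bit patterns are reproducible in any language with IEEE 754 doubles and correctly rounded arithmetic; a Python sketch (the helper name double_to_hex is ours):

```python
import math
import struct

def double_to_hex(value):
    # The 16-hexadecimal-digit representation, as with Matlab's 'format hex'.
    return struct.pack('>d', value).hex()

# sqrt(8)/2 involves only a correctly rounded square root and an exact
# division by a power of two, so it is bit-for-bit equal to sqrt(2).
print(math.sqrt(8)/2 == math.sqrt(2))   # True
print(double_to_hex(math.sqrt(2)))      # 3ff6a09e667f3bcd

# Division by 3 is not exact, so sqrt(18)/3 differs in the last bit.
print(math.sqrt(18)/3 == math.sqrt(2))  # False
```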

In addition, two positive numbers may be compared by simply comparing their bits. For example, given the hexadecimal representations of ten numbers:

   52832e8d36bf5eed
   15b7e09834b53d11
   2bd1f986a855506b
   17033585aedd9016
   1b9b8cfca975ca98
   5d191fca8e5f7883
   5180e4696a543549
   2f71833e25f42212
   6887272068f2910d
   15f45c2c1f3ad0d0

It is immediately clear that they are, from smallest to largest:

   15b7e09834b53d11
   15f45c2c1f3ad0d0
   17033585aedd9016
   1b9b8cfca975ca98
   2bd1f986a855506b
   2f71833e25f42212
   5180e4696a543549
   52832e8d36bf5eed
   5d191fca8e5f7883
   6887272068f2910d

When we look at each of these, we see that this is true:

Representation      Value Displayed by Matlab    Approximate Value
52832e8d36bf5eed    3.052664278175422e+089       tex:$$3.052 \times 10^{89}$$
15b7e09834b53d11    4.759798560884662e-204       tex:$$4.760 \times 10^{-204}$$
2bd1f986a855506b    1.314871551396497e-097       tex:$$1.315 \times 10^{-97}$$
17033585aedd9016    8.030437775543794e-198       tex:$$8.030 \times 10^{-198}$$
1b9b8cfca975ca98    1.087816536642441e-175       tex:$$1.088 \times 10^{-175}$$
5d191fca8e5f7883    2.991919959128576e+140       tex:$$2.992 \times 10^{140}$$
5180e4696a543549    4.102002070144849e+084       tex:$$4.102 \times 10^{84}$$
2f71833e25f42212    3.692440052604274e-080       tex:$$3.692 \times 10^{-80}$$
6887272068f2910d    3.380270393986167e+195       tex:$$3.380 \times 10^{195}$$
15f45c2c1f3ad0d0    6.493842243300212e-203       tex:$$6.494 \times 10^{-203}$$

These are operations which may be performed very quickly in hardware.
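This claim is easy to check: sorting the hexadecimal strings of positive doubles gives exactly the same order as sorting their values. A Python sketch using the ten representations above:

```python
import struct

hex_reps = [
    '52832e8d36bf5eed', '15b7e09834b53d11', '2bd1f986a855506b',
    '17033585aedd9016', '1b9b8cfca975ca98', '5d191fca8e5f7883',
    '5180e4696a543549', '2f71833e25f42212', '6887272068f2910d',
    '15f45c2c1f3ad0d0',
]

def hex_to_double(h):
    # Reinterpret 16 hexadecimal digits as the double they represent.
    return struct.unpack('>d', bytes.fromhex(h))[0]

# For positive doubles, lexicographic order of the hexadecimal strings
# agrees with numerical order of the values they represent.
by_bits  = [hex_to_double(h) for h in sorted(hex_reps)]
by_value = sorted(hex_to_double(h) for h in hex_reps)
print(by_bits == by_value)  # True
```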

5.1.2.7 Boolean Operations in Matlab

As in C and C++, 0 is considered to be equivalent to false and any other number is considered to be true. Operators such as ==, <=, or > do not return Boolean values; rather, they return 0 or 1.

You may ask: I can tell whether or not any one entry within a matrix is NaN, but how can I tell if no entries are NaN?

This introduces two column-based matrix functions (like sum, max, etc.): any and all.

The any function returns 1 for a column if any of the entries of the column are non-zero, while the all function returns 1 for a column if all entries are non-zero. For example:

M = [1 0 0 0; 1 1 0 0; 1 1 1 0]
  M =
       1     0     0     0
       1     1     0     0
       1     1     1     0

any( M )
  ans =
       1     1     1     0

all( M )
  ans =
       1     0     0     0

Thus, as a quick test to determine whether no entries of a matrix are NaN, use:

~any( any( isnan( M ) ) )

The ~ is the Matlab equivalent of not. It is the same as the C/C++/Java/C# !.

The call to isnan produces a matrix of zeros and ones. The first call to any returns a 1 for each column which contains a NaN; if any column contained a NaN, the second call to any would therefore also return 1. The negation of 1 is 0.
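A Python sketch of the same column-wise behaviour, using the built-in any and all on each column (no NumPy assumed):

```python
import math

M = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [1, 1, 1, 0]]

# zip(*M) iterates over the columns of M.
any_cols = [int(any(col)) for col in zip(*M)]
all_cols = [int(all(col)) for col in zip(*M)]

print(any_cols)  # [1, 1, 1, 0]
print(all_cols)  # [1, 0, 0, 0]

# A matrix is NaN-free when no column contains a NaN entry.
has_nan = any(any(math.isnan(x) for x in col) for col in zip(*M))
print(not has_nan)  # True
```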

5.1.2.8 While Loops

Boolean tests can be used in if statements, but they can also be used in while loops:

format long
s1 = 0;
s2 = 1;
i = 2;
while (s1 ~= s2 )
      s1 = s2;
      s2 = s2 + 1/i^2;
      i = i + 1;
end
(The loop executes for a while...)
s2
  s2 =
     1.644934057834575
i
  i =
      94906267

pi^2/6
  ans =
     1.644934066848226

This is not the way to test for convergence: we generally do not require this much accuracy and in some cases, it may not stop even if it is close enough.
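A more practical stopping rule uses a tolerance on the size of the next term. A Python sketch of the same series tex:$$\sum 1/i^2$$, with an arbitrarily chosen tolerance of 1e-12:

```python
import math

# Sum 1/i^2 until the next term drops below a chosen tolerance,
# rather than waiting for the partial sums to stop changing.
tolerance = 1.0e-12
s = 0.0
i = 1
while 1.0 / i**2 >= tolerance:
    s += 1.0 / i**2
    i += 1

# The series converges to pi^2/6; with this tolerance the sum is
# accurate to roughly 1/i, i.e. about six decimal digits.
print(abs(s - math.pi**2 / 6) < 1e-5)  # True
```

This stops after about one million iterations instead of the roughly 95 million required for the partial sums to agree to full double precision.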

5.2 Newton Polynomials and Horner's Rule

In this section, we will see a new means of finding and evaluating interpolating polynomials.

5.2.1 Subtractive Cancellation and Polynomials

In the previous laboratory, you saw how to calculate interpolating polynomials. For example, the polynomial which interpolates the three points tex:$$(1,17.49), (2,35.02), (4,70.14)$$ is tex:$$p(x) = 0.01 x^2 + 17.5 x - 0.02$$.

There are problems, however, in evaluating such polynomials at an arbitrary point tex:$$x$$: subtractive cancellation cannot always be avoided. For example, tex:$$17.5x - 0.02$$ is approximately zero for tex:$$x = 0.001142857$$. Evaluating the polynomial at this point yields the value 1.056122122394809e-8, while the correct answer is exactly 1.056122122449e-8.

In a higher degree polynomial, it is almost impossible to determine the order in which terms should be added and consequently, it is more appropriate to avoid such a situation altogether.

In order to avoid these problems, we will introduce what at first appears to be a non-answer: Newton polynomials. We will then look at a very efficient means of evaluating polynomials: Horner's rule.

5.2.2 Newton Polynomials

Given a single point tex:$$(x_1, y_1)$$, it is easy to find the interpolating polynomial of degree zero:

tex:$$p_0(x) = y_1$$.

This is shown in Figure 5.


Figure 5. The constant function passing through tex:$$(x_1, y_1)$$.

This seems anti-climactic, but it is the first step to finding a Newton polynomial. The next question is: given a second point tex:$$(x_2, y_2)$$, instead of finding a completely new polynomial, can we find a corrector function tex:$$\Delta p_1(x)$$ which we can add to tex:$$p_0(x)$$ such that tex:$$p_1(x) = p_0(x) + \Delta p_1(x)$$ passes through both points? This is demonstrated in Figure 6 where we are trying to find the purple function which we can add to the red line.

The first step is determining that tex:$$\Delta p_1(x)$$ must be zero at tex:$$x_1$$ and therefore tex:$$\Delta p_1(x) = c_1 (x - x_1) $$.


Figure 6. Finding the correcting term tex:$$\Delta p_1(x)$$ which, when added to tex:$$p_0(x)$$, causes the result to pass through tex:$$(x_2, y_2)$$.

The next step is to find the coefficient tex:$$c_1$$ so that it forces tex:$$p_0(x_2) + c_1(x_2 - x_1) = y_1 + c_1(x_2 - x_1) = y_2$$.

We can solve this for tex:$$c_1$$ to get that

tex:$$c_1 = \frac{y_2 - y_1}{x_2 - x_1}$$.

Finally, in order to conclude this example, we will add a third point tex:$$(x_3, y_3)$$ and thus we must find the next quadratic function tex:$$\Delta p_2(x)$$ which causes the resulting sum tex:$$p_2(x) = p_1(x) + \Delta p_2(x)$$ to pass through the third point. This is shown in Figure 7 where we must now find the orange function so that when it is added to the blue function, the resulting sum (magenta) passes through all three points.


Figure 7. Finding the correcting term tex:$$\Delta p_2(x)$$ which, when added to tex:$$p_1(x)$$, causes the result to pass through tex:$$(x_3, y_3)$$.

To find this second polynomial tex:$$\Delta p_2(x)$$, we begin by noting it must be zero at the two points tex:$$x_1$$ and tex:$$x_2$$. This requires that it must be of the form tex:$$\Delta p_2(x) = c_2 (x - x_1)(x - x_2)$$.

We also require that

tex:$$p_1(x_3) + \Delta p_2(x_3) = y_3$$,

or, in other words,

tex:$$y_1 + \frac{y_2 - y_1}{x_2 - x_1}(x_3 - x_1) + c_2(x_3 - x_1)(x_3 - x_2) = y_3$$.

In order to find an appropriate solution, we will use the trick of adding zero; that is, we will replace tex:$$(x_3 - x_1)$$ in the second term with tex:$$(x_3 - x_2 + x_2 - x_1)$$:

tex:$$y_1 + \frac{y_2 - y_1}{x_2 - x_1}(x_3 - x_2 + x_2 - x_1) + c_2(x_3 - x_1)(x_3 - x_2) = y_3$$.

We can now distribute the multiplier:

tex:$$y_1 + \frac{y_2 - y_1}{x_2 - x_1}(x_3 - x_2) + \frac{y_2 - y_1}{x_2 - x_1}(x_2 - x_1) + c_2(x_3 - x_1)(x_3 - x_2) = y_3$$.

The third term now simplifies to

tex:$$y_1 + \frac{y_2 - y_1}{x_2 - x_1}(x_3 - x_2) + y_2 - y_1 + c_2(x_3 - x_1)(x_3 - x_2) = y_3$$

and we can now cancel the two tex:$$y_1$$ which appear on the left-hand side:

tex:$$\frac{y_2 - y_1}{x_2 - x_1}(x_3 - x_2) + y_2 + c_2(x_3 - x_1)(x_3 - x_2) = y_3$$.

With the next step, we subtract tex:$$y_2$$ from each side and then divide through by tex:$$x_3 - x_2$$ to get

tex:$$\frac{y_2 - y_1}{x_2 - x_1} + c_2(x_3 - x_1) = \frac{y_3 - y_2}{x_3 - x_2}$$.

Moving the first term on the left-hand side to the right-hand side and dividing through by tex:$$x_3 - x_1$$ yields

tex:$$c_2 = \frac{\frac{y_3 - y_2}{x_3 - x_2} - \frac{y_2 - y_1}{x_2 - x_1}}{x_3 - x_1}$$.

What you see here is a systematic mechanism for finding the coefficients. Consider the following approach: define

tex:$$y_{2,1} = \frac{y_2 - y_1}{x_2 - x_1}$$

and

tex:$$y_{3,2} = \frac{y_3 - y_2}{x_3 - x_2}$$.

Next, we see that we may write

tex:$$y_{3,2,1} = \frac{y_{3,2} - y_{2,1}}{x_3 - x_1}$$.

To find these in a straightforward manner, consider that we can calculate the values using the format in Figure 8.


Figure 8. Finding three Newton polynomial coefficients.

The interpolating Newton polynomial is now

tex:$$p_2(x) = y_1 + y_{2,1}(x - x_1) + y_{3,2,1}(x - x_1)(x - x_2)$$.

Similarly, we can find the polynomial which interpolates a fourth point by calculating the table in Figure 9 to yield the polynomial

tex:$$p_3(x) = y_1 + y_{2,1}(x - x_1) + y_{3,2,1}(x - x_1)(x - x_2) + y_{4,3,2,1}(x - x_1)(x - x_2)(x - x_3)$$.


Figure 9. Finding the Newton polynomial coefficients for interpolating four points.

You can read more about Newton polynomials at the appropriate chapter of Numerical Analysis for Engineers.

5.2.3 Newton Polynomials in Matlab

We must now calculate these coefficients in Matlab. Consider the vectors

x = [-1 2 3]';
y = [-1.4 1.3 5.4]';

Notice that if we calculate

y1 = (y(2:3) - y(1:2))./(x(2:3) - x(1:2))
  y1 =
      0.9000
      4.1000

You will notice that tex:$$y_{2,1} = \frac{1.3 - (-1.4)}{2 - (-1)} = 0.9$$ and tex:$$y_{3,2} = \frac{5.4 - 1.3}{3 - 2} = 4.1$$. Similarly, we may calculate

y2 = (y1(2) - y1(1))./(x(3) - x(1))
  y2 =
      0.8

which is again tex:$$y_{3,2,1}$$.

We will create a matrix with the entries

tex:$$Y = \left( \matrix{ y_1 & 0 & 0 \cr y_2 & 0 & 0 \cr y_3 & 0 & 0 } \right )$$

and then, using Matlab efficiently, calculate the second column as

tex:$$Y = \left( \matrix{ y_1 & y_{2,1} & 0 \cr y_2 & y_{3,2} & 0 \cr y_3 & 0 & 0 } \right )$$

and finally we will calculate the last entry

tex:$$Y = \left( \matrix{ y_1 & y_{2,1} & y_{3,2,1} \cr y_2 & y_{3,2} & 0 \cr y_3 & 0 & 0 } \right )$$

We may read the coefficients of the simplest Newton polynomial from the entries in the first row.

The algorithm itself may be used for an arbitrary number of points. For example, with five points, we would begin with

tex:$$Y = \left( \matrix{ y_1 & 0 & 0 & 0 & 0 \cr y_2 & 0 & 0 & 0 & 0 \cr y_3 & 0 & 0 & 0 & 0 \cr y_4 & 0 & 0 & 0 & 0 \cr y_5 & 0 & 0 & 0 & 0 } \right )$$

and then calculate, in order,

tex:$$\left( \matrix{ y_1 & y_{2,1} & 0 & 0 & 0 \cr y_2 & y_{3,2} & 0 & 0 & 0 \cr y_3 & y_{4,3} & 0 & 0 & 0 \cr y_4 & y_{5,4} & 0 & 0 & 0 \cr y_5 & 0 & 0 & 0 & 0 } \right )$$, tex:$$\left( \matrix{ y_1 & y_{2,1} & y_{3,2,1} & 0 & 0 \cr y_2 & y_{3,2} & y_{4,3,2} & 0 & 0 \cr y_3 & y_{4,3} & y_{5,4,3} & 0 & 0 \cr y_4 & y_{5,4} & 0 & 0 & 0 \cr y_5 & 0 & 0 & 0 & 0 } \right )$$, tex:$$\left( \matrix{ y_1 & y_{2,1} & y_{3,2,1} & y_{4,3,2,1} & 0 \cr y_2 & y_{3,2} & y_{4,3,2} & y_{5,4,3,2} & 0 \cr y_3 & y_{4,3} & y_{5,4,3} & 0 & 0 \cr y_4 & y_{5,4} & 0 & 0 & 0 \cr y_5 & 0 & 0 & 0 & 0 } \right )$$,

and finally

tex:$$\left( \matrix{ y_1 & y_{2,1} & y_{3,2,1} & y_{4,3,2,1} & y_{5,4,3,2,1} \cr y_2 & y_{3,2}& y_{4,3,2} & y_{5,4,3,2} & 0 \cr y_3 & y_{4,3} & y_{5,4,3} & 0 & 0 \cr y_4 & y_{5,4} & 0 & 0 & 0 \cr y_5 & 0 & 0 & 0 & 0 } \right )$$.

We can read the Newton polynomial off of the first row:

tex:$$p_4(x) = y_1 + y_{2,1}(x - x_1) + y_{3,2,1}(x - x_1)(x - x_2) + y_{4,3,2,1}(x - x_1)(x - x_2)(x - x_3) $$
tex:$$+ y_{5,4,3,2,1}(x - x_1)(x - x_2)(x - x_3)(x - x_4)$$.

The code for coming up with this matrix is given by

function [Y] = newtonpoly( x, y )
	[m n] = size( x );

	if ( m ~= 1 && n ~= 1 )
		throw( MException( 'MATLAB:invalid_argument', ...
		                   'The arguments must be vectors' ) );
	end

	if ( ~all( size( y ) == size( x ) ) )
		throw( MException( 'MATLAB:invalid_argument', ...
		                   'The vectors ''x'' and ''y'' must have the same dimensions' ) );
	end

	m = max( m, n );

	Y = zeros( m, m );

	Y(:,1) = y;

	for k=2:m
		Y(1:(end - k + 1), k) = (                                     ...
			Y(2:(end - k + 2), k - 1) - Y(1:(end - k + 1), k - 1) ...
		) ./ (x(k:end) - x(1:(end - k + 1)));
	end
end

A very quick trial run demonstrates that this works:

newtonpoly( x, y )
  ans =
     -1.4000    0.9000    0.8000
      1.3000    4.1000         0
      5.4000         0         0

Thus, the Newton interpolating polynomial is

tex:$$-1.4 + 0.9(x + 1) + 0.8(x + 1)(x - 2)$$.

Suppose we now add a fourth point: tex:$$(5, 7.6)$$:

x = [x; 5];
y = [y; 7.6];

When calculating the coefficients of the Newton polynomial, we get

newtonpoly( x, y )
  ans =
     -1.4000    0.9000    0.8000   -0.3000
      1.3000    4.1000   -1.0000         0
      5.4000    1.1000         0         0
      7.6000         0         0         0

You will notice that the six non-zero entries remained unchanged and the only entries which were added were those on the anti-diagonal. Thus, the Newton interpolating polynomial is now

tex:$$-1.4 + 0.9(x + 1) + 0.8(x + 1)(x - 2) - 0.3(x + 1)(x - 2)(x - 3)$$.
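One can verify that this cubic does pass through all four points. A Python sketch evaluating the Newton form directly:

```python
# The Newton interpolating polynomial computed above.
def p3(x):
    return (-1.4 + 0.9*(x + 1) + 0.8*(x + 1)*(x - 2)
            - 0.3*(x + 1)*(x - 2)*(x - 3))

# It should reproduce each of the four interpolated points
# (up to floating-point rounding).
points = [(-1, -1.4), (2, 1.3), (3, 5.4), (5, 7.6)]
print(all(abs(p3(x) - y) < 1e-12 for x, y in points))  # True
```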

Now comes the more important question: how do we evaluate this polynomial? This seems complicated but, using Horner's rule, evaluation is a very fast tex:$$\Theta(n)$$ algorithm, where tex:$$n$$ is the number of points (that is, the number of terms in the polynomial).

5.2.4 Horner's Rule

If we have the polynomial defined by the vector a = [a1 a2 a3], that is,

tex:$$a_1 x^2 + a_2 x + a_3$$,

it may be evaluated efficiently as follows:

% Evaluate the polynomial 'a' at the point 'x'
term = 1;
y = 0;

for k = length(a):-1:1
	y = y + a(k)*term;
	term = term * x;
end

With the first iteration, tex:$$y = a_3$$, with the second iteration tex:$$y = a_2x + a_3$$, and finally with the last iteration, tex:$$y = a_1x^2 + a_2x + a_3$$.

We can do away with the local variable term by instead working from the leading coefficient; this form is Horner's rule:

% Evaluate the polynomial 'a' at the point 'x'
y = a(1);

for k = 2:length(a)
	y = y*x + a(k);
end

Before the loop, tex:$$y = a_1$$; after the first iteration, tex:$$y = a_1x + a_2$$; and after the last iteration, tex:$$y = (a_1x + a_2)x + a_3 = a_1x^2 + a_2x + a_3$$. This loop evaluates the polynomial defined by the vector a at the point x and assigns the result to the variable y.
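A Python sketch of this second loop (the name horners matches the lab's function, but this is only an outline of the idea, not the required Matlab implementation):

```python
def horners(a, x):
    # Horner's rule: start from the leading coefficient and repeatedly
    # multiply by x and add the next coefficient.
    y = a[0]
    for coefficient in a[1:]:
        y = y*x + coefficient
    return y

print(horners([1, 2, 3], 3.2))  # x^2 + 2x + 3 at x = 3.2, about 19.64
print(horners([1, 2, 3], 3.5))  # 22.25
```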

Your implementation must, in addition to evaluating the polynomial at the point x, allow the user to evaluate the polynomial at each point of a vector or matrix.

Your function must have the following output:

format long
horners( [1 2 3], 3.2 )
  ans =
    19.640000000000001

horners( [1 2 3], 3.5 )
  ans =
    22.250000000000000

Your function should also be able to perform the following:

format
horners( [1 2 3], 0:0.1:1 )
  ans =
      3.0000 3.2100 3.4400 3.6900 3.9600 4.2500 4.5600 4.8900 5.2400 5.6100 6.0000

If we define xs to be 100 equally spaced points on the interval tex:$$[-3, 3]$$, we can also plot the polynomial on that interval:

xs = linspace( -3, 3, 100 );
ys = horners( [1 2 3], xs );
plot( xs, ys );
ylim( [min( ys ) - 1, max( ys ) + 1] );
title( 'The polynomial x^2 + 2x + 3 plotted on the interval [-3, 3]' );

The output of this sequence of commands should be a plot similar to Figure 10.


Figure 10. The output of Matlab when plotting a polynomial.

What is the purpose of the statement ylim( [min( ys ) - 1, max( ys ) + 1] );?

Recall that in Matlab, the function defined by function [y] = f(x) must be saved in the file f.m. Many Matlab functions are also implemented in a similar fashion. Take a look at the Matlab library function polyval.m and determine how it evaluates polynomials.

During the lab, you will modify horners to evaluate Newton polynomials.

You can read more about Horner's rule at the appropriate chapter of Numerical Analysis for Engineers.