
Lesson 1.7: Functions



We have now described almost every aspect of our first little Hello world! program, all except for the function: int main() followed by an opening brace, some code, a return statement, and a closing brace. This is how we define a function in C++.

Before we look at how to define functions, we will start with how to use them.

Consider the following program:

#include <iostream>
#include <cmath>

int main();

int main() {
	std::cout << "The sine of 0.3 is " << std::sin( 0.3 ) << std::endl;
	std::cout << "The square root of 0.3 is " << std::sqrt( 0.3 ) << std::endl;
	std::cout << "The absolute value of -0.3 is " << std::abs( -0.3 ) << std::endl;

	return 0;
}

You can execute this program on repl.it.

In each of these cases, an argument is passed to a function, and the function returns a value that is then printed to the screen. All functions in C++ work in this manner: they accept zero or more arguments, and they optionally return a value. Some functions may have no return values, as we will see.

Once you start running a program in any computer language, you will at some point start calling functions, and those functions may call other functions, which may in turn call other functions, and so on. Some programming languages, however, start execution with code that does not resemble a function. For example, you could imagine a C++ program that simply begins executing the statements at the top of a file:

#include <iostream>
#include <cmath>

// THIS DOES NOT WORK...
// Execution starts here:
std::cout << "The sine of 0.3 is " << std::sin( 0.3 ) << std::endl;
std::cout << "The square root of 0.3 is " << std::sqrt( 0.3 ) << std::endl;
std::cout << "The absolute value of -0.3 is " << std::abs( -0.3 ) << std::endl;
// Execution ends at the end of the file.

In a more user-friendly programming language, this is exactly what happens: execution begins with the first line of code in a file. This is the case in scripting languages like Python or Matlab, both of which you will be exposed to during your undergraduate studies. For example, if you launch Matlab, you could simply enter the commands:

>> sin( 0.3 )
ans =  0.29552
>> sqrt( 0.3 )
ans =  0.54772
>> abs( -0.3 )
ans =  0.30000

The developers of C and C++, however, saw this as an oddity: Why should everything be a function call except for the first instructions? That is, why should there be a special case? Thus, instead, they made the requirement that if a file has a function called main, when the file is compiled into an executable, that is the function that will be automatically called.

We will therefore start with the simplest C++ program: a file containing the one function definition

int main();

int main() {
	return 0;
}

The first line describes the behavior of the function: The function main

  • returns an integer (the int in int main()), and
  • takes no arguments (the empty parentheses in int main()).

Now, let us look at some functions. You will remember from secondary school that there are a number of functions you defined in mathematics class:

Function | Domain | Range | Description
$\sin(\theta)$ | ${\bf R}$ | ${\bf R}$ | Calculate the sine of the angle $\theta$.
$\log_{10}(x)$ | ${\bf R}^+$ | ${\bf R}$ | Calculate the base-10 logarithm of the strictly positive real number $x$.
$\ln(x)$ | ${\bf R}^+$ | ${\bf R}$ | Calculate the natural logarithm of the strictly positive real number $x$.
$n!$ | ${\bf N}$ | ${\bf N}$ | Calculate the factorial of the natural number $n$.
$\gcd(m, n)$ | ${\bf Z} \times {\bf Z}$ | ${\bf N}$ | The greatest common divisor of the integers $m$ and $n$.
${\rm lcm}(m, n)$ | ${\bf Z} \times {\bf Z}$ | ${\bf N}$ | The least common multiple of the integers $m$ and $n$.
$|x|$ | ${\bf R}$ | ${\bf R}_0^+$ | Calculate the absolute value of the real number $x$, a result that will be a positive real number.
$\sqrt{x}$ | ${\bf R}_0^+$ | ${\bf R}_0^+$ | Calculate the square root of the positive real number $x$.

Your mathematics teacher may have used different symbols, but in general,

Representation | Set
${\bf N}$ | The natural numbers, $0, 1, 2, \ldots$
${\bf Z}$ | The integers, $\ldots, -2, -1, 0, 1, 2, \ldots$
${\bf R}$ | The real numbers
${\bf R}^+$ | The strictly positive real numbers
${\bf R}_0^+$ | The positive real numbers including $0$

Now, some of these functions take integers as arguments and return integers as results, and we have already seen that int main() returns an integer, specifically the integer 0.

Other functions above take real numbers as arguments, and return real numbers. We have seen how to enter a real number (e.g., 3.2 and -5.24e-2), but we haven't yet discussed how the computer deals with real numbers. We will expand more on this later, but right now, we will use the keyword double to indicate the representation used to store (or really, approximate) real numbers. The word double is the first part of the longer phrase double-precision floating-point representation, and you will learn more about this later. The term double is used because the representation has approximately twice the precision of another floating-point representation, namely float.

Another difference between mathematics and computer science is that in mathematics, we call ${\bf Z}$ and ${\bf R}$ sets, while we call int and double types. A type describes the representation and storage of data in memory.

The values returned by these functions depend on what values they are passed, so $\sin(3.2)$ is different from $\sin(4.7)$. Thus, we say that the function $\sin(\theta)$ is parameterized by $\theta$. When we actually pass $\sin$ a value like $3.2$, we say that $3.2$ is the argument passed to $\sin$. In general, a behavior that is desirable with functions is that:

functions given the same arguments will give the same result.

It would be undesirable if $\sin(1)$ evaluated to $0.8414709848$ one time it is called, but $0.841471$ another time.

We could write functions that implement all of these algorithms in C++, and we may describe their behavior with function declarations:

	double sin( double theta );
	double log10( double x );
	// Almost all programming languages use 'log' for the natural logarithm
	double log( double x );
	int factorial( int n );
	int gcd( int a, int b );
	int lcm( int a, int b );
	double abs( double x );
	double sqrt( double x );

To see the relationship between mathematical notation and function declarations, consider the greatest common divisor: $\gcd : {\bf Z} \times {\bf Z} \to {\bf N}$ versus int gcd( int a, int b );.

Here, the domain of the $\gcd$ function is the Cartesian product ${\bf Z} \times {\bf Z}$ (that is, a pair of integers) with the range being the natural numbers. In computer science, we say that the function identified by gcd takes two parameters of type int and has a return value of type int.

As another example, consider the sine function: $\sin : {\bf R} \to {\bf R}$ versus double sin( double theta );.

Here, the domain of the $\sin$ function is ${\bf R}$ with the range also being the real numbers ${\bf R}$. In computer science, we say that the function identified by sin takes one parameter of type double and has a return value also of type double.

You will note that each parameter has its own identifier so that we will later be able to refer to the value passed as an argument.

Definition: function declaration
A statement that gives the return type of a function, its identifier, and the types of its parameters.

You will notice that in mathematics, we can be very precise about the domain (the possible arguments) and the range (the possible values). For example, up until now, unless you have learned complex arithmetic, you will understand that $\sqrt{-1}$ is not defined. In a programming language like C++, you cannot be so precise, and so you must therefore choose a type that best represents the domain or range. You will also note that we must express an operation such as the factorial, written $n!$ in mathematics, as a function such as factorial( n ).

If you are ever implementing a numerical algorithm, you may have to calculate either an integer raised to the exponent of an integer ($m^n$) or a real raised to an exponent of another real ($x^y$). In C++, you may be tempted to use the caret (^) for exponentiation; this, however, does not calculate powers. While we have not yet discussed it in detail, the caret operator performs a bit-wise exclusive-or operation (a bit-wise XOR), so, for example, 2^3 evaluates to 1, not 8. Instead, there are functions for exponentiation:

	double pow( double x, double y );
	double exp( double y );

The first calculates $x^y$ while the second calculates $e^y$, where $e$ is the base of the natural logarithm. This second exponential can be calculated much more precisely and quickly by std::exp than by explicitly raising an approximation of $e$ to a power:

#include <iostream>
#include <cmath>

int main();

int main() {
	std::cout << "e^2 = " << std::pow( 2.71828182845904523536, 2.0 ) << std::endl;

	return 0;
}
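For comparison, here is a minimal sketch of the same calculation using std::exp directly; no stored approximation of $e$ is needed:

#include <iostream>
#include <cmath>

int main();

int main() {
	// std::exp( 2.0 ) calculates e^2
	std::cout << "e^2 = " << std::exp( 2.0 ) << std::endl;

	return 0;
}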

There is no built-in C++ function that calculates an integer raised to an integer, as this is quite honestly not a common operation. You can, however, implement your own functions. We will look at this now.

Function definitions

We will start with a simpler function (one that evaluates a polynomial at a point), then one that evaluates a bivariate polynomial, and then go on to defining an approximation of the sine function.

A simple polynomial

Suppose we want to write a function that calculates the polynomial $5x^2 - 3x - 9$. For this function, you need an identifier, so let us call the function $p$.

The polynomial must take an argument $x$ and return a value (in this case, $5x^2 - 3x - 9$). Thus, the function declaration would be:

// Function declarations
double p( double x );

The next step is to define this function, just like we defined the function main(). In this case, we have

// Function definitions
double p( double x ) {
	return 5.0*x*x - 3.0*x - 9.0;
}

We can now write a main() function that calls this function:

int main() {
	std::cout << "p(4.2) = " << p( 4.2 ) << std::endl;

	return 0;
}
You can try this function out at repl.it.

A bivariate polynomial

A bivariate polynomial is one that has two variables; for example, $(x - y)(x - y)$, which equals $x^2 - 2xy + y^2$. Such a polynomial, which we will identify by $q$, takes two variables, and thus the function declaration is:

// Function declarations
double q( double x, double y );

Now, remember, we could have used any two identifiers; we are simply choosing to use x and y.

The function definition is:

// Function definitions
double q( double x, double y ) {
	return (x - y)*(x - y);
}

The multiplication sign between the parentheses is absolutely necessary. Now, we could implement this function in the expanded form:

// Function declarations
double q_expanded( double x, double y );

// Function definitions
double q_expanded( double x, double y ) {
	return x*x - 2.0*x*y + y*y;
}

Both implementations will return either the exact same or similar values; however, one important difference is efficiency: the expanded version requires twice as many floating-point operations:

Expression | Code | Multiplications/divisions | Additions/subtractions
$(x - y)(x - y)$ | (x - y)*(x - y) | 1 | 2
$x^2 - 2xy + y^2$ | x*x - 2.0*x*y + y*y | 4 | 2

Thus, the expanded form has twice as many floating-point operations (FLOPs). There are other reasons to prefer the form requiring less computation, but you will see those in your numerical analysis course.
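As a quick check (a minimal sketch; the sample arguments are an arbitrary choice), both implementations agree here:

#include <iostream>

int main();
double q( double x, double y );
double q_expanded( double x, double y );

// Function definitions
double q( double x, double y ) {
	return (x - y)*(x - y);
}

double q_expanded( double x, double y ) {
	return x*x - 2.0*x*y + y*y;
}

int main() {
	// (3.5 - 1.25)^2 = 2.25^2 = 5.0625 in both cases
	std::cout << "q(3.5, 1.25)          = " << q( 3.5, 1.25 ) << std::endl;
	std::cout << "q_expanded(3.5, 1.25) = " << q_expanded( 3.5, 1.25 ) << std::endl;

	return 0;
}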

Approximating sine

Suppose you were asked in secondary school for the values of $\sin\left(\frac{\pi}{6}\right)$, $\sin\left(\frac{17\pi}{4}\right)$ and $\sin(0.1)$. The first you could find in the tabulated values you memorized, and the second would require you to apply a few trigonometric identities first, but you'd have some issues trying to determine the third. You could draw a picture and measure the distances.

From such a picture, you'd see that $\sin(0.1) \approx \frac{8.2\ {\rm mm}}{83.1\ {\rm mm}} = 0.09867629362$ when you enter this into your calculator. This is not a bad estimate using the Mark I Eyeball, but the correct answer is closer to $0.09983341665$.

To start, however, let us look at a simpler version of the sine function. Suppose you are authoring a piece of code that is meant to calculate the sine function very quickly for values on the interval $\left [-\frac{\pi}{2}, \frac{\pi}{2} \right]$. In your design, you have ensured that no function will ever try to calculate $\sin(x)$ for any value outside this interval. It would be nice if we could approximate the sine function with a cubic polynomial $p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0$. If this matches the sine function at each end-point, then we first require that:

  1. $p\left( -\frac{\pi}{2} \right ) = -1$ and
  2. $p\left( \frac{\pi}{2} \right ) = 1$.

We also know that the derivative of the sine function at each of these points is zero, so we also have that

  1. $\frac{dp}{dx}\left( -\frac{\pi}{2} \right ) = 0$ and
  2. $\frac{dp}{dx}\left( \frac{\pi}{2} \right ) = 0$.

where $\frac{dp}{dx}(x) = 3a_3x^2 + 2a_2x + a_1$.

Thus, with four constraints, we can find the four unknowns $a_3$, $a_2$, $a_1$ and $a_0$.
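Explicitly, evaluating $p$ and $\frac{dp}{dx}$ at $x = \pm\frac{\pi}{2}$ turns these four constraints into four linear equations in the four unknowns:

$$-a_3 \frac{\pi^3}{8} + a_2 \frac{\pi^2}{4} - a_1 \frac{\pi}{2} + a_0 = -1$$
$$a_3 \frac{\pi^3}{8} + a_2 \frac{\pi^2}{4} + a_1 \frac{\pi}{2} + a_0 = 1$$
$$3 a_3 \frac{\pi^2}{4} - a_2 \pi + a_1 = 0$$
$$3 a_3 \frac{\pi^2}{4} + a_2 \pi + a_1 = 0$$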

If you were to do this and apply the rules of linear algebra you learned in secondary school, you would find that

  1. $a_3 = -\frac{4}{\pi^3}$,
  2. $a_2 = 0$,
  3. $a_1 = \frac{3}{\pi}$, and
  4. $a_0 = 0$,

and thus we could implement our sine function as:

// Function declarations
double fast_sin( double x );

// Function definitions
double fast_sin( double x ) {
	return -0.1290061377327980*x*x*x
	       + 0.9549296585513720*x;
}

We will now use such an approximation in a complete program. Note that the cubic used below is slightly different from the one just derived: it matches both the value and the derivative of the sine function at $x = 0$ and at $x = \frac{\pi}{2}$, rather than at $x = \pm\frac{\pi}{2}$ (see the derivation referenced below):

#include <iostream>

int main();
double fast_sin( double x );

double fast_sin( double x ) {
	return -0.11073981636184074*x*x*x
	      - 0.057385341027109429*x*x + x;
}

int main() {
	std::cout << "sin(0.0)  = " << fast_sin( 0.0 ) << std::endl;
	std::cout << "sin(0.1)  = " << fast_sin( 0.1 ) << std::endl;
	std::cout << "sin(0.2)  = " << fast_sin( 0.2 ) << std::endl;
	std::cout << "sin(0.3)  = " << fast_sin( 0.3 ) << std::endl;
	std::cout << "sin(0.4)  = " << fast_sin( 0.4 ) << std::endl;
	std::cout << "sin(0.5)  = " << fast_sin( 0.5 ) << std::endl;
	std::cout << "sin(0.6)  = " << fast_sin( 0.6 ) << std::endl;
	std::cout << "sin(0.7)  = " << fast_sin( 0.7 ) << std::endl;
	std::cout << "sin(0.8)  = " << fast_sin( 0.8 ) << std::endl;
	std::cout << "sin(0.9)  = " << fast_sin( 0.9 ) << std::endl;
	std::cout << "sin(1.0)  = " << fast_sin( 1.0 ) << std::endl;
	std::cout << "sin(1.1)  = " << fast_sin( 1.1 ) << std::endl;
	std::cout << "sin(1.2)  = " << fast_sin( 1.2 ) << std::endl;
	std::cout << "sin(1.3)  = " << fast_sin( 1.3 ) << std::endl;
	std::cout << "sin(1.4)  = " << fast_sin( 1.4 ) << std::endl;
	std::cout << "sin(1.5)  = " << fast_sin( 1.5 ) << std::endl;
	std::cout << "sin(pi/2) = " << fast_sin(  1.5707963267948966 ) << std::endl;

	return 0;
}

You can run this code at repl.it.

The output of this code, when run, is as follows, where the correct values (to six significant digits) are shown in the second column:

sin(0.0) = 0                        0
sin(0.1) = 0.0993154                0.0998334
sin(0.2) = 0.196819                 0.198669
sin(0.3) = 0.291845                 0.295520
sin(0.4) = 0.383731                 0.389418
sin(0.5) = 0.471811                 0.479426
sin(0.6) = 0.555421                 0.564642
sin(0.7) = 0.633897                 0.644218
sin(0.8) = 0.706575                 0.717356
sin(0.9) = 0.772789                 0.783327
sin(1.0) = 0.831875                 0.841471
sin(1.1) = 0.883169                 0.891207
sin(1.2) = 0.926007                 0.932039
sin(1.3) = 0.959723                 0.963558
sin(1.4) = 0.983655                 0.985450
sin(1.5) = 0.997136                 0.997495
sin(pi/2) = 1                       1

If you are interested in where this approximation comes from, see this derivation. It requires only secondary-school algebra and calculus, but is beyond the scope of this course.

Returning to our implementation:

// Function declaration
double fast_sin( double x );

// Function definition
double fast_sin( double x ) {
	return -0.11073981636184074*x*x*x - 0.057385341027109429*x*x + x;
}

we see that the expression -0.11073981636184074*x*x*x - 0.057385341027109429*x*x + x refers to the parameter x. If we call fast_sin(0.8), the parameter x takes on the value of the argument 0.8. Then, whenever x is used in a calculation, the value 0.8 is used in that calculation.

Therefore, in this case, if we did call fast_sin(0.8), we would calculate

-0.11073981636184074*0.8*0.8*0.8 - 0.057385341027109429*0.8*0.8 + 0.8

and this would evaluate to 0.70657459576538751, but when this is printed to the console, the result is rounded to six significant digits.

Continuing, we note that the implementation

double fast_sin( double x ) {
	return -0.11073981636184074*x*x*x - 0.057385341027109429*x*x + x;
}

requires five multiplications and two additions. This is because of the way we must calculate

$$ax^3 + bx^2 + x$$

Could we do any better? We could factor out an $x$ and write the above polynomial as

$$(ax^2 + bx + 1)x$$

and we could repeat this again by factoring out an $x$ in the first two terms:

$$((ax + b)x + 1)x$$

This reduces the calculation to three multiplications:

double fast_sin( double x ) {
	return ((-0.11073981636184074*x - 0.057385341027109429)*x + 1.0)*x;
}

This calculates our fast approximation of the sine function using only three multiplications, not five. This technique is called Horner's rule for evaluating polynomials.

Now, for simple problems, this may be a trivial difference, but suppose you are simulating the wind effects on the wings of an Airbus A380-Plus, the successor to the double-decker A380. If in this simulation you had to call fast_sin a total of 182.7 trillion times, such a savings could decrease the run time of your simulation by many hours.

Alternate requirements

The approximation of the sine function we used above matched the value of the sine function and its derivative at two points: $x = 0$ and $x = \frac{\pi}{2}$. This has a maximum absolute error less than $0.011$; however, there are other ways of measuring error. For example, if you were off $1 on a $2 purchase, someone may object; however, if you were off by $1 on a $32,353,591 purchase, likely no one would consider this an issue—the relative size of the error in the first case is 50%, while in the second case the relative size is negligible.

Similarly, the maximum relative error of our approximation is approximately 1.64%. There is a better approximation that minimizes the relative error:

double sin_min_rel_err( double x );

double sin_min_rel_err( double x ) {
	return ((-0.12982726700166469*x - 0.031041616418863258)*x
	        + 1.0034924244212799)*x;
}

In this case, the maximum relative error is 0.349%, which is less than one quarter of the previous maximum relative error.
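Here is a minimal sketch of how one might compare the two approximations; the sample point 0.1 is an arbitrary choice, and std::sin from the cmath library serves as the reference value:

#include <iostream>
#include <cmath>

int main();
double fast_sin( double x );
double sin_min_rel_err( double x );

double fast_sin( double x ) {
	return ((-0.11073981636184074*x - 0.057385341027109429)*x + 1.0)*x;
}

double sin_min_rel_err( double x ) {
	return ((-0.12982726700166469*x - 0.031041616418863258)*x
	        + 1.0034924244212799)*x;
}

int main() {
	// Relative error: (approximation - true value)/(true value)
	std::cout << "fast_sin:        "
	          << (fast_sin( 0.1 ) - std::sin( 0.1 ))/std::sin( 0.1 ) << std::endl;
	std::cout << "sin_min_rel_err: "
	          << (sin_min_rel_err( 0.1 ) - std::sin( 0.1 ))/std::sin( 0.1 ) << std::endl;

	return 0;
}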

More advanced algorithms from secondary school

Finding the maximum of two numbers is also relatively easy, as is finding the absolute value; both of these can be described in English as follows (a C++ sketch appears after the lists).

To find the maximum of $x$ and $y$:

  1. If $x \ge y$,
    1. return $x$,
    2. otherwise, return $y$.

To find the absolute value of $x$:

  1. If $x \ge 0$,
    1. return $x$,
    2. otherwise, return $-x$.
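For the curious, here is a minimal sketch of both algorithms in C++; the if statement used here is only introduced in a later lesson, and the identifiers maximum and absolute_value are our own choices:

// Function declarations
double maximum( double x, double y );
double absolute_value( double x );

// Function definitions
double maximum( double x, double y ) {
	// The 'if' statement conditionally executes code;
	// it is covered in an upcoming lesson
	if ( x >= y ) {
		return x;
	} else {
		return y;
	}
}

double absolute_value( double x ) {
	if ( x >= 0.0 ) {
		return x;
	} else {
		return -x;
	}
}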

On the other hand, if you learned to calculate the greatest common divisor of two numbers, you will remember that it was a repetitive (or iterative) algorithm. For example, suppose we were to find the gcd of $1170$ and $2856$. You may have learned to write

$2856 = 2 \times 1170 + 516$
$1170 = 2 \times 516 + 138$
$516 = 3 \times 138 + 102$
$138 = 1 \times 102 + 36$
$102 = 2 \times 36 + 30$
$36 = 1 \times 30 + 6$
$30 = 5 \times 6 + 0$

Once the remainder is $0$, you are finished, and the previous remainder, $6$, is the greatest common divisor. The least common multiple of the two numbers is the product of the two numbers divided by their greatest common divisor, so the least common multiple of $1170$ and $2856$ is $\frac{1170 \times 2856}{6} = 556920$.
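Peeking ahead once more, here is a minimal sketch of this repetitive algorithm in C++; the while loop and the remainder operator % are both introduced in later lessons:

// Function declarations
int gcd( int m, int n );
int lcm( int m, int n );

// Function definitions
int gcd( int m, int n ) {
	// Repeatedly replace the pair (m, n) by (n, the remainder of m/n)
	// until the remainder is zero
	while ( n != 0 ) {
		int r = m % n;
		m = n;
		n = r;
	}

	return m;
}

int lcm( int m, int n ) {
	// The product of the two numbers divided by their greatest common divisor
	return m*n/gcd( m, n );
}

Calling gcd( 1170, 2856 ) returns 6, and lcm( 1170, 2856 ) returns 556920.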

You may even have been taught an algorithm for calculating the square root of a real number. All of these algorithms, however, require us to perform certain operations based on properties of the arguments.

Finding the factorial of an integer $n$ requires us to perform one of two algorithms:

  1. Multiply the integers from $1$ to $n$ together, or
  2. Multiply the value of $(n - 1)!$ by $n$ to get $n!$.

These require us to either conditionally execute code, or to repetitively execute code. We will cover these in the next few lessons; however, first we will start with comments.
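For the curious, here is a minimal sketch of the first of these two factorial algorithms; the for loop used here is one of the repetitive constructs covered in those later lessons:

// Function declaration
int factorial( int n );

// Function definition
int factorial( int n ) {
	int product = 1;

	// Multiply the integers from 1 to n together
	for ( int k = 1; k <= n; k = k + 1 ) {
		product = product*k;
	}

	return product;
}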

Questions and practice:

1. Write a function linear_root that takes two parameters a and b that represent the coefficients of a linear polynomial $ax + b$ and returns the solution to the equation $ax + b = 0$.

2. The following two functions attempt to convert Fahrenheit to Celsius:

#include <iostream>

int main();
double fahrenheit_to_celsius_1( double absolute_temperature );
double fahrenheit_to_celsius_2( double absolute_temperature );

double fahrenheit_to_celsius_1( double absolute_temperature ) {
	return (5/9)*(absolute_temperature - 32);
}

double fahrenheit_to_celsius_2( double absolute_temperature ) {
	return (5.0/9.0)*(absolute_temperature - 32.0);
}

int main() {
	std::cout << "84 degrees Fahrenheit in degrees Celsius is "
	          << fahrenheit_to_celsius_1( 84 ) << std::endl;
	std::cout << "84 degrees Fahrenheit in degrees Celsius is "
	          << fahrenheit_to_celsius_2( 84 ) << std::endl;

	return 0;
}

What happens when you use these functions? Can you deduce why the answers are not the same? Write a function that converts an absolute temperature in degrees Celsius to degrees Fahrenheit.

3. Write a function free_fall that takes two parameters x0 and v0 that give the initial vertical position of a ball (in metres) and the initial vertical velocity of that ball (in metres per second), and a third parameter t (in seconds), and calculates the position of the ball after $t$ seconds. Use the standard acceleration due to gravity, $g = 9.80665\ {\rm m/s^2}$.

4. In special relativity, if a particle is moving very fast relative to a frame of reference, time dilates, or gets longer (when your pupils are said to dilate, that indicates that they are getting bigger). The amount that time $t$ dilates is given by the formula

$$\frac{t}{\sqrt{1 - \frac{v^2}{c^2}}}$$

where $c$ is the speed of light, or 299792458 metres per second.

Write a function that calculates the time dilation of $t$ seconds given a velocity of $v$ metres per second.

Note that you have many choices for calculating $\frac{v^2}{c^2}$:

	v*v/299792458/299792458
	(v/299792458)*(v/299792458)
	v*v/(299792458*299792458)

Do all of these give the same result? Do all of these return the same result if you use 299792458.0 instead of 299792458?

Note: while you may not realize it yet, this effect is precisely what is required for a flowing current to induce a magnetic field. If Einstein's model of special relativity were incorrect, electric generators and electric motors would not function in the way they are observed to work in nature. However, we would not have to worry about crosstalk when dealing with VLSI (very-large-scale integration) circuits.

5. While you may not recall this, a logarithm base $a$ is just a scalar multiple of a logarithm base $b$, specifically, $\log_a(x) = \log_a(b) \log_b(x)$, or $\frac{\log_a(x)}{\log_b(x)} = \log_a(b) $. Consequently, $\ln(x) = \ln(b) \log_b(x)$. Use this to write a function

double log_b( double x, double base );

Recall that in the cmath library, the natural logarithm $\ln(x)$ is implemented as double log( double x );.

Note: while you may think that you will be leaving logarithms behind you, you will be using them significantly in, for example, algorithm analysis. For example, you will learn that the number of instructions required to sort an array of size $n$ using an algorithm like heap sort, merge sort or quick sort is approximately a scalar multiple of $n \ln(n)$.

6. Read why the Mars Climate Orbiter failed and write a function to help NASA avoid this problem.

This program in a pseudo-assembly language

At this point, it is not reasonable to introduce assembly language at the level at which you will use it in your courses on digital computers; however, for those interested:

You should be aware that a computer has main memory, or RAM, which stores data and instructions during the execution of a program; however, main memory is actually very slow relative to the speed of the processor. Instead, when a program is executing, it accesses and manipulates registers that are hard-wired into the processor, and instructions can be executed to load data from main memory into a register, and also to store data from a register back to a location in main memory.

Some registers may store data, while other registers are dedicated to storing just addresses in main memory. We will simulate a processor by having ten such registers, including

  1. four data registers,
  2. four address registers,
  3. a dedicated frame address register, and
  4. a dedicated program address register.

When a function is called, the arguments are put at a known location, and that location is stored in the FRAME address register. After that, the address of the first instruction is put into the PROGRAM address register. The computer then:

  1. executes the instruction at the address in PROGRAM, and
  2. increments PROGRAM to the address of the next instruction.

Because the calling function knows where it put the argument, we will also put any return value there so that the calling function can access it. Suppose we now call our function fast_sin with the argument 0.1:

  1. The calling function puts the argument at address 89.
  2. The calling function sets FRAME to 89.
  3. The calling function then sets PROGRAM to 20.
  4. The next instruction we execute is at address 20 and we keep doing so until we reach the RETURN-FROM-FUNCTION instruction.

------------------------------------------------------------------------------
THE PROCESSOR
=============
      D0: 0             A0: 0
      D1: 0             A1: 0
      D2: 0             A2: 0
      D3: 0             A3: 0

   FRAME: 89
 PROGRAM: 20

------------------------------------------------------------------------------
MAIN MEMORY
===========
    20  LOAD (FRAME)     -> D0
    21  LOAD (31)        -> D1
    22  MULTIPLY D0 * D1 -> D2
    23  LOAD (32)        -> D1
    24  ADD D1 + D2      -> D2
    25  MULTIPLY D0 * D2 -> D2
    26  LOAD (33)        -> D1
    27  ADD D1 + D2      -> D2
    28  MULTIPLY D0 * D2 -> D2
    29  STORE D2         -> (FRAME)
    30  RETURN-FROM-FUNCTION
    31  -0.11073981636184074
    32  -0.057385341027109429
    33   1.0
    34
     .
     .
     .
    89  0.1                       // The argument 'x'

This function:

  1. Loads the argument at address FRAME to D0.
  2. Loads the value at address 31 to D1.
  3. Multiplies the values in D0 and D1 and stores the result in D2.
  4. Loads the value at address 32 to D1.
  5. Adds the values in D1 and D2 and stores the result in D2.
  6. Multiplies the values in D0 and D2 and stores the result in D2.
  7. Loads the value at address 33 to D1.
  8. Adds the values in D1 and D2 and stores the result in D2.
  9. Multiplies the values in D0 and D2 and stores the result in D2.
  10. Stores the value in D2 to the address FRAME—this is the return value.
  11. Return from this function.

Here is an exercise for you, if you're up to it: the function we converted to machine instructions was the more efficient version of the routine. How could you implement the less efficient algorithm:

double fast_sin( double x ) {
	return -0.11073981636184074*x*x*x - 0.057385341027109429*x*x + x;
}

You will have to move the locations of the three data items currently stored at addresses 31 to 33; however, you won't need the value 1.0.

