Topic 14.5: Runge Kutta Fehlberg (Theory)


Recall how, with Richardson extrapolation, we found two approximations, for example, R0,0 and R1,0. We used the property that the error is reduced by O(h2) to define R1,1 as a linear combination of the first two approximations in such a way that the leading error term was canceled, resulting in an answer which was O(h4).

Unfortunately, we cannot come up with a method which works as simply as this for IVPs. Instead, what we will do is use an O(h2) method, and then use an O(h3) method to approximate the error of the O(h2) method, allowing us to determine whether the value of h is sufficiently small.

This document is divided into two sections: the first uses Euler's and Heun's methods to demonstrate the underlying idea, and the second describes the Runge-Kutta-Fehlberg method itself.

Unfortunately, there are some controversies surrounding the application of the Runge-Kutta-Fehlberg method; these are discussed under Multiple Steps below.

Euler and Heun

Consider the IVP

y(1)(t) = ½ y(t) - t + 1
y(0) = 0.5

If we use both Euler's method and Heun's method to approximate y(0.1), we get:

y1 = 0.625
z1 = 0.623125

If we calculate the difference, we get y1 − z1 = 0.001875. The error of y1 is y1 − y(0.1) = 0.001907. You will note that the difference between Heun's method and Euler's method is, in this case, a good approximation to the error of Euler's method. To examine why this is true, consider the following reasoning:
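The two single steps above can be reproduced with a short Python sketch (the function and variable names are mine, not part of the original presentation):

```python
def f(t, y):
    """Right-hand side of the IVP y'(t) = y(t)/2 - t + 1."""
    return 0.5 * y - t + 1

t0, y0, h = 0.0, 0.5, 0.1

# One step of Euler's method: y1 = y0 + h f(t0, y0)
K0 = f(t0, y0)
y1 = y0 + h * K0                   # 0.625

# One step of Heun's method reuses K0 and also samples the slope at t1
K1 = f(t0 + h, y1)
z1 = y0 + 0.5 * h * (K0 + K1)      # 0.623125

# The difference y1 - z1 = 0.001875 approximates Euler's error 0.001907
print(y1, z1, y1 - z1)
```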

The Taylor series expansions for these two approximations are:

y1 = y(t1) − ½ y(2)(τ) h2

and

z1 = y(t1) + O(h3).

Subtracting the first equation from the second yields:

z1 − y1 = ½ y(2)(τ) h2 + O(h3).

If h is sufficiently small, the O(h3) term is negligible, and therefore we have:

z1 − y1 ≈ ½ y(2)(τ) h2.

Notice that this difference is now a good approximation to the error of y1, and, for sufficiently small h, if we want to ensure that the error is less than some fixed value εabs, we can find the optimal step size sh by noting that

|z1 − y1| ≈ ½ |y(2)(τ)| h2.

We want to find the multiple s such that

½ |y(2)(τ)| (sh)2 = εabs.

Dividing this equation by the previous one and solving for s, we get:

s = (εabs / |z1 − y1|)½.

This technique does not give us any idea as to how good our approximation with Heun's method is; it only gives a reasonable estimate of the error of Euler's method. Thus, we should determine the optimal value of h to use and then use the corresponding Euler approximation.

Example

Consider the IVP:

y(1)(t) = 2 y2(t) - t2 + 1
y(1) = 1

This differential equation has no simple closed-form solution; however, suppose we start with a value of h = 0.1 and we want to ensure that the error is less than εabs = 0.001.

To solve this, we find the approximations:

K0 = f( t0, y0 )
y1 = y0 + h K0
K1 = f( t1, y1 )
z1 = y0 + ½ h (K0 + K1)

that is,

K0 = f( 1, 1 ) = 2
y1 = 1 + 0.1⋅2 = 1.2
K1 = f( 1.1, 1.2 ) = 2.67
z1 = 1 + ½ 0.1 (2 + 2.67) = 1.2335

The actual value of the solution is y(1.1) = 1.23785803, and thus, the absolute error of Euler's approximation (the first) is 0.03785803 which is significantly larger than the acceptable error 0.001.

Thus, the optimal value of h may be found by calculating

s = (0.001 / |1.2335 - 1.2|)½ = (0.001/0.0335)½ ≈ 0.173

Thus, we should use h = sh = 0.173⋅0.1 = 0.0173. If we do, we get:

y1 = y0 + h*f(t0, y0) = 1 + 0.0173 f(1, 1) = 1.0346

and the correct value is y(1.0173) = 1.035531805 and thus, the absolute error is now 0.0009318 which is very close to the desired error of 0.001.
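The whole calculation in this example can be summarized in a short Python sketch (the variable names are my own):

```python
def f(t, y):
    """Right-hand side of the IVP y'(t) = 2 y(t)^2 - t^2 + 1."""
    return 2.0 * y * y - t * t + 1.0

t0, y0 = 1.0, 1.0
h, eps_abs = 0.1, 0.001

# Trial step with Euler (y1) and Heun (z1)
K0 = f(t0, y0)
y1 = y0 + h * K0                       # 1.2
K1 = f(t0 + h, y1)                     # 2.67
z1 = y0 + 0.5 * h * (K0 + K1)          # 1.2335

# Optimal scaling factor from eps_abs = |z1 - y1| s^2
s = (eps_abs / abs(z1 - y1)) ** 0.5    # approximately 0.173

# Repeat the Euler step with the smaller step size s*h
y1_new = y0 + s * h * f(t0, y0)        # approximately 1.0346
```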

Runge Kutta Fehlberg

Unfortunately, Euler's method is not very efficient, being only an O(h) method when it is used over multiple steps. Because Heun's method is O(h2), the pairing of the two is referred to as an order 1-2 method.

The Runge-Kutta-Fehlberg method uses an O(h4) method together with an O(h5) method and hence is often referred to as RKF45.

Thus, we are sampling six slopes:

K0 = f(t0, y0)
K1 = f(t0 + 1/4 h, y0 + 1/4 h K0)
K2 = f(t0 + 3/8 h, y0 + h(3/32 K0 + 9/32 K1))
K3 = f(t0 + 12/13 h, y0 + h(1932/2197 K0 − 7200/2197 K1 + 7296/2197 K2))
K4 = f(t0 + h, y0 + h(439/216 K0 − 8 K1 + 3680/513 K2 − 845/4104 K3))
K5 = f(t0 + 1/2 h, y0 + h(−8/27 K0 + 2 K1 − 3544/2565 K2 + 1859/4104 K3 − 11/40 K4))

With these six approximations of the slope, we notice that the sum of the coefficients in each linear combination, once divided by the fraction of h being stepped, is in every case one (the trivial combination for K1 is omitted):

1/4 + 3/4 = 1
161/169 − 600/169 + 608/169 = 1
8341/4104 − 32832/4104 + 29440/4104 − 845/4104 = 1
−6080/10260 + 41040/10260 − 28352/10260 + 9295/10260 − 5643/10260 = 1
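These four identities can be checked exactly with rational arithmetic; for instance, in Python:

```python
from fractions import Fraction as F

# Each list holds the (normalized) coefficients of one linear combination
combinations = [
    [F(1, 4), F(3, 4)],
    [F(161, 169), F(-600, 169), F(608, 169)],
    [F(8341, 4104), F(-32832, 4104), F(29440, 4104), F(-845, 4104)],
    [F(-6080, 10260), F(41040, 10260), F(-28352, 10260),
     F(9295, 10260), F(-5643, 10260)],
]

# Every combination sums to exactly 1, so each is a weighted average of slopes
print(all(sum(c) == 1 for c in combinations))  # True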

After this, the linear combination is multiplied by the step size h.

Thus, each linear combination of slopes is an estimator of the actual slope.

The next insight was to write two formulae which approximate the value at t1:

y1 = y0 + h(25/216 K0 + 1408/2565 K2 + 2197/4104 K3 − 1/5 K4)

and

z1 = y0 + h(16/135 K0 + 6656/12825 K2 + 28561/56430 K3 − 9/50 K4 + 2/55 K5).

Note that in each case the sum of the coefficients is one, so once again each linear combination is an unbiased estimator of the slope.

It can be shown that the first formula is O(h5) while the second is O(h6) (though only O(h4) and O(h5), respectively, if multiple steps are used). As in the example above, we can use these two approximations to estimate the appropriate multiple s of h which ensures that the error per unit step is close to εabs:

s = (εabs h / |z1 − y1|)¼.
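One step of RKF45 can be sketched in Python using the standard Fehlberg coefficients (the function and variable names here are my own, and the sanity check at the end is an added illustration, not from the original notes):

```python
def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg step: returns the lower-order estimate y1
    and the higher-order estimate z1 of the solution at t + h."""
    K0 = f(t, y)
    K1 = f(t + h/4,     y + h*K0/4)
    K2 = f(t + 3*h/8,   y + h*(3*K0 + 9*K1)/32)
    K3 = f(t + 12*h/13, y + h*(1932*K0 - 7200*K1 + 7296*K2)/2197)
    K4 = f(t + h,       y + h*(439*K0/216 - 8*K1 + 3680*K2/513 - 845*K3/4104))
    K5 = f(t + h/2,     y + h*(-8*K0/27 + 2*K1 - 3544*K2/2565
                               + 1859*K3/4104 - 11*K4/40))
    y1 = y + h*(25*K0/216 + 1408*K2/2565 + 2197*K3/4104 - K4/5)
    z1 = y + h*(16*K0/135 + 6656*K2/12825 + 28561*K3/56430
                - 9*K4/50 + 2*K5/55)
    return y1, z1

# Sanity check on y'(t) = y(t), y(0) = 1, whose exact solution is e^t
import math
y1, z1 = rkf45_step(lambda t, y: y, 0.0, 1.0, 0.1)
# Both estimates of e^0.1 are far more accurate than Euler or Heun would be
```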

Multiple Steps

The most naive application of either of these adaptive techniques is to start with h = h0 for some initial value h0. Next, given (tk, yk), use the current value of h to calculate the value of s, assign h = sh, and use this new value of h to approximate (tk+1, yk+1).

Maximum and Minimum Sizes for h

The first thing we must guard against is an accidental increase in h which is too large and consequently yields an inaccurate result. Thus, it is usual to define some maximum value hmax. Similarly, it may be too expensive to allow h to become too small, and therefore we also set a minimum value hmin.

Irrational Step Size

The computed step size is generally going to be irrational, as in both cases we are calculating either a square root or a 4th root. Consequently, every step size will be different. One solution to this is to restrict the step size to be of the form 2nh0, where h0 is the initial step size and n is restricted to some range [−N, N] (this restriction also keeps h between 2−Nh0 and 2Nh0, which solves the previous problem).
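One possible implementation of this restriction is sketched below; the rounding rule (always toward the smaller step) is my own conservative choice, as the text does not specify one:

```python
import math

def snap_step(s, h, h0, N=4):
    """Snap the proposed step size s*h down to a power-of-two multiple
    2**n * h0 of the initial step size h0, with n clamped to [-N, N]."""
    n = math.floor(math.log2(s * h / h0))  # largest n with 2**n * h0 <= s*h
    n = max(-N, min(N, n))                 # restrict n to [-N, N]
    return 2.0**n * h0

print(snap_step(0.173, 0.1, 0.1))  # 0.0125, i.e. 2**-3 * 0.1
```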

Use The First Approximation

If s has been calculated and found to be greater than 1, then the current step was already acceptable, so use the approximation of yk+1 which was used to find s rather than recomputing it with the larger step size.

Do NOT use the Higher-Order Approximation

One obvious suggestion is: if we have calculated both yk+1 and zk+1, why not use the second (higher-order) approximation? Unfortunately, the error analysis was performed for the first, and therefore, if the goal is to keep the error within some bound, use yk+1. The error of the z approximation is unknown.

Many textbooks will use z, and this is completely fine for well-behaved IVPs; however, if the ODE is stiff (as we will see in the next topic), using the z term turns out to be sub-optimal.
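Putting these safeguards together, a minimal adaptive Euler-Heun driver might look as follows; the rejection policy, the guard against a zero error estimate, and all names are my own assumptions rather than a prescription from the text:

```python
def adaptive_euler_heun(f, t0, y0, t_final, h0, eps_abs,
                        h_min=1e-4, h_max=1.0):
    """Adaptive Euler-Heun: Euler (order 1) advances the solution while
    Heun (order 2) estimates Euler's error and rescales the step size."""
    t, y, h = t0, y0, h0
    ts, ys = [t], [y]
    while t < t_final:
        h = min(h, t_final - t)           # do not step past t_final
        K0 = f(t, y)
        y1 = y + h * K0                   # Euler step
        K1 = f(t + h, y1)
        z1 = y + 0.5 * h * (K0 + K1)      # Heun step (error estimator)
        s = (eps_abs / max(abs(z1 - y1), 1e-15)) ** 0.5
        if s >= 1.0 or h <= h_min:
            # Accept the step, using the lower-order (Euler) value whose
            # error we actually estimated
            t, y = t + h, y1
            ts.append(t)
            ys.append(y)
        # Rescale h for the next attempt, clamped to [h_min, h_max]
        h = max(h_min, min(h_max, s * h))
    return ts, ys

# Usage on the first IVP above: y' = y/2 - t + 1, y(0) = 0.5,
# whose exact solution is y(t) = 2t + 2 - 1.5 e^(t/2)
import math
ts, ys = adaptive_euler_heun(lambda t, y: 0.5 * y - t + 1,
                             0.0, 0.5, 1.0, 0.1, 0.001)
err = abs(ys[-1] - (2 * ts[-1] + 2 - 1.5 * math.exp(ts[-1] / 2)))
```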

Example

The following example shows how we can solve an IVP using the techniques explained above, and it demonstrates some of the problems just mentioned.

Suppose we have the IVP

y(1)(t) = -2y(t) + e^(-2(t - 6)^2)
y(0) = 1

The solution to this differential equation is initially an exponentially decaying curve, followed by a pulse around time t = 6, followed again by an exponentially decaying curve. The solution is shown in Figure 1 on the interval [0, 10].

Figure 1. The solution to the given IVP.

If we use Euler's method with h = 0.05, then we need 200 steps to approximate the solution on this interval. The approximation is shown in Figure 2 and the error at each point is shown in Figure 3.

Figure 2. Multiple steps of Euler's method using h = 0.05.

Figure 3. The errors at each of the 200 points.

We note that the error does not go above 0.02.

If we use the Euler-Heun method described above at each step, always using the optimal value of h, we get the approximation shown in Figure 4, the errors being shown in Figure 5.

Figure 4. Using the adaptive Euler-Heun method with εabs = 0.01.

Figure 5. The errors associated with the adaptive Euler-Heun method.

This required 82 iterations of the technique before the value t82 = 10.3097 ≥ 10 was reached, or fewer than half as many points. On the first trough, it is quite apparent that the size of h is quite large; after the second peak, we also note that the value of h was too large, and therefore the error at the next step was negative. Before that point, the error never went above 0.012, which is significantly better than using Euler's method with h = 0.05 (that method having a maximum absolute error closer to 0.02). Also, looking at the first few t values, we see that they are unevenly spaced:

0, 0.02236067977, 0.04523878102, 0.06865896511, 0.09264770682, 0.1172334795

If, instead, we adopt some of the suggestions above for multiple steps, restricting the step size to the form 2nh0, imposing minimum and maximum step sizes, and always using the lower-order (Euler) approximation, the result is an approximation to the solution using 104 points (instead of 200), and this is shown in Figure 6. The errors are shown in Figure 7, and it can be quite easily observed that the errors are significantly less than using an unmodified Euler's method with h = 0.05, even though we use a minimum h value of 0.025. The step sizes are shown in Figure 8.

Figure 6. Using a modified adaptive Euler-Heun method with εabs = 0.01.

Figure 7. The errors associated with the modified adaptive Euler-Heun method.

Figure 8. The step sizes used with the modified adaptive Euler-Heun method.

The points at which approximations are found are:

0, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2, 0.225, 0.25, 0.275, 0.3, 0.325, 0.35, 0.375, 0.4, 0.425, 0.45, 0.475, 0.5, 0.525, 0.575, 0.625, 0.675, 0.725, 0.775, 0.825, 0.875, 0.925, 0.975,
1.025, 1.075, 1.125, 1.175, 1.275, 1.375, 1.475, 1.575, 1.675, 1.775, 1.875,
2.075, 2.275, 2.475, 2.875,
3.275, 3.675,
4.075, 4.475, 4.675, 4.775, 4.875, 4.975,
5.025, 5.075, 5.125, 5.175, 5.225, 5.275, 5.325, 5.375, 5.425, 5.475, 5.525, 5.575, 5.625, 5.725, 5.925,
6.025, 6.075, 6.125, 6.175, 6.225, 6.275, 6.325, 6.375, 6.425, 6.475, 6.525, 6.575, 6.625, 6.675, 6.725, 6.825,
7.025, 7.125, 7.225, 7.275, 7.325, 7.375, 7.475, 7.575, 7.675, 7.775, 7.875, 7.975,
8.075, 8.175, 8.275, 8.475, 8.675, 8.875,
9.275, 9.675,
10.075

and it is therefore obvious that most of the approximations are used to fit the curve on the interval [0, 2] and at the start, middle, and end of the pulse.

To conclude, we also show the effect of using the default RKF45 method, which always uses the optimal h value. Approximating the solution on the interval in this way reduces the number of points required to 16, as is shown in Figure 9. The errors are shown in Figure 10, and the step sizes are shown in Figure 11. The red line shows the h = 0.025 value used above. Note that the step size grows almost to 1. Also, because fewer steps are taken, there is a significant reduction in the accumulated error.

Figure 9. Using the Runge-Kutta-Fehlberg method with εabs = 0.01.

Figure 10. The errors associated with the Runge-Kutta-Fehlberg method.

Figure 11. The step sizes used with the Runge-Kutta-Fehlberg method.

The points are:

0, 0.27282, 0.63613, 1.0650, 1.5887, 2.2509, 3.1137, 4.1975, 4.9416, 5.6021, 6.1579, 6.6112, 7.0857, 7.7649, 8.3529, 9.1176, 10.1089

If, like before, we restrict the size of h to powers of two multiplied by the default step size, we get the results shown in Figures 12 through 14. This requires 18 points, as opposed to 16.

Figure 12. Using the modified Runge-Kutta-Fehlberg method with εabs = 0.01.

Figure 13. The errors associated with the modified Runge-Kutta-Fehlberg method.

Figure 14. The step sizes used with the modified Runge-Kutta-Fehlberg method.

The points at which we approximate the solution are:

0, 0.1, 0.3, 0.7, 1.1, 1.5, 1.9, 2.7, 3.5, 4.3, 5.9, 6.7, 7.1, 7.5, 7.9, 8.3, 9.1, 9.9, 11.5

Copyright ©2006 by Douglas Wilhelm Harder. All rights reserved.