17  A3: Absolute Stability

Assignment 3: Absolute stability of Runge-Kutta and multistep methods

Published

February 16, 2026

Note
  • Complete the following and submit to Canvas before Feb 25, 23:59.
  • Late work will receive 0%.
  • Each assignment is worth the same.
  • Please get in contact with the TA/instructor in plenty of time if you need help.
  • Before submitting your work, make sure everything runs as expected: click Kernel > Restart Kernel and Run All Cells.
  • Feel free to add more cells to experiment or test your answers.
  • I encourage you to discuss the course material and assignment questions with your classmates. However, unless otherwise explicitly stated on the assignment, you must complete and write up your solutions on your own.
  • The use of GenAI is prohibited as outlined in the course syllabus. If I suspect you of cheating, you may be asked to complete a written or oral exam on the content of this assignment.

Name:

Approx. time spent on this assignment:

Did you do the suggested reading? Yes/No

Section 5.11 of Burden, Faires, and Burden (2015) may be useful.

In lectures, we have discussed zero-stability of numerical methods for solving IVPs. We have seen that Runge-Kutta methods and linear multistep methods that satisfy the root condition are zero-stable. Therefore, these methods are convergent: for small enough step size, the numerical approximation is “close” to the exact solution. This notion does not take into account how small the step size has to be in order to achieve a reasonable error. Absolute stability, on the other hand, is about a fixed step size. Roughly speaking, a numerical scheme is absolutely stable for a given step size if u_j remains bounded as j\to\infty.

To introduce this concept, we need the following test problem: for \lambda \in \mathbb C,

\begin{align} u(0) &= u_0 \nonumber\\ u'(t) &= \lambda u(t). \tag{test} \end{align}

Notice that the exact solution to this equation is u(t) = u_0 e^{\lambda t} and so, if \lambda = x + i y, then u(t) = u_0 e^{x t} \big[ \cos (y t) + i \sin (y t) \big]. This solution remains bounded if x \leq 0 and diverges when x > 0. It turns out that analysing how well numerical schemes can approximate this problem is informative:

Definition 17.1 We say that a numerical scheme is absolutely stable for a given \lambda \in \mathbb C and step size h if the numerical approximation, u_j, to \text{(test)} remains bounded as j \to \infty.

Definition 17.2 The region of absolute stability, \mathscr A, of a numerical method is defined as the set of all z := h\lambda for which the method is absolutely stable for \lambda and step size h.
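To see Definition 17.1 in action, here is a minimal sketch (assuming Python; the helper `euler_test` is our own name, not part of the course code) that applies Euler's method to the test problem with \lambda = -1 and two different step sizes:

```python
# Minimal illustration of Definition 17.1 (hedged sketch): Euler's method
# applied to the test problem u' = lam * u, u(0) = 1. Each step multiplies
# the iterate by (1 + h*lam), so boundedness depends only on z = h*lam.
def euler_test(lam, h, n_steps, u0=1.0):
    u = u0
    for _ in range(n_steps):
        u = u + h * lam * u  # one Euler step
    return u

lam = -1.0
print(abs(euler_test(lam, 0.5, 200)))  # z = -0.5: iterates decay
print(abs(euler_test(lam, 3.0, 200)))  # z = -3.0: iterates blow up
```

Both runs use the same \lambda; only the step size changes, which is exactly the point of Definition 17.2: whether the iterates stay bounded is governed by z = h\lambda.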

17.1 A. RK Methods

Exercise 1. Show that Euler’s method has the following region of absolute stability

\begin{align} \mathscr A = \{ z \in \mathbb C : |z + 1| \leq 1 \}. \nonumber \end{align}

(That is, \mathscr A is the closed ball of radius 1 centered at -1.)

Answer.

Your answer here

Exercise 2. Suppose you apply Euler’s method to

\begin{align} u(0) &= 1 \nonumber\\ u'(t) &= -10 u(t). \nonumber \end{align}

What step size h should you use to ensure the numerical solution remains bounded for large times?

Answer.

Your answer here

Exercise 3. Consider your favourite 2 stage, order 2 Runge-Kutta method. Show that the region of absolute stability is given by

\begin{align} \mathscr A = \left\{ z \in \mathbb C : \big| 1 + z + \tfrac{z^2}{2} \big| \leq 1 \right\}. \nonumber \end{align}

Answer.

Your answer here

It turns out that there exists a stability function R such that

\begin{align} \mathscr A = \{ z\in \mathbb C : |R(z)| \leq 1 \}.\nonumber \end{align}

Notice that for Euler’s method R(z) = 1 + z and all 2-stage, order-2 RK methods have R(z) = 1 + z + \tfrac{z^2}{2}. In general, for an s-stage RK method, R(z) is a polynomial of degree s. If the method has order p, then this polynomial must match the first p+1 terms of the Taylor expansion of e^z. We plot the region of absolute stability for the first few RK methods:
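Such plots can be reproduced with a short grid check. A sketch, assuming numpy is available (the helper name `rk_R` is ours; the grid window is an arbitrary choice):

```python
import math
import numpy as np

# Hedged sketch: R(z) = 1 + z + z^2/2! + ... + z^s/s! is the stability
# function of the classical s-stage, order-s RK methods (s = 1, ..., 4).
def rk_R(z, s):
    return sum(z**k / math.factorial(k) for k in range(s + 1))

# Sample a window of the complex plane; the region is where |R(z)| <= 1.
x, y = np.linspace(-4, 1, 501), np.linspace(-3, 3, 501)
Z = x[None, :] + 1j * y[:, None]

areas = []
for s in range(1, 5):
    inside = np.abs(rk_R(Z, s)) <= 1
    areas.append(inside.mean() * 30)  # window [-4,1] x [-3,3] has area 30
    print(f"RK{s}: estimated region area {areas[-1]:.2f}")
```

For s = 1 the estimate recovers the area \pi of the unit disk centered at -1, and the estimated area grows with s, consistent with the plotted regions.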

17.2 B. Multistep methods

Recall that, for a general multistep method

\begin{align} u_{j+1} &= \alpha_{m-1} u_j + \cdots + \alpha_0 u_{j-m+1} \nonumber\\ &\quad + h \Big[ \beta_m f(t_{j+1},u_{j+1}) + \beta_{m-1} f(t_j,u_j) + \cdots + \beta_0 f(t_{j-m+1},u_{j-m+1}) \Big], \tag{1} \end{align}

we define the generating polynomials

\begin{align} \rho(z) &= z^m - \alpha_{m-1} z^{m-1} - \cdots - \alpha_0 \nonumber\\ \sigma(z) &= \beta_m z^m + \beta_{m-1}z^{m-1} + \dots + \beta_0. \nonumber \end{align}

Exercise 4. Suppose that \zeta is a solution of \rho(\zeta) = h\lambda \sigma(\zeta) and show that u_{j} := \zeta^j solves the difference equation (1).

Answer.

Your answer here

It turns out that if all the roots \zeta_1,\dots,\zeta_m of \rho - h\lambda\sigma are simple, then we can write a general solution to (1) as

\begin{align} u_j = c_1 (\zeta_1)^j + \cdots + c_m (\zeta_m)^j \nonumber \end{align}

for some constants c_1,\dots,c_m. This numerical solution remains bounded if and only if |\zeta_k| \leq 1 for all k = 1,\dots,m.
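This root criterion is easy to check numerically. For instance, \text{(AB2)} has \rho(\zeta) = \zeta^2 - \zeta and \sigma(\zeta) = \tfrac32\zeta - \tfrac12; a sketch assuming numpy (the helper names `ab2_roots` and `ab2_stable` are ours):

```python
import numpy as np

# Hedged sketch for AB2: rho(zeta) = zeta^2 - zeta, sigma(zeta) = (3/2)zeta - 1/2.
# The characteristic polynomial rho(zeta) - z*sigma(zeta), with z = h*lambda, is
# zeta^2 - (1 + 1.5 z) zeta + 0.5 z; the method is absolutely stable at z when
# every root lies in the closed unit disk.
def ab2_roots(z):
    return np.roots([1.0, -(1.0 + 1.5 * z), 0.5 * z])

def ab2_stable(z):
    return bool(np.all(np.abs(ab2_roots(z)) <= 1.0))

lam = -10.0
print(ab2_stable(0.05 * lam))  # z = -0.5: both roots in the unit disk
print(ab2_stable(0.15 * lam))  # z = -1.5: a root leaves the unit disk
```

Note that `np.roots` expects coefficients with the highest degree first, matching how the generating polynomials are written above.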

Exercise 5. Explain why (\text{AB2}) has a bounded region of absolute stability.

Answer.

Your answer here

It turns out that all explicit multistep methods have bounded regions of absolute stability. We now plot regions of absolute stability for (\text{AB1})-(\text{AB4}):
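One common way to produce such plots is the boundary locus method: on the boundary of \mathscr A some root has modulus one, so substituting \zeta = e^{i\theta} into \rho(\zeta) = z\,\sigma(\zeta) and solving for z traces the boundary curve. A sketch, assuming numpy and the standard AB coefficients (the dictionary layout and names are ours):

```python
import numpy as np

# Boundary locus sketch (hedged): z(theta) = rho(e^{i theta}) / sigma(e^{i theta})
# traces the boundary of the absolute stability region of a multistep method.
AB = {  # rho and sigma coefficients, highest degree first
    "AB1": ([1, -1], [1]),
    "AB2": ([1, -1, 0], [3/2, -1/2]),
    "AB3": ([1, -1, 0, 0], [23/12, -16/12, 5/12]),
    "AB4": ([1, -1, 0, 0, 0], [55/24, -59/24, 37/24, -9/24]),
}
theta = np.linspace(0.0, 2.0 * np.pi, 400)
zeta = np.exp(1j * theta)
intercepts = {}
for name, (rho, sigma) in AB.items():
    z = np.polyval(rho, zeta) / np.polyval(sigma, zeta)  # plot z.real vs z.imag
    # theta = pi gives the negative real-axis intercept of the boundary
    intercepts[name] = (np.polyval(rho, -1.0) / np.polyval(sigma, -1.0)).real
    print(name, "real-axis intercept:", round(intercepts[name], 4))
```

The printed intercepts shrink toward 0 as the order increases, consistent with the plotted regions getting smaller from \text{(AB1)} to \text{(AB4)}.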

Here, we plot the errors when approximating

\begin{align} u(0) &= 1 \nonumber\\ u'(t) &= -10 u(t). \nonumber \end{align}

on [0,100] for the methods (\text{AB1})-(\text{AB4}):


Exercise 6. Comment on this error plot with reference to the absolute stability regions of the particular methods.

Answer.

Your answer here

17.3 C. Implicit Methods

As we have seen, explicit methods have bounded regions of absolute stability. A justification for using implicit methods over explicit methods is that they can have larger stability regions. In fact, backward Euler (\text{AM1}) and the implicit trapezoid method (\text{AM2}) have unbounded stability regions:

Exercise 7. Show that backward Euler has the region of absolute stability given by

\begin{align} \mathscr A = \{ z \in \mathbb C : |z - 1| \geq 1 \}. \nonumber \end{align}

(The exterior of the ball of radius 1 centered at 1.)

Answer.

Your answer here

It turns out that the stability region of the implicit trapezoid method is the entire left half plane:


The region of stability for higher-order implicit methods then becomes bounded. Here, we compare them to the (explicit) AB methods (on the same scale):


Exercise 8. What would you expect to see in the convergence plot above (the plot entitled “Convergence of explicit multistep methods”) if we also plotted the error curves for the implicit methods \text{(AM1)}-\text{(AM4)}? Explain your reasoning.

Answer.

Your answer here

17.4 D. Reading

Make sure you are up to date with the reading:

  • Sections 5.1-5.6 & 5.10 of Burden, Faires, and Burden (2015)
Burden, Richard L., Douglas J. Faires, and Annette M. Burden. 2015. Numerical Analysis. 10th ed. CENGAGE Learning.