u0 = 1.; f(t,u) = -10.0u; T = 100.; h1=.2; h2=.201
u, v = Euler( u0, f, T, h1 ), Euler( u0, f, T, h2 )
scatter( [0:h1:T, 0:h2:T], [u, v], label=["Euler (h=$h1)" "Euler (h=$h2)"], m=3 )

23 A3: Solutions

Assignment 3: Absolute stability of Runge-Kutta and multistep methods
Section 5.11 of Burden, Faires, and Burden (2015) may be useful.
In lectures, we have discussed zero-stability of numerical methods for solving IVPs. We have seen that Runge-Kutta methods and linear multistep methods that satisfy the root condition are zero-stable. Therefore, these methods are convergent: for small enough step size, the numerical approximation is “close” to the exact solution. This notion does not take into account how small the step size has to be in order to achieve a reasonable error. Absolute stability, on the other hand, concerns a fixed step size. Roughly speaking, a numerical scheme is absolutely stable for a given step size if u_j remains bounded as j\to\infty.
To introduce this concept, we need the following test problem: for \lambda \in \mathbb C,
\begin{align} u(0) &= u_0 \nonumber\\ u'(t) &= \lambda u(t). \tag{test} \end{align}
Notice that the exact solution to this equation is u(t) = u_0 e^{\lambda t} and so, if \lambda = x + i y, then u(t) = u_0 e^{x t} \big[ \cos(y t) + i \sin(y t) \big]. This solution remains bounded if x \leq 0 and diverges when x > 0. It turns out that analysing how well numerical schemes can approximate this problem is informative:
Definition 23.1 We say that a numerical scheme is absolutely stable for a given \lambda \in \mathbb C and step size h if the numerical approximation, u_j, to \text{(test)} remains bounded as j \to \infty.
Definition 23.2 The region of absolute stability, \mathscr A, of a numerical method is defined as the set of all z := h\lambda for which the method is absolutely stable for \lambda and step size h.
23.1 A. RK Methods
Exercise 1. Show that Euler’s method has the following region of absolute stability
\begin{align} \mathscr A = \{ z \in \mathbb C : |z + 1| \leq 1 \}. \nonumber \end{align}
(That is, \mathscr A is a ball of radius 1 centered at -1)
Answer.
Notice that the numerical solution given by Euler’s method is
\begin{align} u_{j+1} &= u_j + h f(t_j, u_j) \nonumber\\ % &= (1 + h\lambda) u_j \nonumber\\ % &= \cdots = (1 + h\lambda)^{j+1} u_0. \end{align}
This remains bounded as j\to\infty if |1 + h\lambda| \leq 1.
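This geometric growth is easy to observe numerically. Below is a minimal sketch in Julia (the helper name `euler_final` is ours, not from the notes) that iterates u_{j+1} = (1 + h\lambda)u_j and returns |u_J|:

```julia
# Iterate Euler's method on u'(t) = λ u(t), u(0) = 1, and return |u_J|.
# (Helper name `euler_final` is ours, not from the notes.)
function euler_final(λ, h, J)
    u = 1.0 + 0.0im
    for _ in 1:J
        u = (1 + h*λ) * u    # one Euler step for f(t, u) = λu
    end
    return abs(u)
end

# z = hλ = -1 lies inside 𝒜 (|z + 1| = 0 ≤ 1): the iterates decay.
euler_final(-1.0, 1.0, 100)   # 0.0
# z = hλ = -3 lies outside 𝒜 (|z + 1| = 2 > 1): the iterates blow up.
euler_final(-1.0, 3.0, 100)   # 2^100 ≈ 1.27e30
```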
Exercise 2. Suppose you apply Euler’s method to
\begin{align} u(0) &= 1 \nonumber\\ u'(t) &= -10 u(t). \nonumber \end{align}
What step size h should you use to ensure the numerical solution remains bounded for large times?
Answer.
Here, \lambda = -10 and the numerical solution remains bounded if -1 \leq 10h - 1 \leq 1, i.e. 0 \leq h \leq \frac15. We plot the numerical solution with h = \frac15 and h = \frac15 + \varepsilon for \varepsilon small:
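In terms of the per-step growth factor |1 + h\lambda| = |1 - 10h|, a quick numerical check (`growth` is our name, not from the notes):

```julia
# Per-step growth factor |1 + hλ| of Euler's method for λ = -10.
growth(h) = abs(1 - 10h)

growth(0.2)     # 1.0 — on the boundary of 𝒜; the solution stays bounded
growth(0.201)   # ≈ 1.01 — just outside 𝒜; |u_j| ≈ 1.01^j grows without bound
```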
Exercise 3. Consider your favourite 2 stage, order 2 Runge-Kutta method. Show that the region of absolute stability is given by
\begin{align} \mathscr A = \left\{ z \in \mathbb C : \big| 1 + z + \tfrac{z^2}{2} \big| \leq 1 \right\}. \nonumber \end{align}
Answer.
Let’s look at a general 2 stage, order 2 RK method: for b \not= 0, we have
\begin{align} k_1 &= h f(t_j, u_j) \nonumber\\ k_2 &= h f\big(t_j+\tfrac{h}{2b}, u_j + \tfrac{k_1}{2b}\big) \nonumber\\ u_{j+1} &= u_j + (1-b) k_1 + b k_2. \end{align}
Applying this method to the equation u'(t) = \lambda u(t), we obtain:
\begin{align} u_{j+1} &= u_j + (1-b) h \lambda u_j + b h \lambda \big( u_j + \tfrac{h \lambda u_j}{2b}\big) \nonumber\\ % &= \left( 1 + h \lambda + \frac{(h \lambda)^2}{2} \right) u_j \nonumber\\ % &= \cdots \nonumber\\ % &= \left( 1 + h \lambda + \frac{(h \lambda)^2}{2} \right)^{j+1} u_0. \end{align}
As a result, the region of absolute stability is given by \mathscr A = \{ z\in \mathbb C : |1 + z + \tfrac{z^2}{2}| \leq 1 \}.
It turns out that there exists a stability function R such that
\begin{align} \mathscr A = \{ z\in \mathbb C : |R(z)| \leq 1 \}.\nonumber \end{align}
Notice that for Euler’s method R(z) = 1 + z and all 2 stage order 2 RK methods have R(z) = 1 + z + \tfrac{z^2}{2}. In general, for an s-stage RK method, R(z) is a polynomial of degree s. If the method has order p, then this polynomial must match the first p+1 terms of the Taylor expansion of e^z. We plot the regions of absolute stability for the first few RK methods:
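One way to produce such a picture (a sketch, not the code used for the plot above) is to sample |R(z)| on a grid in the complex plane, with R the degree-s Taylor polynomial of e^z:

```julia
# Stability function of the s-stage, order-s RK methods (s = 1,…,4):
# the degree-s Taylor polynomial of e^z.
R(z, s) = sum(z^k / factorial(k) for k in 0:s)

# Count grid points with |R(z)| ≤ 1; the count grows with s,
# reflecting the nested stability regions in the plot.
xs, ys = -5:0.05:2, -4:0.05:4
inside(s) = count(abs(R(x + im*y, s)) ≤ 1 for x in xs, y in ys)
```

Feeding the Boolean grid `[abs(R(x + im*y, s)) ≤ 1 for y in ys, x in xs]` to `heatmap` or `contour` from Plots.jl reproduces a figure like the one above.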
23.2 B. Multistep methods
Recall that, for a general multistep method
\begin{align} u_{j+1} &= \alpha_{m-1} u_j + \cdots + \alpha_0 u_{j-m+1} \nonumber\\ % &\,\, + h \Big[ \beta_m f(t_{j+1},u_{j+1}) + \beta_{m-1} f(t_j,u_j) + \cdots + \beta_0 f(t_{j-m+1},u_{j-m+1}) \Big], \end{align}
we define the generating polynomials
\begin{align} \rho(z) &= z^m - \alpha_{m-1} z^{m-1} - \cdots - \alpha_0 \nonumber\\ \sigma(z) &= \beta_m z^m + \beta_{m-1}z^{m-1} + \dots + \beta_0. \nonumber \end{align}
Exercise 4. Suppose that \zeta is a solution of \rho(\zeta) = h\lambda \sigma(\zeta) and show that u_{j} := \zeta^j solves the difference equation (1).
Answer.
The difference equation is equivalent to
\begin{align} &u_{j+1} - \alpha_{m-1} u_j - \cdots - \alpha_0 u_{j-m+1} \nonumber\\ % &= h \Big[ \beta_m f(t_{j+1},u_{j+1}) + \beta_{m-1} f(t_j,u_j) + \cdots + \beta_0 f(t_{j-m+1},u_{j-m+1}) \Big]. \end{align}
When u_{j} := \zeta^j, we have
\begin{align} &u_{j+1} - \alpha_{m-1} u_j - \cdots - \alpha_0 u_{j-m+1} \nonumber\\ % &= \zeta^{j+1} - \alpha_{m-1} \zeta^j - \cdots - \alpha_0 \zeta^{j-m+1} \nonumber\\ % &= \zeta^{j-m+1} \rho(\zeta). \end{align}
Similarly, when u_{j} := \zeta^j and f(t,u) = \lambda u, we have
\begin{align} &h \Big[ \beta_m f(t_{j+1},u_{j+1}) + \beta_{m-1} f(t_j,u_j) + \cdots + \beta_0 f(t_{j-m+1},u_{j-m+1}) \Big] \nonumber\\ % &= h\lambda \Big[ \beta_m u_{j+1} + \beta_{m-1} u_j + \cdots + \beta_0 u_{j-m+1} \Big] \nonumber\\ % &= h\lambda \zeta^{j-m+1} \Big[ \beta_m \zeta^m + \beta_{m-1} \zeta^{m-1} + \cdots + \beta_0 \Big] \nonumber\\ % &= \zeta^{j-m+1} h\lambda \sigma(\zeta). \end{align}
It turns out that if all the roots \zeta_1,\dots,\zeta_m of \rho - h\lambda\sigma are simple, then we can write a general solution to (1) as
\begin{align} u_j = c_1 (\zeta_1)^j + \cdots + c_m (\zeta_m)^j \nonumber \end{align}
for some constants c_1,\dots,c_m. This numerical solution remains bounded (for every choice of constants) if and only if |\zeta_i| \leq 1 for all i = 1,\dots,m.
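As a concrete sketch (helper names are ours, not from the notes): for AB2, which appears in the next exercise, \rho - z\sigma is quadratic, so its roots can be computed in closed form and the boundedness condition tested directly.

```julia
# Roots of ρ(ζ) - zσ(ζ) for AB2, where ρ(ζ) = ζ² - ζ, σ(ζ) = (3ζ - 1)/2,
# and z = hλ.  The quadratic is ζ² - (1 + 3z/2)ζ + z/2 = 0.
function ab2_roots(z)
    b = 1 + 3z/2
    d = sqrt(complex(b^2 - 2z))    # discriminant b² - 4·(z/2)
    return ((b + d)/2, (b - d)/2)
end

# z ∈ 𝒜 iff both roots satisfy |ζ| ≤ 1.
ab2_stable(z) = all(abs(ζ) ≤ 1 for ζ in ab2_roots(z))
```

For real z this reproduces the known real stability interval [-1, 0] of AB2: `ab2_stable(-0.5)` is true while `ab2_stable(-2.0)` is false.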
Exercise 5. Explain why (\text{AB2}) has a bounded region of absolute stability.
Answer.
AB2 has
\begin{align} \rho(z) &= z^2 - z \nonumber\\ \sigma(z) &= \frac{1}{2}\big( 3z - 1 \big). \end{align}
We therefore have h\lambda \in \mathscr A if and only if all solutions \zeta of
\begin{align} \rho(\zeta) = h\lambda \sigma(\zeta) \end{align}
satisfy |\zeta|\leq 1.
One can see that, for large h\lambda, in order to keep \zeta^2-\zeta = h\lambda\sigma(\zeta) small (and thus |\zeta| \leq 1) we require \sigma(\zeta) = \frac{3\zeta - 1}{2} \approx 0, giving one solution \zeta \approx \frac{1}{3}. However, for large \zeta, we have \rho(\zeta) \sim \zeta^2 and h\lambda \sigma(\zeta) \sim \frac{3}{2} h\lambda \zeta. Thus there is a second solution with \zeta \sim \frac{3}{2} h\lambda. This means that h\lambda \not\in \mathscr A for all large |h\lambda|.
(I know this proof isn’t rigorous - if you know some complex analysis, I can give you the rigorous version! When questions say “Explain why…” (rather than “Prove…” or “Show that…”), I’m not expecting a fully rigorous argument.)
It turns out that all explicit multistep methods have bounded regions of absolute stability. We now plot regions of absolute stability for (\text{AB1})-(\text{AB4}):
Here, we plot the errors when approximating
\begin{align} u(0) &= 1 \nonumber\\ u'(t) &= -10 u(t) \end{align}
on [0,100] for the methods (\text{AB1})-(\text{AB4})
Exercise 6. Comment on this error plot with reference to the absolute stability regions for the particular methods.
Answer.
We can see that, eventually, the error curves follow the predicted asymptotic rates. However, in the pre-asymptotic regime we see different behaviour that is determined by the region of absolute stability of the various methods. When -10 h is in the region of absolute stability (recall that h is inversely proportional to n), the numerical solution is bounded and thus the error between the numerical and exact solutions is also bounded. Since the region of absolute stability for AB1 is the largest out of AB1-AB4, one can choose a smaller n (larger h) and still obtain a bounded numerical solution.
23.3 C. Implicit Methods
As we have seen, explicit methods have bounded regions of absolute stability. A justification for using implicit methods over explicit methods is that they can have larger stability regions. In fact, backward Euler (\text{AM1}) and the implicit trapezoid method (\text{AM2}) have unbounded stability regions:
Exercise 7. Show that backward Euler has the region of absolute stability given by
\begin{align} \mathscr A = \{ z \in \mathbb C : |z - 1| \geq 1 \}. \end{align}
(Exterior of a ball of radius 1 centered at 1)
Answer.
Backward Euler is of the form u_{j+1} = u_j + h f(t_{j+1}, u_{j+1}). For our particular test problem, we have
\begin{align} u_{j+1} = u_j + h\lambda u_{j+1}. \end{align}
That is,
\begin{align} u_{j+1} &= (1-h\lambda)^{-1} u_j \nonumber\\ % &= (1-h\lambda)^{-2} u_{j-1} = \cdots \nonumber\\ % &= (1-h\lambda)^{-(j+1)} u_{0}. \end{align}
Therefore, if |(1-h\lambda)^{-1}| \leq 1, this numerical solution remains bounded as j\to\infty. That is, we require |1-h\lambda| \geq 1.
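Numerically, with \lambda = -10 from Exercise 2 (`be_factor` is our name, not from the notes):

```julia
# Backward Euler growth factor |1 - hλ|⁻¹ for λ = -10: less than 1
# for EVERY step size h > 0, so no step-size restriction is needed.
be_factor(h) = abs(1 / (1 + 10h))    # λ = -10  ⇒  1 - hλ = 1 + 10h

be_factor(0.2)    # 1/3
be_factor(10.0)   # 1/101 — even a huge step decays
```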
It turns out that the stability region of the implicit trapezoid method is the entire left half plane:
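A quick check of this claim: the same computation as in Exercise 7, applied to the implicit trapezoid rule u_{j+1} = u_j + \tfrac{h}{2}\big[f(t_j,u_j) + f(t_{j+1},u_{j+1})\big], gives the stability function R(z) = (1 + z/2)/(1 - z/2), whose modulus is at most 1 exactly on the left half plane.

```julia
# Stability function of the implicit trapezoid method (AM2):
# u_{j+1}(1 - z/2) = u_j(1 + z/2)  ⇒  R(z) = (1 + z/2)/(1 - z/2).
Rtrap(z) = (1 + z/2) / (1 - z/2)

abs(Rtrap(-100.0 + 50.0im))   # deep in the left half plane: ≤ 1
abs(Rtrap(0.1 + 0.0im))       # just right of the imaginary axis: > 1
```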
The region of stability for higher order implicit methods then becomes bounded. Here, we compare them to the (explicit) AB methods (on the same scale):
Exercise 8. Explain what you would expect to see in the convergence plot above (the plot entitled “Convergence of explicit multistep methods”) if we also plotted the error curves for the implicit methods \text{(AM1)}-\text{(AM4)}.
Answer.
We would expect the methods to remain bounded (and so have bounded error) for larger step sizes (smaller n). We can see this here:
plot( P, P′, size=(1000,500))