I encourage you to play around with the Jupyter notebook for this lecture - you can copy the code with the "this notebook" button on the side of this page.
Warning
These notes are mainly a record of what we discussed and are not a substitute for attending the lectures and reading books! If anything is unclear/wrong, let me know and I will update the notes.
Last time, we saw that u(x,t) := u_0(x-ct) solves the linear transport equation (we used the method of characteristics). In A6, you'll implement a finite difference approximation to this problem and look at the so-called CFL condition.
Then, we looked at the heat equation and a forward in time, centred in space (FTCS) finite difference approximation of this problem. Recall that we are looking at a uniform grid with x_i := a + \frac{b-a}{m} i and t_j := \frac{T}{n} j, where h := \frac{b-a}{m} and \tau := \frac{T}{n}. Recall that this approximation can be written as
\begin{align}
u_{i,j+1} = (1 - 2\mu)\, u_{i,j} + \mu \left( u_{i-1,j} + u_{i+1,j} \right) + \tau f(x_i, t_j), \tag{FTCS}
\end{align}
where \mu := \tau (\frac{\alpha}{h})^2 is the diffusion number. Last time, we saw that the method is stable if 0\leq \mu \leq \frac{1}{2} and unstable if \mu > \frac{1}{2}:
Therefore, as h,\tau\to 0, the local truncation error vanishes. That is, the method is consistent with order of accuracy at least O(\tau + h^2). If one fixes \mu = \tau (\frac{\alpha}{h})^2, then the order of accuracy is at least O(h^2). We also saw (numerically) that, when \mu = \frac{1}{6}, the order of accuracy is actually O(h^4):
Exercise 19.1 Show that, if u is a smooth solution of the heat equation u_t = \alpha^2 u_{xx}, then u_{tt} = \alpha^4 u_{xxxx}. Hint: (u_{xx})_t = (u_t)_{xx}.
Therefore, if \mu = \frac{1}{6}, the leading order in the local truncation error is O(h^4).
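The superconvergence at \mu = \frac{1}{6} is easy to check numerically. The sketch below re-implements the FTCS scheme as a minimal helper `fdm_heat_forwards` (a hypothetical name; last lecture's implementation may differ in detail), with f = 0 and homogeneous Dirichlet data, and estimates the order of accuracy by halving h while keeping \mu = \frac{1}{6} fixed:

```julia
# Sketch: estimate the FTCS order of accuracy with μ fixed at 1/6.
# `fdm_heat_forwards` is a hypothetical minimal FTCS solver (f = 0,
# homogeneous Dirichlet boundary data).
function fdm_heat_forwards(m, n, α, u0; a=0.0, b=1.0, T=0.2)
    h = (b - a)/m; τ = T/n; μ = τ*(α/h)^2
    x = collect(a:h:b)
    u = u0.(x)
    for j in 1:n
        unew = copy(u)
        for i in 2:m
            unew[i] = (1 - 2μ)*u[i] + μ*(u[i-1] + u[i+1])
        end
        u = unew            # endpoints stay at the (zero) boundary data
    end
    return x, u
end

u0(x) = sin(π*x)
u_e(x, t) = exp(-π^2*t)*sin(π*x)    # exact solution for α = 1

# keep μ = τ/h² = 1/6: halving h multiplies n by 4
errs = map([(10, 120), (20, 480)]) do (m, n)
    x, u = fdm_heat_forwards(m, n, 1.0, u0)
    maximum(abs.(u .- u_e.(x, 0.2)))
end
order = log2(errs[1]/errs[2])       # ≈ 4 when μ = 1/6 (only ≈ 2 otherwise)
println(order)
```

Halving h should divide the error by roughly 2^4 = 16 here, rather than the factor 4 one would see at a generic value of \mu.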
19.3 (von Neumann) Stability
We want to explain why our FTCS method converges (when 0 \leq \mu \leq \frac{1}{2}). We have shown this method is consistent so we are left with proving stability. We will use the so-called von Neumann stability analysis. We actually did this already for the linear advection-diffusion-reaction equations when we looked at the discrete Fourier transform.
The idea is to suppose that, at time t_j, the numerical solution is a complex exponential and look at the numerical solution at future times. Because finite differences applied to complex exponentials give constant multiples of complex exponentials (i.e. they are eigenvectors of the corresponding differentiation matrices), we can see when this numerical solution remains bounded as t \to \infty.
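The eigenvector property is easy to verify numerically. The short check below (a sketch; the values of k and h are illustrative choices, not from the notes) applies the centred second difference to samples of e^{\mathrm{i}kx} and compares against the predicted multiple (e^{-\mathrm{i}kh} - 2 + e^{\mathrm{i}kh})/h^2 = (2\cos kh - 2)/h^2:

```julia
# Sketch: the centred second difference maps e^{ikx} to a scalar multiple
# of itself. k and h below are illustrative choices.
k, h = 3.0, 0.01
x = collect(0:h:1)
v = exp.(im*k .* x)                                    # samples of e^{ikx}
d2v = (v[1:end-2] .- 2 .* v[2:end-1] .+ v[3:end]) ./ h^2
λ = (2cos(k*h) - 2)/h^2                                # predicted eigenvalue
println(maximum(abs.(d2v .- λ .* v[2:end-1])))         # ≈ 0 (round-off only)
```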
Let u_{i,j} := e^{\mathrm{i} k x_i}. Apply one step of the \text{(FTCS)} method (here, we take f = 0),
\begin{align}
u_{i,j+1} &= (1 - 2\mu) e^{\mathrm{i} k x_i }+ 2\mu \frac{e^{-\mathrm{i} k h} + e^{\mathrm{i} k h}}{2} e^{\mathrm{i} k x_i } \nonumber\\
%
&= \left[1 - 2\mu + 2\mu \cos k h \right] e^{\mathrm{i} k x_i } \nonumber\\
%
&= \left[1 - 4\mu \sin^2 \frac{k h}{2} \right] e^{\mathrm{i} k x_i } \nonumber\\
%
&=: \lambda(k) u_{i,j}.
\end{align}
That is, one step of the method is the same as multiplication by \lambda(k), so u_{i,j+p} = \lambda(k)^p u_{i,j} and the numerical solution remains bounded if and only if |\lambda(k)| \leq 1. In our case, that is if
\begin{align}
\left| 1 - 4\mu \sin^2 \frac{k h}{2} \right| \leq 1 \quad \text{for all } k, \quad \text{i.e., if } 0 \leq \mu \leq \frac{1}{2}.
\end{align}
Therefore, when 0 \leq \mu \leq \frac{1}{2}, the method \text{(FTCS)} is stable. In this case we say the method is conditionally stable because we have the restriction on the diffusion number \mu := \tau (\frac{\alpha}{h})^2.
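One can also scan the amplification factor \lambda(k) = 1 - 4\mu \sin^2 \frac{kh}{2} numerically to see the threshold. The quick sketch below (the values \mu = 0.4 and \mu = 0.6 are illustrative choices) takes the supremum of |\lambda(k)| over the scaled wavenumber \theta = kh:

```julia
# Sketch: sup over wavenumbers of |λ(k)| = |1 - 4μ sin²(kh/2)|, with θ = kh
λmax(μ) = maximum(abs(1 - 4μ*sin(θ/2)^2) for θ in range(0, 2π; length=1001))
println(λmax(0.4))   # ≈ 1.0: stable
println(λmax(0.6))   # ≈ 1.4 > 1: unstable
```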
19.4 An implicit method
One may ask if there is an unconditionally stable finite difference approximation to the heat equation. For this, we replace the simple forwards difference with the (implicit) backwards difference. (Recall from A3 that implicit methods can have larger regions of absolute stability, so this approach seems reasonable!) The resulting scheme is
\begin{align}
(1 + 2\mu)\, u_{i,j+1} - \mu \left( u_{i-1,j+1} + u_{i+1,j+1} \right) = u_{i,j} + \tau f(x_i, t_{j+1}), \tag{BTCS}
\end{align}
where \mu = \tau (\frac{\alpha}{h})^2. Notice that this is an implicit equation for u_{\bullet,j+1}. Again, to fix the initial condition, we set u_{i,0} := u_0(x_i) and, to get the correct boundary conditions, we have u_{0,j} := g_a(t_j) and u_{m,j} := g_b(t_j). The internal degrees of freedom are governed by the linear equation
\begin{align}
\begin{pmatrix}
1 + 2\mu & -\mu & & \\
-\mu & 1 + 2\mu & \ddots & \\
& \ddots & \ddots & -\mu \\
& & -\mu & 1 + 2\mu
\end{pmatrix}
u_{\bullet,j+1}
= u_{\bullet,j} + \tau f_{\bullet,j+1} + \mu
\begin{pmatrix} g_a(t_{j+1}) \\ 0 \\ \vdots \\ 0 \\ g_b(t_{j+1}) \end{pmatrix},
\end{align}
where this matrix is (m-1)\times(m-1) and u_{\bullet,j} := \{ u_{i,j} \}_{i=1}^{m-1}. Solving this linear system of equations gives u_{\bullet, j+1}. We implement this method here:
```julia
using LinearAlgebra, Plots, LaTeXStrings

function fdm_heat_backwards(m, n, α, u0; a=0., b=1., g_a=0., g_b=0., T=.2, warn=true)
    # space discretisation
    h = (b - a)/m
    x = collect(a:h:b)
    # time discretisation
    τ = T/n
    t = collect(0:τ:T)
    μ = τ*(α/h)^2
    u = zeros(m+1, n+1)
    u[:, 1] = u0.(x)
    for j ∈ 1:n
        A = Tridiagonal(fill(-μ, m-2), fill(1 + 2μ, m-1), fill(-μ, m-2))
        # boundary data enter the implicit system at the new time level t[j+1]
        v = zeros(m-1)
        v[1]   = typeof(g_a) <: Function ? g_a(t[j+1]) : g_a
        v[end] = typeof(g_b) <: Function ? g_b(t[j+1]) : g_b
        u[2:m, j+1] = A \ (u[2:m, j] + μ*v)
        u[1, j+1]   = typeof(g_a) <: Function ? g_a(t[j+1]) : g_a
        u[m+1, j+1] = typeof(g_b) <: Function ? g_b(t[j+1]) : g_b
    end
    return x, t, u
end

# initial condition
u0(x) = sin(π*x)
# exact solution
u_e(x, t) = exp(-(π^2)*t) * sin(π*x)

x, t, u = fdm_heat_backwards(50, 1000, 1., u0)
μ = (.2/1000)*(50)^2

m = 51
x′, t′, u′ = fdm_heat_backwards(m, 1000, 1., u0)
μ′ = (.2/1000)*(m)^2

anim = @animate for j ∈ 1:10:length(t)
    scatter(x, u[:, j]; title=L"Heat equation $u_t - u_{xx} = 0$ (implicit scheme)",
            label="μ = $μ, t = $(t[j])", ylims=[0, 1], m=2)
    plot!(x′, u′[:, j]; label="μ = $(round(μ′, digits=2)), t = $(t[j])", m=2)
    plot!(x -> u_e(x, t[j]); label="exact solution", linestyle=:dot)
end
gif(anim, "pics/heat-4.gif")
```
Exercise 19.2 Why is this method computationally feasible despite having to solve a linear system of equations at each timestep? What is the computational cost to solve the linear system?
Exercise 19.3 Complete a von Neumann stability analysis on the method \text{(BTCS)}. Hint: Follow the same procedure as before: Suppose u_{i,j} := e^{\mathrm{i} k x_i} and u_{i,j+1} = \lambda(k) e^{\mathrm{i} k x_i}. What is |\lambda(k)|?
Exercise 19.4 Is there a choice of \mu for which \text{(BTCS)} is O(h^4) accurate?
Suppose that u is smooth. Since u(x_i,t_{j}) = u(x_i, t_{j+1} - \tau) = u(x_i, t_{j+1}) - \tau u_t(x_i, t_{j+1}) + \frac{\tau^2}{2} u_{tt}(x_i, t_{j+1}) - \ldots, we have
\begin{align}
\frac{u(x_i,t_{j+1}) - u(x_i,t_j)}{\tau} - \alpha^2 \frac{u(x_{i-1},t_{j+1}) - 2 u(x_i,t_{j+1}) + u(x_{i+1},t_{j+1})}{h^2}
= u_t - \alpha^2 u_{xx} - \frac{\tau}{2} u_{tt} - \frac{\alpha^2 h^2}{12} u_{xxxx} + \ldots
\end{align}
If u solves the equation u_t - \alpha^2 u_{xx}=0, then we have u_{tt} = (u_t)_t = (\alpha^2 u_{xx})_t = \alpha^2 (u_t)_{xx} = \alpha^4 u_{xxxx} and the local truncation error is
\begin{align}
- \alpha^2 h^2 \left( \frac{\mu}{2} + \frac{1}{12} \right) u_{xxxx} + O(\tau^2 + h^4),
\end{align}
evaluated at (x_i,t_{j+1}), where \mu = \tau (\tfrac{\alpha}{h})^2. Now, since \mu is positive, we cannot choose h, \tau to cancel the leading order contribution. Therefore, unlike in the explicit scheme, we cannot expect faster convergence in general.
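This can be confirmed numerically: even with \mu = \frac{1}{6}, the implicit scheme stays second order. The sketch below uses a stripped-down, hypothetical helper `btcs_heat` (f = 0 and homogeneous Dirichlet data, mirroring `fdm_heat_backwards` above) and estimates the order of accuracy with \mu fixed at \frac{1}{6}:

```julia
using LinearAlgebra

# Sketch: BTCS order check with μ fixed at 1/6 (`btcs_heat` is a hypothetical
# minimal helper with f = 0 and homogeneous Dirichlet boundary data)
function btcs_heat(m, n; α=1.0, T=0.2)
    h = 1/m; τ = T/n; μ = τ*(α/h)^2
    x = collect(0:h:1)
    u = sin.(π .* x)                       # initial condition u0(x) = sin(πx)
    A = Tridiagonal(fill(-μ, m-2), fill(1 + 2μ, m-1), fill(-μ, m-2))
    for j in 1:n
        u[2:m] = A \ u[2:m]                # boundary contribution is zero here
    end
    return x, u
end

errs = map([(10, 120), (20, 480)]) do (m, n)   # both choices give μ = 1/6
    x, u = btcs_heat(m, n)
    maximum(abs.(u .- exp(-π^2*0.2) .* sin.(π .* x)))
end
order = log2(errs[1]/errs[2])
println(order)   # ≈ 2: no superconvergence, unlike FTCS at μ = 1/6
```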
Exercise 19.5 (Crank-Nicolson) Write the methods we discussed today as
\begin{align}
\frac{u_{i,j+1} - u_{i,j}}{\tau} &= \alpha^2 \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2} + f(x_i, t_j) && \text{(FTCS)}, \\
\frac{u_{i,j+1} - u_{i,j}}{\tau} &= \alpha^2 \frac{u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1}}{h^2} + f(x_i, t_{j+1}) && \text{(BTCS)}.
\end{align}
Taking the average of these methods gives the Crank-Nicolson method. Write this method down and show that it is unconditionally stable. What is the order of accuracy of this method?
Next:
Numerical algorithms for the wave equation
Tip (TO-DO)
Reading for the rest of the course: Driscoll and Braun (2018) 11.1, 11.2, 12.4
Burden, Richard L., Douglas J. Faires, and Annette M. Burden. 2015. Numerical Analysis. 10th ed. CENGAGE Learning.
Driscoll, Tobin A., and Richard J. Braun. 2018. Fundamentals of Numerical Computation. https://fncbook.com/.
Olver, Peter J. 2014. Introduction to Partial Differential Equations. Undergraduate Texts in Mathematics. Cham: Springer. https://doi.org/10.1007/978-3-319-02099-0.
Süli, Endre, and David F. Mayers. 2012. An Introduction to Numerical Analysis. Cambridge, England: Cambridge University Press.