27  A8: Solving Poisson’s equation in 1d

using Plots
using LaTeXStrings
using Polynomials
using LinearAlgebra

"""
    ChebyshevNodes( n; type=2) 

Returns the ``n+1`` Chebyshev nodes ``X = \\{x_0, ..., x_n\\}`` of type `type` (the default being type 2)
"""
function ChebyshevNodes( n; type=2) 
    if (type == 1)
        return @. cos( (2*(n:-1:0)+1)*pi/(2*(n+1)) )
    else
        return @. cos( (pi/n)*(n:-1:0) )
    end
end

"""
    ChebyshevInterpolant( f, n ) 

Returns the polynomial of degree ``n`` interpolating ``f`` at Chebyshev nodes using the Barycentric formula
"""
function ChebyshevInterpolant( f, n ) 
    x = ChebyshevNodes(n)
    y = f.(x)

    λ = zeros(n+1)
    λ[1] = 1/2
    λ[n+1] = (-1)^n/2
    for j in 1:(n-1)
        λ[j+1] = (-1)^j
    end

    return t -> t in x ? f(t) : sum( @. λ * y / (t - x) ) / sum( @. λ / (t - x) )
end

"""
    Lagrange( i, n ) 

Returns the ``i``-th Lagrange basis polynomial ``\\ell_i`` for the ``n+1`` Chebyshev nodes, fitted in the Chebyshev basis
"""
function Lagrange( i, n ) 
    x = ChebyshevNodes(n)
    y = [ j==i ? 1 : 0 for j=0:n ]

    return fit( ChebyshevT, x, y )
end

mesh = -1:0.001:1;

For a fixed forcing f:[-1,1] \to \mathbb R, we consider Poisson’s equation with Dirichlet boundary conditions:

\begin{align} -u''(x) &= f(x) \\ u(-1) &= u_- \\ u(+1) &= u_+. \end{align}

(Poisson’s equation in dimension d is given by - \Delta u := - \sum_{i=1}^d \frac{\partial^2u}{\partial x_i^2} = f. Here, \Delta is the Laplacian and -\Delta u = 0 is known as Laplace’s equation.)

This equation appears in different areas of applied mathematics and engineering. For example, in steady-state heat conduction, u(x) models the temperature along a rod and f models an applied heat source.

The boundary condition states that the temperature (say) at the ends of the rod is fixed and equal to u_{\pm}.

In the following questions we will solve this boundary value problem numerically in the case that u_- = u_+ = 0:

  1. Suppose that \{\phi_{j}\}_{j=1}^{n-1} is a set of functions satisfying the boundary conditions \phi_j(-1) = \phi_j(1) = 0. Show that u(x) = \sum_{j=1}^{n-1} c_j \phi_j(x) satisfies the boundary conditions u(-1) = u(1) = 0 for any choice of coefficients \{c_j\}.

Answer.

We have u(\pm 1) = \sum_{j=1}^{n-1} c_j \phi_j(\pm 1) = \sum_{j=1}^{n-1} c_j \cdot 0 = 0.

  2. Show that if we assume u = \sum_{j=1}^{n-1} c_j \phi_j solves the differential equation -u'' = f, then

\begin{align} \sum_{j=1}^{n-1} \int_{-1}^1 \phi_i'(x) \phi_j'(x) \mathrm{d}x \,\, c_j &= \int_{-1}^1 \phi_i(x) f(x) \mathrm{d}x \end{align}

for all i = 1,2, \dots, n-1. Hint: multiply -u'' = f by \phi_i and integrate (this is known as a weak formulation of the problem).

Answer.

Multiplying by \phi_i for i=1,\dots,n-1 and integrating, we obtain

\begin{align} \int_{-1}^{+1} \phi_i(x) f(x) \mathrm{d}x &= \int_{-1}^{+1} \phi_i(x) \big( -u''(x) \big) \mathrm{d}x \nonumber\\ &=\int_{-1}^{+1} \phi_i(x) \bigg( - \sum_{j=1}^{n-1} c_j \phi_j''(x) \bigg) \mathrm{d}x \nonumber\\ &=\sum_{j=1}^{n-1} -\int_{-1}^{+1} \phi_i(x) \phi_j''(x) \mathrm{d}x \,\, c_j \nonumber\\ &=\sum_{j=1}^{n-1} \left[ - \phi_i(x) \phi_j'(x) \Big|_{-1}^{+1} + \int_{-1}^{+1} \phi_i'(x) \phi_j'(x) \mathrm{d}x \right] c_j \nonumber\\ &= \sum_{j=1}^{n-1} \int_{-1}^{+1} \phi_i'(x) \phi_j'(x) \mathrm{d}x \,\, c_j . \end{align}

Here, we have integrated by parts and used the fact that \phi_i(\pm1) = 0.

Notice that this equation can be written as A \bm c = \bm b with

\begin{align} A_{ij} &:= \int_{-1}^{+1} \phi_i'(x) \phi_j'(x) \mathrm{d}x \\ b_i &:= \int_{-1}^{+1} \phi_i(x) f(x) \mathrm{d}x. \end{align}

We may therefore approximate the solution of Poisson’s equation by solving A \bm c = \bm b (e.g. using Julia’s built-in solver A \ b, which performs an LU decomposition) and expanding u(x) = \sum_{j=1}^{n-1} c_j \phi_j(x).

In the following, we consider Poisson’s equation with forcing f(x) = -x.

  3. Show that the function u(x) = \frac{1}{6} x(x^2-1) solves the boundary value problem -u''(x) = -x with u(-1) = u(+1) = 0.

Answer.

We have u(\pm1) = \pm\frac16 ( 1 - 1 ) = 0 and u'(x) = \frac{1}{6} (x^2-1) + \frac16 x( 2x ) and thus -u''(x) = -\frac{1}{6}( 2x + 4x ) = -x.
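As a quick numerical sanity check (not part of the derivation), we can also confirm that this function satisfies the boundary conditions and that a central finite difference reproduces -u'' = -x; since the solution is a cubic, the central difference is exact up to rounding. The names u_exact, d2u and the step δ below are ad hoc choices:

```julia
# Check u_exact(x) = x(x^2-1)/6 against the boundary value problem.
u_exact(x) = (1/6)*x*(x^2 - 1)
δ = 1e-4                                                       # finite-difference step (arbitrary)
d2u(x) = (u_exact(x+δ) - 2u_exact(x) + u_exact(x-δ)) / δ^2     # central difference; exact for cubics
@assert u_exact(-1) == 0 && u_exact(1) == 0                    # boundary conditions
@assert maximum(x -> abs(-d2u(x) + x), -0.9:0.1:0.9) < 1e-6    # -u'' ≈ -x on a test grid
```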

In general, a closed-form expression for the solution is unavailable and one has to implement numerical schemes to find the solution. In this case, we choose an example that we can compute analytically so that we can analyse the errors in our approximation schemes.

Therefore, all that is left to do is choose a basis \{ \phi_j \} and solve the system A \bm c = \bm b. Notice that the Poisson equation may be well-conditioned, but our choice of implementation (our basis \{\phi_j\}) may be unstable (as we will now see).

27.0.1 Lagrange polynomials

Fix a set of distinct nodes X = \{x_0, \dots, x_n\} with x_0 = -1 and x_n = +1 and recall that the Lagrange polynomial \ell_j is the degree n polynomial such that \ell_j(x_j) = 1 and \ell_j(x_i) = 0 for all i\not= j. Notice that for all j = 1,\dots,n-1 we have \ell_j(-1) = \ell_j(+1) = 0 and so the approximation u(x) = \sum_{j=1}^{n-1} c_j \ell_j(x) satisfies the boundary conditions.

Here, we construct A and \bm b and solve A \bm c = \bm b numerically using this basis set with X the set of Chebyshev nodes:

n = 50

# data 
f = Polynomial([0,-1])

# In this particular case we know the exact solution U
U = x-> (1/6)*x*(x^2-1) 

X = ChebyshevNodes( n )[2:end-1]
A = zeros( n-1, n-1 )
b = zeros( n-1 )

# Store Lagrange polynomials (we don't need ℓ₀, ℓₙ)
ℓ = [ Lagrange( 1, n ) ];
for i = 2:(n-1)
    push!( ℓ, Lagrange( i, n ) )
end

# populate A, b (this is less efficient than it could be!)
for i = 1:(n-1) 
    b[i] = integrate( ℓ[i] * f, -1, 1 )
    for j=1:(n-1)
        A[i,j] = integrate( derivative( ℓ[i] ) * derivative( ℓ[j] ), -1, 1)
    end
end
c = A \ b
u = sum( c .* ℓ )

println("error in max norm (n=$n) = ", maximum( x->abs(u(x)-U(x)), mesh ) )
println("cond(A) = ", cond(A))

plt = plot(U, -1, 1, label="exact solution")
plot!(plt, u, label="approximate solution, n = $n")
plot!(plt, x->f(x)/30, linestyle=:dash, label="forcing, f" )
error in max norm (n=50) = 0.20162103242306634
cond(A) = 7650.072007254484
  4. Experiment with the above code by changing the number of basis functions n. Is this choice of basis \{\ell_j\} numerically stable?

Answer.

Choosing n \approx 50, we see that this implementation is unstable.

27.1 Hat functions

Now fix X = \{x_0< \dots< x_n\} to be the set of equispaced points on [-1,1] with x_0= -1 and x_n = 1 and notice that x_{i+1}-x_i = \frac{2}{n}. Consider the hat functions h_j defined as the functions that are linear in the intervals [x_i, x_{i+1}] for all i and such that h_j(x_j) = 1 and h_j(x_i) = 0 for all i \not= j. Here, we plot two of these functions:

n = 10;
X = -1:(2/n):1

scatter( X, zeros(n+1), label=L"X", color="black" )
plot!( [X[1],X[4], X[5], X[6], X[n+1]], [0, 0, 1, 0, 0], label=L"h_{4}")
plot!( [X[1], X[5], X[6], X[7], X[n+1]], [0, 0, 1, 0, 0], label=L"h_{5}")
  5. Show that

\begin{align} h_i(x) &= \frac{n}{2}\begin{cases} x - x_{i-1} & \text{if } x \in [x_{i-1}, x_i]\\ x_{i+1} - x & \text{if } x \in [x_{i}, x_{i+1}]\\ 0 &\text{otherwise} \end{cases} \end{align}

Answer.

We can see that h_i as defined in Question 5 is linear on each interval [x_j,x_{j+1}]. Moreover, we have

\begin{align} h_i(x_j) &= \frac{n}{2}\begin{cases} x_j - x_{i-1} & \text{if } j \in \{i-1, i\} \\ x_{i+1} - x_j & \text{if } j \in \{i, i+1\}\\ 0 &\text{otherwise}. \end{cases} \end{align}

Therefore, when j = i, the first line in the formula gives \frac{n}{2}( x_i - x_{i-1} ) = \frac{n}{2} \frac2n = 1 (which is the same as the formula given by the second line). Moreover, when j = i \pm 1, we get h_i(x_{i-1}) = \frac{n}2 ( x_{i-1} - x_{i-1} ) = 0 or h_i(x_{i+1}) = \frac{n}2 (x_{i+1} - x_{i+1}) = 0.
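This can also be checked mechanically: the sketch below (with the hypothetical local names node and hat) evaluates the formula at every node and confirms h_i(x_j) = δ_ij up to rounding:

```julia
n = 10
node(j) = -1 + 2j/n                        # equispaced nodes x_0, …, x_n
function hat(i, t)                         # the formula from Question 5
    if node(i-1) <= t <= node(i)
        return (n/2)*(t - node(i-1))
    elseif node(i) < t <= node(i+1)
        return (n/2)*(node(i+1) - t)
    else
        return 0.0
    end
end
# h_i(x_j) should be 1 when j = i and 0 otherwise (here for i = 4)
@assert all(abs(hat(4, node(j)) - (j == 4 ? 1 : 0)) < 1e-12 for j in 0:n)
```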

We now use the basis set \{h_j\} and expand u(x) = \sum_{j=1}^{n-1} c_j h_j(x). Since h_j(-1) = h_j(1) = 0 for all j=1,\dots,n-1, u satisfies the boundary conditions. Again, we find the coefficients \{c_j\} by solving A \bm c = \bm b where

\begin{align} A_{ij} &= \int_{-1}^1 h_i'(x) h_j'(x) \mathrm{d}x \\ % b_{i} &= \int_{-1}^1 h_i(x) f(x) \mathrm{d}x. \end{align}

One may construct these integrals in the same way as we did before for the Lagrange polynomial basis but it turns out they have a very simple form:

  6. Show that

\begin{align} A &= -\frac{n}{2} \begin{pmatrix} -2 & 1 & \\ 1 & -2 & 1 \\ & 1 & -2 & \ddots \\ & & \ddots &\ddots & 1 \\ & & & 1 & -2 \end{pmatrix} \end{align}

Remark: -A is the (scaled) discrete Laplacian.

Answer.

First, we calculate the derivative of h_i:

\begin{align} h_i'(x) &= \frac{n}{2}\begin{cases} 1 & \text{if } x \in [x_{i-1}, x_i]\\ -1 & \text{if } x \in [x_{i}, x_{i+1}]\\ 0 &\text{otherwise}. \end{cases} \end{align}

We have

\begin{align} A_{ij} &= \int_{-1}^1 h_i'(x) h_j'(x) \mathrm{d}x \end{align}

and so A_{ij} = 0 whenever [x_{i-1},x_{i+1}] \cap [x_{j-1},x_{j+1}] = \emptyset. That is, when |i-j| > 1.

Next, when |i-j| = 1, h_i'(x) h_j'(x) is zero outside [\min\{x_i,x_j\}, \max\{x_i,x_j\}] and \{ h_i'(x), h_j'(x) \} = \{-\frac{n}{2}, +\frac{n}{2}\} on [\min\{x_i,x_j\}, \max\{x_i,x_j\}] and so

\begin{align} A_{ij} &= \int_{-1}^1 h_i'(x) h_{j}'(x) \mathrm{d}x \nonumber\\ &= \left( \frac{n}{2} \right)^2 \int_{\min\{x_i,x_j\}}^{\max\{x_i,x_j\}} (-1) (+1) \mathrm{d}x \nonumber\\ &= - \left( \frac{n}{2} \right)^2 \frac{2}{n} = - \frac{n}2. \end{align}

Finally,

\begin{align} A_{ii} &= \int_{-1}^1 h_i'(x)^2 \mathrm{d}x \nonumber\\ &= \left( \frac{n}{2} \right)^2 \int_{x_{i-1}}^{x_{i+1}} \mathrm{d}x \nonumber\\ &= \left( \frac{n}{2} \right)^2 \frac4n = n. \end{align}
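The closed form for A can be spot-checked against a crude Riemann-sum quadrature of \int h_i' h_j' \mathrm{d}x, using the piecewise-constant derivative above (the helper names node, dh, Aij and the quadrature step Δ are ad hoc choices, not part of the scheme):

```julia
n = 10
node(j) = -1 + 2j/n
# h_i' from the formula above: +n/2 on (x_{i-1}, x_i), -n/2 on (x_i, x_{i+1})
dh(i, t) = node(i-1) < t < node(i)   ?  n/2 :
           node(i)   < t < node(i+1) ? -n/2 : 0.0
Δ = 1e-4
Aij(i, j) = Δ * sum(dh(i, t) * dh(j, t) for t in -1+Δ/2 : Δ : 1)   # midpoint Riemann sum
@assert abs(Aij(3, 3) - n)   < 1e-2    # diagonal entry:      n
@assert abs(Aij(3, 4) + n/2) < 1e-2    # off-diagonal entry: -n/2
@assert abs(Aij(3, 5))       < 1e-2    # |i-j| > 1:           0
```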

  7. Recall that we are considering f(x) = -x. Show that

\begin{align} b_i = \int_{-1}^1 h_i(x) f(x) \mathrm{d}x = -\frac{2}{n} x_i. \end{align}

Answer.

Let \bm 1_A be the function that is equal to 1 on A and 0 on \mathbb R \setminus A (i.e. the characteristic function of A). We integrate by parts and use the fact that h_i(\pm1) = 0:

\begin{align} b_i &= - \int_{-1}^1 h_i(x) \big( \tfrac{x^2}{2} \big)' \mathrm{d}x \nonumber\\ &= - h_i(x) \tfrac{x^2}{2} \Big|_{-1}^1 + \int_{-1}^1 h_i'(x) \tfrac{x^2}{2} \mathrm{d}x \nonumber\\ &= \frac{n}{4} \int_{-1}^1 \big(\bm 1_{[x_{i-1},x_i]}(x) - \bm 1_{[x_i,x_{i+1}]}(x)\big) x^2 \mathrm{d}x \nonumber\\ &= \frac{n}4 \left[ \frac{x^3}{3}\Big|_{x_{i-1}}^{x_i} - \frac{x^3}{3}\Big|_{x_{i}}^{x_{i+1}} \right] \nonumber\\ &= \frac{n}4 \left[ \frac{2x_i^3}{3} - \frac{x_{i-1}^3}{3} - \frac{x_{i+1}^3}{3} \right] \nonumber\\ &= \frac{n}{12} \left[ 2x_i^3 - (x_{i} - 2/n)^3 - (x_i + 2/n)^3 \right] \nonumber\\ &= \frac{n}{12} \left[ - 6 (2/n)^2 x_i \right] = - \frac2n x_i. \end{align}
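As with A, the formula b_i = -\frac{2}{n} x_i can be sanity-checked with a crude Riemann sum of \int h_i(x) f(x) \mathrm{d}x for f(x) = -x (again with ad hoc helper names node, hat, bi and step Δ):

```julia
n = 10
node(j) = -1 + 2j/n
# hat function h_i, written with nested ternaries
hat(i, t) = node(i-1) <= t <= node(i)   ? (n/2)*(t - node(i-1)) :
            node(i)   <  t <= node(i+1) ? (n/2)*(node(i+1) - t) : 0.0
Δ = 1e-5
bi(i) = Δ * sum(hat(i, t) * (-t) for t in -1+Δ/2 : Δ : 1)   # ∫ h_i f with f(x) = -x
@assert all(abs(bi(i) + (2/n)*node(i)) < 1e-4 for i in 1:n-1)
```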

We now assemble and solve A \bm c = \bm b in order to solve Poisson’s equation with f(x) = -x. Here, our approximate solution is the piecewise linear function passing through the points plotted below:

n = 1000; 
X = (-1:(2/n):1)[2:end-1]
A = zeros( n-1, n-1 )
b = zeros( n-1 )

function h(i, x)
    if ( X[i] - 2/n <= x <= X[i])
        return (n/2)*( x - ( X[i] - 2/n ))
    elseif ( X[i] < x < X[i] + 2/n)
        return (n/2)*(X[i]+2/n - x)
    else
        return 0.
    end
end

# populate A, b
for i = 1:(n-1) 
    b[i] = -2*X[i]
    A[i,i] = 1
    if (i<n-1)
        A[i,i+1] = -1/2
    end 
    if (i > 1)
        A[i, i-1] = -1/2
    end
end
c = (1/n^2) * (A \ b)
u = x -> sum( c .* [h(i,x) for i in 1:n-1] )

println("error in max norm (n=$n) = ", maximum( x->abs(u(x)-U(x)), mesh ) )

scatter( [-1; X; 1 ], [0; c; 0], label="approximate solution", color="black")
plot!(u, -1, 1, color="black", primary=false)
plot!( x->U(x) , label="exact solution")
plot!( x->f(x)/30, linestyle=:dash, label="forcing, f" )
error in max norm (n=1000) = 4.994999999870481e-7
  8. Experiment with the above code by changing the number of basis functions n. Is this choice of basis \{h_j\} numerically stable?

Answer.

We can choose very large n and get an excellent approximation, so it seems that \{h_i\} is indeed a numerically stable choice of basis.

Bonus: How would you impose u(-1) = u_- and u(+1) = u_+ (with u_{-} \not= 0 or u_+ \not= 0) in this numerical scheme?

Answer.

Recall that x_0 = -1, x_n = +1. We could choose h_0, h_n to be the “hat” functions which are piecewise linear on [x_i,x_{i+1}] for all i and such that h_0(-1) = h_n(1) = 1 and h_0(x_i) = 0 for all i\not=0 and h_n(x_i) = 0 for all i \not= n. Then, we can write u(x) = u_- h_0(x) + \left( \sum_{j=1}^{n-1} c_j h_j(x) \right) + u_+ h_n(x) (which now satisfies the new boundary conditions). Moreover, we can again write

\begin{align} -\int_{-1}^1 h_i(x) u''(x) \mathrm{d}x &= \int_{-1}^{1} h_i(x) f(x) \mathrm{d}x \end{align}

(for all i=1,\dots,n-1) which can be rearranged as

\begin{align} \sum_{j=1}^{n-1} \int_{-1}^1 h_i'(x) h_j'(x) \mathrm{d}x \,\, c_j &= \int_{-1}^{1} h_i(x) f(x) \mathrm{d}x - \int_{-1}^1 h_i'(x) \left[ u_- h_0'(x) + u_+ h_n'(x) \right] \mathrm{d}x. \end{align}

Similarly to before, we can see that the only non-zero contributions to the term on the very right are

\begin{align} \int_{-1}^1 h_1'(x) h_0'(x) \mathrm{d}x = \int_{-1}^1 h_{n-1}'(x) h_n'(x) \mathrm{d}x = - \frac{n}{2}. \end{align}

Therefore, by changing the right-hand side, we can incorporate non-zero Dirichlet boundary conditions. We demonstrate this in the following code:

In the following, we consider u_- = 0.1 and u_+ = -0.1:

n = 100; 
X = (-1:(2/n):1)[2:end-1]
A = zeros( n-1, n-1 )
b = zeros( n-1 )

um, up = .1, -.1;

U = x -> (1/6)*x*(x^2-1) + x * (up-um)/2 + (um+up)/2

function h(i, x)
    if (i == 0)
        y = -1
    elseif (i == n)
        y = +1
    else
        y = X[i]
    end

    if ( y - 2/n <= x <= y)
        return (n/2)*( x - ( y - 2/n ))
    elseif ( y < x < y + 2/n)
        return (n/2)*(y+2/n - x)
    else
        return 0.
    end
end

# populate A, b
for i = 1:(n-1) 
    b[i] = -2*X[i]
    A[i,i] = 1
    if (i<n-1)
        A[i,i+1] = -1/2
    end 
    if (i > 1)
        A[i, i-1] = -1/2
    end
end

b[1] = b[1] - (-n^2/2)*um
b[n-1] = b[n-1] - (-n^2/2)*up

c = (1/n^2)* (A \ b)
u = x -> um*h(0,x) + sum( c .* [h(i,x) for i in 1:n-1] ) + up*h(n,x)

println("error in max norm (n=$n) = ", maximum( x->abs(u(x)-U(x)), mesh ) )
println( "cond(A) = ", cond(A) )

scatter( [-1; X; 1 ], u.([-1; X; 1 ]), label="approximate solution", color="black")
plot!(u, -1, 1, color="black", primary=false)
plot!( x->U(x) , label="exact solution")
plot!( x->f(x)/30, linestyle=:dash, label="forcing, f" )
error in max norm (n=100) = 4.949999999996624e-5
cond(A) = 4052.180695477326
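As an extra check on the boundary terms, we can verify numerically that the couplings \int h_1' h_0' and \int h_{n-1}' h_n' both equal -\frac{n}{2}; the derivative formula for h_i extends to i = 0 and i = n once restricted to [-1,1]. The helper names node, dh, Aij and the step Δ are ad hoc:

```julia
n = 10
node(j) = -1 + 2j/n
dh(i, t) = node(i-1) < t < node(i)   ?  n/2 :
           node(i)   < t < node(i+1) ? -n/2 : 0.0
Δ = 1e-4
Aij(i, j) = Δ * sum(dh(i, t) * dh(j, t) for t in -1+Δ/2 : Δ : 1)
@assert abs(Aij(1, 0)   + n/2) < 1e-2   # ∫ h_1' h_0'     = -n/2
@assert abs(Aij(n-1, n) + n/2) < 1e-2   # ∫ h_{n-1}' h_n' = -n/2
@assert abs(Aij(2, 0))         < 1e-2   # all other boundary couplings vanish
```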

(You could show that

\begin{align} u(x) = \frac{1}{6} x(x^2 - 1) + \frac{u_+ - u_-}{2} x + \frac{u_++u_-}{2} \end{align}

is the exact solution.)
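For the values u_- = 0.1, u_+ = -0.1 used above, this closed form can be confirmed numerically in the same way as before (central finite difference, exact for cubics up to rounding; the name u_bvp is an ad hoc choice to avoid shadowing u above):

```julia
um, up = 0.1, -0.1
u_bvp(x) = (1/6)*x*(x^2 - 1) + (up - um)/2 * x + (um + up)/2
δ = 1e-4
d2(x) = (u_bvp(x+δ) - 2u_bvp(x) + u_bvp(x-δ)) / δ^2
@assert abs(u_bvp(-1) - um) < 1e-12 && abs(u_bvp(1) - up) < 1e-12   # boundary values
@assert maximum(x -> abs(-d2(x) + x), -0.9:0.1:0.9) < 1e-6           # -u'' = -x
```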