30  Some exam question ideas

30.1 Exam header:


Time Allowed: 2 hours

Answer SECTION A (40%) and TWO of the three (30% + 30%) optional sections (B, C, D). If you have answered more than the required two optional sections, you will be given credit for your best two optional sections. As a guide, I suggest you spend around 40 minutes on each section.

Please indicate your answers to multiple choice questions clearly with an ╳ to the left of your answers. Answer the optional sections on separate paper.

Please label any additional paper you use with your name and page number.

Electronic devices are not needed and not permitted in this examination. You may use your own notes, limited to 2 sides of letter sized paper. Answer the questions to Section A in the boxes provided. Additional paper may be requested.


This is a list of some of the sorts of questions you might see in the exam. Also see the exercises throughout the lecture notes, homeworks, and midterm. I will add to these exercises for our class on Wednesday.

30.2 Chapter 1: Floating point numbers, conditioning, …

  1. How many floating point numbers with binary precision k = 4 are there?
  2. Let f(x) := \frac{\cos(x) - 1}{x^2}. By considering the Taylor expansion of \cos near 0 (that is, \cos(x) = 1 - \frac{x^2}{2} + \frac{x^4}{24} + \mathcal O(x^6)) one obtains the approximation

\begin{align} f(x) \approx g(x) := - \frac{1}{2} + \frac{x^2}{24}. \end{align}

     a. Write down an upper bound on the error when approximating f by g.
     b. What is the condition number of f?
     c. What is the condition number of g?
     d. Which formula would you use to approximate f(10^{-10})?
     e. Why?
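A quick numerical check (a Python sketch, not part of the exam; the function names are ours) shows why the choice matters for small x: evaluating f directly suffers catastrophic cancellation, while g does not.

```python
import math

def f_naive(x):
    # Direct evaluation: cos(x) - 1 cancels catastrophically for small x.
    return (math.cos(x) - 1.0) / x**2

def g(x):
    # Taylor-based approximation: -1/2 + x^2/24, accurate near 0.
    return -0.5 + x**2 / 24.0

x = 1e-10
# In double precision, cos(1e-10) rounds to exactly 1.0, so the naive
# formula returns 0.0; the Taylor form returns the correct value -0.5.
print(f_naive(x))  # 0.0
print(g(x))        # -0.5
```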
  3. Recall that when considering x^2 + bx + c = 0, one may use either of the following (mathematically equivalent) formulas for the solutions:

\begin{align} x_{\pm} = \frac{-b \pm \sqrt{b^2-4c}}{2} = \frac{2c}{-b \mp \sqrt{b^2-4c}}. \end{align}

     a. Suppose that b>0 and c\approx 0. Which formula is more numerically stable for computing x_+?
     b. Why?
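The instability is easy to observe (a Python sketch with the illustrative values b = 1 and c = 10^{-16}, our choice):

```python
import math

# Roots of x^2 + b x + c = 0 with b > 0 and c close to 0: the root x_+
# is tiny, and the textbook formula computes it by subtracting two
# nearly equal numbers.
b, c = 1.0, 1e-16
disc = math.sqrt(b * b - 4.0 * c)

x_plus_naive = (-b + disc) / 2.0       # cancellation in -b + sqrt(b^2 - 4c)
x_plus_stable = 2.0 * c / (-b - disc)  # equivalent form, no cancellation

# The true root is approximately -c/b = -1e-16.
print(x_plus_naive, x_plus_stable)
```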
  4. Suppose that we approximate the speed of light in a vacuum c by 3 \times 10^8 meters per second.
     a. Write down the formula for the relative error in this approximation.
     b. Suppose that c = 2.9979 \times 10^8 meters per second (this is exact to 5 significant digits). What is the relative error in approximating c by 3 \times 10^8?

(other ideas: pounds in a kg, currency conversion, marathon \approx 26 miles, ……)

  5. True or false: If x and y are known to 5 significant digits, then x + y is also known to 5 significant digits.

  6. Consider the polynomial

\begin{align*} p(x) = x^5 + 12x^4 - x^3 - 2x^2 + x - 3 \end{align*}

which can also be written as

\begin{align*} q(x) = (((((x+12)x -1)x -2)x+1)x-3). \end{align*}

In the following, we evaluate p(x) and q(x) for x = 1.23 using 16 bit floating point numbers. Which value would you trust more and why?

p(1.23) = 23.69
q(1.23) = 23.67
  7. Is \tfrac1{10} exact in 16 bit floating point? Explain your answer.

  8. Suppose that a quantity x \not= 0 is approximated by \widetilde{x}. What is the relative error in this approximation?

  9. What is the relative condition number of the function f(x) = x + 1?

  10. Suppose that g(\xi) = \xi is a fixed point of a twice continuously differentiable function g. True or false: if g'(\xi) = 0, then the iteration x_{n+1} = g(x_n) always converges quadratically.

(Let x_n = ......
For which of the following sequences (\varepsilon_n) do we have x_n = O(\varepsilon_n) as n\to\infty?)
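For the question on whether \tfrac1{10} is exact in 16 bit floating point, Python's `struct` module (whose `'e'` format packs IEEE 754 half precision) gives a direct check (a sketch):

```python
import struct
from fractions import Fraction

# Round 0.1 to the nearest 16-bit (half-precision) value and compare.
half = struct.unpack('<e', struct.pack('<e', 0.1))[0]
print(half == 0.1)     # False: 1/10 has an infinite binary expansion,
                       # so no binary floating point format stores it exactly
print(Fraction(half))  # the dyadic rational actually stored
```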

30.3 Chapter 2: Solving nonlinear equations in 1d

  1. True or false? There exists a continuous function f with a root in [a,b] but with f(a) f(b) > 0.

  2. True or false? All continuous functions f: [a,b] \to \mathbb R have a fixed point, i.e. a point x with f(x) = x.

  3. Let f(x) = \cos(x) - 2x.
     a. Show that f has a root in [0,\frac{\pi}{2}].
     b. Write the root as a fixed point of some contraction g:[0,\frac{\pi}{2}] \to [0, \frac\pi2].
     c. Does the iteration x_{n+1} := g(x_n) converge for all x_1 \in [0,\frac\pi2]? If so, what is the order of convergence? What if x_1 \in \mathbb R?
  4. Show that Newton’s method applied to f(x) = (x -\xi)^2 converges linearly (not quadratically) to \xi.

  5. Consider the iterative method (x_n) for solving f(x) = 0:

\begin{align} y_n &:= x_n - \frac{f(x_n)}{f'(x_n)}, \nonumber\\ x_{n+1} &:= y_n - \frac{f(y_n)}{f'(y_n)}. \end{align}

If the method converges with order 4, what is the corresponding efficiency index? Does this method improve on Newton’s method?
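Since the method above is simply two Newton steps counted as one iteration, its fast convergence is easy to observe (a sketch; the test function f(x) = x^2 - 2 is our illustrative choice, not part of the question):

```python
from math import sqrt

def f(x):  return x * x - 2.0
def df(x): return 2.0 * x

x = 1.5                      # starting guess for sqrt(2)
for _ in range(3):
    y = x - f(x) / df(x)     # first Newton step
    x = y - f(y) / df(y)     # second Newton step
    print(abs(x - sqrt(2)))  # error shrinks extremely fast
```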

  6. Consider a fixed point g(\xi) = \xi with g'(\xi) = g''(\xi) = \dots = g^{(41)}(\xi) = 0 and g^{(42)} bounded near \xi. What is the order of convergence of x_{n+1} = g(x_n) when x_1 is sufficiently close to \xi? What is the asymptotic error constant?

  7. Let f(x) = \sin x.
     a. Show that f has a unique fixed point.
     b. Show that the sequence x_{n+1} = \sin(x_n) converges sublinearly.
  8. Consider the Fibonacci sequence F_0 = F_1 = 1 and F_{n+1} = F_n + F_{n-1} for all n \geq 1. Define x_n = F_{n+1}/F_n and suppose that (x_n) converges to x. What is x?

  9. Consider the function f(x) = e^x - x - 1 with zero at \xi = 0. At what order would you expect Newton’s method to converge if x_1 is sufficiently close to \xi?

  10. Horner’s method is a computationally efficient way of applying Newton’s method for computing zeros of polynomials P. Suppose we have an approximate zero x_n of P. One factorises P as P(x) = (x-x_n) Q_n(x) + c_n for some polynomial Q_n and constant c_n and applies the Newton iteration to P about x_n: x_{n+1} = x_n - \frac{P(x_n)}{P'(x_n)}.

     a. Show that P'(x_n) = Q_n(x_n).
     b. Show that c_n = P(x_n).
     c. Explain in words why (x-x_n) is an approximate factor of P for sufficiently large n.
     Now, applying the same algorithm to Q_n, one may approximate another root of P….
  11. For a fixed constant A > 0, consider the iteration x_{n+1} = 2x_n - Ax_n^2.
     a. Suppose (x_n) \to \xi with \xi \neq 0. What is the value of \xi?
     b. Show that (x_n) \to \xi for all x_1 sufficiently close to \xi.
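This iteration is Newton’s method applied to f(x) = 1/x - A, and it is division-free, which is why it is used in practice to compute reciprocals. A sketch with the illustrative choice A = 3 (our choice):

```python
A = 3.0
x = 0.3                      # starting guess near 1/A
for _ in range(10):
    x = 2.0 * x - A * x * x  # no division anywhere
print(x)                     # converges to 1/A = 0.333...
```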

30.4 Chapter 3: Interpolation

  1. True or false: For all x_0 < \dots < x_n and y_0,\dots,y_n, there exists a unique polynomial p with p(x_i) = y_i for all i = 0,\dots,n.

  2. Recall that the degree n Taylor approximation of f about x_0 = 0 is given by

\begin{align} p_n(x) = f(0) + f'(0) x + \frac{f''(0)}{2} x^2 + \dots + \frac{f^{(n)}(0)}{n!} x^n \end{align}

You can see p_n as the Hermite interpolation of f on some multiset X. What is X?

  3. The conversion from degrees Fahrenheit to degrees Celsius is linear: f(x) = a x + b where x is the temperature in Fahrenheit and f(x) is the corresponding temperature in Celsius. Suppose we have temperatures in Fahrenheit F_0,F_1 and corresponding temperatures in Celsius C_0,C_1.
     a. Write down the Lagrange form of the polynomial interpolation of f on \{F_0, F_1 \}.
     b. Write down the Newton form of the polynomial interpolation of f on \{F_0,F_1\}.
     c. Given f(-40) = -40 and f(32) = 0, write down the formula for f.
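Part (c) can be sanity-checked in a few lines (a sketch; the names follow the question):

```python
# Lagrange form of the linear interpolant through (F0, C0) and (F1, C1),
# with the data f(-40) = -40 and f(32) = 0 from the question.
F0, C0 = -40.0, -40.0
F1, C1 = 32.0, 0.0

def f(x):
    # ell0 and ell1 are the Lagrange basis polynomials for {F0, F1}.
    ell0 = (x - F1) / (F0 - F1)
    ell1 = (x - F0) / (F1 - F0)
    return C0 * ell0 + C1 * ell1

print(f(212.0))  # 100.0: the boiling point of water in Celsius
```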
  4. Consider \exp : [-1,1] \to \mathbb R.
     a. Write down the (Lagrange) polynomial interpolation of \exp on X = \{-1,0,1\}.
     b. Write down an error estimate for interpolating \exp on X.
  5. Let P(x) = a_0 + a_1(x-x_0) + a_2 (x-x_0)(x-x_1) + \cdots + a_n(x-x_0)(x-x_1) \cdots (x-x_{n-1}) be a polynomial interpolating f at x_0, \dots, x_n. What are a_0 and a_1?

  6. Question about piecewise linear interpolation…..

  7. A cubic spline interpolation of f on \{x_0 < x_1 < \dots < x_n \} is a twice continuously differentiable function S for which S restricted to the intervals [x_{i-1}, x_i] is a cubic polynomial for all i = 1,\dots,n and S(x_i) = f(x_i) for all i. Suppose that \{ x_{j} \} are equispaced with h := x_{j+1} - x_j for all j.

     a. On [x_j,x_{j+1}], suppose S has the form a_j + b_j(x-x_j) + c_j(x-x_j)^2 + d_j(x-x_j)^3. What is the value of a_j?
     b. Show that the error in this approximation is (at least) O(n^{-2}).
     c. Here, we plot the actual errors for a cubic spline approximation to f(t) = e^{\cos t}. What is the rate of convergence?
     d. Why do we use a log-log plot for this graph?
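For the piecewise linear question, the expected O(n^{-2}) rate can be estimated numerically (a sketch, our own illustrative experiment, reusing f(t) = e^{\cos t} from the spline question):

```python
import math

def max_err_piecewise_linear(f, a, b, n, m=1000):
    # Maximum error of piecewise linear interpolation of f on n equal
    # subintervals of [a, b], estimated by sampling m + 1 points.
    h = (b - a) / n
    err = 0.0
    for k in range(m + 1):
        x = a + (b - a) * k / m
        j = min(int((x - a) / h), n - 1)  # subinterval containing x
        x0, x1 = a + j * h, a + (j + 1) * h
        p = f(x0) + (f(x1) - f(x0)) * (x - x0) / h
        err = max(err, abs(f(x) - p))
    return err

f = lambda t: math.exp(math.cos(t))
e1 = max_err_piecewise_linear(f, 0.0, 2.0 * math.pi, 10)
e2 = max_err_piecewise_linear(f, 0.0, 2.0 * math.pi, 20)
print(e1 / e2)  # a ratio near 4 when n doubles indicates O(n^{-2})
```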

30.5 Lebesgue Constants

We will define \|f\|_{L^\infty} := \max_{x\in[a,b]} |f(x)|, and the polynomial interpolation of a function f \colon [a,b] \to \mathbb R at the nodes X = \{x_0, \dots, x_n\} will be written as I_Xf. The Lebesgue constant for X is the smallest number \Lambda_n = \Lambda_n(X) such that

\begin{align} \| I_Xf \|_{L^\infty([a,b])} \leq \Lambda_n \| f \|_{L^\infty([a,b])}. \end{align}

  a. Write down the formula for I_Xf in terms of Lagrange polynomials.

  b. Show that I_X(f + g) = I_X f + I_X g and I_X( \alpha f ) = \alpha I_X f for all functions f,g and constants \alpha.

Remark: You have thus shown that I_X is a linear operator.

  c. Explain why I_X p_n = p_n for all polynomials p_n of degree at most n (i.e. p_n\in\mathcal P_n) and hence I_X( I_Xf) = I_X f.

  d. Show that if p_n^\star is the best degree n polynomial approximation to f in the norm \| \cdot \|_{L^\infty} (that is, if \| f - p_n^\star\|_{L^\infty} \leq \| f - p_n\|_{L^\infty} for all polynomials p_n \in \mathcal P_n), then

\begin{align} \| f - I_Xf \|_{L^\infty} \leq (1 + \Lambda_n) \| f - p_n^\star\|_{L^\infty}. \end{align}

Hint: You may use parts (b) and (c) and the fact that

\begin{align} \| f - I_Xf \|_{L^\infty} &= \| f - p_n^\star + p_n^\star - I_Xf \|_{L^\infty} \nonumber\\ &\leq \| f - p_n^\star\|_{L^\infty} + \| p_n^\star - I_Xf \|_{L^\infty}. \nonumber \end{align}

Remark: As a result, \Lambda_n tells us how good the set of interpolation nodes is when compared with the best polynomial approximation.

  e. Use (a) to show that

\begin{align} \Lambda_n = \max_{x \in [a,b]} \sum_{j=0}^n |\ell_j(x)| \end{align}

  f. We use this definition to plot the Lebesgue constants for interpolation in (i) Chebyshev points and (ii) equispaced points on [-1,1]. Which is which and why?
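The two Lebesgue constants can be estimated directly from this formula (a sketch; the maximum is approximated on a fine grid):

```python
import math

def lebesgue_constant(nodes, m=2000):
    # Estimate Lambda_n = max_x sum_j |l_j(x)| on [-1, 1] by sampling.
    best = 0.0
    for k in range(m + 1):
        x = -1.0 + 2.0 * k / m
        s = 0.0
        for j, xj in enumerate(nodes):
            ell = 1.0  # Lagrange basis polynomial l_j evaluated at x
            for i, xi in enumerate(nodes):
                if i != j:
                    ell *= (x - xi) / (xj - xi)
            s += abs(ell)
        best = max(best, s)
    return best

n = 10  # polynomial degree; n + 1 nodes
equi = [-1.0 + 2.0 * j / n for j in range(n + 1)]
cheb = [math.cos((2 * j + 1) * math.pi / (2 * (n + 1))) for j in range(n + 1)]
# Equispaced nodes give a much larger Lebesgue constant than Chebyshev nodes.
print(lebesgue_constant(equi), lebesgue_constant(cheb))
```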

30.6 Chapter 4: Quadrature

  1. Show that approximating the following integral with the composite Trapezoid rule on the n-1 intervals [k,k+1] for k=1,\dots,n-1 gives

\begin{align} I = \int_1^n \log(x) \mathrm{d}x \approx \log n! - \frac12 \log n \end{align}

Remark:

Since I = n\log n - n + 1, one can use the error estimates for the Trapezoid rule to show

\begin{align} c \sqrt{n} \left(\frac{n}{e}\right)^n \leq n! \leq C e \sqrt{n} \left(\frac{n}{e}\right)^n \end{align}

for some 0 < c < C.
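The trapezoid identity above is easy to verify numerically (a sketch; `math.lgamma(n + 1)` returns \log(n!)):

```python
import math

def trapezoid_log(n):
    # Composite trapezoid rule for the integral of log(x) over [1, n],
    # using the n - 1 unit intervals [k, k + 1].
    return sum((math.log(k) + math.log(k + 1)) / 2.0 for k in range(1, n))

n = 50
lhs = trapezoid_log(n)
rhs = math.lgamma(n + 1) - 0.5 * math.log(n)  # log(n!) - (1/2) log n
print(lhs, rhs)  # the two agree up to rounding
```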

  2. The error function is defined as

\begin{align} \mathrm{erf}(x) = \frac2{\sqrt\pi} \int_0^x e^{-t^2} \mathrm{d}t. \end{align}

What is the approximation to \mathrm{erf} if you use the (i) rectangular rule, (ii) trapezoid rule, (iii) Simpson’s rule? Write down upper bounds for the error between \mathrm{erf} and your approximations.

  3. What is the largest n for which

\begin{align} \int_{-1}^{+1} p(x) \mathrm{d}x = p\left( -\tfrac{\sqrt{3}}{3} \right) + p\left( \tfrac{\sqrt{3}}{3} \right) \end{align}

for all p \in \mathcal P_n (polynomials of degree at most n)?
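A sketch that tests the rule on the monomials x^k (using \int_{-1}^{1} x^k \mathrm{d}x = 2/(k+1) for even k and 0 for odd k) makes the answer easy to guess:

```python
import math

node = math.sqrt(3.0) / 3.0  # the two quadrature nodes are +/- sqrt(3)/3

def rule(p):
    # The two-point rule from the question (both weights equal 1).
    return p(-node) + p(node)

def exact_monomial(k):
    # Integral of x^k over [-1, 1].
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

for k in range(6):
    print(k, rule(lambda x: x**k), exact_monomial(k))
```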

  4. Find an error estimate for the quadrature rule

\begin{align} \int_{0}^1 f(x) \mathrm{d}x \approx \sum_{j=0}^n \frac{f^{(j)}(0)}{(j+1)!} \end{align}

  5. Suppose that

\begin{align} \int_{-1}^1 f(x)\mathrm{d}x \approx \sum_{j=0}^n w_j f(x_j) \end{align}

is Gauss quadrature.

     a. Explain what the nodes and weights are in this case.
     b. Recall that w_j \geq 0, \sum_j w_j = 2, and Gauss quadrature is exact for all polynomials of degree less than or equal to 2n+1. How would you use these facts to obtain an error estimate for Gauss quadrature in terms of the best polynomial approximation to f?
     c. Use \ell_X(x)^2 (where \ell_X is the node polynomial) to show that quadrature rules with nodes x_0, \dots, x_n are never exact for all polynomials of degree strictly greater than 2n+1.

  6., 7., 8. Exercises 1, 2, and 3 from the lecture on Gauss quadrature.

30.7 Chapter 5: Linear Systems of Equations

  1. Example of an LU decomposition for a diagonally dominant matrix (no need to pivot). Hence solve the corresponding linear system.
  2. Why is Cramer’s rule not computationally feasible for large matrices?

  3. Why does the LU decomposition algorithm without pivoting sometimes fail?

  4. Recall that \| A \| := \max_{|x| = 1} |Ax| where |\cdot| denotes the Euclidean norm in \mathbb R^n. Write down \kappa(A), the condition number of A with respect to the norm \|\cdot\|.

  5. What is the condition number of the following matrix?

\begin{align} A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \end{align}
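For a diagonal matrix the Euclidean operator norm is the largest absolute diagonal entry, so the condition number can be read off directly (a sketch):

```python
# kappa(A) = ||A|| * ||A^{-1}|| for the diagonal matrix diag(2, 1):
# ||A|| is the largest |diagonal entry| and ||A^{-1}|| is the
# reciprocal of the smallest.
diag = [2.0, 1.0]
kappa = max(abs(d) for d in diag) / min(abs(d) for d in diag)
print(kappa)  # 2.0
```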

  6. Exercises from Lecture notes #15

  7. Find the SVD of

\begin{align} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{align}

  8. How many non-zero singular values does the following matrix have?

\begin{align} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \end{align}
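For a 2x2 matrix the singular values can be computed by hand as the square roots of the eigenvalues of A^T A; a sketch of that computation:

```python
import math

# A = [[a, b], [c, d]]; here, the matrix from the question.
a, b, c, d = 1.0, 1.0, 0.0, 0.0

# Entries of the symmetric matrix A^T A = [[p, q], [q, r]].
p, q, r = a * a + c * c, a * b + c * d, b * b + d * d

# Eigenvalues of A^T A from the characteristic polynomial.
tr, det = p + r, p * r - q * q
lam1 = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0
lam2 = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0

sigma = [math.sqrt(max(lam, 0.0)) for lam in (lam1, lam2)]
print(sigma)  # exactly one non-zero singular value, sqrt(2)
```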

  9. Two matrices A, B \in \mathbb R^{512\times512} are represented by the following two grayscale images. Which matrix has singular values which decay faster?