38  Topics for Presentations/Posters/Papers

Warning

To be updated….

Presentations/posters will take place in the week beginning April 27. Papers will be due the same week.

Presentations.

Posters.

Papers.

38.1 Rubric

Here are the things that I’ll be looking for in your presentations/posters/papers:

Content (4 points)

  • Accuracy: the method/results/proof/examples should be correct.
  • Originality: tell us something that we haven’t seen in lectures but that links to existing topics or extends the content covered in class or on the homework assignments.
  • Give examples to motivate/explain key ideas.

Numerical Implementation (2 points)

  • Show us some numerical results/experimentation

Presentation (2 points)

  • You will be graded on how clearly you present the key ideas.
  • Think about how you want to present (whiteboard/slides/notebook): what is the best way to convey the main point?

Questions (2 points)

  • How well are you able to answer the questions at the end?
  • Bonus points for asking good questions.

(The following is in no particular order. To be updated….)

38.2 Convergence of Jacobi and Gauss-Seidel

On the midterm we proved that Jacobi and Gauss-Seidel converge for strictly diagonally dominant (SDD) matrices. Convergence actually holds under a weaker assumption: show that Jacobi and Gauss-Seidel converge when A is irreducibly diagonally dominant.
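As a starting point for experiments, here is a minimal sketch of both iterations, applied to a tridiagonal matrix that is irreducibly (but not strictly) diagonally dominant:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: x_new = D^{-1}(b - (A - D)x), with D = diag(A)."""
    D = np.diag(A)
    return (b - A @ x + D * x) / D

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: overwrite entries as they are computed."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Tridiagonal example: only weakly dominant in the middle row, strictly
# dominant in the first and last rows, and irreducible -- so it is
# irreducibly diagonally dominant but NOT strictly diagonally dominant.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 1.0])

xj = xg = np.zeros(3)
for _ in range(200):
    xj, xg = jacobi_step(A, b, xj), gauss_seidel_step(A, b, xg)
print(np.allclose(A @ xj, b), np.allclose(A @ xg, b))  # True True
```

For the project itself the point is the proof, but checking a small example like this (and one where dominance fails) is a good sanity check.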

38.3 “From potential theory to matrix iterations in six steps”

Pick out something that you find interesting from the paper Driscoll, Toh, and Trefethen (1998). (More than one person can do this: just coordinate so that you are not giving the same presentation.)

38.4 Convergence of Conjugate Gradient

Present some of the ideas in the proof of Theorem 11.2 from the lecture notes.
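Assuming Theorem 11.2 is the standard condition-number bound \|e_k\|_A \le 2\big((\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\big)^k \|e_0\|_A, here is a sketch that checks it numerically on a synthetic SPD matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(1.0, 100.0, n)       # spectrum in [1, 100]: kappa = 100
A = Q @ np.diag(lam) @ Q.T             # synthetic SPD test matrix
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def a_norm(e):                         # error norm induced by A
    return np.sqrt(e @ (A @ e))

# Plain conjugate gradient, recording the A-norm error at every step.
x = np.zeros(n); r = b.copy(); p = r.copy(); rs = r @ r
errs = [a_norm(x - x_star)]
for _ in range(20):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    rs, rs_old = r @ r, rs
    p = r + (rs / rs_old) * p
    errs.append(a_norm(x - x_star))

kappa = lam[-1] / lam[0]
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
bounds = [2 * rho**k * errs[0] for k in range(len(errs))]
print(all(e <= bnd for e, bnd in zip(errs, bounds)))
```

Plotting `errs` against `bounds` on a log scale makes a nice figure: the bound holds at every step, and CG is often noticeably faster than the bound predicts.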

38.5 Preconditioned CG

Use, e.g., incomplete LU factorization to improve the conditioning of a matrix. Give numerical examples showing that this improves the convergence behaviour of CG.

38.6 Arnoldi Iteration

The relevant page of Driscoll and Braun (2018) would be quite useful.
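A minimal Arnoldi sketch (the modified Gram-Schmidt variant) that verifies the Arnoldi relation A Q_k = Q_{k+1} H_k and the orthonormality of the basis:

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: builds an orthonormal Krylov basis Q (n x (k+1))
    and an upper-Hessenberg H ((k+1) x k) with A @ Q[:, :k] = Q @ H."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)     # breakdown if this is zero
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
Q, H = arnoldi(A, b, 10)
print(np.allclose(A @ Q[:, :10], Q @ H))    # Arnoldi relation holds
print(np.allclose(Q.T @ Q, np.eye(11)))     # basis is orthonormal
```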

38.7 GMRES

38.8 MINRES

When A is symmetric (or Hermitian), the Arnoldi iteration simplifies to the Lanczos iteration (and GMRES becomes MINRES).
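A sketch of the three-term Lanczos recurrence; the projected matrix Q_k^T A Q_k is tridiagonal, which is exactly the structure MINRES exploits:

```python
import numpy as np

def lanczos(A, b, k):
    """k steps of Lanczos (Arnoldi specialized to symmetric A): the
    three-term recurrence yields diagonal alpha and off-diagonal beta."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        alpha[j] = Q[:, j] @ v
        v -= alpha[j] * Q[:, j]
        if j > 0:
            v -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(v)
        Q[:, j + 1] = v / beta[j]
    return Q, alpha, beta

rng = np.random.default_rng(1)
M = rng.standard_normal((40, 40))
A = M + M.T                                 # symmetric test matrix
b = rng.standard_normal(40)
Q, alpha, beta = lanczos(A, b, 8)
T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
print(np.allclose(Q[:, :8].T @ A @ Q[:, :8], T))  # projection is tridiagonal
```

(In floating point the Lanczos vectors slowly lose orthogonality; for many steps one would reorthogonalize, which is itself a nice point to discuss.)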

38.9 Solving Poisson’s equation in 2d

See Poisson’s equation in 2d.

38.10 (other) Applications of the Discrete Fourier Transform

Find something that you find interesting in, e.g., Brigham, E. O. (1988), The Fast Fourier Transform and Its Applications. For example, how can you use the FFT to compute the product of two polynomials? Why is this useful?
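For the polynomial-product example, the idea is the convolution theorem: zero-pad both coefficient vectors to the length of the product, multiply their DFTs pointwise, and transform back. A sketch:

```python
import numpy as np

def poly_mul_fft(p, q):
    """Multiply polynomials (coefficient lists, lowest degree first) by
    pointwise multiplication of their FFTs (the convolution theorem)."""
    n = len(p) + len(q) - 1           # length of the product's coefficients
    P = np.fft.rfft(p, n)             # rfft zero-pads to length n
    Q = np.fft.rfft(q, n)
    return np.fft.irfft(P * Q, n)

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(np.round(poly_mul_fft([1, 2], [3, 4])).astype(int))  # [ 3 10  8]
```

This turns an O(n^2) coefficient convolution into O(n log n), which is the basis of fast big-integer multiplication as well.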

38.11 Richardson-Lucy iteration for image de-blurring

Suppose you have a grayscale image represented by a matrix in [0,1]^{n\times m} that has been flattened into a vector b \in [0,1]^{nm} (e.g. b is a NASA image from the Hubble Space Telescope), and a blurring matrix A. The observed image then satisfies b = Ax, where x is the original (sharp) image. The Richardson-Lucy method is an iterative method that tries to recover x.
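A sketch of the multiplicative Richardson-Lucy update x ← x ⊙ A^T(b ⊘ Ax) / A^T 1, on a hypothetical 1-D "blur" (a small ε guards the division where both b and Ax vanish):

```python
import numpy as np

def richardson_lucy(A, b, iters=500, eps=1e-12):
    """Richardson-Lucy: multiplicative update x <- x * A^T(b/(Ax)) / A^T 1.
    Keeps the iterates nonnegative when A and b are nonnegative."""
    x = np.full(A.shape[1], max(b.mean(), eps))   # positive initial guess
    col_sums = A.T @ np.ones(A.shape[0])          # normalization A^T 1
    for _ in range(iters):
        x = x * (A.T @ (b / (A @ x + eps))) / col_sums
    return x

# 1-D blur: average each pixel with its two neighbours.
n = 40
A = (np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1)) / 3.0
x_true = np.zeros(n); x_true[10:15] = 1.0; x_true[25] = 2.0
b = A @ x_true                          # noiseless blurred observation
x = richardson_lucy(A, b)
rel = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(rel < 0.1)                        # the iterate fits the data
```

For the 2-D image version the same update applies with the flattened image, and the nonnegativity of the iterates is one of the method's selling points over a plain least-squares solve.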


38.12 Fornberg algorithm

Explain Fornberg’s algorithm for computing finite-difference weights on arbitrary nodes.
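Not Fornberg's recursion itself, but the same weights can be obtained from a small (less stable) Vandermonde solve, which is handy as a correctness check against your implementation:

```python
import numpy as np
from math import factorial

def fd_weights(x, x0, m):
    """Weights w with sum_j w[j] f(x[j]) ~ f^(m)(x0) for arbitrary nodes x.
    Naive Vandermonde solve: Fornberg's recursion computes the same
    weights more stably, without solving a linear system."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Impose exactness on the monomials (x - x0)^k for k = 0..n-1.
    V = np.vander(x - x0, n, increasing=True).T
    rhs = np.zeros(n)
    rhs[m] = factorial(m)
    return np.linalg.solve(V, rhs)

# Second derivative on the uniform stencil {-1, 0, 1}: weights [1, -2, 1].
print(fd_weights([-1.0, 0.0, 1.0], 0.0, 2))
```

Comparing this against Fornberg's recursion on clustered nodes (e.g. Chebyshev points) is a natural numerical experiment for the stability claim.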

38.13 Google PageRank and the Power Method

Explain how Google uses the power method (see the midterm) in its PageRank algorithm.
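A toy version on a four-page link graph: the Google matrix mixes the link-following step with uniform teleportation, and the power method iterates it to its stationary distribution:

```python
import numpy as np

# Toy link graph: L[i, j] = 1 if page j links to page i.
L = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = L / L.sum(axis=0)          # column-stochastic: follow a random outlink
d = 0.85                       # damping factor
n = P.shape[0]
G = d * P + (1 - d) / n        # Google matrix: teleport with prob 1 - d

# Power method: repeatedly apply G to a probability vector.
r = np.full(n, 1 / n)
for _ in range(100):
    r = G @ r
print(np.round(r, 3))          # stationary distribution = PageRank scores
```

The damping term makes G a positive stochastic matrix, so by Perron-Frobenius the power method converges to a unique ranking, with convergence factor at most d.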


Driscoll, Tobin A, and Richard J Braun. 2018. Fundamentals of Numerical Computation. https://fncbook.com/.
Driscoll, Tobin A, Kim-Chuan Toh, and Lloyd N Trefethen. 1998. “From Potential Theory to Matrix Iterations in Six Steps.” SIAM Rev. Soc. Ind. Appl. Math. 40 (3): 547–78.