TMA4205 Numerical linear algebra: Fall 2014

News

  • 31.07: Written re-take exam will be held on 06.08.2015.
  • 11.12: I have now graded the exams and the final grades are here. I cannot claim the moral high ground here, but I am genuinely disappointed with the results. If you have any reasonable explanation as to why there is such a gap between the project grades and the exam grades, please pass this information/complaints/suggestions on to the reference group for their final report and/or to me.
  • 09.12: Very embarrassingly, and in spite of our best efforts at weeding out mistakes in the exam questions, questions 2c) and 4b) turned out to contain errors. Namely, in 2c) the correct inequality is \[ \|r_{\text{new}}\|^2_2 \le \bigg[1-\frac{1}{n\kappa^2_2(A)}\bigg] \|r\|^2_2, \] and in 4b) the stopping criterion should be \( \| A_1 x^{(k)} - A_2 x^{(k)} - b \|_2 \le \varepsilon \|b\|_2\). I will take special care when interpreting your answers to these questions and will err on your side. I apologize most profoundly for these mistakes and thank you for all the feedback on the exam!
  • 08.12: Here are brief solutions to today's exam. I will try to grade it this week.
  • 05.12: Here is a table containing the grades for projects 1+2 (note: project 2 counts twice as much as project 1). Columns 2 and 3 of the table contain "+" if I have your report and "x" otherwise. Please let me know if you see any mistakes in these columns. The last column contains a grade (I used the percentage method, "prosentvurderingsmetoden"). Some of the grades are a bit too close to a grade boundary, so I will have to make a more educated assessment after the exam. Also please note that, as far as the grading is concerned, the exam and the projects are weighted 70/30. Best of luck at the exam!
  • 04.12: I have finished going through your reports for Project #2. Overall I am content with your work, and a few of them are excellent. To combine the results of projects 1 & 2 I will have to figure out the mapping between the student and candidate numbers - I hope to finish that tomorrow.
  • 30.11: If you loved/hated project 2 and/but want to write a semester project about a problem with more-or-less the same structure but very different spectral properties, please contact me and this can be arranged. Basically, instead of the Stokes system we need to solve a so-called Brinkman system, which models the flow of slow incompressible liquids through a porous medium. To complicate things, we will need to consider either a spatially heterogeneous but isotropic permeability tensor of the porous medium (the easy case) or even a heterogeneous, non-isotropic, indefinite permeability tensor (non-physical, but one that appears in certain numerical algorithms). The code for generating the linear system remains more or less the same, but the preconditioning strategy will have to be adjusted.
  • 20.11: There is some programming/debugging work involved in making the multigrid work for the Laplace operators in the velocity space. Here are some tips:
    • First of all, make sure that the two-grid algorithm works (i.e., with two grid levels only).
    • For ease of implementation, homogenize the boundary conditions (i.e., make them zero) by solving the problem \( A\,\delta x = b - A x_0 = r_0 \). In this way the restriction/prolongation operators close to the boundary will look the same on all levels. The fine-grid solution is then \( x_0 + \delta x \).
    • Visualize everything (solutions, errors, residuals) - this may help you to identify the errors.
    • Unit tests are the bread and butter of all good engineering, and of software development in particular. Instead of trying to understand what is wrong with the whole code, make sure that the individual pieces work as expected.
    • For example, under-relaxed Jacobi is an iterative solver, albeit a very inefficient one. Run thousands or millions of iterations of it and you should be able to solve the problem - make sure that the error goes to zero, and, by visualizing the error, make sure that the oscillatory components decay fast (see the sketch after this list).
    • Restriction/interpolation involve a lot of indices in this case. Draw a picture of a coarse/fine grid and see which variables go where with what weights. Write a non-vectorized Matlab implementation first, with loops, and make sure it works. Visualize the input/result of the operation (on the coarse/fine grids) to make sure that the operation works as expected.
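
A minimal sketch of the damped-Jacobi sanity check suggested above, for the 1D Poisson model problem with homogeneous Dirichlet boundary conditions (the problem size, damping factor, and sweep count here are illustrative, not taken from the project):

    % Damped (under-relaxed) Jacobi as a stand-alone, slow solver.
    n      = 63;  h = 1/(n+1);
    A      = gallery('tridiag', n, -1, 2, -1)/h^2;  % 1D Laplacian (SPD)
    xexact = rand(n,1);                             % manufactured solution
    b      = A*xexact;
    x      = zeros(n,1);                            % initial guess
    omega  = 2/3;                                   % damping factor
    dinv   = h^2/2;                                 % inverse of the (constant) diagonal of A
    err    = zeros(500,1);
    for k = 1:500
      x      = x + omega*dinv*(b - A*x);            % one damped Jacobi sweep
      err(k) = norm(xexact - x);
    end
    semilogy(err)               % the error norm should decrease monotonically (slowly)
    figure, plot(xexact - x)    % the remaining error should look smooth: the
                                % oscillatory components decay after a few sweeps

For this model problem with \( \omega = 2/3 \), the oscillatory (high-frequency) error modes are damped by a factor of at most 1/3 in magnitude per sweep, which is precisely the behaviour multigrid exploits.
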
  • 20.11: There is a very minor typo in the project 2 description, p. 5, possible multigrid cycle code listing: in the recursive call to mgv_u, the size of the coarse grid correction should be zeros((nx/2-1)*ny/2,1) and not zeros((nx/2-1)*(ny-1),1).
  • 19.11: Here is my summary of the course - basically just keywords. Hopefully this is still helpful for the exam preparation.
  • 19.11: There has been some (very understandable) naming confusion in connection with the project: the MINRES method is not the same as the MR iteration described in section 5.3.2 in Saad. MINRES, like MR, is a residual projection method (see section 5.2.2 in Saad). Unlike MR, it is a Krylov subspace method (Chapter 6 in Saad) for solving symmetric definite or indefinite problems. Mathematically it is equivalent to GMRES when the latter is applied to a symmetric problem; that is, they are identical as projection methods. Algorithmically/implementation-wise the method is nearly equivalent to DQGMRES (section 6.5.6 in Saad), though it utilizes the Lanczos process (which accounts for symmetry) instead of truncated Arnoldi for orthogonalization.
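
If you want to convince yourself of the MINRES/GMRES equivalence numerically, a small experiment along these lines should do (minres and gmres are the Matlab built-ins; the symmetric indefinite test matrix is an arbitrary illustration):

    n = 200;
    A = gallery('tridiag', n, -1, 2, -1) - 1.2*speye(n);  % symmetric indefinite
    b = randn(n,1);
    [~,~,~,~,rv1] = minres(A, b, 1e-10, n);
    [~,~,~,~,rv2] = gmres(A, b, [], 1e-10, n);            % full (unrestarted) GMRES
    semilogy(0:numel(rv1)-1, rv1/norm(b), 'o-', ...
             0:numel(rv2)-1, rv2/norm(b), 'x-')
    legend('MINRES','GMRES')   % the two residual histories coincide up to rounding
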
  • 12.11: If you are taking the course as TMA4505-fordypningsemne please email me so that we can agree on the exam dates and format.
  • 06.11: A few things:
    • Torbjørn is not available 07.11 and 21.11 - he will hold office hours 10.11 and 24.11 at 10:00 instead.
    • When you implement the restriction/prolongation operators in project 2, it is OK to assume that the coarse grid is exactly half the size of the fine grid in each coordinate direction. You can achieve this by selecting the size of the coarsest grid in your multigrid code and then determining the size of the finest grid from the number of grid levels needed, refining the mesh each time (halving the cell size in each coordinate direction); see the sketch after this list. Also, in my opinion, drawing a picture of where each unknown type (U, V, P) goes during the refinement/coarsening procedure helps a lot in implementing the operations in a sensible way.
    • I am planning to discuss iterative methods for large scale eigenvalue problems next. I will post more details over the weekend.
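
A sketch of the grid-size bookkeeping just described (all variable names are illustrative):

    nxc = 4; nyc = 4;          % coarsest-grid size: your choice
    nlevels = 5;               % number of grid levels in the hierarchy
    nx = nxc*2^(nlevels-1);    % finest grid, obtained by refining
    ny = nyc*2^(nlevels-1);    % (halving the cell size) nlevels-1 times
    % On level l (1 = finest) the grid is nx/2^(l-1) by ny/2^(l-1), so each
    % coarse grid is exactly half the size of the fine grid in each direction.
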
  • 03.11: You may use Matlab's PCG & MINRES in the project! In fact I would highly recommend doing this as you will have fewer things to debug/worry about.
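
For reference, both routines accept the preconditioner either as a matrix or as a function handle that applies its inverse; a typical call looks as follows (the tolerances are illustrative, and applyPrec is a hypothetical function handle returning M\r for your preconditioner M):

    tol = 1e-8; maxit = 200;
    [x, flag, relres, iter, resvec] = pcg(A, b, tol, maxit, applyPrec);
    % for symmetric indefinite systems, minres has the same signature
    % (note: the preconditioner itself must be symmetric positive definite):
    [x, flag, relres, iter, resvec] = minres(A, b, tol, maxit, applyPrec);
    semilogy(resvec/norm(b))   % monitor the convergence history
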
  • 31.10: Due to the recurring question "Is this algorithm faster than Matlab's backslash?" I have timed my implementation (not optimized in any way) of the algorithms in project 2. Here are the absolute and relative (to the backslash operator) timings on an Intel Xeon E5-2665 2.4 GHz machine with 64 GB RAM. I kept increasing the size of the problem until Matlab reported "out of memory". For smaller problems the additional effort of building/applying the sophisticated preconditioner does not pay off, but as the size of the problem increases one beats all strategies involving (sub-)matrix factorizations, complete or otherwise. Note that one can also solve rather large problems in this way, even when factorizations run out of memory.
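
If you want to repeat such a timing experiment on your own machine, a minimal harness might look as follows (A and b come from your project code; applyPrec is, again, a hypothetical handle to your preconditioner; the tolerance and iteration cap are illustrative):

    tic; x1 = A\b;                                  t_direct = toc;  % sparse direct solve
    tic; x2 = minres(A, b, 1e-8, 1000, applyPrec);  t_iter   = toc;  % iterative solve
    fprintf('backslash: %.2f s, preconditioned MINRES: %.2f s (ratio %.2f)\n', ...
            t_direct, t_iter, t_direct/t_iter);
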
  • 22.10: There will be no lecture 23.10 - please read Chapters 4 & 5 from Trefethen & Bau (copies are found on itslearning). A slightly easier (in my opinion) proof of uniqueness of SVD is by appealing to the uniqueness of eigenvalues/vectors, see for example this note. I will prepare slides for the coming weeks of the course.
  • 14.10: I have now uploaded the description of the second part of the Project. Deadline: 28.11. Submission rules are the same as for the first part. Please let me know if there are any inconsistencies or unclear parts in the description.
  • 01.10: I will try to rectify some of the omissions in the convergence proofs at the lecture tomorrow, by going in detail through the convergence analysis of MINRES for indefinite symmetric matrices. Here are my notes on this. In particular, here is the short answer to the question of why \( |C_k(-x)| = |C_k(x)| \) for Chebyshev polynomials.
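
The key point is that Chebyshev polynomials inherit the parity of their degree, which follows by induction from the three-term recurrence:
\[ C_0(-x) = C_0(x), \quad C_1(-x) = -C_1(x), \quad C_{k+1}(x) = 2xC_k(x) - C_{k-1}(x) \;\Longrightarrow\; C_k(-x) = (-1)^k C_k(x), \]
and hence \( |C_k(-x)| = |C_k(x)| \).
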
  • 24.09: There will be lectures in October/November - I will update the lecture plan soon.
  • 17.09: I was going through my notes from today's lecture and I found a mistake in my derivation of the Kantorovich inequality. Here is the correct derivation.
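
For reference, the inequality in question states that for a symmetric positive definite \(A\) with extreme eigenvalues \(\lambda_{\min}\) and \(\lambda_{\max}\),
\[ \frac{(x^T A x)(x^T A^{-1} x)}{(x^T x)^2} \le \frac{(\lambda_{\min}+\lambda_{\max})^2}{4\lambda_{\min}\lambda_{\max}} \quad \text{for all } x \ne 0, \]
which is what drives the convergence bound for the steepest descent method.
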
  • 11.09: So much for my great attempt to use itslearning for submitting the project reports. As it seems impossible to upload Matlab files there, please submit your projects to Torbjørn Ringholm instead. You should submit the report as a PDF file and the accompanying code as a ZIP archive. I will close the submission through itslearning.
  • 09.09: I stand corrected on the subject of TMA4505-fordypningsemne. If you are taking this course as TMA4505 you will take an oral exam at the end, which will account for 100% of the grade. In this sense, the projects are not compulsory and will not together count as 30% of the grade. On the other hand, I need to test all the learning objectives at the oral exam, including "L2.1: The student is able to implement selected algorithms for a given model problem". If you choose to exercise your right not to deliver the projects during the semester and not to receive feedback on your solutions, I will have to test whether you meet learning objective L2.1 by asking you to orally present the project material, including the details of the implementation, during the oral examination.
  • 08.09: If you are taking this course as TMA4505-fordypningsemne: contrary to the circulating rumours you are required to hand in the projects. The only difference is that you get an oral exam at the end of the course instead of the written one.
  • 08.09: One word has been added to the project description: in b) we require that the matrix is row diagonally dominant. Please let me know if you find any other issues in the project description!
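
That is, in its weak form (the project description specifies the exact variant required),
\[ |a_{ii}| \ge \sum_{j \ne i} |a_{ij}| \quad \text{for every row } i. \]
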
  • 04.09: There will be no lectures on 10.09 and 11.09. Instead, please work on project 1. Torbjørn will be available for questions/answers as usual on Wed at 16 or Fri at 10.
  • 04.09: Project part 1 is now online: Project. Deadline: 28.09. Please submit your replies via itslearning. Please let me know if you have any technical issues with uploading your answers and I will try to resolve them ASAP.
  • 21.08: When reading Saad's book, mind the known errors.
  • 20.08: We have tentatively agreed that AE will hold office hours on Mondays 11-12. Please do not forget to volunteer for the reference group or you risk being appointed :)

Schedule

  • Monday 10:15–11:15: AE office hour (SBII/1038)
  • Wednesday 13:15–15:15: Lecture (K26)
  • Wednesday 15:15–16:15: Exercise (R60)
  • Thursday 09:15–11:15: Lecture (K26)
  • Friday 09:15–10:15: TR office hour (SBII/1056)

Course information

Exam information

The exam will test a selection of the learning outcomes listed below.

Permitted examination support material: C: Specified, written and handwritten examination support materials are permitted. A specified, simple calculator is permitted. The permitted examination support materials are:

  • Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003 (book or printout)
  • L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997 (book or photocopy)
  • G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996 (book or photocopy)
  • E. Rønquist: Note on The Poisson problem in \(\mathbb{R}^2\): diagonalization methods (printout)
  • K. Rottmann: Matematisk formelsamling
  • Your own lecture notes from the course (handwritten)

Learning outcome

Here is a list of the goals that we aim to achieve in this course.

  1. Knowledge
    • L1.1: The student has knowledge of the basic theory of equation solving.
    • L1.2: The student has knowledge of modern methods for solving large sparse systems.
    • L1.3: The student has detailed knowledge of techniques for calculating eigenvalues of matrices.
    • L1.4: The student understands the mechanisms underlying projection methods in general.
    • L1.5: The student understands the mechanisms underlying Krylov methods in general.
    • L1.6: The student has detailed knowledge about a selection of projection and Krylov algorithms.
    • L1.7: The student understands the principle of preconditioning.
    • L1.8: The student understands selected preconditioning techniques in detail.
    • L1.9: The student is familiar with the practical use of matrix factorization techniques.
  2. Skills
    • L2.1: The student is able to implement selected algorithms for a given model problem.
    • L2.2: The student can assess the performance and limitations of the various methods.
    • L2.3: The student can make qualified choices of linear equation solvers/eigenvalue algorithms for specific types of systems.
    • L2.4: The student can assess the complexity and accuracy of the algorithms used.
  3. General competence
    • L3.1: The student can describe a chosen scientific method and communicate his or her findings in a written report using precise language.

Curriculum

The curriculum consists of all topics that have been covered in the lectures, all self-study topics, the semester project, and the exercises with solutions. The lectures and self-study topics are based on the following material. The list is not yet final.

  • Saad: 1.1–1.13, 2.2, 4.1, 4.2.1–4.2.3, 5.1–5.3, 6.1–6.11, 9.1–9.3, 10.1–10.2, 13.1–13.5, 14.1–14.3
  • Trefethen & Bau: Lectures 4, 5, 10, 25, 26, 27, 28, 29
  • Golub & Van Loan: 2.5
  • Rønquist: Note on The Poisson problem in \(\mathbb{R}^2\): diagonalization methods. [pdf]

Reading material

Of the reading material listed below, you should buy the book by Saad, but not necessarily any of the other ones. Saad's book is also available online for free through NTNU's network at SIAM.

  • Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003. (main text)
  • L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997. Photocopies are available at itslearning.
  • G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996. Photocopies are available at itslearning.
  • W. L. Briggs, V. E. Henson, S. F. McCormick: A Multigrid Tutorial, SIAM, 2000. Available online at SIAM.
  • J. W. Demmel: Applied Numerical Linear Algebra, SIAM, 1997. Available online at SIAM.

Old exams

Lecture log and plan

S refers to Saad, TB to Trefethen & Bau, GVL to Golub & Van Loan, D to Demmel, and R to Rønquist.

Date | Topic | Reference | Learning outcome
27.11 | No lecture. Work on project 2 | | L2.1, L2.2
26.11 | No lecture. Work on project 2 | | L2.1, L2.2
20.11 | Q&A | |
19.11 | Summary; Q&A | |
13.11 | The Rayleigh–Ritz method and the Lanczos algorithm for eigenvalue computation | D7.1–7.7 | L1.3, L2.4
12.11 | Perturbation of symmetric eigenvalue problems | D5.2 | L1.3
06.11 | QR algorithm with shifts, Wilkinson shift | TB29 | L1.3, L2.4
05.11 | QR algorithm without shifts, simultaneous iteration | TB28 | L1.3
30.10 | Power iteration, inverse iteration, Rayleigh quotient iteration | TB27 | L1.3
29.10 | Eigenvalue problems, eigenvalue-revealing factorizations, eigenvalue algorithms | TB24–25, TB26 (self-study), slides | L1.3, L1.9
23.10 | Matrix properties via the SVD | TB5 | L1.9
22.10 | The singular value decomposition (SVD) | GVL2.5, TB4 | L1.9
16.10 | Domain decomposition (DD) methods, Schwarz' alternating procedure, multiplicative and additive overlapping Schwarz, DD as a preconditioner | S14.1, S14.3–14.3.3 | L1.8, L2.3
15.10 | Intergrid operators, two-grid cycles, V-cycles and W-cycles, red-black Gauss–Seidel, MG as a preconditioner | S13.3–13.4.3, S12.4.1 | L1.2, L1.8, L2.3
09.10 | Introduction to multigrid (MG) methods, weighted Jacobi iteration | S13.1–13.2.2 | L1.2, L1.8
08.10 | Preconditioned GMRES and CG, flexible GMRES | S9.1, S9.2.1, S9.3.1–9.3.2, S9.4 | L1.2, L1.7, L1.8
02.10 | Convergence analysis of MINRES, indefinite case | Note, S9.1 | L1.6, L2.2, L2.4
01.10 | Convergence analysis of CG and GMRES | S6.11.1–6.11.4 (S6.11.2 self-study) | L1.6, L2.2, L2.4
25.09 | The D-Lanczos algorithm, the conjugate gradient method (CG) | S6.7.1 | L1.2, L1.6
24.09 | GMRES (GMRES(k), QGMRES, and DQGMRES), Lanczos method | S6.5.1, S6.5.3, S6.5.5, parts of S6.5.6, S6.6.1 | L1.2, L1.6, L1.9
18.09 | Arnoldi's algorithm, the full orthogonalization method (FOM) and variations of FOM | S6.3–6.4.2 | L1.2, L1.5, L1.6, L2.2
17.09 | Convergence of steepest descent, minimal residual (MR) iteration, convergence of MR, Krylov subspace methods | S5.3.1–5.3.2, S6.1 | L1.4, L1.5, L1.6, L2.2
11.09 | No lecture. Work on project 1 | | L2.1, L2.2
10.09 | No lecture. Work on project 1 | | L2.1, L2.2
04.09 | Projection methods, error projection, residual projection, steepest descent method | S5.1.1–5.2.2, S5.3–5.3.1 | L1.4
03.09 | Orthogonal projections, Jacobi and Gauss–Seidel iteration, convergence of splitting methods, Gershgorin's theorem, the Petrov–Galerkin framework | S1.12.3–1.12.4, S4.1, S4.2–4.2.1, S1.8.4, S4.2.3, S5.1 | L1.1, L1.4
28.08 | Perturbation analysis, finite difference methods, diagonalization methods, projection operators | S1.13.1 (self-study), S1.13.2, S2.2–2.2.5 (self-study), R, S1.12.1 | L1.2, L1.4
27.08 | Similarity transformations; normal, Hermitian, and positive definite matrices | S1.8–1.8.1, S1.8.2 (self-study), S1.8.3, S1.9, S1.11 | L1.9
21.08 | QR factorizations | S1.7, TB10 | L1.9
20.08 | Background in linear algebra | S1.1–1.3 (self-study), S1.4–1.6 |

Project

There will be two projects in this course. The first one, which counts for 10% of the final grade, is given in September; the second, which counts for 20%, is given at the end of October/beginning of November. The projects can be done either individually or in groups of two. Hand in your report as a PDF file (perhaps including snippets of the most important parts of your code) and the full code as a ZIP archive; as itslearning turned out not to accept Matlab files (see the news entry from 11.09), please submit both to Torbjørn Ringholm. You may choose which programming language to use. Please do not type your names on the report, only your candidate numbers. If your candidate number is not available yet, you may use your student number.

The project will cover learning outcomes L2.1–2.4, and L3.1.

You will need the note Eigenvalues of tridiagonal Toeplitz matrices in the first semester project.
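
If memory serves, the central fact from that note is the closed-form spectrum: for the \(n\times n\) tridiagonal Toeplitz matrix with diagonal entries \(b\) and sub-/superdiagonal entries \(a\) and \(c\) (with \(ac > 0\)),
\[ \lambda_k = b + 2\sqrt{ac}\,\cos\Big(\frac{k\pi}{n+1}\Big), \qquad k = 1,\dots,n. \]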

Exercises

These exercises are voluntary and should not be handed in. They are here to help you achieve the learning objectives of the course, and you are encouraged to solve them.

MATLAB files

2015-07-31, Anton Evgrafov