TMA4205 Numerical linear algebra: Fall 2015
News
- 24.11: I have now read your Project 2 reports. The reports are in my office. Here are some percentage "grades"; please let me know if you have any questions or if your report has not been graded:
Candidate/student no. | Percentage |
---|---|
724041 | 50-60 |
752110 | 50-50 |
764099 | 50-60 |
770877 | 80 |
10001 | 80-90 |
10008, 764169 | 80 |
10013 | 80 |
10012, 10014 | 70-80 |
- 19.11:
- Kudos to everyone who has already filled out the course evaluation. If you have not yet done so, there is no time like the present! Here is the link.
- I will maintain the office hours in the coming weeks in case you have any questions before the exam. Alternatively, email your questions (please put TMA4205 in the subject line).
- 11.11: Please take 15 min of your time to fill out the course evaluation form to help us improve this course.
- 19.10: Small typo in the project description: in equation (4) "-G" should be "G". This has now been corrected in the project description.
- 18.10: I have now uploaded the description of the second part of the Project. Deadline: 20.11 (the last day of the semester). Submission rules are the same as for the first part. Please let me know if there are any inconsistencies or unclear parts in the description. Here are some suggestions for making sure your multigrid is implemented correctly:
- When you implement the restriction/prolongation operators in Project 2, it is OK to assume that the coarse grid is exactly half the size of the fine grid in each coordinate direction. You can achieve this by selecting the size of the coarsest grid in your multigrid code and then determining the size of the finest grid from the number of grid levels needed, refining the mesh at each level (halving the cell size in each coordinate direction). Also, in my opinion, drawing a picture of where each unknown type (U, V, P) goes during the refinement/coarsening procedure helps a lot in implementing these operations in a sensible way.
- First of all, make sure that the two-grid algorithm works (i.e., two grid levels only).
- For ease of implementation, homogenize the boundary conditions (i.e., make them zero) by solving the problem \( A\,\delta x = b - A x_0 = r_0 \). In this way the restriction/prolongation operators close to the boundary will look the same on all levels. The fine-grid solution is then \( x_0 + \delta x \).
- Visualize everything (solutions, errors, residuals) - this may help you spot mistakes.
- Unit tests are the bread and butter of all good engineering, and of software development in particular. Instead of trying to understand what is wrong with the whole code, make sure that the individual pieces work as expected.
- For example, under-relaxed Jacobi is an iterative solver in its own right, albeit a very inefficient one. Run thousands or millions of iterations of it and you should be able to solve the problem: make sure that the error goes to zero and, by visualizing the error, that its oscillatory components decay fast (a minimal sketch of such a check is given after this list).
- Restriction/interpolation involve a lot of indices in this case. Draw a picture of a coarse/fine grid and see which variables go where, and with what weights. Write a non-vectorized Matlab implementation first, with loops, and make sure it works; a simple loop-based sketch is also given after this list. Visualize the input and the result of the operation (on the coarse/fine grids) to make sure that it works as expected.
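Below is a minimal Matlab/Octave sketch of the Jacobi sanity check suggested above. It uses a 1D model Poisson problem rather than the staggered Stokes discretization of the project, and the grid size, the damping parameter \( \omega = 2/3 \), and the iteration count are illustrative choices, not values prescribed by the project:

```matlab
% Sanity check (sketch): weighted Jacobi as a standalone solver on a small
% 1D Poisson problem; the error should decay towards zero if the sweep is correct.
n     = 63;                                    % number of interior grid points
h     = 1/(n + 1);
A     = gallery('tridiag', n, -1, 2, -1)/h^2;  % standard 1D Laplacian
xex   = rand(n, 1);                            % manufactured exact solution
b     = A*xex;
x     = zeros(n, 1);                           % initial guess
omega = 2/3;                                   % damping parameter (illustrative)
Dinv  = 1./full(diag(A));                      % inverse diagonal of A
for k = 1:50000
    x = x + omega*(Dinv.*(b - A*x));           % one under-relaxed Jacobi sweep
end
fprintf('relative error: %.2e\n', norm(x - xex)/norm(xex));
% plot(x - xex) to check visually that the oscillatory error components are gone
```

In the same spirit, here is a loop-based sketch of 1D full-weighting restriction; the staggered-grid operators in the project involve more index bookkeeping for U, V, and P, but the loop-first, vectorize-later approach is the same:

```matlab
% Sketch: 1D full-weighting restriction with an explicit loop (vectorize only
% after this version is verified).  Assumes nf = 2*nc + 1 interior fine points.
rf = randn(63, 1);                 % some fine-grid residual (63 = 2*31 + 1)
nc = (length(rf) - 1)/2;           % number of interior coarse-grid points
rc = zeros(nc, 1);
for i = 1:nc                       % coarse point i coincides with fine point 2*i
    rc(i) = 0.25*rf(2*i - 1) + 0.5*rf(2*i) + 0.25*rf(2*i + 1);
end
```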
- 23.09: In Project 1, part e), remember that the iteration matrix is not normal, and therefore the error estimate should include the condition number of the matrix containing the eigenvectors (the resulting bound is spelled out below). However, the iteration matrix does become closer and closer to normal as \( h\to 0 \): indeed, the non-symmetric "advection contribution" to the equations is scaled with \( 1/h \), whereas the symmetric "diffusion contribution" is scaled with \( 1/h^2 \). Therefore one may expect the condition number to be at least bounded, and at best to converge to \( 1 \), as \( h \to 0 \).
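For reference, here is the bound alluded to in the 23.09 note, written for a diagonalizable iteration matrix \( G = V\Lambda V^{-1} \) (the symbols \( G \), \( V \), \( \Lambda \) are chosen here for illustration and need not match the notation of the project text). Since \( e_k = G^k e_0 \),
\[ \|e_k\| = \|V \Lambda^k V^{-1} e_0\| \le \|V\|\,\|V^{-1}\|\,\rho(G)^k\,\|e_0\| = \kappa(V)\,\rho(G)^k\,\|e_0\|, \]
where \( \rho(G) \) is the spectral radius. If \( G \) were normal, \( V \) could be taken unitary, so \( \kappa(V) = 1 \) in the \( 2 \)-norm and the familiar estimate \( \|e_k\| \le \rho(G)^k \|e_0\| \) would be recovered.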
- 21.09: Due to popular demand, the exercise hours have been moved back to Tuesdays 09:15-10:00. The room will still be S23. Also, office hours for Torbjørn are now on Wednesdays 14:15-15:00.
- 07.09: Monday morning lectures have been moved from R90 to S22!!!
- 04.09: The exercise hours have changed to Wednesdays 14:15-15:00, since the Doodle poll indicated that everyone would be available at this time. The room will still be S23. Also, office hours for Torbjørn are now Tuesdays 13:15-14:00.
- 04.09: Project part 1 is now online: Project. Deadline: 27.09. Please submit your replies (report as PDF + zipped code) to Torbjørn.
- 03.09: Today we had the first reference group meeting. The follow-up is scheduled for 08.10. Please communicate to the reference group any issues, concerns, or suggestions related to the course that you may have.
- 24.08: There have been some concerns regarding the time slot for the exercise class, mainly its collision with lectures in TMA4145. If everyone could please fill out this doodle as quickly as possible, we can see if there exists a time for which more (or all!) of you are able to attend. There will still be an exercise class on Aug 25 at 9:15-10:00.
- 21.08: According to this doodle, there is no time slot for moving the Monday morning lecture that suits everyone. Right now, the best possibilities are: status quo/Mon 08-10; Tue 12-14, 13-15, or 14-16; Wed 13-15 or, with some effort, 14-16. If your preferences, circumstances, or schedules have changed, please update your reply. The next lecture is on Mon, Aug 24, 08:15-10:00 in R90.
- 18.08: It appears that the best option for moving Mon 08:15-10:00 lecture is Tue 14:15-16:00. Will this be agreeable to everyone?
- 12.08: Welcome to the course! Please do not forget to volunteer for the reference group or you risk being appointed :) I would also like to change the schedule and move one of the lecture slots from Monday to any other day. Therefore, I kindly ask you to fill out this doodle and indicate your preference/availability ASAP.
Schedule
The first lecture is on 17.08; there will be no exercises on 18.08!
Day | Time | Activity |
---|---|---|
Mon | 08:15-10:00 | Lecture (S22) |
Tue | 09:15-10:00 | Exercises (S23) |
Wed | 13:15-14:00 | AE office hour (SBII/1038) |
Wed | 14:15-15:00 | TR office hour (SBII/1056) |
Thu | 10:15-12:00 | Lecture (K28) |
Course information
- Lectures: Anton Evgrafov antone [at] math [dot] ntnu [dot] no; office hours: Wed 13-14 in SBII/1038 or by appointment
- Exercises: Torbjørn Ringholm torbjorn [dot] ringholm [at] math [dot] ntnu [dot] no; office hours: Wed 14-15 in SBII/1056
- Reference group:
- Silje Irene Hansen siljeirene [dot] hansen [at] gmail [dot] com
- Lennart Kerckhoffs lennart [dot] kerckhoffs [at] hotmail [dot] com
- Ludwig Lahmeyer l [dot] lahmeyer [at] googlemail [dot] com
- Please put TMA4205 in the subject line when emailing
- Learning outcome and other general information: see here
Exam information
The exam will test a selection of the learning outcomes listed below.
Permitted examination support material: C: Specified, written and handwritten examination support materials are permitted. A specified, simple calculator is permitted. The permitted examination support materials are:
- Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003 (book or printout)
- L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997 (book or photocopy)
- G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996 (book or photocopy)
- J. W. Demmel: Applied Numerical Linear Algebra, SIAM, 1997 (book or photocopy)
- E. Rønquist: Note on The Poisson problem in \(\mathbb{R}^2\): diagonalization methods (printout)
- K. Rottmann: Matematisk formelsamling
- Your own lecture notes from the course (handwritten)
Learning outcome
After this course, you should be able to:
- L1: explain and fluently apply fundamental linear algebraic concepts such as matrix norms, eigen- and singular values and vectors;
- L2: estimate stability of the solutions to linear algebraic equations and eigenvalue problems;
- L3: recognize matrices of important special classes, such as normal, unitary, Hermitian, and positive definite, and select efficient computational algorithms based on this classification;
- L4: transform matrices into triangular, Hessenberg, tri-diagonal, or unitary form using elementary transformations;
- L5: utilize factorizations and canonical forms of matrices for efficiently solving systems of linear algebraic equations, least squares problems, and finding eigenvalues and singular values;
- L6: explain the underlying principles of several classic and modern iterative methods for linear algebraic systems, such as matrix-splitting, projection, and Krylov subspace methods, and analyze their complexity and speed of convergence based on the structure and spectral properties of the matrices;
- L7: explain the underlying principles of iterative algorithms for computing all eigenvalues of small eigenvalue problems and selected eigenvalues of large ones;
- L8: explain the idea of preconditioning and flexible preconditioning;
- L9: explain the fundamental ideas behind multigrid and domain decomposition methods;
- L10: estimate the speed of convergence and computational complexity of select numerical algorithms;
- L11: implement select algorithms on a computer.
Curriculum
The curriculum consists of all topics that have been covered in the lectures, all self-study topics, the semester project, and the exercises with solutions. The lectures and self-study topics are based on the following material. The list is not yet final.
- Saad: 1.1–1.13, 2.2, 4.1, 4.2.1–4.2.3, 5.1–5.3, 6.1–6.11, 9.1–9.3, 10.1–10.2, 13.1–13.5, 14.1–14.3
- Demmel: Section 5.2, Chapter 7
- Trefethen & Bau: Lectures 4, 5, 10, 25, 26, 27, 28, 29
- Golub & Van Loan: 2.5
- Rønquist: Note on The Poisson problem in \(\mathbb{R}^2\): diagonalization methods. [pdf]
Reading material
Of the reading material listed below, you should buy the book by Saad, but not necessarily any of the other ones. Saad's book is also available online for free through NTNU's network at SIAM.
- Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003. (main text)
- L. N. Trefethen and D. Bau: Numerical Linear Algebra, SIAM, 1997. Photocopies are available at itslearning.
- G. Golub and C. Van Loan: Matrix Computations. 3rd ed. The Johns Hopkins University Press, 1996. Photocopies are available at itslearning.
- W. L. Briggs, V. E. Henson, S. F. McCormick: A Multigrid Tutorial, SIAM, 2000. Available online at SIAM.
- J. W. Demmel: Applied Numerical Linear Algebra, SIAM, 1997. Available online at SIAM.
Old exams
Lecture log and plan
S refers to Saad, TB to Trefethen & Bau, GVL to Golub & Van Loan, D to Demmel, and R to Rønquist.
Date | Topic | Reference | Learning outcome |
---|---|---|---|
19.11 | Reserved for repetition/Q&A | ||
16.11 | Reserved for repetition/Q&A | ||
12.11 | The Rayleigh-Ritz method and Lanczos algorithm for eigenvalue computation | D7.1-D7.7 | L7 |
09.11 | Perturbation of symmetric eigenvalue problems | D5.2 | L2 |
05.11 | QR algorithm with shifts, Wilkinson shift | TB29 | L7 |
02.11 | QR algorithm without shifts, simultaneous iteration | TB28 | L7 |
28.10 | Power iteration, inverse iteration, Rayleigh quotient iteration | TB27 | L7 |
26.10 | Eigenvalue problems, eigenvalue-revealing factorizations, eigenvalue algorithms | TB24–25, TB26 (self-study), slides | L7 |
22.10 | Matrix properties via the SVD | TB5 | L1 |
19.10 | The singular value decomposition (SVD) | GVL2.5, TB4 | L1 |
15.10 | Domain decomposition (DD) methods, Schwarz' alternating procedure, multiplicative and additive overlapping Schwarz, DD as a preconditioner | S14.1, S14.3–14.3.3 | L8, L9 |
12.10 | Multigrid: smoothing, intergrid operators, two-grid cycles, V-, W-, and F-cycles. MG as a preconditioner | S13.1–13.4.3. | L8, L9, L10 |
08.10 | Preconditioned Lanczos (CG, MINRES). Flexible preconditioning. | S9.1, S9.2.1, S9.3.1-9.3.2, 9.4 | L8 |
05.10 | Preconditioned GMRES. Examples of different preconditioners. | S9.1, S9.2.1, S9.3.1-9.3.2, 9.4 | L8 |
01.10 | Convergence analysis of MINRES (indefinite case) and GMRES. | Note, good review article, S9.1 | L6, L10 |
28.09 | Convergence analysis of CG. Chebyshev polynomials. | S6.11.1-6.11.4 (S6.11.2-self study) | L6, L10 |
24.09 | The D-Lanczos algorithm, the conjugate gradient method (CG) | S6.7.1 | L6 |
21.09 | GMRES (GMRES(k), QGMRES, and DQGMRES), Lanczos method | S6.5.1, S6.5.3, S6.5.5, parts of S6.5.6, S6.6.1 | L4, L6 |
17.09 | The full orthogonalization method (FOM) and variations of FOM | S6.4 | L4, L6 |
14.09 | Krylov subspace methods, Arnoldi's algorithm | S6.1–6.3 | L4, L6 |
10.09 | Convergence of steepest descent, minimal residual (MR) iteration, convergence of MR | S5.3.1–5.3.2 | L6, L10
07.09 | Projection methods, error projection, residual projection, steepest descent method | S5.1.1-5.2.2, S5.3-5.3.1 | L6 |
03.09 | Jacobi and Gauss-Seidel iteration, convergence of splitting methods, Gershgorin's theorem | S4.1, S4.2–4.2.1, S1.8.4, S4.2.3 | L6, L10 |
31.08 | Perturbation analysis, projection operators, introduction to projection methods | S1.13.1 (self-study), S1.13.2, S1.12, S5.1 | L2, L3 |
27.08 | Courant-Fischer (min-max) theorem, positive definite matrices, diagonalization methods and fast Poisson solvers | S1.9.2, S1.11, R, S2.2–2.2.5 (self-study) | L3, L5, L10
24.08 | Similarity transformations. Normal, Hermitian, and positive definite matrices | S1.8–1.8.1, S1.8.2 (self-study), S1.8.3, S1.9, S1.11 | L3 |
20.08 | QR factorizations | S1.7, TB10 | L4 |
17.08 | Background in linear algebra | S1.1–1.3 (self-study), S1.4–1.6 | L1 |
Project
There will be two projects in this course. The first one, which counts for 10 % of the final grade, is given in September; the second, which counts for 20 %, will be given at the end of October/beginning of November. The projects can be done either individually or in groups of two. Please submit your report as a PDF and the code as a ZIP file to Torbjørn Ringholm. In principle, you may choose which programming language to use; however, Matlab/Octave is recommended. Please do not put your names on the report, only your candidate numbers. If your candidate number is not available yet, you may use your student number.
These projects constitute the component of the examination that tests the learning outcome L11!
You will need the note Eigenvalues of tridiagonal Toeplitz matrices in the first semester project.
- Project 1 (deadline: 27.09 @23:59)
- Project 2 (deadline: 20.11 @23:59, that is, the last day of the semester) and accompanying Matlab scripts stokes.m, laplace_uv.m, fgmres_mgs.m, fgmres_h.m. There was a small typo in the project description: in equation (4), "-G" should have been "G"; this has now been corrected.
Exercises
These exercises are voluntary and should not be handed in. They are there to help you achieve the learning objectives of the course, and you are strongly encouraged to solve them.