Curriculum

In addition to the sections listed below, the assigned exercises are also part of the curriculum.

Section | Keywords | Pages

Chapter 1: Mathematical Preliminaries and Floating-Point Representation
1.2: Mathematical Preliminaries | Taylor's Theorem, Mean Value Theorem | 20-23, 25-28
1.3: Floating-Point Representation | Floating-point representation, single and double precision, machine epsilon, rounding, chopping | 38-51
1.4: Loss of Significance | Significant digits, range reduction | 56-58, 60-63

Chapter 2: Linear Systems
2.1: Naive Gaussian Elimination | Linear system, naive Gaussian elimination, pivot, forward elimination, back substitution, error vector, residual vector | 69-79
2.2: Gaussian Elimination with Scaled Partial Pivoting | Pivoting (partial, scaled partial, complete), index vector, long operation, condition number | 82-97
2.3: Tridiagonal and Banded Systems | Banded matrix, diagonal matrix, tridiagonal matrix, (strict) diagonal dominance | 103-106

Chapter 3: Nonlinear Equations
3.1: Bisection Method | Root/zero, bisection method, false position method | 114-121
3.2: Newton's Method | Newton's method, multiplicity, nonlinear system, Jacobian matrix, quadratic and linear convergence | 125-134
3.3: Secant Method | Secant method | 142-144, 147
Note: Fixed Point Iterations | Fixed point iteration, contraction | Note

Chapter 4: Interpolation and Numerical Differentiation
4.1: Polynomial Interpolation | Interpolating polynomial, nodes, Lagrange form, cardinal polynomial, Newton form, divided differences, Neville's algorithm | 153-173
4.2: Errors in Polynomial Interpolation | Runge function, interpolation error, Chebyshev nodes | 178-185
4.3: Estimating Derivatives and Richardson Extrapolation | Truncation error, forward difference, central difference, Richardson extrapolation, computational noise | 187-197

Chapter 5: Numerical Integration
5.1: Trapezoid Method | Definite/indefinite integral, antiderivative, Fundamental Theorem of Calculus, trapezoid rule (basic, composite), recursive trapezoid formula, multidimensional integration | 201-212
5.2: Romberg Algorithm | Romberg algorithm, Euler-Maclaurin formula, general extrapolation | 217-224
5.3: Simpson's Rules and Newton-Cotes Rules | Method of undetermined coefficients, Simpson's rule (basic, composite, adaptive), Newton-Cotes rules | 227-236
5.4: Gaussian Quadrature Formulas | Nodes, weights, linear transformation, Gaussian quadrature rules, Legendre polynomials, weighted Gaussian quadrature, weight function | 239-246

Chapter 6: Spline Functions
6.1: First Degree and Second Degree Splines | Spline (linear, quadratic), knots, interpolating spline, modulus of continuity | 252-258
6.2: Natural Cubic Splines | Spline (degree k), interpolation conditions, continuity conditions, natural cubic spline, smoothness of natural cubic splines | 263-276

Chapter 7: Initial Value Problems
7.1: Taylor Series Methods | Ordinary differential equation (ODE), initial value problem (IVP), solution, implicit/explicit formulas, vector field, Taylor series methods, Euler's method, order, local truncation error, accumulated global error, roundoff error | 299-308
7.2: Runge-Kutta Methods | Runge-Kutta method, two-variable Taylor series | 311-316
7.3: Adaptive Runge-Kutta and Multistep Methods | Adaptive Runge-Kutta-Fehlberg method, automatic step size adjustment, stability, convergent/divergent solution curves | 320-324, 325-327
7.4: Methods for First and Higher Order Systems | Coupled/uncoupled systems, systems of ODEs, vector notation, autonomous/nonautonomous ODE, higher order differential equation, transformation into autonomous and first order form | 331-342

Chapter 8: More on Linear Systems
8.1: Matrix Factorizations | LU factorization, elementary matrix, lower/upper triangular matrix, Doolittle factorization, LDL^T factorization, Crout factorization, Cholesky factorization, symmetric positive definite (SPD) matrix, permutation matrix | 358-373
8.2: Eigenvalues and Eigenvectors | Eigenvalue, eigenvector, eigenspace, characteristic polynomial, multiplicity, direct method, Hermitian matrix, similar matrices | 380-385
8.4: Iterative Solutions of Linear Systems | Matrix/vector norms (l_1, l_2, l_inf), spectral radius, condition number, well/ill conditioned matrix, iterative method, Richardson iteration, Jacobi method, Gauss-Seidel method, SOR method, overrelaxation, sparse system | 405-417

Chapter 9: Least Squares Methods and Fourier Series
9.1: Method of Least Squares | Minimization of error, linear least squares, normal equations, basis functions, linear independence | 427-432
9.2: Orthogonal Systems and Chebyshev Polynomials | Orthogonality, orthonormality, basis, Chebyshev polynomials, polynomial fitting, inner product, Gram-Schmidt process | 435-439, 441-443 (from equation (7) to equation (10)), 444-445
9.3: Examples of the Least-Squares Principle | Inconsistent systems, approximation of functions on intervals with weight function | 447-449 (skip "Modified Gram-Schmidt Process")

Note: Unless otherwise stated, a page number that falls in the middle of a section is interpreted as follows: start or stop at the subsection that begins on that page, depending on whether the number is the first or the last in a pair. For example, 25-28 means: start reading at the subsection "Taylor's Theorem in Terms of (x-c)" on page 25 and read up to the subsection "Alternating Series" on page 28. If no subsection starts on the page given as a final page number, read that entire page. All page numbers refer to the 7th edition.