Boundary value problems

Date: 7th of January.

Today we implement the central differences discretization of the following boundary value problem considered in yesterday's class:

<jsm>u''(x)=f(x),</jsm> <jsm>0<x<1,</jsm> <jsm>u(0)=\alpha,</jsm> <jsm>u(1)=\beta.</jsm> Considering the grid of equidistant points <jsm>x_j=j\cdot h</jsm>, <jsm>j=0,1,\dots,M+1</jsm>, <jsm>h=\frac{1}{M+1}</jsm>, at each interior node <jsm>x_j</jsm> we replace the second derivative in the differential equation with its central difference approximation and get <jsm>\frac{1}{h^2}(U_{j-1}-2U_j+U_{j+1})=f_j,</jsm> <jsm>j=1,\dots,M</jsm>. Using the boundary conditions <jsm>U_0=\alpha</jsm> and <jsm>U_{M+1}=\beta</jsm> we get a system of <jsm>M</jsm> equations in the <jsm>M</jsm> unknowns <jsm>U_1,\dots,U_M</jsm>, that is <jsm>A_h\vec{U}=\vec{F}</jsm>, where <jsm>\vec{U}=[U_1,\dots,U_M]^T</jsm>, <jsm>\vec{F}=[f_1-\frac{\alpha}{h^2},f_2,\dots,f_{M-1},f_M-\frac{\beta}{h^2}]^T</jsm> and <jsm>A_h:=\frac{1}{h^2}\,\left[\begin{array}{ccccc} -2 & 1 & 0 & & \\ 1 & -2 & 1 & \ddots & \\ 0 & \ddots & \ddots & \ddots & 0 \\ & \ddots & \ddots & \ddots & 1 \\ & & 0 & 1 & -2 \end{array}\right].</jsm>

Task 1 Choose <jsm>f=\sin(\pi x)</jsm>, <jsm>\alpha=\beta=0</jsm> and <jsm>M=10</jsm>, construct the linear system <jsm>A_h\vec{U}=\vec{F}</jsm> and solve it with the backslash command of Matlab (or in another way if you are using a different programming language) to find the numerical solution <jsm>\vec{U}</jsm>. Then choose another <jsm>f</jsm> leading to non-trivial boundary values.

Task 2 Compute by hand the exact solution of your boundary value problem and use it to compute the error vector <jsm>\vec{e}_h:=[U_1-u_1,\dots,U_M-u_M]^T</jsm>. Consider then the piecewise constant error function <jsm>e_h(x)=e_j</jsm> for <jsm>x\in [x_j,x_{j+1})</jsm>, <jsm>j=1,\dots,M</jsm>. We want to see with a numerical experiment how the function norm of the error decreases as a function of <jsm>h</jsm>. We know from Taylor's theorem that <jsm>\frac{1}{h^2}(u_{j+1}-2u_j+u_{j-1})=u''(x_j)+\frac{h^2}{12}u^{(4)}(x_j)+\mathcal{O}(h^4).</jsm> Tomorrow in the lecture we will prove that <jsm>e_h(x)</jsm> also goes to zero in the function 2-norm as <jsm>\mathcal{O}(h^2)</jsm>. To do this we will use the fact that <jsm>A_h</jsm> is invertible with inverse bounded in the 2-norm independently of <jsm>h</jsm>. This is called (order 2) convergence.
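As an illustration of Task 1, the following sketch (in Python with NumPy rather than Matlab; the helper name `solve_bvp` is our own) assembles <jsm>A_h</jsm> and <jsm>\vec{F}</jsm>, solves the system, and checks the result against the exact solution <jsm>u(x)=-\sin(\pi x)/\pi^2</jsm> of <jsm>u''=\sin(\pi x)</jsm>, <jsm>u(0)=u(1)=0</jsm>:

```python
import numpy as np

def solve_bvp(f, alpha, beta, M):
    """Solve u''(x) = f(x) on (0,1), u(0)=alpha, u(1)=beta,
    with central differences on M interior nodes."""
    h = 1.0 / (M + 1)
    x = np.linspace(0.0, 1.0, M + 2)          # x_0, ..., x_{M+1}
    # tridiagonal matrix A_h = (1/h^2) * tridiag(1, -2, 1)
    A = (np.diag(-2.0 * np.ones(M)) +
         np.diag(np.ones(M - 1), 1) +
         np.diag(np.ones(M - 1), -1)) / h**2
    # right-hand side with the boundary values moved over
    F = f(x[1:-1]).astype(float)
    F[0]  -= alpha / h**2
    F[-1] -= beta / h**2
    U = np.linalg.solve(A, F)                 # plays the role of backslash
    return x[1:-1], U

# Task 1: f = sin(pi x), alpha = beta = 0, M = 10
x, U = solve_bvp(lambda x: np.sin(np.pi * x), 0.0, 0.0, 10)
# exact solution: u(x) = -sin(pi x)/pi^2
err = np.max(np.abs(U + np.sin(np.pi * x) / np.pi**2))
print(err)
```

For larger <jsm>M</jsm> a sparse tridiagonal format (e.g. `scipy.sparse.diags`) would be preferable to a dense matrix, but the dense version keeps the sketch short.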

We design our numerical experiment as follows: consider increasing values of <jsm>M</jsm>, for example <jsm>M=2^k</jsm>, <jsm>k=1,2,\dots,8</jsm>, and decreasing values of <jsm>h</jsm> accordingly. Solve the linear system from Task 1 for each value of <jsm>M</jsm>, compute the corresponding norm of the error (use the max-norm, the 1-norm and the 2-norm), and store the obtained values. Plot in logarithmic scale the different values of <jsm>h</jsm> versus the corresponding values of the error norm (for the three choices of norm); you should observe a straight line with slope 2, testifying second order convergence.
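The experiment might be coded, for instance, as follows (again a Python sketch; the `experiment` helper is our own, and the actual log-log plot, e.g. with `matplotlib.pyplot.loglog`, is left out so only the slope is computed):

```python
import numpy as np

def experiment(ks=range(1, 9)):
    """Solve u'' = sin(pi x), u(0)=u(1)=0 for M = 2^k and collect
    max-, 1- and 2-norms of the piecewise constant error function."""
    hs, errs = [], {"max": [], "1": [], "2": []}
    for k in ks:
        M = 2**k
        h = 1.0 / (M + 1)
        x = np.linspace(0.0, 1.0, M + 2)[1:-1]
        A = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
             + np.diag(np.ones(M - 1), -1)) / h**2
        F = np.sin(np.pi * x)                   # alpha = beta = 0
        U = np.linalg.solve(A, F)
        e = U + np.sin(np.pi * x) / np.pi**2    # U_j - u(x_j)
        hs.append(h)
        # function norms of e_h: the 1- and 2-norms pick up factors of h
        errs["max"].append(np.max(np.abs(e)))
        errs["1"].append(h * np.sum(np.abs(e)))
        errs["2"].append(np.sqrt(h * np.sum(e**2)))
    return np.array(hs), errs

hs, errs = experiment()
# least-squares slope of log(error) vs log(h); should be close to 2
slope = np.polyfit(np.log(hs), np.log(errs["2"]), 1)[0]
print(slope)
```

A slope near 2 in all three norms is the numerical signature of the second order convergence proved in the lecture.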

Date: 14th of January.

Task 3 You should now modify your programme and implement Neumann boundary conditions (follow the description of chapter 3.1.2 in the note). There are several strategies: CASE 1 is a first order method; CASE 2 is a second order method using fictitious nodes; CASE 3 is a second order method leading to a matrix which is not tridiagonal, but without using fictitious nodes. Implement each of these and verify the order of each technique numerically.
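Chapter 3.1.2 of the note is not reproduced here, so the following is a sketch of CASE 1 only, under the assumption of a Neumann condition <jsm>u'(0)=\sigma</jsm> at the left endpoint discretized with the first-order one-sided difference <jsm>(U_1-U_0)/h=\sigma</jsm>; the helper name and the manufactured test problem are our own:

```python
import numpy as np

def solve_neumann_case1(f, sigma, beta, M):
    """u''(x) = f(x) on (0,1) with u'(0) = sigma (Neumann, CASE 1:
    first-order one-sided difference) and u(1) = beta (Dirichlet)."""
    h = 1.0 / (M + 1)
    x = np.linspace(0.0, 1.0, M + 2)
    n = M + 1                                  # unknowns U_0, ..., U_M
    A = np.zeros((n, n))
    F = np.zeros(n)
    A[0, 0], A[0, 1] = -1.0 / h, 1.0 / h       # (U_1 - U_0)/h = sigma
    F[0] = sigma
    for j in range(1, n):                      # centered differences inside
        A[j, j - 1] = 1.0 / h**2
        A[j, j] = -2.0 / h**2
        if j + 1 < n:
            A[j, j + 1] = 1.0 / h**2
        F[j] = f(x[j])
    F[-1] -= beta / h**2                       # known value U_{M+1} = beta
    U = np.linalg.solve(A, F)
    return x[:-1], U

# manufactured test: u(x) = cos(pi x), so f = -pi^2 cos(pi x),
# sigma = u'(0) = 0, beta = u(1) = -1; since u''(0) != 0 the O(h)
# boundary error is visible and the observed order should be about 1
f = lambda x: -np.pi**2 * np.cos(np.pi * x)
errors = []
for M in (49, 99):                             # h = 1/50, then h = 1/100
    x, U = solve_neumann_case1(f, 0.0, -1.0, M)
    errors.append(np.max(np.abs(U - np.cos(np.pi * x))))
order = np.log2(errors[0] / errors[1])         # observed order, close to 1
print(order)
```

For CASE 2 one would instead add a fictitious node <jsm>x_{-1}</jsm> and use the centered difference <jsm>(U_1-U_{-1})/(2h)=\sigma</jsm> together with the difference equation at <jsm>x_0</jsm>, which restores second order; the same kind of two-grid comparison verifies it.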

Task 4 Implement the method described in section 3.2.1 of the note, where a general self-adjoint linear boundary value problem is considered and discretized so as to preserve symmetry. Verify the order.
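Section 3.2.1 of the note is likewise not reproduced here; a common symmetry-preserving discretization of the self-adjoint problem <jsm>(\kappa(x)u')'=f</jsm> samples <jsm>\kappa</jsm> at the midpoints <jsm>x_{j\pm 1/2}</jsm>, which may be what the note intends. A sketch under that assumption, with a manufactured solution to check both symmetry and second order:

```python
import numpy as np

def solve_self_adjoint(kappa, f, alpha, beta, M):
    """(kappa(x) u'(x))' = f(x) on (0,1), u(0)=alpha, u(1)=beta.
    kappa is sampled at midpoints so that A_h stays symmetric."""
    h = 1.0 / (M + 1)
    x = np.linspace(0.0, 1.0, M + 2)
    km = kappa(x[:-1] + h / 2)        # km[j] = kappa(x_j + h/2) = kappa_{j+1/2}
    A = np.zeros((M, M))
    F = f(x[1:-1]).astype(float)
    for j in range(M):                # row j discretizes node x_{j+1}
        A[j, j] = -(km[j] + km[j + 1]) / h**2
        if j > 0:
            A[j, j - 1] = km[j] / h**2
        if j < M - 1:
            A[j, j + 1] = km[j + 1] / h**2
    F[0]  -= km[0] * alpha / h**2     # boundary contributions
    F[-1] -= km[-1] * beta / h**2
    U = np.linalg.solve(A, F)
    return x[1:-1], U, A

# manufactured test: kappa = 1 + x, u = sin(pi x), hence
# f = (kappa u')' = pi cos(pi x) - (1 + x) pi^2 sin(pi x)
kappa = lambda x: 1.0 + x
f = lambda x: np.pi * np.cos(np.pi * x) - (1.0 + x) * np.pi**2 * np.sin(np.pi * x)
errors = []
for M in (49, 99):                    # h = 1/50, then h = 1/100
    x, U, A = solve_self_adjoint(kappa, f, 0.0, 0.0, M)
    errors.append(np.max(np.abs(U - np.sin(np.pi * x))))
symmetric = np.allclose(A, A.T)
order = np.log2(errors[0] / errors[1])
print(symmetric, order)
```

The off-diagonal entries <jsm>\kappa_{j+1/2}/h^2</jsm> appear once above and once below the diagonal, which is exactly the symmetry the midpoint sampling is designed to preserve.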

2014-01-07, Elena Celledoni