## Homework

The material in the homework is part of the curriculum.

• The homework is compulsory. There will be four exercise sets; at least three must be approved to take the exam.
• The deadline for handing in will be announced with each exercise, typically one week after the relevant exercise class.
• Each exercise set will only be evaluated as "approved/not approved", no further classification or feedback.
• You are allowed to work together with others, but you will hand in your homework individually.
• For each exercise set it is specified what to hand in. Do not hand in a detailed report or any Matlab code.
• 15.01, 19.01, 29.01. Boundary value problems. Implementation of the numerical solution of simple boundary value problems with various boundary conditions. Here are some examples of implementations of other problems in Matlab that you may use as inspiration and reference: the numerical solution of a convection-diffusion boundary value problem in 1D ($\nu u''-u'=f$, $u(0)=\alpha$, $u(1)=\beta$): convdiff.m, convdiff.py; the numerical solution of the pendulum problem (ch. 3.3 of the note): pendulum.m. You also find here an example of how to compute error plots, giving numerical evidence that the method is implemented correctly and achieves the expected order.
• 19.01, 22.01, 29.01, 02.02. Heat equation. Implement a finite difference discretisation of the heat equation with various boundary conditions: use the forward Euler method, the backward Euler method and the Crank-Nicolson method.
• 05.02, 12.02, 17.02. Elliptic problems. Implement the 5-point discretisation of the Laplace equation on a square with Dirichlet conditions; see the exercise file.
• 12.02, 19.02, 24.02. Hyperbolic problems. Various finite difference methods for transport equations. Note that there is a sign error in the description of the upwind scheme: it should read +k/h, not -k/h.
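The linked convdiff.m / convdiff.py are the reference implementations for the convection-diffusion problem. Purely as a hedged sketch of what such a central-difference discretisation might look like (the function name, the NumPy-based structure and the quadratic test problem below are my own choices, not taken from the linked files):

```python
import numpy as np

def convdiff(nu, f, alpha, beta, m):
    """Solve nu*u'' - u' = f on (0,1) with u(0)=alpha, u(1)=beta,
    using central differences on m interior points (h = 1/(m+1))."""
    h = 1.0 / (m + 1)
    x = np.linspace(0.0, 1.0, m + 2)              # all grid points
    # Interior equation at x_i:
    # nu*(U_{i-1} - 2U_i + U_{i+1})/h^2 - (U_{i+1} - U_{i-1})/(2h) = f(x_i)
    lower = nu / h**2 + 1.0 / (2 * h)
    diag  = -2.0 * nu / h**2
    upper = nu / h**2 - 1.0 / (2 * h)
    A = (np.diag(np.full(m, diag))
         + np.diag(np.full(m - 1, lower), -1)
         + np.diag(np.full(m - 1, upper), +1))
    F = f(x[1:-1]).astype(float)
    F[0]  -= alpha * lower        # move known boundary values to the RHS
    F[-1] -= beta * upper
    U = np.zeros(m + 2)
    U[0], U[-1] = alpha, beta
    U[1:-1] = np.linalg.solve(A, F)
    return x, U
```

As a sanity check, with nu=1, f(x)=2-2x, alpha=0 and beta=1 the exact solution is u(x)=x^2, which the central scheme reproduces exactly at the grid points (both difference quotients are exact for quadratics).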

## Notes: exercise 1

The deadline for the first exercise is extended to 29.01. Most of you managed task 1. For task 2, the plot should look something like that given below. All the lines are straight, with a slope of 2 (look at the values on the axes to estimate the slope).

If the lines have different slopes (such as shown below), it is probably because you have failed to account for the differences in scaling between vector and function norms. See p.16 of the course note; in practice this means you should multiply the 1-norm by h and the 2-norm by sqrt(h).
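In code, this scaling can be collected in a small helper (a sketch, assuming NumPy; the function name is my own):

```python
import numpy as np

def scaled_norms(e, h):
    """Discrete approximations of the function norms of an error vector e
    on a grid with spacing h (cf. p.16 of the course note)."""
    return {
        "1-norm":   h * np.sum(np.abs(e)),           # approximates int |e| dx
        "2-norm":   np.sqrt(h) * np.linalg.norm(e),  # approximates sqrt(int e^2 dx)
        "inf-norm": np.max(np.abs(e)),               # no scaling needed
    }
```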

Note that you should plot on a logarithmic scale to obtain the straight lines (in Matlab this is achieved with loglog; in Python, matplotlib.pyplot.loglog). If this is not correctly implemented, the plot will look something like:

Finally, for the Neumann problem (task 3), as f(0)=0, for this particular example cases 1 and 2 will coincide. One trick many have noticed is that if you choose sigma=-1/pi, the exact solution is the same as that from the previous problem. However, be careful that you have handled the addition of an extra grid point correctly. In particular, make sure the exact solution vector is evaluated at the correct points: if u is the previous solution vector, u(1) will become u(2) (u[0] to u[1] in Python), etc. If completed correctly, the plot should look something like:

## Notes: exercise 2

In this exercise (tasks 1 and 2), it is recommended that the error be measured in a function norm (say the 2-norm) of U(x,T)-u(x,T) at some fixed time T. I recommend T=0.1, so that the exact solution has not decayed to zero. There are then two parameters, M and N: the first is the number of spatial discretisation points (i.e. the dimension of the vector u on the computer), and the second is the total number of time steps taken to reach T=0.1, so that k=T/N. You should then measure the exact solution u(x,0.1) against the value of U you found after your final iteration. As before, use a vector norm (1-, 2- or Inf-; I would tend to default to the 2-norm) and remember to scale it by a factor of h or sqrt(h) as per the first exercise.
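As a hedged sketch of this procedure for one of the methods, here is backward Euler for u_t = u_xx with homogeneous Dirichlet data and u(x,0)=sin(pi*x), so that the exact solution is exp(-pi^2 t) sin(pi x); the function name and the dense-matrix implementation are my own choices:

```python
import numpy as np

def heat_be(M, N, T=0.1):
    """Backward Euler for u_t = u_xx on (0,1), u(0,t)=u(1,t)=0,
    u(x,0)=sin(pi x). Returns the scaled 2-norm error at t=T."""
    h = 1.0 / (M + 1)
    k = T / N
    x = np.linspace(0.0, 1.0, M + 2)
    # Second-difference matrix A acting on the M interior points
    A = (np.diag(np.full(M, -2.0))
         + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1)) / h**2
    B = np.eye(M) - k * A                 # (I - k*A) u^{n+1} = u^n
    U = np.sin(np.pi * x[1:-1])           # initial interior values
    for _ in range(N):
        U = np.linalg.solve(B, U)
    exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x[1:-1])
    return np.sqrt(h) * np.linalg.norm(U - exact)   # scaled 2-norm
```

Calling this for a range of M with N fixed (or vice versa) produces the data for the convergence plots described below.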

You will make two plots: one where N (and hence k) is fixed and M (and hence h) is varied, and one where M is fixed and N is varied. Plot the errors vs. h on a log-log plot (the spatial error plot, where M is varied), and the errors vs. k, also in log-log scale (the time error plot, where N is varied). It is important to note that when plotting the error as a function of h, we want the error due to the time discretisation to be small (i.e., choose N large). Similarly, when plotting the error as a function of k, we want M to be large enough.

Here is a good plot of the error when varying h: here M=2^k for k=1..8, and N=10000. We see a straight line with slope 2, as expected. The curves for all the methods essentially coincide, as the error is almost entirely due to the spatial discretisation, which is the same in all instances.

In this case, M is as before, but N=1000. This is also good, although we can see that for the larger values of M, the time discretisation error is becoming significant, so the plots stop looking like straight lines in the bottom left:

Now N is too small, here N=10. The results are not meaningful for backward Euler, because the discretisation error in time dominates that in space almost regardless of M. Note that the forward Euler method has been omitted, for reasons that will become clear!

Now we move on to the time convergence plots. Here we take N=2^k, k=1..8, and a sufficiently large value of M, here M=2000.

Oops! As we will see in lectures (this was the intended observation from task 2), forward Euler requires values of N much larger than M if it is not to blow up completely. This essentially forces the time discretisation error to be dominated by the spatial discretisation error, hence it is difficult to observe the time convergence in a nice way (for this reason, it is not a good idea to use this method in practice on the heat equation!). I would advise to leave out this method from the time convergence plots. Here with N=2^k, k=1..8 and M=2000 again:
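The blow-up is easy to observe numerically. The sketch below (my own helper, not course code) runs forward Euler on the same test problem; the standard stability condition for forward Euler on the heat equation is k ≤ h²/2, i.e. N ≥ 2T(M+1)²:

```python
import numpy as np

def forward_euler_maxval(M, N, T=0.1):
    """Forward Euler for u_t = u_xx, homogeneous Dirichlet data,
    u(x,0)=sin(pi x). Returns max|U| at t=T."""
    h = 1.0 / (M + 1)
    k = T / N
    x = np.linspace(0.0, 1.0, M + 2)
    A = (np.diag(np.full(M, -2.0))
         + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1)) / h**2
    U = np.sin(np.pi * x[1:-1])
    for _ in range(N):
        U = U + k * (A @ U)        # explicit step: u^{n+1} = (I + k*A) u^n
    return np.max(np.abs(U))
```

For instance with M=50 and T=0.1 the bound requires N ≥ 2·0.1·51² ≈ 521: N=600 gives a bounded solution (max|U| near the exact exp(-pi^2/10) ≈ 0.37), while N=100 blows up, the unstable high-frequency modes being seeded by rounding errors and amplified at every step.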

We see a straight line of slope 1 for backward Euler, and a line of slope 2 for Crank-Nicolson. This is the theoretically predicted result. With a smaller value of M (M=100):

And yet smaller (M=10; don't do this, this is a warning to take larger values, but also an encouragement not to panic if your plot looks like this):

#### Neumann problem

For task 3 (Neumann problem), we are considering u_t = u_xx, where u_x(0,t)=g_0(t) and u(1,t)=g_1(t) are given, along with u(x,0)=f(x). Perhaps simplest is to take g_0(t) = pi*exp(-t*pi^2), g_1(t) = 0, and u(x,0)=sin(pi*x), so the exact solution is the same as in the previous instance.

The case of the Neumann problem is analogous to task 3 from the first exercise: we obtain a system with one more unknown, and the linear system we solve to advance a time step in the backward Euler / Crank-Nicolson methods acquires a new row. The semi-discretisation paradigm allows us to obtain this change directly from the matrix appearing in exercise 1, task 3: the ODE will be d/dt u(t) = A*u(t) - F(t), where A and F(t) are the matrix and vector from this task. The vector F(t) will generally depend on time in that it involves evaluating the boundary condition g_0(t), so this should be updated at each time step. Applying backward Euler for instance results in the linear system (I - k*A) u^{n+1} = u^n - k*F^{n+1}, where I is the identity matrix.
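As a hedged sketch of this time stepping for the suggested data: the ghost-point treatment of the Neumann condition below (eliminating u_{-1} = u_1 - 2h*g_0(t)) is one standard second-order choice and may differ in detail from the matrix of exercise 1, task 3; the function name is my own.

```python
import numpy as np

def heat_neumann_be(M, N, T=0.1):
    """Backward Euler for u_t = u_xx with u_x(0,t)=g0(t), u(1,t)=0,
    u(x,0)=sin(pi x). Unknowns u_0..u_M (M+1 of them), h = 1/(M+1).
    Returns the scaled 2-norm error at t=T."""
    h = 1.0 / (M + 1)
    k = T / N
    x = np.linspace(0.0, 1.0, M + 2)[:-1]     # x_0..x_M (u at x_{M+1}=1 is known)
    g0 = lambda t: np.pi * np.exp(-np.pi**2 * t)
    # (M+1)x(M+1) matrix: standard second differences, first row modified
    # by the ghost point u_{-1} = u_1 - 2h*g0(t)
    A = (np.diag(np.full(M + 1, -2.0))
         + np.diag(np.ones(M), 1)
         + np.diag(np.ones(M), -1)) / h**2
    A[0, 1] = 2.0 / h**2
    B = np.eye(M + 1) - k * A                 # (I - k*A) u^{n+1} = u^n - k*F^{n+1}
    U = np.sin(np.pi * x)
    for n in range(1, N + 1):
        F = np.zeros(M + 1)
        F[0] = 2.0 * g0(n * k) / h            # time-dependent boundary data
        # the last entry of F would carry g_1; here g_1 = 0
        U = np.linalg.solve(B, U - k * F)
    exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
    return np.sqrt(h) * np.linalg.norm(U - exact)
```

With these data the exact solution is exp(-pi^2 t) sin(pi x), as in the Dirichlet task, so the same error measurement applies.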

The manner in which M and N are varied to give convergence plots in time and space is the same as from task 1 of this sheet.

(If comparing your values to the exact solution, make sure you are using the correct one for the boundary conditions you choose; you would not generally expect this to be the same as the solution for the first task.)

#### Periodic boundary conditions

For the question on periodic boundary conditions, we are only interested in the values u(x,t) for x in [0,1] as the rest follow from periodicity. Spatial discretisation with h=1/(m+1) as before will therefore result in m+2 unknowns, u_i(t), i=0,…,m+1. However, as u_0(t)=u_{m+1}(t), we can remove u_{m+1} from the system and reduce this to m+1 unknowns. The equation that results will be a system of ODEs d/dt u_i(t) = 1/h^2 * ( u_{i-1}(t) - 2*u_i(t) + u_{i+1}(t) ), i=0,…,m. For the particular case i=0, you should use that as u is periodic, we have u_{-1}(t)=u_m(t). The end result is a system similar to that of the previous problems, but where the first row and last row of the matrix appearing in your ODE method will be different.

If you are stuck, use the backward Euler method: the matrix appearing in the system you solve will be the same as that of the Dirichlet problem, except that the size will be (M+1)x(M+1) and the entries in the top-right and bottom-left corners will now be -r (instead of 0). The vector you are solving for will also be of size M+1 (as per the Neumann problem). Make sure that the initial condition and exact solution take values including the endpoint (as per the Neumann problem).
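A minimal sketch of that matrix (assuming NumPy, with r = k/h^2; the function name is my own):

```python
import numpy as np

def periodic_be_matrix(M, r):
    """Backward Euler matrix (I - k*A) for the periodic heat problem:
    size (M+1)x(M+1), 1+2r on the diagonal, -r on the off-diagonals,
    and -r in the top-right and bottom-left corners (r = k/h^2)."""
    B = ((1 + 2 * r) * np.eye(M + 1)
         - r * np.eye(M + 1, k=1)
         - r * np.eye(M + 1, k=-1))
    B[0, -1] = B[-1, 0] = -r        # wrap-around coupling from periodicity
    return B
```

Each row sums to 1, reflecting that constants are preserved exactly by the periodic scheme.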

An issue arises when computing the errors here: it is difficult to find the exact solution to the problem posed here. You have two options:

i) [Probably quickest] Consider instead the problem periodic on [-1,1], with sin(pi*x) as the initial condition. Here you will have to modify the code that produces the vector on the right-hand side to account for the different positions of the points x_i, but the exact solution will be the same as that from problem 1. If you modify the code to generate a vector of points x = linspace(-1,1,M+2), say, remember that the h values will be twice as large, since the interval has doubled in length, i.e. take h = 2/(M+1).

ii) Use the technique of comparing the solutions to one obtained on a very fine grid, i.e. the most accurate approximation available to you. In the case of convergence in time, try it once with N=10000 say, and store the value at the final time T=0.1 as v. Then for smaller values of N, you will obtain a vector u for the same T=0.1, and you can compare (u-v).

The convergence in space is a bit more complicated with this technique, as the vectors v and u are no longer the same size. I recommend taking m = 2^j - 1 (say, j=1,..,8), which ensures that h=1/2^j, and the coarsest grids will be 'contained in' the finest grid. Suppose j_max is the parameter for the computation at the finest scale (the suggestion above corresponds to j_max=8). Then v(1+2^(j_max - j)*(0:m)) will give the points of v to compare to u in Matlab; in Python you would compare v[::2**(j_max - j)] to u (note that ** is exponentiation in Python, not ^).
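A sketch of this restriction in Python; the vectors below are stand-ins built from sin(2*pi*x) purely for illustration, not actual solver output:

```python
import numpy as np

def restrict(v, j, j_max):
    """Entries of the fine-grid vector v (computed with m = 2**j_max - 1)
    that lie on the coarser grid with m = 2**j - 1."""
    return v[::2**(j_max - j)]       # ** is exponentiation in Python, not ^

# Stand-in data: periodic grid on [0,1) has m+1 points x_i = i/(m+1)
j_max = 5
x_fine = np.arange(2**j_max) / 2**j_max
v = np.sin(2 * np.pi * x_fine)               # pretend fine-grid solution
x_coarse = np.arange(2**3) / 2**3            # the grid for j = 3
u = np.sin(2 * np.pi * x_coarse)             # pretend coarse-grid solution
diff = restrict(v, 3, j_max) - u             # now the sizes match
```

Every stride-th fine-grid point coincides with a coarse-grid point, so diff is a meaningful error vector.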

## Notes: exercise 3

I have had a few questions about how to measure the error. In general, if solving for u(x,t), where x is a vector taking values in some subset D of R^n, when taking the error you will want to integrate over the spatial domain D. The treatment of time can vary, but in most cases you will probably want to fix this to some given value T.

In this case, there is no time dependence, so you will simply integrate over D, which here is the square [0,1] x [0,1], i.e. the 2-norm is the double integral sqrt(int_0^1 int_0^1 u(x,y)^2 dx dy).

The easiest way to compute this in practice is to take a 2-norm of the vector/matrix U (this depends on whether you store the solution as a vector or a matrix, but both will be equivalent). This should then be scaled by sqrt(h*k) to make it a proper approximation of the underlying integral.
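A minimal sketch of this scaled norm (assuming NumPy and that U is stored as a matrix; the function name is my own):

```python
import numpy as np

def scaled_2norm_2d(U, h, k):
    """Approximate the continuous 2-norm sqrt(int int u^2 dx dy) on the
    square from grid values U with spacings h (in x) and k (in y)."""
    return np.sqrt(h * k) * np.linalg.norm(U)  # Frobenius norm for a matrix U
```

The same value is obtained if U is flattened to a vector first, since the Frobenius norm of a matrix equals the 2-norm of its flattened entries.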