Topics for projects - Markus Grasmair

I am mainly working on variational (that is, optimisation-based) methods with applications in mathematical image processing and inverse problems. Below you can find some concrete topics for a master's project. Feel free to come with your own proposals, though. In any case, if you are potentially interested in a thesis, please get in touch with me so that we can arrange a meeting.

If you are interested in a bachelor's thesis, please send me an e-mail so that we can arrange a meeting and discuss possible topics.

Inverse Problems

Inverse problems are typically concerned with the solution of operator equations (usually involving integral or differential operators) whose solutions are extremely sensitive to noise (data and/or modelling errors). A classical example is the inversion of the Radon transform, which is the basis of computerised tomography (CT). Another example is deblurring, which is required for obtaining clear images both at the largest scales in astronomy and the smallest scales in microscopy. A further class of examples consists of parameter identification problems for PDEs, where one wants to reconstruct some parameters (e.g. the heat source or a spatially varying conductivity) from the solution of a PDE.

Abstractly, an inverse problem can be formulated as the problem of solving an equation \(F(u) = v^\delta\) for \(u\), given some noisy measurement data \(v^\delta\) with noise level \(\delta\). Here \(F \colon U \to V\) is a possibly non-linear mapping between the Hilbert or Banach spaces \(U\) and \(V\). Because of the ill-posedness of the problem (that is, discontinuous dependence of the solution on the data \(v^\delta\)), a direct solution does not make sense. Instead, it is necessary to introduce some type of regularisation in the solution process that is based on prior knowledge of qualitative properties of the true solution \(u^\dagger\) of the problem.
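
As a concrete illustration (not part of the project itself), the following short numpy sketch, with purely hypothetical choices of operator and parameters, discretises a smoothing integral operator and shows how a direct inversion amplifies even a tiny amount of data noise:

```python
import numpy as np

# Discretisation of a smoothing integral operator (a Gaussian convolution);
# the kernel width 0.05 and the grid size are arbitrary illustrative choices.
n = 100
x = np.linspace(0, 1, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x)                    # the "true solution" u^dagger
v_delta = K @ u_true + 1e-3 * np.random.randn(n)  # data with noise level ~1e-3

# Naive inversion: the rapidly decaying singular values of K blow up the noise.
u_naive = np.linalg.solve(K, v_delta)
print("relative error of naive inversion:",
      np.linalg.norm(u_naive - u_true) / np.linalg.norm(u_true))
```

Although the data error is only of the order \(10^{-3}\), the relative reconstruction error typically exceeds one by many orders of magnitude; this is precisely the discontinuous dependence on the data described above.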

Proposal 1: Source conditions for inverse problems

One classical approach to the solution of inverse problems is Tikhonov regularisation, where the prior assumption is that the norm of the true solution \(u^\dagger\) is small. In this case, it makes sense to find an approximate solution of the inverse problem by solving the optimisation problem \[ \frac{1}{2}\lVert F(u)-v^\delta\rVert^2 + \alpha\lVert u \rVert^2 \to \min. \] In this setting, it can be shown that the quality of the approximate solution depends on whether the true solution \(u^\dagger\) satisfies a so-called source condition of the form \[ u^\dagger = F'(u^\dagger)^* \xi. \] For typical applications like parameter identification problems in PDEs or the solution of integral equations, this source condition can often be interpreted as a smoothness condition on the true solution.
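
For a linear forward operator \(A\), the Tikhonov minimiser can be computed from the normal equations \((A^*A + 2\alpha I)u = A^* v^\delta\). The following self-contained sketch (again a hypothetical toy setup) illustrates how the reconstruction error depends on the choice of the regularisation parameter \(\alpha\):

```python
import numpy as np

# Hypothetical toy setup: the same kind of smoothing operator as above.
n = 100
x = np.linspace(0, 1, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
A /= A.sum(axis=1, keepdims=True)
u_true = np.sin(2 * np.pi * x)
v_delta = A @ u_true + 1e-3 * np.random.randn(n)

def tikhonov(A, v, alpha):
    # Minimiser of (1/2)||A u - v||^2 + alpha ||u||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + 2 * alpha * np.eye(A.shape[1]), A.T @ v)

# The parameter alpha balances data fidelity against stability; the accuracy
# attainable with a well-chosen alpha is governed by source conditions.
for alpha in [1e-8, 1e-5, 1e-2]:
    u_alpha = tikhonov(A, v_delta, alpha)
    err = np.linalg.norm(u_alpha - u_true) / np.linalg.norm(u_true)
    print(f"alpha = {alpha:.0e}: relative error = {err:.3f}")
```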

The goal of this project is to investigate source conditions (and thus the quality of the approximate solutions) in the case where the prior assumption is that of smallness of some more general regularisation functional \(\mathcal{R}\), and where one computes approximate solutions of the inverse problem by solving \[ \frac{1}{2}\lVert F(u)-v^\delta\rVert^2 + \alpha\mathcal{R}(u) \to \min \] instead. Examples of regularisation functionals of interest are various Sobolev and total variation (semi-)norms on \(L^p\)-spaces, as well as \(\ell^p\)-norms of basis or frame coefficients of \(u\).
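
As one concrete non-quadratic instance, the sketch below (an illustration only; the random measurement matrix and the sparse true solution are hypothetical choices) takes \(\mathcal{R}(u) = \lVert u\rVert_1\) and minimises the resulting functional with the iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)    # hypothetical measurement matrix
u_true = np.zeros(n)
u_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse truth
v_delta = A @ u_true + 1e-3 * rng.standard_normal(m)

def ista(A, v, alpha, iters=2000):
    # Minimises (1/2)||A u - v||^2 + alpha ||u||_1 by iterative soft thresholding.
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        g = u - A.T @ (A @ u - v) / L           # gradient step on the fidelity term
        u = np.sign(g) * np.maximum(np.abs(g) - alpha / L, 0.0)  # soft threshold
    return u

u_rec = ista(A, v_delta, alpha=1e-2)
print("recovered support:", np.flatnonzero(np.abs(u_rec) > 1e-3))
print("true support:     ", np.flatnonzero(u_true))
```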

Prerequisites: Good knowledge of functional analysis (e.g. the course TMA4230 - Functional Analysis); knowledge of convex analysis and/or inverse problems is an advantage, but might also be acquired in a specialisation course (fordypningsemne); previous knowledge of PDE theory and/or the theory of Sobolev spaces can be an advantage.

Proposal 2: Learning-based regularisation

In this project we will investigate learning-based solution methods for inverse problems. Here we assume that the true solutions follow a prior distribution \(\mu\in \mathbb{P}(U)\) that we can sample or learn. Moreover, we assume that the data follows a distribution \(\nu^\delta \approx g^\delta \ast F_\sharp \mu\), where the convolution with \(g^\delta\) models the noise (which here is assumed to be additive and i.i.d.).
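
The following toy sketch shows how training data for this setting can be generated; the smooth random-series prior and the blurring forward map are purely hypothetical modelling choices. Samples from \(\mu\) are pushed forward through \(F\) and then corrupted by additive i.i.d. noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 100, 1e-2
x = np.linspace(0, 1, n)
# Hypothetical forward operator F: a discretised blurring (smoothing) map.
F = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
F /= F.sum(axis=1, keepdims=True)

def sample_prior(k):
    # Draws k samples from a hypothetical smooth random-series prior mu.
    freqs = np.arange(1, 11)
    coeffs = rng.standard_normal((k, 10)) / freqs       # decaying coefficients
    return coeffs @ np.sin(np.pi * freqs[:, None] * x[None, :])

U = sample_prior(1000)                                  # samples u ~ mu
V = U @ F.T + delta * rng.standard_normal(U.shape)      # samples v ~ nu^delta
print("training pairs:", U.shape, V.shape)
```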

Now there are at least two main approaches to the regularised solution. First, there is the possibility of directly learning an approximate inverse \(G^\delta\colon V \to U\) of \(F\) in the sense that \(G^\delta_\sharp \nu^\delta \approx \mu\). An alternative is to learn a regularisation functional \(\mathcal{R}\colon U \to \mathbb{R}\) and then to apply Tikhonov regularisation with the learned functional. The goal of this project is to investigate the advantages and drawbacks of the different methods, both from a theoretical and an applied point of view. The main theoretical questions to answer are those of stability and convergence: How sensitive are the regularisation methods with respect to perturbations (or sampling biases) in the distributions \(\mu\) and \(\nu^\delta\)? And can we show that \(G^\delta_\sharp \nu^\delta \to \mu\) as \(\delta \to 0\), and in which sense does this convergence hold?
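
As an extremely simplified instance of the second approach, one can fit a Gaussian surrogate to samples from the prior and use the resulting quadratic form as a "learned" regulariser; realistic learned regularisers would of course be nonlinear (e.g. parametrised by neural networks). The following sketch (self-contained, repeating the hypothetical toy model above) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 100, 1e-2
x = np.linspace(0, 1, n)
F = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
F /= F.sum(axis=1, keepdims=True)
freqs = np.arange(1, 11)
basis = np.sin(np.pi * freqs[:, None] * x[None, :])
U = (rng.standard_normal((1000, 10)) / freqs) @ basis   # samples from the prior

# "Learning" step: fit a Gaussian surrogate to the prior samples.
m_hat = U.mean(axis=0)
P = np.linalg.inv(np.cov(U.T) + 1e-6 * np.eye(n))       # regularised precision

# A fresh draw from the prior and its noisy measurement.
u_true = (rng.standard_normal(10) / freqs) @ basis
v_delta = F @ u_true + delta * rng.standard_normal(n)

# Minimise (1/2)||F u - v||^2 + (alpha/2)(u - m)^T P (u - m); the stationarity
# condition is the linear system (F^T F + alpha P) u = F^T v + alpha P m.
alpha = delta**2
u_rec = np.linalg.solve(F.T @ F + alpha * P, F.T @ v_delta + alpha * P @ m_hat)
print("relative error:", np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```

Because the learned regulariser here is quadratic, the reconstruction reduces to a single linear solve; for nonlinear learned regularisers one would instead need iterative optimisation.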

Prerequisites: Good knowledge of functional analysis (e.g. the course TMA4230 - Functional Analysis) and measure theory (e.g. the course TMA4225 - Foundations of Analysis); knowledge of inverse problems is an advantage, but might also be acquired in a specialisation course (fordypningsemne); it is a definite advantage to have some prior knowledge in the field of machine learning (e.g. through the course TMA4268 - Statistical Learning).

2023-11-19, Markus Grasmair