Topics for projects - Markus Grasmair

I am mainly working in variational (that is: optimisation based) methods with applications in mathematical image processing and inverse problems. The latter are typically concerned with the solution of operator equations (usually involving integral or differential operators) where the solution is extremely sensitive to noise (data and/or modelling errors). A classical example is the inversion of the Radon transform, which is the basis of computerised tomography (CT). Another example is deblurring, which is required to obtain clear images both at the largest scales in astronomy and at the smallest scales in microscopy.


Main topics (details can be found below):

  • Sparse optimisation methods for compressed sensing and learning.
  • Parameter learning for imaging problems.
  • Shape analysis for inverse problems.

The list of projects is by no means exhaustive. Feel free to suggest other topics within optimisation, inverse problems, or image processing that interest you.

In addition to the topics mentioned above, I can act as a local supervisor for certain machine learning/optimisation based projects offered at Simula Research Laboratory in Oslo (details to be announced later).

Sparse optimisation methods

In many modern applications, one has to deal with data points in high dimensions, where all the significant information and variation of these data points is contained in some (unknown!) low-dimensional subspace. For instance, one might know that a given signal contains only a limited number of different Fourier modes, but one does not know a priori which frequencies occur in the specific measurements. In addition, the measurement data often contain noise, and therefore the sparse representation by, e.g., a limited number of Fourier modes is not exact. One approach for tackling the problem of finding the sparse representation is the solution of the optimisation problem (closely related to Lasso regression) \[ \ell(Au,y) + \alpha \lVert \Phi u \rVert_1 \to \min_u. \] Here \(\ell\) is some loss function, \(A\) the measurement operator, \(y\) the measured data, \(\Phi\) a change of basis, and \(u\) the solution. The sparsity of the solution \(u\) is a consequence of the non-differentiability of the \(\ell^1\)-norm that is part of the functional. Problems of a similar structure also occur in the solution of inverse problems and in sparse parameter identification problems for PDEs; in those cases, the operator \(A\) could be an integral operator or the solution operator of a PDE.
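For the common special case of a squared-error loss and \(\Phi\) equal to the identity, this problem can be solved with the classical proximal gradient (iterative soft-thresholding) method. The following sketch illustrates this; the concrete loss, step size rule, and iteration count are illustrative choices and not part of the project description.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1; shrinks every entry towards zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, alpha, n_iter=500):
    # Iterative soft-thresholding for 0.5*||A u - y||^2 + alpha*||u||_1,
    # i.e. the model above with squared-error loss and Phi = identity.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - y)             # gradient of the smooth part
        u = soft_threshold(u - step * grad, step * alpha)
    return u
```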

The goal of this project is the development and analysis of sparsity-enhancing optimisation algorithms for problems of this type. Because the \(\ell^1\)-norm is non-differentiable, direct solution methods are not available, and it is necessary to apply iterative, non-smooth methods. However, it can happen that the iterates generated during the solution process are largely non-sparse, even though the solution to which they converge is sparse. We want to study and explain this phenomenon and try to develop algorithms that generate sparse iterates.
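The soft-thresholding step in the sketch above maps small entries exactly to zero, so its iterates are sparse by construction. By contrast, a plain subgradient method for the same objective, sketched below, has no such mechanism and generically produces fully dense iterates; the choice of subgradient via np.sign (which returns 0 at 0) is an illustrative assumption.

```python
import numpy as np

def subgradient_step(A, y, alpha, u, t):
    # One step of the plain subgradient method for the same objective;
    # np.sign selects a valid subgradient of the l^1-norm at each entry.
    # Nothing in this update maps entries of u exactly to zero.
    g = A.T @ (A @ u - y) + alpha * np.sign(u)
    return u - t * g
```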

Prerequisites: TMA4180 Optimisation I.

Depending on the concrete choice of applications, it can help (but is not necessary) to have a background in some of the following courses: TMA4205 Numerical Linear Algebra, TMA4183 Optimisation II, TMA4268 Statistical Learning, MA8104 Wavelets.

Parameter learning

Variational image restoration methods are typically of the form \[ \ell(Au,y) + \alpha_1 R_1(u) + \ldots +\alpha_K R_K(u) \to \min_u, \] where \(A\) is the imaging operator, \(\ell\) a loss function that penalises the deviation from the given noisy data \(y\), and the regularisation functionals \(R_k\) penalise different aspects of the image. Moreover, the regularisation parameters \(\alpha_k\) balance the influence of these different aspects. One important class of such methods is that of decomposition models, where in the simplest case the image is decomposed as \(u = u_C + u_T\) into a cartoon part \(u_C\) and a texture part \(u_T\), leading to a model of the form \[ \ell(Au,y) + \alpha_C R_C(u_C) + \alpha_T R_T(u_T) \to \min \qquad\text{ such that } u = u_C + u_T. \]
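As a concrete (purely illustrative) instance, the following sketch evaluates the decomposition energy for denoising, with \(A\) the identity, a squared-error loss, an anisotropic total variation term for the cartoon part, and a quadratic texture term; all of these specific choices are assumptions made for the example, not the models studied in the project.

```python
import numpy as np

def tv(u):
    # Anisotropic discrete total variation of a 2D image; a common
    # cartoon-enforcing choice for R_C.
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def decomposition_energy(u_C, u_T, y, alpha_C, alpha_T):
    # Energy of the decomposition model above with A = identity (denoising),
    # squared-error loss, R_C = TV, and a quadratic texture term R_T.
    u = u_C + u_T
    return 0.5 * np.sum((u - y) ** 2) + alpha_C * tv(u_C) + alpha_T * np.sum(u_T ** 2)
```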

[Figure: decomposition of an image into a cartoon part and a texture part; panels show the original image, the cartoon part, and the texture part. Taken from: M. Grasmair and V. Naumova, Multi-parameter approaches in image processing, submitted (2019).]

While such models can yield very good reconstructions of the image \(u\), the problem of parameter choice, combined with a potentially time-consuming numerical solution, can make them impractical for applications, as it is often necessary to solve the optimisation problem for a large number of different choices of the regularisation parameters. In this project, we will investigate new parameter selection methods for such multi-parameter variational models. We will in particular consider learning-based methods, where the parameter selection is based either on the behaviour of the model on some labelled training set (that is, supervised learning) or on subsampled or transformed versions of the given image (unsupervised learning).
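To make the computational burden concrete: the baseline against which any learning-based selection rule competes is a brute-force grid search over the parameters, sketched below with hypothetical `solve` and `score` callables (e.g. a solver for the variational model and a quality score such as PSNR on a training image).

```python
import itertools

def grid_search(solve, score, y, alphas_C, alphas_T):
    # One full model solve per grid point: for a 20 x 20 parameter grid this
    # already means 400 solves of the variational problem.
    best = None
    for a_C, a_T in itertools.product(alphas_C, alphas_T):
        u = solve(y, a_C, a_T)
        s = score(u)
        if best is None or s > best[0]:
            best = (s, a_C, a_T)
    return best
```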

Prerequisites: TMA4180 Optimisation I, TMA4183 Optimisation II.

Recommended courses: TMA4205 Numerical Linear Algebra, TMA4268 Statistical Learning.

Shape analysis for inverse problems

This project will discuss methods for the reconstruction of shapes from (possibly noisy) measurements. Here, a shape is described mathematically as a mapping \(\phi\colon M \to \mathbb{R}^d\) modulo reparameterisations of \(M\), where the parameter space \(M\) depends on the dimension and topology of the shapes we are interested in; typical choices are an interval for modelling curves, a circle for modelling closed (periodic) curves, or a sphere for modelling the surfaces of three-dimensional objects. The main idea of shape analysis consists in viewing the shape space as a manifold with a Riemannian metric that describes the energy necessary for infinitesimal deformations of the shape. The distance between two shapes can then be defined as the minimal energy necessary to deform one shape into the other. In the particular case of one-dimensional shapes, this energy can be computed efficiently by means of a dynamic programming based approach.
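The dynamic programming idea can be illustrated with the classical dynamic time warping recursion between two discretised curves; the Riemannian shape distances used in the project employ a different local cost and alignment model, so this is only a structural sketch of the principle.

```python
import numpy as np

def dtw_distance(c1, c2):
    # c1, c2: (n, d) and (m, d) arrays of curve sample points.
    # D[i, j] holds the cheapest alignment cost of the first i points of c1
    # with the first j points of c2.
    n, m = len(c1), len(c2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(c1[i - 1] - c2[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```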

The situation we will consider is one where we are given a data set of model shapes \(\psi_1,\ldots,\psi_K\). Given a measurement operator \(F\) and (noisy) data \(y\), we can then reconstruct the generating shape \(\phi\) by approximately solving a problem of the form \[ \ell(F(\phi),y) + \alpha S(d(\phi,\psi_1),\ldots,d(\phi,\psi_K)) \to \min_\phi. \] A special case of this situation is that of low-dimensional data representation (where \(F\) is the identity), which in turn is closely related to the problem of supervised geometric learning.
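One concrete (hypothetical) choice of \(S\) is the minimum over its arguments, which penalises only the distance to the nearest model shape. The sketch below evaluates the resulting objective, with `F`, `loss`, and the shape distance `d` passed in as callables.

```python
def shape_prior_objective(phi, y, F, loss, alpha, model_shapes, d):
    # Objective from the formula above with S(d_1, ..., d_K) = min_k d_k,
    # i.e. only the nearest model shape contributes to the penalty.
    return loss(F(phi), y) + alpha * min(d(phi, psi) for psi in model_shapes)
```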

The main goal of this project is the development of numerical methods for the solution of this (highly non-convex) optimisation problem. In addition, it is possible to study well-posedness and applicability to various types of applied inverse problems.

Prerequisites: TMA4180 Optimisation I, TMA4183 Optimisation II.

It can help to have taken the course TMA4190 Introduction to Topology (or a similar introduction to the fundamental concepts of differential geometry).

2019-11-20, Markus Grasmair