Trondheim Symposium in Statistics 2017

The symposium will take place 6-7 October 2017 at Bårdshaug (https://baardshaug.no/).

Agenda (subject to change):

  • Friday 1400-1515: Bus from NTNU in Trondheim to Bårdshaug, followed by check-in.
  • Friday 1615-1630: Coffee and fruit.
  • Friday 1630-1730: Rosemary Bailey
  • Friday 1730-1830: David Ginsbourger
  • Friday from 1900: Dinner and socializing.
  • Saturday until 0900: Breakfast.
  • Saturday 0900-1000: Jan Terje Kvaløy
  • Saturday 1000-1100: Magnar Lillegård
  • Saturday 1100-1200: Claire Miller
  • Saturday 1200-1300: Lunch.
  • Saturday 1300-1345: Bus from Bårdshaug to NTNU in Trondheim.

Presentations are about 50 minutes, followed by questions. There will also be a few minutes' break between talks to get water and coffee.

Invited speakers, with titles and abstracts:


Rosemary Bailey: Block designs with very low replication, and other challenges in design of experiments

Abstract: In the early stages of testing new crop varieties, it is common that there are only small quantities of seed of many new varieties. In the UK (and some other countries with centuries of agriculture on the same land) variation within a field can be well represented by a division into blocks. Even when that is not the case, subsequent phases (such as testing for milling quality, or evaluation in a laboratory) have natural blocks, such as days or runs of a machine. I will discuss how to arrange the varieties in a block design when the average replication is less than two. I will conclude by showing that the Fisherian model of replication, blocking and randomization is still being ignored in much experimentation.
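
To make the comparison of such sparse designs concrete, here is a minimal sketch (an illustrative example, not taken from the talk) that scores two hypothetical block designs for 8 varieties in 4 blocks of size 3 (average replication 1.5) by the A-criterion: the sum of reciprocals of the nonzero eigenvalues of the information matrix C = R - N K^{-1} N', where N is the variety-by-block incidence matrix, R the diagonal matrix of replications and K that of block sizes. A smaller value means a lower average variance for pairwise variety comparisons.

    import numpy as np

    def a_criterion(blocks, v):
        """Sum of reciprocals of the nonzero eigenvalues of C = R - N K^{-1} N'."""
        N = np.zeros((v, len(blocks)))              # variety-by-block incidence
        for j, blk in enumerate(blocks):
            for t in blk:
                N[t, j] += 1
        R = np.diag(N.sum(axis=1))                  # replications per variety
        K = np.diag(N.sum(axis=0))                  # block sizes
        C = R - N @ np.linalg.inv(K) @ N.T          # information matrix for varieties
        eig = np.linalg.eigvalsh(C)
        return sum(1.0 / e for e in eig if e > 1e-8)

    # Varieties 0-3 are replicated twice, varieties 4-7 only once (12 plots in all).
    d1 = [(0, 1, 4), (1, 2, 5), (2, 3, 6), (3, 0, 7)]   # duplicates spread out
    d2 = [(0, 1, 2), (0, 1, 3), (2, 4, 5), (3, 6, 7)]   # varieties 0 and 1 meet twice
    print(a_criterion(d1, 8), a_criterion(d2, 8))       # smaller is better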


David Ginsbourger: Quantifying and reducing uncertainties on sets under Gaussian Process priors

Abstract:

Gaussian Process models have been used in a number of problems where an objective function f needs to be studied based on a drastically limited number of evaluations.

Global optimization algorithms based on Gaussian Process models have been investigated for several decades, and have become quite popular, notably in the design of computer experiments. Further classes of problems involving the estimation of sets implicitly defined by f, e.g. excursion sets above a given threshold, have also inspired multiple research developments.

In this talk, we will give an overview of recent results and challenges pertaining to the estimation of sets under Gaussian Process priors, with a particular interest in the quantification and the sequential reduction of the associated uncertainties.

Based on a series of joint works primarily with Dario Azzimonti, François Bachoc, Julien Bect, Mickaël Binois, Clément Chevalier, Ilya Molchanov, Victor Picheny, Yann Richet and Emmanuel Vazquez.
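
As a toy illustration of the objects involved (a schematic sketch with made-up data, not code from the talk): under a GP posterior with mean m(x) and standard deviation s(x), the probability that f(x) exceeds a threshold T is Φ((m(x) - T)/s(x)); thresholding this coverage function at 1/2 gives a plug-in set estimate (often called the Vorob'ev median in this literature), and averaging it gives the expected volume of the excursion set. The objective, kernel and threshold below are all illustrative choices.

    import numpy as np
    from scipy.stats import norm

    def rbf(a, b, ls=0.15):
        """Squared-exponential kernel with unit prior variance."""
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

    f = lambda x: np.sin(6 * x) + 0.5 * x             # made-up objective on [0, 1]
    X = np.array([0.05, 0.3, 0.55, 0.7, 0.95])        # the few evaluation points
    y = f(X)

    # GP posterior mean and standard deviation on a grid (noise-free kriging).
    xs = np.linspace(0, 1, 201)
    K = rbf(X, X) + 1e-9 * np.eye(X.size)
    k = rbf(xs, X)
    m = k @ np.linalg.solve(K, y)
    s2 = 1.0 - np.einsum('ij,ji->i', k, np.linalg.solve(K, k.T))
    s = np.sqrt(np.maximum(s2, 1e-12))

    T = 0.8                                           # excursion threshold
    p = norm.cdf((m - T) / s)                         # coverage probability of {f >= T}
    vorobev_median = xs[p >= 0.5]                     # plug-in set estimate
    print("expected excursion volume:", round(p.mean(), 3))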


Jan Terje Kvaløy: Control charts – handling of estimation error and model choice

Abstract: Control charts are a set of techniques for monitoring stochastic processes over time. These techniques originated in the manufacturing industry, where the units being monitored are typically quite homogeneous. Over time, numerous extensions and more sophisticated control charts have been developed for more challenging applications, and today control charts are applied in diverse areas such as medicine, environmental science, finance, social science, insurance, and reliability. One important factor behind the widespread use of control charts has been the development of so-called risk-adjusted charts, which allow for monitoring in situations where the units are less homogeneous, by accounting for the explainable differences between units via regression models. Two issues arise when applying such methods in practice: how to account for the estimation error in the model, and how to choose an appropriate regression model for the monitoring purpose.

This talk will start with a brief introduction to some of the most popular control charts, and then focus on the handling of estimation error and on model choice. In particular, a bootstrap-based method for handling estimation error, implemented in the R package spcadjust, will be explained. This method applies to both ordinary control charts and risk-adjusted charts. Various illustrations of applications to medical data will be given.

The presentation is based on joint work with Axel Gandy.
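
The flavour of such a bootstrap adjustment can be sketched in the simplest setting, a Shewhart chart for Gaussian data with estimated in-control parameters (a schematic Python illustration of the idea on simulated data, not the spcadjust implementation): the limit multiplier is widened until, with high bootstrap confidence, the realized false-alarm rate does not exceed the nominal one.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    rng = np.random.default_rng(1)
    x = rng.normal(10.0, 2.0, size=100)               # Phase I (in-control) data
    mu_hat, sd_hat = x.mean(), x.std(ddof=1)

    p0, alpha, B = 0.0027, 0.10, 1000                 # nominal rate, confidence, replicates

    def false_alarm(c, mu_est, sd_est, mu_true, sd_true):
        """Per-observation false-alarm rate of limits mu_est +/- c*sd_est
        when observations really follow N(mu_true, sd_true)."""
        return (norm.cdf((mu_est - c * sd_est - mu_true) / sd_true)
                + norm.sf((mu_est + c * sd_est - mu_true) / sd_true))

    cs = []
    for _ in range(B):
        xb = rng.normal(mu_hat, sd_hat, size=x.size)  # parametric bootstrap resample
        mu_b, sd_b = xb.mean(), xb.std(ddof=1)
        # multiplier needed so that limits estimated from xb would hit rate p0
        # if (mu_hat, sd_hat) were the true in-control parameters
        cs.append(brentq(lambda c: false_alarm(c, mu_b, sd_b, mu_hat, sd_hat) - p0,
                         0.1, 10.0))

    c_adj = np.quantile(cs, 1 - alpha)                # widened multiplier
    print("naive c = 3.00, adjusted c = %.2f" % c_adj)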


Magnar Lillegård: The Norwegian Commodity Flow Survey: design, estimation, and error sources

Abstract: The Commodity Flow Survey (CFS) in Norway shows transportation within and between regions in Norway from establishments in mining and quarrying, manufacturing, waste collection and treatment, and wholesale trade. The main purpose of the survey is to gain better knowledge of where the main trade flows are transported within Norway and between Norway and abroad. The mapping lays the foundation for planning and prioritization: identifying in which parts of the infrastructure there are bottlenecks, and where the need for investments and improvements is greatest. The first CFS in Norway was carried out in 2009, producing statistics for the year 2008. The 2008 survey was conducted as a traditional sample survey within the activity groups listed above. In the 2014 survey, a multisource design was chosen, combining a sample survey with administrative data from shipment databases. The latter approach gave many more observations, but it also created new sources of error. Because of the multisource design, estimation by weighting was not used. Instead, shipments from companies not in the data set were imputed by a nearest-neighbour method. The talk will go through some common estimation and imputation techniques in business statistics, and then focus on problems and challenges in the CFS.
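
Nearest-neighbour (donor) imputation of the kind mentioned above can be sketched in a few lines. The covariates and tonnages below are hypothetical placeholders, not CFS variables: a company missing from the shipment data is assigned the value of the observed company closest to it in covariate space.

    import numpy as np

    # Hypothetical register covariates (log turnover, log employees) per company,
    # with observed total tonnage for the companies present in the shipment data.
    donors = {
        "A": (np.array([5.1, 2.3]), 120.0),
        "B": (np.array([6.0, 3.1]), 410.0),
        "C": (np.array([4.2, 1.9]),  75.0),
    }

    def impute(features, donors):
        """Nearest-neighbour imputation: copy the value of the closest donor."""
        name, (_, value) = min(donors.items(),
                               key=lambda kv: np.linalg.norm(kv[1][0] - features))
        return name, value

    # A company outside the data set receives the shipments of its nearest neighbour.
    donor, tonnage = impute(np.array([5.4, 2.5]), donors)
    print("imputed from donor %s: %.0f tonnes" % (donor, tonnage))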


Claire Miller: Functional data approaches for satellite data

Abstract: Developments in satellite retrieval algorithms continually extend the extraordinary potential of satellite platforms such as the MEdium Resolution Imaging Spectrometer (MERIS) and the Advanced Along-Track Scanning Radiometer (AATSR) to retrieve information across the Earth at finer spatial resolution. For example, resolution down to 300 m for MERIS now enables water quality products to be produced for lakes, and Sentinel-2A & 2B (launched in June 2015 and March 2017) will enable quantitative retrieval at resolutions down to around 10-60 m. Challenges associated with these new environmental data streams are the large volumes of data in space and time, collected as images, and the often large quantities of missing data. This talk will describe smoothing and functional data analysis as useful approaches for investigating such lake water quality data at a global scale. Novel developments in these areas are required to provide efficient dimensionality reduction, data imputation and data linkage, enabling spatiotemporal patterns to be estimated from sparse data, satellite data to be bias-corrected using in-situ data, and temporal coherence for lakes to be identified globally. Methods will be illustrated using data from the AATSR and MERIS instruments on the European Space Agency satellite platform, which have been used to estimate lake surface water temperature and ecological properties such as chlorophyll for lakes in the projects ARC Lake (http://www.geos.ed.ac.uk/arclake), Diversity II (http://www.diversity2.info/) and GloboLakes (www.globolakes.ac.uk).

This presentation is based on joint work with Marian Scott, Ruth O'Donnell, Mengyi Gong and Craig Wilkie, School of Mathematics and Statistics, University of Glasgow.
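
A basic building block of such functional data approaches is smoothing a sparse, gappy series into a smooth curve. The sketch below (simulated data; the truncated-power spline basis and penalty are illustrative choices, not the methods of the talk) fits a penalized spline to a lake temperature series with most days missing, then evaluates the fitted curve on every day to fill the gaps.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated sparse lake surface temperature series: most days missing (clouds).
    days = np.sort(rng.choice(365, size=60, replace=False))
    temp = 10 + 8 * np.sin(2 * np.pi * (days - 120) / 365) + rng.normal(0, 0.8, 60)

    # Cubic truncated-power spline basis on [0, 1] (days rescaled for conditioning).
    knots = np.linspace(0, 1, 12)[1:-1]
    def basis(day):
        t = np.asarray(day, float) / 365.0
        cols = [np.ones_like(t), t, t**2, t**3]
        cols += [np.clip(t - k, 0, None) ** 3 for k in knots]
        return np.column_stack(cols)

    # Penalized least squares: roughness penalty on the knot coefficients only.
    X, lam = basis(days), 1e-3
    P = np.zeros(X.shape[1]); P[4:] = 1.0
    coef = np.linalg.solve(X.T @ X + lam * np.diag(P), X.T @ temp)

    # Evaluate the smooth curve on every day of the year, filling the cloud gaps.
    smooth = basis(np.arange(365)) @ coef
    print("fitted range: %.1f to %.1f degC" % (smooth.min(), smooth.max()))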

2017-09-25, Jo Eidsvik