Lecture plan
Lectures: Chapters 6-10 from George Casella and Roger L. Berger, 2002: Statistical Inference, Second Edition, Duxbury. Self-study: Chapters 1-5. The content is assumed to be essentially known, but some parts will be covered on a 'need to know' basis in the lectures.
Please observe: The posted scribes below may contain factual errors, and cover only parts of what was said in the lecture.
Week | Scribe ++ | Book | Topic | Notes/Supplementary reading |
---|---|---|---|---|
34 | Prologue SI GUM Mon Wed | Ch. 6 and Ch 1-5 | Introduction and repetition. Parameter and statistic: Axioms for events and random points. Distributions, Expectation, The change-of-variables theorem. Statistical models and sufficiency. Principles of data reduction. Sufficiency principle. Factorization theorem | Prologue. Videos in Norwegian: Sverdrup interview; Kolmogorov; Events; Encyclopedia of mathematics: Factorization theorem |
35 | Mon: What is X given T=t? | Ch. 6 | Statistical models and sufficiency. Principles of data reduction. Level sets, Partitions, Equivalence relations, Sufficiency principle. Factorization theorem. Minimal sufficiency. Conditional probability and expectation, Complete statistics and ancillary statistics: Basu theorem. Exponential family. Likelihood | Wikipedia: Sufficient including minimal; ancillary; Exponential family; Complete statistic; Basu theorem; Ancillary meaning |
36 | Mon Wed | Ch. 6 | Principles for data reduction. The likelihood principle. Conditionality principle. Birnbaum's theorem. Complete statistics and ancillary statistics: Basu theorem. Symmetry in statistics. Equivariant statistics and parameters. Group actions. Rao passed away in August 2023, at 102 years of age. Rao on Wikipedia. | Likelihood principle and Birnbaum argument; Conditionality principle; Statistical problem of the Nile, and real Nile problem. |
37 | Wed | Ch. 6 | Symmetry in statistics: The equivariance principle. Level sets, Partitions, Equivalence relations. Conditional probability and expectation, Hilbert space of random variables. Projection theorem. Summary of Ch. 6: S-L-C-E and Basu and Halmos-Savage | Equivariance; Inference; Group action; Alternative methods for data reduction; Conditional probability; Conditional expectation |
38 | Mon | Ch. 7 | Estimation and prediction: Point, Set, Distribution. Methods: Empirical-Moments-MLE-Bayes-Fiducial-RaoBlackwell. Maximum likelihood estimation in exponential families. Optimal estimates. Lehmann-Scheffe, Rao-Blackwell, Equivariance and ancillaries. Bayes-estimation. Convex loss and Jensen's inequality. Evaluation: Distribution-Loss-Bias-Equivariance. Examples: Location/Scale, Binomial, Exponential family, Gamma | Parameter; Statistics; Statistics also!: Point estimation; Estimation; Estimation again :-) ; Exponential family; Lehmann-Scheffe theorem; Rao-Blackwell theorem |
39 | Mon | Ch. 7 | Bayes estimates. How? Conjugate prior. Right Haar prior. Exponential families. MCMC. INLA. Information. Fisher information and metric. Cramér-Rao inequality. Entropy and Kullback-Leibler | Bayesian inference; Loss; Convex function; Jensen's inequality; Convex optimization |
40 | Mon | Ch. 7 | ISO GUM and delta method. Standard uncertainty. Expanded uncertainty. Effective degrees of freedom. Examples and summary: Fisher information metric, Time series, Linear models, Gaussian, Gamma, Bernoulli, Fisher's problem of the Nile, Uniform, Behrens-Fisher, Common means | GUM; Accuracy and precision |
41 | Mon Wed | Ch. 8 | Hypothesis testing. Ex8. Hypothesis. Randomized test. Power. Level. Neyman-Pearson. Methods: Likelihood, p-value, estimator, Bayes. Optimal tests. Neyman-Pearson, Karlin-Rubin, Unbiased test. Most powerful tests: UMP. Risk and power. p-value. Stochastic order | Monotone likelihood ratio; p-value; Stochastic ordering; Neyman-Pearson; Karlin-Rubin theorem |
42 | Mon Wed | Ch. 8 and 9 | Examples: p-values, sufficiency, Fisher's exact test, conditional tests, Exponential families, linear models, Behrens-Fisher. Summary (Tuesday lecture): Methods. Evaluation. Examples. Ex8 hints. Interval estimation. Test-, Point-, Set-, and Distribution-estimators. Credibility-, Confidence-, Prediction-, and Tolerance-sets. Inversion. Ghosh-Pratt and Kolmogorov-Robbins Theorems. Bayes. Pivot | Reject an unlikely hypothesis! Hypothesis H. Test R. Power β. Likelihood (ratio) λ(x), level and size α, best test, valid p(x) |
43 | Wed | Ch. 9 | Optimal intervals. Loss and risk. Power. Unbiased. Equivariance. Coverage coefficient and level. Accuracy. Size- or Neyman-precision. Trueness. Examples: Construction and evaluation. Bayes-, Fiducial-, and Confidence-distributions, -intervals, -estimators. | |
44 | Wed | Ch. 9 | Examples: Construction and evaluation. p-value function. Bayes-, Fiducial-, and Confidence-distributions, -intervals, -estimators. Summary. Methods. Evaluation. Examples. | GUM; Accuracy and precision |
45 | Mon | Ch. 9 and Ch. 10 + 5.5 | Equivariance and set estimation. Examples. Inversion and group. Asymptotics. Convergence. Likelihood estimators. Delta method. Examples. Exam example | Asymptopia |
46 | Summary | | Concepts, theorems, examples | |
47 | Repetition | | | |
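As a small illustration of the delta method and standard uncertainty listed for week 40 (ISO GUM), the following sketch propagates the uncertainty of a sample mean through a nonlinear function. The sample, the choice g(x) = exp(x), and all variable names are illustrative assumptions, not taken from the course material:

```python
import math
import random

random.seed(1)

def delta_method_se(g_prime, mu_hat, se_mu):
    """First-order delta method: se(g(mu_hat)) is approx. |g'(mu_hat)| * se(mu_hat)."""
    return abs(g_prime(mu_hat)) * se_mu

# Illustrative i.i.d. normal sample; the target parameter is theta = exp(mu).
n = 10_000
sample = [random.gauss(1.0, 0.5) for _ in range(n)]
mu_hat = sum(sample) / n
s = math.sqrt(sum((x - mu_hat) ** 2 for x in sample) / (n - 1))
se_mu = s / math.sqrt(n)                             # standard uncertainty of mu_hat

theta_hat = math.exp(mu_hat)                         # plug-in estimate of exp(mu)
se_theta = delta_method_se(math.exp, mu_hat, se_mu)  # g'(x) = exp(x)
```

In GUM terminology, `se_theta` is the standard uncertainty of `theta_hat`; multiplying by a coverage factor (e.g. k = 2) gives an approximate expanded uncertainty interval theta_hat ± 2·se_theta.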
Chapter 1 of Keener (2010) (Theoretical Statistics: Topics for a Core Course) is recommended as a replacement for Chapter 1 in Casella-Berger. Keener writes at a level similar to Casella-Berger, but is mathematically more precise. Keener uses more standard notation and definitions, which will also be used in the lectures. The text by Keener was used as a textbook in (STAT 210A) Theoretical Statistics at UC Berkeley by one of my master students in 2018. It should be accessible also to other students at NTNU. The course STATS 300A at Stanford is more advanced, but has many good student notes that you may find useful for the mandatory scribe exercise. The notes for Berkeley's introductory Ph.D.-level course on theoretical statistics can also be useful, but cover more material.