Courses in Mathematics and Statistics

Courses in Mathematics and Statistics at the 28th Jyväskylä Summer School. The University of Jyväskylä reserves the right to make changes to the course programme.

MA1: Recent progress in regularity theory

Time: 13.-17.8.2018, 10h 
Participants: no limits
Lecturer(s): Prof. Giuseppe Mingione (Università di Parma, Italy)
Coordinator(s): Mikko Parviainen
Code: MATJ5101
Modes of study: Obligatory attendance at lectures and completion of exercises.
Credits: 2 ECTS
Evaluation: Pass/fail
Contents: In this series of lectures I will try to summarize some recent progress in the regularity theory of quasilinear, possibly degenerate equations, after giving a rapid overview of basic facts. Specifically, I will move from the basic and by now classical De Giorgi-Nash-Moser theory and partial regularity problems for systems to more recent topics of current interest, such as Nonlinear Calderón-Zygmund Theory and potential estimates in Nonlinear Potential Theory.
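For orientation, a standard model example of the equations described above (my choice of illustration, not necessarily the lecturer's) is the p-Laplace equation:

```latex
% Canonical degenerate quasilinear equation: the p-Laplacian, p > 1.
% For p = 2 this is the Laplace equation; for p > 2 the modulus of
% ellipticity |Du|^{p-2} degenerates at critical points of u, and for
% p < 2 it blows up there.
\[
  -\operatorname{div}\bigl( |Du|^{p-2}\, Du \bigr) = 0 .
\]
```

Weak solutions of such equations are exactly the setting where De Giorgi-Nash-Moser-type methods yield Hölder continuity results.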

Learning outcomes: To have an idea of basic regularity problems and results in regularity theory for solutions of quasilinear degenerate equations and minimizers of integral functionals in the Calculus of Variations, with special emphasis on those parts touching the so-called Nonlinear Calderón-Zygmund theory and Nonlinear Potential Theory.

Prerequisites: Basic measure theory and functional analysis. Sobolev spaces. Notion of distributional solution.

MA2: Quantitative stochastic homogenization

Time: 13.-17.8.2018, 10h
Participants: no limits
Lecturer(s):  Prof. Tuomo Kuusi (University of Oulu)
Coordinator(s):  Mikko Parviainen
Code: MATJ5102
Modes of study: Obligatory attendance at lectures and completion of exercises.
Credits: 2 ECTS
Evaluation: Pass/fail

Contents: The aim of the course is to describe some recent developments in random homogenization. The main focus is on linear elliptic equations with random coefficients. During the first part of the course we shall prove a quantitative rate of homogenization with very good stochastic integrability. Next, we will develop a stochastic regularity theory (where the theories of De Giorgi and Stampacchia play a crucial role) and finally use this regularity theory to accelerate the rate of convergence of homogenization, and to bootstrap it to the optimal one given by the Central Limit Theorem scaling.
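A sketch of the standard setting (my notation, not necessarily that of the lecture notes): one studies solutions of a linear elliptic equation whose coefficients oscillate randomly on a small scale ε, and asks how fast these solutions converge to the solution of an effective constant-coefficient equation.

```latex
% Linear elliptic equation with random, rapidly oscillating coefficients:
\[
  -\nabla \cdot \Bigl( \mathbf{a}\bigl(\tfrac{x}{\varepsilon}\bigr)
    \nabla u^{\varepsilon} \Bigr) = f
  \quad \text{in } U .
\]
% Homogenization: as \varepsilon \to 0, u^\varepsilon converges to the
% solution u of a constant-coefficient equation with an effective
% (homogenized) matrix \bar{\mathbf{a}}:
\[
  -\nabla \cdot \bigl( \bar{\mathbf{a}}\, \nabla u \bigr) = f .
\]
```

The quantitative theory referred to in the course description concerns the rate of this convergence and its stochastic integrability.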

Learning outcomes: To know what is meant by stochastic homogenization. To be able to derive a quantitative rate of homogenization for the linear elliptic equations with random coefficients dealt with in the course. To understand how regularity theory can be used to accelerate the rate of convergence.

Prerequisites: Basic measure theory and stochastics. Sobolev spaces. Notion of distributional solution.  The course is based on the lecture notes "Quantitative stochastic homogenization and large-scale regularity" available at http://perso.ens-lyon.fr/jean-christophe.mourrat/lecturenotes.pdf

MA3: On recent developments in Markov Processes and Applications

Time: 6.-10.8.2018, 10x45 min
Lecturer(s):  Prof. Andreas Kyprianou (Department of Mathematical Sciences, University of Bath)
Coordinator(s): Prof. Stefan Geiss
Code: MATJ5103
Modes of study: Lectures, homework by solving problems
Credits: 2 ECTS
Evaluation: Pass/fail

Contents:
  1. Quick review of Lévy processes
  2. Stable processes seen as Lévy processes
  3. Stable processes seen as self-similar Markov processes
  4. Riesz–Bogdan–Zak transform
  5. Hitting spheres
  6. Spherical hitting distribution
  7. Spherical entrance/exit distribution
  8. Radial excursion theory

In this mini-course we will review some very recent work on isotropic stable processes in high dimensions. The recent theory of self-similar Markov and Markov additive processes gives us new insights into their trajectories. Combining this with classical methods, we revisit some old results as well as offer new ones.
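As background (a standard definition, not specific to the course material): an isotropic α-stable process in d dimensions is a Lévy process that is also self-similar, which is what connects it to the theory of self-similar Markov processes mentioned above.

```latex
% Self-similarity (scaling property) of an isotropic \alpha-stable
% process X in \mathbb{R}^d, with stability index \alpha \in (0, 2]:
\[
  \bigl( c\, X_{c^{-\alpha} t} \bigr)_{t \ge 0}
  \overset{d}{=} \bigl( X_t \bigr)_{t \ge 0}
  \qquad \text{for all } c > 0 .
\]
% This places X in the class of self-similar Markov processes, which
% admit a description in terms of Markov additive processes.
```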

Learning outcomes: The students are acquainted with basic and advanced parts of the theory of Lévy processes and self-similar Markov processes. They can deal with isotropic stable processes in high dimensions and are familiar with important path properties of these processes. The students are able to work on appropriate theoretical problems as well as on concrete examples.

Prerequisites: Some relatively basic knowledge of Lévy processes and basic facts about Markov processes. Although pitched at the level of a graduate course, the prerequisites do not go significantly beyond a standard undergraduate/masters-level exposure to probability and stochastic processes.

STAT1: Penalized and Bayesian Models for Variable Selection in Linear Regression: Inference, Model averaging and Prediction

Time: 6.-10.8.2018, 20 hours of lectures & 6 hours of exercises
Participants: Advanced MSc students, PhD students, researchers
Lecturer(s): Professor Mikko Sillanpää (University of Oulu)
Coordinator(s):  Prof. Juha Karvanen & Assoc. Prof. Matti Vihola
Code: TILJ5101
Modes of study: Lectures, practicals, coursework
Credits: 3 ECTS
Evaluation: Pass/fail (based on practical work to be submitted to the lecturer some time after the course)

Contents: Common methods for simultaneous variable selection and parameter estimation in linear regression models will be covered in settings that typically have more unknown parameters than data points (i.e., high-dimensional problems). These include methods such as LASSO, Bayesian LASSO, Elastic-net, SSVS, Spike-and-Slab and many more. We will compare the penalized least squares / maximum likelihood framework to Bayesian estimation under Markov chain Monte Carlo (MCMC) and to faster maximum-a-posteriori estimation. We will pay special attention to the differences between two cases: whether we want to use the model for prediction or to formally identify important covariates out of a large number of candidates. Many of the variable selection methods covered during the course carry over to generalized linear models.
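To illustrate the penalized framework mentioned above (standard textbook formulas, not course material): the LASSO and elastic-net estimates are defined as minimizers of a penalized least squares criterion.

```latex
% LASSO: least squares with an L1 penalty, tuning parameter \lambda \ge 0:
\[
  \hat{\beta}^{\mathrm{lasso}}
  = \arg\min_{\beta} \; \lVert y - X\beta \rVert_2^2
    + \lambda \lVert \beta \rVert_1 .
\]
% Elastic net: a combination of L1 and L2 penalties, which performs
% selection (via the L1 term) while stabilizing correlated covariates
% (via the L2 term):
\[
  \hat{\beta}^{\mathrm{enet}}
  = \arg\min_{\beta} \; \lVert y - X\beta \rVert_2^2
    + \lambda_1 \lVert \beta \rVert_1
    + \lambda_2 \lVert \beta \rVert_2^2 .
\]
```

In the Bayesian analogues the penalties correspond to prior distributions; for example, the L1 penalty of the LASSO corresponds to independent Laplace priors on the regression coefficients.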

Learning outcomes: Students can evaluate, compare and utilize the most common variable selection methods in their own work and will be aware of the common properties of different methods. Students will also understand some basic philosophical differences between the methods.

Prerequisites: Background in linear models & regression, basic course in Bayesian statistics (& knowledge about Markov chain Monte Carlo).