05.06.2018

Courses in Computational Sciences

Courses in Computational Sciences at the 28th Jyväskylä Summer School. The University of Jyväskylä reserves the right to make changes to the course programme.

COM1: Variational models and fast numerical schemes in image processing and computer vision

Time: 6.-10.8.2018, 20 hours (afternoon)
Participants: master’s students and doctoral students
Lecturer(s): Prof. Xue-Cheng Tai (Hong Kong Baptist University, Kowloon Tong, Hong Kong)
Coordinator(s): Dr. Sanna Mönkölä 
Code: TIES6820
Modes of study: Lectures and assignments.
Credits: 2-3 ECTS
Evaluation: Pass/fail

Contents: This course will introduce a number of problems in image processing and computer vision and describe how they can be tackled by modern techniques based on the calculus of variations and partial differential equations. We will focus especially (but not exclusively) on image reconstruction (denoising, deblurring, inpainting, as well as some inverse problems) and image segmentation. These procedures are fundamental in many applications, such as medical imaging and target recognition. Numerical solution of the models, which involves minimizing appropriate energies (often by solving associated partial differential equations), will be a major concern of the course: a variety of numerical techniques for this purpose, including level set and diffuse interface methods for evolving curves and surfaces, will be introduced and covered in detail. In addition, important theoretical questions about the various models, and how they have been answered, will be presented. We will also try to cover some of the newest developments for these problems that are not yet covered in standard textbooks. Tentative outline:

(1) Mathematical preliminaries.

  • Some elementary partial differential equations.
  • Basics of minimization and the calculus of variations.
  • Functions of bounded variation.

(2) Image restoration, inpainting and deblurring.

  • The total variation model of Rudin, Osher, and Fatemi (see the energy sketched after this list).
  • Mumford-Shah model.
  • Euler’s Elastica model.
  • Higher-order PDE methods and nonlinear filters.
  • Other geometrical models for image filters.
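For concreteness, the Rudin–Osher–Fatemi model listed above recovers a denoised image u from noisy data f defined on an image domain Omega by minimizing (in its standard formulation) the energy

    E(u) = \int_\Omega |\nabla u| \, dx + \frac{\lambda}{2} \int_\Omega (u - f)^2 \, dx,

where the first term (the total variation of u) penalizes oscillations while preserving edges, and the parameter lambda > 0 balances regularization against fidelity to the data.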

(3) Fast numerical schemes

  • Gradient descent method (a minimal denoising sketch follows this list).
  • Operator splitting and AOS schemes.
  • Dual approaches.
  • Split-Bregman.
  • Augmented Lagrangian approach.
  • Other fast minimization approaches for image processing.
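To connect the models of part (2) with the schemes of part (3), the following is a minimal sketch (not part of the official course material) of gradient descent applied to a smoothed version of the ROF energy. NumPy is assumed, and the smoothing parameter eps, the weight lam, and the step size tau are purely illustrative choices.

    import numpy as np

    def tv_denoise(f, lam=5.0, eps=0.1, tau=0.01, iters=500):
        """Explicit gradient descent on a smoothed ROF energy
        E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (u - f)^2,
        where f is a 2-D noisy image given as a float NumPy array."""
        u = f.astype(float).copy()
        for _ in range(iters):
            # Forward differences (zero beyond the last row/column).
            ux = np.hstack([u[:, 1:] - u[:, :-1], np.zeros((u.shape[0], 1))])
            uy = np.vstack([u[1:, :] - u[:-1, :], np.zeros((1, u.shape[1]))])
            mag = np.sqrt(ux**2 + uy**2 + eps**2)
            px, py = ux / mag, uy / mag
            # Backward-difference divergence (discrete adjoint of the gradient).
            dx = px - np.hstack([np.zeros((px.shape[0], 1)), px[:, :-1]])
            dy = py - np.vstack([np.zeros((1, py.shape[1])), py[:-1, :]])
            # Descent step: dE/du = -div(p) + lam * (u - f);
            # tau must stay small for the explicit scheme to be stable.
            u += tau * (dx + dy - lam * (u - f))
        return u

The faster alternatives listed above (dual approaches, split Bregman, the augmented Lagrangian approach) avoid the restrictive step size that explicit gradient descent requires.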

(4) Image segmentation and geometrical PDEs

  • Geodesic active contours; implementation using level-sets.
  • Mumford-Shah and Chan-Vese models using level-sets.
  • Piecewise Constant Level set method.
  • Graph cut approach for interface problems and image segmentation.
  • Recent fast numerical schemes and global minimizations.
  • Discussion of a few other vision problems.

Learning outcomes: Knowledge and skills to formulate variational image processing and computer vision problems and to solve them numerically.

Completion mode: Obligatory attendance at lectures and completing the given exercises.

Prerequisites: Basics of numerical methods for partial differential equations (e.g., finite difference or finite element method), vector calculus, linear algebra, and some programming experience. 

COM2: Data analytics + machine learning + optimization

Time: 13.-17.8.2018, 15 hours lectures (mornings) + 15 hours computer lab sessions (afternoons)
Participants: Master’s students with the appropriate background knowledge in quantitative methods and some basic programming experience. Doctoral students and post-docs working on topics that deal with large datasets, prediction/classification tasks or optimisation problems.

Lecturer(s):  Assistant Prof. Manuel Lopez-Ibanez (University of Manchester, UK)
Coordinator(s): Atanu Mazumdar
Code: TIES6821 
Modes of study: Lectures, computer lab work, group course work.
Credits: 4 ECTS
Evaluation: Pass/fail

Contents: This course covers all the steps that go from accessing data about a problem to analysing the data, using it for prediction and classification, and designing and testing an optimisation algorithm to solve the problem, with Python as the main programming language. We will introduce fundamental concepts in data analytics, modelling, machine learning and optimisation, and how they relate to each other. In addition, the course will discuss the practical details that are often left out of textbooks but are crucial for successfully solving an optimisation problem. Finally, the course will include many examples of pitfalls and recommended practices when designing, testing and comparing optimisation methods, in particular metaheuristics such as local search and evolutionary algorithms, for both single-objective and multi-objective problems. The following topics will be covered in the course:

  • Accessing and preparing data
  • Data wrangling, munging and preprocessing
  • Data analysis and fundamentals of machine learning
  • Problem formulation and modelling
  • Fundamentals of decision making and optimisation, with a focus on metaheuristics
  • Complex problem features: uncertainty, constraint-handling and multiple objectives
  • Automatic parameter tuning and design of optimisation algorithms

The computer lab sessions will be dedicated to solving examples of real-world problems by applying the concepts learned in the lectures, using Python packages such as Pandas, NumPy, SciPy, and Scikit-learn (machine learning).
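As an illustration of the kind of workflow the lab sessions target (a sketch only, not an actual course exercise; the file name measurements.csv and the column name label are hypothetical placeholders), a typical Pandas/Scikit-learn pipeline looks like this:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Load and lightly clean a tabular dataset (file and column names are placeholders).
    data = pd.read_csv("measurements.csv").dropna()
    X = data.drop(columns=["label"])
    y = data["label"]

    # Preprocessing and classification combined in one pipeline,
    # evaluated with 5-fold cross-validation.
    model = make_pipeline(StandardScaler(),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"Mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")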

Learning outcomes: At the end of the course, students should be able to:

  • Access data from the web in various formats
  • Transform, preprocess and prepare data for various purposes
  • Analyse and visualise large amounts of data
  • Use data to formulate and model optimisation problems with complex features
  • Design and evaluate optimisation algorithms for such problems

Prerequisites:

  • The course will build on basic concepts in probability, statistics, and discrete mathematics. It would be suitable for anyone with a background in quantitative methods who has an interest in learning about heuristic optimisation and/or machine learning.
  • Programming experience is recommended.
  • Basics of Python in data science (e.g. Intro to Python for Data Science). The goal is not to learn Python programming, but to use it as a tool.

Assessment methods: Attendance at the lectures is a requirement for passing the course. The final evaluation will consist of one group coursework assignment on a research question or real-world application. The coursework will consist of a group report of around 5000 words and a Python script (50%), a group presentation (30%), and a peer review (20%).

COM3: Numerical Analysis of PDEs - history, overview of methods, and recent advances

Time: 7.-11.8. (Tue-Sat) and 13.-17.8.2018 (Mon-Fri), 20h (lectures), 2 h each day; Tue 7.8.-Thu 9.8. 10-12, other days 12-14
Participants: master’s students and doctoral students 
Lecturer(s): Prof. Sergey Repin (University of Jyväskylä & Steklov Institute of Mathematics at St. Petersburg, Russia)
Coordinator(s):  Dr. Monika Wolfmayr
Code: TIES595 
Modes of study: Lectures and exercises, computer work.
Credits: 3-5 ECTS
Evaluation: Pass/fail

Contents: The lecture course is intended to give an overview of mathematical models and methods based on partial differential equations. It starts with a relatively simple introductory part, where the main principles of quantitative analysis of differential equations are discussed within the paradigm of ordinary differential equations. The second part is devoted to such fundamental questions as the correctness of boundary value problems. The course then gives an overview of numerical methods (from classical ones to those developed in the last decades). The last part of the lecture course is devoted to error analysis and related topics, such as a priori convergence rate estimates, error indicators and adaptivity, and a posteriori error estimates.

1 INTRODUCTION

  • The  main questions in analysis of PDEs.
  • Examples of mathematical models based on differential equations.    

2 ORDINARY DIFFERENTIAL EQUATIONS

  • Differential equations and difference equations.                             
  • Cauchy problem, the Euler method and its modifications (a small sketch follows this list).
  • Convergence of the Euler method.               
  • The methods of Adams and Picard–Lindelöf.
  • Boundary value problems.                 
  • Finite difference method and the "sweep" procedure.          
  • Variational-difference method.           
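As a minimal illustration of the explicit Euler method listed above (a sketch assuming NumPy is available; the test equation y' = -y is chosen only to check first-order convergence):

    import numpy as np

    def explicit_euler(f, t0, y0, h, n_steps):
        """Explicit Euler method for the Cauchy problem y' = f(t, y), y(t0) = y0.
        Returns the time grid and the approximate solution values."""
        t = t0 + h * np.arange(n_steps + 1)
        y = np.empty(n_steps + 1)
        y[0] = y0
        for k in range(n_steps):
            y[k + 1] = y[k] + h * f(t[k], y[k])
        return t, y

    # Test problem y' = -y, y(0) = 1, exact solution exp(-t);
    # halving h roughly halves the error (first-order convergence).
    t, y = explicit_euler(lambda t, y: -y, 0.0, 1.0, 0.1, 100)
    print(abs(y[-1] - np.exp(-t[-1])))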

3 CORRECTNESS of BOUNDARY VALUE PROBLEMS

  • Generalized solutions of boundary value problems.         
  • Classical solutions of BVPs.                 
  • Petrov-Bubnov-Galerkin method.               
  • Mathematical background.                 
  • Existence of a generalized solution.               
  • Variational settings of boundary value problems.           
  • Variational inequalities.                    
  • Existence of a minimizer.                 
  • Saddle point settings of boundary value problems.          
  • Primal and dual variational problems.              
  • Existence of a saddle point.                 
  • Saddle point statements of linear elliptic problems.   
  • Saddle point statements of nonlinear variational problems.

4 OVERVIEW OF NUMERICAL METHODS FOR PDEs

  • Classification of numerical methods
  • The Ritz method     
  • The finite element method (geometrical and functional features, convergence to the exact solution, practical applications, literature)
  • The finite difference method (correctness of the FD method, practical implementation: advantages and drawbacks, literature; a one-dimensional sketch follows this list).
  • Mixed  finite element methods (classical mixed FEM, the dual mixed FEM, least squares mixed FEM, literature).
  • The method of Trefftz.
  • The finite volume method.
  • The method of fictitious domains.
  • Mortar approximations.
  • The Discontinuous Galerkin (DG) method (derivation of DG integral relations, literature).
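To make the finite difference entry above concrete, here is a minimal one-dimensional sketch (assuming NumPy; the right-hand side is a toy example with a known exact solution): the problem -u'' = f on (0, 1) with u(0) = u(1) = 0 is discretized with the standard three-point stencil.

    import numpy as np

    def poisson_1d_fd(f, n):
        """Central finite differences for -u'' = f on (0, 1), u(0) = u(1) = 0,
        on a uniform grid with n interior points."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)               # interior grid points
        # Tridiagonal matrix of the three-point stencil (-1, 2, -1) / h^2.
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        u = np.linalg.solve(A, f(x))
        return x, u

    # Toy problem: f(x) = pi^2 sin(pi x), exact solution u(x) = sin(pi x).
    x, u = poisson_1d_fd(lambda x: np.pi**2 * np.sin(np.pi * x), 99)
    print(np.max(np.abs(u - np.sin(np.pi * x))))     # error decreases as O(h^2)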

5 CONVERGENCE OF APPROXIMATIONS

  • Convergence of approximations.
  • Modification of the limit density notion.          
  • Variational inequality of the first kind.            
  • Variational inequality of the second kind.          

6  A PRIORI VERIFICATION OF THE ACCURACY

  • Projection error estimate. Bramble-Hilbert Theorem.                    
  • Affine equivalent mappings. Interpolation operators in Sobolev Spaces.
  • Interpolation on polygonal sets, aspect ratio.         
  • Estimate of the convergence rate for polygonal domains.        
  • A priori convergence estimates for problems in convex domains.  
  • Aubin-Nitsche estimate.  A priori error estimates for mixed FEM.           

7  ERROR INDICATORS AND ADAPTIVITY

  •  Error indicators and adaptive numerical methods.             
  •  Error indicators for FEM solutions.                  
  •  Accuracy of error indicators.               

8  A POSTERIORI VERIFICATION OF THE ACCURACY

  • Errors and residuals. Algebraic equations.
  • Errors and residuals. Differential equations.
  • Evaluation of negative norms.
  • Residual method for ordinary differential equations.
  • Estimation of the residual for linear elliptic problems.
  • Clement's interpolation operator for plane simplexes.
  • Methods using adjoint problems.
  • Goal-oriented indicators for PDEs.
  • Post-processing (averaging, superconvergence and equilibration).
  • A posteriori estimates of the functional type.

Learning outcomes: Understanding of mathematical models based on partial differential equations and of the numerical methods used to solve them.

Prerequisites: Numerical methods for partial differential equations (e.g., finite difference or finite element method), vector calculus, linear algebra, and some programming experience.

Completion mode: Obligatory attendance at lectures and completing the given exercises. 

Literature:

  1. R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, 1984
  2. D. Braess, Finite Elements. Cambridge University Press, Cambridge, 2007
  3. G. Duvaut, J.-L. Lions, Les Inéquations en Mécanique et en Physique. Dunod, Paris, 1972
  4. O.A. Ladyzhenskaya, The Boundary Value Problems of Mathematical Physics. Springer, New York, 1985
  5. S. Repin. A posteriori estimates for partial differential equations. Walter de Gruyter, Berlin, 2008.
  6. O. Mali, P. Neittaanmaki and S. Repin. Accuracy Verification Methods.  Theory and Algorithms. Springer, 2014.

COM4: Introduction to cryptography and security

Time: 13.-17.8.2018, 20 hours?, afternoon
Participants: master’s students and doctoral students
Lecturer(s):  Prof. Bülent Yener (Rensselaer Polytechnic Institute (RPI) in Troy, New York)
Coordinator(s):  Dr. Mirka Saarela
Code: TIES6822
Modes of study: Obligatory attendance at lectures and completing the given exercises.
Credits: 2-3 ECTS
Evaluation: Pass/fail

Contents: This is an introductory course on cryptography and security. It is a self-contained class (i.e., we will cover some background material) that will include the necessary topics from algebra and number theory to understand the basics of cryptography. Applications of cryptography to various security protocols will be covered toward the end of the course. We will cover a wide range of material to give a broad view of the field while treating the fundamentals in depth. Lectures will be delivered mainly with slides, except when mathematics is involved, in which case we will use the whiteboard. Homework will be a combination of (i) programming assignments for implementing various cryptographic techniques, and (ii) simple proofs or calculations that do not require programming.

A tentative outline of the topics to be covered in this class is as follows:


  1. Classical Cryptography
  2. Math Background & Information Theoretical Cryptography
  3. Block Ciphers
  4. Randomness, RNG and Stream Ciphers
  5. Hash and MAC Algorithms
  6. Public-Key Cryptography
  7. Digital Signatures, Secret Sharing
  8. Subliminal Channels
  9. Web Security, SSL and PGP
  10. Anonymity and Privacy
  11. Digital Cash
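As a flavour of the programming assignments mentioned in the course contents, the following is a small sketch of a classical shift (Caesar) cipher from topic 1, written in Python (the example plaintext and key are arbitrary):

    def shift_cipher(text, key, decrypt=False):
        """Classical Caesar/shift cipher over the 26-letter English alphabet.
        Characters that are not letters are passed through unchanged."""
        k = (-key if decrypt else key) % 26
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + k) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    ciphertext = shift_cipher("Attack at dawn", 3)         # -> "Dwwdfn dw gdzq"
    plaintext = shift_cipher(ciphertext, 3, decrypt=True)  # -> "Attack at dawn"

Such a cipher is trivially broken by exhaustive key search or frequency analysis, which motivates the information-theoretic and modern topics later in the outline.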

Learning outcomes: At the end of the course, students will know basic encryption/decryption for confidentiality, how to electronically sign documents, how to establish covert channels, and how to share secrets. They will learn the mathematical skills required for cryptography. The homework will give them hands-on experience, and a small project will enable them to put the knowledge they gain into practice on a real-life problem.

Prerequisites: Some experience with computer programming and operating systems is required. Ideally, the students should have a background in the following topics: computer networks, socket programming, and operating systems.


Literature: Recommended (NOT REQUIRED) textbooks: Cryptography and Network Security by William Stallings; Applied Cryptography: Protocols, Algorithms, and Source Code in C by Bruce Schneier.