# Algorithms for Uncertainty Quantification - Summer 17

Computer simulations of different phenomena rely heavily on input data that, in many cases, are not known exactly but are subject to random effects. Uncertainty Quantification (UQ) is a cutting-edge research field that supports decision making under such uncertainties. Typical questions tackled in this course are: “How can measurement errors be incorporated into simulations to obtain meaningful output?”, “What can I do to be 98.5% sure that my robot trajectory will be safe?”, “Which algorithms are available?”, “What is a good measure of the complexity of UQ algorithms?”, “What is the potential for parallelization and High-Performance Computing in the different algorithms?”, and “Is there software available for UQ, or do I need to program everything from scratch?”

In particular, this course will cover:

- Brief repetition of basic probability theory and statistics
- 1st class of algorithms: sampling methods for UQ (Monte Carlo), the brute-force approach
- More advanced sampling methods: Quasi Monte Carlo & Co.
- Relevant properties of interpolation & quadrature
- 2nd class of algorithms: stochastic collocation via the pseudo-spectral approach. Is it possible to obtain accurate results at (much) lower cost?
- 3rd class of algorithms: stochastic Galerkin. Are we willing to (heavily) modify our software to gain accuracy?
- Dimensionality reduction in UQ: applying hierarchical methodologies such as tree-based sparse grid quadrature. What does the connection to Machine Learning and classification problems look like?
- Which parameters actually matter? => sensitivity analysis (Sobol’ indices etc.)
- What if there is an infinite number of parameters? => approximation methods for random fields (KL expansion)
- Software for UQ: Which packages are available? What are the advantages and downsides of major players such as chaospy, UQTk, and DAKOTA?
- Outlook: inverse UQ problems, data aspects, real-world measurements
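To make the contrast between the first two classes of algorithms concrete, here is a minimal sketch on a hypothetical toy problem (the model, distribution, and parameters are illustrative assumptions, not course material): an uncertain input x ~ N(1.0, 0.1) is propagated through a stand-in “simulation” f(x) = x², and the mean of the output is estimated both by brute-force Monte Carlo sampling and by a 3-point Gauss–Hermite quadrature rule in the spirit of stochastic collocation. Analytically, E[f(x)] = μ² + σ² = 1.01.

```python
import math
import random
import statistics

def model(x):
    # Stand-in for an expensive simulation code (hypothetical example).
    return x * x

mu, sigma = 1.0, 0.1  # assumed mean and standard deviation of the uncertain input

# 1st class of algorithms: brute-force Monte Carlo sampling.
# Draw many input realizations, run the model on each, and average.
random.seed(0)
mc_mean = statistics.fmean(
    model(random.gauss(mu, sigma)) for _ in range(100_000)
)

# 2nd class of algorithms (collocation flavor): a 3-point Gauss-Hermite
# quadrature rule, exact for polynomial models up to degree 5, needing
# only 3 model evaluations instead of 100,000.
nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]  # physicists' Hermite nodes
weights = [math.sqrt(math.pi) / 6,
           2 * math.sqrt(math.pi) / 3,
           math.sqrt(math.pi) / 6]
quad_mean = sum(
    w * model(mu + math.sqrt(2) * sigma * z)
    for w, z in zip(weights, nodes)
) / math.sqrt(math.pi)

print(f"Monte Carlo estimate (100,000 runs): {mc_mean:.4f}")
print(f"Quadrature estimate (3 runs):        {quad_mean:.6f}")
```

The point of the sketch is the cost gap: the sampling estimate carries a statistical error shrinking only like 1/√N, while the quadrature rule hits the analytic value 1.01 with three model evaluations, which is exactly the “accurate results at lower cost” trade-off the collocation lectures address.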