Representation of soil water movement

Dr. Hannes Bauser

Soil water movement is a key process in ecosystem services such as biomass production, freshwater retention, climate regulation, and water buffering and filtering. However, the quantitative description of soil water movement on all relevant scales, from meters to the global scale, remains an open challenge. In this talk I focus on the meter scale, where soil water movement can still be described with the process-based Richards equation. Nevertheless, the mathematical representation of soil water movement exhibits uncertainties in all model components, which means that the representation of uncertainties in each model component becomes an integral part of the model formulation. The goal is then an optimal, consistent representation with minimal uncertainties. Data assimilation methods, which combine models and data, are a key tool for this task. In this talk I present an application to a real-world case. We assessed the key uncertainties for the specific hydraulic situation of a 1-D soil profile with water contents measured by time domain reflectometry (TDR). We employed a data assimilation method, the ensemble Kalman filter (EnKF), with an augmented state to represent and reduce all key uncertainties (initial condition, soil hydraulic parameters, small-scale heterogeneity, and upper boundary condition), except for an intermittent violation of the local equilibrium assumption underlying the Richards equation. To bridge this period, we employed a closed-eye period, which pauses the parameter estimation and only guides the states through it. This ensured constant parameters throughout the whole estimation, suggesting that we achieved a more consistent description and limited the incorporation of errors into the parameters.
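The augmented-state idea mentioned above can be illustrated with a minimal numpy sketch of a stochastic EnKF analysis step. This is not the authors' implementation: the ensemble layout, observation setup, and all numbers are hypothetical, and it simply shows how stacking parameters into the state lets one Kalman update correct states and parameters together.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_indices, obs_err_std):
    """One stochastic EnKF analysis step on an augmented state.

    ensemble    : (n_members, n_aug) array; each row stacks the water-content
                  state with the (hypothetical) soil hydraulic parameters.
    obs         : (n_obs,) observed water contents (e.g. from TDR probes).
    obs_indices : indices of the observed entries within the augmented state.
    obs_err_std : observation error standard deviation.
    """
    n_members = ensemble.shape[0]
    n_obs = len(obs)
    H = ensemble[:, obs_indices]                 # observed part, (n_members, n_obs)
    R = obs_err_std**2 * np.eye(n_obs)
    # Ensemble anomalies around the ensemble mean
    A = ensemble - ensemble.mean(axis=0)
    HA = H - H.mean(axis=0)
    # Kalman gain from sample covariances
    P_xy = A.T @ HA / (n_members - 1)            # (n_aug, n_obs)
    P_yy = HA.T @ HA / (n_members - 1) + R       # (n_obs, n_obs)
    K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))
    # Perturbed observations (stochastic EnKF variant)
    obs_pert = obs + obs_err_std * rng.standard_normal((n_members, n_obs))
    return ensemble + (obs_pert - H) @ K.T
```

A closed-eye period as described in the abstract would correspond to zeroing the increments of the parameter entries during the update, so that only the state entries are guided by the data while the parameters stay constant.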

Explainable Machine Learning


Prof. Dr. Ullrich Köthe (Research Group: Visual Learning Lab Heidelberg)

Today's machine learning algorithms, and in particular neural networks, mostly act as black boxes: they make very good predictions, but we don't really understand why. This is problematic for various reasons: Why should users (e.g. physicians) trust these algorithms? Will black-box methods contribute to the advancement of science when they produce numbers, not insight? How can one challenge objectionable machine decisions? Explainable machine learning attempts to solve these problems by opening the black boxes. The first part of my talk will review central ideas of this field. The second part describes invertible neural networks (INNs) in more detail and shows how this novel network type may contribute to various aspects of explainability.
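To make the INN idea concrete, here is a minimal numpy sketch of an affine coupling block, the standard building block of many invertible network architectures. This is an illustration, not the speaker's specific architecture: the subnetworks s and t are reduced to fixed linear maps so that the exact invertibility of the construction is easy to verify.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling block: half of the input passes through unchanged
    and parametrises an invertible affine map applied to the other half."""

    def __init__(self, dim, rng):
        half = dim // 2
        # Stand-ins for learned subnetworks (hypothetical, fixed linear maps)
        self.Ws = 0.1 * rng.standard_normal((half, half))
        self.Wt = 0.1 * rng.standard_normal((half, half))

    def _s_t(self, x1):
        return x1 @ self.Ws, x1 @ self.Wt

    def forward(self, x):
        x1, x2 = np.split(x, 2, axis=-1)
        s, t = self._s_t(x1)
        y2 = x2 * np.exp(s) + t       # invertible because exp(s) > 0
        return np.concatenate([x1, y2], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, 2, axis=-1)
        s, t = self._s_t(y1)
        x2 = (y2 - t) * np.exp(-s)    # exact inverse, no iteration needed
        return np.concatenate([y1, x2], axis=-1)
```

Because the inverse is available in closed form, an INN built from such blocks can be evaluated in both directions, which is what makes this network type attractive for explainability questions such as recovering the inputs consistent with a given prediction.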

Multilevel Uncertainty Quantification with Sample-Adaptive Model Hierarchies


Prof. Dr. Robert Scheichl (Research Group: Numerical Analysis and Uncertainty Quantification)

Sample-based multilevel uncertainty quantification tools, such as multilevel Monte Carlo, multilevel quasi-Monte Carlo or multilevel stochastic collocation, have recently gained huge popularity due to their potential to efficiently compute robust estimates of quantities of interest (QoIs) derived from PDE models that are subject to uncertainties in the input data (coefficients, boundary conditions, geometry, etc.). Especially for problems with low regularity, they are asymptotically optimal in that they can provide statistics about such QoIs at (asymptotically) the same cost as it takes to compute a single sample to the target accuracy. However, when the data uncertainty is localised at random locations, as for manufacturing defects in composite materials, the cost per sample can be reduced significantly by adapting the spatial discretisation individually for each sample. Moreover, the adaptive process typically produces coarser approximations that can be used directly for the multilevel uncertainty quantification. In this talk, we present two novel developments that aim to exploit these ideas. In the first part we will present Continuous Level Monte Carlo (CLMC), a generalisation of multilevel Monte Carlo (MLMC) to a continuous framework where the level parameter is a continuous variable. This provides a natural framework for sample-wise adaptive refinement strategies, with a goal-oriented error estimator as our new level parameter. We introduce a practical CLMC estimator (and algorithm) and prove a complexity theorem showing the same rate of complexity as for MLMC. We also show that it is possible to make the CLMC estimator unbiased with respect to the true quantity of interest. Finally, we provide two numerical experiments which test the CLMC framework alongside a sample-wise adaptive refinement strategy, showing clear gains over a standard MLMC approach with uniform grid hierarchies.
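The standard MLMC telescoping estimator that CLMC generalises can be sketched in a few lines of numpy. The level-dependent model below is a hypothetical toy (true QoI plus a discretisation bias that halves per level); the essential points are the telescoping sum and the coupling of fine and coarse samples through the same random input.

```python
import numpy as np

def sample_qoi(level, z):
    """Toy level-l approximation of a QoI: true value 1 plus a bias and noise
    contribution that both shrink as 2^(-level). Hypothetical model, used only
    to illustrate the estimator structure."""
    return 1.0 + 2.0 ** (-level) * (1.0 + 0.5 * z)

def mlmc_estimate(n_per_level, rng):
    """Standard MLMC estimator via the telescoping sum
    E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}],
    with each correction term averaged over its own independent samples."""
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        z = rng.standard_normal(n)               # shared random input
        fine = sample_qoi(level, z)
        # Coarse sample uses the SAME z, so the difference has small variance
        coarse = sample_qoi(level - 1, z) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate
```

Because the variance of the level differences decays with the level, most samples can be taken on the cheap coarse levels (decreasing n_per_level), which is the source of the MLMC cost savings; in CLMC the discrete level index is replaced by a continuous, sample-dependent level variable driven by a goal-oriented error estimator.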
In the second part, we extend the sample-adaptive strategy to multilevel stochastic collocation (MLSC) methods, providing a complexity estimate and numerical experiments for an MLSC method that is fully adaptive in the dimension, in the polynomial degrees, and in the spatial discretisation.