Mini Symposium on Mathematical Methods for Data Science

03 March 2023 10:25

Talks

Computational Advancements in Edge-Preserving Methods for Dynamic and Large-Scale Inverse Problems
Dr. Mirjeta Pasha, Dept of Mathematics, Tufts University

Stochastic Edge Preserving Random Tree Besov Priors
Dr. Hanne Kekkonen, Delft Institute of Applied Mathematics, TU Delft

Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data
Dr. Jonas Latz, Dept of Actuarial Mathematics and Statistics, Heriot-Watt University

Low-Rank Tensor Network Kernel Machines for Supervised Learning
Dr. Kim Batselier, Delft Center for Systems and Control, TU Delft

Free registration (needed because of limited space): https://forms.office.com/e/vXDR6HmTMq

For more information, please contact Dr. Elvin Isufi.

Abstracts

Computational Advancements in Edge-Preserving Methods for Dynamic and Large-Scale Inverse Problems

Abstract: Fast-developing fields such as data science, uncertainty quantification, and machine learning rely on fast and accurate methods for inverse problems. Three emerging challenges in obtaining meaningful solutions to large-scale and data-intensive inverse problems are the ill-posedness of the problem, the large dimensionality of the parameters, and the complexity of the model constraints. When tackling the immediate challenges that arise from growing model complexities (spatiotemporal measurements) and data-intensive studies (large-scale and high-dimensional measurements), state-of-the-art methods can easily exceed their limits of applicability. In this talk we discuss recent advancements in edge-preserving methods for computing solutions to dynamic inverse problems, where both the quantities of interest and the forward operator may change at different time instances. In the first part of the talk, to remedy these difficulties, we apply efficient regularization methods that enforce simultaneous regularization in space and time (such as edge enhancement at each time instant and proximity at consecutive time instants) and achieve this with low computational cost and enhanced accuracy. In the remainder of the talk, we focus on designing spatio-temporal Bayesian Besov priors for computing the MAP estimate in large-scale and dynamic inverse problems. Numerical examples from a wide range of applications, such as tomographic reconstruction, image deblurring, and multichannel dynamic tomography, are used to illustrate the effectiveness of the described methods.
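For orientation only, a generic space-time edge-preserving objective of the kind described above (the forward operators A_t, unknowns x_t, data b_t, and weights lambda, mu are assumed notation here, not the speaker's exact formulation) might read

\[
\min_{x_1,\dots,x_T} \ \sum_{t=1}^{T} \|A_t x_t - b_t\|_2^2
\;+\; \lambda \sum_{t=1}^{T} \mathrm{TV}(x_t)
\;+\; \mu \sum_{t=2}^{T} \|x_t - x_{t-1}\|_1,
\]

where the total-variation term promotes sharp edges at each time instant and the temporal difference term promotes proximity between solutions at consecutive time instants.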

Stochastic Edge Preserving Random Tree Besov Priors

Abstract: Gaussian process priors are often used in practice due to their fast computational properties. The smoothness of the resulting estimates, however, is not well suited for modelling functions with sharp changes. We propose a new prior that has the same kind of good edge-preserving properties as total variation or Mumford-Shah regularization, but corresponds to a well-defined infinite-dimensional random variable. This is done by introducing a new random variable T that takes values in the space of ‘trees’, and which is chosen so that the realisations have jumps only on a small set.
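As background, a classical wavelet-series construction of a Besov B^s_{pp} prior on a d-dimensional domain (the notation here is assumed, and the talk's tree-indexed prior modifies this construction) is

\[
u \;=\; \sum_{\ell \ge 0} \sum_{k} 2^{-\ell\left(s + \frac{d}{2} - \frac{d}{p}\right)}\, \xi_{\ell,k}\, \psi_{\ell,k},
\qquad \xi_{\ell,k} \ \text{i.i.d. with density} \ \propto \exp\!\left(-\tfrac{1}{2}|t|^{p}\right),
\]

where the psi_{l,k} are a wavelet basis. Informally, the random tree T described above selects which coefficient indices (l, k) are retained, so that the jumps of the realisations concentrate on a small set.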

Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data

Abstract: Optimisation problems with discrete and continuous data appear in statistical estimation, machine learning, functional data science, robust optimal control, and variational inference. The `full' target function in such optimisation problems is given by the integral of a family of parameterised target functions with respect to a discrete or continuous probability measure. Such problems can often be solved by stochastic optimisation methods: performing optimisation steps with respect to the parameterised target function with randomly switched parameter values. In this talk, we discuss a continuous-time variant of the stochastic gradient descent algorithm. This so-called stochastic gradient process couples a gradient flow minimising a parameterised target function and a continuous-time `index' process which determines the parameter.

We first briefly introduce the stochastic gradient process for finite, discrete data, which uses pure jump index processes. Then, we move on to continuous data. Here, we allow for very general index processes: reflected diffusions, pure jump processes, as well as other Lévy processes on compact spaces. Thus, we study multiple sampling patterns for the continuous data space. We show that the stochastic gradient process can approximate the gradient flow minimising the full target function to any accuracy. Moreover, we give convexity assumptions under which the stochastic gradient process with constant learning rate is geometrically ergodic. In the same setting, we also obtain ergodicity and convergence to the minimiser of the full target function when the learning rate decreases over time sufficiently slowly.
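As a schematic illustration (the notation is assumed here rather than taken from the talk), write the full target as the integral \bar f(\theta) = \int f(\theta; i)\, \pi(\mathrm{d}i) over the data measure pi. The full gradient flow and the stochastic gradient process are then, respectively,

\[
\dot{\theta}(t) = -\nabla_\theta \bar f\big(\theta(t)\big),
\qquad
\dot{\vartheta}(t) = -\nabla_\theta f\big(\vartheta(t);\, I(t)\big),
\]

where (I(t))_{t \ge 0} is the continuous-time index process: a pure jump process in the finite-data case, or, for continuous data, for example a reflected diffusion or another Lévy process on the compact data space.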

We illustrate the applicability of the stochastic gradient process in a simple polynomial regression problem with noisy functional data, as well as in physics-informed neural networks approximating the solution to certain partial differential equations.

Low-Rank Tensor Network Kernel Machines for Supervised Learning

Abstract: In this talk I will present a new kernel machine model that is obtained by imposing a low-rank tensor constraint on the model weights. In this way it becomes possible to learn billions of model weights from data in seconds on a laptop without loss of model performance.
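As a rough sketch of the general idea (the notation and the specific tensor-train format are assumed here, not taken from the talk): for a D-dimensional input x with an M-dimensional feature map phi_d per input variable, such a kernel machine can be written as

\[
f(x) \;=\; \big\langle \mathcal{W},\; \phi_1(x_1) \otimes \phi_2(x_2) \otimes \cdots \otimes \phi_D(x_D) \big\rangle,
\qquad
\mathcal{W}(i_1,\dots,i_D) \;=\; G_1(i_1)\, G_2(i_2) \cdots G_D(i_D),
\]

where the weight tensor \mathcal{W} with M^D entries is constrained to a low-rank tensor network (here a tensor train with cores G_d and ranks R), so that only on the order of D M R^2 parameters need to be learned; this is what makes fitting billions of effective model weights on a laptop feasible.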