Upcoming events

09 April 2024 16:00 till 17:00

[AN] Ivan Trapasso: Explorations in wave packet analysis

In this talk we provide a concise overview of the fundamental principles underlying harmonic analysis in phase space. The roots of this vibrant field of modern Fourier analysis are to be found at the crossroads of signal analysis, mathematical physics, representation theory and analysis of partial differential equations. The key idea is to exploit a dictionary of oscillating wave packets (or equivalently, the combined structure of translations and modulations or dilations) to investigate properties of functions, distributions and operators in terms of suitable companion phase space representations.

Addressing time and frequency/scale on the same level presents both advantages and challenges due to the uncertainty principle. In essence, time and frequency exhibit a somewhat dual nature as variables, hence efforts to handle them concurrently are ultimately directed at keeping track of the multifaceted manifestations of their entanglement. We will delve into these issues, whose origins date back to the foundations of quantum mechanics, and show how they continue to stimulate insightful research in analysis.

Lastly, we will offer a taste of applications of these techniques to some problems motivated by current challenges in data science, mostly to convey the message that the principles of time-frequency analysis are ubiquitous; adopting a phase space perspective can thus provide a versatile framework for exploring problems from pure and applied mathematics.
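To make the phase space idea concrete (this sketch is illustrative, not part of the talk): the short-time Fourier transform correlates a signal against translated and modulated copies of a single window, i.e., exactly the dictionary of wave packets described above. The chirp signal and window choice below are hypothetical examples.

```python
import numpy as np

def stft(signal, window, hop):
    """Short-time Fourier transform: correlate the signal with
    translated and modulated copies of one window (wave packets)."""
    n = len(window)
    frames = [signal[i:i + n] * window
              for i in range(0, len(signal) - n + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)  # rows: time, cols: frequency

# A linear chirp: instantaneous frequency sweeps from 0 to 250 Hz over 1 s.
fs = 1000
t = np.arange(0, 1, 1 / fs)
chirp = np.cos(2 * np.pi * 125 * t**2)

window = np.hanning(128)
spectrogram = np.abs(stft(chirp, window, hop=32)) ** 2

# The dominant frequency bin drifts upward over time, tracing the
# chirp's trajectory in phase space.
ridge = spectrogram.argmax(axis=1)
```

The uncertainty principle shows up directly in the window choice: a shorter window sharpens time localization but smears the frequency ridge, and vice versa.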

19 April 2024 12:30 till 13:15

[NA] Alena Kopaničáková: Enhancing Training of Deep Neural Networks Using Multilevel and Domain Decomposition Strategies

The training of deep neural networks (DNNs) is traditionally accomplished using stochastic gradient descent or its variants. While these methods have demonstrated a certain robustness and accuracy, their convergence speed deteriorates for large-scale, highly ill-conditioned, and stiff problems, such as those arising in scientific machine learning applications. Consequently, there is growing interest in adopting more sophisticated training strategies that can not only accelerate convergence but may also enable parallelism, convergence control, and automatic selection of certain hyper-parameters.
In this talk, we propose to enhance the training of DNNs by leveraging nonlinear multilevel and domain decomposition strategies. We will discuss how to construct a multilevel hierarchy and how to decompose the parameters of the network by exploring the structure of the DNN architecture, properties of the loss function, and characteristics of the dataset. Furthermore, the dependency on a large number of hyper-parameters will be reduced by employing a trust-region globalization strategy. The effectiveness of the proposed training strategies will be demonstrated through a series of numerical experiments from the field of image classification and physics-informed neural networks.

References:
[1] A. Kopaničáková, H. Kothari, G. Karniadakis and R. Krause. Enhancing training of physics-informed neural networks using domain-decomposition based preconditioning strategies. Under review, 2023.
[2] S. Gratton, A. Kopaničáková, and Ph. Toint. Multilevel Objective-Function-Free Optimization with an Application to Neural Networks Training. SIAM Journal on Optimization (Accepted), 2023.
[3] A. Kopaničáková. On the use of hybrid coarse-level models in multilevel minimization methods. Domain Decomposition Methods in Science and Engineering XXVII (Accepted), 2023.
[4] A. Kopaničáková, and R. Krause. Globally Convergent Multilevel Training of Deep Residual Networks. SIAM Journal on Scientific Computing, 2022.
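A toy sketch of the decomposition idea (a simplified illustration, not the speaker's algorithm, which combines multilevel hierarchies with trust-region globalization): split the network's parameters into blocks by layer and update one block at a time while freezing the others, in the spirit of an alternating (multiplicative Schwarz-type) subdomain sweep. The two-layer regression network below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; each layer's parameters form one "subdomain".
W1 = rng.normal(scale=0.5, size=(8, 1))
W2 = rng.normal(scale=0.5, size=(1, 8))

x = np.linspace(-1, 1, 64).reshape(1, -1)
y = np.sin(np.pi * x)

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)
    return W2 @ h, h

def loss(W1, W2):
    out, _ = forward(W1, W2, x)
    return np.mean((out - y) ** 2)

loss0 = loss(W1, W2)
lr = 0.1
for epoch in range(500):
    # Alternating sweep: update one parameter block while freezing the other.
    out, h = forward(W1, W2, x)
    g_out = 2 * (out - y) / y.size
    W2 -= lr * g_out @ h.T                   # subdomain 1: output layer
    out, h = forward(W1, W2, x)              # recompute with updated W2
    g_out = 2 * (out - y) / y.size
    g_h = W2.T @ g_out
    W1 -= lr * (g_h * (1 - h ** 2)) @ x.T    # subdomain 2: hidden layer

final = loss(W1, W2)
```

Because each block update solves a smaller, better-conditioned subproblem, such decompositions also expose natural parallelism across subdomains in the additive variant.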

27 May 2024 15:45 till 16:45

[STAT/AP] Collin Drent: Condition-Based Production for Stochastically Deteriorating Systems: Optimal Policies and Learning

Production systems used in the manufacturing industry degrade as they produce and may eventually break down, resulting in high maintenance costs at scheduled maintenance moments. This degradation behavior, and hence the system's reliability, depends on the production rate: producing at a higher rate generates more revenue, but may also reduce reliability. Production should thus be controlled dynamically to trade off reliability against revenue accumulation between maintenance moments. We study this dynamic trade-off for (i) systems where the relation between production and degradation is known, as well as (ii) systems where this relation is unknown and must be learned on the fly from condition data.

For systems with a known production-degradation relation, we cast the decision problem as a continuous-time Markov decision process and prove that the optimal policy has intuitive monotonic properties. We also present sufficient conditions for the optimality of bang-bang policies and characterize the structure of the optimal interval between scheduled maintenance moments. For systems with an a priori unknown production-degradation relation, we propose a Bayesian procedure that learns the unknown degradation rate under any production policy from real-time condition data.

Numerical studies indicate that, on average across a wide range of practical settings, (i) condition-based production increases profits by 50% compared to static production, (ii) integrating condition-based production and maintenance interval selection increases profits by 21% compared to a state-of-the-art approach, and (iii) our Bayesian approach performs close to an Oracle policy that knows each system's production-degradation relation, especially in the bang-bang regime.
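The Bayesian learning step can be sketched with a deliberately simple conjugate model (a hypothetical stand-in, not the paper's actual degradation model): suppose each period's damage increment is Poisson with mean equal to the chosen production rate times an unknown rate parameter. A Gamma prior on that parameter then admits a closed-form posterior update from condition-monitoring data, under any sequence of production decisions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: running at production rate u_t in period t accrues
# Poisson(u_t * lam) damage, where the degradation rate lam is unknown.
true_lam = 0.8
periods = 200
rates = rng.uniform(0.5, 1.5, size=periods)    # production decisions
damage = rng.poisson(rates * true_lam)         # observed condition data

# Conjugate Gamma(alpha, beta) prior on lam: after observing increments
# d_t under rates u_t, the posterior is Gamma(alpha + sum d_t, beta + sum u_t).
alpha, beta = 1.0, 1.0
alpha += damage.sum()
beta += rates.sum()

posterior_mean = alpha / beta
posterior_sd = np.sqrt(alpha) / beta
```

The update works for any production policy because each rate u_t simply scales that period's exposure, which is the property exploited when learning the degradation rate on the fly.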