Seminar Archives

This is the archive page for our ongoing Seminars in Numerical Analysis series. The historical (pre-2021) archives can be found in the left pane; later archived events are listed below.

19 April 2024, 12:30 to 13:15

[NA] Alena Kopaničáková: Enhancing Training of Deep Neural Networks Using Multilevel and Domain Decomposition Strategies

The training of deep neural networks (DNNs) is traditionally accomplished using stochastic gradient descent or its variants. While these methods have demonstrated a certain robustness and accuracy, their convergence speed deteriorates for large-scale, highly ill-conditioned, and stiff problems, such as those arising in scientific machine learning applications. Consequently, there is growing interest in adopting more sophisticated training strategies that not only accelerate convergence but may also enable parallelism, convergence control, and automatic selection of certain hyper-parameters.
In this talk, we propose to enhance the training of DNNs by leveraging nonlinear multilevel and domain decomposition strategies. We will discuss how to construct a multilevel hierarchy and how to decompose the parameters of the network by exploiting the structure of the DNN architecture, the properties of the loss function, and the characteristics of the dataset. Furthermore, the dependency on a large number of hyper-parameters will be reduced by employing a trust-region globalization strategy. The effectiveness of the proposed training strategies will be demonstrated through a series of numerical experiments from image classification and physics-informed neural networks.
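
To make the flavor of parameter-space decomposition concrete, here is a minimal, self-contained sketch (our own toy construction, not the method of the talk or of the references below): the parameters of a linear least-squares stand-in for a training loss are split into blocks ("subdomains"), each block is updated by an independent local solve with the other blocks frozen, and the local corrections are combined additively with a damping factor.

```python
import numpy as np

# Toy analogue of decomposing a model's parameters into "subdomains" and
# training them with a damped additive (block-Jacobi) iteration. A linear
# least-squares loss stands in for a real training loss so the sketch stays
# self-contained; this is illustrative, not the talk's algorithm.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)

def loss(w):
    r = A @ w - b
    return 0.5 * r @ r

def grad(w):
    return A.T @ (A @ w - b)

w = np.zeros(20)
blocks = np.array_split(np.arange(20), 4)  # four parameter "subdomains"
damping = 1.0 / len(blocks)                # keeps the combined corrections stable

for outer in range(100):
    g = grad(w)
    correction = np.zeros_like(w)
    for blk in blocks:                     # local solves are independent -> parallelizable
        H_loc = A[:, blk].T @ A[:, blk]    # local Hessian with the other blocks frozen
        correction[blk] = -np.linalg.solve(H_loc, g[blk])
    w += damping * correction              # damped additive combination of local corrections

print(f"loss after training: {loss(w):.3e}")
```

In an actual DNN setting the blocks would follow the network architecture (e.g., layers), the local solves would be nonlinear, and the fixed damping would be replaced by the trust-region control mentioned in the abstract.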

References:
[1] A. Kopaničáková, H. Kothari, G. Karniadakis, and R. Krause. Enhancing training of physics-informed neural networks using domain-decomposition-based preconditioning strategies. Under review, 2023.
[2] S. Gratton, A. Kopaničáková, and Ph. Toint. Multilevel Objective-Function-Free Optimization with an Application to Neural Networks Training. SIAM Journal on Optimization (accepted), 2023.
[3] A. Kopaničáková. On the use of hybrid coarse-level models in multilevel minimization methods. Domain Decomposition Methods in Science and Engineering XXVII (accepted), 2023.
[4] A. Kopaničáková and R. Krause. Globally Convergent Multilevel Training of Deep Residual Networks. SIAM Journal on Scientific Computing, 2022.

16 March 2024, 12:30 to 13:15

[NA] Carlos Pérez Arancibia: Fast, high-order numerical evaluation of volume potentials via polynomial density interpolation

This talk outlines a novel class of high-order methods for the efficient numerical evaluation of volume potentials (VPs) defined by volume integrals over complex geometries. Inspired by the Density Interpolation Method (DIM) for boundary integral operators, the proposed methodology leverages Green's third identity and a local polynomial interpolation of the density function to recast a given VP as a linear combination of surface-layer potentials and a volume integral with a regularized (bounded or smoother) integrand. The layer potentials can be accurately and efficiently evaluated inside and outside the integration domain using existing methods (e.g., the DIM), while the regularized volume integral can be accurately evaluated by applying elementary quadrature rules to integrate over structured or unstructured domain decompositions without local numerical treatment at and around the kernel singularity. The proposed methodology is flexible, easy to implement, and fully compatible with well-established fast algorithms such as the Fast Multipole Method and H-matrices, enabling VP evaluations to achieve linearithmic computational complexity. To demonstrate the merits of the proposed methodology, we apply it to the Nyström discretization of the Lippmann-Schwinger volume integral equation for frequency-domain Helmholtz scattering problems in piecewise-smooth variable media.
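
Schematically (our notation, for the Laplace kernel, with the Green's-function convention Δ_y G(x,y) = δ_x(y); signs depend on convention), the recasting reads as follows: with Q a local polynomial interpolant of the density f near the evaluation point x, and P a polynomial solving the exact, purely algebraic equation ΔP = Q, Green's third identity gives

```latex
\begin{align*}
  \mathcal{V}[f](x)
  &= \int_{\Omega} G(x,y)\, f(y)\,\mathrm{d}y \\
  &= \int_{\Omega} G(x,y)\,\bigl(f(y)-Q(y)\bigr)\,\mathrm{d}y
   \;+\; \int_{\partial\Omega} \Bigl( G(x,y)\,\partial_{n(y)} P(y)
       - P(y)\,\partial_{n(y)} G(x,y) \Bigr)\,\mathrm{d}s(y)
   \;+\; \chi_{\Omega}(x)\, P(x).
\end{align*}
% The remaining volume integrand f - Q vanishes to high order at y = x (the
% "regularized (bounded or smoother) integrand" of the abstract), while the
% two boundary terms are standard single- and double-layer potentials.
```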

16 February 2024, 12:30 to 13:15

[NA] Andrea Bressan: Dimension of piecewise polynomials on the Wang-Shi macroelement

In dimension one, the construction of piecewise polynomials with a given degree, a given number of continuous derivatives, and given subdomains is a solved problem. Already in dimension two, the rank of the continuity conditions depends on the geometry of the polygonal subdomains, and thus the space dimension can in general only be computed case by case. This changes if the subdomains are sufficiently structured for the required pair of degree and smoothness. On Cartesian meshes, C^k splines are obtained for any degree d > k by a tensor-product construction, and both the space dimension and its properties follow easily from the univariate case. On any triangulation, C^0 splines can be constructed for every degree d > 0. More generally, C^k splines on a general triangulation can be achieved for degree d >= 3k+2; e.g., for k = 1, d = 5 gives the Argyris element (a local basis requires d >= 4k+1). An alternative is to add structure to the partition by replacing the triangles of a triangulation with macro-elements, i.e., by "splitting" each triangle into subdomains. Examples are C^1 cubics on the Clough–Tocher split and C^2 quintics on the Powell–Sabin 12-split. Recently, C^2 cubic (2022) and C^3 quartic (2024) spaces have been constructed on the Wang–Shi split, where the macro-elements consist of polygons with possibly many vertices. The talk will summarize an "elementary" proof that the dimension of C^(d-1) splines of degree d on the Wang–Shi macro-element can be expressed in purely combinatorial terms.
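
For reference, the solved univariate count alluded to at the start of the abstract is elementary (our summary, not part of the abstract): on n subintervals, a piecewise polynomial of degree d has n(d+1) coefficients, and each of the n-1 interior breakpoints imposes k+1 linearly independent C^k-continuity conditions, so

```latex
\[
  \dim S^k_d \;=\; n(d+1) \;-\; (n-1)(k+1)
            \;=\; (d+1) + (n-1)(d-k), \qquad d > k .
\]
```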

15 December 2023, 12:30 to 13:15

[NA] Stefan Kurz: Observers in relativistic electrodynamics

"We introduce a relativistic splitting structure to map fields and equations of electromagnetism from four-dimensional spacetime to three-dimensional observer's space. We focus on a minimal set of mathematical structures that are directly motivated by the language of the physical theory. Space-time, world-lines, time translation, space platforms, and time synchronization all find their mathematical counterparts. The splitting structure is defined without recourse to coordinates or frames. This is noteworthy since, in much of the prevalent literature, observers are identified with adapted coordinates and frames. Among the benefits of the approach is a concise and insightful classification of observers. The application of the framework to Schiff's ""Question in General Relativity"" [1] further illustrates the advantages of the framework, enabling a compact, yet profound analysis of the problem at hand. 

[1] Schiff, L. I. "A question in general relativity." Proceedings of the National Academy of Sciences 25.7 (1939): 391-395.
Consider two concentric spheres with equal and opposite total charges uniformly distributed over their surfaces. When the spheres are at rest, the electric and magnetic fields outside the spheres vanish. [...] Then an observer traveling in a circular orbit around the spheres should find no field, for since all of the components of the electromagnetic field tensor vanish in one coordinate system, they must vanish in all coordinate systems. On the other hand, the spheres are rotating with respect to this observer, and so he should experience a magnetic field. [...] It is clear in the above arrangement that an observer A at rest with respect to the spheres does not obtain the same results from physical experiments as an observer B who is rotating about the spheres.

16 June 2023, 12:30 to 13:15

[NA] Andrew Gibbs: Evaluating Oscillatory Integrals using Automatic Steepest Descent

Highly oscillatory integrals arise across physical and engineering applications, particularly when modelling wave phenomena. When using standard numerical quadrature rules to evaluate highly oscillatory integrals, one requires a fixed number of points per wavelength to maintain accuracy across all frequencies of interest. Several oscillatory quadrature methods exist, but in contrast to standard quadrature rules (such as Gauss and Clenshaw-Curtis), effective use requires a priori analysis of the integral and, thus, a strong understanding of the method. This makes highly oscillatory quadrature rules inaccessible to non-experts.

A popular approach for evaluating highly oscillatory integrals is "Steepest Descent". The idea behind Steepest Descent methods is to deform the integration range onto complex contours where the integrand is non-oscillatory and exponentially decaying. By Cauchy's Theorem, the value of the integral is unchanged. Practically, this reformulation is advantageous because exponential decay is far more amenable to asymptotic and numerical evaluation. As with other oscillatory quadrature rules, if applied naively, Steepest Descent methods can break down when the phase function contains coalescing stationary points.
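
To make the mechanism concrete, here is a minimal sketch for the textbook model case of a linear phase (no stationary points); this is our illustration, not the algorithm of the talk. Assuming f is entire with moderate growth and w > 1, Cauchy's theorem replaces [0, 1] by one exponentially decaying contour per endpoint, and Gauss-Laguerre quadrature evaluates each contour integral at a cost independent of the frequency:

```python
import numpy as np

# Numerical steepest descent for the model integral
#   I(w) = int_0^1 f(x) e^{iwx} dx,  f entire (here f = cos), w > 1.
# The range [0, 1] is deformed onto the contours x = a + i t/w, t in [0, inf),
# leaving the endpoints a = 0 and a = 1; on these the integrand carries the
# weight e^{-t}, which Gauss-Laguerre quadrature handles directly.

def nsd_linear_phase(f, w, n=30):
    t, wts = np.polynomial.laguerre.laggauss(n)  # rule for int_0^inf g(t) e^{-t} dt
    def endpoint_contribution(a):
        # z = a + i t/w, dz = (i/w) dt; factor e^{iwa} e^{-t} pulled out of e^{iwz}
        return (1j / w) * np.exp(1j * w * a) * np.sum(wts * f(a + 1j * t / w))
    return endpoint_contribution(0.0) - endpoint_contribution(1.0)  # Cauchy

def exact(w):
    # closed form of int_0^1 cos(x) e^{iwx} dx via cos(x) = (e^{ix} + e^{-ix})/2
    return 0.5 * ((np.exp(1j * (w + 1)) - 1) / (1j * (w + 1))
                  + (np.exp(1j * (w - 1)) - 1) / (1j * (w - 1)))

for w in [10.0, 100.0, 1000.0]:
    err = abs(nsd_linear_phase(np.cos, w) - exact(w))
    print(f"w = {w:7.1f}   error = {err:.2e}")  # accuracy does not degrade with w
```

A naive construction like this one breaks down precisely when stationary points of the phase coalesce with each other or with an endpoint, which is the robustness gap the algorithm in this talk addresses.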

In this talk, I will present a new algorithm based on Steepest Descent, which evaluates oscillatory integrals with a cost independent of frequency. The two main novelties are (1) robustness: cost and accuracy are unaffected by coalescing stationary points; and (2) automation: no expertise or a priori analysis is required to use the algorithm.

17 February 2023, 12:30 to 13:30

[NA] Fernando José Henriquez Barraza: Shape Uncertainty Quantification in Acoustic and Electromagnetic Scattering

In this talk, we consider the propagation of acoustic and electromagnetic waves in domains of uncertain shape. We are particularly interested in quantifying the effect of these perturbations on the involved fields and possibly on other quantities of interest. After considering a domain or surface parametrization with countably many parameters, one obtains a high-dimensional parametric map describing the problem's solution manifold. The design and analysis of a variety of methods commonly used in computational uncertainty quantification (UQ for short) that afford provably dimension-independent convergence rates rely on the holomorphic dependence of the problem's solution upon the parametric input. When the parametric input encodes a family of domain or boundary transformations, this holomorphic dependence is usually referred to as shape holomorphy. We present and discuss the key technicalities involved in the verification of this property for several models, including the volume formulation of the Helmholtz problem, boundary integral formulations, volume integral equations, and boundary integral formulations for multiple disjoint arcs. We discuss the importance of this property in the implementation and analysis of different techniques used in forward and inverse computational shape UQ for the previously described models, and its implications for the construction of efficient surrogates using neural networks.
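
As a rough illustration of the parametric setup (a standard construction in the shape-UQ literature; the notation is ours and the details vary by model): a reference domain D_0 is deformed by affine-parametric transformations, and shape holomorphy asserts that the pulled-back solution depends holomorphically on the parameter sequence:

```latex
% Affine-parametric family of domains (illustrative; details vary by model):
\[
  T_{y} \;=\; \mathrm{Id} + \sum_{j\ge 1} y_j\,\psi_j ,
  \qquad D_{y} = T_{y}(D_0),
  \qquad y = (y_j)_{j\ge 1} \in [-1,1]^{\mathbb{N}},
\]
% with \psi_j given vector fields whose norms decay suitably in j.
% Shape holomorphy: the parameter-to-solution map, pulled back to D_0,
%   y \;\mapsto\; \hat{u}_y := u_y \circ T_y ,
% extends holomorphically to complex polyellipses in each variable y_j;
% this is the property that underpins dimension-independent convergence of
% sparse polynomial and quadrature surrogates and of neural-network emulators.
```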