From UW-Math Wiki

ACMS Abstracts: Fall 2021

Jiuhua Hu (TAMU and UW)

Title: Wavelet-based Edge Multiscale Parareal Algorithm for Parabolic Equations with Heterogeneous Coefficients

Abstract: In this talk, I will present the Wavelet-based Edge Multiscale Parareal Algorithm for solving parabolic equations with heterogeneous coefficients. The algorithm combines the strengths of multiscale methods, which handle heterogeneity in the spatial domain effectively, with those of parareal algorithms, which speed up time-evolution problems. We derive the convergence rate of the algorithm and present extensive numerical tests to demonstrate its performance. This is joint work with Guanglian Li (The University of Hong Kong).
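As background, the parareal correction that such algorithms build on can be sketched for a scalar model problem. Everything below (the model equation, the backward-Euler propagators, the parameters) is an illustrative choice of ours, not the speakers' method:

```python
import numpy as np

# Classical parareal iteration for du/dt = -a*u on [0, T], split into N slices.
a, T, N = 2.0, 1.0, 10
dt = T / N

def coarse(u, h):               # cheap propagator G: one backward-Euler step
    return u / (1.0 + a * h)

def fine(u, h, m=20):           # expensive propagator F: m backward-Euler substeps
    for _ in range(m):
        u = u / (1.0 + a * h / m)
    return u

def parareal(u0, iters=5):
    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):          # serial coarse sweep to initialize
        U[n + 1] = coarse(U[n], dt)
    for _ in range(iters):      # parareal corrections (F evaluations parallelize)
        F = np.array([fine(U[n], dt) for n in range(N)])
        G_old = np.array([coarse(U[n], dt) for n in range(N)])
        Unew = np.empty_like(U)
        Unew[0] = u0
        for n in range(N):      # U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n)
            Unew[n + 1] = coarse(Unew[n], dt) + F[n] - G_old[n]
        U = Unew
    return U

U = parareal(1.0)
```

The point of the iteration is that the expensive fine solves within each slice are independent, so they can run in parallel across the time domain, while a few corrections recover the accuracy of the serial fine solve.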

Di Fang (UC-Berkeley)

Title: Time-dependent unbounded Hamiltonian simulation with vector norm scaling

Abstract: Hamiltonian simulation is a basic task in quantum computation. The accuracy of such simulation is usually measured by the error of the unitary evolution operator in the operator norm, which in turn depends on a certain norm of the Hamiltonian. For unbounded operators, after suitable discretization, the norm of the Hamiltonian can be very large, which significantly increases the simulation cost. However, the operator norm measures the worst-case error of the quantum simulation, while practical simulation concerns the error with respect to a given initial vector at hand. We demonstrate that under suitable assumptions on the Hamiltonian and the initial vector, if the error is measured in terms of the vector norm, the computational cost of Trotter-type methods may not increase at all as the norm of the Hamiltonian increases. In this sense, our result outperforms all previous error bounds in the quantum simulation literature. We also clarify the existence and importance of commutator scalings of Trotter and generalized Trotter methods for time-dependent Hamiltonian simulations.
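The gap between the worst-case (operator-norm) error and the error on a particular state can be illustrated numerically. The discretization and parameters below are our own toy construction, not the speaker's setup:

```python
import numpy as np

# H = A + B with A a discretized 1-D Laplacian (large norm after discretization)
# and B a bounded potential; compare the operator-norm Trotter error with the
# error on one smooth initial vector.
n = 64
x = np.linspace(0, 1, n + 2)[1:-1]           # interior grid points
h = x[1] - x[0]
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # -d^2/dx^2, norm ~ 4/h^2
B = np.diag(100.0 * np.cos(2 * np.pi * x))   # bounded potential

def expm_herm(M, t):
    """exp(-i t M) for a Hermitian matrix M via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

t, r = 0.01, 10                               # total time, Trotter steps
U_exact = expm_herm(A + B, t)
step = expm_herm(A, t / r) @ expm_herm(B, t / r)
U_trotter = np.linalg.matrix_power(step, r)

err_op = np.linalg.norm(U_exact - U_trotter, 2)        # worst-case error
psi = np.sin(np.pi * x); psi /= np.linalg.norm(psi)    # smooth initial vector
err_vec = np.linalg.norm((U_exact - U_trotter) @ psi)  # error for this state
```

By construction `err_vec` can never exceed `err_op`, and for smooth, low-energy initial vectors it is typically far smaller, which is the phenomenon the vector-norm analysis makes precise.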

L. Ruby Leung (PNNL)

Title: Mesoscale convective systems in observations and a hierarchy of models

Abstract: Mesoscale convective systems (MCSs) account for 50-70% of warm season precipitation in the central U.S. With high-intensity rainfall covering large areas, MCSs are also responsible for most of the slow-rising and hybrid floods in the U.S. east of the Rocky Mountains. MCSs develop in a range of environments in which frontal systems and the Great Plains low-level jet provide the lifting mechanism and moist environment needed for initiation. During summer, eastward-propagating sub-synoptic perturbations are crucial for MCS initiation under otherwise unfavorable large-scale circulations. MCSs have been producing more intense precipitation and lasting longer over the last 35 years, motivating the need to understand how they may change in the future. However, MCSs are notoriously difficult to simulate: even convection-permitting simulations underestimate MCS number and precipitation in the central U.S., particularly during summer. Using observational data and a hierarchy of models, including a Lagrangian parcel model, regional and global convection-permitting models, global climate models, and a tracer-enabled land surface model, we study MCSs, their large-scale environments, their role in land-atmosphere interactions, and the mechanisms of their response to global warming.

Yariv Aizenbud (Yale)

Title: Non-parametric estimation of manifolds from noisy data

Abstract: A common task in many data-driven applications is to find a low dimensional manifold that describes the data accurately. Estimating a manifold from noisy samples has proven to be a challenging task. Indeed, even after decades of research, there is no (computationally tractable) algorithm that accurately estimates a manifold from noisy samples with a constant level of noise.

In this talk, we will present a method that estimates both a manifold and its tangent spaces in the ambient space. Moreover, we establish rigorous convergence rates, which are essentially as good as existing convergence rates for function estimation.
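To fix ideas, tangent estimation from noisy samples can be sketched with local PCA, a standard baseline (this is our toy illustration, not necessarily the speakers' estimator):

```python
import numpy as np

# Noisy samples from a 1-D manifold (the unit circle) in R^2.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 2000)
X = np.column_stack((np.cos(t), np.sin(t))) + 0.02 * rng.standard_normal((2000, 2))

p = np.array([1.0, 0.0])                         # query point on the manifold
near = X[np.linalg.norm(X - p, axis=1) < 0.2]    # local neighborhood
near = near - near.mean(axis=0)
# Leading right-singular vector of the centered neighborhood = estimated tangent.
_, _, Vt = np.linalg.svd(near, full_matrices=False)
tangent = Vt[0]
# The true tangent direction at (1, 0) is (0, ±1); measure the alignment.
align = abs(tangent @ np.array([0.0, 1.0]))
```

With constant-level noise, the difficulty alluded to in the abstract is that such local estimates are biased by both noise and curvature, which is what a rigorous convergence analysis must control.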

This is a joint work with Barak Sober.

Yuhua Zhu (Stanford)

Title: Fokker-Planck Equations and Machine Learning

Abstract: As the continuous limit of many discretized algorithms, PDEs can provide a qualitative description of an algorithm's behavior and give principled theoretical insight into many mysteries in machine learning. In this talk, I will give a theoretical interpretation of several machine learning algorithms using Fokker-Planck (FP) equations. First, we provide a mathematically rigorous explanation of why resampling outperforms reweighting in correcting biased data when stochastic gradient-type algorithms are used in training. Second, inspired by an interacting particle system whose mean-field limit is a nonlinear FP equation, we develop an efficient gradient-free method that finds the global minimum exponentially fast. Last, we propose a new method to alleviate the double-sampling problem in model-free reinforcement learning, where the FP equation is used to carry out the error analysis for the algorithm.

Enkeleida Lushi (NJIT)

Title: Aligning active particles in confinement

Abstract: I will present a new model for aligning self-propelled particles and discuss their collective motion in non-trivial confined domains. To classify the collective behaviors that arise in a variety of domains, we consider several order parameters. Based on those order parameters, I will discuss non-equilibrium phase diagrams with respect to different geometries and sizes of confinement, the densities of swimmers, and the alignment distance. Lastly, I will compare the simulation results to experiments in active matter systems such as motile colloids, swimming bacteria, and larval zebrafish, and note the qualitative similarities and differences.
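For readers unfamiliar with alignment models, the classical Vicsek-style update (a standard baseline, not the new model of the talk) and the polar order parameter used to quantify alignment can be sketched as follows; all parameters here are illustrative:

```python
import numpy as np

# Vicsek-style dynamics: particles move at constant speed and align their
# headings with neighbors within radius R, subject to angular noise.
rng = np.random.default_rng(1)
N, L, R, v, eta, steps = 100, 10.0, 1.0, 0.3, 0.1, 200

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(steps):
    # pairwise separations with periodic (minimal-image) boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < R ** 2            # includes self
    # align with the mean heading of neighbors, plus uniform angular noise
    mean_sin = np.where(neighbors, np.sin(theta)[None, :], 0.0).sum(1)
    mean_cos = np.where(neighbors, np.cos(theta)[None, :], 0.0).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# polar order parameter: 1 for perfect alignment, near 0 for disorder
phi = np.hypot(np.sin(theta).mean(), np.cos(theta).mean())
```

Phase diagrams like those mentioned in the abstract are then built by scanning such an order parameter over noise, density, interaction distance, and (in the confined setting) the geometry of the domain.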

Russell Hewett (Virginia Tech)

Title: Parallel Primitives for High-performance Deep Learning

Abstract: Training deep neural networks (DNNs) in large-cluster computing environments is increasingly necessary, as networks grow in size and complexity. Local memory and processing limitations require robust data and model parallelism for crossing compute node boundaries. We present a linear-algebraic approach to model parallelism in deep learning, which allows parallel distribution of any tensor in the DNN. Rather than rely on automatic differentiation tools, which do not universally support distributed-memory parallelism models, we use the fact that data movement operations on a computer's memory are linear to build a suite of parallel primitives, e.g., broadcast, sum-reduce, and halo exchange, which are themselves linear operators. Thus, we can derive the adjoint operators required for gradient-based training of DNNs. We build distributed DNN layers using these parallel primitives, composed with sequential layer implementations, and demonstrate their application by building and training extremely large DNNs on CPU and hybrid CPU-GPU clusters using DistDL, a PyTorch and MPI-based distributed deep learning toolkit.
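The key linear-algebraic fact can be checked in a few lines: broadcast and sum-reduce are adjoints of one another, which is what makes the backward pass of a distributed layer derivable by hand. This is our own serial toy with numpy arrays standing in for workers, not DistDL's API:

```python
import numpy as np

P, n = 4, 8                 # number of "workers", tensor size
rng = np.random.default_rng(0)

def broadcast(x):           # copy x to every worker:  R^n -> R^{P x n}
    return np.tile(x, (P, 1))

def sum_reduce(Y):          # sum the worker copies:   R^{P x n} -> R^n
    return Y.sum(axis=0)

x = rng.standard_normal(n)
Y = rng.standard_normal((P, n))
# Adjoint identity <broadcast(x), Y> = <x, sum_reduce(Y)>:
lhs = np.sum(broadcast(x) * Y)
rhs = np.dot(x, sum_reduce(Y))
# lhs equals rhs up to roundoff, so the gradient of a broadcast is a
# sum-reduce, and vice versa.
```

The same adjoint reasoning extends to halo exchange and the other data movement primitives, giving the backward operators needed for gradient-based training without relying on an autodiff tool that understands distributed memory.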

Alex Chamolly (ENS Paris)

Title: Mechanical self-organisation in avian embryos

Abstract: Traditionally, biologists think of genes and proteins as the deciding factors in the development of organisms. However, it turns out that mechanical forces can be just as important. In this talk I will present a mathematical model that predicts how fluid mechanics controls the formation of a quail embryo. What’s more, in analogy with a Turing system, we can predict how development self-regulates when the embryo is perturbed, leading to multiple embryos forming, or none at all. The model is supported by experiments, which show that mechanics directly affect the expression of genes.

Dominic Vella (Oxford)

Title: Buffering by Buckling: New wrinkles on Gauss’ Pizza Theorem

Abstract: The deformations of thin elastic objects are familiar from everyday life, from a piece of paper or clothes crumpling, to vibrations of bridges and other structures. A guiding principle of such deformations is that the material chooses to bend, rather than change its length, whenever possible. I will describe some of the consequences of this desire to avoid stretching and Gauss’ Remarkable Theorem, which include everyday examples like the geometrical rigidification of pizza slices by curving the crust. However, I will show that these consequences can be subverted by buckling instabilities such as wrinkling. Finally, I will talk about how fast these deformations happen.

Mohammad Farazmand (NCSU)

Title: RONS: Reduced-order nonlinear solutions for PDEs with conserved quantities

Abstract: Reduced-order models of time-dependent partial differential equations (PDEs) where the solution is assumed to be a linear combination of prescribed modes are rooted in a well-developed theory. However, more general models where the reduced solutions depend nonlinearly on time-dependent variables have thus far been derived in an ad hoc manner. I introduce Reduced-order Nonlinear Solutions (RONS): a unified framework for deriving reduced-order models that depend nonlinearly on a set of time-dependent variables. The set of all possible reduced-order solutions is viewed as a manifold immersed in the function space of the PDE. The variables are evolved such that the instantaneous discrepancy between the reduced dynamics and the full PDE dynamics is minimized. This results in a set of explicit ordinary differential equations on the tangent bundle of the manifold. In the special case of linear parameter dependence, our reduced equations coincide with the standard Galerkin projection. Furthermore, any number of conserved quantities of the PDE can readily be enforced in our framework. Since RONS does not assume an underlying variational formulation for the PDE, it is applicable to a broad class of problems. I demonstrate its application on a few examples including the nonlinear Schrödinger equation and Euler's equation for ideal fluids.
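In outline (our paraphrase, with notation that may differ from the speaker's): writing the reduced solution as $\hat u(x; q(t))$ for a PDE $u_t = F(u)$, minimizing the instantaneous residual over $\dot q$ yields

```latex
% Reduced solution \hat u(x; q(t)), parameters q = (q_1, \dots, q_n);
% minimizing \| \partial_t \hat u - F(\hat u) \| over \dot q gives
M(q)\,\dot q = f(q), \qquad
M_{ij} = \left\langle \frac{\partial \hat u}{\partial q_i},
                      \frac{\partial \hat u}{\partial q_j} \right\rangle, \qquad
f_i = \left\langle \frac{\partial \hat u}{\partial q_i}, F(\hat u) \right\rangle,
% with \langle \cdot, \cdot \rangle the function-space inner product;
% conserved quantities of the PDE are enforced via Lagrange multipliers.
```

When $\hat u$ depends linearly on $q$, the partial derivatives are fixed modes and this system reduces to the standard Galerkin projection mentioned in the abstract.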

Bamdad Hosseini (UW-Seattle)

Title: Solving and Learning Nonlinear PDEs with Gaussian Processes

Abstract: In this talk I present a simple, rigorous, and interpretable framework for the solution of nonlinear PDEs based on Gaussian processes. The proposed approach provides a natural generalization of kernel methods to nonlinear PDEs; has guaranteed convergence; and inherits the state-of-the-art computational complexity of linear solvers for dense kernel matrices. I will outline our approach by focusing on an example nonlinear elliptic PDE, followed by further numerical examples. I will also briefly comment on extending our approach to solving inverse problems.
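The kernel methods being generalized can be illustrated in their linear special case: symmetric kernel collocation for a two-point boundary value problem. Everything below (the operator, kernel, length scale, and manufactured solution) is our own hedged sketch, not the speaker's method:

```python
import numpy as np

# Solve -u'' + u = f on (0,1) with u(0) = u(1) = 0 by symmetric collocation
# with the Gaussian kernel k(x,y) = exp(-(x-y)^2 / (2 ell^2)).
ell = 0.15
s = 1.0 / ell**2

def k0(r):  # kernel as a function of r = x - y
    return np.exp(-0.5 * s * r**2)

def k2(r):  # d^2 k0 / dr^2
    return k0(r) * (s**2 * r**2 - s)

def k4(r):  # d^4 k0 / dr^4
    return k0(r) * (s**4 * r**4 - 6 * s**3 * r**2 + 3 * s**2)

g1 = lambda r: -k2(r) + k0(r)              # L = -d^2/dx^2 + 1 on one argument
g2 = lambda r: k4(r) - 2 * k2(r) + k0(r)   # L applied to both arguments

m = 12
xi = np.linspace(0, 1, m + 2)[1:-1]            # interior collocation points
f = (1 + np.pi**2) * np.sin(np.pi * xi)        # manufactured f for u* = sin(pi x)
b = np.array([0.0, 1.0])                       # boundary points

# PDE rows at interior points, boundary rows at x = 0 and x = 1.
A = np.empty((m + 2, m + 2))
A[:m, :m] = g2(xi[:, None] - xi[None, :])
A[:m, m:] = g1(xi[:, None] - b[None, :])
A[m:, :m] = g1(b[:, None] - xi[None, :])
A[m:, m:] = k0(b[:, None] - b[None, :])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

def u(x):   # reconstructed solution from the kernel representation
    x = np.atleast_1d(x)
    return (g1(x[:, None] - xi[None, :]) @ c[:m]
            + k0(x[:, None] - b[None, :]) @ c[m:])

xs = np.linspace(0, 1, 101)
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))   # vs. exact u* = sin(pi x)
```

For a nonlinear PDE the collocation conditions become nonlinear in the unknowns, and the framework of the talk handles that case with guarantees; the linear solve above is the base case the generalization builds on.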