# Past Probability Seminars Spring 2020


*Revision as of 15:15, 27 February 2018*

# Spring 2018

**Thursdays in 901 Van Vleck Hall at 2:25 PM**, unless otherwise noted.
**We usually end for questions at 3:15 PM.**

If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu.

## Thursday, February 1, 2018, Hoi Nguyen, OSU

Title: **A remark on long-range repulsion in spectrum**

Abstract: In this talk we will address a "long-range" type repulsion among the singular values of random iid matrices, as well as among the eigenvalues of random Wigner matrices. We show evidence of repulsion under arbitrary perturbation even in matrices of discrete entry distributions. In many cases our method yields nearly optimal bounds.
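A minimal numerical illustration of the setting (not the speaker's method): sample a Wigner-type symmetric matrix with discrete ±1 entries and inspect the consecutive eigenvalue gaps; repulsion means near-collisions are rare even for discrete entry distributions. The matrix size and seed are arbitrary choices.

```python
import numpy as np

# Symmetric random matrix with discrete +/-1 entries (a Wigner ensemble).
rng = np.random.default_rng(1)
n = 200
M = rng.choice([-1.0, 1.0], size=(n, n))
W = np.triu(M) + np.triu(M, 1).T        # symmetrize the upper triangle

# Eigenvalue repulsion shows up in the spacings between sorted eigenvalues.
eigs = np.sort(np.linalg.eigvalsh(W))
gaps = np.diff(eigs)                    # consecutive spacings
```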

## Thursday, February 8, 2018, Jon Peterson, Purdue

Title: **Quantitative CLTs for random walks in random environments**

Abstract: The classical central limit theorem (CLT) states that for sums of a large number of i.i.d. random variables with finite variance, the distribution of the rescaled sum is approximately Gaussian. However, the statement of the central limit theorem doesn't give any quantitative error estimates for this approximation. Under slightly stronger moment assumptions, quantitative bounds for the CLT are given by the Berry-Esseen estimates. In this talk we will consider similar questions for CLTs for random walks in random environments (RWRE). That is, for certain models of RWRE it is known that the position of the random walk has a Gaussian limiting distribution, and we obtain quantitative error estimates on the rate of convergence to the Gaussian distribution for such RWRE. This talk is based on joint works with Sungwon Ahn and Xiaoqin Guo.
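The classical Berry-Esseen bound in the i.i.d. case can be checked numerically; a small sketch (the distribution, sample sizes, and constant choice are illustrative, not from the talk):

```python
import math
import random

def berry_esseen_bound(rho, sigma, n, C=0.4748):
    # Berry-Esseen: sup_x |P(S_n / (sigma * sqrt(n)) <= x) - Phi(x)|
    #   <= C * rho / (sigma**3 * sqrt(n)),  rho = E|X|^3.
    # C = 0.4748 is a known admissible constant for the i.i.d. case.
    return C * rho / (sigma ** 3 * math.sqrt(n))

def empirical_ks_to_gaussian(n, trials=20000, seed=0):
    # Monte Carlo estimate of the sup-distance between the law of the
    # rescaled sum of n Uniform(-1, 1) variables and the Gaussian CDF.
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / 3.0)   # std dev of Uniform(-1, 1)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    vals = sorted(
        sum(rng.uniform(-1.0, 1.0) for _ in range(n)) / (sigma * math.sqrt(n))
        for _ in range(trials)
    )
    return max(abs((i + 1) / trials - phi(v)) for i, v in enumerate(vals))

n = 100
rho = 0.25                      # E|X|^3 for Uniform(-1, 1)
sigma = math.sqrt(1.0 / 3.0)
emp = empirical_ks_to_gaussian(n)
bound = berry_esseen_bound(rho, sigma, n)
```

For this symmetric distribution the empirical distance sits well inside the Berry-Esseen bound; the talk's point is that analogous quantitative rates are much harder to obtain for RWRE.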

## Friday, February 9, 2018, 4 PM, Van Vleck B239, Wes Pegden, CMU

**This is a probability-related colloquium. Please note the unusual room, day, and time!**

Title: **The fractal nature of the Abelian Sandpile**

Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
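The toppling rule from the abstract is simple to state in code. A minimal sketch on a finite grid (the grid size and number of chips are arbitrary; by the Abelian property, the order of topplings does not affect the final configuration):

```python
import numpy as np

def stabilize(chips):
    # Topple until every site has fewer than 4 chips. Each toppling
    # removes 4 chips from a site and sends one to each of its 4
    # lattice neighbors; chips falling off the boundary are lost.
    chips = chips.copy()
    while True:
        topples = chips // 4          # topplings per site this round
        if not topples.any():
            return chips
        chips -= 4 * topples
        chips[1:, :] += topples[:-1, :]   # chip to the south neighbor
        chips[:-1, :] += topples[1:, :]   # chip to the north neighbor
        chips[:, 1:] += topples[:, :-1]   # chip to the east neighbor
        chips[:, :-1] += topples[:, 1:]   # chip to the west neighbor

# Drop a pile of 1000 chips at the center and stabilize; plotting the
# result already hints at the fractal patterns discussed in the talk.
grid = np.zeros((41, 41), dtype=int)
grid[20, 20] = 1000
final = stabilize(grid)
```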

## Thursday, February 15, 2018, Benedek Valkó, UW-Madison

Title: **Random matrices, operators and analytic functions**

Abstract: Many of the important results of random matrix theory deal with limits of the eigenvalues of certain random matrix ensembles. In this talk I review some recent results on limits of "higher-level objects" related to random matrices: the limits of random matrices viewed as operators and also limits of the corresponding characteristic functions.

Joint with B. Virág (Toronto/Budapest).

## Thursday, February 22, 2018, Garvesh Raskutti, UW-Madison Stats and WID

Title: **Estimation of large-scale time series network models**

Abstract: Estimating networks from multi-variate time series data is an important problem that arises in many applications including computational neuroscience, social network analysis, and many others. Prior approaches either do not scale to multiple time series or rely on very restrictive parametric assumptions in order to guarantee mixing. In this talk, I present two approaches that provide learning guarantees for large-scale multi-variate time series. The first involves a parametric GLM framework in which non-linear clipping and saturation effects guarantee mixing. The second involves a non-parametric sparse additive model framework where beta-mixing conditions are considered. Learning guarantees are provided in both cases, and the theoretical results are supported both by simulations and by performance comparisons on various data examples.
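A hypothetical sketch of the saturation idea (the dimension `p`, matrix `A`, and level `B` are made-up illustrative choices, not the speaker's model): a linear autoregression x_{t+1} = A x_t + noise can be explosive, but clipping the signal part to [-B, B] keeps the chain in a bounded set, the kind of saturation effect the abstract says guarantees mixing.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5                                    # number of series (illustrative)
A = rng.normal(scale=0.6, size=(p, p))   # lagged-influence "network"
B = 10.0                                 # saturation level

def step(x):
    # Saturated autoregression: clip the linear signal, then add noise.
    return np.clip(A @ x, -B, B) + rng.normal(size=p)

x = np.zeros(p)
traj = np.empty((1000, p))
for t in range(1000):
    x = step(x)
    traj[t] = x
# Every coordinate stays within B plus Gaussian noise, no matter how
# unstable the unclipped dynamics x -> A x would be.
```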

## Thursday, March 8, 2018, Elnur Emrah, CMU

## Thursday, March 15, 2018, Wenqing Hu, Missouri S&T

Title: **A random perturbation approach to some stochastic approximation algorithms in optimization**

Abstract: Many large-scale learning problems in modern statistics and machine learning can be reduced to solving stochastic optimization problems, i.e., the search for (local) minimum points of the expectation of an objective random function (loss function). These optimization problems are usually solved by certain stochastic approximation algorithms, which are recursive update rules with random inputs in each iteration. In this talk, we will be considering various types of such stochastic approximation algorithms, including the stochastic gradient descent, the stochastic composite gradient descent, as well as the stochastic heavy-ball method. By introducing approximating diffusion processes to the discrete recursive schemes, we will analyze the convergence of the diffusion limits to these algorithms via delicate techniques in stochastic analysis and asymptotic methods, in particular random perturbations of dynamical systems. This talk is based on a series of joint works with Chris Junchi Li (Princeton), Weijie Su (UPenn) and Haoyi Xiong (Missouri S&T).
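Toy versions (not the speaker's code) of two of the recursive update rules named in the abstract, stochastic gradient descent and the stochastic heavy-ball method, run on a one-dimensional quadratic with noisy gradients; the step size, momentum, and objective are illustrative choices.

```python
import random

def sgd(grad_sample, x0, lr=0.01, steps=2000):
    # Stochastic gradient descent: x_{k+1} = x_k - lr * g_k,
    # where g_k is a noisy estimate of the gradient at x_k.
    x = x0
    for _ in range(steps):
        x -= lr * grad_sample(x)
    return x

def heavy_ball(grad_sample, x0, lr=0.01, momentum=0.9, steps=2000):
    # Stochastic heavy-ball: adds a momentum (velocity) term
    # to each stochastic gradient update.
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad_sample(x)
        x += v
    return x

rng = random.Random(0)
# Noisy gradient of f(x) = (x - 3)^2 / 2: grad f(x) = x - 3, plus noise.
noisy_grad = lambda x: (x - 3.0) + rng.gauss(0.0, 0.5)

x_sgd = sgd(noisy_grad, 0.0)          # hovers near the minimizer x = 3
x_hb = heavy_ball(noisy_grad, 0.0)    # likewise, with larger fluctuations
```

For small step sizes, the iterates of such schemes track a diffusion process around the minimizer, which is the viewpoint the talk develops.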