"published"

Approximations of Geometrically Ergodic Reversible Markov Chains

A common tool in the practice of Markov Chain Monte Carlo is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or intractable. A limited set of quantitative tools exists to assess the relative …
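To make the practice concrete, here is a minimal sketch of the setup the abstract describes: running the same reversible Metropolis chain against an exact target and against a cheap surrogate. The densities, names, and step size below are illustrative placeholders, not anything from the paper.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis targeting exp(log_target); reversible by construction."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        proposal = x + step_size * rng.standard_normal()
        # Accept with the usual Metropolis ratio (symmetric proposal).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[t] = x
    return samples

# Exact target: stands in for a log-density that is expensive or intractable to evaluate.
def exact_log_target(x):
    return -0.5 * x**2

# Approximate target: a cheap surrogate, e.g. a perturbed or discretized version.
def approx_log_target(x):
    return -0.5 * x**2 / 1.05

exact_chain = metropolis_hastings(exact_log_target, x0=0.0, n_steps=5000)
approx_chain = metropolis_hastings(approx_log_target, x0=0.0, n_steps=5000)
```

The question the paper addresses is how to quantify the gap between the two chains' behaviour; the snippet only illustrates the two objects being compared.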

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) predictor that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case …
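One way to read the surrogate idea is through the following schematic decomposition (a sketch of the style of argument, not the paper's exact statement; for a randomized surrogate the terms would be averaged over its randomization):

```latex
% R = population risk, \widehat{R}_n = empirical risk on n training points,
% \widehat{h} = learned predictor, \tilde{h} = coupled surrogate predictor.
\[
R(\widehat{h}) - \widehat{R}_n(\widehat{h})
  = \underbrace{\bigl(R(\widehat{h}) - R(\tilde{h})\bigr)}_{\text{closeness of the coupling}}
  + \underbrace{\bigl(R(\tilde{h}) - \widehat{R}_n(\tilde{h})\bigr)}_{\text{generalization of the surrogate}}
  + \underbrace{\bigl(\widehat{R}_n(\tilde{h}) - \widehat{R}_n(\widehat{h})\bigr)}_{\text{empirical risk traded away}} .
\]
```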

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

The information-theoretic framework of Russo and J. Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual information between the algorithm's output and the training sample. …
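For context, the prototypical bound in this framework, in the style of Xu and Raginsky (2017), reads as follows (notation here is schematic and assumes the loss ℓ(w, Z) is σ-subgaussian for every fixed w):

```latex
% S = (Z_1,\dots,Z_n) is the training sample, W the algorithm's output,
% R the population risk and \widehat{R}_n the empirical risk.
\[
\bigl|\, \mathbb{E}\bigl[ R(W) - \widehat{R}_n(W) \bigr] \,\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}} .
\]
```

The paper's conditional-mutual-information bounds sharpen this template by conditioning on a supersample, which keeps the information term finite even when I(W; S) is not.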

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates

In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019). Our main contributions are significantly improved mutual …
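As a reminder of the algorithm class this stepwise analysis targets, here is a minimal sketch of Stochastic Gradient Langevin Dynamics. The loss, step size, inverse temperature, and synthetic data are illustrative placeholders, not the paper's setup.

```python
import numpy as np

def sgld(grad_loss, w0, data, n_steps, step_size=1e-3, beta=1.0,
         batch_size=32, rng=None):
    """SGLD: a gradient step on a mini-batch loss plus isotropic Gaussian noise,
    the prototypical noisy iterative algorithm in these analyses."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        noise = rng.standard_normal(w.shape) * np.sqrt(2.0 * step_size / beta)
        w = w - step_size * grad_loss(w, batch) + noise
    return w

# Illustrative use: least-squares regression on synthetic data (placeholders).
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 5))
y = X @ np.ones(5) + 0.1 * rng.standard_normal(256)
data = np.hstack([X, y[:, None]])

def grad_loss(w, batch):
    Xb, yb = batch[:, :-1], batch[:, -1]
    return Xb.T @ (Xb @ w - yb) / len(yb)

w_hat = sgld(grad_loss, w0=np.zeros(5), data=data, n_steps=2000)
```

The injected Gaussian noise at each step is what makes per-iteration information bounds tractable; the paper's contribution is to tighten those bounds using data-dependent quantities.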