
Open Access

Convergence rates for a class of estimators based on Stein’s method

Lookup NU author(s): Professor Chris Oates

Licence

This is the final published version of an article published in its definitive form by the International Statistical Institute in 2019.

For re-use rights please refer to the publisher's terms and conditions.


Abstract

© 2019 ISI/BS. Gradient information on the sampling distribution can be used to reduce the variance of Monte Carlo estimators via Stein’s method. An important application is that of estimating an expectation of a test function along the sample path of a Markov chain, where gradient information enables convergence rate improvement at the cost of a linear system which must be solved. The contribution of this paper is to establish theoretical bounds on convergence rates for a class of estimators based on Stein’s method. Our analysis accounts for (i) the degree of smoothness of the sampling distribution and test function, (ii) the dimension of the state space, and (iii) the case of non-independent samples arising from a Markov chain. These results provide insight into the rapid convergence of gradient-based estimators observed for low-dimensional problems, as well as clarifying a curse-of-dimension that appears inherent to such methods.
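As a rough illustration of the idea described in the abstract (not necessarily the exact estimator analysed in the paper), the sketch below fits a simple first-order Stein control variate: each component of the score, the gradient of the log sampling density, has zero mean under the sampling distribution, so regressing the test function on the score and reading off the intercept gives a variance-reduced estimate of the expectation. The least-squares fit plays the role of the linear system mentioned above. The function name and the Gaussian example are hypothetical, chosen only for illustration.

    import numpy as np

    def stein_cv_estimate(f_vals, scores):
        # f_vals: (n,) evaluations of the test function f at the samples
        # scores: (n, d) evaluations of grad log pi at the samples
        n = f_vals.shape[0]
        # Regress f on the score components plus an intercept. By Stein's
        # identity each score component has zero mean under pi, so the fitted
        # intercept is a variance-reduced estimate of E[f(X)].
        design = np.hstack([np.ones((n, 1)), scores])
        coef, *_ = np.linalg.lstsq(design, f_vals, rcond=None)
        return coef[0]

    # Hypothetical check: target N(0,1), so grad log pi(x) = -x. The test
    # function f(x) = 3x + 2 has true expectation 2, recovered almost exactly
    # because a linear control variate matches f up to a constant.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((1000, 1))
    print(stein_cv_estimate(3 * x[:, 0] + 2, -x))

In this simplified setting the "linear system" is just the normal equations of the regression; the paper's analysis concerns the convergence rate of such gradient-based estimators as a function of smoothness, dimension, and dependence along the Markov chain.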


Publication metadata

Author(s): Oates CJ, Cockayne J, Briol F-X, Girolami M

Publication type: Article

Publication status: Published

Journal: Bernoulli

Year: 2019

Volume: 25

Issue: 2

Pages: 1141-1159

Online publication date: 06/03/2019

Acceptance date: 06/03/2019

Date deposited: 29/04/2019

ISSN (print): 1350-7265

ISSN (electronic): 1573-9759

Publisher: International Statistical Institute

URL: https://doi.org/10.3150/17-BEJ1016

DOI: 10.3150/17-BEJ1016

