
Stochastic Variance-Reduced Hamilton Monte Carlo Methods

Difan Zou * 1 Pan Xu * 1 Quanquan Gu 1

Abstract

We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve $\epsilon$ accuracy in 2-Wasserstein distance, our algorithm achieves $\tilde{O}\big(n + \kappa^2 d^{1/2}/\epsilon + \kappa^{4/3} d^{1/3} n^{2/3}/\epsilon^{2/3}\big)$ gradient complexity (i.e., number of component gradient evaluations), which outperforms the state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm to sampling from smooth and general log-concave distributions, and prove the corresponding gradient complexity as well. Experiments on both synthetic and real data demonstrate the superior performance of our algorithm.

1. Introduction

Past decades have witnessed increasing attention to Markov Chain Monte Carlo (MCMC) methods in modern machine learning problems (Andrieu et al., 2003). An important family of MCMC algorithms, the Langevin Monte Carlo method (Neal et al., 2011), is based on Langevin dynamics (Parisi, 1981). Langevin dynamics was originally used to model the dynamics of molecular systems, and can be described by the following Itô stochastic differential equation (SDE) (Øksendal, 2003),
$$\mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2\beta^{-1}}\,\mathrm{d}B_t, \qquad (1.1)$$
where $X_t$ is a $d$-dimensional stochastic process, $t \ge 0$ denotes the time index, $\beta > 0$ is the inverse temperature parameter, and $B_t$ is the standard $d$-dimensional Brownian motion. Under certain assumptions on the drift coefficient $\nabla f$, Chiang et al. (1987) showed that the distribution of $X_t$ in (1.1) converges to its stationary distribution, a.k.a. the Gibbs measure $\pi_\beta \propto \exp(-\beta f(x))$. Note that $\pi_\beta$ is smooth and log-concave (resp. strongly log-concave) if $f$ is smooth and convex (resp. strongly convex). A typical way to sample from the density $\pi_\beta$ is to apply the Euler-Maruyama discretization scheme (Kloeden & Platen, 1992) to (1.1), which yields
$$X_{k+1} = X_k - \eta\nabla f(X_k) + \sqrt{2\eta\beta^{-1}}\cdot\epsilon_k, \qquad (1.2)$$
where $\epsilon_k \sim N(0, I_{d\times d})$ is a standard Gaussian random vector, $I_{d\times d}$ is a $d \times d$ identity matrix, and $\eta > 0$ is the step size. (1.2) is often referred to as the Langevin Monte Carlo (LMC) method. In total variation (TV) distance, LMC has been proved to produce approximate samples from the density $\pi_\beta \propto e^{-\beta f}$ at arbitrary precision (Dalalyan, 2014; Durmus & Moulines, 2016b), with a properly chosen step size. The non-asymptotic convergence of LMC has also been studied in Dalalyan (2017); Dalalyan & Karagulyan (2017); Durmus et al. (2017), which show that the LMC algorithm can achieve $\epsilon$-precision in 2-Wasserstein distance after $\tilde{O}(\kappa^2 d/\epsilon^2)$ iterations if $f$ is $L$-smooth and $\mu$-strongly convex, where $\kappa = L/\mu$ is the condition number.

*Equal contribution. 1Department of Computer Science, University of California, Los Angeles, CA 90095, USA. Correspondence to: Quanquan Gu <[email protected]>.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).
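For concreteness, the discretized update (1.2) is straightforward to implement. The following is a minimal sketch (our illustration, not code from the paper), assuming a user-supplied gradient oracle grad_f of the negative log density:

```python
import numpy as np

def lmc_sample(grad_f, d, eta=0.01, beta=1.0, num_iters=10_000, rng=None):
    """Minimal sketch of the LMC update (1.2).

    grad_f: callable returning the gradient of the negative log density f.
    Returns the final iterate x_K, whose law approximates pi_beta ∝ exp(-beta * f).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(d)
    for _ in range(num_iters):
        noise = rng.standard_normal(d)  # eps_k ~ N(0, I)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * noise
    return x

# Example: sample from a standard Gaussian, for which f(x) = ||x||^2 / 2.
x_K = lmc_sample(grad_f=lambda x: x, d=2, eta=0.01)
```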

In order to accelerate the convergence of Langevin dynamics (1.1) and improve its mixing time to the unique stationary distribution, Hamiltonian dynamics (Duane et al., 1987; Neal et al., 2011) was proposed, which is also known as underdamped Langevin dynamics and is defined by the following system of SDEs
$$\mathrm{d}V_t = -\gamma V_t\,\mathrm{d}t - u\nabla f(X_t)\,\mathrm{d}t + \sqrt{2\gamma u}\,\mathrm{d}B_t, \qquad \mathrm{d}X_t = V_t\,\mathrm{d}t, \qquad (1.3)$$
where $\gamma > 0$ is the friction parameter, $u$ denotes the inverse mass, $X_t, V_t \in \mathbb{R}^d$ are the position and velocity of the continuous-time dynamics respectively, and $B_t$ is the Brownian motion. Let $W_t = (X_t^\top, V_t^\top)^\top$; under mild assumptions on the drift coefficient $\nabla f(x)$, the distribution of $W_t$ converges to a unique invariant distribution $\pi_w \propto e^{-f(x) - \|v\|_2^2/(2u)}$ (Neal et al., 2011), whose marginal distribution on $X_t$, denoted by $\pi$, is proportional to $e^{-f(x)}$. Similar to the numerical approximation of the Langevin dynamics in (1.2), one can also apply the same Euler-Maruyama discretization scheme to the Hamiltonian dynamics in (1.3), which gives rise to the Hamiltonian Monte Carlo (HMC) method

$$v_{k+1} = v_k - \gamma\eta v_k - \eta u\nabla f(x_k) + \sqrt{2\gamma u\eta}\,\epsilon_k, \qquad x_{k+1} = x_k + \eta v_k. \qquad (1.4)$$

(1.4) provides an alternative way to sample from the target distribution $\pi \propto e^{-f(x)}$. While HMC has been observed to outperform LMC in a number of empirical studies (Chen et al., 2014; 2015), there did not exist a non-asymptotic convergence analysis of the HMC method until the very recent work by Cheng et al. (2017).¹ In particular, Cheng et al. (2017) proposed a variant of HMC based on coupling techniques, and showed that it achieves $\epsilon$ sampling accuracy in 2-Wasserstein distance within $\tilde{O}(\kappa^2 d^{1/2}/\epsilon)$ iterations for smooth and strongly convex $f$. This improves upon the convergence rate of LMC by a factor of $\tilde{O}(d^{1/2}/\epsilon)$.

Both LMC and HMC are gradient-based Monte Carlo methods and are effective for sampling from smooth and strongly log-concave distributions. However, they can be slow if the evaluation of the gradient is computationally expensive, especially on large datasets. This motivates using a stochastic gradient instead of the full gradient in LMC and HMC, which gives rise to Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011; Ahn et al., 2012; Durmus & Moulines, 2016b; Dalalyan, 2017) and the Stochastic Gradient Hamilton Monte Carlo (SG-HMC) method (Chen et al., 2014; Ma et al., 2015; Chen et al., 2015) respectively. For smooth and strongly log-concave distributions, Dalalyan & Karagulyan (2017); Dalalyan (2017) proved that the convergence rate of SGLD is $\tilde{O}(\kappa^2 d\sigma^2/\epsilon^2)$, where $\sigma^2$ denotes the upper bound on the variance of the stochastic gradient. Cheng et al. (2017) proposed a variant of SG-HMC and proved that it converges after $\tilde{O}(\kappa^2 d\sigma^2/\epsilon^2)$ iterations. It is worth noting that although using stochastic gradient evaluations reduces the per-iteration cost, it comes at the price that the convergence rates of SGLD and SG-HMC are slower than those of LMC and HMC. Thus, a natural question is:

Does there exist an algorithm that can leverage stochastic gradients, but also achieve a faster rate of convergence?

In this paper, we answer this question affirmatively for the case where the function $f$ can be written as a finite sum of $n$ smooth component functions $f_i$,
$$f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x). \qquad (1.5)$$

It is worth noting that the finite-sum structure is prevalent in machine learning, as the log-likelihood of a dataset (e.g., $f$) is the sum of the log-likelihoods over the individual data points (e.g., $f_i$). We propose a stochastic variance-reduced HMC (SVR-HMC), which incorporates the variance reduction technique into stochastic HMC. Our algorithm is inspired by recent advances in stochastic optimization (Roux et al., 2012; Johnson & Zhang, 2013; Xiao & Zhang, 2014; Defazio et al., 2014; Allen-Zhu & Hazan, 2016; Reddi et al., 2016; Lei & Jordan, 2016; Lei et al., 2017), which use semi-stochastic gradients to accelerate the optimization of finite-sum functions and to improve the runtime complexity of full gradient methods. We also note that the variance reduction technique has already been employed in recent work on SGLD (Dubey et al., 2016; Baker et al., 2017). Nevertheless, it does not yield an improvement in terms of the dependence on the accuracy $\epsilon$.

¹In Cheng et al. (2017), the sampling method in (1.4) is also called the underdamped Langevin MCMC algorithm.

In detail, the proposed SVR-HMC uses a multi-epoch scheme to reduce the variance of the stochastic gradient. At the beginning of each epoch, it computes the full gradient, or an estimate of the full gradient, based on the entire dataset. Within each epoch, it performs semi-stochastic gradient updates and outputs the last iterate as the warm start for the next epoch. Thorough experiments on both synthetic and real data demonstrate the advantage of our proposed algorithm.

Our Contributions The major contributions of our work are highlighted as follows.

• We propose a new algorithm, SVR-HMC, which incorporates the variance reduction technique into HMC. Our algorithm does not require the variance of the stochastic gradient to be bounded. We prove that SVR-HMC has a better gradient complexity than the state-of-the-art LMC and HMC methods for sampling from smooth and strongly log-concave distributions, when the error is measured in 2-Wasserstein distance. In particular, to achieve $\epsilon$ sampling error in 2-Wasserstein distance, our algorithm only needs $\tilde{O}\big(n + \kappa^2 d^{1/2}/\epsilon + \kappa^{4/3} d^{1/3} n^{2/3}/\epsilon^{2/3}\big)$ component gradient evaluations. This improves upon the state-of-the-art result of Cheng et al. (2017), which is $\tilde{O}(n\kappa^2 d^{1/2}/\epsilon)$, in a large regime.

• We generalize the analysis of SVR-HMC to sampling from smooth and general log-concave distributions by adding a diminishing regularizer. We prove that the gradient complexity of SVR-HMC to achieve $\epsilon$-accuracy in 2-Wasserstein distance is $\tilde{O}(n + d^{11/2}/\epsilon^6 + d^{11/3} n^{2/3}/\epsilon^4)$. To the best of our knowledge, this is the first convergence result for LMC-type methods in 2-Wasserstein distance in this setting.

Notation We denote the discrete update by the lower-case symbol $x_k$ and the continuous-time dynamics by the upper-case symbol $X_t$. We denote by $\|x\|_2$ the Euclidean norm of a vector $x \in \mathbb{R}^d$. For a random vector $X_t \in \mathbb{R}^d$ (or $x_k \in \mathbb{R}^d$), we denote its probability distribution function by $P(X_t)$ (or $P(x_k)$). We denote by $\mathbb{E}_u(X)$ the expectation of $X$ under the probability measure $u$. The squared 2-Wasserstein distance between probability measures $u$ and $v$ is
$$\mathcal{W}_2^2(u, v) = \inf_{\zeta \in \Gamma(u,v)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|X_u - X_v\|_2^2 \, \mathrm{d}\zeta(X_u, X_v),$$
where $\Gamma(u, v)$ is the set of all joint distributions with $u$ and $v$ as marginal distributions. We use $a_n = O(b_n)$ to denote $a_n \le C b_n$ for some constant $C > 0$ independent of $n$, and use $a_n = \tilde{O}(b_n)$ to hide logarithmic factors. We write $a_n \lesssim b_n$ ($a_n \gtrsim b_n$) if $a_n$ is less than (larger than) $b_n$ up to a constant. We use $a \wedge b$ to denote $\min\{a, b\}$.
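As a side note for the experiments in Section 5.1, when both measures are Gaussian, the 2-Wasserstein distance has a well-known closed form, $\mathcal{W}_2^2(N(m_1,\Sigma_1), N(m_2,\Sigma_2)) = \|m_1 - m_2\|_2^2 + \mathrm{tr}\big(\Sigma_1 + \Sigma_2 - 2(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2})^{1/2}\big)$. A minimal numerical sketch (our addition, assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    rt = sqrtm(S2)
    cross = np.real(sqrtm(rt @ S1 @ rt))  # discard tiny imaginary round-off
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(w2_sq, 0.0))
```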

2. Related Work

In this section, we briefly review the relevant work in the literature.

Langevin Monte Carlo (LMC) methods (a.k.a. unadjusted Langevin algorithms) and their Metropolis-adjusted versions have been studied in a number of papers (Roberts & Tweedie, 1996; Roberts & Rosenthal, 1998; Stramer & Tweedie, 1999a;b; Jarner & Hansen, 2000; Roberts & Stramer, 2002), and have been proved to attain asymptotic exponential convergence. In the past few years, numerous studies have established the non-asymptotic convergence of LMC methods. Dalalyan (2014) first provided theoretical guarantees for approximate sampling using the Langevin Monte Carlo method for strongly log-concave and smooth distributions, proving an $O(d/\epsilon^2)$ rate for the LMC algorithm with a warm start in total variation (TV) distance. This result was later extended to the Wasserstein metric by Dalalyan & Karagulyan (2017); Durmus & Moulines (2016b), where the same convergence rate in 2-Wasserstein distance holds without the warm start assumption. Recently, Cheng & Bartlett (2017) also proved an $\tilde{O}(d/\epsilon)$ convergence rate of the LMC algorithm in KL-divergence. Stochastic gradient based LMC methods, also known as stochastic gradient Langevin dynamics (SGLD), were originally proposed for Bayesian posterior sampling (Welling & Teh, 2011; Ahn et al., 2012). Dalalyan (2017); Dalalyan & Karagulyan (2017) analyzed the convergence rate of SGLD based on both unbiased and biased stochastic gradients. In particular, they proved that the gradient complexity of unbiased SGLD is $\tilde{O}(\kappa^2 d/\epsilon^2)$, and showed that it may not converge to the target distribution if the stochastic gradient has non-negligible bias. The SGLD algorithm has also been applied to nonconvex optimization. Raginsky et al. (2017) analyzed the non-asymptotic convergence rate of SGLD. Zhang et al. (2017) provided theoretical guarantees for SGLD in terms of the hitting time to first and second-order stationary points. Xu et al. (2017) provided an analysis framework for the global convergence of LMC, SGLD and their variance-reduced variants based on the ergodicity of the discrete-time algorithm.

In order to improve the convergence rates of LMC methods, the Hamiltonian Monte Carlo (HMC) method was proposed by Duane et al. (1987); Neal et al. (2011), which introduces a momentum term into the dynamics. To deal with large datasets, stochastic gradient HMC has been proposed for Bayesian learning (Chen et al., 2014; Ma et al., 2015). Chen et al. (2015) investigated generic stochastic gradient MCMC algorithms with high-order integrators and provided a comprehensive convergence analysis. For strongly log-concave and smooth distributions, a non-asymptotic convergence guarantee was proved by Cheng et al. (2017) for underdamped Langevin MCMC, which is a variant of the stochastic gradient HMC method.

Our proposed algorithm is motivated by the stochastic variance reduced gradient (SVRG) algorithm, which was first proposed in Johnson & Zhang (2013) and later extended to different problem setups (Xiao & Zhang, 2014; Defazio et al., 2014; Reddi et al., 2016; Allen-Zhu & Hazan, 2016; Lei & Jordan, 2016; Lei et al., 2017). Inspired by this line of research, Dubey et al. (2016) applied the variance reduction technique to stochastic gradient Langevin dynamics, and proved a slightly tighter convergence bound than SGLD. Nevertheless, the dependence of the convergence rate on the sampling accuracy $\epsilon$ is not improved. Thus, it remains open whether the variance reduction technique can indeed improve the convergence rate of MCMC methods. Our work answers this question in the affirmative and provides rigorously faster rates of convergence for sampling from log-concave and smooth density functions.

For ease of comparison, we summarize the gradient complexity² in 2-Wasserstein distance of different gradient-based Monte Carlo methods in Table 1. Evidently, for sampling from smooth and strongly log-concave distributions, SVR-HMC outperforms all existing algorithms.

Table 1. Gradient complexity of gradient-based Monte Carlo algorithms in 2-Wasserstein distance for sampling from smooth and strongly log-concave distributions.

Methods | Gradient Complexity
LMC (Dalalyan, 2017) | $\tilde{O}(n\kappa^2 d/\epsilon^2)$
HMC (Cheng et al., 2017) | $\tilde{O}(n\kappa^2 d^{1/2}/\epsilon)$
SGLD (Dalalyan, 2017) | $\tilde{O}(\kappa^2\sigma^2 d/\epsilon^2)$
SG-HMC (Cheng et al., 2017) | $\tilde{O}(\kappa^2\sigma^2 d/\epsilon^2)$
SVR-HMC (this paper) | $\tilde{O}(n + \kappa^2 d^{1/2}/\epsilon + \kappa^{4/3} d^{1/3} n^{2/3}/\epsilon^{2/3})$

²The gradient complexity is defined as the number of stochastic gradient evaluations needed to achieve $\epsilon$ sampling accuracy.


3. The Proposed Algorithm

In this section, we propose a novel HMC algorithm that leverages variance-reduced stochastic gradients to sample from the target distribution $\pi = e^{-f(x)}/Z$, where $Z = \int_{\mathbb{R}^d} e^{-f(x)}\,\mathrm{d}x$ is the partition function.

Recall that the function $f(x)$ has the finite-sum structure in (1.5). When $n$ is large, the full gradient $\frac{1}{n}\sum_{i=1}^n \nabla f_i(x)$ in (1.4) can be expensive to compute. Thus, the stochastic gradient is often used to reduce the per-iteration computational complexity. However, due to the non-diminishing variance of the stochastic gradient, the convergence rate of gradient-based MC methods using stochastic gradients is often no better than that of gradient-based MC methods using the full gradient.

In order to overcome this drawback of the stochastic gradient and achieve a faster rate of convergence, we propose a Stochastic Variance-Reduced Hamiltonian Monte Carlo algorithm (SVR-HMC), which leverages the advantages of both HMC and variance reduction. The algorithm is displayed in Algorithm 1. It proceeds in a multi-epoch fashion. At the beginning of each epoch, it computes the full gradient of $f$ at a snapshot iterate $\tilde{x}_j$. Then, within each epoch, it performs the following updates for the velocity and position variables:
$$v_{k+1} = v_k - \gamma\eta v_k - \eta u g_k + \epsilon_k^v, \qquad x_{k+1} = x_k + \eta v_k + \epsilon_k^x, \qquad (3.1)$$
where $\gamma, \eta, u > 0$ are tuning parameters, and $g_k$ is a semi-stochastic gradient, an unbiased estimator of $\nabla f(x_k)$, defined as
$$g_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x}_j) + \nabla f(\tilde{x}_j), \qquad (3.2)$$
where $i_k$ is uniformly sampled from $\{1, \ldots, n\}$, and $\tilde{x}_j$ is a snapshot of $x_k$ that is only updated every $m$ iterations, with $k = jm + l$ for some $l = 0, \ldots, m-1$. Here $\epsilon_k^v$ and $\epsilon_k^x$ are Gaussian random vectors with zero mean and covariance matrices
$$\mathbb{E}[\epsilon_k^v (\epsilon_k^v)^\top] = u(1 - e^{-2\gamma\eta}) \cdot I_{d\times d},$$
$$\mathbb{E}[\epsilon_k^x (\epsilon_k^x)^\top] = \frac{u}{\gamma^2}\big(2\gamma\eta + 4e^{-\gamma\eta} - e^{-2\gamma\eta} - 3\big) \cdot I_{d\times d},$$
$$\mathbb{E}[\epsilon_k^v (\epsilon_k^x)^\top] = \frac{u}{\gamma}\big(1 - 2e^{-\gamma\eta} + e^{-2\gamma\eta}\big) \cdot I_{d\times d}, \qquad (3.3)$$
where $I_{d\times d}$ is a $d \times d$ identity matrix.
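Since $\epsilon_k^v$ and $\epsilon_k^x$ are correlated, they must be drawn jointly. A minimal sketch of how one could sample them from the $2 \times 2$ per-coordinate covariance implied by (3.3) (our illustration; the function name and interface are ours):

```python
import numpy as np

def hmc_noise(d, gamma, eta, u, rng):
    """Jointly draw the correlated Gaussian noise (eps_v, eps_x) of (3.3).

    Each coordinate pair is an independent sample from the 2x2
    covariance matrix [[Svv, Svx], [Svx, Sxx]] given in (3.3).
    """
    a = np.exp(-gamma * eta)  # a = e^{-gamma eta}, so a^2 = e^{-2 gamma eta}
    svv = u * (1.0 - a**2)
    sxx = (u / gamma**2) * (2.0 * gamma * eta + 4.0 * a - a**2 - 3.0)
    svx = (u / gamma) * (1.0 - 2.0 * a + a**2)
    chol = np.linalg.cholesky(np.array([[svv, svx], [svx, sxx]]))
    z = rng.standard_normal((2, d))
    eps_v, eps_x = chol @ z
    return eps_v, eps_x
```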

The idea of the semi-stochastic gradient has been successfully used in stochastic optimization in machine learning to reduce the variance of the stochastic gradient and obtain faster convergence rates (Johnson & Zhang, 2013; Xiao & Zhang, 2014; Reddi et al., 2016; Allen-Zhu & Hazan, 2016; Lei & Jordan, 2016; Lei et al., 2017). Apart from the semi-stochastic gradient, the second update formula in (3.1) also differs from the direct Euler-Maruyama discretization (1.4) of Hamiltonian dynamics due to the additional Gaussian noise term $\epsilon_k^x$. This additional Gaussian noise term is pivotal in our theoretical analysis for obtaining faster convergence rates than LMC methods. A similar idea was used in Cheng et al. (2017) to prove the faster rate of convergence of HMC (underdamped MCMC) relative to LMC.

Algorithm 1 Stochastic Variance-Reduced Hamiltonian Monte Carlo (SVR-HMC)
1: initialization: $\tilde{x}_0 = 0$, $\tilde{v}_0 = 0$
2: for $j = 0, \ldots, \lceil K/m \rceil$ do
3:   $\tilde{g} = \nabla f(\tilde{x}_j)$
4:   for $l = 0, \ldots, m-1$ do
5:     $k = jm + l$
6:     Uniformly sample $i_k \in [n]$
7:     $g_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x}_j) + \tilde{g}$
8:     $x_{k+1} = x_k + \eta v_k + \epsilon_k^x$
9:     $v_{k+1} = v_k - \gamma\eta v_k - \eta u g_k + \epsilon_k^v$
10:    if $l = m - 1$ then
11:      $\tilde{x}_{j+1} = x_{k+1}$
12:    end if
13:  end for
14: end for
15: output: $x_K$
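Putting the pieces together, a compact sketch of Algorithm 1 might look as follows (our illustration under stated assumptions: the component gradients are supplied as a list of callables grad_fs, and the hmc_noise helper sketched above is reused):

```python
import numpy as np

def svr_hmc(grad_fs, d, K, m, eta, gamma, u, rng=None):
    """A minimal sketch of Algorithm 1 (SVR-HMC).

    grad_fs: list of n callables, grad_fs[i](x) = grad f_i(x).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(grad_fs)
    x_snap = np.zeros(d)                 # snapshot x~_j
    x, v = np.zeros(d), np.zeros(d)
    k = 0
    while k < K:
        g_full = sum(g(x_snap) for g in grad_fs) / n      # full gradient at snapshot
        for _ in range(m):                                # one epoch of length m
            if k >= K:
                break
            i = rng.integers(n)
            g_k = grad_fs[i](x) - grad_fs[i](x_snap) + g_full  # semi-stochastic gradient (3.2)
            eps_v, eps_x = hmc_noise(d, gamma, eta, u, rng)
            x_new = x + eta * v + eps_x                   # position update (3.1)
            v = v - gamma * eta * v - eta * u * g_k + eps_v
            x = x_new
            k += 1
        x_snap = x                                        # snapshot for the next epoch
    return x
```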

4. Main Theory

In this section, we analyze the convergence, in 2-Wasserstein distance, between the distribution of the iterates of Algorithm 1 and the target distribution $\pi \propto e^{-f}$.

Following the recent work of Durmus & Moulines (2016a); Dalalyan & Karagulyan (2017); Dalalyan (2017); Cheng et al. (2017), we use the 2-Wasserstein distance to measure the convergence rate of Algorithm 1, since it directly controls the level of approximation of the first and second order moments (Dalalyan, 2017; Dalalyan & Karagulyan, 2017). It is arguably more suitable for characterizing the quality of approximate sampling algorithms than other distance metrics such as the total variation distance. In addition, while Algorithm 1 updates both the position variable $x_k$ and the velocity variable $v_k$, only the convergence of the position variable $x_k$ is of central interest.

4.1. SVR-HMC for Sampling from Strongly Log-concave Distributions

We first present the convergence rate and gradient complexity of SVR-HMC when $f$ is smooth and strongly convex, i.e., the target distribution $\pi \propto e^{-f}$ is smooth and strongly log-concave. We start with the following formal assumptions on the negative log density function.


Assumption 4.1 (Smoothness). There exists a constant $L > 0$ such that for any $x, y \in \mathbb{R}^d$ and any $i$,
$$\|\nabla f_i(x) - \nabla f_i(y)\|_2 \le L\|x - y\|_2.$$

Under Assumption 4.1, it can be easily verified that the function $f(x)$ is also $L$-smooth.

Assumption 4.2 (Strong Convexity). There exists a constant $\mu > 0$ such that for any $x, y \in \mathbb{R}^d$,
$$f(x) - f(y) \ge \langle\nabla f(y), x - y\rangle + \frac{\mu}{2}\|x - y\|_2^2. \qquad (4.1)$$

Note that the strong convexity assumption is only made on the finite-sum function $f$, not on the individual component functions $f_i$.

Theorem 4.3. Under Assumptions 4.1 and 4.2, let $P(x_K)$ denote the distribution of the last iterate $x_K$, and $\pi \propto e^{-f(x)}$ denote the stationary distribution of (1.3). Set $u = 1/L$, $\gamma = 2$, $x_0 = 0$, $v_0 = 0$ and $\eta = \tilde{O}\big(1/\kappa \wedge 1/(\kappa^{1/3} n^{2/3})\big)$. Assume $\|x^*\|_2 \le R$ for some constant $R > 0$, where $x^* = \mathrm{argmin}_x f(x)$ is the global minimizer of $f(x)$. Then the output of Algorithm 1 satisfies
$$\mathcal{W}_2\big(P(x_K), \pi\big) \le e^{-K\eta/(2\kappa)} w_0 + 4\eta\big(2\sqrt{D_1} + \sqrt{D_2}\big) + 2\sqrt{D_3 m}\,\eta^{3/2}, \qquad (4.2)$$
where $w_0 = \mathcal{W}_2\big(P(x_0), \pi\big)$, $\kappa = L/\mu$ is the condition number, $\eta$ is the step size, and $m$ denotes the epoch (i.e., inner loop) length of Algorithm 1. $D_1$, $D_2$ and $D_3$ are defined as follows,
$$D_1 = \Big(\frac{8\eta^2}{5} + \frac{4}{3}\Big)U_v + \frac{4}{3L}U_f + \frac{16 d\eta}{3L}, \qquad D_2 = 13 U_v + \frac{8 U_f}{L} + \frac{28 d\eta}{L}, \qquad D_3 = U_v + 4ud,$$
in which the parameters $U_v$ and $U_f$ are of order $O(d/\mu)$ and $O(d)$, respectively.

Remark 4.4. The convergence analyses of existing stochastic Langevin Monte Carlo methods (Dalalyan & Karagulyan, 2017; Zhang et al., 2017) and stochastic Hamiltonian Monte Carlo methods (Chen et al., 2014; 2015; Cheng et al., 2017) require a bounded variance of the stochastic gradient, i.e., that the inequality $\mathbb{E}_i[\|\nabla f_i(x) - \nabla f(x)\|_2^2] \le \sigma^2$ holds uniformly for all $x \in \mathbb{R}^d$. In contrast, our analysis does not need this assumption, which implies that our algorithm is applicable to a larger class of target density functions.

In the following corollary, by specifying the step size $\eta$ and the epoch length $m$, we present the gradient complexity of Algorithm 1 in 2-Wasserstein distance.

Corollary 4.5. Under the same conditions as in Theorem 4.3, let $m = n$ and $\eta = O\big(\epsilon/(\kappa d^{1/2}) \wedge \epsilon^{2/3}/(\kappa^{1/3} d^{1/3} n^{2/3})\big)$. Then the output of Algorithm 1 satisfies $\mathcal{W}_2\big(P(x_K), \pi\big) \le \epsilon$ after
$$\tilde{O}\Big(n + \frac{\kappa^2 d^{1/2}}{\epsilon} + \frac{n^{2/3}\kappa^{4/3} d^{1/3}}{\epsilon^{2/3}}\Big) \qquad (4.3)$$
stochastic gradient evaluations.

Remark 4.6. Recall that the gradient complexity of HMC is $\tilde{O}(n\kappa^2 d^{1/2}/\epsilon)$ and the gradient complexity of SG-HMC is $\tilde{O}(\kappa^2 d\sigma^2/\epsilon^2)$, both of which were recently proved in Cheng et al. (2017). It can be seen from Corollary 4.5 that the gradient complexity of our SVR-HMC algorithm has a better dependence on the dimension $d$.

Note that the gradient complexity of SVR-HMC in (4.3) depends on the relationship between the sample size $n$ and the precision parameter $\epsilon$. To make a thorough comparison with existing algorithms, we discuss our result for SVR-HMC in the following three regimes:

• When $n \lesssim \kappa d^{1/4}/\epsilon^{1/2}$, the gradient complexity of our algorithm is dominated by $\tilde{O}(\kappa^2 d^{1/2}/\epsilon)$, which is lower than that of the HMC algorithm by a factor of $\tilde{O}(n)$ and lower than that of the SG-HMC algorithm by a factor of $\tilde{O}(\sigma^2 d^{1/2}/\epsilon)$.

• When $\kappa d^{1/4}/\epsilon^{1/2} \lesssim n \lesssim d\sigma^3/\epsilon^2$, the gradient complexity of our algorithm is dominated by $\tilde{O}(n^{2/3}\kappa^{4/3} d^{1/3}/\epsilon^{2/3})$. This improves upon HMC by a factor of $\tilde{O}(n^{1/3}\kappa^{2/3} d^{1/6}/\epsilon^{1/3})$, and is lower than SG-HMC by a factor of $\tilde{O}(\kappa^{2/3} d^{2/3}\sigma^2 n^{-2/3}/\epsilon^{4/3})$. Plugging the upper bound on $n$ into (4.3) yields $\tilde{O}(\kappa^2 d\sigma^2/\epsilon^2)$ gradient complexity, which still matches that of SG-HMC.

• When $n \gtrsim \kappa^4 d/\epsilon^2$, i.e., the sample size is super large, the gradient complexity of our algorithm is dominated by $\tilde{O}(n)$. It is still lower than that of HMC by a factor of $\tilde{O}(\kappa^2 d^{1/2}/\epsilon)$. Nonetheless, our algorithm has a higher gradient complexity than SG-HMC due to the extremely large sample size. This suggests that SG-HMC (Cheng et al., 2017) is the most suitable algorithm in this regime.

Moreover, from Corollary 4.5 we know that the optimal learning rate of SVR-HMC is of order $O\big(\epsilon^{2/3}/(\kappa^{1/3} d^{1/3} n^{2/3})\big)$, while the optimal learning rate of SG-HMC is of order $O\big(\epsilon^2/(\sigma^2 d)\big)$, which is smaller than the learning rate of SVR-HMC when $n \lesssim d\sigma^3/\epsilon^2$ (Dalalyan, 2017). This observation aligns with the effect of variance reduction in the field of optimization.


4.2. SVR-HMC for Sampling from General Log-concave Distributions

In this section, we extend the analysis of the proposed SVR-HMC algorithm to sampling from distributions which are general log-concave but not strongly log-concave.

In detail, we want to sample from the distribution $\pi \propto e^{-f(x)}$, where $f$ is general convex and $L$-smooth. We follow an idea similar to Dalalyan (2014) and construct a strongly log-concave distribution by adding a quadratic regularizer to the convex and $L$-smooth function $f$, which yields
$$\bar{f}(x) = f(x) + \lambda\|x\|_2^2/2,$$
where $\lambda > 0$ is a regularization parameter. Apparently, $\bar{f}$ is $\lambda$-strongly convex and $(L + \lambda)$-smooth. We can then apply Algorithm 1 to the function $\bar{f}$, which amounts to sampling from the modified target distribution $\bar{\pi} \propto e^{-\bar{f}}$. We obtain a sequence $\{x_k\}_{k=0,\ldots,K}$ whose distribution converges to the unique stationary distribution of the Hamiltonian dynamics (1.3), denoted by $\bar{\pi}$. According to Neal et al. (2011), $\bar{\pi}$ is proportional to $e^{-\bar{f}(x)}$, i.e.,
$$\bar{\pi} \propto \exp\big(-\bar{f}(x)\big) = \exp\Big(-f(x) - \frac{\lambda}{2}\|x\|_2^2\Big).$$
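In practice, running Algorithm 1 on $\bar{f}$ only requires shifting each component gradient by $\lambda x$. A minimal sketch (our illustration; the names are ours):

```python
def regularized_grads(grad_fs, lam):
    """Component gradients of f_bar(x) = f(x) + lam * ||x||^2 / 2.

    Each grad f_bar_i(x) = grad f_i(x) + lam * x, so their average is
    grad f(x) + lam * x, matching the regularized target.
    """
    # The default argument g=g pins each callable at list-build time.
    return [lambda x, g=g: g(x) + lam * x for g in grad_fs]
```

One would then call, e.g., svr_hmc(regularized_grads(grad_fs, lam), ...) with the svr_hmc sketch from Section 3 left unchanged.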

Denote the distribution of $x_k$ by $P(x_k)$. We have
$$\mathcal{W}_2\big(P(x_k), \pi\big) \le \mathcal{W}_2\big(P(x_k), \bar{\pi}\big) + \mathcal{W}_2(\bar{\pi}, \pi). \qquad (4.4)$$
To bound the 2-Wasserstein distance between $P(x_k)$ and the desired distribution $\pi$, we therefore only need to upper bound the 2-Wasserstein distance between the two Gibbs distributions $\bar{\pi}$ and $\pi$. Before presenting our theoretical characterization of this distance, we first lay down the following assumption.

Assumption 4.7. The fourth-order moment of the distribution $\pi \propto e^{-f}$ is upper bounded, i.e., there exists a constant $U$ such that $\mathbb{E}_\pi[\|x\|_2^4] \le U d^2$.

The following theorem spells out the convergence rate of SVR-HMC for sampling from a general log-concave distribution.

Theorem 4.8. Under Assumptions 4.1 and 4.7, in order to sample from a general log-concave density $\pi \propto e^{-f(x)}$, the output of Algorithm 1, when applied to $\bar{f}(x) = f(x) + \lambda\|x\|_2^2/2$, satisfies $\mathcal{W}_2\big(P(x_k), \pi\big) \le \epsilon$ after
$$\tilde{O}\Big(n + \frac{d^{11/2}}{\epsilon^6} + \frac{d^{11/3} n^{2/3}}{\epsilon^4}\Big)$$
gradient evaluations.

Regarding sampling from a smooth and general log-concave distribution, to the best of our knowledge, there is no existing analysis of the convergence of LMC algorithms in 2-Wasserstein distance. Yet convergence analyses of LMC methods in total variation distance (Dalalyan, 2014; Durmus et al., 2017) and KL-divergence (Cheng & Bartlett, 2017) have recently been established. In detail, Dalalyan (2014) proved a convergence rate of $\tilde{O}(d^3/\epsilon^4)$ in total variation distance for LMC with general log-concave distributions, which implies $\tilde{O}(nd^3/\epsilon^4)$ gradient complexity. Durmus et al. (2017) improved the gradient complexity of LMC in total variation distance to $\tilde{O}(nd^5/\epsilon^2)$. Cheng & Bartlett (2017) proved the convergence of LMC in KL-divergence, which attains $\tilde{O}(nd/\epsilon^3)$ gradient complexity. It is worth noting that our convergence rate in 2-Wasserstein distance is not directly comparable to the aforementioned existing results.

5. Experiments

In this section, we compare the proposed algorithm (SVR-HMC) with state-of-the-art MCMC algorithms for Bayesian learning. To compare the convergence rates of the different MCMC algorithms, we conduct experiments on both synthetic and real data.

We compare our algorithm with SGLD (Welling & Teh, 2011), VR-SGLD (Reddi et al., 2016), HMC (Cheng et al., 2017) and SG-HMC (Cheng et al., 2017).

5.1. Simulation Based on Synthetic Data

On synthetic data, we construct each component function as $f_i(x) = (x - a_i)^\top\Sigma(x - a_i)/2$, where $a_i$ is a Gaussian random vector drawn from the distribution $N(2, 4 I_{d\times d})$, and $\Sigma$ is a positive definite symmetric matrix with maximum eigenvalue $L = 3/2$ and minimum eigenvalue $\mu = 2/3$. Note that each random vector $a_i$ defines one component function $f_i(x)$. It can then be observed that the target density
$$\pi \propto \exp\Big(-\frac{1}{n}\sum_{i=1}^n f_i(x)\Big) = \exp\big(-(x - \bar{a})^\top\Sigma(x - \bar{a})/2\big)$$
is a multivariate Gaussian distribution with mean $\bar{a} = \frac{1}{n}\sum_{i=1}^n a_i$ and covariance matrix $\Sigma^{-1}$. Moreover, the negative log density $f(x)$ is $L$-smooth and $\mu$-strongly convex.
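For illustration, the synthetic target and its component gradients could be constructed as follows (our sketch, reusing the svr_hmc function sketched in Section 3; the parameter values mirror Theorem 4.3 with $u = 1/L$ and $\gamma = 2$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 50
# A positive definite Sigma with eigenvalues in [2/3, 3/2] (our construction).
eigs = np.linspace(2.0 / 3.0, 3.0 / 2.0, d)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(eigs) @ Q.T
A = 2.0 + 2.0 * rng.standard_normal((n, d))            # rows a_i ~ N(2, 4I)
grad_fs = [lambda x, a=a: Sigma @ (x - a) for a in A]  # grad f_i(x) = Sigma (x - a_i)
x_K = svr_hmc(grad_fs, d=d, K=2000, m=n, eta=1e-2, gamma=2.0, u=2.0 / 3.0, rng=rng)
```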

In our simulation, we vary the dimension $d$ and the number of component functions $n$, and show the 2-Wasserstein distance between the target distribution $\pi$ and the distribution of the output of each algorithm as a function of the number of data passes. In order to estimate the 2-Wasserstein distance between the distribution of each iterate and the target one, we repeat all algorithms 20,000 times and obtain 20,000 random samples for each algorithm at each iteration. In Figure 1, we present the convergence results for three HMC-based algorithms (HMC, SG-HMC and SVR-HMC). It is evident that SVR-HMC performs the best among these three algorithms when $n$ is not too large, and its performance becomes close to that of SG-HMC as the number of component functions increases. This phenomenon is well-aligned with our theoretical analysis, since the gradient complexity of our algorithm can be worse than that of SG-HMC when the sample size $n$ is extremely large.


[Figure 1 omitted: eight panels, (a) d = 10, n = 50; (b) d = 10, n = 100; (c) d = 10, n = 1000; (d) d = 10, n = 5000; (e) d = 50, n = 50; (f) d = 50, n = 100; (g) d = 50, n = 1000; (h) d = 50, n = 5000.]

Figure 1. Numerical results on synthetic data, comparing the convergence of three HMC-based algorithms in 2-Wasserstein distance. Panels (a)-(h) correspond to different dimensions d and sample sizes n.


5.2. Bayesian Logistic Regression for Classification

Table 2. Summary of datasets for Bayesian classification

Dataset | pima | a3a | gisette | mushroom
n (training) | 384 | 3185 | 6000 | 4062
n (test) | 384 | 29376 | 1000 | 4062
d | 8 | 122 | 5000 | 112

We now apply our algorithm to Bayesian logistic regression. In logistic regression, given $n$ i.i.d. examples $\{a_i, y_i\}_{i=1,\ldots,n}$, where $a_i \in \mathbb{R}^d$ and $y_i \in \{-1, 1\}$ denote the features and binary labels respectively, the probability mass function of $y_i$ given the feature $a_i$ is modeled as $p(y_i|a_i, x) = 1/\big(1 + e^{-y_i x^\top a_i}\big)$, where $x \in \mathbb{R}^d$ is the regression parameter. Considering the prior $p(x) = N(0, \lambda^{-1}I)$, the posterior distribution takes the form
$$p(x|A, Y) \propto p(Y|A, x)\,p(x) = \prod_{i=1}^n p(y_i|a_i, x) \cdot p(x),$$
where $A = [a_1, a_2, \ldots, a_n]^\top$ and $Y = [y_1, y_2, \ldots, y_n]^\top$. The posterior distribution can be written as $p(x|A, Y) \propto e^{-\frac{1}{n}\sum_{i=1}^n f_i(x)}$, where each $f_i(x)$ has the form
$$f_i(x) = n\log\big(1 + \exp(-y_i x^\top a_i)\big) + \frac{\lambda}{2}\|x\|_2^2.$$

We use four binary classification datasets from Libsvm (Chang & Lin, 2011) and the UCI machine learning repository (Lichman, 2013), which are summarized in Table 2. Note that pima and mushroom do not have test data in their original version, and we split them into 50% for training and 50% for testing. Following Welling & Teh (2011); Chen et al. (2014; 2015), we report the sample path average and discard the first 50 iterations as burn-in. It is worth noting that we observe a similar convergence comparison of the different algorithms for a larger burn-in period (= 10^4). We run each algorithm 20 times and report the averaged results for comparison. Note that the variance reduction based algorithms (i.e., VR-SGLD and SVR-HMC) require the first data pass to compute one full gradient. Therefore, in Figure 2, the plots of VR-SGLD and SVR-HMC start from the second data pass, while the plots of SGLD and SG-HMC start from the first data pass. It can be clearly seen that our proposed algorithm converges faster than SGLD and SG-HMC on all datasets, which validates our theoretical analysis of the convergence rate. In addition, although there is no existing non-asymptotic theoretical guarantee for VR-SGLD when the target distribution is strongly log-concave, we can observe from Figure 2 that SVR-HMC also outperforms VR-SGLD on these four datasets, which again demonstrates the superior performance of our algorithm. This clearly shows the advantage of our algorithm for Bayesian learning.


[Figure 2 omitted: four panels, (a) pima, (b) a3a, (c) gisette, (d) mushroom, each comparing SGLD, SG-HMC, VR-SGLD and SVR-HMC.]

Figure 2. Comparison of different algorithms for Bayesian logistic regression, where the y axis shows the negative log-likelihood on the test data and the x axis is the number of data passes. (a)-(d) correspond to the 4 datasets.

[Figure 3 omitted: four panels, (a) geographical, (b) noise, (c) parkinson, (d) toms, each comparing SGLD, SG-HMC, VR-SGLD and SVR-HMC.]

Figure 3. Comparison of different algorithms for Bayesian linear regression, where the y axis is the mean square error on the test data and the x axis is the number of data passes. (a)-(d) correspond to the 4 datasets.

Table 3. Test error of different algorithms for Bayesian classification after 10 entire data passes on 4 datasets

Dataset | pima | a3a | gisette | mushroom
SGLD | 0.2314 ± 0.0044 | 0.1594 ± 0.0018 | 0.0098 ± 0.0009 | (6.647 ± 2.251) × 10^-4
SG-HMC | 0.2306 ± 0.0079 | 0.1591 ± 0.0044 | 0.0096 ± 0.0006 | (5.916 ± 2.734) × 10^-4
VR-SGLD | 0.2299 ± 0.0056 | 0.1572 ± 0.0012 | 0.0105 ± 0.0006 | (7.755 ± 3.231) × 10^-4
SVR-HMC | 0.2289 ± 0.0043 | 0.1570 ± 0.0019 | 0.0093 ± 0.0011 | (6.278 ± 3.149) × 10^-4

Table 4. Summary of datasets for Bayesian linear regression

Dataset | geographical | noise | parkinson | toms
n | 1059 | 1503 | 5875 | 45730
d | 69 | 5 | 21 | 96

5.3. Bayesian Linear Regression

We also apply our algorithm to Bayesian linear regression and compare it with the baseline algorithms. Similar to Bayesian classification, given i.i.d. examples $\{a_i, y_i\}_{i=1,\ldots,n}$ with $y_i \in \mathbb{R}$, the likelihood of Bayesian linear regression is $p(y_i|a_i, x) = N(x^\top a_i, \sigma_a^2)$ and the prior is $N(0, \lambda^{-1}I)$. We use 4 datasets, which are summarized in Table 4. In our experiments, we set $\sigma_a^2 = 1$ and $\lambda = 1$, and normalize the original data. In addition, we split each dataset evenly into training and test data. As before, we compute the sample path average while treating the first 50 iterates as burn-in. We report the mean square errors on the test data for these 4 datasets in Figure 3 for the different algorithms. It is evident that our algorithm is faster than all the other baseline algorithms on all the datasets, which further illustrates the advantage of our algorithm for Bayesian learning.
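By analogy with the classification case, the component functions for this Gaussian likelihood can be taken as $f_i(x) = n(y_i - x^\top a_i)^2/(2\sigma_a^2) + \lambda\|x\|_2^2/2$; the paper does not spell this out, so the $n$-scaling below is our assumption:

```python
import numpy as np

def linreg_grads(A, y, lam, sigma2=1.0):
    """Component gradients for Bayesian linear regression (our sketch).

    Assuming f_i(x) = n * (y_i - x @ a_i)^2 / (2 * sigma2) + lam/2 * ||x||^2,
    so that f = (1/n) * sum_i f_i is the negative log posterior.
    """
    n = len(y)

    def make_grad(a_i, y_i):
        def grad(x):
            return -n * (y_i - x @ a_i) / sigma2 * a_i + lam * x
        return grad

    return [make_grad(A[i], y[i]) for i in range(n)]
```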

6. Conclusions and Future Work

We propose a stochastic variance-reduced Hamilton Monte Carlo (SVR-HMC) method for sampling from a smooth and strongly log-concave distribution. We show that, to achieve $\epsilon$ accuracy in 2-Wasserstein distance, our algorithm enjoys a faster rate of convergence and better gradient complexity than state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm to sampling from smooth and general log-concave distributions. Experiments on both synthetic and real data verify the superior performance of our algorithm. In the future, we will extend our algorithm to non-log-concave distributions and study symplectic integration techniques such as leapfrog integration for Bayesian posterior sampling.


Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments. This research was sponsored in part by the National Science Foundation IIS-1618948, IIS-1652539 and SaTC CNS-1717950. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.

References

Ahn, S., Balan, A. K., and Welling, M. Bayesian posterior sampling via stochastic gradient Fisher scoring. In ICML, 2012.

Allen-Zhu, Z. and Hazan, E. Variance reduction for faster non-convex optimization. In International Conference on Machine Learning, pp. 699-707, 2016.

Andrieu, C., De Freitas, N., Doucet, A., and Jordan, M. I. An introduction to MCMC for machine learning. Machine Learning, 50(1-2):5-43, 2003.

Baker, J., Fearnhead, P., Fox, E. B., and Nemeth, C. Control variates for stochastic gradient MCMC. arXiv preprint arXiv:1706.05439, 2017.

Bakry, D., Gentil, I., and Ledoux, M. Analysis and Geometry of Markov Diffusion Operators, volume 348. Springer Science & Business Media, 2013.

Chang, C.-C. and Lin, C.-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.

Chen, C., Ding, N., and Carin, L. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In Advances in Neural Information Processing Systems, pp. 2278-2286, 2015.

Chen, T., Fox, E., and Guestrin, C. Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning, pp. 1683-1691, 2014.

Cheng, X. and Bartlett, P. Convergence of Langevin MCMC in KL-divergence. arXiv preprint arXiv:1705.09048, 2017.

Cheng, X., Chatterji, N. S., Bartlett, P. L., and Jordan, M. I. Underdamped Langevin MCMC: A non-asymptotic analysis. arXiv preprint arXiv:1707.03663, 2017.

Chiang, T.-S., Hwang, C.-R., and Sheu, S. J. Diffusion for global optimization in R^n. SIAM Journal on Control and Optimization, 25(3):737-753, 1987.

Dalalyan, A. S. Theoretical guarantees for approximate sampling from smooth and log-concave densities. arXiv preprint arXiv:1412.7392, 2014.

Dalalyan, A. S. Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. arXiv preprint arXiv:1704.04752, 2017.

Dalalyan, A. S. and Karagulyan, A. G. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095, 2017.

Defazio, A., Bach, F., and Lacoste-Julien, S. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pp. 1646-1654, 2014.

Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.

Dubey, K. A., Reddi, S. J., Williamson, S. A., Poczos, B., Smola, A. J., and Xing, E. P. Variance reduction in stochastic gradient Langevin dynamics. In Advances in Neural Information Processing Systems, pp. 1154-1162, 2016.

Durmus, A. and Moulines, E. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. 2016a.

Durmus, A. and Moulines, E. Sampling from strongly log-concave distributions with the unadjusted Langevin algorithm. arXiv preprint arXiv:1605.01559, 2016b.

Durmus, A., Moulines, E., et al. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. The Annals of Applied Probability, 27(3):1551-1587, 2017.

Eberle, A., Guillin, A., and Zimmer, R. Couplings and quantitative contraction rates for Langevin dynamics. arXiv preprint arXiv:1703.01617, 2017.

Jarner, S. F. and Hansen, E. Geometric ergodicity of Metropolis algorithms. Stochastic Processes and their Applications, 85(2):341-361, 2000.

Johnson, R. and Zhang, T. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pp. 315-323, 2013.

Kloeden, P. E. and Platen, E. Higher-order implicit strong numerical schemes for stochastic differential equations. Journal of Statistical Physics, 66(1):283-314, 1992.

Lei, L. and Jordan, M. I. Less than a single pass: Stochastically controlled stochastic gradient method. arXiv preprint arXiv:1609.03261, 2016.

Lei, L., Ju, C., Chen, J., and Jordan, M. I. Non-convex finite-sum optimization via SCSG methods. In Advances in Neural Information Processing Systems, pp. 2345-2355, 2017.

Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

Ma, Y.-A., Chen, T., and Fox, E. A complete recipe for stochastic gradient MCMC. In Advances in Neural Information Processing Systems, pp. 2917-2925, 2015.

Neal, R. M. et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2011.

Øksendal, B. Stochastic differential equations. In Stochastic Differential Equations, pp. 65-84. Springer, 2003.

Parisi, G. Correlation functions and computer simulations. Nuclear Physics B, 180(3):378-384, 1981.

Raginsky, M., Rakhlin, A., and Telgarsky, M. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. arXiv preprint arXiv:1702.03849, 2017.

Reddi, S. J., Hefny, A., Sra, S., Poczos, B., and Smola, A. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pp. 314-323, 2016.

Roberts, G. O. and Rosenthal, J. S. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255-268, 1998.

Roberts, G. O. and Stramer, O. Langevin diffusions and Metropolis-Hastings algorithms. Methodology and Computing in Applied Probability, 4(4):337-357, 2002.

Roberts, G. O. and Tweedie, R. L. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pp. 341-363, 1996.

Roux, N. L., Schmidt, M., and Bach, F. R. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pp. 2663-2671, 2012.

Stramer, O. and Tweedie, R. Langevin-type models I: Diffusions with given stationary distributions and their discretizations. Methodology and Computing in Applied Probability, 1(3):283-306, 1999a.

Stramer, O. and Tweedie, R. Langevin-type models II: Self-targeting candidates for MCMC algorithms. Methodology and Computing in Applied Probability, 1(3):307-328, 1999b.

Welling, M. and Teh, Y. W. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681-688, 2011.

Xiao, L. and Zhang, T. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.

Xu, P., Chen, J., Zou, D., and Gu, Q. Global convergence of Langevin dynamics based algorithms for nonconvex optimization. arXiv preprint arXiv:1707.06618, 2017.

Zhang, Y., Liang, P., and Charikar, M. A hitting time analysis of stochastic gradient Langevin dynamics. arXiv preprint arXiv:1702.05575, 2017.