

BIVAS: A scalable Bayesian method for bi-level variable selection with applications

Mingxuan Cai, Department of Mathematics, Hong Kong Baptist University
Mingwei Dai, School of Mathematics and Statistics, Xi'an Jiaotong University
Jingsi Ming, Department of Mathematics, Hong Kong Baptist University
Heng Peng, Department of Mathematics, Hong Kong Baptist University
Jin Liu, Centre for Quantitative Medicine, Duke-NUS Medical School
Can Yang, Department of Mathematics, The Hong Kong University of Science and Technology

March 29, 2018

Abstract

In this paper, we consider a Bayesian bi-level variable selection problem in high-dimensional regressions. In many practical situations, it is natural to assign group membership to each predictor. For example, genetic variants can be grouped at the gene level, and a covariate measured across different tasks naturally forms a group. It is therefore of interest to select important groups as well as important members within those groups. Existing Markov chain Monte Carlo (MCMC) methods are often computationally intensive and not scalable to large data sets. To address this problem, we consider variational inference for bi-level variable selection (BIVAS). In contrast to the commonly used mean-field approximation, we propose a hierarchical factorization to approximate the posterior distribution, utilizing the structure of bi-level variable selection. Moreover, we develop a computationally efficient and fully parallelizable algorithm based on this variational approximation. We further extend the developed method to model data sets from multi-task learning. Comprehensive numerical results from both simulation studies and real data analysis demonstrate the advantages of BIVAS for variable selection, parameter estimation and computational efficiency over existing methods. The method is implemented in the R package 'bivas', available at https://github.com/mxcai/bivas.

Keywords: Bayesian variable selection; Variational inference; Group sparsity; Parallel computing.


1 Introduction

Variable selection plays an important role in modern data analysis with the ever-increasing number of variables, where it is often assumed that only a small proportion of variables are relevant to the response variable (Hastie et al., 2015). In many real applications, the sparsity pattern can be more complicated. In this paper, we consider a class of regression problems in which a grouping structure of the variables naturally exists. Examples include, but are not limited to, categorical predictors that are often represented by a group of indicators and continuous predictors that can be expressed by a group of basis functions. We assume that only a proportion of the groups are relevant to the response variable and that, within each relevant group, only a subset of variables is relevant. Hence we consider a bi-level variable selection problem, i.e., variable selection at both the individual and group levels (Breheny and Huang, 2009).

There is a rich literature on variable selection (Tibshirani, 1996; Fan and Li, 2001; Zhang, 2010; Yuan and Lin, 2006), but the majority of it focuses on variable selection at the individual level, including penalized methods, such as the Lasso (Tibshirani, 1996), SCAD (Fan and Li, 2001) and MCP (Zhang, 2010), and Bayesian variable selection methods based on sparsity-promoting priors, such as Laplace-like priors (Figueiredo, 2003; Bae and Mallick, 2004; Yuan and Lin, 2005; Park and Casella, 2008) and spike-slab priors (Mitchell and Beauchamp, 1988; George and McCulloch, 1993; Madigan and Raftery, 1994; George and McCulloch, 1997). To perform variable selection at the group level, the group Lasso (Yuan and Lin, 2006) introduced the L1-L2 norm penalty, which ties variables together through the L2 norm and performs group selection through the L1 norm. CAP (Zhao et al., 2009) generalized this idea to the L1-Lγ norm, where γ ∈ [1, +∞). Under the Bayesian framework, group selection is achieved by specifying the prior over a whole group of variables (Kyung et al., 2010; Raman et al., 2009; Xu et al., 2015).

Group variable selection methods usually act like the Lasso at the group level, selecting variables in an 'all-in or all-out' manner. However, these methods do not yield sparsity within a group: if a group is selected, all variables within that group will be non-zero. To conduct variable selection at both the individual and group levels, various methods have been proposed for bi-level selection from different perspectives, including both penalized and Bayesian methods. Penalized methods often consider a composition of two penalties. The group bridge (Huang et al., 2009) adopts a bridge penalty at the group level and the L1 penalty at the variable level; the hierarchical Lasso (Zhou and Zhu, 2010) can be viewed as a special case of the group bridge with the bridge index fixed at 0.5. Under certain regularity conditions, the global group bridge solution is proved to be group selection consistent. However, the singularity of these penalties at zero potentially complicates the optimization in practice. The composite MCP (cMCP) (Breheny and Huang, 2009) and the group exponential Lasso (GEL) (Breheny, 2015) apply their penalties at both levels in a manner that penalizes less as the absolute value of a coefficient becomes larger. On the other hand, Bayesian methods usually assume a spike-slab prior on variables at both the individual and group levels to promote bi-level sparsity. Despite the convenience of using Bayesian methods to depict the hierarchical structure among variables, the resulting posteriors are usually intractable. Hence, the current literature mainly relies on sampling methods to approximate the posterior distribution, such as Markov chain Monte Carlo (MCMC) (Xu et al., 2015; Chen et al., 2016). The computational cost of these methods becomes very expensive in the presence of a large number of variables.

In this paper, we propose a scalable Bayesian method for bi-level variable selection (BIVAS). Instead of using MCMC, we adopt variational inference, which greatly reduces the computational cost and makes our algorithm scalable. In contrast to the standard mean-field variational approximation, we propose a hierarchically factorizable approximation that makes use of the special structure of bi-level variable selection. A computationally efficient variational expectation-maximization (EM) algorithm is developed to handle large data sets. Moreover, we extend our approach to handle a class of multi-task learning problems. We further use comprehensive simulation studies to demonstrate that BIVAS can significantly outperform its alternatives in terms of variable selection, prediction accuracy and computational efficiency.

The remainder of this article is organized as follows. In Section 2, we describe the model settings and algorithms; in particular, we show the rationale behind the improvements in computational efficiency, and we discuss how to extend our approach to multi-task learning. In Section 3, we evaluate the performance of BIVAS through comprehensive simulation studies, paying special attention to cases where the variational assumptions are violated; the experimental results show that BIVAS stably outperforms its alternatives in various settings. We then apply BIVAS to three real data examples. We conclude the paper with a short discussion in Section 4.

2 Methods

2.1 Regression with BIVAS

2.1.1 Model Setting

Suppose we have collected a dataset {y, Z, X} with sample size n, where y ∈ R^n is the vector of the response variable, Z ∈ R^{n×r} is the design matrix of r columns comprising an intercept and a few covariates (r < n), and X ∈ R^{n×p} is the design matrix of p predictors. In addition, each of the p variables in X is labeled with one of K known groups, where the number of variables in group k is denoted by l_k and $\sum_{k=1}^K l_k = p$. We consider the following linear model that links y to Z and X:
$$\mathbf{y} = \mathbf{Z}\boldsymbol{\omega} + \mathbf{X}\boldsymbol{\beta} + \mathbf{e}, \qquad (1)$$

where ω ∈ R^r is a vector of fixed effects, β ∈ R^p is a vector of random effects, and e ∈ R^n is a vector of independent noise. We assume e ∼ N(0, σ²_e I_n), where I_n is the n-by-n identity matrix. Under this model, bi-level selection aims to identify the non-zero entries of β at both the group and individual-variable levels. For this reason, we introduce two binary variables: η_k indicates whether group k is active (η_k = 1) or not (η_k = 0), and γ_jk indicates whether the j-th variable in group k is zero (γ_jk = 0) or not (γ_jk = 1). Hence, we introduce a bi-level spike-slab prior on β:
$$\beta_{jk}\mid\eta_k,\gamma_{jk};\sigma^2_\beta \sim \begin{cases} \mathcal{N}(\beta_{jk}\mid 0,\sigma^2_\beta) & \text{if } \eta_k=1,\ \gamma_{jk}=1,\\ \delta_0(\beta_{jk}) & \text{otherwise,}\end{cases} \qquad (2)$$

where N(β_jk|0, σ²_β) denotes the Gaussian distribution with mean 0 and variance σ²_β, and δ₀(β_jk) denotes a Dirac delta function at zero. This bi-level structure means that β_jk is drawn from N(0, σ²_β) if and only if both the k-th group and its j-th variable are included in the model. Let Pr(η_k = 1) = π and Pr(γ_jk = 1) = α be the prior inclusion probabilities of groups and variables, respectively.

The presence of the Dirac function may introduce additional trouble in the algorithm derivation. To remove it, we re-parameterize the model as follows:
$$\beta_{jk}\mid\sigma^2_\beta \sim \mathcal{N}(0,\sigma^2_\beta), \qquad \gamma_{jk}\mid\alpha \sim \alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}}, \qquad \eta_k\mid\pi \sim \pi^{\eta_k}(1-\pi)^{1-\eta_k}. \qquad (3)$$
Consequently, the prior of β_jk no longer depends on γ_jk and η_k, and the product η_k γ_jk β_jk forms a new random variable distributed exactly as in (2); marginally, since Pr(η_k γ_jk = 1) = πα, the product follows the mixture πα N(0, σ²_β) + (1 − πα) δ₀. We use the re-parameterized version throughout the paper.
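To make the generative process concrete, the following R sketch simulates data from model (1) under the re-parameterized prior (3). All variable names here are hypothetical choices for illustration; this is not code from the 'bivas' package.

```r
set.seed(1)
n <- 500; K <- 50; lk <- 10; p <- K * lk
pi0 <- 0.2; alpha0 <- 0.3              # group- and variable-level inclusion probabilities
sigma2_beta <- 1; sigma2_e <- 1

X <- matrix(rnorm(n * p), n, p)
Z <- cbind(1, rnorm(n))                # intercept plus one covariate
omega <- c(1, 0.5)

eta   <- rbinom(K, 1, pi0)                   # group indicators eta_k
gamma <- rbinom(p, 1, alpha0)                # variable indicators gamma_jk
beta  <- rnorm(p, 0, sqrt(sigma2_beta))      # slab coefficients beta_jk
group <- rep(1:K, each = lk)                 # group label of each of the p variables
b     <- eta[group] * gamma * beta           # effective coefficients eta_k * gamma_jk * beta_jk

y <- Z %*% omega + X %*% b + rnorm(n, 0, sqrt(sigma2_e))
```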

Let θ = {α, π, σ²_β, σ²_e, ω} be the collection of model parameters and {β, γ, η} the set of latent variables. The joint probabilistic model is
$$\begin{aligned}
\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta}) &= \Pr(\mathbf{y}\mid\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta},\mathbf{X},\mathbf{Z};\boldsymbol{\theta})\,\Pr(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\boldsymbol{\theta})\\
&= \mathcal{N}\Big(\mathbf{y}\,\Big|\,\mathbf{Z}\boldsymbol{\omega}+\sum_{k=1}^{K}\sum_{j=1}^{l_k}\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk},\ \sigma^2_e\mathbf{I}_n\Big)\prod_{k=1}^{K}\pi^{\eta_k}(1-\pi)^{1-\eta_k}\prod_{j=1}^{l_k}\mathcal{N}(\beta_{jk}\mid 0,\sigma^2_\beta)\,\alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}},
\end{aligned} \qquad (4)$$

where x_jk is the column of X corresponding to the j-th variable in the k-th group. The goal is to obtain the estimate θ̂ of θ by optimizing the marginal likelihood
$$\log\Pr(\mathbf{y}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta}) = \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int_{\boldsymbol{\beta}}\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta})\,d\boldsymbol{\beta}, \qquad (5)$$
and to evaluate the posterior
$$\Pr(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{y},\mathbf{X},\mathbf{Z};\hat{\boldsymbol{\theta}}) = \frac{\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\hat{\boldsymbol{\theta}})}{\Pr(\mathbf{y}\mid\mathbf{X},\mathbf{Z};\hat{\boldsymbol{\theta}})}. \qquad (6)$$

2.1.2 Algorithm

Conventionally, a model involving latent variables is solved by the expectation-maximization (EM) algorithm. However, the standard EM algorithm cannot be applied here because the combinatorial nature of γ and η makes the E-step intractable. Instead, we propose a variational EM algorithm via approximate Bayesian inference (Bishop, 2006).

To apply variational approximation, we first define q(η, γ, β) as an approximation to the posterior Pr(η, γ, β|y, X, Z; θ). We can then obtain a lower bound of the log-marginal likelihood by Jensen's inequality:
$$\begin{aligned}
\log \Pr(\mathbf{y}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta}) &= \log\sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int_{\boldsymbol{\beta}}\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta})\,d\boldsymbol{\beta}\\
&\geq \sum_{\boldsymbol{\gamma}}\sum_{\boldsymbol{\eta}}\int_{\boldsymbol{\beta}} q(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta})\log\frac{\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta})}{q(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta})}\,d\boldsymbol{\beta}\\
&= \mathbb{E}_q\big[\log\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta}) - \log q(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta})\big]\\
&\equiv \mathcal{L}(q),
\end{aligned} \qquad (7)$$

where the equality holds if and only if q(η, γ, β) = Pr(η, γ, β|y, X, Z; θ), since the gap log Pr(y|X, Z; θ) − L(q) equals the Kullback-Leibler divergence from q to the exact posterior, which is non-negative. We can therefore iteratively maximize L(q) instead of working with the marginal likelihood directly. Conventionally, q is assumed to be fully factorizable based on the mean-field theory (Bishop, 2006). As there is a hierarchical structure between the group level and the variable level, here we propose a novel variational distribution to accommodate bi-level variable selection. Specifically, we consider the following hierarchically structured distribution as an approximation to the posterior Pr(η, γ, β|y, X, Z):
$$q(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}) = \prod_{k=1}^{K}\Big(q(\eta_k)\prod_{j=1}^{l_k} q(\beta_{jk}\mid\eta_k,\gamma_{jk})\,q(\gamma_{jk})\Big). \qquad (8)$$

Without any other assumptions, we can show (with details in the supplementary document) that the optimal solution of q is given by
$$q(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}) = \prod_{k=1}^{K}\Big(\pi_k^{\eta_k}(1-\pi_k)^{1-\eta_k}\prod_{j=1}^{l_k}\alpha_{jk}^{\gamma_{jk}}(1-\alpha_{jk})^{1-\gamma_{jk}}\,\mathcal{N}(\mu_{jk},s^2_{jk})^{\eta_k\gamma_{jk}}\,\mathcal{N}(0,\sigma^2_\beta)^{1-\eta_k\gamma_{jk}}\Big), \qquad (9)$$

where
$$s^2_{jk} = \frac{\sigma^2_e}{\mathbf{x}_{jk}^T\mathbf{x}_{jk} + \sigma^2_e/\sigma^2_\beta},\qquad
\mu_{jk} = \frac{\mathbf{x}_{jk}^T(\mathbf{y}-\mathbf{Z}\boldsymbol{\omega}) - \sum_{k'\neq k}\sum_{j'=1}^{l_{k'}}\mathbb{E}[\eta_{k'}\gamma_{j'k'}\beta_{j'k'}]\,\mathbf{x}_{j'k'}^T\mathbf{x}_{jk} - \sum_{j'\neq j}\mathbb{E}[\gamma_{j'k}\beta_{j'k}]\,\mathbf{x}_{j'k}^T\mathbf{x}_{jk}}{\mathbf{x}_{jk}^T\mathbf{x}_{jk} + \sigma^2_e/\sigma^2_\beta}, \qquad (10)$$
$$\pi_k = \frac{1}{1+\exp(-u_k)},\quad\text{with}\quad u_k = \log\frac{\pi}{1-\pi} + \frac{1}{2}\sum_{j=1}^{l_k}\alpha_{jk}\left(\log\frac{s^2_{jk}}{\sigma^2_\beta} + \frac{\mu^2_{jk}}{s^2_{jk}}\right), \qquad (11)$$
$$\alpha_{jk} = \frac{1}{1+\exp(-v_{jk})},\quad\text{with}\quad v_{jk} = \log\frac{\alpha}{1-\alpha} + \frac{\pi_k}{2}\left(\log\frac{s^2_{jk}}{\sigma^2_\beta} + \frac{\mu^2_{jk}}{s^2_{jk}}\right). \qquad (12)$$

By inspection of Equations (8) and (9), we have q(η_k = 1) = π_k and q(γ_jk = 1) = α_jk, which can be viewed as approximations to the posterior distributions Pr(η_k = 1|y, X, Z; θ) and Pr(γ_jk = 1|y, X, Z; θ), respectively. Similarly, q(β_jk|η_kγ_jk = 1) = N(μ_jk, s²_jk) can be interpreted as the variational approximation to Pr(β_jk|η_kγ_jk = 1, y, X, Z; θ), the conditional posterior distribution of β_jk given that it is selected at both the group level and the variable level. Accordingly, q(β_jk|η_kγ_jk = 0) = N(0, σ²_β) approximates Pr(β_jk|η_kγ_jk = 0, y, X, Z; θ), corresponding to the case in which β_jk is irrelevant at either of the two levels.

Note that the form of the variational parameters admits an intuitive interpretation. The group-level posterior inclusion probability π_k and the variable-level posterior inclusion probability α_jk can be viewed as their prior inclusion probabilities (π, α) updated by data-driven information. Furthermore, π_k and α_jk are interdependent. On one hand, if more and more of the α_jk within the k-th group approach one, then π_k moves closer to one, as seen in Equation (11). On the other hand, if π_k increases, then the variables in the k-th group are more likely to be selected, as seen in Equation (12).
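As a concrete illustration, the following R sketch evaluates the updates (10)-(12) for a single group k. It is a minimal, unoptimized coordinate-ascent pass with hypothetical names, not the packaged implementation; the caller is assumed to have removed the expected effects of all other groups from the residual beforehand.

```r
# Xk: n x lk design of group k; r_minus_k: y - Z omega - expected effects of other groups
# mu, alpha: current variational means and inclusion probabilities for group k
# pi_prior, alpha_prior, sigma2_e, sigma2_b: current model parameters
update_group <- function(Xk, r_minus_k, mu, alpha, pi_prior, alpha_prior,
                         sigma2_e, sigma2_b) {
  lk <- ncol(Xk)
  xtx <- colSums(Xk^2)
  s2 <- sigma2_e / (xtx + sigma2_e / sigma2_b)             # Equation (10), variance
  for (j in 1:lk) {
    # also remove the within-group expected effects alpha * mu of the other variables
    r_j <- r_minus_k - Xk[, -j, drop = FALSE] %*% (alpha[-j] * mu[-j])
    mu[j] <- sum(Xk[, j] * r_j) / (xtx[j] + sigma2_e / sigma2_b)   # Equation (10), mean
  }
  evidence <- log(s2 / sigma2_b) + mu^2 / s2
  u <- log(pi_prior / (1 - pi_prior)) + 0.5 * sum(alpha * evidence)
  pi_k <- 1 / (1 + exp(-u))                                # Equation (11)
  v <- log(alpha_prior / (1 - alpha_prior)) + (pi_k / 2) * evidence
  alpha <- 1 / (1 + exp(-v))                               # Equation (12)
  list(mu = mu, s2 = s2, alpha = alpha, pi_k = pi_k)
}
```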

With Equation (9), the lower bound L(q) can be evaluated analytically. Setting the derivative of L(q) with respect to θ to zero gives the updating equations for parameter estimation:
$$\begin{aligned}
\sigma^2_e &= \frac{1}{n}\Bigg\{\Big\|\mathbf{y}-\mathbf{Z}\boldsymbol{\omega}-\sum_{k=1}^{K}\sum_{j=1}^{l_k}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big\|^2 + \sum_{k=1}^{K}\sum_{j=1}^{l_k}\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk}) - (\pi_k\alpha_{jk}\mu_{jk})^2\big]\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&\qquad\qquad + \sum_{k=1}^{K}(\pi_k-\pi_k^2)\sum_{j=1}^{l_k}\sum_{j'\neq j}\alpha_{j'k}\mu_{j'k}\,\alpha_{jk}\mu_{jk}\,\mathbf{x}_{j'k}^T\mathbf{x}_{jk}\Bigg\},\\
\sigma^2_\beta &= \frac{\sum_{k=1}^{K}\sum_{j=1}^{l_k}\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})}{\sum_{k=1}^{K}\sum_{j=1}^{l_k}\pi_k\alpha_{jk}},\qquad
\alpha = \frac{1}{p}\sum_{k=1}^{K}\sum_{j=1}^{l_k}\alpha_{jk},\qquad
\pi = \frac{1}{K}\sum_{k=1}^{K}\pi_k,\\
\boldsymbol{\omega} &= (\mathbf{Z}^T\mathbf{Z})^{-1}\mathbf{Z}^T\Big(\mathbf{y}-\sum_{k=1}^{K}\sum_{j=1}^{l_k}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big).
\end{aligned} \qquad (13)$$

To summarize, the algorithm can be regarded as a variational extension of the EM algorithm. In the E-step, the lower bound L(q) is obtained by evaluating the expectation with respect to the variational posterior q. In the M-step, the current L(q) is optimized with respect to the model parameters in θ. As a result, the lower bound increases at each iteration and convergence is guaranteed.
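The overall iteration can be organized as follows. This is a minimal sketch assuming the hypothetical update_group() from the previous snippet; for brevity, the σ²_e and σ²_β updates of (13) are omitted, and theta is assumed to carry fields omega, pi, alpha, sigma2_e and sigma2_b.

```r
bivas_em <- function(y, Z, X, group, theta, tol = 1e-6, maxit = 500) {
  p <- ncol(X); K <- length(unique(group))
  q <- list(mu = numeric(p), s2 = numeric(p),
            alpha = rep(theta$alpha, p), pi = rep(theta$pi, K))
  for (it in 1:maxit) {
    pi_old <- theta$pi; alpha_old <- theta$alpha
    # E-step: cycle the variational updates (10)-(12) over groups
    for (k in 1:K) {
      idx <- which(group == k)
      b_other <- q$pi[group] * q$alpha * q$mu; b_other[idx] <- 0
      r_minus_k <- as.vector(y - Z %*% theta$omega - X %*% b_other)
      upd <- update_group(X[, idx, drop = FALSE], r_minus_k,
                          q$mu[idx], q$alpha[idx], theta$pi, theta$alpha,
                          theta$sigma2_e, theta$sigma2_b)
      q$mu[idx] <- upd$mu; q$s2[idx] <- upd$s2
      q$alpha[idx] <- upd$alpha; q$pi[k] <- upd$pi_k
    }
    # M-step: closed-form updates from (13)
    theta$alpha <- mean(q$alpha)
    theta$pi    <- mean(q$pi)
    bhat <- q$pi[group] * q$alpha * q$mu
    theta$omega <- solve(crossprod(Z), crossprod(Z, y - X %*% bhat))
    # (sigma2_e and sigma2_b updates of (13) omitted in this sketch)
    if (abs(theta$pi - pi_old) + abs(theta$alpha - alpha_old) < tol) break
  }
  list(theta = theta, q = q)
}
```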

2.2 Multi-task learning with BIVAS

2.2.1 Model Setting

In this section, we consider bi-level variable selection in multi-task learning. In real applications, related regression tasks may share similar patterns in the effects of predictor variables. A joint model that analyzes all such related tasks simultaneously can efficiently increase statistical power, an approach known as multi-task learning (Caruana, 1998). As we shall see, a class of multi-task regression problems can be naturally solved by BIVAS with a proper adjustment of the likelihood. To avoid ambiguity, we refer to the model described in Section 2.1 as 'group BIVAS' and to the one discussed in this section as 'multi-task BIVAS'.

Suppose we have collected datasets {y, Z, X} = {y_j, Z_j, X_j}_{j=1}^L from L related regression tasks, where task j has sample size n_j. Here y_j ∈ R^{n_j} is the response vector of the j-th task from n_j individuals; Z_j ∈ R^{n_j×r} includes an intercept and a few shared covariates; and X_j ∈ R^{n_j×K} is the design matrix of K shared predictors. We relate y_j to X_j and Z_j using the following linear mixed model:
$$\mathbf{y}_j = \mathbf{Z}_j\boldsymbol{\omega}_j + \mathbf{X}_j\boldsymbol{\beta}_j + \mathbf{e}_j, \qquad j=1,\dots,L, \qquad (14)$$
where ω_j ∈ R^r is the vector of fixed effects, β_j ∈ R^K is the vector of random effects, and e_j ∈ R^{n_j} is the vector of independent noise with e_j ∼ N(0, σ²_{e_j} I_{n_j}). For convenience, we denote β = [β_1, ..., β_L] ∈ R^{K×L} and let β_jk be the k-th entry of β_j. Clearly, it is not reasonable to assume that all shared predictors are relevant to all responses, especially when K is large. A more reasonable assumption is that the majority of predictors are irrelevant to all responses and only a few are relevant to many of them. Under this assumption, it is natural to treat each shared predictor as a group across the tasks, corresponding to a row of β. Group-level selection then aims at excluding variables that are irrelevant to all responses, while individual-level selection further identifies fine-grained relevance between variables and the response of a specific task. For this purpose, we introduce two binary variables: η_k indicates whether the k-th row of β is active or not, and γ_jk indicates whether β_jk is zero or not. The bi-level spike-slab prior on β is then
$$\beta_{jk}\mid\eta_k,\gamma_{jk};\sigma^2_{\beta_j} \sim \begin{cases}\mathcal{N}(\beta_{jk}\mid 0,\sigma^2_{\beta_j}) & \text{if } \eta_k=1,\ \gamma_{jk}=1,\\ \delta_0(\beta_{jk}) & \text{otherwise,}\end{cases} \qquad (15)$$

where the prior inclusion probabilities are defined as Pr(η_k = 1) = π and Pr(γ_jk = 1) = α. Again, we re-parameterize the model to remove the Dirac function:
$$\beta_{jk}\mid\sigma^2_{\beta_j} \sim \mathcal{N}(0,\sigma^2_{\beta_j}), \qquad \gamma_{jk}\mid\alpha \sim \alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}}, \qquad \eta_k\mid\pi \sim \pi^{\eta_k}(1-\pi)^{1-\eta_k}. \qquad (16)$$

Let θ = {α, π, σ²_{β_j}, σ²_{e_j}, ω_j}_{j=1}^L be the collection of parameters under the multi-task model. The joint probabilistic model is
$$\begin{aligned}
\Pr(\mathbf{y},\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\mathbf{X},\mathbf{Z};\boldsymbol{\theta}) &= \Pr(\mathbf{y}\mid\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta},\mathbf{X},\mathbf{Z};\boldsymbol{\theta})\,\Pr(\boldsymbol{\eta},\boldsymbol{\gamma},\boldsymbol{\beta}\mid\boldsymbol{\theta})\\
&= \prod_{j=1}^{L}\mathcal{N}\Big(\mathbf{y}_j\,\Big|\,\mathbf{Z}_j\boldsymbol{\omega}_j + \sum_{k=1}^{K}\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk},\ \sigma^2_{e_j}\mathbf{I}_{n_j}\Big)
\prod_{k=1}^{K}\pi^{\eta_k}(1-\pi)^{1-\eta_k}\prod_{j=1}^{L}\mathcal{N}(\beta_{jk}\mid 0,\sigma^2_{\beta_j})\,\alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}},
\end{aligned} \qquad (17)$$
where x_jk is the k-th column of X_j, corresponding to the k-th variable in the j-th task. Our goal is to maximize the marginal likelihood, which is of the same form as Equation (5), and to evaluate the posterior distribution of β_jk.
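To make the shared-sparsity assumption concrete, here is a small generative sketch in R (hypothetical names, for illustration only): a single η_k switches the k-th predictor on or off in every task, while γ_jk refines the pattern within each task.

```r
set.seed(2)
L <- 3; K <- 100; n <- c(600, 500, 400)
pi0 <- 0.1; alpha0 <- 0.4
eta <- rbinom(K, 1, pi0)                     # one switch per predictor row of beta
tasks <- lapply(1:L, function(j) {
  Xj <- matrix(rnorm(n[j] * K), n[j], K)
  gamma_j <- rbinom(K, 1, alpha0)            # task-specific refinement gamma_jk
  beta_j <- rnorm(K) * eta * gamma_j         # inactive row => zero in every task
  list(X = Xj, y = Xj %*% beta_j + rnorm(n[j]))
})
```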

2.2.2 Algorithm

The variational EM algorithm for multi-task BIVAS follows the same procedure as in Section 2.1.2; we leave the details to the supplementary document. In summary, we have

$$\begin{aligned}
s^2_{jk} &= \frac{\sigma^2_{e_j}}{\mathbf{x}_{jk}^T\mathbf{x}_{jk} + \sigma^2_{e_j}/\sigma^2_{\beta_j}}, \qquad
\mu_{jk} = \frac{\mathbf{x}_{jk}^T(\mathbf{y}_j - \mathbf{Z}_j\boldsymbol{\omega}_j - \mathbf{y}_{jk})}{\mathbf{x}_{jk}^T\mathbf{x}_{jk} + \sigma^2_{e_j}/\sigma^2_{\beta_j}},\\
\pi_k &= \frac{1}{1+\exp(-u_k)}, \quad\text{where}\quad u_k = \log\frac{\pi}{1-\pi} + \frac{1}{2}\sum_{j=1}^{L}\alpha_{jk}\left(\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}} + \frac{\mu^2_{jk}}{s^2_{jk}}\right),\\
\alpha_{jk} &= \frac{1}{1+\exp(-v_{jk})}, \quad\text{where}\quad v_{jk} = \log\frac{\alpha}{1-\alpha} + \frac{\pi_k}{2}\left(\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}} + \frac{\mu^2_{jk}}{s^2_{jk}}\right),
\end{aligned} \qquad (18)$$
for the E-step, where y_jk denotes the expected contribution of all random effects in task j other than the k-th (cf. the numerator of Equation (10)); and

$$\begin{aligned}
\sigma^2_{e_j} &= \frac{\Big\|\mathbf{y}_j-\mathbf{Z}_j\boldsymbol{\omega}_j-\sum_{k=1}^{K}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big\|^2 + \sum_{k=1}^{K}\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk}) - (\pi_k\alpha_{jk}\mu_{jk})^2\big]\mathbf{x}_{jk}^T\mathbf{x}_{jk}}{n_j},\\
\sigma^2_{\beta_j} &= \frac{\sum_{k=1}^{K}\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})}{\sum_{k=1}^{K}\pi_k\alpha_{jk}},\qquad
\alpha = \frac{1}{KL}\sum_{k=1}^{K}\sum_{j=1}^{L}\alpha_{jk},\qquad
\pi = \frac{1}{K}\sum_{k=1}^{K}\pi_k,\\
\boldsymbol{\omega}_j &= (\mathbf{Z}_j^T\mathbf{Z}_j)^{-1}\mathbf{Z}_j^T\Big(\mathbf{y}_j - \sum_{k=1}^{K}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big),
\end{aligned} \qquad (19)$$
for the M-step.

2.3 Implementation Details

After convergence of the algorithm, we can approximate the posterior inclusion probabilities by the variational approximation. For group BIVAS, the approximations are given by
$$\Pr(\eta_k=1\mid\mathbf{y},\mathbf{X},\mathbf{Z};\hat{\boldsymbol{\theta}}) \approx q(\eta_k=1\mid\hat{\boldsymbol{\theta}}) = \pi_k, \qquad
\Pr(\gamma_{jk}=1\mid\mathbf{y},\mathbf{X},\mathbf{Z};\hat{\boldsymbol{\theta}}) \approx q(\gamma_{jk}=1\mid\hat{\boldsymbol{\theta}}) = \alpha_{jk}.$$

These evaluations are based on the parameter estimates θ̂. However, as the EM algorithm is not guaranteed to reach a global optimum, the choice of the initial value θ₀ is critical: a bad initial value leads to a poor θ̂. In our model, the presence of multiple latent variables (β, η, γ) makes choosing a good initial value challenging. Here we adopt the importance sampling strategy suggested by varbvs (Carbonetto et al., 2012): we further introduce a prior over θ and integrate over its values to obtain the final evaluations. In contrast to varbvs, we introduce a prior only on the group sparsity parameter π. We first select h values of π, {π^{(i)}}_{i=1}^h, such that the log₁₀-odds of the π^{(i)} are uniformly distributed on [−log₁₀(K), 0], which encourages group sparsity. With this additional setting, the new collection of parameters is defined as θ′ = {θ′_i}_{i=1}^h with θ′_i = {α, π^{(i)}, σ²_β, σ²_e, ω}, and the posterior inclusion probabilities can be approximated as follows:
$$\Pr(\eta_k=1\mid\mathbf{y},\mathbf{X},\mathbf{Z}) \approx \int q(\eta_k=1\mid\boldsymbol{\theta}')\Pr(\boldsymbol{\theta}'\mid\mathbf{y},\mathbf{X},\mathbf{Z})\,d\boldsymbol{\theta}' \approx \frac{\sum_{i=1}^{h} q(\eta_k=1\mid\boldsymbol{\theta}'_i)\,w(\boldsymbol{\theta}'_i)}{\sum_{i=1}^{h} w(\boldsymbol{\theta}'_i)},$$
$$\Pr(\gamma_{jk}=1\mid\mathbf{y},\mathbf{X},\mathbf{Z}) \approx \int q(\gamma_{jk}=1\mid\boldsymbol{\theta}')\Pr(\boldsymbol{\theta}'\mid\mathbf{y},\mathbf{X},\mathbf{Z})\,d\boldsymbol{\theta}' \approx \frac{\sum_{i=1}^{h} q(\gamma_{jk}=1\mid\boldsymbol{\theta}'_i)\,w(\boldsymbol{\theta}'_i)}{\sum_{i=1}^{h} w(\boldsymbol{\theta}'_i)}, \qquad (20)$$

where w(θ′_i) is the unnormalized importance weight of the i-th component. In each of the two equations in (20), the first approximation is due to the variational inference and the second to the importance sampling. Moreover, w(θ′_i) can be approximated by the exponential of L(q) given θ′_i, since L(q) has a shape similar to log Pr(y|X, Z; θ) when the marginal likelihood is relatively large (Carbonetto et al., 2012). Hence, we obtain the final evaluations of the posteriors:
$$\begin{aligned}
\Pr(\eta_k=1\mid\mathbf{y},\mathbf{X},\mathbf{Z}) &\approx \sum_{i=1}^{h} \pi_k^{(i)}\,w^{(i)} \equiv \hat\pi_k,\\
\Pr(\gamma_{jk}=1\mid\mathbf{y},\mathbf{X},\mathbf{Z}) &\approx \sum_{i=1}^{h} \alpha_{jk}^{(i)}\,w^{(i)} \equiv \hat\alpha_{jk},\\
\mathbb{E}(\beta_{jk}\mid\eta_k\gamma_{jk}=1,\mathbf{y},\mathbf{X},\mathbf{Z}) &\approx \sum_{i=1}^{h} \mu_{jk}^{(i)}\,w^{(i)} \equiv \hat\mu_{jk},\\
\mathbb{E}(\eta_k\gamma_{jk}\beta_{jk}\mid\mathbf{y},\mathbf{X},\mathbf{Z}) &\approx \hat\pi_k\hat\alpha_{jk}\hat\mu_{jk},
\end{aligned} \qquad (21)$$
where the normalized weights are computed as
$$w^{(i)} = \frac{\exp(\mathcal{L}_i - m)}{\sum_{i'=1}^{h}\exp(\mathcal{L}_{i'} - m)}, \qquad m = \max_i \mathcal{L}_i,$$
with L_i denoting the converged lower bound L(q) under θ′_i.

Subtracting the maximum inside the exponential keeps the calculation numerically stable. The same weighting evaluation applies to the parameters θ′, and the same procedure carries over from group BIVAS to multi-task BIVAS. Although this procedure requires running the EM algorithm h times, each run becomes more stable and converges in fewer iterations. In practice, h = 20 ∼ 40 is often good enough for large-scale data sets. Furthermore, taking advantage of the independence among the π^{(i)}'s, the h runs can be fully parallelized. Common solutions to parallelization are based on APIs such as OpenMP; these, however, usually require the tasks to be allocated beforehand. In our model, this restriction may lead to inefficiency because the convergence times of the runs can differ considerably. We therefore adopt a dynamic threading technique that immediately allocates a new task to a thread once it has finished its previous one, which greatly improves the efficiency of parallelization compared to OpenMP.
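The weighting step amounts to a log-sum-exp average over the h runs. A minimal R sketch follows (hypothetical names; the grid construction follows the description above):

```r
h <- 20; K <- 250
logodds <- seq(-log10(K), 0, length.out = h)  # uniform grid on the log10-odds scale
pi_grid <- 10^logodds / (1 + 10^logodds)      # candidate prior inclusion probabilities

# Each EM run i returns its converged lower bound lb[i] (the log importance
# weight) and a K-vector of group probabilities pi_k[i, ]; combine them:
combine <- function(lb, pi_k) {
  w <- exp(lb - max(lb))                      # subtract the max for numerical stability
  w <- w / sum(w)                             # normalized importance weights
  as.vector(t(pi_k) %*% w)                    # weighted average per group
}
```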

2.4 Variable Selection and Prediction

With the results obtained by importance sampling, we can extract information from our model for variable selection and prediction. Using the approximate posterior inclusion probabilities in (21), we can approximate the local false discovery rate (fdr) of group k by fdr_k = 1 − π̂_k and the fdr of the j-th variable in the k-th group by fdr_jk = 1 − α̂_jk. Hence, by setting a commonly used threshold (e.g., fdr < 0.05), variables and groups with high posterior inclusion probability can be identified as relevant. Although the parameter estimates may not be accurate due to the variational approximation, the posterior means of the latent variables appear to be accurate; we verify this in the simulation studies.

In addition to variable selection, we can also predict y (or y_j in multi-task learning) for new data {Z^new, X^new}. Since E_q(η_kγ_jkβ_jk) ≈ π̂_kα̂_jkμ̂_jk gives the estimated effect size of the jk-th random effect, the predicted value is simply
$$\hat{\mathbf{y}} = \sum_r \hat\omega_r\mathbf{z}^{\mathrm{new}}_r + \sum_k\sum_j \hat\pi_k\hat\alpha_{jk}\hat\mu_{jk}\mathbf{x}^{\mathrm{new}}_{jk}$$
(in multi-task learning, $\hat{\mathbf{y}}_j = \sum_r \hat\omega_{jr}\mathbf{z}^{\mathrm{new}}_{jr} + \sum_k \hat\pi_k\hat\alpha_{jk}\hat\mu_{jk}\mathbf{x}^{\mathrm{new}}_{jk}$ for the j-th task).
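In code, selection and prediction reduce to thresholding the importance-weighted posterior inclusion probabilities and forming a weighted linear predictor. A minimal sketch with hypothetical names:

```r
# Selection: keep units whose local fdr = 1 - PIP falls below the threshold.
select_by_fdr <- function(pip, threshold = 0.05) which(1 - pip < threshold)

# Prediction: posterior-mean effect sizes b_hat = pi_hat * alpha_hat * mu_hat.
predict_bivas <- function(Znew, Xnew, omega_hat, b_hat) {
  as.vector(Znew %*% omega_hat + Xnew %*% b_hat)
}
```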


3 Numerical Examples

In this section, we gauge the performance of BIVAS against alternative methods using both simulations and real data analysis. In the spirit of reproducibility, all simulation code is publicly available at https://github.com/mxcai/sim-bivas.

3.1 Simulation Study

For group BIVAS, we compared it with varbvs (Carbonetto et al., 2012), cMCP (Breheny and Huang, 2009), and GEL (Breheny, 2015). The simulation data sets were generated as follows. The design matrix X was generated from a normal distribution with autoregressive correlation ρ^{|j−j′|} between columns j and j′. As the variational approximation assumes a hierarchically factorizable distribution, we selected ρ ∈ {−0.5, 0, 0.5} to evaluate the influence of violating this assumption. Next, we generated coefficients with different sparsity proportions at the group and individual levels: (π, α) ∈ {(0.05, 0.8), (0.1, 0.4), (0.2, 0.2), (0.4, 0.1), (0.8, 0.05)}. Note that the total sparsity was fixed at α·π = 0.04 across the combinations of π and α. Finally, we controlled the signal-to-noise ratio (SNR) at SNR = var(Xβ)/σ²_e ∈ {0.5, 1, 2}. For all the above settings, we set n = 1,000 and p = 5,000 with K = 250 groups of 20 variables each.
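For instance, one standard way to draw rows with AR(1) correlation ρ^{|j−j′|} and to scale the noise to a target SNR is sketched below in R (hypothetical names; not the authors' exact simulation script):

```r
n <- 1000; p <- 5000; K <- 250; lk <- p / K
rho <- 0.5; pi0 <- 0.2; alpha0 <- 0.2; snr <- 1

# AR(1) columns: x_j = rho * x_{j-1} + sqrt(1 - rho^2) * e_j gives cor = rho^|j - j'|
E <- matrix(rnorm(n * p), n, p)
X <- E
for (j in 2:p) X[, j] <- rho * X[, j - 1] + sqrt(1 - rho^2) * E[, j]

group <- rep(1:K, each = lk)
b <- rbinom(K, 1, pi0)[group] * rbinom(p, 1, alpha0) * rnorm(p)
mu <- X %*% b
sigma_e <- sqrt(var(as.vector(mu)) / snr)   # SNR = var(X beta) / sigma_e^2
y <- mu + rnorm(n, 0, sigma_e)
```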


Figure 1: Comparison of BIVAS and varbvs for individual variable selection.

As cMCP and GEL do not provide FDR estimates for variable selection, we first compared BIVAS with varbvs. Figure 1 shows the FDR control and statistical power for individual variable selection obtained by BIVAS and varbvs. When the SNR is small, both methods are underpowered. However, BIVAS gains more power as the group sparsity dominates, and the gap widens further as the SNR increases. As ρ moves away from zero, the empirical FDRs of both methods are slightly inflated.


Figure 2: Comparisons of BIVAS, varbvs, cMCP and GEL (coupling parameter τ = 1/3). Top Left: AUC for individual variable selection. Top Right: AUC for group selection. Bottom Left: Mean squared error (MSE) of coefficient estimates. Bottom Right: Computational time.

Figure 2 compares BIVAS with varbvs, cMCP and GEL in terms of bi-level variable selection, estimation accuracy and computational efficiency. As varbvs only selects individual variables, we treat it as a baseline for comparing BIVAS with the other two alternatives. In the bottom left panel, the estimation errors of all three methods decrease steadily as the SNR increases when sparsity-in-group dominates. BIVAS performs similarly to cMCP and GEL when the SNR is moderate (SNR = 0.5) but outperforms them when the SNR is relatively large (SNR = 1, 2). When sparsity-in-variable dominates, the estimation performance of BIVAS and cMCP is close to that of varbvs, but the estimation error of GEL is inflated.

To evaluate variable selection, we primarily use the area under the receiver operating characteristic (ROC) curve (AUC) at both the group and individual levels. The top left panel of Figure 2 shows that the variable selection performance of BIVAS is comparable with that of GEL when the SNR is large. When the signal is weak (SNR = 0.5), the AUC of BIVAS is much larger than that of GEL. Moreover, as the 'bulk' of sparsity moves to the individual variable level, BIVAS converges to varbvs while GEL becomes even worse than varbvs. This pattern is consistent with what we observe for the estimation error. In all settings considered, the performance of cMCP is poor. The top right panel of Figure 2 shows the performance of variable selection at the group level (group-AUC); the pattern is similar to the individual-level AUC. The bottom right panel of Figure 2 illustrates the computational efficiency of the four methods: with multi-thread computation, the speed of BIVAS is comparable to that of varbvs and faster than cMCP and GEL in most cases.

In addition, we compared the estimation accuracy and computational efficiency of BIVAS with a Bayesian method based on MCMC, BSGS-SS (Xu et al., 2015). Here we set n = 200, p = 1,000, K = 100 with 10 variables in each group, ρ = 0.5 and SNR = 1. As illustrated in Figure 3, BIVAS achieves almost the same estimation accuracy as BSGS-SS but uses only around 1% of its computational time.

For multi-task BIVAS, we compared with varbvs and Lasso applied separately to each task. We simulated L = 3 tasks with sample sizes n₁ = 600, n₂ = 500 and n₃ = 400, with K = 2,000 shared variables throughout. We followed the group BIVAS settings for the sparsity pattern and SNR. The estimation error was evaluated both overall and per task, as shown in Figure 4. BIVAS outperforms varbvs and Lasso when the group sparsity is predominant, and the difference increases as the signal becomes stronger. Even when the proportion of group sparsity decreases, BIVAS remains comparable with the two alternatives.


Figure 3: Comparison of BIVAS and BSGS-SS. Left: Mean squared error of coefficient estimates. Right: Computational time (log₁₀ seconds).


Figure 4: Comparison of BIVAS, varbvs, Ridge and Lasso in multi-task learning.

In addition, when a strong group-sparsity pattern exists (leftmost column), BIVAS achieves its biggest gain on Task 3, which has the smallest sample size. This is because BIVAS takes advantage of the sparsity pattern shared across tasks.


3.2 Real Data Analysis

To examine the performance of BIVAS on large-scale data, we provide three real examples: we first apply the regression model to GWAS data from the Wellcome Trust Case Control Consortium (WTCCC) (Consortium et al., 2007) and the Northern Finland Birth Cohort (NFBC) (Sabatti et al., 2009); we then analyze a movie review data set from IMDb.com (Maas et al., 2011) using the multi-task model.

3.2.1 GWAS data

For the GWAS data sets, we conducted quality control using PLINK (Purcell et al., 2007) and GCTA (Yang et al., 2011): individuals with > 2% missing genotypes were first removed; we then removed SNPs with minor allele frequency < 0.05, missingness > 1%, or Hardy-Weinberg equilibrium test p-value < 0.001; finally, we excluded individuals with genetic relatedness greater than 0.025.

We first considered High-Density Lipoprotein (HDL) from the NFBC data, accessed through the database of Genotypes and Phenotypes (dbGaP) at https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000276.v2.p1. This data set comprises 5,123 individuals and 319,147 SNPs. In our analysis, the SNPs were first annotated with their corresponding gene regions using ANNOVAR (Wang et al., 2010), which yields 318,686 SNPs in 20,493 non-overlapping genes. Treating the genes as groups, we applied both BIVAS and varbvs to the data. Figure 5(a) shows the convergence of each EM run for BIVAS; one can observe that the EM algorithm converges faster for smaller values of π, suggesting evidence of group sparsity. Computational times for different numbers of threads are presented in Figure 5(b). With h = 40, BIVAS took around 3.2 hours to converge using 4 threads and only 1.6 hours using 8 threads, indicating that the developed algorithm achieves almost perfect parallel efficiency. Estimates of the lower bound and the parameter α are shown in Figures 5(c) and 5(d), suggesting the effectiveness of leveraging the group structure with BIVAS. After convergence, we identified SNPs and genes based on fdr < 0.05. Five risk variants (rs2167079, rs1532085, rs3764261, rs7499892, rs255052) were identified by varbvs; BIVAS discovered one more variant, rs1532624. For group-level selection, BIVAS identified five associated genes, among which CETP contained two risk SNPs: rs7499892, also identified by varbvs, and rs1532624, a new finding. These results are visualized in the Manhattan plots (Figure 6).


Figure 5: BIVAS in fitting HDL. (a) Convergence of the lower bound for the h = 40 EM runs. (b) Computational times using 1, 2, 4, 6 and 8 threads. (c) Lower bound for the 40 settings after convergence. (d) α for the 40 settings after convergence.

Figure 6: Manhattan plots of High-Density Lipoprotein (HDL). The red line represents fdr = 0.1 and the blue line represents fdr = 0.05.

In the second example, we analyzed Rheumatoid Arthritis (RA) and Type 1 Diabetes (T1D) from the WTCCC data. These data sets were obtained from the European Genome-phenome Archive (EGA) at http://www.ebi.ac.uk/ega/studies/EGAS00000000011 and http://www.ebi.ac.uk/ega/studies/EGAS00000000014. After quality control, we had 4,494 individuals and 307,089 SNPs for RA, and 4,986 individuals and 307,357 SNPs for T1D. The SNPs were then matched with their corresponding genes using HapMap3 as the reference, leading to 242,597 SNPs in 16,789 genes for RA and 242,824 SNPs in 16,815 genes for T1D. Manhattan plots are shown in Figure 7. At the SNP level, the identification results of BIVAS and varbvs are similar, but BIVAS further interrogates signals at the gene level, making the results more interpretable. For example, in T1D, the genes ADA1, LINC00469 and LOC100996324 were identified as associated by BIVAS, yet none of them contains a single SNP identified as associated by either varbvs or BIVAS. This suggests that the associations are weak at the SNP level but aggregately gain power as a group, and hence are identified by BIVAS at the gene level.

Figure 7: Manhattan plots of Rheumatoid Arthritis (RA) and Type 1 Diabetes (T1D). The red line represents fdr = 0.1 and the blue line represents fdr = 0.05.


3.2.2 IMDb movie data

In the third example, we analyzed the IMDb data set (Maas et al., 2011) using multi-task BIVAS. The data set, publicly available at IMDb.com, was extracted from movie reviews and contains 50K reviews equally split into a training set and a test set. Each review is marked with a rating from 0 to 10, and only polarized reviews were retained (rating > 7 or rating < 4), giving equal numbers of positive and negative reviews. A bag of representative words was compiled from the full set of reviews, and we adopted a binary representation indicating the presence of each word. This led to K = 27,743 features (words), with the rating as the response variable. We used six movie genres as our tasks: drama, comedy, horror, action, thriller and romance, keeping only reviews of movies with exactly one genre. The resulting sample sizes are 3,354 for drama, 2,235 for comedy, 1,175 for horror, 346 for action, 258 for thriller and 139 for romance. We compared BIVAS against Ridge, Lasso and varbvs.

Table 1 shows the testing errors of the four methods. For horror, action, thriller and romance, BIVAS performs better than the other three methods. Note that these genres have smaller sample sizes than drama and comedy; this result is consistent with what we observed in the simulation study.

Method   Overall  Drama  Comedy  Horror  Action  Thriller  Romance
Ridge       9.58   9.01   10.55    9.01   10.99     10.14     6.76
Lasso       6.67   6.13    6.67    7.20    8.65      9.27     6.77
varbvs      7.14   6.20    6.90    8.94    8.37     11.48     6.91
BIVAS       6.66   6.32    7.01    6.76    6.89      7.44     5.39

Table 1: IMDb testing error.

The words selected by BIVAS and varbvs are presented in Figure 8 and Figure 9 using the 'wordcloud' package in R. Words in blue and yellow represent negative and positive effects, respectively, and the size of a word represents its effect size. As shown in Figure 8, only a small number of words were identified by varbvs as associated with action, thriller or romance, the genres with the smallest sample sizes. In contrast, as shown in Figure 9, BIVAS greatly enriches the effective words in these tasks by borrowing information from the larger samples (drama, comedy and horror). Many associated words that were overwhelmed by noise are now revealed. This can be viewed as a consequence of bi-level selection, which selects the important variables while allowing the sparsity pattern to be shared within a group (or across tasks in multi-task learning). Hence, many useful words shared across tasks, such as 'worst', 'awful' and 'amazing', can be recovered for the small samples, while some task-specific predictors, like 'scariest' in thriller and horror, are maintained. On the other hand, varbvs (as well as Ridge and Lasso) does not account for the bi-level sparsity structure, so it is unable to capture the information shared across tasks.


Figure 8: IMDb wordcloud generated by varbvs.


Figure 9: IMDb wordcloud generated by BIVAS.

4 Discussion

Bi-level variable selection aims at capturing sparsity at both the individual variable level and the group level, so as to better exploit structural information that can assist parameter estimation and variable selection. Bayesian bi-level selection methods are free of parameter tuning and provide the posterior distributions of the random effects, from which variables can be selected at both levels by controlling the local fdr. Despite this convenience, existing Bayesian bi-level variable selection methods are often computationally inefficient and do not scale to large data sets because the posterior is intractable.

In this paper, we proposed a hierarchically factorizable formulation to approximate the posterior distribution, utilizing the structure of bi-level variable selection. Under this variational assumption, a computationally efficient algorithm was developed based on the variational EM algorithm and importance sampling. Convergence of the algorithm is guaranteed, and accurate approximations of the posterior means can be obtained. The proposed algorithm is efficient, stable and scalable, and our software supports parallel computing. After convergence, variable selection at both levels can be conducted by controlling the local fdr, and predictions can be made from the posterior means. Through simulation studies we showed that our method is no worse than alternative methods at the same computational cost and outperforms them in many settings. We also applied BIVAS to real-world data and verified its scalability and its capability for bi-level selection.

SUPPLEMENTARY MATERIAL

Supp final.pdf: The supplementary materials include the detailed derivations for both regression and multi-task learning. (PDF)

R-package for BIVAS: R package 'bivas', containing the functions used to fit BIVAS and to make statistical inference. (zipped tar file)

sim-bivas: R scripts for generating the numerical results in Section 3. (zipped file for R scripts)

References

Bae, K. and B. K. Mallick (2004). Gene selection using a two-level hierarchical Bayesian model. Bioinformatics 20(18), 3423-3430.

Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

Breheny, P. (2015). The group exponential lasso for bi-level variable selection. Biometrics 71(3), 731-740.

Breheny, P. and J. Huang (2009). Penalized methods for bi-level variable selection. Statistics and Its Interface 2(3), 369.

Carbonetto, P., M. Stephens, et al. (2012). Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies. Bayesian Analysis 7(1), 73-108.

Caruana, R. (1998). Multitask learning. In Learning to Learn, pp. 95-133. Springer.

Chen, R.-B., C.-H. Chu, S. Yuan, and Y. N. Wu (2016). Bayesian sparse group selection. Journal of Computational and Graphical Statistics 25(3), 665-683.

Consortium, W. T. C. C., et al. (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature 447(7145), 661.

Fan, J. and R. Li (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348-1360.

Figueiredo, M. A. (2003). Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(9), 1150-1159.

George, E. I. and R. E. McCulloch (1993). Variable selection via Gibbs sampling. Journal of the American Statistical Association 88(423), 881-889.

George, E. I. and R. E. McCulloch (1997). Approaches for Bayesian variable selection. Statistica Sinica, 339-373.

Hastie, T., R. Tibshirani, and M. Wainwright (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press.

Huang, J., S. Ma, H. Xie, and C.-H. Zhang (2009). A group bridge approach for variable selection. Biometrika 96(2), 339-355.

Kyung, M., J. Gill, M. Ghosh, G. Casella, et al. (2010). Penalized regression, standard errors, and Bayesian lassos. Bayesian Analysis 5(2), 369-411.

Maas, A. L., R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts (2011). Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 142-150. Association for Computational Linguistics.

Madigan, D. and A. E. Raftery (1994). Model selection and accounting for model uncertainty in graphical models using Occam's window. Journal of the American Statistical Association 89(428), 1535-1546.

Mitchell, T. J. and J. J. Beauchamp (1988). Bayesian variable selection in linear regression. Journal of the American Statistical Association 83(404), 1023-1032.

Park, T. and G. Casella (2008). The Bayesian lasso. Journal of the American Statistical Association 103(482), 681-686.

Purcell, S., B. Neale, K. Todd-Brown, L. Thomas, M. A. Ferreira, D. Bender, J. Maller, P. Sklar, P. I. De Bakker, M. J. Daly, et al. (2007). PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics 81(3), 559-575.

Raman, S., T. J. Fuchs, P. J. Wild, E. Dahl, and V. Roth (2009). The Bayesian group-lasso for analyzing contingency tables. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 881-888. ACM.

Sabatti, C., A.-L. Hartikainen, A. Pouta, S. Ripatti, J. Brodsky, C. G. Jones, N. A. Zaitlen, T. Varilo, M. Kaakinen, U. Sovio, et al. (2009). Genome-wide association analysis of metabolic traits in a birth cohort from a founder population. Nature Genetics 41(1), 35.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267-288.

Wang, K., M. Li, and H. Hakonarson (2010). ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Research 38(16), e164.

Xu, X., M. Ghosh, et al. (2015). Bayesian variable selection and estimation for group lasso. Bayesian Analysis 10(4), 909-936.

Yang, J., S. H. Lee, M. E. Goddard, and P. M. Visscher (2011). GCTA: a tool for genome-wide complex trait analysis. The American Journal of Human Genetics 88(1), 76-82.

Yuan, M. and Y. Lin (2005). Efficient empirical bayes variable selection and estimation in

linear models. Journal of the American Statistical Association 100 (472), 1215–1225.

Yuan, M. and Y. Lin (2006). Model selection and estimation in regression with grouped vari-

ables. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (1),

49–67.

26

Page 27: BIVAS: A scalable Bayesian method for bi-level variable ... · Department of Mathematics, Hong Kong Baptist University Jin Liu Centre for Quantitative Medicine, Duke-NUS Medical School

Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty.

The Annals of statistics 38 (2), 894–942.

Zhao, P., G. Rocha, and B. Yu (2009). The composite absolute penalties family for grouped

and hierarchical variable selection. The Annals of Statistics , 3468–3497.

Zhou, N. and J. Zhu (2010). Group variable selection via a hierarchical lasso and its oracle

property. Statistics and Its Interface 3 (4), 557–574.

27

Page 28: BIVAS: A scalable Bayesian method for bi-level variable ... · Department of Mathematics, Hong Kong Baptist University Jin Liu Centre for Quantitative Medicine, Duke-NUS Medical School

Supplementary Document

A Variational EM Algorithm: Regression with BIVAS

A.1 E-Step

Let $\theta = \{\alpha, \pi, \sigma^2_\beta, \sigma^2_e, \boldsymbol\omega\}$ be the collection of model parameters in the main text. The joint probabilistic model is
\[
\begin{aligned}
\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta) &= \Pr(\mathbf{y}\,|\,\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta,\mathbf{X},\mathbf{Z},\theta)\Pr(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\theta)\\
&= \mathcal{N}\Big(\mathbf{y}\,\Big|\,\mathbf{Z}\boldsymbol\omega+\sum_k^K\sum_j^{l_k}\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk},\,\sigma^2_e\mathbf{I}\Big)\prod_{k=1}^K\pi^{\eta_k}(1-\pi)^{1-\eta_k}\prod_{j=1}^{l_k}\mathcal{N}(\beta_{jk}\,|\,0,\sigma^2_\beta)\,\alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}}.
\end{aligned}
\tag{22}
\]

The logarithm of the marginal likelihood is
\[
\begin{aligned}
\log p(\mathbf{y}\,|\,\mathbf{X},\mathbf{Z};\theta) &= \log\sum_{\boldsymbol\gamma}\sum_{\boldsymbol\eta}\int_{\boldsymbol\beta}\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)\,d\boldsymbol\beta\\
&\ge \sum_{\boldsymbol\gamma}\sum_{\boldsymbol\eta}\int_{\boldsymbol\beta}q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)\log\frac{\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)}{q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)}\,d\boldsymbol\beta\\
&= \mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)-\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&\equiv \mathcal{L}(q),
\end{aligned}
\tag{23}
\]

where we have adopted Jensen's inequality to obtain the lower bound $\mathcal{L}(q)$. The next step is to iteratively maximize $\mathcal{L}(q)$ instead of working with the marginal likelihood directly.
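As an aside (a standard identity, not part of the original derivation), the gap in the bound above is exactly a Kullback–Leibler divergence,
\[
\log p(\mathbf{y}\,|\,\mathbf{X},\mathbf{Z};\theta)-\mathcal{L}(q)=\mathrm{KL}\big(q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)\,\big\|\,\Pr(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{y},\mathbf{X},\mathbf{Z};\theta)\big)\ \ge 0,
\]
so maximizing $\mathcal{L}(q)$ over a restricted family selects the member of that family closest to the true posterior in KL divergence.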

As in the main text, we use the following hierarchically factorized distribution to approximate the true posterior:

\[
q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)=\prod_k^K\Big(q(\eta_k)\prod_j^{l_k}\big(q(\beta_{jk}\,|\,\eta_k,\gamma_{jk})\,q(\gamma_{jk})\big)\Big),
\tag{24}
\]

where we have assumed that groups are independent and that, given a group, the factors inside it are also independent. With this assumption, we first rewrite the ELBO as
\[
\mathcal{L}(q)=\mathbb{E}_{q(\boldsymbol\eta)}\Big[\mathbb{E}_{q(\boldsymbol\gamma,\boldsymbol\beta|\boldsymbol\eta)}\big[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)\big]\Big].
\tag{25}
\]

Let $q(\boldsymbol\gamma_k)=\prod_j^{l_k}q(\gamma_{jk})$, $q(\boldsymbol\beta_k\,|\,\eta_k,\boldsymbol\gamma_k)=\prod_j^{l_k}q(\beta_{jk}\,|\,\eta_k,\gamma_{jk})$ and $q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)=q(\eta_k)\prod_j^{l_k}q(\beta_{jk}\,|\,\eta_k,\gamma_{jk})\,q(\gamma_{jk})$; the lower bound can then be written in the following form:
\[
\begin{aligned}
\mathcal{L}(q)=&\sum_{\boldsymbol\eta}\prod_k^K q(\eta_k)\sum_{\boldsymbol\gamma}\prod_k^K q(\boldsymbol\gamma_k)\int_{\boldsymbol\beta}\Big(\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)-\sum_k^K\log q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)\Big)\prod_k^K q(\boldsymbol\beta_k\,|\,\eta_k,\boldsymbol\gamma_k)\,d\boldsymbol\beta\\
=&\sum_{\eta_k}q(\eta_k)\sum_{\boldsymbol\gamma_k}\prod_j^{l_k}q(\gamma_{jk})\int\prod_j^{l_k}q(\beta_{jk}|\eta_k,\gamma_{jk})\Bigg[\sum_{\boldsymbol\eta_{-k}}\prod_{k'\neq k}q(\eta_{k'})\sum_{\boldsymbol\gamma_{-k}}\prod_{k'\neq k}q(\boldsymbol\gamma_{k'})\int\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)\prod_{k'\neq k}q(\boldsymbol\beta_{k'}|\eta_{k'},\boldsymbol\gamma_{k'})\,d\boldsymbol\beta_{k'}\Bigg]d\boldsymbol\beta_k\\
&-\sum_{\eta_k}q(\eta_k)\sum_{\boldsymbol\gamma_k}\prod_j^{l_k}q(\gamma_{jk})\int\prod_j^{l_k}q(\beta_{jk}|\eta_k,\gamma_{jk})\log q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)\,d\boldsymbol\beta_k+\mathrm{const}\\
=&\,\mathbb{E}_{q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)\big]+\mathrm{const}\\
=&\,\mathbb{E}_{q(\eta_k)}\Big[\mathbb{E}_{q(\boldsymbol\gamma_k,\boldsymbol\beta_k|\eta_k)}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)\big]\Big]\\
=&\,q(\eta_k=1)\Big[\mathbb{E}_{q(\boldsymbol\gamma_k,\boldsymbol\beta_k|\eta_k=1)}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k=1,\boldsymbol\gamma_k,\boldsymbol\beta_k)\big]\Big]\\
&+q(\eta_k=0)\Big[\mathbb{E}_{q(\boldsymbol\gamma_k,\boldsymbol\beta_k|\eta_k=0)}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=0,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k=0,\boldsymbol\gamma_k,\boldsymbol\beta_k)\big]\Big]+\mathrm{const},
\end{aligned}
\tag{26}
\]

where $\eta_k$ follows a Bernoulli distribution and $\boldsymbol\eta_{-k}$ is the vector obtained by removing the $k$-th entry of $\boldsymbol\eta$. $\mathbb{E}_{k'\neq k}(\cdot)$ denotes the expectation with respect to the terms outside the $k$-th group. Now, given $q(\eta_k)$, when $\eta_k=1$ we can focus on the expectations in Equation (26):

\[
\begin{aligned}
&\mathbb{E}_{q(\boldsymbol\gamma_k,\boldsymbol\beta_k|\eta_k=1)}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k=1,\boldsymbol\gamma_k,\boldsymbol\beta_k)\big]\\
&=\sum_{\boldsymbol\gamma_k}\prod_j^{l_k}q(\gamma_{jk})\int_{\boldsymbol\beta_k}\Big(\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\eta_k=1,\boldsymbol\gamma_k,\boldsymbol\beta_k)\Big)\prod_j^{l_k}q(\beta_{jk}|\eta_k,\gamma_{jk})\,d\boldsymbol\beta_k\\
&=\sum_{\boldsymbol\gamma_k}\prod_j^{l_k}q(\gamma_{jk})\int_{\boldsymbol\beta_k}\Big(\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)-\log q(\boldsymbol\gamma_k,\boldsymbol\beta_k|\eta_k=1)\Big)\prod_j^{l_k}q(\beta_{jk}|\eta_k,\gamma_{jk})\,d\boldsymbol\beta_k+\mathrm{const}\\
&=\sum_{\gamma_{jk}}q(\gamma_{jk})\int q(\beta_{jk}|\gamma_{jk},\eta_k=1)\Bigg[\sum_{\boldsymbol\gamma_{-j|k}}\prod_{j'\neq j|k}q(\gamma_{j'k})\int\mathbb{E}_{k'\neq k}\big[\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)\big]\prod_{j'\neq j|k}q(\beta_{j'k},\gamma_{j'k}|\eta_k=1)\,d\beta_{j'k}\Bigg]d\beta_{jk}\\
&\quad-\sum_{\gamma_{jk}}q(\gamma_{jk})\int q(\beta_{jk}|\gamma_{jk},\eta_k=1)\log q(\beta_{jk},\gamma_{jk}|\eta_k=1)\,d\beta_{jk}+\mathrm{const}\\
&=\mathbb{E}_{q(\beta_{jk},\gamma_{jk}|\eta_k=1)}\Big[\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma,\boldsymbol\beta)\big]-\log q(\beta_{jk},\gamma_{jk}|\eta_k=1)\Big]+\mathrm{const}\\
&=q(\gamma_{jk}=1)\,\mathbb{E}_{q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)}\Big[\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big]-\log q(\beta_{jk},\gamma_{jk}=1|\eta_k=1)\Big]\\
&\quad+q(\gamma_{jk}=0)\,\mathbb{E}_{q(\beta_{jk}|\eta_k=1,\gamma_{jk}=0)}\Big[\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=0,\boldsymbol\beta)\big]-\log q(\beta_{jk},\gamma_{jk}=0|\eta_k=1)\Big],
\end{aligned}
\tag{27}
\]

where the last equality follows from the assumption $q(\beta_{jk},\gamma_{jk}|\eta_k)=q(\beta_{jk}|\gamma_{jk},\eta_k)\,q(\gamma_{jk})$, and $\boldsymbol\gamma_{-jk}$ is the vector obtained by removing the $jk$-th term of $\boldsymbol\gamma$. $\mathbb{E}_{j'\neq j|k}(\cdot)$ denotes the expectation with respect to all variables inside the $k$-th group except the $j$-th one. Again, given $q(\gamma_{jk})$, when $\gamma_{jk}=1$ we can further derive, by a similar procedure, from the expectation in Equation (27) that

\[
\begin{aligned}
&\mathbb{E}_{q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)}\Big[\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big]-\log q(\beta_{jk},\gamma_{jk}=1|\eta_k=1)\Big]\\
&=\mathbb{E}_{q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)}\Big[\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big]-\log q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)\Big]+\mathrm{const},
\end{aligned}
\tag{28}
\]


which is, up to a constant, the negative Kullback–Leibler divergence between $q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)$ and the distribution whose logarithm is $\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big]$ (after normalization), given $\eta_k=1$ and $\gamma_{jk}=1$. Hence the optimal form of $q^*(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)$ is given by
\[
\log q^*(\beta_{jk}\,|\,\eta_k=1,\gamma_{jk}=1)=\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big].
\tag{29}
\]
Here we only derive the case $\eta_k=\gamma_{jk}=1$; the other cases follow from the same procedure. Since both $\eta_k$ and $\gamma_{jk}$ follow Bernoulli distributions, with the expression in Equation (29) we can first impose variational parameters on $q(\gamma_{jk})$ and $q(\eta_k)$, then derive the conditional distribution of $\beta_{jk}$ given $\eta_k$ and $\gamma_{jk}$, and lastly optimize the lower bound to find the variational parameters.

First, we derive $q(\beta_{jk}|\eta_k,\gamma_{jk})$, which involves the joint probability function. The logarithm of the joint probability function is given by
\[
\begin{aligned}
\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)
=&-\frac{n}{2}\log(2\pi\sigma^2_e)-\frac{\mathbf{y}^T\mathbf{y}}{2\sigma^2_e}-\frac{(\mathbf{Z}\boldsymbol\omega)^T(\mathbf{Z}\boldsymbol\omega)}{2\sigma^2_e}+\frac{\sum_k^K\sum_j^{l_k}\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T\mathbf{y}}{\sigma^2_e}+\frac{\mathbf{y}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}\\
&-\frac{\sum_k^K\sum_j^{l_k}\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}-\frac{1}{2\sigma^2_e}\sum_k^K\sum_j^{l_k}(\eta_k\gamma_{jk}\beta_{jk})^2\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&-\frac{1}{2\sigma^2_e}\sum_k^K\sum_{k'\neq k}^K\sum_j^{l_k}\sum_{j'}^{l_{k'}}(\eta_{k'}\gamma_{j'k'}\beta_{j'k'})(\eta_k\gamma_{jk}\beta_{jk})\,\mathbf{x}_{j'k'}^T\mathbf{x}_{jk}\\
&-\frac{1}{2\sigma^2_e}\Big(\sum_k^K\sum_j^{l_k}\sum_{j'\neq j}^{l_k}(\eta_k\gamma_{j'k}\beta_{j'k})(\eta_k\gamma_{jk}\beta_{jk})\,\mathbf{x}_{j'k}^T\mathbf{x}_{jk}\Big)\\
&-\frac{p}{2}\log(2\pi\sigma^2_\beta)-\frac{1}{2\sigma^2_\beta}\sum_{k=1}^K\sum_{j=1}^{l_k}\beta^2_{jk}+\log(\alpha)\sum_{k=1}^K\sum_{j=1}^{l_k}\gamma_{jk}+\log(1-\alpha)\sum_{k=1}^K\sum_{j=1}^{l_k}(1-\gamma_{jk})\\
&+\log(\pi)\sum_{k=1}^K\eta_k+\log(1-\pi)\sum_{k=1}^K(1-\eta_k).
\end{aligned}
\tag{30}
\]


To find the optimal form in Equation (29), we rearrange Equation (30) and retain only the terms involving index $jk$:

\[
\begin{aligned}
\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)
=&-\frac{n}{2}\log(2\pi\sigma^2_e)-\frac{\mathbf{y}^T\mathbf{y}}{2\sigma^2_e}+\frac{\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T\mathbf{y}}{\sigma^2_e}-\frac{\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}-\frac{1}{2\sigma^2_e}(\eta_k\gamma_{jk}\beta_{jk})^2\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&-\frac{1}{2\sigma^2_e}\sum_{k'\neq k}^K\sum_{j'}^{l_{k'}}(\eta_k\gamma_{jk}\beta_{jk})(\eta_{k'}\gamma_{j'k'}\beta_{j'k'})\,\mathbf{x}_{jk}^T\mathbf{x}_{j'k'}-\frac{1}{2\sigma^2_e}\Big(\sum_{j'\neq j}^{l_k}(\eta_k\gamma_{jk}\beta_{jk})(\eta_k\gamma_{j'k}\beta_{j'k})\,\mathbf{x}_{jk}^T\mathbf{x}_{j'k}\Big)-\frac{1}{2\sigma^2_\beta}\beta^2_{jk}\\
&+\log(\alpha)\gamma_{jk}+\log(1-\alpha)(1-\gamma_{jk})+\log(\pi)\eta_k+\log(1-\pi)(1-\eta_k)+\mathrm{const}.
\end{aligned}
\tag{31}
\]

Now we can derive $\log q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)$ by taking the expectation in Equation (29). When $\eta_k=\gamma_{jk}=1\Leftrightarrow\eta_k\gamma_{jk}=1$, we have

\[
\begin{aligned}
\log q(\beta_{jk}\,|\,\eta_k=1,\gamma_{jk}=1)
=&\Big(-\frac{1}{2\sigma^2_e}\mathbf{x}_{jk}^T\mathbf{x}_{jk}-\frac{1}{2\sigma^2_\beta}\Big)\beta^2_{jk}\\
&+\Bigg(\frac{\mathbf{x}_{jk}^T(\mathbf{y}-\mathbf{Z}\boldsymbol\omega)-\sum_{k'\neq k}^K\sum_{j'}^{l_{k'}}\mathbb{E}_{k'\neq k}[\eta_{k'}\gamma_{j'k'}\beta_{j'k'}]\mathbf{x}_{jk}^T\mathbf{x}_{j'k'}-\sum_{j'\neq j}^{l_k}\mathbb{E}_{j'\neq j|k}[\gamma_{j'k}\beta_{j'k}]\mathbf{x}_{jk}^T\mathbf{x}_{j'k}}{\sigma^2_e}\Bigg)\beta_{jk}+\mathrm{const}.
\end{aligned}
\tag{32}
\]

Since Equation (32) is a quadratic form in $\beta_{jk}$, the posterior $q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)$ is Gaussian, $\mathcal{N}(\mu_{jk},s^2_{jk})$, with

\[
\begin{aligned}
s^2_{jk}&=\frac{\sigma^2_e}{\mathbf{x}_{jk}^T\mathbf{x}_{jk}+\frac{\sigma^2_e}{\sigma^2_\beta}},\\
\mu_{jk}&=\frac{\mathbf{x}_{jk}^T(\mathbf{y}-\mathbf{Z}\boldsymbol\omega)-\sum_{k'\neq k}^K\sum_{j'}^{l_{k'}}\mathbb{E}_{k'\neq k}[\eta_{k'}\gamma_{j'k'}\beta_{j'k'}]\mathbf{x}_{jk}^T\mathbf{x}_{j'k'}-\sum_{j'\neq j}^{l_k}\mathbb{E}_{j'\neq j|k}[\gamma_{j'k}\beta_{j'k}]\mathbf{x}_{jk}^T\mathbf{x}_{j'k}}{\mathbf{x}_{jk}^T\mathbf{x}_{jk}+\frac{\sigma^2_e}{\sigma^2_\beta}}.
\end{aligned}
\tag{33}
\]
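For readers who prefer pseudocode, the following is a minimal Python sketch of the single-coordinate update implied by Equation (33); the variable names are our own, and the two "sum of other effects" vectors would be maintained incrementally in an efficient implementation rather than recomputed:

```python
import numpy as np

def update_beta_jk(x_jk, y, Z_omega, other_groups_fit, within_group_fit,
                   sigma2_e, sigma2_beta):
    """Update of q(beta_jk | eta_k = 1, gamma_jk = 1) following Eq. (33).

    other_groups_fit : sum over k' != k and j' of
                       pi_k' * alpha_j'k' * mu_j'k' * x_j'k'
    within_group_fit : sum over j' != j inside group k of
                       alpha_j'k * mu_j'k * x_j'k
                       (no pi_k factor, because eta_k = 1 is conditioned on)
    """
    xtx = x_jk @ x_jk
    denom = xtx + sigma2_e / sigma2_beta
    s2_jk = sigma2_e / denom
    partial_residual = y - Z_omega - other_groups_fit - within_group_fit
    mu_jk = (x_jk @ partial_residual) / denom
    return mu_jk, s2_jk
```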


Similarly, for $\eta_k\gamma_{jk}=0$ we have
\[
\log q(\beta_{jk}\,|\,\eta_k\gamma_{jk}=0)=-\frac{1}{2\sigma^2_\beta}\beta^2_{jk}+\mathrm{const},
\tag{34}
\]
which implies that $q(\beta_{jk}|\eta_k\gamma_{jk}=0)\sim\mathcal{N}(0,\sigma^2_\beta)$. Thus, the conditional posterior of $\beta_{jk}$ is exactly the same as the prior whenever the variable is irrelevant at either of the two levels ($\eta_k\gamma_{jk}=0$). Now we turn to $q(\eta_k)$ and $q(\gamma_{jk})$. Denoting $\pi_k=q(\eta_k=1)$ and $\alpha_{jk}=q(\gamma_{jk}=1)$, we have
\[
q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)=\prod_k^K\Big(\pi_k^{\eta_k}(1-\pi_k)^{1-\eta_k}\prod_j^{l_k}\big(\alpha_{jk}^{\gamma_{jk}}(1-\alpha_{jk})^{1-\gamma_{jk}}\mathcal{N}(\mu_{jk},s^2_{jk})^{\eta_k\gamma_{jk}}\mathcal{N}(0,\sigma^2_\beta)^{1-\eta_k\gamma_{jk}}\big)\Big).
\tag{35}
\]

The second term in $\mathcal{L}(q)$ can then be derived as:
\[
\begin{aligned}
-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]
=&-\mathbb{E}_q\Big[\sum_k^K\big(\eta_k\log\pi_k+(1-\eta_k)\log(1-\pi_k)\big)\Big]\\
&-\mathbb{E}_q\Big[\sum_k^K\sum_j^{l_k}\eta_k\gamma_{jk}\log\mathcal{N}(\mu_{jk},s^2_{jk})+(1-\eta_k\gamma_{jk})\log\mathcal{N}(0,\sigma^2_\beta)+\gamma_{jk}\log\alpha_{jk}+(1-\gamma_{jk})\log(1-\alpha_{jk})\Big]\\
=&-\sum_k^K\sum_j^{l_k}\mathbb{E}_{\eta_k,\gamma_{jk}}\Big\{\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]+\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_\beta)]\\
&\qquad\qquad+\mathbb{E}_{\beta|\eta_k=0,\gamma_{jk}=1}[\log\mathcal{N}(0,\sigma^2_\beta)]+\mathbb{E}_{\beta|\eta_k=0,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_\beta)]\Big\}\\
&-\sum_k^K\sum_j^{l_k}\mathbb{E}_q[\gamma_{jk}\log\alpha_{jk}+(1-\gamma_{jk})\log(1-\alpha_{jk})]-\sum_k^K\mathbb{E}_q[\eta_k\log\pi_k+(1-\eta_k)\log(1-\pi_k)].
\end{aligned}
\tag{36}
\]

Note that $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]$ is the entropy of a Gaussian, so $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]=\frac12\log(s^2_{jk})+\frac12(1+\log(2\pi))$; similarly, $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_\beta)]=\frac12\log(\sigma^2_\beta)+\frac12(1+\log(2\pi))$, and so on. Consequently, Equation (36) can be further written as:


\[
\begin{aligned}
-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]
=&\sum_k^K\sum_j^{l_k}\Big\{\big[\tfrac12\log(s^2_{jk})+\tfrac12(1+\log(2\pi))\big]\pi_k\alpha_{jk}+\big[\tfrac12\log(\sigma^2_\beta)+\tfrac12(1+\log(2\pi))\big](1-\pi_k\alpha_{jk})\\
&\qquad-\alpha_{jk}\log\alpha_{jk}-(1-\alpha_{jk})\log(1-\alpha_{jk})\Big\}-\sum_k^K\big\{\pi_k\log\pi_k+(1-\pi_k)\log(1-\pi_k)\big\}\\
=&\sum_k^K\sum_j^{l_k}\tfrac12\pi_k\alpha_{jk}\big(\log s^2_{jk}-\log\sigma^2_\beta\big)+\frac{p}{2}\log(\sigma^2_\beta)+\frac{p}{2}+\frac{p}{2}\log(2\pi)\\
&-\sum_k^K\sum_j^{l_k}\big[\alpha_{jk}\log\alpha_{jk}+(1-\alpha_{jk})\log(1-\alpha_{jk})\big]-\sum_k^K\big[\pi_k\log\pi_k+(1-\pi_k)\log(1-\pi_k)\big].
\end{aligned}
\tag{37}
\]


Combining (37) with (30), the lower bound is obtained as follows:

\[
\begin{aligned}
&\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&=-\frac{n}{2}\log(2\pi\sigma^2_e)-\frac{\mathbf{y}^T\mathbf{y}}{2\sigma^2_e}-\frac{(\mathbf{Z}\boldsymbol\omega)^T(\mathbf{Z}\boldsymbol\omega)}{2\sigma^2_e}+\frac{\sum_k^K\sum_j^{l_k}\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\mathbf{x}_{jk}^T\mathbf{y}}{\sigma^2_e}+\frac{\mathbf{y}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}-\frac{\sum_k^K\sum_j^{l_k}\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\mathbf{x}_{jk}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}\\
&\quad-\frac{1}{2\sigma^2_e}\sum_k^K\sum_j^{l_k}\mathbb{E}_q[(\eta_k\gamma_{jk}\beta_{jk})^2]\mathbf{x}_{jk}^T\mathbf{x}_{jk}-\frac{1}{2\sigma^2_e}\sum_k^K\sum_{k'\neq k}^K\sum_j^{l_k}\sum_{j'}^{l_{k'}}\mathbb{E}_q[\eta_{k'}\gamma_{j'k'}\beta_{j'k'}]\,\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\,\mathbf{x}_{j'k'}^T\mathbf{x}_{jk}\\
&\quad-\frac{1}{2\sigma^2_e}\Big(\sum_k^K\sum_j^{l_k}\sum_{j'\neq j}^{l_k}\mathbb{E}_q[\eta^2_k\gamma_{j'k}\beta_{j'k}\gamma_{jk}\beta_{jk}]\,\mathbf{x}_{j'k}^T\mathbf{x}_{jk}\Big)\\
&\quad-\frac{p}{2}\log(2\pi\sigma^2_\beta)-\frac{1}{2\sigma^2_\beta}\sum_{k=1}^K\sum_{j=1}^{l_k}\mathbb{E}_q[\beta^2_{jk}]+\log(\alpha)\sum_{k=1}^K\sum_{j=1}^{l_k}\mathbb{E}_q[\gamma_{jk}]+\log(1-\alpha)\sum_{k=1}^K\sum_{j=1}^{l_k}\mathbb{E}_q[1-\gamma_{jk}]\\
&\quad+\log(\pi)\sum_{k=1}^K\mathbb{E}_q[\eta_k]+\log(1-\pi)\sum_{k=1}^K\mathbb{E}_q[1-\eta_k]\\
&\quad+\sum_k^K\sum_j^{l_k}\tfrac12\pi_k\alpha_{jk}\big(\log s^2_{jk}-\log\sigma^2_\beta\big)+\frac{p}{2}\log(\sigma^2_\beta)+\frac{p}{2}+\frac{p}{2}\log(2\pi)\\
&\quad-\sum_k^K\sum_j^{l_k}\alpha_{jk}\log\alpha_{jk}-\sum_k^K\sum_j^{l_k}(1-\alpha_{jk})\log(1-\alpha_{jk})-\sum_k^K\pi_k\log\pi_k-\sum_k^K(1-\pi_k)\log(1-\pi_k),
\end{aligned}
\tag{38}
\]

where the expectations are derived as follows:
\[
\mathbb{E}_q[\eta_k]=\pi_k,\qquad \mathbb{E}_q[\gamma_{jk}]=\alpha_{jk},
\tag{39}
\]
\[
\begin{aligned}
\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]&=\sum_{\gamma_{jk}}\sum_{\eta_k}\int_{\beta_{jk}}\eta_k\gamma_{jk}\beta_{jk}\,q(\beta_{jk}|\eta_k,\gamma_{jk})\,q(\eta_k)\,q(\gamma_{jk})\,d\beta_{jk}\\
&=\pi_k\alpha_{jk}\cdot\mu_{jk}+(1-\pi_k\alpha_{jk})\cdot 0=\pi_k\alpha_{jk}\mu_{jk},
\end{aligned}
\tag{40}
\]
\[
\begin{aligned}
\mathbb{E}_q[\beta^2_{jk}]&=\int_{\beta_{jk}}\beta^2_{jk}\,q(\beta_{jk})\,d\beta_{jk}=\sum_{\eta_k}\sum_{\gamma_{jk}}\int_{\beta_{jk}}\beta^2_{jk}\,q(\beta_{jk}|\eta_k,\gamma_{jk})\,q(\eta_k)\,q(\gamma_{jk})\,d\beta_{jk}\\
&=\int_{\beta_{jk}}\beta^2_{jk}\big[\pi_k\alpha_{jk}\mathcal{N}(\mu_{jk},s^2_{jk})+(1-\pi_k\alpha_{jk})\mathcal{N}(0,\sigma^2_\beta)\big]\,d\beta_{jk}\\
&=\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+(1-\pi_k\alpha_{jk})\sigma^2_\beta,
\end{aligned}
\tag{41}
\]
\[
\begin{aligned}
\mathbb{E}_q[(\eta_k\gamma_{jk}\beta_{jk})^2]&=\sum_{\eta_k}\sum_{\gamma_{jk}}\int_{\beta_{jk}}\eta_k\gamma_{jk}\beta^2_{jk}\,q(\beta_{jk}|\eta_k,\gamma_{jk})\,q(\eta_k)\,q(\gamma_{jk})\,d\beta_{jk}\\
&=\pi_k\alpha_{jk}\int_{\beta_{jk}}\beta^2_{jk}\,\mathcal{N}(\mu_{jk},s^2_{jk})\,d\beta_{jk}=\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk}),
\end{aligned}
\tag{42}
\]
\[
\begin{aligned}
\mathbb{E}_q[\eta^2_k\gamma_{j'k}\beta_{j'k}\gamma_{jk}\beta_{jk}]&=\mathbb{E}_q[\eta_k\gamma_{j'k}\beta_{j'k}\gamma_{jk}\beta_{jk}]\\
&=\sum_{\eta_k}\sum_{\gamma_{jk},\gamma_{j'k}}\iint \eta_k\gamma_{j'k}\beta_{j'k}\gamma_{jk}\beta_{jk}\,q(\beta_{jk}|\eta_k,\gamma_{jk})\,q(\gamma_{jk})\,q(\beta_{j'k}|\eta_k,\gamma_{j'k})\,q(\gamma_{j'k})\,q(\eta_k)\,d\beta_{jk}\,d\beta_{j'k}\\
&=\pi_k\alpha_{j'k}\mu_{j'k}\alpha_{jk}\mu_{jk}.
\end{aligned}
\tag{43}
\]
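Equation (40) is also what the fitted model uses for prediction: the posterior mean effect of variable $jk$ is $\pi_k\alpha_{jk}\mu_{jk}$, so a point prediction is $\hat{\mathbf{y}}=\mathbf{Z}\boldsymbol\omega+\sum_k\sum_j\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}$. A minimal sketch (assuming flat arrays aligned with the columns of $\mathbf{X}$; the names are illustrative):

```python
import numpy as np

def predict(X, Z, omega, pi_expanded, alpha, mu):
    """Point prediction using the posterior mean effects of Eq. (40):
    E[eta_k gamma_jk beta_jk] = pi_k * alpha_jk * mu_jk.

    `pi_expanded` repeats each group's pi_k once per member variable so
    that all three arrays align with the columns of X.
    """
    posterior_mean_beta = pi_expanded * alpha * mu
    return Z @ omega + X @ posterior_mean_beta
```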


Plugging the evaluations in Equations (39) to (43) into Equation (38), $\mathcal{L}(q)$ becomes

\[
\begin{aligned}
&\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&=-\frac{n}{2}\log(2\pi\sigma^2_e)-\frac{\|\mathbf{y}-\mathbf{Z}\boldsymbol\omega-\sum_k^K\sum_j^{l_k}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\|^2}{2\sigma^2_e}\\
&\quad-\frac{1}{2\sigma^2_e}\sum_k^K\sum_j^{l_k}\underbrace{\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})-(\pi_k\alpha_{jk}\mu_{jk})^2\big]}_{\mathrm{Var}[\eta_k\gamma_{jk}\beta_{jk}]}\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&\quad-\frac{1}{2\sigma^2_e}\sum_k^K\big(\pi_k-\pi^2_k\big)\Big(\sum_j^{l_k}\sum_{j'\neq j}^{l_k}\alpha_{j'k}\mu_{j'k}\alpha_{jk}\mu_{jk}\mathbf{x}_{j'k}^T\mathbf{x}_{jk}\Big)\\
&\quad-\frac{p}{2}\log(2\pi\sigma^2_\beta)-\frac{1}{2\sigma^2_\beta}\sum_{k=1}^K\sum_{j=1}^{l_k}\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+(1-\pi_k\alpha_{jk})\sigma^2_\beta\big]\\
&\quad+\sum_{k=1}^K\sum_{j=1}^{l_k}\alpha_{jk}\log\Big(\frac{\alpha}{\alpha_{jk}}\Big)+\sum_{k=1}^K\sum_{j=1}^{l_k}(1-\alpha_{jk})\log\Big(\frac{1-\alpha}{1-\alpha_{jk}}\Big)\\
&\quad+\sum_{k=1}^K\pi_k\log\Big(\frac{\pi}{\pi_k}\Big)+\sum_{k=1}^K(1-\pi_k)\log\Big(\frac{1-\pi}{1-\pi_k}\Big)\\
&\quad+\sum_k^K\sum_j^{l_k}\tfrac12\pi_k\alpha_{jk}\log\Big(\frac{s^2_{jk}}{\sigma^2_\beta}\Big)+\frac{p}{2}\log(\sigma^2_\beta)+\frac{p}{2}+\frac{p}{2}\log(2\pi).
\end{aligned}
\tag{44}
\]

To get $\pi_k$ and $\alpha_{jk}$, we set
\[
\frac{\partial\big\{\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\big\}}{\partial\pi_k}=0,\qquad
\frac{\partial\big\{\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\big\}}{\partial\alpha_{jk}}=0,
\]
which gives
\[
\begin{aligned}
\pi_k&=\frac{1}{1+\exp(-u_k)},\quad\text{where }\ u_k=\log\frac{\pi}{1-\pi}+\frac12\sum_j^{l_k}\alpha_{jk}\Big(\log\frac{s^2_{jk}}{\sigma^2_\beta}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big);\\
\alpha_{jk}&=\frac{1}{1+\exp(-v_{jk})},\quad\text{where }\ v_{jk}=\log\frac{\alpha}{1-\alpha}+\frac12\pi_k\Big(\log\frac{s^2_{jk}}{\sigma^2_\beta}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big).
\end{aligned}
\tag{45}
\]


The derivation is as follows:
\[
\begin{aligned}
u_k=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^{l_k}\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_\beta}+\frac{\sum_j^{l_k}\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}^T\mathbf{y}}{\sigma^2_e}-\frac{\sum_j^{l_k}\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}\boldsymbol\omega)}{\sigma^2_e}-\frac{1}{2\sigma^2_e}\sum_j^{l_k}\alpha_{jk}(s^2_{jk}+\mu^2_{jk})\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&-\frac{1}{\sigma^2_e}\sum_{k'\neq k}^K\sum_j^{l_k}\sum_{j'}^{l_{k'}}\pi_{k'}\alpha_{j'k'}\mu_{j'k'}\alpha_{jk}\mu_{jk}\mathbf{x}_{j'k'}^T\mathbf{x}_{jk}-\frac{1}{\sigma^2_e}\Big(\sum_j^{l_k}\sum_{j'\neq j}^{l_k}\alpha_{j'k}\mu_{j'k}\alpha_{jk}\mu_{jk}\mathbf{x}_{j'k}^T\mathbf{x}_{jk}\Big)\\
&-\frac{1}{2\sigma^2_\beta}\sum_j^{l_k}\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+\frac12\sum_j^{l_k}\alpha_{jk}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^{l_k}\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_\beta}+\sum_j^{l_k}\alpha_{jk}\mu_{jk}\underbrace{\Bigg(\frac{\mathbf{x}_{jk}^T(\mathbf{y}-\mathbf{Z}\boldsymbol\omega)-\sum_{k'\neq k}^K\pi_{k'}\sum_{j'}^{l_{k'}}\alpha_{j'k'}\mu_{j'k'}\mathbf{x}_{j'k'}^T\mathbf{x}_{jk}-\sum_{j'\neq j}^{l_k}\alpha_{j'k}\mu_{j'k}\mathbf{x}_{j'k}^T\mathbf{x}_{jk}}{\sigma^2_e}\Bigg)}_{\mu_{jk}/s^2_{jk}}\\
&-\frac12\sum_j^{l_k}\alpha_{jk}(s^2_{jk}+\mu^2_{jk})\underbrace{\Bigg(\frac{\mathbf{x}_{jk}^T\mathbf{x}_{jk}}{\sigma^2_e}+\frac{1}{\sigma^2_\beta}\Bigg)}_{1/s^2_{jk}}+\frac12\sum_j^{l_k}\alpha_{jk}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^{l_k}\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_\beta}+\sum_j^{l_k}\frac{\alpha_{jk}\mu^2_{jk}}{s^2_{jk}}-\sum_j^{l_k}\frac{\alpha_{jk}\mu^2_{jk}}{2s^2_{jk}}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^{l_k}\alpha_{jk}\Big(\log\frac{s^2_{jk}}{\sigma^2_\beta}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big),
\end{aligned}
\tag{46}
\]
where we have used Equation (32). The derivation of $v_{jk}$ follows the same procedure.
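In code, the two logistic updates of Equation (45) are immediate once $\mu_{jk}$, $s^2_{jk}$ and the current $\alpha_{jk}$ are available, as the following sketch shows (assumed data layout: flat arrays over all $p$ variables plus an integer group index; the names are ours):

```python
import numpy as np

def update_pi_alpha(mu, s2, alpha, group, pi_prior, alpha_prior,
                    sigma2_beta, K):
    """Logistic updates of Equation (45).

    mu, s2, alpha : flat arrays over all p variables
    group         : integer array mapping each variable to its group k
    """
    h = np.log(s2 / sigma2_beta) + mu**2 / s2            # shared evidence term
    u = np.log(pi_prior / (1 - pi_prior)) \
        + 0.5 * np.bincount(group, weights=alpha * h, minlength=K)
    pi_k = 1.0 / (1.0 + np.exp(-u))                      # group level
    v = np.log(alpha_prior / (1 - alpha_prior)) + 0.5 * pi_k[group] * h
    alpha_new = 1.0 / (1.0 + np.exp(-v))                 # individual level
    return pi_k, alpha_new
```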


A.2 M-Step

At the M-step, we update the parameters $\theta=\{\alpha,\pi,\sigma^2_\beta,\sigma^2_e,\boldsymbol\omega\}$. By setting $\frac{\partial\,\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta|\mathbf{X},\mathbf{Z};\theta)]}{\partial\sigma^2_e}=0$, we have
\[
\begin{aligned}
\sigma^2_e=&\frac{\|\mathbf{y}-\mathbf{Z}\boldsymbol\omega-\sum_k^K\sum_j^{l_k}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\|^2}{n}+\frac{\sum_k^K\sum_j^{l_k}\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})-(\pi_k\alpha_{jk}\mu_{jk})^2\big]\mathbf{x}_{jk}^T\mathbf{x}_{jk}}{n}\\
&+\frac{\sum_k^K(\pi_k-\pi^2_k)\sum_j^{l_k}\sum_{j'\neq j}^{l_k}\alpha_{j'k}\mu_{j'k}\alpha_{jk}\mu_{jk}\mathbf{x}_{j'k}^T\mathbf{x}_{jk}}{n}.
\end{aligned}
\tag{47}
\]

To get $\sigma^2_\beta$, we set $\frac{\partial\mathcal{L}(q)}{\partial\sigma^2_\beta}=0$, which gives
\[
\sigma^2_\beta=\frac{\sum_k^K\sum_j^{l_k}\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})}{\sum_k^K\sum_j^{l_k}\pi_k\alpha_{jk}}.
\tag{48}
\]

Accordingly,
\[
\alpha=\frac{1}{p}\sum_k^K\sum_j^{l_k}\alpha_{jk},
\tag{49}
\]
\[
\pi=\frac{1}{K}\sum_k^K\pi_k,
\tag{50}
\]
\[
\boldsymbol\omega=(\mathbf{Z}^T\mathbf{Z})^{-1}\mathbf{Z}^T\Big(\mathbf{y}-\sum_k^K\sum_j^{l_k}\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big).
\tag{51}
\]
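Collecting Equations (48)-(51), one M-step can be sketched as below; the $\sigma^2_e$ update of Equation (47) is omitted for brevity because it additionally requires the within-group cross products. The names and data layout are illustrative assumptions:

```python
import numpy as np

def m_step(y, X, Z, pi_k, alpha, mu, s2, group):
    """One M-step sweep implementing Equations (48)-(51).
    `group` maps each of the p variables to its group index k."""
    incl = pi_k[group] * alpha                    # Pr(eta_k * gamma_jk = 1)
    sigma2_beta = np.sum(incl * (s2 + mu**2)) / np.sum(incl)      # Eq. (48)
    alpha_hat = alpha.mean()                                      # Eq. (49)
    pi_hat = pi_k.mean()                                          # Eq. (50)
    # Eq. (51): regress the de-signalled response on the covariates Z
    residual = y - X @ (incl * mu)
    omega = np.linalg.solve(Z.T @ Z, Z.T @ residual)
    return sigma2_beta, alpha_hat, pi_hat, omega
```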

B Variational EM Algorithm: Multi-task Learning with BIVAS

B.1 E-Step

Let $\theta=\{\alpha,\pi,\sigma^2_{\beta_j},\sigma^2_{e_j},\boldsymbol\omega_j\}_{j=1}^L$ be the collection of model parameters. The joint probabilistic model is
\[
\begin{aligned}
\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)&=\Pr(\mathbf{y}\,|\,\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta,\mathbf{X},\mathbf{Z},\theta)\Pr(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\theta)\\
&=\prod_{j=1}^L\mathcal{N}\Big(\mathbf{y}_j\,\Big|\,\mathbf{Z}_j\boldsymbol\omega_j+\sum_k^K\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk},\,\sigma^2_{e_j}\mathbf{I}\Big)\prod_{k=1}^K\pi^{\eta_k}(1-\pi)^{1-\eta_k}\prod_{j=1}^L\mathcal{N}(\beta_{jk}\,|\,0,\sigma^2_{\beta_j})\,\alpha^{\gamma_{jk}}(1-\alpha)^{1-\gamma_{jk}}.
\end{aligned}
\tag{52}
\]


The logarithm of the marginal likelihood is
\[
\begin{aligned}
\log p(\mathbf{y}\,|\,\mathbf{X},\mathbf{Z};\theta)&=\log\sum_{\boldsymbol\gamma}\sum_{\boldsymbol\eta}\int_{\boldsymbol\beta}\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)\,d\boldsymbol\beta\\
&\ge\sum_{\boldsymbol\gamma}\sum_{\boldsymbol\eta}\int_{\boldsymbol\beta}q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)\log\frac{\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)}{q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)}\,d\boldsymbol\beta\\
&=\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)-\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&\equiv\mathcal{L}(q).
\end{aligned}
\tag{53}
\]

Again, we assume that the variational distribution takes the form

\[
q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)=\prod_k^K\Big(q(\eta_k)\prod_j^L\big(q(\beta_{jk}\,|\,\eta_k,\gamma_{jk})\,q(\gamma_{jk})\big)\Big).
\tag{54}
\]

In fact, the variational approximation only assumes 'between-group' factorizability, $\prod_{k=1}^K q(\eta_k,\boldsymbol\gamma_k,\boldsymbol\beta_k)$, because given the group, the tasks inside it are independent by the model assumption. Following the same procedure as in Section A.1, the optimal form of $q$ is given by
\[
\log q^*(\beta_{jk}\,|\,\eta_k=1,\gamma_{jk}=1)=\mathbb{E}_{j'\neq j|k}\big[\mathbb{E}_{k'\neq k}\log\Pr(\mathbf{y},\boldsymbol\eta_{-k},\eta_k=1,\boldsymbol\gamma_{-jk},\gamma_{jk}=1,\boldsymbol\beta)\big].
\tag{55}
\]

Equation (55) involves the logarithm of the joint probability function, which is

\[
\begin{aligned}
\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)=&\sum_{j=1}^L\Bigg\{-\frac{N_j}{2}\log(2\pi\sigma^2_{e_j})-\frac{\mathbf{y}_j^T\mathbf{y}_j}{2\sigma^2_{e_j}}-\frac{(\mathbf{Z}_j\boldsymbol\omega_j)^T(\mathbf{Z}_j\boldsymbol\omega_j)}{2\sigma^2_{e_j}}+\frac{\sum_k^K\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T\mathbf{y}_j}{\sigma^2_{e_j}}+\frac{\mathbf{y}_j^T(\mathbf{Z}_j\boldsymbol\omega_j)}{\sigma^2_{e_j}}\\
&\qquad-\frac{\sum_k^K\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}_j\boldsymbol\omega_j)}{\sigma^2_{e_j}}-\frac{1}{2\sigma^2_{e_j}}\sum_k^K(\eta_k\gamma_{jk}\beta_{jk})^2\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&\qquad-\frac{1}{2\sigma^2_{e_j}}\Big(\sum_k^K\sum_{k'\neq k}^K(\eta_{k'}\gamma_{jk'}\beta_{jk'})(\eta_k\gamma_{jk}\beta_{jk})\,\mathbf{x}_{jk'}^T\mathbf{x}_{jk}\Big)\Bigg\}\\
&-\frac{K}{2}\sum_{j=1}^L\log(2\pi\sigma^2_{\beta_j})-\sum_{j=1}^L\frac{\sum_{k=1}^K\beta^2_{jk}}{2\sigma^2_{\beta_j}}+\log(\alpha)\sum_{k=1}^K\sum_{j=1}^L\gamma_{jk}+\log(1-\alpha)\sum_{k=1}^K\sum_{j=1}^L(1-\gamma_{jk})\\
&+\log(\pi)\sum_{k=1}^K\eta_k+\log(1-\pi)\sum_{k=1}^K(1-\eta_k).
\end{aligned}
\tag{56}
\]


We then rearrange Equation (56) and retain only the terms involving index $jk$:

\[
\begin{aligned}
\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X},\mathbf{Z};\theta)
=&-\frac{N_j}{2}\log(2\pi\sigma^2_{e_j})-\frac{\mathbf{y}_j^T\mathbf{y}_j}{2\sigma^2_{e_j}}+\frac{\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T\mathbf{y}_j}{\sigma^2_{e_j}}-\frac{\eta_k\gamma_{jk}\beta_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}_j\boldsymbol\omega_j)}{\sigma^2_{e_j}}-\frac{1}{2\sigma^2_{e_j}}(\eta_k\gamma_{jk}\beta_{jk})^2\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&-\frac{1}{2\sigma^2_{e_j}}\Big(\sum_{k'\neq k}^K(\eta_k\gamma_{jk}\beta_{jk})(\eta_{k'}\gamma_{jk'}\beta_{jk'})\,\mathbf{x}_{jk}^T\mathbf{x}_{jk'}\Big)-\frac{1}{2\sigma^2_{\beta_j}}\beta^2_{jk}\\
&+\log(\alpha)\gamma_{jk}+\log(1-\alpha)(1-\gamma_{jk})+\log(\pi)\eta_k+\log(1-\pi)(1-\eta_k)+\mathrm{const}.
\end{aligned}
\tag{57}
\]

When $\eta_k=\gamma_{jk}=1\Leftrightarrow\eta_k\gamma_{jk}=1$, using Equation (55) we have

\[
\begin{aligned}
\log q(\beta_{jk}\,|\,\eta_k=1,\gamma_{jk}=1)
=&\Big(-\frac{1}{2\sigma^2_{e_j}}\mathbf{x}_{jk}^T\mathbf{x}_{jk}-\frac{1}{2\sigma^2_{\beta_j}}\Big)\beta^2_{jk}\\
&+\Bigg(\frac{\mathbf{x}_{jk}^T(\mathbf{y}_j-\mathbf{Z}_j\boldsymbol\omega_j)-\sum_{k'\neq k}^K\mathbb{E}_{k'\neq k}[\eta_{k'}\gamma_{jk'}\beta_{jk'}]\mathbf{x}_{jk}^T\mathbf{x}_{jk'}}{\sigma^2_{e_j}}\Bigg)\beta_{jk}+\mathrm{const},
\end{aligned}
\tag{58}
\]

from which we can see that the conditional posterior is $q(\beta_{jk}|\eta_k=1,\gamma_{jk}=1)\sim\mathcal{N}(\mu_{jk},s^2_{jk})$, where
\[
\begin{aligned}
s^2_{jk}&=\frac{\sigma^2_{e_j}}{\mathbf{x}_{jk}^T\mathbf{x}_{jk}+\frac{\sigma^2_{e_j}}{\sigma^2_{\beta_j}}},\\
\mu_{jk}&=\frac{\mathbf{x}_{jk}^T(\mathbf{y}_j-\mathbf{Z}_j\boldsymbol\omega_j)-\sum_{k'\neq k}^K\mathbb{E}_{k'\neq k}[\eta_{k'}\gamma_{jk'}\beta_{jk'}]\mathbf{x}_{jk}^T\mathbf{x}_{jk'}}{\mathbf{x}_{jk}^T\mathbf{x}_{jk}+\frac{\sigma^2_{e_j}}{\sigma^2_{\beta_j}}}.
\end{aligned}
\tag{59}
\]

For $\eta_k\gamma_{jk}=0$, we have
\[
\log q(\beta_{jk}\,|\,\eta_k\gamma_{jk}=0)=-\frac{1}{2\sigma^2_{\beta_j}}\beta^2_{jk}+\mathrm{const},
\tag{60}
\]


which implies that $q(\beta_{jk}|\eta_k\gamma_{jk}=0)\sim\mathcal{N}(0,\sigma^2_{\beta_j})$. Thus, the posterior is exactly the same as the prior whenever the variable is irrelevant at either of the two levels ($\eta_k\gamma_{jk}=0$). Therefore we have
\[
q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)=\prod_k^K\Big(\pi_k^{\eta_k}(1-\pi_k)^{1-\eta_k}\prod_j^L\big(\alpha_{jk}^{\gamma_{jk}}(1-\alpha_{jk})^{1-\gamma_{jk}}\mathcal{N}(\mu_{jk},s^2_{jk})^{\eta_k\gamma_{jk}}\mathcal{N}(0,\sigma^2_{\beta_j})^{1-\eta_k\gamma_{jk}}\big)\Big),
\tag{61}
\]
where we denote $\pi_k=q(\eta_k=1)$ and $\alpha_{jk}=q(\gamma_{jk}=1)$.

Now we evaluate the second term of $\mathcal{L}(q)$ in Equation (53):

\[
\begin{aligned}
-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]
=&-\mathbb{E}_q\Big[\sum_k^K\big(\eta_k\log\pi_k+(1-\eta_k)\log(1-\pi_k)\big)\Big]\\
&-\mathbb{E}_q\Big[\sum_k^K\sum_j^L\eta_k\gamma_{jk}\log\mathcal{N}(\mu_{jk},s^2_{jk})+(1-\eta_k\gamma_{jk})\log\mathcal{N}(0,\sigma^2_{\beta_j})+\gamma_{jk}\log\alpha_{jk}+(1-\gamma_{jk})\log(1-\alpha_{jk})\Big]\\
=&-\sum_k^K\sum_j^L\mathbb{E}_{\eta_k,\gamma_{jk}}\Big\{\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]+\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_{\beta_j})]\\
&\qquad\qquad+\mathbb{E}_{\beta|\eta_k=0,\gamma_{jk}=1}[\log\mathcal{N}(0,\sigma^2_{\beta_j})]+\mathbb{E}_{\beta|\eta_k=0,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_{\beta_j})]\Big\}\\
&-\sum_k^K\sum_j^L\mathbb{E}_q[\gamma_{jk}\log\alpha_{jk}+(1-\gamma_{jk})\log(1-\alpha_{jk})]-\sum_k^K\mathbb{E}_q[\eta_k\log\pi_k+(1-\eta_k)\log(1-\pi_k)].
\end{aligned}
\tag{62}
\]

Note that $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]$ is the entropy of a Gaussian, so $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=1}[\log\mathcal{N}(\mu_{jk},s^2_{jk})]=\frac12\log(s^2_{jk})+\frac12(1+\log(2\pi))$; similarly, $-\mathbb{E}_{\beta|\eta_k=1,\gamma_{jk}=0}[\log\mathcal{N}(0,\sigma^2_{\beta_j})]=\frac12\log(\sigma^2_{\beta_j})+\frac12(1+\log(2\pi))$, and so on. Consequently,

\[
\begin{aligned}
-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]
=&\sum_k^K\sum_j^L\Big\{\big[\tfrac12\log(s^2_{jk})+\tfrac12(1+\log(2\pi))\big]\pi_k\alpha_{jk}+\big[\tfrac12\log(\sigma^2_{\beta_j})+\tfrac12(1+\log(2\pi))\big](1-\pi_k\alpha_{jk})\\
&\qquad-\alpha_{jk}\log\alpha_{jk}-(1-\alpha_{jk})\log(1-\alpha_{jk})\Big\}-\sum_k^K\big\{\pi_k\log\pi_k+(1-\pi_k)\log(1-\pi_k)\big\}\\
=&\sum_k^K\sum_j^L\tfrac12\pi_k\alpha_{jk}\big(\log s^2_{jk}-\log\sigma^2_{\beta_j}\big)+\frac{K}{2}\sum_j^L\log(\sigma^2_{\beta_j})+\frac{p}{2}+\frac{p}{2}\log(2\pi)\\
&-\sum_k^K\sum_j^L\big[\alpha_{jk}\log\alpha_{jk}+(1-\alpha_{jk})\log(1-\alpha_{jk})\big]-\sum_k^K\big[\pi_k\log\pi_k+(1-\pi_k)\log(1-\pi_k)\big].
\end{aligned}
\tag{63}
\]


Combining Equation (63) with Equation (56), we can find the lower bound:

\[
\begin{aligned}
&\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&=\sum_{j=1}^L\Bigg\{-\frac{N_j}{2}\log(2\pi\sigma^2_{e_j})-\frac{\mathbf{y}_j^T\mathbf{y}_j}{2\sigma^2_{e_j}}-\frac{(\mathbf{Z}_j\boldsymbol\omega_j)^T(\mathbf{Z}_j\boldsymbol\omega_j)}{2\sigma^2_{e_j}}+\frac{\sum_k^K\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\mathbf{x}_{jk}^T\mathbf{y}_j}{\sigma^2_{e_j}}+\frac{\mathbf{y}_j^T(\mathbf{Z}_j\boldsymbol\omega_j)}{\sigma^2_{e_j}}\\
&\qquad-\frac{\sum_k^K\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\mathbf{x}_{jk}^T(\mathbf{Z}_j\boldsymbol\omega_j)}{\sigma^2_{e_j}}-\frac{1}{2\sigma^2_{e_j}}\sum_k^K\mathbb{E}_q[(\eta_k\gamma_{jk}\beta_{jk})^2]\mathbf{x}_{jk}^T\mathbf{x}_{jk}\\
&\qquad-\frac{1}{2\sigma^2_{e_j}}\Big(\sum_k^K\sum_{k'\neq k}^K\mathbb{E}_q[\eta_{k'}\gamma_{jk'}\beta_{jk'}]\,\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]\,\mathbf{x}_{jk'}^T\mathbf{x}_{jk}\Big)\Bigg\}\\
&\quad-\frac{K}{2}\sum_j^L\log(2\pi\sigma^2_{\beta_j})-\sum_{k=1}^K\sum_{j=1}^L\frac{\mathbb{E}_q[\beta^2_{jk}]}{2\sigma^2_{\beta_j}}+\log(\alpha)\sum_{k=1}^K\sum_{j=1}^L\mathbb{E}_q[\gamma_{jk}]+\log(1-\alpha)\sum_{k=1}^K\sum_{j=1}^L\mathbb{E}_q[1-\gamma_{jk}]\\
&\quad+\log(\pi)\sum_{k=1}^K\mathbb{E}_q[\eta_k]+\log(1-\pi)\sum_{k=1}^K\mathbb{E}_q[1-\eta_k]\\
&\quad+\sum_k^K\sum_j^L\tfrac12\pi_k\alpha_{jk}\big(\log s^2_{jk}-\log\sigma^2_{\beta_j}\big)+\frac{K}{2}\sum_j^L\log(\sigma^2_{\beta_j})+\frac{p}{2}+\frac{p}{2}\log(2\pi)\\
&\quad-\sum_k^K\sum_j^L\alpha_{jk}\log\alpha_{jk}-\sum_k^K\sum_j^L(1-\alpha_{jk})\log(1-\alpha_{jk})-\sum_k^K\pi_k\log\pi_k-\sum_k^K(1-\pi_k)\log(1-\pi_k).
\end{aligned}
\tag{64}
\]

Again, we can show with the same technique as in Section A.1 that $\mathbb{E}_q[\eta_k\gamma_{jk}\beta_{jk}]=\pi_k\alpha_{jk}\mu_{jk}$, $\mathbb{E}_q[(\eta_k\gamma_{jk}\beta_{jk})^2]=\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})$, $\mathbb{E}_q[\beta^2_{jk}]=\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+(1-\pi_k\alpha_{jk})\sigma^2_{\beta_j}$, $\mathbb{E}_q[\eta_k]=\pi_k$ and $\mathbb{E}_q[\gamma_{jk}]=\alpha_{jk}$. Plugging in these expectations, Equation (64) becomes


\[
\begin{aligned}
&\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\\
&=\sum_{j=1}^L\Bigg\{-\frac{N_j}{2}\log(2\pi\sigma^2_{e_j})-\frac{\|\mathbf{y}_j-\mathbf{Z}_j\boldsymbol\omega_j-\sum_k^K\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\|^2}{2\sigma^2_{e_j}}\\
&\qquad-\frac{1}{2\sigma^2_{e_j}}\sum_k^K\underbrace{\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})-(\pi_k\alpha_{jk}\mu_{jk})^2\big]}_{\mathrm{Var}[\eta_k\gamma_{jk}\beta_{jk}]}\mathbf{x}_{jk}^T\mathbf{x}_{jk}\Bigg\}\\
&\quad-\frac{K}{2}\sum_{j=1}^L\log(2\pi\sigma^2_{\beta_j})-\sum_{j=1}^L\frac{1}{2\sigma^2_{\beta_j}}\sum_{k=1}^K\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+(1-\pi_k\alpha_{jk})\sigma^2_{\beta_j}\big]\\
&\quad+\sum_{k=1}^K\sum_{j=1}^L\alpha_{jk}\log\Big(\frac{\alpha}{\alpha_{jk}}\Big)+\sum_{k=1}^K\sum_{j=1}^L(1-\alpha_{jk})\log\Big(\frac{1-\alpha}{1-\alpha_{jk}}\Big)\\
&\quad+\sum_{k=1}^K\pi_k\log\Big(\frac{\pi}{\pi_k}\Big)+\sum_{k=1}^K(1-\pi_k)\log\Big(\frac{1-\pi}{1-\pi_k}\Big)\\
&\quad+\sum_k^K\sum_j^L\tfrac12\pi_k\alpha_{jk}\log\Big(\frac{s^2_{jk}}{\sigma^2_{\beta_j}}\Big)+\frac{K}{2}\sum_j^L\log(\sigma^2_{\beta_j})+\frac{p}{2}+\frac{p}{2}\log(2\pi).
\end{aligned}
\tag{65}
\]

To get $\pi_k$ and $\alpha_{jk}$, we set
\[
\frac{\partial\big\{\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\big\}}{\partial\pi_k}=0,\qquad
\frac{\partial\big\{\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta\,|\,\mathbf{X};\theta)]-\mathbb{E}_q[\log q(\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta)]\big\}}{\partial\alpha_{jk}}=0,
\]
which gives us
\[
\begin{aligned}
\pi_k&=\frac{1}{1+\exp(-u_k)},\quad\text{where }\ u_k=\log\frac{\pi}{1-\pi}+\frac12\sum_j^L\alpha_{jk}\Big(\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big);\\
\alpha_{jk}&=\frac{1}{1+\exp(-v_{jk})},\quad\text{where }\ v_{jk}=\log\frac{\alpha}{1-\alpha}+\frac12\pi_k\Big(\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big).
\end{aligned}
\tag{66}
\]


The derivation is as follows:
\[
\begin{aligned}
u_k=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^L\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\sum_j^L\frac{1}{\sigma^2_{e_j}}\Big\{\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}^T\mathbf{y}_j-\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}^T(\mathbf{Z}_j\boldsymbol\omega_j)-\sum_{k'\neq k}^K\pi_{k'}\alpha_{jk'}\mu_{jk'}\alpha_{jk}\mu_{jk}\mathbf{x}_{jk'}^T\mathbf{x}_{jk}\Big\}\\
&-\sum_j^L\frac{1}{2\sigma^2_{e_j}}\alpha_{jk}(s^2_{jk}+\mu^2_{jk})\mathbf{x}_{jk}^T\mathbf{x}_{jk}-\sum_j^L\frac{1}{2\sigma^2_{\beta_j}}\alpha_{jk}(s^2_{jk}+\mu^2_{jk})+\frac12\sum_j^L\alpha_{jk}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^L\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\sum_j^L\alpha_{jk}\mu_{jk}\underbrace{\Bigg(\frac{\mathbf{x}_{jk}^T(\mathbf{y}_j-\mathbf{Z}_j\boldsymbol\omega_j)-\sum_{k'\neq k}^K\pi_{k'}\alpha_{jk'}\mu_{jk'}\mathbf{x}_{jk'}^T\mathbf{x}_{jk}}{\sigma^2_{e_j}}\Bigg)}_{\mu_{jk}/s^2_{jk}}\\
&-\frac12\sum_j^L\alpha_{jk}(s^2_{jk}+\mu^2_{jk})\underbrace{\Bigg(\frac{\mathbf{x}_{jk}^T\mathbf{x}_{jk}}{\sigma^2_{e_j}}+\frac{1}{\sigma^2_{\beta_j}}\Bigg)}_{1/s^2_{jk}}+\frac12\sum_j^L\alpha_{jk}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^L\alpha_{jk}\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\sum_j^L\frac{\alpha_{jk}\mu^2_{jk}}{s^2_{jk}}-\sum_j^L\frac{\alpha_{jk}\mu^2_{jk}}{2s^2_{jk}}\\
=&\log\frac{\pi}{1-\pi}+\frac12\sum_j^L\alpha_{jk}\Big(\log\frac{s^2_{jk}}{\sigma^2_{\beta_j}}+\frac{\mu^2_{jk}}{s^2_{jk}}\Big),
\end{aligned}
\tag{67}
\]
where we have used Equation (59). Similarly, we can derive $v_{jk}$.

B.2 M-Step

At the M-step, we update the parameters $\{\sigma^2_{e_j},\sigma^2_{\beta_j},\pi,\alpha,\boldsymbol\omega_j\}$. First we consider $\sigma^2_{e_j}$: setting $\frac{\partial\,\mathbb{E}_q[\log\Pr(\mathbf{y},\boldsymbol\eta,\boldsymbol\gamma,\boldsymbol\beta|\mathbf{X},\mathbf{Z};\theta)]}{\partial\sigma^2_{e_j}}=0$, we have
\[
\sigma^2_{e_j}=\frac{\|\mathbf{y}_j-\mathbf{Z}_j\boldsymbol\omega_j-\sum_k^K\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\|^2}{N_j}+\frac{\sum_k^K\big[\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})-(\pi_k\alpha_{jk}\mu_{jk})^2\big]\mathbf{x}_{jk}^T\mathbf{x}_{jk}}{N_j}.
\tag{68}
\]

For $\sigma^2_{\beta_j}$, setting $\frac{\partial\mathcal{L}(q)}{\partial\sigma^2_{\beta_j}}=0$ gives
\[
\sigma^2_{\beta_j}=\frac{\sum_k^K\pi_k\alpha_{jk}(s^2_{jk}+\mu^2_{jk})}{\sum_k^K\pi_k\alpha_{jk}}.
\tag{69}
\]


Accordingly,
\[
\alpha=\frac{1}{p}\sum_k^K\sum_j^L\alpha_{jk},
\tag{70}
\]
\[
\pi=\frac{1}{K}\sum_k^K\pi_k,
\tag{71}
\]
\[
\boldsymbol\omega_j=(\mathbf{Z}_j^T\mathbf{Z}_j)^{-1}\mathbf{Z}_j^T\Big(\mathbf{y}_j-\sum_k^K\pi_k\alpha_{jk}\mu_{jk}\mathbf{x}_{jk}\Big).
\tag{72}
\]
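Note that the tasks are coupled only through the shared quantities $\pi_k$, $\alpha$ and $\pi$: the sum over $j$ in Equation (66) is the single place where the $L$ tasks meet. This is what makes the algorithm parallelizable across tasks, as the schematic Python sketch below illustrates (the dictionary-based task container and the function names are our own assumptions; the per-task arrays would come from Equations (58)-(59)):

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def task_logit_contribution(task):
    """Per-task term of u_k in Eq. (66):
    0.5 * alpha_jk * (log(s2_jk / sigma2_beta_j) + mu_jk**2 / s2_jk),
    one value per group k. `task` is a dict of length-K arrays."""
    return 0.5 * task["alpha"] * (
        np.log(task["s2"] / task["sigma2_beta"]) + task["mu"] ** 2 / task["s2"]
    )

def update_pi_k(tasks, pi_prior):
    """Shared group probabilities: the sum over tasks is the only
    cross-task coupling, so the L contributions can be computed in
    parallel and then reduced."""
    with ProcessPoolExecutor() as pool:
        contribs = list(pool.map(task_logit_contribution, tasks))
    u = np.log(pi_prior / (1 - pi_prior)) + sum(contribs)
    return 1.0 / (1.0 + np.exp(-u))
```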
