TRANSCRIPT
Manifold-regression to predict from MEG/EEG brain signals without source modeling
D. Sabbagh, P. Ablin, G. Varoquaux, A. Gramfort, D. Engemann
NeurIPS 2019
Non-invasive measure of brain activity
Objective: predict a variable from M/EEG brain signals
We want to predict a continuous variable from M/EEG brain signals.
Model: generative model of M/EEG data
We measure the M/EEG signal of subject i = 1 … N on P channels:
x_i(t) = A s_i(t) + n_i(t) ∈ R^P, t = 1 … T
- mixing matrix A = [a_1, …, a_Q] ∈ R^{P×Q}, fixed across subjects
- source patterns a_j ∈ R^P, j = 1 … Q, with Q < P
- source vector s_i(t) ∈ R^Q
- noise n_i(t) ∈ R^P
Under stationarity and Gaussianity assumptions, we can represent a band-pass filtered signal by its second-order statistics:
C_i = E[x_i(t) x_i(t)^⊤] ≃ X_i X_i^⊤ / T ∈ R^{P×P}, with X_i ∈ R^{P×T}
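To make the second-order representation concrete, here is a minimal sketch of how one covariance C_i could be estimated from a recording, assuming a NumPy array X of shape (P, T) and a SciPy band-pass filter; the function name and filter order are illustrative, not from the slides.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_covariance(X, sfreq, fmin, fmax, order=4):
    """Empirical covariance C = X X^T / T of a band-pass filtered recording.

    X : array of shape (P, T) with P channels and T time samples.
    """
    b, a = butter(order, [fmin, fmax], btype="bandpass", fs=sfreq)
    Xf = filtfilt(b, a, X, axis=-1)   # zero-phase band-pass filtering
    T = Xf.shape[-1]
    return Xf @ Xf.T / T              # (P, P) symmetric positive matrix
```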
Model: generative model of target variable
We want to predict a continuous variable:
y_i = Σ_{j=1}^{Q} α_j f(p_{i,j}) ∈ R, with p_{i,j} = E_t[s_{i,j}(t)²] the band power of source j
- linear model: y_i = Σ_{j=1}^{Q} α_j p_{i,j}
  → Euclidean vectorization leads to a consistent model: y_i = Σ_{k≤l} Θ_{k,l} C_i(k,l), i.e. y_i is linear in the coefficients of Upper(C_i)
- log-linear model: y_i = Σ_{j=1}^{Q} α_j log(p_{i,j})
  → the C_i live on a Riemannian manifold, so they cannot be naively vectorized
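The generative model and the log-linear target are easy to simulate as a sanity check; a sketch under illustrative sizes and constants (none of these numbers come from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, N, T = 5, 3, 100, 1000
A = rng.standard_normal((P, Q))    # mixing matrix, shared across subjects
alpha = rng.standard_normal(Q)     # target coefficients alpha_j

covs, y = [], []
for _ in range(N):
    p = rng.uniform(0.1, 10.0, size=Q)                      # band powers p_{i,j}
    s = np.sqrt(p)[:, None] * rng.standard_normal((Q, T))   # sources with E[s_j^2] = p_j
    x = A @ s + 0.01 * rng.standard_normal((P, T))          # observed signal x_i(t)
    covs.append(x @ x.T / T)                                # covariance C_i
    y.append(alpha @ np.log(p))                             # log-linear target y_i
covs, y = np.array(covs), np.array(y)
```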
Riemannian matrix manifolds (in a nutshell)
[Figure: a manifold M with its tangent space T_M M at a point M; Exp_M maps a tangent vector ξ to a point M' on the manifold, and Log_M maps M' back to ξ]
Vectorization operator:
P_M(M') = φ_M(Log_M(M')) ≃ Upper(Log_M(M'))
d(M, M') = ‖P_M(M')‖_2 + o(‖P_M(M')‖_2)
d(M_i, M_j) ≃ ‖P_M(M_i) − P_M(M_j)‖_2
The vectorization operator is key for machine learning.
[Absil et al. Optimization Algorithms on Matrix Manifolds. 2009]
Regression on matrix manifolds
Given a training set of samples M_1, …, M_N ∈ M and continuous target variables y_1, …, y_N ∈ R:
- compute the mean of the samples: M̄ = Mean_d(M_1, …, M_N)
- compute the vectorization of the samples w.r.t. this mean: v_1, …, v_N ∈ R^K, with v_i = P_M̄(M_i)
- use those vectors as features in a regularized linear regression algorithm (e.g. ridge regression) with parameters β ∈ R^K, assuming that y_i ≃ v_i^⊤ β
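In practice this recipe fits in a few lines with off-the-shelf tools; a sketch assuming the pyriemann and scikit-learn packages (the slides do not name an implementation), with covs and y as in the earlier simulation:

```python
import numpy as np
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# covs: (N, P, P) SPD matrices, y: (N,) continuous targets
model = make_pipeline(
    TangentSpace(metric="riemann"),          # mean + tangent-space vectorization
    RidgeCV(alphas=np.logspace(-3, 3, 13)),  # regularized linear regression
)
scores = cross_val_score(model, covs, y, cv=10,
                         scoring="neg_mean_absolute_error")
```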
Distance and invariance on positive matrix manifolds
Manifold of positive definite matrices: M_i = C_i ∈ S_P^{++}
Geometric distance:
d_G(S, S') = ‖log(S^{-1} S')‖_F = [Σ_{k=1}^{P} log²(λ_k)]^{1/2}
where λ_k, k = 1 … P, are the real eigenvalues of S^{-1} S'.
Tangent space projection:
P_S(S') = Upper(log(S^{-1/2} S' S^{-1/2}))
Affine invariance property: for W invertible, d_G(W^⊤ S W, W^⊤ S' W) = d_G(S, S')
Affine invariance is key: working with the C_i is then equivalent to working with the covariance matrices of the sources s_i.
[Bhatia. Positive Definite Matrices. Princeton University Press, 2007]
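A direct NumPy/SciPy translation of d_G uses the generalized eigenvalues of (S', S), which are exactly the eigenvalues of S^{-1} S'; the sketch below also checks affine invariance numerically on random test matrices (the test setup is illustrative, not from the slides):

```python
import numpy as np
from scipy.linalg import eigvalsh

def dist_geometric(S, Sp):
    """d_G(S, S') = sqrt(sum_k log^2(lambda_k)), with lambda_k the
    eigenvalues of S^{-1} S'."""
    lam = eigvalsh(Sp, S)           # generalized problem S' v = lambda S v
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(0)
P = 4

def random_spd():
    B = rng.standard_normal((P, P))
    return B @ B.T + P * np.eye(P)  # well-conditioned SPD matrix

S, Sp = random_spd(), random_spd()
W = rng.standard_normal((P, P))     # invertible with probability 1
assert np.isclose(dist_geometric(W.T @ S @ W, W.T @ Sp @ W),
                  dist_geometric(S, Sp))
```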
Consistency of linear regression in tangent space of S_P^{++}
Geometric vectorization (theorem):
Assume y_i = Σ_{j=1}^{Q} α_j log(p_{i,j}). Denote C̄ = Mean_G(C_1, …, C_N) and v_i = P_C̄(C_i). Then the relationship between y_i and v_i is linear.
We generate i.i.d. samples following the log-linear generative model, with A = exp(μB) and B ∈ R^{P×P} random.
[Figure: normalized MAE (0 to 1) as a function of μ (0 to 3) for the log-diag, Wasserstein, and geometric vectorizations, with the chance level marked; only the geometric vectorization keeps a low error as μ grows]
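Mean_G above is the geometric (Karcher) mean, which has no closed form for more than two matrices; below is a simple fixed-point sketch (not the paper's solver, and without convergence safeguards): average the matrices in the tangent space at the current estimate, then map back to the manifold.

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power, logm

def mean_geometric(covs, n_iter=20):
    """Fixed-point iteration for the geometric mean of SPD matrices."""
    M = np.mean(covs, axis=0)       # initialize at the arithmetic mean
    for _ in range(n_iter):
        M_isq = fractional_matrix_power(M, -0.5)
        M_sq = fractional_matrix_power(M, 0.5)
        # average of the tangent-space images of the covariances at M
        T = np.mean([logm(M_isq @ C @ M_isq) for C in covs], axis=0)
        M = M_sq @ expm(T) @ M_sq   # map the average back to the manifold
    return M.real                   # discard round-off imaginary parts
```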
And in the real world?
We investigated 3 violations of the previous idealized model:
- Noise in the target variable:
  y_i = Σ_j α_j log(p_{i,j}) + ε_i, with ε_i ∼ N(0, σ²)
- Subject-dependent mixing matrix:
  A_i = A + E_i, with the entries of E_i ∼ N(0, σ²)
[Figure, left panel: normalized MAE vs. σ (0.01 to 10) for the log-diag, Wasserstein, and geometric vectorizations under target noise. Right panel: normalized MAE vs. σ (0.01 to 0.3) for log-diag, sup. log-diag, Wasserstein, geometric, and sup. geometric under subject-dependent mixing; chance level marked in both panels]
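On top of the earlier simulation snippet, these two violations are one-liners each; a sketch reusing rng, A, y, N, P, Q from that snippet, with an illustrative noise level sigma:

```python
sigma = 0.1  # noise level, swept over the ranges shown in the figures

# violation 1: additive Gaussian noise on the target variable
y_noisy = y + rng.normal(0.0, sigma, size=N)

# violation 2: a subject-dependent mixing matrix A_i = A + E_i
A_i = A + rng.normal(0.0, sigma, size=(P, Q))
```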
And in the real world?
Rank-deficient signals (e.g. due to the cleaning process): C_i ∈ S_{P,R}^+
We can manipulate them in their native manifold S_{P,R}^+:
Wasserstein distance:
d_W(S, S') = [Tr(S) + Tr(S') − 2 Tr((S^{1/2} S' S^{1/2})^{1/2})]^{1/2}
Tangent space projection:
P_{YY^⊤}(Y'Y'^⊤) = vect(Y'Q* − Y) ∈ R^{PR}, where UΣV^⊤ = Y^⊤Y' and Q* = VU^⊤
Orthogonal invariance property: for W orthogonal, d_W(W^⊤ S W, W^⊤ S' W) = d_W(S, S')
Wasserstein vectorization (theorem):
Assume y_i = Σ_{j=1}^{Q} α_j √p_{i,j} and A orthogonal. Denote C̄ = Mean_W(C_1, …, C_N) and v_i = P_C̄(C_i). Then the relationship between y_i and v_i is linear.
[Massart, Absil. Quotient geometry with simple geodesics for the manifold of fixed-rank positive-semidefinite matrices. Technical report, UCLouvain, 2018]
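The Wasserstein distance is also a few lines of SciPy and, unlike the geometric distance, remains well defined for rank-deficient PSD matrices; a sketch with a numerical check of orthogonal invariance on rank-R test matrices (the test setup is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def dist_wasserstein(S, Sp):
    """d_W(S, S') = [Tr(S) + Tr(S') - 2 Tr((S^{1/2} S' S^{1/2})^{1/2})]^{1/2}."""
    S_half = sqrtm(S)
    cross = sqrtm(S_half @ Sp @ S_half)
    d2 = np.trace(S) + np.trace(Sp) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2.real, 0.0))  # clip round-off from sqrtm

rng = np.random.default_rng(0)
P, R = 5, 3
Y = rng.standard_normal((P, R))
Yp = rng.standard_normal((P, R))
S, Sp = Y @ Y.T, Yp @ Yp.T                         # rank R < P
W, _ = np.linalg.qr(rng.standard_normal((P, P)))   # random orthogonal matrix
assert np.isclose(dist_wasserstein(W.T @ S @ W, W.T @ Sp @ W),
                  dist_wasserstein(S, Sp), atol=1e-8)
```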
Experiment: predict age from MEG data
- Task-free MEG recordings from the Cam-CAN dataset
- Age is a dominant driver of cross-person variance in neuroscience data
- Dimensions: N = 595, P = 102, T ≃ 520,000, 65 ≤ R_i ≤ 73
- Signals are filtered into 9 frequency bands
[Figure: mean absolute error in years (about 6 to 11) for the log-diag, Wasserstein, geometric, and MNE methods, each combined with biophysics-driven, unsupervised, supervised, or identity spatial filters]
Conclusion
- Proposed a Riemannian method for regression from M/EEG data
- With theoretical guarantees and empirical robustness
- Working to translate the proposed method into the clinic
[Sabbagh, Ablin, Varoquaux, Gramfort, Engemann (2019). Manifold-regression to predict from MEG/EEG brain signals without source modeling. Proc. NeurIPS 2019]
https://arxiv.org/abs/1906.02687