
Page 1

OLS Coefficient Analysis

DS-GA 1013 / MATH-GA 2824 Mathematical Tools for Data Science
https://cims.nyu.edu/~cfgranda/pages/MTDS_spring20/index.html

Carlos Fernandez-Granda

Page 2

Prerequisites

Linear algebra (orthogonal matrices, singular-value decomposition)

Ordinary least squares

Page 3

Goal

Analyze coefficients of OLS estimator

Page 4

Singular-value decomposition

Every $A \in \mathbb{R}^{m \times k}$, $m \geq k$, has a singular-value decomposition (SVD)

$$A = \begin{bmatrix} u_1 & u_2 & \cdots & u_k \end{bmatrix}
\begin{bmatrix} s_1 & 0 & \cdots & 0 \\ 0 & s_2 & \cdots & 0 \\ & & \ddots & \\ 0 & 0 & \cdots & s_k \end{bmatrix}
\begin{bmatrix} v_1 & v_2 & \cdots & v_k \end{bmatrix}^T = U S V^T$$

The singular values $s_1 \geq s_2 \geq \cdots \geq s_k$ are nonnegative

The left singular vectors $u_1, u_2, \ldots, u_k \in \mathbb{R}^m$ are orthonormal

The right singular vectors $v_1, v_2, \ldots, v_k \in \mathbb{R}^k$ are orthonormal
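A minimal numpy sketch (the sizes and the random matrix are arbitrary choices for illustration) that computes a thin SVD and checks the properties stated above:

```python
import numpy as np

# Illustrative sizes (any m >= k works)
m, k = 6, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, k))

# Thin SVD: U is m x k, s holds s_1 >= ... >= s_k >= 0, Vt is k x k
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruction A = U S V^T
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Singular values are nonnegative and sorted in decreasing order
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# Left and right singular vectors are orthonormal
assert np.allclose(U.T @ U, np.eye(k))
assert np.allclose(Vt @ Vt.T, np.eye(k))
```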

Page 5

Linear maps

The SVD decomposes the action of a matrix $A \in \mathbb{R}^{m \times k}$ on a vector $w \in \mathbb{R}^k$ into:

1. Rotation

$$V^T w = \sum_{i=1}^{k} \langle v_i, w \rangle e_i$$

2. Scaling

$$S V^T w = \sum_{i=1}^{k} s_i \langle v_i, w \rangle e_i$$

3. Rotation

$$U S V^T w = \sum_{i=1}^{k} s_i \langle v_i, w \rangle u_i$$
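A short numpy sketch (again with arbitrary sizes) verifying that applying the three steps in sequence reproduces $Aw$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 5, 3
A = rng.standard_normal((m, k))
w = rng.standard_normal(k)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# 1. Rotation: coordinates of w in the right-singular-vector basis
step1 = Vt @ w                      # entries <v_i, w>
# 2. Scaling: each coordinate is multiplied by the corresponding singular value
step2 = np.diag(s) @ step1          # entries s_i <v_i, w>
# 3. Rotation: scaled coordinates are mapped onto the left singular vectors
step3 = U @ step2                   # sum_i s_i <v_i, w> u_i

assert np.allclose(step3, A @ w)
```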

Page 6

Linear maps

[Figure: two vectors $\vec{x}$ and $\vec{y}$, shown together with $\vec{v}_1, \vec{v}_2$, are mapped step by step: $V^T$ rotates them onto the $\vec{e}_1, \vec{e}_2$ basis, $S$ scales the coordinates by $s_1$ and $s_2$, and $U$ rotates the result onto $s_1\vec{u}_1, s_2\vec{u}_2$.]

Page 7

Linear maps ($s_2 := 0$)

[Figure: the same mapping with $s_2$ set to zero: after the scaling step the $\vec{e}_2$ components of $V^T\vec{x}$ and $V^T\vec{y}$ collapse to $\vec{0}$, so $USV^T\vec{x}$ and $USV^T\vec{y}$ lie along $s_1\vec{u}_1$.]

Page 8

Regression

Goal: Estimate response (or dependent variable)

Data: Several observed variables, known as features (or covariates, or independent variables)

Page 9

Data

Training data: $(y_1, x_1), (y_2, x_2), \ldots, (y_n, x_n)$, where $y_i \in \mathbb{R}$ and $x_i \in \mathbb{R}^p$

We define a response vector $y \in \mathbb{R}^n$ and a feature matrix

$$X := \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}$$
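A small sketch of this convention in numpy (synthetic data, arbitrary sizes), with one column of $X$ per training example:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 2                       # illustrative sizes

x = rng.standard_normal((n, p))    # n feature vectors x_i in R^p
y = rng.standard_normal(n)         # n responses y_i in R

# Feature matrix with one column per training example, as in the slides
X = x.T
assert X.shape == (p, n)
```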

Page 10

OLS estimator

$$\begin{aligned}
\beta_{\text{OLS}} &= \left(XX^T\right)^{-1} X y \\
&= \left(U S^2 U^T\right)^{-1} U S V^T y \\
&= U S^{-2} U^T U S V^T y \\
&= U S^{-1} V^T y
\end{aligned}$$

$\left(XX^T\right)^{-1} X$ is a left inverse or pseudoinverse of $X^T$ because $\left(XX^T\right)^{-1} X X^T = I$
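A minimal numpy sketch (synthetic $X$ and $y$, assuming $X$ has full row rank) checking that the normal-equations form and the SVD form of $\beta_{\text{OLS}}$ agree:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 2, 50
X = rng.standard_normal((p, n))          # feature matrix, one column per example
y = rng.standard_normal(n)

# Normal-equations form: (X X^T)^{-1} X y
beta_normal = np.linalg.solve(X @ X.T, X @ y)

# SVD form: U S^{-1} V^T y
U, s, Vt = np.linalg.svd(X, full_matrices=False)
beta_svd = U @ ((Vt @ y) / s)

assert np.allclose(beta_normal, beta_svd)
```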

Page 11

Additive model

Model for MSE analysis

$$y = x^T \beta_{\text{true}} + z$$

Problem: Ignores that estimate is obtained from training data

Page 12

Model for training data

$$y_{\text{train}} := X^T \beta_{\text{true}} + z_{\text{train}}$$

- Feature matrix $X \in \mathbb{R}^{p \times n}$ is deterministic

- Coefficients $\beta_{\text{true}} \in \mathbb{R}^p$ are deterministic

- Noise $z_{\text{train}}$ is an $n$-dimensional iid Gaussian vector with zero mean and variance $\sigma^2$
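A minimal simulation of this model in numpy (the sizes, noise level, and $\beta_{\text{true}}$ below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 2, 100
sigma = 0.5

X = rng.standard_normal((p, n))           # fixed (deterministic) feature matrix
beta_true = np.array([1.0, -2.0])         # fixed (deterministic) true coefficients

z_train = sigma * rng.standard_normal(n)  # iid Gaussian noise, mean 0, variance sigma^2
y_train = X.T @ beta_true + z_train       # training responses
```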

Page 13

Maximum likelihood

Under this model, OLS is equivalent to maximum likelihood

Assume we observe $y_{\text{train}}$

$$\mathcal{L}_{y_{\text{train}}}(\beta) = \frac{1}{\sqrt{(2\pi\sigma^2)^n}} \exp\left( -\frac{1}{2\sigma^2} \left\| y_{\text{train}} - X^T \beta \right\|_2^2 \right)$$

$$\begin{aligned}
\beta_{\text{ML}} &= \arg\max_\beta \, \mathcal{L}_{y_{\text{train}}}(\beta) \\
&= \arg\max_\beta \, \log \mathcal{L}_{y_{\text{train}}}(\beta) \\
&= \arg\min_\beta \, \left\| y_{\text{train}} - X^T \beta \right\|_2^2
\end{aligned}$$
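A small numerical sketch of this equivalence (synthetic data; scipy.optimize.minimize is used here only as a generic optimizer): minimizing the negative log-likelihood returns the same coefficients as the closed-form OLS solution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
p, n, sigma = 2, 50, 0.5
X = rng.standard_normal((p, n))
beta_true = np.array([1.0, -2.0])
y_train = X.T @ beta_true + sigma * rng.standard_normal(n)

def neg_log_likelihood(beta):
    residual = y_train - X.T @ beta
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + residual @ residual / (2 * sigma**2)

beta_ml = minimize(neg_log_likelihood, x0=np.zeros(p)).x      # numerical ML estimate
beta_ols = np.linalg.solve(X @ X.T, X @ y_train)              # closed-form OLS

assert np.allclose(beta_ml, beta_ols, atol=1e-4)
```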

Page 14

Decomposition of OLS cost function

$$\begin{aligned}
\arg\min_\beta \left\| y_{\text{train}} - X^T \beta \right\|_2^2
&= \arg\min_\beta \left\| z_{\text{train}} - X^T (\beta - \beta_{\text{true}}) \right\|_2^2 \\
&= \arg\min_\beta \, (\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}}) - 2 z_{\text{train}}^T X^T (\beta - \beta_{\text{true}}) + z_{\text{train}}^T z_{\text{train}} \\
&= \arg\min_\beta \, (\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}}) - 2 z_{\text{train}}^T X^T \beta
\end{aligned}$$

Page 15

Quadratic component $(\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}})$

Contour lines are ellipsoids centered at $\beta_{\text{true}}$

$$c^2 = (\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}}) = (\beta - \beta_{\text{true}})^T U S^2 U^T (\beta - \beta_{\text{true}}) = \sum_{i=1}^{p} s_i^2 \left( u_i^T (\beta - \beta_{\text{true}}) \right)^2$$

Axes of the ellipsoid are the left singular vectors

The stretch along each axis is inversely proportional to the corresponding singular value: the level set at value $c^2$ has semi-axis $c\, s_i^{-1} u_i$
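A quick numpy check of the semi-axis claim (synthetic $X$ and $\beta_{\text{true}}$ chosen arbitrarily): points of the form $\beta_{\text{true}} + c\, s_i^{-1} u_i$ lie on the level set at value $c^2$.

```python
import numpy as np

rng = np.random.default_rng(8)
p, n = 2, 50
X = rng.standard_normal((p, n))
beta_true = np.array([1.0, -2.0])
U, s, _ = np.linalg.svd(X, full_matrices=False)

c = 1.0
quad = lambda beta: (beta - beta_true) @ (X @ X.T) @ (beta - beta_true)

# The point beta_true + (c / s_i) u_i lies on the level set {quad = c^2}
for i in range(p):
    beta_on_axis = beta_true + (c / s[i]) * U[:, i]
    assert np.isclose(quad(beta_on_axis), c**2)
```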

Page 16

[Figure: contour lines of $(\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}})$ in the $(\beta[1], \beta[2])$ plane; the contours (levels 0.10 to 7.00) are ellipses centered at $\beta_{\text{true}}$.]

Page 17

[Figure: the same contour plot with the principal axes marked; the semi-axes are $c\, s_1^{-1} u_1$ and $c\, s_2^{-1} u_2$, centered at $\beta_{\text{true}}$.]

Page 18

[Figure: contour lines of the linear term $-2 z_{\text{train}}^T X^T \beta$ in the $(\beta[1], \beta[2])$ plane for one noise realization (levels −0.20 to 0.40).]

Page 19

[Figure: contour lines of the full cost $(\beta - \beta_{\text{true}})^T X X^T (\beta - \beta_{\text{true}}) - 2 z_{\text{train}}^T X^T \beta$ for this realization; the minimizer $\beta_{\text{OLS}}$ is displaced from $\beta_{\text{true}}$ (levels 0.25 to 7.00).]

Page 20

[Figure: contour lines of $-2 z_{\text{train}}^T X^T \beta$ for a second noise realization (levels −0.60 to 1.40).]

Page 21

[Figure: contour lines of the full cost for the second realization; again $\beta_{\text{OLS}}$ differs from $\beta_{\text{true}}$ (levels 0.50 to 10.00).]

Page 22

[Figure: contour lines of $-2 z_{\text{train}}^T X^T \beta$ for a third noise realization (levels −1.60 to 0.80).]

Page 23

[Figure: contour lines of the full cost for the third realization, with $\beta_{\text{OLS}}$ and $\beta_{\text{true}}$ marked (levels 0.25 to 7.00).]

Page 24

Minima for 200 realizations

[Figure: scatter plot of the minima $\beta_{\text{OLS}}$ for 200 noise realizations in the $(\beta[1], \beta[2])$ plane, clustered around $\beta_{\text{true}}$.]

Page 25

Minima

$$\begin{aligned}
\beta_{\text{OLS}} &= \left(XX^T\right)^{-1} X y_{\text{train}} \\
&= \left(XX^T\right)^{-1} X X^T \beta_{\text{true}} + \left(XX^T\right)^{-1} X z_{\text{train}} \\
&= \beta_{\text{true}} + \left(XX^T\right)^{-1} X z_{\text{train}} \\
&= \beta_{\text{true}} + U S^{-1} V^T z_{\text{train}}
\end{aligned}$$

Distribution?

Page 26

Linear transformations of Gaussian random vectors

Let $x$ be a Gaussian random vector of dimension $d$ with mean $\mu$ and covariance matrix $\Sigma$

For any matrix $A \in \mathbb{R}^{m \times d}$ and $b \in \mathbb{R}^m$, $y = Ax + b$ is Gaussian with mean $A\mu + b$ and covariance matrix $A \Sigma A^T$ (as long as $A \Sigma A^T$ is full rank)
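A minimal Monte Carlo sketch of this property (the dimensions, $\mu$, $\Sigma$, $A$, and $b$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
d, m = 3, 2
mu = np.array([1.0, 0.0, -1.0])
L = rng.standard_normal((d, d))
Sigma = L @ L.T                              # a valid covariance matrix

A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

x = rng.multivariate_normal(mu, Sigma, size=200000)
y = x @ A.T + b

print(y.mean(axis=0), A @ mu + b)            # empirical mean vs A mu + b
print(np.cov(y, rowvar=False))               # empirical covariance ...
print(A @ Sigma @ A.T)                       # ... vs A Sigma A^T
```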

Page 27

Minima

$$\begin{aligned}
\beta_{\text{OLS}} &= \left(XX^T\right)^{-1} X y_{\text{train}} \\
&= \left(XX^T\right)^{-1} X X^T \beta_{\text{true}} + \left(XX^T\right)^{-1} X z_{\text{train}} \\
&= \beta_{\text{true}} + \left(XX^T\right)^{-1} X z_{\text{train}} \\
&= \beta_{\text{true}} + U S^{-1} V^T z_{\text{train}}
\end{aligned}$$

Distribution? Gaussian with mean $\beta_{\text{true}}$ and covariance matrix $\sigma^2 U S^{-2} U^T$
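A minimal Monte Carlo sketch (synthetic $X$, arbitrary sizes and noise level) comparing the empirical mean and covariance of $\beta_{\text{OLS}}$ with $\beta_{\text{true}}$ and $\sigma^2 U S^{-2} U^T$:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, sigma = 2, 50, 0.5
X = rng.standard_normal((p, n))
beta_true = np.array([1.0, -2.0])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Draw many training sets and compute the OLS coefficients for each
trials = 20000
betas = np.empty((trials, p))
for t in range(trials):
    z = sigma * rng.standard_normal(n)
    y_train = X.T @ beta_true + z
    betas[t] = np.linalg.solve(X @ X.T, X @ y_train)

empirical_mean = betas.mean(axis=0)                  # should be close to beta_true
empirical_cov = np.cov(betas, rowvar=False)          # should be close to sigma^2 U S^{-2} U^T
theoretical_cov = sigma**2 * U @ np.diag(s**-2) @ U.T

print(empirical_mean, beta_true)
print(empirical_cov)
print(theoretical_cov)
```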

Page 28

Minima

[Figure: density of $\beta_{\text{OLS}}$ in the $(\beta[1], \beta[2])$ plane, with contour levels from $10^{-15}$ to $10^{-1}$; the principal axes $c\, s_1^{-1} u_1$ and $c\, s_2^{-1} u_2$ and the mean $\beta_{\text{true}}$ are marked.]

Page 29

Conclusion

The coefficient error of the OLS estimator depends on the singular vectors and singular values of the feature matrix

If the singular values are small, the error explodes!
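A minimal sketch of this effect (synthetic, nearly collinear feature rows so that one singular value is tiny): the coefficient error blows up even though the noise is small.

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma = 100, 0.1
beta_true = np.array([1.0, -2.0])

# Two nearly collinear feature rows, so the second singular value of X is tiny
x1 = rng.standard_normal(n)
x2 = x1 + 1e-3 * rng.standard_normal(n)
X = np.vstack([x1, x2])                      # shape (2, n)

print("singular values:", np.linalg.svd(X, compute_uv=False))

errors = []
for _ in range(1000):
    y_train = X.T @ beta_true + sigma * rng.standard_normal(n)
    beta_ols = np.linalg.solve(X @ X.T, X @ y_train)
    errors.append(np.linalg.norm(beta_ols - beta_true))

print("mean coefficient error:", np.mean(errors))   # large, driven by the small singular value
```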