Linear Algebra Primer - Stanford Computer Vision Lab


Page 1: Linear Algebra Primer - Stanford Computer Vision Lab (vision.stanford.edu/teaching/cs131_fall1516/lectures/cs131_linalg_review.pdf)

Linear Algebra Review - Fei-Fei Li

Linear Algebra Primer

Dr. Juan Carlos Niebles, Stanford AI Lab

Prof. Fei-Fei Li

Stanford Vision Lab

24-Sep-15

Another, very in-depth linear algebra review from CS229 is available here: http://cs229.stanford.edu/section/cs229-linalg.pdf And a video discussion of linear algebra from EE263 is here (lectures 3 and 4): https://see.stanford.edu/Course/EE263

Page 2

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

Page 3

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightness, etc. We'll define some common uses and standard operations on them.

Page 4

Vector

• A column vector $v \in \mathbb{R}^{n \times 1}$, where $v = [v_1, v_2, \ldots, v_n]^T$

• A row vector $v^T \in \mathbb{R}^{1 \times n}$, where $v^T = [v_1, v_2, \ldots, v_n]$

$T$ denotes the transpose operation

Page 5

Vector

• We'll default to column vectors in this class

• You'll want to keep track of the orientation of your vectors when programming in MATLAB

• You can transpose a vector V in MATLAB by writing V'. (But in class materials, we will always use Vᵀ to indicate transpose, and we will use V' to mean "V prime")

Page 6

Vectors have two main uses

• Vectors can represent an offset in 2D or 3D space

• Points are just vectors from the origin

• Data (pixels, gradients at an image keypoint, etc.) can also be treated as a vector

• Such vectors don't have a geometric interpretation, but calculations like "distance" can still have value

Page 7

Matrix

• A matrix $A \in \mathbb{R}^{m \times n}$ is an array of numbers with size $m$ by $n$, i.e. m rows and n columns.

• If $m = n$, we say that $A$ is square.

Page 8

Images

• MATLAB represents an image as a matrix of pixel brightnesses

• Note that matrix coordinates are NOT Cartesian coordinates. The upper left corner is [y,x] = (1,1)

Page 9

Color Images

• Grayscale images have one number per pixel, and are stored as an m × n matrix.

• Color images have 3 numbers per pixel: red, green, and blue brightnesses (RGB)

• Stored as an m × n × 3 matrix

Page 10

Basic Matrix Operations

• We will discuss:
  – Addition
  – Scaling
  – Dot product
  – Multiplication
  – Transpose
  – Inverse / pseudoinverse
  – Determinant / trace

Page 11

Matrix Operations

• Addition
  – Can only add a matrix with matching dimensions, or a scalar.

• Scaling

Page 12

Matrix Operations

• Inner product (dot product) of vectors
  – Multiply corresponding entries of two vectors and add up the result
  – x·y is also |x||y| cos(the angle between x and y)
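Both views of the dot product can be checked numerically. The course uses MATLAB; the sketch below is a NumPy equivalent with made-up vectors:

```python
import numpy as np

# Dot product as "multiply corresponding entries and add up"
x = np.array([3.0, 0.0])
y = np.array([1.0, 1.0])
dot = np.sum(x * y)  # same as np.dot(x, y)

# Dot product as |x||y| cos(angle between x and y)
cos_angle = dot / (np.linalg.norm(x) * np.linalg.norm(y))
angle = np.arccos(cos_angle)  # here y sits 45 degrees from x
```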

 

Page 13

Matrix Operations

• Inner product (dot product) of vectors
  – If B is a unit vector, then A·B gives the length of A which lies in the direction of B

 

Page 14

Matrix Operations

• Multiplication

• The product AB is given by $(AB)_{ij} = \sum_k A_{ik} B_{kj}$

• Each entry in the result is (that row of A) dot product with (that column of B)

• Many uses, which will be covered later

Page 15

Matrix Operations

• Multiplication example:
  – Each entry of the matrix product is made by taking the dot product of the corresponding row in the left matrix with the corresponding column in the right one.
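The row-by-column rule above can be spelled out directly. A NumPy sketch with made-up 2×2 matrices, computing each entry by hand and comparing against the built-in product:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Entry (i, j) of AB is (row i of A) dot (column j of B)
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        C[i, j] = np.dot(A[i, :], B[:, j])
```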

Page 16

Matrix Operations

• Powers
  – By convention, we can refer to the matrix product AA as A², and AAA as A³, etc.
  – Obviously only square matrices can be multiplied that way

Page 17

Matrix Operations

• Transpose: flip the matrix, so row 1 becomes column 1

• A useful identity: $(AB)^T = B^T A^T$ (note the reversed order)
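The transpose identity (with the order of the factors reversed) can be verified on random matrices. A NumPy sketch (the original slide's formula was an image; the identity shown here is the standard one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# (AB)^T = B^T A^T -- note the reversed order on the right-hand side
lhs = (A @ B).T
rhs = B.T @ A.T
```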

Page 18

Matrix Operations

• Determinant
  – $\det(A)$ returns a scalar
  – Represents the area (or volume) of the parallelogram described by the vectors in the rows of the matrix
  – For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, $\det(A) = ad - bc$
  – Properties:
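The 2×2 determinant formula and its area interpretation can be checked numerically. A NumPy sketch with a made-up matrix:

```python
import numpy as np

# det of a 2x2 matrix [[a, b], [c, d]] is ad - bc
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
det = np.linalg.det(A)  # 3*2 - 1*1 = 5

# |det| is the area of the parallelogram spanned by the rows (3,1) and (1,2)
area = abs(det)
```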

Page 19

Matrix Operations

• Trace
  – $\operatorname{tr}(A)$ is the sum of the diagonal elements of $A$
  – Invariant to a lot of transformations, so it's sometimes used in proofs. (Rarely in this class though.)
  – Properties:
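One example of the invariance mentioned above: the trace is unchanged by a similarity transform. A NumPy sketch with made-up matrices:

```python
import numpy as np

A = np.array([[1.0, 9.0],
              [4.0, 2.0]])
# Trace: sum of the diagonal elements
tr = np.trace(A)  # 1 + 2 = 3

# Trace is invariant under similarity transforms B = P^-1 A P
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(P) @ A @ P
```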

Page 20

Special Matrices

• Identity matrix I
  – Square matrix, 1's along diagonal, 0's elsewhere
  – I · [another matrix] = [that matrix]

• Diagonal matrix
  – Square matrix with numbers along diagonal, 0's elsewhere
  – A diagonal · [another matrix] scales the rows of that matrix
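Both facts (identity leaves a matrix unchanged; a diagonal on the left scales rows) are quick to check. A NumPy sketch with made-up values:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Identity: I @ M leaves M unchanged
I = np.eye(2)

# Diagonal on the left: D @ M scales the ROWS of M by the diagonal entries
D = np.diag([10.0, 0.5])
scaled = D @ M  # row 0 scaled by 10, row 1 scaled by 0.5
```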

Page 21

Special Matrices

• Symmetric matrix: $A^T = A$

• Skew-symmetric matrix: $A^T = -A$, e.g.

$$\begin{bmatrix} 0 & -2 & -5 \\ 2 & 0 & -7 \\ 5 & 7 & 0 \end{bmatrix}$$

Page 22

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

Matrix multiplication can be used to transform vectors. A matrix used in this way is called a transformation matrix.

Page 23

Transformation

• Matrices can be used to transform vectors in useful ways, through multiplication: x' = Ax

• Simplest is scaling:

$$\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} s_x x \\ s_y y \end{bmatrix}$$

(Verify to yourself that the matrix multiplication works out this way)

Page 24

Rotation

• How can you convert a vector represented in frame "0" to a new, rotated coordinate frame "1"?

• Remember what a vector is: [component in direction of the frame's x axis, component in direction of y axis]

Page 25

Rotation

• So to rotate it we must produce this vector: [component in direction of new x axis, component in direction of new y axis]

• We can do this easily with dot products!
• New x coordinate is [original vector] dot [the new x axis]
• New y coordinate is [original vector] dot [the new y axis]

Page 26

Rotation

• Insight: this is what happens in a matrix*vector multiplication
  – Result x coordinate is: [original vector] dot [matrix row 1]
  – So matrix multiplication can rotate a vector p:

Page 27

Rotation

• Suppose we express a point in the new coordinate system which is rotated left

• If we plot the result in the original coordinate system, we have rotated the point right
  – Thus, rotation matrices can be used to rotate vectors. We'll usually think of them in that sense: as operators to rotate vectors

Page 28

2D Rotation Matrix Formula

Counter-clockwise rotation by an angle θ:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$

$$P' = R\,P$$

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta$$

(Figure: point P at (x, y) rotated by θ to P' at (x', y').)
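The rotation formula above translates directly into code. A NumPy sketch (the course uses MATLAB; this is an equivalent illustration):

```python
import numpy as np

def rotation_2d(theta):
    """Counter-clockwise 2D rotation by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Rotating the point (1, 0) by 90 degrees CCW gives (0, 1)
p = np.array([1.0, 0.0])
p_rot = rotation_2d(np.pi / 2) @ p
```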

Page 29

Transformation Matrices

• Multiple transformation matrices can be used to transform a point: p' = R₂R₁S p

• The effect of this is to apply their transformations one after the other, from right to left.

• In the example above, the result is (R₂(R₁(S p)))

• The result is exactly the same if we multiply the matrices first, to form a single transformation matrix: p' = (R₂R₁S) p

Page 30

Homogeneous system

• In general, a matrix multiplication lets us linearly combine components of a vector
  – This is sufficient for scale, rotate, skew transformations.
  – But notice, we can't add a constant!

Page 31

Homogeneous system

– The (somewhat hacky) solution? Stick a "1" at the end of every vector:

– Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out, above)

– This is called "homogeneous coordinates"

Page 32

Homogeneous system

– In homogeneous coordinates, the multiplication works out so the rightmost column of the matrix is a vector that gets added.

– Generally, a homogeneous transformation matrix will have a bottom row of [0 0 1], so that the result has a "1" at the bottom too.

Page 33

Homogeneous system

• One more thing we might want: to divide the result by something
  – For example, we may want to divide by a coordinate, to make things scale down as they get farther away in a camera image
  – Matrix multiplication can't actually divide
  – So, by convention, in homogeneous coordinates, we'll divide the result by its last coordinate after doing a matrix multiplication
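The divide-by-last-coordinate convention can be illustrated with a small sketch. The matrix below is made up for illustration: its bottom row is [0, 0, 2] instead of [0, 0, 1], so the result's last coordinate comes out as 2 and the convention divides x and y by it:

```python
import numpy as np

# Made-up matrix whose bottom-right entry is 2, not 1
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

p = np.array([4.0, 6.0, 1.0])  # point (4, 6) in homogeneous coordinates
q = M @ p                      # [4, 6, 2]
q = q / q[-1]                  # divide by the last coordinate -> [2, 3, 1]
```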

Page 34

2D Translation

(Figure: point P translated by t to P'.)

Page 35

2D Translation using Homogeneous Coordinates

$$P = (x, y) \rightarrow (x, y, 1)$$
$$t = (t_x, t_y) \rightarrow (t_x, t_y, 1)$$

$$P' = \begin{bmatrix} x + t_x \\ y + t_y \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} I & t \\ 0 & 1 \end{bmatrix} P = T \cdot P$$
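The homogeneous translation matrix, with the offset in the rightmost column, can be sketched as follows (NumPy equivalent of the slide's construction, with made-up values):

```python
import numpy as np

def translation_matrix(tx, ty):
    # Homogeneous 2D translation: the rightmost column holds the offset
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

P = np.array([2.0, 3.0, 1.0])                 # point (2, 3) with a "1" appended
P_prime = translation_matrix(5.0, -1.0) @ P   # translate by (5, -1)
```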

Page 36

Scaling

(Figure: point P scaled to P'.)

Page 37

Scaling Equation

$$P = (x, y) \rightarrow (x, y, 1)$$
$$P' = (s_x x, s_y y) \rightarrow (s_x x, s_y y, 1)$$

$$P' = \begin{bmatrix} s_x x \\ s_y y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} S & 0 \\ 0 & 1 \end{bmatrix} P = S' \cdot P$$

Page 38

Scaling & Translating

P' = S·P
P'' = T·P' = T·(S·P) = T·S·P = A·P

(Figure: P scaled to P', then translated to P''.)

Page 39

Scaling & Translating

$$P'' = T \cdot S \cdot P = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x x + t_x \\ s_y y + t_y \\ 1 \end{bmatrix} = \begin{bmatrix} S & t \\ 0 & 1 \end{bmatrix} P = A \cdot P$$

Page 40

Translating & Scaling != Scaling & Translating

Translating, then scaling:

$$P''' = S \cdot T \cdot P = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & s_x t_x \\ 0 & s_y & s_y t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x x + s_x t_x \\ s_y y + s_y t_y \\ 1 \end{bmatrix}$$

Scaling, then translating:

$$P'' = T \cdot S \cdot P = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x x + t_x \\ s_y y + t_y \\ 1 \end{bmatrix}$$
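The non-commutativity of translation and scaling is easy to demonstrate numerically. A NumPy sketch with made-up scale (2, 3) and translation (1, 1):

```python
import numpy as np

S = np.diag([2.0, 3.0, 1.0])        # scale by (2, 3) in homogeneous coords
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])     # translate by (1, 1)

p = np.array([1.0, 1.0, 1.0])       # point (1, 1)

scale_then_translate = T @ S @ p    # (2*1 + 1, 3*1 + 1) = (3, 4)
translate_then_scale = S @ T @ p    # (2*(1+1), 3*(1+1)) = (4, 6)
```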

Page 41

Rotation

(Figure: point P rotated to P'.)

Page 42

Rotation Equations

Counter-clockwise rotation by an angle θ:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$

$$P' = R\,P$$

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta$$

(Figure: point P rotated by θ to P'.)

Page 43

Rotation Matrix Properties

• Transpose of a rotation matrix produces a rotation in the opposite direction

$$R \cdot R^T = R^T \cdot R = I, \qquad \det(R) = 1$$

• The rows of a rotation matrix are always mutually perpendicular (a.k.a. orthogonal) unit vectors
  – (and so are its columns)
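These properties can be verified for any angle. A NumPy sketch with an arbitrary made-up angle:

```python
import numpy as np

theta = 0.7  # arbitrary angle, in radians
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

# R R^T = R^T R = I, and det(R) = 1
should_be_identity = R @ R.T

# R^T rotates by -theta: rotating and then applying R^T returns the point
p = np.array([2.0, 5.0])
roundtrip = R.T @ (R @ p)
```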

Page 44

Properties

A 2D rotation matrix is 2x2:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$

Note: R belongs to the category of normal matrices and satisfies many interesting properties:

$$R \cdot R^T = R^T \cdot R = I, \qquad \det(R) = 1$$

Page 45

Scaling + Rotation + Translation

$$P' = (T\,R\,S)\,P$$

$$P' = T \cdot R \cdot S \cdot P = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

$$= \begin{bmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} S & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} R\,S & t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

This is the form of the general-purpose transformation matrix.

Page 46

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

The inverse of a transformation matrix reverses its effect.

Page 47

Inverse

• Given a matrix $A$, its inverse $A^{-1}$ is a matrix such that $AA^{-1} = A^{-1}A = I$

• Inverse does not always exist. If $A^{-1}$ exists, $A$ is invertible or non-singular. Otherwise, it's singular.

• Useful identities, for matrices that are invertible:
  $(A^{-1})^T = (A^T)^{-1}$, $\quad (AB)^{-1} = B^{-1}A^{-1}$

Page 48

Matrix Operations

• Pseudoinverse
  – Say you have the matrix equation AX = B, where A and B are known, and you want to solve for X
  – You could use MATLAB to calculate the inverse and premultiply by it: A⁻¹AX = A⁻¹B → X = A⁻¹B
  – The MATLAB command would be inv(A)*B
  – But calculating the inverse for large matrices often brings problems with computer floating-point resolution (because it involves working with very small and very large numbers together).
  – Or, your matrix might not even have an inverse.

Page 49

Matrix Operations

• Pseudoinverse
  – Fortunately, there are workarounds to solve AX = B in these situations. And MATLAB can do them!
  – Instead of taking an inverse, directly ask MATLAB to solve for X in AX = B, by typing A\B
  – MATLAB will try several appropriate numerical methods (including the pseudoinverse if the inverse doesn't exist)
  – MATLAB will return the value of X which solves the equation
    • If there is no exact solution, it will return the closest one
    • If there are many solutions, it will return the smallest one

Page 50

Matrix Operations

• MATLAB example:

>> x = A\B
x =
    1.0000
   -0.5000
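A NumPy analogue of MATLAB's backslash is `np.linalg.lstsq`, which also returns the closest solution when no exact one exists. The A and B below are made up (the slide's actual values were shown in a figure):

```python
import numpy as np

# Made-up overdetermined system: 3 equations, 2 unknowns
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
B = np.array([1.0, 2.0, 3.0])

# Least-squares solve, like MATLAB's A\B
X, residuals, rank, sing_vals = np.linalg.lstsq(A, B, rcond=None)
```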

Page 51

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

The rank of a transformation matrix tells you how many dimensions it transforms a vector to.

Page 52

Linear independence

• Suppose we have a set of vectors v₁, …, vₙ
• If we can express v₁ as a linear combination of the other vectors v₂…vₙ, then v₁ is linearly dependent on the other vectors.
  – The direction v₁ can be expressed as a combination of the directions v₂…vₙ. (E.g. v₁ = .7v₂ − .7v₄)

• If no vector is linearly dependent on the rest of the set, the set is linearly independent.
  – Common case: a set of vectors v₁, …, vₙ is always linearly independent if each vector is perpendicular to every other vector (and non-zero)

Page 53

Linear independence

(Figure: a set that is not linearly independent, and a linearly independent set.)

Page 54

Matrix rank

• Column/row rank
  – Column rank: the number of linearly independent column vectors of A
  – Row rank: the number of linearly independent row vectors of A
  – Column rank always equals row rank

• Matrix rank: the common value of the column rank and row rank

Page 55

Matrix rank

• For transformation matrices, the rank tells you the dimensions of the output

• E.g. if the rank of A is 1, then the transformation p' = Ap maps points onto a line.

• Here's a matrix with rank 1:

All points get mapped to the line y = 2x.
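The rank-1 behavior can be demonstrated with a small sketch. The matrix below is hypothetical (the slide's own example was shown as an image), chosen so that every output lands on the line y = 2x:

```python
import numpy as np

# Hypothetical rank-1 matrix: second row is twice the first
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
rank = np.linalg.matrix_rank(A)

# Apply p' = A p to several arbitrary points; each row of `outputs`
# is one transformed point, and all land on y = 2x
points = np.array([[3.0, -1.0], [0.5, 2.0], [7.0, 7.0]])
outputs = points @ A.T
```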

Page 56

Matrix rank

• If an m × m matrix is rank m, we say it's "full rank"
  – Maps an m × 1 vector uniquely to another m × 1 vector
  – An inverse matrix can be found

• If rank < m, we say it's "singular"
  – At least one dimension is getting collapsed. No way to look at the result and tell what the input was
  – Inverse does not exist
    • Inverse also doesn't exist for non-square matrices

Page 57

Outline

• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm

SVD is an algorithm that represents any matrix as the product of 3 matrices. It is used to discover interesting structure in a matrix.

Page 58

Singular Value Decomposition (SVD)

• There are several computer algorithms that can "factorize" a matrix, representing it as the product of some other matrices

• The most useful of these is the Singular Value Decomposition.

• Represents any matrix A as a product of three matrices: UΣVᵀ

• MATLAB command: [U,S,V] = svd(A)
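The NumPy equivalent of MATLAB's `svd` returns the singular values as a vector, so Σ has to be rebuilt explicitly to reconstruct A. A sketch with a random made-up matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# NumPy analogue of MATLAB's [U,S,V] = svd(A); Vt is V transposed,
# and s holds the singular values, sorted high to low
U, s, Vt = np.linalg.svd(A)

# Rebuild A as U (4x4) @ Sigma (4x3) @ Vt (3x3)
Sigma = np.zeros((4, 3))
np.fill_diagonal(Sigma, s)
A_rebuilt = U @ Sigma @ Vt
```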

Page 59

Singular Value Decomposition (SVD)

UΣVᵀ = A

• Where U and V are rotation matrices, and Σ is a scaling matrix. For example:

Page 60

Singular Value Decomposition (SVD)
• Beyond 2D:
  – In general, if A is m × n, then U will be m × m, Σ will be m × n, and Vᵀ will be n × n.
  – (Note the dimensions work out to produce m × n after multiplication)
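A quick shape check of this convention, sketched in NumPy (full_matrices=True matches the m × m / m × n / n × n sizes above; the matrix is an illustrative example):

```python
import numpy as np

m, n = 4, 3
A = np.arange(12, dtype=float).reshape(m, n)

# full_matrices=True gives the m x m, m x n, n x n sizes from the slide
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Embed the singular values in an m x n scaling matrix Sigma
S = np.zeros((m, n))
S[:n, :n] = np.diag(s)

assert U.shape == (m, m) and S.shape == (m, n) and Vt.shape == (n, n)
assert np.allclose(U @ S @ Vt, A)  # dimensions work out to m x n
```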


Singular Value Decomposition (SVD)
• U and V are always rotation matrices.
  – Geometric rotation may not be an applicable concept, depending on the matrix, so we call them "unitary" matrices: each column is a unit vector.
• Σ is a diagonal matrix
  – The number of nonzero entries = rank of A
  – The algorithm always sorts the entries high to low
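Both properties of Σ are easy to verify numerically; a small NumPy sketch (the matrix here is an illustrative example, not from the slides):

```python
import numpy as np

# A rank-2 3x3 matrix: the third row is the sum of the first two
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])

s = np.linalg.svd(A, compute_uv=False)

assert np.all(np.diff(s) <= 0)          # entries sorted high to low
num_nonzero = int(np.sum(s > 1e-10))    # count up to round-off
assert num_nonzero == np.linalg.matrix_rank(A) == 2
```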


SVD Applications
• We've discussed SVD in terms of geometric transformation matrices
• But the SVD of an image matrix can also be very useful
• To understand this, we'll look at a less geometric interpretation of what SVD is doing


SVD Applications
• Look at how the multiplication works out, left to right:
• Column 1 of U gets scaled by the first value from Σ.
• The resulting vector gets scaled by row 1 of Vᵀ to produce a contribution to the columns of A


SVD Applications
• Each product of (column i of U)·(value i from Σ)·(row i of Vᵀ) produces a component of the final A.
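This component-by-component view can be sketched directly in NumPy: summing the products σᵢ · (column i of U) · (row i of Vᵀ) rebuilds A (example matrix is illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Each component is sigma_i * (column i of U) * (row i of V^T)
components = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s))]

# Adding all the components rebuilds A exactly
assert np.allclose(sum(components), A)
```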


SVD Applications
• We're building A as a linear combination of the columns of U
• Using all columns of U, we'll rebuild the original matrix perfectly
• But, in real-world data, often we can just use the first few columns of U and we'll get something close (e.g. the first partial sum, above)
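A small NumPy sketch of this idea, using synthetic nearly-rank-1 data (the data and error threshold are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
# Nearly rank-1 data: one dominant pattern plus small noise
A = np.outer(rng.standard_normal(50), rng.standard_normal(40)) \
    + 0.01 * rng.standard_normal((50, 40))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the first column of U (the first component)
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

# The rank-1 approximation is already close to A
rel_err = np.linalg.norm(A - A1) / np.linalg.norm(A)
assert rel_err < 0.1
```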


SVD Applications
• We can call those first few columns of U the Principal Components of the data
• They show the major patterns that can be added to produce the columns of the original matrix
• The rows of Vᵀ show how the principal components are mixed to produce the columns of the matrix


SVD Applications

We can look at Σ to see that the first column has a large effect, while the second column has a much smaller effect in this example


SVD Applications
• For this image, using only the first 10 of 300 principal components produces a recognizable reconstruction
• So, SVD can be used for image compression
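The compression arithmetic can be sketched as follows. The 300 × 300 stand-in "image" here is random data, since the slide's actual image is not in the text; storing k components costs roughly k(m + n + 1) numbers instead of mn:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((300, 300))   # stand-in for the slide's image

U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 10

# Rank-k reconstruction from the first k components
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert approx.shape == img.shape

# Storage: k*(m + n + 1) numbers instead of m*n
compressed_size = k * (img.shape[0] + img.shape[1] + 1)
assert compressed_size == 6010 and img.size == 90000
```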


Principal Component Analysis
• Remember, columns of U are the Principal Components of the data: the major patterns that can be added to produce the columns of the original matrix
• One use of this is to construct a matrix where each column is a separate data sample
• Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns
• This is called Principal Component Analysis (or PCA) of the data samples
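A minimal PCA-via-SVD sketch along these lines, using synthetic data (an assumption of this review; note that standard PCA usually subtracts the mean of the samples first, a step the slides omit and this sketch omits too):

```python
import numpy as np

rng = np.random.default_rng(2)
# 5 samples as columns, each mostly a scaled copy of one shared pattern
pattern = rng.standard_normal(20)
weights = rng.standard_normal(5)
X = np.outer(pattern, weights) + 0.05 * rng.standard_normal((20, 5))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The first principal component (column 1 of U) recovers the pattern
pc1 = U[:, 0]
alignment = abs(pattern @ pc1) / np.linalg.norm(pattern)
assert alignment > 0.95
```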


Principal Component Analysis
• Often, raw data samples have a lot of redundancy and patterns
• PCA can allow you to represent data samples as weights on the principal components, rather than using the original raw form of the data
• By representing each sample as just those weights, you can represent just the "meat" of what's different between samples.
• This minimal representation makes machine learning and other algorithms much more efficient


Outline
• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm


Computers  can  compute  SVD  very  quickly.  We’ll  briefly  discuss  the  algorithm,  for  those  who  are  interested.  


Addendum: How is SVD computed?
• For this class: tell MATLAB to do it. Use the result.
• But, if you're interested, one computer algorithm to do it makes use of eigenvectors
  – The following material is presented to make SVD less of a "magical black box." But you will do fine in this class if you treat SVD as a magical black box, as long as you remember its properties from the previous slides.


Eigenvector definition
• Suppose we have a square matrix A. We can solve for a vector x and scalar λ such that Ax = λx
• In other words, find vectors where, if we transform them with A, the only effect is to scale them, with no change in direction.
• These vectors are called eigenvectors ("eigen" is German for "own" or "self"), and the scaling factors λ are called eigenvalues
• An m × m matrix will have at most m eigenvectors for which λ is nonzero
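The defining equation Ax = λx is easy to check numerically; a small sketch using an illustrative 2 × 2 matrix (not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 3 and 1

vals, vecs = np.linalg.eig(A)

# Check A x = lambda x for each eigenpair NumPy returns
for i in range(len(vals)):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])

assert np.allclose(sorted(vals.real), [1.0, 3.0])
```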


Finding eigenvectors
• Computers can find an x such that Ax = λx using this iterative algorithm (power iteration):
  – x = random unit vector
  – while (x hasn't converged):
    • x = Ax
    • normalize x
• x will quickly converge to an eigenvector (the one whose eigenvalue has the largest magnitude)
• Some simple modifications will let this algorithm find all eigenvectors
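A runnable version of the iteration above, sketched in NumPy; the example matrix and iteration count are illustrative choices:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Repeat x <- A x, normalizing each time, to find an eigenvector of A."""
    rng = np.random.default_rng(3)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 3 and 1
x = power_iteration(A)

# Rayleigh quotient gives the corresponding eigenvalue
lam = x @ A @ x
assert np.allclose(lam, 3.0)     # converged to the dominant eigenvalue
assert np.allclose(A @ x, lam * x)
```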


Finding SVD
• Eigenvectors are for square matrices, but SVD is for all matrices
• To do svd(A), computers can do this:
  – Take eigenvectors of AAᵀ (this matrix is always square).
    • These eigenvectors are the columns of U.
    • The square roots of the eigenvalues are the singular values (the entries of Σ).
  – Take eigenvectors of AᵀA (this matrix is always square).
    • These eigenvectors are the columns of V (or the rows of Vᵀ)
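A sketch verifying one piece of this recipe: the square roots of the eigenvalues of AᵀA match the singular values NumPy's svd returns (eigenvector signs are ambiguous, so this sketch checks only the singular values; the matrix is illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

# Eigenvalues of A^T A (a square matrix) are the squared singular values
evals = np.linalg.eigvalsh(A.T @ A)    # ascending order
sing_from_eig = np.sqrt(evals[::-1])   # descending, like svd's output

s = np.linalg.svd(A, compute_uv=False)
assert np.allclose(sing_from_eig, s)
```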


Finding SVD
• Moral of the story: SVD is fast, even for large matrices
• It's useful for a lot of stuff
• There are also other algorithms to compute the SVD, or just part of it
  – MATLAB's svd() command has options to efficiently compute only what you need, if performance becomes an issue


A detailed geometric explanation of SVD is here: http://www.ams.org/samplings/feature-column/fcarc-svd


What we have learned
• Vectors and matrices
  – Basic Matrix Operations
  – Special Matrices
• Transformation Matrices
  – Homogeneous coordinates
  – Translation
• Matrix inverse
• Matrix rank
• Singular Value Decomposition (SVD)
  – Use for image compression
  – Use for Principal Component Analysis (PCA)
  – Computer algorithm
