[Discussion] Taylor Expansion of Matrix Inverse


TRANSCRIPT

Matrix inverse equals power-series

Aug 23, 2010 - Mårten

Hi!

In an economics book about input-output analysis the following statement is presented, but I cannot find the proof:

$$(I - A)^{-1} = I + A + A^2 + A^3 + \dots + A^n$$

Can someone help me show why this is the case?

P.S. I think there are some assumptions made about $A$, such that all elements are less than 1 and greater than 0. Btw, $n$ goes to infinity.


Aug 23, 2010 - adriank

So you mean $(I - A)^{-1} = \sum_{n=0}^{\infty} A^n$. This is true if the right side converges, which is true if and only if all of the eigenvalues of $A$ have absolute value smaller than 1.

To prove it, multiply both sides by $I - A$.

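A quick numerical illustration of the statement above (a minimal NumPy sketch, not part of the original thread; the example matrix is arbitrary, chosen so that all of its eigenvalues have absolute value below 1):

```python
import numpy as np

# Arbitrary example matrix whose eigenvalues (0.1 and 0.5) have absolute value < 1.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
assert max(abs(np.linalg.eigvals(A))) < 1

I = np.eye(2)

# Partial sums I + A + A^2 + ... + A^n of the series.
S = np.zeros_like(A)
term = np.eye(2)
for _ in range(50):
    S += term
    term = term @ A

# The partial sums approach (I - A)^{-1}.
print(np.allclose(S, np.linalg.inv(I - A)))  # True
```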

Aug 23, 2010 - Mårten

Thanks a lot! Easier than I thought.

Now I found another assumption regarding $A$, and that is that the row sums are all less than 1, and the column sums are also all less than 1. Could that be the same as saying that all eigenvalues have to be less than 1 in absolute value, and in that case, why are these statements equivalent?

Aug 23, 2010 - adriank

What exactly do you mean by row sums and column sums?

Aug 23, 2010 - Mårten

If the matrix $A$ is

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},$$

then the row sums are $a_{11} + a_{12}$ and $a_{21} + a_{22}$. The column sums are $a_{11} + a_{21}$ and $a_{12} + a_{22}$.

Aug 23, 2010 - adriank

Well, then that's certainly not true. Maybe you meant absolute values, or something?

An equivalent condition to the eigenvalue condition is that $\|Ax\| < 1$ for all $x$ such that $\|x\| = 1$.

Last edited: Aug 23, 2010

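A small numerical check of the two conditions being discussed (a NumPy sketch, not part of the original thread; the entries are arbitrary non-negative values whose row sums and column sums are all below 1):

```python
import numpy as np

# Non-negative 2x2 matrix with all row sums and column sums below 1.
A = np.array([[0.5, 0.3],
              [0.2, 0.6]])

print(A.sum(axis=1))                   # row sums:    [0.8, 0.8]
print(A.sum(axis=0))                   # column sums: [0.7, 0.9]
print(max(abs(np.linalg.eigvals(A))))  # largest |eigenvalue| (spectral radius): 0.8 < 1
print(np.linalg.norm(A, 2))            # operator norm, i.e. max ||Ax|| over ||x|| = 1: about 0.81 < 1
```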

Aug 24, 2010 - Mårten

But if, at the same time, all $a_{ij} > 0$, it doesn't work then?

Anyhow, is there a way to set up criteria for $A$ making all its eigenvalues less than 1 in absolute value?

Aug 24, 2010 - Fredrik (Staff Emeritus, Science Advisor, Gold Member)

adriank said: ↑
    So you mean $(I - A)^{-1} = \sum_{n=0}^{\infty} A^n$. This is true if the right side converges, which is true if and only if all of the eigenvalues of A have absolute value smaller than 1. To prove it, multiply both sides by $I - A$.

Mårten said: ↑
    Thanks a lot! Easier than I thought. Now I found another assumption regarding A, and that is that the row sums are all less than 1, and the column sums are also all less than 1. Could that be the same as saying that all eigenvalues have to be less than 1 in absolute value, and in that case, why are these statements equivalent?

This isn't a valid proof. If you multiply both sides by $I - A$ and simplify the right-hand side, you're making assumptions about the infinite sum that you can't make at this point. And if you intend to consider the equality in post #1 with a finite number of terms in the sum, the two sides aren't actually equal, so you'd be starting with an equality that's false.

(Maybe you understand this already, but it didn't look that way to me, and you did say that you're an economy student.)

What you need to do is to calculate

$$\Bigl(\lim_{n\to\infty}(I + A + A^2 + \dots + A^n)\Bigr)(I - A).$$

Start by rewriting it as

$$\lim_{n\to\infty}\Bigl((I + A + A^2 + \dots + A^n)(I - A)\Bigr)$$

and then prove that this is $= I$. This result implies that $I - A$ is invertible, and that the inverse is that series. The information you were given about the components of $A$ is needed to see that the unwanted term goes to zero in that limit.

Last edited: Aug 24, 2010
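Fredrik's suggested limit is easy to watch numerically (a minimal NumPy sketch, not part of the original thread; the example matrix is arbitrary with spectral radius below 1). It also checks the finite identity $(I + A + \dots + A^n)(I - A) = I - A^{n+1}$, whose last term is the unwanted one that has to go to zero:

```python
import numpy as np

# Arbitrary example matrix with eigenvalues 0.1 and 0.5, so A^n -> 0.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
I = np.eye(2)

S = np.eye(2)      # partial sum I + A + ... + A^n, starting at n = 0
An1 = A.copy()     # A^(n+1)
for _ in range(30):
    # Finite identity behind the proof: (I + A + ... + A^n)(I - A) = I - A^(n+1)
    assert np.allclose(S @ (I - A), I - An1)
    S += An1
    An1 = An1 @ A

# The unwanted term A^(n+1) has shrunk to (numerically) zero, so the product is I.
print(np.round(S @ (I - A), 6))
```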

Aug 24, 2010 - Mårten

Thanks a lot! I really appreciate this help, I think I understand it pretty well now.

Adriank, it seems that my conditions given about $A$ satisfy your condition that $\|Ax\| < 1$ for all $x$ such that $\|x\| = 1$. I cannot formally prove that these two conditions are the same (or that my condition follows from Adriank's), but some easy calculations and checks I've made make it reasonable. (If someone has the energy to prove it, I wouldn't be late to look at it.)

My conditions about $A$ once more (sorry for not having given them at the same time before): row sums and column sums are all, one by one, less than 1, and at the same time $0 \le a_{ij} < 1$.

P.S. Actually, I didn't say I'm an economics student, I just said I read an economics book.

Btw, it occurred to me that I don't really understand why all eigenvalues of $A$ have to have absolute value less than 1 in order to make the series above converge. Why does that eigenvalue condition affect the convergence property of the series?

Another follow-up question: now that we have this nice power expansion for the inverse matrix, is that actually the way (some) matrix inverses are calculated in computers?

Aug 24, 2010 - adriank

Following on what Fredrik said, note that

$$\Bigl(\sum_{k=0}^{n} A^k\Bigr)(I - A) = I - A^{n+1}.$$

Now we want to take the limit as $n \to \infty$ to show that

$$\Bigl(\sum_{k=0}^{\infty} A^k\Bigr)(I - A) = I.$$

But for that to be true, you need that

$$\lim_{n\to\infty} A^{n+1} = 0.$$

This is true if and only if the operator norm of $A$ is less than 1. And it turns out that the operator norm of $A$ is the largest absolute value of the eigenvalues of $A$. If you have some other condition that implies this, then that works too.

As for computation, usually that's a very inefficient way to calculate it directly, unless $A$ is nilpotent (so that the series only has finitely many nonzero terms). However, if $A$ is $n \times n$, then you can express $A^n$ in terms of lower powers of $A$ by the Cayley-Hamilton theorem. You could also apply various numerical techniques to directly find the inverse of $I - A$; a lot of the time, all you care about is $(I - A)^{-1}x$ for some vector $x$, which can often be done even more efficiently than calculating the full matrix inverse.

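As a concrete illustration of that last point, a minimal NumPy sketch (not part of the original thread; the matrix is a random non-negative example scaled so its column sums are below 1) comparing the truncated series with simply solving the linear system for $(I - A)^{-1}x$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Random non-negative matrix, scaled so every column sum is 0.9 < 1.
A = rng.random((n, n))
A = 0.9 * A / A.sum(axis=0, keepdims=True)
x = rng.random(n)

# Usually preferred: solve (I - A) y = x rather than forming the inverse explicitly.
y_solve = np.linalg.solve(np.eye(n) - A, x)

# Truncated Neumann series: y ~ x + A x + A^2 x + ...
y_series = np.zeros(n)
term = x.copy()
for _ in range(200):
    y_series += term
    term = A @ term

print(np.allclose(y_solve, y_series))  # True; the series converges since the column sums are < 1
```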

Aug 24, 2010 - Fredrik (Staff Emeritus, Science Advisor, Gold Member)

Let's try the Hilbert-Schmidt norm, defined by

$$\langle A, B\rangle = \operatorname{Tr}(A^\dagger B).$$

This definition and the information given together imply that if $A$ is an $m \times m$ matrix,

$$\|A\|^2 = \langle A, A\rangle = \sum_{i,j} a_{ij}^* a_{ij} = \sum_{i,j} |a_{ij}|^2 < m^2.$$

The norm satisfies $\|AB\| \le \|A\|\,\|B\|$, so $\|A^n\| \le \|A\|^n$, but this doesn't converge as $n \to \infty$. It looks like the assumption $|a_{ij}| < 1$ isn't strong enough to guarantee convergence. It looks like we would need something like $|a_{ij}| < 1/m$ (or another norm to define convergence).

Hmm...

Last edited: Aug 24, 2010
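A quick numerical illustration of Fredrik's point (a NumPy sketch, not part of the original thread; NumPy's Frobenius norm coincides with the Hilbert-Schmidt norm used here):

```python
import numpy as np

# All entries are strictly between 0 and 1 ...
A = np.full((4, 4), 0.6)

hs = np.linalg.norm(A, "fro")          # Hilbert-Schmidt (Frobenius) norm
print(hs)                              # 2.4: not below 1, so the bound ||A^n|| <= ||A||^n says nothing
print(max(abs(np.linalg.eigvals(A))))  # spectral radius is also 2.4, so the series really diverges here

# With every entry below 1/m = 0.25, the Hilbert-Schmidt norm drops below 1:
B = np.full((4, 4), 0.2)
print(np.linalg.norm(B, "fro"))        # 0.8 < 1, so the series for (I - B)^{-1} converges
```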

Aug 24, 2010 - Mårten

Fredrik said: ↑
    It looks like the assumption $|a_{ij}| < 1$ isn't strong enough to guarantee convergence. It looks like we would need something like $|a_{ij}| < 1/m$ (or another norm to define convergence). Hmm...

Did you take into account that my assumption wasn't just that $|a_{ij}| < 1$ (actually $0 \le a_{ij} < 1$), but also that the row sums and column sums are, one by one, less than 1? In a way, the latter thing seems to imply that $|a_{ij}| < 1/m$, "on the average" at least.

I'll take a look at the operator norm in the meantime.

Aug 24, 2010 - adriank

Well, the Hilbert-Schmidt norm $\|\cdot\|_{HS}$ is always bigger than the operator norm, so if $A$ has Hilbert-Schmidt norm less than 1, then its operator norm is also less than 1. And the Hilbert-Schmidt norm happens to satisfy

$$\|A\|_{HS}^2 = \sum_{i,j} |a_{ij}|^2.$$
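That inequality between the two norms is also easy to check numerically (a NumPy sketch, not part of the original thread):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(3):
    A = rng.standard_normal((5, 5))
    op = np.linalg.norm(A, 2)      # operator norm (largest singular value)
    hs = np.linalg.norm(A, "fro")  # Hilbert-Schmidt / Frobenius norm
    print(op <= hs)                # True every time: the operator norm never exceeds the HS norm
```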

Aug 25, 2010 - Fredrik (Staff Emeritus, Science Advisor, Gold Member)

Mårten said: ↑
    Did you take into account that my assumption wasn't just that $|a_{ij}| < 1$ (actually $0 \le a_{ij} < 1$), but also that the row sums and column sums are, one by one, less than 1?

No, I didn't take that into account. When I do, I get [...], which is better, but not good enough. We want $\|A\| < 1$. The condition on the "row sums" and "column sums" allows the possibility of a diagonal matrix with entries close to 1 on the diagonal. Such a matrix doesn't have a Hilbert-Schmidt norm less than 1.
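Fredrik's diagonal example is easy to check numerically (a NumPy sketch, not part of the original thread):

```python
import numpy as np

# Diagonal matrix with entries close to 1: every entry satisfies 0 <= a_ij < 1,
# and every row sum and column sum is 0.95 < 1.
A = np.diag([0.95, 0.95, 0.95, 0.95])

print(A.sum(axis=0), A.sum(axis=1))    # column sums and row sums: all 0.95
print(np.linalg.norm(A, "fro"))        # Hilbert-Schmidt norm = 1.9, not below 1
print(max(abs(np.linalg.eigvals(A))))  # ... yet the spectral radius is 0.95 < 1, so the series
                                       # still converges; only the Hilbert-Schmidt bound breaks down
```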

Aug 26, 2010 - Mårten

Does the equation above imply that all the components of $A^{n+1}$ go to zero when $n$ goes to infinity? Where exactly in the equation above does one see that? It was a couple of years ago I took my classes in linear algebra...

(Edit: I do understand that it's enough to show that the elements of $A^2$ are less than the elements of $A$, because then the elements in $A^3$ should be even less, and so on, but I cannot yet see why the elements of $A^2$ are less than the elements of $A$.)

Last edited: Aug 26, 2010

Aug 26, 2010 - Fredrik (Staff Emeritus, Science Advisor, Gold Member)

Mårten said: ↑
    Does the equation above imply that all the components of $A^{n+1}$ go to zero when $n$ goes to infinity? Where exactly in the equation above does one see that?

It doesn't, and you don't. For this method of proof to work, we needed the norm of $A$ to be [...]

Since the left-hand side is the largest component of $A^{n+1}$, all components of $A^{n+1}$ go to zero.

Last edited: Aug 26, 2010
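The conclusion of that derivation is easy to check numerically: for a non-negative matrix whose column sums are all below 1, the largest component of $A^n$ shrinks toward zero (a NumPy sketch, not part of the original thread):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
# Non-negative matrix scaled so that every column sum is 0.95, i.e. strictly less than 1.
A = rng.random((m, m))
A = 0.95 * A / A.sum(axis=0, keepdims=True)

P = np.eye(m)
for n in range(1, 61):
    P = P @ A              # P = A^n
    if n % 20 == 0:
        print(n, P.max())  # largest component of A^n, steadily approaching 0
```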

Sep 12, 2010 - Mårten

Thanks a lot for this derivation, it was really a nifty one! Just a minor thing: I think the rightmost inequality in the uppermost equation above should say "= ||B||a", not "[...]

Feb 22, 2011 - Mårten

Correction

I looked through this again, and realized that I was wrong about the condition I gave that the row sums are less than 1. Fortunately, this doesn't have any implications for the proof above, since it only makes use of the column sums being less than 1. So the proof still holds!

For anyone surfing by this thread, the following background information could therefore be useful: what you start out with in input-output analysis is the input-output table, which can be written as

$$F\,i + C = x,$$

where $f_{ij}$ represents input of products from industry $i$ to industry $j$, $i$ is a unit column vector (with just ones in it), $C$ is a column vector representing final demand (non-industry use from households, government, etc.), and $x$ is a column vector representing total output (as well as total input). Not included in the above equation is a row of value added (VA) below the $F$ matrix; this row can be interpreted as the labour-force input (i.e., salaries) required for the industries to produce their products. The different inputs of products and labour for a certain industry $j$ are shown in the rows of column $j$ in the $F$ matrix and the VA row; the sum of this column equals the total input $x_j$. This value is the same as the total output $x_j$ from that industry, which is the row sum of $F$ and $C$ for row $j$. That is, total costs equal total revenues.

Now, if the matrix $A$ is generated via $a_{ij} = f_{ij}/x_j$, the cells in a certain column $j$ of $A$ represent the shares of the total input $x_j$. That implies that each column sum of $A$ is less than 1. And that's not the same as saying that the row sums of $A$ are less than 1, since the cells in a row of $A$ don't represent shares of that row's row sum, according to $a_{ij} = f_{ij}/x_j$. Finally, the above equation can now be written as

$$A x + C = x \iff x = (I - A)^{-1} C$$

(where $x$ becomes a function of the final demand $C$), and that explains why the topic of this thread was about matrix inverses.

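To tie the input-output background together, here is a minimal NumPy sketch (not part of the original thread; the flow table F, final demand C, and the resulting outputs are made-up numbers) that builds A from $a_{ij} = f_{ij}/x_j$, checks that its column sums are below 1, and recovers x from C via the Leontief inverse:

```python
import numpy as np

# Made-up 3-industry example: F[i, j] is the flow of products from industry i to industry j.
F = np.array([[10.,  5., 20.],
              [ 4., 15., 10.],
              [ 8.,  2., 25.]])
C = np.array([65., 71., 65.])   # final demand
x = F.sum(axis=1) + C           # total output: row sums of F plus final demand

A = F / x                       # a_ij = f_ij / x_j (each column j divided by x_j)
print(A.sum(axis=0))            # column sums, all below 1 (value added accounts for the rest)

# Leontief relation: A x + C = x, so x = (I - A)^{-1} C.
x_recovered = np.linalg.solve(np.eye(3) - A, C)
print(np.allclose(x_recovered, x))  # True
```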