
ABSTRACT

QUADRATIC INVERSE EIGENVALUE PROBLEMS: THEORY,

METHODS, AND APPLICATIONS

Vadim Olegovich Sokolov, Ph.D.

Department of Mathematical Sciences

Northern Illinois University, 2008

Biswa Nath Datta, Director

This dissertation is devoted to the study of quadratic inverse eigenvalue problems from theoretical, computational, and applied points of view. Special attention is given to two important practical engineering problems: finite element model updating and substructured quadratic inverse eigenvalue problems.

Because of their importance, these problems have been well studied, and there now exists a voluminous body of work, especially on finite element model updating, by both academic researchers and practicing engineers.

Unfortunately, many of the existing industrial techniques are ad hoc in nature and lack a solid mathematical foundation and sophisticated state-of-the-art computational techniques. In this dissertation, some of the existing engineering techniques and industrial practices are explained, whenever possible, with the help of new mathematical results on the underlying quadratic inverse eigenvalue problems, and based on these results, new techniques for model updating and for substructured quadratic inverse eigenvalue problems are proposed.


These results will contribute to the advancement of state-of-the-art knowledge in applied and computational mathematics, mechanical vibrations, and structural engineering. They will also impact industries, such as automobile and aerospace companies, where these problems are routinely solved in design and manufacturing.


NORTHERN ILLINOIS UNIVERSITY
DE KALB, ILLINOIS

DECEMBER 2008

QUADRATIC INVERSE EIGENVALUE PROBLEMS: THEORY, METHODS,

AND APPLICATIONS

BY

VADIM OLEGOVICH SOKOLOV

© 2008 Vadim Olegovich Sokolov

A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE

DOCTOR OF PHILOSOPHY

DEPARTMENT OF MATHEMATICAL SCIENCES

Doctoral Director: Biswa Nath Datta


ACKNOWLEDGMENTS

I am thankful to my advisor, Professor Biswa Datta, for his encouragement and guidance during my three years of research, which is presented in this thesis. I would also like to acknowledge the help of Dr. Sien Deng, with whom I discussed many aspects of the optimization algorithms used in this work. I would also like to thank Olga Tchertkova for proofreading my manuscript. Finally, I gratefully acknowledge the support provided by the Graduate School, in the form of a completion fellowship, as well as support from Argonne National Laboratory.


TABLE OF CONTENTS

LIST OF FIGURES

Chapter

1. Introduction

2. Quadratic Eigenvalue Problem
   2.1 Definition and Basic Results
   2.2 Applications
   2.3 Orthogonality Relation
   2.4 Numerical Methods
   2.5 Real-Valued Representation for the Eigenvalues and Eigenvectors of the QEP

3. Quadratic Inverse Eigenvalue Problem [QIEP]
   3.1 Statement
   3.2 Unstructured QIEP from Fully Prescribed Eigenstructure
   3.3 Structured QIEP with Fully Prescribed Eigenstructure
   3.4 Unstructured QIEP with Partially Prescribed Eigenstructure
   3.5 Structured QIEP with Partially Prescribed Eigenstructure
       3.5.1 Pencils with Banded Coefficient Matrices
       3.5.2 An Application of Structured QIEP: Finite Element Model Updating

4. Optimization
   4.1 Unconstrained Optimization
   4.2 Constrained Optimization
   4.3 Augmented Lagrangian Method

5. Model Updating
   5.1 Mathematical Statement and Engineering and Computational Challenges
   5.2 Methods for Undamped Models
       5.2.1 Undamped Model Updating with No Spurious Modes and with Incomplete Measured Data
   5.3 Model Updating for Damped Models
       5.3.1 Eigenvalue Embedding Methods
   5.4 Quadratic Model Updating with Measured Data Satisfying Orthogonality Relations: A New Method
       5.4.1 Existence of Symmetric Solution of the Model Updating Problem
       5.4.2 Linear Case (Undamped Model)
       5.4.3 Quadratic Case (Damped Model)
       5.4.4 A Two-Stage Model Updating Scheme
   5.5 A Solution Method and Its Convergence Properties
       5.5.1 Computation of the Gradient Formulas
       5.5.2 Gradient Formulas for Problem Q
       5.5.3 Case Studies
       5.5.4 A Mass-Spring System of 10 DoF
       5.5.5 Vibrating Beam

6. Affine Parametric Quadratic Inverse Eigenvalue Problem
   6.1 Introduction
   6.2 Newton's Method
   6.3 Matrix Nearness Problem
   6.4 Method of Alternating Projections
   6.5 Hybrid Method
   6.6 Numerical Experiments

REFERENCES


LIST OF FIGURES

2.1. Mass-spring system.

4.1. Constrained optimization problem.

5.1. Mass-spring system with 10 DoF.

5.2. The percentage change in the diagonal elements of the stiffness matrix for the mass-spring system.

6.1. Serially linked mass-spring system.


CHAPTER 1

Introduction

This dissertation deals with the theory, computation, and applications of the quadratic inverse eigenvalue problem (QIEP). A QIEP is concerned with constructing three n-by-n matrices M, C, and K such that the quadratic matrix pencil

P(λ) = λ²M + λC + K

has a prescribed eigenstructure and the matrices M, C, K satisfy certain constraints. By eigenstructure we mean a set of k eigenvalues and k eigenvectors and their multiplicities; k is not necessarily equal to 2n. Special attention is given to two important QIEPs: finite element model updating (FEMU) and the quadratic affine inverse eigenvalue problem. Both problems have industrial applications and arise in vibration industries, such as automobile and aerospace manufacturing and the construction of buildings, bridges, and highways.

Vibrating structures are modeled by distributed parameter systems. In practice, such systems are discretized into finite-dimensional systems by using finite element techniques. The dynamics of finite element models are governed by the eigenvalues and eigenvectors of the pencil P(λ). Usually, the finite element models are very large. Unfortunately, only a small number of eigenvalues and eigenvectors of the associated pencil are computable using state-of-the-art computational techniques, such as the Jacobi-Davidson projection method. Similarly, only a small number of them are measurable experimentally, because of hardware limitations. Under these circumstances, in order to numerically validate a finite element model, a vibration engineer must update an analytical finite element model using a small amount of measured data from a prototype or a real-life structure. Finite element matrices usually have nice exploitable properties, such as symmetry, positive definiteness, and sparsity, which are assets in a computational setting. Thus, the updating has to be done in such a way that the updated model reproduces the measured eigenvalues and eigenvectors while the original physical properties are preserved.

If a model has been updated this way, then the updated model can be used with confidence for future design and construction. Another important application is that an updated model is very often used to identify damage in structures, by comparing the physical parameters and the differences in the elements of the original and updated models. Because of its industrial uses, the updating problem has been widely studied both by academic researchers and practicing engineers. As a result, a voluminous body of work exists.

There are two types of updating procedures. Methods of the first type, taking the mass matrix as the reference matrix, first update the measured data so that it satisfies the mass-orthogonality constraint (iii). This is then followed by updating the stiffness matrix so as to satisfy constraints (ii) and (iv). Methods of the second type update, either separately or simultaneously, the mass and stiffness matrices, satisfying constraints (i)-(iv) [9, 42, 75]. There also now exists a method which first updates the stiffness matrix, satisfying constraints (ii) and (iv), and then computes the missing entries of the measured modes in a computational setting such that the computed data satisfies the mass-orthogonality constraint [10]. The method proposed in [10] has the additional important feature that the eigenvalues and eigenvectors which are not updated remain unchanged by the updating procedure. This guarantees that "no spurious modes appear in the frequency range of interest."

In this dissertation, we propose a new method of the first type for a damped model. Our method consists of two stages. In Stage I, we update the measured eigenvectors so that they satisfy a quadratic orthogonality relation proved in Corollary 5.1 of this dissertation. This result is a real-form generalization of the three orthogonality relations proved earlier in [19]. In Stage II, the updated measured eigenvectors from Stage I are used to update the stiffness matrix so that it remains symmetric after updating and the measured eigenvalues and eigenvectors are reproduced by the updated model. Thus, our method generalizes first-type methods for undamped models to a damped model. The results of numerical experiments on some case studies are presented to show the accuracy of the proposed method. Our contribution also includes mathematically established results showing that satisfaction of the orthogonality relation by the measured data is necessary and sufficient for the solution of Stage II to be symmetric.

It is to be noted that there are other methods for updating a damped model. These include the method of Friswell, Inman and Pilkey [32], an algorithmic implementation of this method [44], and several control-theoretic methods (e.g., [56, 57, 76]). For details of the control-theoretic methods, see Friswell and Mottershead [33, p. 154]. However, none of those methods explicitly updates the measured data so as to satisfy any orthogonality constraint. On the other hand, as noted before, our mathematical results in Theorems 5.6 and 5.7 demonstrate that, before performing Stage II, Stage I must either be performed explicitly or it must be implicitly assumed that the measured data already satisfies an appropriate orthogonality constraint. Otherwise, the feasibility set of the Stage II problem might be empty.


In our case, the problems in both stages are nonlinear optimization problems. The Stage I problem is a nonconvex minimization problem with equality constraints; this is a difficult optimization problem to solve. An augmented Lagrangian method is proposed to deal with it, and some convergence properties of this method are discussed.
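The augmented Lagrangian framework mentioned above can be sketched in a few lines. The following is a generic illustration on a toy equality-constrained problem; the objective, constraint, and parameter values are hypothetical and are not the dissertation's Stage I problem:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, outer_iters=10, rho=10.0):
    """Minimize f(x) subject to h(x) = 0 by the augmented Lagrangian
    method: repeatedly minimize the penalized Lagrangian, update the
    multiplier estimate mu, and increase the penalty parameter rho."""
    x = np.asarray(x0, dtype=float)
    mu = np.zeros_like(h(x))
    for _ in range(outer_iters):
        def L_aug(y):
            hy = h(y)
            return f(y) - mu @ hy + 0.5 * rho * hy @ hy
        x = minimize(L_aug, x, method="BFGS").x   # inner unconstrained solve
        mu = mu - rho * h(x)                      # first-order multiplier update
        rho *= 2.0                                # tighten the penalty
    return x

# toy problem: min x1^2 + x2^2 subject to x1 + x2 = 1; minimizer is (0.5, 0.5)
x_star = augmented_lagrangian(lambda x: x @ x,
                              lambda x: np.array([x[0] + x[1] - 1.0]),
                              x0=[0.0, 0.0])
```

The multiplier update drives the constraint violation to zero without requiring the penalty parameter to grow unboundedly, which is the practical advantage of this method over a pure penalty approach.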

The Stage II problem is a convex quadratic program. This is a much nicer optimization problem to deal with, and there are several excellent numerical methods for such problems in the literature (see [60]).

Implementing Stage I and Stage II in an optimization setting requires that the appropriate gradient formulas be computed in terms of the known quantities only, which are, in our case, just a few measured eigenvalues and eigenvectors and the corresponding sets from the analytical model. Such gradient formulas are mathematically derived in this dissertation.

Besides the FEMU problem, another related QIEP, known as the affine quadratic inverse eigenvalue problem (AQIEP), is considered in this dissertation. It is defined as follows:

Given a set of 2n self-conjugate scalars μ1, ..., μ2n, a fixed matrix M, and a set of substructure matrices Ci and Ki, find parameters αi and βi such that the pencil P(λ) = λ²M + λC(α) + K(β) has μ1, ..., μ2n as its eigenvalues, where

C(α) = ∑_i α_i C_i,  K(β) = ∑_i β_i K_i.

Such a substructure model arises, for example, when working with a free-free vibrating structure with a finite number of degrees of freedom, in which each of the masses is connected by linear springs to the other masses of the system.
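As a concrete illustration of the forward map underlying the AQIEP, the following sketch assembles C(α) and K(β) from given substructure matrices and computes the spectrum of the resulting pencil by a companion linearization. The substructure matrices and parameter values are made up purely for illustration:

```python
import numpy as np
from scipy.linalg import eig

def affine_pencil_eigs(M, Cs, Ks, alpha, beta):
    """Spectrum of P(lam) = lam^2 M + lam C(alpha) + K(beta), where
    C(alpha) = sum_i alpha_i C_i and K(beta) = sum_i beta_i K_i."""
    C = sum(a * Ci for a, Ci in zip(alpha, Cs))
    K = sum(b * Ki for b, Ki in zip(beta, Ks))
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # companion linearization: a 2n-by-2n generalized eigenvalue problem
    lam, _ = eig(np.block([[Z, I], [-K, -C]]), np.block([[I, Z], [Z, M]]))
    return lam

M = np.eye(2)
Cs = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]             # substructures C_i
Ks = [np.array([[2.0, -1.0], [-1.0, 1.0]]), np.eye(2)]      # substructures K_i
lam = affine_pencil_eigs(M, Cs, Ks, alpha=[0.4, 0.2], beta=[1.0, 0.5])
# the AQIEP asks the reverse question: choose alpha and beta so that these
# 2n eigenvalues match a prescribed self-conjugate set mu_1, ..., mu_2n
```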


Note that the AQIEP can be viewed as a FEMU problem for a substructure model. Indeed, if we set K = K0 and C = C0 and seek matrices ∆C and ∆K of the form ∆C = ∑_i α_i C_i and ∆K = ∑_i β_i K_i, then the updated matrices Cu = C + ∆C and Ku = K + ∆K can be found by solving the affine QIEP. Affine models have recently been used, for example, in the analysis of helicopter structures [40]. The AQIEP also arises in model-aided diagnosis of mechanical systems [59]. Another application of the affine problem is to constitutive equations, when decisions about material properties are made using measured data [58].

In practical formulations of these problems it is usually assumed that the number of parameters is smaller than the number of measured eigenvalues; it is therefore more appropriate to consider the least-squares formulation of the problem, i.e., find the αi and βi so that the spectrum of the pencil (M, C(α), K(β)) is as close as possible to the set of target eigenvalues in the least-squares sense.

We propose new Newton-like and alternating projections algorithms in this dissertation to solve the AQIEP. These methods are easily generalized to the solution of the problem in the least-squares setting.

Here are the main contributions of this dissertation:

• A real-valued representation of the quadratic orthogonality result originally proved by Datta et al. [19] is derived in Theorem 5.6. This result facilitates the use of existing optimization algorithms, which are mostly formulated in real arithmetic.

• A new result (Theorem 5.7) on the QIEP is proved, giving a necessary and sufficient condition for the existence and uniqueness of the matrix K, given the fixed matrices M and C and a set of k self-conjugate scalars and vectors (k < n), such that the spectrum of the pencil P(λ) contains the given set of scalars and the given vectors become the associated eigenvectors.

• Using the result cited above, a new algorithm is proposed for damped model updating that satisfies the newly proved orthogonality relation. The above result also forms a mathematical basis for several model updating techniques which implicitly assume such a result.

• New Newton-like and alternating projections algorithms are proposed for the solution of the quadratic affine inverse eigenvalue problem.

• Convergence analyses of the proposed numerical algorithms are discussed in some detail.

The new theoretical and computational results obtained in this dissertation will advance the state-of-the-art knowledge in computational and applied mathematics and in vibration engineering. They are also likely to impact the automobile, aircraft, and other vibration industries, which routinely solve the finite element model updating problem. I also feel very strongly that my own interdisciplinary training, blending applied and computational mathematics with vibration engineering during the development of this dissertation, is an asset for my future career.

Here is an outline of the dissertation:

Chapter 2 covers the basic results on the quadratic eigenvalue problem, including important orthogonality relations and the notion of a real-valued representation of the eigenvalues and eigenvectors.

Chapter 3 gives an introduction to inverse eigenvalue problems associated with quadratic matrix pencils. The inverse problems are grouped into several categories according to their structure-preserving natures, and for each category recent results are reviewed.


Chapter 4 provides the necessary background on the optimization techniques used in the subsequent discussions of the optimization-based algorithms for the problems considered in this dissertation.

Chapter 5 describes the theory and computation of the finite element model updating problem. Sections 5.2 and 5.3 review model updating techniques for undamped (C = 0) and damped models, respectively. A new two-stage approach to model updating for a damped model is described in Section 5.4; a new result on the QIEP that provides a mathematical justification of our formulation of Stage II is also proved in that section. A new algorithm based on the two-stage approach, along with its convergence analysis and the results of numerical experiments, is presented in Section 5.5. Most of the research presented in this chapter is the result of joint work with Daniil Sarkissian and Sien Deng [26].

The affine inverse eigenvalue problem is described in Chapter 6. A locally quadratically convergent Newton-like algorithm is proposed in Section 6.2. Two globally convergent algorithms based on the alternating projections approach are then presented in Section 6.4. A matrix nearness problem, which arises as an auxiliary problem in computing the projection operators needed for alternating projections, is considered in Section 6.3. The combined use of the locally convergent and globally convergent algorithms of Sections 6.2 and 6.4 is considered in Section 6.5. The last section contains some illustrative numerical experiments.


CHAPTER 2

Quadratic Eigenvalue Problem

2.1 Definition and Basic Results

Given matrices M, C and K, each of order n, the matrix

P(λ) = λ²M + λC + K (2.1.1)

is called a quadratic matrix pencil. For convenience, quadratic pencils will be denoted by the symbol (M, C, K). In vibration design and analysis the matrices M, C and K are called, respectively, the mass matrix, damping matrix and stiffness matrix. These names describe the physical nature of the matrices; historically, however, the names are used regardless of the application. We assume that all three matrices are real.

The matrix pencil is called singular if det P(λ) = 0 for all values of λ; otherwise it is called regular. Unless otherwise stated, we will assume that the pencil is regular.

A scalar λ and a nonzero n-vector φ such that

P(λ)φ = 0 (2.1.2)

are called, respectively, an eigenvalue and an eigenvector of P(λ). The eigenvalues are the roots of the polynomial equation

det P(λ) = 0,

known as the characteristic equation. If M is nonsingular, then there are 2n eigenvalues and eigenvectors. The problem of finding the eigenvalues and eigenvectors of P(λ) is known as the quadratic eigenvalue problem (QEP).
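In practice the 2n eigenpairs are computed by reducing the QEP to a linear eigenvalue problem of twice the size. A minimal sketch using a standard companion linearization follows; the 2-DoF matrices are hypothetical toy data:

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(M, C, K):
    """All 2n eigenpairs of P(lam) = lam^2 M + lam C + K (M nonsingular),
    via the companion form A z = lam B z with z = [phi; lam*phi]."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lam, Zv = eig(A, B)
    return lam, Zv[:n, :]        # eigenvectors of P are the top n rows of z

M = np.diag([2.0, 1.0])
C = np.array([[0.6, -0.2], [-0.2, 0.2]])
K = np.array([[3.0, -1.0], [-1.0, 1.0]])
lam, Phi = quadratic_eig(M, C, K)
# each pair satisfies P(lam_j) phi_j = 0 up to roundoff
for j in range(2 * M.shape[0]):
    r = (lam[j] ** 2 * M + lam[j] * C + K) @ Phi[:, j]
    assert np.linalg.norm(r) < 1e-8
```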

The underlying equation, often used in the dynamic analysis of mechanical systems, is a homogeneous linear second-order differential equation:

Mq''(t) + Cq'(t) + Kq(t) = 0. (2.1.3)

Mechanical structures are usually modeled by such equations, which are typically obtained by finite element discretization of distributed parameter systems. Assuming a solution of the form q(t) = φ0 e^{λ0 t}, equation (2.1.3) leads to the eigenvalue-eigenvector problem

P(λ0)φ0 = 0.

In the case when all of the eigenvalues of the quadratic pencil are distinct, the general solution of equation (2.1.3) is

q(t) = ∑_{k=1}^{2n} a_k φ_k e^{λ_k t}.

More generally, when λ0 is an eigenvalue of algebraic multiplicity at least p + 1, the function

q(t) = ( (t^p/p!) φ0 + (t^{p−1}/(p−1)!) φ1 + · · · + t φ_{p−1} + φ_p ) e^{λ0 t}

is a solution of the differential equation if the set of vectors φ0, ..., φp, with φ0 ≠ 0, satisfies the relations

∑_{i=0}^{j} (1/i!) P^{(i)}(λ0) φ_{j−i} = 0, j = 0, 1, ..., p.

Here P^{(i)} denotes the ith derivative of the matrix polynomial P(λ). Such a set of vectors φ0, ..., φp is called a Jordan chain of length p + 1 associated with the eigenvalue λ0. The Jordan matrix of the pencil is the matrix in Jordan canonical form of size 2n that contains all the information about the eigenvalues and their multiplicities; it generalizes the standard Jordan canonical form of a single matrix A. If the algebraic and geometric multiplicities of every eigenvalue coincide, the pencil is called semisimple: all Jordan chains then have length one and consist of the corresponding eigenvectors.

It is convenient to organize the eigenvalues and eigenvectors into matrices, as follows:

Φ = (φ1, ..., φk),  Λ = diag(λ1, ..., λk). (2.1.4)

Definition 2.1 The matrices Λ and Φ are called the eigenvalue and eigenvector matrices, respectively. The pair (Φ, Λ) will be called a k-matrix eigenpair, or just a matrix eigenpair.

Note that a matrix eigenpair (Φ, Λ) of the pencil P(λ) satisfies the matrix equation

MΦΛ² + CΦΛ + KΦ = 0. (2.1.5)
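Relation (2.1.5) is easy to check numerically. The sketch below computes a full matrix eigenpair of a small hypothetical pencil via a companion linearization and verifies the matrix equation up to roundoff:

```python
import numpy as np
from scipy.linalg import eig

# toy symmetric pencil (illustrative values only)
M = np.diag([2.0, 1.0])
C = np.array([[0.6, -0.2], [-0.2, 0.2]])
K = np.array([[3.0, -1.0], [-1.0, 1.0]])
n = M.shape[0]
I, Z = np.eye(n), np.zeros((n, n))
lam, Zv = eig(np.block([[Z, I], [-K, -C]]), np.block([[I, Z], [Z, M]]))
Phi, Lam = Zv[:n, :], np.diag(lam)           # here k = 2n
residual = M @ Phi @ Lam @ Lam + C @ Phi @ Lam + K @ Phi
assert np.linalg.norm(residual) < 1e-8       # equation (2.1.5) holds
```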

2.2 Applications

The QEP arises in a wide variety of applications, including mechanical vibrations and structural engineering, physics, acoustics, etc. (see [73]). As noted before, mechanical vibrating systems are very often modeled by a system of second-order differential equations of the form (2.1.3). Very often in practical applications, the coefficient matrices have special structural properties. To see this, consider the following mass-spring system.

Figure 2.1 illustrates a simple mass-spring system with serially connected masses m1, ..., mn; the resistance to displacement is provided by springs with stiffness constants β1, ..., βn, respectively, and the energy dissipation mechanism is represented by dampers with coefficients α1, ..., αn.

Figure 2.1: Mass-spring system.

Assuming that the damping is proportional to the velocities q'_i(t), the free vibrations (with no external forces) of the mass-spring system are governed by a system of the form (2.1.3) with matrices M, C, and K given by

M = diag(m1, m2, ..., mn),

C = [ α1+α2   −α2                     ]
    [ −α2    α2+α3   −α3              ]
    [          ...     ...      ...   ]
    [                 −αn        αn   ]

K = [ β1+β2   −β2                     ]
    [ −β2    β2+β3   −β3              ]
    [          ...     ...      ...   ]
    [                 −βn        βn   ]     (2.2.6)

The natural frequencies are related to the eigenvalues of the associated quadratic matrix pencil, and the eigenvectors are called mode shapes, or just modes.
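The tridiagonal pattern in (2.2.6) is easy to assemble programmatically. The sketch below builds M, C, and K for an arbitrary chain of masses, dampers, and springs; the numeric values are illustrative only:

```python
import numpy as np

def chain_matrix(c):
    """Tridiagonal damping/stiffness matrix of (2.2.6), built from the
    coefficients c = (c_1, ..., c_n) of a serially connected chain."""
    n = len(c)
    A = np.zeros((n, n))
    for i in range(n):
        # diagonal: c_i + c_{i+1} (last mass sees only its own damper/spring)
        A[i, i] = c[i] + (c[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -c[i + 1]
    return A

m = [1.0, 2.0, 3.0]              # masses m_i
alpha = [0.3, 0.2, 0.1]          # damper coefficients alpha_i
beta = [4.0, 3.0, 2.0]           # spring stiffnesses beta_i
M, C, K = np.diag(m), chain_matrix(alpha), chain_matrix(beta)
assert np.allclose(C, C.T) and np.allclose(K, K.T)   # symmetric, as noted
```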

Note that the coefficient matrices M, C and K of the above system are structured; moreover, all three of them are symmetric positive definite. Indeed, the matrices arising from vibrating structures often have nice physical properties, such as:

• M is symmetric positive definite (M = Mᵀ > 0) and often diagonal or tridiagonal;

• C is symmetric positive semidefinite (C = Cᵀ ≥ 0);

• K is symmetric positive semidefinite (K = Kᵀ ≥ 0) and often tridiagonal or banded.

Unless otherwise stated, we will assume throughout this dissertation that

• M is symmetric positive definite (M = Mᵀ > 0),

• C is symmetric positive semidefinite (C = Cᵀ ≥ 0),

• K is symmetric positive semidefinite (K = Kᵀ ≥ 0).

The fact that the damping matrix is positive semidefinite corresponds to the fact that energy either dissipates or remains unchanged due to the damping. For the mass-spring system, energy always dissipates due to nonzero damping; thus the matrix C is positive definite, C > 0. Forces of the type Hq'(t), where H is skew-symmetric, can also be found in mechanical applications. They are of a different nature and are called gyroscopic forces. Gyroscopic systems will not be considered in this dissertation.

Damping is present in all mechanical systems; however, it is in general hard to estimate. For the sake of computational convenience, it is very often assumed that the damping matrix C = 0 or, more generally, that C is proportional to the mass and stiffness matrices; that is, C is assumed to be of the form

C = ∑_j a_j M(M⁻¹K)^j.

Rayleigh damping, where C = a0M + a1K, is a particular case of proportional damping that is very often used for convenience. The assumption of proportional damping makes the analysis much simpler; however, it is not valid for many real-life applications. For example, systems in which gyroscopic forces are present are nonproportionally damped. One classical example of a gyroscopic force is the Coriolis force [53], which arises in the equations of relative motion written in a moving coordinate system rotating with an angular velocity.
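Rayleigh damping has the convenient property that it is diagonalized by the undamped mode shapes, which is what makes the proportional-damping assumption so attractive computationally. The following sketch, with made-up coefficients, checks this:

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])
K = np.array([[3.0, -1.0], [-1.0, 1.0]])
a0, a1 = 0.1, 0.05                    # illustrative Rayleigh coefficients
C = a0 * M + a1 * K                   # Rayleigh damping C = a0*M + a1*K

w, Phi = eigh(K, M)                   # undamped modes; Phi.T @ M @ Phi = I
Cm = Phi.T @ C @ Phi                  # modal damping matrix
# proportional damping decouples the equations: Cm = a0*I + a1*diag(w)
assert np.allclose(Cm, np.diag(a0 + a1 * w))
```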

2.3 Orthogonality Relation

Recall the classical result which states that the eigenvector matrix Φ of the symmetric generalized eigenvalue problem K − λM, with M > 0 and K ≥ 0, can be scaled such that ΦᵀMΦ = I and ΦᵀKΦ = Λ, where Λ is the eigenvalue matrix [18]. A generalization to the quadratic matrix polynomial case has been obtained by Datta, Elhay, and Ram [19], and by Lancaster [46].

Theorem 2.1 Let P (λ) = λ2M + λC + K, where M = MT > 0, C = CT , and

K = KT . Assume that the eigenvalues λ1, ..., λ2n are all distinct and different from

zero. Let Λ = diag(λ1, ..., λ2n) be the eigenvalue matrix and Φ = (φ1, ..., φ2n) be

the corresponding matrix of eigenvectors. Then there exist diagonal matrices D1, D2,

and D3 such that

ΛΦT MΦΛ− ΦT KΦ = D1 (2.3.7)

ΛΦT CΦΛ + ΛΦT KΦ + ΦT KΦΛ = D2 (2.3.8)

ΛΦT MΦ + ΦT MΦΛ + ΦT CΦ = D3. (2.3.9)

Furthermore,

D1 = D3Λ; D2 = −D1Λ; D2 = −D3Λ2.

Note that the matrices Λ and Φ are in general complex; thus ΛT and ΦT are the transposes (not the conjugate transposes) of these complex matrices. A real representation of these orthogonality relations now exists and will be described in Section 2.5.
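These relations are easy to verify numerically. The sketch below (Python/NumPy; the random symmetric pencil is a hypothetical example) computes all 2n eigenpairs through the companion matrix and checks that the three combinations of Theorem 2.1 are diagonal and that D1 = D3Λ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical symmetric pencil: M > 0, C and K symmetric.
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T
K = rng.standard_normal((n, n)); K = K + K.T

# Right companion matrix; its eigenvectors are [phi; lambda*phi].
CR = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam, V = np.linalg.eig(CR)
Phi = V[:n, :]          # quadratic eigenvectors (columns), complex in general
L = np.diag(lam)

# The three relations of Theorem 2.1 (plain transpose, not conjugate).
D1 = L @ Phi.T @ M @ Phi @ L - Phi.T @ K @ Phi
D2 = L @ Phi.T @ C @ Phi @ L + L @ Phi.T @ K @ Phi + Phi.T @ K @ Phi @ L
D3 = L @ Phi.T @ M @ Phi + Phi.T @ M @ Phi @ L + Phi.T @ C @ Phi

def offdiag(D):
    return D - np.diag(np.diag(D))
```

The diagonal matrices D1, D2, D3 depend on the (arbitrary) scaling of the eigenvectors, but the relations themselves hold for any scaling.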


2.4 Numerical Methods

The QEP is a special case of the general polynomial eigenvalue problem. A clas-

sical reference book for the polynomial eigenvalue problem is the book by Lancaster

[46]. A more up-to-date account can be found in the book by Gohberg, Lancaster, and Rodman [36]. Numerical methods for polynomial eigenvalue problems are not well developed. Indeed, even in the case of the QEP, state-of-the-art computational techniques, such as the Jacobi-Davidson method [71], can compute only a few extremal eigenvalues and eigenvectors. A good account of the numerical techniques for the QEP can be found in the paper by Tisseur and Meerbergen [73]. We will now briefly review some of these techniques.

One of the standard approaches to investigating a polynomial eigenvalue problem is to use linearizations [55]. By linearization is meant the reduction of the quadratic matrix pencil to a larger linear matrix pencil which is "equivalent" to the given matrix polynomial P(λ). There are infinitely many linear pencils A − λB equivalent to the quadratic pencil (M, C, K); the matrices A and B are necessarily of size 2n. A quadratic pencil P(λ) is equivalent to a linear pencil A − λB if

A − λB = E(λ) [ P(λ)  0 ; 0  I ] F(λ)

for some 2n × 2n matrix polynomials E(λ) and F(λ) with constant nonzero determinants; P(λ) and A − λB then have the same eigenvalues. The linear pencil A − λB is called a linearization of P(λ).

Companion form linearizations are the most widely used in the literature. A commonly used companion form linearization is:

[ 0  W ; K  C ] − λ [ W  0 ; 0  −M ],   (2.4.10)


where W is an arbitrary nonsingular matrix. If the matrix M is invertible, we also have the following linearizations:

C_R − λI = [ 0  I ; −M^{-1}K  −M^{-1}C ] − λI,   C_L − λI = [ 0  −KM^{-1} ; I  −CM^{-1} ] − λI.   (2.4.11)

Unfortunately, the above companion form linearizations do not preserve any structure of the original quadratic pencil. A useful linearization which preserves symmetry is:

A − λB = [ −K  0 ; 0  W ] − λ [ C  M ; W  0 ].   (2.4.12)

Note that this linearization is symmetric when W = M.

The linearization given in (2.4.11) and the symmetric linearization given in (2.4.12) are related by

C_R = B^{-1}A.

Note that if (Φ, Λ) is the matrix eigenpair of the pencil (M, C, K), then each of the linearized pencils mentioned above has

( [ Φ ; ΦΛ ], Λ )

as its matrix eigenpair. As was mentioned above, the matrix eigenpair (Φ, Λ) satisfies the eigenvalue-eigenvector relation (2.1.5).
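The relation C_R = B^{-1}A and the agreement of the spectra can be checked directly. A minimal sketch (Python/NumPy; the pencil is a hypothetical random one) builds both linearizations with W = M:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A0 = rng.standard_normal((n, n)); M = A0 @ A0.T + n * np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T
K = rng.standard_normal((n, n)); K = K + K.T

Z = np.zeros((n, n)); I = np.eye(n)

# Right companion form (2.4.11).
CR = np.block([[Z, I],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

# Symmetric linearization (2.4.12) with W = M (both A and B are symmetric).
A = np.block([[-K, Z], [Z, M]])
B = np.block([[C, M], [M, Z]])

# The two are related by C_R = B^{-1} A, i.e., B @ C_R = A, so the pencils
# A - lam*B and C_R - lam*I have the same 2n eigenvalues.
lam_cr = np.sort_complex(np.linalg.eigvals(CR))
lam_ab = np.sort_complex(np.linalg.eigvals(np.linalg.solve(B, A)))
```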

Computing the Eigenvalues of the QEP

Once the QEP is transformed to a linearized form, the resulting generalized eigenvalue problem can be solved by applying the QZ iteration [18, 37]. The eigenvectors of A − λB are computed using generalized inverse iteration, and the eigenvectors of the quadratic pencil can then be extracted from those of A − λB.

There are also projection methods developed for finding a few eigenpairs of a quadratic pencil. These methods are used when the problem is rather sparse and


large. The Jacobi-Davidson method is a widely known representative of the family of projection methods. Specifically, it can be briefly described as follows:

The Jacobi-Davidson Method [70]

• Compute an eigenpair (λ, φ) of the projected problem V_k^T P(λ)V_k, corresponding to an approximate eigenpair of P(λ) = λ^2 M + λC + K with φ*φ = 1, by finding an orthonormal basis V_k = (v_1, ..., v_k) for the search subspace K_k. The pair (λ, φ) is called a Ritz pair.

• Compute the correction pair (v, η) by solving the linear system:

[ P(λ)  P′(λ)φ ; φ*  0 ] [ v ; η ] = [ −r ; 0 ],

where P(λ) = λ^2 M + λC + K, P′(λ) = 2λM + C, and r = P(λ)φ is the residual.

• Obtain the new basis vector v_{k+1} by orthogonalizing v against the previous columns of the orthogonal basis matrix V_k = (v_1, ..., v_k).

• Repeat until ||P(λ)φ|| is small.

For more information on the numerical solution of the QEP, see [70].
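The heart of the method is the bordered correction system. The following sketch (Python/NumPy; the pencil and the starting guess are hypothetical, and φ* is used in the border row) performs one correction step starting from a slightly perturbed eigenpair and checks that the residual decreases:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A0 = rng.standard_normal((n, n)); M = A0 @ A0.T + n * np.eye(n)
C = rng.standard_normal((n, n)); C = 0.1 * (C + C.T)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)

P  = lambda s: s**2 * M + s * C + K      # P(lambda)
dP = lambda s: 2 * s * M + C             # P'(lambda)

# A reference eigenpair from the companion matrix, then perturbed slightly.
CR = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
ev, V = np.linalg.eig(CR)
lam0, phi0 = ev[0], V[:n, 0] / np.linalg.norm(V[:n, 0])
lam = lam0 * (1 + 1e-3)
phi = phi0 + 1e-3 * rng.standard_normal(n)
phi = phi / np.linalg.norm(phi)
res_before = np.linalg.norm(P(lam) @ phi)

# Bordered correction system for (v, eta):
#   [ P(lam)   P'(lam) phi ] [ v   ]   [ -r ]
#   [ phi^*        0       ] [ eta ] = [  0 ]
top = np.hstack([P(lam), (dP(lam) @ phi)[:, None]])
bot = np.hstack([phi.conj()[None, :], np.zeros((1, 1))])
sol = np.linalg.solve(np.vstack([top, bot]),
                      np.concatenate([-(P(lam) @ phi), [0.0]]))
v, eta = sol[:n], sol[n]

# One Newton-type update of the approximate eigenpair.
phi_new = phi + v
lam_new = lam + eta
res_after = np.linalg.norm(P(lam_new) @ phi_new) / np.linalg.norm(phi_new)
```

In the actual Jacobi-Davidson method the correction v is not added directly but used to expand the search subspace; the dense solve above is only a small-scale stand-in for that step.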

2.5 Real-Valued Representation for the Eigenvalues and Eigenvectors of the QEP

The matrix eigenpair (Φ, Λ) of the QEP is in general complex. However, from a computational point of view it is more convenient to deal with a real-form representation of the pair (Φ, Λ). Later in this dissertation, optimization problems will be considered which have the eigenvector matrix as a variable. The MATLAB Optimization Toolbox is used to solve some of these problems. One of the drawbacks


of the routines implemented in the toolbox is that only real-valued variables can be used. Using a real matrix eigenpair helps to overcome this drawback; moreover, it significantly simplifies the derivation of the gradient formulas needed to implement the optimization algorithms. We will now show how this can be done. We first note

the fact that when the pencil is semisimple, i.e., all the eigenvalues have geometric multiplicity one, the matrix

col(Φ, ΦΛ) = [ Φ ; ΦΛ ]  is of full rank.   (2.5.13)

See [36].

First, we note that conditions (2.1.5) and (2.5.13) still hold if we replace the k-matrix eigenpair (Φ, Λ) by (ΦS^{-1}, SΛS^{-1}) for some invertible matrix S. Let the eigenvalue matrix of P(λ) be Λ = diag(α_1 + iβ_1, α_1 − iβ_1, ..., α_l + iβ_l, α_l − iβ_l, λ_{2l+1}, ..., λ_k), and let the corresponding eigenvector matrix be Φ = (u_1 + iv_1, u_1 − iv_1, ..., u_l + iv_l, u_l − iv_l, φ_{2l+1}, ..., φ_k). Define the pair (X, T) = (ΦS^{-1}, SΛS^{-1}), with

S = diag(S_1, ..., S_l, S_{2l+1}, ..., S_k), where

S_j = (1/√2) [ 1  1 ; i  −i ] for j = 1, ..., l, and S_j = 1 for j = 2l + 1, ..., k.   (2.5.14)

Each 2 × 2 block of the matrix S corresponds to a complex conjugate pair of eigenvalues of P(λ). The pair (X, T) will be called the real matrix eigenpair of P(λ). Note that the matrices X and T have the following structure:

X = [u_1, v_1, ..., u_l, v_l, φ_{2l+1}, ..., φ_k],   (2.5.15)

T = diag(T_1, ..., T_l, T_{2l+1}, ..., T_k),   (2.5.16)

with T_j = [ α_j  β_j ; −β_j  α_j ] for j = 1, ..., l, and T_j = λ_j for j = 2l + 1, ..., k.


Definition 2.2 The pair (X, T), X ∈ R^{n×k} and T ∈ R^{k×k}, as defined above, will be called a k-real matrix eigenpair (k ≤ 2n), or simply a real matrix eigenpair.

The real matrix eigenpair (X, T) satisfies the matrix equation

MXT^2 + CXT + KX = 0.   (2.5.17)

Assuming that the eigenvalues are semisimple, we have that

col(X, XT) = [ X ; XT ]  is of full rank.   (2.5.18)
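The passage from a complex conjugate eigenpair to the real pair (X, T) can be illustrated directly. The sketch below assumes a hypothetical lightly damped system (so that a complex conjugate pair exists), builds the 2 × 2 real block for one such pair, and checks relation (2.5.17):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A0 = rng.standard_normal((n, n)); M = A0 @ A0.T + n * np.eye(n)
C = rng.standard_normal((n, n)); C = 0.05 * (C + C.T)   # light damping
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)

CR = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam, V = np.linalg.eig(CR)

# Pick one eigenvalue alpha + i*beta with beta > 0 and its eigenvector u + i*v.
j = int(np.argmax(lam.imag))
alpha, beta = lam[j].real, lam[j].imag
u, v = V[:n, j].real, V[:n, j].imag

# Real 2-column eigenpair for this conjugate pair, as in (2.5.15)-(2.5.16).
X = np.column_stack([u, v])
T = np.array([[alpha, beta], [-beta, alpha]])

# The real pair satisfies M X T^2 + C X T + K X = 0, relation (2.5.17).
R = M @ X @ T @ T + C @ X @ T + K @ X
```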


CHAPTER 3

Quadratic Inverse Eigenvalue Problem [QIEP]

3.1 Statement

Given a partial or complete eigenstructure of a matrix A, the problem of com-

puting the matrix from the given eigenstructure is called the inverse eigenvalue

problem.

Quadratic Inverse Eigenvalue Problem: The inverse eigenvalue problem for the quadratic pencil P(λ) is similarly defined. We will denote the problem by the symbol QIEP. Specifically, given partial or complete eigenvalue-eigenvector information, the QIEP is to find matrices M, C, and K such that the quadratic pencil P(λ) = λ^2 M + λC + K has the prescribed eigenstructure.

Inverse eigenvalue problems are just as important as direct eigenvalue problems. Indeed, as we will see in this dissertation, many practical applications give rise to the QIEP. For an account of the theory and applications of inverse eigenvalue problems, see the book by Chu and Golub [15].

The QIEP arising in the finite element model updating problem, which is considered in this dissertation, concerns modifying the matrices M, C, K from knowledge of only a partial spectrum and the associated eigenvectors. We will now briefly review some relevant results on this topic.


3.2 Unstructured QIEP from Fully Prescribed Eigenstructure

Given a set of 2n scalars λ_1, ..., λ_{2n} and 2n n-vectors φ_1, ..., φ_{2n}, the unstructured QIEP is the problem of finding three matrices M, C, and K, not necessarily symmetric or having any other specific properties, such that the quadratic pencil

P(λ) = λ^2 M + λC + K

has the numbers λ_1, ..., λ_{2n} as its eigenvalues and φ_1, ..., φ_{2n} as the corresponding eigenvectors. The problem is rather easy to solve.

Define the matrices Λ and Φ as in (2.1.4).

Theorem 3.1 ([36]) Assume that the matrix col(Φ, ΦΛ) is of full rank. Then a solution to the QIEP can be found as follows:

M is arbitrary (nonsingular),
C = −MΦΛ^2 Q_2,   (3.2.1)
K = −MΦΛ^2 Q_1,

where Q = (Q_1, Q_2), Q_i ∈ R^{2n×n}, is such that

Q col(Φ, ΦΛ) = I.

Proof. It is easy to see that

MΦΛ^2 + CΦΛ + KΦ = MΦΛ^2 (I − Q col(Φ, ΦΛ)) = 0.
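The construction of Theorem 3.1 is a one-liner in practice. A sketch with hypothetical prescribed data (random real eigenvalues and eigenvectors, and the arbitrary mass matrix taken as M = I):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

# Hypothetical prescribed eigendata: 2n eigenvalues and 2n eigenvectors.
lam = rng.standard_normal(2 * n)
Phi = rng.standard_normal((n, 2 * n))
L = np.diag(lam)

# Q is the inverse of col(Phi, Phi*Lambda); split it as Q = (Q1, Q2).
col = np.vstack([Phi, Phi @ L])          # 2n x 2n, full rank generically
Q = np.linalg.inv(col)
Q1, Q2 = Q[:, :n], Q[:, n:]

M = np.eye(n)                            # M is arbitrary nonsingular; take I
C = -M @ Phi @ L @ L @ Q2                # (3.2.1)
K = -M @ Phi @ L @ L @ Q1

# The prescribed data form eigenpairs of the constructed pencil:
R = M @ Phi @ L @ L + C @ Phi @ L + K @ Phi
```

Note that nothing forces the resulting C and K to be symmetric; that is precisely the structured problem considered later.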


In his work, Lancaster uses a more general notion of a matrix eigenpair, the so-called standard pairs and triples [36].

Definition 3.1 A pair of matrices (U,D), U ∈ Cn×2n, D ∈ C2n×2n is called a

standard pair if there exists nonsingular matrix S, such that U = ΦS−1, D = SΛS−1,

where (Φ, Λ) is a matrix eigenpair.

For example, a real matrix eigenpair is a particular case of a standard pair.

Definition 3.2 Three matrices U, D, V, with U ∈ C^{n×2n}, D ∈ C^{2n×2n}, and V ∈ C^{2n×n}, form a standard triple if (U, D) is a standard pair, UV = 0, and UDV = M^{-1}.

Thus, the matrices U and V generalize the notions of the matrices of right and left eigenvectors. It is easy to show that Theorem 3.1 holds for a standard pair. Thus, given a nonsingular matrix M and a standard pair (U, D), relation (3.2.1) uniquely determines the matrices C, K of the pencil (M, C, K). In other words, a standard triple (U, D, V) uniquely defines the associated pencil. The matrices (M, C, K) can also be defined in terms of the moments [49], defined by:

Γ_j = U D^j V,  j = 0, 1, ....   (3.2.2)

The moments are invariant under the choice of the triple.

Theorem 3.2 Let (U, D, V) be a standard triple and define the moments by (3.2.2). Then the corresponding unique pencil (M, C, K) can be constructed as follows:

M = Γ_1^{-1},  C = −MΓ_2 M,  K = −MΓ_3 M + CΓ_1 C.   (3.2.3)

Proof. Note that the formulas (3.2.1) obtained above hold for a standard pair (U, D) as well; i.e., (M, −MUD^2 Q_2, −MUD^2 Q_1) is the pencil associated with the standard pair (U, D), where M is arbitrary and col(U, UD)(Q_1, Q_2) = I.


Now we need to show that C = −MΓ_2 M = −MUD^2 VM = −MUD^2 Q_2 and K = −MΓ_3 M + CΓ_1 C = −MUD^2 Q_1. It is easy to see that VM = Q_2. Indeed,

[ U ; UD ] (Q_1, Q_2) = [ UQ_1  UQ_2 ; UDQ_1  UDQ_2 ] = I.

Thus UQ_2 = 0 and UDQ_2 = I. Postmultiplying by M^{-1} gives UDQ_2 M^{-1} = M^{-1} = UDV and UQ_2 M^{-1} = 0 = UV; since col(U, UD) is nonsingular, this forces Q_2 M^{-1} = V, i.e., VM = Q_2, which shows that C = −MΓ_2 M.

Now let us consider the formula for the matrix K = −MΓ_3 M + CΓ_1 C = −MUD^2 Q_1. Rearranging this expression (using Γ_1 = M^{-1}, Γ_3 = UD^3 V, C = −MUD^2 Q_2, and VM = Q_2), we have

−MUD^2 (DVM − Q_2 UD^2 Q_2) = −MUD^2 Q_1;

i.e., it is enough to show that DVM − Q_2 UD^2 Q_2 = Q_1. Premultiplying by U gives UDVM − UQ_2 UD^2 Q_2 = I − 0 = UQ_1, and premultiplying by UD gives UD^2 VM − UD^2 Q_2 = 0 = UDQ_1 (since UDQ_2 = I and VM = Q_2). The nonsingularity of col(U, UD) then completes the proof.

Consider the triple of matrices (Φ, Λ, Y), where (Φ, Λ) is an eigenpair and the 2n × n matrix Y satisfies

[ Φ ; ΦΛ ] Y = [ 0 ; M^{-1} ].   (3.2.4)

Note that (Φ, Λ, Y) is just a particular case of a standard triple. Thus, the result of Theorem 3.2 holds for the triple (Φ, Λ, Y).

Corollary 3.1 Given a matrix eigenpair (Φ, Λ) and a nonsingular matrix M, let Y be as defined above in (3.2.4). Define the moments by

Γ_j = ΦΛ^j Y,  j = 0, 1, ....   (3.2.5)

Then the corresponding unique pencil (M, C, K) can be constructed as follows:

M = Γ_1^{-1},  C = −MΓ_2 M,  K = −MΓ_3 M + CΓ_1 C.   (3.2.6)
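Corollary 3.1 can be verified by round-tripping a known pencil: compute its eigendata, form Y from (3.2.4) and the moments (3.2.5), and reconstruct. A sketch with a hypothetical random symmetric pencil:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
A0 = rng.standard_normal((n, n)); M = A0 @ A0.T + n * np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T
K = rng.standard_normal((n, n)); K = K + K.T

# Full eigendata via the companion matrix.
CR = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam, V = np.linalg.eig(CR)
Phi = V[:n, :]
L = np.diag(lam)

# Y from (3.2.4): col(Phi, Phi*Lambda) @ Y = [0; M^{-1}].
col = np.vstack([Phi, Phi @ L])
Y = np.linalg.solve(col, np.vstack([np.zeros((n, n)), np.linalg.inv(M)]))

# Moments (3.2.5) and the reconstruction (3.2.6).
G1 = Phi @ L @ Y
G2 = Phi @ L @ L @ Y
G3 = Phi @ L @ L @ L @ Y

M_rec = np.linalg.inv(G1)
C_rec = -M_rec @ G2 @ M_rec
K_rec = -M_rec @ G3 @ M_rec + C_rec @ G1 @ C_rec
```

Although the intermediate quantities are complex, the reconstructed matrices come out real (to rounding error) and equal to the originals, as the theory predicts for distinct, semisimple eigenvalues.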


In the previous section it was shown that the quadratic eigenvalue problem is equivalent to a standard or generalized eigenvalue problem of dimension 2n. Thus, well-studied techniques for the standard or generalized eigenvalue problem can be used to solve the quadratic problem. However, this is not the case for inverse problems: it is much harder to use algorithms for standard inverse eigenvalue problems to solve the quadratic inverse eigenvalue problem. Lancaster and Prells [63] have studied structure preserving equivalences and structure preserving similarities. The idea is to start with one system, say (M_0, C_0, K_0), and, implicitly, a complete set of corresponding spectral data, and to show how to generate isospectral pencils (M, C, K) which share the same set of eigenvalues (including their multiplicity structures), and to do this without explicit reference to eigenvalues and eigenvectors.

Definition 3.3 Let A − λB be a symmetric linearization of the pencil (2.1.1) as in (2.4.12). Let A′ − λB′ be obtained from A − λB by strict equivalence, i.e., A′ = EAF, B′ = EBF for some nonsingular E and F. Then this strict equivalence is said to be structure preserving if A′ and B′ have the same form as A and B, i.e.:

A′ = [ −K′  0 ; 0  M′ ],  B′ = [ C′  M′ ; M′  0 ],   (3.2.7)

and M′ is nonsingular.

In this case A′ − λB′ is a linearization of the pencil (M′, C′, K′). Observe that a structure preserving equivalence (SPE) can be obtained from a transformation of P(λ): if S and H are nonsingular, then the transformation SP(λ)H corresponds to an SPE with

E = [ S  0 ; 0  S ],  F = [ H  0 ; 0  H ].


Definition 3.4 Let C_R be the right companion matrix (2.4.11) of P(λ). The nonsingular 2n × 2n matrix S_R is called a (right) structure preserving similarity (SPS) if S_R^{-1} C_R S_R is again a (right) companion matrix.

These transformations are closely related.

Theorem 3.3 Two quadratic pencils are related by an SPE if and only if they are

related by an SPS.

Proof. Let A − λB = E(A_0 − λB_0)F; then F defines a right structure preserving similarity. Indeed,

B^{-1}A = F^{-1} B_0^{-1} E^{-1} E A_0 F = F^{-1} B_0^{-1} A_0 F.

Conversely, let S be a structure preserving similarity; then

A − λB = B(C_R − λI) = B(S^{-1} C_R^{(0)} S − λI) = BS^{-1}(C_R^{(0)} − λI)S = BS^{-1} B_0^{-1}(A_0 − λB_0)S.

In their paper [63], Prells and Lancaster obtained an expression for a structure preserving equivalence.

Theorem 3.4 Let P_0(λ) be a given matrix pencil, let A_0 − λB_0 be its symmetric linearization (2.4.12), and write C_0 = B_0^{-1} A_0. Consider a triple (X, C_0, Y) such that col(X, XC_0) is nonsingular, XY = 0, and XC_0 Y is nonsingular.

Then the 2n × 2n matrices (B_0 Y, A_0 Y) and [ X ; XC_0 ] are nonsingular, and

E := (Y, C_0 Y)^{-1} B_0^{-1},  F := [ X ; XC_0 ]^{-1}

determine an SPE of the given pencil.

Conversely, every pencil P(λ) isospectral with P_0(λ) can be obtained from some choice of X and Y in a standard triple (X, C_0, Y).


Proposition 3.1 Let X, C_0, Y be as defined in the theorem above. Then the corresponding pencil has Hermitian coefficients if Y = B_0^{-1} X*.

The condition X* = B_0 Y can be rewritten in the following way:

X* = B_0 [ X ; XC_0 ]^{-1} [ 0 ; M^{-1} ],

and

[ X ; XC_0 ] B_0^{-1} X* = [ 0 ; M^{-1} ].

Thus, Hermitian coefficients are generated provided the two following conditions are satisfied:

X B_0^{-1} X* = 0,  X C_0 B_0^{-1} X* = M^{-1}.

Even though the results presented above give some insight into the problem, from a computational point of view it is more viable to use matrix eigenpairs instead of triples (X, C_0, Y). Some computational schemes for the inverse problem for pencils with nonreal spectrum, based on matrix eigenpairs, were presented by Lancaster and Prells in [48]. The ideas introduced in [48] were then extended in [51] from the restriction to systems with purely nonreal spectrum to the full range of real and nonreal spectrum, but with the limitation to semisimple spectrum. From now on it is assumed that all the eigenvalues are semisimple. Since the coefficient matrices are real, the eigenvalues are real or appear in complex conjugate pairs. Assume there are 2r real eigenvalues (0 ≤ r ≤ n) and 2(n − r) complex eigenvalues U_1 ± iW, with W > 0. Let the real eigenvalues be distributed between two r × r real diagonal matrices U_2 and U_3; the eigenvalue and eigenvector matrices are then

Λ = diag(U_1 + iW, U_2, U_3, U_1 − iW),  Φ = (Φ_c, Φ_{R1}, Φ_{R2}, Φ̄_c).   (3.2.8)


Defining Ω_1^2 = U_1^2 + W^2, it is easily seen that there is an associated pencil

P_0(λ) = λ^2 I − 2λ [ U_1  0 ; 0  (1/2)(U_2 + U_3) ] + [ Ω_1^2  0 ; 0  U_2 U_3 ],   (3.2.9)

which is a direct sum of two diagonal pencils:

λ^2 I_{n−r} − 2λU_1 + Ω_1^2 = (λI_{n−r} − (U_1 + iW))(λI_{n−r} − (U_1 − iW))   (3.2.10)

and

λ^2 I_r − λ(U_2 + U_3) + U_2 U_3 = (λI_r − U_2)(λI_r − U_3).   (3.2.11)

Define

U = [ U_1  0 ; 0  (1/2)(U_2 + U_3) ]  and  Ω^2 = [ Ω_1^2  0 ; 0  U_2 U_3 ];

then (3.2.9) takes the form

P_0(λ) = λ^2 I − 2λU + Ω^2.   (3.2.12)

The right companion matrix of this pencil is

C_0 = [ 0  I ; −Ω^2  2U ].   (3.2.13)

Lemma 3.1 Let the real eigenvalues be prescribed in such a way that

det(U_2 − U_3) ≠ 0;

then the matrix Z, defined by

Z = [ U_1 − iW  0  0  −I ; 0  −U_3  I  0 ; 0  −U_2  −I  0 ; U_1 + iW  0  0  I ],   (3.2.14)

is nonsingular and transforms the companion matrix C_0 into Λ, i.e.,

Z C_0 Z^{-1} = Λ.


Using the matrix Z defined in Lemma 3.1, it is possible to establish a one-to-one correspondence between matrix eigenpairs and structure preserving similarities.

Theorem 3.5 Let Λ be a diagonal matrix of the form (3.2.8) for which det(U_2 − U_3) ≠ 0.

(a) If Φ is a matrix of the form Φ = (Φ_c, Φ_{R1}, Φ_{R2}, Φ̄_c), where Φ_c ∈ C^{n×(n−r)} and Φ_{R1}, Φ_{R2} ∈ R^{n×r}, i.e., (Φ, Λ) form a matrix eigenpair, then with Z defined by (3.2.14),

V = [ Φ ; ΦΛ ] Z   (3.2.15)

defines an SPS of C_0.

(b) Conversely, if V defines an SPS of C_0 and Z is defined by (3.2.14), then there exists a Φ of the form defined above such that (3.2.15) holds.

Observe that the fact that the eigenvalues and corresponding eigenvectors are real or appear in complex conjugate pairs guarantees that the resulting isospectral family has real coefficient matrices.

3.3 Structured QIEP with Fully Prescribed Eigenstructure

As we have seen from the mass-spring example in the previous chapter, the matrices M, C, and K have some nice structural properties (realness, symmetry, positive definiteness, bandedness). It is a typical situation that an application giving rise to a QIEP requires the reconstructed matrices to have some of these structural properties. Thus, it is required that the solution of the associated inverse eigenvalue problem satisfy the structural constraints, and the structured QIEPs are therefore the ones of practical interest. The finite element model updating (FEMU) problem, to be defined in the next section, is one such application. As we will see


later, the updated matrices must be symmetric and positive definite (semidefinite) and must have a specific sparsity pattern. Unfortunately, these requirements cannot always be met; certain conditions must be satisfied. For example, the symmetry of the matrices M, C, K implies that there are (3/2)n(n + 1) unknowns, while each eigenpair (φ, λ) contributes a system of n equations. Thus, if a k-eigenpair (Φ, Λ) is given with k ≤ (3/2)(n + 1) (or k ≤ n + 1 if M is given), then there exists a symmetric solution to the inverse problem. For the case k > n + 1, a result giving a necessary and sufficient condition is presented in the following section. The existence of a solution with the additional requirement that the matrices be positive definite is a more complicated issue; a simple counting of variables and equations will not give the answer. In [16] it was shown that, given a k-matrix eigenpair (Φ, Λ) with distinct eigenvalues and linearly independent eigenvectors, it is always possible to find real symmetric matrices M and K solving the QIEP, with M positive definite and K positive semidefinite, provided k ≤ n (see [16]). Details will be given later.

There now exist several results on reconstructing a structured quadratic matrix pencil from spectral data [1, 2, 3, 13, 16, 21, 28, 30, 34, 35, 45, 47, 49, 51, 64, 65, 66, 72]. The solution to this type of problem is typically not unique. Several of the current results on this topic are summarized below.

First, we consider the case of symmetric structure. It was shown in Corollary 3.1 of the previous section how the matrices C, K can be constructed in terms of moments when Φ, Λ, and M are given. A more interesting question is how to reconstruct matrices C and K which are symmetric.

Theorem 3.6 ([49]) Let the system (M, C, K) be generated by the triple (Φ, Λ, Y). Then the moments Γ_1, Γ_2, and Γ_3 in (3.2.5) are all real, Hermitian, or real symmetric according as all of M, C, K are real, Hermitian, or real symmetric, respectively.


It is now necessary to introduce the important notion of the sign characteristic associated with a real eigenvalue.

Definition 3.5 Let (φ, λ) be a semisimple real eigenpair of the quadratic pencil P(λ) = λ^2 M + λC + K. Consider the following product:

φ^T P′(λ)φ = 2λ φ^T Mφ + φ^T Cφ = εκ^2;

then ε = ±1 defines the sign characteristic of λ.

Definition 3.6 The eigenvector φ is called normalized, when κ = 1.

Now assume that the real eigenvalues in (3.2.8) are distributed in such a way that all eigenvalues with positive sign characteristic appear in U_2 and those with negative sign characteristic in U_3. The sign characteristics cannot be assigned arbitrarily; for a semisimple pencil we have

∑_{j=1}^{2r} ε_j = 0

(see [36]). Define the permutation matrix

P = [ 0  0  0  I_{n−r} ; 0  I_r  0  0 ; 0  0  −I_r  0 ; I_{n−r}  0  0  0 ].   (3.3.16)

Note that with Λ and Φ in the form (3.2.8), (PΛ)* = PΛ and PΦ* = Φ^T. Lancaster and Prells [48] have proved the following result:

Theorem 3.7 Given a nonsingular symmetric matrix M, the matrix eigenpair (Φ, Λ) of the form (3.2.8) determines real symmetric matrices C and K if

[ Φ ; ΦΛ ] P Φ* = [ 0 ; M^{-1} ].

Conversely, the eigenvectors of a symmetric pencil can be normalized in such a way that the above relation holds.


Computational schemes for constructing a symmetric pencil, based on the result provided by Theorem 3.7, have been investigated in [51].

Now, if we assume that the spectral data are consistent with a real symmetric system, the question is how we can ensure the positive definiteness (semidefiniteness) of the matrices (M, C, K). To answer this question, observe that

−K^{-1} = Φ(Λ^{-1}P)Φ* = Γ_{−1}.   (3.3.17)

This follows immediately from the resolvent form of P(λ):

P(λ)^{-1} = Φ(λI − Λ)^{-1} Y.

Theorem 3.8 If Λ is stable (all eigenvalues lie in the open left half-plane), Γ_2 ≤ 0, Γ_1 and Γ_{−1} are nonsingular, and M, C, and K are as defined in Corollary 3.1, then M > 0, C ≥ 0, and K > 0.

Proof. The stability of Λ implies that M > 0 and K > 0. Then C ≥ 0 follows from Γ_2 ≤ 0.

3.4 Unstructured QIEP with Partially Prescribed Eigenstructure

In applications it is often desirable to reconstruct a matrix pencil when only a portion of the spectrum is given, for example, when the eigenvalues which need to be assigned come from measured data. One of the constraints that arises in practical applications is that, for complicated large systems, only a few eigenvalues and eigenvectors, especially the smallest ones, can be computed. Similarly, because of hardware limitations, it is possible to measure in a laboratory or in an experiment with a real-life structure only the few eigenvalues and eigenvectors that correspond to the lowest natural frequencies of a vibrating structure. These


constraints lead us to consider QIEPs which can be solved with a partially prescribed eigenstructure. A few results now exist, and we will state some of them in this section.

In a series of papers [19, 20, 22, 23, 24, 68], Datta, Ram, Elhay, and Sarkissian have in the last few years solved several unstructured QIEPs with partially prescribed spectra. These problems arise naturally in controlling dangerous vibrations, such as resonance, in mechanical structures, including buildings, bridges, and highways. They include the quadratic partial eigenvalue assignment and quadratic partial eigenstructure assignment problems, denoted, respectively, by QPEVAP and QPEAP. The QPEVAP concerns reassigning only a small part of the spectrum of the pencil (M, C, K), by means of feedback control matrices F and G, to another user-chosen set, while keeping the remaining large part of the spectrum and the associated eigenvectors unchanged. The latter phenomenon is known as no spill-over in the vibration engineering literature. The QPEAP is similarly defined: here both a small part of the spectrum and the associated eigenvectors are reassigned to a user-chosen set of eigenvalues and eigenvectors by feedback control.

Mathematically, the problems are defined as follows:

QPEVAP: Given the pencil (M, C, K); a set {µ_1, ..., µ_k}, closed under complex conjugation; and a matrix B, find matrices F and G so that the pencil (M, C − BF^T, K − BG^T) has µ_1, ..., µ_k as its first k eigenvalues while the remaining 2n − k eigenvalues and corresponding eigenvectors are the same as those of the original pencil (M, C, K).

QPEAP: Given a pencil (M, C, K); a set {µ_1, ..., µ_k} and a set of vectors {y_1, ..., y_k}, both closed under complex conjugation; and a matrix B, find matrices F and G so that the pencil (M, C − BF^T, K − BG^T) has (µ_1, y_1), ..., (µ_k, y_k) as its first k eigenpairs while the remaining 2n − k eigenpairs are the same as those of the original pencil (M, C, K).

A practical solution of the QPEVAP [23] has been obtained in the quadratic setting itself, without any transformation to a standard first-order state-space form, thus avoiding both the inversion of a possibly ill-conditioned mass matrix and the destruction of exploitable structural properties, including symmetry, positive definiteness, bandedness, sparsity, etc.

The other important features of the solution proposed in [23] are: (i) only the knowledge of the small number of eigenvalues and eigenvectors that need to be reassigned suffices for implementation; (ii) no spill-over is established by means of mathematically proven results; and (iii) no model reduction is performed, no matter how large the model may be.

We state the solution of QPEVAP here without any proof.

Theorem 3.9 Let the matrix B be of full rank. Let the scalars µ_1, ..., µ_k and the eigenvalues of the pencil (M, C, K) be such that the sets {λ_1, ..., λ_k}, {λ_{k+1}, ..., λ_{2n}}, and {µ_1, ..., µ_k} are pairwise disjoint and each set is closed under complex conjugation. Let Y = (y_1, ..., y_k) be the matrix of left eigenvectors associated with the eigenvalues λ_1, ..., λ_k. Let the pair (P(λ), B) be partially controllable with respect to λ_1, ..., λ_k, i.e., y_i* B ≠ 0, i = 1, ..., k. Let Γ = (γ_1, ..., γ_k) be a matrix such that

γ_j = γ̄_i whenever µ_j = µ̄_i.

Set Λ_1 = diag(λ_1, ..., λ_k) and Σ = diag(µ_1, ..., µ_k). Let Z be the unique nonsingular solution of the Sylvester equation:

Λ_1 Z − ZΣ = −Y* BΓ.

Let the real feedback matrices be given by

F = ΦY* M  and  G = Φ(Λ_1 Y* M + Y* C),


where Φ satisfies the linear system ΦZ = Γ. Then the closed-loop pencil (M, C − BF^T, K − BG^T) will have {µ_1, ..., µ_k, λ_{k+1}, ..., λ_{2n}} as its eigenvalues, and the eigenvectors corresponding to the eigenvalues λ_{k+1}, ..., λ_{2n} will remain unchanged.
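The computational core of this recipe is the small k × k Sylvester equation for Z, which standard software solves directly. A sketch with hypothetical data (the product Y*BΓ is replaced by a random stand-in matrix; SciPy's solve_sylvester solves AX + XB = Q):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(7)
k = 2

# Hypothetical data: open-loop eigenvalues to be moved, and their targets.
Lam1  = np.diag([1.5, 2.5])          # Lambda_1 (eigenvalues to reassign)
Sigma = np.diag([-1.0, -2.0])        # Sigma (desired eigenvalues)
YBG = rng.standard_normal((k, k))    # stand-in for the product Y* B Gamma

# Lam1 @ Z - Z @ Sigma = -Y* B Gamma; uniquely solvable because the
# spectra of Lambda_1 and Sigma are disjoint.
Z = solve_sylvester(Lam1, -Sigma, -YBG)

residual = Lam1 @ Z - Z @ Sigma + YBG
```

With Z in hand, the theorem reduces the feedback computation to one more small linear solve, ΦZ = Γ, followed by the matrix products for F and G.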

Notes:

• The feedback matrices given by Theorem 3.9 are parameterized by the matrix Γ, thus yielding a family of feedback matrices for different choices of Γ. Exploiting this parametric nature of the solution, Brahma and Datta [7, 8] have recently developed algorithms for minimizing the feedback norms and the condition number of the closed-loop eigenvector matrix. These algorithms constitute a numerically robust solution of the QPEVAP.

• Similar expressions for the feedback matrices of the QPEAP have been derived by Datta, Elhay, Ram, and Sarkissian [20].

3.5 Structured QIEP with Partially Prescribed Eigenstructure

In this section, we describe some results on structured QIEPs which are also motivated by applications in vibration engineering.

The following theorem, proved by Chu, Kuo, and Lin [16], shows how to construct a symmetric pencil with the mass and stiffness matrices being positive definite and semidefinite, respectively, when a partial eigenstructure is prescribed. The eigenstructure of the resulting pencil is also analyzed, and some attention is paid to monic systems (M = I). Recall that by (X, T) we denote a real matrix eigenpair; i.e., T is the real-valued representation of the eigenvalue matrix and X is the real-valued representation of the eigenvector matrix. Instead of considering a generalized inverse of col(X, XT) to obtain the solution of the inverse problem as in (3.2.1), in [16]


the authors consider the null space of col(X, XT)ᵀ, where (X, T) is a k-real matrix eigenpair, X ∈ R^{n×k} and T ∈ R^{k×k} (k ≤ n).

Theorem 3.10 ([16]) Let

(Q1 Q2) col(X, XT) = 0. (3.5.18)

Then a solution to the quadratic inverse eigenvalue problem, with the matrices M and K symmetric positive semidefinite, is given as follows:

M = Q2ᵀQ2,
C = Q2ᵀQ1 + Q1ᵀQ2, (3.5.19)
K = Q1ᵀQ1.

It is shown in [16] that the resulting pencil (3.5.19) is regular if X has full column rank, and singular otherwise. The remaining eigenstructure of the pencil is described in the following theorems.

Theorem 3.11 Let (X, T ) be a k-real matrix eigenpair and let the pencil P (λ) =

λ2M +λC +K be defined by (3.5.19). Assume, that X is of full column rank, then:

1. If k = n, then P (λ) has double eigenvalue λj for each λj ∈ σ(T ).

2. If k < n, then P (λ) has double eigenvalue λj for each λj ∈ σ(T ). The re-

maining 2(n− k) eigenvalues of P (λ) are all complex conjugate with nonzero

imaginary parts. In addition, if the matrices Q1 and Q2 are chosen from an

orthogonal basis of the null space of col(X, XT )T , then the remaining 2(n−k)

eigenvalues are only ±i with corresponding eigenvectors z±iz, where XT z = 0.

Thus, when k = n all eigenvalues are accounted for. It is also possible to calculate the geometric multiplicity of the double roots characterized in the previous theorem.


Theorem 3.12 Let (X, T ) be a k-real matrix eigenpair and let the pencil P (λ) =

λ2M +λC +K be defined by (3.5.19). Assume, that X is of full column rank, then:

1. Each real-valued λj ∈ σ(T ) has an elementary divisor of degree 2; that is, the

dimension of the null space N (P (λj)) is 1.

2. The dimension of N (P (λj)) of a complex-valued eigenvalue λj ∈ σ(T ) is gener-

ically 1. That is, the pairs of matrices (X,T ) of which a complex-valued eigen-

value has a linear elementary divisor form a measure zero set.

Another numerical method for constructing matrices M, C, and K, with M positive definite, when a k-real matrix eigenpair (X, T) is given (k ≤ n), is proposed in [45].

Theorem 3.13 ([45]) Given a k-real matrix eigenpair (X, T), let

X = Q col(R, 0) (3.5.20)

be the QR decomposition of X, and let S = R T R⁻¹, where T is the eigenvalue matrix. Then the symmetric solution to the quadratic inverse eigenvalue problem with a positive definite mass matrix is given as follows:

M = Q ( M11  M12 ) Qᵀ,    C = Q ( C11  C12 ) Qᵀ,    K = Q ( K11  K12 ) Qᵀ,    (3.5.21)
      ( M21  M22 )              ( C21  C22 )               ( K21  K22 )

where

1. M is an arbitrary symmetric positive definite matrix,

2. C22 = C22ᵀ, K22 = K22ᵀ ∈ R^{(n−k)×(n−k)} are arbitrary symmetric matrices,

3. C21 = C12ᵀ ∈ R^{(n−k)×k} is arbitrary,

4. C11 = −(M11S + SᵀM11 + R⁻ᵀDR⁻¹) ∈ R^{k×k},

5. K11 = SᵀM11S + R⁻ᵀD T R⁻¹ ∈ R^{k×k},

6. K21 = −(M21S² + C21S) ∈ R^{(n−k)×k},

with

D = diag( (σ1 η1; η1 −σ1), ..., (σl ηl; ηl −σl), σ2l+1, ..., σk ),

and the σ's and η's are arbitrary real numbers. Further, it is shown in [45] that the matrices M11, M21, and C21 can be chosen in such a way that the resulting matrix K is positive definite.
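The construction of Theorem 3.13 can be sketched numerically for the simplest case in which all k prescribed eigenvalues are real, so that T is diagonal and D reduces to diag(σ1, ..., σk). All free choices below (the dimensions, M, D, C21, C22, K22) are made-up illustrative values, not data from [45]:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
X = rng.standard_normal((n, k))              # prescribed eigenvectors
T = np.diag([-0.5, -1.0, -3.0])              # prescribed real eigenvalues

Q, R0 = np.linalg.qr(X, mode='complete')     # X = Q col(R, 0)
R = R0[:k, :k]
Rinv = np.linalg.inv(R)
S = R @ T @ Rinv                             # S = R T R^{-1}

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)                  # arbitrary symmetric positive definite M
Mh = Q.T @ M @ Q                             # blocks of M in the Q basis
M11, M21 = Mh[:k, :k], Mh[k:, :k]

D = np.diag(rng.standard_normal(k))          # D = diag(sigma_1, ..., sigma_k)
C21 = rng.standard_normal((n - k, k))        # arbitrary
C22 = np.zeros((n - k, n - k))               # arbitrary symmetric
K22 = np.eye(n - k)                          # arbitrary symmetric

C11 = -(M11 @ S + S.T @ M11 + Rinv.T @ D @ Rinv)        # item 4
K11 = S.T @ M11 @ S + Rinv.T @ D @ T @ Rinv             # item 5
K21 = -(M21 @ S @ S + C21 @ S)                          # item 6

C = Q @ np.block([[C11, C21.T], [C21, C22]]) @ Q.T
K = Q @ np.block([[K11, K21.T], [K21, K22]]) @ Q.T

# (X, T) is an eigenpair of the symmetric pencil with positive definite M:
residual = M @ X @ T @ T + C @ X @ T + K @ X
print(np.linalg.norm(residual))              # at the level of rounding error
```

The key identity is SR = RT, which makes both block rows of the transformed pencil equation vanish; symmetry of C11 and K11 follows because D and DT are symmetric here.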

3.5.1 Pencils with Banded Coefficient Matrices

In [66], Ram and Elhay presented a method that constructs a symmetric tridiagonal quadratic pencil with its 2n eigenvalues and the 2n − 2 eigenvalues of its (n − 1)-dimensional leading principal subpencil prescribed. The mass matrix M is assumed to be the identity matrix, and

C = ( α1  β1            )        K = ( γ1  δ1            )
    ( β1  α2  β2        )            ( δ1  γ2  δ2        )
    (     β2  α3  ⋱     )            (     δ2  γ3  ⋱     )
    (         ⋱   ⋱ βn−1)            (         ⋱   ⋱ δn−1)
    (          βn−1  αn ),           (          δn−1  γn ),   (3.5.22)

where all unshown entries are zero.

Theorem 3.14 Given two sets of distinct numbers {λj}_{j=1}^{2n} and {µj}_{j=1}^{2n−2}, define a polynomial p(λ) = ∑_{i=0}^{2n−2} ciλ^i such that

−p(µj) = ∏_{i=1}^{2n} (µj − λi), j = 1, ..., 2n − 2,

and λ = r is a double root of p(λ) for some r; in particular,

(d/dλ)p(r) = 0.


Then the last row of the resulting pencil can be identified as:

1. αn = ∑_{i=1}^{2n−2} µi − ∑_{i=1}^{2n} λi,

2. β²n−1 = c2n−2,

3. δn−1 = −r βn−1,

4. γn = [∏_{i=1}^{2n}(r − λi)] / [∏_{i=1}^{2n−2}(r − µi)] − r² − r αn.
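These identities can be checked numerically by starting from a known tridiagonal pencil, computing its spectrum and that of its leading principal subpencil, and recovering the last row. This is only a sketch; the random α, β, γ, δ data are made up, and r is taken from item 3:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
alpha, beta = rng.uniform(1, 2, n), rng.uniform(1, 2, n - 1)
gamma, delta = rng.uniform(1, 2, n), rng.uniform(1, 2, n - 1)

def tridiag(d, e):
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

def quad_eigvals(C, K):
    # eigenvalues of lambda^2 I + lambda C + K via companion linearization
    m = C.shape[0]
    A = np.block([[np.zeros((m, m)), np.eye(m)], [-K, -C]])
    return np.linalg.eigvals(A)

C, K = tridiag(alpha, beta), tridiag(gamma, delta)
lam = quad_eigvals(C, K)                       # the 2n prescribed eigenvalues
mu = quad_eigvals(C[:-1, :-1], K[:-1, :-1])    # the 2n - 2 subpencil eigenvalues

r = -delta[-1] / beta[-1]                      # the double root (item 3)
alpha_n = (mu.sum() - lam.sum()).real          # item 1
gamma_n = (np.prod(r - lam) / np.prod(r - mu)).real - r**2 - r * alpha_n  # item 4
print(alpha_n - alpha[-1], gamma_n - gamma[-1])
```

Both printed differences should be at the level of rounding error: item 1 is the trace relation between the pencil and its subpencil, and item 4 follows from the three-term determinant recurrence evaluated at r, where the off-diagonal term vanishes.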

Knowing the last row, it is possible to determine the eigenvalues of the leading principal submatrices of dimension n − 2 of P(λ). Thus, the method prescribed in the theorem can be reapplied to the deflated problem of dimension n − 1. The authors presented a numerical algorithm based on this approach.

Another numerical method for solving the inverse eigenvalue problem associated with a serially linked mass-spring system (2.2.6) was presented in [28]. Note that in (2.2.6) the damping and stiffness matrices are linear functions of the α's and β's, respectively; i.e., C = C(α), K = K(β). A Newton iteration-based method was presented in [66] to find a zero of the function:

f(α, β) = col( f1(α, β), f2(α, β), ..., f2n(α, β) ),

where α = (α1, ..., αn), β = (β1, ..., βn), and

fi(α, β) = det(λi²M + λiC(α) + K(β)).

Recently, two more approaches to solving the tridiagonal inverse problem were presented in [2, 13]. In his work [2], Bai considers the case of reconstructing a monic quadratic


tridiagonal pencil when four eigenpairs {(φi, λi)}_{i=1}^{4} are given. In this case,

ΦΛ² + CΦΛ + KΦ = 0

is a system of 4n equations, where (Φ, Λ) is a 4-real matrix eigenpair, and can be rewritten as the following linear system:

Ay = g.

Note that there are 4n equations and 4n − 2 unknowns, namely {αi}_{i=1}^{n}, {βi}_{i=1}^{n−1}, {γi}_{i=1}^{n}, and {δi}_{i=1}^{n−1}. Results on the existence and uniqueness of the solution to the problem are provided in [2].

In a manner similar to [2], the problem of reconstructing the mass-spring system (2.2.6) from two eigenpairs is considered in [13]. Note that there are 3n physical parameters, namely {mi}_{i=1}^{n}, {αi}_{i=1}^{n}, and {βi}_{i=1}^{n}; thus, specifying more than three eigenpairs will make the problem over-determined. In [13], the authors provide a condition under which the problem has a solution with positive coefficients m's, α's, and β's. Also presented in [13] is a numerical algorithm which either constructs the masses m1, ..., mn, the damping constants α1, ..., αn, and the spring constants β1, ..., βn, all positive and with mi ≤ 1, for the pencil λ²M + λC + K with the prescribed eigenpairs, where M, C, and K have the structure specified in (2.2.6), or determines that no such system exists.

3.5.2 An Application of Structured QIEP: Finite Element Model Updating

The model updating problem can be viewed as a QIEP. Since in a modal-based approach the matrices M, C, and K are assumed to be explicitly given, the problem here is to modify these matrices, rather than construct them de novo, using a given set of self-conjugate numbers and vectors, so that the updated model with the modified matrices contains the given numbers in its spectrum with the given


vectors as the corresponding eigenvectors, and the structure of the original model is

preserved.

The problem routinely arises in vibration industries, including automobiles, air-

and spacecraft. A properly updated model can be used with confidence for future

design and analysis.

Because of its practical importance, the problem has been widely studied, both

by mathematicians and practicing and academic engineers; as a result there now

exists a voluminous body of work on the solutions.

For a discussion of these methods, see the authoritative book [33] on the subject. For other recent methods, see [10, 11, 52].

In [10, 11, 52], the authors have solved the model updating problem with an additional constraint that the eigenvalues and eigenvectors of the original model which are not affected by updating remain unchanged. This is known as no spill-over in the engineering literature. The no spill-over constraint guarantees that “no spurious modes will be introduced into the frequency range of interest,” which is a major concern for engineers. However, there are some philosophical differences of opinion among engineers about no spill-over.

Model updating problems are usually formulated as constrained optimization problems. Since damping is hard to estimate, very often in engineering practice the model to be updated is considered to be undamped. In such cases, it is possible to give an explicit solution; see the book [33] for a discussion of such existing methods. However, in the case of a damped model, the associated minimization problem is usually a nonlinear constrained minimization problem, and numerical optimization techniques need to be used.

In the following section we will include a brief discussion on both unconstrained


and constrained minimization techniques. Note that a constrained minimization problem may be cast as an unconstrained minimization problem. For details of the numerical optimization methods, see the book by Nocedal and Wright [60].


CHAPTER 4

Optimization

In this chapter, we very briefly review some of the concepts and numerical meth-

ods for optimization. Some of these concepts and techniques will be used later in

the dissertation. For details, see the book by Nocedal and Wright [60].

Many inverse eigenvalue problems are formulated as optimization problems. In several cases, it is possible to obtain a closed-form solution. For example, closed-form solutions of some of the model updating problems can be obtained using the Lagrange multiplier formalism [32, 33]. However, in most cases, the use of numerical optimization methods is necessary [60]. In fact, optimization algorithms were used to obtain the solutions to both of the problems considered in Chapter 5 of this dissertation.

4.1 Unconstrained Optimization

First, let us review the simpler case of an unconstrained optimization problem. The mathematical formulation is

min_{x∈R^n} f(x), (4.1.1)

where f : Rn → R is a smooth function. The following theorems give necessary and

sufficient conditions for optimality of x∗ (for proofs see [60]).

Theorem 4.1 (First-Order Necessary Conditions) If x∗ is a local minimizer

and f is continuously differentiable in an open neighborhood of x∗, then ∇f(x∗) = 0.


Theorem 4.2 (Second-Order Necessary Conditions) If x∗ is a local minimizer and ∇²f is continuous in an open neighborhood of x∗, then ∇f(x∗) = 0 and ∇²f(x∗) is positive semidefinite.

Definition 4.1 A point x∗ is a strict local minimizer (also called a strong local minimizer) if there is a neighborhood N of x∗ such that f(x∗) < f(x) for all x ∈ N with x ≠ x∗.

Theorem 4.3 (Second-Order Sufficient Conditions) Suppose that ∇2f is con-

tinuous in an open neighborhood of x∗, and that ∇f(x∗) = 0 and ∇2f(x∗) is positive

definite. Then x∗ is a strict local minimizer of f .
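These optimality conditions can be illustrated on a made-up quadratic example (the function and its stationary point below are purely illustrative):

```python
import numpy as np

# f(x) = (x1 - 1)^2 + x1*x2 + x2^2 is smooth with constant Hessian.
grad = lambda x: np.array([2.0 * (x[0] - 1.0) + x[1], x[0] + 2.0 * x[1]])
hess = np.array([[2.0, 1.0], [1.0, 2.0]])

x_star = np.array([4.0 / 3.0, -2.0 / 3.0])   # the unique stationary point of f

print(np.linalg.norm(grad(x_star)))          # ~0 (first-order condition)
print(np.linalg.eigvalsh(hess))              # eigenvalues 1 and 3: positive
                                             # definite Hessian, so x* is a
                                             # strict local minimizer (Thm 4.3)
```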

4.2 Constrained Optimization

This section reviews the constrained optimization problem.

We begin by reviewing the concept of Lagrange multipliers, which is a useful tool

to treat constrained optimization problems. A general formulation of a constrained

optimization problem is

min_{x∈R^n}  f(x)

s.t.  ci(x) = 0, i ∈ E, (4.2.2)
      ci(x) ≥ 0, i ∈ I,

where f and the ci's are smooth real-valued functions on R^n, and E and I are finite sets of indices. The function f is called the objective function, and ci, i ∈ E and ci, i ∈ I are the equality and inequality constraints, respectively. The set of points which satisfy the constraints,

Ω = {x | ci(x) = 0, i ∈ E; ci(x) ≥ 0, i ∈ I},


is called the feasible set, and the constrained optimization problem can be formulated as follows:

min_{x∈Ω} f(x).

Definition 4.2 A point x∗ is a global minimizer if f(x∗) ≤ f(x), for all x ∈ Ω.

Definition 4.3 A point x∗ is a local minimizer if f(x∗) ≤ f(x), for all x ∈ N (x∗),

some neighborhood of x∗.

We illustrate the concept with a simple example of finding an extremum of f(x) subject to one equality constraint. In Figure 4.1, we see a graph of the constraint equation c1(x) = 0 and a family of level curves f(x) = s, which cover a part of the space. Moving along the curve c1 = 0, we cross curves from the family f = s, and the parameter s changes monotonically. The solution to the problem is expected to be at a point where the direction of change of s reverses. From Figure 4.1, we can see that this happens at a point where a curve f = s is tangent to the curve c1 = 0. This means that the two curves c1 = 0 and f = s have a common tangent line; thus, the normal vectors are collinear, i.e.,

∇f = λ1∇c1.

By introducing the Lagrangian function:

L(x, λ1) = f(x)− λ1c1(x),

we can restate the necessary optimality condition as follows: at the solution x∗, there is a scalar λ∗1 such that

∇xL(x∗, λ∗1) = ∇f(x∗) − λ∗1∇c1(x∗) = 0.

The scalar λ1 is called a Lagrange multiplier. The optimality condition for a problem


Figure 4.1: Constrained optimization problem: the level curves f(x) = s and the constraint curve c1(x) = 0.

with one inequality constraint can be formulated in a similar way, with a complementarity condition:

∇xL(x∗, λ∗1) = 0, for some λ∗1 ≥ 0.

It is also required that

λ∗1c1(x∗) = 0.

Thus, if x∗ ∈ Ω is a local solution of (4.2.2), then there is a Lagrange multiplier

vector λ∗, with components λ∗i , i ∈ E ⋃ I, such that the following conditions are


satisfied:

∇xL(x∗, λ∗) = 0,
ci(x∗) = 0, for all i ∈ E,
ci(x∗) ≥ 0, for all i ∈ I, (4.2.3)
λ∗i ≥ 0, for all i ∈ I,
λ∗i ci(x∗) = 0, for all i ∈ E ∪ I.

The conditions (4.2.3) are usually called the Karush-Kuhn-Tucker conditions, or KKT conditions for short.
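A tiny concrete instance can make these conditions tangible (a made-up problem with one equality constraint; the numbers are easy to check by hand):

```python
import numpy as np

# minimize f(x) = x1 + x2  subject to  c1(x) = 1 - x1^2 - x2^2 = 0
grad_f = lambda x: np.array([1.0, 1.0])
c1 = lambda x: 1.0 - x[0]**2 - x[1]**2
grad_c1 = lambda x: np.array([-2.0 * x[0], -2.0 * x[1]])

x_star = np.array([-1.0, -1.0]) / np.sqrt(2.0)   # candidate solution
lam_star = 1.0 / np.sqrt(2.0)                     # candidate multiplier

# Stationarity of the Lagrangian and feasibility; there are no inequality
# constraints here, so the sign and complementarity conditions are vacuous.
grad_L = grad_f(x_star) - lam_star * grad_c1(x_star)
print(np.linalg.norm(grad_L), c1(x_star))         # both ~0
```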

Definition 4.4 At a feasible point x, the inequality constraint i ∈ I is said to be

active if ci(x) = 0 and inactive if the strict inequality ci(x) > 0 is satisfied.

Definition 4.5 The active set A(x) at any feasible x is the union of the set E with the indices of the active inequality constraints; that is,

A(x) = E ∪ {i ∈ I | ci(x) = 0}.

Definition 4.6 Given the point x∗ and the active set A(x∗) defined above, we say

that the linear independence constraint qualification (LICQ) holds if the set of active constraint gradients {∇ci(x∗), i ∈ A(x∗)} is linearly independent.

Definition 4.7 Given a feasible point x∗, a sequence {zk}_{k=0}^{∞} with zk ∈ R^n is a feasible sequence if the following properties hold:

1. zk ≠ x∗ for all k;

2. lim_{k→∞} zk = x∗;

3. zk is feasible for all sufficiently large k.


Definition 4.8 A limit point of the sequence

(zk − x∗) / ‖zk − x∗‖

is called a limiting direction of the feasible sequence {zk}_{k=0}^{∞}.

The sequence (zk − x∗)/‖zk − x∗‖ lies on the surface of a compact set (the unit sphere), and thus has at least one limit point.

Lemma 4.1 The following two statements hold.

1. If d ∈ R^n is a limiting direction of a feasible sequence, then

dᵀ∇c∗i = 0, for all i ∈ E;  dᵀ∇c∗i ≥ 0, for all i ∈ A(x∗) ∩ I. (4.2.4)

2. If (4.2.4) holds with ‖d‖ = 1 and the LICQ condition is satisfied, then d ∈ R^n

For proof see [60].

The set of directions defined by (4.2.4) plays a central role in the optimality

conditions, so for future reference we give this set a name and define it formally.

Definition 4.9 Given a point x∗ and the active constraint set A(x∗) defined above, the set F1 is defined by

F1 = { αd | α > 0, dᵀ∇c∗i = 0 for all i ∈ E, dᵀ∇c∗i ≥ 0 for all i ∈ A(x∗) ∩ I }.

The cone F1 is simply the set of all positive multiples of all limiting directions of all possible feasible sequences; see Lemma 4.1.

Above, a necessary condition of optimality has been formulated. When the KKT conditions are satisfied, a move along any vector w from F1 either increases the first-order approximation to the objective function (that is, wᵀ∇f(x∗) > 0) or else keeps this value the same (that is, wᵀ∇f(x∗) = 0). The second-order conditions concern the curvature of the Lagrangian function in the “undecided” directions: the directions w ∈ F1 for which wᵀ∇f(x∗) = 0. Naturally, we need to assume that the objective and constraint functions have smooth second derivatives. Let us define the subset of F1 which contains the “undecided” directions by

F2(λ∗) = { w ∈ F1 | wᵀ∇ci(x∗) = 0, for all i ∈ A(x∗) ∩ I with λ∗i > 0 }.

Here λ∗ and x∗ satisfy the KKT conditions. Now the second-order sufficient condition can be formulated.

Theorem 4.4 Suppose that for some feasible point x∗ ∈ Rn, there is a Lagrange

multiplier vector λ∗ such that the KKT conditions (4.2.3) are satisfied. Suppose also

that

wᵀ∇xxL(x∗, λ∗)w > 0, for all w ∈ F2(λ∗), w ≠ 0.

Then x∗ is a strict local solution for (4.2.2).

4.3 Augmented Lagrangian Method

The augmented Lagrangian method for constrained optimization seeks the solution by replacing the original constrained problem with a sequence of unconstrained subproblems.

The problems considered in this dissertation are equality-constrained problems.

In this section we review the augmented Lagrangian method applied to problems with equality constraints:

min_x f(x), subject to ci(x) = 0, i ∈ E. (4.3.5)

Definition 4.10 The function

Q(x; ρ) = f(x) + (ρ/2) ∑_{i∈E} ci(x)²


is called the quadratic penalty function, and ρ is called the penalty parameter.

We can consider a sequence of values ρk → ∞ and seek an approximate minimizer xk of Q(x; ρk) for each k. Methods of this type are called quadratic penalty methods.
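The quadratic penalty loop can be sketched as follows. This is a minimal illustration on a made-up equality-constrained problem, using SciPy's BFGS routine as the inner unconstrained solver; the penalized iterates approach the constrained minimizer from outside the feasible set:

```python
import numpy as np
from scipy.optimize import minimize

# minimize f(x) = x1 + x2 subject to c1(x) = 1 - x1^2 - x2^2 = 0;
# the solution is x* = (-1/sqrt(2), -1/sqrt(2)) with multiplier 1/sqrt(2).
f = lambda x: x[0] + x[1]
c1 = lambda x: 1.0 - x[0]**2 - x[1]**2

def Q(x, rho):                               # quadratic penalty function
    return f(x) + 0.5 * rho * c1(x)**2

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:       # rho_k -> infinity, warm-started
    x = minimize(Q, x, args=(rho,), method='BFGS').x
    print(rho, x, rho * c1(x))
```

As ρk grows, x approaches x∗ and ρk c1(xk) approaches −λ∗1 = −1/√2, in line with Theorem 4.5 below.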

Theorem 4.5 If ‖∇Q(xk; ρk)‖ ≤ τk with τk → 0 while ρk → ∞, then for every limit point x∗ of {xk} at which the gradients ∇ci(x∗) are linearly independent, x∗ is a KKT point of (4.3.5). If {xk}k∈K → x∗ is a convergent subsequence of {xk}, then

lim_{k∈K} ρk ci(xk) = −λ∗i, for all i ∈ E, (4.3.6)

where λ∗ is a multiplier vector that satisfies the KKT conditions (4.2.3).

For the proof, see [60]. Thus, the minimizers xk of Q(x; ρk) do not satisfy the feasibility condition exactly; for large k we have

ci(xk) ≈ −λ∗i / ρk.

To get an optimizer which more nearly satisfies the equality constraints, we introduce a Lagrange multiplier term into the quadratic penalty function. This leads to the augmented Lagrangian method; the “trick” also reduces the possibility of ill-conditioning of the subproblems. The Lagrangian with an additional penalty term is called the augmented Lagrangian:

Lρ(x, λ) = f(x) − ∑_{i∈E} λici(x) + (ρ/2) ∑_{i∈E} ci(x)². (4.3.7)

The augmented Lagrangian method will be considered in more detail in Chapter 5.
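A minimal sketch of how (4.3.7) is used in practice, on a made-up equality-constrained test problem; the first-order multiplier update λ ← λ − ρ ci(x) is the standard one from [60], anticipating the fuller treatment in Chapter 5:

```python
import numpy as np
from scipy.optimize import minimize

# minimize f(x) = x1 + x2 subject to c1(x) = 1 - x1^2 - x2^2 = 0;
# solution x* = (-1/sqrt(2), -1/sqrt(2)), multiplier lambda* = 1/sqrt(2).
f = lambda x: x[0] + x[1]
c1 = lambda x: 1.0 - x[0]**2 - x[1]**2

def L_aug(x, lam, rho):                       # augmented Lagrangian (4.3.7)
    return f(x) - lam * c1(x) + 0.5 * rho * c1(x)**2

x, lam, rho = np.zeros(2), 0.0, 10.0          # rho can stay moderate here
for _ in range(20):
    x = minimize(L_aug, x, args=(lam, rho), method='BFGS').x
    lam = lam - rho * c1(x)                   # first-order multiplier update

print(x, lam)
```

Unlike the pure penalty method, the multiplier estimate absorbs the constraint violation, so x converges to x∗ and λ to λ∗ without driving ρ to infinity.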


CHAPTER 5

Model Updating

5.1 Mathematical Statement and Engineering and Computational Challenges

The finite element model updating problem concerns the updating of an analytical symmetric finite element generated model using measured data from a real-life or an experimental structure. The updating needs to be done in such a way that the

experimental structure. The updating needs to be done in such a way that the

symmetry of the model is preserved and the updated model contains some of the

desirable physical and structural properties of the original finite element model.

Mathematically, the problem can be formulated as follows: given a structured pencil (Ma, Ca, Ka) and a few of its associated eigenpairs {(λi, φi)}_{i=1}^{k} with k << 2n, assume that newly measured eigenpairs {(µi, yi)}_{i=1}^{k} have been obtained. Update the pencil (Ma, Ca, Ka) to (Mu, Cu, Ku) = (Ma + ∆M, Ca + ∆C, Ka + ∆K) having the same structure, so that the subset {(λi, φi)}_{i=1}^{k} is replaced by {(µi, yi)}_{i=1}^{k} as k eigenpairs of (Mu, Cu, Ku).

There now exists a large number of model updating methods. Most of the work prior to 1995 is contained in the book by Friswell and Mottershead [33] and the

references therein. Some of the more recent results can be found in [10, 11, 25, 29,

32, 38, 42, 54].

Most existing methods concern updating of an undamped analytical model of

the form:

Maẍ(t) + Kax(t) = 0,


where Ma, Ka are, respectively, the analytical mass and stiffness matrices. For the

sake of convenience, we will denote this model simply by (Ma, Ka). Similar notations

will be used for the updated model; that is, (Mu, Ku) will stand for the undamped

updated model and (Mu, Cu, Ku) will stand for the damped updated model:

Muẍ(t) + Cuẋ(t) + Kux(t) = 0.

The analytical and updated eigenvalues and eigenvectors will also be denoted in a

similar way.

A standard practice is to formulate the updating problem in an optimization set-

ting such that the undamped updated model satisfies the following basic properties

of the original model [33]:

(i) Mu = Muᵀ, (ii) Ku = Kuᵀ (symmetry);

(iii) ΦuᵀMuΦu = I (orthogonality);

(iv) KuΦu = MuΦuΛu (eigenvalue-eigenvector relation).

Maintaining symmetry and reproduction of the measured data are the basic

requirements for model updating. However, satisfaction of the orthogonality relation

by the measured data is also of prime importance because the measured data, which

comes from an experiment or a real-life structure, very often fails to satisfy the

orthogonality constraint (iii).

Besides these basic requirements, there are also other engineering and computa-

tional challenges associated with the updating problem. These include: (i) dealing

with incompleteness of the measured data, and (ii) complex measured data versus

real analytical data, etc. Due to hardware limitations, the measured eigenvectors


are very often of much shorter length than the corresponding analytical eigenvectors. However, in order to use these measured data in the updating process, the two

the analytical model is reduced (model reduction) or the measured eigenvectors are

expanded (model expansion).

For details, see the book by Friswell and Mottershead [33]. In this dissertation,

we deal only with real representations of the data and assume that either modal

expansion or model reduction has been performed to deal with the issue of the

incomplete measured data.

5.2 Methods for Undamped Models

There are two types of updating procedures. Methods of the first type, taking the mass matrix as the reference matrix, first update the measured data so that they satisfy the mass-orthogonality constraint (iii). This is then followed by updating the stiffness matrix so as to satisfy the constraints (ii) and (iv).

Baruch and Bar Itzhack [4] suggested updating the eigenvector matrix so that it satisfies the mass-orthogonality relation and the following mass-weighted norm is minimized:

(1/2) ‖Ma^{1/2}(Φa − Φu)‖ → min, subject to ΦuᵀMaΦu = I.

The updated matrix Ku is a solution of the following minimization problem:

(1/2) ‖Ma^{−1/2}(Ka − Ku)Ma^{−1/2}‖ → min, subject to Ku = Kuᵀ.

Other methods update, either separately or simultaneously, the mass and stiffness


matrices, satisfying the constraints (i)-(iv) [9, 42, 75].

5.2.1 Undamped Model Updating with No Spurious Modes and with Incomplete Measured Data

There are two major engineering concerns with updating, one, as noted before, is

the appearance of spurious modes in the frequency range of interest after updating,

and the other, the issue of incomplete measured data. Due to hardware limitations,

measured eigenvectors are almost always much shorter than their analytical counterparts. This difficulty is usually overcome by modal expansion of the measured eigenvectors or by reducing the order of the model.

There now exists a method which first updates the stiffness matrix so as to satisfy the constraints (ii) and (iv), and then computes the missing entries of the measured modes in a computational setting such that the computed data satisfy the mass-orthogonality constraint [10]. The method proposed in [10] has the additional important feature that the eigenvalues and eigenvectors which are not updated remain unchanged by the updating procedure. This guarantees that “no spurious modes appear in the frequency range of interest.”

To present the method proposed in [10], consider the following partitioning of

eigenvalue and eigenvector matrices:

X = (X1, X2),   Λ = ( Λ1   0  )
                    ( 0    Λ2 ),

where X1 ∈ R^{n×k} and Λ1 ∈ R^{k×k}. Let µ1, ..., µk and y1, ..., yk be a set of k (k << 2n) eigenvalues and eigenvectors measured from an experimental structure. Let Y = (y1, ..., yk) and Σ = diag(µ1, ..., µk). Let

Y = ( Y1 )
    ( Y2 ),   Y1 ∈ R^{m×k}, Y2 ∈ R^{(n−m)×k}, (5.2.1)

and it is assumed that only Y1 is known. The question is how we can construct Y2


and how we can update the matrix K to Ku such that

(Y, X2),   ( Σ   0  )
           ( 0   Λ2 )

is the matrix eigenpair of the updated model? In other words, the updated model should reproduce the measured eigenvalues and eigenvectors while the rest of the eigenstructure remains unchanged. The following orthogonality relations, proved in [10], play the central role in the proposed updating procedure.

Theorem 5.1 Let P(λ) = λ²M + K be a symmetric pencil with M positive definite and K positive semidefinite, with distinct nonzero eigenvalues, and let (X, Λ) be the matrix eigenpair of this pencil. Then:

(a) The matrices D1 and D2 defined by

D1 = XᵀMX, D2 = XᵀKX

are diagonal and D2 = −D1Λ².

(b) Furthermore, suppose that Λ1 and Λ2 do not have a common eigenvalue; then

X1ᵀMX2 = 0 and X1ᵀKX2 = 0. (5.2.2)

Theorem 5.2 (Eigenstructure Preserving Updating) Assume that Λ1 and Λ2 do not have a common eigenvalue. Then, for every symmetric matrix Ψ, the updated pencil Pu(λ) = λ²M + Ku, where

Ku = K − MX1ΨX1ᵀM, (5.2.3)

is such that

MX2Λ2² + KuX2 = 0.

That is, the eigenvalues and eigenvectors of the original finite element model that

are not to be affected by updating, remain unchanged.


Proof. Substituting the expression for Ku from (5.2.3) and using the orthogonality relation (5.2.2) above, we have

MX2Λ2² + KuX2 = MX2Λ2² + KX2 − MX1ΨX1ᵀMX2 = MX2Λ2² + KX2 = 0,

since X1ᵀMX2 = 0 and (X2, Λ2) is an eigenpair of the original pencil.
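Theorem 5.2 is easy to verify numerically: for any symmetric Ψ, the low-rank correction leaves the complementary eigendata untouched. The sketch below uses made-up model matrices and SciPy's generalized symmetric eigensolver; θ denotes the eigenvalues of Kx = θMx, so the quadratic eigenvalues satisfy λ² = −θ:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n, k = 5, 2
A0, B0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M = A0 @ A0.T + n * np.eye(n)        # M = M^T > 0
K = B0 @ B0.T + np.eye(n)            # K = K^T > 0

theta, V = eigh(K, M)                # K V = M V diag(theta), V^T M V = I
X1, X2 = V[:, :k], V[:, k:]
Lam2_sq = np.diag(-theta[k:])        # Lambda_2^2 for the pencil lambda^2 M + K

Psi = rng.standard_normal((k, k))
Psi = Psi + Psi.T                    # an arbitrary symmetric matrix
Ku = K - M @ X1 @ Psi @ X1.T @ M     # the update (5.2.3)

spill = np.linalg.norm(M @ X2 @ Lam2_sq + Ku @ X2)
print(spill)                         # ~0: (X2, Lambda2) is untouched
```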

The question now is how to choose the matrix Ψ such that the measured eigenvalues and eigenvectors are reproduced by the updated model. That is, for what choice of Ψ is the matrix Ku such that

MYΣ² + KuY = 0?

An algorithm for choosing such a Ψ via the solution of a Sylvester equation was given in [10]. The unmeasured part Y2 of Y was also constructed in this algorithm so as to satisfy the mass-orthogonality relation that YᵀMY is a diagonal matrix. The following result shows how to choose the matrix Ψ such that it is symmetric and the measured eigenvalues and eigenvectors are embedded into the updated model.

Theorem 5.3 Let the matrix Y be mass-orthogonal; i.e., YᵀMY is a diagonal matrix. Let Ψ satisfy the matrix equation:

(YᵀMX1)Ψ(YᵀMX1)ᵀ = YᵀMYΣ² + YᵀKY. (5.2.4)

Then Ψ is symmetric and

MYΣ² + KuY = 0, where Ku is as in (5.2.3). (5.2.5)

Recently, Chu et al. [17] established a necessary and sufficient condition for the existence of Ψ.


Theorem 5.4 There exists a matrix Ψ ∈ R^{k×k} such that

MYΣ² + KuY = 0 (5.2.6)

if and only if

Y = X1VD

for some orthogonal matrix V ∈ R^{k×k} and some nonsingular diagonal matrix D ∈ R^{k×k}. Here Ku = K − MX1ΨX1ᵀM.

The following is the algorithm described in [10] for computing Ψ and finding the unmeasured part Y2 of Y. First, find the QR decomposition of MX1:

MX1 = (U1 U2) col(Z, 0).

Now, substituting the expression for Y (5.2.1) into (5.2.6) and premultiplying by (U1 U2)ᵀ, we have

col(U1ᵀ, U2ᵀ) [ (M1 M2) col(Y1, Y2) Σ² + (K1 K2) col(Y1, Y2) ] = col(Z, 0) Ψ X1ᵀMY,

where M = (M1 M2) and K = (K1 K2), with M1, K1 ∈ R^{n×m} and M2, K2 ∈ R^{n×(n−m)}.

Now compute Y2 by solving the Sylvester equation:

U2ᵀM2Y2Σ² + U2ᵀK2Y2 = −U2ᵀ(K1Y1 + M1Y1Σ²). (5.2.7)


Algorithm 1 Model updating algorithm for an undamped model with guaranteed no spill-over

INPUT: M = M^T > 0, K = K^T, Σ = diag(µ1, ..., µk), Y1
OUTPUT: Updated stiffness matrix Ku

1: Compute Y2 by solving equation (5.2.7) and form the matrix Y = (Y1; Y2)
2: Mass-orthogonalize the matrix Y: compute the decomposition Y^T MY = LDL^T and update Y ← Y L^{-T}
3: Compute Ψ by solving the algebraic system of equations

   (Y^T MX1) Ψ (Y^T MX1)^T = Y^T MY Σ^2 + Y^T KY

4: Update the stiffness matrix:

   Ku = K − MX1ΨX1^T M
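The core of Steps 2-4 can be sketched in a few lines of NumPy. This is an illustration, not the code of [10]: the data M, K, X1, and the completed Y are random stand-ins, and since Y^T MY > 0 a Cholesky factor replaces the LDL^T factorization (so D = I). Only the properties guaranteed by Theorem 5.3 are checked, namely the symmetry of Ψ and Ku and the projected relation Y^T (MY Σ^2 + Ku Y) = 0; the full no spill-over embedding additionally requires Y2 to come from (5.2.7).

```python
import numpy as np

def update_stiffness(M, K, X1, Y, mus):
    """Steps 2-4 of Algorithm 1: mass-orthogonalize Y, solve
    (Y^T M X1) Psi (Y^T M X1)^T = Y^T M Y Sigma^2 + Y^T K Y for Psi,
    and return Ku = K - M X1 Psi X1^T M along with Y and Psi."""
    # Step 2: Y^T M Y is s.p.d., so a Cholesky factor plays the role of
    # the LDL^T factor (here D = I); afterwards Y^T M Y = I (diagonal).
    L = np.linalg.cholesky(Y.T @ M @ Y)
    Y = Y @ np.linalg.inv(L).T
    # Step 3: Psi = A^{-1} B A^{-T} with A = Y^T M X1 (B is symmetric).
    A = Y.T @ M @ X1
    B = (Y.T @ M @ Y) @ np.diag(mus**2) + Y.T @ K @ Y
    Psi = np.linalg.solve(A, np.linalg.solve(A, B.T).T)
    # Step 4: symmetric low-rank update of the stiffness matrix.
    Ku = K - M @ X1 @ Psi @ X1.T @ M
    return Ku, Y, Psi

rng = np.random.default_rng(1)
n, k = 6, 2
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                     # M = M^T > 0
S = rng.standard_normal((n, n))
K = 0.5 * (S + S.T)                             # K = K^T
X1 = rng.standard_normal((n, k))                # stand-in analytical modes
Y = rng.standard_normal((n, k))                 # stand-in completed data
mus = np.array([2.0, 3.0])

Ku, Y, Psi = update_stiffness(M, K, X1, Y, mus)
print(np.linalg.norm(Psi - Psi.T))              # Psi is symmetric
print(np.linalg.norm(Ku - Ku.T))                # Ku is symmetric
print(np.linalg.norm(Y.T @ (M @ Y @ np.diag(mus**2) + Ku @ Y)))
```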

5.3 Model Updating for Damped Models

Updating of a damped model is rarely considered in the engineering literature, the reason being that damping is hard to estimate in practice. However, all real structures have damping. Earlier, several control-theoretic methods for updating were developed [56, 57, 76]; unfortunately, the use of control destroys the symmetry. There are optimization-based techniques which are capable of preserving the symmetry. Friswell et al. [32] gave an explicit formula for updating a damped model, and a numerical algorithm based on the formula was developed by Kuo et al. [44].

In this section, we discuss two recent updating schemes for damped models: a symmetric, no spill-over eigenvalue embedding method by Carvalho et al. [12], together with its generalization by Lancaster [50, 52], and a new two-stage optimization-based algorithm.


5.3.1 Eigenvalue Embedding Methods

Carvalho et al. [12] developed an updating scheme that uses low-rank updates and guarantees no spill-over with mathematical proof. Unfortunately, in their work only the updating of the measured eigenvalues was considered; the scheme does not take the updating of the measured eigenvectors into account. It does, however, preserve the symmetry of the original model.

We illustrate the scheme by means of the following theorem, in which a real isolated eigenvalue is updated. The process is more general, however, and is capable of updating both real and complex eigenvalues.

Theorem 5.5 Let (λ, x) be a real isolated eigenpair of P(λ) = λ^2 M + λC + K with x^T Kx = 1. Let λ be reassigned to µ. Assume that 1 − λµθ ≠ 0 and 1 − λ^2 θ ≠ 0, where θ = x^T Mx, and set ε = (λ − µ)/(1 − λµθ). Then the updated quadratic matrix pencil

Pu(λ) = λ^2 Mu + λCu + Ku

with

Mu = M − ελ Mxx^T M,
Cu = C + ε (Mxx^T K + Kxx^T M),
Ku = K − (ε/λ) Kxx^T K,

is such that

i. The eigenvalues of Pu(λ) are the same as those of P (λ) except that λ has been

replaced by µ (assignment of real eigenvalues).

ii. x is also an eigenvector of Pu(λ) corresponding to the embedded eigenvalue µ.


iii. If (λ2, x2) is an eigenpair of P(λ), where λ2 ≠ λ, then (λ2, x2) is also an eigenpair of Pu(λ) (no spill-over).

A generalization of this result was obtained by Lancaster in [50, 52]. In his work, Lancaster considers Hermitian matrices (instead of real symmetric ones). The method presented by Lancaster is also capable of updating measured eigenvectors. The moments (3.2.5) defined previously are used to solve the problem. Recall that the coefficients of P(λ) can be determined in terms of the moments:

Γj = XΛ^j P X*, j = 1, 2, 3,

M = Γ1^{-1}, C = −M Γ2 M, K = −M Γ3 M + C Γ1 C,

where P is a permutation matrix, defined in (3.3.16). Now we consider not only the splitting of the spectral data but also the splitting of the matrix P:

X = (X1, X2), Λ = diag(Λ1, Λ2), P = diag(P1, P2).

As before, Λ1 is to be modified to Σ and Λ2 is its unknown complement, which stays unchanged after updating. The eigenvectors X1 associated with Λ1 are to be replaced by Y. The spectral decomposition of M gives

M^{-1} = XΛPX* = S1 + S2,

where S1 = X1Λ1P1X1* and S2 = X2Λ2P2X2*. This approach admits only changes from real eigenvalues to real eigenvalues and from complex conjugate pairs to complex conjugate pairs.

Thus, the matrix P remains unchanged. The updated matrix Mu is now defined by

Mu^{-1} = M^{-1} + (S̄1 − S1), where S̄1 = Y Σ P1 Y*.

For updating the damping matrix, the matrices T1 and T2 are introduced as:

T1 = X1Λ1^2 P1X1*, T2 = X2Λ2^2 P2X2*,


then Γ2 = T1 + T2. Now

C = −M(T1 + T2)M, Cu = −Mu(T̄1 + T2)Mu,

where T̄1 = Y Σ^2 P1 Y*. Eliminating T2 (using T2 = −M^{-1}CM^{-1} − T1, which follows from the first relation) leads to

Cu = Mu(M^{-1}CM^{-1} − (T̄1 − T1))Mu.

For updating the matrix K, define U1, U2 by

U1 = X1Λ1^3 P1X1*, U2 = X2Λ2^3 P2X2*,

so that Γ3 = U1 + U2, and let the updated matrix be Ū1 = Y Σ^3 P1 Y*. Then

K = −M(U1 + U2)M + CM^{-1}C, Ku = −Mu(Ū1 + U2)Mu + CuMu^{-1}Cu.

Elimination of U2 leads to

Ku = −Mu(M^{-1}(CM^{-1}C − K)M^{-1} + (Ū1 − U1))Mu + CuMu^{-1}Cu.

Note that the formulas for (Mu, Cu, Ku) do not require knowledge of (X2, Λ2).

5.4 Quadratic Model Updating with Measured Data Satisfying Orthogonality Relations: A New Method

As stated before, most existing updating methods for an undamped model implicitly or explicitly use an orthogonality constraint. Satisfaction of an orthogonality constraint, of course, is essential for measured data to be acceptable for updating. However, the role the orthogonality constraint plays in the updating procedure itself has not been systematically investigated.

In this section, we prove, both for undamped and damped models, that satisfaction of the orthogonality relation is in both cases necessary and sufficient for


preserving the symmetry in the updated model. Using this new result, we then

propose a two-stage optimization process for updating a damped model.

In Stage I, we update the measured eigenvectors so that they satisfy a quadratic

orthogonality relation (2.3.9). In Stage II, the updated measured eigenvectors from

Stage I are used to update the stiffness matrix so that it remains symmetric after

updating and the measured eigenvalues and eigenvectors are reproduced by the

updated model. Thus, our method generalizes methods for undamped models of the

first type to a damped model. The results of numerical experiments on some case

studies are presented to show the accuracy of the proposed method.

The problems in both stages are nonlinear optimization problems. The Stage

I problem is a nonconvex minimization problem with equality constraint. This is

a difficult optimization problem to solve. An augmented Lagrangian method is

proposed to deal with this problem. Some convergence properties of this method

are discussed.

The Stage II problem is a convex quadratic problem. This is a rather nice

optimization problem to deal with and there are several excellent numerical methods

for such problems in the literature (see [60]). Necessary optimization background

The necessary optimization background has also been reviewed in Chapter 4.

Implementation of Stage I and Stage II in an optimization setting requires that the appropriate gradient formulas be computed in terms of the known quantities only, which are, in our case, just a few measured eigenvalues and eigenvectors and the corresponding sets from the analytical model. Such gradient formulas have been mathematically derived.


5.4.1 Existence of Symmetric Solution of the Model Updating Problem

In this section, we establish mathematical results to demonstrate the fact that

the updated measured data must satisfy orthogonality condition for the existence

of a symmetric solution to the Stage II. We consider both cases: undamped and

damped models.

5.4.2 Linear Case (Undamped Model)

Consider the symmetric undamped model

Mẍ(t) + Kx(t) = 0.

Let (X, T) be the real-form representation of the complex eigenpair (Φ, Λ) of the associated generalized eigenvalue problem.

Theorem 5.6 Given M = M^T > 0, T ∈ R^{k×k}, a block-diagonal matrix of the form (2.5.16), and X ∈ R^{n×k} of the form (2.5.15) with full rank, there exists a symmetric nonzero matrix K such that KX = MXT if and only if X^T MX = B, where B is a block-diagonal matrix of the form:

B = diag(B1, ..., Bl, B2l+1, ..., Bk),

Bj = [ aj   bj
       bj  −aj ],  j = 1, ..., l;     Bj = bj,  j = 2l + 1, ..., k.   (5.4.8)

Proof. (⇐) Sufficiency. Since X has full rank, the matrix equation KX = MXT has a nonzero solution. To prove that there exists a symmetric solution K, we consider an extension of (X, T) of the form Xext = [X X̃] ∈ R^{n×n}, Text = diag(T, T̃) ∈ R^{n×n}, such that Xext^T M Xext = Bext = diag(B, B̃), where B̃ is a block-diagonal matrix, Xext is of full rank, and T̃ is a block-diagonal matrix. Now define

K = Xext^{-T} Bext Text Xext^{-1}.


Then, obviously, KX = MXT; moreover, since BextText is a symmetric matrix, K is also symmetric. Different choices of X̃ and T̃ will produce different symmetric solutions to the above equation.

(⇒) Necessity. Since M = M^T > 0 and K = K^T, there exists a matrix Φ such that Φ^T MΦ = D, where D is a diagonal matrix (see [18]).

Setting X = ΦS^{-1}, where S is defined as in (2.5.14), we have X^T MX = S^{-T} Φ^T MΦ S^{-1} = S^{-T} D S^{-1} = B. Thus, a 2 × 2 block of B has the form

S0^{-T} diag(a + ib, a − ib) S0^{-1} = [ a   b
                                         b  −a ],

where S0 = (1/√2)(1 1; i −i) is the corresponding 2 × 2 diagonal block of S.

5.4.3 Quadratic Case (Damped Model)

In this section, we prove an analogous result for the quadratic pencil P(λ) = λ^2 M + λC + K. In this case there are 2n eigenvalues and eigenvectors.

The real-form representation of the matrix eigenpair (Φ, Λ) of P (λ) is denoted

by (X, T ).

The pair (X, T) satisfies the relation (2.5.17), which can be written as

(−K 0; 0 M)(X; XT) = (C M; M 0)(X; XT) T, (5.4.9)

where (A B; C D) denotes the 2 × 2 block matrix with block rows (A B) and (C D), and (X; XT) denotes the matrix X stacked over XT. This shows that (X; XT) is the real-form representation of the eigenvector matrix of the 2n × 2n linear pencil

((−K 0; 0 M), (C M; M 0)).

Since M, C, K are symmetric, by Theorem 5.6 we have

(X; XT)^T (C M; M 0) (X; XT) = B(X, T), (5.4.10)

where B = diag(B1, ..., Bl, B2l+1, ..., Bk) is a block-diagonal matrix with blocks defined as in (5.4.8). Note that the relation (5.4.10) is equivalent to the orthogonality relation (2.3.9) for the complex eigenpair.
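Multiplying out the left-hand side of (5.4.10) gives the explicit formula B(X, T) = X^T CX + T^T X^T MX + X^T MXT, which is used repeatedly below. A quick numerical check, with random symmetric M, C standing in for the model matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)            # symmetric positive definite
S = rng.standard_normal((n, n))
C = 0.5 * (S + S.T)                    # symmetric
X = rng.standard_normal((n, k))
T = rng.standard_normal((k, k))

# Block form: stack (X; XT) and apply the 2n x 2n middle matrix.
Z = np.vstack([X, X @ T])
Mid = np.block([[C, M], [M, np.zeros((n, n))]])
B_block = Z.T @ Mid @ Z
B_formula = X.T @ C @ X + T.T @ X.T @ M @ X + X.T @ M @ X @ T
print(np.linalg.norm(B_block - B_formula))   # the two agree
```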

To solve the inverse problem, i.e., to find a symmetric K which satisfies the eigenvalue-eigenvector relation (2.5.17) for given M = M^T > 0, C = C^T, and (X, T) satisfying the rank condition (2.5.18), we find an extension of the matrices X and T, Xext = [X X̃] ∈ R^{n×n}, Text = diag(T, T̃) ∈ R^{n×n}, such that (Xext; XextText) is of full rank and

(Xext; XextText)^T (C M; M 0) (Xext; XextText) = diag(B(X, T), B̃) = Bext(Xext, Text).

Here T̃, B̃ are real block-diagonal matrices. Then we can take K as the solution to the following linear system:

(Xext; XextText)^T (−K 0; 0 M) (Xext; XextText) = BextText,

i.e., K = Xext^{-T}(Text^T Xext^T M Xext Text − BextText)Xext^{-1}. This is a real symmetric matrix (note that BextText is a symmetric matrix).

The above discussion leads to the following theorem.

Theorem 5.7 Let M = M^T > 0 ∈ R^{n×n}, C = C^T ∈ R^{n×n}, and let T ∈ R^{k×k}, X ∈ R^{n×k}, k < n, be matrices of the form (2.5.16) and (2.5.15), respectively. Let (X, T) satisfy condition (2.5.18). Then there is a real symmetric matrix K such that MXT^2 + CXT + KX = 0 if and only if

(X; XT)^T (C M; M 0) (X; XT) = B(X, T), (5.4.11)


where B is some block-diagonal matrix with blocks of the form (5.4.8).

Corollary 5.1 Assume that T is the real-form representation of an eigenvalue matrix of the symmetric positive definite pencil (M, C, K) and that all the diagonal blocks of T are distinct. Then B is of the form (5.4.8) if and only if BT = T^T B.

Proof. Consider the matrix D3 from (2.3.9), and note that B = S^{-T} D3 S^{-1}. Then BT = S^{-T} D3 S^{-1} SΛS^{-1} = S^{-T} D3 Λ S^{-1} and T^T B = S^{-T} Λ S^T S^{-T} D3 S^{-1} = S^{-T} Λ D3 S^{-1}. Now, B = S^{-T} D3 S^{-1} is of the form (5.4.8) if and only if D = S^T B S is a diagonal matrix, which is equivalent to

ΛD3 = D3Λ.

This implies that BT = T^T B.

5.4.4 A Two-Stage Model Updating Scheme

In this section, we introduce our two-stage model updating scheme for finite element model updating (FEMU). Throughout, we assume that Ma, Ca, Ka, XM = ΦM S^{-1}, and TM = S ΛM S^{-1} are given, with Ma > 0, Ka = Ka^T, Ca = Ca^T.

Stage I: In this stage, the real-form representation of the measured eigenvector

matrix XM is updated so that it becomes as close as possible to the analytical data

in the sense that a weighted distance between them is minimized. Furthermore, an

orthogonality constraint stated in Corollary 5.1 is enforced. Mathematically, the

problem may be stated as follows:

(P)  min  (1/2) ||W1^{-1/2} (X − XM) W1^{-1/2}||F^2
     s.t. H(X) = 0,
          X ∈ R^{n×k},


where

H(X) = B(X, TM) TM − TM^T B(X, TM), (5.4.12)

with B(X, TM) given by (5.4.10), and W1 is a positive-definite weighting matrix.

A solution to the problem will be denoted by Xu.

Stage II : Let Xu be a solution from Stage I. In this stage, we would like to update

the stiffness matrix K so that it becomes as close as possible to Ka in the sense

that a weighted distance between K and Ka is minimized. In addition, constraints

on symmetry for K and eigenvalue-eigenvector relation of (2.5.17) are enforced.

Mathematically, this amounts to solving the following minimization problem:

(Q)  min  (1/2) ||W2^{-1/2} (K − Ka) W2^{-1/2}||F^2
     s.t. Ma Xu TM^2 + Ca Xu TM + K Xu = 0,
          K^T = K (symmetry),
          K ∈ R^{n×n},

where W2 > 0 is a positive-definite weighting matrix. The solution to the problem

will be denoted by Ku.

Problem (Q) is a convex quadratic programming problem with a unique solution.

There exists an analytical expression [32] and a computational algorithm [44] based

on numerical linear algebra techniques. Since (Q) is a simple convex quadratic

programming problem, we can also solve it numerically by existing optimization

techniques.

Remarks : It is also possible to update both the stiffness and damping matrices

satisfying the orthogonality relation of Stage I. This will require reformulation of


the problem. Such reformulation is currently being investigated.

5.5 A Solution Method and Its Convergence Properties

We now focus on how to solve (P). As noted before, Problem (Q) is a convex quadratic programming problem for which there exist excellent numerical methods. However, in our numerical experiments, we use the same method developed for (P) to solve (Q).

To simplify the presentation, set W = I and

f(X) = (1/2) ||X − XM||F^2.

Then the Lagrangian function for (P) is:

L(X, Y ) = f(X) + 〈Y, H(X)〉,

where Y ∈ R^{k×k}. Some remarks about our Lagrangian function L are in order. By definition, H(X)^T = −H(X). Hence the system H(X) = 0 defines k(k − 1)/2 independent constraints.

The necessary optimality conditions for (P) can now be stated as follows: Find

a pair (X∗, Y∗) such that

∇XL(X∗, Y∗) = 0, H(X∗) = 0. (5.5.13)

Problem (P) has a convex quadratic objective function and polynomial equality constraints, so it is a polynomial programming problem. But the feasible region defined by the polynomial equality constraints is nonconvex in general. Hence we are dealing with a nonconvex minimization problem with equality constraints.

Generally speaking, if X∗ is an optimal solution of (P) and the constraint system satisfies a certain regularity condition at X∗, then there is a Y∗ ∈ R^{k×k} such that (X∗, Y∗) satisfies (5.5.13). The elements of Y∗ are usually called Lagrange multipliers.


Optimization techniques for constrained problems such as (P) have been well developed in the past fifty years. There are many efficient methods, including augmented Lagrangian methods, for solving a nonlinear programming problem with equality constraints. The first augmented Lagrangian method was proposed independently by Hestenes [41] and Powell [62], by adding a quadratic penalty term to the Lagrangian function L(X, Y). Because of its attractive features, such as ease of implementation, it has emerged as an important method for handling constrained optimization problems. The literature on augmented Lagrangian methods is vast; we refer the reader to [6, 67] for a thorough treatment of this class of methods and its convergence theory. Following Hestenes and Powell, we propose an augmented Lagrangian method to solve (P). To this end, we introduce the following parameterized family of augmented Lagrangian functions:

Lρ(X, Y) = L(X, Y) + (ρ/2) ||H(X)||F^2, (5.5.14)

where ρ is a positive constant.

We now discuss two issues associated with our proposed augmented Lagrangian method.

1. The existence of a global minimizer in Step 3: For (P(i)), the following proposition guarantees that a solution exists.

Proposition 5.1 Let ρ > 0 and Y ∈ R^{k×k}. Then argmin_X Lρ(X, Y) is non-empty.

Proof. We will prove that Lρ(·, Y) is level-bounded [67, Definition 1.8]; that is, for each real µ, the set {X | Lρ(X, Y) ≤ µ} is bounded. Once this is done, the non-emptiness of argmin_X Lρ(X, Y) follows from [67, Theorem 1.9].

Let {Xi} be a sequence such that ||Xi||F → ∞ as i → ∞. Then

Lρ(Xi, Y) → ∞ as i → ∞,


Algorithm 2 The augmented Lagrangian method for (P)

INPUT: X0, Y0, ρ0 > 0, 0 < β < 1, and ε > 0
OUTPUT: Solution to (P)

1: for i = 0, 1, ... do
2:   Stop if ||∇X L(Xi, Yi)|| ≤ ε and ||H(Xi)|| ≤ ε
3:   Solve the unconstrained optimization subproblem

       (P(i))  min_X  Lρi(X, Yi)

     with the stopping criterion ||∇X Lρi(X, Yi)||F < β^i.
     Let Xi+1 be the solution of (P(i))
4:   Update the multiplier matrix:

       Yi+1 = Yi + ρi H(Xi+1)

     Then choose ρi+1 > ρi
5: end for

since, by the Cauchy–Schwarz inequality, ⟨Y, H(Xi)⟩ ≥ −||Y||F ||H(Xi)||F, we have

⟨Y, H(Xi)⟩ + (ρ/2)||H(Xi)||F^2 ≥ ((ρ/2)||H(Xi)||F − ||Y||F) ||H(Xi)||F,

and f(Xi) → ∞ as i → ∞. Indeed, if {X | Lρ(X, Y) ≤ µ} were not bounded for some µ, then there would exist a sequence {Xi} such that ||Xi||F → ∞ as i → ∞. This would imply that

µ ≥ f(Xi) + ⟨Y, H(Xi)⟩ + (ρ/2)||H(Xi)||F^2 → +∞

as i → ∞ by the above argument. The contradiction proves the level-boundedness of Lρ(·, Y).

2. The convergence of the proposed method: As we have already pointed out, (P) is a nonconvex programming problem. It is well known in the optimization literature that finding a global minimizer of a nonconvex programming problem is a very challenging task. A practical way of solving the sequence of nonconvex


programming problems in Step 3 is to find a sequence of critical points instead. The

following theorem, which is embedded somewhere in the general convergence theory

in [6, 67], ensures that such a sequence of critical points will have a convergent

subsequence.

To make our presentation self-contained and for the reader's convenience, we include a proof of this theorem in the next section, after computable gradient formulas are derived. Let us set

F(X) = [H12(X), ..., H1k(X), H23(X), ..., H2k(X), ..., H(k−1)k(X)]^T,

and

Ȳ = [Y12, ..., Y1k, Y23, ..., Y2k, ..., Y(k−1)k]^T. (5.5.15)

Then F : R^{n×k} → R^{(k−1)k/2}, and it is easy to see that F(X) = 0 if and only if H(X) = 0. Also, simple calculations show that

⟨Y, H(X)⟩ = 2⟨Ȳ, F(X)⟩

and

||H(X)||F^2 = 2||F(X)||F^2.

Our Lagrangian functions then become

L(X, Y) = L̄(X, Ȳ) = f(X) + 2⟨Ȳ, F(X)⟩

and

Lρ(X, Y) = L̄ρ(X, Ȳ) = L̄(X, Ȳ) + ρ||F(X)||F^2.

Theorem 5.8 Suppose that ||∇X L̄ρj(Xj, Ȳj)||F < β^j, 0 < β < 1, for j = 0, 1, 2, ..., with {||Ȳj||F} bounded, ρj < ρj+1, and ρj → ∞ as j → ∞. If there is


a convergent subsequence {Xji} of {Xj} with Xji → X∗ such that ∇F(X∗) maps R^{n×k} onto R^{(k−1)k/2}, then there is a Ȳ∗ such that

∇X L̄(X∗, Ȳ∗) = 0, F(X∗) = 0. (5.5.16)

Remarks: (a) In view of the relationships between F and H, and between Ȳ and Y, it is easy to see that ∇X L̄(X∗, Ȳ∗) = 0 and F(X∗) = 0 if and only if there is some Y∗ (given by (5.5.15)) such that the pair (X∗, Y∗) satisfies (5.5.13). (b) It is also clear from (5.5.15) that {||Ȳj||F} is bounded if and only if {||Yj||F} is bounded. The following simple example shows that if the conditions listed in Theorem 5.8 are not satisfied, Lagrange multipliers may not exist.

Example 1 Consider min_{x∈R} x subject to x^2 = 0. It is trivial to see that x = 0 is an optimal solution (indeed, the only feasible point). However, there is no Lagrange multiplier for this minimization problem: the stationarity condition 1 + 2yx = 0 has no solution y at x = 0.
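Algorithm 2 can be exercised end to end on a toy instance. In the sketch below everything is a stand-in: Ma, Ca, TM, XM are small artificial data (not the case-study values), the inner subproblem is solved by a crude Armijo gradient descent instead of fminunc, and the gradients are computed by central finite differences rather than by formulas (5.5.17)-(5.5.18). The point is only to see the multiplier and penalty updates drive the constraint residual ||H(X)|| down.

```python
import numpy as np

n, k = 3, 2
Ma = np.eye(n)
Ca = np.diag([2.0, 0.5, 0.5])
TM = np.array([[-0.5, 2.0], [-2.0, -0.5]])   # real form of one complex pair
rng = np.random.default_rng(4)
XM = rng.standard_normal((n, k))             # stand-in measured data

def B(X):       # B(X, TM) from (5.4.10)
    return X.T @ Ca @ X + TM.T @ X.T @ Ma @ X + X.T @ Ma @ X @ TM

def H(X):       # constraint (5.4.12); skew-symmetric
    return B(X) @ TM - TM.T @ B(X)

def L_rho(X, Y, rho):
    return (0.5 * np.linalg.norm(X - XM) ** 2 + np.sum(Y * H(X))
            + 0.5 * rho * np.linalg.norm(H(X)) ** 2)

def num_grad(fun, X, h=1e-6):
    """Central finite differences standing in for (5.5.17)-(5.5.18)."""
    G = np.zeros_like(X)
    for idx in np.ndindex(*X.shape):
        E = np.zeros_like(X)
        E[idx] = h
        G[idx] = (fun(X + E) - fun(X - E)) / (2 * h)
    return G

def inner_min(X, Y, rho, iters=250):
    """Armijo gradient descent as a stand-in for fminunc in Step 3."""
    f = L_rho(X, Y, rho)
    for _ in range(iters):
        g = num_grad(lambda Z: L_rho(Z, Y, rho), X)
        t = 1.0
        while L_rho(X - t * g, Y, rho) > f - 0.5 * t * np.sum(g * g):
            t *= 0.5
            if t < 1e-14:
                return X
        X = X - t * g
        f = L_rho(X, Y, rho)
    return X

X, Y, rho = XM.copy(), np.zeros((k, k)), 1.0
for _ in range(7):                 # outer ALM iterations
    X = inner_min(X, Y, rho)       # Step 3
    Y = Y + rho * H(X)             # Step 4: multiplier update
    rho *= 5.0                     # choose rho_{i+1} > rho_i
print(np.linalg.norm(H(XM)), np.linalg.norm(H(X)))
```

With multiplier updates and an increasing penalty, the final constraint residual is orders of magnitude below the initial one, while X stays close to XM.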

5.5.1 Computation of the Gradient Formulas

To implement the proposed method effectively, we need gradient formulas that can be computed in terms of the given quantities. In this section, we show that the gradients of the functions associated with problems (P) and (Q) can be computed in terms of their associated matrices. This is particularly important since our numerical experiments are conducted in the MATLAB environment.

The basic idea for deriving the gradient formulas comes from the theory of adjoint operators, as used in [14]. It is elementary to see that

∇f(X) = X − XM.


Let

h(X) = (1/2)||H(X)||F^2 = (1/2)⟨H(X), H(X)⟩,  gY(X) = ⟨Y, H(X)⟩.

Then

∇h(X) = 2(Ma X G(X) TM^T + (Ca X + Ma X TM) G(X)), (5.5.17)

where G(X) = H(X) TM^T − TM H(X) and H(X) is given by (5.4.12), and

∇gY(X) = 2(Ma X (Y^T TM − TM Y) TM^T + (Ca X + Ma X TM)(Y^T TM − TM Y)). (5.5.18)

The details of the derivation of formula (5.5.17) are given in the following subsection.

With the above gradient formulas, the gradients of L and Lρ with respect to X can be written as

∇X L(X, Y) = ∇f(X) + ∇gY(X),
∇X Lρ(X, Y) = ∇X L(X, Y) + ρ∇h(X).

5.5.2 Gradient Formulas for Problem Q

The gradient functions in Problem (Q) are much simpler and can be written down as follows. For h(K) = (1/2)||K − Ka||F^2, we have

∇h(K) = K − Ka.

For f(K) = (1/2)||Ma Xu TM^2 + Ca Xu TM + K Xu||F^2, we have

∇f(K) = (Ma Xu TM^2 + Ca Xu TM + K Xu) Xu^T.

We will now use the above gradient formulas to obtain the necessary optimality

conditions for Stage I.


Necessary optimality conditions in matrix form for (P): Find X and Y such that

∇f(X) + ∇gY(X) = 0,
H(X) = 0,

where H(X) is given by (5.4.12) and ∇gY(X) is given by (5.5.18). Having the necessary optimality conditions expressed in terms of the given matrices is significant: it not only opens up the possibility of solving (P) by solving the above system of equations (for example, by Newton's method for nonlinear equations), but it also forms the basis for sensitivity analysis when the problem data undergo small changes.

We conclude this section by including a proof of Theorem 5.8. We begin with a

well-known lemma on the invertibility of a matrix (see e.g., [18, p. 319]).

Lemma 5.1 Let p, q be positive integers with p ≥ q. Suppose that A is a p × q matrix with rank q. Then [A^T A]^{-1} exists.

Proof. The proof is by contradiction. Suppose there is a nonzero q × 1 vector u such that A^T Au = 0. Then (Au)^T(Au) = 0, so Au = 0. The rank condition on A then implies u = 0, which is a contradiction. This proves that A^T A is invertible.

Proof of Theorem 5.8. Without any loss of generality, suppose that Xj → X∗ as j → ∞. For any fixed X, ∇F(X) is a [(k−1)k/2] × [nk] matrix; ∇F(X∗) has rank (k−1)k/2, and ∇F(·) is continuous. We may therefore assume that ∇F(Xj) has rank (k−1)k/2 for all j. Set Aj = ∇F(Xj)^T and A∗ = ∇F(X∗)^T. By Lemma 5.1, Aj^T Aj is invertible. The continuity of ∇F(·) implies that [Aj^T Aj]^{-1} → [A∗^T A∗]^{-1}. We have

∇X L̄ρj(Xj, Ȳj) = ∇f(Xj) + 2∇F(Xj)^T [Ȳj + ρj F(Xj)], (5.5.19)


and premultiplying both sides of (5.5.19) by [Aj^T Aj]^{-1} Aj^T = [Aj^T Aj]^{-1} ∇F(Xj), we have

2(Ȳj + ρj F(Xj)) = [Aj^T Aj]^{-1} Aj^T [∇X L̄ρj(Xj, Ȳj) − ∇f(Xj)].

Since ||∇X L̄ρj(Xj, Ȳj)||F → 0 and [Aj^T Aj]^{-1} → [A∗^T A∗]^{-1} as j → ∞, we get

2(Ȳj + ρj F(Xj)) → −[A∗^T A∗]^{-1} A∗^T ∇f(X∗) as j → ∞.

Set 2Ȳ∗ = −[A∗^T A∗]^{-1} A∗^T ∇f(X∗). By taking the limit on both sides of (5.5.19), we have ∇X L̄(X∗, Ȳ∗) = 0. Since 2(Ȳj + ρj F(Xj)) → 2Ȳ∗ and {||Ȳj||F} is bounded, the sequence {ρj F(Xj)} is bounded. Since ρj → ∞ as j → ∞, we conclude that

lim_{j→∞} F(Xj) = F(X∗) = 0.

So the pair (X∗, Ȳ∗) satisfies (5.5.16). This completes the proof.

5.5.3 Case Studies

In this section, we present the results of our numerical experiments on

• a spring-mass system with 10 degrees of freedom (DoF) [33];

• a vibrating beam.

The data for our experiments are set up as follows:

• The matrices Ma, Ca are kept fixed.

• To simulate the measured data (XM, TM), we add random Gaussian noise to the eigendata of the analytical model.

• The weighting matrix was taken as W = I.

We used MATLAB with double-precision arithmetic to run the numerical experiments. As the solver for the unconstrained optimization subproblems, the MATLAB Optimization Toolbox routine fminunc, which implements a BFGS quasi-Newton method, was used.


5.5.4 A Mass-Spring System of 10 DoF

Consider the example of a mass-spring system with 10 DoF, as depicted in Figure 5.1. In this example all rigid bodies have a mass of 1 kg, and all springs have stiffness 1 kN/m. The analytical model is given by

Ma = I,

Ca: the 10 × 10 symmetric tridiagonal matrix with diagonal

(0.48, 8.38, 1.02, 7.28, 4.40, 9.97, 5.62, 4.65, 4.19, 2.11)

and sub/superdiagonal

(−8.38, −1.02, −7.28, −4.40, −9.97, −5.62, −4.65, −4.19, −2.11),

Ka =

 2000 -1000     0     0     0     0     0     0     0     0
-1000  3000 -1000     0 -1000     0     0     0     0     0
    0 -1000  2000 -1000     0     0     0     0     0     0
    0     0 -1000  3000 -1000     0     0 -1000     0     0
    0 -1000     0 -1000  3000 -1000     0     0     0     0
    0     0     0     0 -1000  2000 -1000     0     0     0
    0     0     0     0     0 -1000  2000 -1000     0     0
    0     0     0 -1000     0     0 -1000  3000 -1000     0
    0     0     0     0     0     0     0 -1000  2000 -1000
    0     0     0     0     0     0     0     0 -1000  2000

Figure 5.1: Mass-spring system with 10 DoF.


The measured data for our experiment were simulated by reducing the stiffness of the spring between masses 2 and 5 to 600 N/m and adding Gaussian noise with σ = 2%.

The analytical eigenvalue and eigenvector matrices are

Ta =
 -6.23  71.1      0      0
 -71.1  -6.23     0      0
     0      0  -3.67   65.9
     0      0  -65.9  -3.67

Xa =
  0.142  0.001 -0.161 -0.001
 -0.438  0.020  0.372 -0.050
  0.288  0.065 -0.056  0.031
 -0.502 -0.206 -0.191  0.087
  0.479  0.148 -0.296 -0.034
 -0.136 -0.011  0.263  0.091
 -0.066  0.011 -0.346 -0.145
  0.339  0.003  0.599  0.063
 -0.122 -0.007 -0.296 -0.093
  0.040  0.010  0.115  0.075

The measured eigenvalue and eigenvector matrices are:

TM =
 -6.16  69.8      0      0
 -69.8  -6.16     0      0
     0      0   -4.7   64.9
     0      0  -64.9   -4.7

XM =
  0.102  0.026 -0.172 -0.023
 -0.283 -0.061  0.401  0.023
  0.282  0.115 -0.195 -0.005
 -0.579 -0.240  0.074  0.068
  0.341  0.242 -0.354 -0.202
 -0.067 -0.054  0.286  0.242
 -0.168  0.036 -0.249 -0.340
  0.508 -0.042  0.362  0.382
 -0.207  0.009 -0.183 -0.246
  0.077  0.011  0.060  0.130

Results of Stage I

The orthogonality condition was not satisfied by the measured eigenvector matrix, as shown by the residual:

||H(XM)||F = 17.7.


However, after preprocessing the measured data using Stage I, we obtained

Xu =
  0.086  0.128  0.139  0.021
 -0.270 -0.226 -0.324  0.055
  0.266  0.176  0.233 -0.131
 -0.437 -0.239 -0.222  0.264
  0.227  0.387  0.343  0.041
  0.031 -0.261 -0.218 -0.232
 -0.147  0.205  0.183  0.312
  0.395 -0.239 -0.226 -0.544
 -0.145  0.142  0.149  0.244
  0.051 -0.061 -0.064 -0.096

The orthogonality condition is now satisfied with this updated eigenvector matrix, as shown by the following residual:

||H(Xu)||F = 1.97 × 10^{-8}.

Results of Stage II

With the updated eigenvector matrix Xu from Stage I:

• The updated matrix Ku was symmetric, as shown by the residual norm:

||Ku − Ku^T||F = 9.293 × 10^{-9}.

• The measured eigenvalues and corrected measured eigenvectors were reproduced accurately by the updated model, as shown by the following residual:

||R(Ku)||F = 1.78143 × 10^{-6},

where R(K) = Ma Xu TM^2 + Ca Xu TM + K Xu.

Note: It is clear from Figure 5.2 that the largest changes correspond to degrees of freedom 2 and 5. The changes corresponding to the other degrees of freedom are reasonably small.


Figure 5.2: The percentage change in the diagonal elements of the stiffness matrix for the mass-spring system (horizontal axis: degree of freedom; vertical axis: % change in stiffness).

5.5.5 Vibrating Beam

Consider a discrete spring-mass model of a vibrating beam [35], which consists of n + 2 masses mi, i = −1, ..., n, linked by massless rigid rods of lengths li, i = 0, ..., n, which are themselves connected by n rotational springs of stiffnesses ki, i = 1, ..., n. This model corresponds to a finite difference approximation of a beam with distributed parameters. The vibration of the beam with a clamped left-hand end and with no force applied at the free end is governed by

Mẍ + Kx = 0,

where

K = E L^{-1} E K̂ E^T L^{-1} E^T,

K̂ = diag(k1, ..., kn), L = diag(l1, ..., ln), M = diag(m1, ..., mn)

(we write K̂ for the diagonal matrix of spring stiffnesses to distinguish it from the stiffness matrix K), and


E =
 1 -1  0 ...  0
 0  1 -1 ...  0
 .  .  .  .  .
 0 ...  0  1 -1
 0 ...  0  0  1

The simulated beam has 16 rods of length 1/16 m each, and all masses are 0.1 kg.
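The beam matrices above are easy to assemble. The NumPy sketch below uses the stated rod lengths and masses; the rotational spring stiffnesses khat are stand-in values (the actual k_i of the simulation are not restated here). The factorization K = A K̂ A^T with A = E L^{-1} E makes K symmetric positive definite by construction, which the check confirms.

```python
import numpy as np

n = 15                                  # 16 rods => n = 15, matching the
                                        # 15-row eigenvector matrices below
l = np.full(n, 1.0 / 16.0)              # rod lengths (m)
m = np.full(n, 0.1)                     # masses (kg)
khat = np.ones(n)                       # spring stiffnesses (stand-in values)

# E is unit upper bidiagonal: 1 on the diagonal, -1 on the superdiagonal.
E = np.eye(n) - np.diag(np.ones(n - 1), 1)
Linv = np.diag(1.0 / l)
A = E @ Linv @ E                        # so that K = A Khat A^T
K = A @ np.diag(khat) @ A.T
M = np.diag(m)

print(np.linalg.norm(K - K.T))          # symmetric by construction
print(np.linalg.eigvalsh(K).min() > 0)  # and positive definite
```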

Results of Stage I

The measured data were obtained from the analytical data in the same way as in the previous example. To simulate the measured data, the coefficients k3, k5, k9 were reduced by 40%, 50%, and 30%, respectively, and Gaussian noise with σ = 2% was added. The simulated measured eigenvector matrix became:

XM =
 -1.2528 × 10^{-5}  -0.00029345  -0.0028605
  7.9745 × 10^{-5}   0.0015719    0.012768
 -0.00039017        -0.0064281   -0.043326
  0.0015785          0.021449     0.11279
 -0.0054324         -0.058119    -0.23438
  0.016097           0.13210      0.38359
 -0.041476          -0.24960     -0.47393
  0.091780           0.39290      0.39510
 -0.17778           -0.49152     -0.10852
  0.29897            0.45559     -0.24409
 -0.44078           -0.23769      0.37962
  0.53922           -0.10088     -0.16311
 -0.52197            0.36497     -0.22086
  0.34568           -0.34533      0.33064
 -0.10101            0.11494     -0.12827

Without application of Stage I, the matrix XM did not satisfy the orthogonality

constraint, as shown by the residual:

||H(XM)||F = 3.668 × 10^7.


Application of Stage I yielded:

Xu =
  −1.6363×10^−5   −0.00029433   −0.0028606
   9.6687×10^−5    0.0015758     0.012768
  −0.00044692     −0.0064414    −0.043328
   0.0017231       0.021483      0.11280
  −0.0057238      −0.058189     −0.23441
   0.016549        0.13220       0.38365
  −0.041977       −0.24972      −0.47406
   0.092072        0.39297       0.39535
  −0.17760        −0.49144      −0.10893
   0.29832         0.45532      −0.24353
  −0.44009        −0.23728       0.37893
   0.53906        −0.10129      −0.16238
  −0.52253         0.36525      −0.22148
   0.34638        −0.34546       0.33102
  −0.10127         0.11497      −0.12838

The updated matrix Xu did satisfy the orthogonality constraint, as shown by the

residual:

||H(Xu)||F = 9.09143 × 10^−6.

Results of Stage II

With the updated eigenvector matrix Xu from Stage I:

• The updated matrix Ku was symmetric, as shown by the residual norm:

||Ku − Ku^T||F = 9.897 × 10^−8.

• The measured eigenvalues and the corrected measured eigenvectors were reproduced accurately by the updated model, as shown by the following residual:

||R(Ku)||F = 5.795 × 10^−5.


CHAPTER 6

Affine Parametric Quadratic Inverse Eigenvalue Problem

6.1 Introduction

In this chapter, we consider the QIEP for the quadratic pencil P(λ) = λ^2 M + λC + K, in which the matrices C and K are defined as members of the affine families

C = C(α) = C0 + ∑_{i=1}^{n} αi Ci,   K = K(β) = K0 + ∑_{i=1}^{n} βi Ki,   (6.1.1)

where α, β ∈ Rn, and the matrices Ci and Ki are real symmetric n × n matrices

which comprise an affine family. For the sake of convenience we will denote this

pencil as (M,C(α), K(β)), and the eigenvalues of the pencil will be denoted by

λ1(α, β), ..., λ2n(α, β).

The set of eigenvalues is closed under conjugation. The affine quadratic inverse

eigenvalue problem to be considered here is stated as follows:

Problem 1. Affine Quadratic Inverse Eigenvalue Problem (AQIEP).

Given a set {µ1, ..., µ2n} with distinct entries, closed under complex conjugation, find (α, β) ∈ R^{2n} such that µ1, ..., µ2n are the eigenvalues of the pencil (M, C(α), K(β)).

We will call the set {µ1, ..., µ2n} the target set of eigenvalues. The matrix of target eigenvalues will be denoted by Σ:

Σ = diag(µ1, ..., µ2n).

As an example of how an affine model arises in practice, consider the following

mass-spring system with damping.


Figure 6.1: Serially linked mass-spring system (masses m1, ..., mn, damper coefficients α1, ..., αn, spring stiffnesses β1, ..., βn, and displacements q1(t), ..., qn(t)).

The natural frequencies of the system are determined by the eigenvalues of the

matrix pencil (M, C(α), K(β)), where

M = diag(m1, m2, ..., mn),

C = [ α1+α2   −α2                    ]
    [ −α2    α2+α3   −α3             ]
    [           ⋱       ⋱            ]
    [                  −αn      αn   ],

K = [ β1+β2   −β2                    ]
    [ −β2    β2+β3   −β3             ]
    [           ⋱       ⋱            ]
    [                  −βn      βn   ].

Here we see that the matrices C and K are members of affine families of the form (6.1.1), where

K0 = C0 = 0,   K1 = C1 = e1 e1^T,
Ki = Ci = (e_{i−1} − e_i)(e_{i−1} − e_i)^T,   i = 2, 3, ..., n.
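As a quick sanity check of this identification, one can assemble C(α) and K(β) from the rank-one basis and compare with the tridiagonal forms displayed above (a minimal sketch; the helper name affine_CK is ours):

```python
import numpy as np

def affine_CK(alpha, beta):
    """Assemble C(alpha) and K(beta) from the rank-one affine basis
    C1 = K1 = e1 e1^T, Ci = Ki = (e_{i-1} - e_i)(e_{i-1} - e_i)^T."""
    n = len(alpha)
    e = np.eye(n)
    basis = [np.outer(e[0], e[0])]
    basis += [np.outer(e[i - 1] - e[i], e[i - 1] - e[i]) for i in range(1, n)]
    C = sum(a * B for a, B in zip(alpha, basis))
    K = sum(b * B for b, B in zip(beta, basis))
    return C, K

C, K = affine_CK([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
# both C and K come out tridiagonal, matching the displayed forms
```

For α = (1, 2, 3) this reproduces the first row [α1+α2, −α2, 0] = [3, −2, 0], and similarly for K.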

Our purpose here is first to develop a Newton's method for solving Problem 1. This can be done by finding a zero of the function

f(α, β) = [ λσ(1)(α, β) − µ1   ]
          [        ⋮           ]
          [ λσ(2n)(α, β) − µ2n ],   (6.1.2)


where µ1, ..., µ2n are the target eigenvalues, λ1(α, β), ..., λ2n(α, β) are the eigenvalues of the pencil (M, C(α), K(β)) in some order, and σ is chosen so that ∑_i (λσ(i)(α, β) − µi)^2 is minimal over all permutations σ of {1, 2, ..., 2n}. Newton's method for the standard inverse eigenvalue problem was thoroughly studied in the well-known paper by Friedland, Nocedal, and Overton [31], which considers the formulation and local analysis of four quadratically convergent iterative methods, related to Newton's method, for solving the symmetric standard inverse eigenvalue problem. One of the approaches considered there is to find a zero of the function f, the difference between the set of prescribed eigenvalues and those calculated during the iteration.

Note that the formulation (6.1.2) requires pairing the eigenvalues λσ(i)(α, β) with the µi at each iteration. To avoid the pairing problem, Elhay and Ram [28] proposed a Newton's method based on the function g(α, β) = (g1(α, β), ..., g2n(α, β)), where

gi(α, β) = det(µi^2 M + µi C(α) + K(β)).   (6.1.3)

However, it was noted in [31] that the approach of finding a zero of the function f defined in (6.1.2) is always to be preferred over finding a zero of the function g defined above in (6.1.3).

The problem of finding the permutation σ needed to evaluate f in (6.1.2) is known as the matching problem. Namely, given two sets of numbers a = {a1, ..., ak} and b = {b1, ..., bk}, we say that the set {aσ(1), ..., aσ(k)} matches the set {b1, ..., bk} if σ is the permutation which minimizes

∑_{j=1}^{k} (aσ(j) − bj)^2.


The permutation σ in (6.1.2) is the permutation which minimizes

∑_{j=1}^{2n} (λσ(j)(α, β) − µj)^2   (6.1.4)

among all possible permutations of the list of eigenvalues λi(α, β):

σ ∈ arg min_σ ∑_{j=1}^{2n} (λσ(j)(α, β) − µj)^2.   (6.1.5)

In other words, λσ(1)(α, β), ..., λσ(2n)(α, β) is the closest match for the target eigenvalues µ1, ..., µ2n. In the context of ordering real sets, we recall the following well-known result [39].

Theorem 6.1 ([39]) Given two sets of real numbers a = {a1, ..., ak} and b = {b1, ..., bk}, the expression

∑_{j=1}^{k} (aj − bj)^2

attains its minimum value when a and b are both monotonically increasing or both monotonically decreasing, i.e., a1 ≤ ... ≤ ak and b1 ≤ ... ≤ bk, or a1 ≥ ... ≥ ak and b1 ≥ ... ≥ bk.

The matching problem for two sets of complex numbers can be solved in O(n^3) time using the so-called Hungarian method [43].
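The matching step can be sketched as follows, assuming SciPy is available (scipy.optimize.linear_sum_assignment implements the Hungarian method; the helper name match and the small test data are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(lam, mu):
    """Return the permutation sigma minimizing sum_j |lam[sigma(j)] - mu[j]|^2."""
    cost = np.abs(lam[:, None] - mu[None, :]) ** 2  # cost[i, j] = |lam_i - mu_j|^2
    row, col = linear_sum_assignment(cost)
    sigma = np.empty_like(col)
    sigma[col] = row          # sigma[j] = index of the eigenvalue matched to mu_j
    return sigma

lam = np.array([2.0 + 1.0j, 0.1 + 0.0j, 1.0 - 1.0j])
mu = np.array([1.0 - 1.1j, 2.1 + 0.9j, 0.0 + 0.0j])
sigma = match(lam, mu)        # lam[sigma] is the closest match for mu
```

Here lam[sigma] = (1 − i, 2 + i, 0.1), pairing each λ with its nearest target µ.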

Implementation of Newton's method for the zero-finding problem requires computation of the Jacobian. In Section 6.2, we develop a formula for the Jacobian in terms of the λi's and the matrices M, Ck, and Kk, k = 1, 2, ..., n.

It is well known that Newton’s method is locally convergent. In order to obtain a

globally convergent method for our problem, we propose a hybrid method combining

Newton’s method with a globally convergent alternating projections method. The

latter is developed with a view to obtaining a good initial guess for Newton’s method.


A numerically efficient approach for generating the projection operators needed for the alternating projections method is also developed in this chapter. Finally, an efficient implementation of the proposed alternating projections method, in which the necessary eigenvectors are computed by inverse iteration, is suggested.

The chapter is organized as follows. In Section 6.2, we describe Newton's method for solving the affine quadratic inverse eigenvalue problem and calculate the derivatives necessary for it; we also show that the method is quadratically convergent. In Section 6.4 we develop the alternating projections method for solving the affine QIEP and derive the projection operators necessary for its implementation. As an improvement to the alternating projections method, we suggest using inverse iteration to compute the eigenvectors; this also reduces the computational cost, not only because the eigenvectors can be calculated cheaply but also because the matching problem is solved automatically. Some global convergence results for the method are also provided. The solution to the matrix nearness problem, which arises while calculating one of the projection operators, is presented in Section 6.3, where we also show how a good approximation of the solution to the nearness problem can be calculated in a numerically efficient way. In Section 6.6 we present results of our numerical experiments, which illustrate the accuracy of our hybrid method.

6.2 Newton’s Method

Theorem 6.2 Let (α∗, β∗) be a solution to Problem 1. Then there exists a neighborhood of (α∗, β∗) which contains no singular points of the spectrum, i.e., points where the pencil has multiple eigenvalues.

For details see [5, 36, 69].


Corollary 6.1 There is a neighborhood of (α∗, β∗) in which the λi(α, β) are distinct and differentiable functions.

Thus, the function f defined in (6.1.2) is differentiable in a neighborhood of a solution.

The system (6.1.2) can be solved by applying Newton's method to it. To develop a gradient formula for the function f(α, β), let us first rewrite the quadratic eigenvalue problem MX(α, β)Λ(α, β)^2 + C(α)X(α, β)Λ(α, β) + K(β)X(α, β) = 0 as the symmetric generalized eigenvalue problem:

[ −K  0 ] [ X  ]   [ C  M ] [ X  ]
[  0  M ] [ XΛ ] = [ M  0 ] [ XΛ ] Λ.

To make the expressions more readable, we have omitted the arguments. We know that the eigenvector matrix of the above generalized eigenvalue problem satisfies the following orthogonality relations:

[ X  ]^T [ C  M ] [ X  ]
[ XΛ ]   [ M  0 ] [ XΛ ] = D

and

[ X  ]^T [ −K  0 ] [ X  ]
[ XΛ ]   [  0  M ] [ XΛ ] = DΛ,

where D is some diagonal matrix. Assuming that Dii = xi^T(2λiM + C)xi ≠ 0, i = 1, ..., 2n, and taking

zi = xi / √(xi^T(2λiM + C)xi),

we can scale the eigenvectors so that D = I.

Gradient formulas can now be obtained by differentiating the relations

zi^T(2λiM + C)zi = 1,
zi^T(λi^2 M − K)zi = λi.


Differentiating with respect to a parameter (derivatives denoted by a dot) gives

żi^T(2λiM + C)zi + zi^T(2λ̇iM + Ċ)zi + zi^T(2λiM + C)żi = 0,   (6.2.6)
żi^T(λi^2 M − K)zi + zi^T(2λiλ̇iM − K̇)zi + zi^T(λi^2 M − K)żi = λ̇i.   (6.2.7)

Multiplying (6.2.6) by λi and subtracting it from (6.2.7), we obtain

λ̇i = −zi^T(λiĊ + K̇)zi.   (6.2.8)

Since C and K do not depend on β and α, respectively, we can write

∂λi/∂αk = −λi zi^T Ck zi,   ∂λi/∂βk = −zi^T Kk zi.

If we do not scale the eigenvectors, the derivatives are given by

∂λi/∂αk = −λi xi^T Ck xi / (xi^T(2λiM + C)xi),   ∂λi/∂βk = −xi^T Kk xi / (xi^T(2λiM + C)xi).

Thus the Jacobian of f is

Jik =

∂λi

∂αk= − λix

Ti Ckxi

xTi (2λiM+C)xi

, k = 1, ..., n

∂λi

∂βk−n= − xT

i Kk−nxi

xTi (2λiM+C)xi

, k = n + 1, ..., 2n, (6.2.9)

and a step of Newton's method is defined as follows:

J(αi, βi) [ αi+1 − αi ]
          [ βi+1 − βi ] = −f(αi, βi).   (6.2.10)

Algorithm 3 Newton's Method

INPUT: λ∗, (α0, β0), tolerance ε
OUTPUT: Solution (α∗, β∗) to (6.1.2)
1: for i = 0, 1, ... do
2:   Find the eigenvalues and eigenvectors of (M, C(αi), K(βi)).
3:   Solve the combinatorial minimization problem (6.1.4) and compute f(αi, βi).
4:   Stop if ||(αi+1, βi+1) − (αi, βi)|| < ε.
5:   Calculate J(αi, βi) as given in (6.2.9) and find (αi+1, βi+1) by solving (6.2.10).
6: end for
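A compact sketch of this iteration for the serially linked system with M = I follows (the helper names, the test spectrum, and the starting point are ours; the matching step uses SciPy's Hungarian solver, and a fixed iteration count replaces the stopping test):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 2
e = np.eye(n)
basis = [np.outer(e[0], e[0]), np.outer(e[0] - e[1], e[0] - e[1])]
M = np.eye(n)

def eig_pairs(alpha, beta):
    C = sum(a * B for a, B in zip(alpha, basis))
    K = sum(b * B for b, B in zip(beta, basis))
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])  # companion form, M = I
    lam, V = np.linalg.eig(A)
    return lam, V[:n, :], C

def newton_step(alpha, beta, mu):
    lam, X, C = eig_pairs(alpha, beta)
    # matching: the permutation sigma of (6.1.5), via the Hungarian method
    r, c = linear_sum_assignment(np.abs(lam[:, None] - mu[None, :]) ** 2)
    sig = np.empty_like(c); sig[c] = r
    lam, X = lam[sig], X[:, sig]
    J = np.zeros((2 * n, 2 * n), dtype=complex)
    for i in range(2 * n):
        x = X[:, i]
        d = x @ (2 * lam[i] * M + C) @ x
        J[i, :n] = [-lam[i] * (x @ B @ x) / d for B in basis]  # d lambda_i / d alpha_k
        J[i, n:] = [-(x @ B @ x) / d for B in basis]           # d lambda_i / d beta_k
    step = np.linalg.solve(J, -(lam - mu))
    # the conjugate-pair structure makes the exact step real; drop rounding noise
    return alpha + step[:n].real, beta + step[n:].real

mu = eig_pairs([0.4, 0.3], [2.0, 1.5])[0]   # target spectrum from a known solution
alpha, beta = np.array([0.41, 0.31]), np.array([2.02, 1.52])
for _ in range(10):
    alpha, beta = newton_step(alpha, beta, mu)
res = np.linalg.norm(np.sort_complex(eig_pairs(alpha, beta)[0]) - np.sort_complex(mu))
```

Starting close to a known solution, a handful of iterations drive the spectrum residual to roundoff level, illustrating the quadratic convergence discussed below.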

The advantage of using Newton’s method is that it is quadratically convergent

in a neighborhood of a solution in which J(α, β) is nonsingular. Thus, provided


an initial point is close enough to a solution, the Newton iterations will converge rapidly (in practice, 3-5 iterations). We will discuss in Section 6.4 how to obtain a point close to a solution using a globally convergent alternating projections method. The fact that a combinatorial minimization problem (a linear assignment problem) must be solved at each iteration is not very restrictive. As mentioned above, this problem can be solved in O(n^3) time using the Hungarian method [43]. Moreover, in practice we observed that, as we approach a solution, there is no need to recompute σ: it stabilizes, i.e., does not change from iteration to iteration.

6.3 Matrix Nearness Problem

Before we continue with the alternating projections method, we need to consider a matrix nearness problem which arises while computing one of the projection operators needed for the algorithm proposed in the next section. The problem is formulated as follows: given matrices M, C, K, find matrices C̃, K̃ which are as close as possible to C, K and such that the given matrix Σ is the eigenvalue matrix of the pencil (M, C̃, K̃). In other words,

||C − C̃||^2 + ||K − K̃||^2 → min
s.t. MXΣ^2 + C̃XΣ + K̃X = 0,   (6.3.11)
C̃, K̃ ∈ R^{n×n}, X ∈ C^{n×2n}, ||xi|| = 1.

The matrices C̃ and K̃ can be written explicitly in terms of X and Σ; thus X is the only variable in (6.3.11). Recall the result shown in Section 2.1.5: given a nonsingular matrix M, a diagonal matrix Σ, and a matrix X such that col(X, XΣ) is nonsingular, the relation

MXΣ^2 + C̃XΣ + K̃X = 0


holds if and only if

C̃ = −MXΣ^2 V2,   (6.3.12)
K̃ = −MXΣ^2 V1,   (6.3.13)

where V is the 2n × 2n matrix

V = [ X  ]^−1
    [ XΣ ]     = (V1, V2),   V1, V2 ∈ C^{2n×n}.   (6.3.14)
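This relation is easy to verify numerically. In the sketch below (random self-conjugate spectral data; all variable names are ours, with Ct, Kt standing for C̃, K̃), the recovered matrices are real and reproduce the prescribed eigenstructure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = np.diag(rng.uniform(1.0, 2.0, n))

# self-conjugate spectral data: eigenvalues mu, conj(mu) with paired eigenvectors
mu = rng.standard_normal(n) + 1j * rng.uniform(0.5, 2.0, n)
Sigma = np.diag(np.concatenate([mu, mu.conj()]))
Xh = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = np.hstack([Xh, Xh.conj()])

Z = np.vstack([X, X @ Sigma])            # col(X, X Sigma), nonsingular here
V1, V2 = np.hsplit(np.linalg.inv(Z), 2)  # V = (V1, V2), V1, V2 in C^{2n x n}

Ct = -M @ X @ Sigma @ Sigma @ V2         # (6.3.12)
Kt = -M @ X @ Sigma @ Sigma @ V1         # (6.3.13)
residual = np.linalg.norm(M @ X @ Sigma @ Sigma + Ct @ X @ Sigma + Kt @ X)
```

Because the spectral data is closed under conjugation, the imaginary parts of Ct and Kt vanish up to roundoff.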

Thus, the optimization problem (6.3.11) can be rewritten as an unconstrained optimization problem:

F(X) = ||C − C̃(X)||^2 + ||K − K̃(X)||^2 → min,   (6.3.15)

with C̃(X), K̃(X) given by (6.3.12)-(6.3.13).

The problem (6.3.15) can be solved via an equivalent nearness problem formulated for the companion-form linearization of the pencil, assuming that M is nonsingular:

min_X ||A − ZΣZ^−1||^2,   (6.3.16)
Z = col(X, XΣ),   X ∈ C^{n×2n},   (6.3.17)

where

A = [    0          I     ]
    [ −M^−1 K   −M^−1 C   ].

Consider problem (6.3.16), disregarding for the moment the structure (6.3.17) of the matrix Z. Let H = ZΣZ^−1; then

F(Z) = ||A − ZΣZ^−1||^2 = <A − H, A − H> = <A, A> + <H, H> − 2<A, H> = ||A||^2 + tr(H*H) − 2 tr(A*H).

The derivative of the function F can be easily found:

∂/∂Zij tr(A*H) = tr( A* ∂/∂Zij (ZΣZ^−1) ) = tr( A* ( (∂Z/∂Zij) ΣZ^−1 − ZΣZ^−1 (∂Z/∂Zij) Z^−1 ) ).


Now

∇_Z tr(A*H) = AZ^−*Σ − Z^−*ΣZ*AZ^−*.

Note that

d tr(A*H) = d tr(H*A) = tr(A* dH) = tr(dH* A).

Thus,

∇_Z tr(H*H) = 2(HZ^−*Σ − Z^−*ΣZ*HZ^−*).

We can now write the matrix of derivatives with respect to the Zij:

F_Z = 2(HZ^−*Σ − Z^−*ΣZ*HZ^−*) − 2(AZ^−*Σ − Z^−*ΣZ*AZ^−*) = [ J1 ]
                                                             [ J2 ],   (6.3.18)

where (F_Z)ij = ∂F(Z)/∂Zij.

Note that if A is a normal matrix and we restrict the search space to unitary matrices Z, i.e., consider solutions of the form H = ZΣZ*, ZZ* = I, then the gradient of F, as a tangent vector, is given by

∇_Z F = F_Z − Z F_Z^T Z = AZΣ − ZΣZ*AZ,

see [27].

Thus, when Z is unitary, we have

∇_Z F = 0 ⟺ AZΣ − ZΣZ*AZ = 0 ⟺ AH − HA = 0,

i.e., A and H have common eigenvectors, which leads to a closed-form solution of the problem.

Theorem 6.3 If the matrix A is symmetric, then the solution to the problem

min_Z ||A − ZΣZ*||^2,   ZZ* = I,

is given by Z = Y, where Y is the eigenvector matrix of A.


Note that the minimum value of the function F is

F(Y) = ||Y(Λ − Σ)Y*||^2 = ||Λ − Σ||^2 = ∑_i (λi − σi)^2.

The formula (6.3.18) for the derivative was obtained disregarding the fact that Z = col(X, XΣ). Taking this constraint into account, with F_Z = (J1; J2) partitioned as above, the derivative with respect to the free block X becomes

F_X = J1 + J2 Σ.

Having an analytical expression for the derivative, we can solve problem (6.3.16)-(6.3.17) using a gradient-based optimization algorithm, for example BFGS [60]. Once the solution matrix X has been found, the solution C̃, K̃ to the original problem (6.3.15) can be found by substituting X into (6.3.12)-(6.3.13). We should note that the computational cost of solving the minimization problem (6.3.16)-(6.3.17) is high. However, our numerical experiments show that a good approximation to the solution of the nearness problem (6.3.15) can be obtained by choosing X equal to the permuted eigenvector matrix of the pencil (M, C, K). The solution can then be approximated by

C̃ ≈ −MYσΣ^2 U2,   (6.3.19)
K̃ ≈ −MYσΣ^2 U1,   (6.3.20)

where Yσ is the permuted eigenvector matrix of the pencil (M, C, K), the permutation σ is as defined in (6.1.5), and

U = [ Yσ  ]^−1
    [ YσΣ ]     = (U1, U2),   U1, U2 ∈ C^{2n×n}.

6.4 Method of Alternating Projections

Now we turn our attention to the problem of choosing a good initial guess from which a solution to Problem 1 can be computed using the locally quadratically convergent Newton's method. A good way to find a point close to a solution is the alternating projections method.

To describe this method, consider two sets:

L = { (C, K) ∈ R^{2n^2} | MXΣ^2 + CXΣ + KX = 0 for some matrix X with ||xi|| = 1 },

and

A = { (C, K) ∈ R^{2n^2} | C = C(α) = C0 + ∑_{i=1}^{n} αiCi, K = K(β) = K0 + ∑_{i=1}^{n} βiKi }.

The set L is the set of quadratic matrix pencils with the given fixed matrix M as leading coefficient and eigenvalue matrix Σ. The set A is the set of quadratic pencils with M as leading coefficient and matrices C, K belonging to the corresponding affine families. In view of these definitions, Problem 1 can now be reformulated as follows.

Problem 2. Find (α, β) ∈ R2n such that (M,C(α), K(β)) ∈ L ∩ A.

Problem 2 can be solved using an alternating projections method [74]. The

following results form a basis of such a method.

Theorem 6.4 Let C1, C2 be closed convex sets in a finite-dimensional Hilbert space H with C1 ∩ C2 ≠ ∅, and let PC1 and PC2 denote the projection operators onto C1 and C2, respectively. Then

lim_{n→∞} (PC1 PC2)^n = lim_{n→∞} (PC2 PC1)^n = P_{C1∩C2}.

Note that both of the sets L, A are closed, but the set L is nonconvex. Thus, alternating projections might not converge. However, an alternating projections step never increases the distance between two successive iterates, as the following result shows.

Theorem 6.5 Let C1, C2 be closed sets in a finite-dimensional Hilbert space H with C1 ∩ C2 ≠ ∅, and let y ∈ C2. If

x1 = PC1(y),   y1 = PC2(x1),   x2 = PC1(y1),


then

||x2 − y1|| ≤ ||x1 − y1|| ≤ ||x1 − y||.

Corollary 6.2 For any given x0 ∈ H, the distances between successive iterates of the sequence {(PC1 PC2)^n(x0)}_{n=0}^∞ form a nonincreasing sequence.

As we noted above, an accumulation point of {(PC1 PC2)^n(x0)}_{n=0}^∞ is not necessarily a solution to Problem 2. However, in practice, even if the alternating projections do not converge to a solution of Problem 2, an accumulation point is still close to a solution.

Now we need to derive the projection operators PL and PA. The inner product on R^{2n^2} is defined in the standard way:

<(C1, K1), (C2, K2)> = trace(C1^T C2 + K1^T K2).

Let PA(C, K) = (C(α), K(β)); then the coefficients (α, β) are found as solutions of the following linear systems:

A1 α = b1,   A2 β = b2,   (6.4.21)

where (A1)ij = trace(Ci^T Cj), (A2)ij = trace(Ki^T Kj), (b1)i = trace((C − C0)^T Ci), and (b2)i = trace((K − K0)^T Ki). Define

c : A → R^{2n},   c(C, K) = (α, β).   (6.4.22)
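The projection PA amounts to two small Gram linear systems. A sketch (the helper name project_affine is ours), together with the consistency check that a member of the affine family projects onto itself:

```python
import numpy as np

def project_affine(C, K, C0, K0, Cs, Ks):
    """P_A(C, K): solve the Gram systems (6.4.21) for (alpha, beta).
    Cs and Ks are the lists of basis matrices Ci and Ki."""
    A1 = np.array([[np.trace(Ci.T @ Cj) for Cj in Cs] for Ci in Cs])
    A2 = np.array([[np.trace(Ki.T @ Kj) for Kj in Ks] for Ki in Ks])
    b1 = np.array([np.trace((C - C0).T @ Ci) for Ci in Cs])
    b2 = np.array([np.trace((K - K0).T @ Ki) for Ki in Ks])
    return np.linalg.solve(A1, b1), np.linalg.solve(A2, b2)

# consistency check with the rank-one basis of the serially linked system
n = 3
e = np.eye(n)
Cs = [np.outer(e[0], e[0])] + [np.outer(e[i - 1] - e[i], e[i - 1] - e[i]) for i in range(1, n)]
Ks = [B.copy() for B in Cs]
C0 = np.zeros((n, n)); K0 = np.zeros((n, n))
C = sum(a * B for a, B in zip([1.0, 2.0, 3.0], Cs))
K = sum(b * B for b, B in zip([0.5, 1.5, 2.5], Ks))
alpha, beta = project_affine(C, K, C0, K0, Cs, Ks)
```

Since the basis matrices are linearly independent, the Gram matrices A1, A2 are nonsingular and the member's coefficients are recovered exactly.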

The projection onto L is the solution to problem (6.3.11). Note that this definition of the projection operator differs from the one usually used in the numerical analysis and numerical linear algebra literature. The value of the operator can be found using a gradient-search optimization routine, or via the approximation to the solution given in (6.3.19)-(6.3.20). Thus,

PL(C, K) ≈ (−MXσΣ^2 V̄2, −MXσΣ^2 V̄1),   (6.4.23)


where Xσ is the permuted eigenvector matrix of the pencil (M, C, K), the permutation σ is as defined in (6.1.5), and

V̄ = [ Xσ  ]^−1
    [ XσΣ ]     = (V̄1, V̄2),   V̄1, V̄2 ∈ C^{2n×n}.   (6.4.24)

Our numerical experiments indicate that the convergence of the alternating projections method does not suffer when the approximation of the projection operator defined above is used.

Algorithm 4 Alternating Projections Method

INPUT: λ∗, (α0, β0), ε
OUTPUT: (α, β)
1: for i = 0, 1, ... do
2:   Form (C(αi), K(βi)).
3:   Compute the eigenvalues and eigenvectors of (M, C(αi), K(βi)) and compute σ.
4:   Form the matrix Xσ.
5:   Compute (C̃, K̃), the approximate projection of (C(αi), K(βi)) onto L, using (6.4.23).
6:   Compute (αi+1, βi+1) by solving (6.4.21).
7:   Stop if ||(αi+1, βi+1) − (αi, βi)|| < ε.
8: end for

As can be seen from (6.4.23)-(6.4.24), there is no need to compute the eigenvalues in order to compute the approximation of the projection operator; only the eigenvectors are needed. Instead of computing the eigenvectors exactly, we can approximate them using inverse iteration. Suppose that (αi, βi) is our current estimate of the parameters and X^(i)_σ is an approximation to Xσ(αi, βi), the matrix of eigenvectors of (M, C(αi), K(βi)) arranged in the "right" order (6.1.5). Let x^i_σ(j) be the jth column of X^(i)_σ.

To find (αi+1, βi+1), we compute (C̃, K̃):

(C̃, K̃) = (−MX^(i)_σ Σ^2 V2, −MX^(i)_σ Σ^2 V1),   (6.4.25)

where

V = [ X^(i)_σ   ]^−1
    [ X^(i)_σ Σ ]     = (V1, V2),


and then calculate (αi+1, βi+1) by solving (6.4.21). To update the approximation to the eigenvectors, we apply one step of inverse iteration: we compute uj, j = 1, ..., 2n, from

[ 0  I ] [ uj ]      [ I   0 ] [   x^i_σ(j)  ]
[ K  C ] [ zj ] = µj [ 0  −M ] [ µj x^i_σ(j) ],   j = 1, ..., 2n.   (6.4.26)

We then define

x^{i+1}_σ(j) = uj / ||uj||,

which determines the new matrix X^(i+1)_σ. The vector uj can be obtained as the solution of an n × n linear system (see [61]):

(µj^2 M + µj C + K) uj = (C + 2µj M) x^i_σ(j).   (6.4.27)

Thus we perform an alternating-projections-like iteration in which, instead of computing the exact eigenvectors of (M, C(α), K(β)) at each step and then computing the permutation σ, we update an approximation to them by performing one step of inverse iteration.
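One such update step can be sketched as follows (the test pencil, the shift offset, and the error measure are ours); the n × n solve of (6.4.27) visibly sharpens a perturbed eigenvector:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = np.eye(n)
S = rng.standard_normal((n, n)); C = 0.1 * (S + S.T)
S = rng.standard_normal((n, n)); K = S + S.T + 5 * np.eye(n)

# reference eigenpair from the companion linearization
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
lams, V = np.linalg.eig(A)
j = np.argmax(lams.imag)
lam = lams[j]
x_true = V[:n, j] / np.linalg.norm(V[:n, j])

mu = lam * 1.001                  # target value: close to, but not equal to, lam
x0 = x_true + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x0 /= np.linalg.norm(x0)

u = np.linalg.solve(mu**2 * M + mu * C + K, (C + 2 * mu * M) @ x0)  # (6.4.27)
x1 = u / np.linalg.norm(u)

def err(x):                       # alignment error, insensitive to phase
    return 1.0 - abs(x_true.conj() @ x)

err0, err1 = err(x0), err(x1)
```

Because the shifted matrix is nearly singular in the direction of the true eigenvector, one solve strongly amplifies that component, so err1 is much smaller than err0.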


Algorithm 5 Alternating Projections - like Method

INPUT: λ∗, (α0, β0), ε
OUTPUT: (α, β)
1: Form (C(α0), K(β0)).
2: Compute the eigenvalues and eigenvectors of (M, C(α0), K(β0)) and compute σ.
3: Form the matrix X^(0)_σ = Xσ(α0, β0).
4: for i = 0, 1, ... do
5:   Compute (C̃, K̃), the quasi-projection of (C(αi), K(βi)) onto L, using (6.4.25).
6:   Compute (αi+1, βi+1) by solving (6.4.21).
7:   Stop if ||(αi+1, βi+1) − (αi, βi)|| < ε.
8:   Form (C, K) = (C(αi+1), K(βi+1)).
9:   Solve the 2n linear systems
       (µj^2 M + µj C + K) uj = (C + 2µj M) x^i_σ(j),   j = 1, ..., 2n,
     and compute x^{i+1}_σ(j) = uj / ||uj||.
10: end for

6.5 Hybrid Method

On the basis of the discussion above, we state the following hybrid algorithm:

Algorithm 6 Hybrid Method

INPUT: λ∗, (α0, β0), ε1, ε2
OUTPUT: (α∗, β∗)
1: while ||(αi+1, βi+1) − (αi, βi)|| ≥ ε1 do
2:   (αi+1, βi+1) = c((PA PL)(C(αi), K(βi)))
3: end while
4: while ||(αi+1, βi+1) − (αi, βi)|| ≥ ε2 do
5:   Form J(αi, βi) and compute (αi+1, βi+1) from (6.2.10)
6: end while


6.6 Numerical Experiments

Consider the mass-spring system 2.2 with 3 degrees of freedom; let α0 = (1, 1, 1), β0 = (1, 1, 1), and

λ∗ = {−0.0271 ± i1.0108, −0.0177 ± i0.6724, −0.0023 ± i0.2658}.

After applying a few iterations of Algorithm 5, we obtain:

αAP = (0.0332, 0.0134, 0.0169),   βAP = (0.7188, 0.2193, 0.1915),
λ(αAP, βAP) = {−0.0271 ± i1.0110, −0.0176 ± i0.6724, −0.0021 ± i0.2554}.

Using (αAP, βAP) as the initial guess for Newton's method, we obtain after several iterations the following improved values:

αN = (0.0139, 0.0203, 0.0199),   βN = (0.6038, 0.2722, 0.1988).

As verification,

||λ∗ − λ(αN, βN)|| = 1.29 × 10^−9.


REFERENCES

[1] Yuri Agranovich, Tomas Azizov, Andrei Barsukov, and Aad Dijksma. On an inverse spectral problem for a quadratic Jacobi matrix pencil. Journal of Mathematical Analysis and Applications, 306(1):1–17, 2005.

[2] Zheng-Jian Bai. Symmetric tridiagonal inverse quadratic eigenvalue problems with partial eigendata. Inverse Problems, 24(1), 2008.

[3] Zheng-Jian Bai, Delin Chu, and Defeng Sun. A dual optimization approach to inverse quadratic eigenvalue problems with partial eigenstructure. SIAM Journal on Scientific Computing, 29(6):2531–2561, 2007.

[4] M. Baruch and I. Y. Bar-Itzhack. Optimal weighted orthogonalization of measured modes. AIAA Journal, 16(4):346–351, 1978.

[5] Hellmut Baumgärtel. Endlichdimensionale analytische Störungstheorie. Akademie-Verlag, Berlin, 1972.

[6] D. P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York, NY, 1982.

[7] S. K. Brahma and B. N. Datta. A norm-minimizing parametric algorithm for quadratic partial eigenvalue assignment via Sylvester equation. Proceedings of the IEEE European Control Conference, pages 490–496, 2007. The full version of the paper has been submitted to Numerical Linear Algebra with Applications.

[8] S. K. Brahma and B. N. Datta. A Sylvester-equation based parametric approach for minimum norm and robust partial quadratic eigenvalue assignment problems. Proceedings of the IEEE Mediterranean Control Conference, pages 1–6, 2007. The full version of the paper is to appear in J. Sound and Vibration.

[9] B. Caesar. Updating system matrices using modal test data. In Proceedings of the 4th International Modal Analysis Conference, pages 453–459, London, England, 1987.

[10] J. Carvalho, B. N. Datta, A. Gupta, and M. Lagadapati. A direct method for matrix updating with incomplete measured data and without spurious modes. Mechanical Systems and Signal Processing, 21:2715–2731, 2007.

[11] J. Carvalho, B. N. Datta, W. Lin, and C. Wang. Symmetry preserving eigenvalue embedding in finite element model updating of vibration structures. Journal of Sound and Vibration, 290:839–864, 2006.


[12] J. Carvalho, B. N. Datta, W. Lin, and C. Wang. Symmetry preserving eigenvalue embedding in finite-element model updating of vibrating structures. J. Sound and Vibration, 290:839–864, 2006.

[13] Moody T. Chu, Nicoletta Del Buono, and Bo Yu. Structured quadratic inverse eigenvalue problem, I. Serially linked systems. SIAM Journal on Scientific Computing, 29(6):2668–2685, 2007.

[14] Moody T. Chu and Kenneth R. Driessel. The projected gradient methods for least squares matrix approximations with spectral constraints. SIAM Journal on Numerical Analysis, 27(4):1050–1060, 1990.

[15] Moody T. Chu and Gene H. Golub. Inverse Eigenvalue Problems: Theory, Algorithms, and Applications. Oxford University Press, New York, 2005.

[16] Moody T. Chu, Yuen-Cheng Kuo, and Wen-Wei Lin. On inverse quadratic eigenvalue problems with partially prescribed eigenstructure. SIAM Journal on Matrix Analysis and Applications, 25(4):995–1020, 2004.

[17] M. T. Chu, B. N. Datta, W. Lin, and S. F. Xu. The spill-over phenomenon in quadratic model updating. American Institute of Aeronautics and Astronautics (AIAA) Journal, forthcoming 2008.

[18] B. N. Datta. Numerical Linear Algebra and Applications. Brooks/Cole Publishing Company, Pacific Grove, CA, 1995.

[19] B. N. Datta, S. Elhay, and Y. M. Ram. Orthogonality and partial pole assignment for the symmetric definite quadratic pencil. Linear Algebra and its Applications, 257:29–48, 1997.

[20] B. N. Datta, S. Elhay, Y. M. Ram, and D. Sarkissian. Partial eigenstructure assignment for the quadratic pencil. Journal of Sound and Vibration, 230:101–110, 2000.

[21] B. N. Datta, S. Elhay, Y. M. Ram, and D. R. Sarkissian. Partial eigenstructure assignment for the quadratic pencil. J. Sound Vibration, 230(1):101–110, 2000.

[22] B. N. Datta and D. R. Sarkissian. Multi-input partial eigenvalue assignment for the symmetric quadratic pencil. In Proceedings of the American Control Conference, pages 2244–2247, 1999.

[23] B. N. Datta and D. R. Sarkissian. Theory and computations of some inverse eigenvalue problems for the quadratic pencil. In V. Olshevsky, editor, Structured Matrices in Operator Theory, Control, and Signal and Image Processing, volume 280 of Contemporary Mathematics, pages 221–240. American Mathematical Society, Providence, RI, 2001.


[24] B. N. Datta and D. R. Sarkissian. Partial eigenvalue assignment in linearsystems: Existence, uniqueness and numerical solution. In International Sym-posium on Mathematical Theory of Networks and Systems, University of NotreDame, South Bend, IN, Aug. 2002.

[25] B.N Datta. Finite element model updating, eigenstructure assignment andeigenvalue embedding techniques for vibrating systems. Mechanical Systemsand Signal Processing, 16(1):83–96, 2001.

[26] B.N. Datta, S. Deng, D.R. Sarkissian, and V. Sokolov. An optimization tech-nique for model updating with measured data satisfying quadratic orthogonalityconstraint. Mechanical Systems and Signal Processing, forthcoming 2008.

[27] Alan Edelman, Tomas A. Arias, and Steven T. Smith. The geometry of algo-rithms with orthogonality constraints. SIAM Journal on Matrix Analysis andApplications, 20(2):303–353, 1998.

[28] S. Elhay and Y.M. Ram. An affine inverse eigenvalue problem. Inverse Prob-lems, 18(2):455 – 66, 2002.

[29] D. J. Ewins. Adjustment or updating of models. Sadhana, 25:235–245, 2000.

[30] E. Foltete, G. M. L. Gladwell, and G. Lallement. On the reconstruction of adamped vibrating system from two complex spectra, part ii - experiement. J.Sound Vib., 240:219–240, 2001.

[31] S. Friedland, J. Nocedal, and M. L. Overton. The formulation and analysis ofnumerical methods for inverse eigenvalue problems. SIAM Journal on Numer-ical Analysis, 24(3):634–667, 1987.

[32] M. I. Friswell, D. J. Inman, and D. F. Pilkey. The direct updating of dampingand stiffness matrices. AIAA Journal, 36(3):491–493, 1998.

[33] M. I. Friswell and J. E. Mottershead. Finite Element Model Updating in Struc-tural Dynamics. Kluwer Academic Publishers, Boston, Dordrecht, London,1995.

[34] G. M. L. Gladwell. On the reconstruction of a damped vibrating system fromtwo complex spectra, part i - theory. J. Sound Vib., 240:203–217, 2001.

[35] Graham M. L. Gladwell. Inverse Problems in Vibration. Springer-Verlag,Berlin, 2004.

[36] I. Gohberg, P. Lancaster, and L. Rodman. Matrix Polynomials. Academic Press, New York, NY, 1982.

[37] G. H. Golub and C. F. Van Loan. Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore and London, 1996.


[38] Y. Halevi and I. Bucher. Model updating via weighted reference basis with connectivity constraints. Journal of Sound and Vibration, 265(3):561–581, 2003.

[39] G. H. Hardy, J. E. Littlewood, and G. Pólya. Inequalities. Cambridge University Press, 1988.

[40] C. Hatch, G. W. Skingle, C. H. Greaves, N. A. J. Lieven, J. E. Coote, M. I. Friswell, J. E. Mottershead, H. Shaverdi, C. Mares, A. McLaughlin, M. Link, N. Piet-Lahanier, M. H. Van Houten, D. Göge, and H. Rottmayr. Methods for refinement of structural finite element models: Summary of the GARTEUR AG14 collaborative programme. In 32nd European Rotorcraft Forum, ERF 2006, Maastricht, Netherlands, pages 12–14, 2006.

[41] M. R. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4(5):303–320, 1969.

[42] R. Kenigsbuch and Y. Halevi. Model updating in structural dynamics: A generalized reference basis approach. Mechanical Systems and Signal Processing, 12:75–90, 1998.

[43] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Res. Logist. Quart., 2:83–97, 1955.

[44] Y. C. Kuo, W. W. Lin, and S. F. Xu. A new model updating method for quadratic eigenvalue problems. Preprint, 2005. (Available at http://math.cts.nthu.edu.tw/Mathematics/preprints/prep2005-1-004-050221.pdf).

[45] Yuen-Cheng Kuo, Wen-Wei Lin, and Shu-Fang Xu. Solutions of the partially described inverse quadratic eigenvalue problem. SIAM Journal on Matrix Analysis and Applications, 29(1):33–53, 2006.

[46] P. Lancaster. Lambda-Matrices and Vibrating Systems. Pergamon Press, Oxford, England, 1966.

[47] P. Lancaster and J. Maroulas. Inverse eigenvalue problems for damped vibrating systems. J. Math. Anal. Appl., 123(1):238–261, 1987.

[48] P. Lancaster and U. Prells. Inverse problems for damped vibrating systems. Journal of Sound and Vibration, 283:891–914, 2005.

[49] Peter Lancaster. Isospectral vibrating systems. Part 1: The spectral method. Linear Algebra and its Applications, 409:51–79, 2005.

[50] Peter Lancaster. Model-updating for symmetric quadratic eigenvalue problems. MIMS EPrint 2006.407, Manchester Institute for Mathematical Sciences, University of Manchester, Manchester, UK, November 2006.


[51] Peter Lancaster. Inverse spectral problems for semisimple damped vibrating systems. SIAM Journal on Matrix Analysis and Applications, 29(1):279–301, 2007.

[52] Peter Lancaster. Model-updating for self-adjoint quadratic eigenvalue problems. Linear Algebra and its Applications, 428:2778–2790, 2008.

[53] L. D. Landau and E. M. Lifshitz. Mechanics, Vol. 1 (3rd ed.). Butterworth-Heinemann, 1976.

[54] M. Link. Updating analytical models by using local and global parameters and relaxed optimization requirements. Mechanical Systems and Signal Processing, 12(1):7–22, 1998.

[55] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Vector spaces of linearizations for matrix polynomials. Preprint 238, MATHEON, DFG Research Center "Mathematics for key technologies", Berlin. (Available at http://www.math.tu-berlin.de/~mehrmann/).

[56] C. Minas and D. J. Inman. Matching finite element models to modal data. Transactions of ASME J. Appl. Mech., 112:84–92, 1990.

[57] C. Minas and D. J. Inman. Correcting finite element models with measured modal results using eigenstructure models. In Proc. 6th Intl. Modal Analysis Conf., pages 583–587, Orlando, FL, 1998.

[58] H. Natke. Problems of model updating procedures: A perspective resumption.Mechanical Systems and Signal Processing, 12:65–74, January 1998.

[59] H. G. Natke and Czes A. Cempel. Model-Aided Diagnosis of Mechanical Systems: Fundamentals, Detection, Localization, and Assessment. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1997.

[60] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, Berlin, 1999.

[61] G. Peters and J. H. Wilkinson. Inverse iteration, ill-conditioned equations and Newton's method. SIAM Review, 21(3):339–360, 1979.

[62] M. J. D. Powell. A method for nonlinear constraints in minimization problems. In R. Fletcher, editor, Optimization, pages 283–298. Academic Press, London, 1969.

[63] Uwe Prells and Peter Lancaster. Isospectral vibrating systems. Part 2: Structure preserving transformations. Operator Theory: Advances and Applications, 163:275–298, 2005.

[64] Yitshak M. Ram. Inverse eigenvalue problem for a modified vibrating system. SIAM Journal on Applied Mathematics, 53(6):1762–1775, 1993.


[65] Yitshak M. Ram and James Caldwell. Physical parameters reconstruction of a free–free mass-spring system from its spectra. SIAM Journal on Applied Mathematics, 52(1):140–152, 1992.

[66] Yitshak M. Ram and Sylvan Elhay. An inverse eigenvalue problem for the symmetric tridiagonal quadratic pencil with application to damped oscillatory systems. SIAM Journal on Applied Mathematics, 56(1):232–244, 1996.

[67] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer-Verlag, Berlin, 1998.

[68] D. R. Sarkissian. Theory and Computations of the Partial Eigenvalue and Eigenstructure Assignment in Matrix Second-order and Distributed Parameter Systems. Ph.D. dissertation, Northern Illinois University, DeKalb, IL, 2001.

[69] A. P. Seyranian and A. A. Mailybaev. Multiparameter Stability Theory with Mechanical Applications. World Scientific, New Jersey, 2003.

[70] G. L. G. Sleijpen and H. A. van der Vorst. A Jacobi–Davidson iteration method for linear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 17:401–425, 1996.

[71] Gerard L. G. Sleijpen and Henk A. Van der Vorst. A Jacobi–Davidson iteration method for linear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 17(2):401–425, 1996.

[72] L. Starek and D. J. Inman. Symmetric inverse eigenvalue vibration problem and its applications. Mechanical Systems and Signal Processing, 15(1):11–29, 2001.

[73] F. Tisseur and K. Meerbergen. The quadratic eigenvalue problem. SIAM Review, 43(2):235–286, 2001.

[74] John von Neumann. Functional Operators, Volume 2: The Geometry of Orthogonal Spaces (AM-22). Princeton University Press, 1950.

[75] F.-S. Wei. Structural dynamic model improvement using vibration test data. AIAA Journal, 26(9):175–177, 1990.

[76] D. C. Zimmerman and M. Widengren. Correcting finite element models using a symmetric eigenstructure assignment technique. AIAA Journal, 28:1670–1676, 1990.