Spectral Radius, Numerical Radius and Unitarily Invariant Norm
Inequalities in Hilbert Space
By
Doaa Mahmoud Al-Saafin
Supervisor
Dr. Aliaa Abdel-Jawad Burqan
This Thesis was Submitted in Partial Fulfillment of the Requirements for the
Master’s Degree of Science in Mathematics
Faculty of Graduate Studies
Zarqa University
June, 2016
Dedication

In the name of Allah, the Most Gracious, the Most Merciful

To the one who lit my way and planted in my soul the love of knowledge and faith ..
My father
To the flower of life and its light .. the wellspring of tenderness ..
My mother
To my support after Allah and my companion on the road, step by step ..
My brother Yousef
To the pure heart .. my angel in life and my role model ..
My sister Fatima
And to the rest of my beloved family, I dedicate this humble effort.
ACKNOWLEDGEMENT
I would first like to thank Allah for helping me and inspiring me with the patience and
endurance to complete this thesis.
I would also like to express my deepest gratitude to my supervisor, Assistant
Professor Aliaa Burqan, for her continuous support, guidance and encouragement to
achieve more than I thought possible.
Finally, my deepest gratitude goes to my beloved family and friends. This thesis
would not have been completed without their encouragement and support, day and
night.
Doaa Al-Saafin
TABLE OF CONTENTS
Dedication………………………………………………………………………… iii
Acknowledgment…………………………………………………………………. iv
Abstract…………………………………………………………………………… vi
Introduction……………………………………………………………………… 1
Chapter One: Fundamentals of Matrix Analysis
Basic Results in Matrix Theory…………………………………………………… 4
Positive Semidefinite Matrices……………………………………………………. 12
Unitarily Invariant Norms………………………………………………………… 16
Spectral Radius and Numerical Radius……………………………………………. 21
Chapter Two: Numerical and Spectral Radius Inequalities of Matrices
Recent Numerical Radius Inequalities……………………………………………. 28
Cartesian Decomposition and Numerical Radius Inequalities …………………… 50
Spectral Radius Inequalities………………………………………………………. 56
Chapter Three: Two by Two Block Matrix Inequalities
Numerical Radius Inequalities for General Block Matrices……………….. 60
Inequalities for the off-Diagonal Part of Block Matrices………………... 70
On Unitarily Invariant Norm Inequalities and Hermitian Block Matrices……….. 80
References………………………………………………………………………… 96
Abstract in Arabic………………………………………………………………… 100
Spectral Radius, Numerical Radius and Unitarily Invariant Norm
Inequalities in Hilbert Space
By
Doaa Mahmoud Al-Saafin
Supervisor
Dr. Aliaa Abdel-Jawad Burqan
ABSTRACT
In this thesis, we present several inequalities for the spectral radius, the
numerical radius and unitarily invariant norms of square matrices.
Related inequalities for the spectral radius and the numerical radius of two by
two block matrices are also given.
INTRODUCTION
The study of matrix theory has become more and more popular in the last few
decades. Researchers are attracted to this subject because of its connections with many
other pure and applied areas. In particular, eigenvalues are crucial in solving systems
of differential equations, analyzing population growth models and calculating powers of
matrices. It is not always easy to calculate the eigenvalues. However, in many scientific
problems it is enough to know that the eigenvalues lie in some specified region. Such
information is provided in this thesis by comparing the spectral radius, the numerical
radius and unitarily invariant norms.
Several inequalities involving spectral radius, numerical radius and matrix norm can
be found in many books on inequalities, like Bhatia (1997). Some investigations on
norm and numerical radius inequalities involving the Cartesian decomposition were
obtained by El-Haddad and Kittaneh (2007). In 2011 and 2012, Hirzallah, Kittaneh and
Shebrawi gave several inequalities for the numerical radius of two by two block
matrices. An estimate for the numerical radius of a matrix was given by Kittaneh
(2003); Yamazaki (2007) improved this result by using the Aluthge transform. In 2013,
Abu-Omar and Kittaneh studied similar topics and gave inequalities that involve the
generalized Aluthge transform. Bhatia and Kittaneh (1990) and Zhan (2000) proved
important inequalities for the singular values of matrices. Tao (2006) employed these
inequalities to establish different equivalent inequalities for singular values of matrices.
On the other hand, Abu-Omar and Kittaneh (2015) applied spectral radius and norm
inequalities to two by two block matrices to give simple proofs and refinements of some
norm inequalities. Bourin and Lee (2012) proved a remarkable decomposition lemma
that plays a major role in several recent inequalities for positive semidefinite two by two
block matrices. Burqan (2013) proved a unitarily invariant norm inequality for positive
semidefinite two by two block matrices which gives a relation between the real part of
the off-diagonal blocks and the sum of the diagonal blocks.
This thesis is divided into three chapters.
Chapter One, which consists of four sections, highlights basic definitions and
properties of square matrices, viewed as operators on a Hilbert space, that are useful
throughout the thesis.
Section (1.1) and section (1.2) present the most important properties of unitary,
Hermitian, normal and positive matrices. Also, the spectral mapping theorem, Schur's
unitary triangularization theorem and other famous decomposition results for matrices
such as the singular value decomposition and polar decomposition are presented.
Section (1.3) deals with unitarily invariant norms and various classes of norms such
as the Hilbert-Schmidt norm, spectral norm and Ky Fan norms.
Section (1.4) introduces the concepts of spectral radius and the numerical radius.
Also, some of the well-known facts about them are presented.
Chapter Two, which consists of three sections, is concerned with numerical radius and
spectral radius inequalities.
Section (2.1) presents some improvements of basic numerical radius inequalities and
gives generalizations for these improvements.
Section (2.2) presents several interesting inequalities and identities for the numerical
radius which involve the Cartesian decomposition of matrices.
Section (2.3) deals with spectral radius inequalities. Several interesting
inequalities proved by Abu-Omar and Kittaneh (2013) are presented.
Chapter Three, which consists of three sections, is concerned with two by two block
matrices.
Section (3.1) presents recent inequalities for the numerical radius of general two by
two block matrices.
Section (3.2) deals with the off-diagonal part of two by two block matrices. We
present several numerical radius inequalities for this off-diagonal part.
Section (3.3) discusses several inequalities for singular values of matrices and
employs them to prove several equivalent theorems. After that, we deal with the special
case of positive semidefinite two by two block matrices. At the end of this section, we
establish new estimates for the spectral norm and numerical radius of the off-diagonal
part of positive semidefinite two by two block matrices.
Chapter One
Fundamentals of Matrix Analysis
This chapter contains a brief review of the basic concepts and results which are
important in this thesis. In section (1.1), we present some basic results in matrix theory.
In section (1.2), we consider the class of positive semidefinite matrices. This class,
which is included in the class of Hermitian matrices, arises naturally in many
applications. In section (1.3), we review the basic concepts and results concerning
unitarily invariant matrix norms. In section (1.4), we introduce the concepts of the spectral
radius and the numerical radius and give the basic results concerning them that will be
used later.
1.1. Basic Results in Matrix Theory:
We will denote the algebra of all n × n complex matrices by Mn. It should be
mentioned that most results which hold for matrices can be generalized to operators
acting on Hilbert spaces.
Definition 1.1.1:
Let A ∈ Mn. Then a complex number λ is called an eigenvalue of A if there exists a
nonzero vector x ∈ ℂⁿ such that Ax = λx. The vector x is called an eigenvector of A
corresponding to λ.
Remark:
If A ∈ Mn with eigenvalues λ1, …, λn, then tr A = ∑ λi and det A = ∏ λi,
where tr and det are the trace and determinant functions, respectively.
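As a quick numerical illustration of this remark, the sketch below (a hypothetical NumPy example, not taken from the thesis) confirms that the trace equals the sum of the eigenvalues and the determinant equals their product:

```python
import numpy as np

# Hypothetical 3x3 example: the trace equals the sum of the eigenvalues
# and the determinant equals their product.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
eigs = np.linalg.eigvals(A)

assert np.isclose(eigs.sum(), np.trace(A))          # tr A = sum of eigenvalues
assert np.isclose(np.prod(eigs), np.linalg.det(A))  # det A = product of eigenvalues
```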
Definition 1.1.2:
If A ∈ Mn, then det(λI − A) = 0 is called the characteristic equation of A. The
polynomial det(λI − A) is called the characteristic polynomial of A. The set of
all λ that are eigenvalues of A is called the spectrum of A and it is denoted by σ(A).
For A, B ∈ Mn, the product matrices AB and BA need not be equal. However, we
have the following theorem.
Theorem 1.1.1.(Horn and Johnson, 1985):
Let A, B ∈ Mn. Then
σ(AB) = σ(BA).
Theorem 1.1.2:
Let A, B ∈ Mn such that AB = BA. Then
σ(A + B) ⊆ σ(A) + σ(B) and σ(AB) ⊆ σ(A)σ(B).
Theorem 1.1.3.(The Spectral Mapping Theorem):
Let A ∈ Mn. Then for every polynomial p,
σ(p(A)) = p(σ(A)) = {p(λ) : λ ∈ σ(A)}.
Definition 1.1.3:
Let A = [aij] ∈ Mn. Then the adjoint of A, denoted by A*, is the matrix given by
A* = [āji].
Remark:
For A, B ∈ Mn and α ∈ ℂ, we have
1) (A*)* = A.
2) (A + B)* = A* + B*.
3) (αA)* = ᾱA*.
4) (AB)* = B*A*.
5) det A* is the complex conjugate of det A.
6) tr A* is the complex conjugate of tr A.
7) σ(A*) = {λ̄ : λ ∈ σ(A)}.
8) A is invertible if and only if A* is invertible, and (A*)⁻¹ = (A⁻¹)*.
9) If A = [aij], then tr(A*A) = ∑i,j |aij|².
Thus tr(A*A) ≥ 0, and tr(A*A) = 0 if and
only if A = 0.
Definition 1.1.4:
For x = (x1, …, xn)ᵗ, y = (y1, …, yn)ᵗ ∈ ℂⁿ, the Euclidean inner product of x
and y is defined as ⟨x, y⟩ = ∑ xi ȳi.
Remark:
For all x, y, z ∈ ℂⁿ and α ∈ ℂ, we have
1) ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩.
2) ⟨αx, y⟩ = α⟨x, y⟩ and ⟨x, αy⟩ = ᾱ⟨x, y⟩.
3) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
Theorem 1.1.4.(Cauchy-Schwarz Inequality):
Let x, y ∈ ℂⁿ. Then
|⟨x, y⟩| ≤ ⟨x, x⟩^(1/2) ⟨y, y⟩^(1/2),
with equality if and only if x and y are linearly dependent.
From this it follows that the equation
‖x‖ = ⟨x, x⟩^(1/2), for every x ∈ ℂⁿ,
defines a norm on ℂⁿ called the Euclidean norm.
Definition 1.1.5:
Let U ∈ Mn. Then U is said to be a unitary matrix if U*U = UU* = I.
In the following theorem, we list some of the basic conditions for a matrix to be
unitary.
Theorem 1.1.5:
If U ∈ Mn, the following are equivalent:
a) U is unitary.
b) U is invertible and U⁻¹ = U*.
c) UU* = I.
d) U* is unitary.
e) The columns (rows) of U form an orthonormal set.
f) ‖Ux‖ = ‖U*x‖ = ‖x‖ for all x ∈ ℂⁿ.
g) ⟨Ux, Uy⟩ = ⟨x, y⟩ for all x, y ∈ ℂⁿ.
Remark:
For any unitary matrix U ∈ Mn, we have
1) Every eigenvalue of U has modulus one.
2) |det U| = 1.
3) If V ∈ Mn is unitary, then UV is unitary.
Definition 1.1.6:
Two matrices A, B ∈ Mn are said to be unitarily equivalent if there is a unitary matrix
U such that B = U*AU. If A is unitarily equivalent to a diagonal matrix, A is
said to be unitarily diagonalizable.
Theorem 1.1.6.(Schur's Unitary Triangularization Theorem):
Let A ∈ Mn with eigenvalues {λ1, …, λn}. Then there is a unitary matrix U such
that U*AU = T, where T = [tij] is an upper triangular matrix with diagonal
entries tii = λi, i = 1, …, n.
Definition 1.1.7:
A matrix A ∈ Mn is called Hermitian (or self-adjoint) if A* = A. It is called skew-
Hermitian if A* = −A.
Remark:
1) The sum of two Hermitian matrices is Hermitian.
2) The product of two Hermitian matrices is Hermitian if and only if the matrices
commute.
3) If A ∈ Mn, then A + A* and A*A are Hermitian, but A − A* is skew-Hermitian.
4) If is Hermitian, then the main diagonal entries of are all real and if is skew-
Hermitian, then the main diagonal entries of are all pure imaginary.
5) If is Hermitian, then the eigenvalues of are all real and if is skew-Hermitian,
then the eigenvalues of are all pure imaginary.
Matrix normality is one of the most interesting topics in linear algebra and matrix
theory, since normal matrices have not only simple structures under unitary equivalence
but also many applications.
Definition 1.1.8:
Let A ∈ Mn. Then A is called normal if A*A = AA*.
Remark:
1) It is obvious that Hermitian, skew-Hermitian and unitary matrices are normal
matrices.
2) The sum and product of two commuting normal matrices are normal.
Next, we present the most fundamental facts about normal matrices.
Theorem 1.1.7:
If A = [aij] ∈ Mn, the following statements are equivalent:
a) A is normal.
b) A is unitarily diagonalizable.
c) ‖Ax‖ = ‖A*x‖ for all x ∈ ℂⁿ.
d) ∑i,j |aij|² = ∑i |λi|², where λ1, …, λn are the eigenvalues of A.
The equivalence of (a) and (b) in the previous theorem is called the spectral theorem
for normal matrices.
Theorem 1.1.8.(Cartesian Decomposition):
Let A ∈ Mn. Then there exist Hermitian matrices B and C such that A = B + iC.
Necessarily,
B = (A + A*)/2 and C = (A − A*)/2i.
The matrices B and C are called the
real part and the imaginary part of A, and denoted by Re A and Im A, respectively.
It is easy to verify that A*A + AA* = 2(B² + C²), and A is normal if and only if Re A
and Im A commute.
For all x, y ∈ ℂⁿ, the expression of the inner product of x and y as
⟨x, y⟩ = (1/4)(‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²)
is called the polarization identity.
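The Cartesian decomposition above can be checked numerically. The sketch below (a hypothetical NumPy example) builds B = (A + A*)/2 and C = (A − A*)/2i and confirms A = B + iC, along with the standard identity A*A + AA* = 2(B² + C²):

```python
import numpy as np

# Hypothetical random complex matrix (illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

B = (A + A.conj().T) / 2      # Re A (Hermitian)
C = (A - A.conj().T) / 2j     # Im A (Hermitian)

assert np.allclose(A, B + 1j * C)       # A = B + iC
assert np.allclose(B, B.conj().T)       # B is Hermitian
assert np.allclose(C, C.conj().T)       # C is Hermitian
# A*A + AA* = 2(B^2 + C^2)
assert np.allclose(A.conj().T @ A + A @ A.conj().T, 2 * (B @ B + C @ C))
```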
The following formula is a generalization of the polarization identity.
Theorem 1.1.9.(Generalized Polarization Identity):
Let A ∈ Mn and x, y ∈ ℂⁿ. Then
⟨Ax, y⟩ = (1/4)(⟨A(x + y), x + y⟩ − ⟨A(x − y), x − y⟩
+ i⟨A(x + iy), x + iy⟩ − i⟨A(x − iy), x − iy⟩).
1.2. Positive Semidefinite Matrices:
Definition 1.2.1:
A Hermitian matrix A ∈ Mn is said to be positive semidefinite, written as A ≥ 0, if
⟨Ax, x⟩ ≥ 0 for all x ∈ ℂⁿ.
A is further called positive definite, written as A > 0, if
⟨Ax, x⟩ > 0 for all nonzero x ∈ ℂⁿ.
Remark:
1) The sum of any two positive definite (semidefinite) matrices of the same size is
positive definite (semidefinite).
2) The product of any two positive definite (semidefinite) matrices is positive definite
(semidefinite) if and only if the two matrices commute.
3) Each eigenvalue of a positive definite (semidefinite) matrix is a positive
(nonnegative) real number.
4) The Hermitian matrix is positive definite (semidefinite) if and only if all
eigenvalues of are positive (nonnegative) real numbers.
5) The trace and determinant of positive definite (semidefinite) matrices are positive
(nonnegative) real numbers.
6) If A is positive semidefinite and B ∈ Mn, then B*AB is positive semidefinite,
and if A is a positive definite matrix, then B*AB is positive definite if and only if B is
invertible.
Theorem 1.2.1:
Let A be a positive semidefinite (definite) matrix and let k ≥ 1 be a given
integer. Then there exists a unique positive semidefinite (definite) matrix B such
that A = B^k, written as B = A^(1/k) or B = ᵏ√A.
Remark:
1) Let A and B be two Hermitian matrices of the same size. If A − B ≥ 0, we write
A ≥ B or B ≤ A.
2) If A ≥ B ≥ 0 are positive semidefinite matrices, then A^(1/2) ≥ B^(1/2).
3) If A, B are positive semidefinite matrices, then the eigenvalues of AB are all
nonnegative.
Theorem 1.2.2.(Weyl's Monotonicity Principle).(Zhang, 1999):
Let A, B ∈ Mn be two Hermitian matrices with B ≥ 0. Then λj(A + B) ≥ λj(A), j = 1, …, n.
Theorem 1.2.3:
Let A ∈ Mn. Then A is positive semidefinite if and only if A = B*B for some
B ∈ Mn. In the positive definite case, B is taken to be invertible.
The absolute value of a matrix A is defined as the square root of the positive
semidefinite matrix A*A and denoted by |A|. That is,
|A| = (A*A)^(1/2).
Theorem 1.2.4:
Let A be Hermitian, and let x ∈ ℂⁿ be any vector. Then
|⟨Ax, x⟩| ≤ ⟨|A|x, x⟩.
Theorem 1.2.5:
Let A be positive semidefinite, and let x ∈ ℂⁿ be any unit vector. Then
(a) ⟨Ax, x⟩^r ≤ ⟨A^r x, x⟩ for r ≥ 1.
(b) ⟨A^r x, x⟩ ≤ ⟨Ax, x⟩^r for 0 ≤ r ≤ 1.
The eigenvalues of |A| are called the singular values of A. We will always
enumerate them in decreasing order and use for them the notation
s1(A) ≥ s2(A) ≥ … ≥ sn(A).
The singular value decomposition is one of the most important factorizations of
complex matrices; it depends on the singular values.
Theorem 1.2.6.(Singular Value Decomposition):
If A ∈ Mn, then
A = USV*
for some unitary matrices U, V ∈ Mn, and the diagonal matrix
S = diag(s1(A), s2(A), …, sn(A)).
Theorem 1.2.7.(Polar Decomposition):
If A ∈ Mn, then there exists a unitary matrix U such that A = UP,
where P is positive semidefinite and is uniquely determined as P = |A|. If A is
invertible, then U is uniquely determined as U = A|A|⁻¹.
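One way to compute the polar decomposition in practice is via the singular value decomposition: if A = WSV*, then A = (WV*)(VSV*) with WV* unitary and VSV* = |A|. A minimal NumPy sketch (hypothetical example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# SVD A = W S V*; then U = W V* is unitary and P = V S V* = |A|.
W, S, Vh = np.linalg.svd(A)
U = W @ Vh
P = Vh.conj().T @ np.diag(S) @ Vh

assert np.allclose(U @ P, A)                   # A = UP
assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
assert np.allclose(P @ P, A.conj().T @ A)      # P^2 = A*A, so P = |A|
```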
1.3. Unitarily Invariant Norms:
If one has several matrices in Mn, what might it mean to say that some are "small" or
that others are "large"? One way to answer this question is to study norms of matrices.
A matrix norm is a number defined in terms of the entries of the matrix. The norm is
a useful quantity which can give important information about a matrix.
Definition 1.3.1:
A function ‖·‖ : Mn → ℝ is called a matrix norm if for all A, B ∈ Mn and all α ∈ ℂ it
satisfies the following axioms:
1) ‖A‖ ≥ 0, and ‖A‖ = 0 if and only if A = 0.
2) ‖αA‖ = |α| ‖A‖.
3) ‖A + B‖ ≤ ‖A‖ + ‖B‖.
4) ‖AB‖ ≤ ‖A‖ ‖B‖.
Notice that properties (1)-(3) are identical to the axioms for a vector norm. A
vector norm on matrices, that is, a function satisfying (1)-(3) but not necessarily (4), is
often called a generalized matrix norm.
Example 1.3.1:
If A = [aij] ∈ Mn, then
1. The Frobenius (Hilbert-Schmidt) norm of A is given by
‖A‖₂ = (∑i,j |aij|²)^(1/2) = [tr(A*A)]^(1/2).
2. The spectral (operator) norm of A is given by
‖A‖ = max{‖Ax‖ : x ∈ ℂⁿ, ‖x‖ = 1} = s1(A),
where s1(A) is the largest singular value of A.
Theorem 1.3.1:
Let A ∈ Mn. Then
‖A‖ ≤ ‖A‖₂ ≤ √n ‖A‖.
Definition 1.3.2:
A matrix norm is called a unitarily invariant norm if ‖UAV‖ = ‖A‖
whenever U and V are unitary matrices, and it is denoted by |||A|||.
The following are the most familiar unitarily invariant norms.
1. The Schatten p-norms, 1 ≤ p < ∞, are defined as
‖A‖p = [∑j sj^p(A)]^(1/p) = [tr |A|^p]^(1/p).
Notice that the Frobenius and the spectral norms are important special cases of the
Schatten p-norms, corresponding to the values p = 2 and p = ∞, respectively.
2. The Ky Fan k-norms, k = 1, 2, …, n, are defined as
‖A‖(k) = ∑_{j=1}^{k} sj(A).
It is clear that the norm ‖A‖(1) is the same as the spectral norm ‖A‖, and the norm ‖A‖(n) is the same
as the Schatten 1-norm ‖A‖₁.
Remark:
For any unitarily invariant norm |||·||| and for any A ∈ Mn, we have
|||A||| = |||A*||| = ||| |A| |||.
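The norms above are straightforward to compute from the singular values. The following sketch (a hypothetical NumPy example) computes Schatten p-norms and Ky Fan k-norms and checks the special cases mentioned above:

```python
import numpy as np

def schatten(A, p):
    # Schatten p-norm: the l^p norm of the vector of singular values.
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

def ky_fan(A, k):
    # Ky Fan k-norm: the sum of the k largest singular values.
    s = np.linalg.svd(A, compute_uv=False)   # returned in decreasing order
    return np.sum(s[:k])

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
s = np.linalg.svd(A, compute_uv=False)

assert np.isclose(schatten(A, 2), np.linalg.norm(A, 'fro'))  # p = 2: Frobenius norm
assert np.isclose(ky_fan(A, 1), s[0])                        # k = 1: spectral norm
assert np.isclose(ky_fan(A, 2), schatten(A, 1))              # k = n: Schatten 1-norm
```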
Theorem 1.3.2.(Fan Dominance Theorem):
Let A, B ∈ Mn. Then
|||A||| ≤ |||B|||
for all unitarily invariant norms on Mn if and only if
‖A‖(k) ≤ ‖B‖(k), k = 1, 2, …, n.
By using the Ky Fan k-norms formula we can rewrite Theorem (1.3.2) as:
For A, B ∈ Mn,
|||A||| ≤ |||B|||
for all unitarily invariant norms if and only if
∑_{j=1}^{k} sj(A) ≤ ∑_{j=1}^{k} sj(B), k = 1, 2, …, n.
This is known as the Fan Dominance property.
Remark:
If A, B are positive semidefinite such that A ≤ B, then
|||A||| ≤ |||B|||
for every unitarily invariant norm, and sj(A) ≤ sj(B), j = 1, 2, …, n.
Theorem 1.3.3:
Let A, B ∈ Mn be Hermitian matrices. Then
|||A + B||| ≤ ||| |A| + |B| |||
for every unitarily invariant norm.
Theorem 1.3.4:
Let A, B ∈ Mn be positive semidefinite matrices. Then
‖A − B‖ ≤ max{‖A‖, ‖B‖}.
Theorem 1.3.5.(Bhatia, 1997):
Let A, B ∈ Mn be positive semidefinite matrices. Then
||| A ⊕ B ||| ≤ ||| (A + B) ⊕ 0 |||
for every unitarily invariant norm.
We end this section with matrix versions of the arithmetic-geometric mean inequality.
Theorem 1.3.6.(Bhatia and Kittaneh, 1990):
Let A, B ∈ Mn be positive semidefinite matrices. Then
||| A^(1/2) B^(1/2) ||| ≤ (1/2) ||| A + B |||
for every unitarily invariant norm.
The following is a generalization of Theorem (1.3.6).
Theorem 1.3.7.(Bhatia and Davis, 1993):
Let A, B, X ∈ Mn be such that A and B are positive semidefinite matrices. Then
||| A^(1/2) X B^(1/2) ||| ≤ (1/2) ||| AX + XB |||
for every unitarily invariant norm.
1.4. Spectral Radius and Numerical Radius:
Definition 1.4.1:
The spectral radius of a matrix A ∈ Mn is defined as
r(A) = max{|λ| : λ ∈ σ(A)}.
It is well known that
r(A) ≤ ‖A‖
for every matrix norm ‖·‖. Moreover, if A is normal, then
r(A) = ‖A‖.
Let A, B ∈ Mn and let k be a positive integer. It follows readily from the spectral
mapping theorem and Theorem (1.1.1) that
r(A^k) = r^k(A)
and
r(AB) = r(BA).
Theorem 1.4.1.(Spectral Radius Formula):
Let A ∈ Mn. Then
r(A) = lim_{k→∞} ‖A^k‖^(1/k).
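The spectral radius formula can be illustrated numerically; the sketch below (a hypothetical NumPy example) compares r(A) computed from the eigenvalues with the approximation ‖A^k‖^(1/k) for a moderately large k:

```python
import numpy as np

# Hypothetical 2x2 example with spectral radius exactly 1.
A = np.array([[0.0, 2.0],
              [0.5, 0.0]])
r = max(abs(np.linalg.eigvals(A)))

# r(A) = lim_{k -> infinity} ||A^k||^(1/k) for any matrix norm (Frobenius here).
k = 50
approx = np.linalg.norm(np.linalg.matrix_power(A, k), 'fro') ** (1.0 / k)

assert np.isclose(r, 1.0)
assert abs(approx - r) < 1e-2   # the limit is approached as k grows
```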
Theorem 1.4.2:
Let A, B ∈ Mn such that AB = BA. Then
r(A + B) ≤ r(A) + r(B)
and
r(AB) ≤ r(A) r(B).
The spectral radius is not a norm; this can be easily seen by considering the nonzero
matrix [[0, 1], [0, 0]] and noting that its spectral radius is zero.
Definition 1.4.2:
The numerical range of a matrix A ∈ Mn is the subset of the complex numbers ℂ,
given by
W(A) = {⟨Ax, x⟩ : x ∈ ℂⁿ, ‖x‖ = 1}.
Note that ⟨Ax, x⟩ = x*Ax for every x ∈ ℂⁿ.
Let A, B ∈ Mn and let α, β ∈ ℂ. Then the following are immediate:
W(αA + βI) = αW(A) + β
and
W(A + B) ⊆ W(A) + W(B).
A very important property of the numerical range of a matrix is that it includes the
spectrum of the matrix as in the following theorem.
Theorem 1.4.3.(Horn and Johnson, 1991):
Let A ∈ Mn. Then
σ(A) ⊆ W(A).
Theorem 1.4.4.(Gustafson and Rao, 1997):
If A, B ∈ Mn are such that A is positive definite and B is Hermitian, then
σ(AB) ⊆ W(A)W(B).
The following example can be found in Halmos (1982).
Example 1.4.1:
1. W([[1, 0], [0, 0]]) = [0, 1].
2. W([[0, 1], [0, 0]]) = {λ ∈ ℂ : |λ| ≤ 1/2}.
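Halmos's second example can be probed numerically by sampling points ⟨Ax, x⟩ of the numerical range over random unit vectors; a minimal sketch (hypothetical NumPy example):

```python
import numpy as np

# Sample the numerical range of A = [[0, 1], [0, 0]]; by Halmos's example,
# W(A) is the closed disc of radius 1/2 centred at the origin.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
rng = np.random.default_rng(2)

vals = []
for _ in range(2000):
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    x /= np.linalg.norm(x)            # random unit vector
    vals.append(np.vdot(x, A @ x))    # x*(Ax), a point of W(A)
vals = np.abs(np.array(vals))

assert np.all(vals <= 0.5 + 1e-12)    # every sample lies in the disc
assert vals.max() > 0.4               # and the samples nearly reach the boundary
```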
Definition 1.4.3:
The numerical radius of a matrix A ∈ Mn is given by
w(A) = max{|λ| : λ ∈ W(A)} = max{|⟨Ax, x⟩| : x ∈ ℂⁿ, ‖x‖ = 1}.
It is easy to verify that w(·) defines a vector norm on Mn. This norm is weakly
unitarily invariant (i.e., w(U*AU) = w(A) for any matrix A ∈ Mn and any unitary
matrix U) and satisfies w(A*) = w(A) for any matrix A ∈ Mn.
Theorem 1.4.5:
The numerical radius norm w(·) and the matrix norm ‖·‖ on Mn are equivalent. In
fact,
(1/2)‖A‖ ≤ w(A) ≤ ‖A‖.
Theorem 1.4.6:
Let A ∈ Mn. Then
r(A) ≤ w(A).
Moreover, if A is normal, then
r(A) = w(A) = ‖A‖.
A matrix A ∈ Mn is called nilpotent if A^k = 0 for some positive integer k. The
smallest such k is sometimes called the power of nilpotency of A.
Theorem 1.4.7:
Let A ∈ Mn be a nilpotent matrix. Then
w(A) ≤ cos(π/(k + 1)) ‖A‖,
where k is the power of nilpotency of A.
It follows from Theorem (1.4.6) and Theorem (1.4.7) that both inequalities in
Theorem (1.4.5) are sharp. The first inequality becomes an equality if A² = 0. The
second inequality becomes an equality if A is normal.
Remark:
The numerical radius is not submultiplicative. But for any A, B ∈ Mn we have
w(AB) ≤ 4 w(A) w(B).
In particular, if A and B commute, then
w(AB) ≤ 2 w(A) w(B),
and if A and B are normal, then
w(AB) ≤ w(A) w(B).
Theorem 1.4.8:
Let A ∈ Mn. Then
w(A^k) ≤ w^k(A)
for every positive integer k.
The following useful theorem provides an alternative way to compute the numerical
radius of a matrix, and will be used frequently throughout this thesis.
Theorem 1.4.9:
Let A ∈ Mn. Then
w(A) = max_{θ∈ℝ} ‖Re(e^{iθ}A)‖ = max_{θ∈ℝ} ‖(e^{iθ}A + e^{−iθ}A*)/2‖.
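Theorem (1.4.9) suggests a practical way to estimate the numerical radius: maximize ‖Re(e^{iθ}A)‖ over a grid of angles θ. The sketch below (a hypothetical NumPy example) compares this with direct sampling of |⟨Ax, x⟩|:

```python
import numpy as np

def w_rotation(A, grid=2000):
    # w(A) = max over theta of ||Re(e^{i theta} A)||, evaluated on a grid.
    thetas = np.linspace(0.0, 2.0 * np.pi, grid)
    return max(np.linalg.norm((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2, 2)
               for t in thetas)

def w_sampled(A, samples=2000, seed=3):
    # Lower estimate of w(A): sample |<Ax, x>| over random unit vectors.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        x = rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        best = max(best, abs(np.vdot(x, A @ x)))
    return best

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.isclose(w_rotation(A), 0.5, atol=1e-3)  # w of the 2x2 shift is 1/2
assert w_sampled(A) <= 0.5 + 1e-9                 # sampling never exceeds w(A)
```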
Theorem 1.4.10:
Let A = [aij] ∈ Mn. The following statements hold:
a) w([aij]) ≤ w([|aij|]).
b) If aij ≥ 0 for all i, j, then
w([aij]) = (1/2) r([aij] + [aij]ᵗ).
Definition 1.4.4:
Let A = U|A| be the polar decomposition of A ∈ Mn. The Aluthge transform of A
is defined by
Ã = |A|^(1/2) U |A|^(1/2).
The Aluthge transform was first defined by Aluthge (1990). The following are among the
well-known relations:
1) σ(Ã) = σ(A).
2) ‖Ã‖ ≤ ‖A‖.
3) r(Ã) = r(A).
4) w(Ã) ≤ w(A).
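The Aluthge transform is easy to compute from the singular value decomposition; the following sketch (a hypothetical NumPy example) builds Ã and checks two standard relations, equality of spectral radii (for an invertible A) and w(Ã) ≤ w(A):

```python
import numpy as np

def aluthge(A):
    # Aluthge transform |A|^(1/2) U |A|^(1/2), built from the SVD A = W S V*:
    # U = W V* and |A|^(1/2) = V sqrt(S) V*.
    W, S, Vh = np.linalg.svd(A)
    U = W @ Vh
    half = Vh.conj().T @ np.diag(np.sqrt(S)) @ Vh
    return half @ U @ half

def w(A, grid=2000):
    # Numerical radius via w(A) = max over theta of ||Re(e^{i theta} A)||.
    return max(np.linalg.norm((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2, 2)
               for t in np.linspace(0.0, 2.0 * np.pi, grid))

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
At = aluthge(A)

# Equal spectral radii (a random A is invertible with probability one),
# and the numerical radius does not increase.
assert np.isclose(max(abs(np.linalg.eigvals(A))), max(abs(np.linalg.eigvals(At))))
assert w(At) <= w(A) + 0.05   # small slack for the theta grid
```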
Definition 1.4.5:
For any two matrices A and B in Mn, we let A ⊕ B denote the direct sum of A and B,
that is, the 2n × 2n matrix
[[A, 0], [0, B]].
Remark:
Let A, B ∈ Mn. Then
1) ‖A‖ = ‖[[0, A], [A*, 0]]‖.
2) ‖A ⊕ B‖ = ‖B ⊕ A‖.
3) ‖A ⊕ B‖ = max{‖A‖, ‖B‖}.
4) r(A ⊕ B) = max{r(A), r(B)}.
5) r([[0, A], [B, 0]]) = √(r(AB)).
6) w(A ⊕ B) = max{w(A), w(B)}.
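Several standard direct-sum identities can be confirmed numerically; a minimal sketch (hypothetical NumPy example):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
Z = np.zeros((2, 2))

direct_sum = np.block([[A, Z], [Z, B]])   # the direct sum of A and B
off_diag = np.block([[Z, A], [B, Z]])

r = lambda M: max(abs(np.linalg.eigvals(M)))

# Norm and spectral radius of a direct sum are the maxima of the blocks'.
assert np.isclose(np.linalg.norm(direct_sum, 2),
                  max(np.linalg.norm(A, 2), np.linalg.norm(B, 2)))
assert np.isclose(r(direct_sum), max(r(A), r(B)))
# The square of [[0, A], [B, 0]] is the direct sum of AB and BA, so its
# spectral radius is the square root of r(AB).
assert np.isclose(r(off_diag), np.sqrt(r(A @ B)))
```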
The material in this chapter can be found in almost every book on matrix analysis.
Here we mention the books of Horn and Johnson (1985, 1991), Bhatia (1997) and Zhang (1999).
Chapter Two
Numerical Radius and Spectral Radius Inequalities of Matrices
This chapter is devoted to recent numerical and spectral radius inequalities. In
section (2.1), we present some improvements of basic numerical radius inequalities,
then we give generalizations of these improvements. In section (2.2), we present some
generalizations and results concerning numerical radius inequalities that involve the
Cartesian decomposition. In section (2.3), we present several spectral radius inequalities
for sums, products and powers of matrices in Mn.
2.1. Recent Numerical Radius Inequalities:
It has been mentioned earlier that if A ∈ Mn, then
(1/2)‖A‖ ≤ w(A) ≤ ‖A‖.    (2.1.1)
The inequalities (2.1.1) have been improved by many mathematicians. In this section
we will present recent improvements of these inequalities.
Theorem 2.1.1.(Kittaneh, 2003):
Let A ∈ Mn. Then
w(A) ≤ (1/2)‖ |A| + |A*| ‖    (2.1.2)
≤ (1/2)(‖A‖ + ‖A²‖^(1/2)).    (2.1.3)
To prove Theorem (2.1.1), Kittaneh used the following useful lemmas. The first
lemma, which contains a mixed Schwarz inequality, can be found in Halmos (1982).
Lemma 2.1.1:
If A ∈ Mn, then
|⟨Ax, y⟩|² ≤ ⟨|A|x, x⟩ ⟨|A*|y, y⟩
for all x, y ∈ ℂⁿ.
The second lemma contains a special case of a more general norm inequality; see
Furuta (1989).
Lemma 2.1.2:
If A, B ∈ Mn are positive semidefinite matrices, then
‖A^(1/2) B^(1/2)‖ ≤ ‖AB‖^(1/2).
The third lemma contains a norm inequality for sums of positive semidefinite
matrices that is sharper than the triangle inequality. See Kittaneh (2002).
Lemma 2.1.3:
If A, B ∈ Mn are positive semidefinite matrices, then
‖A + B‖ ≤ (1/2)(‖A‖ + ‖B‖ + √((‖A‖ − ‖B‖)² + 4‖A^(1/2) B^(1/2)‖²)).
Proof of Theorem (2.1.1):
By Lemma (2.1.1) and by the arithmetic-geometric mean inequality, we have for
every x ∈ ℂⁿ,
|⟨Ax, x⟩| ≤ ⟨|A|x, x⟩^(1/2) ⟨|A*|x, x⟩^(1/2)
≤ (1/2)(⟨|A|x, x⟩ + ⟨|A*|x, x⟩),
and so
|⟨Ax, x⟩| ≤ (1/2)⟨(|A| + |A*|)x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with
‖x‖ = 1, and observing that |A| + |A*| is a positive semidefinite matrix, we get
w(A) ≤ (1/2)‖ |A| + |A*| ‖.
Applying Lemma (2.1.2) and Lemma (2.1.3) to the positive semidefinite matrices |A|
and |A*|, and using the facts that ‖ |A| ‖ = ‖ |A*| ‖ = ‖A‖ and ‖ |A||A*| ‖ = ‖A²‖, we have
‖ |A| + |A*| ‖ ≤ ‖A‖ + ‖A²‖^(1/2),
and so
w(A) ≤ (1/2)‖ |A| + |A*| ‖ ≤ (1/2)(‖A‖ + ‖A²‖^(1/2)),
as required.
Since ‖A²‖^(1/2) ≤ ‖A‖ for every A ∈ Mn, the inequality (2.1.2) is a refinement of the
second inequality in (2.1.1).
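Kittaneh's bounds w(A) ≤ (1/2)‖ |A| + |A*| ‖ ≤ (1/2)(‖A‖ + ‖A²‖^(1/2)) can be checked numerically; a sketch (hypothetical NumPy example, with |A| computed from the SVD and w(A) estimated via Theorem (1.4.9)):

```python
import numpy as np

def absm(A):
    # |A| = (A*A)^(1/2), computed from the SVD.
    _, S, Vh = np.linalg.svd(A)
    return Vh.conj().T @ np.diag(S) @ Vh

def w(A, grid=2000):
    # Numerical radius via Theorem (1.4.9); a grid maximum slightly
    # underestimates w(A), which is safe when checking upper bounds.
    return max(np.linalg.norm((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2, 2)
               for t in np.linspace(0.0, 2.0 * np.pi, grid))

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = w(A)
mid = 0.5 * np.linalg.norm(absm(A) + absm(A.conj().T), 2)
rhs = 0.5 * (np.linalg.norm(A, 2) + np.sqrt(np.linalg.norm(A @ A, 2)))

assert lhs <= mid + 1e-9   # w(A) <= (1/2)|| |A| + |A*| ||
assert mid <= rhs + 1e-9   # <= (1/2)(||A|| + ||A^2||^(1/2))
```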
Theorem 2.1.2.(Kittaneh, 2005):
Let A ∈ Mn. Then
(1/4)‖A*A + AA*‖ ≤ w²(A) ≤ (1/2)‖A*A + AA*‖.    (2.1.4)
Proof:
Let A = B + iC be the Cartesian decomposition of A, and let x be any unit vector in
ℂⁿ. Then we have
|⟨Ax, x⟩|² = ⟨Bx, x⟩² + ⟨Cx, x⟩²
≥ (1/2)(|⟨Bx, x⟩| + |⟨Cx, x⟩|)²
≥ (1/2)|⟨(B ± C)x, x⟩|².
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with ‖x‖ = 1,
we get
w²(A) ≥ (1/2)‖B + C‖² and w²(A) ≥ (1/2)‖B − C‖².
Thus,
w²(A) ≥ (1/2)max{‖B + C‖², ‖B − C‖²}
≥ (1/4)(‖(B + C)²‖ + ‖(B − C)²‖)
≥ (1/4)‖(B + C)² + (B − C)²‖
= (1/2)‖B² + C²‖
= (1/4)‖A*A + AA*‖,
and so
w²(A) ≥ (1/4)‖A*A + AA*‖,
which proves the first inequality in (2.1.4).
To prove the second inequality in (2.1.4), let x be any unit vector. Then by the
Cauchy-Schwarz inequality, we have
|⟨Ax, x⟩|² = ⟨Bx, x⟩² + ⟨Cx, x⟩²
≤ ‖Bx‖² + ‖Cx‖²
= ⟨B²x, x⟩ + ⟨C²x, x⟩
= ⟨(B² + C²)x, x⟩.
Thus,
|⟨Ax, x⟩|² ≤ (1/2)⟨(A*A + AA*)x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with
‖x‖ = 1, we get
w²(A) ≤ (1/2)‖A*A + AA*‖,
which proves the second inequality in (2.1.4) and completes the proof.
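The two-sided bound of Theorem (2.1.2), (1/4)‖A*A + AA*‖ ≤ w²(A) ≤ (1/2)‖A*A + AA*‖, can likewise be checked numerically (hypothetical NumPy sketch):

```python
import numpy as np

def w(A, grid=2000):
    # Numerical radius via Theorem (1.4.9).
    return max(np.linalg.norm((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2, 2)
               for t in np.linspace(0.0, 2.0 * np.pi, grid))

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

s = np.linalg.norm(A.conj().T @ A + A @ A.conj().T, 2)
wa = w(A)

assert 0.25 * s <= wa ** 2 + 1e-3   # (1/4)||A*A + AA*|| <= w^2(A)
assert wa ** 2 <= 0.5 * s + 1e-9    # w^2(A) <= (1/2)||A*A + AA*||
```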
To see that the inequalities (2.1.4) improve the inequalities (2.1.1), consider the
chain of inequalities
(1/4)‖A‖² ≤ (1/4)‖A*A + AA*‖ ≤ w²(A) ≤ (1/2)‖A*A + AA*‖ ≤ ‖A‖².    (2.1.6)
In order to prove the inequality (2.1.6), we need the following lemma, which can be
found in Kittaneh (2004).
Lemma 2.1.4:
For any A ∈ Mn, we have
‖A‖² = ‖A*A‖ = ‖AA*‖ ≤ ‖A*A + AA*‖.
Now, the first inequality in (2.1.6) is an immediate consequence of Lemma (2.1.4),
while the last follows by the triangle inequality and the fact that
‖A*A‖ = ‖AA*‖ = ‖A‖².
By using the Aluthge transform and the generalized polarization identity, Yamazaki
(2007) improved the second inequality in (2.1.1) as follows:
Theorem 2.1.3.(Yamazaki, 2007):
If A ∈ Mn, then
w(A) ≤ (1/2)(‖A‖ + w(Ã)).    (2.1.7)
Proof:
Let A = U|A| be the polar decomposition of A. Then by the
generalized polarization identity, we have for any unit vector x,
⟨Ax, x⟩ = ⟨|A|x, U*x⟩
= (1/4)(⟨|A|(x + U*x), x + U*x⟩ − ⟨|A|(x − U*x), x − U*x⟩
+ i⟨|A|(x + iU*x), x + iU*x⟩ − i⟨|A|(x − iU*x), x − iU*x⟩).
Now, since |A| is positive semidefinite, all inner products on the right-hand side are
nonnegative.
Thus,
Re⟨Ax, x⟩ ≤ (1/4)⟨|A|(x + U*x), x + U*x⟩
= (1/4)⟨(I + U)|A|(I + U*)x, x⟩
≤ (1/4)‖(I + U)|A|(I + U*)‖
= (1/4)‖((I + U)|A|^(1/2))(|A|^(1/2)(I + U*))‖
= (1/4)‖(|A|^(1/2)(I + U*))((I + U)|A|^(1/2))‖
= (1/4)‖|A|^(1/2)(2I + U + U*)|A|^(1/2)‖
= (1/4)‖2|A| + |A|^(1/2)U|A|^(1/2) + |A|^(1/2)U*|A|^(1/2)‖
= (1/2)‖|A| + Re(Ã)‖
≤ (1/2)(‖A‖ + ‖Re(Ã)‖)
≤ (1/2)(‖A‖ + w(Ã)).
Thus,
max{Re⟨Ax, x⟩ : ‖x‖ = 1} ≤ (1/2)(‖A‖ + w(Ã)).
Now, replacing A by e^{iθ}A, which has the same norm and whose Aluthge transform is
e^{iθ}Ã with w(e^{iθ}Ã) = w(Ã), and using
w(A) = max_{θ∈ℝ} max{Re⟨e^{iθ}Ax, x⟩ : ‖x‖ = 1},
we get
w(A) ≤ (1/2)(‖A‖ + w(Ã)),
as required.
Remark:
If A ∈ Mn, then
(1/2)(‖A‖ + w(Ã)) ≤ (1/2)(‖A‖ + ‖Ã‖)
= (1/2)(‖A‖ + ‖ |A|^(1/2)U|A|^(1/2) ‖),
and
‖ |A|^(1/2)U|A|^(1/2) ‖² = ‖(|A|^(1/2)U|A|^(1/2))(|A|^(1/2)U|A|^(1/2))*‖
= ‖ |A|^(1/2)(U|A|U*)|A|^(1/2) ‖
= ‖ |A|^(1/2)|A*||A|^(1/2) ‖
= r(|A|^(1/2)|A*||A|^(1/2))
= r(|A||A*|).
Now, since
r(|A||A*|) ≤ ‖ |A||A*| ‖ = ‖A²‖,
we have
(1/2)(‖A‖ + w(Ã)) ≤ (1/2)(‖A‖ + ‖A²‖^(1/2)).
Thus, the inequality (2.1.7) is sharper than the inequality (2.1.3).
By Theorem (2.1.3) and Theorem (1.4.5), we have the following corollary.
Corollary 2.1.1:
Let A ∈ Mn. If w(A) = ‖A‖, then w(Ã) = ‖A‖.
Another improvement of the inequalities (2.1.1) is due to Abu-Omar and Kittaneh
(2015),(c), as follows:
Theorem 2.1.4.(Abu-Omar and Kittaneh, 2015):
Let A ∈ Mn. Then
(1/2)√(‖A*A + AA*‖) ≤ w(A) ≤ (1/2)√(‖A*A + AA*‖ + 2w(A²)).    (2.1.9)
Proof:
Let x be a unit vector and let θ be a real number such that
e^{2iθ}⟨A²x, x⟩ = |⟨A²x, x⟩|.
We have
w(A) ≥ ‖Re(e^{iθ}A)‖
≥ ‖Re(e^{iθ}A)x‖
= ‖((e^{iθ}A + e^{−iθ}A*)/2)x‖
= (1/2)√(⟨(A*A + AA*)x, x⟩ + 2Re(e^{2iθ}⟨A²x, x⟩))
= (1/2)√(⟨(A*A + AA*)x, x⟩ + 2|⟨A²x, x⟩|)
≥ (1/2)√(⟨(A*A + AA*)x, x⟩).
Thus,
w(A) ≥ (1/2)√(⟨(A*A + AA*)x, x⟩),
and taking the maximum over all unit vectors x gives
w(A) ≥ (1/2)√(‖A*A + AA*‖),
which proves the first inequality in (2.1.9).
To prove the second inequality in (2.1.9), we have for every θ ∈ ℝ,
‖Re(e^{iθ}A)‖² = ‖(Re(e^{iθ}A))²‖
= (1/4)‖A*A + AA* + e^{2iθ}A² + e^{−2iθ}(A*)²‖
≤ (1/4)(‖A*A + AA*‖ + 2‖Re(e^{2iθ}A²)‖)
≤ (1/4)(‖A*A + AA*‖ + 2w(A²)).
Taking the maximum over θ ∈ ℝ and using Theorem (1.4.9), we get
w(A) ≤ (1/2)√(‖A*A + AA*‖ + 2w(A²)),
which proves the second inequality in (2.1.9) and completes the proof.
The following theorems are generalizations of the inequality (2.1.2) and the
second inequality in (2.1.4).
Theorem 2.1.5.(El-Haddad and Kittaneh, 2007):
Let A ∈ Mn, and let 0 ≤ α ≤ 1 and r ≥ 1. Then
w^r(A) ≤ (1/2)‖ |A|^(2αr) + |A*|^(2(1−α)r) ‖.
Theorem 2.1.6.(El-Haddad and Kittaneh, 2007):
Let A ∈ Mn, and let 0 ≤ α ≤ 1 and r ≥ 1. Then
w^(2r)(A) ≤ ‖ α|A|^(2r) + (1 − α)|A*|^(2r) ‖.
In proving their generalizations, El-Haddad and Kittaneh (2007) used the following
lemmas. The first lemma is an application of Jensen's inequality, and can be found in
Hardy, Littlewood and Pólya (1988).
Lemma (2.1.5):
For a, b ≥ 0, 0 ≤ α ≤ 1 and r ≥ 1,
a) ((a + b)/2)^r ≤ (a^r + b^r)/2.
b) a^α b^(1−α) ≤ αa + (1 − α)b.
The second lemma is known as the generalized mixed Schwarz inequality, and can
be found in Kittaneh (1988).
Lemma (2.1.6):
Let A ∈ Mn and 0 ≤ α ≤ 1. Then
|⟨Ax, y⟩|² ≤ ⟨|A|^(2α)x, x⟩ ⟨|A*|^(2(1−α))y, y⟩
for all x, y ∈ ℂⁿ.
Proof of Theorem (2.1.5):
For every unit vector x ∈ ℂⁿ, we have
|⟨Ax, x⟩|^r ≤ ⟨|A|^(2α)x, x⟩^(r/2) ⟨|A*|^(2(1−α))x, x⟩^(r/2)    (by Lemma (2.1.6))
≤ (1/2)(⟨|A|^(2α)x, x⟩^r + ⟨|A*|^(2(1−α))x, x⟩^r)    (by the arithmetic-geometric mean inequality)
≤ (1/2)(⟨|A|^(2αr)x, x⟩ + ⟨|A*|^(2(1−α)r)x, x⟩)    (by Theorem (1.2.5))
= (1/2)⟨(|A|^(2αr) + |A*|^(2(1−α)r))x, x⟩.
Thus,
|⟨Ax, x⟩|^r ≤ (1/2)⟨(|A|^(2αr) + |A*|^(2(1−α)r))x, x⟩;
taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with ‖x‖ = 1
produces
w^r(A) ≤ (1/2)‖ |A|^(2αr) + |A*|^(2(1−α)r) ‖,
as required.
Proof of Theorem (2.1.6):
For every unit vector x ∈ ℂⁿ, we have
|⟨Ax, x⟩|² ≤ ⟨|A|^(2α)x, x⟩ ⟨|A*|^(2(1−α))x, x⟩    (by Lemma (2.1.6))
≤ ⟨|A|²x, x⟩^α ⟨|A*|²x, x⟩^(1−α)    (by Theorem (1.2.5)),
and so
|⟨Ax, x⟩|^(2r) ≤ ⟨|A|²x, x⟩^(αr) ⟨|A*|²x, x⟩^((1−α)r)
≤ α⟨|A|²x, x⟩^r + (1 − α)⟨|A*|²x, x⟩^r    (by Lemma (2.1.5))
≤ α⟨|A|^(2r)x, x⟩ + (1 − α)⟨|A*|^(2r)x, x⟩    (by Theorem (1.2.5)).
Thus,
|⟨Ax, x⟩|^(2r) ≤ ⟨(α|A|^(2r) + (1 − α)|A*|^(2r))x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with ‖x‖ = 1, we
get
w^(2r)(A) ≤ ‖ α|A|^(2r) + (1 − α)|A*|^(2r) ‖,
as required.
Dragomir (2008, 2009) established other inequalities related to the spectral norm and
the numerical radius as follows:
Theorem 2.1.7.(Dragomir, 2008):
Let A ∈ Mn. Then
w²(A) ≤ (1/2)(‖A‖² + w(A²)).    (2.1.10)
Dragomir used the following useful lemma, which can
be considered as a refinement of the Cauchy-Schwarz inequality, in proving Theorem (2.1.7).
Lemma 2.1.8:
For x, y, e ∈ ℂⁿ such that ‖e‖ = 1,
(1/2)‖x‖‖y‖ ≥ |⟨x, e⟩⟨e, y⟩ − (1/2)⟨x, y⟩| ≥ |⟨x, e⟩⟨e, y⟩| − (1/2)|⟨x, y⟩|.    (2.1.11)
Proof:
By the first inequality in (2.1.11), we deduce
(1/2)(‖x‖‖y‖ + |⟨x, y⟩|) ≥ |⟨x, e⟩⟨e, y⟩|.    (2.1.12)
Let e ∈ ℂⁿ be any unit vector and put x = Ae, y = A*e in the inequality
(2.1.12).
Thus,
(1/2)(‖Ae‖‖A*e‖ + |⟨A²e, e⟩|) ≥ |⟨Ae, e⟩|².    (2.1.13)
By taking the maximum on both sides in the inequality (2.1.13) over e ∈ ℂⁿ with
‖e‖ = 1, we have
w²(A) ≤ (1/2)(‖A‖² + w(A²)),
as required.
Theorem 2.1.8.(Dragomir, 2009):
Let A ∈ Mn. Then
w^(2r)(A) ≤ (1/2)‖(A*A)^r + (AA*)^r‖    (2.1.14)
for all r ≥ 1.
Proof:
Let x be any unit vector in ℂⁿ. By the Schwarz inequality, we have
|⟨Ax, x⟩|² = |⟨Ax, x⟩| |⟨x, A*x⟩| ≤ ‖Ax‖ ‖A*x‖
= ⟨A*Ax, x⟩^(1/2) ⟨AA*x, x⟩^(1/2).
By the arithmetic-geometric mean inequality and the convexity of f(t) = t^r, r ≥ 1,
we have
⟨A*Ax, x⟩^(r/2) ⟨AA*x, x⟩^(r/2) ≤ ((⟨A*Ax, x⟩ + ⟨AA*x, x⟩)/2)^r
≤ (1/2)(⟨A*Ax, x⟩^r + ⟨AA*x, x⟩^r)
≤ (1/2)(⟨(A*A)^r x, x⟩ + ⟨(AA*)^r x, x⟩)
= (1/2)⟨((A*A)^r + (AA*)^r)x, x⟩.
Thus,
|⟨Ax, x⟩|^(2r) ≤ (1/2)⟨((A*A)^r + (AA*)^r)x, x⟩.
Note that (A*A)^r + (AA*)^r is Hermitian. So by taking the maximum on both sides in
the above inequality over x ∈ ℂⁿ with ‖x‖ = 1, we deduce the desired inequality.
Sattari, Moslehian and Yamazaki (2015) generalized the inequality (2.1.10) as follows:
Theorem 2.1.9.(Sattari, Moslehian and Yamazaki, 2015):
Let A ∈ Mn. Then
w^(2r)(A) ≤ (1/2)(‖A‖^(2r) + w^r(A²))    (2.1.15)
for all r ≥ 1.
Proof:
By applying Lemma (2.1.5)(a) to the inequality (2.1.13), we get
|⟨Ax, x⟩|^(2r) ≤ ((1/2)(‖Ax‖‖A*x‖ + |⟨A²x, x⟩|))^r
≤ (1/2)(‖Ax‖^r ‖A*x‖^r + |⟨A²x, x⟩|^r).
Hence,
|⟨Ax, x⟩|^(2r) ≤ (1/2)(‖A‖^(2r) + |⟨A²x, x⟩|^r).
Taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with ‖x‖ = 1,
we obtain the desired inequality.
The following is another upper bound for w(A) given by Sattari, Moslehian and
Yamazaki (2015).
Theorem 2.1.10.(Sattari, Moslehian and Yamazaki, 2015):
Let A ∈ Mn. Then
w^(2r)(A) ≤ (1/2)‖((A*A + AA*)/2)^r‖ + (1/2)w^r(A²)    (2.1.16)
for all r ≥ 1.
Proof:
As in the proof of Theorem (2.1.4), we have for every θ ∈ ℝ,
‖Re(e^{iθ}A)‖² ≤ (1/4)‖A*A + AA*‖ + (1/2)w(A²).
Taking the maximum over θ ∈ ℝ and using Theorem (1.4.9), we get
w²(A) ≤ (1/4)‖A*A + AA*‖ + (1/2)w(A²).
For r ≥ 1, since f(t) = t^r is convex, we have
w^(2r)(A) = (w²(A))^r
≤ ((1/2)·(1/2)‖A*A + AA*‖ + (1/2)w(A²))^r
≤ (1/2)((1/2)‖A*A + AA*‖)^r + (1/2)w^r(A²)
= (1/2)‖((A*A + AA*)/2)^r‖ + (1/2)w^r(A²),
as required.
Remark:
By Theorem (2.1.10) and Theorem (2.1.8), for r = 1 the inequality (2.1.16) reads
w²(A) ≤ (1/4)‖A*A + AA*‖ + (1/2)w(A²),
while the inequality (2.1.14) reads
w²(A) ≤ (1/2)‖A*A + AA*‖.
Hence, if w(A²) ≤ (1/2)‖A*A + AA*‖, in particular if A is normal, then the inequality
(2.1.16) is at least as sharp as the inequality (2.1.14).
Kittaneh (2006) established a general spectral radius inequality which yields spectral
radius inequalities for sums and products of matrices. In fact, Kittaneh has shown that if
A, B, C, D ∈ Mn, then
r(AB + CD) ≤ (1/2)(‖BA‖ + ‖DC‖) + (1/2)√((‖BA‖ − ‖DC‖)² + 4‖BC‖‖DA‖).    (2.1.17)
Recently, by using the inequality (2.1.17), Abu-Omar and Kittaneh (2015),(a),
improved the triangle inequality for the numerical radius as follows:
Theorem 2.1.11.(Abu-Omar and Kittaneh, 2015):
Let A, B ∈ Mn. Then
w(A + B) ≤ (1/2)(w(A) + w(B))
+ (1/2)√((w(A) − w(B))² + ‖AB‖ + ‖AB*‖ + ‖A*B‖ + ‖BA‖).    (2.1.18)
Proof:
Let θ be any real number. Then, by applying the inequality (2.1.17) to the Hermitian
matrices Re(e^{iθ}A) and Re(e^{iθ}B), we have
‖Re(e^{iθ}(A + B))‖ = r(Re(e^{iθ}A) + Re(e^{iθ}B))
≤ (1/2)(‖Re(e^{iθ}A)‖ + ‖Re(e^{iθ}B)‖)
+ (1/2)√((‖Re(e^{iθ}A)‖ − ‖Re(e^{iθ}B)‖)² + 4‖Re(e^{iθ}A) Re(e^{iθ}B)‖).
Since
4 Re(e^{iθ}A) Re(e^{iθ}B) = e^{2iθ}AB + AB* + A*B + e^{−2iθ}A*B*,
we have
4‖Re(e^{iθ}A) Re(e^{iθ}B)‖ ≤ ‖AB‖ + ‖AB*‖ + ‖A*B‖ + ‖BA‖.
Thus,
‖Re(e^{iθ}(A + B))‖ ≤ r([[‖Re(e^{iθ}A)‖, c], [c, ‖Re(e^{iθ}B)‖]]),
where c = (1/2)√(‖AB‖ + ‖AB*‖ + ‖A*B‖ + ‖BA‖). Hence, by the norm monotonicity
of matrices with nonnegative entries and then by Theorem (1.4.9), we have
‖Re(e^{iθ}(A + B))‖ ≤ r([[w(A), c], [c, w(B)]])
= (1/2)(w(A) + w(B)) + (1/2)√((w(A) − w(B))² + ‖AB‖ + ‖AB*‖ + ‖A*B‖ + ‖BA‖).
Now, the inequality (2.1.18) follows by taking the maximum over all θ ∈ ℝ and by
using Theorem (1.4.9).
Hou and Du (1995) established useful estimates for the spectral radius, the numerical
radius and the spectral norm of the block matrix A = [Aij] with entries Aij ∈ Mn. In particular,
they proved that
r(A) ≤ r([‖Aij‖]),
w(A) ≤ w([‖Aij‖]),
and
‖A‖ ≤ ‖[‖Aij‖]‖.
Abu-Omar and Kittaneh (2015),(b) improved the numerical radius inequality above as follows:
Theorem 2.1.12.(Abu-Omar and Kittaneh, 2015):
Let A = [Aij] be an m × m block matrix with Aij ∈ Mn. Then
w(A) ≤ w([tij]),
where
tij = w(Aij) if i = j, and tij = ‖Aij‖ if i ≠ j.
Proof:
Let x = [x1, x2, …, xm]ᵗ be a unit vector in ℂ^(mn), partitioned conformally with A. Then
|⟨Ax, x⟩| = |∑i,j ⟨Aij xj, xi⟩|
≤ ∑i |⟨Aii xi, xi⟩| + ∑_{i≠j} |⟨Aij xj, xi⟩|
≤ ∑i w(Aii)‖xi‖² + ∑_{i≠j} ‖Aij‖‖xj‖‖xi‖
= ∑i,j tij ‖xi‖‖xj‖
= ⟨[tij]x̃, x̃⟩,
where x̃ = [‖x1‖, ‖x2‖, …, ‖xm‖]ᵗ.
Now, since x̃ is a unit vector in ℝᵐ, then
|⟨Ax, x⟩| ≤ w([tij]),
and so
w(A) = max{|⟨Ax, x⟩| : ‖x‖ = 1} ≤ w([tij]),
as required.
The following corollary is an immediate consequence of Theorem (2.1.12).
Corollary 2.1.2:
Let A, B, C, D ∈ Mn. Then
w([[A, B], [C, D]]) ≤ (1/2)(w(A) + w(D))
+ (1/2)√((w(A) − w(D))² + (‖B‖ + ‖C‖)²).
2.2. Cartesian Decomposition and Numerical Radius Inequalities:
It is well known that if A = B + iC is the Cartesian decomposition of a matrix A ∈ Mn,
then
|A|² + |A*|² = 2(B² + C²).
Thus, the inequalities (2.1.4) can be written as
(1/4)‖ |A|² + |A*|² ‖ ≤ w²(A) ≤ (1/2)‖ |A|² + |A*|² ‖,    (2.2.1)
or equivalently, as
(1/2)‖B² + C²‖ ≤ w²(A) ≤ ‖B² + C²‖.    (2.2.2)
El-Haddad and Kittaneh (2007) gave generalizations of the second inequality of
(2.2.1) and the inequalities (2.2.2), using the Cartesian decomposition of the matrix, as in
the following theorems.
Theorem 2.2.1.(El-Haddad and Kittaneh, 2007):
Let A ∈ Mn with the Cartesian decomposition A = B + iC, and let r ≥ 1. Then
w^(2r)(A) ≤ 2^(r−1)‖ |B|^(2r) + |C|^(2r) ‖.    (2.2.3)
To prove Theorem (2.2.1), we need the following lemma, which is an application of
Jensen's inequality and can be found in Hardy, Littlewood and Pólya (1988).
Lemma (2.2.1):
Let a, b ≥ 0 and r ≥ 1. Then
(a + b)^r ≤ 2^(r−1)(a^r + b^r).
Proof:
For any unit vector x ∈ ℂⁿ and for r ≥ 1, we have
|⟨Ax, x⟩|^(2r) = (⟨Bx, x⟩² + ⟨Cx, x⟩²)^r
≤ 2^(r−1)(|⟨Bx, x⟩|^(2r) + |⟨Cx, x⟩|^(2r))    (by Lemma (2.2.1))
≤ 2^(r−1)(⟨|B|x, x⟩^(2r) + ⟨|C|x, x⟩^(2r))    (by Theorem (1.2.4))
≤ 2^(r−1)(⟨|B|^(2r)x, x⟩ + ⟨|C|^(2r)x, x⟩)    (by Theorem (1.2.5))
= 2^(r−1)⟨(|B|^(2r) + |C|^(2r))x, x⟩.
Thus, we obtain the inequality
|⟨Ax, x⟩|^(2r) ≤ 2^(r−1)⟨(|B|^(2r) + |C|^(2r))x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with
‖x‖ = 1, we obtain
w^(2r)(A) ≤ 2^(r−1)‖ |B|^(2r) + |C|^(2r) ‖.
For the case r = 1, we have
w²(A) ≤ ‖B² + C²‖ = (1/2)‖ |A|² + |A*|² ‖,
as required.
Theorem 2.2.2.(El-Haddad and Kittaneh, 2007):
Let A ∈ Mn with the Cartesian decomposition A = B + iC, and let r ≥ 2. Then
w^r(A) ≤ 2^(r/2−1)‖ |B|^r + |C|^r ‖.
Proof:
For any unit vector x ∈ ℂⁿ, we have
|⟨Ax, x⟩|^r = (√(⟨Bx, x⟩² + ⟨Cx, x⟩²))^r
= (⟨Bx, x⟩² + ⟨Cx, x⟩²)^(r/2)
≤ 2^(r/2−1)(|⟨Bx, x⟩|^r + |⟨Cx, x⟩|^r)    (by Lemma (2.2.1))
≤ 2^(r/2−1)(⟨|B|x, x⟩^r + ⟨|C|x, x⟩^r)    (by Theorem (1.2.4))
≤ 2^(r/2−1)(⟨|B|^r x, x⟩ + ⟨|C|^r x, x⟩)    (by Theorem (1.2.5)).
Thus,
|⟨Ax, x⟩|^r ≤ 2^(r/2−1)⟨(|B|^r + |C|^r)x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with
‖x‖ = 1, we obtain
w^r(A) ≤ 2^(r/2−1)‖ |B|^r + |C|^r ‖,
as required.
Theorem 2.2.3.(El-Haddad and Kittaneh, 2007):
Let A ∈ Mn with the Cartesian decomposition A = B + iC, and let r ≥ 1. Then
(1/2)‖ |B|^(2r) + |C|^(2r) ‖ ≤ w^(2r)(A) ≤ 2^(r−1)‖ |B|^(2r) + |C|^(2r) ‖.    (2.2.4)
Proof:
As in the proof of the first inequality in (2.1.4), we have
w(A) ≥ max{‖B‖, ‖C‖}.
Thus,
w^(2r)(A) ≥ max{‖ |B|^(2r) ‖, ‖ |C|^(2r) ‖},
and so
w^(2r)(A) ≥ (1/2)(‖ |B|^(2r) ‖ + ‖ |C|^(2r) ‖) ≥ (1/2)‖ |B|^(2r) + |C|^(2r) ‖.
Hence,
(1/2)‖ |B|^(2r) + |C|^(2r) ‖ ≤ w^(2r)(A),
which proves the first inequality in (2.2.4).
To prove the second inequality in (2.2.4), let x be any unit vector in ℂⁿ. Then
|⟨Ax, x⟩|² = ⟨Bx, x⟩² + ⟨Cx, x⟩²
implies
|⟨Ax, x⟩|^(2r) = (⟨Bx, x⟩² + ⟨Cx, x⟩²)^r
≤ 2^(r−1)(|⟨Bx, x⟩|^(2r) + |⟨Cx, x⟩|^(2r))    (by Lemma (2.2.1))
≤ 2^(r−1)(⟨|B|^(2r)x, x⟩ + ⟨|C|^(2r)x, x⟩)    (by Theorems (1.2.4) and (1.2.5))
= 2^(r−1)⟨(|B|^(2r) + |C|^(2r))x, x⟩.
Thus,
|⟨Ax, x⟩|^(2r) ≤ 2^(r−1)⟨(|B|^(2r) + |C|^(2r))x, x⟩.
By taking the maximum on both sides in the above inequality over x ∈ ℂⁿ with
‖x‖ = 1, we obtain
w^(2r)(A) ≤ 2^(r−1)‖ |B|^(2r) + |C|^(2r) ‖,
which proves the second inequality in (2.2.4), and completes the proof of the theorem.
Kittaneh, Moslehian and Yamazaki (2015) proved a useful theorem concerning the Cartesian decomposition and gave a new identity for the numerical radius of matrices, as in the following theorem.
Theorem 2.2.4 (Kittaneh, Moslehian and Yamazaki, 2015):
Let A = B + iC be the Cartesian decomposition of A ∈ M_n(ℂ). Then
w(A) = max{ ‖αB + βC‖ : α, β ∈ ℝ, α² + β² = 1 }. (2.2.5)
In particular,
(1/√2)‖B + C‖ ≤ w(A)
and
(1/√2)‖B − C‖ ≤ w(A). (2.2.6)
Proof:
Since max_θ Re(e^{iθ}⟨Ax, x⟩) = |⟨Ax, x⟩| for every unit vector x, then
w(A) = max_θ ‖Re(e^{iθ}A)‖. (∗)
On the other hand, let A = B + iC be the Cartesian decomposition of A. Then
Re(e^{iθ}A) = (e^{iθ}A + e^{−iθ}A*)/2
= ((cos θ + i sin θ)(B + iC) + (cos θ − i sin θ)(B − iC))/2
= (cos θ)B − (sin θ)C.
Therefore, by putting α = cos θ and β = −sin θ in (∗), we obtain (2.2.5).
Especially, by setting (α, β) = (1/√2, 1/√2) and (α, β) = (1/√2, −1/√2), we obtain the inequalities (2.2.6).
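The Cartesian identity above lends itself to a direct numerical check. This sketch is not from the thesis; it assumes NumPy, computes w(A) both as max_θ λ_max(Re(e^{iθ}A)) and as max ‖(cos θ)B + (sin θ)C‖ over the same θ grid, and verifies that the two agree and dominate ‖B + C‖/√2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (A + A.conj().T) / 2          # Hermitian "real part" of A
C = (A - A.conj().T) / (2j)       # Hermitian "imaginary part" of A

thetas = np.linspace(0, 2 * np.pi, 4000)

# w(A) as the maximum over theta of the top eigenvalue of Re(e^{i theta} A).
w_direct = max(
    np.linalg.eigvalsh((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
    for t in thetas
)

# The same quantity via the identity w(A) = max ||alpha*B + beta*C||,
# alpha^2 + beta^2 = 1, parametrized as (cos t, sin t).
w_cartesian = max(np.linalg.norm(np.cos(t) * B + np.sin(t) * C, 2) for t in thetas)

assert abs(w_direct - w_cartesian) < 1e-3
assert w_direct >= np.linalg.norm(B + C, 2) / np.sqrt(2) - 1e-4  # one of (2.2.6)
```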
2.3. Spectral Radius Inequalities:
It is well-known that if A ∈ M_n(ℂ), then the spectral radius of A is defined as
r(A) = max{ |λ| : λ is an eigenvalue of A }.
By using the inequality (2.1.23), Abu-Omar and Kittaneh (2013) proved a general
spectral radius inequality which improves the inequality (2.1.17).
Theorem 2.3.1.(Abu-Omar and Kittaneh, 2013):
Let . Then
( )
√( )
‖ ‖‖ ‖
Proof:
By using basic properties of the spectral radius, we have
(*
+)
(*
+ [
])
([
] *
+)
([
])
([
])
by the inequality (2.1.23), we have
( )
√( )
‖ ‖ ‖ ‖
The desired inequality follows by replacing and by and
respectively, in
the last inequality and then taking the infimum over
Corollary 2.3.1:
Let Then
( )
√( )
‖ ‖ ‖ ‖
Proof:
Letting , and in Theorem (2.3.1), we have
( )
√( )
‖ ‖
Similarly, letting , and in Theorem (2.3.1), we have
( )
√( )
‖ ‖
The desired inequality now follows from the inequalities (2.3.1) and ( ).
Corollary 2.3.2:
Let Then
( )
√( )
‖ ‖‖ ‖
Proof:
The desired inequality follows from Theorem (2.3.1) by letting ,
and .
Corollary 2.3.3:
Let Then
√ ‖ ‖‖ ‖ ‖ ‖‖ ‖
and
√ ‖ ‖‖ ‖ ‖ ‖‖ ‖
Proof:
Letting , , and in Theorem (2.3.1), we have
√‖ ‖‖ ‖
Similarly, letting , , and in Theorem (2.3.1), we have
√‖ ‖‖ ‖
Now, the inequality (2.3.3) follows from the inequalities (2.3.5) and (2.3.6). The
inequality (2.3.4) follows from inequality (2.3.3) by symmetry.
Corollary 2.3.4:
Let Then
( )
√( )
‖ ‖‖ ‖ ‖ ‖‖ ‖
Proof:
Letting
,
, and in Theorem (2.3.1), we have
( )
√( )
‖ ‖‖ ‖
Similarly, letting , ,
and
in Theorem (2.3.1), we
have
( )
√( )
‖ ‖‖ ‖
The desired inequality follows from the inequalities (2.3.7) and (2.3.8).
Corollary 2.3.5:
Let Then
( )
√ ‖ ‖‖ ‖ ‖ ‖‖ ‖
‖ ‖
for every positive integer
Proof:
The desired inequality follows by letting ,where is a positive integer, in
Corollary (2.3.4).
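The basic relations underlying this section can be sanity-checked numerically. The following sketch is not from the thesis; it assumes NumPy and verifies, on random matrices, the standard chain r(A) ≤ w(A) ≤ ‖A‖ and the commutation property r(AB) = r(BA) that is used repeatedly in the proofs above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Bm = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def spectral_radius(M):
    """r(M): largest modulus of an eigenvalue."""
    return max(abs(np.linalg.eigvals(M)))

def numerical_radius(M, samples=2000):
    """Approximate w(M) = max over theta of lambda_max(Re(e^{i theta} M))."""
    thetas = np.linspace(0, 2 * np.pi, samples)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * M + np.exp(-1j * t) * M.conj().T) / 2)[-1]
        for t in thetas
    )

r = spectral_radius(A)
w = numerical_radius(A)
norm = np.linalg.norm(A, 2)

assert r <= w + 1e-9 and w <= norm + 1e-9                              # r(A) <= w(A) <= ||A||
assert abs(spectral_radius(A @ Bm) - spectral_radius(Bm @ A)) < 1e-8   # r(AB) = r(BA)
```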
Chapter Three
Two by Two Block Matrix Inequalities
Block matrices arise naturally in many aspects of matrix theory. If T = [A B; C D] is a 2×2 block matrix (or partitioned matrix), then it is very useful to explore the relations between various functions of the matrix T and those of its block entries A, B, C and D. The matrix [A 0; 0 D] is called the diagonal part of T, and [0 B; C 0] is the off-diagonal part.
In this chapter we present general inequalities for 2×2 block matrices. In Section (3.1), we present several bounds of the numerical radius for the general block matrix. In Section (3.2), we present several bounds of the numerical radius for the off-diagonal part of a block matrix. In Section (3.3), we present and investigate inequalities for special cases of block matrices.
3.1. Numerical Radius Inequalities for General Block Matrices:
It is well-known ( ) that
(*
+) (*
+)
(*
+) (*
+)
and
(*
+) ( )
where
For with it is known that
‖ ‖
Since *
+
*
+
*
+ the subadditivity of the numerical radius
and the inequalities , together with the identities and ,
imply that
(*
+) ( ) ‖ ‖ ‖ ‖
and
(*
+) ( ( ) (*
+))
In particular, if then
(*
+) ‖ ‖ ‖ ‖
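One of the well-known identities used here is that the numerical radius of the diagonal part satisfies w([A 0; 0 D]) = max{w(A), w(D)}. The following sketch is not from the thesis; it assumes NumPy and checks this identity on random blocks.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def numerical_radius(M, samples=3000):
    """Approximate w(M) = max over theta of lambda_max(Re(e^{i theta} M))."""
    thetas = np.linspace(0, 2 * np.pi, samples)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * M + np.exp(-1j * t) * M.conj().T) / 2)[-1]
        for t in thetas
    )

Z = np.zeros((n, n))
T = np.block([[A, Z], [Z, D]])   # the diagonal part [A 0; 0 D]

w_T = numerical_radius(T)
w_blocks = max(numerical_radius(A), numerical_radius(D))

# w([A 0; 0 D]) = max{w(A), w(D)}; same theta grid on both sides,
# so the two values should agree to machine precision.
assert abs(w_T - w_blocks) < 1e-8
```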
Hirzallah, Kittaneh and Shebrawi (2012) gave other upper bounds for the numerical radius of the general 2×2 block matrix, as we will see.
Theorem 3.1.1.(Hirzallah, Kittaneh and Shebrawi, 2012):
Let *
+ be a matrix with Then
where
√‖ ‖ ‖ ‖
√
‖ ‖ ‖ ‖
√‖ ‖ ‖ ‖
√
‖ ‖ ‖ ‖
To prove Theorem (3.1.1), we need the following lemma.
Lemma 3.1.1:
Let Then
‖ ‖ ‖ ‖ ‖| | | | ‖ |‖ ‖ ‖ ‖ |
Proof:
We have
‖ ‖ ‖ ‖
‖ ‖ ‖ ‖ | ‖ ‖ ‖ ‖ |
‖ | | ‖ ‖ | | ‖ | ‖ ‖ ‖ ‖ |
‖ | | | | ‖ | ‖ ‖ ‖ ‖ |
‖ | | | | ‖ | ‖ ‖ ‖ ‖ |
Thus,
‖ ‖ ‖ ‖ ‖ | | | | ‖ | ‖ ‖ ‖ ‖ |
Replacing and by and in (3.1.8), respectively, we obtain
‖ ‖ ‖ ‖ ‖ | | | | ‖ | ‖ ‖ ‖ ‖ |
and so
‖ ‖ ‖ ‖ ‖ | | | | ‖ | ‖ ‖ ‖ ‖ |
as required.
Proof of Theorem (3.1.1):
We have
(*
+) ‖*
+‖
‖*
+ *
+‖
‖*| | | |
+‖
‖ | | | | ‖
and so
(*
+) ‖| | | | ‖
√ ‖ ‖ ‖ ‖ | ‖ ‖ ‖ ‖ |
( )
√‖ ‖ ‖ ‖
By taking a unitary matrix *
+ we obtain
(*
+) (*
+) (*
+)
(*
+) ( *
+ )
√‖ ‖ ‖ ‖
√
‖ ‖ ‖ ‖
Similarly,
(*
+) √‖ ‖ ‖ ‖
√
‖ ‖ ‖ ‖
√‖ ‖ ‖ ‖
√
‖ ‖ ‖ ‖
By observing that (*
+) (*
+) we have
(*
+)
as required.
Theorem 3.1.2.(Hirzallah, Kittaneh and Shebrawi, 2012):
Let *
+ be a matrix with Then
where
√ ‖ ‖ ‖ ‖
√ ‖ ‖ ‖ ‖
√ ‖ ‖ ‖ ‖
√ ‖ ‖ ‖ ‖
Proof:
We have
‖ ‖
( )
( )
‖ ‖
‖ ‖
‖ ‖
‖ ‖ ( )
and so
‖ ‖ ‖ ‖ ‖ ‖
By using the inequality , we have
(*
+) √ ‖ ‖ ‖ ‖
The proof of the general case can be obtained by an argument similar to that used in the
proof of Theorem (3.1.1).
Abu-Omar and Kittaneh (2015b) improved and refined the inequalities (2.1.20) and (2.1.22), respectively, as follows:
Theorem 3.1.3.(Abu-Omar and Kittaneh, 2015):
Let [ ] be a matrix with Then
([ ])
where
,
( )
([
])
To prove Theorem (3.1.3), Abu-Omar and Kittaneh used the following lemma.
Lemma 3.1.2:
Let Then
(*
+)
‖ ‖
Proof:
By Theorem we have
(*
+)
‖ *
+ *
+
‖
and so
(*
+)
‖[
]‖
‖*
( )
+‖
‖ ‖
Proof of Theorem (3.1.3):
For any we have
‖ ( )‖
‖
‖
[ ( )
(
)
(
) ( )
(
)
(
)
(
)
(
) ( ) ]
‖
‖
‖
‖
[ ‖ ( )‖
‖(
)‖
‖(
)‖ ‖ ( )‖
‖(
)‖
‖(
)‖
‖(
)‖
‖(
)‖ ‖ ( )‖ ]
‖
‖
( )
‖[ ]‖
(by Lemma (3.1.2) and by the norm monotonicity of matrices with nonnegative entries)
Now, since the matrix [ ] is real symmetric, then we have
‖[ ]‖ ([ ])
Thus,
‖ ( )‖ ([ ])
as required.
Remark:
The inequality is sharper than the inequality . To see this, note that
[ ] is real symmetric, and so
([ ]) ‖[ ]‖
By the inequality and by the norm monotonicity of matrices with nonnegative
entries, we have
‖[ ]‖
‖[ ]‖
([ ])
([ ])
Corollary 3.1.1:
Let Then
(*
+)
( )
√( )
(*
+)
Proof:
By Theorem
(*
+) (* (*
+)
(*
+) +)
(* (*
+)
(*
+) +)
Since the matrix * (*
+)
(*
+) + is real symmetric, it follows that
(* (*
+)
(*
+) +) (*
(*
+)
(*
+) +)
( )
√( )
(*
+)
as required.
3.2. Inequalities for the off-Diagonal Part of 2×2 Block Matrices:
Recall that defines a vector norm on and
for any matrix and any unitary matrix
By applying the identity to the matrix *
+ and the unitary matrix
* ⁄
+ we get
*
+ [ ⁄ ⁄
]
( ⁄ [ ⁄ ⁄
]) ( | ⁄ | )
and so
(*
+) (*
+)
Also, by applying the identity to the matrix *
+ and the unitary matrix
√ *
+, we get
(*
+) ( )
Theorem 3.2.1.(Hirzallah, Kittaneh and Shebrawi, 2011):
Let Then
(*
+) ( )
and
(*
+)
Proof:
To prove the inequality (3.2.4), we have
(*
+)
(*
+ *
+)
(*
+) (*
+)
(*
+)
and so
(*
+)
By replacing by – in the inequality and then by using the inequality (3.2.2),
we have
(*
+)
(*
+)
Now, the inequality follows from the inequalities and
To prove the inequality consider the unitary matrix
√ *
+,
then
(*
+) ( *
+ )
([
])
([
] [
])
( ([
]) ([
]))
which proves the second inequality in and completes the proof of the theorem.
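The weak unitary invariance just used yields two standard identities for off-diagonal parts: w([0 B; C 0]) = w([0 C; B 0]) and w([0 B; B 0]) = w(B). The following sketch is not from the thesis; it assumes NumPy and checks both identities numerically.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def numerical_radius(M, samples=3000):
    """Approximate w(M) = max over theta of lambda_max(Re(e^{i theta} M))."""
    thetas = np.linspace(0, 2 * np.pi, samples)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * M + np.exp(-1j * t) * M.conj().T) / 2)[-1]
        for t in thetas
    )

Z = np.zeros((n, n))
w_BC = numerical_radius(np.block([[Z, B], [C, Z]]))
w_CB = numerical_radius(np.block([[Z, C], [B, Z]]))
w_BB = numerical_radius(np.block([[Z, B], [B, Z]]))
w_B = numerical_radius(B)

assert abs(w_BC - w_CB) < 1e-8   # swapping the off-diagonal blocks preserves w
assert abs(w_BB - w_B) < 1e-4    # w([0 B; B 0]) = w(B), up to grid error
```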
Corollary 3.2.1:
Let with the Cartesian decomposition Then
(*
+)
for any
Proof:
By replacing by in Theorem , we get
( )
(*
+)
Thus,
( )
(*
+)
( )
Since the result follows from the inequalities
Note that, for any we have
*
+
*
+
So by the identity we have
(*
+)
‖*
+‖
Take a unitary matrix
√ *
+ Then
‖*
+‖ ‖ *
+ ‖ ‖ ‖
and so
(*
+) ‖ ‖
In the following theorem, Hirzallah, Kittaneh and Shebrawi (2011) gave an upper bound for the numerical radius of the off-diagonal part [0 B; C 0] that involves the sum and difference of B and C.
Theorem 3.2.2.(Hirzallah, Kittaneh and Shebrawi, 2011):
Let Then
(*
+) ( ) ‖ ‖ ‖ ‖
Proof:
Consider the unitary matrix
√ *
+ By the inequality (3.2.2) and the
identity (3.2.10), we have
(*
+) ( *
+ )
([
])
([
] *
+)
( ([
]) (*
+))
‖ ‖
Thus,
(*
+) ‖ ‖
By replacing by – in the inequality and by using the inequality we
have
(*
+) ‖ ‖
From the inequalities and , we have
(*
+) ‖ ‖ ‖ ‖
By interchanging and in the inequality we get
(*
+) ‖ ‖ ‖ ‖
Thus, the desired inequality follows from the inequalities and
Theorem 3.2.3:
Let Then
‖ ‖ (*
+) ‖ ‖ ‖ ‖
Theorem (3.2.3) was proved by Kittaneh, Moslehian and Yamazaki (2015). Here we present our own proof.
Proof:
For any we have
‖ ‖ ‖ ‖
(*
+) ( )
([
]) ( | | )
‖ ( ) ( ) ‖ ( )
‖ ( ) ( )
‖
‖ ‖
Replacing by we get
‖ ‖
(*
+) ‖ ‖ ‖ ‖
as required.
Abu-Omar and Kittaneh (2015b) extended the preceding theorem as follows:
Theorem 3.2.4.(Abu-Omar and Kittaneh 2015):
Let *
+ Then
√‖| | | | ‖
√‖| | | | ‖
Proof:
Let be a unit vector and let be a real number such that
⟨ ⟩ |⟨ ⟩|
Then we have
‖ ‖ ( )
‖( )
( )‖
√‖| | | | ‖
√|⟨(| | | | ) ⟩|
√|⟨ | | | | ⟩ ⟨ ⟩|
and so
√|⟨ | | | | ⟩ ⟨ ⟩ |
√|⟨ | | | | ⟩ |⟨ ⟩||
√|⟨ | | | | ⟩ |
Thus,
‖ ‖
√⟨ | | | | ⟩
√‖| | | | ‖
which proves the first inequality in .
To prove the second inequality in , we have
‖ ‖ ( )
‖( ) ( )‖
‖| | | | ( )‖
√ ‖| | | | ‖
‖ ‖
√‖| | | | ‖
which proves the second inequality in and completes the proof of the theorem.
Remark:
Note that
√‖ ‖
√‖ ‖ ‖ ‖ ‖ ‖
√‖ ‖ ‖ ‖ ‖ ‖‖ ‖
√ ‖ ‖ ‖ ‖
‖ ‖ ‖ ‖
Thus, the second inequality in is sharper than the inequality
In the following theorem, Abu-Omar and Kittaneh (2015b) gave another upper estimate for the numerical radius of the off-diagonal matrix [0 B; C 0].
Theorem 3.2.5.(Abu-Omar and Kittaneh, 2015):
Let *
+ Then
‖| | | |‖
‖| | | |‖
Proof:
Let *
+
be a unit vector. Then
|⟨ ⟩| |⟨ ⟩ ⟨ ⟩| |⟨ ⟩| |⟨ ⟩|
By Lemma (2.1.1), we have
|⟨ ⟩| |⟨ ⟩|
⟨| | ⟩ ⟨| | ⟩
⟨| | ⟩
⟨| | ⟩
⟨| | ⟩ ⟨| | ⟩ ⟨| | ⟩ ⟨| | ⟩
(by Cauchy-Schwarz inequality)
⟨ | | | | ⟩ ⟨ | | | | ⟩
‖| | | |‖ ‖| | | |‖
‖ ‖ ‖ ‖
‖| | | |‖ ‖| | | |‖
‖ ‖
‖ ‖
(by the arithmetic-geometric mean inequality)
‖| | | |‖
‖| | | |‖
and so
‖ ‖
|⟨ ⟩|
‖| | | |‖
‖| | | |‖
as required.
3.3 On Unitarily Invariant Norm Inequalities and Hermitian Block Matrices:
The well-known arithmetic-geometric mean inequality for singular values, due to Bhatia and Kittaneh (1990), says that
2 s_j(AB*) ≤ s_j(A*A + B*B), j = 1, 2, …, n, (3.3.1)
for any A, B ∈ M_n(ℂ). On the other hand, Zhan (2000) has proved that
s_j(A − B) ≤ s_j(A ⊕ B), j = 1, 2, …, n, (3.3.2)
for positive semidefinite A, B ∈ M_n(ℂ). Zhan (2002) has proved that the two inequalities (3.3.1) and (3.3.2) are equivalent. Tao (2006) gave an equivalent form of the two inequalities, which is given in the following theorem.
Theorem 3.3.1 (Tao, 2006):
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
2 s_j(B) ≤ s_j([A B; B* C]) (3.3.3)
for j = 1, 2, …, n.
To prove Theorem (3.3.1), Tao used the following lemma, which can be found in Bhatia (1997).
Lemma 3.3.1:
The Hermitian matrix [0 B; B* 0], where B ∈ M_n(ℂ) has singular values s₁(B) ≥ s₂(B) ≥ ⋯ ≥ s_n(B), has eigenvalues ±s₁(B), ±s₂(B), …, ±s_n(B).
Proof of Theorem (3.3.1):
Consider the unitary matrix U = [I 0; 0 −I]. Then
U* [A B; B* C] U = [A −B; −B* C] ≥ 0,
and so
[A B; B* C] − U* [A B; B* C] U = [0 2B; 2B* 0].
Thus,
[0 2B; 2B* 0] ≤ [A B; B* C].
By Weyl's monotonicity principle, we have
λ_j([0 2B; 2B* 0]) ≤ λ_j([A B; B* C]), j = 1, 2, …, 2n.
By using Lemma (3.3.1), we get
2 s_j(B) ≤ s_j([A B; B* C]), j = 1, 2, …, n,
as required.
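Tao's inequality is easy to test on random positive semidefinite block matrices. The following sketch is not from the thesis; it assumes NumPy, builds a random PSD matrix T = X*X, extracts the off-diagonal block B, and checks 2 s_j(B) ≤ s_j(T) for j = 1, …, n.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
# Build a random positive semidefinite 2x2 block matrix T = X* X.
X = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
T = X.conj().T @ X
B = T[:n, n:]                              # off-diagonal block of T = [[A, B], [B*, C]]

s_T = np.linalg.svd(T, compute_uv=False)   # singular values, in decreasing order
s_B = np.linalg.svd(B, compute_uv=False)

# Tao's inequality: 2 s_j(B) <= s_j(T) for j = 1, ..., n.
assert np.all(2 * s_B <= s_T[:n] + 1e-9)
```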
Theorem 3.3.2 (Tao, 2006):
The following statements are equivalent:
(i) Let A, B ∈ M_n(ℂ) be positive semidefinite matrices. Then s_j(A − B) ≤ s_j(A ⊕ B), j = 1, 2, …, n.
(ii) For any A, B ∈ M_n(ℂ), 2 s_j(AB*) ≤ s_j(A*A + B*B), j = 1, 2, …, n.
(iii) Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then 2 s_j(B) ≤ s_j([A B; B* C]), j = 1, 2, …, n.
Proof:
(i)(ii) Zhan (2002) proved this part as follows:
Let *
+ *
+ Then is unitary, and so
* + *
+
[ ]
*
+
[
]
Thus, we have
(ii)(iii) Since *
+ then there exist such that
*
+ [ ] [ ]
Now, from (ii) and since [ ] [ ] *
+, we have
[ ][ ]
[ ] [ ]
*
+
*
+
(iii)(i) Let be positive semidefinite matrices and let
√ *
+ be a
unitary matrix. Then *
+ and so
*
+ ( *
+ )
*
+
From (iii), we have
*
+
Corollary 3.3.1:
Let A, B ∈ M_n(ℂ) be such that B is Hermitian and ±B ≤ A. Then
2 s_j(B) ≤ s_j((A + B) ⊕ (A − B)), j = 1, 2, …, n.
Proof:
Let T = [A B; B A], and let U = (1/√2)[I I; I −I] be a unitary matrix. Then
U* T U = (A + B) ⊕ (A − B).
Since ±B ≤ A, T is positive semidefinite, and by Theorem (3.3.1) we have
2 s_j(B) ≤ s_j(T) = s_j((A + B) ⊕ (A − B)),
as required.
Bhatia and Kittaneh (2008) proved that if A, B ∈ M_n(ℂ) are such that B is Hermitian and ±B ≤ A, then
s_j(B) ≤ s_j(A ⊕ A), j = 1, 2, …, n.
Recently, Audeh and Kittaneh (2012) employed the previous inequality, as in the following theorem.
Theorem 3.3.3 (Audeh and Kittaneh, 2012):
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
s_j(B) ≤ s_j(A ⊕ C), j = 1, 2, …, n.
Proof:
Consider the unitary matrix U = [I 0; 0 −I]. Then
[A −B; −B* C] = U* [A B; B* C] U ≥ 0.
Thus,
±[0 B; B* 0] ≤ [A 0; 0 C].
By applying the previous inequality, we get
s_j([0 B; B* 0]) ≤ s_j([A 0; 0 C] ⊕ [A 0; 0 C]),
and so
s_j(B) ≤ s_j(A ⊕ C), j = 1, 2, …, n,
as required.
Audeh and Kittaneh (2012) proved that the two preceding inequalities are equivalent, as follows:
Theorem 3.3.4 (Audeh and Kittaneh, 2012):
The following statements are equivalent:
(i) Let A, B ∈ M_n(ℂ), where B is Hermitian and ±B ≤ A. Then s_j(B) ≤ s_j(A ⊕ A) for j = 1, 2, …, n.
(ii) Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then s_j(B) ≤ s_j(A ⊕ C) for j = 1, 2, …, n.
Proof:
(i) ⟹ (ii) This follows from the proof of Theorem (3.3.3).
(ii) ⟹ (i) Let A, B ∈ M_n(ℂ), where B is Hermitian and ±B ≤ A. Since [A B; B A] is unitarily equivalent to (A + B) ⊕ (A − B) (by the identity (3.3.4)), then [A B; B A] ≥ 0. Thus, by (ii) we have
s_j(B) ≤ s_j(A ⊕ A) for j = 1, 2, …, n.
By Theorem (3.3.1), we have
2‖B‖ ≤ ‖[A B; B* C]‖
for all A, B, C ∈ M_n(ℂ) such that [A B; B* C] ≥ 0.
The following theorem gives another upper bound for ‖B‖.
Theorem 3.3.5:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
‖B‖² ≤ ‖A‖‖C‖.
To prove Theorem (3.3.5), we need the following lemma, which can be found in Zhang (1999).
Lemma 3.3.2:
Let [A B; B* C] be positive semidefinite. Then
B = A^{1/2} K C^{1/2}
for some contraction K (that is, ‖K‖ ≤ 1).
Proof of Theorem (3.3.5):
By Lemma (3.3.2), we have B = A^{1/2} K C^{1/2}, and so
‖B‖ = ‖A^{1/2} K C^{1/2}‖ ≤ ‖A^{1/2}‖ ‖K‖ ‖C^{1/2}‖ ≤ ‖A‖^{1/2} ‖C‖^{1/2}.
Hence ‖B‖² ≤ ‖A‖‖C‖,
as required.
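The bound ‖B‖² ≤ ‖A‖‖C‖ for the blocks of a positive semidefinite matrix is readily checked on random instances. This sketch is not from the thesis; it assumes NumPy and uses the spectral norm (`ord=2`).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
# Random positive semidefinite 2x2 block matrix T = X* X.
X = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
T = X.conj().T @ X
A = T[:n, :n]
B = T[:n, n:]
C = T[n:, n:]

def norm(M):
    return np.linalg.norm(M, 2)  # spectral norm

# ||B||^2 <= ||A|| ||C|| for the blocks of a PSD matrix.
assert norm(B) ** 2 <= norm(A) * norm(C) + 1e-9
```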
The following theorem gives another upper bound for ‖B‖ in the case where the diagonal blocks A and C commute.
Theorem 3.3.6:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. If AC = CA, then
‖B‖² ≤ ‖AC‖.
To prove Theorem (3.3.6), we need the following lemma, which can be found in Zhang (1999).
Lemma 3.3.3:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. If AC = CA, then B*B ≤ AC.
Our proof of Theorem (3.3.6):
We have
‖B‖² = ‖B*B‖ (since ‖X‖² = ‖X*X‖)
≤ ‖AC‖ (by Lemma (3.3.3)),
as required.
It follows from the preceding inequality that if A, B ∈ M_n(ℂ) are such that B is Hermitian and ±B ≤ A, then
‖B‖ ≤ ‖A‖
for every unitarily invariant norm.
Theorem 3.3.7:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
‖B + B*‖ ≤ ‖A + C‖
for every unitarily invariant norm.
Proof:
Since [A B; B* C] ≥ 0, then
[C B*; B A] = [0 I; I 0] [A B; B* C] [0 I; I 0] ≥ 0,
and so, adding the two positive semidefinite matrices, we get
[A + C B + B*; B + B* A + C] ≥ 0.
By using the fact that, for a Hermitian matrix Y, the matrix [X Y; Y X] is positive semidefinite if and only if ±Y ≤ X, we get
±(B + B*) ≤ A + C.
So the desired inequality follows from the fact that ±Y ≤ X implies ‖Y‖ ≤ ‖X‖ for every unitarily invariant norm.
Corollary 3.3.2:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. If B is Hermitian, then
2‖B‖ ≤ ‖A + C‖
for every unitarily invariant norm.
Now, we present a remarkable decomposition lemma noticed by Bourin and Lee (2012).
Lemma 3.3.4:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
[A B; B* C] = U [A 0; 0 0] U* + V [0 0; 0 C] V*
for some unitary matrices U, V ∈ M_{2n}(ℂ).
Proof:
Since [A B; B* C] ≥ 0, we can write it as the square of a positive semidefinite matrix, say,
[A B; B* C] = [E F; F* G]²,
where E, G ≥ 0. Let M = [E 0; F* 0] and N = [0 F; 0 G]. Then
[A B; B* C] = MM* + NN*.
Since M*M = [A 0; 0 0] and N*N = [0 0; 0 C], and by using the fact that MM* and NN* are unitarily equivalent to M*M and N*N, respectively, we get the desired decomposition.
This decomposition turned out to be an efficient tool and it also plays a major role
below.
Theorem 3.3.8.(Bourin, Lee and Lin, 2012):
Let such that *
+ Then
*
+ [
] [
]
and
*
+ [
] [
]
for some unitary matrices
Proof:
It is easy to see that [A B; B* C] and [C B*; B A] are unitarily equivalent.
In fact,
*
+ *
+ *
+ *
+
Now, if we take the unitary matrix
√ *
+ then we observe that
*
+ [
]
and
*
+ [
]
where stands for unspecified entries. Now the desired inequalities follow from Lemma
(3.3.4).
The following lemma can be found in Bourin, Lee and Lin (2012).
Lemma 3.3.5:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. If B is Hermitian, then
‖[A B; B* C]‖ ≤ ‖A + C‖
for every unitarily invariant norm.
In the following theorem, we give upper and lower bounds for the spectral norm of positive semidefinite 2×2 block matrices.
Theorem 3.3.9:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. If B is Hermitian, then
(1/2)‖A + C‖ ≤ ‖[A B; B* C]‖ ≤ ‖A + C‖.
Proof:
Since [A B; B* C] ≥ 0, its spectral norm dominates the norms of its diagonal blocks, so
‖A + C‖ ≤ ‖A‖ + ‖C‖ ≤ 2‖[A B; B* C]‖,
that is, (1/2)‖A + C‖ ≤ ‖[A B; B* C]‖. On the other hand, by Lemma (3.3.5) we have
‖[A B; B* C]‖ ≤ ‖A + C‖.
The desired inequality follows from these two inequalities.
Our next theorem gives an upper bound for the spectral norm of the off-diagonal part of positive semidefinite 2×2 block matrices.
Theorem 3.3.10:
Let such that *
+ Then
‖ ‖ (‖ ‖
‖ ‖
)
Proof:
Since *
+ then
*
+ *
+ *
+ *
+
and
*
+ *
+ *
+ * +
By using Corollary and by the inequalities and , we deduce
‖ ‖ ‖ ‖ ‖ ‖
and
‖ ‖ ‖ ‖ ‖ ‖
respectively. So the desired inequality follows from the inequalities and
At the end of this section, we give an estimate for the numerical radius of the off-diagonal part of positive semidefinite 2×2 block matrices. In fact, our result improves the inequality (3.3.8) for the spectral norm.
Theorem 3.3.11:
Let A, B, C ∈ M_n(ℂ) be such that [A B; B* C] ≥ 0. Then
w(B) ≤ (1/2)‖A + C‖.
Proof:
Since [A B; B* C] ≥ 0, it follows that [C B*; B A] ≥ 0. In fact, if we take U = [0 I; I 0], then U is unitary and
[C B*; B A] = U* [A B; B* C] U ≥ 0.
Thus, by the positivity above, we have
‖Re(e^{iθ}B)‖ ≤ (1/2)‖A + C‖ for all θ ∈ ℝ,
and so, by Theorem (1.4.9), we have
w(B) = max_θ ‖Re(e^{iθ}B)‖ ≤ (1/2)‖A + C‖,
as required.
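This final estimate, which in our reading says w(B) ≤ (1/2)‖A + C‖ for the blocks of a positive semidefinite matrix, can be probed numerically. The sketch below is not from the thesis; it assumes NumPy and approximates w(B) by θ-sampling, which only underestimates, so the assertion is a genuine check of the bound.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
# Random positive semidefinite 2x2 block matrix T = X* X.
X = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
T = X.conj().T @ X
A, B, C = T[:n, :n], T[:n, n:], T[n:, n:]

def numerical_radius(M, samples=2000):
    """Approximate w(M) = max over theta of lambda_max(Re(e^{i theta} M))."""
    thetas = np.linspace(0, 2 * np.pi, samples)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * M + np.exp(-1j * t) * M.conj().T) / 2)[-1]
        for t in thetas
    )

wB = numerical_radius(B)
bound = 0.5 * np.linalg.norm(A + C, 2)

# w(B) <= (1/2)||A + C|| for the blocks of a PSD matrix.
assert wB <= bound + 1e-9
```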
REFERENCES
Abu-Omar, A. and Kittaneh, F. (2013), A Numerical Radius Inequality Involving the
Generalized Aluthge Transform. Studia Math, 216, 69-75.
Abu-Omar, A. and Kittaneh, F. (2015a), Notes on some Spectral Radius and Numerical Radius Inequalities. Studia Math, 227, 97-109.
Abu-Omar, A. and Kittaneh, F. (2015b), Numerical Radius Inequalities for n×n Operator Matrices. Linear Algebra and its Applications, 468, 18-26.
Abu-Omar, A. and Kittaneh, F. (2015c), Upper and Lower Bounds for the Numerical Radius with an Application to Involution Operators. Rocky Mountain J. Math, 45, 1055-1064.
Aluthge, A. (1990), On p-hyponormal Operators for 0 &lt; p &lt; 1. Integral Equations Operator Theory, 13, 307-315.
Audeh, W. and Kittaneh, F. (2012), Singular Value Inequalities for Compact Operators.
Linear Algebra and its Applications, 437, 2516-2522.
Bhatia, R. (1997), Matrix Analysis, New York: Springer-Verlag.
Bhatia, R. (2007), Positive Definite Matrices, Princeton University Press.
Bhatia, R. and Davis, C. (1993), More Matrix Forms of the Arithmetic-Geometric Mean
Inequality. SIAM J. Matrix Anal. Appl, 14, 132-136.
Bhatia, R. and Kittaneh, F. (1990), On the Singular Values of a Product of Operators. SIAM J. Matrix Anal. Appl, 11, 272-277.
Bhatia, R. and Kittaneh, F. (2008), The Matrix Arithmetic-Geometric Mean Inequality
Revisited. Linear Algebra and its Applications, 428, 2177-2191.
Bourin, J. C. and Lee, E. Y. (2012), Unitary Orbits of Hermitian Operators with Convex or Concave Functions. Bulletin of the London Mathematical Society, 44(6), 1085-1102.
Bourin, J. C. Lee, E.Y. and Lin, M. (2012), On a Decomposition Lemma for Positive
Semidefinite Block-Matrices. Linear Algebra and its Applications, 437, 1906-1912.
Dragomir, S. S. (2009), Power Inequalities for the Numerical Radius of a Product of
Two Operators in Hilbert Spaces. Sarajevo J Math, 5(18), 269-278.
Dragomir, S. S. (2008), Some Inequalities for the Norm and the Numerical Radius of
Linear Operators in Hilbert Spaces. Tamkang J. Math, 39, 1-7.
El-Haddad, M. and Kittaneh, F. (2007), Numerical Radius Inequalities for Hilbert Space
Operators, II. Studia Math, 182 (2), 133-140.
Furuta, T. (1989), Norm Inequalities Equivalent to Löwner-Heinz Theorem. Rev. Math.
Phys, 1, 135-137.
Gustafson, K. E. and Rao, D. K. M. (1997), Numerical Range, New York: Springer-
Verlag.
Halmos, P. R. (1982), A Hilbert Space Problem Book, New York: Springer-Verlag.
Hardy, G. H., Littlewood, J. E. and Pólya, G. (1988), Inequalities, 2nd ed., Cambridge: Cambridge Univ. Press.
Hirzallah, O., Kittaneh, F. and Shebrawi, K. (2011), Numerical Radius Inequalities for Certain 2×2 Operator Matrices. Integral Equations Operator Theory, 71, 129-147.
Hirzallah, O., Kittaneh, F. and Shebrawi, K. (2012), Numerical Radius Inequalities for 2×2 Operator Matrices. Studia Math, 210, 101-115.
Hou, J. C. and Du, H. K. (1995), Norm Inequalities of Positive Operator Matrices. Integral Equations Operator Theory, 22, 281-294.
Horn, R. and Johnson, C. (1985), Matrix Analysis, Cambridge: Cambridge University
Press.
Horn, R. and Johnson, C. (1991), Topics in Matrix Analysis, Cambridge: Cambridge
University Press.
Kittaneh, F. (2003), A Numerical Radius Inequality and an Estimate for the Numerical Radius of the Frobenius Companion Matrix. Studia Math, 158, 11-17.
Kittaneh, F. (2004), Norm Inequalities for Sums and Differences of Positive Operators.
Linear Algebra Appl, 383, 85-91.
Kittaneh, F. (2002), Norm Inequalities for Sums of Positive Operators. J. Operator Theory, 48, 95-103.
Kittaneh, F. (1988), Notes on some Inequalities for Hilbert Space Operators. Publ. Res.
Inst. Math. Sci, 24, 283-293.
Kittaneh, F. (2005), Numerical Radius Inequalities for Hilbert Space Operators. Studia
Math, 168, 73-80.
Kittaneh, F. (2006), Spectral Radius Inequalities for Hilbert Space Operators. Proc.
Amer. Math. Soc, 134, 385-390.
Kittaneh, F. Moslehian, M.S. and Yamazaki, T. (2015), Cartesian Decomposition and
Numerical Radius Inequalities. Linear Algebra and its Applications, 471, 46-53.
Sattari, M. Moslehian, M.S. and Yamazaki, T. (2015), Some Generalized Numerical
Radius Inequalities for Hilbert Space Operators. Linear Algebra and its Applications,
470, 216-227.
Tao, Y. (2006), More Results on Singular Value Inequalities of Matrices. Linear Algebra and its Applications, 416, 724-729.
Yamazaki, T. (2007), On Upper and Lower Bounds for the Numerical Radius and an
Equality Condition. Studia Math, 178, 83-89.
Zhan, X. (2002), Matrix Inequalities, LNM1790. Berlin: Springer-Verlag.
Zhan, X. (2000), Singular Values of Differences of Positive Semidefinite Matrices.
SIAM J. Matrix Anal. Appl, 22, 819-823.
Zhang, F. (1999), Matrix Theory, New York: Springer-Verlag.
Spectral Radius, Numerical Radius and Unitarily Invariant Norm Inequalities in Hilbert Space
Prepared by
Doaa Mahmoud Al-Saafin
Supervisor
Dr. Aliaa Abdel-Jawad Burqan
Abstract
In this thesis, we present several spectral radius, numerical radius and unitarily invariant norm inequalities for square matrices. We also present inequalities related to partitioned (block) matrices.