From here on out, $V$ and $W$ denote finite-dimensional inner product spaces over $\mathbb{F}$.
7.A: Self-Adjoint and Normal Operators
adjoint, $T^*$
Suppose $T \in \mathcal{L}(V, W)$. The adjoint of $T$ is the function $T^* : W \to V$ such that:
$$\langle Tv, w \rangle = \langle v, T^*w \rangle$$
for all $v \in V$ and all $w \in W$.
In trying to construct this definition, this is the derivation. Suppose $T \in \mathcal{L}(V, W)$. If you fix a $w \in W$, then consider the linear functional on $V$ that maps $v \in V$ to $\langle Tv, w \rangle$, namely $\varphi$ defined by:
$$\varphi(v) = \langle Tv, w \rangle.$$
By the Riesz Representation Theorem, there is a unique vector in $V$ such that $\varphi$ is given by taking the inner product with it; we call this vector $T^*w$, so that $\langle Tv, w \rangle = \langle v, T^*w \rangle$ for every $v \in V$.
The conjugate transpose of an $m \times n$ matrix is the $n \times m$ matrix obtained by interchanging the rows and columns and then taking the complex conjugate of each entry.
Using orthonormal bases
The next result only applies when we use orthonormal bases. With respect to nonorthonormal bases, the matrix of $T^*$ is not necessarily the conjugate transpose of the matrix of $T$.
The matrix of $T^*$
Let $T \in \mathcal{L}(V, W)$. Suppose $e_1, \dots, e_n$ is an orthonormal basis of $V$ and $f_1, \dots, f_m$ is an orthonormal basis of $W$. Then:
$$\mathcal{M}\big(T^*, (f_1, \dots, f_m), (e_1, \dots, e_n)\big)$$
is the conjugate transpose of:
$$\mathcal{M}\big(T, (e_1, \dots, e_n), (f_1, \dots, f_m)\big).$$
\begin{proof}
We obtain the $k$-th column of $\mathcal{M}(T)$ by writing $Te_k$ as a linear combination of the $f_j$'s; the scalars used in this linear combination become the $k$-th column of $\mathcal{M}(T)$. Because $f_1, \dots, f_m$ is an orthonormal basis of $W$, we know how to write $Te_k$ as a linear combination of the $f_j$'s:
$$Te_k = \langle Te_k, f_1 \rangle f_1 + \dots + \langle Te_k, f_m \rangle f_m.$$
Thus the entry in row $j$, column $k$ of $\mathcal{M}(T)$ is $\langle Te_k, f_j \rangle$.
Replacing $T$ with $T^*$ and interchanging the roles played by the $e$'s and $f$'s, we see that the entry in row $j$, column $k$ of $\mathcal{M}(T^*)$ is $\langle T^*f_k, e_j \rangle$, but here:
$$\langle T^*f_k, e_j \rangle = \overline{\langle e_j, T^*f_k \rangle} = \overline{\langle Te_j, f_k \rangle},$$
which is the complex conjugate of the entry in row $k$, column $j$ of $\mathcal{M}(T)$.
Thus $\mathcal{M}(T^*)$ is the conjugate transpose of $\mathcal{M}(T)$ and vice versa. \end{proof}
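As a quick numerical sanity check (my own sketch, not from the book or lecture), the conjugate transpose really does represent the adjoint with respect to the standard orthonormal bases of $\mathbb{C}^n$ and $\mathbb{C}^m$; here numpy's `vdot` is used to build the inner product convention that is linear in the first slot:

```python
# Sketch: check <Av, w> = <v, A^H w> numerically, where A^H is the
# conjugate transpose of A (the matrix of the adjoint w.r.t. standard bases).
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # matrix of T
A_adj = A.conj().T                                                   # matrix of T*

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(m) + 1j * rng.standard_normal(m)

def inner(x, y):
    # <x, y> = sum_k x_k * conj(y_k): linear in the first slot
    # (np.vdot conjugates its FIRST argument, so swap the order)
    return np.vdot(y, x)

print(np.isclose(inner(A @ v, w), inner(v, A_adj @ w)))  # True
```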
Self-Adjoint Operators
self-adjoint
An operator $T \in \mathcal{L}(V)$ is called self-adjoint if $T = T^*$. In other words, $T$ is self-adjoint iff:
$$\langle Tv, w \rangle = \langle v, Tw \rangle$$
for all $v, w \in V$.
Eigenvalues of self-adjoint operators are real
Every eigenvalue of a self-adjoint operator is real
\begin{proof}
Suppose $T$ is a self-adjoint operator on $V$. Let $\lambda$ be an eigenvalue of $T$, and let $v \in V$ with $v \neq 0$ be such that $Tv = \lambda v$. Then:
$$\lambda\|v\|^2 = \langle \lambda v, v \rangle = \langle Tv, v \rangle = \langle v, Tv \rangle = \langle v, \lambda v \rangle = \bar{\lambda}\|v\|^2.$$
So $\lambda = \bar{\lambda}$, thus $\lambda$ is real. \end{proof}
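A tiny numpy sketch of my own (assuming the standard inner product on $\mathbb{C}^4$): the eigenvalues of a randomly generated self-adjoint (Hermitian) matrix come out real up to floating-point error.

```python
# Sketch: eigenvalues of a Hermitian matrix are real.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = B + B.conj().T                     # T equals its conjugate transpose
eigvals = np.linalg.eigvals(T)
print(np.allclose(eigvals.imag, 0))    # True: every eigenvalue is real
```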
Over $\mathbb{C}$, $Tv$ is orthogonal to $v$ for all $v$ only for the $0$ operator
Suppose $V$ is a complex inner product space and $T \in \mathcal{L}(V)$. Suppose:
$$\langle Tv, v \rangle = 0$$
for all $v \in V$. Then $T = 0$.
\begin{proof}
For all $u, w \in V$ one can verify that
$$\langle Tu, w \rangle = \frac{\langle T(u+w), u+w \rangle - \langle T(u-w), u-w \rangle}{4} + \frac{\langle T(u+iw), u+iw \rangle - \langle T(u-iw), u-iw \rangle}{4}\,i.$$
Every term on the right is of the form $\langle Tv, v \rangle$, so since $\langle Tv, v \rangle = 0$ for all $v \in V$ we get $\langle Tu, w \rangle = 0$ for all $u, w \in V$. Taking $w = Tu$ gives $Tu = 0$ for every $u$, so we must have $T = 0$ as expected. \end{proof}
7.B: Spectral Theorem
Recall that a diagonal matrix is a square matrix that is $0$ everywhere except possibly on the diagonal. Recall that an operator on $V$ has a diagonal matrix w.r.t. a basis iff the basis consists of eigenvectors of the operator, via Chapter 5 - Eigenvalues, Eigenvectors, and Invariant Subspaces#^a68a4e.
The nicest operators on $V$ are those for which there is an orthonormal basis of $V$ with respect to which the operator has a diagonal matrix. These are precisely the operators $T \in \mathcal{L}(V)$ such that there is an orthonormal basis of $V$ consisting of eigenvectors of $T$.
The Complex Spectral Theorem
The key part of the Complex Spectral Theorem states that if $\mathbb{F} = \mathbb{C}$ and $T \in \mathcal{L}(V)$ is normal, then $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.
For example, consider $T \in \mathcal{L}(\mathbb{C}^2)$ whose matrix w.r.t. the standard basis is:
$$\begin{pmatrix} 2 & -3 \\ 3 & 2 \end{pmatrix}.$$
As you can see, $\left(\frac{(i, 1)}{\sqrt{2}}, \frac{(-i, 1)}{\sqrt{2}}\right)$ is an orthonormal basis of $\mathbb{C}^2$ consisting of eigenvectors of $T$, and with respect to this basis the matrix of $T$ is the diagonal matrix:
$$\begin{pmatrix} 2+3i & 0 \\ 0 & 2-3i \end{pmatrix}.$$
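A small numpy sketch of my own, verifying this example: $T$ is normal ($TT^* = T^*T$) and the stated basis is orthonormal and diagonalizes $T$.

```python
# Sketch: check the Complex Spectral Theorem example above.
import numpy as np

T = np.array([[2, -3], [3, 2]], dtype=complex)
print(np.allclose(T @ T.conj().T, T.conj().T @ T))   # True: T is normal

# columns are the claimed orthonormal eigenvectors (i,1)/sqrt(2), (-i,1)/sqrt(2)
Q = np.array([[1j, -1j], [1, 1]], dtype=complex) / np.sqrt(2)
print(np.allclose(Q.conj().T @ Q, np.eye(2)))        # True: columns orthonormal
print(np.round(Q.conj().T @ T @ Q, 10))              # diag(2+3i, 2-3i)
```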
Complex Spectral Theorem
Suppose $\mathbb{F} = \mathbb{C}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:
(a) $T$ is normal.
(b) $V$ has an orthonormal basis consisting of eigenvectors of $T$.
(c) $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.
\begin{proof}
(c) $\Rightarrow$ (a): Suppose (c), so $T$ has a diagonal matrix w.r.t. some orthonormal basis of $V$. The matrix of $T^*$ w.r.t. the same basis is obtained by taking the conjugate transpose of the matrix of $T$; hence $T^*$ also has a diagonal matrix. Any two diagonal matrices commute, so $T$ commutes with $T^*$, so $T$ is normal, showing (a).
Hence $T - \lambda_j I$ is not injective for at least one $j$, so $T$ has an eigenvalue with a corresponding eigenvector. \end{proof}
Self-adjoint operators and invariant subspaces
Suppose $T \in \mathcal{L}(V)$ is self-adjoint and $U$ is a subspace of $V$ that is invariant under $T$. Then:
(a) $U^\perp$ is invariant under $T$.
(b) $T|_U \in \mathcal{L}(U)$ is self-adjoint.
(c) $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.
\begin{proof}
(a): Suppose $v \in U^\perp$. Let $u \in U$. Then:
$$\langle Tv, u \rangle = \langle v, Tu \rangle = 0,$$
where the first equality holds since $T$ is self-adjoint, and the second equality comes from how $U$ is invariant under $T$, so $Tu \in U$, and since $v \in U^\perp$ we get that the inner product equals $0$. Because the equation holds for each $u \in U$, we conclude that $Tv \in U^\perp$, so $U^\perp$ is invariant under $T$.
(b): Note if $u, v \in U$, then:
$$\langle (T|_U)u, v \rangle = \langle Tu, v \rangle = \langle u, Tv \rangle = \langle u, (T|_U)v \rangle.$$
Thus $T|_U$ is self-adjoint.
(c): Replace $U$ with $U^\perp$ in (b), which is allowed via (a). \end{proof}
Real Spectral Theorem
Suppose $\mathbb{F} = \mathbb{R}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:
(a) $T$ is self-adjoint.
(b) $V$ has an orthonormal basis consisting of eigenvectors of $T$.
(c) $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.
\begin{proof}
(c) $\Rightarrow$ (a): Suppose (c), so $T$ has a diagonal matrix w.r.t. some orthonormal basis of $V$. A real diagonal matrix equals its conjugate transpose, so $\mathcal{M}(T^*) = \mathcal{M}(T)$, thus $T$ is self-adjoint, showing (a).
(a) $\Rightarrow$ (b): Do induction on $\dim V$. If $\dim V = 1$, then (a) implies (b) trivially. So suppose $\dim V > 1$ and that (a) implies (b) for all real inner product spaces of smaller dimension. Suppose (a), so $T$ is self-adjoint. Let $u$ be an eigenvector of $T$ with $\|u\| = 1$, which is guaranteed by Chapter 7 - Operators on Inner Product Spaces#^ea761b, where any eigenvector can be divided by its norm to produce a unit eigenvector. Let $U = \operatorname{span}(u)$; then $U$ is invariant under $T$, so by the previous result $U^\perp$ is invariant under $T$ and $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.
By the inductive hypothesis, there is an orthonormal basis of $U^\perp$ consisting of eigenvectors of $T|_{U^\perp}$. Adjoining $u$ to this orthonormal basis of $U^\perp$ gives an orthonormal basis of $V$ consisting of eigenvectors of $T$, completing the proof of (a) $\Rightarrow$ (b).
(b) $\Rightarrow$ (c): Trivial. \end{proof}
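A numpy sketch of my own: for a real symmetric (self-adjoint) matrix, `numpy.linalg.eigh` returns real eigenvalues and an orthonormal eigenbasis, which is exactly what the Real Spectral Theorem promises.

```python
# Sketch: a real symmetric matrix has an orthonormal eigenbasis.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
T = (B + B.T) / 2                        # real symmetric, hence self-adjoint
eigvals, Q = np.linalg.eigh(T)           # columns of Q: orthonormal eigenvectors
print(np.allclose(Q.T @ Q, np.eye(4)))               # True
print(np.allclose(Q.T @ T @ Q, np.diag(eigvals)))    # True: diagonal matrix
```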
7.C: Positive Operators and Isometries
positive operator
An operator $T \in \mathcal{L}(V)$ is called positive if $T$ is self-adjoint and:
$$\langle Tv, v \rangle \geq 0$$
for all $v \in V$.
An operator $R$ is called a square root of an operator $T$ if $R^2 = T$.
As an example, if $T \in \mathcal{L}(\mathbb{F}^3)$ is defined by $T(z_1, z_2, z_3) = (z_3, 0, 0)$, then the operator $R \in \mathcal{L}(\mathbb{F}^3)$ defined by $R(z_1, z_2, z_3) = (z_2, z_3, 0)$ is a square root of $T$.
Notice that we said a square root. The square root is only unique in very specific circumstances. Further, the characterizations of the positive operators in the next result correspond to the characterizations of the nonnegative numbers among $\mathbb{C}$. Specifically, for $z \in \mathbb{C}$:
$z$ is nonnegative iff it has a nonnegative square root.
$z$ is nonnegative iff it has a real square root.
$z$ is nonnegative iff there exists a complex number $w$ where $z = \bar{w}w$.
These conditions are analogous to the ones below:
Characterization of positive operators
Let $T \in \mathcal{L}(V)$. Then the following are equivalent:
(a) $T$ is positive.
(b) $T$ is self-adjoint and all eigenvalues of $T$ are nonnegative.
(c) $T$ has a positive square root.
(d) $T$ has a self-adjoint square root.
(e) There exists $R \in \mathcal{L}(V)$ s.t. $T = R^*R$.
\begin{proof}
We prove (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (d) $\Rightarrow$ (e) $\Rightarrow$ (a).
(a) $\Rightarrow$ (b): Suppose $T$ is positive. $T$ is self-adjoint from the definition. To show all eigenvalues are nonnegative, suppose $\lambda$ is an eigenvalue of $T$ and $v \neq 0$ is a corresponding eigenvector. Then:
$$0 \leq \langle Tv, v \rangle = \langle \lambda v, v \rangle = \lambda\|v\|^2,$$
so $\lambda \geq 0$, showing (b).
(b) $\Rightarrow$ (c): Suppose (b). By the Spectral Theorem, there is an orthonormal basis $e_1, \dots, e_n$ of $V$ consisting of eigenvectors of $T$; let $\lambda_1, \dots, \lambda_n$ be the corresponding (nonnegative) eigenvalues. Define $R \in \mathcal{L}(V)$ by
$$Re_j = \sqrt{\lambda_j}\,e_j$$
for $j = 1, \dots, n$ via Chapter 3 - Linear Maps#^9a80a8. Then $R$ is a positive operator through verification. Furthermore, $R^2e_j = \lambda_je_j = Te_j$ for each $j$. Thus $R^2 = T$. Thus $R$ is a positive square root of $T$, showing (c).
(c) $\Rightarrow$ (d): Obvious because, by definition, every positive operator is self-adjoint.
(d) $\Rightarrow$ (e): There is some self-adjoint operator $R$ on $V$ where $T = R^2$. Then $T = R^*R$ because $R^* = R$, since $R$ is self-adjoint; this shows (e).
(e) $\Rightarrow$ (a): Let $R \in \mathcal{L}(V)$ be such that $T = R^*R$. Then $T^* = (R^*R)^* = R^*(R^*)^* = R^*R = T$, so $T$ is self-adjoint. To show positivity, for every $v \in V$:
$$\langle Tv, v \rangle = \langle R^*Rv, v \rangle = \langle Rv, Rv \rangle = \|Rv\|^2 \geq 0,$$
thus $T$ is positive. \end{proof}
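A numpy sketch of my own illustrating (e) $\Rightarrow$ (a): for any matrix $R$, the matrix $R^*R$ is self-adjoint, has nonnegative eigenvalues, and satisfies $\langle Tv, v \rangle = \|Rv\|^2 \geq 0$.

```python
# Sketch: R^H R is a positive (semidefinite) operator.
import numpy as np

rng = np.random.default_rng(3)
R = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = R.conj().T @ R
print(np.allclose(T, T.conj().T))                # True: self-adjoint
print(np.all(np.linalg.eigvalsh(T) >= -1e-12))   # True: eigenvalues >= 0

v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
# <Tv, v> = v^H T v = ||Rv||^2
print(np.isclose(np.vdot(v, T @ v).real, np.linalg.norm(R @ v) ** 2))  # True
```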
Each positive operator has only one positive square root
Every positive operator on $V$ has a unique positive square root.
\begin{proof}
Suppose $T \in \mathcal{L}(V)$ is positive. Suppose $v \in V$ is an eigenvector of $T$. Then there exists $\lambda \geq 0$ such that $Tv = \lambda v$.
Let $R$ be a positive square root of $T$. We'll show $Rv = \sqrt{\lambda}\,v$, implying that the behavior of $R$ on the eigenvectors of $T$ is uniquely determined. Because there is a basis of $V$ consisting of eigenvectors of $T$ (by the Chapter 7 - Operators on Inner Product Spaces#^6995f7), $R$ is uniquely determined.
To prove that $Rv = \sqrt{\lambda}\,v$, use the Spectral Theorem to say that there is an orthonormal basis $e_1, \dots, e_n$ of $V$ consisting of eigenvectors of $R$. Because $R$ is a positive operator, all its eigenvalues are nonnegative via Chapter 7 - Operators on Inner Product Spaces#^aaa8f3. Thus there exist nonnegative numbers $\lambda_1, \dots, \lambda_n$ such that $Re_j = \sqrt{\lambda_j}\,e_j$ for $j = 1, \dots, n$.
Because $e_1, \dots, e_n$ is a basis of $V$, we can write:
$$v = a_1e_1 + \dots + a_ne_n$$
where $a_1, \dots, a_n \in \mathbb{F}$. Thus:
$$R^2v = a_1\lambda_1e_1 + \dots + a_n\lambda_ne_n.$$
Because $R^2 = T$ and $Tv = \lambda v$, the equation for $R^2v$ implies that $a_j(\lambda_j - \lambda) = 0$ for all $j$. Hence, ignoring the terms where $a_j = 0$:
$$v = \sum_{\{j \,:\, \lambda_j = \lambda\}} a_je_j,$$
thus:
$$Rv = \sum_{\{j \,:\, \lambda_j = \lambda\}} a_j\sqrt{\lambda}\,e_j = \sqrt{\lambda}\,v.$$
\end{proof}
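A numpy sketch of my own, computing the positive square root exactly the way the proof suggests: diagonalize with an orthonormal eigenbasis and take square roots of the (nonnegative) eigenvalues.

```python
# Sketch: positive square root of a positive semidefinite Hermitian matrix.
import numpy as np

def positive_sqrt(T: np.ndarray) -> np.ndarray:
    eigvals, Q = np.linalg.eigh(T)             # spectral theorem
    eigvals = np.clip(eigvals, 0.0, None)      # guard against tiny negatives
    return Q @ np.diag(np.sqrt(eigvals)) @ Q.conj().T

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
T = A.T @ A                                    # a positive operator on R^4
R = positive_sqrt(T)
print(np.allclose(R @ R, T))                   # True: R^2 = T
print(np.all(np.linalg.eigvalsh(R) >= -1e-12)) # True: R is positive
```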
Isometries
isometry
An operator $S \in \mathcal{L}(V)$ is called an isometry if:
$$\|Sv\| = \|v\|$$
for all $v \in V$.
In other words, an operator is an isometry if it preserves norms.
For example, $\lambda I$ is an isometry if $\lambda \in \mathbb{F}$ satisfies $|\lambda| = 1$.
As an example, let $\lambda_1, \dots, \lambda_n$ all be scalars with absolute value $1$, and suppose $S \in \mathcal{L}(V)$ satisfies $Se_j = \lambda_je_j$ for some orthonormal basis $e_1, \dots, e_n$ of $V$. $S$ is an isometry, since for any $v \in V$ we can write $v = \langle v, e_1 \rangle e_1 + \dots + \langle v, e_n \rangle e_n$, so
$$\|v\|^2 = |\langle v, e_1 \rangle|^2 + \dots + |\langle v, e_n \rangle|^2$$
and
$$Sv = \lambda_1\langle v, e_1 \rangle e_1 + \dots + \lambda_n\langle v, e_n \rangle e_n,$$
hence
$$\|Sv\|^2 = |\langle v, e_1 \rangle|^2 + \dots + |\langle v, e_n \rangle|^2 = \|v\|^2.$$
Characterization of isometries
Suppose $S \in \mathcal{L}(V)$. Then the following are equivalent:
(a) $S$ is an isometry.
(b) $\langle Su, Sv \rangle = \langle u, v \rangle$ for all $u, v \in V$.
(c) $Se_1, \dots, Se_n$ is orthonormal for every orthonormal list of vectors $e_1, \dots, e_n$ in $V$.
(d) There exists an orthonormal basis $e_1, \dots, e_n$ of $V$ such that $Se_1, \dots, Se_n$ is orthonormal.
(e) $S^*S = I$.
(f) $SS^* = I$.
(g) $S^*$ is an isometry.
(h) $S$ is invertible and $S^{-1} = S^*$.
\begin{proof}
(a) $\Rightarrow$ (b): Suppose $S$ is an isometry. Since inner products can be recovered from norms (e.g. $\langle u, v \rangle = \frac{\|u+v\|^2 - \|u-v\|^2}{4}$ when $\mathbb{F} = \mathbb{R}$, with a similar identity over $\mathbb{C}$), an operator that preserves norms preserves inner products, so (b) holds.
(b) $\Rightarrow$ (c): Suppose (b), so $S$ preserves inner products. Suppose $e_1, \dots, e_n$ is an orthonormal list of vectors in $V$. Then we see that the list $Se_1, \dots, Se_n$ is orthonormal because $\langle Se_j, Se_k \rangle = \langle e_j, e_k \rangle$ by (b). Thus (c) holds.
(c) $\Rightarrow$ (d): Trivial.
(d) $\Rightarrow$ (e): Let $e_1, \dots, e_n$ be an orthonormal basis of $V$ such that $Se_1, \dots, Se_n$ is orthonormal. Thus:
$$\langle S^*Se_j, e_k \rangle = \langle e_j, e_k \rangle$$
for $j, k = 1, \dots, n$ (because the left term equals $\langle Se_j, Se_k \rangle$ and $Se_1, \dots, Se_n$ is orthonormal). This implies $S^*Se_j = e_j$ for each $j$. All vectors in $V$ can be written as a linear combination of $e_1, \dots, e_n$, and thus the equation above implies $S^*Sv = v$ for all $v \in V$. Hence $S^*S = I$, so (e) holds.
(e) $\Rightarrow$ (f): Suppose $S^*S = I$. In general an operator need not commute with its adjoint, but $S^*S = I$ iff $SS^* = I$, a special case of 3.D Exercise 10. Thus $SS^* = I$, showing (f).
(f) $\Rightarrow$ (g): Suppose $SS^* = I$. If $v \in V$:
$$\|S^*v\|^2 = \langle S^*v, S^*v \rangle = \langle SS^*v, v \rangle = \langle v, v \rangle = \|v\|^2.$$
So $S^*$ is an isometry, showing (g).
(g) $\Rightarrow$ (h): Suppose $S^*$ is an isometry. Using (a) $\Rightarrow$ (e) and (a) $\Rightarrow$ (f) with $S$ replaced by $S^*$ (and using $(S^*)^* = S$), we get $SS^* = I$ and $S^*S = I$. Thus $S$ is invertible and $S^{-1} = S^*$, showing (h).
(h) $\Rightarrow$ (a): Suppose $S$ is invertible and $S^{-1} = S^*$. Thus $S^*S = I$. If $v \in V$:
$$\|Sv\|^2 = \langle Sv, Sv \rangle = \langle S^*Sv, v \rangle = \langle v, v \rangle = \|v\|^2.$$
So $S$ is an isometry, showing (a). \end{proof}
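A numpy sketch of my own, checking several of the equivalent conditions for a random unitary matrix $Q$ (obtained from a QR factorization), which is an isometry on $\mathbb{C}^4$.

```python
# Sketch: a unitary matrix preserves norms, satisfies Q^H Q = I, and Q^{-1} = Q^H.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                 # Q is unitary, hence an isometry

v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))   # condition (a)
print(np.allclose(Q.conj().T @ Q, np.eye(4)))                  # condition (e)
print(np.allclose(np.linalg.inv(Q), Q.conj().T))               # condition (h)
```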
Since every isometry is normal (by (e) and (f) of the previous result), we can use the characterizations of normal operators to describe isometries:
Description of isometries when $\mathbb{F} = \mathbb{C}$
Suppose $V$ is a complex inner product space and $S \in \mathcal{L}(V)$. Then the following are equivalent:
(a) $S$ is an isometry.
(b) There is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have absolute value $1$.
\begin{proof}
The direction (b) $\Rightarrow$ (a) is the example above. To prove (a) $\Rightarrow$ (b), suppose $S$ is an isometry. By the Chapter 7 - Operators on Inner Product Spaces#^6995f7 (applicable because every isometry is normal), there is an orthonormal basis $e_1, \dots, e_n$ of $V$ consisting of eigenvectors of $S$. For $j = 1, \dots, n$, let $\lambda_j$ be the eigenvalue corresponding to $e_j$. Then:
$$|\lambda_j| = \|\lambda_je_j\| = \|Se_j\| = \|e_j\| = 1.$$
Each $|\lambda_j| = 1$, as desired. \end{proof}
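A short numpy sketch of my own: the eigenvalues of a random unitary matrix (an isometry on $\mathbb{C}^4$) all have absolute value $1$, matching the description above.

```python
# Sketch: eigenvalues of a unitary matrix lie on the unit circle.
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                      # unitary, hence an isometry
eigvals = np.linalg.eigvals(Q)
print(np.allclose(np.abs(eigvals), 1.0))    # True
```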
7.D: Polar Decomposition and Singular Value Decomposition
Polar Decomposition
Recall the analogy between $\mathbb{C}$ and $\mathcal{L}(V)$:
A complex number $z$ corresponds to some operator $T \in \mathcal{L}(V)$.
$\bar{z}$ corresponds to $T^*$.
Real numbers (those with $z = \bar{z}$) correspond to self-adjoint operators, where $T = T^*$.
Nonnegative reals correspond to the positive operators.
Another interesting subset of $\mathbb{C}$ is the unit circle, $\{z \in \mathbb{C} : |z| = 1\}$, i.e. $\bar{z}z = 1$. Under our analogy, an operator with $S^*S = I$ is "on" the unit circle, which corresponds to our isometry characterization. So the unit circle corresponds to isometries.
Every complex number $z \neq 0$ can be written as:
$$z = \left(\frac{z}{|z|}\right)|z| = \left(\frac{z}{|z|}\right)\sqrt{\bar{z}z},$$
where $z/|z|$ lies on the unit circle. We do something similar with operators: the analogy suggests writing each $T$ as an isometry times $\sqrt{T^*T}$.
If $T$ is a positive operator, then $\sqrt{T}$ denotes the unique positive square root of $T$.
Polar Decomposition
Suppose $T \in \mathcal{L}(V)$. Then there exists an isometry $S \in \mathcal{L}(V)$ such that:
$$T = S\sqrt{T^*T}.$$
\begin{proof}
While the proof in the book is alright, I think the lecture proof we did is a bit better for understanding what's going on. The proofs are essentially identical, but the lecture one is more intuitive. \end{proof}
The Polar Decomposition Theorem states that each $T \in \mathcal{L}(V)$ is the product of an isometry and a positive operator. This allows us to break a complicated operator into two easier-to-work-with operators.
Using the Spectral Theorem when $\mathbb{F} = \mathbb{C}$: suppose $T = S\sqrt{T^*T}$ is a polar decomposition of $T$, where $S$ is an isometry. Then there is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have magnitude $1$ (and thus $S$ has a diagonal matrix with respect to it), and there is an orthonormal basis of $V$ with respect to which $\sqrt{T^*T}$ has a diagonal matrix.
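As a numerical aside (my own sketch, not the book's or lecture's proof strategy): one common way to compute a polar decomposition is via the SVD. If $T = U\Sigma W^*$, then $S = UW^*$ is an isometry and $\sqrt{T^*T} = W\Sigma W^*$.

```python
# Sketch: polar decomposition T = S * sqrt(T^H T) computed from the SVD.
import numpy as np

rng = np.random.default_rng(6)
T = rng.standard_normal((4, 4))

U, sigma, Wh = np.linalg.svd(T)
S = U @ Wh                                # isometry (orthogonal) factor
P = Wh.conj().T @ np.diag(sigma) @ Wh     # the positive factor sqrt(T^H T)

print(np.allclose(S @ P, T))                      # T = S P
print(np.allclose(S.T @ S, np.eye(4)))            # S is an isometry
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))    # P is positive
```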
Possible difference in ONEB's
Even though what's said above is true, the two orthonormal eigenbases may be different: there need not be a single orthonormal basis that diagonalizes both $S$ and $\sqrt{T^*T}$.
Singular Value Decomposition
While the eigenvalues of an operator tell us something about its behavior, singular values play a similar role: they are the eigenvalues of the positive operator $\sqrt{T^*T}$ from $T$'s polar decomposition (equivalently, the square roots of the eigenvalues of $T^*T$):
singular values
Suppose $T \in \mathcal{L}(V)$. The singular values of $T$ are the eigenvalues of $\sqrt{T^*T}$, with each eigenvalue $\lambda$ repeated $\dim E(\lambda, \sqrt{T^*T})$ times.
The singular values of $T$ must all be nonnegative, because they are eigenvalues of the positive operator $\sqrt{T^*T}$.
Example
Consider $T \in \mathcal{L}(\mathbb{F}^4)$ where:
$$T(z_1, z_2, z_3, z_4) = (0, 3z_1, 2z_2, -3z_4).$$
To find the singular values of $T$, calculate $T^*$ first. Notice:
$$\langle T(z_1, z_2, z_3, z_4), (w_1, w_2, w_3, w_4) \rangle = 3z_1\bar{w_2} + 2z_2\bar{w_3} - 3z_4\bar{w_4}.$$
Thus fix $w = (w_1, w_2, w_3, w_4)$ and let $z$ be arbitrary:
$$\langle Tz, w \rangle = \langle z, (3w_2, 2w_3, 0, -3w_4) \rangle.$$
Thus $T^*(w_1, w_2, w_3, w_4) = (3w_2, 2w_3, 0, -3w_4)$. So:
$$T^*T(z_1, z_2, z_3, z_4) = (9z_1, 4z_2, 0, 9z_4).$$
So then the singular values are just the square-rooted eigenvalues of $T^*T$:
$T^*T$ has eigenvalues $9$ (with multiplicity $2$), $4$, and $0$,
so our singular values are $3, 3, 2, 0$.
Note
While the eigenvalues of $T$ are only $-3$ and $0$, the singular values $3, 3, 2, 0$ incorporate the value $2$ that the eigenvalues miss.
Suppose $T \in \mathcal{L}(V)$ has singular values $s_1, \dots, s_n$. Then there exist orthonormal bases $e_1, \dots, e_n$ and $f_1, \dots, f_n$ of $V$ such that:
$$Tv = s_1\langle v, e_1 \rangle f_1 + \dots + s_n\langle v, e_n \rangle f_n$$
for every $v \in V$.
\begin{proof}
Apply the lecture proof. \end{proof}
The idea in the result above is that by using two different orthonormal bases for $V$, we can always get a diagonal matrix! In other words, every operator on $V$ has a diagonal matrix with respect to some orthonormal bases of $V$, assuming we can use different bases $e_1, \dots, e_n$ and $f_1, \dots, f_n$ for the input and the output.
To compute singular values in computational linear algebra (a numerical sketch follows at the end of this section):
Compute $T^*T$.
Compute approximations to the eigenvalues of $T^*T$.
The square roots of these approximations are approximations of our singular values.
We don't have to compute $\sqrt{T^*T}$!
Singular values without taking square root of an operator
Suppose $T \in \mathcal{L}(V)$. Then the singular values of $T$ are the nonnegative square roots of the eigenvalues of $T^*T$, with each eigenvalue $\lambda$ repeated $\dim E(\lambda, T^*T)$ times.
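Here is the promised numerical sketch (my own, using numpy): the square roots of the eigenvalues of $T^*T$ agree with the singular values that a library SVD routine reports, so $\sqrt{T^*T}$ itself is never formed.

```python
# Sketch: singular values via eigenvalues of T^H T, without sqrt(T^H T).
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((4, 4))

sv_from_eigs = np.sqrt(np.clip(np.linalg.eigvalsh(T.T @ T), 0.0, None))
sv_from_svd = np.linalg.svd(T, compute_uv=False)

print(np.allclose(np.sort(sv_from_eigs), np.sort(sv_from_svd)))  # True
```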