Chapter 7 - Operators on Inner Product Spaces

From here on out, $V$ and $W$ denote finite-dimensional inner product spaces over $\mathbf{F}$.

7.A: Self-Adjoint and Normal Operators

adjoint, $T^*$

Suppose $T \in \mathcal{L}(V,W)$. The adjoint of $T$ is the function $T^*: W \to V$ such that:

$$\langle Tv, w \rangle = \langle v, T^*w \rangle$$

for all $v \in V$, $w \in W$.

Here is the derivation behind this definition. Suppose $T \in \mathcal{L}(V,W)$. Fix $w \in W$ and consider the linear functional on $V$ that maps $v \in V$ to $\langle Tv, w \rangle$, namely $\varphi$ defined by:

$$\varphi(v) = \langle Tv, w \rangle$$

This is a linear functional on $V$ since $T$ and $w$ are both fixed, while $v \in V$ is arbitrary. By Chapter 6 (cont.) - Finishing Inner Product Spaces#^a857c6, there exists a unique vector in $V$, which we call $T^*w$, such that $\langle Tv, w \rangle = \langle v, T^*w \rangle$.

As an example, consider $T: \mathbf{R}^3 \to \mathbf{R}^2$ defined by:

$$T(x_1,x_2,x_3) = (x_2 + 3x_3,\ 2x_1)$$

Here $T^*$ will be a function $T^*: \mathbf{R}^2 \to \mathbf{R}^3$. To find it, fix a point $y = (y_1,y_2) \in \mathbf{R}^2$ in the codomain and let $x = (x_1,x_2,x_3) \in \mathbf{R}^3$ be arbitrary:

$$\begin{aligned}
\langle x, T^*y \rangle &= \langle Tx, y \rangle \\
&= \langle (x_2+3x_3,\ 2x_1), (y_1,y_2) \rangle \\
&= x_2y_1 + 3x_3y_1 + 2x_1y_2 \\
&= \langle (x_1,x_2,x_3), (2y_2,\ y_1,\ 3y_1) \rangle
\end{aligned}$$

Equating the right-hand sides, we see that $T^*(y) = (2y_2,\ y_1,\ 3y_1)$.
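As a numeric sanity check of this example (a sketch using numpy, which is assumed available; over $\mathbf{R}$ the adjoint's matrix w.r.t. orthonormal bases is just the transpose):

```python
import numpy as np

# Matrix of T : R^3 -> R^2, T(x1,x2,x3) = (x2 + 3x3, 2x1)
T = np.array([[0.0, 1.0, 3.0],
              [2.0, 0.0, 0.0]])

# Matrix of T* : R^2 -> R^3 derived above: T*(y1,y2) = (2y2, y1, 3y1)
T_star = np.array([[0.0, 2.0],
                   [1.0, 0.0],
                   [3.0, 0.0]])

# Check <Tx, y> = <x, T*y> for random vectors
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
y = rng.standard_normal(2)
assert np.isclose((T @ x) @ y, x @ (T_star @ y))

# Over R, the matrix of T* is just the transpose of the matrix of T
assert np.array_equal(T_star, T.T)
```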

To see this process in a more abstract setting, fix $u \in V$ and $x \in W$. Define $T \in \mathcal{L}(V,W)$ by:

$$Tv = \langle v, u \rangle x$$

Then if we fix $w \in W$, for any arbitrary $v \in V$ we have:

$$\langle v, T^*w \rangle = \langle Tv, w \rangle = \langle \langle v, u \rangle x, w \rangle = \langle v, u \rangle \langle x, w \rangle = \langle v, \langle w, x \rangle u \rangle$$

So:

$$T^*w = \langle w, x \rangle u$$
The adjoint is a linear map

If $T \in \mathcal{L}(V,W)$ then $T^* \in \mathcal{L}(W,V)$.

\begin{proof}
Suppose $T \in \mathcal{L}(V,W)$. Fix $w_1, w_2 \in W$. If $v \in V$ then:

$$\langle v, T^*(w_1 + w_2) \rangle = \langle Tv, w_1 + w_2 \rangle = \langle Tv, w_1 \rangle + \langle Tv, w_2 \rangle = \langle v, T^*w_1 \rangle + \langle v, T^*w_2 \rangle = \langle v, T^*w_1 + T^*w_2 \rangle$$

so $T^*(w_1 + w_2) = T^*w_1 + T^*w_2$. Now fix $w \in W$ and $\lambda \in \mathbf{F}$. If $v \in V$ then:

$$\langle v, T^*(\lambda w) \rangle = \langle Tv, \lambda w \rangle = \bar{\lambda}\langle Tv, w \rangle = \bar{\lambda}\langle v, T^*w \rangle = \langle v, \lambda T^*w \rangle$$

so $T^*(\lambda w) = \lambda T^*w$, and $T^*$ is linear.

\end{proof}

Properties of the adjoint

For all $S, T \in \mathcal{L}(V,W)$ and $\lambda \in \mathbf{F}$:

  • $(S+T)^* = S^* + T^*$
  • $(\lambda T)^* = \bar{\lambda}\, T^*$
  • $(T^*)^* = T$
  • $I^* = I$, where $I$ is the identity operator on $V$.
  • If instead $S \in \mathcal{L}(W,U)$, then $(ST)^* = T^*S^*$. Here $U$ is an inner product space over $\mathbf{F}$.

The proofs are via 2015_Book_LinearAlgebraDoneRight#page=206 and are easy to see.

Null space and range of $T^*$

Suppose $T \in \mathcal{L}(V,W)$. Then:

  • $\operatorname{null}(T^*) = (\operatorname{range}(T))^\perp$
  • $\operatorname{range}(T^*) = (\operatorname{null}(T))^\perp$
  • $\operatorname{null}(T) = (\operatorname{range}(T^*))^\perp$
  • $\operatorname{range}(T) = (\operatorname{null}(T^*))^\perp$

\begin{proof}
We prove the first one, (a). Let $w \in W$. Then:

$$w \in \operatorname{null}(T^*) \iff T^*w = 0 \iff \langle v, T^*w \rangle = 0 \;\forall v \in V \iff \langle Tv, w \rangle = 0 \;\forall v \in V \iff w \in (\operatorname{range}(T))^\perp$$

where the second equivalence holds because if $T^*w \neq 0$, taking $v = T^*w$ would give a nonzero inner product.

If we take the orthogonal complement of both sides of (a), we get (d), using Chapter 6 (cont.) - Finishing Inner Product Spaces#^0268c7. Replacing $T$ with $T^*$ in (a) gives (c), where we have used Chapter 7 - Operators on Inner Product Spaces#^232dd6 (c). Finally, replacing $T$ with $T^*$ in (d) gives (b).
\end{proof}

conjugate transpose

The conjugate transpose of an $m \times n$ matrix is the $n \times m$ matrix obtained by interchanging the rows and columns and then taking the complex conjugate of each entry.

Using orthonormal bases

The next result only applies when we have an orthonormal basis. If you don't, then it's not necessarily true.

The matrix of $T^*$

Let $T \in \mathcal{L}(V,W)$. Suppose $e_1,\dots,e_n$ is an orthonormal basis of $V$ and $f_1,\dots,f_m$ is an orthonormal basis of $W$. Then:

$$\mathcal{M}(T^*, (f_1,\dots,f_m), (e_1,\dots,e_n))$$

is the conjugate transpose of:

$$\mathcal{M}(T, (e_1,\dots,e_n), (f_1,\dots,f_m))$$

\begin{proof}
We obtain the $k$-th column of $\mathcal{M}(T)$ by writing $Te_k$ as a linear combination of the $f_j$'s; the scalars used in this linear combination become the $k$-th column of $\mathcal{M}(T)$. Because $f_1,\dots,f_m$ is an orthonormal basis of $W$, we know how to write $Te_k$ as such a linear combination:

$$Te_k = \sum_{j=1}^m \langle Te_k, f_j \rangle f_j$$

Thus the entry in row $j$, column $k$ of $\mathcal{M}(T)$ is $\langle Te_k, f_j \rangle$.

Replacing $T$ with $T^*$ and interchanging the roles played by the $e$'s and $f$'s, the entry in row $j$, column $k$ of $\mathcal{M}(T^*)$ is $\langle T^*f_k, e_j \rangle$, where:

$$\langle T^*f_k, e_j \rangle = \overline{\langle e_j, T^*f_k \rangle} = \overline{\langle Te_j, f_k \rangle}$$

which is the complex conjugate of the entry in row $k$, column $j$ of $\mathcal{M}(T)$. Thus $\mathcal{M}(T^*)$ is the conjugate transpose of $\mathcal{M}(T)$, and vice versa.
\end{proof}
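The result above can be checked numerically. This sketch (using numpy, assumed available; `np.vdot` supplies the conjugation) verifies that the conjugate transpose satisfies the defining adjoint identity for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random complex matrix of T w.r.t. orthonormal bases (standard bases here)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

# The matrix of T* is the conjugate transpose of the matrix of T
A_star = A.conj().T

# With <u, v> = sum_i u_i * conj(v_i), we have <u, v> = np.vdot(v, u)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
lhs = np.vdot(y, A @ x)       # <Ax, y>
rhs = np.vdot(A_star @ y, x)  # <x, A*y>
assert np.isclose(lhs, rhs)
```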

Self-Adjoint Operators

self-adjoint

An operator $T \in \mathcal{L}(V)$ is called self-adjoint if $T = T^*$. In other words, $T \in \mathcal{L}(V)$ is self-adjoint iff:

$$\langle Tv, w \rangle = \langle v, Tw \rangle$$

for all $v, w \in V$.

Eigenvalues of self-adjoint operators are real

Every eigenvalue of a self-adjoint operator is real

\begin{proof}
Suppose $T$ is a self-adjoint operator on $V$. Let $\lambda$ be an eigenvalue of $T$ and let $v \neq 0$ be a vector in $V$ with $Tv = \lambda v$. Then:

$$\lambda \|v\|^2 = \langle \lambda v, v \rangle = \langle Tv, v \rangle = \langle v, Tv \rangle = \langle v, \lambda v \rangle = \bar{\lambda}\|v\|^2$$

So $\lambda = \bar{\lambda}$, thus $\lambda$ is real.
\end{proof}
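A quick numerical illustration of this theorem (a sketch, assuming numpy): the eigenvalues of a randomly generated Hermitian (self-adjoint) matrix are real up to floating-point noise:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = B + B.conj().T          # H equals its conjugate transpose: self-adjoint

eigvals = np.linalg.eigvals(H)
# Every eigenvalue of a self-adjoint operator is real
assert np.allclose(eigvals.imag, 0.0)
```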

Over $\mathbf{C}$, $Tv$ is orthogonal to $v$ for all $v$ only for the $0$ operator

Suppose $V$ is a complex inner product space and $T \in \mathcal{L}(V)$. Suppose:

$$\langle Tv, v \rangle = 0$$

for all $v \in V$. Then $T = 0$.

See the proof via 2015_Book_LinearAlgebraDoneRight#page=210.

Over $\mathbf{C}$, $\langle Tv, v \rangle$ is real for all $v$ only for self-adjoint operators

Suppose $V$ is a complex inner product space and $T \in \mathcal{L}(V)$. Then $T$ is self-adjoint iff:

$$\langle Tv, v \rangle \in \mathbf{R}$$

for every $v \in V$.

\begin{proof}
Let $v \in V$. Then:

$$\langle Tv, v \rangle - \overline{\langle Tv, v \rangle} = \langle Tv, v \rangle - \langle v, Tv \rangle = \langle Tv, v \rangle - \langle T^*v, v \rangle = \langle (T - T^*)v, v \rangle$$

If $\langle Tv, v \rangle \in \mathbf{R}$ for every $v \in V$, then the LHS is $0$, so the RHS is $0$ for every $v \in V$. By the previous result, $T - T^* = 0$, so $T$ is self-adjoint.

Conversely, if $T$ is self-adjoint then the RHS is $0$, so $\langle Tv, v \rangle = \overline{\langle Tv, v \rangle}$ for every $v \in V$. Thus $\langle Tv, v \rangle \in \mathbf{R}$.
\end{proof}

If $T = T^*$ and $\langle Tv, v \rangle = 0$ for all $v$, then $T = 0$

Suppose $T$ is a self-adjoint operator on $V$ such that:

$$\langle Tv, v \rangle = 0$$

for all $v \in V$. Then $T = 0$.

\begin{proof}
See Chapter 7 - Operators on Inner Product Spaces#^bb6e99 for when $V$ is a complex inner product space. Assume $V$ is a real inner product space. If $u, w \in V$ then:

$$\langle Tu, w \rangle = \frac{\langle T(u+w), u+w \rangle - \langle T(u-w), u-w \rangle}{4}$$

which uses the property that $T$ is self-adjoint, via:

$$\langle Tw, u \rangle = \langle w, Tu \rangle = \langle Tu, w \rangle$$

(the last equality holds because the inner product is real). Each term on the RHS of the first equation is of the form $\langle Tv, v \rangle$, hence $0$; so $\langle Tu, w \rangle = 0$ for all $u, w \in V$. Taking $w = Tu$ gives $\|Tu\|^2 = 0$ for every $u$, so $T = 0$.
\end{proof}

Normal Operators

normal

  • An operator on an inner product space is called normal if it commutes with its adjoint.
  • That is, $T \in \mathcal{L}(V)$ is normal if $TT^* = T^*T$.

Clearly every self-adjoint operator is normal.

$T$ is normal iff $\|Tv\| = \|T^*v\|$ for all $v$

An operator $T \in \mathcal{L}(V)$ is normal iff:

$$\|Tv\| = \|T^*v\|$$

for all $v \in V$.

\begin{proof}
Let $T \in \mathcal{L}(V)$:

$$\begin{aligned}
T \text{ is normal} &\iff T^*T - TT^* = 0 \\
&\iff \langle (T^*T - TT^*)v, v \rangle = 0 \;\forall v \in V \\
&\iff \langle T^*Tv, v \rangle = \langle TT^*v, v \rangle \;\forall v \in V \\
&\iff \|Tv\|^2 = \|T^*v\|^2 \;\forall v \in V
\end{aligned}$$

using Chapter 7 - Operators on Inner Product Spaces#^427201 for the second equivalence (note that $T^*T - TT^*$ is self-adjoint).
\end{proof}
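The norm characterization above can be illustrated numerically (a sketch, assuming numpy), contrasting a normal matrix with a non-normal one:

```python
import numpy as np

rng = np.random.default_rng(3)

# A normal but not self-adjoint matrix (rotation plus scaling)
N = np.array([[2.0, -3.0],
              [3.0,  2.0]])
assert np.allclose(N @ N.T, N.T @ N)   # N commutes with its adjoint

v = rng.standard_normal(2)
assert np.isclose(np.linalg.norm(N @ v), np.linalg.norm(N.T @ v))

# A non-normal matrix fails the norm equality for some v
M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
v = np.array([1.0, 0.0])
assert not np.isclose(np.linalg.norm(M @ v), np.linalg.norm(M.T @ v))
```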
Compare the next lemma with that of HW 3 - Self-Adjoint and Normal Operators#2.

For $T$ normal, $T$ and $T^*$ have the same eigenvectors

Suppose $T \in \mathcal{L}(V)$ is normal and $v \in V$ is an eigenvector of $T$ with eigenvalue $\lambda$. Then $v$ is also an eigenvector of $T^*$ with eigenvalue $\bar{\lambda}$.

\begin{proof}
Because $T$ is normal, so is $T - \lambda I$. Using Chapter 7 - Operators on Inner Product Spaces#^f09319, we have:

$$0 = \|(T - \lambda I)v\| = \|(T - \lambda I)^*v\| = \|(T^* - \bar{\lambda}I)v\|$$

so $v$ is an eigenvector of $T^*$ with eigenvalue $\bar{\lambda}$, as desired.
\end{proof}

Orthogonal eigenvectors for normal operators

Suppose $T \in \mathcal{L}(V)$ is normal. Then eigenvectors of $T$ corresponding to distinct eigenvalues are orthogonal.

\begin{proof}
Suppose $\alpha, \beta$ are distinct eigenvalues of $T$ with corresponding eigenvectors $u, v$. Thus $Tu = \alpha u$ and $Tv = \beta v$. From Chapter 7 - Operators on Inner Product Spaces#^d72e15, we have $T^*v = \bar{\beta}v$, so:

$$(\alpha - \beta)\langle u, v \rangle = \langle \alpha u, v \rangle - \langle u, \bar{\beta} v \rangle = \langle Tu, v \rangle - \langle u, T^*v \rangle = 0$$

Since $\alpha \neq \beta$, we must have $\langle u, v \rangle = 0$, as expected.
\end{proof}

7.B: Spectral Theorem

Recall that a diagonal matrix is a square matrix that is $0$ everywhere except possibly on the diagonal. Recall also that an operator on $V$ has a diagonal matrix w.r.t. a basis iff the basis consists of eigenvectors of the operator, via Chapter 5 - Eigenvalues, Eigenvectors, and Invariant Subspaces#^a68a4e.

The nicest operators on $V$ are those for which there is an orthonormal basis of $V$ with respect to which the operator has a diagonal matrix. These are precisely the operators $T \in \mathcal{L}(V)$ such that there is an orthonormal basis of $V$ consisting of eigenvectors of $T$.

The Complex Spectral Theorem

The key part of the Complex Spectral Theorem states that if $\mathbf{F} = \mathbf{C}$ and $T \in \mathcal{L}(V)$ is normal, then $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.

For example, consider $T \in \mathcal{L}(\mathbf{C}^2)$ whose matrix w.r.t. the standard basis is:

$$\begin{pmatrix} 2 & -3 \\ 3 & 2 \end{pmatrix}$$

As you can see, $\frac{(i,1)}{\sqrt{2}}, \frac{(-i,1)}{\sqrt{2}}$ is an orthonormal basis of $\mathbf{C}^2$ consisting of eigenvectors of $T$, and with respect to this basis the matrix of $T$ is the diagonal matrix:

$$\begin{pmatrix} 2+3i & 0 \\ 0 & 2-3i \end{pmatrix}$$
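This example can be verified directly (a sketch, assuming numpy): the two stated vectors are orthonormal eigenvectors with the stated eigenvalues:

```python
import numpy as np

T = np.array([[2.0, -3.0],
              [3.0,  2.0]])

# Claimed orthonormal eigenbasis of C^2
e1 = np.array([1j, 1.0]) / np.sqrt(2)
e2 = np.array([-1j, 1.0]) / np.sqrt(2)

assert np.allclose(T @ e1, (2 + 3j) * e1)   # eigenvalue 2 + 3i
assert np.allclose(T @ e2, (2 - 3j) * e2)   # eigenvalue 2 - 3i
assert np.isclose(np.vdot(e1, e2), 0.0)     # orthogonal
assert np.isclose(np.vdot(e1, e1), 1.0)     # unit norm
```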
Complex Spectral Theorem

Suppose $\mathbf{F} = \mathbf{C}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:

  • $T$ is normal.
  • $V$ has an orthonormal basis consisting of eigenvectors of $T$.
  • $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.

\begin{proof}

(b) $\iff$ (c) via Chapter 5 - Eigenvalues, Eigenvectors, and Invariant Subspaces#^a68a4e.

Thus we only need to prove that (c) $\implies$ (a) and (a) $\implies$ (c).

(c) $\implies$ (a): Suppose (c), so $T$ has a diagonal matrix w.r.t. some orthonormal basis of $V$. The matrix of $T^*$ w.r.t. the same basis is obtained by taking the conjugate transpose of the matrix of $T$; hence $T^*$ also has a diagonal matrix. Any two diagonal matrices commute, so $T$ commutes with $T^*$, so $T$ is normal, showing (a).

(a) $\implies$ (c): Suppose (a), so $T$ is normal. By Chapter 6 - Inner Product Spaces#^71c5e4, there is an orthonormal basis $e_1,\dots,e_n$ of $V$ w.r.t. which $T$ has an upper-triangular matrix. Thus:

$$\mathcal{M}(T, (e_1,\dots,e_n)) = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ & \ddots & \vdots \\ 0 & & a_{nn} \end{pmatrix}$$

We will show this matrix is actually diagonal. From the matrix above:

$$\|Te_1\|^2 = |a_{11}|^2$$

and:

$$\|T^*e_1\|^2 = \sum_{i=1}^n |a_{1i}|^2$$

Because $T$ is normal, the two quantities above are equal. This implies:

$$\sum_{i=2}^n |a_{1i}|^2 = 0 \implies a_{1i} = 0 \text{ for } i \in \{2,\dots,n\}$$

Similarly then:

$$\|Te_2\|^2 = |a_{22}|^2 = \|T^*e_2\|^2 = \sum_{i=2}^n |a_{2i}|^2$$

so $a_{23} = \cdots = a_{2n} = 0$.

Continue in this fashion to see that all non-diagonal entries of the matrix equal $0$, so (c) holds.
\end{proof}

The Real Spectral Theorem

Let's get a few preliminary results.

Invertible quadratic expressions

Suppose $T \in \mathcal{L}(V)$ is self-adjoint and $b, c \in \mathbf{R}$ are such that $b^2 < 4c$. Then:

$$T^2 + bT + cI$$

is invertible.

\begin{proof}
Let $v \neq 0$ be in $V$. Then:

$$\begin{aligned}
\langle (T^2 + bT + cI)v, v \rangle &= \langle T^2v, v \rangle + b\langle Tv, v \rangle + c\langle v, v \rangle \\
&= \langle Tv, Tv \rangle + b\langle Tv, v \rangle + c\|v\|^2 \\
&\geq \|Tv\|^2 - |b|\,\|Tv\|\,\|v\| + c\|v\|^2 \\
&= \left( \|Tv\| - \frac{|b|\,\|v\|}{2} \right)^2 + \left( c - \frac{b^2}{4} \right)\|v\|^2 \\
&> 0
\end{aligned}$$

where the third line is via Cauchy-Schwarz from Chapter 6 - Inner Product Spaces#^2ef5ba. The last inequality implies that $(T^2+bT+cI)v \neq 0$, so $T^2+bT+cI$ is injective, implying invertibility via Chapter 3 - Linear Maps#^315386.
\end{proof}

Self-adjoint operators have eigenvalues

Suppose $V \neq \{0\}$ and $T \in \mathcal{L}(V)$ is a self-adjoint operator. Then $T$ has an eigenvalue.

\begin{proof}
We can assume $V$ is a real inner product space (over $\mathbf{C}$, every operator on a nonzero finite-dimensional space has an eigenvalue). Let $n = \dim(V)$ and choose $v \in V$ with $v \neq 0$. Then:

$$v, Tv, T^2v, \dots, T^nv$$

cannot be linearly independent, since $V$ has dimension $n$ and we have $n+1$ vectors. Thus there exist real numbers $a_0,\dots,a_n$, not all $0$, such that:

$$\sum_{i=0}^n a_i T^iv = 0$$

Make the $a_i$'s the coefficients of a polynomial, which can be written in factored form via Chapter 4 - Polynomials (short)#^63c018:

$$0 = \sum_{i=0}^n a_i T^iv = c(T^2 + b_1T + c_1I)\cdots(T^2 + b_MT + c_MI)(T - \lambda_1I)\cdots(T - \lambda_mI)v$$

where $c \neq 0$, each $b_j^2 < 4c_j$, and $m + M \geq 1$. Using Chapter 7 - Operators on Inner Product Spaces#^94dc7b, each $T^2 + b_jT + c_jI$ is invertible. Because $c \neq 0$ and $v \neq 0$, if $m = 0$ the right side above would be an invertible operator applied to $v$, hence nonzero; thus $m > 0$ and:

$$0 = (T - \lambda_1I)\cdots(T - \lambda_mI)v$$

Hence $T - \lambda_jI$ is not injective for at least one $j$, so $T$ has an eigenvalue.
\end{proof}

Self-adjoint operators and invariant subspaces

Suppose $T \in \mathcal{L}(V)$ is self-adjoint and $U$ is a subspace of $V$ that is invariant under $T$. Then:

  • $U^\perp$ is invariant under $T$.
  • $T|_U \in \mathcal{L}(U)$ is self-adjoint.
  • $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.

\begin{proof}
(a): Suppose $v \in U^\perp$. Let $u \in U$. Then:

$$\langle Tv, u \rangle = \langle v, Tu \rangle = 0$$

where the first equality holds since $T = T^*$, and the second holds because $U$ is invariant under $T$, so $Tu \in U$, while $v \in U^\perp$. Because the equation holds for each $u \in U$, we conclude that $Tv \in U^\perp$, so $U^\perp$ is invariant under $T$.

(b): Note if $u, v \in U$ then:

$$\langle (T|_U)u, v \rangle = \langle Tu, v \rangle = \langle u, Tv \rangle = \langle u, (T|_U)v \rangle$$

Thus $T|_U$ is self-adjoint.

(c): Replace $U$ with $U^\perp$ in (b), which is allowed via (a).
\end{proof}

Real Spectral Theorem

Suppose $\mathbf{F} = \mathbf{R}$ and $T \in \mathcal{L}(V)$. Then the following are equivalent:

  • $T$ is self-adjoint.
  • $V$ has an orthonormal basis consisting of eigenvectors of $T$.
  • $T$ has a diagonal matrix with respect to some orthonormal basis of $V$.

\begin{proof}
(c) $\implies$ (a): Suppose $T$ has a diagonal matrix w.r.t. some orthonormal basis of $V$. A diagonal real matrix equals its transpose, so $\mathcal{M}(T^*) = \mathcal{M}(T)$, thus $T = T^*$, showing (a).

(a) $\implies$ (b): Induct on $\dim(V)$. If $\dim(V) = 1$, then (a) implies (b) trivially. Suppose $\dim(V) > 1$ and that (a) implies (b) for all real inner product spaces of smaller dimension. Suppose (a), so $T$ is self-adjoint. Let $u$ be an eigenvector of $T$ with $\|u\| = 1$: an eigenvector exists by Chapter 7 - Operators on Inner Product Spaces#^ea761b, and dividing it by its norm produces a unit eigenvector.

Let $U = \operatorname{span}(u)$. Then $U$ is a $1$-dimensional subspace of $V$ that is invariant under $T$, so by Chapter 7 - Operators on Inner Product Spaces#^41adf0 (c) the operator $T|_{U^\perp} \in \mathcal{L}(U^\perp)$ is self-adjoint.

By the inductive hypothesis, there is an orthonormal basis of $U^\perp$ consisting of eigenvectors of $T|_{U^\perp}$. Adjoining $u$ to this orthonormal basis of $U^\perp$ gives an orthonormal basis of $V$ consisting of eigenvectors of $T$, completing the proof of (a) $\implies$ (b).

(b) $\implies$ (c): Trivial.
\end{proof}
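The Real Spectral Theorem is what numpy's symmetric eigensolver implements: for a real symmetric matrix, `eigh` returns an orthonormal eigenbasis that diagonalizes it (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
T = A + A.T                        # real self-adjoint (symmetric) operator

lam, Q = np.linalg.eigh(T)         # columns of Q: orthonormal eigenvectors
assert np.allclose(Q.T @ Q, np.eye(4))          # orthonormal basis of R^4
assert np.allclose(Q.T @ T @ Q, np.diag(lam))   # diagonal w.r.t. that basis
```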

7.C: Positive Operators and Isometries

positive operator

An operator $T \in \mathcal{L}(V)$ is called positive if $T$ is self-adjoint and:

$$\langle Tv, v \rangle \geq 0$$

for all $v \in V$.

If $V$ is a complex vector space, then the requirement that $T$ be self-adjoint can be dropped from the definition, by Chapter 7 - Operators on Inner Product Spaces#^a30225. We require it for real vector spaces though.

As an example, if $U$ is a subspace of $V$ then the orthogonal projection $P_U$ is a positive operator.

As another example, if $T \in \mathcal{L}(V)$ is self-adjoint and $b, c \in \mathbf{R}$ are such that $b^2 < 4c$, then $T^2 + bT + cI$ is a positive operator, via the proof of Chapter 7 - Operators on Inner Product Spaces#^94dc7b.

square root

An operator $R$ is called a square root of an operator $T$ if $R^2 = T$.

As an example, if $T \in \mathcal{L}(\mathbf{F}^3)$ is defined by $T(z_1,z_2,z_3) = (z_3, 0, 0)$, then the operator $R \in \mathcal{L}(\mathbf{F}^3)$ defined by $R(z_1,z_2,z_3) = (z_2, z_3, 0)$ is a square root of $T$.

Notice that we said a square root, not the square root: the square root is only unique in specific circumstances. Further, the characterizations of the positive operators in the next result correspond to the characterizations of the nonnegative numbers among $\mathbf{C}$: a complex number $z$ is nonnegative iff it has a nonnegative square root, iff it has a real square root, iff it can be written as $\bar{w}w$ for some $w \in \mathbf{C}$.

Characterization of positive operators

Let $T \in \mathcal{L}(V)$. Then the following are equivalent:

  • $T$ is positive.
  • $T$ is self-adjoint and all eigenvalues of $T$ are nonnegative.
  • $T$ has a positive square root.
  • $T$ has a self-adjoint square root.
  • $\exists R \in \mathcal{L}(V)$ s.t. $T = R^*R$.

\begin{proof}
We prove (a) $\implies$ (b) $\implies$ (c) $\implies$ (d) $\implies$ (e) $\implies$ (a).

(a) $\implies$ (b): Suppose $T$ is positive. $T$ is self-adjoint from the definition. To show all eigenvalues are nonnegative, suppose $\lambda$ is an eigenvalue of $T$ with eigenvector $v$:

$$0 \leq \langle Tv, v \rangle = \langle \lambda v, v \rangle = \lambda \langle v, v \rangle$$

Since $\langle v, v \rangle > 0$, we get $\lambda \geq 0$.

(b) $\implies$ (c): Suppose $T$ is self-adjoint and all its eigenvalues are nonnegative. By the Chapter 7 - Operators on Inner Product Spaces#^6995f7 or Chapter 7 - Operators on Inner Product Spaces#^d93d1a, there is an orthonormal basis $e_1,\dots,e_n$ of $V$ consisting of eigenvectors of $T$. Let $\lambda_1,\dots,\lambda_n$ be the corresponding eigenvalues; each $\lambda_j$ is nonnegative. Let $R$ be the linear map from $V$ to $V$ such that:

$$Re_j = \sqrt{\lambda_j}\, e_j$$

for $j = 1,\dots,n$, via Chapter 3 - Linear Maps#^9a80a8. Then $R$ is a positive operator through verification. Furthermore, $R^2e_j = \lambda_j e_j = Te_j$ for each $j$, so $R^2 = T$. Thus $R$ is a positive square root of $T$, showing (c).

(c) $\implies$ (d): Obvious because, by definition, every positive operator is self-adjoint.

(d) $\implies$ (e): There is some self-adjoint operator $R$ on $V$ where $R^2 = T$. Then $T = R^*R$ because $R^* = R$.

(e) $\implies$ (a): Let $R \in \mathcal{L}(V)$ be such that $T = R^*R$. Then $T^* = (R^*R)^* = R^*(R^*)^* = R^*R = T$, so $T$ is self-adjoint. To show positivity:

$$\langle Tv, v \rangle = \langle R^*Rv, v \rangle = \langle Rv, Rv \rangle = \|Rv\|^2 \geq 0$$

thus $T$ is positive.
\end{proof}
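The construction in (b) $\implies$ (c) translates directly into code (a sketch, assuming numpy): build $T = R_0^*R_0$, then form the positive square root from the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(4)
R0 = rng.standard_normal((3, 3))
T = R0.T @ R0                     # T = R*R is positive (condition (e))

# Spectral decomposition: T e_j = lam_j e_j with lam_j >= 0,
# and we define R e_j = sqrt(lam_j) e_j as in the proof
lam, Q = np.linalg.eigh(T)        # columns of Q: orthonormal eigenvectors
assert np.all(lam >= -1e-10)      # eigenvalues nonnegative (condition (b))
R = Q @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ Q.T

assert np.allclose(R @ R, T)      # R is a square root of T
assert np.allclose(R, R.T)        # R is self-adjoint (indeed positive)
```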

Each positive operator has only one positive square root

Every positive operator on V has a unique positive square root.

\begin{proof}
Suppose $T \in \mathcal{L}(V)$ is positive. Suppose $v \in V$ is an eigenvector of $T$, so there exists $\lambda \geq 0$ such that $Tv = \lambda v$.

Let $R$ be a positive square root of $T$. We'll show $Rv = \sqrt{\lambda}\, v$, implying that the behavior of $R$ on the eigenvectors of $T$ is uniquely determined. Because there is a basis of $V$ consisting of eigenvectors of $T$ (by the Chapter 7 - Operators on Inner Product Spaces#^6995f7), $R$ is then uniquely determined.

To prove that $Rv = \sqrt{\lambda}\, v$, use the Spectral Theorem to obtain an orthonormal basis $e_1,\dots,e_n$ of $V$ consisting of eigenvectors of $R$. Because $R$ is a positive operator, all its eigenvalues are nonnegative via Chapter 7 - Operators on Inner Product Spaces#^aaa8f3. Thus there exist nonnegative numbers $\lambda_1,\dots,\lambda_n$ such that $Re_j = \sqrt{\lambda_j}\, e_j$ for $j = 1,\dots,n$.

Because $e_1,\dots,e_n$ is a basis of $V$, we can write:

$$v = \sum_{i=1}^n a_i e_i$$

where $a_i \in \mathbf{F}$. Thus:

$$Rv = \sum_{i=1}^n a_i \sqrt{\lambda_i}\, e_i \qquad R^2v = \sum_{i=1}^n a_i \lambda_i e_i$$

Since $R^2v = Tv = \lambda v$, the equation for $R^2v$ implies $a_i(\lambda - \lambda_i) = 0$ for all $i$. Hence, ignoring the indices where $a_i = 0$:

$$v = \sum_{\{j \,:\, \lambda_j = \lambda\}} a_j e_j$$

thus:

$$Rv = \sum_{\{j \,:\, \lambda_j = \lambda\}} a_j \sqrt{\lambda}\, e_j = \sqrt{\lambda}\, v$$
\end{proof}

Isometries

isometry

  • An operator $S \in \mathcal{L}(V)$ is called an isometry if:

$$\|Sv\| = \|v\|$$

for all $v \in V$.

  • In other words, an operator is an isometry if it preserves norms.

For example, $\lambda I$ is an isometry whenever $\lambda \in \mathbf{F}$ satisfies $|\lambda| = 1$.

As another example, let $\lambda_1,\dots,\lambda_n$ be scalars, all with absolute value $1$, and suppose $S \in \mathcal{L}(V)$ satisfies $Se_j = \lambda_j e_j$ for some orthonormal basis $e_1,\dots,e_n$ of $V$. Then $S$ is an isometry: for any $v \in V$:

$$v = \sum_{i=1}^n \langle v, e_i \rangle e_i$$

thus:

$$\|v\|^2 = \sum_{i=1}^n |\langle v, e_i \rangle|^2$$

where we have used Chapter 6 - Inner Product Spaces#^ef9c3b. Applying $S$ to both sides of the first equation gives:

$$Sv = \sum_{i=1}^n \langle v, e_i \rangle Se_i = \sum_{i=1}^n \lambda_i \langle v, e_i \rangle e_i$$

Using the fact that each $\lambda_i$ has magnitude $1$:

$$\|Sv\|^2 = \sum_{i=1}^n |\langle v, e_i \rangle|^2 = \|v\|^2$$

Thus $S$ is an isometry.

The next lemma gives several equivalent characterizations:

Characterization of isometries

Suppose $S \in \mathcal{L}(V)$. Then the following are equivalent:

  • $S$ is an isometry.
  • $\langle Su, Sv \rangle = \langle u, v \rangle$ for all $u, v \in V$.
  • $Se_1,\dots,Se_n$ is orthonormal for every orthonormal list of vectors $e_1,\dots,e_n$ in $V$.
  • $\exists$ an orthonormal basis $e_1,\dots,e_n$ of $V$ such that $Se_1,\dots,Se_n$ is orthonormal.
  • $S^*S = I$.
  • $SS^* = I$.
  • $S^*$ is an isometry.
  • $S$ is invertible and $S^{-1} = S^*$.

\begin{proof}
We prove (a) $\implies$ (b) $\implies \dots \implies$ (h) $\implies$ (a).

(a) $\implies$ (b): Suppose $S$ is an isometry. Using HW 6 - Finishing UT Matrices, Eigenspaces and Diagonal Matrices#20 and 19, the inner product can be computed from norms. Because $S$ preserves norms, $S$ preserves inner products, giving (b). Explicitly, in the real case:

$$\begin{aligned}
\langle Su, Sv \rangle &= \left(\|Su+Sv\|^2 - \|Su-Sv\|^2\right)/4 && \text{6.A Exercise 19} \\
&= \left(\|S(u+v)\|^2 - \|S(u-v)\|^2\right)/4 && S \in \mathcal{L}(V) \\
&= \left(\|u+v\|^2 - \|u-v\|^2\right)/4 && S \text{ is an isometry} \\
&= \langle u, v \rangle && \text{6.A Exercise 19}
\end{aligned}$$

When $V$ is a complex inner product space, HW 6 - Finishing UT Matrices, Eigenspaces and Diagonal Matrices#20 gives the analogous identity. In either case, (b) holds.

(b) $\implies$ (c): $S$ preserves inner products. Suppose $e_1,\dots,e_n$ is an orthonormal list of vectors in $V$. Then the list $Se_1,\dots,Se_n$ is orthonormal because $\langle Se_j, Se_k \rangle = \langle e_j, e_k \rangle$. Thus (c) holds.

(c) $\implies$ (d): Trivial.

(d) $\implies$ (e): Let $e_1,\dots,e_n$ be an orthonormal basis of $V$ such that $Se_1,\dots,Se_n$ is orthonormal. Thus:

$$\langle S^*Se_j, e_k \rangle = \langle e_j, e_k \rangle$$

for $j,k = 1,\dots,n$ (because the left side equals $\langle Se_j, Se_k \rangle$ and $Se_1,\dots,Se_n$ is orthonormal). All vectors $u, v \in V$ can be written as linear combinations of $e_1,\dots,e_n$, so the equation above implies $\langle S^*Su, v \rangle = \langle u, v \rangle$. Hence $S^*S = I$, showing (e).

(e) $\implies$ (f): Suppose $S^*S = I$. In general an operator $S$ need not commute with $S^*$, but $S^*S = I$ iff $SS^* = I$, a special case of 3.D Exercise 10. Thus $SS^* = I$, showing (f).

(f) $\implies$ (g): If $v \in V$:

$$\|S^*v\|^2 = \langle S^*v, S^*v \rangle = \langle SS^*v, v \rangle = \langle v, v \rangle = \|v\|^2$$

So $S^*$ is an isometry, showing (g).

(g) $\implies$ (h): Suppose $S^*$ is an isometry. Using (a) $\implies$ (e) and (a) $\implies$ (f) with $S$ replaced by $S^*$ (and using $(S^*)^* = S$), we get $S^*S = SS^* = I$. Thus $S$ is invertible and $S^{-1} = S^*$, showing (h).

(h) $\implies$ (a): Suppose $S$ is invertible and $S^{-1} = S^*$. Thus $S^*S = I$. If $v \in V$:

$$\|Sv\|^2 = \langle Sv, Sv \rangle = \langle S^*Sv, v \rangle = \langle v, v \rangle = \|v\|^2$$

So $S$ is an isometry, showing (a).
\end{proof}
Since every isometry is normal by the previous lemma ($S^*S = I = SS^*$), we can use the characterizations of normal operators to describe isometries:

Description of isometries when F=C.

Suppose $V$ is a complex inner product space and $S \in \mathcal{L}(V)$. Then the following are equivalent:

  • $S$ is an isometry.
  • There is an orthonormal basis of $V$ consisting of eigenvectors of $S$ whose corresponding eigenvalues all have absolute value $1$.

\begin{proof}
We have shown that (b) implies (a) via the example near the top of Chapter 7 - Operators on Inner Product Spaces#Isometries.

To prove (a) $\implies$ (b), suppose $S$ is an isometry. Then $S$ is normal, so by the Chapter 7 - Operators on Inner Product Spaces#^6995f7 there is an orthonormal basis $e_1,\dots,e_n$ of $V$ consisting of eigenvectors of $S$. For $j = 1,\dots,n$, let $\lambda_j$ be the eigenvalue corresponding to $e_j$. Then:

$$|\lambda_j| = \|\lambda_j e_j\| = \|Se_j\| = \|e_j\| = 1$$

so each $|\lambda_j| = 1$, as desired.
\end{proof}
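This description can be illustrated numerically (a sketch, assuming numpy): scaling the earlier normal matrix onto the "unit circle" of operators gives a unitary (complex isometry) whose eigenvalues all have absolute value $1$:

```python
import numpy as np

# A unitary matrix on C^2: the earlier example matrix scaled by 1/sqrt(13)
S = np.array([[2.0, -3.0],
              [3.0,  2.0]]) / np.sqrt(13)
assert np.allclose(S.conj().T @ S, np.eye(2))   # S*S = I, so S is an isometry

eigvals = np.linalg.eigvals(S)
assert np.allclose(np.abs(eigvals), 1.0)        # eigenvalues on the unit circle
```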

7.D: Polar Decomposition and Singular Value Decomposition

Polar Decomposition

Recall the analogy between $\mathbf{C}$ and $\mathcal{L}(V)$.

Another interesting subset of $\mathbf{C}$ is the unit circle, where $|z| = 1$, i.e. $\bar{z}z = 1$. Under our analogy ($\bar{z}$ corresponds to $T^*$), this condition corresponds to $T^*T = I$, which is precisely our characterization of isometries. So the unit circle corresponds to the isometries.

Every complex number $z \neq 0$ can be written as:

$$z = \left(\frac{z}{|z|}\right)|z| = \left(\frac{z}{|z|}\right)\sqrt{\bar{z}z}$$

where $z/|z|$ lies on the unit circle. We do something similar with operators:

Which we do similarly with operators:

$\sqrt{T}$

If $T$ is a positive operator, then $\sqrt{T}$ denotes the unique positive square root of $T$.

Polar Decomposition

Suppose $T \in \mathcal{L}(V)$. Then there exists an isometry $S \in \mathcal{L}(V)$ such that:

$$T = S\sqrt{T^*T}$$

\begin{proof}
While the proof in the book is fine, the lecture proof is a bit more intuitive; the two are essentially identical.
\end{proof}
The Polar Decomposition Theorem states that each $T \in \mathcal{L}(V)$ is the product of an isometry and a positive operator. This lets us decompose a complicated operator into two operators that are each easy to work with.

Using the Spectral Theorem with $\mathbf{F} = \mathbf{C}$: suppose $T = S\sqrt{T^*T}$ is a polar decomposition of $T \in \mathcal{L}(V)$, where $S$ is an isometry. Then there is an orthonormal basis of $V$ with respect to which $S$ has a diagonal matrix (its eigenvalues all have magnitude $1$), and there is an orthonormal basis of $V$ with respect to which $\sqrt{T^*T}$ has a diagonal matrix.
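A polar decomposition can be computed numerically (a sketch, assuming numpy; producing the isometry via the SVD is one standard construction, not the book's proof):

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 3))

# sqrt(T*T) via the eigendecomposition of the positive operator T*T
lam, Q = np.linalg.eigh(T.T @ T)
P = Q @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ Q.T   # P = sqrt(T*T)

# One way to produce the isometry: S = U V^T from the SVD T = U diag(s) V^T
U, s, Vt = np.linalg.svd(T)
S = U @ Vt

assert np.allclose(S.T @ S, np.eye(3))   # S is an isometry (orthogonal)
assert np.allclose(S @ P, T)             # polar decomposition T = S sqrt(T*T)
```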

Possible difference in ONEB's

Even though what's said above is true, the two orthonormal eigenbases may differ: there need not be a single orthonormal basis that diagonalizes both $S$ and $\sqrt{T^*T}$.

Singular Value Decomposition

While the eigenvalues of an operator tell us something about its behavior, the singular values are the eigenvalues of the positive factor $\sqrt{T^*T}$ from the polar decomposition:

singular values

Suppose $T \in \mathcal{L}(V)$. The singular values of $T$ are the eigenvalues of $\sqrt{T^*T}$, with each eigenvalue $\lambda$ repeated $\dim(E(\lambda, \sqrt{T^*T}))$ times.

The singular values of $T$ are all nonnegative, because they are eigenvalues of the positive operator $\sqrt{T^*T}$.

Example

Consider $T \in \mathcal{L}(\mathbf{F}^4)$ where:

$$T(z_1, z_2, z_3, z_4) = (0,\ 3z_1,\ 2z_2,\ -3z_4)$$

To find the singular values of $T$, calculate $T^*T$ first. Notice:

$$T(e_1) = (0,3,0,0), \quad T(e_2) = (0,0,2,0), \quad T(e_3) = (0,0,0,0), \quad T(e_4) = (0,0,0,-3)$$

Now fix $y \in \mathbf{F}^4$ and let $x \in \mathbf{F}^4$ be arbitrary:

$$\langle x, T^*y \rangle = \langle Tx, y \rangle = \langle (0, 3x_1, 2x_2, -3x_4), (y_1,y_2,y_3,y_4) \rangle = 3x_1y_2 + 2x_2y_3 - 3x_4y_4 = \langle x, (3y_2,\ 2y_3,\ 0,\ -3y_4) \rangle$$

Thus $T^*(x) = (3x_2,\ 2x_3,\ 0,\ -3x_4)$. So:

$$T^*T(x) = T^*(0,\ 3x_1,\ 2x_2,\ -3x_4) = (9x_1,\ 4x_2,\ 0,\ 9x_4)$$

Then $\sqrt{T^*T}$ just takes the square roots of the eigenvalues of $T^*T$ on the corresponding eigenvectors:

$$\sqrt{T^*T}(x) = (3x_1,\ 2x_2,\ 0,\ 3x_4)$$

with eigenvalues $3, 2, 0$ and:

$$\dim(E(3, \sqrt{T^*T})) = 2, \quad \dim(E(2, \sqrt{T^*T})) = 1, \quad \dim(E(0, \sqrt{T^*T})) = 1$$

so our singular values are $3, 3, 2, 0$.

Note

While the eigenvalues of $T$ are only $-3$ and $0$, the singular values $3, 3, 2, 0$ incorporate the "missing" $2$ and capture more of $T$'s behavior.
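This example can be checked with numpy's SVD (a sketch; the matrix below is $T$ w.r.t. the standard basis):

```python
import numpy as np

# Matrix of T(z1,z2,z3,z4) = (0, 3 z1, 2 z2, -3 z4) w.r.t. the standard basis
T = np.array([[0.0, 0.0, 0.0,  0.0],
              [3.0, 0.0, 0.0,  0.0],
              [0.0, 2.0, 0.0,  0.0],
              [0.0, 0.0, 0.0, -3.0]])

s = np.linalg.svd(T, compute_uv=False)
assert np.allclose(sorted(s, reverse=True), [3.0, 3.0, 2.0, 0.0])

# The eigenvalues of T itself are only -3 and 0 (T is lower triangular)
eig = np.linalg.eigvals(T)
assert np.allclose(sorted(eig.real), [-3.0, 0.0, 0.0, 0.0])
```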

Applying the Spectral Theorem and the diagonalizability conditions to $\sqrt{T^*T}$, each $T \in \mathcal{L}(V)$ has $\dim(V)$ singular values. The operator from the prior example has $4$ singular values since $\dim(\mathbf{F}^4) = 4$.

Singular-Value Decomposition

Suppose $T \in \mathcal{L}(V)$ has singular values $s_1,\dots,s_n$. Then there exist orthonormal bases $e_1,\dots,e_n$ and $f_1,\dots,f_n$ of $V$ such that:

$$Tv = s_1\langle v, e_1 \rangle f_1 + \cdots + s_n\langle v, e_n \rangle f_n$$

for every $v \in V$.

\begin{proof}
Apply the lecture proof.
\end{proof}
The idea of the proof above is that by allowing two different orthonormal bases for $\mathcal{M}(T)$, we can always get a diagonal matrix. In other words, every operator on $V$ has a diagonal matrix with respect to some pair of orthonormal bases of $V$, provided we may use different bases $e_i$ and $f_i$ for the domain and codomain.

To compute singular values in computational linear algebra:

  1. Compute $T^*T$.
  2. Compute approximations to the eigenvalues of $T^*T$.
  3. The square roots of these approximations are approximations of the singular values.

We don't have to compute $\sqrt{T^*T}$!

Singular values without taking square root of an operator

Suppose $T \in \mathcal{L}(V)$. Then the singular values of $T$ are the nonnegative square roots of the eigenvalues of $T^*T$, with each eigenvalue $\lambda$ repeated $\dim(E(\lambda, T^*T))$ times.

\begin{proof}
The Spectral Theorem implies there is an orthonormal basis $e_1,\dots,e_n$ and nonnegative numbers $\lambda_1,\dots,\lambda_n$ such that $T^*Te_j = \lambda_j e_j$ for all $j$. Then $\sqrt{T^*T}\,e_j = \sqrt{\lambda_j}\, e_j$ for all $j$ via Chapter 7 - Operators on Inner Product Spaces#^1a3d65, which gives the desired result.
\end{proof}