Lecture 31 - Last Lecture!

We'll do some cool things with vector projections today!

We'll define vector projection in an abstract sense. Recall for our purposes:

Lemma

Suppose $U$ is a finite-dimensional subspace of $V$. Then:

$$V = U \oplus U^\perp$$

as well as:

$$(U^\perp)^\perp = U$$

We now give a cool definition:

orthogonal projection

Suppose $U$ is a finite-dimensional subspace of $V$. The orthogonal projection of $V$ onto $U$ is the operator $P_U$ defined by:

$$P_U(v) = u$$

whenever $v = u + w$ where $u \in U$ and $w \in U^\perp$.
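
For example, take $V = \mathbb{R}^3$ with the usual dot product and $U = \operatorname{span}(e_1, e_2)$ (the $xy$-plane). Then $v = (3, 4, 5)$ splits as $(3, 4, 0) + (0, 0, 5)$ with $(3, 4, 0) \in U$ and $(0, 0, 5) \in U^\perp$, so $P_U(v) = (3, 4, 0)$.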

Properties of orthogonal projection

- $P_U \in \mathcal{L}(V)$;
- $P_U(u) = u$ for all $u \in U$;
- $P_U(w) = 0$ for all $w \in U^\perp$;
- $\operatorname{range}(P_U) = U$;
- $\operatorname{null}(P_U) = U^\perp$;
- $v - P_U(v) \in U^\perp$;
- $(P_U)^2 = P_U$;
- $\|P_U(v)\| \le \|v\|$;
- Given any orthonormal basis for the space $U$, say $e_1, \dots, e_m$, then $P_U(v) = \sum_{i=1}^{m} \langle v, e_i \rangle e_i$ (this is the $v - P_U(v)$ idea coming from Gram-Schmidt).

We'll prove some of these. Let's prove $\|P_U(v)\| \le \|v\|$:

\begin{proof}
Suppose $v = u + w$ where $u \in U$ and $w \in U^\perp$. Then:

$$\|P_U(v)\|^2 = \|u\|^2 \le \|u\|^2 + \|w\|^2 = \|u + w\|^2 = \|v\|^2$$

where the last equality before $\|v\|^2$ is the Pythagorean Theorem, since $u \perp w$. Thus taking square roots of both sides reveals the theorem.
\end{proof}
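
As a quick numerical sanity check of these properties, here is a minimal sketch (assuming NumPy, with a made-up two-dimensional subspace $U \subseteq \mathbb{R}^4$) that builds $P_U$ from an orthonormal basis and checks $(P_U)^2 = P_U$, $\|P_U(v)\| \le \|v\|$, and $v - P_U(v) \in U^\perp$:

```python
import numpy as np

# Made-up example: U is the column space of A, a 2-dimensional subspace of R^4.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 2.0]])

# Orthonormal basis e_1, e_2 of U via QR (essentially Gram-Schmidt).
Q, _ = np.linalg.qr(A)

# P_U(v) = sum_i <v, e_i> e_i, which as a matrix is P = Q Q^T.
P = Q @ Q.T

v = np.array([3.0, -1.0, 2.0, 0.5])

assert np.allclose(P @ P, P)                                  # (P_U)^2 = P_U
assert np.linalg.norm(P @ v) <= np.linalg.norm(v) + 1e-12     # ||P_U(v)|| <= ||v||
assert np.allclose(Q.T @ (v - P @ v), 0)                      # v - P_U(v) in U^perp

print(P @ v)  # the projection of v onto U
```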

Minimization Problems

We sometimes want to find the vector in a subspace that is closest to a given vector. For some setup, suppose $U$ is a finite-dimensional subspace of $V$ and $v \in V$. Then:

$$\|v - P_U(v)\| \le \|v - u\|$$

for all $u \in U$. Namely, the distance from $v$ to the "plane" $U$ is less than just taking the distance from $v$ to any other vector in $U$.

\begin{proof}

$$\|v - P_U(v)\|^2 \le \|v - P_U(v)\|^2 + \|P_U(v) - u\|^2 = \|v - u\|^2$$

The equality comes from the Pythagorean Theorem along with the fact that the two vectors $v - P_U(v) \in U^\perp$ and $P_U(v) - u \in U$ are orthogonal.
\end{proof}
So the best vector to use as an approximation of $v$, using only vectors in $U$, would be $P_U(v)$.
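
To see the minimization property in action, here is a small sketch (assuming NumPy; the subspace and vector are arbitrary made-up values) checking that no vector in $U$ gets closer to $v$ than $P_U(v)$ does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same kind of made-up setup: U is the column space of A inside R^4.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 2.0]])
Q, _ = np.linalg.qr(A)          # orthonormal basis of U
v = np.array([3.0, -1.0, 2.0, 0.5])

best = Q @ (Q.T @ v)            # P_U(v)
best_dist = np.linalg.norm(v - best)

# Randomly sampled vectors u in U never beat P_U(v) as an approximation of v.
for _ in range(1000):
    u = A @ rng.normal(size=2)
    assert best_dist <= np.linalg.norm(v - u) + 1e-12

print(best_dist)
```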

An Example

See HW 7 - Inner Product Spaces#^af860a problem. The list is orthonormal with respect to:

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\, dx$$

Let $U_n = \operatorname{span}(B_n)$, where $B_n$ was the list up to the $\cos(nx)$ and $\sin(nx)$ terms. $U_n$ is a finite-dimensional subspace of the space of all continuous real-valued functions on $[-\pi, \pi]$.

Note that then we can compute $P_{U_1}(e^x)$:

$$U_1 = \operatorname{span}\left(\frac{1}{\sqrt{2\pi}}, \frac{\cos(x)}{\sqrt{\pi}}, \frac{\sin(x)}{\sqrt{\pi}}\right)$$

So then:

$$P_{U_1}(e^x) = \left\langle e^x, \frac{1}{\sqrt{2\pi}} \right\rangle \frac{1}{\sqrt{2\pi}} + \left\langle e^x, \frac{\cos(x)}{\sqrt{\pi}} \right\rangle \frac{\cos(x)}{\sqrt{\pi}} + \left\langle e^x, \frac{\sin(x)}{\sqrt{\pi}} \right\rangle \frac{\sin(x)}{\sqrt{\pi}}$$
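
To actually evaluate these inner products, here is a short symbolic sketch (assuming SymPy) that computes each coefficient and assembles $P_{U_1}(e^x)$:

```python
import sympy as sp

x = sp.symbols('x')

# Orthonormal basis of U_1 with respect to <f, g> = integral of f*g over [-pi, pi].
basis = [1 / sp.sqrt(2 * sp.pi),
         sp.cos(x) / sp.sqrt(sp.pi),
         sp.sin(x) / sp.sqrt(sp.pi)]

def inner(f, g):
    """The inner product <f, g> on continuous functions over [-pi, pi]."""
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

# P_{U_1}(e^x) = sum_i <e^x, e_i> e_i
proj = sum(inner(sp.exp(x), e) * e for e in basis)
print(sp.simplify(proj))
```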