Lecture from 06.11.2024 | Video: Videos ETHZ
Orthogonality of Vectors and Subspaces
Definition
Two vectors $v, w \in \mathbb{R}^n$ are orthogonal or perpendicular if their dot product is zero:
$$v \cdot w = \sum_{i=1}^{n} v_i w_i = 0.$$
Two subspaces $V$ and $W$ of $\mathbb{R}^n$ are orthogonal if every vector $v \in V$ is orthogonal to every vector $w \in W$.
Example in $\mathbb{R}^2$
Fact 1
If $v$ and $w$ are orthogonal, any scalar multiple of $v$ is also orthogonal to any scalar multiple of $w$. This means that the subspaces spanned by $v$ and $w$ are orthogonal.
Let’s take $v = (1, 2)$ and $w = (2, -1)$.
$v \cdot w = 1 \cdot 2 + 2 \cdot (-1) = 0$. So, $v$ and $w$ are orthogonal.
Now consider $2v = (2, 4)$ and $3w = (6, -3)$.
$(2v) \cdot (3w) = 2 \cdot 6 + 4 \cdot (-3) = 0$. As expected, the scalar multiples are also orthogonal.
The subspace spanned by $v$ is $V = \{c(1, 2) : c \in \mathbb{R}\}$ and the subspace spanned by $w$ is $W = \{c(2, -1) : c \in \mathbb{R}\}$. These subspaces are orthogonal.
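As a quick sanity check, here is a minimal NumPy sketch of the computations above (the vectors and the scalar factors 2 and 3 are the ones assumed in the example):

```python
import numpy as np

# Vectors assumed in the example above.
v = np.array([1, 2])
w = np.array([2, -1])

# v and w are orthogonal: their dot product is zero.
print(np.dot(v, w))          # 0

# Scalar multiples stay orthogonal, since (a v) . (b w) = a b (v . w) = 0.
print(np.dot(2 * v, 3 * w))  # 0
```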
Fact 2
If $v$ and $w$ are linearly independent and orthogonal vectors in $\mathbb{R}^2$, they form a basis for $\mathbb{R}^2$. This implies that if a linear combination of $v$ and $w$ is zero, then the coefficients must be zero.
Continuing with the same $v$ and $w$ as above, we can see they are linearly independent because neither is a scalar multiple of the other. Since they are orthogonal and linearly independent in $\mathbb{R}^2$, they form a basis for $\mathbb{R}^2$.
Consider the equation $c_1 v + c_2 w = 0$. We want to show that $c_1 = c_2 = 0$.
This gives us the system of equations:
$$c_1 + 2c_2 = 0, \qquad 2c_1 - c_2 = 0.$$
Multiplying the first equation by 2, we get $2c_1 + 4c_2 = 0$. Subtracting this from the second equation gives $-5c_2 = 0$, so $c_2 = 0$. Substituting into the first equation gives $c_1 = 0$.
Thus, the only solution is $c_1 = c_2 = 0$, demonstrating that if a linear combination of orthogonal, linearly independent vectors is zero, the coefficients must be zero. This is a consequence of them forming a basis.
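The same conclusion can be checked numerically: a minimal sketch, assuming the vectors above, that stacks $v$ and $w$ as the columns of a matrix and verifies it has full rank, so the homogeneous system has only the trivial solution.

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([2.0, -1.0])

# c1*v + c2*w = 0 in matrix form: A @ [c1, c2] = 0 with v, w as columns.
A = np.column_stack([v, w])

# Full rank (rank 2) means only the trivial solution c1 = c2 = 0,
# i.e. v and w are linearly independent and form a basis of R^2.
print(np.linalg.matrix_rank(A))          # 2
print(np.linalg.solve(A, np.zeros(2)))   # array of zeros (the trivial solution)
```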
Decomposition of $\mathbb{R}^n$ and the Minkowski Sum
We can express the decomposition of $\mathbb{R}^n$ in terms of the Minkowski sum. The Minkowski sum of two sets $A$ and $B$ in $\mathbb{R}^n$, denoted $A \oplus B$, is defined as:
$$A \oplus B = \{a + b : a \in A, b \in B\}.$$
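As a toy illustration of the definition (a sketch using small finite point sets rather than subspaces), the sum simply collects all pairwise sums:

```python
# Minkowski sum of two finite point sets in R^2: collect all pairwise sums a + b.
A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 2)}
A_plus_B = {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}
print(A_plus_B)   # the four sums: (0, 0), (1, 0), (0, 2), (1, 2)
```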
Orthogonality of Subspaces and Bases
Lemma: Let $v_1, \dots, v_k$ be a basis for subspace $V$ and $w_1, \dots, w_m$ be a basis for subspace $W$. Then $V$ and $W$ are orthogonal if and only if $v_i \cdot w_j = 0$ for all $i$ and $j$.
Proof
- Forward Direction: If $V$ and $W$ are orthogonal, then any $v \in V$ and $w \in W$ are orthogonal. Since the $v_i$ and $w_j$ are basis vectors, they belong to their respective subspaces, so $v_i \cdot w_j = 0$.
- Reverse Direction: Assume $v_i \cdot w_j = 0$ for all $i$ and $j$. Any $v \in V$ can be written as $v = \sum_i a_i v_i$, and any $w \in W$ as $w = \sum_j b_j w_j$. Then $v \cdot w = \left(\sum_i a_i v_i\right) \cdot \left(\sum_j b_j w_j\right) = \sum_i \sum_j a_i b_j (v_i \cdot w_j) = 0$, since each $v_i \cdot w_j = 0$. Therefore, $V$ and $W$ are orthogonal.
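A small numerical illustration of the lemma (the bases below are assumed example vectors in $\mathbb{R}^3$, not taken from the lecture): all pairwise dot products between the two bases appear as the entries of a single matrix product.

```python
import numpy as np

# Assumed example: V = span{(1, 1, 0)}, W = span{(1, -1, 0), (0, 0, 1)} in R^3.
BV = np.array([[1.0, 1.0, 0.0]])          # rows = basis vectors of V
BW = np.array([[1.0, -1.0, 0.0],
               [0.0,  0.0, 1.0]])         # rows = basis vectors of W

# Entry (i, j) of BV @ BW.T is v_i . w_j; all zeros means V and W are orthogonal.
print(BV @ BW.T)                          # [[0. 0.]]
```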
Linear Independence of Basis Vectors from Orthogonal Subspaces
Lemma: Let $V$ and $W$ be two orthogonal subspaces of $\mathbb{R}^n$. Let $v_1, \dots, v_k$ be a basis of $V$ and $w_1, \dots, w_m$ be a basis of $W$. The set of vectors $\{v_1, \dots, v_k, w_1, \dots, w_m\}$ is linearly independent.
Proof
Consider a linear combination of the basis vectors that equals the zero vector:
$$a_1 v_1 + \dots + a_k v_k + b_1 w_1 + \dots + b_m w_m = 0. \qquad (\ast)$$
We want to show that $a_i = 0$ for all $i$ and $b_j = 0$ for all $j$.
Let $v = a_1 v_1 + \dots + a_k v_k$. Then equation $(\ast)$ is equivalent to:
$$v = -(b_1 w_1 + \dots + b_m w_m).$$
Now, take the dot product of both sides of this equation with $v$:
$$v \cdot v = -\sum_{j=1}^{m} b_j \,(v \cdot w_j).$$
Since $v \in V$ and each $w_j \in W$, and $V$ and $W$ are orthogonal, we have $v \cdot w_j = 0$ for all $j$. Therefore:
$$\|v\|^2 = v \cdot v = 0.$$
This implies that $v = a_1 v_1 + \dots + a_k v_k = 0$. Since $v_1, \dots, v_k$ is a basis for $V$ (and thus linearly independent), the only way for $a_1 v_1 + \dots + a_k v_k = 0$ is if $a_i = 0$ for all $i$. Analogously, $b_j = 0$ for all $j$.
Alternatively, substituting $v = 0$ back into equation $(\ast)$ gives:
$$b_1 w_1 + \dots + b_m w_m = 0.$$
Since $w_1, \dots, w_m$ is a basis for $W$ (and thus linearly independent), we must have $b_j = 0$ for all $j$.
Thus, we have shown that all the coefficients $a_i$ and $b_j$ are zero, which proves that the set $\{v_1, \dots, v_k, w_1, \dots, w_m\}$ is linearly independent.
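The lemma can also be verified numerically for the assumed bases from the previous sketch: stacking all basis vectors into one matrix, linear independence means the rank equals the total number of vectors.

```python
import numpy as np

BV = np.array([[1.0, 1.0, 0.0]])          # basis of V (assumed example)
BW = np.array([[1.0, -1.0, 0.0],
               [0.0,  0.0, 1.0]])         # basis of W (assumed example)

# The combined set {v_1, ..., v_k, w_1, ..., w_m} is linearly independent
# exactly when the stacked matrix has rank k + m.
combined = np.vstack([BV, BW])
print(np.linalg.matrix_rank(combined) == combined.shape[0])   # True
```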
Intersection and Sum of Orthogonal Subspaces
Corollary: Let $V$ and $W$ be orthogonal subspaces of $\mathbb{R}^n$. Then:
$$V \cap W = \{0\} \qquad \text{and} \qquad \dim(V) + \dim(W) = \dim(V + W) \le n.$$
Proof
- Intersection: Suppose $x \in V \cap W$. Since $V$ and $W$ are orthogonal, $x \cdot w = 0$ for all $w \in W$. But since $x \in W$ as well, we can choose $w = x$, which means $x \cdot x = \|x\|^2 = 0$. This implies $x = 0$. Therefore, $V \cap W = \{0\}$.
- Sum and Dimension: The sum of two subspaces $V$ and $W$, denoted $V + W$, is defined as $V + W = \{v + w : v \in V, w \in W\}$. Let $v_1, \dots, v_k$ be a basis for $V$ and $w_1, \dots, w_m$ be a basis for $W$. Since $V$ and $W$ are orthogonal subspaces, their bases can be combined to form a basis for $V + W$, given by $\{v_1, \dots, v_k, w_1, \dots, w_m\}$: any vector in $V + W$ can be represented uniquely as a linear combination of vectors from this combined basis. Hence the dimensions add up, $\dim(V + W) = \dim(V) + \dim(W) = k + m$. Since $V + W$ is itself a subspace of $\mathbb{R}^n$, its dimension is at most $n$. Therefore, $\dim(V) + \dim(W) = \dim(V + W) \le n$.
The Orthogonal Complement of a Subspace
Definitions
Let $V$ be a subspace of $\mathbb{R}^n$. The orthogonal complement of $V$, denoted $V^{\perp}$, is the set of all vectors in $\mathbb{R}^n$ that are orthogonal to every vector in $V$:
$$V^{\perp} = \{x \in \mathbb{R}^n : x \cdot v = 0 \text{ for all } v \in V\}.$$
$V^{\perp}$ is itself a subspace of $\mathbb{R}^n$. Intuitively, imagine a plane through the origin in 3D space. Its orthogonal complement would be the line through the origin that’s perpendicular to the plane. Every vector in the line is orthogonal to every vector in the plane.
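A minimal sketch of this intuition, assuming the plane is the $xy$-plane in $\mathbb{R}^3$: the orthogonal complement is the nullspace of a matrix whose rows span $V$, computed here with `scipy.linalg.null_space`.

```python
import numpy as np
from scipy.linalg import null_space

# Assumed example: V is the xy-plane in R^3, spanned by the rows below.
V_rows = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# V_perp = {x : v . x = 0 for every v in V} = nullspace of V_rows.
V_perp = null_space(V_rows)
print(V_perp.ravel())   # the z-axis direction, e.g. [0. 0. 1.] up to sign
```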
Theorem: Nullspace and Row Space
Let $A$ be an $m \times n$ matrix. Then $N(A) = C(A^T)^{\perp}$. This theorem states that the nullspace of a matrix is the orthogonal complement of its row space (or equivalently, the column space of its transpose).
Proof
- $N(A) \subseteq C(A^T)^{\perp}$: Let $x$ be a vector in the nullspace of $A$. This means $Ax = 0$. We want to show that $x$ is orthogonal to every vector in the column space of $A^T$. Any vector $y$ in $C(A^T)$ can be written as $y = A^T z$ for some $z \in \mathbb{R}^m$. The dot product of $x$ and $y$ is $x \cdot y = x^T A^T z = (Ax)^T z = 0^T z = 0$. Since their dot product is zero, $x$ is orthogonal to $y$. Because $y$ was an arbitrary vector in $C(A^T)$, this means $x$ is orthogonal to every vector in $C(A^T)$, and therefore $x \in C(A^T)^{\perp}$.
- $C(A^T)^{\perp} \subseteq N(A)$: Let $x \in C(A^T)^{\perp}$. This means $x \cdot y = 0$ for all $y \in C(A^T)$. Let $y = A^T (Ax)$. Since $y$ is $A^T$ applied to a vector, $y$ is in $C(A^T)$. Because $x$ is orthogonal to every vector in $C(A^T)$, it must be orthogonal to $y$: $0 = x \cdot y = x^T A^T (Ax) = (Ax)^T (Ax) = \|Ax\|^2$. For the squared norm of a vector to be zero, the vector itself must be zero. Thus, $Ax = 0$, which means $x \in N(A)$.
Since the row space of $A$ is the same as the column space of $A^T$ ($R(A) = C(A^T)$), we also have $N(A) = R(A)^{\perp}$.
Example
Consider the matrix:
$$A = \begin{pmatrix} 1 & 2 \end{pmatrix}.$$
The row space is spanned by the vector $(1, 2)$. The nullspace contains vectors of the form $(-2, 1)$ (and their scalar multiples). Notice that the dot product of $(1, 2)$ and $(-2, 1)$ is $1 \cdot (-2) + 2 \cdot 1 = 0$. This confirms that the nullspace is orthogonal to the row space.
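The example can be reproduced numerically; a sketch assuming the $1 \times 2$ matrix above:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0]])   # assumed example matrix; row space = span{(1, 2)}

# An orthonormal basis of the nullspace N(A); a multiple of (-2, 1) up to sign.
N = null_space(A)
print(N.ravel())

# N(A) is orthogonal to the row space: A @ N is zero up to floating-point error.
print(A @ N)
```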
This relationship is fundamental in linear algebra, particularly in understanding the solutions to systems of linear equations and in concepts like least squares approximations. The orthogonal complement provides a powerful tool for decomposing vector spaces and analyzing the behavior of linear transformations.
Multiplying by Invertible Matrix Doesn’t Change Row Space
If $A$ is an $m \times n$ matrix and $B$ is an invertible $m \times m$ matrix, then $C((BA)^T) = C(A^T)$. See: Invariance of Row Space under Multiplication by an Invertible Matrix
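A quick randomized check of this fact (a sketch with assumed random matrices, not a proof): each row of $BA$ is a combination of rows of $A$, and the ranks agree, so the two row spaces coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # assumed example: a 3x5 matrix A
B = rng.standard_normal((3, 3))   # a generically invertible 3x3 matrix B
assert abs(np.linalg.det(B)) > 1e-12

# Row space of B @ A is contained in that of A; equal ranks and no rank
# increase when stacking the rows together give C((BA)^T) = C(A^T).
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B @ A))   # 3 3
print(np.linalg.matrix_rank(np.vstack([A, B @ A])))             # still 3
```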
The Orthogonal Complement of a Subspace II
Theorem: Properties of Orthogonal Complements
Let $V$ and $W$ be orthogonal subspaces of $\mathbb{R}^n$. The following statements are equivalent:
(i) $W = V^{\perp}$ (ii) $\dim(V) + \dim(W) = n$ (iii) Every $x \in \mathbb{R}^n$ can be written uniquely as $x = v + w$, where $v \in V$ and $w \in W$.
Proof
- (i) implies (ii): Let $v_1, \dots, v_k$ be a basis for $V$. Form a $k \times n$ matrix $A$ whose rows are the basis vectors of $V$. Then $C(A^T) = V$. By the previous theorem, $N(A) = C(A^T)^{\perp} = V^{\perp} = W$. From the rank-nullity theorem, $\dim C(A^T) + \dim N(A) = n$. Since $C(A^T) = V$ and $N(A) = W$, we have $\dim(V) + \dim(W) = n$.
- (ii) implies (iii): Let $v_1, \dots, v_k$ be a basis for $V$ and $w_1, \dots, w_m$ be a basis for $W$. Since $V$ and $W$ are orthogonal, $v_i \cdot w_j = 0$ for all $i$ and $j$. By the lemma above, the set $\{v_1, \dots, v_k, w_1, \dots, w_m\}$ is linearly independent. Since $k + m = \dim(V) + \dim(W) = n$, this set forms a basis for $\mathbb{R}^n$. Thus, any $x \in \mathbb{R}^n$ can be written uniquely as a linear combination of these basis vectors, which is equivalent to $x = v + w$ for unique $v \in V$ and $w \in W$.
- (iii) implies (i): We need to show $W = V^{\perp}$. Since $V$ and $W$ are orthogonal, $W \subseteq V^{\perp}$. For the reverse inclusion, let $x \in V^{\perp}$. By (iii), we can write $x = v + w$ for unique $v \in V$ and $w \in W$. Then $0 = x \cdot v = (v + w) \cdot v = v \cdot v + w \cdot v = \|v\|^2$. This implies $v = 0$, so $x = w \in W$. Thus, $V^{\perp} \subseteq W$, and we conclude $W = V^{\perp}$.
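A concrete instance of statement (iii), as a sketch with an assumed one-dimensional $V \subseteq \mathbb{R}^3$ and $W = V^{\perp}$: the component $v$ is the orthogonal projection of $x$ onto $V$, and the remainder $w = x - v$ lies in $V^{\perp}$.

```python
import numpy as np

# Assumed example: V = span{(1, 1, 0)} in R^3, W = V^perp, and some x in R^3.
v1 = np.array([1.0, 1.0, 0.0])
x = np.array([3.0, 1.0, 2.0])

# Orthogonal projection of x onto V gives the V-component; the rest is in V^perp.
v = (x @ v1) / (v1 @ v1) * v1
w = x - v
print(v, w)      # [2. 2. 0.] [ 1. -1.  2.]
print(v1 @ w)    # 0.0 -> w is orthogonal to V, so x = v + w with v in V, w in W
```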
Continue here: 16 Orthogonal Complementary Subspaces, Projections, Normal Form