Lecture from 04.12.2024 | Video: Videos ETHZ
Introduction
This week, we delve into a fundamental concept in linear algebra: eigenvalues and eigenvectors.
Intro to Eigenvalues and Eigenvectors... (MIT)
A good and intuitive lecture…
For a shorter and more visual understanding…
Informal Definition
Given a square matrix $A$, an eigenvalue $\lambda$ (a scalar) and its corresponding eigenvector $v$ (a non-zero vector) satisfy the following equation:

$$Av = \lambda v$$

This equation signifies that when the linear transformation represented by the matrix $A$ is applied to the eigenvector $v$, the resulting vector is simply a scaled version of the original vector $v$. The scaling factor is the eigenvalue $\lambda$.

Rewriting the equation, we get:

$$(A - \lambda I)v = 0$$

where $I$ is the identity matrix. For this equation to hold with a non-zero vector $v$, the matrix $A - \lambda I$ must be singular (non-invertible). Therefore:

$$\det(A - \lambda I) = 0$$

This equation is crucial for finding eigenvalues.
Strategy for Finding Eigenvalues
We can find eigenvalues by solving the equation $\det(A - \lambda I) = 0$. The expression $\det(A - \lambda I)$ is a polynomial in $\lambda$ of degree $n$ (where $A$ is an $n \times n$ matrix), called the characteristic polynomial. The roots of this polynomial are the eigenvalues of $A$.
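As a small numerical sketch (the matrix here is an illustrative choice, not from the lecture), numpy can build the characteristic polynomial of a matrix and confirm that its roots are exactly the eigenvalues:

```python
import numpy as np

# A sample symmetric 2x2 matrix (an illustrative choice, not from the notes).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of the monic characteristic polynomial,
# highest degree first: here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)

# Its roots are exactly the eigenvalues of A (approximately [1, 3]).
roots = np.sort(np.roots(coeffs))

# Cross-check against the dedicated eigenvalue routine.
assert np.allclose(roots, np.sort(np.linalg.eigvals(A)))
```

In practice one calls `np.linalg.eigvals` directly; forming the characteristic polynomial explicitly is numerically fragile for larger matrices.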
Drawback: Complex Numbers
A significant challenge is that not all polynomials have real roots. This means that some matrices might have eigenvalues that are complex numbers. Therefore, we need to take a detour into the realm of complex numbers to fully understand and analyze eigenvalues and eigenvectors.
A Bit More Formal
Definition:
Given a square matrix $A \in \mathbb{R}^{n \times n}$, a scalar $\lambda \in \mathbb{C}$ is an eigenvalue of $A$, and a non-zero vector $v \in \mathbb{C}^n$ is the corresponding eigenvector if:

$$Av = \lambda v$$

$(\lambda, v)$ is called an eigenvalue-eigenvector pair. If $\lambda$ is a real number ($\lambda \in \mathbb{R}$), then it's called a real eigenvalue, and the pair is a real pair.
Example: Consider the matrix $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. To find the eigenvalues, we solve:

$$\det(A - \lambda I) = \det\begin{pmatrix} -\lambda & -1 \\ 1 & -\lambda \end{pmatrix} = \lambda^2 + 1 = 0$$

This equation has no real solutions. The solutions are complex: $\lambda_1 = i$ and $\lambda_2 = -i$, where $i$ is the imaginary unit.
Complex Numbers
What are complex numbers?
Complex numbers are numbers of the form $z = a + bi$, where $a, b \in \mathbb{R}$ and $i$ is the imaginary unit ($i^2 = -1$). $a$ is the real part of $z$ (denoted as $\operatorname{Re}(z)$), and $b$ is the imaginary part (denoted as $\operatorname{Im}(z)$). The set of all complex numbers is denoted by $\mathbb{C}$.
Operations with Complex Numbers
Keeping in mind that $i^2 = -1$, we can perform arithmetic operations on complex numbers:
- Addition: $(a + bi) + (c + di) = (a + c) + (b + d)i$
- Multiplication: $(a + bi)(c + di) = (ac - bd) + (ad + bc)i$ (distribute and use $i^2 = -1$).
- Complex Conjugate: The complex conjugate of $z = a + bi$ is $\bar{z} = a - bi$. The product of a complex number and its conjugate is a real number: $z\bar{z} = (a + bi)(a - bi) = a^2 + b^2$. This is useful for division.
- Division: To divide complex numbers, we multiply the numerator and denominator by the complex conjugate of the denominator: $$\frac{a + bi}{c + di} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \frac{(ac + bd) + (bc - ad)i}{c^2 + d^2}$$ This expresses the division as a complex number in standard form.
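These rules can be checked directly with Python's built-in complex type (the values below are arbitrary examples):

```python
# Python's built-in complex type writes the imaginary unit as 1j.
z = 3 + 4j
w = 1 - 2j

# Addition: add real and imaginary parts separately.
assert z + w == 4 + 2j
# Multiplication: distribute and use i^2 = -1.
assert z * w == 11 - 2j          # (3*1 - 4*(-2)) + (3*(-2) + 4*1)i
# Conjugate, and the real product z * conj(z) = a^2 + b^2.
assert z.conjugate() == 3 - 4j
assert z * z.conjugate() == 25 + 0j
# Division: equivalent to multiplying through by the conjugate of w.
assert abs(z / w - (z * w.conjugate()) / (w * w.conjugate())) < 1e-12
```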
More on Complex Numbers
Notation: Given a complex number $z = a + bi$:
- $\operatorname{Re}(z) = a$: the real part of $z$.
- $\operatorname{Im}(z) = b$: the imaginary part of $z$.
- $|z| = \sqrt{a^2 + b^2}$: the modulus (or magnitude or absolute value) of $z$, representing its distance from the origin in the complex plane.
- $\bar{z} = a - bi$: the complex conjugate of $z$.

Properties: For $z, w \in \mathbb{C}$:
- $z\bar{z} = |z|^2$ (this relates the modulus to the complex conjugate)
- $\frac{1}{z} = \frac{\bar{z}}{|z|^2}$ for $z \neq 0$ (this is the derivation of the division formula)
- $\overline{z + w} = \bar{z} + \bar{w}$ and $\overline{zw} = \bar{z}\,\bar{w}$, and $\overline{z/w} = \bar{z}/\bar{w}$ provided $w \neq 0$
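A quick sanity check of these properties in Python (the two sample values are arbitrary):

```python
z, w = 2 - 3j, -1 + 1j

# z * conj(z) equals |z|^2 (a real number).
assert abs(z * z.conjugate() - abs(z) ** 2) < 1e-12
# Conjugation distributes over sums and products.
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
# 1/z = conj(z) / |z|^2, the mechanism behind the division formula.
assert abs(1 / z - z.conjugate() / abs(z) ** 2) < 1e-12
```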
Complex Vectors and Matrices
Complex Matrices; Fast Fourier Transform... (MIT)
I suggest watching the first part of this lecture as it introduces the topic quite well…
We extend the concepts of vectors and matrices to the complex domain.
- $\mathbb{C}^n$ denotes the set of all $n$-dimensional column vectors with complex entries.
- $\mathbb{C}^{m \times n}$ denotes the set of all $m \times n$ matrices with complex entries.
Transposing Complex Vectors and Matrices
For a complex vector $v \in \mathbb{C}^n$ and a complex matrix $A \in \mathbb{C}^{m \times n}$:
- $v^H$ denotes the conjugate transpose (or Hermitian transpose) of $v$, obtained by taking the transpose of $v$ and then taking the complex conjugate of each entry. If $v = (v_1, \dots, v_n)^T$, then $v^H = (\bar{v}_1, \dots, \bar{v}_n)$.
- $A^H$ denotes the conjugate transpose of $A$. In other words, $A^H = \bar{A}^T$, i.e. $(A^H)_{ij} = \overline{A_{ji}}$.
Inner Product in $\mathbb{C}^n$:
For $v, w \in \mathbb{C}^n$, the inner product is defined as:

$$\langle v, w \rangle = v^H w = \sum_{j=1}^{n} \bar{v}_j w_j$$

The norm (length) of a complex vector $v \in \mathbb{C}^n$ is defined as:

$$\|v\| = \sqrt{v^H v} = \sqrt{\sum_{j=1}^{n} |v_j|^2}$$

where $|v_j|$ is the modulus of the complex number $v_j$. Note that $v^H v$ is always a non-negative real number, so the square root is well defined.
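In numpy, `np.vdot` conjugates its first argument, which matches the definition $\langle v, w \rangle = v^H w$ used here (the sample vectors are arbitrary):

```python
import numpy as np

v = np.array([1 + 1j, 2 - 1j])
w = np.array([3 + 0j, 1j])

# np.vdot conjugates its first argument, matching <v, w> = v^H w.
inner = np.vdot(v, w)
assert inner == np.sum(np.conj(v) * w)

# ||v||^2 = v^H v is real and non-negative.
norm_sq = np.vdot(v, v)
assert norm_sq.imag == 0
assert abs(np.sqrt(norm_sq.real) - np.linalg.norm(v)) < 1e-12
```

Note that `np.dot` does not conjugate, so for complex vectors `np.vdot` (or an explicit `np.conj`) is the one that corresponds to the inner product above.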
Canonical Notation (Linear Algebra in $\mathbb{C}^n$)
The standard linear algebra concepts of linear independence, span, and basis extend naturally to the complex vector space $\mathbb{C}^n$:
- Linear Independence: Vectors $v_1, \dots, v_k \in \mathbb{C}^n$ are linearly independent if $\alpha_1 v_1 + \dots + \alpha_k v_k = 0$ implies that all scalars are zero: $\alpha_1 = \dots = \alpha_k = 0$, where $\alpha_i \in \mathbb{C}$.
- Span: The span of vectors $v_1, \dots, v_k$ is the set of all their linear combinations: $$\operatorname{span}\{v_1, \dots, v_k\} = \{\alpha_1 v_1 + \dots + \alpha_k v_k : \alpha_1, \dots, \alpha_k \in \mathbb{C}\}$$
- Basis: A basis for a subspace $V \subseteq \mathbb{C}^n$ is a set of linearly independent vectors in $V$ that spans $V$.
Why Complex Numbers, Vectors, and Matrices?
The use of complex numbers, vectors, and matrices in the study of eigenvalues and eigenvectors is motivated by the Fundamental Theorem of Algebra, which guarantees the existence of eigenvalues for any square matrix.
Theorem (Fundamental Theorem of Algebra)
Any non-constant polynomial $p(z)$ of degree $n \geq 1$ with complex coefficients has at least one complex root (or zero). In other words, there exists at least one $z_0 \in \mathbb{C}$ such that $p(z_0) = 0$. A formal proof of this theorem requires complex analysis and is beyond the scope of this lecture.
Repeated Roots and Multiplicity: A polynomial can have repeated roots. For example, the polynomial $p(z) = (z - 2)^3(z + 1)$ has the root $2$ repeated three times and the root $-1$ appearing once.
Corollary (Complete Factorization)
Any non-constant polynomial of degree $n \geq 1$ with complex coefficients can be factored completely into linear terms:

$$p(z) = c(z - z_1)(z - z_2) \cdots (z - z_n)$$

Where:
- $c$ is the leading coefficient (non-zero).
- $z_1, \dots, z_n$ are the roots (zeros) of the polynomial, not necessarily distinct.
- The number of times a root appears in the factorization is its algebraic multiplicity.
Connection to Eigenvalues:
The characteristic polynomial of a matrix $A \in \mathbb{C}^{n \times n}$ is given by:

$$p_A(\lambda) = \det(A - \lambda I)$$

This is a polynomial of degree $n$ in the variable $\lambda$. By the Fundamental Theorem of Algebra, the characteristic polynomial always has exactly $n$ complex roots, counting multiplicities.
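A small numpy sketch (the matrix is an illustrative choice) of "exactly $n$ roots, counting multiplicities": an upper-triangular matrix has its diagonal entries as eigenvalues, so a repeated diagonal entry shows up as a repeated root.

```python
import numpy as np

# Upper-triangular 3x3 matrix: the eigenvalues are the diagonal entries,
# so 2 appears with algebraic multiplicity 2 and 5 with multiplicity 1.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigenvalues = np.linalg.eigvals(A)
# Exactly n = 3 eigenvalues, counting algebraic multiplicities.
assert len(eigenvalues) == A.shape[0]
assert np.allclose(np.sort(eigenvalues), [2.0, 2.0, 5.0])
```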
Key Implication:
- Existence of Eigenvalues: Every square $n \times n$ matrix has exactly $n$ eigenvalues, counting algebraic multiplicities. These eigenvalues might be real or complex, and some might be repeated.
This fact justifies our use of complex numbers. Even if we start with a real matrix, its eigenvalues (and consequently, its eigenvectors) might be complex. Working with complex numbers ensures we can always find a complete set of eigenvalues for any square matrix, forming a foundational concept in linear algebra. Without complex numbers we wouldn’t be able to fully analyze the behavior of many linear transformations described by matrices.
Revisiting Example 1
Recall the matrix $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ from above. We found the eigenvalues to be $\lambda_1 = i$ and $\lambda_2 = -i$. Now let's find the corresponding eigenvectors.
Suggestion
I’d suggest you simply try to generate independent vectors in the null space using Gauss-Jordan instead…
For $\lambda_1 = i$:
We need to solve $(A - iI)v = 0$:

$$\begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This leads to the system of equations:

$$-i v_1 - v_2 = 0, \qquad v_1 - i v_2 = 0$$

Let $v_1 = a + bi$ for some real numbers $a$ and $b$. From the first equation, $v_2 = -i v_1 = b - ai$. Substituting into the second equation yields $v_1 - i(-i v_1) = v_1 - v_1 = 0$, so the equations are consistent and $v = (v_1, -i v_1)^T$ is a solution for any $a$ and $b$. Choosing $a = 1$ and $b = 0$ gives $v_1 = 1$, $v_2 = -i$. Hence the eigenvector corresponding to $\lambda_1 = i$ is $v = \begin{pmatrix} 1 \\ -i \end{pmatrix}$.
For $\lambda_2 = -i$:
Similarly, solving $(A + iI)v = 0$:

$$\begin{pmatrix} i & -1 \\ 1 & i \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Let $v_1 = a + bi$ for some real numbers $a$ and $b$. From the first equation, $v_2 = i v_1$. Substituting into the second equation gives $v_1 + i(i v_1) = v_1 - v_1 = 0$, so again any non-zero choice of $v_1$ works. Choosing $a = 1$ and $b = 0$ gives the eigenvector corresponding to $\lambda_2 = -i$ as $v = \begin{pmatrix} 1 \\ i \end{pmatrix}$.
The Theory in General
This section delves into the core concepts and general results concerning eigenvalues and eigenvectors.
Observations and Definitions:
Let $A \in \mathbb{R}^{n \times n}$ be a square matrix.
Eigenvalue and Eigenvector
A scalar $\lambda$ is an eigenvalue of $A$ if there exists a non-zero vector $v$ such that $Av = \lambda v$. The vector $v$ is called an eigenvector corresponding to the eigenvalue $\lambda$. This fundamental equation means that applying the linear transformation represented by $A$ to the eigenvector $v$ simply scales the vector by a factor of $\lambda$.
Real Eigenvalues and Real Eigenvectors
If $A$ is a real matrix and $\lambda$ is a real eigenvalue, then there exists a corresponding real eigenvector.
Justification: Suppose $\lambda \in \mathbb{R}$ is a real eigenvalue. The equation $(A - \lambda I)v = 0$ is a system of linear equations with real coefficients (since $A$ is real and $\lambda$ is real). Gaussian elimination, our method for solving systems of linear equations, involves only arithmetic operations (addition, subtraction, multiplication, division) on the coefficients. Therefore, if the system has a non-trivial solution (which it must, since $\lambda$ is an eigenvalue), it must have a real solution. Thus, there exists a real eigenvector corresponding to the real eigenvalue $\lambda$.
Real vs. Complex Eigenvalues
While the characteristic equation always has roots in the complex numbers, we often focus on real eigenvalues and eigenvectors for applications in real vector spaces. However, the theory extends naturally to complex vector spaces. If $\lambda$ is a complex eigenvalue, the corresponding eigenvectors will generally also be complex.
Eigenspace
The set of all eigenvectors corresponding to a particular eigenvalue $\lambda$, together with the zero vector, forms a subspace of $\mathbb{C}^n$ called the eigenspace associated with $\lambda$. It's denoted by $E_\lambda = N(A - \lambda I)$, the nullspace of $A - \lambda I$. The dimension of the eigenspace is called the geometric multiplicity of the eigenvalue.
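Since the eigenspace is a nullspace, its dimension can be computed via the rank-nullity theorem. A minimal sketch (the helper function and example matrices are my own, not from the lecture):

```python
import numpy as np

# Geometric multiplicity of lam = dim N(A - lam*I) = n - rank(A - lam*I),
# by the rank-nullity theorem. tol guards against floating-point noise.
def geometric_multiplicity(A, lam, tol=1e-10):
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

# Upper-triangular example: eigenvalue 2 has algebraic multiplicity 2
# but only a one-dimensional eigenspace.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
assert geometric_multiplicity(A, 2.0) == 1

# The identity matrix: eigenvalue 1 has a full 2-dimensional eigenspace.
assert geometric_multiplicity(np.eye(2), 1.0) == 2
```

The first example shows that geometric multiplicity can be strictly smaller than algebraic multiplicity.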
Complex Eigenvalues and Conjugate Pairs
If $A$ is a real matrix and $(\lambda, v)$ is a complex eigenvalue-eigenvector pair, then the complex conjugate $(\bar{\lambda}, \bar{v})$ is also an eigenvalue-eigenvector pair, where $\bar{v}$ is the component-wise conjugate of the eigenvector $v$.
Slightly different than what the prof did…
Proof: Since $(\lambda, v)$ is an eigenvalue-eigenvector pair, we have $Av = \lambda v$. Taking the complex conjugate of both sides:

$$\overline{Av} = \overline{\lambda v}$$

Since $A$ is a real matrix, $\bar{A} = A$, and using properties of complex conjugation:

$$A\bar{v} = \bar{\lambda}\bar{v}$$

This equation shows that $\bar{v}$ is an eigenvector of $A$ corresponding to the eigenvalue $\bar{\lambda}$. Therefore, complex eigenvalues of real matrices always come in conjugate pairs.
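This can be observed numerically: for any real matrix, the list of eigenvalues is unchanged (as a multiset) by conjugation. A quick check with a random real matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a random real matrix

eigenvalues = np.linalg.eigvals(A)
# For a real matrix the eigenvalues are closed under conjugation:
# sorting the values and their conjugates gives the same list.
assert np.allclose(np.sort_complex(eigenvalues),
                   np.sort_complex(np.conj(eigenvalues)))
```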
Eigenvalues of Orthogonal Matrices
If $Q \in \mathbb{R}^{n \times n}$ is an orthogonal matrix ($Q^T Q = I$, and since $Q$ is real, $Q^H Q = I$) and $\lambda$ is an eigenvalue of $Q$, then $|\lambda| = 1$. This means the eigenvalues of real orthogonal matrices lie on the unit circle in the complex plane.
Proof: Let $v$ be a (possibly complex) eigenvector corresponding to the eigenvalue $\lambda$. Then $Qv = \lambda v$.

- Inner Product: Consider the inner product of $Qv$ with itself in $\mathbb{C}^n$: $$(Qv)^H (Qv)$$
- Properties of Conjugate Transpose: Using properties of the conjugate transpose, we have $(Qv)^H = v^H Q^H$ and $(\lambda v)^H = \bar{\lambda} v^H$. Note that $Q^H = Q^T$ for a real matrix like $Q$. Thus: $$(Qv)^H (Qv) = v^H Q^H Q v \qquad \text{and} \qquad (Qv)^H (Qv) = (\lambda v)^H (\lambda v) = \bar{\lambda}\lambda\, v^H v$$
- Orthogonality and Modulus: Since $Q$ is orthogonal, $Q^H Q = Q^T Q = I$. Also, $\bar{\lambda}\lambda = |\lambda|^2$ and $v^H v = \|v\|^2$: $$\|v\|^2 = v^H Q^H Q v = |\lambda|^2 \|v\|^2$$
- Conclusion: Since $v$ is an eigenvector, it is non-zero, so $\|v\|^2 \neq 0$. Dividing both sides by $\|v\|^2$: $$|\lambda|^2 = 1 \implies |\lambda| = 1$$
This result has important implications in geometry and transformations. Orthogonal transformations preserve lengths and angles. The fact that their eigenvalues have modulus 1 reflects this preservation of length: eigenvectors are scaled by a factor of modulus 1 under the transformation, possibly with a rotation in the complex plane if $\lambda$ is complex.
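The classic instance is a 2D rotation matrix, which is orthogonal and has the complex eigenvalues $e^{\pm i\theta}$; a quick numerical check (the angle is arbitrary):

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q is orthogonal: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(2))

# Its eigenvalues are e^{±i*theta}; both lie on the unit circle.
eigenvalues = np.linalg.eigvals(Q)
assert np.allclose(np.abs(eigenvalues), 1.0)
```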
Continue here: 24 Explicit Fibonacci Formula, Eigenvalue and Eigenvector properties, Trace