Lecture from 04.10.2024  Video: Videos ETHZ
CR Factorization
Imagine an $m×n$ matrix $A$ with rank $r$. This means its columns span an $r$-dimensional space. The CR factorization says we can build $A$ from two special matrices:
 C: A smaller $m×r$ matrix containing only the independent columns of $A$. These columns form a solid foundation, acting as a basis for the entire column space of $A$. Think of them as the “essential” building blocks.
 R: A unique $r×n$ matrix that describes how the remaining columns of $A$ relate to these independent ones. It essentially encodes the “dependencies” between columns.
The beauty of the CR factorization lies in this simple equation:
$$A = CR$$
This means we can reconstruct the original matrix $A$ simply by multiplying $C$ and $R$.
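As a sketch of how $C$ and $R$ might be computed (the function name and the greedy rank test below are my own illustration, not the lecture's method):

```python
import numpy as np

def cr_factorization(A, tol=1e-10):
    """Split A into C (independent columns of A) and R (coefficients)
    so that A = C @ R. A sketch using a greedy rank test; not an
    optimized or numerically robust implementation."""
    A = np.asarray(A, dtype=float)
    cols = []  # indices of the independent columns found so far
    for j in range(A.shape[1]):
        # keep column j only if it increases the rank
        if np.linalg.matrix_rank(A[:, cols + [j]], tol=tol) > len(cols):
            cols.append(j)
    C = A[:, cols]
    # solve C @ R = A in the least-squares sense; exact when A = CR is consistent
    R, *_ = np.linalg.lstsq(C, A, rcond=None)
    return C, R

A = np.array([[2, 4, 6],
              [3, 6, 9]])
C, R = cr_factorization(A)
# C holds the single independent column; R encodes the dependencies
assert np.allclose(C @ R, A)
```

For this rank-1 matrix, $C$ is a single column and $R$ is a single row, matching the rank-1 example below.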
Example Rank 1 Matrix (Lemma 2.21)
Consider the matrix:
$$A = \begin{pmatrix} 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$$
This is a $2×3$ matrix where the rows are scalar multiples of each other, meaning the rank of the matrix is 1. We can express this matrix as the outer product of two vectors, which is a characteristic of rank 1 matrices.
According to Lemma 2.21, a matrix $A$ has rank 1 if and only if there exist nonzero vectors $v \in \mathbb{R}^{m}$ and $w \in \mathbb{R}^{n}$ such that:
$$A = vw^{\top}$$
In this case, we can write:
$$v = \begin{pmatrix} 2 \\ 3 \end{pmatrix}, \qquad w = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$
The outer product $vw^{\top}$ results in:
$$vw^{\top} = \begin{pmatrix} 2 \\ 3 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} = \begin{pmatrix} 2 \cdot 1 & 2 \cdot 2 & 2 \cdot 3 \\ 3 \cdot 1 & 3 \cdot 2 & 3 \cdot 3 \end{pmatrix} = \begin{pmatrix} 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$$
Thus, we have successfully expressed matrix $A$ as the product of two vectors, confirming that it has rank 1.
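The outer-product construction can be checked numerically (a small sketch using NumPy):

```python
import numpy as np

v = np.array([2, 3])
w = np.array([1, 2, 3])

# np.outer builds the m x n matrix v w^T from the two vectors
A = np.outer(v, w)
# rows are [2, 4, 6] and [3, 6, 9], and the rank is 1
assert np.linalg.matrix_rank(A) == 1
```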
Example Rank 2
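A rank-2 matrix can likewise be split into $r = 2$ independent columns $C$ and a coefficient matrix $R$. A small sketch (the matrices here are illustrative placeholders, not the ones from the lecture):

```python
import numpy as np

# illustrative choice: two independent columns (C) and coefficients (R)
C = np.array([[1, 0],
              [0, 1],
              [1, 1]])
R = np.array([[1, 0, 2],
              [0, 1, 3]])

A = C @ R
# column 3 of A is 2*(column 1) + 3*(column 2), so rank(A) = 2
assert np.linalg.matrix_rank(A) == 2
```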
Fast Fibonacci Numbers by Iterative (Matrix) Squaring
Recall the Fibonacci recurrence $f_{n} = f_{n-1} + f_{n-2}$ with $f_{0} = 0$, $f_{1} = 1$. Now what if we wanted to find out $f_{100}$? Computing the recurrence term by term takes way too long. How can we do this faster? The recurrence can be rewritten as a matrix-vector product:
$$\begin{pmatrix} f_{n+1} \\ f_{n} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} f_{n} \\ f_{n-1} \end{pmatrix}$$
But now we’d have to compute $n-1$ matrix multiplications. We need a smarter way of doing that: raise the matrix to the $n$-th power by iterative squaring, which takes only $O(\log n)$ matrix multiplications.
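The iterative-squaring idea can be sketched in Python (function names are my own; the convention $f_{0}=0$, $f_{1}=1$ is assumed):

```python
def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested tuples."""
    return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
            (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

def mat_pow(X, n):
    """Compute X**n with O(log n) multiplications by iterative squaring."""
    result = ((1, 0), (0, 1))  # identity matrix
    while n > 0:
        if n % 2 == 1:          # current bit of n is set: multiply result in
            result = mat_mult(result, X)
        X = mat_mult(X, X)      # square for the next bit
        n //= 2
    return result

def fib(n):
    """Read f(n) off the matrix power [[1,1],[1,0]]**n."""
    return mat_pow(((1, 1), (1, 0)), n)[0][1]

print(fib(10))   # 55
print(fib(100))  # 354224848179261915075
```

Each loop iteration handles one bit of $n$, so $f_{100}$ needs only about seven squarings instead of 99 multiplications.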
Matrices and Linear Transformations
A matrix can be thought of as a function or transformation that maps vectors from one vector space to another. This mapping adheres to the rules of linearity:

Additivity: Transforming the sum of two vectors is equivalent to transforming each vector individually and then adding the results. Mathematically, for a matrix $A$ and vectors $x, y$:
 $T_{A}(x+y)=T_{A}(x)+T_{A}(y)$

Homogeneity: Scaling a vector by a scalar before transformation is equivalent to scaling the transformed vector. Mathematically, for a matrix $A$, a vector $x$, and a scalar $λ$:
 $T_{A}(λx)=λT_{A}(x)$
These properties are crucial because they allow us to manipulate linear combinations of vectors efficiently using matrix operations.
The standard notation for this transformation is $Ax$, where A represents the matrix and x is the input vector. This product yields an output vector in a different vector space.
| Input | Matrix | Output |
| --- | --- | --- |
| $x \in \mathbb{R}^{n}$ | $A \in \mathbb{R}^{m×n}$ | $Ax \in \mathbb{R}^{m}$ |
Here:
 $x \in \mathbb{R}^{n}$: The input vector belongs to an $n$-dimensional Euclidean space.
 $A \in \mathbb{R}^{m×n}$: The matrix $A$ is a rectangular array of real numbers with $m$ rows and $n$ columns. It defines the transformation between the spaces.
 $Ax \in \mathbb{R}^{m}$: The output vector resides in an $m$-dimensional Euclidean space, reflecting the dimensionality change induced by the transformation.
Example 1: Matrix Multiplication
If we have a matrix:
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}$$
and an input vector:
$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix},$$
we can multiply them together. This multiplication is not like regular multiplication; it is a specific process where each row of the matrix is combined with the vector via a dot product. The result, $Ax$, is another vector:
$$Ax = \begin{pmatrix} 1 \cdot 1 + 2 \cdot 2 \\ 3 \cdot 1 + 4 \cdot 2 \\ 5 \cdot 1 + 6 \cdot 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 11 \\ 17 \end{pmatrix}.$$
Here, the matrix transformed the vector $x$ from $\mathbb{R}^{2}$ (two dimensions) to $\mathbb{R}^{3}$ (three dimensions).
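The same computation can be reproduced with NumPy (a minimal sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([1, 2])

# each entry of A @ x is the dot product of one row of A with x
y = A @ x
print(y)  # [ 5 11 17]
# the input has 2 components, the output has 3: R^2 -> R^3
print(y.shape)  # (3,)
```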
Example 2: Linear Transformation
A linear transformation is a special type of function that preserves certain properties. We can represent it using a matrix. Let’s consider the transformation defined by this matrix:
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
This matrix “swaps” the components of a vector. Applying the transformation $T_{A}$ to a vector $x = \begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}$ gives us:
$$T_{A}(x) = Ax = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix} = \begin{pmatrix} x_{2} \\ x_{1} \end{pmatrix}.$$
So, if we input $\begin{pmatrix} 3 \\ 7 \end{pmatrix}$, the output is $\begin{pmatrix} 7 \\ 3 \end{pmatrix}$.
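A quick numerical check of the swap (sketch):

```python
import numpy as np

# the "swap" matrix exchanges the two components of a vector
A = np.array([[0, 1],
              [1, 0]])
x = np.array([3, 7])
print(A @ x)  # [7 3]
```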
Visual Transformation
We define the transformation of a set $X$ of inputs as:
$$T_{A}(X) := \{\, T_{A}(x) : x \in X \,\}$$
Example ($\mathbb{R}^{2} → \mathbb{R}^{2}$): “Space Distortion”:
A simple way to think of what’s happening is to look at the start and end positions of the unit vectors. 2D matrices let you operate on vectors in $\mathbb{R}^{2}$ and distort them by stretching, mirroring, rotating, shearing, …
Example ($\mathbb{R}^{3} → \mathbb{R}^{2}$): “Orthogonal Projection”
Input 3D unit cube → Output: 2D Projection.
Linear Transformations Generalization
Linear transformations provide a powerful framework for describing how vectors can be transformed in a consistent and predictable manner.
A function $T : \mathbb{R}^{n} → \mathbb{R}^{m}$ is considered a linear transformation if it satisfies the following two axioms for all vectors $x, y \in \mathbb{R}^{n}$ and any scalar $λ \in \mathbb{R}$:

Additivity: $T(x+y)=T(x)+T(y)$

Homogeneity: $T(λx)=λT(x)$
Examples
Linear Transformations Represented by Matrices:
 Rotation in 2D: Rotating a vector in the plane by an angle $θ$ can be achieved with the following $2×2$ matrix:
$$\begin{pmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{pmatrix}$$
 Scaling in $\mathbb{R}^{2}$: To scale a vector in the plane by a factor $k$ along both axes, we use the matrix:
$$\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}$$
 Projection onto the x-axis in $\mathbb{R}^{2}$: This transformation maps any vector $(x, y)$ to $(x, 0)$. The corresponding matrix is:
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
 Shearing in $\mathbb{R}^{2}$: Shearing distorts a shape by sliding points parallel to an axis. A common (horizontal) shear can be represented with the matrix:
$$\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}$$
where $s$ is the shearing factor.
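These transformations can be tried out numerically. A minimal sketch using the standard 2D rotation and shear matrices (assuming the usual conventions):

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 0.5],   # shear factor s = 0.5
                  [0.0, 1.0]])

e1 = np.array([1.0, 0.0])
# a 90-degree rotation carries e1 onto e2
assert np.allclose(rotation @ e1, [0.0, 1.0])
# shearing slides the top of the unit square sideways by s
assert np.allclose(shear @ np.array([0.0, 1.0]), [0.5, 1.0])
```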
Linear Transformations Without Matrices:

Differentiation: The derivative of a function (e.g., $f'(x)$) is a linear transformation because:
 It satisfies additivity: $D(f(x) + g(x)) = f'(x) + g'(x)$
 It satisfies homogeneity: $D(c\,f(x)) = c\,f'(x)$

Integration: Similar to differentiation, the integral of a function (e.g., $∫f(x)dx$) is also a linear transformation.
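The two axioms for differentiation can be illustrated numerically with a central-difference approximation (`diff` below is a hypothetical helper for illustration, not from the lecture):

```python
def diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2
g = lambda x: 3 * x

# additivity: D(f + g) agrees with D(f) + D(g)
lhs = diff(lambda x: f(x) + g(x), 2.0)
rhs = diff(f, 2.0) + diff(g, 2.0)
assert abs(lhs - rhs) < 1e-6

# homogeneity: D(c*f) agrees with c * D(f)
c = 5.0
assert abs(diff(lambda x: c * f(x), 2.0) - c * diff(f, 2.0)) < 1e-6
```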
Continue here: 07 Linear Transformations, Linear Systems of Equations, PageRank