Lecture from 09.10.2024 | Video: Videos ETHZ

## Linear Transformations

We showed last time that a transformation is linear if it is additive and homogeneous.

- $T(x+y)=T(x)+T(y)$
- $T(λx)=λT(x)$

**Lemma 2.28:**

Let $T:\mathbb{R}^{n}\to\mathbb{R}^{m}$ be a linear transformation, $x_{1},\dots,x_{\ell}\in\mathbb{R}^{n}$ and $λ_{1},\dots,λ_{\ell}\in\mathbb{R}$. Then:

$$T\left(\sum_{j=1}^{\ell} λ_{j}x_{j}\right)=\sum_{j=1}^{\ell} λ_{j}T(x_{j})$$

To be rigorous, let us prove this properly by induction. We use this method in order to avoid the "unrigorous" dotted notation, which relies on $\dots$ in its proof.
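Since any matrix map $x\mapsto Ax$ is linear, Lemma 2.28 can be sanity-checked numerically. A minimal sketch with NumPy (the matrix, vectors, and coefficients here are arbitrary illustrative choices, not from the lecture):

```python
import numpy as np

# Hypothetical example: T(x) = A x for a random 3x4 matrix A, i.e. T: R^4 -> R^3.
# Lemma 2.28 says T(sum λ_j x_j) must equal sum λ_j T(x_j).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

def T(x):
    return A @ x

lams = rng.standard_normal(5)        # coefficients λ_1, ..., λ_5
xs = rng.standard_normal((5, 4))     # vectors x_1, ..., x_5 in R^4

lhs = T(sum(l * x for l, x in zip(lams, xs)))   # T of the linear combination
rhs = sum(l * T(x) for l, x in zip(lams, xs))   # linear combination of the images
print(np.allclose(lhs, rhs))                    # True (up to floating-point error)
```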

### The matrix of a linear transformation (Theorem 2.29)

Let $T:\mathbb{R}^{n}\to\mathbb{R}^{m}$ be a linear transformation. Then there __exists a unique__ $m×n$ matrix $A$ such that $T=T_{A}$.

Now this might seem daunting at first, but once proven it makes the job of working with generic linear transformations much easier, since we only have to think about matrices.

**Proof:**

In order for $T=T_{A}$ we need:

$$T(e_{j})=T_{A}(e_{j})=Ae_{j} \quad \text{for all } j=1,\dots,n,$$

which means that both transformations agree on every unit vector $e_{j}$.

This way the only matrix $A$ we can get is:

$$A=\begin{pmatrix} | & | & & | \\ T(e_{1}) & T(e_{2}) & \cdots & T(e_{n}) \\ | & | & & | \end{pmatrix}$$

This works because for all $x\in\mathbb{R}^{n}$:

$$T_{A}(x)=Ax=\sum_{j=1}^{n} x_{j}T(e_{j})=T\left(\sum_{j=1}^{n} x_{j}e_{j}\right)=T(x)$$

With this correspondence in hand, we can derive general properties of linear transformations by working with matrices alone.
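The construction in the proof is directly computable: feed the unit vectors $e_{j}$ into $T$ and use the results as columns. A small sketch, using a hypothetical $T$ (rotation by 90° in $\mathbb{R}^{2}$) purely as an example:

```python
import numpy as np

# Hypothetical linear map: rotation by 90 degrees, T(x1, x2) = (-x2, x1).
def T(x):
    return np.array([-x[1], x[0]])

n = 2
I = np.eye(n)
# Theorem 2.29: the j-th column of A is T(e_j).
A = np.column_stack([T(I[:, j]) for j in range(n)])

x = np.array([3.0, 5.0])
print(A)                  # [[ 0. -1.]
                          #  [ 1.  0.]]
print(A @ x, T(x))        # both give [-5.  3.]
```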

## Systems of Linear Equations

A **system of linear equations** is a collection of two or more linear equations involving the same variables. The goal is to find values for these variables that simultaneously satisfy all the equations in the system.

A system of $m$ linear equations in $n$ variables $x_{1},x_{2},\dots,x_{n}$ is of the form:

$$\begin{aligned} a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n} &= b_{1} \\ a_{21}x_{1}+a_{22}x_{2}+\cdots+a_{2n}x_{n} &= b_{2} \\ &\;\;\vdots \\ a_{m1}x_{1}+a_{m2}x_{2}+\cdots+a_{mn}x_{n} &= b_{m} \end{aligned}$$

Or, in a simpler standardized matrix-vector form:

$$Ax=b, \qquad A=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},\quad x=\begin{pmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{pmatrix}\in\mathbb{R}^{n},\quad b=\begin{pmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{m} \end{pmatrix}\in\mathbb{R}^{m}$$

We call $A$ the "coefficient matrix", $b$ the "right-hand side", and $x$ the "vector of the variables". $A$ and $b$ consist of known real values; the vector $x$ is the unknown.
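In the square, invertible case such a system can be solved directly. A minimal sketch with made-up numbers (the system itself is hypothetical, not from the lecture):

```python
import numpy as np

# Hypothetical 2x2 system:
#    x1 + 2*x2 = 5
#  3*x1 + 4*x2 = 11
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # coefficient matrix
b = np.array([5.0, 11.0])     # right-hand side

x = np.linalg.solve(A, b)     # the unknown vector
print(x)                      # [1. 2.]
```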

**The following observation (Beobachtung 3.2) becomes trivial:**

Let $A$ be an $m×n$ matrix. The columns of $A$ are linearly independent if and only if the system $Ax=0$ has only the __unique trivial__ solution $x=0$.
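Equivalently, the columns of $A$ are linearly independent exactly when $A$ has rank $n$. A quick sketch using two hypothetical matrices, one with independent and one with dependent columns:

```python
import numpy as np

# Beobachtung 3.2 in code: columns of A are linearly independent
# iff rank(A) == n, i.e. Ax = 0 forces x = 0.
A_indep = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])   # independent columns
A_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])     # second column = 2 * first column

for A in (A_indep, A_dep):
    n = A.shape[1]
    print(np.linalg.matrix_rank(A) == n)   # True, then False
```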

## Pagerank Algorithm (Google)

**PageRank-Principles:**

- Citations from relevant pages are worth more (no spamming links from useless sites)
- Citations from sites which cite a lot of sites are worth less

Now you might see that we have a small problem. We want to define the rank of a page based on the rank of other pages (which is also based on the rank of other pages…), effectively creating a recursive problem.

`Relevance:`

The relevance of a site is the sum of the relevances of the sites citing it, where each citing site's relevance is divided by the number of sites that site itself cites.

*Analogously for the other sites…*

And wow, look at this, we got a system of 6 linear equations in 6 variables (sites). Unfortunately, it has the useless trivial solution $x=0$.

**Fix:**

We will use a slight approximation using a damping factor $d$ close to $1$ (for example, $d=7/8$):

$$x_{2}=(1-d)+d\left(\frac{x_{1}}{2}+x_{4}+x_{5}+\frac{x_{6}}{4}\right)$$

This now allows us to have a unique (non-trivial) solution to our problem ($d=7/8$):

$$x_{1}=0.31797,\; x_{2}=1.6761,\; x_{3}=1.7307,\; x_{4}=0.31797,\; x_{5}=1.0751,\; x_{6}=0.88217$$

Page 3 (rank 1.7307) wins.
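The damped system $x=(1-d)\mathbf{1}+dMx$ rearranges to the linear system $(I-dM)x=(1-d)\mathbf{1}$, which can be solved like any other. A sketch on a small *hypothetical* 3-site link graph (not the lecture's 6-site example, whose full link structure is not reproduced here):

```python
import numpy as np

# Hypothetical link graph: site 1 cites sites 2 and 3,
# site 2 cites site 3, site 3 cites site 1.
# M[i, j] = 1/outdegree(j) if site j cites site i, else 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
d = 7 / 8                      # damping factor close to 1
n = M.shape[0]

# Solve (I - d*M) x = (1-d) * 1  -- a plain linear system Ax = b.
x = np.linalg.solve(np.eye(n) - d * M, (1 - d) * np.ones(n))
print(x)                       # the PageRank scores of the 3 sites
```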

**Continue here:** 08 Gauss Elimination, Elimination Matrices and Solution Sets