Lecture from 16.10.2024 | Video: Videos ETHZ

## Elementary Row Operations

### Why Elementary Row Operations Preserve the Solution Set

Each elementary row operation corresponds to an invertible transformation of the system of equations. Since each operation is invertible, we can always reverse the operation to recover the original system, implying that the two systems (before and after applying operations) are equivalent.

**Formally:** (Lemma 3.3)

If $A'x = MAx = Mb = b'$ for an invertible $M$, then there exists $M' = M^{-1}$ such that $Ax = M'A'x = M'b' = b$. Hence, every solution of $Ax=b$ is a solution of $A'x=b'$, and vice versa.

Intuitively, if we have an $x$ which works out for $Ax=b$ and we simply multiply both sides by the same invertible matrix, then $A'x=b'$ must be true too: $Ax=b \iff A'x=b'$.
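As a sanity check, this can be verified numerically. The sketch below (example values chosen by hand, not from the lecture) applies one elementary row operation to both sides of a small system and confirms the original solution still satisfies the transformed system:

```python
# Check that an elementary row operation (here R2 -> R2 - 3*R1)
# preserves the solution set of Ax = b.

def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [3.0, 4.0]]
b = [5.0, 6.0]
x = [-4.0, 4.5]          # the unique solution of Ax = b

# Apply R2 -> R2 - 3*R1 to both A and b (same as left-multiplying by an
# invertible matrix M).
A2 = [A[0], [A[1][j] - 3 * A[0][j] for j in range(2)]]
b2 = [b[0], b[1] - 3 * b[0]]

assert matvec(A, x) == b      # x solves the original system ...
assert matvec(A2, x) == b2    # ... and the transformed one
```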

## Gauss Elimination

### Theorem 3.5

Let $Ax=b$ be a system of $m$ equations in $m$ variables. The following statements are equivalent:

- Gauss elimination succeeds (every pivot is non-zero)
- The columns of $A$ are linearly independent

#### Proof:

We’ll show (i) $⟹$ (ii) and not (i) $⟹$ not (ii) (contrapositive).

**(i) ⇒(ii):**

If Gauss elimination succeeded, then we transformed $A \to U$, an upper triangular matrix with $u_{jj} \neq 0$ for all $j$. $U$ has linearly independent columns (i.e. no column is a linear combination of the previous ones). It is easy to see that this is the case, since for every pivot the previous columns all have zeros in that position.

**not (i) ⇒ not (ii)**

If Gauss elimination fails, then for some column $c_j$ the pivot is zero.

The part before is fine, because its pivots are non-zero, but at column $c_j$ we hit a zero pivot. Now we need to show that the columns of this matrix are linearly dependent.

To do that we'll construct $x \neq 0$ such that $A'x=0$.

Let us set $x_{j+1}=x_{j+2}=\dots=x_{m}=0$, choose $x_j = 1$, and pick the remaining $x_i$ such that:

$$U\begin{pmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{j-1} \end{pmatrix} + x_{j}v = 0$$

where $U$ is the upper-left $(j-1)\times(j-1)$ block and $v$ contains the first $j-1$ entries of column $c_j$. This equation has a solution, since $U$ is a valid upper triangular matrix with non-zero pivots, which means that $Uy=-v$ can be easily solved by back substitution (see: Gauss Elimination). We then get a non-zero vector $x$, implying linear dependence.
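The back substitution step used above can be sketched in Python. This is a minimal version (no pivoting, example values chosen by hand) that assumes the diagonal entries are non-zero:

```python
# Back substitution for an upper triangular system Ux = v with non-zero
# pivots, as used in the proof above.

def back_substitute(U, v):
    """Solve Ux = v where U is upper triangular with non-zero diagonal."""
    m = len(U)
    x = [0.0] * m
    for i in range(m - 1, -1, -1):           # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, m))
        x[i] = (v[i] - s) / U[i][i]          # pivot U[i][i] != 0
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
v = [7.0, 8.0, 4.0]
print(back_substitute(U, v))   # -> [2.0, 2.0, 1.0]
```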

### Runtime

The runtime of Gauss Elimination is:

$$\frac{2}{3}m^{3} + \frac{3}{2}m^{2} + \frac{7}{6}m = O(m^{3})$$

## Inverse Matrices

**TLDR:** $MM^{-1} = M^{-1}M = I$.

In simpler terms, the inverse of a matrix “undoes” the effect of that matrix, similar to how division undoes multiplication for real numbers. For example, if you apply a matrix transformation to a vector, applying the inverse matrix transformation will return the vector to its original state.
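A small pure-Python illustration of this (example matrix and vector chosen by hand, with its inverse taken from the 2x2 formula derived in the next subsection):

```python
# Applying M and then M^{-1} returns the original vector.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

M     = [[2.0, 1.0],
         [1.0, 1.0]]          # det = 1, so M is invertible
M_inv = [[1.0, -1.0],
         [-1.0, 2.0]]         # inverse of M, via the 2x2 formula

v = [3.0, 5.0]
w = matvec(M, v)              # transform v ...
assert matvec(M_inv, w) == v  # ... and undo the transformation
```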

### Invertible Matrices

Not all square matrices are invertible. (i.e. not all linear transformations are undoable, for example $\mathbb{R}^3 \to \mathbb{R}^2$).

#### Case 1x1:

$$M = \begin{bmatrix} a \end{bmatrix} \Rightarrow M^{-1} = \begin{bmatrix} \frac{1}{a} \end{bmatrix} \quad (\text{if } a \neq 0).$$

#### Case 2x2:

$$M = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \Rightarrow M^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \quad (\text{if } ad-bc \neq 0)$$

**Proof:**

- Let’s first form the augmented matrix $[M \mid I]$:

$$\left[\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right]$$

- We’ll make the pivot in the first row equal to 1. If $a \neq 0$, we can scale the first row: $R_{1} \to \frac{1}{a}R_{1}$:

$$\left[\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ c & d & 0 & 1 \end{array}\right]$$

- We’ll now eliminate the first element in the second row: $R_{2} \to R_{2} - cR_{1}$

*which simplifies to:*

$$\left[\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & \frac{ad-bc}{a} & -\frac{c}{a} & 1 \end{array}\right]$$

- Make the pivot in the second row and second column 1

*Next, we need to make the pivot in the second row and second column equal to 1. If $ad-bc \neq 0$, we can divide the second row by $\frac{ad-bc}{a}$. The resulting matrix is:*

$$\left[\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & 1 & -\frac{c}{ad-bc} & \frac{a}{ad-bc} \end{array}\right]$$

- Eliminate the second element in the first row

*We subtract $\frac{b}{a}$ times the second row from the first row:* $R_{1} \to R_{1} - \frac{b}{a}R_{2}$. The resulting matrix becomes:

$$\left[\begin{array}{cc|cc} 1 & 0 & \frac{d}{ad-bc} & -\frac{b}{ad-bc} \\ 0 & 1 & -\frac{c}{ad-bc} & \frac{a}{ad-bc} \end{array}\right]$$

The right-hand block is exactly $\frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$, which holds provided that $ad-bc \neq 0$ (the determinant of $M$ is non-zero).
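The derived formula can be checked in a short Python sketch (the function name `inverse_2x2` is ours, not from the lecture):

```python
# The 2x2 inverse formula: M^{-1} = 1/(ad-bc) * [[d, -b], [-c, a]].

def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]], assuming ad - bc != 0."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det],
            [-c / det, a / det]]

M = [[1.0, 2.0],
     [3.0, 4.0]]
Minv = inverse_2x2(1.0, 2.0, 3.0, 4.0)

# Check that M * Minv = I.
I = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(I)   # -> [[1.0, 0.0], [0.0, 1.0]]
```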

### The inverse is unique

If we have a matrix $M$ with two inverses $A$ and $B$, then $A=B$.

**Proof:**

$$A = AI = A(MB) = (AM)B = IB = B$$

### Inverse of a product

If $A$ and $B$ are invertible matrices, then $AB$ is also invertible and $(AB)^{-1} = B^{-1}A^{-1}$.

**Proof:**

$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I \qquad (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I$$
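A quick numerical sanity check of the identity $(AB)^{-1} = B^{-1}A^{-1}$ (not the proof itself; the 2x2 helper functions and example matrices are ours):

```python
# Check (AB)^{-1} == B^{-1} A^{-1} on 2x2 example matrices.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """2x2 inverse via the explicit formula (assumes det != 0)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 2.0], [0.0, 1.0]]

lhs = inv2(matmul(A, B))              # (AB)^{-1}
rhs = matmul(inv2(B), inv2(A))        # B^{-1} A^{-1}
assert lhs == rhs
```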

### Inverse of a Transposition

If we have a matrix $A$ which is invertible, then:

$$(A^{T})^{-1} = (A^{-1})^{T}$$

## Inverse Theorem (Theorem 3.11)

Let $A$ be an $m \times m$ matrix. The following are equivalent:

- $A$ is invertible
- For every $b \in \mathbb{R}^m$, $Ax=b$ has a unique solution $x$
- The columns of $A$ are linearly independent

### Proof:

We’ll prove the following implications:

We'll prove the implications (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) and (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i).

**1. (i) ⇒ (ii)**

If $A$ is invertible, then $x = A^{-1}b$ is a solution because

$$A(A^{-1}b) = (AA^{-1})b = Ib = b$$

And it's also unique, because any solution $x$ satisfies $x = Ix = A^{-1}Ax = A^{-1}b$.

**(ii) ⇒ (iii)**

If $Ax=b$ always has a unique solution, then in particular for $b=0$. And we've proven that if $Ax=0$ has only the unique (trivial) solution, then the columns of $A$ are linearly independent (see: Gauss Elimination).

**(iii) ⇒ (ii)**

Let $x, x'$ be solutions of $Ax=b$. (Since the columns are independent, Gauss elimination succeeds, so at least one solution exists.)

$$\Rightarrow Ax = Ax' = b \Rightarrow A(x - x') = 0$$

But since the columns of $A$ are linearly independent, only the trivial solution exists: $x - x' = 0 \Rightarrow x = x'$.

**(ii) ⇒ (i)**

If $Ax=b$ has a unique solution for all $b$, then also for $b=e_{1},e_{2},…,e_{m}$ (unit vectors). This implies that there are $v_{1},v_{2},…,v_{m}$ such that:

$$Av_{1} = e_{1},\; Av_{2} = e_{2},\; \dots,\; Av_{m} = e_{m} \;\Rightarrow\; A\underbrace{\begin{bmatrix} | & | & & | \\ v_{1} & v_{2} & \cdots & v_{m} \\ | & | & & | \end{bmatrix}}_{B} = \underbrace{\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}}_{I}$$

But now we also need to show $BA=I$.

$$A = AI = IA = (AB)A = A(BA) \Longrightarrow BA = I$$

**Continue here:** 10 Calculating the Inverse, LU and LUP Decomposition
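The (ii) ⇒ (i) construction above can be sketched numerically: solve $Ax = e_i$ for each unit vector, stack the solutions as the columns of $B$, and check $AB = BA = I$. The example below is a 2x2 toy case using Cramer's rule as the solver (our choice; any solver works):

```python
# Build the inverse column by column from the solutions of Ax = e_i.

def solve2(A, b):
    """Solve the 2x2 system Ax = b via Cramer's rule (det(A) != 0)."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]
cols = [solve2(A, e) for e in ([1.0, 0.0], [0.0, 1.0])]   # v_1, v_2
B = [[cols[j][i] for j in range(2)] for i in range(2)]    # columns -> matrix

I = [[1.0, 0.0], [0.0, 1.0]]
assert matmul(A, B) == I and matmul(B, A) == I
```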