Lecture from 16.10.2024 | Video: Videos ETHZ

Elementary Row Operations

Why Elementary Row Operations Preserve the Solution Set

Each elementary row operation corresponds to an invertible transformation of the system of equations. Since each operation is invertible, we can always reverse the operation to recover the original system, implying that the two systems (before and after applying operations) are equivalent.

Formally: (Lemma 3.3)

If we say $(\tilde{A} \mid \tilde{b})$ arises from $(A \mid b)$ by an elementary row operation, then there exists an invertible matrix $M$ such that $\tilde{A} = MA$ and $\tilde{b} = Mb$. Hence, every solution of $Ax = b$ is a solution of $\tilde{A}x = \tilde{b}$ (and vice versa, since $M$ is invertible).

Intuitively, if we have an $x$ which works out for $Ax = b$ and we simply multiply both sides by the same thing, then $MAx = Mb$ must be true too.
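For instance (a small $2 \times 2$ illustration of my own, not from the lecture): the operation "add $\lambda$ times row 1 to row 2" corresponds to multiplying from the left by

$$M = \begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix}, \qquad M^{-1} = \begin{pmatrix} 1 & 0 \\ -\lambda & 1 \end{pmatrix},$$

so from $MAx = Mb$ we can always get back to $Ax = b$ by multiplying with $M^{-1}$.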

Gauss Elimination

Theorem 3.5

Let $Ax = b$ be a system of $n$ equations in $n$ variables. The following statements are equivalent:

  1. Gauss elimination succeeds (every pivot is non-zero)
  2. The columns of $A$ are linearly independent

Proof:

We’ll show (i) $\Rightarrow$ (ii) and $\neg$(i) $\Rightarrow$ $\neg$(ii) (the contrapositive).

(i) $\Rightarrow$ (ii):

If Gauss elimination succeeded, then we transformed $A$ into an upper triangular matrix $U$ with $u_{ii} \neq 0$ for all $i$. $U$ has linearly independent columns (i.e., no column is a linear combination of the previous ones). It is easy to see that this is the case, since for every pivot the previous columns all have zeros in that position. And since $U = MA$ for an invertible $M$, we have $Ux = 0$ iff $Ax = 0$, so the columns of $A$ are linearly independent as well.

$\neg$(i) $\Rightarrow$ $\neg$(ii):

If Gauss elimination fails, then for some column $j$ the pivot $u_{jj}$ is zero.

The part before column $j$ is alright, since its pivots are non-zero, but at column $j$ we have a zero pivot. Now we need to show that the columns of this matrix are linearly dependent.

To do that we’ll construct an $x \neq 0$ such that $Ux = 0$.

Let us set $x_j = 1$ and $x_{j+1} = \dots = x_n = 0$, and choose the other entries $x_1, \dots, x_{j-1}$ such that:

$$\sum_{i=1}^{j-1} u_{ki}\, x_i = -u_{kj} \qquad \text{for } k = 1, \dots, j-1.$$

Now this system has a solution, since its matrix (the top-left $(j-1) \times (j-1)$ block of $U$) is upper triangular with non-zero pivots, which means it can easily be solved by back substitution (see: Gauss Elimination). We then get a non-zero vector $x$ with $Ux = 0$, implying linear dependence.
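As a small illustration (my own example, not from the lecture), suppose elimination gets stuck at column $j = 2$:

$$U = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \\ 0 & 0 & 5 \end{pmatrix}$$

Setting $x_2 = 1$, $x_3 = 0$ and back-substituting $1 \cdot x_1 = -2$ gives $x = (-2, 1, 0)^T$, and indeed $Ux = 0$: the columns are linearly dependent (column 2 is twice column 1).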

Runtime

The runtime of Gauss elimination is $O(n^3)$.
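As a sketch, a plain-Python forward elimination (illustrative only; it assumes no row swaps are needed) makes the three nested $O(n)$ loops, and hence the $O(n^3)$ total, visible:

```python
def forward_eliminate(A, b):
    """Reduce A (n x n, nested lists of floats) to upper triangular form in place.

    Raises ValueError on a zero pivot (the failure case of Theorem 3.5).
    """
    n = len(A)
    for j in range(n):                # O(n) pivot columns
        if A[j][j] == 0:
            raise ValueError(f"zero pivot in column {j}")
        for i in range(j + 1, n):     # O(n) rows below the pivot
            factor = A[i][j] / A[j][j]
            for k in range(j, n):     # O(n) entries per row
                A[i][k] -= factor * A[j][k]
            b[i] -= factor * b[j]
    return A, b
```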

Inverse Matrices

TLDR: $A^{-1}A = AA^{-1} = I$.

In simpler terms, the inverse of a matrix “undoes” the effect of that matrix, similar to how division undoes multiplication for real numbers. For example, if you apply a matrix transformation to a vector, applying the inverse matrix transformation will return the vector to its original state.
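A quick numerical sanity check of this "undo" behaviour (a sketch using NumPy; the matrix and vector are arbitrary choices of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = np.linalg.inv(A)

x = np.array([1.0, -2.0])
y = A @ x            # apply the transformation
x_back = A_inv @ y   # the inverse undoes it

print(np.allclose(x_back, x))             # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True: A^{-1} A = I
```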

Invertible Matrices

Not all square matrices are invertible (i.e., not all linear transformations are undoable; for example, a projection onto a line collapses information and cannot be reversed).

Case $1 \times 1$: $A = (a)$ is invertible iff $a \neq 0$, and then $A^{-1} = (1/a)$.

Case $2 \times 2$: $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is invertible iff $ad - bc \neq 0$, and then

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

Proof:

  1. Let’s first make an augmented matrix:

$$\left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right)$$

  2. We’ll make the pivot in the first row = 1. If $a \neq 0$, we divide the first row by $a$:

$$\left(\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ c & d & 0 & 1 \end{array}\right)$$

  3. We’ll now eliminate the first element in the second row by subtracting $c$ times the first row:

$$\left(\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & d - \frac{cb}{a} & -\frac{c}{a} & 1 \end{array}\right)$$

    which simplifies to: $d - \frac{cb}{a} = \frac{ad - bc}{a}$.

  4. Make the pivot in the second row and second column 1

    Next, we need to make the pivot in the second row and second column equal to 1. If $ad - bc \neq 0$, we can divide the second row by $\frac{ad - bc}{a}$. The resulting matrix is:

$$\left(\begin{array}{cc|cc} 1 & \frac{b}{a} & \frac{1}{a} & 0 \\ 0 & 1 & -\frac{c}{ad - bc} & \frac{a}{ad - bc} \end{array}\right)$$

  5. Eliminate the second element in the first row

    We subtract $\frac{b}{a}$ times the second row from the first row. The resulting matrix becomes:

$$\left(\begin{array}{cc|cc} 1 & 0 & \frac{d}{ad - bc} & -\frac{b}{ad - bc} \\ 0 & 1 & -\frac{c}{ad - bc} & \frac{a}{ad - bc} \end{array}\right)$$

    which holds provided that $ad - bc \neq 0$ (the determinant of $A$ is non-zero).
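The elimination above yields the closed-form $2 \times 2$ inverse; here is a small sketch implementing it (function name and layout are my own, not the lecture's):

```python
def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]] as nested lists.

    Uses the closed form derived above:
    A^{-1} = 1/(ad - bc) * [[d, -b], [-c, a]].
    """
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible (ad - bc = 0)")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Example: inverse_2x2(2, 1, 1, 3) == [[0.6, -0.2], [-0.2, 0.4]]
```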

The inverse is unique

If we have a matrix $A$ with two inverses $B$ and $C$, then $B = C$.

Proof:

$$B = BI = B(AC) = (BA)C = IC = C.$$

Inverse of a product

If $A$ and $B$ are invertible matrices, then $AB$ is also invertible and $(AB)^{-1} = B^{-1}A^{-1}$.

Proof:

$$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I, \qquad (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I.$$

Inverse of a Transposition

If we have a matrix $A$ which is invertible, then $A^T$ is also invertible and:

$$(A^T)^{-1} = (A^{-1})^T$$

This follows from $(XY)^T = Y^T X^T$: indeed, $(A^{-1})^T A^T = (AA^{-1})^T = I^T = I$.

Inverse Theorem (Theorem 3.11)

Let $A$ be an $n \times n$ matrix. The following are equivalent:

  1. $A$ is invertible
  2. For every $b \in \mathbb{R}^n$, $Ax = b$ has a unique solution
  3. The columns of $A$ are linearly independent

Proof:

We’ll prove the following implications: (i) $\Rightarrow$ (ii), (ii) $\Rightarrow$ (iii), (iii) $\Rightarrow$ (ii), and (ii) $\Rightarrow$ (i).

(i) $\Rightarrow$ (ii):

If $A$ is invertible, then $x = A^{-1}b$ is a solution because $Ax = A(A^{-1}b) = (AA^{-1})b = b$.

And it’s also unique because for any $y$ with $Ay = b$, we have $y = A^{-1}Ay = A^{-1}b = x$.

(ii) $\Rightarrow$ (iii):

If $Ax = b$ always has a unique solution, then in particular for $b = 0$. And we’ve proven that if $Ax = 0$ has only the unique (trivial) solution $x = 0$, then the columns of $A$ are linearly independent (see Gauss Elimination).

(iii) $\Rightarrow$ (ii):

Let $x$ and $y$ both be solutions of the system, i.e. $Ax = b = Ay$.

If Gauss elimination succeeds, which it must since the columns of $A$ are independent (Theorem 3.5), then we’ll find a solution to our equation, so a solution exists.

But since the columns of $A$ are linearly independent, $A(x - y) = 0$ has only the trivial solution, so $x - y = 0$ and hence $x = y$: the solution is unique.

(ii) $\Rightarrow$ (i):

If $Ax = b$ has a unique solution for all $b$, then also for $b = e_1, \dots, e_n$ (the unit vectors). This implies that there are vectors $c_1, \dots, c_n$ such that:

$$Ac_i = e_i \quad \text{for } i = 1, \dots, n, \qquad \text{i.e. } AC = I \text{ for } C = (c_1 \mid \dots \mid c_n).$$

But now we also need to show $CA = I$.
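The construction in (ii) $\Rightarrow$ (i), solving $Ac_i = e_i$ for each unit vector and stacking the solutions as columns, can be sketched directly (illustrative NumPy code, not the lecture's algorithm):

```python
import numpy as np

def inverse_by_columns(A):
    """Build A^{-1} column by column by solving A c_i = e_i."""
    n = A.shape[0]
    cols = [np.linalg.solve(A, np.eye(n)[:, i]) for i in range(n)]
    return np.column_stack(cols)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
C = inverse_by_columns(A)
print(np.allclose(A @ C, np.eye(2)))  # True: AC = I by construction
print(np.allclose(C @ A, np.eye(2)))  # True: CA = I as well
```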

Continue here: 10 Calculating the Inverse, LU and LUP Decomposition