Lecture from 25.09.2024 | Video: Videos ETHZ

**Related to:** Linear Combinations

Vectors $v_{1},v_{2},\dots,v_{n}$ in $\mathbb{R}^{m}$ are said to be **linearly dependent** if __at least one of them__ is a linear combination of the others. This means there exists some index $k\in[n]$ and scalars $\lambda_{j}$ such that:

$$v_{k}=\sum_{\substack{j=1\\ j\neq k}}^{n}\lambda_{j}v_{j}$$

If no such scalars exist, the vectors $v_{1},v_{2},\dots,v_{n}$ are **linearly independent**.
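This definition can be checked numerically: the vectors are linearly dependent exactly when the matrix having them as columns has rank strictly less than the number of vectors. A minimal NumPy sketch (the helper name `linearly_dependent` is mine, not from the lecture):

```python
import numpy as np

def linearly_dependent(vectors):
    """Return True if the given vectors are linearly dependent.

    The vectors are dependent exactly when the matrix having them as
    columns has rank strictly less than the number of vectors.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < len(vectors)

# v1 = (2, 3), v2 = (3, -1): not scalar multiples, hence independent.
assert not linearly_dependent([np.array([2, 3]), np.array([3, -1])])
# v1 = (2, 3), v2 = (4, 6): v2 = 2*v1, hence dependent.
assert linearly_dependent([np.array([2, 3]), np.array([4, 6])])
```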

## Examples

| Case | In/dependent | Explanation |
|---|---|---|
| Two vectors $v_{1}=(2,3)$ and $v_{2}=(3,-1)$ | Independent | These two vectors are not scalar multiples of each other, so they are linearly independent. |
| Two vectors $v_{1}=(2,3)$ and $v_{2}=(4,6)$ | Dependent | $v_{2}$ is a scalar multiple of $v_{1}$ ($v_{2}=2v_{1}$), so they are linearly dependent. |
| $n$ vectors $v_{1},v_{2},\dots,v_{n}$ in $\mathbb{R}^{n-1}$, e.g. $v_{1},v_{2},v_{3}$ in $\mathbb{R}^{2}$ | Dependent | Any $n$ vectors in $\mathbb{R}^{n-1}$ are linearly dependent: either some $n-1$ of them are independent and span $\mathbb{R}^{n-1}$, so the $n$-th vector is a combination of them, or those $n-1$ are already dependent, and then so are all $n$. |
| A single non-zero vector $(v\neq 0)$ | Independent | A single non-zero vector is always linearly independent: the only solution of $\lambda v=0$ is $\lambda=0$. |
| A single zero vector $(v=0)$ | Dependent | The zero vector is always linearly dependent, since $1\cdot v=0$ is a non-trivial linear combination yielding the zero vector. |
| A sequence containing the zero vector $(v_{1},v_{2},\dots,0,\dots,v_{n})$ | Dependent | Any sequence containing the zero vector is linearly dependent, because the zero vector is a linear combination of the other vectors (with all $\lambda_{j}=0$). |
| A sequence in which some vector appears twice or more | Dependent | If a vector appears twice, the sequence is linearly dependent, because each copy is a scalar multiple (with scalar $1$) of the other. |
| The empty sequence $()$ | Independent | The empty sequence is trivially linearly independent since there are no vectors to form a linear dependence relation. |
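The third row of the table can also be verified numerically: for $n$ vectors in $\mathbb{R}^{n-1}$ a non-trivial dependence relation always exists, and one can be read off the null space of the matrix of column vectors. A sketch, assuming NumPy; the concrete vectors are my own example:

```python
import numpy as np

# Three vectors in R^2: always dependent (rank of a 2x3 matrix is at most 2).
v1, v2, v3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 3.0])
A = np.column_stack([v1, v2, v3])

# A non-trivial dependence relation is a null-space vector of A;
# with rank 2 < 3, the last right-singular vector of the SVD provides one.
_, _, Vt = np.linalg.svd(A)
coeffs = Vt[-1]                    # (lambda_1, lambda_2, lambda_3)
assert not np.allclose(coeffs, 0)  # not all zero (it is a unit vector)
assert np.allclose(A @ coeffs, 0)  # the combination gives the zero vector
```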

## Alternate Definitions of Linear Dependence

Let $v_{1},v_{2},\dots,v_{n}$ be vectors in $\mathbb{R}^{m}$. The following statements are equivalent (either all true or all false):

- At least one vector is a linear combination of the others (as in the main definition).
- There exist scalars $\lambda_{1},\lambda_{2},\dots,\lambda_{n}\in\mathbb{R}$, not all zero, such that $\sum_{j=1}^{n}\lambda_{j}v_{j}=0$. In other words, the zero vector is a non-trivial linear combination of the vectors.
- At least one vector is a linear combination of the preceding ones in the sequence $v_{1},v_{2},…,v_{n}$.

How do we know these statements are equivalent? The idea of the proof is to establish the cycle $(i)\Rightarrow(ii)$, $(ii)\Rightarrow(iii)$, and $(iii)\Rightarrow(i)$.

### Proof: (i) $⟹$ (ii)

Assume (i) is true: one of the vectors, say $v_{k}$, is a linear combination of the others. That is, there exist scalars $\lambda_{1},\dots,\lambda_{k-1},\lambda_{k+1},\dots,\lambda_{n}\in\mathbb{R}$ such that:

$$v_{k}=\sum_{\substack{j=1\\ j\neq k}}^{n}\lambda_{j}v_{j}$$

Now, subtract $v_{k}$ from both sides:

$$0=-v_{k}+\sum_{\substack{j=1\\ j\neq k}}^{n}\lambda_{j}v_{j}$$

This equation can be rewritten as:

$$0=\sum_{j=1}^{n}\lambda_{j}v_{j}$$

where $\lambda_{k}=-1$ and the remaining $\lambda_{j}$ (for $j\neq k$) are as above. Since $\lambda_{k}=-1\neq 0$, we have a non-trivial solution (not all $\lambda_{j}$ are zero).

Thus, (ii) holds: there exist scalars, not all zero, whose linear combination of the vectors equals the zero vector.

### Proof: (ii) $⟹$ (iii)

Assume (ii) is true: there exist scalars $\lambda_{1},\lambda_{2},\dots,\lambda_{n}\in\mathbb{R}$, not all zero, such that:

$$\sum_{j=1}^{n}\lambda_{j}v_{j}=0$$

Let $k$ be the largest index such that $\lambda_{k}\neq 0$ (this must exist, as not all $\lambda_{j}$ are zero by assumption). All terms with $j>k$ vanish, so we can separate the term involving $v_{k}$:

$$\lambda_{k}v_{k}+\sum_{j=1}^{k-1}\lambda_{j}v_{j}=0$$

Now, isolate $v_{k}$ by moving the sum to the other side:

$$\lambda_{k}v_{k}=-\sum_{j=1}^{k-1}\lambda_{j}v_{j}$$

Next, divide through by $\lambda_{k}$ (which is non-zero by assumption) to solve for $v_{k}$:

$$v_{k}=\sum_{j=1}^{k-1}\frac{-\lambda_{j}}{\lambda_{k}}v_{j}$$

Thus, $v_{k}$ can be written as a linear combination of the preceding vectors $v_{1},v_{2},\dots,v_{k-1}$ (if $k=1$, the sum is empty and $v_{1}=0$ is the empty linear combination). This matches statement (iii): at least one vector (in this case, $v_{k}$) is a linear combination of the preceding vectors.

Therefore, (ii) implies (iii).
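The proof of (ii) $\Rightarrow$ (iii) is constructive, so it translates directly into code: pick the largest index with a non-zero coefficient and solve for that vector. A sketch with NumPy (the helper name `express_as_preceding` is mine):

```python
import numpy as np

def express_as_preceding(vectors, coeffs):
    """Follow the (ii) => (iii) construction: given a non-trivial relation
    sum_j coeffs[j] * vectors[j] = 0, return (k, mu) such that
    vectors[k] = sum_{j < k} mu[j] * vectors[j]."""
    k = max(j for j, c in enumerate(coeffs) if c != 0)  # largest index with lambda_k != 0
    mu = [-coeffs[j] / coeffs[k] for j in range(k)]     # mu_j = -lambda_j / lambda_k
    return k, mu

v = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 3.0])]
lam = [2.0, 3.0, -1.0]  # 2*v1 + 3*v2 - v3 = 0, a non-trivial relation
k, mu = express_as_preceding(v, lam)
assert k == 2 and np.allclose(v[k], sum(m * v[j] for j, m in enumerate(mu)))
```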

### Proof: (iii) $⟹$ (i)

This implication is straightforward.

Assume (iii) is true: at least one vector, say $v_{k}$, is a linear combination of the preceding vectors in the sequence $v_{1},v_{2},…,v_{k−1}$:

$$v_{k}=\sum_{j=1}^{k-1}\lambda_{j}v_{j}$$

Since $v_{k}$ is a linear combination of some of the other vectors in the sequence (the remaining coefficients can be taken to be $0$), there exists a vector, $v_{k}$, which is a linear combination of the others. This is exactly what statement (i) asserts.

Thus, (iii) implies (i).

## Definitions of Linear Independence

Let $v_{1},v_{2},\dots,v_{n}$ be vectors in $\mathbb{R}^{m}$. The following statements are equivalent (either all true or all false); when they hold, the vectors are **linearly independent**:

- No vector in the set can be written as a linear combination of the others.
- The only scalars $\lambda_{1},\lambda_{2},\dots,\lambda_{n}\in\mathbb{R}$ satisfying $\sum_{j=1}^{n}\lambda_{j}v_{j}=0$ are $\lambda_{1}=\lambda_{2}=\dots=\lambda_{n}=0$ (the trivial solution).
- No vector is a linear combination of the preceding ones in the sequence $v_{1},v_{2},…,v_{n}$.

## Uniqueness of Solutions with Linearly Independent Sets of Vectors

When a set of vectors $v_{1},v_{2},\dots,v_{n}$ in $\mathbb{R}^{m}$ is linearly independent, any vector $b\in\mathbb{R}^{m}$ can be expressed as a linear combination of those vectors in at most one way: if coefficients exist, they are unique.


### Proof of Uniqueness

Suppose $b\in\mathbb{R}^{m}$ can be written as a linear combination of the vectors $v_{1},v_{2},\dots,v_{n}$ in two ways, with coefficients $\lambda_{1},\lambda_{2},\dots,\lambda_{n}$ and $\mu_{1},\mu_{2},\dots,\mu_{n}$, such that:

$$b=\lambda_{1}v_{1}+\lambda_{2}v_{2}+\dots+\lambda_{n}v_{n}$$

and

$$b=\mu_{1}v_{1}+\mu_{2}v_{2}+\dots+\mu_{n}v_{n}$$

Now subtract the second equation from the first:

$$(\lambda_{1}v_{1}+\lambda_{2}v_{2}+\dots+\lambda_{n}v_{n})-(\mu_{1}v_{1}+\mu_{2}v_{2}+\dots+\mu_{n}v_{n})=0$$

This simplifies to:

$$(\lambda_{1}-\mu_{1})v_{1}+(\lambda_{2}-\mu_{2})v_{2}+\dots+(\lambda_{n}-\mu_{n})v_{n}=0$$

Since the vectors $v_{1},v_{2},\dots,v_{n}$ are linearly independent, the only solution to this equation is for each coefficient to be zero. That is:

$$\lambda_{1}-\mu_{1}=0,\quad \lambda_{2}-\mu_{2}=0,\quad \dots,\quad \lambda_{n}-\mu_{n}=0$$

Therefore, $\lambda_{k}=\mu_{k}$ for all $k=1,2,\dots,n$.

### Conclusion

Since $λ_{k}=μ_{k}$ for all $k$, the scalars (coefficients) used to express $b$ as a linear combination of $v_{1},v_{2},…,v_{n}$ must be the same. This proves that the solution is unique.
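This uniqueness can be illustrated numerically: when the column vectors are independent (here the matrix is square and invertible), solving the linear system yields the one and only coefficient vector. The concrete vectors and $b$ below are my own example:

```python
import numpy as np

# v1 and v2 are linearly independent (not scalar multiples of each other),
# so the coefficients expressing b are unique.
v1, v2 = np.array([2.0, 3.0]), np.array([3.0, -1.0])
A = np.column_stack([v1, v2])
b = np.array([7.0, 2.0])

coeffs = np.linalg.solve(A, b)  # the unique (lambda_1, lambda_2)
assert np.allclose(coeffs[0] * v1 + coeffs[1] * v2, b)
```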

## Vector Span

### Definition

The **span** of a set of vectors $\{v_{1},v_{2},\dots,v_{n}\}$ in a vector space is the set of all possible linear combinations of those vectors. Mathematically, the span is defined as:

$$\operatorname{span}(v_{1},\dots,v_{n})=\{\lambda_{1}v_{1}+\lambda_{2}v_{2}+\dots+\lambda_{n}v_{n}:\lambda_{1},\dots,\lambda_{n}\in\mathbb{R}\}$$

In other words, the span of a set of vectors is the collection of all vectors that can be formed by taking scalar multiples of each vector and adding them together.
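Membership in a span amounts to the solvability of a linear system, which can be tested with least squares: $b$ lies in the span exactly when the best approximation leaves zero residual. A sketch (the helper `in_span` and its tolerance are my own choices):

```python
import numpy as np

def in_span(vectors, b, tol=1e-10):
    """Check whether b lies in span(vectors): solve the least-squares
    problem and test whether the residual is (numerically) zero."""
    A = np.column_stack(vectors)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.linalg.norm(A @ coeffs - b) < tol)

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert in_span([v1, v2], np.array([3.0, -2.0, 0.0]))     # lies in the x-y plane
assert not in_span([v1, v2], np.array([0.0, 0.0, 1.0]))  # sticks out of the plane
```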

### Proof: Adding a Vector from a Span Does Not Change the Span

Let $v_{1},v_{2},\dots,v_{n}\in\mathbb{R}^{m}$ be vectors, and let $v\in\mathbb{R}^{m}$ be a linear combination of $v_{1},\dots,v_{n}$. We aim to show that:

$$\operatorname{span}(v_{1},\dots,v_{n})=\operatorname{span}(v_{1},\dots,v_{n},v)$$

**Proof Strategy:**

We will prove that the two spans are equal by showing:

- $\operatorname{span}(v_{1},\dots,v_{n})\subseteq\operatorname{span}(v_{1},\dots,v_{n},v)$
- $\operatorname{span}(v_{1},\dots,v_{n},v)\subseteq\operatorname{span}(v_{1},\dots,v_{n})$

Once both inclusions are established, we can conclude that the two spans are identical.

**Part 1: $\operatorname{span}(v_{1},\dots,v_{n})\subseteq\operatorname{span}(v_{1},\dots,v_{n},v)$**

Let $q$ be an arbitrary element of $\operatorname{span}(v_{1},\dots,v_{n})$. By definition, $q$ can be written as a linear combination of $v_{1},\dots,v_{n}$:

$$q=a_{1}v_{1}+a_{2}v_{2}+\dots+a_{n}v_{n}$$

Giving $v$ the coefficient $0$, we can also write:

$$q=a_{1}v_{1}+a_{2}v_{2}+\dots+a_{n}v_{n}+0\cdot v$$

Thus, $q$ is also an element of $\operatorname{span}(v_{1},\dots,v_{n},v)$. Therefore, every element of $\operatorname{span}(v_{1},\dots,v_{n})$ is in $\operatorname{span}(v_{1},\dots,v_{n},v)$, proving:

$$\operatorname{span}(v_{1},\dots,v_{n})\subseteq\operatorname{span}(v_{1},\dots,v_{n},v)$$

**Part 2: $\operatorname{span}(v_{1},\dots,v_{n},v)\subseteq\operatorname{span}(v_{1},\dots,v_{n})$**

Let $w$ be an arbitrary element of $\operatorname{span}(v_{1},\dots,v_{n},v)$. By definition, $w$ can be written as a linear combination of $v_{1},v_{2},\dots,v_{n},v$:

$$w=m_{1}v_{1}+m_{2}v_{2}+\dots+m_{n}v_{n}+m_{v}v$$

Since $v\in\operatorname{span}(v_{1},\dots,v_{n})$, we know that $v$ can be expressed as:

$$v=n_{1}v_{1}+n_{2}v_{2}+\dots+n_{n}v_{n}$$

Substitute this expression for $v$ into the equation for $w$:

$$w=m_{1}v_{1}+m_{2}v_{2}+\dots+m_{n}v_{n}+m_{v}(n_{1}v_{1}+n_{2}v_{2}+\dots+n_{n}v_{n})$$

Now, simplify the expression:

$$w=(m_{1}+m_{v}n_{1})v_{1}+(m_{2}+m_{v}n_{2})v_{2}+\dots+(m_{n}+m_{v}n_{n})v_{n}$$

This shows that $w$ is a linear combination of $v_{1},\dots,v_{n}$. Therefore, $w\in\operatorname{span}(v_{1},\dots,v_{n})$, proving:

$$\operatorname{span}(v_{1},\dots,v_{n},v)\subseteq\operatorname{span}(v_{1},\dots,v_{n})$$

**Conclusion:**

Since both:

$$\operatorname{span}(v_{1},\dots,v_{n})\subseteq\operatorname{span}(v_{1},\dots,v_{n},v)\quad\text{and}\quad\operatorname{span}(v_{1},\dots,v_{n},v)\subseteq\operatorname{span}(v_{1},\dots,v_{n})$$

we conclude that:

$$\operatorname{span}(v_{1},\dots,v_{n})=\operatorname{span}(v_{1},\dots,v_{n},v)$$

**Continue here:** 04 Matrices and Linear Combinations