Lecture from 25.10.2024  Video: Videos ETHZ
Vector Spaces
What exactly is a vector? A common initial thought might be: an element of $R^m$. While true, this is merely an example of a vector, not a definition. We need a more precise and encompassing understanding.
Definition of a Vector Space (Definition 4.1)
A vector is an element of a vector space. Vector spaces are characterized by two key operations on their elements: vector addition and scalar multiplication. These operations must adhere to specific rules to ensure the space behaves in a consistent and predictable manner.
Formal Definition: A vector space is a triple $(V,+,⋅)$ where:
- $V$: A set containing the vectors.
- $+$: A function $+:V×V→V$ representing vector addition. It takes two vectors from $V$ and returns another vector in $V$.
- $⋅$: A function $⋅:R×V→V$ representing scalar multiplication. It takes a real number (a scalar) and a vector from $V$ and returns another vector in $V$.
These operations must satisfy the following eight axioms for all $u,v,w∈V$ and all $λ,μ∈R$:
- Commutativity of addition: $v+w=w+v$
- Associativity of addition: $u+(v+w)=(u+v)+w$
- Existence of a zero vector: There exists a vector $0∈V$ such that $v+0=v$ for all $v∈V$.
- Existence of additive inverses: For every $v∈V$, there exists a vector $−v∈V$ such that $v+(−v)=0$.
- Scalar multiplication identity: $1⋅v=v$
- Compatibility of scalar multiplication: $(λ⋅μ)⋅v=λ⋅(μ⋅v)$
- Distributivity of scalar multiplication over vector addition: $λ⋅(v+w)=λv+λw$
- Distributivity of scalar multiplication over scalar addition: $(λ+μ)⋅v=λv+μv$
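As a quick sanity check (not a proof), the eight axioms can be tested numerically for $R^3$ with componentwise operations; the sample vectors and scalars below are my own illustration:

```python
import numpy as np

# Numerical sanity check of the eight axioms for R^3 with
# componentwise operations. Sample values are arbitrary; this
# illustrates the axioms on one sample, it does not prove them.
rng = np.random.default_rng(0)
u, v, w = rng.random(3), rng.random(3), rng.random(3)
lam, mu = 2.0, -3.0
zero = np.zeros(3)

assert np.allclose(v + w, w + v)                      # 1: commutativity
assert np.allclose(u + (v + w), (u + v) + w)          # 2: associativity
assert np.allclose(v + zero, v)                       # 3: zero vector
assert np.allclose(v + (-v), zero)                    # 4: additive inverse
assert np.allclose(1.0 * v, v)                        # 5: scalar identity
assert np.allclose((lam * mu) * v, lam * (mu * v))    # 6: compatibility
assert np.allclose(lam * (v + w), lam * v + lam * w)  # 7: distributivity over vectors
assert np.allclose((lam + mu) * v, lam * v + mu * v)  # 8: distributivity over scalars
```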
Examples of Vector Spaces
We’ll now explore concrete examples of vector spaces, clarifying how the abstract definition applies in specific cases.
The Real Coordinate Space: $R^m$ (Observation 4.2)
The set $R^m$ consists of all $m$-tuples of real numbers. We represent elements (vectors) of $R^m$ as ordered lists:
$v=(v_{1},v_{2},\dots,v_{m})$, where $v_{i}∈R$.
Vector addition and scalar multiplication are defined componentwise:
- Addition (Definition 1.1): If $v=(v_{1},\dots,v_{m})$ and $w=(w_{1},\dots,w_{m})$, then $v+w=(v_{1}+w_{1},\dots,v_{m}+w_{m})$. For example, in $R^2$, $(1,2)+(3,4)=(4,6)$.
- Scalar Multiplication (Definition 1.3): If $v=(v_{1},\dots,v_{m})$ and $λ∈R$, then $λv=(λv_{1},\dots,λv_{m})$. For example, in $R^3$, $2⋅(1,0,−1)=(2,0,−2)$.
$(R^m,+,⋅)$ forms a vector space because these operations satisfy all eight axioms of a vector space. $R^m$ is often the first and most familiar example encountered when learning about vector spaces.
The Space of Polynomials: $R[x]$ (Lemma 4.4 and Definition 4.3)
The set $R[x]$ contains all polynomials with real coefficients. A polynomial $p$ is expressed as:
$p(x)=\sum_{i=0}^{n} a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+⋯+a_{n}x^{n}$, where $a_{i}∈R$. The largest $i$ such that $a_{i}≠0$ is the degree of the polynomial. The zero polynomial (all $a_{i}=0$) has degree $−1$ by convention (Definition 4.3).
Polynomial addition and scalar multiplication are defined as:

Addition: $(p+q)(x)=\sum_{i=0}^{\max(n,m)}(a_{i}+b_{i})x^{i}$, where $p(x)=\sum_{i=0}^{n} a_{i}x^{i}$ has degree $n$, $q(x)=\sum_{i=0}^{m} b_{i}x^{i}$ has degree $m$, and missing coefficients are taken to be $0$. Example: $(2x+1)+(x^{2}−3)=x^{2}+2x−2$.

Scalar Multiplication: $(λp)(x)=\sum_{i=0}^{n}(λa_{i})x^{i}$. Example: $2⋅(x^{2}+1)=2x^{2}+2$.
$(R[x],+,⋅)$ constitutes a vector space (Lemma 4.4), satisfying all eight axioms. This example demonstrates that vector spaces can encompass more abstract objects than just tuples of numbers: the coefficientwise definitions of addition and scalar multiplication are exactly what the axioms require.
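The coefficientwise operations above can be sketched in code. This is an illustrative sketch with hypothetical helper names (`poly_add`, `poly_scale`), representing a polynomial by its list of coefficients $[a_0, a_1, \dots]$:

```python
# A polynomial a0 + a1*x + a2*x^2 + ... is stored as [a0, a1, a2, ...].
def poly_add(p, q):
    """Add two polynomials coefficientwise, padding the shorter one with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(lam, p):
    """Multiply every coefficient by the scalar lam."""
    return [lam * a for a in p]

# (2x + 1) + (x^2 - 3) = x^2 + 2x - 2
assert poly_add([1, 2], [-3, 0, 1]) == [-2, 2, 1]
# 2 * (x^2 + 1) = 2x^2 + 2
assert poly_scale(2, [1, 0, 1]) == [2, 0, 2]
```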
The Space of Real Matrices: $R^{m×n}$ (Lemma 4.5)
The set $R^{m×n}$ comprises all $m×n$ matrices with real entries. Matrices are rectangular arrays of numbers:
$A=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$, where $a_{ij}∈R$.
Standard matrix addition and scalar multiplication are defined as:

Addition (Definition 2.2): If $A$ and $B$ are two $m×n$ matrices, then $(A+B)_{ij}=a_{ij}+b_{ij}$. For example, $\begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix}+\begin{pmatrix}0 & −1\\ 1 & 0\end{pmatrix}=\begin{pmatrix}1 & 1\\ 4 & 4\end{pmatrix}$.

Scalar Multiplication (Definition 2.2): If $A$ is an $m×n$ matrix and $λ∈R$, then $(λA)_{ij}=λa_{ij}$. For example, $3⋅\begin{pmatrix}1 & 0\\ −1 & 2\end{pmatrix}=\begin{pmatrix}3 & 0\\ −3 & 6\end{pmatrix}$.
$(R^{m×n},+,⋅)$ forms a vector space, as these operations fulfill all the vector space axioms.
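The two worked examples can be reproduced with NumPy, whose `+` and scalar `*` on arrays are exactly the entrywise operations defined above (a small illustration, not part of the lecture):

```python
import numpy as np

# Entrywise matrix addition: (A + B)_ij = a_ij + b_ij
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [1, 0]])
assert np.array_equal(A + B, np.array([[1, 1], [4, 4]]))

# Entrywise scalar multiplication: (lam * C)_ij = lam * c_ij
C = np.array([[1, 0], [-1, 2]])
assert np.array_equal(3 * C, np.array([[3, 0], [-3, 6]]))
```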
Uniqueness of the Zero Vector (Fact 4.6)
Within any vector space, the zero vector is unique. This can be proven directly from the axioms.
Proof: Suppose, toward a contradiction, that there are two distinct zero vectors, denoted $0$ and $0'$. We will show that they must in fact be equal.
- Using Axiom 3 for $0$: Since $0$ is a zero vector, we have $v+0=v$ for any vector $v$ in the space.
- Using Axiom 3 for $0'$: Similarly, since $0'$ is a zero vector, we have $v+0'=v$ for any vector $v$ in the space.
- Setting $v=0'$ and $v=0$: Let’s apply these properties:
  - $0'+0=0'$ (using Axiom 3 for $0$ with $v=0'$)
  - $0+0'=0$ (using Axiom 3 for $0'$ with $v=0$)
- Applying Commutativity (Axiom 1): The left-hand sides of these equations are equal by the commutativity of addition: $0'+0=0+0'$.
- Therefore: Combining the results, we get $0=0'$.
Therefore, our initial assumption that $0$ and $0'$ are distinct zero vectors is false. This proves that the zero vector in any vector space must be unique.
Abuse of Notation: $(V,+,⋅)→V$
In the context of vector spaces, we often use the notation $(V,+,⋅)$ to represent the entire vector space, emphasizing that it is the set $V$ together with the specific operations of vector addition ($+$) and scalar multiplication ($⋅$). This notation highlights the fundamental components that define a vector space.
However, for brevity and convenience, we often “abuse notation” and simply write $V$ to refer to the entire vector space, implicitly assuming that the associated operations ($+$ and $⋅$) are understood. This convention is common in linear algebra and allows for more concise expression.
For instance, instead of saying “Let $(V,+,⋅)$ be a vector space,” we might simply say “Let $V$ be a vector space.” It is understood from context that $V$ refers to a set equipped with specific addition and scalar multiplication operations.
While technically an abuse of notation, this simplification does not introduce ambiguity in most contexts and makes the presentation more compact.
Subspaces (Definition 4.8)
The concept of a subspace is crucial to understanding vector spaces. A subspace is a subset of a vector space that is itself a vector space under the same operations. This means that the subset must be closed under vector addition and scalar multiplication.
Formal Definition: Let $V$ be a vector space. A nonempty subset $U⊆V$ is a subspace of $V$ if the following two axioms hold for all $v,w∈U$ and all $λ∈R$:
(i) Closure under addition: $v+w∈U$; (ii) Closure under scalar multiplication: $λv∈U$.
Important Note: The zero vector $0$ always belongs to a subspace $U$. This follows from axiom (ii): for any $u∈U$, we have $0⋅u=0∈U$ (using the fact, provable from the vector space axioms, that $0⋅u=0$).
Examples:
- The zero subspace: The set containing only the zero vector, $\{0\}$, is always a subspace of any vector space.
- Lines and Planes in $R^3$: In three-dimensional space, lines and planes passing through the origin are subspaces. They are closed under vector addition and scalar multiplication: the sum of two vectors on the line/plane remains on the line/plane, and scaling a vector on the line/plane keeps it on the line/plane.
Visualizing Subspaces:
- Lines: Subspaces of $R^3$ representing lines through the origin are the simplest example.
- Planes: Planes passing through the origin also form subspaces of $R^3$.
- Not a Subspace: A line or plane not passing through the origin is not a subspace. It fails to contain the zero vector and, therefore, violates the closure properties.
Subspaces of $R^m$ (Lemma 4.11)
Lemma 4.11: Let $A$ be an $m×n$ matrix. Then $C(A)$ (the column space of $A$) is a subspace of $R^m$.
Proof:
- Closure under addition: Let $v,w∈C(A)$, which means there exist $x,y∈R^n$ such that $v=Ax$ and $w=Ay$. Then $v+w=Ax+Ay=A(x+y)$. Since $x+y∈R^n$, we have $v+w∈C(A)$.
- Closure under scalar multiplication: Let $v∈C(A)$ (so $v=Ax$ for some $x∈R^n$) and let $λ∈R$. Then $λv=λAx=A(λx)$. Since $λx∈R^n$, we have $λv∈C(A)$.
Therefore, $C(A)$ satisfies both conditions and is a subspace of $R^m$.
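The two closure arguments in the proof can be illustrated numerically; the sample matrix and vectors below are arbitrary and serve only as a sketch, not as a substitute for the proof:

```python
import numpy as np

# If v = Ax and w = Ay lie in C(A), then v + w = A(x + y) and
# lam*v = A(lam*x) also lie in C(A).
rng = np.random.default_rng(1)
A = rng.random((4, 3))           # an arbitrary 4x3 matrix
x, y = rng.random(3), rng.random(3)
v, w = A @ x, A @ y              # two elements of C(A)
lam = 2.5

assert np.allclose(v + w, A @ (x + y))      # closure under addition
assert np.allclose(lam * v, A @ (lam * x))  # closure under scalar multiplication
```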
Subspaces and Vector Spaces (Lemma 4.12)
Lemma 4.12: Let $V$ be a vector space and $U$ a subspace of $V$. Then $U$ is also a vector space (with the same “+” and “·” as $V$).
Proof: The proof follows directly from the definition of a subspace. Since $U$ is a subspace, it satisfies the closure axioms under addition and scalar multiplication, which are the same operations as in $V$. The remaining axioms (commutativity, associativity, etc.) also hold in $U$ because they hold in $V$, and $U$ is a subset of $V$.
Subspaces of…
Now, let’s examine some specific examples of subspaces within various familiar vector spaces.
… $R[x]$: The Space of Polynomials
The polynomials without constant term:
The set of polynomials of the form $p(x)=\sum_{i=1}^{n} p_{i}x^{i}$ (i.e., with $p_{0}=0$) forms a subspace of $R[x]$. It is closed under addition and scalar multiplication:
- Adding two polynomials without constant term results in another polynomial without constant term.
- Multiplying a polynomial without constant term by a scalar also yields a polynomial without constant term.
The quadratic polynomials:
The set of quadratic polynomials of the form $p(x)=p_{0}+p_{1}x+p_{2}x^{2}$ forms a subspace of $R[x]$. This is a special case of the polynomials of degree at most $n$, which also form a subspace.
Furthermore, this subspace is isomorphic to $R^3$.
Isomorphism means that there exists a one-to-one and onto mapping (a bijection) between the two spaces that preserves the structure of the spaces. In other words, the two spaces “behave” the same way, even though their elements are represented differently.
For quadratic polynomials, the isomorphism is given by: $p(x)=p_{0}+p_{1}x+p_{2}x^{2}⟷(p_{0},p_{1},p_{2})∈R^3$
This mapping preserves addition and scalar multiplication:
- $(p+q)(x)⟷(p_{0}+q_{0},p_{1}+q_{1},p_{2}+q_{2})$
- $(λp)(x)⟷(λp_{0},λp_{1},λp_{2})$
Therefore, we can think of quadratic polynomials as being “equivalent” to vectors in $R^3$. They have the same structure and behave in the same way under the operations of addition and scalar multiplication.
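The correspondence can be checked numerically: adding or scaling coordinate vectors in $R^3$ produces exactly the coordinates of the summed or scaled polynomial. The helper name `evaluate` below is my own illustration:

```python
import numpy as np

def evaluate(coeffs, x):
    """Evaluate p0 + p1*x + p2*x^2 from its coordinate triple (p0, p1, p2)."""
    p0, p1, p2 = coeffs
    return p0 + p1 * x + p2 * x ** 2

p = np.array([1.0, 2.0, 0.0])    # the polynomial 1 + 2x
q = np.array([-3.0, 0.0, 1.0])   # the polynomial -3 + x^2
xs = np.linspace(-2, 2, 7)       # sample points

# (p + q)(x) agrees with the polynomial whose coordinates are p + q
assert np.allclose(evaluate(p, xs) + evaluate(q, xs), evaluate(p + q, xs))
# (2p)(x) agrees with the polynomial whose coordinates are 2p
assert np.allclose(2 * evaluate(p, xs), evaluate(2 * p, xs))
```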
Generalization:
For any fixed integer $n≥0$, the set of polynomials of degree at most $n$ forms a subspace of $R[x]$.
Using the same idea as for quadratic polynomials, the subspace of polynomials of degree at most $n$ is isomorphic to $R^{n+1}$.
… $R^{2×2}$: The Space of $2×2$ Matrices
The symmetric matrices:
The set of $2×2$ matrices of the form $\begin{pmatrix}a & b\\ b & d\end{pmatrix}$ (where $a,b,d∈R$) forms a subspace of $R^{2×2}$. Adding two symmetric matrices results in another symmetric matrix, and scaling a symmetric matrix by a scalar also produces a symmetric matrix.
The matrices of trace 0:
The set of $2×2$ matrices of the form $\begin{pmatrix}a & b\\ c & d\end{pmatrix}$ with $a+d=0$ forms a subspace of $R^{2×2}$. The trace of a matrix is the sum of its diagonal elements.
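Because the trace is linear, closure of this set under addition and scaling follows immediately; here is a small numerical illustration (the sample matrices are my own):

```python
import numpy as np

# Two trace-0 matrices: the sum and any scalar multiple are trace 0 again.
A = np.array([[2.0, 5.0], [1.0, -2.0]])   # trace = 2 + (-2) = 0
B = np.array([[-1.0, 0.0], [3.0, 1.0]])   # trace = -1 + 1 = 0

assert np.isclose(np.trace(A + B), 0.0)    # closure under addition
assert np.isclose(np.trace(4.0 * A), 0.0)  # closure under scalar multiplication
```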
… $R^n$ (General Case)
It’s important to understand that many subspaces are not explicitly named or described in detail. For instance, in $R^n$, you can have:
- Lines and Planes: As we saw earlier, these can be subspaces if they pass through the origin.
- The Span of a set of vectors: If you have a set of vectors in $R^n$, their span (the set of all possible linear combinations of those vectors) is always a subspace.
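Closure of a span can be illustrated directly: any sum or scalar multiple of linear combinations is again a linear combination of the same vectors (the sample vectors below are my own):

```python
import numpy as np

# span{s1, s2} in R^3: elements are all a*s1 + b*s2.
s1 = np.array([1.0, 0.0, 2.0])
s2 = np.array([0.0, 1.0, -1.0])

v = 2.0 * s1 + 3.0 * s2    # in the span by construction
w = -1.0 * s1 + 0.5 * s2   # in the span by construction

# v + w and 4v are again linear combinations of s1 and s2:
assert np.allclose(v + w, 1.0 * s1 + 3.5 * s2)
assert np.allclose(4.0 * v, 8.0 * s1 + 12.0 * s2)
```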
These examples illustrate that subspaces are abundant and often represent interesting and meaningful subsets within vector spaces. The concept of subspaces is crucial for analyzing, understanding, and solving problems in linear algebra.
Continue here: 13 Vector Spaces, Bases, Dimension