## 3.1 Vector spaces

When we met vectors and matrices, we saw that these could be added,
subtracted, and multiplied by scalars, and these operations obeyed
some simple rules. A **vector space** generalizes this
situation.

**Definition 3.1**
A **real vector space**, or 'vector space over \(\mathbb{R}\)', consists of

- a set *V*,
- a map \(+ : V \times V \to V\) called (vector) addition,
- a special element of *V* called the zero vector and written \(\mathbf{0}_V\), and
- a map \(\mathbb{R} \times V \to V\), written \((\lambda, \mathbf{v}) \mapsto \lambda \mathbf{v}\), called scalar multiplication,

such that for all \(\mathbf{u},\mathbf{v},\mathbf{w} \in V\) and all \(l, m \in \mathbb{R}\):

1. \(\mathbf{0}_V + \mathbf{v} = \mathbf{v}\).
2. There exists \(\mathbf{v}' \in V\) such that \(\mathbf{v} + \mathbf{v}' = \mathbf{0}_V\).
3. \(\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}\) ('vector addition is associative').
4. \(\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}\) ('vector addition is commutative').
5. \(l(m\mathbf{v}) = (lm)\mathbf{v}\).
6. \(1\mathbf{v} = \mathbf{v}\).
7. \(l(\mathbf{u} + \mathbf{v}) = l\mathbf{u} + l\mathbf{v}\) ('scalar multiplication distributes over vector addition').
8. \((l + m)\mathbf{v} = l\mathbf{v} + m\mathbf{v}\) ('scalar multiplication distributes over scalar addition').

These are called the **vector space axioms**.

A **complex vector space** has the same definition except with \(\mathbb{C}\) in place of \(\mathbb{R}\). Most of what we do applies to real *or* complex vector spaces, so we say “an \(\mathbb{F}\)-vector space” or “vector space over \(\mathbb{F}\)” with the understanding that \(\mathbb{F}\) could be \(\mathbb{R}\) or \(\mathbb{C}\). When we talk about \(\mathbb{F}\)-vector spaces, the word **scalar** refers to an element of \(\mathbb{F}\).

Though we won't do this in MATH0007, it is possible to define vector spaces over any 'field', not just \(\mathbb{R}\) or \(\mathbb{C}\). Roughly speaking, a field is something where you can add and multiply elements subject to the usual rules of algebra, and where you can divide by any nonzero element: \(\mathbb{Q}\) would be another example of a field, as would the set \(\mathbb{F}_p\) of integers modulo a prime number *p*.
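The field property of \(\mathbb{F}_p\) is easy to spot-check numerically. The sketch below (plain Python, not part of the notes' formal development; the choice \(p = 7\) is arbitrary) verifies that every nonzero residue modulo a prime has a multiplicative inverse, which is the key property separating \(\mathbb{F}_p\) from, say, the integers modulo a composite number.

```python
# Illustrative sanity check: in F_p with p prime, every nonzero element
# has a multiplicative inverse, as a field requires.
p = 7  # any prime works here

for a in range(1, p):
    # By Fermat's little theorem, a^(p-2) mod p is the inverse of a
    inv = pow(a, p - 2, p)
    assert (a * inv) % p == 1, (a, inv)

print(f"every nonzero element of F_{p} has an inverse mod {p}")
```

For a composite modulus such as 6, the same loop would fail at \(a = 2\), since 2 has no inverse modulo 6.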

**Example 3.1**

- \(\RR^n\), the set of all \(n \times 1\) column vectors, is an \(\RR\)-vector space (check for yourself that all of the axioms hold), and \(\CC^n\) is a \(\CC\)-vector space, when given the usual vector addition and scalar multiplication. We usually call an \(\RR\)-vector space a real vector space and a \(\CC\)-vector space a complex vector space.
- \(\RR\) is an \(\RR\)-vector space, and \(\CC\) is a \(\CC\)-vector space, with the usual \(+\) and with scalar multiplication being the usual multiplication.
- \(M_{n\times m}(\RR)\) and \(M_{n\times m}(\CC)\), the sets of \(n \times m\) real and complex matrices, are real and complex vector spaces when equipped with the usual addition and scalar multiplication.
- Let \(C[0,1]\) be the set of continuous functions \([0,1] \to \RR\). For two functions \(f, g \in C[0,1]\) define \(f+g\) by \((f+g)(x) = f(x) + g(x)\), and for \(\lambda \in \RR\) define \(\lambda f\) by \((\lambda f)(x) = \lambda f(x)\). Then \(C[0,1]\) is a real vector space, because of the results from MATH0003 that say sums and constant multiples of continuous functions are again continuous.
- Let \(\RR[x]\) be the set of all polynomials in one variable *x*. This is a real vector space given the usual definitions of addition and scalar multiplication of polynomials.
- Let \(\RR[x]_{\leq n}\) be the set of all polynomials of degree at most *n* in one variable *x*. This is a real vector space with the same operations as \(\RR[x]\).
- \(\{0\}\) is an \(\FF\)-vector space under the operations \(0 + 0 = 0\) and \(\lambda 0 = 0\) for any \(\lambda \in \FF\). This is called the **zero vector space**.
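The axioms in these examples are verified on paper, but it can be reassuring to spot-check a few of them numerically. The sketch below (plain Python, illustrative only; vectors in \(\RR^3\) are modelled as tuples and the helper names `add` and `scale` are my own) checks several axioms on a handful of sample vectors. Of course, passing on samples is evidence, not a proof.

```python
# Illustrative spot check of some vector space axioms for R^3,
# with vectors modelled as tuples of floats.
def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(l, v):
    return tuple(l * x for x in v)

zero = (0.0, 0.0, 0.0)
u, v, w = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5), (-2.0, 0.0, 6.0)

assert add(zero, v) == v                           # axiom 1: zero vector
assert add(u, add(v, w)) == add(add(u, v), w)      # axiom 3: associativity
assert add(u, v) == add(v, u)                      # axiom 4: commutativity
assert scale(2.0, scale(3.0, v)) == scale(6.0, v)  # axiom 5
assert scale(1.0, v) == v                          # axiom 6
assert add(v, scale(-1.0, v)) == zero              # (-1)v is the additive inverse

print("sampled axioms hold")
```

The sample values are chosen to be exactly representable as floats, so the equality checks are not affected by rounding.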

Consider a familiar property of vectors and matrices, such as "if you scalar multiply \(\mathbf{v}\) by \(-1\), you get the additive inverse \(-\mathbf{v}\)". (The *additive inverse* of something is what you have to add to it to get the zero vector in your vector space.) Is this something special to vectors and matrices, or must it be true in every vector space?

To answer this positively we have to give a proof which uses only the axioms of a vector space and not the particular form of the vector space’s elements.

**Lemma 3.1** Let *V* be a vector space and let \(\mathbf{x} \in V\). Then \(\mathbf{x} + (-1)\mathbf{x} = \mathbf{0}_V\).

*Proof.* First we show that \(0\mathbf{x} = \mathbf{0}_V\). We have
\[\begin{align*}
0 \mathbf{x}& = (0+0)\mathbf{x} & \text{as } 0+0=0\\
&= 0\mathbf{x}+0 \mathbf{x} & \text{axiom 8}
\end{align*}\]
Axiom 2 guarantees the existence of an additive inverse to \(0 \mathbf{x}\), which we will call \(\mathbf{v}\). Add this to both sides:
\[\begin{align*}
0 \mathbf{x} + \mathbf{v} &= (0\mathbf{x} + 0\mathbf{x}) + \mathbf{v} \\
\mathbf{0}_V &= (0\mathbf{x} + 0\mathbf{x}) + \mathbf{v} & \text{axiom 2}\\
\mathbf{0}_V &= 0\mathbf{x} + (0\mathbf{x} + \mathbf{v}) & \text{axiom 3} \\
\mathbf{0}_V &= 0\mathbf{x} + \mathbf{0}_V \\
\mathbf{0}_V &= \mathbf{0}_V + 0\mathbf{x} & \text{axiom 4} \\
\mathbf{0}_V &= 0\mathbf{x} & \text{axiom 1.}
\end{align*}\]
Now, using axioms 6 and 8,
\[ \mathbf{x} + (-1)\mathbf{x} = 1\mathbf{x} + (-1)\mathbf{x} = (1 + (-1))\mathbf{x} = 0\mathbf{x} = \mathbf{0}_V, \]
which proves the lemma.

We write \(-\mathbf{x}\) for the additive inverse of \(\mathbf{x}\) which axiom 2 provides, and \(\mathbf{y} - \mathbf{x}\) as shorthand for \(\mathbf{y} + (-\mathbf{x})\).

Here is another example of something which is a familiar property of vectors and matrices which we can prove to be true in *any* vector space by giving a proof using only the axioms.

**Lemma 3.2**

1. Let *l* be a scalar. Then \(l\mathbf{0}_V = \mathbf{0}_V\).
2. Suppose \(l \neq 0\) is a scalar and \(l\mathbf{x} = \mathbf{0}_V\). Then \(\mathbf{x} = \mathbf{0}_V\).

*Proof.* **1.** \[\begin{align*}l \mathbf{0}_V & = l( \mathbf{0}_V + \mathbf{0}_V) & \text{axiom 1} \\ &= l \mathbf{0}_V + l \mathbf{0}_V & \text{axiom 7.} \end{align*}\] Axiom 2 tells us there is an additive inverse to \(l \mathbf{0}_V\). Adding it to both sides and using axiom 3, we get \(\mathbf{0}_V = l \mathbf{0}_V\).

**2.** \[\begin{align*} l \mathbf{x} &= \mathbf{0}_V \\ l^{-1} (l \mathbf{x}) &= l^{-1}\mathbf{0}_V = \mathbf{0}_V & \text{by part 1} \\ (l^{-1}l) \mathbf{x} &= \mathbf{0}_V & \text{axiom 5} \\ 1 \mathbf{x} &= \mathbf{0}_V \\ \mathbf{x} &= \mathbf{0}_V & \text{axiom 6.} \end{align*}\]

### 3.1.1 Subspaces

Informally, a subspace *U* of a vector space *V* is a subset which is a vector space in its own right, using the same operations as *V*.

**Definition 3.2 **
Let *V* be an \(\FF\)-vector space and *U* be a subset of *V*.

- *U* is called **closed under addition** if for all \(\mathbf{u}_1, \mathbf{u}_2 \in U\) we have \(\mathbf{u}_1 + \mathbf{u}_2 \in U\).
- *U* is called **closed under scalar multiplication** if for all \(\mathbf{u} \in U\) and all \(\lambda \in \FF\) we have \(\lambda \mathbf{u} \in U\).
- *U* is called a **subspace** of *V*, written \(U \leq V\), if *U* contains the zero vector and is closed under addition and scalar multiplication.

Closure under addition and scalar multiplication tell us that if \(l_1, l_2 \in \FF\) and \(\mathbf{u}_1, \mathbf{u}_2 \in U\) then \(l_1\mathbf{u}_1 + l_2 \mathbf{u}_2 \in U\). Using this repeatedly, if *U* is a subspace, \(l_1, \ldots, l_n \in \FF\), and \(\mathbf{u}_1, \ldots, \mathbf{u}_n \in U\), then
\[ l_1 \mathbf{u}_1 + \cdots + l_n \mathbf{u}_n \in U. \]

**Example 3.2**

- The set of column vectors \(U = \left\{\begin{pmatrix} \lambda \\ 0 \\ \mu \end{pmatrix}: \lambda, \mu \in \RR \right\}\) is a subspace of \(\RR^3\). It contains the zero vector (because we can take \(\lambda = \mu = 0\)). It is closed under addition because if \(\mathbf{u}_1 = \begin{pmatrix} a\\0\\b \end{pmatrix}\) and \(\mathbf{u}_2 = \begin{pmatrix} c \\ 0 \\ d \end{pmatrix}\) are any two elements of *U*, then \[ \mathbf{u}_1 + \mathbf{u}_2 = \begin{pmatrix} a+c \\ 0 \\ b+d \end{pmatrix}\] is an element of *U*. Finally, if \(\lambda \in \RR\), then \[ \lambda \mathbf{u}_1 = \begin{pmatrix} \lambda a \\ 0 \\ \lambda b \end{pmatrix}\] has the correct form to be an element of *U*, so *U* is closed under scalar multiplication.
- The set of column vectors \((a_1, \ldots, a_n)^T\) such that \(\sum_{i=1}^n a_i = 0\) is a subspace of \(\RR^n\).
- The set of all functions \(f: \RR \to \RR\) such that \(f(1) = 0\) is a subspace of the vector space of all functions \(\RR \to \RR\).
- Let \(\mathbf{v}\) be an element of an \(\FF\)-vector space *V*. Then the set \(\{ \lambda \mathbf{v} : \lambda \in \FF \}\) is a subspace of *V*.
- A vector space is a subspace of itself.
- \(\{ \mathbf{0}_V \}\) is a subspace of *V* (the 'zero subspace').
- The set of column vectors in \(\RR^n\) whose entries are all integers is **not** a subspace of \(\RR^n\). It contains the zero vector and is closed under addition, but it is not closed under scalar multiplication: for example, \(\tfrac{1}{2}\) times a vector with an odd integer entry has a non-integer entry.
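The three subspace criteria translate directly into code. The sketch below (illustrative Python, not part of the notes; the helper names are my own) spot-checks them for the first example above, \(U = \{(\lambda, 0, \mu)^T\} \subseteq \RR^3\), where membership just means the middle entry is zero, and also exhibits the failure of scalar closure for integer-entry vectors.

```python
# Illustrative spot check of the subspace criteria for
# U = {(lambda, 0, mu)} inside R^3, vectors modelled as tuples.
def in_U(v):
    return v[1] == 0  # membership test: middle entry is zero

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(l, v):
    return tuple(l * x for x in v)

assert in_U((0, 0, 0))            # contains the zero vector
u1, u2 = (1, 0, 2), (-3, 0, 5)
assert in_U(add(u1, u2))          # closed under addition (on these samples)
assert in_U(scale(0.5, u1))       # closed under scalar multiplication

# By contrast, integer-entry vectors fail scalar closure:
half = scale(0.5, (1, 1, 1))
assert not all(float(x).is_integer() for x in half)

print("subspace criteria pass for U")
```

Note that for \(U\) the checks genuinely hold for *all* inputs (adding or scaling never changes the zero in the middle entry), whereas the sample checks alone would not prove that.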

**Lemma 3.3** Let *U* and *W* be subspaces of a vector space *V*. Then

1. \(U \cap W\) is a subspace of *V*, and
2. \(U + W\), defined to be \(\{ \mathbf{u} + \mathbf{w} : \mathbf{u} \in U, \mathbf{w} \in W \}\), is a subspace of *V*.

*Proof. * To show something is a subspace we have to check three properties: containing the zero vector, closure under addition, and closure under scalar multiplication.

**1.**

- \(\mathbf{0}_V \in U \cap W\), as *U* and *W* are subspaces and so both contain \(\mathbf{0}_V\).
- Let \(\mathbf{x}, \mathbf{y} \in U \cap W\). *U* is a subspace, so closed under addition, so \(\mathbf{x} + \mathbf{y} \in U\). For the same reason \(\mathbf{x} + \mathbf{y} \in W\). Therefore \(\mathbf{x} + \mathbf{y} \in U \cap W\).
- Let \(\lambda\) be a scalar and \(\mathbf{x} \in U \cap W\). *U* is a subspace, so closed under scalar multiplication, so \(\lambda \mathbf{x} \in U\). For the same reason \(\lambda \mathbf{x} \in W\). Therefore \(\lambda \mathbf{x} \in U \cap W\).

**2.**

- \(\mathbf{0}_V\) is in *U* and in *W* as they are subspaces, so \(\mathbf{0}_V + \mathbf{0}_V = \mathbf{0}_V\) is in \(U + W\).
- Any two elements of \(U + W\) have the form \(\mathbf{u}_1 + \mathbf{w}_1\) and \(\mathbf{u}_2 + \mathbf{w}_2\), where \(\mathbf{u}_i \in U\) and \(\mathbf{w}_i \in W\). Using the vector space axioms, \[\begin{equation*} (\mathbf{u}_1 + \mathbf{w}_1) + (\mathbf{u}_2 + \mathbf{w}_2) = (\mathbf{u}_1 + \mathbf{u}_2) + (\mathbf{w}_1 + \mathbf{w}_2). \end{equation*}\] But \(\mathbf{u}_1 + \mathbf{u}_2 \in U\) as *U* is a subspace, and \(\mathbf{w}_1 + \mathbf{w}_2 \in W\) as *W* is a subspace, so this sum lies in \(U + W\), which is therefore closed under addition.
- Let \(\lambda\) be a scalar and let \(\mathbf{u}_1 + \mathbf{w}_1 \in U + W\), where \(\mathbf{u}_1 \in U\) and \(\mathbf{w}_1 \in W\). Then \[\begin{equation*} \lambda (\mathbf{u}_1 + \mathbf{w}_1) = \lambda \mathbf{u}_1 + \lambda \mathbf{w}_1. \end{equation*}\] Now \(\lambda \mathbf{u}_1 \in U\), as *U* is a subspace so closed under scalar multiplication, and \(\lambda \mathbf{w}_1 \in W\) for the same reason, so their sum is in \(U + W\), which is therefore closed under scalar multiplication.
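The proof above is purely axiomatic, but a concrete case makes it easy to see what \(U \cap W\) and \(U + W\) look like. In the sketch below (illustrative Python, not part of the notes; the two subspaces of \(\RR^3\) are my own choice of example) we take \(U = \{(a, b, 0)\}\) and \(W = \{(0, b, c)\}\), so that \(U \cap W = \{(0, b, 0)\}\), and spot-check the subspace criteria for the intersection.

```python
# Illustrative example: U = {(a, b, 0)} and W = {(0, b, c)} in R^3.
def in_U(v): return v[2] == 0
def in_W(v): return v[0] == 0
def in_cap(v): return in_U(v) and in_W(v)   # membership in U ∩ W

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

x, y = (0, 2, 0), (0, -5, 0)                # sample elements of U ∩ W
assert in_cap((0, 0, 0))                    # contains the zero vector
assert in_cap(add(x, y))                    # closed under addition
assert in_cap(tuple(3 * a for a in x))      # closed under scalar mult.

# U + W: sums u + w with u in U, w in W.
u, w = (1, 4, 0), (0, 2, 7)
assert add(u, w) == (1, 6, 7)   # for this choice, U + W is all of R^3

print("intersection and sum checks pass")
```

The last comment reflects a feature of this particular example: any \((p, q, r)\) can be written as \((p, q, 0) + (0, 0, r)\), so here \(U + W = \RR^3\); in general \(U + W\) can be a proper subspace.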