3.2 Bases and dimension

3.2.1 Spanning sets

If \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) are elements of an \(\FF\)-vector space V then a linear combination of \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) is an element of V equal to

\[\lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n \]

for some \(\lambda_1,\ldots,\lambda_n \in \FF\). The \(\lambda_i\) are called the coefficients in this linear combination. A non-trivial linear combination is one where not all the coefficients are zero.
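
For concreteness, here is a minimal numpy sketch forming a linear combination in \(\RR^3\) (the vectors and coefficients are chosen purely for illustration):

```python
import numpy as np

# Two illustrative vectors in R^3 ...
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
# ... and two coefficients lambda_1, lambda_2.
lam1, lam2 = 3.0, -2.0

# The linear combination lambda_1*v1 + lambda_2*v2 is again an element of R^3.
print(lam1 * v1 + lam2 * v2)  # [ 3. -2.  8.]
```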

This allows us to rephrase our definition of subspace: a non-empty subset U of V is a subspace if and only if every linear combination of elements of U is again in U.


Definition 3.3 Let V be an \(\FF\)-vector space and \(\mathbf{s}_1,\ldots,\mathbf{s}_n \in V\). The span of \(\mathbf{s}_1,\ldots,\mathbf{s}_n\), written \(\spa \{ \mathbf{s}_1,\ldots,\mathbf{s}_n\}\), is the set of all linear combinations of \(\mathbf{s}_1,\ldots,\mathbf{s}_n\).

So \(\spa \{ \mathbf{s}_1,\ldots,\mathbf{s}_n\}\) consists of all elements of the form \[\begin{equation*} \lambda_1 \mathbf{s}_1 + \cdots + \lambda_n \mathbf{s}_n \end{equation*}\] for \(\lambda_i \in \FF\). The span of the empty set of vectors is defined to be \(\{ \mathbf{0}_V\}\).

Example 3.3

  • The span of a single element \(\mathbf{s}\) of an \(\FF\)-vector space V is \(\{ \lambda \mathbf{s} : \lambda \in \FF\}\), since any linear combination of \(\mathbf{s}\) is just a scalar multiple of \(\mathbf{s}\).
  • Let \(\mathbf{u} = \begin{pmatrix} 1\\ 0 \\ 0 \end{pmatrix}, \mathbf{v} = \begin{pmatrix} 0\\1\\0 \end{pmatrix} \in \rr^3\). Then \(\spa \{ \mathbf{u}, \mathbf{v} \}\) is the set \[\begin{equation*} \left\{ \begin{pmatrix} \lambda \\ \mu \\ 0 \end{pmatrix} : \lambda,\mu \in \rr \right\} \end{equation*}\]
  • The span of \(\mathbf{u} , \mathbf{v}\), and \(\mathbf{w} = \begin{pmatrix} 1\\2\\0 \end{pmatrix}\) is equal to \(\spa \{ \mathbf{u} , \mathbf{v} \}\), since \(\mathbf{w} = \mathbf{u} + 2\mathbf{v}\) already lies in \(\spa \{ \mathbf{u}, \mathbf{v}\}\); the sketch below checks this numerically.
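
A quick numerical check of the last point, as a hedged numpy sketch (the rank comparison is one standard way to test span membership; least squares recovers the coefficients):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([1.0, 2.0, 0.0])

# w lies in span{u, v} iff appending w does not increase the rank.
A = np.column_stack([u, v])
B = np.column_stack([u, v, w])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 2

# Recover the coefficients by solving A @ x = w in the least-squares sense.
x, *_ = np.linalg.lstsq(A, w, rcond=None)
print(x)  # [1. 2.], i.e. w = u + 2*v
```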

Lemma 3.4 Let V be an \(\FF\)-vector space and \(\mathbf{s}_1,\ldots,\mathbf{s}_n \in V\). Then \(\spa \{ \mathbf{s}_1,\ldots,\mathbf{s}_n\}\) is a subspace of V.

Proof. Write S for \(\spa \{ \mathbf{s} _1,\ldots, \mathbf{s} _n\}\). Recall that S consists of every linear combination \(\sum_{i=1}^n \lambda_i\mathbf{s}_i\), where the \(\lambda_i\) are scalars.

  • S contains the zero vector because it contains \(\sum_{i=1}^n 0\mathbf{s}_i\).
  • S is closed under addition because if \(\sum_{i=1}^n \lambda _i \mathbf{s}_i\) and \(\sum _{i=1}^n \mu_i \mathbf{s}_i\) are any two elements of S then \[\sum_{i=1}^n \lambda_i \mathbf{s}_i + \sum_{i=1}^n \mu_i \mathbf{s}_i = \sum_{i=1}^n (\lambda_i + \mu_i) \mathbf{s}_i\] is in S.
  • S is closed under scalar multiplication because if \(\sum_{i=1}^n \lambda_i \mathbf{s}_i\) is in S and \(\lambda\) is a scalar then \[ \lambda \sum_{i=1}^n \lambda_i \mathbf{s}_i = \sum_{i=1}^n (\lambda \lambda_i) \mathbf{s}_i\] is also in S.

Definition 3.4 A subset \(\mathcal{S}\) of the \(\FF\)-vector space V is called a spanning set if the span of the elements of \(\mathcal{S}\) equals V.


In other words, \(\mathcal{S}\) is a spanning set if every element of V can be written as a linear combination of elements of \(\mathcal{S}\).

Lemma 3.5 If \(U \leq V\) and U contains a spanning set \(\mathcal{S}\) for V, then \(U=V\).
Proof. U is closed under taking linear combinations of its elements, so it contains every linear combination of elements of \(\mathcal{S}\). But every element of V is a linear combination of elements of \(\mathcal{S}\), so U contains every element of V.

Definition 3.5 An \(\FF\)-vector space is called finite-dimensional if it has a finite spanning set.


So V is finite-dimensional if there is a finite list \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) of elements of V such that any element of V is a linear combination of the \(\mathbf{v}_i\). An example of a vector space which isn’t finite-dimensional is the set \(\{(a_0, a_1, \ldots) : a_i \in \mathbb{R}\}\) of all infinite real sequences.

Example 3.4 Recall that the standard basis vector \(\mathbf{e}_i \in \RR^n\) is the height n column vector whose entries are all zero except for the ith, which is one. Then the \(\mathbf{e}_i\) form a spanning set for the real vector space \(\RR^n\) since any vector \[\begin{equation*} \mathbf{v} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} \end{equation*}\] can be written as a linear combination of the \(\mathbf{e}_i\) as follows: \[\begin{equation*} \mathbf{v} = v_1 \mathbf{e}_1 + \cdots + v_n \mathbf{e}_n. \end{equation*}\] It follows that \(\RR^n\) is finite-dimensional.
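
The decomposition in Example 3.4 is easy to replay numerically; a minimal sketch with an arbitrary illustrative vector:

```python
import numpy as np

v = np.array([4.0, -1.0, 7.0])  # an arbitrary illustrative vector in R^3
E = np.eye(3)                   # the columns of the identity are e_1, e_2, e_3

# v = v_1*e_1 + v_2*e_2 + v_3*e_3
print(sum(v[i] * E[:, i] for i in range(3)))  # [ 4. -1.  7.]
```
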
Example 3.5 The vectors \[\begin{equation*} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \text{ and } \begin{pmatrix} 0\\2\\2 \end{pmatrix} \end{equation*}\] form a spanning set for the vector space in the first part of Example 3.2.

3.2.2 Linear independence

A sequence of vectors is an ordered list, written \((\mathbf{u}, \mathbf{v}, \mathbf{w},\ldots)\) or just \(\mathbf{u},\mathbf{v},\mathbf{w},\ldots\). Ordered means that, for example, \((\mathbf{u},\mathbf{v}) \neq (\mathbf{v},\mathbf{u})\). Sequences are different to sets for two reasons: first, \(\{\mathbf{v},\mathbf{u}\}\) is the same set as \(\{ \mathbf{u},\mathbf{v}\}\) — order doesn’t matter to sets — and second, \(\{\mathbf{u},\mathbf{u}\} = \{\mathbf{u}\}\) whereas \((\mathbf{u},\mathbf{u}) \neq (\mathbf{u} )\).

Definition 3.6 A sequence \(\mathbf{v}_1,\ldots ,\mathbf{v}_n\) of elements of the \(\FF\)-vector space V is called linearly independent if \[\begin{equation} \tag{3.1} \lambda_1 \mathbf{v}_1 + \cdots + \lambda_n \mathbf{v}_n = \mathbf{0}_V \end{equation}\] implies all of the \(\lambda_i\) equal zero. Otherwise it is called linearly dependent, and an equation (3.1) in which not all the \(\lambda_i=0\) is called a nontrivial linear dependence relation between the \(\mathbf{v}_i\).


So to check if a sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) of elements of V is linearly independent, you have to see if there are any non-zero solutions to the equation \[ \lambda_1\mathbf{v}_1+ \cdots + \lambda _n \mathbf{v}_n = \mathbf{0}_V. \] Notice that if the sequence \(\mathbf{v}_1,\ldots,\mathbf{v}_n\) contains the same element twice, it is linearly dependent: if \(\mathbf{v}_i = \mathbf{v}_j\) for \(i \neq j\) then \(\mathbf{v}_i-\mathbf{v}_j=\mathbf{0}_V\) is a nontrivial linear dependence relation.

By convention we regard the empty subset \(\emptyset\) of a vector space V as being linearly independent.

Example 3.6

  • The vectors \(\mathbf{x} =\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \mathbf{y} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\) are linearly independent in \(\mathbb{R}^2\). For suppose that \(\lambda \mathbf{x} + \mu \mathbf{y} = \mathbf{0} _{\rr^2}\). Then we have \[\begin{equation*} \begin{pmatrix} \lambda \\ 0 \end{pmatrix} + \begin{pmatrix} \mu \\ \mu \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{equation*}\] which is equivalent to saying that \(\lambda + \mu = 0\) and \(\mu=0\). It follows that \(\lambda=\mu=0\), so \(\mathbf{x}\) and \(\mathbf{y}\) are linearly independent.
  • The one-element sequence \(\mathbf{0}_V\) isn’t linearly independent: \(1 \mathbf{0}_V = \mathbf{0}_V\) is a nontrivial linear dependence relation, since the coefficient 1 is nonzero.
  • \(\begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} -2 \\ 0 \end{pmatrix}\) are not linearly independent in \(\RR^2\), since we can make the zero vector of \(\RR^2\) as a non-trivial linear combination of these two vectors: \[\begin{equation*} \begin{pmatrix} 0\\0 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} -2 \\ 0 \end{pmatrix} \end{equation*}\]
  • The vectors \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) in \(\RR^n\) are linearly independent. For \[\begin{equation*} \sum_{i=1}^n \lambda_i \mathbf{e}_i = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_n \end{pmatrix} \end{equation*}\] and the only way for this to be the zero vector is if all of the coefficients \(\lambda_i\) are zero.
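
Over \(\RR\), the checks in Example 3.6 can be reproduced numerically: a list of vectors is linearly independent exactly when the matrix having them as columns has full column rank. A minimal numpy sketch (floating-point rank is adequate for these small examples; exact arithmetic would need a computer algebra system):

```python
import numpy as np

def is_linearly_independent(vectors):
    """True iff the matrix with these vectors as columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

# Part 1: x = (1, 0) and y = (1, 1) are linearly independent.
print(is_linearly_independent([np.array([1.0, 0.0]), np.array([1.0, 1.0])]))   # True
# Part 3: (1, 0) and (-2, 0) are linearly dependent.
print(is_linearly_independent([np.array([1.0, 0.0]), np.array([-2.0, 0.0])]))  # False
```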

A spanning set in a vector space has to be ‘large’ enough to span the whole space. A linearly independent set has to be ‘small’ enough that it doesn’t admit any nontrivial linear dependences. So there should be something special about sequences of vectors which are linearly independent and which form spanning sets.

3.2.3 Bases and dimension

Definition 3.7 A basis of a vector space is a linearly independent sequence of vectors which is also a spanning set.

Example 3.7

  • In Example 3.6 part 4 above we showed \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) is linearly independent, and in Example 3.4 we showed it was a spanning set of \(\RR^n\). Thus \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) is a basis of \(\RR^n\). This is called the standard basis of \(\RR^n\).
  • Let \(M = M_{2\times 2}(\RR)\) be the \(\RR\)-vector space of all 2✕2 real matrices, so the zero vector \(\mathbf{0}_M\) is the 2✕2 zero matrix. Let \(E_{ij}\) be the matrix with a 1 in position i, j and 0 elsewhere. Any element of M looks like \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\) for some \(a,b,c,d \in \RR\), and \[\begin{equation*} \begin{pmatrix} a & b \\ c & d \end{pmatrix}=a E_{11} + b E_{12} + cE_{21} + dE_{22}. \end{equation*}\] It follows that \(E_{11}, E_{12}, E_{21}, E_{22}\) are a spanning set for M. They are also linearly independent, for if \(\alpha,\beta,\gamma,\delta\) are scalars such that \[\begin{equation*} \alpha E_{11}+\beta E_{12} + \gamma E_{21} + \delta E_{22} = \mathbf{0}_M \end{equation*}\] then \[\begin{equation*} \begin{pmatrix} \alpha & \beta \\ \gamma&\delta \end{pmatrix} =\begin{pmatrix} 0&0\\0&0 \end{pmatrix} \end{equation*}\] and so \(\alpha=\beta=\gamma=\delta=0\). Therefore \(E_{11}, E_{12}, E_{21}, E_{22}\) is a basis of M.
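
The decomposition of a 2✕2 matrix in the basis \(E_{11}, E_{12}, E_{21}, E_{22}\) can also be replayed in code; a small sketch with illustrative entries:

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [4.0,  2.0]])       # illustrative values of a, b, c, d

total = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0             # the basis matrix E_{ij}
        total += A[i, j] * E      # the coefficient is the (i, j) entry of A
print(np.array_equal(total, A))   # True: A = a*E11 + b*E12 + c*E21 + d*E22
```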

A basis of a vector space V is a fortiori a spanning set for V, so every element of V can be written as a linear combination of the elements of the basis. The following lemma shows that every element of V can be written in exactly one way as a linear combination of the elements of a given basis.

Lemma 3.6 Let \(\mathcal{B}= \mathbf{b} _1, \ldots, \mathbf{b} _n\) be a basis of a vector space V. If \(\lambda_i\) and \(\mu_i\) are scalars such that \(\sum_{i=1}^n \lambda_{i} \mathbf{b}_i = \sum_{i=1}^n \mu_{i} \mathbf{b}_i\), then \(\lambda_{i}=\mu_{i}\) for all i.
Proof. Rearranging we get \[\begin{equation*} \mathbf{0}_V=\sum_{i=1}^n (\lambda_i-\mu_i) \mathbf{b}_i \end{equation*}\] and since \(\mathcal{B}\) is linearly independent, \(\lambda_i-\mu_i=0\) for all i.

This result shows that a basis can be used to “coordinatize” an abstract finite-dimensional vector space: each element corresponds to the unique list of coefficients expressing it in the basis, so V is essentially “just like” \(\mathbb{R}^n\) or \(\mathbb{C}^n\).
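
Concretely, for \(V = \RR^n\) finding the coordinates guaranteed by Lemma 3.6 amounts to solving a square linear system. A minimal sketch using the basis \(\mathbf{x}, \mathbf{y}\) of \(\RR^2\) from Example 3.6 and an illustrative vector:

```python
import numpy as np

# The columns of B are the basis vectors x = (1, 0) and y = (1, 1) of R^2.
B = np.column_stack([np.array([1.0, 0.0]), np.array([1.0, 1.0])])
v = np.array([3.0, 5.0])  # an illustrative vector to coordinatize

# Lemma 3.6: the coefficients are the unique solution of B @ c = v.
c = np.linalg.solve(B, v)
print(c)  # [-2.  5.], i.e. v = -2*x + 5*y
```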

We would like to use the size of a basis of a vector space as a measure of the size of the vector space. The difficulty we have is that a vector space can have many different bases, and it is not clear that they should all have the same size. The next few results will help us prove this.

Lemma 3.7 (Extension lemma). Suppose \(\mathbf{l}_1, \ldots, \mathbf{l}_n\) are linearly independent elements of a vector space V, and let \(\mathbf{v} \in V\) with \(\mathbf{v} \notin \spa \{ \mathbf{l} _1, \ldots, \mathbf{l} _n \}\). Then \(\mathbf{l} _1,\ldots, \mathbf{l} _n, \mathbf{v}\) is linearly independent.

Proof. We’ll prove the contrapositive: if \(\mathbf{l} _1,\ldots, \mathbf{l}_n, \mathbf{v}\) is linearly dependent then \(\mathbf{v} \in \spa \{ \mathbf{l} _1,\ldots, \mathbf{l}_n \}\).

Suppose \(\mathbf{l} _1,\ldots, \mathbf{l} _n,\mathbf{v}\) is linearly dependent. Then there are scalars \(\lambda,\lambda_1,\ldots,\lambda_n\), not all zero, such that \[\begin{equation*} \lambda \mathbf{v} + \sum_{i=1}^n \lambda_i \mathbf{l}_i = \mathbf{0}_V. \end{equation*}\] \(\lambda\) can’t be zero, for then this equation would say that \(\mathbf{l} _1,\ldots, \mathbf{l} _n\) was linearly dependent. Therefore we can rearrange to get \[\begin{equation*} \mathbf{v} = -\lambda^{-1} \sum_{i=1}^n \lambda_i \mathbf{l}_i=\sum_{i=1}^n (-\lambda^{-1} \lambda_i)\mathbf{l}_i \in \spa \{ \mathbf{l} _1,\ldots, \mathbf{l} _n\}. \end{equation*}\]

Proposition 3.1 (Extending to a basis). Let V be finite-dimensional and let \(\mathbf{l} _1,\ldots, \mathbf{l}_n\) be linearly independent. Then there is a basis of V containing \(\mathbf{l} _1,\ldots, \mathbf{l} _n\).

Proof. Let \(\mathcal{L}=( \mathbf{l} _1,\ldots, \mathbf{l} _n)\). Since V is finite-dimensional it has a finite spanning set \(\{\mathbf{v}_1,\ldots,\mathbf{v}_m\}\). Define a sequence of subsets of V as follows: \(\mathcal{S}_0 = \mathcal{L}\), and for \(0 \leq i < m\), \[\begin{equation*}\mathcal{S}_{i+1}= \begin{cases} \mathcal{S}_i & \text{if }\mathbf{v}_{i+1} \in \spa \mathcal{S}_i \\ \mathcal{S}_i \cup \{ \mathbf{v}_{i+1}\} & \text{otherwise.}\end{cases} \end{equation*}\]
Note that in either case \(\mathbf{v}_{i+1} \in \spa \mathcal{S}_{i+1}\), and also that \(\mathcal{S}_0 \subseteq \mathcal{S}_1 \subseteq \cdots \subseteq \mathcal{S}_m\).

Each set \(\mathcal{S}_i\) is linearly independent, by induction: \(\mathcal{S}_0 = \mathcal{L}\) is linearly independent, and at each step we either leave the set unchanged or add a vector outside its span, which preserves linear independence by Lemma 3.7. In particular \(\mathcal{S}_m\) is linearly independent. Furthermore \(\spa \mathcal{S}_m\) contains the spanning set \(\{\mathbf{v}_1,\ldots, \mathbf{v}_m\}\) because for each i we have \(\mathbf{v}_i \in \spa \mathcal{S}_i \subseteq \spa \mathcal{S}_m\), so by Lemma 3.5, \(\spa \mathcal{S}_m = V\). Therefore \(\mathcal{S}_m\) is a basis containing \(\mathcal{L}\).

Corollary 3.1 Every finite-dimensional vector space has a basis.
Proof. Apply Proposition 3.1 to the linearly independent set \(\emptyset\).

The process of taking a linearly independent set and finding a basis containing it is called extending to a basis. In general there will be many different bases containing a given linearly independent set.
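
The proof of Proposition 3.1 is effectively an algorithm: sweep through a spanning set and keep each vector that is not already in the span of what has been collected so far. A hedged numpy sketch for subspaces of \(\RR^m\) (a rank computation stands in for the span-membership test; the function name and the example data are illustrative):

```python
import numpy as np

def extend_to_basis(independent, spanning):
    """Greedily extend a linearly independent list to a basis,
    following the proof of Proposition 3.1."""
    basis = list(independent)
    for v in spanning:
        # Keep v only if it lies outside the span of the current list,
        # i.e. if appending it increases the rank (the Extension lemma).
        if np.linalg.matrix_rank(np.column_stack(basis + [v])) == len(basis) + 1:
            basis.append(v)
    return basis

# Extend the independent list (e_1,) to a basis of R^3, sweeping through
# the spanning set e_1, e_2, e_1 + e_2, e_3.
e1, e2, e3 = np.eye(3)
for b in extend_to_basis([e1], [e1, e2, e1 + e2, e3]):
    print(b)  # prints e_1, then e_2, then e_3
```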

The next lemma is a technical result that will be used in our proof that any two bases have the same size.

Lemma 3.8 Let \(( \mathbf{e} _1,\ldots, \mathbf{e} _n)\) be a basis of the vector space V, and let \(\mathbf{f} = \sum _{i=1}^n \lambda _i \mathbf{e}_i\) where \(\lambda_j \neq 0\) for some fixed j. Then \(\mathcal{B} =( \mathbf{e} _1,\ldots, \mathbf{e} _{j-1}, \mathbf{f} , \mathbf{e} _{j+1},\ldots, \mathbf{e} _n)\) is a basis of V.

Proof. Suppose \(\mu \mathbf{f}+\sum_{i\neq j} \mu_i \mathbf{e} _i = \mathbf{0}_V\). Since \(\mathbf{f} = \sum_{i=1}^n \lambda_i \mathbf{e}_i\) we have \[\begin{equation*} \mu\lambda_j \mathbf{e} _j+ \sum_{i \neq j} (\mu_i+\mu\lambda_i) \mathbf{e} _i = \mathbf{0} _V \end{equation*}\] Linear independence of the \(\mathbf{e} _i\) implies that all the coefficients here are zero. So \(\mu\lambda_j=0\), and since \(\lambda_j\neq 0\) we must have \(\mu=0\). Now \(\mu_i + \mu\lambda_i=0\) for all \(i\neq j\), but since \(\mu=0\) we have \(\mu_i=0\) for all \(i\neq j\). It follows that \(\mathcal{B}\) is linearly independent.

\(\spa \mathcal{B}\) obviously contains each \(\mathbf{e} _i\) with \(i \neq j\), and \[\begin{equation*} \mathbf{e} _j = \lambda_j^{-1} \mathbf{f} - \sum_{i\neq j} \lambda_j^{-1}\lambda_i \mathbf{e} _i \in \spa \mathcal{B} . \end{equation*}\] It follows that \(\spa \mathcal{B}\) contains the spanning set \(\mathbf{e}_1,\ldots, \mathbf{e}_n\), so is all of V, so \(\mathcal{B}\) is a spanning set and therefore is a basis.
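
Numerically, the exchange in Lemma 3.8 can be watched through ranks: swapping \(\mathbf{f}\) in for \(\mathbf{e}_j\) keeps a basis precisely when the coefficient \(\lambda_j\) is nonzero. An illustrative sketch in \(\RR^3\):

```python
import numpy as np

e1, e2, e3 = np.eye(3)
f = 2 * e1 + 5 * e3  # f = 2*e1 + 0*e2 + 5*e3: lambda_1 = 2, lambda_2 = 0

# Swapping f in for e1 (coefficient 2 != 0) still gives rank 3: a basis.
print(np.linalg.matrix_rank(np.column_stack([f, e2, e3])))  # 3
# Swapping f in for e2 (coefficient 0) only gives rank 2: not a basis.
print(np.linalg.matrix_rank(np.column_stack([e1, f, e3])))  # 2
```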

Theorem 3.1 Any two bases of a vector space have the same size.

Proof. (not examinable). Let \(\mathcal{B} =( \mathbf{e} _1,\ldots, \mathbf{e} _n)\) and \(\mathcal{C}=( \mathbf{f} _1, \ldots, \mathbf{f}_m)\) be bases and suppose that \(m\geq n\).

\(\mathbf{f} _1\) can be written as a linear combination of the \(\mathbf{e}_i\) since they form a spanning set. Since \(\mathbf{f} _1\neq \mathbf{0}_V\) (an element of a linearly independent sequence cannot be zero), one of the coefficients in this linear combination is nonzero — by renumbering the \(\mathbf{e} _i\) if necessary, we can assume it is the coefficient of \(\mathbf{e} _1\). By Lemma 3.8 \(\mathcal{B} '=( \mathbf{f} _1, \mathbf{e} _2,\ldots, \mathbf{e}_n)\) is a basis.

Now repeat this with \(\mathbf{f} _2\) and \(\mathcal{B} '\): we can write \(\mathbf{f} _2\) as a linear combination of the elements of \(\mathcal{B} '\), and one of the coefficients of the \(\mathbf{e} _i\) in this linear combination must be nonzero (otherwise \(\mathbf{f} _2\) would be a multiple of \(\mathbf{f} _1\), contradicting linear independence of \(\mathcal{C}\)). By renumbering we can assume the coefficient of \(\mathbf{e} _2\) is nonzero, and then by Lemma 3.8 \(( \mathbf{f} _1, \mathbf{f} _2, \mathbf{e}_3,\ldots, \mathbf{e} _n)\) is a basis.

Repeating n times we end up with \(( \mathbf{f}_1,\ldots, \mathbf{f}_n)\) being a basis. If \(m>n\) we would then have \(\mathbf{f} _{n+1} \in \spa \{ \mathbf{f} _1,\ldots, \mathbf{f} _n\}\) contradicting linear independence of \(\mathcal{C}\), so it must be that \(m=n\).


Definition 3.8 Let V be a finite-dimensional vector space. The dimension of V, written \(\dim V\), is the size of any basis of V. This is well-defined by Theorem 3.1: all bases of V have the same size.

Corollary 3.2 If \(\dim V=n\) then any \(n+1\) elements of V are linearly dependent.
Proof. If they were linearly independent, Proposition 3.1 would give a basis containing them, of size at least \(n+1\). But every basis has size n.

Example 3.8

  • The empty sequence is a basis of the zero vector space, so the zero vector space has dimension zero. From Example 3.7 we see that \(\dim \RR^n = n\) and \(\dim M_{2\times 2}(\RR) = 4\).
  • You can generalize the calculation in Example 3.7 to prove that \(\dim M_{n\times m}(\RR) = nm\), and likewise \(\dim M_{n\times m}(\CC) = nm\).
  • Suppose V is a one-dimensional \(\mathbb{F}\)-vector space. It has a basis \(\mathbf{v}\) of size 1, and every element of V can be written as a linear combination of this basis, that is, a scalar multiple of \(\mathbf{v}\). So \(V = \{\lambda \mathbf{v} : \lambda \in \mathbb{F}\}\).

Example 3.9 Let V be the set of 3✕1 column vectors \(\begin{pmatrix} a\\b\\c \end{pmatrix}\) with real entries such that \(a+b+c=0\). You should check that V is a subspace of \(\rr^3\). To find \(\dim V\), we need a basis of V.

A typical element of V looks like \(\begin{pmatrix} a \\ b \\ -a-b \end{pmatrix}\), so a good start is to notice that \[\begin{equation} \tag{3.2} \begin{pmatrix} a\\b\\-a-b \end{pmatrix}= a \begin{pmatrix} 1\\0\\-1 \end{pmatrix} + b \begin{pmatrix} 0\\1\\-1 \end{pmatrix}. \end{equation}\] We might guess that the two vectors \(\mathbf{u} = \begin{pmatrix} 1\\0\\-1 \end{pmatrix}\) and \(\mathbf{v} = \begin{pmatrix} 0\\1\\-1 \end{pmatrix}\) are a basis. Since any element of V equals \(\begin{pmatrix} a\\b\\-a-b \end{pmatrix}\) for some \(a,b\), equation (3.2) shows that they are a spanning set. To check they are linearly independent, suppose that \(\lambda \mathbf{u} +\mu \mathbf{v} = \mathbf{0}_V\), so that \[\begin{equation*} \lambda \begin{pmatrix} 1\\0\\-1 \end{pmatrix} + \mu \begin{pmatrix} 0\\1\\-1 \end{pmatrix} = \begin{pmatrix} 0\\0\\0 \end{pmatrix} . \end{equation*}\] The vector on the right has entries \(\lambda, \mu, -\lambda-\mu\) so we have \(\lambda = \mu=0\). This shows that \(\mathbf{u}\) and \(\mathbf{v}\) are linearly independent, so they’re a basis of V which therefore has dimension 2.
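
The subspace in Example 3.9 is the solution set of the single equation \(a+b+c=0\), so the answer can be double-checked numerically. A sketch assuming the standard rank–nullity fact that the solution space of \(A\mathbf{x}=\mathbf{0}\) in \(\RR^3\) has dimension \(3-\operatorname{rank}(A)\):

```python
import numpy as np

# V = {(a, b, c) : a + b + c = 0} is the set of solutions of A @ x = 0.
A = np.array([[1.0, 1.0, 1.0]])
print(3 - np.linalg.matrix_rank(A))  # 2, matching dim V = 2 found above

# The basis found in Example 3.9 lies in V and is linearly independent:
u = np.array([1.0, 0.0, -1.0])
v = np.array([0.0, 1.0, -1.0])
print(A @ u, A @ v)                                    # [0.] [0.]
print(np.linalg.matrix_rank(np.column_stack([u, v])))  # 2
```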

3.2.4 Dimensions of subspaces

If dimension is really a good measure of the size of a vector space, then when U is a subspace of V we ought to have \(\dim U \leq \dim V\). But it isn’t obvious from the definitions that a subspace of a finite-dimensional vector space even has a dimension, so we need the following:

Lemma 3.9 If \(U \leq V\) and V is finite-dimensional then U is finite-dimensional.

Proof. Suppose for a contradiction that U is not finite-dimensional, so it is not spanned by any finite set of elements of U.

We claim that for any \(n\geq 0\) there exists a linearly independent subset of U of size n. The proof is by induction, and for \(n=0\) the empty set works. For the inductive step, suppose \(\mathcal{L}\) is a linearly independent subset of U of size n. Since U is not spanned by any finite set of its elements, there exists \(\mathbf{u}\in U \setminus \spa \mathcal{L}\). Then \(\mathcal{L}\cup \{\mathbf{u}\}\) is linearly independent by Lemma 3.7 and has size \(n+1\), completing the inductive step.

In particular there is a linearly independent subset of U, and hence of V, of size \(\dim V+1\), contradicting Corollary 3.2.

Proposition 3.2 Let U be a subspace of the finite-dimensional vector space V. Then

  • \(\dim U \leq \dim V\), and
  • if \(\dim U=\dim V\) then \(U=V\).

Proof.

  • U is finite-dimensional by Lemma 3.9, so it has a finite basis \(\mathcal{B}\). By Corollary 3.2, \(\mathcal{B}\), being a linearly independent subset of V, has size at most \(\dim V\). Therefore \(\dim U = | \mathcal{B} | \leq \dim V\).
  • If \(\dim U = \dim V\) and \(\mathbf{v} \in V \setminus U\) then \(\mathcal{B} \cup \{ \mathbf{v} \}\) is linearly independent by Lemma 3.7. But it has size larger than \(\dim V\), contradicting Corollary 3.2. So \(V\setminus U=\emptyset\) and \(U=V\).