# 3.6 Systems of linear equations

## 3.6.1 Definition of a linear system

###### Definition 3.6.1.

A system of $m$ linear equations in $n$ unknowns $x_{1},\ldots,x_{n}$ with coefficients $a_{ij},1\leqslant i\leqslant m,1\leqslant j\leqslant n$ and $b_{1},\ldots,b_{m}$ is a list of simultaneous equations

$$\begin{aligned}
a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n}&=b_{1}\\
a_{21}x_{1}+a_{22}x_{2}+\cdots+a_{2n}x_{n}&=b_{2}\\
&\;\;\vdots\\
a_{m1}x_{1}+a_{m2}x_{2}+\cdots+a_{mn}x_{n}&=b_{m}
\end{aligned}$$

As the notation suggests, we can turn a system of linear equations into a matrix equation and study it using matrix methods.

## 3.6.2 Matrix form of a linear system

Every system of linear equations can be written in matrix form: the above system is equivalent to saying that $A\mathbf{x}=\mathbf{b}$, where $A=(a_{ij})$, $\mathbf{x}=\begin{pmatrix}x_{1}\\ \vdots\\ x_{n}\end{pmatrix}$, and $\mathbf{b}=\begin{pmatrix}b_{1}\\ \vdots\\ b_{m}\end{pmatrix}$.

###### Example 3.6.1.

The system of linear equations

$$\begin{aligned}
2x+3y+4z&=5\\
x+5z&=0
\end{aligned}\tag{3.9}$$

has matrix form

 $\begin{pmatrix}2&3&4\\ 1&0&5\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}5\\ 0\end{pmatrix}.$

This connection means that we can use systems of linear equations to learn about matrices, and use matrices to learn about systems of linear equations. For example, if $A$ is invertible and we want to solve the matrix equation

 $A\mathbf{x}=\mathbf{b}$

we could multiply both sides by $A^{-1}$ to see that there is a unique solution $\mathbf{x}=A^{-1}\mathbf{b}$.
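As a small numerical sketch of this (with a hypothetical invertible $2\times 2$ matrix, not one from the text), we can compute $A^{-1}$ by the explicit formula for a $2\times 2$ inverse and check that $\mathbf{x}=A^{-1}\mathbf{b}$ really satisfies $A\mathbf{x}=\mathbf{b}$:

```python
from fractions import Fraction

# Hypothetical example: solve A x = b for an invertible 2x2 matrix A,
# using the explicit inverse formula A^{-1} = (1/det) ((d, -b), (-c, a)).
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(1)]]
b = [Fraction(3), Fraction(2)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # nonzero, so A is invertible
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

# The unique solution x = A^{-1} b.
x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]

# Verify A x = b.
check = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
print(x, check == b)
```

Exact rational arithmetic (`Fraction`) is used so the check `check == b` is not affected by floating-point rounding.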

We are going to make two more observations about solving linear systems based on what we know about matrix multiplication. The first is that by Proposition 3.2.1, the vectors which can be written as $A\mathbf{u}$ for some $\mathbf{u}$ are exactly the ones which are linear combinations of the columns of $A$, that is, vectors of the form

 $u_{1}\mathbf{c}_{1}+\cdots+u_{n}\mathbf{c}_{n}$

where $\mathbf{c}_{j}$ is the $j$th column of $A$. So the matrix equation $A\mathbf{x}=\mathbf{b}$ has a solution if and only if $\mathbf{b}$ can be written as a linear combination of the columns of $A$. This set of linear combinations is therefore important enough to have a name.
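To see Proposition 3.2.1 in action numerically, here is a sketch (using the matrix from Example 3.6.1 and an arbitrarily chosen vector $\mathbf{u}$) that $A\mathbf{u}$ equals the linear combination $u_{1}\mathbf{c}_{1}+\cdots+u_{n}\mathbf{c}_{n}$ of the columns:

```python
# Matrix from Example 3.6.1 and a sample vector u.
A = [[2, 3, 4],
     [1, 0, 5]]
u = [1, -2, 3]

# A u computed row by row (the usual matrix-vector product).
Au = [sum(A[i][j] * u[j] for j in range(3)) for i in range(2)]

# The same vector built column by column: u_1 c_1 + u_2 c_2 + u_3 c_3,
# where c_j is the jth column of A.
cols = [[A[i][j] for i in range(2)] for j in range(3)]
lin_comb = [sum(u[j] * cols[j][i] for j in range(3)) for i in range(2)]

print(Au, Au == lin_comb)
```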

###### Definition 3.6.2.

The column space of a matrix $A$, written $C(A)$, is the set of all linear combinations of the columns of $A$.

A homogeneous matrix equation is one of the form $A\mathbf{x}=\mathbf{0}$. These are particularly important because the solutions to any matrix equation $A\mathbf{x}=\mathbf{b}$ can be expressed in terms of the solutions to the corresponding homogeneous equation $A\mathbf{x}=\mathbf{0}$.

###### Theorem 3.6.1.

Let $\mathbf{p}$ be a solution of the matrix equation $A\mathbf{x}=\mathbf{b}$. Then any solution of $A\mathbf{x}=\mathbf{b}$ can be written as $\mathbf{p}+\mathbf{k}$ for some vector $\mathbf{k}$ such that $A\mathbf{k}=\mathbf{0}$.

###### Proof.

Suppose $\mathbf{q}$ is a solution of $A\mathbf{x}=\mathbf{b}$. Then $A\mathbf{p}=A\mathbf{q}=\mathbf{b}$, so $A(\mathbf{q}-\mathbf{p})=\mathbf{0}$. Letting $\mathbf{k}=\mathbf{q}-\mathbf{p}$ we have $A\mathbf{k}=\mathbf{0}$ and $\mathbf{q}=\mathbf{p}+\mathbf{k}$ as claimed. ∎

The theorem tells you that if you can solve the homogeneous equation $A\mathbf{x}=\mathbf{0}$ and you can somehow find a particular solution $\mathbf{p}$ of $A\mathbf{x}=\mathbf{b}$, then you know all the solutions of the inhomogeneous equation $A\mathbf{x}=\mathbf{b}$. Conversely, if $A\mathbf{k}=\mathbf{0}$ then $A(\mathbf{p}+\mathbf{k})=A\mathbf{p}+A\mathbf{k}=\mathbf{b}$, so every vector of the form $\mathbf{p}+\mathbf{k}$ really is a solution.
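A numerical sketch of this for the system (3.9): one particular solution is $\mathbf{p}=(0,5/3,0)$ (found by setting $z=0$), and $\mathbf{k}=(-5,2,1)$ solves the homogeneous equation, so every $\mathbf{p}+t\mathbf{k}$ solves $A\mathbf{x}=\mathbf{b}$:

```python
from fractions import Fraction

# System (3.9) in matrix form A x = b.
A = [[2, 3, 4],
     [1, 0, 5]]
b = [Fraction(5), Fraction(0)]

def matvec(A, v):
    """Matrix-vector product A v, computed with exact rationals."""
    return [sum(Fraction(A[i][j]) * v[j] for j in range(len(v)))
            for i in range(len(A))]

# A particular solution p of A x = b, and a solution k of A x = 0.
p = [Fraction(0), Fraction(5, 3), Fraction(0)]
k = [Fraction(-5), Fraction(2), Fraction(1)]
assert matvec(A, p) == b
assert matvec(A, k) == [0, 0]

# Every p + t k is again a solution of A x = b.
for t in range(-3, 4):
    q = [pi + t * ki for pi, ki in zip(p, k)]
    assert matvec(A, q) == b
print("all checks passed")
```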

What does it mean for $A\mathbf{k}=\mathbf{0}$ to be true? Using Proposition 3.2.1 again, it says that

$$k_{1}\mathbf{c}_{1}+\cdots+k_{n}\mathbf{c}_{n}=\mathbf{0}\tag{3.10}$$

where the $k_{j}$ are the entries of $\mathbf{k}$ and the $\mathbf{c}_{j}$ are the columns of $A$. An equation of the form (3.10) is called a linear dependence relation, or just a linear dependence, on $\mathbf{c}_{1},\ldots,\mathbf{c}_{n}$. We’ve seen that solutions of the matrix equation $A\mathbf{x}=\mathbf{0}$ correspond to linear dependences on the columns of $A$.
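For instance, for the matrix of Example 3.6.1 one checks that the columns satisfy the linear dependence relation

$$-5\begin{pmatrix}2\\ 1\end{pmatrix}+2\begin{pmatrix}3\\ 0\end{pmatrix}+1\begin{pmatrix}4\\ 5\end{pmatrix}=\begin{pmatrix}-10+6+4\\ -5+0+5\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},$$

so $\mathbf{k}=\begin{pmatrix}-5\\ 2\\ 1\end{pmatrix}$ is a nonzero solution of the corresponding homogeneous equation $A\mathbf{x}=\mathbf{0}$.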

The solutions of the matrix equation $A\mathbf{x}=\mathbf{0}_{m}$ are so important that they get their own name.

###### Definition 3.6.3.

The nullspace of an $m\times n$ matrix $A$, written $N(A)$, is $\{\mathbf{v}\in\mathbb{R}^{n}:A\mathbf{v}=\mathbf{0}_{m}\}$.

The homogeneous equation $A\mathbf{x}=\mathbf{0}_{m}$ has the following properties: the zero vector is a solution; if $\mathbf{u}$ and $\mathbf{v}$ are solutions then so is $\mathbf{u}+\mathbf{v}$; and if $\mathbf{u}$ is a solution and $\lambda$ is a number then $\lambda\mathbf{u}$ is also a solution. This is what it means to say that $N(A)$ is a subspace of $\mathbb{R}^{n}$, something we will cover in the final chapter of MATH0005.
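A quick sketch of these closure properties, using a hypothetical $1\times 3$ matrix (not one from the text) whose nullspace is easy to describe:

```python
# Hypothetical example: A is the 1x3 matrix (1 1 1), so N(A) consists of
# the vectors (x, y, z) with x + y + z = 0.
A = [[1, 1, 1]]

def in_nullspace(v):
    """True if A v = 0, i.e. v lies in N(A)."""
    return all(sum(row[j] * v[j] for j in range(len(v))) == 0 for row in A)

u = [1, -1, 0]
v = [0, 1, -1]
assert in_nullspace(u) and in_nullspace(v)

# Closure under addition and under scalar multiplication.
u_plus_v = [a + b for a, b in zip(u, v)]
three_u  = [3 * a for a in u]
print(in_nullspace(u_plus_v), in_nullspace(three_u))  # True True
```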

## 3.6.3 Augmented matrix

The augmented matrix of a system of linear equations whose matrix form is $A\mathbf{x}=\mathbf{b}$ is the matrix which you get by adding $\mathbf{b}$ as an extra column on the right of $A$. We write this as $(A\mid\mathbf{b})$ or just $(A\;\mathbf{b})$.

For example, the augmented matrix for the system of linear equations (3.9) above would be

 $\begin{pmatrix}2&3&4&5\\ 1&0&5&0\end{pmatrix}.$
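Forming the augmented matrix is just a matter of appending the entries of $\mathbf{b}$ to the rows of $A$; a minimal sketch in code, using the system (3.9):

```python
# System (3.9): A x = b.
A = [[2, 3, 4],
     [1, 0, 5]]
b = [5, 0]

# The augmented matrix (A | b): append each entry of b to its row of A.
augmented = [row + [b_i] for row, b_i in zip(A, b)]
print(augmented)  # [[2, 3, 4, 5], [1, 0, 5, 0]]
```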

###### Definition 3.6.4.

A solution to a matrix equation $A\mathbf{x}=\mathbf{b}$ is a vector $\mathbf{y}$ (of numbers this time, not unknowns) such that $A\mathbf{y}=\mathbf{b}$.

A system of linear equations may have a unique solution, many different solutions, or no solutions at all. In future lectures we will see how to find out how many solutions, if any, a system has.