3 Matrices

3.6 Systems of linear equations

3.6.1 Definition of a linear system

Definition 3.6.1.

A system of $m$ linear equations in $n$ unknowns $x_1, \ldots, x_n$ with coefficients $a_{ij}$, $1 \le i \le m$, $1 \le j \le n$, and $b_1, \ldots, b_m$ is a list of simultaneous equations

\begin{align*}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{align*}

As the notation suggests, we can turn a system of linear equations into a matrix equation and study it using matrix methods.

3.6.2 Matrix form of a linear system

Every system of linear equations can be written in matrix form: the above system is equivalent to saying that $A\mathbf{x} = \mathbf{b}$, where $A = (a_{ij})$, $\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$, and $\mathbf{b} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$.

Example 3.6.1.

The system of linear equations

\begin{align}
2x + 3y + 4z &= 5 \nonumber \\
x + 5z &= 0 \tag{3.9}
\end{align}

has matrix form

\[
\begin{pmatrix} 2 & 3 & 4 \\ 1 & 0 & 5 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} 5 \\ 0 \end{pmatrix}.
\]

This connection means that we can use systems of linear equations to learn about matrices, and use matrices to learn about systems of linear equations. For example, if A is invertible and we want to solve the matrix equation

A𝐱=𝐛

we could multiply both sides by $A^{-1}$ to see that there is a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$.
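The notes themselves don't use software, but this observation is easy to check numerically. A minimal sketch with NumPy, using an invertible $2 \times 2$ matrix chosen just for illustration:

```python
import numpy as np

# Illustrative system (not from the notes): 2x + y = 5, x + 3y = 10.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# A is invertible (det = 2*3 - 1*1 = 5, which is nonzero), so the unique
# solution is x = A^{-1} b.
x = np.linalg.inv(A) @ b

# In practice np.linalg.solve is preferred: it solves A x = b directly
# without forming the inverse, which is faster and more accurate.
x_solve = np.linalg.solve(A, b)

print(np.allclose(x, x_solve))  # True: the two methods agree
print(np.allclose(A @ x, b))    # True: x really solves the system
```

Forming $A^{-1}$ explicitly is fine for theory, but numerical libraries solve the system directly for accuracy.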

We are going to make two more observations about solving linear systems based on what we know about matrix multiplication. The first is that by Proposition 3.2.1, the vectors which can be written as A𝐮 for some 𝐮 are exactly the ones which are linear combinations of the columns of A, that is, vectors of the form

\[
u_1 \mathbf{c}_1 + \cdots + u_n \mathbf{c}_n
\]

where $\mathbf{c}_j$ is the $j$th column of $A$. So the matrix equation $A\mathbf{x} = \mathbf{b}$ has a solution if and only if $\mathbf{b}$ can be written as a linear combination of the columns of $A$. This set of linear combinations is therefore important enough to have a name.

Definition 3.6.2.

The column space of a matrix A, written C(A), is the set of all linear combinations of the columns of A.
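One way to test numerically whether a given vector lies in $C(A)$ is sketched below. It relies on a standard fact (not yet proved in these notes) that appending $\mathbf{b}$ as an extra column leaves the rank of the matrix unchanged exactly when $\mathbf{b}$ is a linear combination of the columns of $A$; the matrix here is a hypothetical example:

```python
import numpy as np

def in_column_space(A, b):
    """Check whether b is a linear combination of the columns of A.

    Uses the standard fact that A x = b has a solution exactly when
    appending b as an extra column does not increase the rank.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)

# Illustrative matrix (not from the notes): its two columns span a plane.
A = np.array([[1, 0],
              [0, 1],
              [0, 0]])

print(in_column_space(A, [2, 3, 0]))  # True: equals 2*c1 + 3*c2
print(in_column_space(A, [0, 0, 1]))  # False: third coordinate unreachable
```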

A homogeneous matrix equation is one of the form A𝐱=𝟎. These are particularly important because the solutions to any matrix equation A𝐱=𝐛 can be expressed in terms of the solutions to the corresponding homogeneous equation A𝐱=𝟎.

Theorem 3.6.1.

Let 𝐩 be a solution of the matrix equation A𝐱=𝐛. Then any solution of A𝐱=𝐛 can be written as 𝐩+𝐤 for some vector 𝐤 such that A𝐤=𝟎.

Proof.

Suppose 𝐪 is a solution of A𝐱=𝐛. Then A𝐩 = 𝐛 = A𝐪, so A(𝐪 − 𝐩) = 𝟎. Letting 𝐤 = 𝐪 − 𝐩 we get A𝐤 = 𝟎 and 𝐪 = 𝐩 + 𝐤 as claimed. ∎

The theorem tells you that if you can solve the homogeneous equation A𝐱=𝟎 and you can somehow find a particular solution 𝐩 of A𝐱=𝐛, you know all the solutions of the inhomogeneous equation A𝐱=𝐛.
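This structure can be seen concretely in the system (3.9) from the example above. The particular solution $\mathbf{p}$ below is just one convenient choice (found by setting $x = z = 0$), and the homogeneous solutions were computed by hand:

```python
import numpy as np

# The system (3.9) from the notes: 2x + 3y + 4z = 5, x + 5z = 0.
A = np.array([[2.0, 3.0, 4.0],
              [1.0, 0.0, 5.0]])
b = np.array([5.0, 0.0])

# A particular solution p, found by setting x = z = 0 and solving for y.
p = np.array([0.0, 5.0 / 3.0, 0.0])

# Solving A k = 0 by hand: x + 5z = 0 gives x = -5z, and substituting into
# 2x + 3y + 4z = 0 gives y = 2z, so every homogeneous solution is t(-5, 2, 1).
k = np.array([-5.0, 2.0, 1.0])

print(np.allclose(A @ p, b))   # True: p solves A x = b
print(np.allclose(A @ k, 0))   # True: k solves A x = 0
for t in [0.0, 1.0, -2.5]:     # p + t k is a solution for every t
    print(np.allclose(A @ (p + t * k), b))
```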

What does it mean for A𝐤=𝟎 to be true? Using Proposition 3.2.1 again, it says that

\[
k_1 \mathbf{c}_1 + \cdots + k_n \mathbf{c}_n = \mathbf{0} \tag{3.10}
\]

where the $k_j$ are the entries of $\mathbf{k}$ and the $\mathbf{c}_j$ are the columns of $A$. An equation of the form (3.10) is called a linear dependence relation, or just a linear dependence, on $\mathbf{c}_1, \ldots, \mathbf{c}_n$. We've seen that solutions of the matrix equation $A\mathbf{x} = \mathbf{0}$ correspond to linear dependences on the columns of $A$.
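As a concrete check of this correspondence, take the matrix of system (3.9): the vector $(-5, 2, 1)$ solves $A\mathbf{k} = \mathbf{0}$ (a hand computation), and its entries give a linear dependence of the form (3.10) on the columns:

```python
import numpy as np

# Columns of the matrix from system (3.9).
c1 = np.array([2.0, 1.0])
c2 = np.array([3.0, 0.0])
c3 = np.array([4.0, 5.0])

# k = (-5, 2, 1) solves A k = 0, so its entries are coefficients of a
# linear dependence relation (3.10): -5 c1 + 2 c2 + 1 c3 = 0.
relation = -5.0 * c1 + 2.0 * c2 + 1.0 * c3
print(np.allclose(relation, 0))  # True
```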

The solutions of the matrix equation $A\mathbf{x} = \mathbf{0}_m$ are so important that they get their own name.

Definition 3.6.3.

The nullspace of an $m \times n$ matrix $A$, written $N(A)$, is $\{\mathbf{v} \in \mathbb{R}^n : A\mathbf{v} = \mathbf{0}_m\}$.

The homogeneous equation $A\mathbf{x} = \mathbf{0}_m$ has the property that the zero vector is a solution; if 𝐮 and 𝐯 are solutions then so is 𝐮+𝐯; and if λ is a number and 𝐮 is a solution then λ𝐮 is also a solution. This is what it means to say that N(A) is a subspace of $\mathbb{R}^n$, something we will cover in the final chapter of MATH0005.
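These closure properties can be verified numerically. A small sketch using the matrix of system (3.9), whose nullspace (computed by hand) is the line $\{t(-5, 2, 1) : t \in \mathbb{R}\}$:

```python
import numpy as np

# Matrix of system (3.9); solving A x = 0 by hand gives
# N(A) = { t(-5, 2, 1) : t real }, a line through the origin in R^3.
A = np.array([[2.0, 3.0, 4.0],
              [1.0, 0.0, 5.0]])
u = 1.0 * np.array([-5.0, 2.0, 1.0])   # one element of N(A) ...
v = -2.0 * np.array([-5.0, 2.0, 1.0])  # ... and another

print(np.allclose(A @ np.zeros(3), 0))  # True: the zero vector is in N(A)
print(np.allclose(A @ (u + v), 0))      # True: closed under addition
print(np.allclose(A @ (7.0 * u), 0))    # True: closed under scaling
```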

3.6.3 Augmented matrix

The augmented matrix of a system of linear equations whose matrix form is A𝐱=𝐛 is the matrix you get by adding 𝐛 as an extra column on the right of A. We write this as $(A \mid \mathbf{b})$ or just $(A \;\, \mathbf{b})$.

For example, the augmented matrix for the system of linear equations (3.9) above would be

\[
\begin{pmatrix} 2 & 3 & 4 & 5 \\ 1 & 0 & 5 & 0 \end{pmatrix}.
\]

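Building this augmented matrix is a one-line operation in NumPy, shown here as a sketch using the system above:

```python
import numpy as np

# Augmented matrix for system (3.9): append b as an extra column of A.
A = np.array([[2, 3, 4],
              [1, 0, 5]])
b = np.array([5, 0])

aug = np.column_stack([A, b])  # same as np.hstack([A, b.reshape(-1, 1)])
print(aug)
# [[2 3 4 5]
#  [1 0 5 0]]
```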
Definition 3.6.4.

A solution to a matrix equation A𝐱=𝐛 is a vector 𝐲 (of numbers this time, not unknowns) such that A𝐲=𝐛.

A system of linear equations may have a unique solution, many different solutions, or no solutions at all. In future lectures we will see how to find out how many solutions, if any, a system has.
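Ahead of those lectures, the three possibilities can already be illustrated numerically. The sketch below uses a standard rank criterion (which these notes have not yet developed) and three tiny hypothetical systems:

```python
import numpy as np

def count_solutions(A, b):
    """Classify A x = b as 'none', 'unique', or 'infinite' using the
    standard rank test (developed properly in later lectures)."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    r, r_aug = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)
    if r < r_aug:
        return "none"          # b is not in the column space of A
    return "unique" if r == A.shape[1] else "infinite"

print(count_solutions([[1, 0], [0, 1]], [1, 2]))  # unique
print(count_solutions([[1, 1], [2, 2]], [3, 6]))  # infinite: same line twice
print(count_solutions([[1, 1], [1, 1]], [0, 1]))  # none: parallel lines
```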