So...

We can readily solve systems with triangular coefficient matrices.

Not every system has a triangular coefficient matrix, but what if we could transform an arbitrary system into one with an upper triangular coefficient matrix, without changing the solution set?

Gaussian elimination

Guiding philosophy: Use a sequence of moves to transform an arbitrary system into a system with an upper triangular coefficient matrix, without changing the solution set.

This procedure is named after Carl Friedrich Gauss, but essentially the same method appears in the ancient Chinese text \emph{The Nine Chapters on the Mathematical Art}, compiled roughly two millennia ago.

The moves

We have three elementary row operations (EROs) at our disposal.
  • Multiply the $i$th row by a nonzero scalar $c$: $$r_i \gets c r_i.$$
  • Add the $i$th row to the $j$th row: $$r_j \gets r_j + r_i.$$
  • Swap the $i$th and $j$th rows: $$r_i \gets r_j, \qquad r_j \gets r_i.$$
We may combine the first two elementary row operations into the single move $$r_i \gets r_i + c r_j.$$
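As a minimal sketch, the three EROs (and the combined move) can be written as plain Python functions acting on a matrix stored as a list of rows; the function names here are illustrative, not standard library calls.

```python
# The three elementary row operations on a matrix stored as a list of rows.

def scale_row(A, i, c):
    """r_i <- c * r_i, with c nonzero."""
    assert c != 0, "scaling by zero is not an elementary row operation"
    A[i] = [c * x for x in A[i]]

def add_row(A, j, i, c=1):
    """r_j <- r_j + c * r_i (the combined move; c=1 is the basic add)."""
    A[j] = [xj + c * xi for xj, xi in zip(A[j], A[i])]

def swap_rows(A, i, j):
    """Swap r_i and r_j."""
    A[i], A[j] = A[j], A[i]

A = [[2.0, 1.0], [4.0, -6.0]]
add_row(A, 1, 0, c=-2)   # r_2 <- r_2 + (-2) r_1
print(A)                 # [[2.0, 1.0], [0.0, -8.0]]
```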

The game

Think of Gaussian elimination as a game in which you make an ERO at each turn and the goal is to end up with an upper triangular system.

Consider the following linear system. \begin{array}{rccl} 2u &+ v &+ w &= 5\\ 4u &- 6v & &=-2\\ -2u &+ 7v &+ 2w &=9 \end{array}

Step 1

Clear the first column.
\begin{align} r_2 \gets r_2+(-2)r_1\\ r_3 \gets r_3+r_1 \end{align}
\begin{equation} \begin{array}{rccl} \bbox[3pt, border: 3pt solid var(--emphColor)]{2}u &+ v &+ w &= 5\\ 4u &- 6v & &=-2\\ -2u &+ 7v &+ 2w &=9 \end{array} \color{var(--emphColor)}{\longrightarrow} \begin{array}{rccl} \bbox[3pt, border: 3pt solid var(--emphColor)]{2}u &+ v &+ w &= 5\\ & -8v &- 2w &=-12\\ & 8v &+ 3w &= 14 \end{array} \end{equation}
Why did we add $-2$ times the first row to the second? Why did we simply add the first row to the third?
We're using the pivot $\bbox[3pt, border: 3pt solid var(--emphColor)]{2}$ to eliminate the entries below it. Adding $-2$ times the first row cancels the leading term $4u$ in the second equation, and adding $+1$ times the first row cancels the $-2u$ in the third.

Step 2

Clear the second column.
\begin{align} r_3 \gets r_3+ r_2 \end{align}
\begin{equation} \begin{array}{rccl} 2u &+ v &+ w &= 5\\ & \bbox[3pt, border: 3pt solid var(--emphColor)]{-8}v &- 2w &=-12\\ & 8v &+ 3w &= 14 \end{array} \color{var(--emphColor)}{\longrightarrow} \begin{array}{rcrl} 2u &+ v &+ w &= 5\\ & \bbox[3pt, border: 3pt solid var(--emphColor)]{-8}v &- 2w &= -12\\ & & w &= 2 \end{array} \end{equation}
Again, why did we add the second row to the third?
We're using the pivot $\bbox[3pt, border: 3pt solid var(--emphColor)]{-8}$ to eliminate the entry below it. Adding $+1$ times the second row cancels the $8v$ in the third equation.

Success!

In two steps, we obtain an upper triangular system that we can easily solve using backward substitution.

\begin{equation} \begin{array}{rccl} \bbox[3pt, border: 3pt solid var(--emphColor)]{2}u &+ v &+ w &= 5\\ 4u &- 6v & &=-2\\ -2u &+ 7v &+ 2w &=9 \end{array} \color{var(--emphColor)}{\longrightarrow} \begin{array}{rccl} 2u &+ v &+ w &= 5\\ & \bbox[3pt, border: 3pt solid var(--emphColor)]{-8}v &- 2w &=-12\\ & 8v &+ 3w &= 14 \end{array} \color{var(--emphColor)}{\longrightarrow} \begin{array}{rcrl} 2u &+ v &+ w &= 5\\ & -8v &- 2w &= -12\\ & & \bbox[3pt, border: 3pt solid var(--emphColor)]{1}w &= 2 \end{array} \end{equation}
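The whole computation above can be checked in a few lines of Python: the two elimination steps on the augmented matrix $[A \mid b]$, then backward substitution on the resulting upper triangular system.

```python
# Reproducing the worked example numerically. The rows of M are the
# three equations of the system, with the right-hand side appended.

M = [[ 2.0,  1.0, 1.0,  5.0],
     [ 4.0, -6.0, 0.0, -2.0],
     [-2.0,  7.0, 2.0,  9.0]]

# Step 1: clear the first column using the pivot 2.
M[1] = [x1 + (-2) * x0 for x1, x0 in zip(M[1], M[0])]  # r_2 <- r_2 - 2 r_1
M[2] = [x2 + x0 for x2, x0 in zip(M[2], M[0])]         # r_3 <- r_3 + r_1

# Step 2: clear the second column using the pivot -8.
M[2] = [x2 + x1 for x2, x1 in zip(M[2], M[1])]         # r_3 <- r_3 + r_2

# Backward substitution on the upper triangular system.
w = M[2][3] / M[2][2]
v = (M[1][3] - M[1][2] * w) / M[1][1]
u = (M[0][3] - M[0][1] * v - M[0][2] * w) / M[0][0]
print(u, v, w)  # 1.0 1.0 2.0
```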

Procedure

In general, we may proceed by clearing out one column at a time, from left to right.

The example shows that we may use the pivot $a_{kk}$ to zero out the entries below it in the $k$th column.
At step $k$, we set the multiplier $\color{var(--emphColor)}{l_{ik}} = a_{ik} / a_{kk}$ and apply $$r_i \gets r_i - \color{var(--emphColor)}{l_{ik}} r_k$$ for $i = k+1, \dots, n$ to clear out the $k$th column below the pivot.

Pseudocode

							\begin{algorithm}
							\caption{Gaussian Elimination on $A x = b$}
							\begin{algorithmic}
							\FOR{$k \gets 1$ to $n-1$}
							\FOR{$i \gets k+1$ to $n$}
							 \STATE {$l_{ik} \gets a_{ik}/a_{kk}$}
							 \FOR{$j \gets k+1$ to $n$}
							 \STATE{$a_{ij} \gets a_{ij} - l_{ik}a_{kj}$}
							 \ENDFOR
							 \STATE{$b_i \gets b_i - l_{ik}b_k$}
							\ENDFOR
							\ENDFOR
							\end{algorithmic}
							\end{algorithm}
						
This procedure changes the matrix “in place,” overwriting $A$ at each step.
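One possible Python translation of this pseudocode (0-based indices, all pivots assumed nonzero) overwrites $A$ and $b$ in place, then recovers $x$ by backward substitution. The explicit zeroing of $a_{ik}$ is added for readability; the pseudocode simply leaves those entries untouched, since backward substitution never reads them.

```python
# In-place Gaussian elimination followed by backward substitution.
# A is a list of rows; b is the right-hand side. Assumes a[k][k] != 0.

def gaussian_elimination(A, b):
    n = len(A)
    for k in range(n - 1):            # column being cleared
        for i in range(k + 1, n):     # rows below the pivot
            l = A[i][k] / A[k][k]     # multiplier l_ik
            A[i][k] = 0.0             # entry below the pivot becomes zero
            for j in range(k + 1, n):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]

def back_substitution(A, b):
    n = len(A)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):    # solve from the last row upward
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
gaussian_elimination(A, b)
print(back_substitution(A, b))  # [1.0, 1.0, 2.0]
```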

Pseudocode

							\begin{algorithm}
							\caption{Gaussian Elimination on $A x = b$}
							\begin{algorithmic}
							\FOR{$\color{var(--emphColor)}{k \gets 1 \text{ to } n-1}$}
							\FOR{$i \gets k+1$ to $n$}
							 \STATE {$l_{ik} \gets a_{ik}/a_{kk}$}
							 \FOR{$j \gets k+1$ to $n$}
							 \STATE{$a_{ij} \gets a_{ij} - l_{ik}a_{kj}$}
							 \ENDFOR
							 \STATE{$b_i \gets b_i - l_{ik}b_k$}
							\ENDFOR
							\ENDFOR
							\end{algorithmic}
							\end{algorithm}
						
The loop over $\color{var(--emphColor)}{k}$ iterates over columns, zeroing out a column one step at a time.

Pseudocode

							\begin{algorithm}
							\caption{Gaussian Elimination on $A x = b$}
							\begin{algorithmic}
							\FOR{$k \gets 1 \text{ to } n-1$}
							\FOR{$\color{var(--emphColor)}{i \gets k+1 \text{ to } n}$}
							 \STATE {$l_{ik} \gets a_{ik}/a_{kk}$}
							 \FOR{$j \gets k+1$ to $n$}
							 \STATE{$a_{ij} \gets a_{ij} - l_{ik}a_{kj}$}
							 \ENDFOR
							 \STATE{$b_i \gets b_i - l_{ik}b_k$}
							\ENDFOR
							\ENDFOR
							\end{algorithmic}
							\end{algorithm}
						
The loop over $\color{var(--emphColor)}{i}$ updates each row below the pivot $a_{kk}$ with $r_i \gets r_i - l_{ik} r_k$.

Pseudocode

							\begin{algorithm}
							\caption{Gaussian Elimination on $A x = b$}
							\begin{algorithmic}
							\FOR{$k \gets 1 \text{ to } n-1$}
							\FOR{$i \gets k+1 \text{ to } n$}
							 \STATE {$l_{ik} \gets a_{ik}/a_{kk}$}
							 \FOR{$\color{var(--emphColor)}{j \gets k+1 \text{ to } n}$}
							 \STATE{$a_{ij} \gets a_{ij} - l_{ik}a_{kj}$}
							 \ENDFOR
							 \STATE{$b_i \gets b_i - l_{ik}b_k$}
							\ENDFOR
							\ENDFOR
							\end{algorithmic}
							\end{algorithm}
						
The loop over $\color{var(--emphColor)}{j}$ updates each entry in row $r_i$ one at a time.
Our procedure requires that all pivots encountered are nonzero. We will discuss this point when we talk about pivoting strategies and the LU factorization.
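A tiny example of why the nonzero-pivot requirement matters: the system below is perfectly solvable, but its $(1,1)$ pivot is zero, so the very first multiplier $l_{21} = a_{21}/a_{11}$ blows up. A row swap (itself an ERO!) is the simple rescue, previewing the pivoting strategies mentioned above.

```python
# A solvable system whose first pivot is zero:
#   0*u + 1*v = ...,  1*u + 0*v = ...
A = [[0.0, 1.0],
     [1.0, 0.0]]
try:
    l = A[1][0] / A[0][0]   # l_21 = 1/0: elimination breaks down
except ZeroDivisionError:
    A[0], A[1] = A[1], A[0]  # swap r_1 and r_2 to restore a nonzero pivot
print(A[0][0])  # 1.0
```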

Try your hand on the Gaussian elimination warm-up problems on HW2!

For even more examples, see Bronson and Costa's Linear Algebra. (Some exercises are answered in the back of the book!)

Complexity

What is the computational cost of running Gaussian elimination on an $n \times n$ matrix?
Roughly speaking, there are three nested loops, and the innermost loop performs a constant number of operations. Each loop runs roughly $n$ times, so the computational cost grows like $n^3$; a careful count gives about $\tfrac{2}{3} n^3$ flops in total.

How many flops exactly?

The following table counts the number of floating point operations required by our Gaussian elimination pseudocode.

\begin{array}{cccc} \text{outer } k \text{ loop} & \text{loop over } i & +/- & \times/ \div \\ \hline 1 & 2 : n & (n-1)n & (n-1)(n+1)\\ 2 & 3 : n & (n-2)(n-1) & (n-2)n\\ \vdots & \vdots & \vdots & \vdots\\ k & k+1: n & (n-k)(n+1-k) & (n-k)(n+2-k)\\ \vdots & \vdots & \vdots & \vdots\\ n-1 & n : n & 1\cdot 2 & 1\cdot 3 \end{array}
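These counts can be verified mechanically: walk the loops of the pseudocode, tally one operation per subtraction and one per multiplication or division, and compare each step $k$ against the closed forms $(n-k)(n+1-k)$ additions/subtractions and $(n-k)(n+2-k)$ multiplications/divisions.

```python
# Tally the flops performed at each outer step k of the pseudocode
# (no arithmetic on matrix entries, just counting the operations).
n = 6
for k in range(1, n):               # outer step k = 1, ..., n-1
    add_sub = mul_div = 0
    for i in range(k + 1, n + 1):   # rows i = k+1, ..., n
        mul_div += 1                # division: l_ik = a_ik / a_kk
        for j in range(k + 1, n + 1):
            mul_div += 1            # multiplication: l_ik * a_kj
            add_sub += 1            # subtraction: a_ij - l_ik a_kj
        mul_div += 1                # multiplication: l_ik * b_k
        add_sub += 1                # subtraction: b_i - l_ik b_k
    assert add_sub == (n - k) * (n + 1 - k)
    assert mul_div == (n - k) * (n + 2 - k)
print("flop-count formulas verified for n =", n)
```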

Congratulations! You reached the end of this lecture.