Gauss transform

So each elementary row operation $r_i \gets r_i + c r_j$ (with $i > j$) can be implemented by multiplying on the left by the lower triangular matrix with ones along the diagonal, $l_{ij} = c$, and zeroes everywhere else.
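As a minimal numerical sketch (assuming numpy; the $2 \times 2$ matrix and the value $c = 5$ are made up for illustration), left-multiplying by such a matrix performs the row operation:

```python
import numpy as np

# Hypothetical example: implement r_2 <- r_2 + c r_1 as a matrix product.
A = np.array([[1., 2.],
              [3., 4.]])
c = 5.0

# Lower triangular, ones on the diagonal, c in entry (2, 1) (0-based: (1, 0)).
E = np.eye(2)
E[1, 0] = c

# Perform the row operation directly ...
rowop = A.copy()
rowop[1] += c * rowop[0]

# ... and check it agrees with left-multiplication by E.
assert np.allclose(E @ A, rowop)
```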
\begin{algorithm}
\caption{$k$th step of Gaussian Elimination}
\begin{algorithmic}
\FOR{$i \gets k+1$ to $n$}
  \STATE {$l_{ik} \gets a_{ik} / a_{kk}$}
  \STATE {$r_i \gets r_i - l_{ik} r_k$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
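As a sketch (assuming numpy and 0-based indexing, with no pivoting), the loop above can be written:

```python
import numpy as np

def eliminate_step(A, k):
    """Run the k-th elimination step on A in place (0-based k).
    Assumes the pivot A[k, k] is nonzero."""
    n = A.shape[0]
    for i in range(k + 1, n):
        l_ik = A[i, k] / A[k, k]   # the multiplier l_{ik}
        A[i] -= l_ik * A[k]        # r_i <- r_i - l_{ik} r_k
    return A

# Hypothetical 3x3 example: the first step zeroes column 1 below the pivot.
A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
eliminate_step(A, 0)
```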
								
How can we implement the $\color{var(--emphColor)}{k}$th step of Gaussian Elimination, using a single matrix multiplication?
Use the lower triangular matrix $G_{\color{var(--emphColor)}{k}}$ with ones along the diagonal, whose only other nonzero entries sit in column $\color{var(--emphColor)}{k}$ below the diagonal, $$(G_\color{var(--emphColor)}{k})_{i\color{var(--emphColor)}{k}} = -{a_{i\color{var(--emphColor)}{k}} \over a_{\color{var(--emphColor)}{kk}}} \quad \text{for } i \geq \color{var(--emphColor)}{k}+1,$$ and zeroes everywhere else.

This kind of matrix deserves a special name.

For any $1 \leq \color{var(--emphColor)}{j} \leq n$ and any vector $g$ of length $n$ with $g_i = 0$ whenever $i \leq \color{var(--emphColor)}{j}$, the matrix $$G_{\color{var(--emphColor)}{j}} = I_n + g e_{\color{var(--emphColor)}{j}}^T$$ is a Gauss transform.
The superscript $T$ denotes the transpose and $e_j$ denotes the $j$th standard basis vector, with $(e_j)_i = \delta_{ij}$.

If $n = 4$ and $j = 2$, $$G_2 = \begin{bmatrix} 1 & & & \\ & 1 & & \\ & g_3 & 1 & \\ & g_4 & & 1 \end{bmatrix}.$$
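As a quick sketch (assuming numpy; the values $g_3 = 5$ and $g_4 = -3$ are made up for illustration), the outer-product form $I_n + g e_j^T$ builds exactly this matrix:

```python
import numpy as np

# Build the n = 4, j = 2 Gauss transform above as I + g e_2^T.
n, j = 4, 2
g = np.array([0., 0., 5., -3.])   # g_i = 0 whenever i <= j (1-based)
e_j = np.zeros(n)
e_j[j - 1] = 1.0                  # 2nd standard basis vector

G = np.eye(n) + np.outer(g, e_j)
# G has ones on the diagonal and g_3, g_4 in column 2 below it.
```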

Multiplying by the Gauss transform $G_1 = I + g_1 e_1^T = \begin{bmatrix} 1 & &\\ -\frac{4}{2} & 1&\\ \frac{2}{2} &&1 \end{bmatrix} $ eliminates the first column of $A= \begin{bmatrix} 2 & 1 & 1\\ 4 & -6 & 0\\ -2 & 7 & 2 \end{bmatrix}$:
\begin{equation} G_1 A= \begin{bmatrix} 2 & 1 & 1\\ & -8 & -2\\ & 8 & 3 \end{bmatrix}. \end{equation}

What do Gauss transforms have to do with LU?

The $k$th step of the Gaussian Elimination routine can be accomplished by multiplying $A$ by a Gauss transform $G_k$.
So we can view the Gaussian Elimination routine as computing $$G_{n-1} \cdots G_2 G_1 A = U,$$ where $U$ is the resulting upper triangular matrix.
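A minimal numerical sketch of this viewpoint (assuming numpy, using the $3 \times 3$ example from before): apply $G_1, G_2, \ldots$ in turn and check that the result is upper triangular.

```python
import numpy as np

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
n = len(A)

U = A.copy()
for k in range(n - 1):                       # 0-based step index
    G = np.eye(n)
    G[k + 1:, k] = -U[k + 1:, k] / U[k, k]   # column k of G_k: -a_{ik}/a_{kk}
    U = G @ U                                # U <- G_k (G_{k-1} ... G_1 A)

# U is now upper triangular.
assert np.allclose(np.tril(U, -1), 0.0)
```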

Now if $G_{n-1} \cdots G_2 G_1 A = U,$ and if each $G_k$ is invertible, then $$A = (G_1^{-1}G_2^{-1}\cdots G_{n-1}^{-1})U.$$

In the warm-up to HW2 you'll show that the product of lower triangular matrices is lower triangular, and that the inverse of an invertible lower triangular matrix is also lower triangular, so $L = G_1^{-1}G_2^{-1}\cdots G_{n-1}^{-1}$ is lower triangular.

Getting closer...

So the lower triangular factor $L$ in an LU decomposition is given by $$L = G_1^{-1}G_2^{-1}\cdots G_{n-1}^{-1},$$ assuming each inverse exists.

In HW2 you'll show that each $G_k = I - g_k e_k^T$, where $g_k$ now collects the multipliers $(g_k)_i = l_{ik}$ for $i \geq k+1$, is invertible, with inverse $$G_{k}^{-1} = I + g_k e_k^T.$$
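The key observation is that the cross term vanishes, since $e_k^T g_k = (g_k)_k = 0$: $$(I - g_k e_k^T)(I + g_k e_k^T) = I + g_k e_k^T - g_k e_k^T - g_k (e_k^T g_k) e_k^T = I.$$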

Almost there...

In the warm-up to HW2 you'll show that if each $G_k = I - g_k e_k^T$, then the product $$L = G_1^{-1}G_2^{-1}\cdots G_{n-1}^{-1} = I + \sum_{k=1}^{n-1} g_k e_k^T,$$
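The cross terms in this product all vanish, because for $j < k$ we have $e_j^T g_k = (g_k)_j = 0$: $$(I + g_j e_j^T)(I + g_k e_k^T) = I + g_j e_j^T + g_k e_k^T + g_j (e_j^T g_k) e_k^T = I + g_j e_j^T + g_k e_k^T,$$ and the sum formula follows by induction.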

We made it!

proving that $$\small L= G_1^{-1} G_2^{-1}\cdots G_{n-1}^{-1}= \begin{bmatrix} 1\\ \frac{a_{21}}{a_{11}} & 1\\ \frac{a_{31}}{a_{11}}&\frac{a_{32}}{a_{22}}&1\\ \vdots &\vdots &\vdots & \ddots\\ \frac{a_{n1}}{a_{11}}&\frac{a_{n2}}{a_{22}}&\frac{a_{n3}}{a_{33}}&\cdots &1 \end{bmatrix}.$$
These are precisely the entries $l_{ik}$ computed at each step of our Gaussian Elimination routine!

Conclusion

So to compute an LU decomposition we need only run Gaussian Elimination, keeping tabs on the multipliers $l_{ik}$ used at each step.
In fact, we don't even need a separate array for $L$.
Since the entries below the diagonal of $A$ are zeroed out anyway, we may update $a_{ik} \gets a_{ik} / a_{kk}$ at the $k$th step, thereby overwriting the lower triangular portion of $A$ with $L$.
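A minimal sketch of this in-place routine (assuming numpy, no pivoting, and nonzero pivots), checked on the running example by reassembling $L$ and $U$ and verifying $A = LU$:

```python
import numpy as np

def lu_inplace(A):
    """Overwrite A with its LU factors: U on and above the diagonal,
    the multipliers l_{ik} (the strict lower part of L) below it.
    Sketch without pivoting; assumes all pivots a_{kk} are nonzero."""
    n = A.shape[0]
    for k in range(n - 1):
        A[k + 1:, k] /= A[k, k]                  # store l_{ik} in place
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
F = lu_inplace(A.copy())

# Unpack the two factors from the single overwritten array.
L = np.tril(F, -1) + np.eye(3)
U = np.triu(F)
assert np.allclose(L @ U, A)
```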
Congratulations! You reached the end of this lecture.