GUIDING QUESTION:
How can I compute a solution to a linear system?
We'll develop a different class of solvers for linear systems, based on fixed point iteration.
When $A$ is sparse, Gaussian elimination might introduce significant fill-in: nonzero entries in $L$ and $U$ where $A$ has zeros, destroying the sparsity of the problem.
Consider a linear system $Ax = b$ with a square coefficient matrix.
Guiding philosophy: Instead of solving all the equations once simultaneously, let's solve the $i$th equation for the $i$th variable over and over.
When $A$ is $n\times n$, the $\color{var(--emphColor)}{i}$th component of the system $Ax = b$ reads $$a_{\color{var(--emphColor)}{i}1} x_1 + \cdots + a_{\color{var(--emphColor)}{ii}} x_{\color{var(--emphColor)}{i}} + \cdots + a_{\color{var(--emphColor)}{i}n} x_n = b_{\color{var(--emphColor)}{i}}.$$
Given a starting vector $x^{(0)}$, we repeatedly sweep through the equations, solving the $i$th equation for the $i$th variable while holding the other variables at their values from the previous iterate; this is the Jacobi iteration.
The intuition is so simple, it is perhaps surprising that the Jacobi iteration works well in practice.
Inspired by our earlier discussion of fixed point iteration schemes, we approximate the fixed point satisfying $\color{var(--emphColor)}{x} = T\color{var(--emphColor)}{x} + c$ using the iterates defined by $$x^{(\color{var(--emphColor)}{k+1})} = T x^{(\color{var(--emphColor)}{k})} + c.$$
Suppose $x^*$ solves the fixed point problem and denote the error in the $k$th approximation by $e^{(k)} = x^{(k)} - x^*$.
The following remarks are highly technical and slightly beyond the scope of this course.
Since the spectral norm is submultiplicative and $e^{(k+1)} = T^{k+1}e^{(0)}$, the norm of the error at the $(\color{var(--emphColor)}{k+1})$th step is bounded by $$||e^{(\color{var(--emphColor)}{k+1})}||_2 = ||T^{k+1}e^{(0)}||_2 \leq ||T||_2^{\color{var(--emphColor)}{k+1}}||e^{(0)}||_2.$$
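As a sanity check, we can watch this bound hold numerically for a small contraction. The matrix $T$ below is an arbitrary choice with $||T||_2 < 1$, not one arising from a particular splitting:

```python
import numpy as np

# An arbitrary 2x2 iteration matrix with spectral norm < 1 (a contraction).
T = np.array([[0.5, 0.1],
              [0.2, 0.4]])
c = np.array([1.0, 2.0])

# The fixed point x* satisfies x* = T x* + c, i.e. (I - T) x* = c.
x_star = np.linalg.solve(np.eye(2) - T, c)

norm_T = np.linalg.norm(T, 2)        # spectral norm ||T||_2
x = np.zeros(2)                      # starting vector x^(0)
e0 = np.linalg.norm(x - x_star)      # ||e^(0)||_2

for k in range(1, 21):
    x = T @ x + c
    err = np.linalg.norm(x - x_star)
    # The error at step k is bounded by ||T||_2^k * ||e^(0)||_2.
    assert err <= norm_T**k * e0 + 1e-12
```

Because $||T||_2 < 1$, the bound forces the error to zero geometrically, which is exactly why we care about the norm of the iteration matrix.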
For any square $A$, write $A = D \color{var(--emphColor)}{+} L \color{var(--emphColor)}{+} U$, with $D$ the diagonal part, $L$ the strictly lower triangular part, and $U$ the strictly upper triangular part.
Suppose $A$ has nonzero diagonal entries and set $M=D$ and $N=-(L+U)$.
$x^{(\color{var(--emphColor)}{k+1})} = - (D^{-1}(L+U)) x^{(\color{var(--emphColor)}{k})} + D ^{-1} b$
So the Jacobi update, which we obtained by solving the $i$th equation for the $i$th variable, approximates a solution to $Ax = b$ by translating the root finding problem into a fixed point problem using the splitting $(M, N) = (D, -(L+U))$ of $A$.
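We can check this splitting numerically. The sketch below, using an arbitrary strictly diagonally dominant test matrix, verifies that $A = D + L + U$ and that the exact solution of $Ax = b$ is a fixed point of $x \mapsto Tx + c$ with $T = -D^{-1}(L+U)$ and $c = D^{-1}b$:

```python
import numpy as np

# Arbitrary strictly diagonally dominant test matrix and right-hand side.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part
assert np.allclose(A, D + L + U)

# Jacobi iteration matrix and constant vector.
T = -np.linalg.solve(D, L + U)   # T = -D^{-1}(L + U)
c = np.linalg.solve(D, b)        # c = D^{-1} b

# The solution of A x = b is a fixed point of x -> T x + c.
x_exact = np.linalg.solve(A, b)
assert np.allclose(T @ x_exact + c, x_exact)
```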
Looping through the components of $x^{(\color{var(--emphColor)}{k+1})}$ in order, the Jacobi update reads $$x_i^{(\color{var(--emphColor)}{k+1})} = \frac{1}{a_{ii}} \Big(b_i - \sum _{j\not= i} a_{ij} x^{(\color{var(--emphColor)}{k})}_j\Big).$$
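The componentwise update translates directly into code. Here is a minimal sketch (the function name, stopping rule, and tolerance are our own choices, not fixed by the text):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: each sweep solves equation i for x_i,
    using only components from the previous iterate."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example on a strictly diagonally dominant system.
A = np.array([[10.0, 1.0], [2.0, 8.0]])
b = np.array([11.0, 10.0])
x = jacobi(A, b)
```

Note that each sweep reads only the old iterate `x`, so all $n$ components can be updated independently (and in parallel).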
We hope that $x_i^{(\color{var(--emphColor)}{k+1})}$ is a better approximation than $x_i^{(\color{var(--emphColor)}{k})}$, so we might be better off using the most up-to-date information: by the time we update the $i$th component, the components $x_1^{(\color{var(--emphColor)}{k+1})}, \dots, x_{i-1}^{(\color{var(--emphColor)}{k+1})}$ are already available. Using them yields the Gauss-Seidel update $$x_i^{(\color{var(--emphColor)}{k+1})} = \frac{1}{a_{ii}} \Big(b_i - \sum _{j = \color{var(--emphColor)}{1}}^{\color{var(--emphColor)}{i-1}} a_{ij} x^{(\color{var(--emphColor)}{k+1})}_{\color{var(--emphColor)}{j}} - \sum _{j= i + 1}^n a_{ij} x^{(\color{var(--emphColor)}{k})}_j\Big).$$
The avid reader will verify that the Gauss-Seidel iteration is based on the splitting $M = L + D$ and $N = - U$.
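In code, the Gauss-Seidel sweep differs from a Jacobi sweep only in that it overwrites `x` in place, so equation $i$ already sees the freshly updated components. A sketch (function name and stopping rule are our own choices):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: updates x in place, so equation i
    uses the new values x_1^{(k+1)}, ..., x_{i-1}^{(k+1)}."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # For j < i this reads already-updated components of x.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Example on a strictly diagonally dominant system.
A = np.array([[10.0, 1.0], [2.0, 8.0]])
b = np.array([11.0, 10.0])
x = gauss_seidel(A, b)
```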
It's hard to say exactly when these methods converge, but we can make some guarantees: for example, if $A$ is strictly diagonally dominant, the Jacobi iteration converges for any starting vector $x^{(0)}$.
We can make a similar guarantee for the Gauss-Seidel iteration: it, too, converges for any starting vector whenever $A$ is strictly diagonally dominant.
Typically, Gauss-Seidel will be faster in practice (measured by the number of iterations required for convergence, not accounting for gains from parallelizing the Jacobi iteration).
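We can see this with a small experiment, counting the sweeps each method needs on an arbitrary strictly diagonally dominant system. The single helper below runs either method, switching only the source of the off-diagonal values:

```python
import numpy as np

def count_iters(A, b, use_latest, tol=1e-10, max_iter=1000):
    """Return the number of sweeps until the iterate stops moving.
    use_latest=True gives Gauss-Seidel; False gives Jacobi."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_prev = x.copy()
        src = x if use_latest else x_prev  # where off-diagonal values come from
        for i in range(n):
            s = sum(A[i, j] * src[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_prev) < tol:
            return k
    return max_iter

# Strictly diagonally dominant test system (arbitrary choice).
A = np.array([[5.0, 1.0, 1.0],
              [1.0, 6.0, 2.0],
              [1.0, 1.0, 4.0]])
b = np.array([7.0, 9.0, 6.0])

jacobi_iters = count_iters(A, b, use_latest=False)
gs_iters = count_iters(A, b, use_latest=True)
```

On systems like this one, Gauss-Seidel typically needs noticeably fewer sweeps than Jacobi to reach the same tolerance.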