Note that only square matrices can be SDD.
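Recall that SDD means each diagonal entry strictly dominates, in absolute value, the sum of the other entries in its row. As a quick sanity check, here is a minimal NumPy sketch of that row test; the helper name `is_sdd` and the example matrix are our own choices, not anything from the text.

```python
import numpy as np

def is_sdd(A):
    """Strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij| in every row."""
    A = np.asarray(A, dtype=float)
    assert A.shape[0] == A.shape[1], "only square matrices can be SDD"
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

A = np.array([[ 3.0, 1.0, -1.0],
              [ 1.0, 4.0,  2.0],
              [-1.0, 2.0,  5.0]])
print(is_sdd(A))  # True: 3 > 2, 4 > 3, 5 > 3
```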

Why care about SDD matrices?

If $A$ is SDD, then
  1. $A$ is invertible, so the system $Ax = b$ has a unique solution for every vector $b$.
  2. Gaussian elimination can run to completion on $A$, and no row swaps are needed! (Equivalently, every pivot encountered is nonzero.)
  3. Gaussian elimination is stable.
We'll prove only the first statement. For more details, see Bradie.
If $A$ is SDD, then $A$ is invertible.
We proceed by contradiction.
Suppose that $A$ is not invertible. Then the system $Ax = 0$ has a nontrivial solution: there is a nonzero vector $x$ such that $Ax = 0$.
Since $x \neq 0$, its largest entry in absolute value, say $x_k$, is nonzero.
The $k$th row of $Ax = 0$ reads $$\sum^n_{j=1} a_{kj} x_j = 0.$$
Rearranging to isolate the diagonal term, dividing both sides by $x_k \neq 0$, and taking absolute values gives
$$|a_{kk}| = \bigg|\sum_{j \neq k} a_{kj} \frac{x_j}{x_k} \bigg| \leq \sum_{j \neq k} |a_{kj}| \bigg|\frac{x_j}{x_k}\bigg| \leq \sum_{j \neq k} |a_{kj}|,$$
where the last inequality uses $|x_j| \leq |x_k|$ for every $j$. This contradicts the hypothesis that $A$ is SDD, which demands $|a_{kk}| > \sum_{j \neq k} |a_{kj}|$.

OK, so what?

The theorem has important consequences for theory and practice.

  • (Theory) If the coefficient matrix of our system is SDD, the system has a unique solution, and Gaussian elimination is guaranteed to run to completion.
  • (Practice) When the coefficient matrix is SDD, we can safely skip (potentially) costly pivoting strategies; see the sketch below.
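To see this concretely, here is a small sketch of elimination with no row swaps on the SDD matrix from the earlier check; `lu_no_pivoting` is an illustrative helper of ours, not a library routine.

```python
import numpy as np

def lu_no_pivoting(A):
    """Illustrative LU factorization with no row swaps (relies on nonzero pivots)."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        # For an SDD matrix, the theorem guarantees the pivot U[k, k] is nonzero.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[3.0, 1.0, -1.0], [1.0, 4.0, 2.0], [-1.0, 2.0, 5.0]])
L, U = lu_no_pivoting(A)
print(np.allclose(L @ U, A))  # True: elimination ran to completion
print(np.diag(U))             # every pivot is nonzero
```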

Another special matrix

An $n \times n$ matrix $A$ is symmetric if $A^T=A$.
It is symmetric positive definite (SPD) if $$x^T A x > 0$$ whenever $x \neq 0$.
Although SDD and SPD are close in Levenshtein distance, they refer to quite different notions and you should be careful to avoid confusing them.
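In practice, a standard way to test whether a symmetric matrix is positive definite is to attempt a Cholesky factorization, which succeeds exactly when the matrix is SPD. A minimal sketch, with our own helper name `is_spd`:

```python
import numpy as np

def is_spd(A):
    """Test SPD: symmetry plus a successful Cholesky factorization."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    try:
        np.linalg.cholesky(A)  # raises LinAlgError unless A is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[3.0, 1.0, -1.0], [1.0, 4.0, 2.0], [-1.0, 2.0, 5.0]])
print(is_spd(A))  # True: this matrix is both SDD and SPD
```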

Quadratic forms

If $A$ is an $n \times n$ matrix and $x$ is an $n$-vector, what is the scalar $x^T A x$?
Take a second to compute this product. $$x^T A x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \sum^n_{i=1} \sum^n_{j=1} a_{ij} x_i x_j.$$
Let $A = \begin{bmatrix} 3 & 1 & -1\\ 1 & 4 & 2\\ -1 & 2 & 5 \end{bmatrix}$, and note its diagonal entries $3$, $4$, $5$.
Notice that $A^T = A$, so $A$ is symmetric.
Furthermore, since $A$ is symmetric, \begin{align} x^T A x &= \sum^n_{i=1} a_{ii} x^2_i +2\sum_{i < j} a_{ij} x_i x_j\\ &= 3x^2_1 + 4 x^2_2 + 5 x^2_3 + 2(x_1 x_2 - x_1 x_3 +2 x_2 x_3)\\ &= x^2_1 + x_2^2 + 2x^2_3 + (x_1+x_2)^2 + (x_1-x_3)^2 + 2(x_2 + x_3)^2\\ &> 0 \end{align} for every $x \neq 0$, since the squares $x_1^2$, $x_2^2$, $2x_3^2$ vanish simultaneously only when $x = 0$. Hence $A$ is SPD.
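As a numerical sanity check on that algebra, the sketch below evaluates $x^T A x$ directly and via the sum-of-squares form for a few random vectors; the two agree and stay positive. The random generator and seed are our own setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0, -1.0], [1.0, 4.0, 2.0], [-1.0, 2.0, 5.0]])

for _ in range(5):
    x = rng.standard_normal(3)
    quad = x @ A @ x  # x^T A x evaluated directly
    # Sum-of-squares form derived above; should agree up to rounding.
    sos = (x[0]**2 + x[1]**2 + 2*x[2]**2
           + (x[0] + x[1])**2 + (x[0] - x[2])**2 + 2*(x[1] + x[2])**2)
    print(f"{quad:.10f}  {sos:.10f}")  # equal, and positive
```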