Procedure
- Begin with an interval $[a,b]$ such that $f(a) f(b)< 0$, where $f$ is continuous. (By the Intermediate Value Theorem, $f$ has a root in $[a, b]$.)
- Compute the midpoint $p=(a+b)/2$.
- CHOOSE: left or right?
- If $f(a)f(p)< 0$, then $f$ changes sign in $[a,p]$ and the left half contains a root by the IVT.
- Otherwise $f(a)f(p) \geq 0$; since $f(a)f(b) < 0$, this forces $f(p)f(b) \leq 0$, so $f$ has a root in $[p, b]$
(if $f(p) = 0$, the midpoint $p$ is itself a root).
- Take $[a, p]$ or $[p, b]$ as the new interval accordingly, and repeat from the midpoint computation.
At each iteration, use $p$ as your root approximation.
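The steps above can be sketched as a short function (a minimal sketch; the function name, the tolerance `tol`, and the iteration cap are illustrative choices, not part of the procedure itself):

```python
def bisect(f, a, b, tol=1e-12, max_iter=100):
    """Bisection: assumes f is continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = (a + b) / 2          # midpoint: current root approximation
        if f(a) * f(p) < 0:      # sign change on the left half
            b = p                # keep [a, p]
        else:                    # otherwise the root lies in [p, b]
            a = p                # keep [p, b]
        if (b - a) / 2 < tol:    # interval small enough: stop
            break
    return (a + b) / 2
```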
Square root via bisection.
$f(x)=x^2-2$; start with $[0,2]$, since $f(0)=-2<0<2=f(2)$, so the root $\sqrt{2}$ lies in $[0,2]$.
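Tracing this example directly (a small illustrative script, not library code) shows the iterates closing in on $\sqrt{2}$:

```python
# Trace bisection for f(x) = x^2 - 2 on [0, 2].
f = lambda x: x * x - 2
a, b = 0.0, 2.0                      # f(0) = -2 < 0 < 2 = f(2)
iterates = []
for n in range(1, 31):
    p = (a + b) / 2                  # n-th approximation p_n
    iterates.append(p)
    if f(a) * f(p) < 0:
        b = p                        # sign change on the left: keep [a, p]
    else:
        a = p                        # otherwise keep [p, b]
print(iterates[:5])   # [1.0, 1.5, 1.25, 1.375, 1.4375]
print(iterates[-1])   # ≈ 1.41421356..., error at most 2/2^30
```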
Stopping criteria
We need to figure out when to stop. Are we guaranteed convergence? If so, in which cases?
Let $f$ be a continuous function on $[a,b]$ and suppose that $f(a)f(b)< 0$. The bisection method generates a sequence of iterates $p_n$ which converges to a root $p \in(a,b)$ with the property that
\begin{equation}
|p_n-p| \leq \frac{b-a}{2^n}.
\end{equation}
Since $\frac{b-a}{2^n} \to 0$ as $n \to \infty$, it is
enough to establish the error bound. By construction, we are
guaranteed an interval $(a_n, b_n)$ containing the root $p$ at
each iteration. The distance from any point in an interval to the
interval's midpoint cannot exceed half the interval's length.
Therefore, since $p_n$ is the midpoint of $(a_n, b_n)$ and $p$ is
a point in $(a_n, b_n)$,
\begin{equation}
|p_n - p |\leq \frac {b_n-a_n}{2}.
\end{equation}
The interval containing $p$ is halved at each iteration, so that
$b_n-a_n = \frac{1}{2}(b_{n-1}-a_{n-1})$. Thus we obtain
\begin{align}
b_n-a_n & =\frac{1}{2}(b_{n-1}-a_{n-1}) \\
& =\frac{1}{2}\cdot\frac{1}{2}(b_{n-2}-a_{n-2})\\
& \quad \vdots\\
& = \frac{1}{2^{n-1}} (b_1 - a_1).
\end{align}
Since $(a_1, b_1) = (a, b)$, this gives $b_n - a_n = \frac{b-a}{2^{n-1}}$, and therefore
\begin{equation}
|p_n - p| \leq \frac{b_n - a_n}{2} = \frac{b-a}{2^n},
\end{equation}
as claimed.
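As a quick numerical sanity check on the error bound (a sketch; the true root $\sqrt{2}$ is known here, so we can compare against it directly):

```python
import math

# Check |p_n - p| <= (b - a)/2^n for f(x) = x^2 - 2 on [0, 2].
f = lambda x: x * x - 2
root = math.sqrt(2)                       # known root, for comparison only
a, b = 0.0, 2.0
bound_ok = True
for n in range(1, 31):
    p = (a + b) / 2                       # p_n, midpoint of (a_n, b_n)
    bound_ok &= abs(p - root) <= 2.0 / 2**n   # (b - a)/2^n with b - a = 2
    if f(a) * f(p) < 0:
        b = p
    else:
        a = p
print(bound_ok)  # True: the bound holds at every iteration
```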
Great theorem!
This is a great theorem. It guarantees convergence in certain cases,
it provides a bound on the absolute error $|p_n - p|$,
and it tells us the number of steps required to achieve a desired accuracy: the error is below $\epsilon$ once $n \geq \log_2 \frac{b-a}{\epsilon}$.
It also suggests a stopping criterion. If we iterate until $$\color{var(--emphColor)}{{b_n - a_n \over 2} < \epsilon},$$
the theorem guarantees the absolute error in our approximation $p_n = (a_n + b_n)/2$
is at most $\epsilon$.
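This stopping criterion can be sketched as follows (the name `bisect_to_tol` and the parameter `eps` are illustrative, not from the text):

```python
import math

def bisect_to_tol(f, a, b, eps):
    """Iterate until (b_n - a_n)/2 < eps; the theorem then gives error <= eps."""
    n = 0
    while (b - a) / 2 >= eps:         # stop once half the interval is below eps
        p = (a + b) / 2
        if f(a) * f(p) < 0:
            b = p
        else:
            a = p
        n += 1
    return (a + b) / 2, n             # final midpoint has error at most eps

p, n = bisect_to_tol(lambda x: x * x - 2, 0.0, 2.0, 1e-6)
print(abs(p - math.sqrt(2)) <= 1e-6)  # True, as the theorem guarantees
# n == 20 here, consistent with log2((b - a)/eps) ≈ 20.9 halvings
```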