Compactly, the first-order approximation of $f: \mathbb{R}^n \to \mathbb{R}^n$ is given by $$f(a + h) \approx f(a) + J_f(a)\,h.$$
Our method is based on this approximation.
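The approximation suggests how to improve a guess: to drive $f$ toward zero, choose the step $h$ that zeros the linear model,
$$f(a) + J_f(a)\,h = 0 \quad\Longleftrightarrow\quad J_f(a)\,h = -f(a),$$
and take $a + h$ as the next estimate. Applying this repeatedly with $a = x^{(k)}$ yields Newton's method for systems.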
\begin{algorithm}
\caption{Newton's method for systems}
\begin{algorithmic}
\REQUIRE{Initial guess $x^{(0)}$}
\FOR{$k \gets 0$ to $\texttt{max\_iter} - 1$}
\STATE{solve $J_f(x^{(k)})h = -f(x^{(k)})$ for $h$}
\STATE{$x^{(k+1)} \gets x^{(k)} + h$}
\IF{stopping criterion is met}
\STATE{break}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
There are a few options for the stopping criterion. If $\epsilon_{\mathrm{tol}}$ denotes your specified tolerance, we could stop when the step is small, $\|h\| < \epsilon_{\mathrm{tol}}$, or when the residual is small, $\|f(x^{(k+1)})\| < \epsilon_{\mathrm{tol}}$.
When the Jacobian $J_f(x^{(k)})$ is invertible, the update is $h = -J_f(x^{(k)})^{-1} f(x^{(k)})$, so $$x^{(k+1)} = x^{(k)} - J_f(x^{(k)})^{-1} f(x^{(k)}).$$
% Setup (the values of N, tol, and the initial guess are illustrative choices)
N = 100;        % maximum number of iterations
tol = 1e-10;    % stopping tolerance
x = [1; 1; 1];  % initial guess x^(0)

f = @(x) [x(1)^3 - 2 * x(2) - 2; ...
          x(1)^3 - 5 * x(3)^2 + 7; ...
          x(2) * x(3)^2 - 1];
J = @(x) [3 * x(1)^2, -2, 0; ...
          3 * x(1)^2, 0, -10 * x(3); ...
          0, x(3)^2, 2 * x(3) * x(2)];
for k = 1:N
    f_val = f(x);
    J_val = J(x);
    h = -J_val \ f_val;   % solve J_f(x) h = -f(x)
    x = x + h;
    if norm(h) < tol
        break
    end
end
We use “lambda” or “anonymous” functions to implement our function and its Jacobian.
In MATLAB, we use the backslash operator to solve linear systems.
We use the MATLAB function \texttt{norm} to implement the stopping criterion $\|h\| < \epsilon_{\mathrm{tol}}$.
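As a cross-check of the MATLAB implementation above, here is a sketch of the same Newton iteration in Python with NumPy; the tolerance, iteration cap, and initial guess are illustrative choices, not prescribed by the algorithm.

```python
import numpy as np

# The same system f(x) = 0 and its Jacobian as in the MATLAB code.
def f(x):
    return np.array([x[0]**3 - 2*x[1] - 2,
                     x[0]**3 - 5*x[2]**2 + 7,
                     x[1]*x[2]**2 - 1])

def J(x):
    return np.array([[3*x[0]**2, -2.0,      0.0],
                     [3*x[0]**2,  0.0,     -10.0*x[2]],
                     [0.0,        x[2]**2,  2.0*x[2]*x[1]]])

x = np.array([1.0, 1.0, 1.0])         # initial guess (illustrative)
for k in range(100):                   # max_iter = 100 (illustrative)
    h = np.linalg.solve(J(x), -f(x))   # solve J_f(x) h = -f(x)
    x = x + h
    if np.linalg.norm(h) < 1e-10:      # stopping criterion ||h|| < tol
        break
```

One can verify by substitution that $(\sqrt[3]{3},\, \tfrac{1}{2},\, \sqrt{2})$ is a root of this system, and from this starting point the iteration converges to it.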