Newton Iterative Method#
The Newton iterative method is typically used to find a root of a given equation and to find a minimum of a convex, twice-differentiable function.

For finding a root of \(f(x) = 0\), the Newton method updates \(x\) using

\[
x \leftarrow x - \frac{f(x)}{f'(x)}.
\]

For minimizing an unconstrained convex function \(f(x)\), the Newton method updates \(x\) using

\[
x \leftarrow x - \nabla^2 f(x)^{-1} \nabla f(x).
\]
Root Finding#
The Newton method can be derived from Taylor's theorem, which approximates \(f\) around a point \(x\) with truncation errors of different orders.
Let \(x^*\) be a solution of the following root-finding problem:

\[
f(x) = 0. \tag{1}
\]
The first-order Taylor approximation of \(f(x^*)\) around \(x\) is:

\[
f(x^*) \approx f(x) + f'(x)(x^* - x). \tag{2}
\]
Since \(x^*\) is a solution to Eqn. (1), setting \(f(x^*) = 0\) in Eqn. (2) and solving for \(x^*\) gives:

\[
x^* \approx x - \frac{f(x)}{f'(x)}.
\]
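As a concrete example, take \(f(x) = x^2 - 2\), whose positive root is \(\sqrt{2} \approx 1.414214\). Starting from \(x_0 = 1.5\):

\[
x_1 = 1.5 - \frac{1.5^2 - 2}{2 \cdot 1.5} \approx 1.416667,
\qquad
x_2 = x_1 - \frac{x_1^2 - 2}{2 x_1} \approx 1.414216,
\]

so two iterations already agree with \(\sqrt{2}\) to four decimal places, illustrating the method's fast (quadratic) convergence near the root.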
Therefore, iterating this update rule finds the solution to within a given tolerance. Numerically, a step size factor \(t\) (typically \(t \in (0, 1]\)) is added for stability:

\[
x \leftarrow x - t \, \frac{f(x)}{f'(x)}.
\]
The stopping criterion can be:

\[
|f(x)| \le \epsilon \quad \text{or} \quad |x_{k+1} - x_k| \le \epsilon,
\]

for a small tolerance \(\epsilon > 0\).
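The following is a minimal Python sketch of the damped update; the test function, step size, and tolerance are illustrative choices, not prescribed by the method.

```python
def newton_root(f, f_prime, x0, t=1.0, tol=1e-10, max_iter=100):
    """Find a root of f(x) = 0 with the (damped) Newton update."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:           # stopping criterion |f(x)| <= eps
            return x
        x = x - t * fx / f_prime(x)  # x <- x - t * f(x) / f'(x)
    return x

# Example: the positive root of x^2 - 2 is sqrt(2).
root = newton_root(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5)
print(root)  # ~1.4142135623730951
```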
Unconstrained Minimization Problem#
Here we want to find the minimizer of a convex and twice-differentiable function \(g(x)\):

\[
\min_{x} \; g(x).
\]
Applying Newton's method to this optimization problem is the same as finding the root of \(\nabla g(x) = 0\), since the gradient vanishes at the minimizer of a convex, differentiable function.
Let \(f(x) = \nabla g(x)\); then \(f'(x) = \nabla^2 g(x)\), and the root-finding update becomes

\[
x \leftarrow x - \nabla^2 g(x)^{-1} \nabla g(x).
\]
Newton Step#
The Newton step is defined as:

\[
\Delta x_{\mathrm{nt}} = -\nabla^2 g(x)^{-1} \nabla g(x).
\]
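In practice the Hessian is not inverted explicitly; the step is computed by solving the linear system \(\nabla^2 g(x)\, \Delta x_{\mathrm{nt}} = -\nabla g(x)\). A small sketch with NumPy, where `grad` and `hess` stand for user-supplied callables:

```python
import numpy as np

def newton_step(grad, hess, x):
    """Solve hess(x) @ dx = -grad(x) rather than inverting the Hessian."""
    return np.linalg.solve(hess(x), -grad(x))
```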
Newton Decrement#
Define the Newton decrement as

\[
\lambda(x) = \left( \nabla g(x)^{T} \, \nabla^2 g(x)^{-1} \, \nabla g(x) \right)^{1/2}.
\]

Equivalently, \(\lambda(x)^2 = \Delta x_{\mathrm{nt}}^{T} \, \nabla^2 g(x) \, \Delta x_{\mathrm{nt}}\), so the decrement measures the length of the Newton step in the norm defined by the Hessian.
Stopping Criterion#
Using Newton's update rule at each step, we work with the second-order approximation \(\hat{g}\) of the function \(g\) at \(x\):

\[
\hat{g}(x + \Delta x) = g(x) + \nabla g(x)^{T} \Delta x + \frac{1}{2} \Delta x^{T} \nabla^2 g(x) \Delta x.
\]
Thus, substituting \(\Delta x = \Delta x_{\mathrm{nt}}\), the decrease in the function value between steps as measured by the quadratic model is:

\[
g(x) - \hat{g}(x + \Delta x_{\mathrm{nt}}) = \frac{1}{2} \lambda(x)^2,
\]

since \(\nabla g(x)^{T} \Delta x_{\mathrm{nt}} = -\lambda(x)^2\) and \(\Delta x_{\mathrm{nt}}^{T} \nabla^2 g(x) \Delta x_{\mathrm{nt}} = \lambda(x)^2\).
Therefore, monitoring the Newton decrement gives a natural stopping criterion: the iterations can stop once \(\lambda(x)^2 / 2 \le \epsilon\), i.e., when the predicted decrease falls below the tolerance.
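Putting the pieces together, here is a minimal sketch of Newton's method for unconstrained minimization that stops on the decrement; the quadratic test problem and the fixed step size \(t = 1\) are illustrative assumptions.

```python
import numpy as np

def newton_minimize(grad, hess, x0, t=1.0, tol=1e-10, max_iter=100):
    """Minimize a convex, twice-differentiable function via Newton's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        dx = np.linalg.solve(hess(x), -g)  # Newton step
        lam_sq = -g @ dx                   # lambda(x)^2 = -grad^T dx
        if lam_sq / 2 <= tol:              # stop when predicted decrease is small
            break
        x = x + t * dx
    return x

# Example: minimize g(x) = x^T A x / 2 - b^T x; the minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton_minimize(lambda x: A @ x - b, lambda x: A, x0=np.zeros(2))
print(x_star)  # matches np.linalg.solve(A, b)
```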
Equality Constrained Minimization Problem#
Newton's method can also be extended to solve the following equality constrained problem:

\[
\min_{x} \; g(x) \quad \text{subject to} \quad Ax = b.
\]
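As a sketch of the extension (assuming, as is standard, linear constraints \(Ax = b\) and a feasible starting point), the Newton step \(\Delta x_{\mathrm{nt}}\) is found together with a multiplier \(w\) by solving a KKT system; the helper below is a hypothetical illustration, not a fixed API.

```python
import numpy as np

def eq_newton_step(grad, hess, A, x):
    """Newton step for min g(x) s.t. Ax = b, at a feasible point x.

    Solves the KKT system:
        [ H  A^T ] [ dx ]   [ -grad ]
        [ A   0  ] [ w  ] = [   0   ]
    """
    n, p = len(x), A.shape[0]
    K = np.block([[hess(x), A.T], [A, np.zeros((p, p))]])
    rhs = np.concatenate([-grad(x), np.zeros(p)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # dx; sol[n:] holds the multiplier w

# Example: minimize ||x||^2 / 2 subject to x1 + x2 = 1, from feasible x = (1, 0).
A = np.array([[1.0, 1.0]])
x = np.array([1.0, 0.0])
dx = eq_newton_step(lambda x: x, lambda x: np.eye(2), A, x)
print(x + dx)  # -> [0.5, 0.5], the constrained minimizer (one exact step)
```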