2.4 Newton's Method

We’d like to convert a root-finding problem into a fixed-point problem whose iteration converges, and does so quickly.

In the previous section, we saw that the condition $f'(\hat{x}) = 0$ yields quadratic convergence.

Idea: If $\hat{x}$ is a root of $g(x)$ and $g'(\hat{x}) \neq 0$, then $\hat{x}$ is a fixed point of

$$f(x) = x - \frac{g(x)}{g'(x)}.$$

Furthermore, $f'(\hat{x}) = 0$.
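As a quick numerical sketch of the quadratic convergence this buys us, we can iterate $f$ for the hypothetical example $g(x) = x^2 - 2$ (not from the notes), whose root is $\sqrt{2}$:

```python
import math

# Hypothetical example: g(x) = x^2 - 2, so f(x) = x - g(x)/g'(x).
def f(x):
    return x - (x * x - 2) / (2 * x)

x = 1.5
errors = []
for _ in range(4):
    x = f(x)
    errors.append(abs(x - math.sqrt(2)))

# Each error is roughly the square of the previous one:
# the number of correct digits approximately doubles per step.
print(errors)
```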

Geometry of Newton's Method


Tangent: $y = g(x_{i-1}) + g'(x_{i-1})(x - x_{i-1})$
$x_i$ is computed as the intersection of the tangent line at $(x_{i-1}, g(x_{i-1}))$ with the $x$-axis.

The “rise” ($0 - g(x_{i-1})$, from the point $(x_{i-1}, g(x_{i-1}))$ down to the $x$-axis) over the “run” ($x_i - x_{i-1}$) must equal the slope $g'(x_{i-1})$. In other words,

$$\frac{-g(x_{i-1})}{x_i - x_{i-1}} = g'(x_{i-1}) \iff -\frac{g(x_{i-1})}{g'(x_{i-1})} = x_i - x_{i-1} \iff x_i = x_{i-1} - \frac{g(x_{i-1})}{g'(x_{i-1})}.$$

Deriving Newton's Method

Input: A function $g$ with root $\hat{x}$, where $g$ is differentiable near $\hat{x}$; an initial point $x_0$ in a neighbourhood $(\hat{x} - \delta, \hat{x} + \delta)$ on which $|f'| < 1$, where $f(x) = x - \frac{g(x)}{g'(x)}$. Also a desired tolerance $\varepsilon$ and a maximum number of iterations $N$.

Output: An approximate $x$ close to the root $\hat{x}$ of $g$, or failure.

Outline:
• Given $x_0$, evaluate $x = x_0 - \frac{g(x_0)}{g'(x_0)}$.
• If $|x - x_0| < \varepsilon$, then the tolerance is achieved, so output $x$.
• Otherwise, set $x_0 \leftarrow x$.
• Repeat until the desired accuracy is achieved (return the iterate) or until the maximum number of iterations $N$ is reached (report that the method failed).
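The outline above can be sketched in Python as follows; the function name `newton` and raising an exception on failure are my own choices, not fixed by the notes:

```python
def newton(g, g_prime, x0, eps=1e-10, N=50):
    """Find a root of g via Newton's method, starting from x0."""
    for _ in range(N):
        x = x0 - g(x0) / g_prime(x0)  # Newton update from the tangent line
        if abs(x - x0) < eps:         # tolerance achieved: output x
            return x
        x0 = x                        # otherwise, iterate again
    raise RuntimeError("method failed: maximum iterations reached")
```

For example, `newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0)` returns an approximation of $\sqrt{2}$.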

Potential issue: What if we don’t know or can’t compute the derivative $g'$?

Secant Method

Workaround: Use a difference formula instead:

$$x_{n+1} = x_n - \frac{g(x_n)}{\dfrac{g(x_n) - g(x_{n-1})}{x_n - x_{n-1}}}$$

• This method is called the secant method.
• Need to keep track of two iterates at any given time.
• Need two initial values!
• Some strategies are to pick $x_1$ close to $x_0$, or to set $x_1 = x_0 + g(x_0)$, which works well if $x_0$ is already a good approximation.
• The order of convergence of the secant method is $\frac{1 + \sqrt{5}}{2}$.
