Newton’s method using rational arithmetic

The floating-point number system places further constraints on what can be expected: we can at best hope to keep the relative error under control, and we cannot expect to find zeros far from the origin with great absolute accuracy.

I am somewhat disenchanted with floating-point arithmetic. We have all implemented numerical methods using floating-point numbers, but how many of us have implemented numerical methods using rational numbers of arbitrary precision? I certainly have not. Hence, let us give it a try. In this post I will implement Newton’s method [1] (also known as the Newton-Raphson method) in Haskell using arbitrary-precision rational numbers.

Given a differentiable (real) function $f : \mathbb{R} \to \mathbb{R}$ and an initial approximation $x_0$ of a zero of $f$, we obtain the next approximation using the following recurrence relation

$x_{k+1} = x_k - \displaystyle\frac{f (x_k)}{f' (x_k)}$

where $f'$ is the first derivative of $f$. We will use arbitrary-precision rational numbers (represented as a ratio of two Integer values)

type Rational = Ratio Integer

which is defined in the Data.Ratio module.
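As a quick sanity check (my own example, not from the post), rational arithmetic is exact where floating-point arithmetic rounds:

```haskell
import Data.Ratio

main :: IO ()
main = do
  -- floating-point addition rounds: 0.1 + 0.2 is not exactly 0.3
  print (0.1 + 0.2 == (0.3 :: Double))             -- False
  -- rational addition is exact: 1/10 + 2/10 is exactly 3/10
  print (1 % 10 + 2 % 10 == (3 % 10 :: Rational))  -- True
```

This exactness is what makes it possible to run Newton's method with no rounding error at all.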

__________

Example

Suppose we would like to find a rational approximation of $\sqrt{2}$. We define the function $f (x) = x^2 - 2$, whose zeros are $\sqrt{2}$ and $-\sqrt{2}$, and whose first derivative is $f' (x) = 2 x$. The initial approximation is, say, $x_0 = 1 / 1$, which is closer to $\sqrt{2}$ than to $-\sqrt{2}$. Hopefully, the sequence $\{x_k\}$ will converge to $\sqrt{2}$ rather than to $-\sqrt{2}$. We are performing a numerical experiment, and success is not guaranteed!

The following Haskell script implements Newton’s method:

import Data.Ratio

-- define function f
f :: Rational -> Rational
f x = x*x - 2

-- define 1st derivative of function f
f' :: Rational -> Rational
f' x = 2*x

-- define Newton map
g :: Rational -> Rational
g x = x - (f x / f' x)

-- initial approximation
x0 :: Rational
x0 = 1 % 1

-- list of rational approximations
xs :: [Rational]
xs = iterate g x0

where we use the (higher-order) function iterate to create the list of rational approximations. We run the script above in GHCi and use take to compute the first few rational approximations in the list:

*Main> take 6 xs
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,
886731088897 % 627013566048]

Hence, we have, for instance, the following rational approximation

$x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$.

How good is this rational approximation? Let us see:

*Main> let x5 = xs !! 5
*Main> x5*x5 - 2
1 % 393146012008229658338304
*Main> 886731088897 / 627013566048
1.4142135623730951
*Main> sqrt 2
1.4142135623730951

Fairly good indeed! And all it took was five iterations!
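Instead of taking a fixed number of iterations, one could stop as soon as the residual $f(x_k)$ is small enough. A sketch of this idea (the function name newtonUntil and the tolerance are my own choices, not from the post):

```haskell
import Data.Ratio

f :: Rational -> Rational
f x = x*x - 2

f' :: Rational -> Rational
f' x = 2*x

g :: Rational -> Rational
g x = x - f x / f' x

-- first Newton iterate whose residual |f x| drops below eps
newtonUntil :: Rational -> Rational -> Rational
newtonUntil eps = head . dropWhile (\x -> abs (f x) >= eps) . iterate g

main :: IO ()
main = print (newtonUntil (1 % 10^12) (1 % 1))
-- prints 886731088897 % 627013566048
```

With a tolerance of $10^{-12}$ this stops exactly at $x_5$, since $|f(x_4)| = 1/221682772224 \approx 4.5 \times 10^{-12}$ is still too large.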

__________

References

[1] Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.

12 Responses to “Newton’s method using rational arithmetic”

1. Ivo Maljevic Says:

Don’t know Haskell, but this looks like cheating. Looks like you are finding the floating-point values which are then replaced with their rational approximation. That is not the same as using rational numbers all along.

• Rod Carvalho Says:

Looks like you are finding the floating-point values which are then replaced with their rational approximation.

That is completely wrong. I do indeed work with rational numbers all along. Check the type declarations: I use type Rational everywhere, and nowhere do I use type Float. Take a look at the list of rational approximations:

*Main> take 6 xs
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,
886731088897 % 627013566048]


You do not see floating-point numbers anywhere. Where I did “cheat” was in my previous post, where I started with floating-point numbers and then obtained the corresponding rational approximations.

• Ivo Maljevic Says:

I really don’t know Haskell, so I could be wrong, but either Haskell does an exhaustive search over all rational numbers that satisfy the iterative Newton expression, or it is internally representing the values as floats just like I did (and then finding the rational approximation, keeping the floats hidden). In either case, there is much more involved than 5 iterations. Anything else would be just magic.

• Rod Carvalho Says:

There is no magic. An arbitrary-precision rational number in Haskell (type Rational) is merely a pair of arbitrary-precision integers. I start with $x_0 = 1 / 1$, then use the recurrence relation to obtain $x_1 = 3/2$, and then $x_2 = 17 / 12$. Haskell does not convert anything to floating-point numbers, it merely works with fractions like everyone does in elementary school! Multiplication of rational numbers is easier than addition, though.

Note, however, that since we are working with arbitrary-precision rational numbers, the more accurate the rational approximation, the longer it takes to compute it. There are only 5 iterations, but each iteration requires more computational work than the previous one, as we are dealing with ratios of larger integer numbers.
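To see this growth concretely, one can count the digits of the numerators (a quick sketch of my own, reusing the post's Newton map):

```haskell
import Data.Ratio

-- Newton map for f x = x*x - 2
g :: Rational -> Rational
g x = x - (x*x - 2) / (2*x)

main :: IO ()
main =
  -- digit count of each numerator: it roughly doubles per iteration,
  -- reflecting the quadratic convergence of Newton's method
  print [ length (show (numerator x)) | x <- take 6 (iterate g (1 % 1)) ]
  -- prints [1,1,2,3,6,12]
```

The doubling digit counts are the flip side of quadratic convergence: each iteration roughly doubles the number of correct digits, and the exact representation has to pay for all of them.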

• Ivo Maljevic Says:

It looks like I owe you an apology, I just checked manually, it is possible to do it using rational numbers. Excellent idea!

• Rod Carvalho Says:

Take a look at this 1978 paper:

In this paper they discuss finite-precision rational numbers (think of ordered pairs of Int values in C or C++).

• Ivo Maljevic Says:

Interesting, thanks! I haven’t done any DSP in a long time, but it would have been interesting to compare the sin and cos from the last page with the CORDIC and lookup table approach. And the formulas up to $\pi/4$ are sufficient since for instance $\sin (x) = \cos (\pi/2 - x)$ so the range from $\pi/4$ to $\pi/2$ is not a problem.

2. Ivo Maljevic Says:

Here is the sequence you would get using just regular rational numbers:

1.5000000000000000 (rational: 3/2)
1.4166666666666667 (rational: 17/12, etc.)
1.4142156862745099
1.4142135623746899
1.4142135623730951

which, of course, is most likely what your code does.

3. Ivo Maljevic Says:

Sorry, I meant just using floating-point values. The brackets are just used to indicate what your approximation does.

Of COURSE it’s possible to do it using rational arithmetic!! After all, we were taught how to manipulate fractions in school.

In fact, I thought of rational Newton-Raphson years ago and it’s interesting that someone else has thought of it.

Why not write the equations themselves as rational arithmetic?

x(i) = p(i)/q(i), where p(i) and q(i) are integers.
To find root of N

f(x)=x^2-N = p(i)^2/q(i)^2-N
f’(x)=2x = 2p(i)/q(i)

x(i+1)= x(i) -f(x)/f’(x) OK so far…

take p and q as p(i) and q(i), otherwise the equations get incomprehensible in this pure text format.

x(i+1) = p/q - f(p/q)/f'(p/q)

       = p/q - (p^2/q^2 - N)/(2p/q)

Multiply the numerator and denominator of the second term by q^2:

x(i+1) = p/q - (p^2 - N.q^2)/(2pq)

Put everything over the common denominator 2pq:

x(i+1) = 2p^2/(2pq) - (p^2 - N.q^2)/(2pq)

       = (p^2 + N.q^2)/(2pq)

Take the numerator and denominator as the new p(i+1) and q(i+1), so the new values are

p(i+1) = p^2 + N.q^2
q(i+1) = 2pq

Note that this uses only multiplication, no division. You might want to check that the new numerator and denominator have no common factor, and reduce them if they do.
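The integer-only recurrence is easy to sketch directly in Haskell over pairs of Integers (my own sketch; the name step is mine, and the gcd reduction follows the comment's suggestion):

```haskell
-- one Newton step for sqrt n, working on the pair (p, q) with x = p/q
step :: Integer -> (Integer, Integer) -> (Integer, Integer)
step n (p, q) = (p' `div` d, q' `div` d)
  where
    p' = p*p + n*q*q   -- new numerator:   p^2 + N.q^2
    q' = 2*p*q         -- new denominator: 2pq
    d  = gcd p' q'     -- strip any common factor

main :: IO ()
main = print (take 4 (iterate (step 10) (3, 1)))
-- prints [(3,1),(19,6),(721,228),(1039681,328776)]
```

This reproduces the 19/6 and 721/228 computed by hand below.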

**********************************************************************************
So we have Newton Raphson square roots in rational arithmetic!
**********************************************************************************

so if we want the square root of 10 and start with p=3 and q=1,
(i.e. first approximation x0 = p/q = 3)

new p and q are
19
and
6
i.e. next approximation is 19/6 (or 3.16666 in floating point)

Now, from 19 and 6, next approximation is
19^2 + 10 x 6^2 = 361 + 360 = 721
and
2x19x6 = 228

721 / 228 (or 3.1622807 in floating point)

I have tried rational Newton Raphson on cubic polynomials, but the numbers get huge very quickly.

I have also compared rational Newton Raphson with the sequence of rational approximations you get by starting from floating-point square roots. If you start with the right rational approximation, you get EXACTLY THE SAME rational numbers as you would get by approximating floating-point values, but Newton Raphson gets there far more quickly, by missing out most of the fractions.

For $\sqrt{2}$, the successive rational approximations go
1/1
3/2
7/5
17/12
41/29
99/70
239/169
577/408
etc.

However, do it with Newton Raphson and the approximations go
1/1
3/2
17/12
577/408
so you get to the same place much more quickly, as one might hope with Newton Raphson.
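The fractions 1/1, 3/2, 7/5, 17/12, ... are the continued-fraction convergents of $\sqrt{2}$, generated by p(i+1) = p(i) + 2 q(i), q(i+1) = p(i) + q(i). A quick sketch of my own confirming that the Newton iterates all appear among them:

```haskell
import Data.Ratio

-- continued-fraction convergents of sqrt 2
convergents :: [Rational]
convergents = [ p % q | (p, q) <- iterate (\(p, q) -> (p + 2*q, p + q)) (1, 1) ]

-- Newton map for f x = x*x - 2
g :: Rational -> Rational
g x = x - (x*x - 2) / (2*x)

main :: IO ()
main = do
  print (take 8 convergents)
  -- every one of the first four Newton iterates is a convergent
  print (all (`elem` take 16 convergents) (take 4 (iterate g (1 % 1))))
  -- prints True
```

The Newton iterates 1/1, 3/2, 17/12, 577/408 sit at positions 0, 1, 3, 7 of the convergent list: the index roughly doubles each step, which is exactly the "missing out most of the fractions" behaviour described above.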