The floating-point number system puts further restraints on what can be expected. We can at best hope to keep the relative error under control, and we cannot expect to find zeros far from the origin with great absolute accuracy.
– Richard W. Hamming 
I am somewhat disenchanted with floating-point arithmetic. We all have implemented numerical methods using floating-point numbers. But how many of us have implemented numerical methods using rational numbers of arbitrary precision? I certainly have not. Hence, let us give it a try. In this post I will implement Newton's method (also known as the Newton-Raphson method) in Haskell using arbitrary-precision rational numbers.
Given a continuous (real) function $f$ and $x_0$, an initial approximation of the zero of function $f$, we then obtain the next approximation $x_{n+1}$ using the following recurrence relation

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$

where $f'$ is the first derivative of $f$. We will use arbitrary-precision rational numbers (represented as a ratio of two Integer values)
type Rational = Ratio Integer
which are defined in the Data.Ratio module.
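Before diving in, it may help to see exact rational arithmetic in action. The following is a small illustrative sketch (the example values are my own, not from the post): `%` constructs a ratio that Data.Ratio keeps reduced to lowest terms, and no rounding ever occurs.

```haskell
import Data.Ratio

-- Illustrative only: exact Rational arithmetic with Data.Ratio.
main :: IO ()
main = do
  let x = 1 % 3 + 1 % 6 :: Rational
  print x                             -- exact sum, automatically reduced: 1 % 2
  print (numerator x, denominator x)  -- the underlying Integer components: (1,2)
```

Because the numerator and denominator are arbitrary-precision Integers, sums and quotients of rationals are computed exactly, which is precisely what Newton's method below will exploit.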
Suppose we would like to find a rational approximation of $\sqrt{2}$. We create a real function $f$ defined by $f(x) = x^2 - 2$, whose zeros are $-\sqrt{2}$ and $\sqrt{2}$. Note that the first derivative of $f$ is $f'(x) = 2x$. The initial approximation is, say, $x_0 = 1$, which is closer to $\sqrt{2}$ than to $-\sqrt{2}$. Hopefully, the sequence $(x_n)$ will converge to $\sqrt{2}$ instead of converging to $-\sqrt{2}$. We are performing a numerical experiment, and success is not guaranteed!
The following Haskell script implements Newton’s method:
import Data.Ratio

-- define function f
f :: Rational -> Rational
f x = x*x - 2

-- define 1st derivative of function f
f' :: Rational -> Rational
f' x = 2*x

-- define Newton map
g :: Rational -> Rational
g x = x - (f x / f' x)

-- initial approximation
x0 :: Rational
x0 = 1 % 1

-- list of rational approximations
xs :: [Rational]
xs = iterate g x0
*Main> take 6 xs
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,886731088897 % 627013566048]
Hence, we have, for instance, the following rational approximation

$$\sqrt{2} \approx \frac{886731088897}{627013566048}.$$
How good is this rational approximation? Let us see:
*Main> let x5 = xs !! 5
*Main> x5*x5 - 2
1 % 393146012008229658338304
*Main> 886731088897 / 627013566048
1.4142135623730951
*Main> sqrt 2
1.4142135623730951
Fairly good indeed! And all it took was five iterations!
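Why so few iterations? Newton's method converges quadratically near a simple zero, so the number of correct digits roughly doubles at each step. We can watch this happen by printing the exact residual $|x_n^2 - 2|$ of each iterate. The sketch below reuses the definitions from the script above; converting the residuals to Double is only for readable output, while the residuals themselves are computed exactly.

```haskell
import Data.Ratio

f, f' :: Rational -> Rational
f x = x*x - 2          -- function whose zero we seek
f' x = 2*x             -- its first derivative

g :: Rational -> Rational
g x = x - f x / f' x   -- Newton map

xs :: [Rational]
xs = iterate g (1 % 1) -- rational approximations of sqrt 2

-- exact residuals |x_n^2 - 2|; each step roughly squares the previous one
residuals :: [Rational]
residuals = map (abs . f) xs

main :: IO ()
main = mapM_ (print . (fromRational :: Rational -> Double)) (take 6 residuals)
```

The sixth residual is exactly 1 % 393146012008229658338304 (as computed in the GHCi session above), i.e. on the order of $10^{-24}$ after only five Newton steps.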
Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.