## Posts Tagged ‘Numerical Methods’

### Generalizing Heron’s method

November 25, 2012

Suppose we are given a positive rational number $y$. We would like to compute an approximation of the $n$-th root of $y$, which we denote by $\sqrt[n]{y}$. If $n = 2$, we can use Heron’s method to approximate $\sqrt{y}$, as we saw a few days ago. But what if $n > 2$? Is it possible to generalize Heron’s method?

Indeed, it is. Let us introduce a function $f_{n,y} : \mathbb{R} \to \mathbb{R}$, defined by $f_{n,y} (x) = x^n - y$, where $n \geq 2$ and $y > 0$. By construction, the positive zero of function $f_{n,y}$ is $\sqrt[n]{y}$. Do note that if $n$ is odd, then $-\sqrt[n]{y}$ is not a zero of $f_{n,y}$. Recall that Newton’s method uses the recurrence relation $x_{k+1} = g_{n,y} ( x_k )$, where function $g_{n,y} : \mathbb{R} \to \mathbb{R}$ is defined by

$g_{n,y} (x) = x - \displaystyle\frac{f_{n,y} (x)}{f_{n,y}' (x)}$

where $f_{n,y}'$ is the first derivative of $f_{n,y}$. Hence, we obtain

$g_{n,y} (x) = x - \displaystyle\frac{f_{n,y} (x)}{f_{n,y}' (x)} = x - \displaystyle\left(\frac{x^n - y}{n x^{n-1}}\right) = \displaystyle\frac{1}{n} \left( (n-1) x + \frac{y}{x^{n-1}}\right)$

Thus, we have the generalized Heron’s method

$x_{k+1} = \displaystyle\frac{1}{n} \left( (n-1) x_k + \frac{y}{x_k^{n-1}}\right)$

If we make $n = 2$, we obtain the standard Heron’s method, as we expected. The following Haskell script implements this generalized Heron’s method using arbitrary-precision rational numbers (included in the Data.Ratio library) to represent $y$ and the sequence of approximations $(x_k)_{k \in \mathbb{N}_0}$:

import Data.Ratio

-- define generalized Heron map
g :: Integer -> Rational -> Rational -> Rational
g n y x = (1 % n) * ((fromIntegral (n-1) * x) + (y / (x^(n-1))))

-- initial approximation
x0 :: Rational
x0 = 1 % 1

-- list of approximations of the n-th root of y
roots :: Integer -> Rational -> [Rational]
roots n y | n > 1  = iterate (g n y) x0
          | n <= 1 = error "Invalid value of n!!"

We load this script into GHCi. Let us now perform some numerical experiments.

__________

Experiment #1

We would like to compute a rational approximation of $\sqrt{2}$. We have $n = 2$ and $y = 2$. Here is a very brief GHCi session:

*Main Data.Ratio> take 6 (roots 2 2)
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,
886731088897 % 627013566048]

We obtain the following rational approximation after five iterations

$x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$

which is the same approximation I obtained twice before.

__________

Experiment #2

We would like to compute a rational approximation of $\sqrt{5}$. We have $n = 2$ and $y = 5$. Here is a very brief GHCi session:

*Main Data.Ratio> take 7 (roots 2 5)
[1 % 1,3 % 1,7 % 3,47 % 21,2207 % 987,4870847 % 2178309,
23725150497407 % 10610209857723]

We obtain the following rational approximation after six iterations

$x_6 = \displaystyle\frac{23725150497407}{10610209857723} \approx \sqrt{5}$

How good is this approximation? Let us check:

*Main Data.Ratio> (23725150497407 % 10610209857723)^2 - 5
4 % 112576553224922323902744729
*Main Data.Ratio> 4 / 112576553224922323902744729
3.5531377408652764e-26

Not bad! We could now obtain a rational approximation of the famous golden ratio $\varphi := \frac{1 + \sqrt{5}}{2}$, as follows:

*Main Data.Ratio> let xs = roots 2 5
*Main Data.Ratio> let phis = map (/2) (map (+1) xs)
*Main Data.Ratio> take 7 phis
[1 % 1,2 % 1,5 % 3,34 % 21,1597 % 987,3524578 % 2178309,
17167680177565 % 10610209857723]

We obtain the following rational approximation after six iterations

$\tilde{\varphi}_6 = \displaystyle\frac{17167680177565}{10610209857723} \approx \varphi$

How good is this approximation? Let us recall that the golden ratio is one of the solutions of the equation $x^2 - x - 1 = 0$. Hence, we can check how close to zero $\tilde{\varphi}_6^2 - \tilde{\varphi}_6 - 1$ is:

*Main Data.Ratio> let phi6 = 17167680177565 % 10610209857723
*Main Data.Ratio> phi6^2 - phi6 - 1
1 % 112576553224922323902744729
*Main Data.Ratio> 1 / 112576553224922323902744729
8.882844352163191e-27

I would say that is fairly close.

__________

Experiment #3

We now would like to compute a rational approximation of $\sqrt[3]{10}$. We have $n = 3$ and $y = 10$. Here is a very brief GHCi session:

*Main Data.Ratio> take 6 (roots 3 10)
[1 % 1,4 % 1,23 % 8,4909 % 2116,55223315303 % 25495981298,
83759169926117983945469262167029 % 38876457805393768546966848104041]

We obtain the following rational approximation after five iterations

$x_5 = \displaystyle\frac{83759169926117983945469262167029}{38876457805393768546966848104041} \approx \sqrt[3]{10}$

Is this approximation any good? Let us check:

*Main Data.Ratio> let x5 = (roots 3 10) !! 5
*Main Data.Ratio> x5^3
5876207108075025254603649658428809744645894398375582704
87986732898174481543266076676033790365389 % 58757060813
2677736272189042625165443146909261971794220978628909441
43919908404893015241356540921
*Main Data.Ratio> 5876207108075025254603649658428809744
6458943983755827048798673289817448154326607667603379036
5389 / 587570608132677736272189042625165443146909261971
79422097862890944143919908404893015241356540921
10.000852709004354

This is somewhat disappointing. The rational approximation $x_5$ is a ratio of two enormously long integers, yet the error is not correspondingly small: note that $x_5^3 - 10 \approx 10^{-3}$. We can do some quick error analysis. Let $f_{3,10} (x) = x^3 - 10$. The first-order Taylor approximation of $f_{3,10}$ around $x_5$, evaluated at $x = \sqrt[3]{10}$, is

$f_{3,10} (\sqrt[3]{10}) \approx f_{3,10} (x_5) + f_{3,10}' (x_5) (\sqrt[3]{10} - x_5)$

and, since $f_{3,10} (\sqrt[3]{10}) = 0$, we obtain the estimate of the error

$x_5 - \sqrt[3]{10} \approx \displaystyle\frac{f_{3,10} (x_5)}{f_{3,10}' (x_5)}$

In other words, dividing the residual $f_{3,10} (x_5)$ by the slope $f_{3,10}' (x_5)$ yields an estimate of the magnitude of the approximation error $x_5 - \sqrt[3]{10}$. Let us use GHCi once again:

*Main Data.Ratio> let error = (x5^3 - 10) / (3 * x5^2)
*Main Data.Ratio> error
1670089160826306272530773923851043922672595525468316978
5941152245094153072382174540074985393 % 272741620880842
8590518540423731512512814773109630109912907276686930689
12248623218095571481624481
*Main Data.Ratio> 1670089160826306272530773923851043922
6725955254683169785941152245094153072382174540074985393
/ 2727416208808428590518540423731512512814773109630109
91290727668693068912248623218095571481624481
6.123338108179484e-5

This is still quite a large error, I would say: of the order of $10^{-5}$, despite those enormously long integers.

__________

Bonus Experiment

Let us go crazy and attempt to compute a rational approximation of $\sqrt[100]{100}$. We have $n = 100$ and $y = 100$. We expect the rational numbers to quickly become ratios of astronomically long integers. Therefore, we will not be showing any rational approximations on GHCi. Here is something scary:

*Main Data.Ratio> let x3 = (roots 100 100) !! 3
*Main Data.Ratio> let error = (x3^100 - 100) / (100 * x3^99)
*Main Data.Ratio> let error_float = fromIntegral (numerator error)
/ fromIntegral (denominator error)
*Main Data.Ratio> error_float

After some 20 minutes without an output, I decided to abort this experiment. If you want to show x3, you can, but you will see thousands of digits. Thousands! You have been warned.
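To get a feel for how quickly the digits pile up without printing any of them, we can measure the numerators instead. Here is a small sketch (the `digits` helper is mine, not part of the original script), restating the generalized Heron map from above:

```haskell
import Data.Ratio

-- generalized Heron map and iteration, as in the script above
g :: Integer -> Rational -> Rational -> Rational
g n y x = (1 % n) * (fromIntegral (n - 1) * x + y / x ^ (n - 1))

roots :: Integer -> Rational -> [Rational]
roots n y = iterate (g n y) (1 % 1)

-- number of decimal digits in the numerator of the k-th approximant
-- (helper name is mine)
digits :: Integer -> Rational -> Int -> Int
digits n y k = length (show (numerator (roots n y !! k)))
```

For $n = 100$, $y = 100$, evaluating `digits 100 100 k` for $k = 0, 1, 2, 3$ in GHCi shows the numerator growing from 1 digit to a few hundred digits and then to tens of thousands; each further iteration multiplies the digit count roughly a hundredfold, which is why printing $x_3$ (let alone computing $x_3^{100}$) gets out of hand so quickly.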

### Heron’s method using integer arithmetic

November 22, 2012

Suppose we are given a positive rational number $y$, whose square root $\sqrt{y}$ we would like to compute. To be more precise, we would like to compute an approximation of $\sqrt{y}$.

Let $x_k \neq 0$ be an approximation of $\sqrt{y}$. Hence, we have that $\sqrt{y}$ is the geometric mean of $x_k$ and $y / x_k$. If we replace the geometric mean of these two numbers by their arithmetic mean, we hopefully obtain a “better” approximation $x_{k+1}$, as follows

$x_{k+1} = \displaystyle\frac{1}{2} \left( x_k + \frac{y}{x_k}\right)$

If all goes well, then $x_N$ will be “sufficiently close” to $\sqrt{y}$ for a “sufficiently large” integer $N$. This algorithm is the famous Heron’s method, devised by Heron of Alexandria some 2000 years ago. Some call it the Babylonian method, which suggests that the algorithm may be some 4000 years old. It’s not merely vintage, it’s antique.

Nonetheless, this is not a post on archaeology. We don’t want to sit around and talk about Heron’s method, we want to actually implement it. What programming language should we use? Let us use Haskell. How are we going to represent $y$ and the sequence of approximations $(x_k)_{k \in \mathbb{N}_0}$?

__________

Heron’s method using floating-point arithmetic

Though millions of people have already implemented Heron’s method using floating-point arithmetic (I implemented it in C in late 2000), one more implementation would do no harm:

y :: Floating a => a
y = 2

-- define Heron map
g :: Floating a => a -> a
g x = 0.5 * (x + y/x)

-- initial approximation
x0 :: Floating a => a
x0 = 1

-- list of approximations
xs :: Floating a => [a]
xs = iterate g x0

Let us load this script into GHCi. Here’s a GHCi session:

*Main> take 6 xs
[1.0,1.5,1.4166666666666665,1.4142156862745097,
1.4142135623746899,1.414213562373095]
*Main> let x5 = xs !! 5
*Main> x5**2 - 2
-4.440892098500626e-16

We thus have the following approximation after five iterations

$x_5 = 1.414213562373095 \approx \sqrt{2}$

__________

Heron’s method using rational arithmetic

Let us now use rational numbers of arbitrary precision. The following script uses the Data.Ratio library:

import Data.Ratio

y :: Rational
y = 2

-- define Heron map
g :: Rational -> Rational
g x = 0.5 * (x + y/x)

-- initial approximation
x0 :: Rational
x0 = 1 % 1

-- list of approximations
xs :: [Rational]
xs = iterate g x0

Let us load this script into GHCi. Here’s a GHCi session:

*Main> take 6 xs
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,
886731088897 % 627013566048]
*Main> let x5 = xs !! 5
*Main> x5*x5 - 2
1 % 393146012008229658338304
*Main> 1 / 393146012008229658338304
2.5435842395854372e-24

Note that the error is eight orders of magnitude smaller than in the previous case (where we used finite-precision floating-point numbers). We pay for more accuracy with more computation. Thus, after five iterations, we have the following rational approximation

$x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$

This is probably not news to you, my dear reader. Last April, I did, in fact, implement Heron’s method under the disguise of Newton’s method (see the appendix).

__________

Heron’s method using integer arithmetic

A rational number $y \in \mathbb{Q}$ can be expressed as a fraction $a / b$, where $a$ and $b$ are integers and $b \neq 0$. Note that $a / b$ can (obviously?) be represented as a pair of integers $(a,b)$. Since Haskell has arbitrary-precision integers, we can implement Heron’s method using pairs of arbitrary-precision integers to represent $y$ and the sequence of approximations $(x_k)_{k \in \mathbb{N}_0}$.

Let $y := a / b$ and $x_k := p_k / q_k$. The recurrence relation

$x_{k+1} = \displaystyle\frac{1}{2} \left( x_k + \frac{y}{x_k}\right)$

can thus be rewritten in the following form

$\displaystyle\frac{p_{k+1}}{q_{k+1}} = \frac{1}{2} \left( \frac{p_k}{q_k} + \frac{a}{b} \frac{q_k}{p_k} \right) = \frac{b \, p_k^2 + a \, q_k^2}{2 b \, p_k \, q_k}$

We then have two coupled recurrence relations

$\left[\begin{array}{c} p_{k+1}\\ q_{k+1}\end{array}\right] = \left[\begin{array}{c} b \, p_k^2 + a \, q_k^2\\ 2 b \, p_k \, q_k\end{array}\right]$

where $a$, $b$, $p_k$, and $q_k$ are integers. Here is a Haskell script that implements this vector recurrence relation:

a, b :: Integer
a = 2
b = 1

-- define Heron map
g :: (Integer,Integer) -> (Integer,Integer)
g (p,q) = (b * p^2 + a * q^2, 2 * b * p * q)

-- initial approximation
p0, q0 :: Integer
p0 = 1
q0 = 1

-- list of approximations
xs :: [(Integer,Integer)]
xs = iterate g (p0,q0)

Let us load this script into GHCi. Here’s a GHCi session:

*Main> take 6 xs
[(1,1),(3,2),(17,12),(577,408),(665857,470832),
(886731088897,627013566048)]
*Main> let x5 = xs !! 5
*Main> x5
(886731088897,627013566048)

Hence, after five iterations, we obtain the very same rational approximation we obtained in the previous case (where we used arbitrary-precision rational numbers)

$x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$

It should be noted, however, that this implementation using arbitrary-precision integers is different in one very important aspect from the one using arbitrary-precision rational numbers. What is the difference? Let GHCi answer this question:

*Main> import Data.Ratio
*Main Data.Ratio> 2 % 2
1 % 1
*Main Data.Ratio> 4 % 4
1 % 1
*Main Data.Ratio> 3 % 6
1 % 2

The main difference between the two implementations is that when we use arbitrary-precision rational numbers, there is automatic reduction of every fraction to its irreducible form, which requires computing the greatest common divisor (GCD) of the numerator and denominator of each rational number at each iteration. This has a computational cost, of course, but it also has the benefit of avoiding reducible fractions (which are inefficient representations of rational numbers). When we use pairs of arbitrary-precision integers, we do not compute the GCD (although we could, if we wanted to), but we may have to work with pairs of unnecessarily long integers (that correspond to reducible fractions), which is potentially dangerous.
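If we wanted the best of both worlds, we could divide out the GCD explicitly after each step. Here is a minimal sketch (the function `heronReduced` is my name, not part of the original script); it takes $a$ and $b$ as arguments rather than as top-level constants:

```haskell
-- Heron map on integer pairs for y = a/b, dividing out the GCD at
-- each step so the pair stays in lowest terms (this variant is mine)
heronReduced :: Integer -> Integer -> (Integer, Integer) -> (Integer, Integer)
heronReduced a b (p, q) = (p' `div` d, q' `div` d)
  where p' = b * p ^ 2 + a * q ^ 2
        q' = 2 * b * p * q
        d  = gcd p' q'
```

For instance, `iterate (heronReduced 2000 1000) (1, 1) !! 5` yields the reduced pair `(886731088897, 627013566048)` even when $a$ and $b$ share the common factor $1000$.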

For example, suppose that we replace the first three lines in the script above with the following three lines:

a, b :: Integer
a = 2000
b = 1000

We run the script again. Here’s yet another GHCi session:


*Main> take 6 xs
[(1,1),(3000,2000),(17000000000,12000000000),
(577000000000000000000000,408000000000000000000000),
(665857000000000000000000000000000000000000000000000,
470832000000000000000000000000000000000000000000000),
(8867310888970000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000,
62701356604800000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000)]
*Main> let x5 = xs !! 5
*Main> x5
(8867310888970000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000,
62701356604800000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000)

Frankly, I am shocked! Multiplying $a$ and $b$ by $1000$ results in $p_5$ and $q_5$ being multiplied by $10^{93}$: if $t_k$ is the factor by which the pair is scaled at iteration $k$, the recurrence gives $t_{k+1} = 1000 \, t_k^2$, so the exponent of $10$ satisfies $e_{k+1} = 3 + 2 e_k$ with $e_0 = 0$, yielding $e_5 = 93$. That is a lot more zeros than I expected.
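Rather than counting zeros by eye, we can let GHCi count them. Here is a small sketch (the helper names `heron` and `trailingZeros` are mine):

```haskell
-- Heron map on integer pairs for y = a/b, as in the script above
heron :: Integer -> Integer -> (Integer, Integer) -> (Integer, Integer)
heron a b (p, q) = (b * p ^ 2 + a * q ^ 2, 2 * b * p * q)

-- count the trailing decimal zeros of an Integer (helper name is mine)
trailingZeros :: Integer -> Int
trailingZeros m = length (takeWhile (== '0') (reverse (show m)))
```

Applying `trailingZeros` to both components of `iterate (heron 2000 1000) (1, 1) !! 5` reports 93 zeros each, which pins down the exact power of ten.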

__________

Appendix: from Heron to Newton

Interestingly, Heron’s method can be derived from the no less famous Newton’s method. Let us introduce a function $f : \mathbb{R} \to \mathbb{R}$, defined by $f (x) = x^2 - y$. By construction, the zeros of function $f$ are $\sqrt{y}$ and $-\sqrt{y}$. Recall that Newton’s method uses the recurrence relation $x_{k+1} = g ( x_k )$, where function $g : \mathbb{R} \to \mathbb{R}$ is defined by

$g (x) = x - \displaystyle\frac{f (x)}{f' (x)}$

where $f'$ is the first derivative of $f$. Hence, we obtain the following

$g (x) = x - \displaystyle\frac{f (x)}{f' (x)} = x - \displaystyle\left(\frac{x^2 - y}{2 x}\right) = \displaystyle\frac{1}{2} \left( x + \frac{y}{x}\right)$

which is the recurrence relation used in Heron’s method.

### Runge–Kutta in Haskell

June 29, 2012

Last time I implemented a Runge-Kutta method was in December 2001. Back then, I implemented the classical 4th order Runge-Kutta (RK4) method in C. However, in the past decade, whenever I needed a numerical solution of an ordinary differential equation (ODE), I used MATLAB. Now, I would like to use Haskell instead.

Consider the following initial value problem (IVP)

$\dot{x} (t) = f ( x (t) )$,     $x (t_0) = x_0$

where the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ and the initial condition $x_0$ are given. Solving an instance of the IVP consists of finding a function $x : [t_0, \infty) \to \mathbb{R}^n$ that satisfies both the differential equation and the initial condition.

Let $h > 0$ be the time-step, and let $t_k = t_0 + k h$ be the $k$-th time instant, where $k \in \mathbb{N}_0 := \{0, 1, 2, \dots\}$. Suppose that we solve a given instance of the IVP and obtain a closed-form solution $x : [t_0, \infty) \to \mathbb{R}^n$; we then discretize the closed-form solution to obtain an infinite sequence $x_0, x_1, x_2, \dots$ where $x_k = x (t_k)$. However, not all instances of the IVP have a closed-form solution (or even an analytic solution), and even in the cases where a closed-form solution does exist, it can be quite hard to find such a solution. Therefore, instead of discretizing the closed-form solution, one is tempted to discretize the ODE itself to obtain an approximation of the sequence $x_0, x_1, x_2, \dots$, which we will denote by $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$. We will call the approximate sequence a numerical solution.

Consider now the following discretized initial value problem (DIVP)

$\tilde{x}_{k+1} = g ( \tilde{x}_k )$,     $\tilde{x}_0 = x_0$

where $g : \mathbb{R}^n \to \mathbb{R}^n$ depends on what particular numerical method one wants to use. We will use the (classical) 4th order Runge-Kutta (RK4) method [1], which is given by

$g (x) = \displaystyle x + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right)$

where the $k_i$ variables are given by

$\begin{array}{rl} k_1 &= h f (x)\\\\ k_2 &= h f (x + \frac{1}{2} k_1)\\\\ k_3 &= h f (x + \frac{1}{2} k_2)\\\\ k_4 &= h f (x + k_3)\end{array}$

In a nutshell, one hopes that by making the time-step $h$ “small enough”, the numerical solution $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$ will be “close enough” to $x_0, x_1, x_2, \dots$, the discretization of the actual solution $x$. This hoping involves a fair amount of faith. Especially so if floating-point arithmetic is used, as underflow, overflow, and round-off error are not to be dismissed.

__________

Haskell implementation of RK4

The following Haskell script implements the RK4 method:

-- define 4th order Runge-Kutta map (RK4)
rk4 :: Floating a => (a -> a) -> a -> a -> a
rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4)
  where k1 = h * f x
        k2 = h * f (x + 0.5*k1)
        k3 = h * f (x + 0.5*k2)
        k4 = h * f (x + k3)

Let us now test this rather succinct implementation.

__________

Example

Consider the following IVP where $t_0 = 0$, $x_0 = 1$, and $n = 1$

$\dot{x} (t) = - x (t)$,     $x (0) = 1$.

The closed-form solution of this IVP is $x (t) = e^{-t}$. We load the script above into GHCi and compute the numerical solution of the IVP using the higher-order function iterate:

*Main> -- discretize closed-form solution
*Main> let xs = [ exp (-t) | t <- [0,0.1..]]
*Main> -- define function f
*Main> let f x = -x
*Main> -- define time-step h
*Main> let h = 1 / 100000
*Main> -- define initial condition
*Main> let x0 = 1.0
*Main> -- compute numerical solution
*Main> let xs_tilde = iterate (rk4 f h) x0
*Main> -- decimate numerical solution
*Main> let xs_dec = [ xs_tilde !! k | k <- [0,1..], rem k 10000 == 0]

Note that the time-step is $h = 1 / 100000$. Hence, the numerical solution will have $10^5$ samples in between $t = 0$ and $t = 1$, which probably is too fine a discretization. Thus, we decimate the numerical solution by a factor of $10^4$, so that we are left with only eleven samples in between $t = 0$ and $t = 1$, with a uniform spacing of $\Delta t = 0.1$ between consecutive samples. We can now compare the discretization of the closed-form solution with the heavily decimated numerical solution, as follows:
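Incidentally, the list comprehension above re-traverses `xs_tilde` from the start for every sample, since `!!` is linear in its index. A linear-time decimation helper (the name `decimate` is mine) does the same job in a single pass:

```haskell
-- keep every n-th element of a list, starting with the first
-- (helper name is mine, not from the original session)
decimate :: Int -> [a] -> [a]
decimate _ []       = []
decimate n (x : xs) = x : decimate n (drop (n - 1) xs)
```

With this helper, `take 11 (decimate 10000 xs_tilde)` produces the same eleven samples as `xs_dec`, without the quadratic cost.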

*Main> -- print discretized closed-form solution
*Main> take 11 xs
[1.0,0.9048374180359595,0.8187307530779818,
0.7408182206817179,0.6703200460356392,
0.6065306597126333,0.5488116360940264,
0.49658530379140947,0.44932896411722156,
0.4065696597405991,0.36787944117144233]
*Main> -- print decimated numerical solution
*Main> take 11 xs_dec
[1.0,0.9048374180359563,0.8187307530779842,
0.7408182206817214,0.6703200460356442,
0.6065306597126376,0.5488116360940266,
0.49658530379141175,0.44932896411722334,
0.40656965974059905,0.367879441171439]
*Main> -- compute difference of two solutions
*Main> let es = zipWith (-) xs_dec xs
*Main> -- print difference
*Main> take 11 es
[0.0,-3.219646771412954e-15,2.3314683517128287e-15,
3.552713678800501e-15,4.9960036108132044e-15,
4.3298697960381105e-15,2.220446049250313e-16,
2.275957200481571e-15,1.7763568394002505e-15,
-5.551115123125783e-17,-3.3306690738754696e-15]
*Main> -- compute energy of the difference
*Main> sum $ take 11 $ map (^2) es
9.161263559461204e-29

Note how small the values of the difference sequence are: of the order of $10^{-15}$. The implementation of the RK4 method seems to be working. Who needs MATLAB now? ;-)
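The scalar `rk4` above handles $n = 1$; for systems ($n > 1$) we can represent vectors as lists. Here is a sketch of that generalization (the names `add`, `scale`, and `rk4v` are mine, not part of the original script):

```haskell
-- vector addition and scaling, representing vectors in R^n as lists
add :: Num a => [a] -> [a] -> [a]
add = zipWith (+)

scale :: Num a => a -> [a] -> [a]
scale c = map (c *)

-- one RK4 step for a vector field f : R^n -> R^n
rk4v :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
rk4v f h x = x `add` scale (1/6) (k1 `add` scale 2 k2 `add` scale 2 k3 `add` k4)
  where k1 = scale h (f x)
        k2 = scale h (f (x `add` scale 0.5 k1))
        k3 = scale h (f (x `add` scale 0.5 k2))
        k4 = scale h (f (x `add` k3))
```

For the harmonic oscillator $\dot{p} = v$, $\dot{v} = -p$, written as `f [p, v] = [v, -p]`, iterating `rk4v f h` from `[1, 0]` tracks the exact solution `[cos t, -sin t]` closely.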

__________

References

[1] Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.

### Newton’s method using rational arithmetic

April 2, 2012

The floating-point number system puts further restraints on what can be expected. We can at best hope to keep the relative error under control, and we cannot expect to find zeros far from the origin with great absolute accuracy.

I am somewhat disenchanted with floating-point arithmetic. We all have implemented numerical methods using floating-point numbers. But, how many of us have implemented numerical methods using rational numbers of arbitrary precision? I certainly have not. Hence, let us give it a try. In this post I will implement Newton’s method [1] (also known as the Newton-Raphson method) in Haskell using arbitrary-precision rational numbers.

Given a continuous (real) function $f : \mathbb{R} \to \mathbb{R}$ and $x_0$, an initial approximation of the zero of function $f$, we then obtain the next approximation using the following recurrence relation

$x_{k+1} = x_k - \displaystyle\frac{f (x_k)}{f' (x_k)}$

where $f'$ is the first derivative of $f$. We will use arbitrary-precision rational numbers (represented as a ratio of two Integer values)

type Rational = Ratio Integer

which are defined in the Data.Ratio library.

__________

Example

Suppose we would like to find a rational approximation of $\sqrt{2}$. We define a real function $f (x) = x^2 - 2$, whose zeros are $\sqrt{2}$ and $-\sqrt{2}$. Note that the first derivative of $f$ is $f' (x) = 2 x$. The initial approximation is, say, $x_0 = 1 / 1$, which is closer to $\sqrt{2}$ than to $-\sqrt{2}$. Hopefully, the sequence $\{x_k\}$ will converge to $\sqrt{2}$ instead of converging to $-\sqrt{2}$. We are performing a numerical experiment, and success is not guaranteed!

The following Haskell script implements Newton’s method:

import Data.Ratio

-- define function f
f :: Rational -> Rational
f x = x*x - 2

-- define 1st derivative of function f
f' :: Rational -> Rational
f' x = 2*x

-- define Newton map
g :: Rational -> Rational
g x = x - (f x / f' x)

-- initial approximation
x0 :: Rational
x0 = 1 % 1

-- list of rational approximations
xs :: [Rational]
xs = iterate g x0

where we used (higher-order) function iterate to create the list of rational approximations. We run the script above in GHCi and use take to compute the first few rational approximations in the list:

*Main> take 6 xs
[1 % 1,3 % 2,17 % 12,577 % 408,665857 % 470832,
886731088897 % 627013566048]

Hence, we have, for instance, the following rational approximation

$x_5 = \displaystyle\frac{886731088897}{627013566048} \approx \sqrt{2}$.

How good is this rational approximation? Let us see:

*Main> let x5 = xs !! 5
*Main> x5*x5 - 2
1 % 393146012008229658338304
*Main> 886731088897 / 627013566048
1.4142135623730951
*Main> sqrt 2
1.4142135623730951

Fairly good indeed! And all it took was five iterations!
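Rather than fixing the number of iterations in advance, we could also stop as soon as the residual is small enough. Here is a minimal sketch of such a stopping rule (the name `approxSqrt2` and the tolerance handling are mine), reusing the Newton map from the script above:

```haskell
import Data.Ratio

-- Newton map for f(x) = x^2 - 2, as in the script above
g :: Rational -> Rational
g x = x - (x * x - 2) / (2 * x)

-- first iterate whose residual |x^2 - 2| falls below the tolerance
-- (helper name and stopping rule are mine)
approxSqrt2 :: Rational -> Rational
approxSqrt2 tol = head (dropWhile farAway (iterate g (1 % 1)))
  where farAway x = abs (x * x - 2) >= tol
```

For instance, `approxSqrt2 (1 % 10 ^ 12)` returns `886731088897 % 627013566048`, the same $x_5$ as above: $x_4$ still has a residual of roughly $4.5 \times 10^{-12}$, so one more iteration is needed.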

__________

References

[1] Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.

### Variational Integrators

June 3, 2008

In my early days as an undergraduate student, I had a lot of fun writing C code to integrate ODEs numerically, so I could simulate physical systems in a computer (integrating the pendulum ODE is a classic). However, I was not very happy to realize that traditional numerical methods very easily give results with no physical meaning.

For example, think of a simple pendulum oscillating on a vertical plane. Assuming that there are no damping forces, energy is conserved. However, traditional numerical methods (such as Runge-Kutta methods) do not, in general, preserve energy. The numerical simulation of the pendulum may thus exhibit non-physical energy dissipation (or amplification).
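To make the energy-drift claim concrete, here is a small Haskell sketch (entirely mine, not from the original post) comparing explicit Euler with symplectic Euler, the simplest variational integrator, on the undamped pendulum $\ddot{\theta} = -\sin \theta$:

```haskell
-- pendulum state: (theta, omega); energy E = omega^2 / 2 - cos theta
energy :: (Double, Double) -> Double
energy (th, om) = 0.5 * om * om - cos th

-- explicit Euler step for theta'' = -sin theta
eulerStep :: Double -> (Double, Double) -> (Double, Double)
eulerStep h (th, om) = (th + h * om, om - h * sin th)

-- symplectic Euler step: update omega first, then theta with the new omega
symplecticStep :: Double -> (Double, Double) -> (Double, Double)
symplecticStep h (th, om) = (th + h * om', om')
  where om' = om - h * sin th

-- state after 10000 steps of size h = 0.01, released from rest at theta = 1
final :: (Double -> (Double, Double) -> (Double, Double)) -> (Double, Double)
final step = iterate (step 0.01) (1.0, 0.0) !! 10000
```

Loading this into GHCi and comparing `energy (final eulerStep)` and `energy (final symplecticStep)` against `energy (1.0, 0.0)`, one should observe a sizable positive drift for explicit Euler, while the symplectic variant stays within a small bounded band around the initial energy.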

[ image courtesy of Ari Stern ]

When we integrate an ODE numerically, we obtain a discrete trajectory which we hope is “close enough” to the exact flow of the differential equation. In some problems in physics, the continuous flow lies on a manifold, and we would like to preserve this geometric property: it would be wonderful if the discrete trajectory lay on the same manifold as the continuous flow. While thinking about numerical methods that preserve certain invariants, I became interested in the field of geometric integration.

One particular class of geometric integrators I have been interested in over the past year is the class of variational integrators. Many physical laws are formulated as variational principles from which differential equations can be derived. Instead of discretizing the differential equations, why not discretize the variational principles instead? This rather sexy thought lies at the basis of the nascent field of variational integrators.

Here’s an overview on variational integrators, by Prof. Matthew West. I have been reading some papers on the topic. This field seems to have been explored rather vigorously at Caltech, as there are/were a lot of people working on it:

and their collaborators. I have compiled a list of lecture slides, theses and papers in the area of variational integrators. Hope you find the list useful (I sure do!).

-/-

Some lecture slides for a gentle introduction:

-/-

A few PhD theses on variational integrators and their application:

-/-

Some papers on the foundations of variational integrators:

Applications to Computational Mechanics:

Applications to Optimal Control Theory:

Applications to Computational Electromagnetism:

Applications to Computational Physics:

The papers I have listed so far refer to deterministic integrators. However, quite recently Prof. Houman Owhadi and Nawaf Bou-Rabee have been working on stochastic variational integrators:

I wonder if stochastic variational integrators could be used in quantitative finance. Just a wild thought…

-/-

If you know some more papers, please let me know! Last but not least, Ganesh Swami once wrote a brief post on variational integrators.