June 29, 2012

The last time I implemented a Runge-Kutta method was in December 2001. Back then, I implemented the classical 4th order Runge-Kutta (RK4) method in C. In the past decade, however, whenever I needed a numerical solution of an ordinary differential equation (ODE), I used MATLAB. Now, I would like to use Haskell instead.

Consider the following initial value problem (IVP)

$\dot{x} (t) = f ( x (t) )$,     $x (t_0) = x_0$

where the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ and the initial condition $x_0$ are given. Solving an instance of the IVP consists of finding a function $x : [t_0, \infty) \to \mathbb{R}^n$ that satisfies both the differential equation and the initial condition.

Let $h > 0$ be the time-step, and let $t_k = t_0 + k h$ be the $k$-th time instant, where $k \in \mathbb{N}_0 := \{0, 1, 2, \dots\}$. Suppose that we solve a given instance of the IVP and obtain a closed-form solution $x : [t_0, \infty) \to \mathbb{R}^n$; we then discretize the closed-form solution to obtain an infinite sequence $x_0, x_1, x_2, \dots$ where $x_k = x (t_k)$. However, not all instances of the IVP have a closed-form solution (or even an analytic solution), and even in the cases where a closed-form solution does exist, it can be quite hard to find such a solution. Therefore, instead of discretizing the closed-form solution, one is tempted to discretize the ODE itself to obtain an approximation of the sequence $x_0, x_1, x_2, \dots$, which we will denote by $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$. We will call the approximate sequence a numerical solution.

Consider now the following discretized initial value problem (DIVP)

$\tilde{x}_{k+1} = g ( \tilde{x}_k )$,     $\tilde{x}_0 = x_0$

where $g : \mathbb{R}^n \to \mathbb{R}^n$ depends on what particular numerical method one wants to use. We will use the (classical) 4th order Runge-Kutta (RK4) method [1], which is given by

$g (x) = \displaystyle x + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right)$

where the $k_i$ variables are given by

$\begin{array}{rl} k_1 &= h f (x)\\\\ k_2 &= h f (x + \frac{1}{2} k_1)\\\\ k_3 &= h f (x + \frac{1}{2} k_2)\\\\ k_4 &= h f (x + k_3)\end{array}$

In a nutshell, one hopes that by making the time-step $h$ “small enough”, the numerical solution $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$ will be “close enough” to $x_0, x_1, x_2, \dots$, the discretization of the actual solution $x$. This hope involves a fair amount of faith, especially when floating-point arithmetic is used, as underflow, overflow, and round-off error are not to be dismissed.

__________

The following Haskell script implements the RK4 method:

```haskell
-- define 4th order Runge-Kutta map (RK4)
rk4 :: Floating a => (a -> a) -> a -> a -> a
rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4)
  where k1 = h * f x
        k2 = h * f (x + 0.5*k1)
        k3 = h * f (x + 0.5*k2)
        k4 = h * f (x + k3)
```
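The script above handles only the scalar case ($n = 1$). For systems, one can represent a vector in $\mathbb{R}^n$ as a Haskell list and lift the arithmetic pointwise. Here is a sketch of that generalization (the name `rk4v` and the list representation are my own choices, not part of the original script):

```haskell
-- sketch: RK4 step for systems, with a vector in R^n represented as a list
-- (rk4v is a hypothetical name, not part of the original script)
rk4v :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
rk4v f h x = add x (scale (1/6) (add (add k1 (scale 2 k2)) (add (scale 2 k3) k4)))
  where
    add     = zipWith (+)   -- vector addition
    scale c = map (c *)     -- scalar multiplication
    k1 = scale h (f x)
    k2 = scale h (f (add x (scale 0.5 k1)))
    k3 = scale h (f (add x (scale 0.5 k2)))
    k4 = scale h (f (add x k3))
```

For instance, the harmonic oscillator $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1$ can be stepped with `rk4v (\[p, q] -> [q, -p]) h [1, 0]`.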

Let us now test this rather succinct implementation.

__________

Example

Consider the following IVP where $t_0 = 0$, $x_0 = 1$, and $n = 1$

$\dot{x} (t) = - x (t)$,     $x (0) = 1$.

The closed-form solution of this IVP is $x (t) = e^{-t}$. We load the script above into GHCi and compute the numerical solution of the IVP using the higher-order function `iterate`:

```
*Main> -- discretize closed-form solution
*Main> let xs = [ exp (-t) | t <- [0,0.1..]]
*Main> -- define function f
*Main> let f x = -x
*Main> -- define time-step h
*Main> let h = 1 / 100000
*Main> -- define initial condition
*Main> let x0 = 1.0
*Main> -- compute numerical solution
*Main> let xs_tilde = iterate (rk4 f h) x0
*Main> -- decimate numerical solution
*Main> let xs_dec = [ xs_tilde !! k | k <- [0,1..], rem k 10000 == 0]
```

Note that the time-step is $h = 1 / 100000$. Hence, the numerical solution will have $10^5$ samples between $t = 0$ and $t = 1$, which is probably too fine a discretization. Thus, we decimate the numerical solution by a factor of $10^4$, so that we are left with only eleven samples between $t = 0$ and $t = 1$, with a uniform spacing of $\Delta t = 0.1$ between consecutive samples. We can now compare the discretization of the closed-form solution with the heavily decimated numerical solution, as follows:
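A side note on the decimation step: indexing with `!!` inside the comprehension walks the list from its head for every sample, so the cost grows quadratically in the number of decimated samples produced. A linear-time alternative is a small helper (a sketch; `takeEvery` is a hypothetical name, not a standard library function, and this version is only safe on infinite lists such as `xs_tilde`):

```haskell
-- sketch: keep every n-th element of an (infinite) list in linear time
-- (takeEvery is a hypothetical helper, not part of the standard library;
--  on a finite list it eventually evaluates the head of an empty list)
takeEvery :: Int -> [a] -> [a]
takeEvery n = map head . iterate (drop n)
```

Defining `xs_dec` as `takeEvery 10000 xs_tilde` yields the same samples: elements $0, 10^4, 2 \cdot 10^4, \dots$ of the numerical solution.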

```
*Main> -- print discretized closed-form solution
*Main> take 11 xs
[1.0,0.9048374180359595,0.8187307530779818,
0.7408182206817179,0.6703200460356392,
0.6065306597126333,0.5488116360940264,
0.49658530379140947,0.44932896411722156,
0.4065696597405991,0.36787944117144233]
*Main> -- print decimated numerical solution
*Main> take 11 xs_dec
[1.0,0.9048374180359563,0.8187307530779842,
0.7408182206817214,0.6703200460356442,
0.6065306597126376,0.5488116360940266,
0.49658530379141175,0.44932896411722334,
0.40656965974059905,0.367879441171439]
*Main> -- compute difference of two solutions
*Main> let es = zipWith (-) xs_dec xs
*Main> -- print difference
*Main> take 11 es
[0.0,-3.219646771412954e-15,2.3314683517128287e-15,
3.552713678800501e-15,4.9960036108132044e-15,
4.3298697960381105e-15,2.220446049250313e-16,
2.275957200481571e-15,1.7763568394002505e-15,
-5.551115123125783e-17,-3.3306690738754696e-15]
*Main> -- compute energy of the difference
*Main> sum $ take 11 $ map (^2) es
9.161263559461204e-29
```

Note how small the values of the difference sequence are, on the order of $10^{-15}$. The implementation of the RK4 method seems to be working. Who needs MATLAB now? ;-)
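The hoping can also be quantified: the global error of the classical RK4 method is $O(h^4)$, so halving the time-step should reduce the error by roughly a factor of $2^4 = 16$. The following sketch checks this empirically on the IVP above (`globalError` is an illustrative name of mine; `rk4` is the map defined earlier, repeated here so that the snippet stands alone):

```haskell
-- sketch: empirically check the O(h^4) global error of RK4 on dx/dt = -x
-- (globalError is an illustrative name; rk4 is the map from the script above,
--  repeated here so that this snippet is self-contained)
rk4 :: Floating a => (a -> a) -> a -> a -> a
rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4)
  where k1 = h * f x
        k2 = h * f (x + 0.5*k1)
        k3 = h * f (x + 0.5*k2)
        k4 = h * f (x + k3)

-- error at t = 1 against the closed-form solution x(t) = exp (-t)
globalError :: Double -> Double
globalError h = abs (iterate (rk4 negate h) 1 !! round (1 / h) - exp (-1))
```

Evaluating `globalError 0.1 / globalError 0.05` in GHCi gives a ratio close to $16$, consistent with a fourth-order method.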

__________

References

[1] Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.