## Archive for the ‘Systems Theory’ Category

January 30, 2012

We recently studied the first-order causal LTI system which is described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. The system can be represented by the block diagram

Observing the block diagram, we conclude that there are three basic operations: addition, multiplication by a constant coefficient, and delay. We arrive at the same conclusion if we rewrite the difference equation in the form $y (n) = u (n) + \alpha y (n-1)$. Do note that the output is obtained by adding the input to a scaled and delayed version of the output and, therefore, we have a feedback system.

The system’s input-output relationship can be written as follows

$y = u + \alpha\,\mathcal{D} (y)$

where $\mathcal{D}$ is the unit-delay operator. Note that we have signals rather than signal samples on both sides of the equation. To clarify, when I say “sample”, I mean the value of the signal at some (discrete) time instant. Let us introduce the linear operator $\mathcal{H}$ such that the output signal can be written as a function of the input, $y = \mathcal{H} (u)$, assuming zero initial condition. Hence, we obtain $\mathcal{H} (u) = u + \alpha\,\mathcal{D} (\mathcal{H} (u))$. It would be convenient to introduce also a gain operator $\mathcal{G}_{\alpha}$ to carry out the multiplication by a constant coefficient, i.e., $\mathcal{G}_{\alpha} (x) = \alpha \, x$. Composing the operators, we finally obtain the following equation

$\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$

which we will now implement in Haskell.

__________

Our first implementation of the LTI system under study relied on state-space models and the scanl trick to propagate the initial state forwards in time. Our second implementation was little more than a beautified version of the first one. This third implementation will be radically different from the previous two. As before, we start by defining the type synonyms

type Signal a = [a]
type System a b = Signal a -> Signal b

which hopefully will make the code more readable. We build a function that takes two discrete-time signals (of the same type) and returns their elementwise sum:

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

Since we represent discrete-time signals as lists, the function above merely adds two lists elementwise. Let us test this function:

*Main> -- create test input signals
*Main> let us = repeat 1.0 :: Signal Float
*Main> let vs = [1..] :: Signal Float
*Main> -- add two signals elementwise
*Main> let ys = us .+ vs
*Main> take 10 ys
[2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0]
*Main> -- check types
*Main> :type (us,vs,ys)
(us,vs,ys) :: (Signal Float, Signal Float, Signal Float)

We now implement the unit-delay with zero initial condition:

delay :: Num a => System a a
delay us = 0 : us

which right-shifts the input list and places a zero at the head of the output list. If we want a unit-delay operator with a non-zero initial condition, we would use the following code instead:

delay' :: Num a => System a a
delay' us = ini : us

where ini is the initial condition of the delay block. Lastly, we create the gain operator:

gain :: Num a => a -> System a a
gain alpha = map (alpha*)

which takes a number (the gain factor) and returns a system (that maps signals to signals). Note that we use partial function application. One can think of the gain operator as a function that takes a number and a signal and returns a signal. If we fix the first argument (the gain factor), we obtain a function that maps signals to signals, i.e., a system. Here is a quick test of the delay and gain operators:

*Main> -- create signal
*Main> let xs = [1..] :: Signal Float
*Main> -- delay signal
*Main> let ys = delay xs
*Main> take 10 ys
[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
*Main> -- amplify delayed signal
*Main> let zs = gain 2.0 ys
*Main> take 10 zs
[0.0,2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0]

Finally, we have the following Haskell script:

type Signal a = [a]
type System a b = Signal a -> Signal b

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- delay operator
delay :: Num a => System a a
delay us = 0 : us

-- gain operator
gain :: Num a => a -> System a a
gain alpha = map (alpha*)

-- build feedback system
sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us

where we used $\alpha = 0.5$. Note that the last line is a direct translation of the equation $\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$ to Haskell! Beautiful!

To finish, let us obtain the impulse response of the LTI system under study for $\alpha = 0.5$:

*Main> -- create unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- compute impulse response
*Main> let hs = sys delta
*Main> take 8 hs
[1.0,0.5,0.25,0.125,6.25e-2,3.125e-2,1.5625e-2,7.8125e-3]

which is the expected impulse response $h (n) = \alpha^n$ for $n \geq 0$. Frankly, I am in awe. It is amazing that this implementation works!
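As a sanity check, we can also compare the computed impulse response against the closed form $h (n) = \alpha^n$ programmatically. The following self-contained sketch repeats the definitions above (using Double instead of Float; hClosed is a name introduced here for illustration, not part of the original script):

```haskell
type Signal a = [a]
type System a b = Signal a -> Signal b

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- unit delay with zero initial condition
delay :: Num a => System a a
delay us = 0 : us

-- gain operator
gain :: Num a => a -> System a a
gain alpha = map (alpha *)

-- feedback system with alpha = 0.5, as in the post
sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us

-- closed-form impulse response h(n) = 0.5^n, built by repeated halving
hClosed :: Signal Double
hClosed = iterate (* 0.5) 1.0
```

Since the powers of $1/2$ are exactly representable in floating point, the two sequences should agree sample for sample, not just approximately.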

January 25, 2012

Last weekend we learned how to build systems (LTI or otherwise) using Haskell. We now want to construct interconnections of systems. In this post we will study the series interconnection of systems, usually known as cascade interconnection. Parallel and feedback interconnections will be discussed in future posts.

__________

Let us consider the series interconnection of two causal discrete-time LTI systems, $\mathcal{H}_1$ and $\mathcal{H}_2$, as depicted below

where $y = \mathcal{H}_1 (x)$ and $w =\mathcal{H}_2 (y)$ are the outputs of each LTI system in the cascade. Since the output of system $\mathcal{H}_1$ is the input of system $\mathcal{H}_2$, we have $w = (\mathcal{H}_2 \circ \mathcal{H}_1) (x)$.

What LTI systems should we consider? Let us choose the simplest ones: the accumulator and the differentiator. Thus, let $\mathcal{H}_1$ be an accumulator (also known as “discrete-time integrator”), whose input-output relationship is as follows

$y (n) = y (n-1) + x (n)$,

and let $\mathcal{H}_2$ be a first difference operator (also known as “discrete-time differentiator”), whose input-output relationship is

$w (n) = y (n) - y (n-1)$.

Note that the output of the cascade of these two LTI systems is thus

$w (n) = y (n) - y (n-1) = (y (n-1) + x (n)) - y (n-1) = x (n)$

and, hence, the cascade is input-output equivalent to the identity operator. Since $(\mathcal{H}_2 \circ \mathcal{H}_1) (x) = x$ for all signals $x$, we say that $\mathcal{H}_2$ is the left-inverse of system $\mathcal{H}_1$. Since both systems are LTI, the operators commute, i.e., $\mathcal{H}_2 \circ \mathcal{H}_1 = \mathcal{H}_1 \circ \mathcal{H}_2$ and, therefore, $\mathcal{H}_2$ is also the right-inverse of system $\mathcal{H}_1$. Since $\mathcal{H}_2$ is the left- and right-inverse of $\mathcal{H}_1$, we say that $\mathcal{H}_2$ is the inverse of system $\mathcal{H}_1$ (and vice-versa). Do keep in mind, however, that not all systems are invertible. For details, take a look at Oppenheim & Willsky [1].

__________

We can easily find a state-space realization for the accumulator, but that will not be necessary. As in previous posts, let us view discrete-time signals as lists. Thus, the accumulator takes a list $[x_0, x_1, x_2, \dots]$, and returns the following list

$[y_0, y_1, y_2, \dots] = [x_0, x_0 + x_1, x_0 + x_1 + x_2, \dots]$

where we assume that the initial condition of the accumulator is zero (i.e., $y_{-1} = 0$). Instead of using scanl yet again (which would require us to drop the head of the list), let us now use scanl1 to implement the accumulator:

acc :: Num a => System a a
acc = scanl1 (+)

Please note that if the initial condition of the accumulator is not zero, we should use the following code instead:

acc' :: Num a => System a a
acc' us = tail $ scanl (+) acc_ini us

where acc_ini is the initial condition of the accumulator (analogous to the constant of integration in integral calculus).

The differentiator is not a proper system [2] and, therefore, it has no state-space realization. The differentiator takes a list $[y_0, y_1, y_2, \dots]$, and returns the following list

$[w_0, w_1, w_2, \dots] = [y_0, y_1 - y_0, y_2 - y_1, \dots]$

where we again assume that $y_{-1} = 0$. Note the following

$[w_0, w_1, w_2, \dots] = [y_0, y_1, y_2, \dots] - [0, y_0, y_1, \dots]$

i.e., list $w$ is obtained by elementwise subtraction of a right-shifted version of list $y$ from list $y$ itself. Subtracting two lists elementwise can be implemented using zipWith:

diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

If the initial condition of the differentiator is not zero, we should instead use the following code:

diff' :: Num a => System a a
diff' ys = zipWith (-) ys (diff_ini : ys)

where diff_ini is the initial condition of the differentiator. Piecing it all together, we finally obtain the following Haskell script:

type Signal a = [a]
type System a b = Signal a -> Signal b

-- accumulator
acc :: Num a => System a a
acc = scanl1 (+)

-- differentiator
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

-- cascade of the acc. and diff.
sys :: Num a => System a a
sys = diff . acc

Take a look at the last line. It says that cascading systems is the same as composing systems! Hence, the Haskell implementation is conceptually very close to the mathematical formulation using operators.
Functional analysis meets functional programming.

We run the script above on GHCi and then play with it:

*Main> -- build unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- output of the accumulator
*Main> let ys = acc delta
*Main> take 20 ys
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> -- output of the differentiator
*Main> let ws = diff ys
*Main> take 20 ws
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
*Main> -- impulse response of the cascade
*Main> let hs = sys delta
*Main> take 20 hs
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]

The impulse response of the accumulator is the unit step. The impulse response of the cascade is the unit impulse, as we expected. The differentiator is the inverse of the accumulator, and vice-versa.

__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1997.

[2] Panos Antsaklis, Anthony Michel, A Linear Systems Primer, Birkhäuser Boston, 2007.

### Discrete-time LTI systems in Haskell II

January 22, 2012

Yesterday I wrote a post on how to implement discrete-time LTI systems in Haskell. The code I presented did not quite please my delicate aesthetic sensitivity, as it bundled the declaration of the signals with the declaration of the LTI system under study. This is not practical. Ideally, I would like to define the LTI system in a Haskell script, run it on the GHCi interpreter, then create the signals using the GHCi command line. Why? Because I can then compute the response of the system to various test input signals without reloading the script.

In this post, we will again consider the first-order causal discrete-time LTI system described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. In my previous post, I obtained a state-space representation of this LTI system.
In this post, I will propose a better implementation of the given system in Haskell.

__________

Signal and System type synonyms

What is a discrete-time signal? We can think of a discrete-time signal as a sequence of numbers or symbols. In Haskell, we often use lists of floating-point numbers to represent real-valued discrete-time signals. More generally, we have the type synonym

type Signal a = [a]

which says that a signal of type $a$ is a list of elements of type $a$. We can then have signals of various types: integer, fixed-point, floating-point, bit, binary word, etc. This should be particularly useful in case we want to simulate mixed-signal circuits.

What is a discrete-time system? It is that which maps input discrete-time signals to output discrete-time signals. We will use the type synonym

type System a b = (Signal a -> Signal b)

which tells us that a system of type System $a$ $b$ is a function that maps an input signal of type $a$ to an output signal of type $b$. In other words, it takes a list whose elements are of type $a$, and it returns a list whose elements are of type $b$.

__________

Implementing the LTI system in Haskell

In my previous post, I created the state and output sequences using

xs = scanl f x0 us
ys = zipWith g xs us

Combining the two lines above into a single line, we obtain

ys = zipWith g (scanl f x0 us) us

which returns the output sequence when given the input sequence. Note that the state sequence is not created. Assuming zero initial condition, we can then create a system (i.e., a function that takes a list and returns a list) as follows

sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

which takes signals of type $a$ and returns signals of the same type, where $a$ is a floating-point type (either Float or Double).
Putting it all together in a script, we finally obtain:

type Signal a = [a]
type System a b = (Signal a -> Signal b)

-- state-space model
f, g :: Floating a => a -> a -> a
f x u = (0.5 * x) + u  -- state-transition
g x u = f x u          -- output

-- define LTI system
sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

which defines the LTI system under study with $\alpha = 1/2$. For this choice of $\alpha$, the system is a lowpass filter.

__________

Example: lowpass-filtering a square wave

We run the script above on GHCi to create the desired LTI system. Let us now create an input signal, a square wave of period equal to 16 and duty cycle equal to 50%, and lowpass-filter it using the LTI system we created:

*Main> -- create square wave
*Main> let ones = take 8 $ repeat 1.0 :: Signal Float
*Main> let zeros = take 8 $ repeat 0.0 :: Signal Float
*Main> let us = cycle (ones ++ zeros) :: Signal Float
*Main> -- create output sequence
*Main> let ys = sys us
*Main> -- check types
*Main> :type us
us :: Signal Float
*Main> :type ys
ys :: Signal Float
*Main> :type sys
sys :: Floating a => System a a

Finally, we compute the first 64 samples of the input and output signals using function take, and plot the signals using MATLAB:

In case you are wondering why the peak amplitude of the output signal is twice that of the input signal, compute the transfer function of the LTI system and note that the DC gain is equal to $\frac{1}{1 - \alpha} = 2$.
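The DC gain can also be confirmed numerically: drive the filter with a constant unit input and inspect the output once the transient has died out. Here is a self-contained sketch repeating the script's definitions (the sample index 100 is an arbitrary choice, well past the transient):

```haskell
type Signal a = [a]
type System a b = (Signal a -> Signal b)

-- state-space model with alpha = 0.5, as in the script above
f, g :: Floating a => a -> a -> a
f x u = (0.5 * x) + u  -- state-transition
g = f                  -- output

sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

-- steady-state response to a constant unit input; should approach
-- the DC gain 1 / (1 - 0.5) = 2
dcOutput :: Double
dcOutput = sys (repeat 1.0) !! 100
```

Since $y (n) = 2 - 0.5^n$ for the unit-step input, the value at $n = 100$ differs from 2 by roughly $0.5^{100}$, far below double precision.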

### Discrete-time LTI systems in Haskell

January 21, 2012

Consider the first-order causal discrete-time LTI system described by the following difference equation

$y (n) - \alpha y (n-1) = u (n)$

where $|\alpha| < 1$. This system can be represented by the block diagram

where $D$ is a unit-delay block, the output of which is $y (n-1)$. We now introduce state variable $x (n)$ denoting the output of the unit-delay block, i.e., $x(n) = y(n-1)$. Thus, $x (n+1) = y(n)$ and, since $y (n) = \alpha y (n-1) + u (n)$, we obtain the following state-transition equation

$x (n+1) = \alpha x (n) + u (n)$.

The initial condition is $x(0) = y(-1)$, which we assume to be zero. Note that the output is $y (n) = x(n+1)$ and, thus, the output equation is

$y (n) = \alpha x (n) + u (n)$.

Lastly, we define $f (x,u) = g (x,u) := \alpha x + u$, which allows us to write the state-transition and output equations as follows

$\begin{array}{rl} x (n+1) &= f (x(n), u(n))\\ y (n) &= g (x(n), u(n))\\\end{array}$

We can now implement this state-space model in Haskell!

__________

Given an initial state $x_0$ and an input sequence $u = [u_0, u_1, \dots]$, we can compute the state sequence using the scanl trick I wrote about last weekend. Once we have obtained the state sequence $x = [x_0, x_1, \dots]$, the output sequence is

$y = [y_0, y_1, \dots] = [g (x_0, u_0), g (x_1, u_1), \dots]$.

Do you recognize the pattern? We are zipping lists $x$ and $u$ using function $g$!! We thus use (higher-order) function zipWith to compute the output sequence.

For $\alpha = 7/8$ and a constant input sequence, the following Haskell code computes the state and output trajectories:

-- constant input sequence
us :: Floating a => [a]
us = repeat 1.0

-- parameter alpha
alpha :: Floating a => a
alpha = 7/8

-- state-transition function
f :: Floating a => a -> a -> a
f x u = (alpha * x) + u

-- output function
g :: Floating a => a -> a -> a
g x u = f x u

-- initial condition
x0 :: Floating a => a
x0 = 0.0

-- state sequence
xs :: Floating a => [a]
xs = scanl f x0 us

-- output sequence
ys :: Floating a => [a]
ys = zipWith g xs us

Not quite. I am still stuck in the imperative paradigm and need to correct myself: the code above does not compute anything. It merely declares what the input, state and output sequences are. After running the script above in GHCi, we compute 30 samples of the input and output sequences using function take:

*Main> take 30 us
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> take 30 ys
[1.0,1.875,2.640625,3.310546875,3.896728515625,
4.409637451171875,4.858432769775391,5.251128673553467,
5.5947375893592834,5.895395390689373,6.158470966853201,
6.388662095996551,6.590079333996982,6.7663194172473595,
6.92052949009144,7.05546330383001,7.173530390851258,
7.276839091994852,7.367234205495495,7.446329929808559,
7.515538688582489,7.576096352509678,7.629084308445968,
7.675448769890222,7.716017673653944,7.751515464447201,
7.782576031391301,7.809754027467388,7.833534774033964,
7.854342927279719]

We can plot these sequences in MATLAB:

We now know how to simulate discrete-time LTI systems using Haskell. Note that our framework is based on state-space models, not on transfer functions. Hence, it should be possible to simulate discrete-time nonlinear systems in the same manner: using scanl to compute the state sequence, and zipWith to compute the output sequence. All we need is a state-space representation of the system under study.
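As a sketch of that claim, here is the same scanl/zipWith recipe applied to a nonlinear system. The saturating tanh state transition below is purely an illustrative choice of mine, not a system from this post:

```haskell
type Signal a = [a]

-- nonlinear state-transition function: a first-order model whose
-- state saturates through tanh (illustrative choice)
fNL :: Double -> Double -> Double
fNL x u = tanh (0.9 * x + u)

-- output function: output the state, for simplicity
gNL :: Double -> Double -> Double
gNL x _ = x

-- same recipe as the LTI case: scanl for the state sequence,
-- zipWith for the output sequence
simulate :: Double -> Signal Double -> Signal Double
simulate x0 us = zipWith gNL (scanl fNL x0 us) us
```

Because tanh saturates, the state (and hence the output) stays bounded by 1 in magnitude no matter the constant input, which is easy to verify by taking a prefix of `simulate 0.0 (repeat 1.0)`.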

### Cascading (almost) identical LTI systems

October 23, 2010

Consider a causal continuous-time LTI system $\mathcal{H}$ with transfer function

$H (s) = \displaystyle\frac{1}{(s+a) (s+b)}$

where $a, b \in \mathbb{R}$. Note that we have two poles (at $s = -a$ and $s = -b$), but no finite zeros. We can view $\mathcal{H}$, a 2nd order system, as the cascade connection of two causal 1st order LTI systems

If $b = a$, then the transfer function becomes $H (s) = 1 / (s+a)^2$ and we have a double pole at $s = -a$. Taking the inverse Laplace transform of $H (s)$ we obtain the impulse response of the overall system $\mathcal{H}$

$h (t) = t e^{- a t} u (t)$

where $u (t)$ is the Heaviside step function. If $a > 0$, then the impulse response will eventually decay to zero. However, note that the exponential is multiplied by $t$, which means that there is a transient “peak”. If $a$ is positive but “close” to zero, this transient could be quite “wild”! This phenomenon is somewhat similar to resonance, although we are dealing with decaying exponential excitations, not sinusoidal ones. If we apply a Dirac delta impulse to the system depicted above, the output of the first system in the cascade will be the impulse response of $\mathcal{H}_1$

$h_1 (t) = e^{- a t} u (t)$,

which happens to be also the impulse response of $\mathcal{H}_2$, the second system in the cascade. The lesson is the following: cascading identical LTI systems leads to resonant-like behavior.

If $b \neq a$, then we have a simple pole at $s = -a$ and another simple pole at $s = -b$. Via partial fraction expansion of $H (s)$, we obtain

$H (s) = \displaystyle\frac{1}{b - a} \left(\displaystyle\frac{1}{s+a} - \displaystyle\frac{1}{s+b}\right)$

and, taking the inverse Laplace transform, we get the impulse response

$h (t) = \displaystyle\frac{1}{b - a} \left(e^{-a t} - e^{-b t}\right) u (t)$.

Note that these expressions are valid only if $b \neq a$. If we set $b = a$ in them, we run into the pesky indeterminate form $0/0$, both in $H (s)$ and in $h (t)$.

Let us see what happens if $\mathcal{H}_1$ and $\mathcal{H}_2$ are “almost identical”. To be precise, let us investigate the impulse response of the cascade when the (simple) poles at $s = -a$ and $s = -b$ are almost “on top of each other”. Let $b = a \pm \varepsilon$, where $\varepsilon > 0$ is “small”. Then, the impulse response of $\mathcal{H}$ can be written as follows

$h (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(e^{-a t} - e^{-(a \pm \varepsilon) t}\right) u (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right) e^{-a t} u (t)$.

For convenience, let us define

$\gamma_{\varepsilon} (t) := \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right)$

so that $h (t) = \gamma_{\varepsilon} (t) e^{-a t} u (t)$. Taking the Taylor expansion of $\gamma_{\varepsilon} (t)$ about $t = 0$, we get

$\gamma_{\varepsilon} (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right)$

and, taking the limit as $\varepsilon$ approaches zero, we obtain

$\displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) = \displaystyle\lim_{\varepsilon \to 0} \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right) = t$

and, therefore, we finally get

$\displaystyle\lim_{\varepsilon \to 0} h (t) = \displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) e^{-a t} u (t) = t e^{-a t} u (t)$.

This shows, albeit in a sinfully non-rigorous manner, that the impulse response of $H (s) = 1 / ((s+a) (s+b))$ approaches the impulse response of $H (s) = 1 / (s+a)^2$ as the pole at $s = -b$ gets “closer” and “closer” to the pole at $s = -a$. I concede that this result is not terribly exciting…
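The scalar limit itself can be sanity-checked numerically at a single time instant. A small Haskell sketch (the function names hTwoPole and hDouble are introduced here for illustration):

```haskell
-- impulse response of the cascade with distinct poles at -a and -b,
-- valid for b /= a:  h(t) = (exp(-a t) - exp(-b t)) / (b - a)
hTwoPole :: Double -> Double -> Double -> Double
hTwoPole a b t = (exp (-a * t) - exp (-b * t)) / (b - a)

-- impulse response with a double pole at -a:  h(t) = t exp(-a t)
hDouble :: Double -> Double -> Double
hDouble a t = t * exp (-a * t)
```

Evaluating hTwoPole at $a = 1$, $b = 1 + \varepsilon$ for a small $\varepsilon$ and comparing with hDouble at the same $t$ shows the two agree to within roughly $\varepsilon$, as the limit argument predicts.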

Let us now perform a numerical experiment! Let $a = 1$, and let $b = a + \varepsilon$, so that $b$ approaches $a$ as $\varepsilon \to 0$. The plot below illustrates the impulse responses for various values of $\varepsilon$

where $\varepsilon = 0$ corresponds to the cascade of two identical systems (which creates a double pole at $s = -1$). Note that, as expected, as $\varepsilon$ approaches zero, the impulse response of the cascade approaches the resonant one, $h (t) = t e^{- a t} u (t)$.

Last but not least, here’s the MATLAB script that generates the plot:

% build transfer function of H1
a = 1; H1 = tf(1,[1 a]);

% create time grid
t = [0:0.01:5];

% compute impulse responses of the cascade
h = [];
for epsilon = [1 0.5 0.1 0];

% build transfer function of H2
b = a + epsilon; H2 = tf(1,[1 b]);

% compute and store impulse response
h = [h, impulse(H2 * H1, t)];

end

% plot impulse responses
figure;
plot(t, h(:,1), 'r-', t, h(:,2), 'm-', t, h(:,3), 'b-', t, h(:,4), 'k--');
legend('\epsilon = 1', '\epsilon = 0.5', '\epsilon = 0.1', '\epsilon = 0');
xlabel('t (seconds)'); title('Impulse responses of the cascade');