## Posts Tagged ‘Signals & Systems’

January 30, 2012

We recently studied the first-order causal LTI system which is described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. The system can be represented by the block diagram

Observing the block diagram, we conclude that there are three basic operations: addition, multiplication by a constant coefficient, and delay. We arrive at the same conclusion if we rewrite the difference equation in the form $y (n) = u (n) + \alpha y (n-1)$. Do note that the output is obtained by adding the input to a scaled and delayed version of the output and, therefore, we have a feedback system.

The system’s input-output relationship can be written as follows

$y = u + \alpha\,\mathcal{D} (y)$

where $\mathcal{D}$ is the unit-delay operator. Note that we have signals, rather than signal samples, on both sides of the equation. (To clarify: by “sample” I mean the value of the signal at some discrete time instant.) Let us introduce the linear operator $\mathcal{H}$ such that the output signal can be written as a function of the input, $y = \mathcal{H} (u)$, assuming zero initial condition. Hence, we obtain $\mathcal{H} (u) = u + \alpha\,\mathcal{D} (\mathcal{H} (u))$. It is also convenient to introduce a gain operator $\mathcal{G}_{\alpha}$ to carry out the multiplication by a constant coefficient, i.e., $\mathcal{G}_{\alpha} (x) = \alpha \, x$. Composing the operators, we finally obtain the following equation

$\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$

which we will now implement in Haskell.

__________

Our first implementation of the LTI system under study relied on state-space models and the scanl trick to propagate the initial state forwards in time. Our second implementation was little more than a beautified version of the first. This third implementation will be radically different from the previous two. We start by introducing two type synonyms,

type Signal a = [a]
type System a b = Signal a -> Signal b

which hopefully will make the code more readable. We build a function that takes two discrete-time signals (of the same type) and returns their elementwise addition:

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

Since we represent discrete-time signals as lists, the function above merely adds two lists elementwise. Let us test this function:

*Main> -- create test input signals
*Main> let us = repeat 1.0 :: Signal Float
*Main> let vs = [1..] :: Signal Float
*Main> -- add two signals elementwise
*Main> let ys = us .+ vs
*Main> take 10 ys
[2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0]
*Main> -- check types
*Main> :type (us,vs,ys)
(us,vs,ys) :: (Signal Float, Signal Float, Signal Float)

We now implement the unit-delay with zero initial condition:

delay :: Num a => System a a
delay us = 0 : us

which right-shifts the input list and places a zero at the head of the output list. If we wanted a unit-delay operator with a non-zero initial condition, we would use the following code instead:

delay :: Num a => System a a
delay us = ini : us

where ini is the initial condition of the delay block. Lastly, we create the gain operator:

gain :: Num a => a -> System a a
gain alpha = map (alpha*)

which takes a number (the gain factor) and returns a system (that maps signals to signals). Note that we use partial function application. One can think of the gain operator as a function that takes a number and a signal and returns a signal. If we fix the first argument (the gain factor), we obtain a function that maps signals to signals, i.e., a system. Here is a quick test of the delay and gain operators:

*Main> -- create signal
*Main> let xs = [1..] :: Signal Float
*Main> -- delay signal
*Main> let ys = delay xs
*Main> take 10 ys
[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
*Main> -- amplify delayed signal
*Main> let zs = gain 2.0 ys
*Main> take 10 zs
[0.0,2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0]

Finally, we have the following Haskell script:

type Signal a = [a]
type System a b = Signal a -> Signal b

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- delay operator
delay :: Num a => System a a
delay us = 0 : us

-- gain operator
gain :: Num a => a -> System a a
gain alpha = map (alpha*)

-- build feedback system
sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us

where we used $\alpha = 0.5$. Note that the last line is a direct translation of the equation $\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$ to Haskell! Beautiful!

To finalize, let us obtain the impulse response of the LTI system under study for $\alpha = 0.5$:

*Main> -- create unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- compute impulse response
*Main> let hs = sys delta
*Main> take 8 hs
[1.0,0.5,0.25,0.125,6.25e-2,3.125e-2,1.5625e-2,7.8125e-3]

which is the expected impulse response $h (n) = \alpha^n$ for $n \geq 0$. Frankly, I am in awe. It is amazing that this implementation works!
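As an extra sanity check (my own addition, not part of the original session), we can compare the computed impulse response against the closed form directly in code. Everything below restates definitions from this post; only the comparison in main is new:

```haskell
-- Sanity check (a sketch): compare the computed impulse response of the
-- feedback system against the closed form h(n) = alpha^n, with alpha = 0.5.
type Signal a = [a]
type System a b = Signal a -> Signal b

(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

delay :: Num a => System a a   -- unit delay, zero initial condition
delay us = 0 : us

gain :: Num a => a -> System a a
gain alpha = map (alpha *)

sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us

main :: IO ()
main = do
  let delta = 1.0 : repeat 0.0 :: Signal Double
      hs    = take 8 (sys delta)            -- computed impulse response
      href  = [0.5 ^^ n | n <- [0 .. 7]]    -- closed form: alpha^n
  -- powers of 0.5 are exact in binary floating point, so exact equality is safe
  print (hs == href)
```

Only a handful of samples are taken, since the naive recursive definition of sys recomputes its own output at every step and becomes expensive for long prefixes.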

January 25, 2012

Last weekend we learned how to build systems (LTI or otherwise) using Haskell. We now want to construct interconnections of systems. In this post we will study the series interconnection of systems, usually known as cascade interconnection. Parallel and feedback interconnections will be discussed in future posts.

__________

Let us consider the series interconnection of two causal discrete-time LTI systems, $\mathcal{H}_1$ and $\mathcal{H}_2$, as depicted below

where $y = \mathcal{H}_1 (x)$ and $w =\mathcal{H}_2 (y)$ are the outputs of each LTI system in the cascade. Since the output of system $\mathcal{H}_1$ is the input of system $\mathcal{H}_2$, we have $w = (\mathcal{H}_2 \circ \mathcal{H}_1) (x)$.

What LTI systems should we consider? Let us choose the simplest ones: the accumulator and the differentiator. Thus, let $\mathcal{H}_1$ be an accumulator (also known as “discrete-time integrator”), whose input-output relationship is as follows

$y (n) = y (n-1) + x (n)$,

and let $\mathcal{H}_2$ be a first difference operator (also known as “discrete-time differentiator”), whose input-output relationship is

$w (n) = y (n) - y (n-1)$.

Note that the output of the cascade of these two LTI systems is thus

$w (n) = y (n) - y (n-1) = (y (n-1) + x (n)) - y (n-1) = x (n)$

and, hence, the cascade is input-output equivalent to the identity operator. Since $(\mathcal{H}_2 \circ \mathcal{H}_1) (x) = x$ for all signals $x$, we say that $\mathcal{H}_2$ is the left-inverse of system $\mathcal{H}_1$. Since both systems are LTI, the operators commute, i.e., $\mathcal{H}_2 \circ \mathcal{H}_1 = \mathcal{H}_1 \circ \mathcal{H}_2$ and, therefore, $\mathcal{H}_2$ is also the right-inverse of system $\mathcal{H}_1$. Since $\mathcal{H}_2$ is the left- and right-inverse of $\mathcal{H}_1$, we say that $\mathcal{H}_2$ is the inverse of system $\mathcal{H}_1$ (and vice-versa). Do keep in mind, however, that not all systems are invertible. For details, take a look at Oppenheim & Willsky [1].
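Under the zero-initial-condition assumption, this inverse relationship is easy to check numerically. The sketch below uses the list-based implementations of the accumulator and differentiator developed later in this post, and verifies both composition orders on a short test signal:

```haskell
-- Numerical check (a sketch): diff . acc and acc . diff should both
-- behave as the identity on causal signals with zero initial conditions.
type Signal a = [a]
type System a b = Signal a -> Signal b

acc :: Num a => System a a     -- accumulator (running sum)
acc = scanl1 (+)

diff :: Num a => System a a    -- first difference
diff ys = zipWith (-) ys (0 : ys)

main :: IO ()
main = do
  let xs = [1 .. 10] :: Signal Integer
  print ((diff . acc) xs == xs)  -- H2 is a left-inverse of H1
  print ((acc . diff) xs == xs)  -- H2 is also a right-inverse of H1
```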

__________

We can easily find a state-space realization for the accumulator, but that will not be necessary. As in previous posts, let us view discrete-time signals as lists. Thus, the accumulator takes a list $[x_0, x_1, x_2, \dots]$, and returns the following list

$[y_0, y_1, y_2, \dots] = [x_0, x_0 + x_1, x_0 + x_1 + x_2, \dots]$

where we assume that the initial condition of the accumulator is zero (i.e., $y_{-1} = 0$). Instead of using scanl yet again (which would require us to drop the head of the list), let us now use scanl1 to implement the accumulator:

acc :: Num a => System a a
acc = scanl1 (+)

Please note that if the initial condition of the accumulator is not zero, we should use the following code instead:

acc' :: Num a => System a a
acc' us = tail $ scanl (+) acc_ini us

where acc_ini is the initial condition of the accumulator (analogous to the constant of integration in integral calculus).

The differentiator is not a proper system [2] and, therefore, it has no state-space realization. The differentiator takes a list $[y_0, y_1, y_2, \dots]$, and returns the following list

$[w_0, w_1, w_2, \dots] = [y_0, y_1 - y_0, y_2 - y_1, \dots]$

where we again assume that $y_{-1} = 0$. Note the following

$[w_0, w_1, w_2, \dots] = [y_0, y_1, y_2, \dots] - [0, y_0, y_1, \dots]$

i.e., list $w$ is obtained by elementwise subtraction of a right-shifted version of list $y$ from list $y$ itself. Subtracting two lists elementwise can be implemented using zipWith:

diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

If the initial condition of the differentiator is not zero, we should instead use the following code:

diff' :: Num a => System a a
diff' ys = zipWith (-) ys (diff_ini : ys)

where diff_ini is the initial condition of the differentiator. Piecing it all together, we finally obtain the following Haskell script:

type Signal a = [a]
type System a b = Signal a -> Signal b

-- accumulator
acc :: Num a => System a a
acc = scanl1 (+)

-- differentiator
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

-- cascade of the acc. and diff.
sys :: Num a => System a a
sys = diff . acc

Take a look at the last line. It says that cascading systems is the same as composing systems! Hence, the Haskell implementation is conceptually very close to the mathematical formulation using operators.
Functional analysis meets functional programming!

We run the script above on GHCi and then play with it:

*Main> -- build unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- output of the accumulator
*Main> let ys = acc delta
*Main> take 20 ys
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> -- output of the differentiator
*Main> let ws = diff ys
*Main> take 20 ws
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
*Main> -- impulse response of the cascade
*Main> let hs = sys delta
*Main> take 20 hs
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]

The impulse response of the accumulator is the unit step. The impulse response of the cascade is the unit impulse, as we expected. The differentiator is the inverse of the accumulator, and vice-versa.

__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1996.

[2] Panos Antsaklis, Anthony Michel, A Linear Systems Primer, Birkhäuser Boston, 2007.

### Discrete-time LTI systems in Haskell II

January 22, 2012

Yesterday I wrote a post on how to implement discrete-time LTI systems in Haskell. The code I presented did not quite please my delicate aesthetic sensitivity, as it bundled the declaration of the signals with the declaration of the LTI system under study. This is not practical. Ideally, I would like to define the LTI system in a Haskell script, run it on the GHCi interpreter, and then create the signals using the GHCi command line. Why? Because I can then compute the response of the system to various test input signals without reloading the script.

In this post, we will again consider the first-order causal discrete-time LTI system described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. In my previous post, I obtained a state-space representation of this LTI system.
In this post, I will propose a better implementation of the given system in Haskell.

__________

Signal and System type synonyms

What is a discrete-time signal? We can think of a discrete-time signal as a sequence of numbers or symbols. In Haskell, we often use lists of floating-point numbers to represent real-valued discrete-time signals. More generally, we have the type synonym

type Signal a = [a]

which says that a signal of type $a$ is a list of elements of type $a$. We can then have signals of various types: integer, fixed-point, floating-point, bit, binary word, etc. This should be particularly useful in case we want to simulate mixed-signal circuits.

What is a discrete-time system? It is that which maps input discrete-time signals to output discrete-time signals. We will use the type synonym

type System a b = (Signal a -> Signal b)

which tells us that a system of type $a \,\, b$ is a function that maps an input signal of type $a$ to an output signal of type $b$. In other words, it takes a list whose elements are of type $a$, and it returns a list whose elements are of type $b$.

__________

Implementing the LTI system in Haskell

In my previous post, I created the state and output sequences using

xs = scanl f x0 us
ys = zipWith g xs us

Combining the two lines above into a single line, we obtain

ys = zipWith g (scanl f x0 us) us

which returns the output sequence when given the input sequence. Note that the state sequence is not created. Assuming zero initial condition, we can then create a system (i.e., a function that takes a list and returns a list) as follows

sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

which takes signals of type $a$ and returns signals of the same type, where $a$ is a floating-point type (either Float or Double).
Putting it all together in a script, we finally obtain:

type Signal a = [a]
type System a b = (Signal a -> Signal b)

-- state-space model
f,g :: Floating a => a -> a -> a
f x u = (0.5 * x) + u -- state-transition
g x u = f x u -- output

-- define LTI system
sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

which defines the LTI system under study with $\alpha = 1/2$. For this choice of $\alpha$, the system is a lowpass filter.

__________

Example: lowpass-filtering a square wave

We run the script above on GHCi to create the desired LTI system. Let us now create an input signal, a square wave of period equal to 16 and duty cycle equal to 50%, and lowpass-filter it using the LTI system we created:

*Main> -- create square wave
*Main> let ones = take 8 $ repeat 1.0 :: Signal Float
*Main> let zeros = take 8 $ repeat 0.0 :: Signal Float
*Main> let us = cycle (ones ++ zeros) :: Signal Float
*Main> -- create output sequence
*Main> let ys = sys us
*Main> -- check types
*Main> :type us
us :: Signal Float
*Main> :type ys
ys :: Signal Float
*Main> :type sys
sys :: Floating a => System a a

Finally, we compute the first 64 samples of the input and output signals using function take, and plot the signals using MATLAB:

In case you are wondering why the peak amplitude of the output signal is twice that of the input signal, compute the transfer function of the LTI system and note that the DC gain is equal to $\frac{1}{1 - \alpha} = 2$.
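Instead of computing the transfer function, we can also sketch a numerical check of the DC gain: feed the unit step into the system and verify that the output settles near 2. The script below restates the definitions from this post; the tolerance 1e-9 and sample index 40 are my own arbitrary choices:

```haskell
-- Quick numerical check (a sketch): for alpha = 0.5 the DC gain is
-- 1/(1 - alpha) = 2, so the step response should settle at 2.
type Signal a = [a]
type System a b = Signal a -> Signal b

f, g :: Floating a => a -> a -> a
f x u = (0.5 * x) + u   -- state-transition
g x u = f x u           -- output

sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us

main :: IO ()
main = do
  let ys = sys (repeat 1.0) :: Signal Double
  -- here y(n) = 2 - 2^(-n), so y(40) is well within 1e-9 of the DC gain
  print (abs ((ys !! 40) - 2.0) < 1.0e-9)
```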

### Discrete-time LTI systems in Haskell

January 21, 2012

Consider the first-order causal discrete-time LTI system described by the following difference equation

$y (n) - \alpha y (n-1) = u (n)$

where $|\alpha| < 1$. This system can be represented by the block diagram

where $D$ is a unit-delay block, the output of which is $y (n-1)$. We now introduce state variable $x (n)$ denoting the output of the unit-delay block, i.e., $x(n) = y(n-1)$. Thus, $x (n+1) = y(n)$ and, since $y (n) = \alpha y (n-1) + u (n)$, we obtain the following state-transition equation

$x (n+1) = \alpha x (n) + u (n)$.

The initial condition is $x(0) = y(-1)$, which we assume to be zero. Note that the output is $y (n) = x(n+1)$ and, thus, the output equation is

$y (n) = \alpha x (n) + u (n)$.

Lastly, we define $f (x,u) = g (x,u) := \alpha x + u$, which allows us to write the state-transition and output equations as follows

$\begin{array}{rl} x (n+1) &= f (x(n), u(n))\\ y (n) &= g (x(n), u(n))\\\end{array}$

We can now implement this state-space model in Haskell!

__________

Given an initial state $x_0$ and an input sequence $u = [u_0, u_1, \dots]$, we can compute the state sequence using the scanl trick I wrote about last weekend. Once we have obtained the state sequence $x = [x_0, x_1, \dots]$, the output sequence is

$y = [y_0, y_1, \dots] = [g (x_0, u_0), g (x_1, u_1), \dots]$.

Do you recognize the pattern? We are zipping lists $x$ and $u$ using function $g$!! We thus use (higher-order) function zipWith to compute the output sequence.

For $\alpha = 7/8$ and a constant input sequence, the following Haskell code computes the state and output trajectories:

-- constant input sequence
us :: Floating a => [a]
us = repeat 1.0

-- parameter alpha
alpha :: Floating a => a
alpha = 7/8

-- state-transition function
f :: Floating a => a -> a -> a
f x u = (alpha * x) + u

-- output function
g :: Floating a => a -> a -> a
g x u = f x u

-- initial condition
x0 :: Floating a => a
x0 = 0.0

-- state sequence
xs :: Floating a => [a]
xs = scanl f x0 us

-- output sequence
ys :: Floating a => [a]
ys = zipWith g xs us

Not quite. I am still stuck in the imperative paradigm… and need to be corrected: the code above does not compute anything. Rather, it declares what the input, state, and output sequences are. After running the script above in GHCi, we compute 30 samples of the input and output sequences using function take:

*Main> take 30 us
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> take 30 ys
[1.0,1.875,2.640625,3.310546875,3.896728515625,
4.409637451171875,4.858432769775391,5.251128673553467,
5.5947375893592834,5.895395390689373,6.158470966853201,
6.388662095996551,6.590079333996982,6.7663194172473595,
6.92052949009144,7.05546330383001,7.173530390851258,
7.276839091994852,7.367234205495495,7.446329929808559,
7.515538688582489,7.576096352509678,7.629084308445968,
7.675448769890222,7.716017673653944,7.751515464447201,
7.782576031391301,7.809754027467388,7.833534774033964,
7.854342927279719]

We can plot these sequences in MATLAB:

We now know how to simulate discrete-time LTI systems using Haskell. Note that our framework is based on state-space models, not on transfer functions. Hence, it should be possible to simulate discrete-time nonlinear systems in the same manner: using scanl to compute the state sequence, and zipWith to compute the output sequence. All we need is a state-space representation of the system under study.
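As a quick illustration of this point, here is a sketch of a discrete-time nonlinear system simulated with the very same pattern. The model itself (a cubic-damping recursion) is a toy example of my own, not taken from any reference:

```haskell
-- Toy nonlinear state-space model (my own example), simulated with the
-- same scanl/zipWith pattern used for the LTI system:
--   x(n+1) = x(n) + u(n) - x(n)^3   (nonlinear state transition)
--   y(n)   = x(n)^2                 (nonlinear output map)
type Signal a = [a]
type System a b = Signal a -> Signal b

fN, gN :: Double -> Double -> Double
fN x u = x + u - x ^ 3   -- state-transition function
gN x _ = x * x           -- output function

sysN :: Double -> System Double Double
sysN x0 us = zipWith gN (scanl fN x0 us) us

main :: IO ()
main = do
  -- constant input: the state settles near the root of x^3 = 0.1
  print (take 5 (sysN 0.0 (repeat 0.1)))
```

Nothing in scanl or zipWith requires linearity; only the state-space form matters.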

### On discrete-time LTI systems

August 14, 2010

The White Rabbit put on his spectacles. “Where shall I begin, please your Majesty?” he asked. “Begin at the beginning,” the King said gravely, “and go on till you come to the end: then stop.”

Lewis Carroll, Alice’s Adventures in Wonderland

A discrete-time signal is, essentially, a sequence of numbers or symbols. A temperature time series can be viewed as a real-valued discrete-time signal, whereas the output of an A/D converter can be thought of as a symbol-valued discrete-time signal if we view each binary word as a symbol. For simplicity, in this post we will consider sequences of real numbers only. More formally, a discrete-time signal is a function from a set of indices $\mathcal{I} \subseteq \mathbb{Z}$ to $\mathbb{R}$. Again for simplicity, we will only consider the case where $\mathcal{I} = \mathbb{Z}$, i.e., we will focus on doubly-infinite sequences.

A discrete-time system transforms input discrete-time signals into output discrete-time signals. For the time being, we will consider single-input, single-output (SISO) systems only. Let $x : \mathbb{Z} \to \mathbb{R}$ and $y : \mathbb{Z} \to \mathbb{R}$ be the real-valued input and output signals of a discrete-time system. Then, we can think of the system as an operator $\mathcal{H}$ that transforms signal $x$ into signal $y$, as depicted below

The output of the system is thus $y = \mathcal{H}(x)$. We will focus on the class of linear time-invariant (LTI) systems. But, what is a linear system? Or a time-invariant one? Let us “begin at the beginning”.

__________

Linearity

A linear system is one “that possesses the important property of superposition” [1], i.e.,

$\mathcal{H}(a_1 x_1 + \dots + a_n x_n) = a_1 \mathcal{H}(x_1) + \dots + a_n \mathcal{H}(x_n)$

or, equivalently

$\mathcal{H}\left(\displaystyle\sum_{i=1}^n a_i x_i\right) =\displaystyle\sum_{i=1}^n a_i \mathcal{H}(x_i)$.

In words: “if an input consists of the weighted sum of several signals, then the output is the superposition–that is, the weighted sum–of the responses of the system to each of those signals” [1]. Note that if $\mathcal{H}$ is linear, then

$\mathcal{H}(0) = \mathcal{H}(x - x) = \mathcal{H}(x) - \mathcal{H}(x) = 0$,

i.e., “for linear systems, an input which is zero for all time results in an output which is zero for all time” [1]. Pictorially, if $\mathcal{H}$ is linear, then the following block diagram

is equivalent to the one below

This is not a mere academic curiosity. Note that the former block diagram contains $n$ multipliers, one $n$-input adder, and one $\mathcal{H}$ block; by contrast, the latter block diagram contains $n$ multipliers, one $n$-input adder, and $n$ $\mathcal{H}$ blocks. Hence, the former block diagram can be implemented using less hardware, i.e., it is cheaper to implement.

__________

Time-Invariance

A system is time-invariant “if a time-shift in the input signal results in an identical time-shift in the output signal” [1]. Let us introduce the time-shift operator $\mathcal{S}_k$, where $\mathcal{S}_k(x)$ represents:

• signal $x$ delayed by $k$ samples if $k > 0$.
• signal $x$ advanced (“anticipated”) by $|k|$ samples if $k < 0$.

Note that $\mathcal{S}_0 (x) = x$, i.e., $\mathcal{S}_0$ is the identity operator. From the definition, if a system is time-invariant, then

$\mathcal{H}(\mathcal{S}_k(x)) = \mathcal{S}_k (y)$

and, since $y = \mathcal{H}(x)$, we obtain

$\mathcal{H}(\mathcal{S}_k(x)) = \mathcal{S}_k (\mathcal{H}(x))$.

For the sake of clarity, let us write $\mathcal{H}(\mathcal{S}_k(x)) = (\mathcal{H} \circ \mathcal{S}_k)(x)$, and $\mathcal{S}_k (\mathcal{H}(x)) = (\mathcal{S}_k \circ \mathcal{H})(x)$. Then, for time-invariant systems, the following equality

$(\mathcal{H} \circ \mathcal{S}_k)(x) = (\mathcal{S}_k \circ \mathcal{H})(x)$

holds for every input signal $x$, i.e., for time-invariant systems the operators $\mathcal{H}$ and $\mathcal{S}_k$ do commute. Pictorially, we have that the following block diagrams

are equivalent. Discrete-time LTI systems are sometimes called linear shift-invariant (LSI) systems [2].
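As a tiny numerical illustration (my own sketch, not part of the original text), we can check in Haskell that the shift operator commutes with a simple LTI system, here a constant gain of 2, for a non-negative shift:

```haskell
-- Sketch: for an LTI system (here a constant gain), the time-shift
-- (delay) operator and the system operator commute.
shift :: Num a => Int -> [a] -> [a]   -- delay by k samples (k >= 0)
shift k xs = replicate k 0 ++ xs

sysH :: Num a => [a] -> [a]           -- a trivially LTI system: gain of 2
sysH = map (2 *)

main :: IO ()
main = do
  let xs = [1 ..] :: [Integer]
  -- H . S_3 and S_3 . H agree sample by sample
  print (take 10 ((sysH . shift 3) xs) == take 10 ((shift 3 . sysH) xs))
```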

__________

Response of LTI Systems

Let the unit impulse signal $\delta : \mathbb{Z} \to \mathbb{R}$ be defined by

$\delta (n) := \begin{cases} 1, & n=0\\ 0, & n \neq 0\end{cases}$

and let $\delta_k := \mathcal{S}_k (\delta)$ be the unit impulse signal shifted by $k$ samples. An arbitrary discrete-time signal $x : \mathbb{Z} \to \mathbb{R}$ can thus be represented as a linear combination of shifted unit impulses

$x = \displaystyle \sum_{k \in \mathbb{Z}} x(k) \delta_k$

which is known as the sifting property of the discrete-time unit impulse signal [1]. If the input signal is $x$, the output signal will be

$y = \mathcal{H} (x) = \mathcal{H} \left(\displaystyle \sum_{k \in \mathbb{Z}} x(k) \delta_k\right)$.

If the system is linear (we demand no time-invariance at this point), then

$y = \mathcal{H} \left(\displaystyle \sum_{k \in \mathbb{Z}} x(k) \delta_k\right) = \displaystyle \sum_{k \in \mathbb{Z}} x(k) \mathcal{H}(\delta_k)$

i.e., the response of a linear system to a discrete-time signal $x$ is “the superposition of the scaled responses of the system to each of these shifted impulses” [1]. Let $h_k := \mathcal{H}(\delta_k)$ denote the system’s response to the shifted unit impulse signal $\delta_k$. Then we can write the output signal as

$y = \displaystyle \sum_{k \in \mathbb{Z}} x(k) h_k$.

Hence, a linear system is characterized by the set of responses to shifted unit impulses, $\{h_k\}_{k \in \mathbb{Z}}$, an infinite set. Note that

$h_k = \mathcal{H} (\delta_k) = \mathcal{H} (\mathcal{S}_k (\delta_0)) = (\mathcal{H} \circ \mathcal{S}_k) (\delta_0)$

If the system is also time-invariant, then $\mathcal{H}$ and $\mathcal{S}_k$ commute and, thus

$h_k = (\mathcal{H} \circ \mathcal{S}_k) (\delta_0) = (\mathcal{S}_k \circ \mathcal{H})(\delta_0) = \mathcal{S}_k (\mathcal{H}(\delta_0)) = \mathcal{S}_k (h_0)$.

In words: time-invariance means that “the responses of a time-invariant system to the time-shifted unit impulses are simply time-shifted versions of one another” [1]. The system’s output is thus

$y = \displaystyle \sum_{k \in \mathbb{Z}} x(k) h_k = \displaystyle \sum_{k \in \mathbb{Z}} x(k) \mathcal{S}_k (h_0)$

i.e., the system’s output signal is the superposition of scaled, time-shifted versions of the response to the unit impulse signal $\delta = \delta_0$. Let $h := h_0$, which we call the system’s impulse response. Then we have that

$\{h_k\}_{k \in \mathbb{Z}} = \{\mathcal{S}_k (h)\}_{k \in \mathbb{Z}}$.

Hence, from one single discrete-time signal $h$ we can generate the entire set $\{h_k\}_{k \in \mathbb{Z}}$. Therefore, the LTI system can be characterized by one signal, $h = \mathcal{H}(\delta)$, or by one transfer function (which is the Z-transform of the impulse response).

Finally, we have that the output of a discrete-time LTI system is

$y = \displaystyle \sum_{k \in \mathbb{Z}} x(k) \mathcal{S}_k (h)$

where the $n$-th sample of the output signal is, thus, given by the following convolution sum

$y(n) = \displaystyle \sum_{k \in \mathbb{Z}} x(k) h(n-k) =: (x * h)(n)$

where $(\mathcal{S}_k (h))(n) = h(n-k)$.
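For finite-length causal signals represented as lists, the convolution sum can be sketched directly in Haskell. The helper conv below is my own illustration (a naive quadratic-time implementation), not part of the original text:

```haskell
-- Direct implementation (a sketch) of the convolution sum
--   y(n) = sum_k x(k) h(n-k)
-- for finite-length causal signals represented as lists.
conv :: Num a => [a] -> [a] -> [a]
conv xs hs =
  [ sum [ xs !! k * hs !! (n - k)
        | k <- [max 0 (n - lh + 1) .. min n (lx - 1)] ]
  | n <- [0 .. lx + lh - 2] ]
  where
    lx = length xs
    lh = length hs

main :: IO ()
main = do
  -- convolving with the unit impulse leaves the signal unchanged
  print (conv [2, 5, 7] [1, 0, 0] == [2, 5, 7, 0, 0 :: Integer])
  -- a two-tap moving sum
  print (conv [1, 2, 3] [1, 1] == [1, 3, 5, 3 :: Integer])
```

The index bounds on $k$ simply restrict the doubly-infinite sum to the support of the two finite lists.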

__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1996.

[2] Alan V. Oppenheim, Ronald W. Schafer, Digital Signal Processing, Prentice-Hall, 1975.