## Archive for the ‘MATLAB’ Category

### Distance between two lines II

November 27, 2010

Suppose we are given two lines, $\mathcal{L}_1, \mathcal{L}_2 \subset \mathbb{R}^n$, defined as follows

$\mathcal{L}_1 = \left\{ x_0 + \lambda u \mid \lambda \in \mathbb{R} \right\}, \quad{} \mathcal{L}_2 = \left\{ y_0 + \gamma v \mid \gamma \in \mathbb{R} \right\}$

where $x_0, y_0 \in \mathbb{R}^n$, and $u, v \in \mathbb{R}^n \setminus \{0_n\}$.  Let vector-valued functions $x : \mathbb{R} \rightarrow \mathbb{R}^n$ and $y : \mathbb{R} \rightarrow \mathbb{R}^n$ be defined as $x(\lambda) = x_0 + \lambda u$ and $y(\gamma) = y_0 + \gamma v$, so that the two lines can be written in the form

$\mathcal{L}_1 = \left\{ x(\lambda) \mid \lambda \in \mathbb{R} \right\}, \quad{} \mathcal{L}_2 = \left\{ y(\gamma) \mid \gamma \in \mathbb{R} \right\}$.

Suppose that we want to compute the distance between these two lines. Back in December 2008, I wrote a very long post in which I derived from first principles an expression for the distance between two lines in $\mathbb{R}^n$. To cut a long story short, the distance between $\mathcal{L}_1$ and $\mathcal{L}_2$ is given by

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \displaystyle \min_{(\lambda, \gamma) \in \mathbb{R}^2} \| x(\lambda) - y(\gamma) \|_2$.

My earlier post was quite conceptual and theoretical. In this post I focus on actually computing the distance between two given lines.

__________

Introduction

Note that the difference vector $x(\lambda) - y(\gamma)$ can be written in matrix form, as follows

$x(\lambda) - y(\gamma) = \left[\begin{array}{cc} u & -v\end{array}\right] \left[\begin{array}{c} \lambda \\ \gamma \end{array}\right] + (x_0 - y_0)$.

Let $\eta := (\lambda, \gamma)$. Then, the squared 2-norm of the difference vector can be written as

$\| x(\lambda) - y(\gamma) \|_2^2 = \eta^T Q \eta + 2 r^T \eta + s$

where $Q, r, s$ are defined as follows

$Q := \left[\begin{array}{rr} u^T u & -u^T v \\ -v^T u & v^T v \\\end{array}\right], \quad{} r := \left[\begin{array}{c} u^T \\ -v^T\end{array}\right] (x_0 - y_0), \quad{} s := \|x_0 - y_0\|_2^2$

Note that $Q = Q^T$, and also that $Q$ is a Gram matrix, which means that $Q$ is positive semidefinite (and positive definite precisely when $u$ and $v$ are linearly independent). Therefore, the function $\| x(\lambda) - y(\gamma) \|_2^2$ is convex in $\eta = (\lambda, \gamma)$ and bounded below by zero, so the minimum exists.
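As a quick sanity check of this identity, here is a small sketch in plain Python (rather than the post's MATLAB, so it runs without any toolboxes); the two lines in $\mathbb{R}^3$ are hypothetical example data, not taken from the post.

```python
# Sanity check of ||x(lam) - y(gam)||^2 = eta' Q eta + 2 r' eta + s.
# Hypothetical example lines in R^3 (not from the post).
x0, u = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]   # L1: the x-axis
y0, v = [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]   # L2: through (0,0,1), direction (0,1,0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

d0 = [xi - yi for xi, yi in zip(x0, y0)]    # x0 - y0

# Q, r, s as defined above
Q = [[dot(u, u), -dot(u, v)],
     [-dot(v, u), dot(v, v)]]
r = [dot(u, d0), -dot(v, d0)]
s = dot(d0, d0)

# evaluate both sides at one arbitrary (lam, gam)
lam, gam = 0.7, -1.3
diff = [x0[i] + lam * u[i] - (y0[i] + gam * v[i]) for i in range(3)]
lhs = dot(diff, diff)
rhs = (Q[0][0] * lam**2 + 2 * Q[0][1] * lam * gam + Q[1][1] * gam**2
       + 2 * (r[0] * lam + r[1] * gam) + s)
assert abs(lhs - rhs) < 1e-12
```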

The pair $(\lambda, \gamma)$ that minimizes the 2-norm $\| x(\lambda) - y(\gamma) \|_2$ is the same one that minimizes the squared 2-norm $\| x(\lambda) - y(\gamma) \|_2^2$ and, hence, we have the unconstrained quadratic program

$(\lambda^*, \gamma^*) = \eta^* = \displaystyle \arg\min_{\eta \in \mathbb{R}^2} \left\{ \eta^T Q \eta + 2 r^T \eta + s \right\}$

which can be solved analytically. The gradient of the objective function must vanish at the minimum

$\displaystyle \left.\frac{\partial}{\partial \eta} \left[\eta^T Q \eta + 2 r^T \eta + s\right] \right|_{\eta = \eta^*} = 0_2 \Rightarrow Q \eta^* = -r$

and, hence, we have a linear system of two equations in two unknowns

$\left[\begin{array}{rr} u^T u & -u^T v \\ -v^T u & v^T v \\\end{array}\right] \left[\begin{array}{c} \lambda^* \\ \gamma^* \end{array}\right] = \left[\begin{array}{c} u^T (y_0 - x_0) \\ v^T (x_0 - y_0)\end{array}\right]$.

If $\det(Q) \neq 0$, then the system of equations has a unique solution. However, if $\det(Q) = 0$, there are infinitely many solutions. Let us consider the two cases separately.
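To make the non-singular case concrete, here is a minimal sketch in plain Python (the lines are hypothetical example data, not from the post) that builds the $2 \times 2$ system above and solves it by Cramer's rule:

```python
# Hypothetical skew lines in R^3: L1 is the x-axis,
# L2 passes through (0, 0, 1) with direction (0, 1, 0).
x0, u = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
y0, v = [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# right-hand side of the 2x2 system above
b1 = dot(u, [yi - xi for xi, yi in zip(x0, y0)])   # u' (y0 - x0)
b2 = dot(v, [xi - yi for xi, yi in zip(x0, y0)])   # v' (x0 - y0)

# coefficient matrix Q and its determinant
q11, q12 = dot(u, u), -dot(u, v)
q21, q22 = -dot(v, u), dot(v, v)
detQ = q11 * q22 - q12 * q21

# Cramer's rule (valid because detQ != 0, i.e. the lines are not parallel)
lam = (b1 * q22 - q12 * b2) / detQ
gam = (q11 * b2 - b1 * q21) / detQ

# distance between the resulting closest points
diff = [x0[i] + lam * u[i] - (y0[i] + gam * v[i]) for i in range(3)]
dist = dot(diff, diff) ** 0.5
print(lam, gam, dist)   # closest points at lam = 0, gam = 0; distance 1
```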

__________

Non-parallel lines

If matrix $Q$ is invertible, we obtain the unique minimizer $\eta^* = - Q^{-1} r$. Thus, the minimum squared 2-norm of the distance vector is

$\begin{array}{rl}\| x(\lambda^*) - y(\gamma^*) \|_2^2 &= (\eta^*)^T Q \eta^* + 2 r^T \eta^* + s\\ &= r^T Q^{-T} Q Q^{-1} r - 2 r^T Q^{-1} r + s\\ &= r^T Q^{-1} r - 2 r^T Q^{-1} r + s\\ &= s - r^T Q^{-1} r\end{array}$

Let $P := \left[\begin{array}{cc} u & -v\end{array}\right]$. Then, $Q = P^T P$ and $r = P^T (x_0 - y_0)$. Hence, we obtain

$\| x(\lambda^*) - y(\gamma^*) \|_2^2 = \|x_0 - y_0\|_2^2 - (x_0 - y_0)^T P (P^T P)^{-1} P^T (x_0 - y_0)$

and, finally, the distance between lines $\mathcal{L}_1$ and $\mathcal{L}_2$, $\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \| x(\lambda^*) - y(\gamma^*) \|_2$, is the following

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \displaystyle\sqrt{(x_0 - y_0)^T \left(I_n - P (P^T P)^{-1} P^T\right) (x_0 - y_0)}$.

If the direction vectors $u, v$ are linearly independent, i.e., the two given lines are non-parallel, then $Q$ is invertible, and we have a unique minimizer. Intuitively, this makes sense: non-parallel lines have a unique pair of closest points.
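The closed-form squared distance $s - r^T Q^{-1} r$ derived above can be spot-checked numerically. A minimal sketch in plain Python, using hypothetical lines whose common perpendicular (hence distance) is known in advance:

```python
# Hypothetical non-parallel lines in R^3 (not from the post):
# L1 is the x-axis; L2 passes through (0, 0, 1) with direction (1, 1, 0).
# Their common perpendicular is (0, 0, 1), so the distance should be 1.
x0, u = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
y0, v = [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

d0 = [xi - yi for xi, yi in zip(x0, y0)]          # x0 - y0

# Q = P' P and r = P' (x0 - y0), with P = [u  -v]
q11, q12, q22 = dot(u, u), -dot(u, v), dot(v, v)
r1, r2 = dot(u, d0), -dot(v, d0)

# dist^2 = s - r' Q^{-1} r, using the explicit 2x2 inverse of Q
detQ = q11 * q22 - q12 * q12
s = dot(d0, d0)
rQinvr = (q22 * r1 * r1 - 2 * q12 * r1 * r2 + q11 * r2 * r2) / detQ
dist = (s - rQinvr) ** 0.5
print(dist)   # 1.0
```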

__________

Parallel lines

Suppose now that the direction vectors are linearly dependent, $v = c u$ for some nonzero $c \in \mathbb{R}$. This corresponds to the case where the two lines are parallel (or coincident). In this case

$Q = \left[\begin{array}{rr} \|u\|_2^2 & - c\|u\|_2^2 \\ -c \|u\|_2^2 & c^2 \|u\|_2^2 \\\end{array}\right] = \|u\|_2^2 \left[\begin{array}{rr} 1 & - c \\ -c & c^2 \\\end{array}\right]$

which is clearly non-invertible. The linear system of equations, $Q \eta^* = -r$, now has a redundant equation. Eliminating this redundant equation, we obtain an underdetermined system

$\left[\begin{array}{rr} u^T u & -u^T v \\\end{array}\right] \left[\begin{array}{c} \lambda^* \\ \gamma^* \end{array}\right] = u^T (y_0 - x_0)$

which has infinitely many solutions. We have one equation and two unknowns, i.e., one degree of freedom. Let us choose $\gamma^* = 0$. Hence, we obtain

$\lambda^* = \displaystyle\frac{u^T (y_0 - x_0)}{u^T u}$

and

$x(\lambda^*) = x_0 + \displaystyle\frac{u u^T}{u^T u}(y_0 - x_0), \quad{} y(\gamma^*) = y_0$.

Let $\Pi_u := u u^T / u^T u$. Note that $\Pi_u$ represents the orthogonal projection onto the span of vector $u$. Hence, a distance-minimizing difference vector would be

$x(\lambda^*) - y(\gamma^*) = (I_n - \Pi_u) (x_0 - y_0)$.

Note that this difference vector is not unique (there are infinitely many alternatives). Finally, we obtain the distance between the two parallel lines

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \|(I_n - \Pi_u) (x_0 - y_0)\|_2$.

If $Q$ is singular and $\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = 0$, we conclude that the two lines are coincident.
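The parallel-line formula can be checked the same way; a minimal sketch in plain Python with hypothetical parallel lines whose separation is known:

```python
# Hypothetical parallel lines in R^3 (not from the post):
# L1 is the x-axis; L2 passes through (0, 3, 4) with direction 2u,
# so the separation vector is (0, -3, -4) and the distance should be 5.
x0, u = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
y0 = [0.0, 3.0, 4.0]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

d0 = [xi - yi for xi, yi in zip(x0, y0)]   # x0 - y0

# (I - Pi_u)(x0 - y0), with Pi_u = u u' / u' u
c = dot(u, d0) / dot(u, u)
w = [d0[i] - c * u[i] for i in range(3)]
dist = dot(w, w) ** 0.5
print(dist)   # 5.0
```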

__________

Implementation

Here’s a MATLAB function that computes the distance between two lines:

function d = dist(x0,y0,u,v)
% DIST  Distance between the lines x0 + lambda*u and y0 + gamma*v in R^n.

% extract dimension
n = size(x0,1);

% build P, Q matrices
P = [u -v];
Q = P' * P;

% compute distance (a rank test is more reliable than det(Q) == 0
% in floating-point arithmetic)
if rank(P) < 2
    % parallel (or coincident) lines: project x0 - y0 away from u
    Pi_u = (u * u') / (u' * u);
    d = norm((eye(n) - Pi_u) * (x0 - y0));
else
    % non-parallel lines; Q \ P' avoids forming inv(Q) explicitly
    d = sqrt((x0 - y0)' * (eye(n) - P * (Q \ P')) * (x0 - y0));
end


__________

### Cascading (almost) identical LTI systems

October 23, 2010

Consider a causal continuous-time LTI system $\mathcal{H}$ with transfer function

$H (s) = \displaystyle\frac{1}{(s+a) (s+b)}$

where $a, b \in \mathbb{R}$. Note that we have two poles (at $s = -a$ and $s = -b$), but no finite zeros. We can view $\mathcal{H}$, a 2nd order system, as the cascade connection of two causal 1st order LTI systems $\mathcal{H}_1$ and $\mathcal{H}_2$, with transfer functions $H_1 (s) = 1 / (s+a)$ and $H_2 (s) = 1 / (s+b)$.

If $b = a$, then the transfer function becomes $H (s) = 1 / (s+a)^2$ and we have a double pole at $s = -a$. Taking the inverse Laplace transform of $H (s)$ we obtain the impulse response of the overall system $\mathcal{H}$

$h (t) = t e^{- a t} u (t)$

where $u (t)$ is the Heaviside step function. If $a > 0$, then the impulse response will eventually decay to zero. However, note that the exponential is multiplied by $t$, which means that there is a transient peak: setting $h'(t) = 0$ shows that the peak occurs at $t = 1/a$, with height $h(1/a) = 1/(a e)$, so if $a$ is positive but “close” to zero, this transient could be quite “wild”! This phenomenon is somewhat similar to resonance, although we are dealing with decaying exponential excitations, not sinusoidal ones. If we apply a Dirac delta impulse to the cascade, the output of the first system will be the impulse response of $\mathcal{H}_1$

$h_1 (t) = e^{- a t} u (t)$,

which happens to be also the impulse response of $\mathcal{H}_2$, the second system in the cascade. The lesson is the following: cascading identical LTI systems leads to resonant-like behavior.

If $b \neq a$, then we have a simple pole at $s = -a$ and another simple pole at $s = -b$. Via partial fraction expansion of $H (s)$, we obtain

$H (s) = \displaystyle\frac{1}{b - a} \left(\displaystyle\frac{1}{s+a} - \displaystyle\frac{1}{s+b}\right)$

and, taking the inverse Laplace transform, we get the impulse response

$h (t) = \displaystyle\frac{1}{b - a} \left(e^{-a t} - e^{-b t}\right) u (t)$.

Note that these expressions are valid only if $b \neq a$: if we set $b = a$, both the partial fraction expansion and $h (t)$ become indeterminate ($0/0$).
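The partial fraction expansion above is easy to spot-check numerically; a small sketch in plain Python, with hypothetical values $a = 1$, $b = 3$ (not from the post):

```python
# Spot-check 1/((s+a)(s+b)) = (1/(b-a)) * (1/(s+a) - 1/(s+b))
# at a few sample points on the real axis; a = 1, b = 3 are
# hypothetical values chosen for illustration.
a, b = 1.0, 3.0
for s in [0.5, 2.0, 10.0]:
    lhs = 1.0 / ((s + a) * (s + b))
    rhs = (1.0 / (b - a)) * (1.0 / (s + a) - 1.0 / (s + b))
    assert abs(lhs - rhs) < 1e-12
```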

Let us see what happens if $\mathcal{H}_1$ and $\mathcal{H}_2$ are “almost identical”. To be precise, let us investigate the impulse response of the cascade when the (simple) poles at $s = -a$ and $s = -b$ are almost “on top of each other”. Let $b = a \pm \varepsilon$, where $\varepsilon > 0$ is “small”. Then, the impulse response of $\mathcal{H}$ can be written as follows

$h (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(e^{-a t} - e^{-(a \pm \varepsilon) t}\right) u (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right) e^{-a t} u (t)$.

For convenience, let us define

$\gamma_{\varepsilon} (t) := \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right)$

so that $h (t) = \gamma_{\varepsilon} (t) e^{-a t} u (t)$. Taking the Taylor expansion of $\gamma_{\varepsilon} (t)$ about $t = 0$, we get

$\gamma_{\varepsilon} (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right)$

and, taking the limit as $\varepsilon$ approaches zero, we obtain

$\displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) = \displaystyle\lim_{\varepsilon \to 0} \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right) = t$

and, therefore, we finally get

$\displaystyle\lim_{\varepsilon \to 0} h (t) = \displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) e^{-a t} u (t) = t e^{-a t} u (t)$.

This shows, albeit in a sinfully non-rigorous manner, that the impulse response of $H (s) = 1 / ((s+a) (s+b))$ approaches the impulse response of $H (s) = 1 / (s+a)^2$ as the pole at $s = -b$ gets “closer” and “closer” to the pole at $s = -a$. I concede that this result is not terribly exciting…
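The limit can also be checked numerically; a minimal sketch in plain Python (with hypothetical values $a = 1$, $t = 2$, and $b = a + \varepsilon$), confirming that the two-pole impulse response approaches $t e^{-a t}$ as $\varepsilon$ shrinks:

```python
import math

# Check that (e^{-a t} - e^{-(a+eps) t}) / eps -> t e^{-a t} as eps -> 0.
# a = 1 and t = 2 are hypothetical values chosen for illustration.
a, t = 1.0, 2.0
target = t * math.exp(-a * t)
errs = []
for eps in [1e-1, 1e-2, 1e-3]:
    h = (math.exp(-a * t) - math.exp(-(a + eps) * t)) / eps
    errs.append(abs(h - target))
# the error shrinks roughly linearly with eps
assert errs[0] > errs[1] > errs[2]
```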

Let us now perform a numerical experiment! Let $a = 1$, and let $b = a + \varepsilon$, so that $b$ approaches $a$ as $\varepsilon \to 0$. The plot below illustrates the impulse responses for various values of $\varepsilon$

where $\varepsilon = 0$ corresponds to the cascade of two identical systems (which creates a double pole at $s = -1$). Note that, as expected, as $\varepsilon$ approaches zero, the impulse response of the cascade approaches the resonant one, $h (t) = t e^{- a t} u (t)$.

Last but not least, here’s the MATLAB script that generates the plot:

% this script uses tf and impulse from the Control System Toolbox

% build transfer function of H1
a = 1; H1 = tf(1,[1 a]);

% create time grid
t = 0:0.01:5;

% compute impulse responses of the cascade
h = [];
for epsilon = [1 0.5 0.1 0]

% build transfer function of H2
b = a + epsilon; H2 = tf(1,[1 b]);

% compute and store impulse response
h = [h, impulse(H2 * H1, t)];

end

% plot impulse responses
figure;
plot(t, h(:,1), 'r-', t, h(:,2), 'm-', t, h(:,3), 'b-', t, h(:,4), 'k--');
legend('\epsilon = 1', '\epsilon = 0.5', '\epsilon = 0.1', '\epsilon = 0');
xlabel('t (seconds)'); title('Impulse responses of the cascade');