### Distance between two lines II

November 27, 2010

Suppose we are given two lines, $\mathcal{L}_1, \mathcal{L}_2 \subset \mathbb{R}^n$, defined as follows

$\mathcal{L}_1 = \left\{ x_0 + \lambda u \mid \lambda \in \mathbb{R} \right\}, \quad{} \mathcal{L}_2 = \left\{ y_0 + \gamma v \mid \gamma \in \mathbb{R} \right\}$

where $x_0, y_0 \in \mathbb{R}^n$, and $u, v \in \mathbb{R}^n \setminus \{0_n\}$.  Let vector-valued functions $x : \mathbb{R} \rightarrow \mathbb{R}^n$ and $y : \mathbb{R} \rightarrow \mathbb{R}^n$ be defined as $x(\lambda) = x_0 + \lambda u$ and $y(\gamma) = y_0 + \gamma v$, so that the two lines can be written in the form

$\mathcal{L}_1 = \left\{ x(\lambda) \mid \lambda \in \mathbb{R} \right\}, \quad{} \mathcal{L}_2 = \left\{ y(\gamma) \mid \gamma \in \mathbb{R} \right\}$.

Suppose that we want to compute the distance between these two lines. Back in December 2008, I wrote a very long post in which I derived from first principles an expression for the distance between two lines in $\mathbb{R}^n$. To cut a long story short, the distance between $\mathcal{L}_1$ and $\mathcal{L}_2$ is given by

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \displaystyle \min_{(\lambda, \gamma) \in \mathbb{R}^2} \| x(\lambda) - y(\gamma) \|_2$.

My earlier post was quite conceptual and theoretical. In this post I focus on actually computing the distance between two given lines.
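Before doing any algebra, it helps to have a brute-force baseline for this minimization. The following Python/NumPy sketch (the two example lines and the search grid are hypothetical choices for illustration, not from the original post) approximates the minimum by evaluating $\| x(\lambda) - y(\gamma) \|_2$ over a grid of $(\lambda, \gamma)$ pairs:

```python
import numpy as np

# two hypothetical skew lines in R^3, chosen so the answer is easy to see
x0, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
y0, v = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])

# coarse grid search over (lambda, gamma)
grid = np.linspace(-5, 5, 101)
lam, gam = np.meshgrid(grid, grid)

# difference vectors for every grid pair, shape (101, 101, 3)
diff = (x0 + lam[..., None] * u) - (y0 + gam[..., None] * v)
d = np.linalg.norm(diff, axis=-1).min()
print(d)   # approximately 1.0 for these two lines
```

The grid search is crude, of course; the point of the rest of the post is to replace it with a closed-form answer.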

__________

Introduction

Note that the difference vector $x(\lambda) - y(\gamma)$ can be written in matrix form, as follows

$x(\lambda) - y(\gamma) = \left[\begin{array}{cc} u & -v\end{array}\right] \left[\begin{array}{c} \lambda \\ \gamma \end{array}\right] + (x_0 - y_0)$.

Let $\eta := (\lambda, \gamma)$. Then, the squared 2-norm of the difference vector can be written as

$\| x(\lambda) - y(\gamma) \|_2^2 = \eta^T Q \eta + 2 r^T \eta + s$

where $Q, r, s$ are defined as follows

$Q := \left[\begin{array}{rr} u^T u & -u^T v \\ -v^T u & v^T v \\\end{array}\right], \quad{} r := \left[\begin{array}{c} u^T \\ -v^T\end{array}\right] (x_0 - y_0), \quad{} s := \|x_0 - y_0\|_2^2$

Note that $Q = Q^T$, and that $Q$ is a Gram matrix, so $Q$ is positive semidefinite (and positive definite precisely when $u$ and $v$ are linearly independent). Therefore, the function $\| x(\lambda) - y(\gamma) \|_2^2$ is convex in $\eta = (\lambda, \gamma)$ and, since it is bounded below by zero, the minimum exists.
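The quadratic expansion above is easy to verify numerically. A minimal Python/NumPy sketch, with randomly generated (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x0, y0 = rng.standard_normal(n), rng.standard_normal(n)
u, v = rng.standard_normal(n), rng.standard_normal(n)

P = np.column_stack([u, -v])           # the n-by-2 matrix [u  -v]
Q = P.T @ P                            # Gram matrix
r = P.T @ (x0 - y0)
s = float((x0 - y0) @ (x0 - y0))

lam, gam = 0.7, -1.3                   # an arbitrary (lambda, gamma) pair
eta = np.array([lam, gam])

lhs = np.linalg.norm((x0 + lam * u) - (y0 + gam * v)) ** 2
rhs = eta @ Q @ eta + 2 * r @ eta + s
print(np.isclose(lhs, rhs))            # True: the two expressions agree
```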

The pair $(\lambda, \gamma)$ that minimizes the 2-norm $\| x(\lambda) - y(\gamma) \|_2$ is the same one that minimizes the squared 2-norm $\| x(\lambda) - y(\gamma) \|_2^2$ and, hence, we have the unconstrained quadratic program

$(\lambda^*, \gamma^*) = \eta^* = \displaystyle \arg\min_{\eta \in \mathbb{R}^2} \left\{ \eta^T Q \eta + 2 r^T \eta + s \right\}$

which can be solved analytically. The gradient of the objective function must vanish at the minimum

$\displaystyle \left.\frac{\partial}{\partial \eta} \left[\eta^T Q \eta + 2 r^T \eta + s\right] \right|_{\eta = \eta^*} = 2 Q \eta^* + 2 r = 0_2 \quad \Rightarrow \quad Q \eta^* = -r$

and, hence, we have a linear system of two equations in two unknowns

$\left[\begin{array}{rr} u^T u & -u^T v \\ -v^T u & v^T v \\\end{array}\right] \left[\begin{array}{c} \lambda^* \\ \gamma^* \end{array}\right] = \left[\begin{array}{c} u^T (y_0 - x_0) \\ v^T (x_0 - y_0)\end{array}\right]$.

If $\det(Q) \neq 0$, then the system of equations has a unique solution. However, if $\det(Q) = 0$, there are infinitely many solutions. Let us consider the two cases separately.
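When $Q$ is invertible, the system can be solved directly. Here is a Python/NumPy sketch (with hypothetical lines of my choosing) that solves $Q \eta^* = -r$ and confirms the geometric meaning of this condition: the optimal difference vector $x(\lambda^*) - y(\gamma^*)$ is orthogonal to both direction vectors $u$ and $v$:

```python
import numpy as np

# hypothetical non-parallel lines in R^3
x0, u = np.array([1.0, 2.0, 0.0]), np.array([1.0, 1.0, 0.0])
y0, v = np.array([0.0, 0.0, 3.0]), np.array([0.0, 1.0, 1.0])

P = np.column_stack([u, -v])
Q = P.T @ P
r = P.T @ (x0 - y0)

eta = np.linalg.solve(Q, -r)           # solve Q eta* = -r
residual = P @ eta + (x0 - y0)         # x(lambda*) - y(gamma*)

# the optimal difference vector is orthogonal to both u and v
print(np.allclose(P.T @ residual, 0))  # True
```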

__________

Non-parallel lines

If matrix $Q$ is invertible, we obtain the unique minimizer $\eta^* = - Q^{-1} r$. Thus, the minimum squared 2-norm of the difference vector is

$\begin{array}{rl}\| x(\lambda^*) - y(\gamma^*) \|_2^2 &= (\eta^*)^T Q \eta^* + 2 r^T \eta^* + s\\ &= r^T Q^{-T} Q Q^{-1} r - 2 r^T Q^{-1} r + s\\ &= r^T Q^{-1} r - 2 r^T Q^{-1} r + s\\ &= s - r^T Q^{-1} r\end{array}$

Let $P := \left[\begin{array}{cc} u & -v\end{array}\right]$. Then, $Q = P^T P$ and $r = P^T (x_0 - y_0)$. Hence, we obtain

$\| x(\lambda^*) - y(\gamma^*) \|_2^2 = \|x_0 - y_0\|_2^2 - (x_0 - y_0)^T P (P^T P)^{-1} P^T (x_0 - y_0)$

and, finally, the distance between lines $\mathcal{L}_1$ and $\mathcal{L}_2$, $\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \| x(\lambda^*) - y(\gamma^*) \|_2$, is the following

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \displaystyle\sqrt{(x_0 - y_0)^T \left(I_n - P (P^T P)^{-1} P^T\right) (x_0 - y_0)}$.

If the direction vectors $u, v$ are linearly independent, i.e., the two given lines are non-parallel, then $Q$ is positive definite and hence invertible, and we have a unique minimizer. Intuitively, this makes sense: two non-parallel lines have a unique pair of closest points.
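As a sanity check on the closed-form distance, here is a Python/NumPy sketch applying it to a hypothetical pair of skew lines in $\mathbb{R}^3$ whose distance is known by inspection:

```python
import numpy as np

# hypothetical skew lines: one along the x-axis, one along y, offset by 1 in z
x0, u = np.zeros(3), np.array([1.0, 0.0, 0.0])
y0, v = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])

P = np.column_stack([u, -v])
M = np.eye(3) - P @ np.linalg.solve(P.T @ P, P.T)   # I - P (P^T P)^{-1} P^T
d = np.sqrt((x0 - y0) @ M @ (x0 - y0))
print(d)   # 1.0: the lines differ only by the unit offset along z
```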

__________

Parallel lines

Suppose now that the direction vectors are linearly dependent, $v = c u$ for some $c \in \mathbb{R} \setminus \{0\}$ (note that $c \neq 0$, since $v \neq 0_n$). This corresponds to the case where the two lines are parallel (or coincident). In this case

$Q = \left[\begin{array}{rr} \|u\|_2^2 & - c\|u\|_2^2 \\ -c \|u\|_2^2 & c^2 \|u\|_2^2 \\\end{array}\right] = \|u\|_2^2 \left[\begin{array}{rr} 1 & - c \\ -c & c^2 \\\end{array}\right]$

which is clearly non-invertible. The linear system of equations, $Q \eta^* = -r$, now has a redundant equation. Eliminating this redundant equation, we obtain an underdetermined system

$\left[\begin{array}{rr} u^T u & -u^T v \\\end{array}\right] \left[\begin{array}{c} \lambda^* \\ \gamma^* \end{array}\right] = u^T (y_0 - x_0)$

which has infinitely many solutions. We have one equation and two unknowns, i.e., one degree of freedom. Let us choose $\gamma^* = 0$. Hence, we obtain

$\lambda^* = \displaystyle\frac{u^T (y_0 - x_0)}{u^T u}$

and

$x(\lambda^*) = x_0 + \displaystyle\frac{u u^T}{u^T u}(y_0 - x_0), \quad{} y(\gamma^*) = y_0$.

Let $\Pi_u := u u^T / (u^T u)$. Note that $\Pi_u$ is the orthogonal projection onto the span of $u$, so that $I_n - \Pi_u$ projects onto its orthogonal complement. Hence, a distance-minimizing difference vector is

$x(\lambda^*) - y(\gamma^*) = (I_n - \Pi_u) (x_0 - y_0)$.

Note that although the minimizing pair $(\lambda^*, \gamma^*)$ is not unique (there are infinitely many), the difference vector itself is unique: every minimizing pair gives $x(\lambda) - y(\gamma) = (x_0 - y_0) + (\lambda - c\gamma) u$ with the same value of $\lambda - c\gamma$. Finally, we obtain the distance between the two parallel lines

$\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = \|(I_n - \Pi_u) (x_0 - y_0)\|_2$.

If $Q$ is singular and $\textbf{dist} (\mathcal{L}_1, \mathcal{L}_2) = 0$, we conclude that the two lines are coincident.
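The parallel-line formula can be checked the same way. The lines below are hypothetical examples of mine, offset by a 3-4-5 right triangle in the plane orthogonal to $u$:

```python
import numpy as np

# hypothetical parallel lines: v is a scalar multiple of u
x0, u = np.zeros(3), np.array([1.0, 0.0, 0.0])
y0 = np.array([0.0, 3.0, 4.0])        # 3-4-5 offset in the y-z plane
v = 2.0 * u

Pi_u = np.outer(u, u) / (u @ u)       # projection onto span{u}
d = np.linalg.norm((np.eye(3) - Pi_u) @ (x0 - y0))
print(d)   # 5.0, independent of any offset component along u

# shifting y0 purely along u gives coincident lines, i.e., distance 0
d0 = np.linalg.norm((np.eye(3) - Pi_u) @ (x0 - (x0 + 7.0 * u)))
print(d0)  # 0.0
```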

__________

Implementation

Here’s a MATLAB function that computes the distance between two lines:

```matlab
function d = dist(x0, y0, u, v)
% DIST  Distance between the lines {x0 + lambda*u} and {y0 + gamma*v} in R^n

% extract dimension
n = size(x0, 1);

% build P and Q matrices
P = [u -v];
Q = P' * P;

% orthogonal projection onto the span of u
Pi_u = (u * u') / (u' * u);

% compute distance (a rank test is numerically safer than det(Q) == 0)
if rank(Q) < 2
    % parallel (or coincident) lines
    d = norm((eye(n) - Pi_u) * (x0 - y0), 2);
else
    % non-parallel lines: backslash avoids forming inv(Q) explicitly
    d = sqrt((x0 - y0)' * (eye(n) - P * (Q \ P')) * (x0 - y0));
end
```

