## Archive for February, 2012

### Deciding injectivity

February 28, 2012

Consider a function $f : \mathcal{X} \to \mathcal{Y}$. We say that $f$ is injective if and only if it maps distinct inputs to distinct outputs [1]. More formally, $f$ is injective if and only if

$\forall x_1 \forall x_2 \left( x_1 \neq x_2 \implies f (x_1) \neq f (x_2)\right)$

where $x_1$ and $x_2$ range over the set $\mathcal{X}$. Note that the inequations $x_1 \neq x_2$ and $f (x_1) \neq f (x_2)$ are equivalent to $\neg (x_1 = x_2)$ and $\neg \left(f (x_1) = f (x_2)\right)$, respectively. The universally-quantified formula above can thus be rewritten as follows

$\forall x_1 \forall x_2 \left( \neg (x_1 = x_2) \implies \neg (f (x_1) = f (x_2))\right)$

which looks a bit messy. Contrapositively,

$\forall x_1 \forall x_2 \left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$.

In this post, we will restrict our attention to real-valued functions of a real variable (i.e., functions from $\mathbb{R}$ to $\mathbb{R}$). For polynomial functions from reals to reals, we can use quantifier elimination to determine the truth values of the universally-quantified formulas above, thereby deciding injectivity.

__________

Example

We would like to decide the injectivity of the following functions

$\begin{array}{rl} f : \mathbb{R} &\to \mathbb{R}\\ x &\mapsto x^2\end{array}$                 $\begin{array}{rl} g : \mathbb{R} &\to \mathbb{R}\\ x &\mapsto x^3\end{array}$

which we plot below (using Wolfram Alpha)

In high school we all learned to use the infamous horizontal line test to determine whether a function is injective. Let us use something more powerful this time, namely, the following REDUCE + REDLOG script:

% decide the injectivity of f(x) = x^2 and g(x) = x^3

rlset ofsf;

% define universally-quantified formulas
phi := all({x1,x2}, x1**2=x2**2 impl x1=x2);
psi := all({x1,x2}, x1**3=x2**3 impl x1=x2);

% perform quantifier elimination
rlqe phi;
rlqe psi;

end;

which outputs false for phi and true for psi, thereby deciding that $f$ is not injective and that $g$ is injective.
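For readers without a REDUCE installation at hand, the same two verdicts can be cross-checked in Python with SymPy. This is not general quantifier elimination: the sketch below merely solves $f(x_1) = f(x_2)$ for $x_1$ symbolically and checks whether $x_1 = x_2$ is the only real solution (the helper name is_injective is mine, not part of any library):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def is_injective(f):
    # f is injective on R iff the only real solution of
    # f(x1) = f(x2), solved for x1, is x1 = x2.
    solutions = sp.solve(sp.Eq(f(x1), f(x2)), x1)
    # Discard solutions that cannot be certified real (e.g. the
    # complex cube roots arising from x1**3 = x2**3).
    real_solutions = [s for s in solutions if s.is_real]
    return all(sp.simplify(s - x2) == 0 for s in real_solutions)

print(is_injective(lambda x: x**2))  # x1 = -x2 also solves x1^2 = x2^2
print(is_injective(lambda x: x**3))
```

This agrees with the REDLOG script: $f(x) = x^2$ fails the test, while $g(x) = x^3$ passes it.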

__________

References

[1] David Makinson, Sets, Logic and Maths for Computing, Springer, 2008.

### Deciding the existence of equilibrium points

February 25, 2012

Suppose we are given a dynamical system of the form

$\dot{x} (t) = f ( x (t) )$

where $x : \mathbb{R} \to \mathbb{R}^n$ is the state trajectory, and $f : \mathbb{R}^n \to \mathbb{R}^n$ is a known vector field [1]. We now introduce the following definition:

Definition: A point $x^* \in \mathbb{R}^n$ is an equilibrium point of the given dynamical system if, whenever the state is at $x^*$ at some time $t^*$, it remains at $x^*$ for all future time, i.e., $x (t) = x^*$ for all $t \geq t^*$. In other words, an equilibrium point $x^*$ is a point of zero flow, i.e., $f ( x^* ) = 0$.

If you happen to dislike the word “flow”, feel free to replace it with the word “velocity”. A point of zero velocity is thus a stationary one. Like a properly-trained poodle, it stays put.

One can find the equilibrium points of a given dynamical system by computing the real roots of the vector equation $f (x) = 0$. However, suppose that we are not interested in computing the equilibrium points; instead, all we would like to know is whether any such points exist. Thus, we arrive at the following decision problem:

Problem: Given a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, we would like to decide whether the dynamical system $\dot{x} (t) = f ( x (t) )$ has at least one equilibrium point. This is equivalent to determining the truth value of the formula $\exists x \left( f (x) = 0 \right)$, where $x \in \mathbb{R}^n$.

Since both $x$ and $f (x)$ are $n$-dimensional vectors, $x = (x_1, \dots, x_n)$ and $f (x) = (f_1 (x), \dots, f_n (x))$, we can rewrite the existentially-quantified formula above in the following form

$\displaystyle\exists x_1 \exists x_2 \dots \exists x_n \left( \bigwedge_{i=1}^n f_i (x_1, x_2, \dots, x_n) = 0\right)$.

Can one even determine the truth value of the existentially-quantified formula above? In order to avoid colliding against the brick wall of undecidability, let us restrict our attention to polynomial dynamical systems (i.e., dynamical systems in which the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ is polynomial). For such systems, one can use quantifier elimination to decide the existence of equilibrium points.

__________

Example

Consider the following polynomial dynamical system

$\begin{array}{rl} \dot{x}_1 &= \mu - x_1^2\\ \dot{x}_2 &= - x_2\end{array}$

taken from section 2.7 in Khalil’s book [1]. Note that the two first-order ordinary differential equations above are decoupled, i.e., they are of the form $\dot{x}_i = f_i (x_i)$. Note also that we have a free parameter $\mu \in \mathbb{R}$.

We now compute the equilibrium points of the dynamical system by solving the (scalar) polynomial equations $f_i (x_i) = 0$. We thus obtain $x_1^2 = \mu$ and $x_2 = 0$. For $\mu > 0$, we have two equilibrium points at $(\sqrt{\mu}, 0)$ and $(-\sqrt{\mu}, 0)$. For $\mu = 0$, we have one equilibrium point at $(0,0)$. Finally, for $\mu < 0$, we have no equilibrium points, as the equation $x_1^2 = \mu$ has no real roots when $\mu < 0$. The existence of equilibrium points depends on parameter $\mu$. Stating that this dynamical system has at least one equilibrium point is the same as saying that the following existentially-quantified formula

$\displaystyle\exists x_1 \exists x_2 \left( x_1^2 - \mu = 0 \land x_2 = 0\right)$

evaluates to $\text{True}$. By visual inspection, we conclude that the formula (which has two bound variables, $x_1$ and $x_2$, and one free variable, $\mu$) evaluates to $\text{True}$ when $\mu \geq 0$. Let us confirm this using the following REDUCE + REDLOG script:

% decide the existence of equilibrium points

rlset ofsf;

% define polynomial vector field
f1 := mu - x1**2;
f2 := -x2;

% define existentially quantified formula
phi := ex({x1,x2}, f1=0 and f2=0);

% perform quantifier elimination
rlqe phi;

end;

which outputs $\mu \geq 0$. Hence, the given dynamical system has at least one equilibrium point if and only if the parameter $\mu$ is nonnegative.
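As a sanity check, one can also test individual values of $\mu$ in Python with SymPy. Unlike the REDLOG script, which eliminates the quantifiers once and for all and returns the condition $\mu \geq 0$, the sketch below only decides existence for each fixed numeric $\mu$ it is given (has_equilibrium is a hypothetical helper name of my own):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def has_equilibrium(mu):
    # An equilibrium point is a real solution of the system
    # mu - x1**2 = 0 and -x2 = 0. Because x1 and x2 are declared
    # real, sympy discards the complex solutions that appear
    # when mu < 0.
    solutions = sp.solve([mu - x1**2, -x2], [x1, x2], dict=True)
    return len(solutions) > 0

for mu in (-1, 0, 1):
    print(mu, has_equilibrium(mu))
```

For $\mu = -1$ no real solution exists, while $\mu = 0$ and $\mu = 1$ yield one and two equilibrium points, respectively, matching the hand computation above.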

__________

References

[1] Hassan K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, 2002.

### Deciding linear independence

February 23, 2012

Quantifier elimination has a bit of a magical feel to it.

Arnab Bhattacharyya (2011)

We would like to decide whether a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly independent. Let us first recall the definition of linear independence. From Dym’s book [1], we have:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly independent over $\mathbb{F}$ if the only scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$ for which

$\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$

are $\alpha_1 = \dots = \alpha_k = 0$. This is just another way of saying that you cannot express one of these vectors in terms of the others.

_____

In this post we are interested in sets of real vectors and, thus, we shall restrict our attention to the case where $\mathbb{F} = \mathbb{R}$. Asserting that a given (finite) set of vectors is linearly independent is the same as stating that the set of vectors under study is not linearly dependent. Therefore, we now recall Dym’s definition [1] of linear dependence:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly dependent over $\mathbb{F}$ if there exists a set of scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$, not all of which are zero, such that

$\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$.

Notice that this permits you to express one or more of the given vectors in terms of the others.

_____

From this definition it follows that saying that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly dependent is the same as stating that the following existentially quantified formula

$\displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}\right)\right)$

evaluates to $\text{True}$. Please do note that $\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}$ is a vector equation encapsulating $n$ scalar equations of the form

$\displaystyle\sum_{i = 1}^k \alpha_i v_i^{(j)} = 0$

where $v_i^{(j)}$ is the $j$-th component of vector $\mathbf{v}_i$. Finally, we conclude that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ being linearly independent is equivalent to the following formula

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\bigwedge_{j=1}^n \sum_{i = 1}^k \alpha_i v_i^{(j)} = 0\right)\right)$

evaluating to $\text{True}$. We can determine the truth value of the formula above using quantifier elimination.

__________

Example

Consider the three following vectors in $\mathbb{R}^4$

$\mathbf{v}_1 = \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_2 = \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_3 = \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right]$.

Is the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ linearly independent (over $\mathbb{R}$)? It is not, as $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$. However, we would like to arrive at this conclusion using quantifier elimination. Note that in this example we have $k = 3$ (number of vectors), and $n = 4$ (dimensionality).

The vector equation $\alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \alpha_3 \mathbf{v}_3 = \mathbf{0}$ then becomes

$\alpha_1 \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right] + \alpha_2 \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right] + \alpha_3 \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right] = \left[\begin{array}{c} 0\\ 0 \\ 0 \\ 0\end{array}\right]$

which yields two (scalar) equations: $\alpha_1 + \alpha_3 = 0$ and $\alpha_2 + \alpha_3 = 0$. Note that the vector equation above also yields two redundant equations of the form $0 = 0$, which we happily discard.

The set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly independent (over $\mathbb{R}$) if and only if the following formula (the negation of an existentially quantified formula)

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \exists \alpha_3 \left( \neg \left(\bigwedge_{i=1}^3 \alpha_i = 0\right) \land \alpha_1 + \alpha_3 = 0 \land \alpha_2 + \alpha_3 = 0\right)$

evaluates to $\text{True}$. Using the following REDUCE + REDLOG script, we can determine the truth value of the formula above:

% decide the linear independence of a set of vectors

rlset ofsf;

% define linear functions
f1 := alpha1;
f2 := alpha2;
f3 := alpha3;
f4 := alpha1 + alpha3;
f5 := alpha2 + alpha3;

% define existentially quantified formula
phi := not ex({alpha1,alpha2,alpha3},
not (f1=0 and f2=0 and f3=0) and f4=0 and f5=0);

% perform quantifier elimination
rlqe phi;

end;

This script returns $\text{False}$, which allows us to conclude that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is, indeed, not linearly independent.
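Quantifier elimination is, of course, overkill for this particular problem: the existentially quantified formula above is $\text{False}$ precisely when the matrix whose columns are $\mathbf{v}_1, \dots, \mathbf{v}_k$ has full column rank. Here is a SymPy-based cross-check (is_linearly_independent is my own helper, not a library function):

```python
import sympy as sp

def is_linearly_independent(vectors):
    # Stack the vectors as columns of a matrix V; the set is
    # linearly independent iff V @ alpha = 0 forces alpha = 0,
    # i.e., iff V has full column rank.
    V = sp.Matrix.hstack(*(sp.Matrix(v) for v in vectors))
    return V.rank() == len(vectors)

v1 = [1, 0, 0, 0]
v2 = [0, 1, 0, 0]
v3 = [1, 1, 0, 0]
print(is_linearly_independent([v1, v2, v3]))  # False, since v3 = v1 + v2
print(is_linearly_independent([v1, v2]))      # True
```

Here the matrix built from $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ has rank $2 < 3$, which agrees with the verdict of the REDLOG script.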

__________

Acknowledgements

This post was inspired by Arnab Bhattacharyya’s recent blog post on quantifier elimination.

__________

References

[1] Harry Dym, Linear Algebra in Action, American Mathematical Society, 2006.