## Posts Tagged ‘Linear Algebra’

### Deciding linear independence

February 23, 2012

Quantifier elimination has a bit of a magical feel to it.

Arnab Bhattacharyya (2011)

We would like to decide whether a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly independent. Let us first recall the definition of linear independence. From Dym’s book [1], we have:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly independent over $\mathbb{F}$ if the only scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$ for which

$\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$

are $\alpha_1 = \dots = \alpha_k = 0$. This is just another way of saying that none of these vectors can be expressed as a linear combination of the others.

_____

In this post we are interested in sets of real vectors and, thus, we shall restrict our attention to the case where $\mathbb{F} = \mathbb{R}$. Asserting that a given (finite) set of vectors is linearly independent is the same as stating that the set of vectors under study is not linearly dependent. Therefore, we now recall Dym’s definition [1] of linear dependence:

Definition: A set of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ in a vector space $\mathcal{V}$ over $\mathbb{F}$ is said to be linearly dependent over $\mathbb{F}$ if there exists a set of scalars $\alpha_1, \dots, \alpha_k \in \mathbb{F}$, not all of which are zero, such that

$\alpha_1 \mathbf{v}_1 + \dots + \alpha_k \mathbf{v}_k = \mathbf{0}$.

Notice that this permits you to express one or more of the given vectors as a linear combination of the others.

_____

From this definition it follows that saying that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ is linearly dependent is the same as stating that the following existentially quantified formula

$\displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}\right)\right)$

evaluates to $\text{True}$. Note that $\sum_{i = 1}^k \alpha_i \mathbf{v}_i = \mathbf{0}$ is a vector equation encapsulating $n$ scalar equations of the form

$\displaystyle\sum_{i = 1}^k \alpha_i v_i^{(j)} = 0$

where $v_i^{(j)}$ is the $j$-th component of vector $\mathbf{v}_i$. Finally, we conclude that a given (finite) set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subset \mathbb{R}^n$ being linearly independent is equivalent to the following formula

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \dots \exists \alpha_k \left( \neg \left(\bigwedge_{i=1}^k \alpha_i = 0\right) \land \left(\bigwedge_{j=1}^n \sum_{i = 1}^k \alpha_i v_i^{(j)} = 0\right)\right)$

evaluating to $\text{True}$. We can determine the truth value of the formula above using quantifier elimination.
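As a point of comparison, the conventional way to decide linear independence is a rank computation rather than quantifier elimination. A minimal sketch (using NumPy, which the post itself does not use; the vectors are illustrative):

```python
import numpy as np

def linearly_independent(vectors):
    """Return True iff the given 1-D arrays are linearly independent.

    A set of k vectors is linearly independent exactly when the n x k
    matrix having them as columns has rank k.
    """
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == A.shape[1])

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(linearly_independent([e1, e2]))           # True
print(linearly_independent([e1, e2, e1 + e2]))  # False: e1 + e2 is dependent
```

Note that `matrix_rank` works with a floating-point tolerance, whereas quantifier elimination over the reals gives an exact, symbolic decision.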

__________

Example

Consider the following three vectors in $\mathbb{R}^4$

$\mathbf{v}_1 = \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_2 = \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right]$,    $\mathbf{v}_3 = \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right]$.

Is the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ linearly independent (over $\mathbb{R}$)? It is not, as $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$. However, we would like to arrive at this conclusion using quantifier elimination. Note that in this example we have $k = 3$ (number of vectors), and $n = 4$ (dimensionality).

The vector equation $\alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \alpha_3 \mathbf{v}_3 = \mathbf{0}$ then becomes

$\alpha_1 \left[\begin{array}{c} 1\\ 0 \\ 0 \\ 0\end{array}\right] + \alpha_2 \left[\begin{array}{c} 0\\ 1 \\ 0 \\ 0\end{array}\right] + \alpha_3 \left[\begin{array}{c} 1\\ 1 \\ 0 \\ 0\end{array}\right] = \left[\begin{array}{c} 0\\ 0 \\ 0 \\ 0\end{array}\right]$

which yields two (scalar) equations: $\alpha_1 + \alpha_3 = 0$ and $\alpha_2 + \alpha_3 = 0$. Note that the vector equation above also yields two redundant equations of the form $0 = 0$, which we happily discard.

The set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly independent (over $\mathbb{R}$) if and only if the following formula

$\neg \displaystyle\exists \alpha_1 \exists \alpha_2 \exists \alpha_3 \left( \neg \left(\bigwedge_{i=1}^3 \alpha_i = 0\right) \land \alpha_1 + \alpha_3 = 0 \land \alpha_2 + \alpha_3 = 0\right)$

evaluates to $\text{True}$. Using the following REDUCE + REDLOG script, we can determine the truth value of the formula above:

```
% decide the linear independence of a set of vectors
load_package redlog;
rlset ofsf;

% define linear functions
f1 := alpha1;
f2 := alpha2;
f3 := alpha3;
f4 := alpha1 + alpha3;
f5 := alpha2 + alpha3;

% define existentially quantified formula
phi := not ex({alpha1, alpha2, alpha3},
              not (f1=0 and f2=0 and f3=0) and f4=0 and f5=0);

% perform quantifier elimination
rlqe phi;

end;
```

This script returns $\text{False}$, which allows us to conclude that the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is, indeed, not linearly independent.
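The REDLOG result can be cross-checked with ordinary linear algebra: the set is dependent exactly when the matrix whose columns are $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ has a nontrivial null space. A sketch using SymPy (an outside tool, not part of the original post):

```python
from sympy import Matrix

# Columns are v1, v2, v3 from the example.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 0, 0],
            [0, 0, 0]])

null = A.nullspace()
print(len(null) > 0)  # True: the set is linearly dependent
print(null[0].T)      # a dependence witness: alpha = (-1, -1, 1)
```

The witness $(\alpha_1, \alpha_2, \alpha_3) = (-1, -1, 1)$ says exactly that $-\mathbf{v}_1 - \mathbf{v}_2 + \mathbf{v}_3 = \mathbf{0}$, i.e., $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$.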

__________

Acknowledgements

This post was inspired by Arnab Bhattacharyya’s recent blog post on quantifier elimination.

__________

References

[1] Harry Dym, Linear algebra in action, American Mathematical Society, 2006.

### Representing complex numbers as 2×2 matrices

November 9, 2008

We can represent complex numbers as $2 \times 2$ real matrices, such that arithmetic operations $(+, -, \times, \div)$ on complex numbers become equivalent to arithmetic operations on their matrix representations.

Consider a matrix-valued function $M: \mathbb{C} \rightarrow \mathbb{R}^{2 \times 2}$ defined as

$M(z) = \left[\begin{array}{cc} \Re\{z\} & -\Im\{z\}\\ \Im\{z\} & \Re\{z\}\\ \end{array}\right]$,

where $\Re\{z\}$ and $\Im\{z\}$ are the real and imaginary parts of $z \in \mathbb{C}$, respectively. It can be shown that for all $z_1, z_2 \in \mathbb{C}$, we have that

• $M(z_1 + z_2) = M(z_1) + M(z_2)$.
• $M(z_1 - z_2) = M(z_1) - M(z_2)$.
• $M(z_1 z_2) = M(z_1) M(z_2)$.
• $M(z_1 / z_2) = [M(z_2)]^{-1} M(z_1)$, if $z_2 \neq 0$.

The map $M$ is, in fact, a ring homomorphism, and an isomorphism onto its image.
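The four properties can be spot-checked numerically. A minimal sketch (random samples via NumPy, which the post does not use; a check, not a proof):

```python
import numpy as np

def M(z):
    """2x2 real matrix representation of a complex number z."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

rng = np.random.default_rng(0)
z1 = complex(*rng.normal(size=2))
z2 = complex(*rng.normal(size=2))

assert np.allclose(M(z1 + z2), M(z1) + M(z2))
assert np.allclose(M(z1 - z2), M(z1) - M(z2))
assert np.allclose(M(z1 * z2), M(z1) @ M(z2))
assert np.allclose(M(z1 / z2), np.linalg.inv(M(z2)) @ M(z1))
print("all four properties hold")
```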

__________

Matrix Representation of a Complex Number

Let $z = a + i b$, where $i = \sqrt{-1}$ and $a,b \in \mathbb{R}$, be a complex number. Its matrix representation is

$M(z) = M(a + i b) = \left[\begin{array}{cc} a & -b\\ b & a\\ \end{array}\right] = a \left[\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array}\right] + b \left[\begin{array}{cc} 0 & -1\\ 1 & 0\\ \end{array}\right]$.

Let matrices $I_2, Q \in \mathbb{R}^{2 \times 2}$ be

$I_2 = \left[\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array}\right], \quad{} Q = \left[\begin{array}{cc} 0 & -1\\ 1 & 0\\ \end{array}\right]$,

then the matrix representation of $z \in \mathbb{C}$ is

$M(z) = M(a + i b) = a I_2 + b Q$.

Note that both $I_2$ and $Q$ are orthogonal matrices, and therefore $Q^{-1} = Q^T$. Since $I_2$ is the $2 \times 2$ identity matrix, we have that $I_2$ and $Q$ commute, i.e., $I_2 Q = Q I_2$.

The matrix representation of the complex conjugate of $z$ is

$M(z^*) = M(a - i b) = a I_2 - b Q = M^T (z)$,

which follows from the fact that matrix $Q$ is skew-symmetric, i.e., $Q^T = -Q$. The complex conjugate of a complex number can thus be computed by transposing its matrix representation. Since $Q^T = Q^{-1}$ and $Q^T = - Q$, we have $Q^2 = - I_2$.
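The identity $Q^2 = -I_2$ is the matrix counterpart of $i^2 = -1$. A one-line check (NumPy assumed, not used in the original post):

```python
import numpy as np

Q = np.array([[0, -1],
              [1,  0]])  # the matrix playing the role of i

print(np.array_equal(Q @ Q, -np.eye(2)))  # True: Q^2 = -I2, mirroring i^2 = -1
```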

The square of the magnitude of a complex number, $|z|^2 = zz^*$, is equal to the determinant of its matrix representation, as follows

$\det(M(z)) = \det(M(a + i b)) = \begin{vmatrix} a & -b \\ b & a\\ \end{vmatrix} = a^2 + b^2 = |z|^2$.

A complex number written in the Cartesian form, $z = a + i b$, can be converted to its polar form, $z = \rho e^{i \theta}$, where $\rho = \sqrt{a^2 + b^2}$ and $\theta = \arg(z)$. Given a complex number in the polar form $z = \rho e^{i \theta}$, its Cartesian form is

$z = \rho (\cos(\theta) + i \sin(\theta))$,

and its matrix representation is

$M(z) = \rho \left[\begin{array}{cc} \cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\\ \end{array}\right] = \rho R(\theta)$,

where

$R(\theta) = \left[\begin{array}{cc} \cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\\ \end{array} \right]$

is a rotation matrix that rotates a vector in $\mathbb{R}^2$ counterclockwise by an angle of $\theta$ radians, leaving the vector’s Euclidean norm $\|\cdot\|_2$ unchanged. Matrix $R(\theta)$ is orthogonal, and thus $R^{-1} (\theta) = R^T (\theta)$. Given that

$\sin(-\theta) = - \sin(\theta), \quad{} \cos(-\theta) = \cos(\theta)$,

we have $R(-\theta) = R^T (\theta) = R^{-1} (\theta)$. As $\det(R(\theta)) = 1$ for any value of $\theta$, we can conclude that $R(\theta) \in \text{SO}(2)$.

Finally, note that $Q = R(\frac{\pi}{2})$, i.e., matrix $Q$ rotates a vector in $\mathbb{R}^2$ counterclockwise by $\pi/2$ radians. Multiplying a complex number by $i$ likewise rotates it counterclockwise by $\pi/2$ radians

$i z = i \rho e^{i \theta} = \rho e^{i (\theta + \frac{\pi}{2})}$,

and in terms of matrix representations, this is equivalent to multiplying $M(z)$ by matrix $Q$. It does not matter if we multiply by $Q$ on the left or on the right, because matrices $I_2$ and $Q$ do commute.
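This equivalence is easy to spot-check numerically. A sketch (NumPy assumed; $M$ is the map defined at the start of the post):

```python
import numpy as np

def M(z):
    """2x2 real matrix representation of a complex number z."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])
z = 2.0 - 3.0j

# Multiplying z by i corresponds to multiplying M(z) by Q,
# on either side, since M(z) = a*I2 + b*Q commutes with Q.
assert np.allclose(M(1j * z), Q @ M(z))
assert np.allclose(M(1j * z), M(z) @ Q)
print("multiplication by i == multiplication by Q, on either side")
```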

__________

Arithmetic Operations

Let $z_1 = a_1 + i b_1$ and $z_2 = a_2 + i b_2$ be two complex numbers. Their addition/subtraction is thus

$z_1 \pm z_2 = (a_1 \pm a_2) + i (b_1 \pm b_2)$,

and from the rules of matrix addition/subtraction it can easily be shown that $M(z_1) \pm M(z_2) = M(z_1 \pm z_2)$.

It’s easier to compute products of complex numbers if we write them in the polar form, i.e., $z_1 = \rho_1 e^{i \theta_1}$ and $z_2 = \rho_2 e^{i \theta_2}$. The product is thus $z_1 z_2 = \rho_1 \rho_2 e^{i (\theta_1 + \theta_2)}$. The product of the matrix representations is

$M(z_1) M(z_2) = \rho_1 R(\theta_1) \rho_2 R(\theta_2) = \rho_1 \rho_2 R(\theta_1 + \theta_2) = M(z_1 z_2)$,

since $R(\theta_1) R(\theta_2) = R(\theta_1 + \theta_2)$. Dividing $z_1 \in \mathbb{C}$ by $z_2 \in \mathbb{C} \setminus \{0\}$ yields

$\displaystyle\frac{z_1}{z_2} = \frac{\rho_1}{\rho_2} e^{i (\theta_1 - \theta_2)}$,

whose corresponding matrix representation is

$\displaystyle M(z_1 / z_2) = \frac{\rho_1}{\rho_2} R(\theta_1 - \theta_2)$.

Note that

$M^{-1} (z_2) M(z_1) = \displaystyle \frac{\rho_1}{\rho_2} R(-\theta_2) R(\theta_1) = \frac{\rho_1}{\rho_2} R(\theta_1 - \theta_2)$

and therefore $M^{-1} (z_2) M(z_1) = M(z_1 / z_2)$.

We can conclude that arithmetic operations on complex numbers are equivalent to arithmetic operations on the matrices representing such complex numbers.

I have absolutely no idea what use this matrix representation could possibly have, but it’s quite interesting nonetheless.


### Gilbert Strang’s lectures on Linear Algebra

October 16, 2008

Matrix theory is the arithmetic of higher mathematics.

Richard Bellman (1920-1984)

Prof. Gilbert Strang’s famous lectures on Linear Algebra are available for download here. You can also watch the 34 lectures on YouTube. More material can be found on the course’s OCW page. Highly recommended. Strang is an awesome teacher. His lectures are delightful.

Thanks to MIT OCW for making the lectures available online.
