## Archive for the ‘Dynamical Systems’ Category

### ODE notation considered harmful

June 5, 2012

> Whenever I want to learn a new subject, I announce a graduate course in it, since the best way to learn is by teaching. But even better than teaching humans is teaching computers, i.e. program! Since computers will not let you wave your hands and wing it.
>
> — Doron Zeilberger (1999)

Suppose that we are given an $n$-dimensional first-order vector ordinary differential equation (ODE)

$\dot{x} = f (x)$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ is a vector field. A solution of this ODE is a function $x : \mathbb{R} \to \mathbb{R}^n$ whose first derivative, $\dot{x}$, is equal to $f (x)$. Elementary, my dear Watson!

Despite the fact that the notation $\dot{x} = f (x)$ is rather pervasive, it is dangerous, in my humble opinion. One should keep in mind that such notation is an abbreviation. How could it not be an abbreviation? Note that $x$ is a function, and that the argument of $f$ is a vector in $\mathbb{R}^n$, not a function! In other words, writing $f (x)$ results in a type error, and a compiler / interpreter with type checking would never accept the declaration $\dot{x} = f (x)$. We, fallible humans, accept it because we know (do we?) that it is an abbreviation, and because it is tradition. Who are we to challenge a centuries-old tradition?
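The type mismatch becomes tangible if we write it in a language with type annotations. Here is a small Python sketch (my addition, not part of the original post; the vector field and the trajectory are arbitrary toy choices): `f` expects a vector, so `f(x(t))` type-checks while `f(x)` does not.

```python
from typing import Callable

Vec = tuple[float, ...]           # a vector in R^n
Traj = Callable[[float], Vec]     # a function x : R -> R^n

def f(v: Vec) -> Vec:
    # a toy vector field on R^1 (arbitrary choice)
    return (-v[0],)

x: Traj = lambda t: (2.0 ** -t,)

# f(x(t)) is well-typed: the argument is a vector
print(f(x(0.0)))

# f(x) would be ill-typed: the argument is a function, not a vector;
# a static type checker such as mypy flags f(x) as an error
```
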

If $\dot{x} = f (x)$ is an abbreviation, what exactly is it an abbreviation for? It is an abbreviation for the following

$\dot{x} (t) = f ( x (t) )$

where the $=$ symbol denotes equality of vectors. Note that the argument of $f$ is the vector $x (t)$ (i.e., function $x$ evaluated at $t$), not the function $x$. But, what does the equation above say? It says the following:

Proposition: Given a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, there exists a function $x : \mathbb{R} \to \mathbb{R}^n$, whose first derivative is $\dot{x} : \mathbb{R} \to \mathbb{R}^n$, such that for every $t \in \mathbb{R}$ we have that $\dot{x} (t) = f ( x (t) )$.

This proposition may be true or false. Not all differential equations have solutions, after all, which is why the study of existence of solutions is (was?) an active area of research. Note the presence of the words “for every” in the proposition above. Do you see any universal quantifier, $\forall$, anywhere? You do not, and that is due to the fact that $\dot{x} (t) = f ( x (t) )$ is also an abbreviation! The universal quantifier is implicit. If we make it explicit, we obtain

$\forall t \left( \dot{x} (t) = f ( x (t) ) \right)$

where $t$ ranges over $\mathbb{R}$. The universally quantified formula above is the conjunction of infinitely many equations. However, do note that the words “there exists” also appear in the proposition, which suggests that an existential quantifier is missing. Therefore, the non-abbreviated notation would be as follows

$\exists x \, \forall t \left( \dot{x} (t) = f ( x (t) ) \right)$

where $x$ ranges over the set of all (continuous?) functions from $\mathbb{R}$ to $\mathbb{R}^n$, and $t$ ranges over $\mathbb{R}$.

__________

A better notation

So far, I have criticized the abbreviated notation for differential equations. I consider it harmful, as confusing a function $x$ with its evaluation $x(t)$ is an atrocious crime against types. To make this post more “constructive”, I will take the liberty of proposing a better notation.

We start by noting that $f ( x (t) )$ can be written in the form $(f \circ x) (t)$, where the symbol $\circ$ denotes function composition. We now introduce a differential operator $D$, which maps functions to functions, so that we obtain the first derivative of function $x$ via $\dot{x} = D (x)$. Hence, $\dot{x} (t) = (D (x)) (t)$, which allows us to rewrite $\forall t \left( \dot{x} (t) = f ( x (t) ) \right)$ in the following form

$\forall t \left( (D(x)) (t) = (f \circ x) (t) \right)$

where $t$ ranges over $\mathbb{R}$. The universally quantified formula above states that the functions $D(x)$ and $f \circ x$, both from $\mathbb{R}$ to $\mathbb{R}^n$, evaluate to the same value for every choice of the input $t$, i.e., that the functions $D(x)$ and $f \circ x$ are equal. The formula can thus be rewritten more compactly as $D(x) = f \circ x$.

Finally, we conclude that $\exists x \, \forall t \left( \dot{x} (t) = f ( x (t) ) \right)$ is equivalent to the following existentially quantified formula

$\exists x \left( D(x) = f \circ x \right)$

where the $=$ symbol denotes equality of functions. This new notation may not be as compact as $\dot{x} = f (x)$, but at least now the RHS does not result in a type error. Criticism would be most welcome.
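The operator notation can be made executable. In the following Python sketch (mine, not part of the original post), $D$ is an honest higher-order function mapping a function to (an approximation of) its derivative, so $D(x) = f \circ x$ type-checks; the step size and the test solution are arbitrary choices.

```python
import math

h = 1e-6  # step size for the central-difference approximation (arbitrary choice)

def D(x):
    # differential operator: maps a function to (an approximation of) its derivative
    return lambda t: (x(t + h) - x(t - h)) / (2 * h)

def compose(f, x):
    # (f . x)(t) = f(x(t))
    return lambda t: f(x(t))

# the scalar ODE D(x) = f . x with f(v) = v is solved by x(t) = exp(t)
f = lambda v: v
x = math.exp

for t in [0.0, 0.5, 1.0]:
    assert abs(D(x)(t) - compose(f, x)(t)) < 1e-4
print("D(x) and f . x agree at the sampled times")
```
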

### Deciding the existence of equilibrium points

February 25, 2012

Suppose we are given a dynamical system of the form

$\dot{x} (t) = f ( x (t) )$

where $x : \mathbb{R} \to \mathbb{R}^n$ is the state trajectory, and $f : \mathbb{R}^n \to \mathbb{R}^n$ is a known vector field [1]. We now introduce the following definition:

Definition: A point $x^* \in \mathbb{R}^n$ is an equilibrium point of the given dynamical system if the state being at $x^*$ at some time $t^*$ implies that the state will remain at $x^*$ for all future time, i.e., $x (t) = x^*$ for all $t \geq t^*$. In other words, an equilibrium point $x^*$ is a point of zero flow, i.e., $f ( x^* ) = 0$.

If you happen to dislike the word “flow”, feel free to replace it with the word “velocity”. A point of zero velocity is thus a stationary one. Like a properly-trained poodle, it stays put.

One can find the equilibrium points of a given dynamical system by computing the real roots of the vector equation $f (x) = 0$. However, suppose that we are not interested in computing the equilibrium points; instead, all we would like to know is whether any such points exist. Thus, we arrive at the following decision problem:

Problem: Given a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, we would like to decide whether the dynamical system $\dot{x} (t) = f ( x (t) )$ has at least one equilibrium point. This is equivalent to determining the truth value of the formula $\exists x \left( f (x) = 0 \right)$, where $x \in \mathbb{R}^n$.

Since both $x$ and $f (x)$ are $n$-dimensional vectors, $x = (x_1, \dots, x_n)$ and $f (x) = (f_1 (x), \dots, f_n (x))$, we can rewrite the existentially-quantified formula above in the following form

$\displaystyle\exists x_1 \exists x_2 \dots \exists x_n \left( \bigwedge_{i=1}^n f_i (x_1, x_2, \dots, x_n) = 0\right)$.

Can one even determine the truth value of the existentially-quantified formula above? In order to avoid colliding against the brick wall of undecidability, let us restrict our attention to polynomial dynamical systems (i.e., dynamical systems in which the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ is polynomial). For such systems, one can use quantifier elimination to decide the existence of equilibrium points.

__________

Example

Consider the following polynomial dynamical system

$\begin{array}{rl} \dot{x}_1 &= \mu - x_1^2\\ \dot{x}_2 &= - x_2\end{array}$

taken from section 2.7 in Khalil’s book [1]. Note that the two first-order ordinary differential equations above are decoupled, i.e., they are of the form $\dot{x}_i = f_i (x_i)$. Note also that we have a free parameter $\mu \in \mathbb{R}$.

We now compute the equilibrium points of the dynamical system by solving the (scalar) polynomial equations $f_i (x_i) = 0$. We thus obtain $x_1^2 = \mu$ and $x_2 = 0$. For $\mu > 0$, we have two equilibrium points at $(\sqrt{\mu}, 0)$ and $(-\sqrt{\mu}, 0)$. For $\mu = 0$, we have one equilibrium point at $(0,0)$. Finally, for $\mu < 0$, we have no equilibrium points, as the equation $x_1^2 = \mu$ has no real roots when $\mu < 0$. The existence of equilibrium points depends on parameter $\mu$. Stating that this dynamical system has at least one equilibrium point is the same as saying that the following existentially-quantified formula
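The claimed equilibrium points are easy to verify numerically. The following short Python check (my addition, not part of the original post) evaluates the vector field at $(\pm\sqrt{\mu}, 0)$ for a few nonnegative values of $\mu$:

```python
import math

def f(x, mu):
    # polynomial vector field of the example: f1 = mu - x1^2, f2 = -x2
    return (mu - x[0] ** 2, -x[1])

for mu in [0.0, 0.25, 4.0]:
    for x1 in (math.sqrt(mu), -math.sqrt(mu)):
        # the flow vanishes at the claimed equilibrium points
        assert f((x1, 0.0), mu) == (0.0, 0.0)
print("flow vanishes at the claimed equilibrium points")
```
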

$\displaystyle\exists x_1 \exists x_2 \left( x_1^2 - \mu = 0 \land x_2 = 0\right)$

evaluates to $\text{True}$. By visual inspection of the formula, we conclude that the formula (which has two bound variables, $x_1$ and $x_2$, and one free variable $\mu$) evaluates to $\text{True}$ when $\mu \geq 0$. Let us confirm this using the following REDUCE + REDLOG script:

% decide the existence of equilibrium points

load_package redlog;
rlset ofsf;

% define polynomial vector field
f1 := mu - x1**2;
f2 := -x2;

% define existentially quantified formula
phi := ex({x1,x2}, f1=0 and f2=0);

% perform quantifier elimination
rlqe phi;

end;

which outputs $\mu \geq 0$. Hence, if the parameter $\mu$ is nonnegative, then there will always exist at least one equilibrium point.

__________

References

[1] Hassan K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall.

### Computing state trajectories using scans

January 15, 2012

Consider a class of dynamical systems of the form

$x_{k+1} = f (x_k, u_k )$

where $x_k$ is the state, $u_k$ is the input, and $f : \mathcal{X} \times \mathcal{U} \to \mathcal{X}$ is the state-transition function; $\mathcal{X}$ and $\mathcal{U}$ are the state set and input set, respectively.

Given an initial state $x_0$ and a (possibly infinite) input sequence $(u_0, u_1, u_2, \dots)$, we would like to compute the state trajectory $(x_0, x_1, x_2, x_3, \dots)$. Iterating, we obtain

$\begin{array}{rl} x_1 &= f (x_0, u_0)\\ x_2 &= f (x_1, u_1) = f (f (x_0, u_0), u_1)\\ x_3 &= f (x_2, u_2) = f (f (x_1, u_1), u_2) = f (f (f (x_0, u_0), u_1), u_2)\end{array}$

and so on. Let us pause to take a look at the following equation

$x_3 = f (f (f (x_0, u_0), u_1), u_2)$.

If you happen to be acquainted with functional programming, you will probably recognize this pattern. It is a left fold!! Hence, for every integer $N \geq 1$, we can compute $x_N$ in Haskell using function foldl, as follows

$\text{foldl} \, f \, x_0\, [u_0, u_1, \dots, u_{N-1}]$.

However, we would like to obtain the intermediate states $x_1, x_2, \dots, x_{N-1}$ as well. We can compute the entire state trajectory using function scanl instead

$\text{scanl} \, f \, x_0\, [u_0, u_1, \dots, u_{N-1}]$

which returns a (finite) list of states $[ x_0, x_1, \dots, x_N]$. Amazingly (or perhaps not), since Haskell is lazy, we can obtain a state trajectory of infinite length corresponding to a given initial state and a given input sequence of infinite length!
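For readers who prefer Python, `itertools.accumulate` plays the role of `scanl`, and since it returns a lazy iterator it handles infinite input sequences too. A sketch (my addition; the affine map and constant input below are arbitrary stand-ins for $f$ and the input sequence):

```python
from itertools import accumulate, count, islice

def f(x, u):
    # an arbitrary state-transition function: x_{k+1} = 0.5 * x_k + u_k
    return 0.5 * x + u

x0 = 0.0
us = (0.1 for _ in count())  # an infinite input sequence

# accumulate yields x0, f(x0, u0), f(f(x0, u0), u1), ... lazily, like scanl
xs = accumulate(us, f, initial=x0)

print(list(islice(xs, 5)))
```
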

__________

Example

Consider the logistic map with a control input

$x_{k+1} = r x_k (1 - x_k) + u_k$

where we will use $r = 3.8$, for which the dynamics (of the unforced system) are chaotic. Let the initial state be $x_0 = 0$ (note that $x = 0$ is a fixed-point of the unforced logistic map), and let the input sequence be a discrete-time sinusoid of infinite length

$u_k = \displaystyle\frac{1}{5} \sin \left(\frac{\pi}{10} k\right)$

whose period is $20$. The following Haskell program computes the infinite-length state trajectory using the aforementioned scanl trick:

-- define logistic map with control input
f :: Floating a => a -> a -> a
f x u = (3.8 * x * (1 - x)) + u

-- frequency of the input sinusoid
omega :: Floating a => a
omega = pi / 10

-- generate sinusoidal control input
us :: Floating a => [a]
us = map ((0.2*) . sin . (omega*) . fromIntegral) [0..]

-- initial state
x0 :: Floating a => a
x0 = 0.0

-- compute the state trajectory
xs :: Floating a => [a]
xs = scanl f x0 us

where we used map, function composition, and an infinite list of non-negative integers to generate the sinusoidal input.

To be more precise, the program above does not quite compute the state trajectory; it merely “promises” to compute it when needed. After running the program in GHCi, we can obtain the first 20 values of the input sequence and the first 21 values of the state sequence using function take, as follows

*Main> take 20 us
[0.0,6.180339887498948e-2,0.11755705045849463,
0.1618033988749895,0.1902113032590307,0.2,
0.19021130325903074,0.1618033988749895,
0.11755705045849466,6.180339887498951e-2,
2.4492127076447546e-17,-6.180339887498946e-2,
-0.11755705045849461,-0.16180339887498948,
-0.1902113032590307,-0.2,-0.19021130325903074,
-0.1618033988749895,-0.11755705045849468,
-6.180339887498953e-2]
*Main> take 21 xs
[0.0,0.0,6.180339887498948e-2,0.33789525775595064,
1.0119471985345525,0.14426955372699996,
0.6691322284587664,1.031509602586003,
3.8294059838691885e-2,0.2575020247735924,
0.7883433805171414,0.6340607606653986,
0.8199019084343063,0.44356147166584237,
0.7760924326990134,0.4701259774450641,
0.7466086625502713,0.5286885334506017,
0.7850690797091344,0.5236383047578965,
0.8860732772080671]

In other words, the input and state sequences are only computed when the expressions with take are evaluated by the interpreter. We can plot these sequences in MATLAB.

Last but not least, here is the MATLAB code that generates the plot:

figure; hold on;
stem([0:1:19],u,'b');
stem([0:1:20],x,'r');
legend('input','state');
xlabel('k (time)'); ylabel('amplitude');
axis([0 21 -0.25 1.20]);


where (row) vectors $u$ and $x$ are the lists produced by GHCi.

__________

Lastly, please do note that the class of dynamical systems we have considered in this post is extremely broad. It encompasses difference equations obtained from the time-discretization of differential equations, IIR filters, finite and infinite automata, etc.

### Linear dynamical systems over finite rings II

July 28, 2011

Last week, we studied some properties of the following finite linear dynamical system

$\left[\begin{array}{c} x_1^+\\ x_2^+\end{array}\right] = \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right] \left[\begin{array}{c} x_1 \\ x_2\\\end{array}\right]$

where $x := (x_1, x_2)$ lives in $\mathbb{Z}_4^2$, with $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Let $A \in \mathbb{Z}_4^{2 \times 2}$ be defined by

$A := \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right]$

so that we can rewrite the state update equation as $x^+ = A x$.

In the 1st installment, we obtained qualitative information about the dynamical system from visual inspection of its state transition diagram, which is clearly impossible for finite dynamical systems with state sets of sufficiently large cardinality. Thus, in this post, we will study the same dynamical system using only algebra. The goal is to see how far we can go without knowing what the state transition diagram looks like.

Before we embark on this journey, please take a good look at the addition and multiplication tables of the finite ring $(\mathbb{Z}_4, +, \times)$.
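Since the tables were displayed as an image in the original post, here is a short Python script (my addition) that regenerates them:

```python
# regenerate the addition and multiplication tables of Z_4
for name, op in [("+", lambda a, b: (a + b) % 4),
                 ("*", lambda a, b: (a * b) % 4)]:
    print("\n%s (mod 4):" % name)
    print("    " + " ".join(str(b) for b in range(4)))
    for a in range(4):
        print("%d | " % a + " ".join(str(op(a, b)) for b in range(4)))
```
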

__________

Time reversibility

Suppose there exists a matrix $B \in \mathbb{Z}_4^{2 \times 2}$ such that $B A = I$, where $I \in \mathbb{Z}_4^{2 \times 2}$ is the identity matrix. Then, left-multiplying both sides of the state update equation by $B$, we obtain $B x^+ = B A x = I x$, which is equivalent to $x = B x^+$. Hence, if matrix $B$ exists, we can propagate the state backwards in time, and the dynamical system is time reversible.

Matrix equation $B A = I$ can be written as

$\left[\begin{array}{cc} b_{11} & b_{12}\\ b_{21} & b_{22}\\\end{array}\right] \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right] = \left[\begin{array}{cc} 1 & 0\\ 0 & 1\\\end{array}\right]$

and, vectorizing both sides, we obtain a linear system of equations $(A^T \otimes I)\, \mbox{vec}(B) = \mbox{vec}(I)$, i.e., we have

$\left[\begin{array}{cccc} 2 & 0 & 1 & 0\\ 0 & 2 & 0 & 1\\ 3 & 0 & 0 & 0\\ 0 & 3 & 0 & 0\\\end{array}\right] \left[\begin{array}{c} b_{11}\\ b_{21}\\ b_{12}\\ b_{22}\\\end{array}\right] = \left[\begin{array}{c} 1\\ 0\\ 0\\ 1\\\end{array}\right]$

From the 4th and last equation, $3 b_{21} = 1$, we get $b_{21} = 3$, as $3 \times 3 = 1$ in modulo 4 arithmetic. From the 3rd equation, $3 b_{11} = 0$, we get $b_{11} = 0$. The 2nd equation, $2 b_{21} + b_{22} = 0$, becomes then $2 + b_{22} = 0$, which yields $b_{22} = 2$. Finally, the 1st equation, $2 b_{11} + b_{12} = 1$, becomes $b_{12} = 1$. Let us define $A^{-1} := B$. Hence, we have

$A^{-1} = \left[\begin{array}{cc} 0 & 1\\ 3 & 2\\\end{array}\right]$

which is the (left and right) inverse of matrix $A$. We then have the (backwards in time) state update equation $x = A^{-1} x^+$. If we draw the state transition diagram for this linear dynamical system, we obtain the diagram we obtained last week, but with the direction of the arrows reversed. If you do not believe me, then define $g (x) := A^{-1} x$  and run the following Python script:

# define function g
def g(x):
    g1 = x[1] % 4
    g2 = (3 * x[0] + 2 * x[1]) % 4
    return (g1, g2)

print("State transitions:\n")
for x1 in [0, 1, 2, 3]:
    for x2 in [0, 1, 2, 3]:
        print("g(%d,%d) = (%d,%d)" % (x1, x2, g((x1, x2))[0], g((x1, x2))[1]))
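As an additional sanity check (my addition, in the same spirit as the script above), one can verify that the derived $A^{-1}$ is both a left and a right inverse, and that brute force over all 256 candidate matrices finds no other left inverse:

```python
from itertools import product

A = [[2, 3], [1, 0]]
B = [[0, 1], [3, 2]]  # the matrix A^{-1} derived above

def matmul_mod4(P, Q):
    # 2x2 matrix product over Z_4
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) % 4
             for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
assert matmul_mod4(B, A) == I and matmul_mod4(A, B) == I

# brute force over all 256 candidate matrices confirms the inverse is unique
candidates = [[[b11, b12], [b21, b22]]
              for b11, b12, b21, b22 in product(range(4), repeat=4)]
assert [C for C in candidates if matmul_mod4(C, A) == I] == [B]
print("B is the unique inverse of A over Z_4")
```
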


__________

Fixed points

As mentioned last week, we can find fixed points of the dynamical system $x^+ = A x$ by solving the equation $A x = x$. Since $-1 = 3$ in modulo 4 arithmetic, let us add $3 x$ to both sides of the equation, which yields $(A + 3 I) x = 0_2$, i.e.,

$\left[\begin{array}{cc} 1 & 3\\ 1 & 3\end{array}\right] \left[\begin{array}{c} x_1\\ x_2 \end{array}\right] = \left[\begin{array}{c} 0\\ 0 \end{array}\right]$

and, since one of the equations is redundant, we are left with $x_1 + 3 x_2 = 0$, which is equivalent to $x_1 = x_2$. The set of fixed points is, thus,

$\{ (0,0), (1,1), (2,2), (3,3)\}$.

Fixed points are periodic points of period equal to 1. Let us now find periodic points of period greater than 1.

__________

Periodic points of period equal to 2

To find periodic points of period equal to 2, we must solve the equation $A^2 x = x$, where

$A^2 = \left[\begin{array}{cc} 3 & 2\\ 2 & 3\\\end{array}\right]$.

Once again, let us add $3 x$ to both sides of the equation, which yields $(A^2 + 3 I) x = 0_2$, i.e.,

$\left[\begin{array}{cc} 2 & 2\\ 2 & 2\end{array}\right] \left[\begin{array}{c} x_1\\ x_2 \end{array}\right] = \left[\begin{array}{c} 0\\ 0 \end{array}\right]$

and, eliminating the redundant equation, we are left with $2 x_1 + 2 x_2 = 0$. Since $-2 = 2$ in modulo 4 arithmetic, we finally obtain the equation $2 x_1 = 2 x_2$. If we were solving equations over $\mathbb{R}$, we could just multiply both sides of $2 x_1 = 2 x_2$ by $1 / 2$ and obtain $x_1 = x_2$. However, we are working with the finite ring $(\mathbb{Z}_4, +, \times)$, for which the element 2 does not have a multiplicative inverse! Let us write the equation for various values of $x_2$:

• if $x_2 = 0$, then we obtain the equation $2 x_1 = 0$, which yields $x_1 \in \{0, 2\}$. We thus have points $(0, 0)$ and $(2,0)$.
• if $x_2 = 1$, then we obtain the equation $2 x_1 = 2$, which yields $x_1 \in \{1, 3\}$. We thus have points $(1, 1)$ and $(3,1)$.
• if $x_2 = 2$, then we obtain the equation $2 x_1 = 0$, which yields $x_1 \in \{0, 2\}$. We thus have points $(0, 2)$ and $(2,2)$.
• if $x_2 = 3$, then we obtain the equation $2 x_1 = 2$, which yields $x_1 \in \{1, 3\}$. We thus have points $(1, 3)$ and $(3,3)$.
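The case analysis above can be double-checked by brute force; the following short Python check is my addition:

```python
# enumerate the solutions of 2*x1 = 2*x2 over Z_4
sols = [(x1, x2) for x2 in range(4) for x1 in range(4)
        if (2 * x1) % 4 == (2 * x2) % 4]
print(sols)
assert len(sols) == 8
```
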

To summarize, we have eight periodic points of period equal to 2

$\{ (0,0), (2,0), (1,1), (3,1), (2,2), (0,2), (3,3), (1,3)\}$

of which 4 are fixed points. Removing these fixed points, we are left with 4 periodic points of fundamental period equal to 2

$\{ (2,0), (3,1), (0,2), (1,3)\}$.

We finally conclude that the finite dynamical system under study has two cycles of length equal to 2, namely

$(2,0) \mapsto (0,2) \mapsto (2,0)$

$(3,1) \mapsto (1,3) \mapsto (3,1)$

__________

Periodic points of period equal to 3

To find periodic points of period equal to 3, we must solve the equation $A^3 x = x$, where

$A^3 = \left[\begin{array}{cc} 0 & 1\\ 3 & 2\\\end{array}\right]$.

Note that $A^3 = A^{-1}$. Hence, $A^4 = A^{-1} A = I$, and $A^5 = A$. In order to solve equation $A^3 x = x$, let us (yet once again) add $3 x$ to both sides of the equation, which yields $(A^3 + 3 I) x = 0_2$, i.e.,

$\left[\begin{array}{cc} 3 & 1\\ 3 & 1\end{array}\right] \left[\begin{array}{c} x_1\\ x_2 \end{array}\right] = \left[\begin{array}{c} 0\\ 0 \end{array}\right]$

and, eliminating the redundant equation, we are left with $3 x_1 + x_2 = 0$. Since $-1 = 3$ in modulo 4 arithmetic, we finally obtain the equation $3 x_1 = 3 x_2$. Fortunately, element 3 has a multiplicative inverse, which is itself (as $3 \times 3 = 1$). Multiplying both sides by 3, we obtain $x_1 = x_2$, an equation we solved already when looking for the fixed points. Since the only solutions of this equation are the four fixed points, we conclude that there are no periodic points of fundamental period equal to 3, i.e., there are no cycles of length 3.
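The conclusion (no 3-cycles) can also be confirmed exhaustively. A quick Python check (my addition):

```python
def f(x):
    # the map x -> A x over Z_4, with A = [[2, 3], [1, 0]]
    return ((2 * x[0] + 3 * x[1]) % 4, x[0] % 4)

states = [(a, b) for a in range(4) for b in range(4)]
period3 = [x for x in states if f(f(f(x))) == x]
fixed = [x for x in states if f(x) == x]

# the solutions of A^3 x = x are exactly the four fixed points
assert period3 == fixed == [(0, 0), (1, 1), (2, 2), (3, 3)]
print("no periodic points of fundamental period 3")
```
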

__________

Periodic points of period equal to 4

To find periodic points of period equal to 4, we must solve the equation $A^4 x = x$. However, we know that $A^4 = I$ and, hence, $A^4 x = x$ reduces to $x = x$, which is satisfied by every $x \in \mathbb{Z}_4^2$, of course. In other words, every single point in the state set is a periodic point of period equal to 4. Some of them have fundamental period equal to 1, and some others have fundamental period equal to 2. Removing these two sets of points from the state set, we are left with the periodic points of fundamental period equal to 4

$\{ (1,0), (2,1), (3,2), (0,3), (3,0), (2,3), (1,2), (0,1)\}$.

We finally conclude that the finite dynamical system under study has two cycles of length equal to 4, namely

$(1,0) \mapsto (2,1) \mapsto (3,2) \mapsto (0,3) \mapsto (1,0)$

$(3,0) \mapsto (2,3) \mapsto (1,2) \mapsto (0,1) \mapsto (3,0)$

The hunt for periodic points is over. Since $A^5 = A$, equation $A^5 x = x$ reduces to equation $A x = x$, which we already solved. Likewise, $A^6 x = x$ boils down to $A^2 x = x$, which we already solved too. We have found all the cycles.
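Consistency with $A^4 = I$ can be confirmed by iterating the map four times on every state. A quick Python check (my addition, not part of the original post):

```python
def f(x):
    # the map x -> A x over Z_4, with A = [[2, 3], [1, 0]]
    return ((2 * x[0] + 3 * x[1]) % 4, x[0] % 4)

def iterate(x, k):
    # apply f to x a total of k times
    for _ in range(k):
        x = f(x)
    return x

states = [(a, b) for a in range(4) for b in range(4)]

# every state is periodic with period dividing 4
assert all(iterate(x, 4) == x for x in states)

# the two claimed 4-cycles check out
assert iterate((1, 0), 1) == (2, 1) and iterate((1, 0), 4) == (1, 0)
assert iterate((3, 0), 1) == (2, 3) and iterate((3, 0), 4) == (3, 0)
print("A^4 acts as the identity on all 16 states")
```
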

__________

Concluding remarks

All the qualitative information about the finite dynamical system that we were able to obtain by taking a look at the state transition diagram can instead be obtained algebraically.

Last week, we found the solutions of the equations $A^k x = x$ using sheer brute force, i.e., via exhaustive search over the entire state set. By contrast, in this post we actually did solve the equations using algebra. Intuitively, the latter should be computationally cheaper. If this is correct, then the question is: how much cheaper?

### Linear dynamical systems over finite rings

July 18, 2011

Consider the following linear dynamical system (example 3.4 in [1])

$\left[\begin{array}{c} x_1^+\\ x_2^+\end{array}\right] = \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right] \left[\begin{array}{c} x_1 \\ x_2\\\end{array}\right]$

where the state vector $x := (x_1, x_2)$ travels in the finite state space $\mathbb{Z}_4^2$, with $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Since the state space is finite, let us henceforth call it the state set. Let $A \in \mathbb{Z}_4^{2 \times 2}$ be defined by

$A := \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right]$

so that we can write the state update equation in the more compressed form $x^+ = A x$. For simplicity, let us introduce function $f : \mathbb{Z}_4^2 \to \mathbb{Z}_4^2$ defined by $f (x) := A x$. Hence, we have $x^+ = f (x)$.

Do note that we are working with the finite ring $(\mathbb{Z}_4, +, \times)$, whose addition and multiplication tables are as follows

and, therefore, when we update each state via $x_i^+ = a_{i1} x_1 + a_{i2} x_2$, we use addition and multiplication modulo 4.

When we study discrete-time dynamical systems over $\mathbb{R}$, we are usually interested in finding equilibrium points, fixed points and limit cycles. For dynamical systems over a finite ring, what are the properties of interest? And how can one study them?

__________

State transition graph

The state set, $\mathbb{Z}_4^2$, which has $4^2 = 16$ elements, is a subset of the 2-dimensional integer lattice $\mathbb{Z}^2$, as depicted below

Computing $x^+ = f(x)$ for each $x$ in the state set and forming ordered pairs $(x, f(x))$, we can build the state transition graph, whose vertex set is the state set $\mathbb{Z}_4^2$, and whose edge set is the set $\{(x, f(x))\}_{x \in \mathbb{Z}_4^2}$. A pictorial representation of this graph is shown below

Note that the out-degree of each vertex is 1. This is due to the fact that function $f$ maps states to states, not states to sets of states. In other words, we do not allow non-determinism.

Visual inspection of the state transition diagram allows us to conclude that, interestingly, the in-degree of each vertex is also 1, i.e., function $f$ is injective and surjective. Thus, $f$ is invertible, which means that the dynamical system we are studying is time reversible! Switch the direction of each arrow in the diagram, and we obtain a new state transition diagram. Hence, if we are given the state at a certain time step, we can determine the entire history of the system, both past and future!
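The in-degree claim can be checked without drawing anything: every vertex has in-degree 1 exactly when $f$ permutes the state set. A quick Python check (my addition):

```python
def f(x):
    # f(x) = A x over Z_4, with A = [[2, 3], [1, 0]]
    return ((2 * x[0] + 3 * x[1]) % 4, x[0] % 4)

states = [(a, b) for a in range(4) for b in range(4)]
images = [f(x) for x in states]

# every vertex has in-degree 1 iff f is a bijection on the state set
assert sorted(images) == sorted(states)
print("f is a bijection on Z_4^2")
```
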

A state $x$ is a fixed point if $f(x) = x$, i.e., if $x^+ = x$. Taking a look at the state transition diagram, we see that there are four vertices with self-loops, which means that the system has four fixed points

$\{ (0,0), (1,1), (2,2), (3,3)\}$.

If the system starts at a fixed point $x$, it will remain there forever

$x \mapsto f(x) = x \mapsto f(x) = x \mapsto \dots$

and, thus, the set of all fixed points is an invariant set. Note that a fixed point is a periodic point of period equal to 1.

Observing the state transition diagram again, we notice the existence of loops of various lengths. We have two loops of length equal to 2 and two loops of length equal to 4, depicted below in blue and magenta, respectively

Do note, however, that we also have four loops of length equal to 1, which are the self-loops corresponding to the fixed points, but we already mentioned those.

For the dynamical system under study, we could obtain qualitative information about its behavior by visual inspection of its state transition diagram. Suppose, for example, that we want to study a dynamical system whose state set is $\mathbb{Z}_4^{16}$, which has over four billion states. It is evident that drawing diagrams and counting loops is an approach that does not scale! Can one use Algebra to obtain qualitative information about the dynamical system being studied?

__________

Finding periodic points

If state $x$ is a fixed point, then $f (x) = x$ or, equivalently, $A x = x$, as we defined $f (x) := A x$. Hence, we can find fixed points by solving the equation $A x = x$.

A state $x$ is a periodic point of period equal to $k$ if $f^{(k)} (x) = x$, where $f^{(k)} = f \circ f^{(k-1)}$ and $f^{(1)} = f$. Since $f (x) := A x$, we have that $f^{(k)} (x) = A^k x$. Thus, we can find periodic points of period equal to $k$ by solving the equation $A^k x = x$.

However, do note that fixed points, i.e., periodic points of period equal to 1, are also periodic points of period equal to 2, equal to 3, equal to 4, etc. Therefore, let the fundamental period of a periodic point $x$ be the smallest natural number $k$ for which $A^k x = x$. For example, the set of all periodic points of fundamental period equal to 2 is

$\{x \in \mathbb{Z}_4^2 \mid A^2 x = x \land A x \neq x\}$

and the set of all periodic points of fundamental period equal to 3 is

$\{x \in \mathbb{Z}_4^2 \mid A^3 x = x \land A x \neq x\}$.

However, the periodic points of period equal to 2 are also periodic points of period equal to 4. Hence, the set of periodic points of fundamental period equal to 4 is

$\{x \in \mathbb{Z}_4^2 \mid A^4 x = x \land A^2 x \neq x \land A x \neq x\}$.

But, how do we solve equations of the form $A^k x = x$ over the ring $(\mathbb{Z}_4, +, \times)$? A possibility is to search the whole state set and find the states for which the equations hold. That is precisely what the following Python script does:

# define function f
def f(x):
    f1 = (2 * x[0] + 3 * x[1]) % 4
    f2 = x[0] % 4
    return (f1, f2)

# create state set
X = []
for x1 in [0, 1, 2, 3]:
    for x2 in [0, 1, 2, 3]:
        X.append((x1, x2))

print("State transitions:\n")
for x in X:
    x1, x2, f1, f2 = x[0], x[1], f(x)[0], f(x)[1]
    print("f(%d,%d) = (%d,%d)" % (x1, x2, f1, f2))

# find periodic points of fundamental periods 1, 2, 3, 4
PP1, PP2, PP3, PP4 = [], [], [], []
for x in X:
    if f(x) == x:
        PP1.append(x)
    elif f(f(x)) == x:
        PP2.append(x)
    elif f(f(f(x))) == x:
        PP3.append(x)
    elif f(f(f(f(x)))) == x:
        PP4.append(x)

print("\nLists of periodic points:\n")
print("\nFundamental period equal to 1:\n %s" % PP1)
print("\nFundamental period equal to 2:\n %s" % PP2)
print("\nFundamental period equal to 3:\n %s" % PP3)
print("\nFundamental period equal to 4:\n %s" % PP4)


The output of this script is the following:

State transitions:

f(0,0) = (0,0)
f(0,1) = (3,0)
f(0,2) = (2,0)
f(0,3) = (1,0)
f(1,0) = (2,1)
f(1,1) = (1,1)
f(1,2) = (0,1)
f(1,3) = (3,1)
f(2,0) = (0,2)
f(2,1) = (3,2)
f(2,2) = (2,2)
f(2,3) = (1,2)
f(3,0) = (2,3)
f(3,1) = (1,3)
f(3,2) = (0,3)
f(3,3) = (3,3)

Lists of periodic points:

Fundamental period equal to 1:
[(0, 0), (1, 1), (2, 2), (3, 3)]

Fundamental period equal to 2:
[(0, 2), (1, 3), (2, 0), (3, 1)]

Fundamental period equal to 3:
[]

Fundamental period equal to 4:
[(0, 1), (0, 3), (1, 0), (1, 2), (2, 1), (2, 3), (3, 0), (3, 2)]


Note that there are no periodic points of fundamental period equal to 3, as a quick glance at the state transition diagram will tell.

__________

References

[1] O. Colón-Reyes, A. Jarrah, R. Laubenbacher, B. Sturmfels, Monomial Dynamical Systems over Finite Fields, May 2006.