## Archive for April, 2011

### Dealing cards

April 25, 2011

Imagine that you have a deck of three cards: $2$, $3$, and $4$. You shuffle the deck and deal out the three cards. What is the probability that:

• the second card is even given that the first card is even?
• the first two cards are even given that the third card is even?
• the second card is even given that the first card is odd?
• the second card is odd given that the first card is odd?

(This is problem 1.5.5 in Yates & Goodman [1].)


__________

My solution:

Our experiment consists of shuffling the deck, dealing out the cards, and observing the sequence in which the three cards are dealt. Hence, each outcome or observation is a $3$-tuple

$(\text{1st card}, \text{2nd card}, \text{3rd card})$

and the sample space, the set of all observations, is thus

$S = \{(2,3,4), (2,4,3), (3,2,4), (3,4,2), (4,2,3), (4,3,2)\}$.

Note that $|S| = 3! = 6$. We assume that the six observations are equally likely, i.e., $\mathbb{P}(s) = 1/6$ for all $s \in S$. Let $E_i$ denote the event that the $i$-th card dealt is even numbered, and let $\mathcal{O}_i$ denote the event that the $i$-th card dealt is odd numbered. Recall that an event is a subset of the sample space $S$ and, thus, the event space (the set of all events) is $2^S$. Hence, we have

$E_1 = \{(2,3,4), (2,4,3), (4,2,3), (4,3,2)\}$

$E_2 = \{(2,4,3), (3,2,4), (3,4,2), (4,2,3)\}$

and $E_1 \cap E_2 = \{(2,4,3), (4,2,3)\}$. Hence

$\mathbb{P}(E_1) = \mathbb{P}\left(\displaystyle\bigcup_{s \in E_1} \{s\}\right) = \displaystyle\sum_{s \in E_1} \mathbb{P}(s) = 4/6$

and $\mathbb{P}(E_1 \cap E_2) = 2/6$. Thus, the probability that the 2nd card is even given that the 1st card is even is

$\mathbb{P}(E_2 \mid E_1) = \displaystyle\frac{\mathbb{P}(E_1 \cap E_2)}{\mathbb{P}(E_1)} = \displaystyle\frac{2/6}{4/6} = \frac{1}{2}$.

Interpretation: if the 1st card is even numbered, then after it is dealt (and observed) only two cards remain in the deck, one even numbered and one odd numbered, and these two are equally likely. Hence, we conclude that the probability that the 2nd card is even given that the 1st card is even is $1/2$.

Since there are only two even numbered cards in the deck, we have $E_1 \cap E_2 \cap E_3 = \emptyset$. Therefore, the probability that the first two cards are even given that the 3rd card is even is

$\mathbb{P}(E_1 \cap E_2 \mid E_3) = \displaystyle\frac{\mathbb{P}(E_1 \cap E_2 \cap E_3)}{\mathbb{P}(E_3)} = \displaystyle\frac{\mathbb{P}(\emptyset)}{\mathbb{P}(E_3)} = 0$.

Interpretation: if the 3rd card is even numbered, then of the two cards dealt before it one must be odd and the other even, as the deck contains only two even numbered cards. Thus, it’s impossible for all three cards to be even, and $\mathbb{P}(E_1 \cap E_2 \mid E_3) = 0$.

Note that $\mathcal{O}_1 = \{(3,2,4), (3,4,2)\} \subset E_2$. Hence, $E_2 \cap \mathcal{O}_1 = \mathcal{O}_1$. Therefore, the probability that the 2nd card is even given that the 1st card is odd is

$\mathbb{P}(E_2 \mid \mathcal{O}_1) = \displaystyle\frac{\mathbb{P}(E_2 \cap \mathcal{O}_1)}{\mathbb{P}(\mathcal{O}_1)} = \displaystyle\frac{\mathbb{P}(\mathcal{O}_1)}{\mathbb{P}(\mathcal{O}_1)} = 1$.

Interpretation: if the 1st card is odd numbered, then only even numbered cards remain in the deck. Thus, we know for sure that the 2nd card will be even if the 1st is odd.

Since we only have one odd numbered card in the deck, we can’t have an outcome with two odd numbered cards, i.e., $\mathcal{O}_1 \cap \mathcal{O}_2 = \emptyset$. Hence, the probability that the 2nd card is odd given that the 1st card is odd is

$\mathbb{P}(\mathcal{O}_2 \mid \mathcal{O}_1) = \displaystyle\frac{\mathbb{P}(\mathcal{O}_1 \cap \mathcal{O}_2)}{\mathbb{P}(\mathcal{O}_1)} = \displaystyle\frac{\mathbb{P}(\emptyset)}{\mathbb{P}(\mathcal{O}_1)} = 0$.

Interpretation: if the 1st card is odd numbered, then only even numbered cards will remain in the deck. Thus, it’s impossible for the 2nd card to be odd when the 1st one is odd.
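All four answers can be double-checked by brute-force enumeration of the sample space. The sketch below (plain Python, standard library only) lists the $3! = 6$ equally likely permutations and computes each conditional probability as a ratio of counts:

```python
from itertools import permutations

# The six equally likely outcomes (1st card, 2nd card, 3rd card).
outcomes = list(permutations([2, 3, 4]))

def prob(event):
    # P(A) = |A| / |S| under the equally-likely assumption.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

def cond_prob(a, b):
    # P(A | B) = P(A and B) / P(B).
    return prob(lambda o: a(o) and b(o)) / prob(b)

even = lambda card: card % 2 == 0

p1 = cond_prob(lambda o: even(o[1]), lambda o: even(o[0]))                  # 1/2
p2 = cond_prob(lambda o: even(o[0]) and even(o[1]), lambda o: even(o[2]))  # 0
p3 = cond_prob(lambda o: even(o[1]), lambda o: not even(o[0]))             # 1
p4 = cond_prob(lambda o: not even(o[1]), lambda o: not even(o[0]))         # 0
print(p1, p2, p3, p4)
```

The same six-line enumeration scales to any small deck, which makes it a handy sanity check before trusting a pencil-and-paper conditional probability.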

__________

References

[1] Roy D. Yates, David J. Goodman, Probability and stochastic processes: a friendly introduction for electrical and computer engineers, 2nd edition, Wiley, 2004.

### The social construction of Hungarian genius

April 24, 2011

Hungary is a most intriguing country. Among the many mysteries that surround Magyarország, the Magyar language is one of the most fascinating. Hungarian history, albeit occasionally tragic, is also quite interesting. In particular, how was it possible for Hungary to spawn so many brilliant intellectuals and artists during the (relatively) short life of the Austro-Hungarian dual monarchy (1867-1918)?

In The Social Construction of Hungarian Genius [1], Tibor Frank attempts to answer this question by providing “a broader background — historical, social, intellectual, and cultural — to understanding the admirable creativity in early 20th century Hungary, with the mathematician and scientist John von Neumann (1903-1957) in center focus”. Frank argues that the social and cultural transformations that Hungary went through after the Compromise of 1867, among them:

• the decline of feudalism and the emergence of a middle class,
• the assimilation of Jews and ethnic Germans into Hungarian society (a process he calls “Magyarization”),
• the influence of German culture on Hungarian mathematics, science, music, and education,

created the conditions for a new intellectual elite to emerge. The leadership of József Eötvös and his son Loránd Eötvös, the excellence of high-school teachers such as László Rátz and Sándor Mikola, the foundation of the Középiskolai Matematikai Lapok by Dániel Arany, and the establishment of the Eötvös Competition led to the discovery, nurturing, and training of exceptional students such as von Neumann, von Kármán, Wigner, Szilárd, Teller, among many others.

In 1918, with the end of World War I and the collapse of the Austro-Hungarian monarchy, Hungary underwent a period of political turmoil that forced part of its intellectual elite into exile, first in Germany and later in the United States. Frank’s book [2] focuses on these migrations.

__________

References

[1] Tibor Frank, The Social Construction of Hungarian Genius (1867-1930), background paper for the panel discussion “Budapest: The Golden Years” in “The von Neumann Memorial Lectures”, Princeton, 2007.

[2] Tibor Frank, Double Exile: migrations of Jewish-Hungarian professionals through Germany to the United States, 1919-1945, Peter Lang, 2009.

### Sailing in state space

April 11, 2011

Consider the following controlled (autonomous) dynamical system [1]

$\dot{x} (t) = f (x (t), u (t))$

where $x : [0, \infty) \to \mathbb{R}^n$ is the state trajectory, $u : [0,\infty) \to \mathcal{U}$ is the control input history, $\mathcal{U} \subseteq \mathbb{R}^m$ is the set of admissible control inputs, and $f : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$ is a (known) vector field.

If we know the control input $u (t)$ for all $t \geq 0$, then we can integrate the ODE above with initial condition $x(0) = x_0$ and obtain the state trajectory $x : [0, \infty) \to \mathbb{R}^n$, as follows

$x (t) = x_0 + \displaystyle \int_{0}^{t} f \left(x (\tau), u (\tau) \right) \mathrm{d} \tau$

which can be represented pictorially by a single streamline in $\mathbb{R}^n$.

I am (quite explicitly) alluding to fluid flow. Just like a cork on the ocean will follow a certain path depending on the velocity field (i.e., “ocean currents”), a point in state space will flow along a certain streamline.

__________

Let us fix the control input, i.e., $u (t) = \bar{u}$ for all $t \geq 0$, and define

$\begin{array}{rl} v^{(\bar{u})} : \mathbb{R}^n & \to \mathbb{R}^n\\ x & \mapsto f(x, \bar{u})\\\end{array}$

For the sake of simplicity, let us consider $\mathcal{U} = \{-1, +1\}$. Since $\bar{u}$ can only take two values, we have two vector fields, $v^{(+1)}(x) := f(x, +1)$ and $v^{(-1)}(x) := f(x, -1)$. Integrating the following ODEs

$\begin{array}{rl} \dot{x} (t) &= v^{(+1)} ( x(t) )\\ \dot{x} (t) &= v^{(-1)} ( x(t) )\\\end{array}$

for various initial conditions, we obtain two families of streamlines, as depicted below

For each fixed control input and collection of initial conditions, we will have a family of streamlines in state space.

Imagine that we have the following control input history

$u (t) = \begin{cases} +1, & t \in [0, t^{\star})\\ -1, & t \geq t^{\star}\\\end{cases}$

where we toggle the control input at time $t^{\star} > 0$, thus interrupting the flow along a “blue” streamline and initiating the flow along a “pink” streamline. Pictorially, we have

If the control input is fixed, say, $\bar{u} = +1$, then given an initial condition $x(0)$, we will flow along the “blue” streamline that passes through $x(0)$. This streamline is $1$-dimensional and does not allow us to “explore” the $n$-dimensional state space. However, if we switch between $\bar{u} = +1$ and $\bar{u} = -1$, then we are able to “travel” to other regions of the state space. Allowing the control input to take values in $\mathcal{U} = [-1,+1]$ would give us even more freedom.
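As a concrete illustration, take $n = 2$ and a hypothetical vector field $f(x, u) = (x_2, -x_1 + u)$, a controlled harmonic oscillator; the system, the step size, and the switching time $t^{\star} = 2$ below are all assumptions chosen for illustration, not anything from the text above. The sketch integrates the switched system with forward Euler:

```python
import math

def f(x, u):
    # Hypothetical vector field: a controlled harmonic oscillator.
    return (x[1], -x[0] + u)

def simulate(x0, u_of_t, T, h=1e-3):
    """Forward-Euler integration of dx/dt = f(x, u(t)) from x(0) = x0."""
    x, t = list(x0), 0.0
    traj = [(t, tuple(x))]
    for _ in range(int(round(T / h))):
        u = u_of_t(t)
        dx = f(x, u)
        x = [x[0] + h * dx[0], x[1] + h * dx[1]]
        t += h
        traj.append((t, tuple(x)))
    return traj

t_star = 2.0  # assumed switching time
u_switched = lambda t: +1.0 if t < t_star else -1.0

# Flow along a "+1" streamline until t*, then along a "-1" streamline.
traj = simulate((0.0, 0.0), u_switched, T=5.0)
print(traj[-1])
```

For constant $u = +1$ and $x(0) = 0$ the exact solution is $x(t) = (1 - \cos t, \sin t)$, which gives a quick sanity check on the integrator before trusting the switched trajectory.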

Lastly, we arrive at a most childish idea: in some cases, controller design can be viewed as shaping the streamlines differently in different parts of the state space. This is childish because it is purely conceptual. One obviously cannot design optimal controllers by doodling with crayons.

__________

References

[1] Hassan K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, 2002.

### A convoluted convolution

April 2, 2011

Consider the signal $x(t) = 1/t$. What is the convolution of $x$ with itself?

__________

My solution:

From the definition of convolution [1], we have

$(x * x) (t) = \displaystyle \int_{-\infty}^{+\infty}\frac{d \tau}{\tau (t - \tau)} = \displaystyle \frac{1}{t} \int_{-\infty}^{+\infty} \left(\frac{1}{\tau} + \frac{1}{t - \tau}\right) d \tau$.

Unfortunately, the decomposed integral on the right-hand side is plagued by convergence problems. To make matters worse, $x(t)$ is not even defined at $t = 0$. Fortunately, the original integral is a familiar one: it’s a scaled Hilbert transform of $x$. It is well-known [1] that the Fourier transform of $1 / \pi t$ (the impulse response of a Hilbert transformer) is

$\mathcal{F} \left(\displaystyle\frac{1}{\pi t}\right) = - j \,\text{sgn} (\omega) =\begin{cases} -j, & \omega > 0\\ +j, & \omega < 0\end{cases}$

where $\text{sgn}(\cdot)$ is the signum function. Thus, we have that the Fourier transform of $x$ is

$\mathcal{F} (x) = -j \pi \, \text{sgn} (\omega) = \begin{cases} -j \pi, & \omega > 0\\ +j \pi, & \omega < 0\end{cases}$.

Convolution in the time domain corresponds to multiplication in the frequency domain. Hence, the Fourier transform of the convolution is

$\mathcal{F} (x * x) = \left(\mathcal{F} (x)\right)^2 = \left(-j \pi \, \text{sgn} (\omega)\right)^2 = - \pi^2$

and, taking the inverse Fourier transform, we finally obtain

$(x * x) (t) = \mathcal{F}^{-1} (- \pi^2) = -\pi^2 \delta (t)$

where $\delta (t)$ is the Dirac delta. This is a prime example of a problem that Signal Processing professors in engineering departments tend to love, but that drives (some) mathematicians up the wall.
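The transform pair underlying this argument can be sanity-checked numerically. Since $1/(\pi \tau)$ is odd, the real part of its Fourier integral vanishes as a principal value, and the imaginary part reduces to $-\frac{1}{\pi}\int \frac{\sin(\omega \tau)}{\tau}\, d\tau$. The midpoint-rule sketch below (the truncation radius and step size are arbitrary choices) checks that this tends to $\mp 1$ for $\omega \gtrless 0$:

```python
import math

def ft_imag(omega, R=200.0, h=0.01):
    """Imaginary part of F{1/(pi t)} at omega, via a truncated midpoint rule.

    The integrand sin(omega*tau)/tau has a removable singularity at tau = 0,
    so a midpoint grid straddling the origin causes no trouble.
    """
    n = int(round(2 * R / h))
    total = 0.0
    for k in range(n):
        tau = -R + (k + 0.5) * h
        total += math.sin(omega * tau) / tau
    return -total * h / math.pi

print(ft_imag(+2.0))  # approximately -1, i.e., F = -j for omega > 0
print(ft_imag(-2.0))  # approximately +1, i.e., F = +j for omega < 0
```

The slow $1/R$ decay of the truncation error is itself a symptom of the convergence issues mentioned above: the transform only exists in the principal-value / distributional sense.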

__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1996.