## Archive for September, 2008

### Bandwidth-Sharing Games

September 18, 2008

In chapter 1 of Algorithmic Game Theory, Éva Tardos and Vijay Vazirani discuss the well-known Tragedy of the Commons in the context of bandwidth sharing:

Example: Suppose that $n$ players would like to have part of a shared resource: each player wants to send information along a shared channel of known maximum capacity, say $1$. In this game each player has an infinite set of strategies, where player $i$'s strategy is to send $x_i \in [0,1]$ units of flow along the channel.

Assume that each player would like to have a large fraction of the bandwidth, but assume also that the quality of the channel deteriorates with the total bandwidth used. We will describe this game by a simple model, using a benefit or payoff function for each set of strategies. If the total bandwidth $\sum_j x_j$ exceeds the channel capacity, no player gets any benefit. If $\sum_j x_j < 1$ then the value for player $i$ is $x_i (1 - \sum_j x_j)$. This models exactly the kind of trade-off we have in mind: the benefit for a player deteriorates as the total assigned bandwidth increases, but it increases with his own share (up to a point).

__________

In my humble opinion, the authors’ original wording is a bit confusing, so I took the liberty of slightly rephrasing their example.

The authors’ definition of stable strategies is:

Definition: a set of strategies is stable if all players are playing their optimal selfish strategy, given the strategies of all other players.

Using some quick, back-of-the-envelope calculations, the authors show that the stable strategy for each player $i$ is to send

$x_i = \displaystyle\frac{1}{2}\left(1 - \displaystyle\sum_{j \neq i} x_j\right)$

units of flow. Note that each player’s optimal selfish strategy depends on all other players’ strategies. There’s a unique stable solution: each player $i$ sends

$x_i = \displaystyle\frac{1}{n+1}$

units of flow. The total bandwidth being used is thus

$\displaystyle\sum_{i = 1}^n x_i = \displaystyle\frac{n}{n+1}$,

which is less than the channel capacity.
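This stable solution can be sanity-checked numerically: fix all but one player at $1/(n+1)$ and scan the remaining player's payoff over a fine grid. The sketch below is my own illustration (the helper name `payoff` is mine, not from the book):

```python
# Check that sending 1/(n+1) is optimal for a player when the other
# n-1 players each send 1/(n+1). The payoff is x_i * (1 - total flow),
# and zero once the total flow reaches the channel capacity of 1.

def payoff(x_i, others_total):
    total = x_i + others_total
    return x_i * (1 - total) if total < 1 else 0.0

n = 5
others_total = (n - 1) / (n + 1)   # the other n-1 players each send 1/(n+1)

# Scan a fine grid over [0, 1] for the remaining player's best response.
grid = [k / 10000 for k in range(10001)]
best = max(grid, key=lambda x: payoff(x, others_total))

print(best)   # close to 1/(n+1) = 1/6
```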

The authors omit a rigorous derivation of this solution (due to time and editorial constraints, I suppose). Therefore, I figured I could present here a more detailed, step-by-step derivation.

__________

Introductory Remarks

Let $\mathcal{I} = \{1, 2, \ldots, n\}$ be a finite set of players. The set of available actions (pure strategies) for each player $i \in \mathcal{I}$ is $\mathcal{X}_i = [0,1]$. We denote by $x_i \in \mathcal{X}_i$ an action for player $i$, and by

$x_{-i} = (x_1, x_2, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)$

the vector of actions for all players in $\mathcal{I}$ except $i$. The $n$-tuple $x = (x_1, x_2, \ldots, x_n)$ is an action profile or outcome of the game, i.e., $x$ contains the actions (pure strategies) chosen by all players in $\mathcal{I}$. We denote by

$\mathcal{X} = \displaystyle\prod_{i \in \mathcal{I}} \mathcal{X}_i = [0,1]^n$

the set of action profiles (set of outcomes), and by

$\mathcal{X}_{-i} = \displaystyle\prod_{j \in \mathcal{I} \setminus \{i\}} \mathcal{X}_j = [0,1]^{n-1}$

the set of action profiles for all players except $i$. Note that

$x_i \in \mathcal{X}_i$, $x \in \mathcal{X}$, and $x_{-i} \in \mathcal{X}_{-i}$.

What we have is a one-shot simultaneous move game:

• it’s a one-shot game because there’s only one “round” or “stage”, and thus the players only have one chance to choose their actions.
• it’s a simultaneous move game because all players simultaneously choose an action $x_i$ from their set of available actions $\mathcal{X}_i = [0,1]$.

Each particular outcome benefits each player differently. Given two outcomes in $\mathcal{X}$, player $i$ will (in general) prefer one over the other. This preference can be encoded in each player’s utility function $u_i: \mathcal{X} \rightarrow \mathbb{R}$, which is defined as

$u_i(x) = \left\{\begin{array}{cl} x_i \displaystyle\left(1 - \displaystyle\sum_{j \in \mathcal{I}} x_j\right) & \textrm{if} \quad \sum_j x_j < 1\\ 0 & \textrm{if} \quad{} \sum_j x_j \geq 1\end{array}\right.$

but, for the sake of simplicity, we shall assume that the condition $\sum_j x_j < 1$ holds, so we can write the utility function simply as

$u_i(x) = x_i \displaystyle\left(1 - \displaystyle\sum_{j \in \mathcal{I}} x_j\right)$.

Let $\mathbf{1}_n = (1, 1, \ldots, 1)$ be the $n$-dimensional vector of ones. Then, the utility function for player $i \in \mathcal{I}$ can be written as

$u_i(x) = x_i \displaystyle\left(1 - \mathbf{1}_n^T x\right)$,

where $\mathbf{1}_n^T x$ is the total bandwidth being used. Alternatively, we can consider a utility function $u_i: \mathcal{X}_i \times \mathcal{X}_{-i} \rightarrow \mathbb{R}$ defined as

$u_i(x_i, x_{-i}) = x_i \left(1 - x_i - \mathbf{1}_{n-1}^T x_{-i}\right)$.

Since this is a simultaneous move game, all players must choose their actions in a simultaneous manner. It’s therefore impossible for any player to know the other players’ actions before choosing his own action.

Assumption: we assume that all players seek to maximize their utility. This implies, of course, that each player knows his own utility function.

If player $i$ somehow knew the other players’ actions, his best response would be to choose action $x_i = B_i(x_{-i})$, where mapping $B_i: \mathcal{X}_{-i} \rightarrow \mathcal{X}_i$ is defined as

$B_i(x_{-i}) = \{ x_i \in \mathcal{X}_i \mid u_i(x_i, x_{-i}) \geq u_i(x_i', x_{-i}), \forall x_i' \in \mathcal{X}_i\}$.

In other words, given $x_{-i}$, player $i$'s best move is to choose action $x_i = B_i(x_{-i})$, so that his utility is maximized. The maximum utility that player $i$ can attain (given $x_{-i}$) is $u_i(B_i(x_{-i}), x_{-i})$.

__________

My Derivation

Note that, given $x_{-i}$, player $i$'s best response is

$B_i(x_{-i}) = \displaystyle\arg \max_{x_i \in \mathcal{X}_i} u_i(x_i, x_{-i})$.

Assuming that $\sum_j x_j < 1$, we have

$B_i(x_{-i}) = \displaystyle\arg \max_{x_i \in \mathcal{X}_i} \left\{x_i \left(1 - x_i - \mathbf{1}_{n-1}^T x_{-i}\right)\right\}$.

The maximizer is computed by seeking points where the partial derivative of $u_i(x_i, x_{-i})$ with respect to $x_i$ vanishes. Note that

$\displaystyle\frac{\partial u_i}{\partial x_i} (x_i, x_{-i}) = (1 - \mathbf{1}_{n-1}^T x_{-i}) - 2 x_i$,

and, since $u_i$ is concave in $x_i$ (its second partial derivative with respect to $x_i$ is $-2 < 0$), the unique maximizer is the point where $\frac{\partial u_i}{\partial x_i} (x_i, x_{-i})$ vanishes, which yields

$x_i = B_i(x_{-i}) = \displaystyle \frac{1 - \mathbf{1}_{n-1}^T x_{-i}}{2}$.
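One way to see a stable solution emerge from this best-response formula is to iterate it: starting from arbitrary flows, each player in turn switches to $B_i(x_{-i})$. The iteration itself is my own illustration (it amounts to a Gauss–Seidel sweep over the best-response equations; only the formula for $B_i$ comes from the text), and the sequential update order matters:

```python
# Sequential best-response dynamics for the bandwidth-sharing game.
# Each player in turn plays B_i(x_{-i}) = (1 - sum_{j != i} x_j) / 2.
# Updating players one at a time (Gauss-Seidel style) converges here;
# updating all players simultaneously need not.

n = 4
x = [0.9, 0.1, 0.5, 0.3]   # arbitrary starting flows

for _ in range(200):
    for i in range(n):
        x[i] = (1 - (sum(x) - x[i])) / 2

print(x)   # each entry converges to 1/(n+1) = 0.2
```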

We assumed that all players are utility maximizers. Let us suppose that every player chooses a best response action, i.e., $x_i = B_i(x_{-i})$ for all $i \in \mathcal{I}$. The action profile $x \in \mathcal{X}$ can then be computed by solving the following system of linear equations (“linear system”)

$\left[\begin{array}{c}x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{array}\right] = \displaystyle\frac{1}{2} \left[\begin{array}{c} 1\\ 1\\ 1\\ \vdots\\ 1\end{array}\right] - \displaystyle\frac{1}{2} \left[\begin{array}{ccccc} 0 & 1 & 1 & \ldots & 1\\ 1 & 0 & 1 & \ldots & 1\\ 1 & 1 & 0 & \ldots & 1\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & 1 &\ldots & 0 \end{array} \right] \left[\begin{array}{c}x_1\\ x_2\\ x_3\\ \vdots\\ x_n\end{array}\right]$

which can be written more compactly as

$x = \frac{1}{2} \mathbf{1}_n - \frac{1}{2} (\mathbf{1}_n \mathbf{1}_n^T - I_n) x$,

where $I_n$ is the $n \times n$ identity matrix. Rearranging the terms, the system becomes

$(I_n + \mathbf{1}_n \mathbf{1}_n^T) x = \mathbf{1}_n$.

It can be shown that $(I_n + \mathbf{1}_n \mathbf{1}_n^T)$ is invertible (see Appendix A); therefore, the solution of the linear system exists and is unique:

$x = (I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} \mathbf{1}_n$.
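The solution can also be obtained numerically; here is a minimal check with NumPy (my tooling choice, not the post's):

```python
import numpy as np

# Build A = I_n + 1_n 1_n^T and solve A x = 1_n directly.
n = 7
A = np.eye(n) + np.ones((n, n))
b = np.ones(n)
x = np.linalg.solve(A, b)

print(np.allclose(x, 1 / (n + 1)))   # True: every entry equals 1/(n+1)
```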

We can use a “trick” to solve the linear system. Note that

$(I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} (I_n + \mathbf{1}_n \mathbf{1}_n^T) = I_n$

and therefore

$(I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} = I_n - (I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} \mathbf{1}_n \mathbf{1}_n^T$.

The (unique) solution of the linear system can thus be written as

$\begin{array}{rl} x &= \displaystyle (I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} \mathbf{1}_n\\ &= \mathbf{1}_n - \underbrace{(I_n + \mathbf{1}_n \mathbf{1}_n^T)^{-1} \mathbf{1}_n}_{= x}\underbrace{\mathbf{1}_n^T \mathbf{1}_n}_{= n}.\end{array}$

Finally, we have

$(n+1) x = \mathbf{1}_n$,

which yields

$x = \left(\frac{1}{n+1}\right) \mathbf{1}_n$.

When all players choose best response actions, $x_i = \frac{1}{n+1}$ for each $i \in \mathcal{I}$. This is the same solution that the authors derived using back-of-the-envelope calculations. Note that $\sum_j x_j = \frac{n}{n+1} < 1$, so our assumption that the total bandwidth being used is less than the channel capacity does hold.

__________

Tragedy of the Commons

The value of each player’s utility function is

$u_i(x) \mid_{x = \left(\frac{1}{n+1}\right) \mathbf{1}_n} = \displaystyle\frac{1}{n+1} \left(1 - \frac{n}{n+1}\right) = \frac{1}{(n+1)^2}$.

The sum of all players’ utility functions is

$\displaystyle \sum_{i \in \mathcal{I}} u_i(x) \mid_{x = \left(\frac{1}{n+1}\right) \mathbf{1}_n} = \frac{n}{(n+1)^2}$, which is approximately $\frac{1}{n}$ for large $n$.
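These values are easy to confirm in exact arithmetic with Python's `fractions` module (my own verification, not something from the post):

```python
from fractions import Fraction

# Utility of each player at the stable solution x_i = 1/(n+1).
n = 10
x_i = Fraction(1, n + 1)
u_i = x_i * (1 - n * x_i)   # x_i * (1 - total bandwidth)

print(u_i)       # 1/121, i.e. 1/(n+1)^2
print(n * u_i)   # 10/121, roughly 1/n for large n
```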

As pointed out by Éva Tardos and Vijay Vazirani, this solution is a tragedy: when all players choose best response actions, the utility for each player is very low. Moreover, the more players try to use the channel, the less “value” each player gets (which is intuitive).

This example illustrates the Tragedy of the Commons: each player benefits from using the shared channel and wants to use it as much as possible, while the “costs” of using the channel are distributed among all of those who use it. Players are selfish and want to maximize their own utility, which leads to over-exploitation of the shared resource. Selfishness does not always pay off.

__________

Appendix A

We would like to show that matrix $I_n + \mathbf{1}_n \mathbf{1}_n^T$ is invertible.

For starters, note that the $n^2$ entries of $n \times n$ matrix $\mathbf{1}_n \mathbf{1}_n^T$ are ones. Therefore, the columns/rows of $\mathbf{1}_n \mathbf{1}_n^T$ are not linearly independent. In fact, since all columns/rows of $\mathbf{1}_n \mathbf{1}_n^T$ are the same, we have $\textrm{rank}(\mathbf{1}_n \mathbf{1}_n^T) = 1$. The nullspace of $\mathbf{1}_n \mathbf{1}_n^T$ has dimension $n-1$, and therefore $n-1$ of the $n$ eigenvalues of $\mathbf{1}_n \mathbf{1}_n^T$ are zero.

Note also that matrix $\mathbf{1}_n \mathbf{1}_n^T$ is symmetric, and thus, its eigenvalues are real and its eigenvectors are orthogonal. Note that

$\mathbf{1}_n \mathbf{1}_n^T = n \left(\frac{1}{\sqrt{n}} \mathbf{1}_n\right)\left(\frac{1}{\sqrt{n}} \mathbf{1}_n\right)^T$

can be regarded as an eigendecomposition of $\mathbf{1}_n \mathbf{1}_n^T$: the only nonzero eigenvalue of $\mathbf{1}_n \mathbf{1}_n^T$ is $\lambda_1 = n$, and the eigenvector associated with it is $q_1 = \frac{1}{\sqrt{n}} \mathbf{1}_n$. Matrix $\mathbf{1}_n \mathbf{1}_n^T$ has $n-1$ zero eigenvalues: $\lambda_2 = \lambda_3 = \ldots = \lambda_n = 0$. Consequently, the nullspace of $\mathbf{1}_n \mathbf{1}_n^T$ is an $(n-1)$-dimensional linear subspace spanned by an orthonormal vector basis $\{q_2, q_3, \ldots, q_n\}$. We now form $n \times n$ matrices

$Q = \displaystyle [q_1 \mid q_2 \mid \ldots \mid q_n]$

and $\Lambda = \textrm{diag}(n, 0, \ldots, 0)$. We thus have an eigendecomposition

$\mathbf{1}_n \mathbf{1}_n^T = Q \Lambda Q^T$.

Since $\{q_2, q_3, \ldots, q_n\}$ is an orthonormal basis of the nullspace, and $q_1$ is a unit vector orthogonal to it, the columns of $Q$ form an orthonormal basis of $\mathbb{R}^n$; hence $Q$ is an orthogonal matrix and $Q Q^T = Q^T Q = I_n$. Therefore

$I_n + \mathbf{1}_n \mathbf{1}_n^T = Q Q^T + Q \Lambda Q^T = Q (I_n + \Lambda) Q^T$.

Finally, we can conclude that the eigenvalues of $I_n + \mathbf{1}_n \mathbf{1}_n^T$ are the eigenvalues of $\mathbf{1}_n \mathbf{1}_n^T$ plus one

$\lambda_i (I_n + \mathbf{1}_n \mathbf{1}_n^T) = 1 + \lambda_i$

and thus the spectrum of matrix $I_n + \mathbf{1}_n \mathbf{1}_n^T$ is $\{n+1, 1, \ldots, 1\}$. Since none of the eigenvalues of $I_n + \mathbf{1}_n \mathbf{1}_n^T$ is zero, we can conclude that matrix $I_n + \mathbf{1}_n \mathbf{1}_n^T$ is invertible.
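The claimed spectrum is easy to confirm numerically (again with NumPy, my choice of tool):

```python
import numpy as np

# Eigenvalues of I_n + 1_n 1_n^T should be {n+1, 1, ..., 1}.
n = 6
M = np.eye(n) + np.ones((n, n))
eigvals = np.sort(np.linalg.eigvalsh(M))   # M is symmetric

print(eigvals)   # approximately [1, 1, 1, 1, 1, 7]
```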

### Parasitic Computing

September 17, 2008

Parasitic Computing is a scheme in which standard communication protocols (such as TCP/IP) can be exploited to perform computation with the communication infrastructure. This scheme may allow a parasite computer to “steal” CPU cycles from another computer, thus gaining access to potentially massive amounts of computational power at “zero” cost. This computing paradigm was introduced in a paper submitted to Nature by a team at the University of Notre Dame.

Here’s a schematic diagram from the paper depicting a parasite node (in green) targeting web servers (in blue):

[ diagram courtesy of Vincent Freeh et alia ]

where ALU stands for Arithmetic Logic Unit, and NIF stands for Network InterFace. More detailed information can be found on Notre Dame's Parasitic Computing page.

### Curta Calculating Machines

September 10, 2008

Until fairly recently, I had never heard of Curta calculators. These little mechanical marvels were invented by Curt Herzstark (1902-1988) in the late 1930s, and from the late 1940s until the early 1970s they were popular portable calculators. Eventually, they were replaced by portable electronic calculators.