## Playing with Permutations

Let’s play a game:

You build an ordered list $y = (y_1, y_2, \ldots, y_n)$ of $n$ real numbers, and you conceal it from me. You then compute $c_k = \sum_{i=1}^n y_i^k$ for each $k \in \{1,2,\ldots,n\}$, and you give me the $n$-tuple $c = (c_1, c_2, \ldots, c_n)$. Finally, I build a list $x = (x_1, x_2, \ldots, x_n)$ of $n$ real numbers such that $\sum_{i=1}^n x_i^k = c_k$ for all $k \in \{1,2,\ldots,n\}$. Can I conclude that my list is a permutation of yours (or equal to it)?

__________

Each ordered list of $n$ elements is an $n$-tuple. We can also think of an ordered list of $n$ reals as a vector in $\mathbb{R}^n$, where the list’s $i$-th element is the vector’s $i$-th entry.

The essence of this $2$-player game, where I am player #1 and you're player #2, can be summarized in three stages:

1. player #2 builds an ordered list $y \in \mathbb{R}^n$, but player #1 doesn’t have access to it.
2. player #2 gives player #1 the $n$ “measurements” $c = (c_1, c_2, \ldots, c_n)$, where $c_k = \sum_{i=1}^n y_i^k$.
3. player #1 builds ordered list $x \in \mathbb{R}^n$ from $c = (c_1, c_2, \ldots, c_n)$ such that $\sum_{i=1}^n x_i^k = c_k$ for all $k \in \{1,2,\ldots,n\}$.

We know that it’s always possible to build list $x$, because if $x$ is equal to $y$ or a permutation of $y$, then all $n$ equality conditions

$\begin{array}{rl} \displaystyle\sum_{i=1}^n x_i &= c_1\\ \displaystyle\sum_{i=1}^n x_i^2 &= c_2\\ \vdots\\ \displaystyle\sum_{i=1}^n x_i^n &= c_n\\\end{array}$

will be satisfied. What we don't know is whether any list $x$ that satisfies the equality conditions will be a permutation of $y$. In more rigorous terms: if $x$ and $y$ are permutations of one another, then it is necessary that the $n$ equalities hold. What we would like to know is whether the $n$ equalities holding is sufficient to conclude that $x$ and $y$ are permutations of one another.

But how can player #1 build the desired ordered list? One possibility would be for player #1 to solve the following system of polynomial equations in the variables $x_1, x_2, \ldots, x_n$

$\begin{array}{rl} x_1 + x_2 + \ldots + x_n &= c_1\\ x_1^2 + x_2^2 + \ldots + x_n^2 &= c_2\\ \vdots\\ x_1^n + x_2^n + \ldots + x_n^n &= c_n\\\end{array}$

where $c$ is a constant and $x$ is a variable.
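For small $n$, this system can be handed directly to a computer algebra system. Here is a minimal sketch using SymPy (the function name is my own invention, not part of any library):

```python
import sympy as sp

def find_list_from_power_sums(c):
    """Solve sum_i x_i^k = c_k for k = 1..n; returns all solution tuples.

    Hypothetical helper for illustration: for larger n a symbolic solve
    like this quickly becomes impractical.
    """
    n = len(c)
    xs = sp.symbols(f"x1:{n + 1}")  # symbols x1, ..., xn
    eqs = [sum(x**k for x in xs) - c[k - 1] for k in range(1, n + 1)]
    sols = sp.solve(eqs, xs, dict=True)
    return [tuple(s[x] for x in xs) for s in sols]

# Example: if y = (1, 2), then c = (3, 5).
print(find_list_from_power_sums([3, 5]))
```

As expected, the solver returns every permutation of the hidden list, which is exactly the ambiguity the game allows.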

The game I propose in this post stems from two closely related problems I have been thinking about since January 2008. I have written a few posts on these problems already.


### 9 Responses to “Playing with Permutations”

1. Efrique Says:

This is related to the problem of specifying a distribution by its moments (in this case, you have a discrete distribution).

• Rod Carvalho Says:

Thanks for your input. I had never thought of this problem from that point of view, but it makes sense.

If I understood it right, player 2 would first build a uniform discrete distribution whose probability mass function (pmf) would be

$f(u) = \left\{\begin{array}{cl} 1/n & \textrm{if}\quad{} u \in \{y_1, y_2, \ldots, y_n\}\\ 0 & \textrm{if}\quad{} u \notin \{y_1, y_2, \ldots, y_n\} \end{array}\right.$.

Using Dirac impulse functions, the pmf can be written as a probability density function (pdf)

$f(u) = \displaystyle\sum_{i=1}^n \displaystyle\frac{1}{n} \delta(u - y_i)$.

Player 2 would then compute the first $n$ moments $(M_1, M_2, \ldots, M_n)$

$\begin{array}{rl} M_k &= \displaystyle\int_{-\infty}^{+\infty} u^k f(u) du\\ &= \displaystyle\int_{-\infty}^{+\infty} u^k \left[\displaystyle\sum_{i=1}^n \displaystyle\frac{1}{n} \delta(u - y_i)\right] du\\ &= \displaystyle\sum_{i=1}^n \displaystyle\frac{1}{n} \left[\displaystyle\int_{-\infty}^{+\infty} u^k \delta(u - y_i) du\right] \end{array}$

and since

$\begin{array}{rl} \displaystyle\int_{-\infty}^{+\infty} u^k \delta(u - y_i) du &= \displaystyle\int_{-\infty}^{+\infty} y_i^k \delta(u - y_i) du\\ &= y_i^k \underbrace{\displaystyle\int_{-\infty}^{+\infty} \delta(u - y_i) du}_{=1} = y_i^k\end{array}$

we finally have

$M_k = \displaystyle\sum_{i=1}^n \displaystyle\frac{1}{n} y_i^k = \displaystyle\frac{1}{n} \displaystyle\sum_{i=1}^n y_i^k = \displaystyle\frac{1}{n} c_k$.

Then, $n$-tuple $c = (c_1, c_2, \ldots, c_n)$ is a scaled $n$-tuple of moments:

$(c_1, c_2, \ldots, c_n) = n (M_1, M_2, \ldots, M_n)$.

From $c$, player 1 would build a uniform discrete distribution whose first $n$ moments are $\frac{1}{n}(c_1, c_2, \ldots, c_n)$. What we want to know is whether player 1's distribution is the same as the one built by player 2.

I hope I understood the problem right.
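As a quick sanity check of the relation $(c_1, \ldots, c_n) = n\,(M_1, \ldots, M_n)$, here is a small script in exact arithmetic (the list `y` is just an example I picked):

```python
from fractions import Fraction

def power_sums(y, n):
    # c_k = sum_i y_i^k for k = 1..n
    return [sum(v**k for v in y) for k in range(1, n + 1)]

def moments_uniform(y, n):
    # M_k = (1/n) * sum_i y_i^k : k-th moment of the uniform pmf on y
    size = len(y)
    return [Fraction(sum(v**k for v in y), size) for k in range(1, n + 1)]

y = [1, 2, 4]
c = power_sums(y, len(y))
M = moments_uniform(y, len(y))
assert all(ck == len(y) * Mk for ck, Mk in zip(c, M))
print(c, M)
```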

2. Efrique Says:

I wanted to make sure I had the name right before going on about it too much. This is called the truncated moment problem. See the Wikipedia entry on the moment problem.

The mentioned uniqueness in the Hausdorff moment case would seem (by scaling) to establish it in the truncated case, so perhaps degeneracies are not an issue.

Searching on “the moment problem” finds lots of links. I seem to recall attending a talk on the truncated moment problem around 1984, but I recall no relevant details.

• Rod Carvalho Says:

Thanks for all the info! I had never heard of the moment problem. And, oh well, there are a lot of moment problems.

And it seems that Chebyshev and Markov also worked on it. I googled and found lots of papers on it. Some of them quite old indeed, like you said.

I don’t know whether that work can be used to solve this particular problem I wrote on this post, but it’s always good to find connections with other areas of Mathematics. And, quite frankly, I had never expected that this problem had connections with Probability Theory :-)

3. Efrique Says:

Yes, that’s something like what I was getting at. Which means that the literature on that problem might help you pin down some answers to your questions. Some of the relevant papers may be fairly old.

I’m not familiar with the details for the specific problem you have here, but in general (i.e. typically infinite-dimensional case) the lore is that “moments don’t pin down distributions very well”. In the case where you have as many moments as there are points in the sample space, it might tend to work out in most cases.

To be more specific, in your game you have the right number of equations in the right number of unknowns (so one would expect solutions to exist), but there may be degenerate cases.

Once you get up to 5 points you have a quintic equation, so an algebraic solution may be problematic. $n=1$ and $n=2$ are easy to solve, of course.

4. Sune Kristian Jakobsen Says:

I think I solved the problem in a comment to the post “My Conjecture”.

The idea is that player 1 can use $c_1, c_2, \ldots, c_n$ to calculate the elementary symmetric polynomials of $y_1, y_2, \ldots, y_n$, namely $e_1 = \sum_i y_i$, $e_2 = \sum_{i < j} y_i y_j$, and so on up to $e_n = y_1 y_2 \cdots y_n$, and then he can write down the $n$-th degree polynomial whose roots are $y_1, y_2, \ldots, y_n$. Now he just has to solve this polynomial to find a permutation of the $y$'s.
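This procedure can be sketched in a few lines: Newton's identities convert the power sums into elementary symmetric polynomials, and a root finder does the rest. A minimal sketch (function names are mine; `numpy.roots` stands in for "solve this polynomial", and floating-point roots are only approximate):

```python
import numpy as np

def elementary_from_power_sums(p):
    """Newton's identities: k*e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i."""
    n = len(p)
    e = [1.0]  # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]  # e_1, ..., e_n

def recover_list(c):
    e = elementary_from_power_sums(c)
    # prod_i (x - y_i) = x^n - e_1 x^(n-1) + e_2 x^(n-2) - ...
    coeffs = [1.0] + [(-1) ** k * ek for k, ek in enumerate(e, start=1)]
    return sorted(np.roots(coeffs).real)

# y = (1, 2, 3) gives c = (6, 14, 36)
print(recover_list([6, 14, 36]))
```

Of course, the recovered roots come back in no particular order, which is precisely why player 1 can only hope for a permutation of $y$.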

5. David Speyer Says:

Sune is absolutely correct. If you want explicit equations, here is a method which is easy to remember, or to program in your favorite computer algebra system. Notice the equality

$\prod_{i=1}^n (1-t y_i) = \exp(- \sum_{i=1}^n \sum_{k=1}^{\infty} y_i^k t^k/k )$.

Since I know $\sum_{i=1}^n y_i^k$ for $k=1$ through $n$, I can compute the coefficients of $t^k$ in the exponent for $k=1$ through $n$. Exponentiating as formal power series, that means I can determine the left hand side up to order $t^n$. But the left hand side is a polynomial of degree $n$, so I can compute the left hand side exactly. Now just haul out your favorite polynomial equation solver to find the $y$'s.

Example: Say $n=2$ and you tell me $y_1+y_2=3$ and $y_1^2+y_2^2=5$. Then I know that
$(1-t y_1)(1-t y_2) = \exp(-3t-5t^2/2+O(t^3))$.
The right hand side is
$1-3t-5t^2/2+O(t^3) +(-3t+O(t^2))^2/2 + O(t^3) = 1-3t+2 t^2 + O(t^3)$.
So I know that
$(1-t y_1)(1-t y_2) = 1 - 3t + 2 t^2.$ Factoring, I get $y_1=1$ and $y_2=2$.
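This recipe translates almost verbatim into code: build the truncated exponent and exponentiate it as a formal power series. A sketch in exact arithmetic (the recurrence $m\,a_m = \sum_{j=1}^{m} j\,E_j\,a_{m-j}$ follows from $P' = E' P$ when $P = \exp(E)$; the function name is my own):

```python
from fractions import Fraction

def poly_from_power_sums(c):
    """Coefficients a_0..a_n of prod_i (1 - t*y_i), from c_k = sum_i y_i^k,
    by exponentiating the truncated series -sum_k c_k t^k / k."""
    n = len(c)
    # exponent E(t) = -sum_{k=1}^{n} c_k t^k / k, as a coefficient list
    E = [Fraction(0)] + [Fraction(-ck, k) for k, ck in enumerate(c, start=1)]
    # exp(E) truncated at t^n, via m*a_m = sum_{j=1}^{m} j*E_j*a_{m-j}
    a = [Fraction(1)]
    for m in range(1, n + 1):
        a.append(sum(j * E[j] * a[m - j] for j in range(1, m + 1)) / m)
    return a

# The worked example: c = (3, 5) should give 1 - 3t + 2t^2
print(poly_from_power_sums([3, 5]))
```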

6. David Speyer Says:

A harder to remember, but easier to compute, method is to use Newton’s identities. These can be derived from the above approach by taking logarithmic derivatives of both sides of the equation

$\prod_{i=1}^n (1-t y_i) = \exp(- \sum_{i=1}^n \sum_{k=1}^{\infty} y_i^k t^k/k )$

• Rod Carvalho Says:

I didn’t know that equality, but if I were to derive it, I would start with the equality

$\ln\left[\displaystyle\prod_{i=1}^n (1 - t y_i)\right] = \displaystyle\sum_{i=1}^n \ln(1 - t y_i)$,

then we obtain

$\displaystyle\prod_{i=1}^n (1 - t y_i) = \exp\left(\displaystyle\sum_{i=1}^n \ln(1 - t y_i)\right)$.

Since the Maclaurin series of $\ln(1-u)$ is

$\ln(1-u) = -\displaystyle\sum_{k=1}^{\infty} \frac{u^k}{k}$,

we could thus rewrite the equality as

$\displaystyle\prod_{i=1}^n (1 - t y_i) = \exp\left(- \displaystyle\sum_{i=1}^n \sum_{k=1}^{\infty} \frac{t^k y_i^k}{k}\right)$

which is the equality you started with. Changing the order of the summations, we get

$\displaystyle\prod_{i=1}^n (1 - t y_i) = \exp\left(- \displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} \sum_{i=1}^n y_i^k\right)$.

Since $c_k = \sum_{i=1}^n y_i^k$, we finally have

$\begin{array}{rl}\displaystyle\prod_{i=1}^n (1 - t y_i) &= \exp\left(- \displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} c_k\right)\\ &= \exp\left(- \displaystyle\sum_{k=1}^{n} \frac{t^k}{k} c_k + \mathcal{O}(t^{n+1})\right).\\\end{array}$

Since the Maclaurin series of $e^{-u}$ is

$e^{-u} = 1 - u + \displaystyle\frac{u^2}{2} - \displaystyle\frac{u^3}{3!} + \ldots = \displaystyle\sum_{m=0}^{\infty} (-1)^m \frac{u^m}{m!}$,

we finally obtain

$\begin{array}{rl}\displaystyle\prod_{i=1}^n (1 - t y_i) &= \exp\left(- \displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} c_k\right)\\ &= 1 - \displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} c_k + \frac{1}{2}\left(\displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} c_k\right)^2 - \ldots\\ &= \displaystyle\sum_{m=0}^{\infty} \frac{(-1)^m}{m!} \left(\displaystyle\sum_{k=1}^{\infty} \frac{t^k}{k} c_k\right)^m.\\\end{array}$

Computing the RHS does not seem all that easy. However, we know that the LHS is a polynomial of degree $n$, so the RHS must also be a polynomial of degree $n$.