## Posts Tagged ‘SymPy’

### Deciding the convexity of semialgebraic sets II

June 23, 2012

Let $S_3 \subset \mathbb{R}^{3 \times 3}$ denote the set of real symmetric $3 \times 3$ matrices. We introduce a matrix-valued function $A: \mathbb{R}^3 \to S_3$ defined by

$A (x) = \left[\begin{array}{ccc} 1 & x_1 & x_2\\ x_1 & 1 & x_3\\ x_2 & x_3 & 1\end{array}\right]$

The elliptope $E_3 \subset \mathbb{R}^3$ is defined by

$E_3 = \left\{ x \in \mathbb{R}^3 \mid A (x) \succeq 0 \right\}$

where $A (x) \succeq 0$ means that matrix $A (x)$ is positive semidefinite. Since the elliptope $E_3$ is defined by the linear matrix inequality (LMI) $A (x) \succeq 0$, we can conclude that $E_3$ is a spectrahedron and, therefore, we have that $E_3$ is convex.
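Membership in $E_3$ is easy to check numerically: a point $x$ belongs to $E_3$ exactly when all eigenvalues of the symmetric matrix $A(x)$ are nonnegative. A minimal sketch in modern Python, assuming NumPy is available (the helper name `in_E3` is mine):

```python
import numpy as np

def in_E3(x1, x2, x3, tol=1e-12):
    """Check whether (x1, x2, x3) lies in the elliptope E3,
    i.e., whether A(x) is positive semidefinite."""
    A = np.array([[1.0, x1, x2],
                  [x1, 1.0, x3],
                  [x2, x3, 1.0]])
    # A is symmetric, so eigvalsh applies; PSD <=> all eigenvalues >= 0
    return np.linalg.eigvalsh(A).min() >= -tol

print(in_E3(0, 0, 0))    # the origin: A(0) is the identity, so True
print(in_E3(1, 1, 1))    # a vertex of the "inflated tetrahedron": True
print(in_E3(1, 1, -1))   # outside E3, since det A(x) = -4 < 0: False
```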

The $E_3$ elliptope (also known as the “inflated tetrahedron”) is the yellow part (surface and interior) of Cayley’s cubic surface plotted below

[Figure: Cayley’s cubic surface, with the elliptope $E_3$ shown in yellow]

__________

$E_3$ is a semialgebraic set

Saying that $A (x)$ is positive semidefinite is equivalent to saying that the $2^3 - 1 = 7$ principal minors of $A (x)$ are nonnegative. If $A (x) \succeq 0$, then the three 1st order principal minors (which are the diagonal entries of $A (x)$) lead to the following three (admittedly uninteresting) inequalities

$1 \geq 0$,     $1 \geq 0$,     $1 \geq 0$

which we can discard. The three 2nd order principal minors (determinants of $2 \times 2$ submatrices of $A (x)$) and the 3rd order principal minor (the determinant of $A (x)$) can be computed using the following Python 2.5 + SymPy script:

from sympy import *

# symbolic variables
x1 = Symbol('x1')
x2 = Symbol('x2')
x3 = Symbol('x3')

# build matrix A(x)
A = Matrix([[1, x1, x2],
            [x1, 1, x3],
            [x2, x3, 1]])

print "\nMatrix A(x):"
print A

print "\n2nd order principal minors:"
print A.extract([0,1],[0,1]).det()
print A.extract([0,2],[0,2]).det()
print A.extract([1,2],[1,2]).det()

print "\n3rd order principal minor:"
print A.det()


which produces the following output:

Matrix A(x):
[ 1, x1, x2]
[x1,  1, x3]
[x2, x3,  1]

2nd order principal minors:
-x1**2 + 1
-x2**2 + 1
-x3**2 + 1

3rd order principal minor:
-x1**2 + 2*x1*x2*x3 - x2**2 - x3**2 + 1

Hence, the 2nd order principal minors yield the following inequalities

$1 - x_1^2 \geq 0$,     $1 - x_2^2 \geq 0$,     $1 - x_3^2 \geq 0$

which define the $3$-dimensional cube $[-1,1]^3 \subset \mathbb{R}^3$. The 3rd order principal minor (i.e., the determinant) yields the inequality

$1 + 2 x_1 x_2 x_3 - (x_1^2 + x_2^2 + x_3^2) \geq 0$.

We can thus conclude that $E_3$ is a basic closed semialgebraic set.

__________

Deciding the convexity of $E_3$

In my last post, I used quantifier elimination to decide the convexity of two semialgebraic sets in $\mathbb{R}^2$. Let us now decide the convexity of $E_3$, which is of higher “complexity” than the sets I considered last time. What is the goal? The goal is to find out whether REDLOG can decide the convexity of $E_3$ in less than 10 minutes.

Based on the scripts in my previous post, we write the following REDUCE + REDLOG script to decide the convexity of $E_3$:

% decide the convexity of the elliptope E3

rlset ofsf;

% define predicates in antecedent
p1 := (1-x1^2>=0) and (1-x2^2>=0) and (1-x3^2>=0) and
(1-x1^2-x2^2-x3^2+2*x1*x2*x3>=0);
p2 := (1-y1^2>=0) and (1-y2^2>=0) and (1-y3^2>=0) and
(1-y1^2-y2^2-y3^2+2*y1*y2*y3>=0);
p3 := (theta>=0) and (theta<=1);

% define predicates in the consequent
z1 := theta*x1+(1-theta)*y1;
z2 := theta*x2+(1-theta)*y2;
z3 := theta*x3+(1-theta)*y3;
q  := (1-z1^2>=0) and (1-z2^2>=0) and (1-z3^2>=0) and
(1-z1^2-z2^2-z3^2+2*z1*z2*z3>=0);

% define universal formula
phi := all({x1,x2,x3,y1,y2,y3,theta}, (p1 and p2 and p3) impl q);

% perform quantifier elimination
rlqe phi;

end;

After 30 minutes (!!!) waiting for the decision procedure to terminate, I killed the reduce.exe process (which was consuming all of my machine’s CPU cycles and RAM). This is disappointing…
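Even though quantifier elimination timed out, a cheap numerical sanity check is still possible. It is not a proof, of course: we sample pairs of points in $E_3$ and random $\theta \in [0,1]$, and verify that the convex combination satisfies the four defining inequalities. A sketch, assuming NumPy is available (the helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_E3(x, tol=1e-9):
    """Membership in E3 via the principal-minor inequalities."""
    x1, x2, x3 = x
    return (1 - x1**2 >= -tol and 1 - x2**2 >= -tol and 1 - x3**2 >= -tol
            and 1 - x1**2 - x2**2 - x3**2 + 2*x1*x2*x3 >= -tol)

def sample_E3():
    """Rejection-sample a point of E3 from the cube [-1, 1]^3."""
    while True:
        x = rng.uniform(-1, 1, size=3)
        if in_E3(x):
            return x

violations = 0
for _ in range(10000):
    x, y = sample_E3(), sample_E3()
    theta = rng.uniform()
    if not in_E3(theta*x + (1 - theta)*y):
        violations += 1

print("violations:", violations)   # expected 0, since E3 is convex
```

Ten thousand random convex combinations without a single violation is consistent with (but does not prove) convexity.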

### Fair play

March 31, 2012

Last February’s Ponder This challenge was the following:

To decide who would play first in a human vs. computer game, the human suggested to flip a coin it produced from its pocket, stating “heads you win, tails I win.” The computer replied, “Humans are very devious. I know that your coin is not fair, and that the probability of heads on that coin is slightly less than one half. To make this perfectly fair, flip the coin repeatedly, and I win if the coin first shows heads on toss number 1, 14, 15, 19, 20, or 23. You win if it first shows heads on any other toss.”

If the computer is correct and $p$ is the probability that the coin shows heads, give the best rational approximation to $p$ with a denominator having fewer than 10 digits.

Here is the solution.

__________

My solution #1:

Let $\mathbb{P} (H) = p$ be the probability that the coin will land with the heads side up. If the coin is biased in favor of the human player, then $p < 1/2$. The probability that the coin toss will first produce “heads” on the $n$-th trial (where $n \geq 1$) is $(1-p)^{n-1} p$. Let us define $\mathcal{I} := \{1, 14, 15, 19, 20, 23\}$. The probability that the computer will win is thus

$\displaystyle \sum_{n \in \mathcal{I}} (1-p)^{n-1} p$.

We now assume that the computer knows the exact value of $p$ and is thus able to compute its probability of winning. Since the computer is interested in making this game “perfectly fair”, its probability of winning should equal the human’s, i.e., it should equal $1/2$. Hence, we obtain the equation

$p + (1-p)^{13} p + (1-p)^{14} p + (1-p)^{18} p + (1-p)^{19} p +(1-p)^{22} p = \frac{1}{2}$

which we can solve using the following Python + SymPy script:

from sympy import *

# create symbolic variable
p = Symbol('p')

# solve polynomial equation
f = p + p*(1-p)**13+p*(1-p)**14+p*(1-p)**18+p*(1-p)**19+p*(1-p)**22
Solutions = solve(f-0.5,p)

# find (purely) real solutions
RealSols = filter(lambda z: im(z)==0, Solutions)

# print all solutions
print "\nAll solutions:"
for sol in Solutions:
    print sol

# print real solutions
print "\nReal solutions:"
for sol in RealSols:
    print sol


This script outputs the following list of solutions:

All solutions:
0.334938061741437 - 1.03291350457235*I
0.334938061741437 + 1.03291350457235*I
1.97217279050061 + 0.143849563303172*I
1.50662666735084 - 0.797180194029411*I
1.92419089868505 + 0.454390584787364*I
0.0780115260218319 + 0.464383156442821*I
0.00197767511752256 - 0.101887310197153*I
1.27721993255011 + 0.951650156864179*I
1.27721993255184 - 0.951650156863701*I
0.247036637335883 - 0.624592516289942*I
0.625456509765409 - 0.845921918004056*I
0.0780115260218319 - 0.464383156442821*I
0.00197767511752256 + 0.101887310197153*I
0.991590970537258 - 0.923668626671437*I
1.79082570892033 - 0.599036471538201*I
1.92419089870934 - 0.454390584808249*I
1.79082570891289 + 0.599036471546724*I
0.499905242928390
0.6254565097654 + 0.845921918004058*I
0.991590970537348 + 0.923668626671824*I
0.247036637335883 + 0.624592516289942*I
1.50662666734992 + 0.797180194030292*I
1.97217279048996 - 0.14384956326771*I

Real solutions:
0.499905242928390

Hence, we have that the probability of heads is

$p = 0.499905242928390$.

We can convert this number to a ratio of integers in Haskell using function approxRational in the library Data.Ratio:

import Data.Ratio

p = 0.499905242928390

-- create (infinite) list of epsilons
epsilons = map (10**) [-1,-2..]

-- create (infinite) list of ratios
ratios = map (approxRational p) epsilons

-- list of ratios whose denominators have fewer than 10 digits
ratios' = takeWhile (\x -> denominator x < 10^9) ratios

Running the script above in GHCi, we obtain a list of ratios:

*Main Data.Ratio> ratios'
[1 % 2,1 % 2,1 % 2,1 % 2,2386 % 4773,2611 % 5223,2636 % 5273,
2638 % 5277,13189 % 26383,44843 % 89703,166183 % 332429,
393036 % 786221,1405961 % 2812455,6243733 % 12489833,
14906352 % 29818355,69693988 % 139414397]

and we finally obtain the following rational approximation

$p \approx \displaystyle\frac{69693988}{139414397}$

whose denominator has nine digits. According to the webmaster, the rational approximation I presented above is not the best one. Hence, let us now try to obtain a better one.
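As an aside, Python’s standard library can do this kind of search directly: `fractions.Fraction.limit_denominator` returns the closest fraction to a given value among all fractions with denominator at most a given bound, so no sweep over epsilons is needed. A sketch using the value of $p$ found above (note the result can only be as accurate as that 15-digit value of $p$ itself):

```python
from fractions import Fraction

# the real root found by the SymPy script above
p = Fraction("0.499905242928390")

# best rational approximation with denominator below 10^9
# (i.e., fewer than 10 digits)
best = p.limit_denominator(10**9 - 1)
print(best)
```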

__________

My solution #2:

Most likely, the reason why I did not obtain the best rational approximation is that the real solution I obtained was not precise enough. Hence, let us use the following MATLAB command to solve the polynomial equation:

solve('p + p*(1-p)^13+p*(1-p)^14+p*(1-p)^18+p*(1-p)^19+p*(1-p)^22 = 0.5')


This command produces the following list of solutions:

ans =

.19776751175225606651411342151624e-2-.10188731019715301414082803578298*i
.19776751175225606651411342151624e-2+.10188731019715301414082803578298*i
.78011526021831885333432975572559e-1-.46438315644282071005250209738747*i
.78011526021831885333432975572559e-1+.46438315644282071005250209738747*i
.24703663733588276238861533742100-.62459251628994218281562354280746*i
.24703663733588276238861533742100+.62459251628994218281562354280746*i
.33493806174143698737516495548579-1.0329135045723528674007841220342*i
.33493806174143698737516495548579+1.0329135045723528674007841220342*i
.49990524292838950452962194522516
.62545650976540488339104798827398-.84592191800405765193777444233924*i
.62545650976540488339104798827398+.84592191800405765193777444233924*i
.99159097053730697830960408610308-.92366862667162608535284662817351*i
.99159097053730697830960408610308+.92366862667162608535284662817351*i
1.2772199325509563050241902871670-.95165015686389448554291677056666*i
1.2772199325509563050241902871670+.95165015686389448554291677056666*i
1.5066266673486985232104204448828-.79718019402868789409470957216463*i
1.5066266673486985232104204448828+.79718019402868789409470957216463*i
1.7908257089204728277307542114304-.59903647154154201043833983995659*i
1.7908257089204728277307542114304+.59903647154154201043833983995659*i
1.9241908987061350990877558129099-.45439058479360516453156947604463*i
1.9241908987061350990877558129099+.45439058479360516453156947604463*i
1.9721727904901564352190617939256-.14384956327517564484908725760533*i
1.9721727904901564352190617939256+.14384956327517564484908725760533*i

We now obtain a much more precise real solution

$p =0.49990524292838950452962194522516$.

We modify the Haskell script, using the new value of $p$ and a much finer list of epsilons (the precisions):

import Data.Ratio

p = 0.49990524292838950452962194522516

-- create (infinite) list of epsilons
epsilons = map ((1.005)**) [-1,-2..]

-- create (infinite) list of ratios
ratios = map (approxRational p) epsilons

-- list of ratios whose denominators have fewer than 10 digits
ratios' = takeWhile (\x -> denominator x < 10^9) ratios

Running the modified script yields the following rational approximation

$p \approx \displaystyle\frac{61031369}{122085875}$

whose denominator has nine digits. Although this rational approximation does not match the one in the official solution, I still got my name on the “people who answered correctly” list. Surprisingly, the value of $p$ which I obtained using MATLAB is quite different from the value of $p$ used by the webmaster

$p = 0.499905242928506776611191007553$.

Which value of $p$ is “more correct”? I do not know. Do you?
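One way to settle the question is to compute the root to far higher precision than either floating-point result. A sketch using mpmath (which ships with SymPy); working with 50 significant digits should leave no doubt:

```python
from mpmath import mp, mpf, findroot

mp.dps = 50  # work with 50 significant digits

def f(p):
    # computer's winning probability minus 1/2
    return (p + p*(1-p)**13 + p*(1-p)**14 + p*(1-p)**18
            + p*(1-p)**19 + p*(1-p)**22) - mpf(1)/2

# locate the real root near 0.4999
root = findroot(f, mpf('0.4999'))
print(root)
```

The printed digits can then be compared against the two candidate values of $p$ to see which one is accurate.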

### Rainy days

July 7, 2011

A couple of weeks ago, Presh Talwalkar posted this puzzle:

In Mathland, the weather is described either as sunny or rainy. On a sunny day, there is an equal chance it will rain on the following day or be sunny. On a rainy day, however, there is a 70% chance it will rain on the following day versus a 30% chance it will be sunny. How many days is it expected to rain in a 365-day year? Assume the first day is sunny.

__________

My solution:

We will consider a period of $N$ days. Since in our oversimplified world each day is either sunny or rainy, we introduce $N$ random variables $W_1, W_2, \dots, W_N$ taking values in $\{S, R\}$, where $W_k = R$ means that the $k$-th day is rainy. We assume that the first day is sunny, i.e., $W_1 = S$. Hence, $\mathbb{P} (W_1 = S) = 1$ and $\mathbb{P} (W_1 = R) = 0$.

Let us now introduce $N$ random variables $X_1, X_2, \dots, X_N$ taking values in $\{0, 1\}$, where each $X_k$ is defined as follows

$X_k =\begin{cases} 1, & W_k = R\\ 0, & W_k = S\end{cases}$

Note that the expected value of $X_k$ is $\mathbb{E}(X_k) = \mathbb{P} (W_k = R)$. We introduce a new random variable $Y$, defined by

$Y = X_1 + X_2 + \dots + X_N$

where $Y$ is the number of rainy days in a period of $N$ days. We want to find the expected number of rainy days, which is

$\mathbb{E} (Y) = \mathbb{E} \left[ \displaystyle\sum_{k=1}^N X_k \right] = \displaystyle \sum_{k=1}^N \mathbb{E} (X_k) = \displaystyle \sum_{k=1}^N \mathbb{P} (W_k = R)$.

Hence, to compute the desired expected value, $\mathbb{E} (Y)$, we must first compute the probabilities $\mathbb{P} (W_k = R)$ for $1 \leq k \leq N$.

Let the probability mass function (p.m.f.) on the $k$-th day be

$\pi_k := \left[\begin{array}{cc} \mathbb{P} (W_k = S) & \mathbb{P} (W_k = R)\end{array}\right]$

We know that the p.m.f. on the first day is $\pi_1 = \left[\begin{array}{cc} 1 & 0\end{array}\right]$, as we assume the first day is sunny. Propagating the p.m.f. forwards in time, we obtain $\pi_{k+1} = \pi_k P$, where

$P := \left[\begin{array}{cc} 0.5 & 0.5\\ 0.3 & 0.7\\\end{array}\right]$

is the transition matrix. Hence, we have $\pi_k = \pi_1 P^{k-1}$ and, thus,

$\mathbb{P} (W_k = R) = \left[\begin{array}{cc} 1 & 0\end{array}\right] \left[\begin{array}{cc} 0.5 & 0.5\\ 0.3 & 0.7\\\end{array}\right]^{k-1} \left[\begin{array}{c} 0 \\ 1\\\end{array}\right]$

Matrix $P$ has the eigendecomposition $P = Q D Q^{-1}$ and, hence, $P^{k-1} = Q D^{k-1} Q^{-1}$. We can finally compute $\mathbb{P} (W_k = R)$ by matrix-vector multiplication, a task so excruciatingly dull that I am most happy to delegate it to my Pythonic minion:

from sympy import *

# create symbolic variable
k = Symbol('k')

# define transition matrix
P = Matrix([[0.5, 0.5], [0.3, 0.7]])

# eigendecomposition of the transition matrix
Q = Matrix([[1/sqrt(2), 5/sqrt(34)], [1/sqrt(2), -3/sqrt(34)]])
D = Matrix([[1, 0], [0, 0.2]])

# check if eigendecomposition is correct
print Q * D * Q.inv() - P

# create vectors
e1 = Matrix(2, 1, [1, 0])
e2 = Matrix(2, 1, [0, 1])

# compute and print probability that the k-th day is rainy
print e1.T * Q * Matrix([[1, 0], [0, 0.2**(k-1)]]) * Q.inv() * e2


This Python / SymPy script gives us

$\mathbb{P} (W_k = R) = \frac{5}{8} \left(1 - 0.2^{k-1}\right)$

and, finally, we obtain the desired expected value

$\mathbb{E} (Y) = \displaystyle \sum_{k=1}^N \mathbb{P} (W_k = R) = \displaystyle \sum_{k=1}^N \frac{5}{8} \left(1 - 0.2^{k-1}\right) = \displaystyle\frac{5}{8}\sum_{k=0}^{N-1} \left(1 - 0.2^k\right)$

which is equal to

$\mathbb{E} (Y) = \displaystyle\frac{5 N}{8} - \frac{25}{32} (1 - 0.2^N) \approx \displaystyle\frac{5 N}{8} - \frac{25}{32}$.

For $N = 365$, we get $\mathbb{E} (Y) \approx 227$ rainy days per year. Mathland is certainly not in Southern California ;-)
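The closed-form answer can be cross-checked by propagating the p.m.f. numerically, with no eigendecomposition at all. A sketch in plain Python (no SymPy needed):

```python
# transition matrix: rows are (sunny, rainy) -> (sunny, rainy)
P = [[0.5, 0.5],
     [0.3, 0.7]]

pi = [1.0, 0.0]        # day 1 is sunny
expected_rainy = 0.0
for _ in range(365):
    expected_rainy += pi[1]            # add P(rain on current day)
    # propagate: pi_{k+1} = pi_k P
    pi = [pi[0]*P[0][0] + pi[1]*P[1][0],
          pi[0]*P[0][1] + pi[1]*P[1][1]]

print(expected_rainy)   # close to 5*365/8 - 25/32 = 227.34375
```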

### Sum of squares decomposition II

July 5, 2011

Last week, we concluded that the quartic form

$F(x, y) = 2 x^4 + 2 x^3 y - x^2 y^2 + 5 y^4$

has the following sum of squares decomposition

$F(x, y) = \displaystyle \frac{1}{2} \left(2 x^2 -3 y^2 + x y\right)^2 + \frac{1}{2} \left(y^2 + 3 x y\right)^2$

and, thus, $F$ is globally nonnegative. More generally, we saw that $F$ can be written as follows

$F(x,y) = \left[\begin{array}{c} x^2\\ y^2\\ x y\end{array}\right]^T \left[\begin{array}{ccc} 2 & -\lambda & 1\\ -\lambda & 5 & 0\\ 1 & 0 & 2 \lambda - 1\end{array}\right] \left[\begin{array}{c} x^2\\ y^2\\ x y\end{array}\right]$

where $\lambda \in \mathbb{R}$ is a parameter. Let us define $z := (x^2, y^2, x y)$ and

$Q := \left[\begin{array}{ccc} 2 & -\lambda & 1\\ -\lambda & 5 & 0\\ 1 & 0 & 2 \lambda - 1\end{array}\right]$

so that we can write polynomial $F$ in the more compressed form $F(x, y) = z^T Q z$. A sufficient condition for the existence of a SOS decomposition is that $Q$ is positive semidefinite, i.e., $Q \succeq 0$. Why? If $Q$ is positive semidefinite, then it has a Cholesky factorization $Q = L^T L$ and, thus, $F(x, y) = z^T Q z = z^T L^T L z = \| L z \|_2^2$, which finally yields the SOS decomposition

$F(x, y) = \displaystyle \sum_{i =1}^3 \left(L_{i1} x^2 + L_{i2} y^2 + L_{i3} xy \right)^2$

where $L_{ij}$ is the $(i,j)$ entry of matrix $L$. One can also think of the Cholesky factorization as $Q = B B^T$, where $B = L^T$, so that we obtain $F(x, y) = \| B^T z \|_2^2$. However, let us not lose sight of where we are going. The question to be answered is: how do we find a $\lambda \in \mathbb{R}$ such that $Q \succeq 0$?

__________

Admissible values of $\lambda$

Saying that $Q$ is positive semidefinite is equivalent to saying that the $2^3 - 1 = 7$ principal minors of $Q$ are nonnegative. If $Q \succeq 0$, then the three 1st degree principal minors (which are the diagonal entries of $Q$) lead to the three inequalities

$2 \geq 0$,   $5 \geq 0$,   $2 \lambda - 1 \geq 0$.

The first two inequalities are redundant, and the third yields $\lambda \geq \frac{1}{2}$. The three 2nd degree principal minors (determinants of $2 \times 2$ submatrices of $Q$) and the 3rd degree principal minor (determinant of $Q$) can be computed with the following Python / SymPy script:


from sympy import *

# symbolic variable
Lambda = Symbol('Lambda')

# build matrix Q
Q = Matrix([[2,-Lambda,1], [-Lambda,5,0], [1,0,2*Lambda-1]])

print "\n2nd degree principal minors:"
print Q.extract([0,1],[0,1]).det()
print Q.extract([0,2],[0,2]).det()
print Q.extract([1,2],[1,2]).det()

print "\n3rd degree principal minor:"
print Q.det().factor()



which produces the following output:

2nd degree principal minors:
-Lambda**2 + 10
4*Lambda - 3
10*Lambda - 5

3rd degree principal minor:
-(Lambda - 3)*(2*Lambda**2 + 5*Lambda - 5)

From the three 2nd degree principal minors, we then obtain the three inequalities

$- \lambda^2 + 10 \geq 0$,   $4 \lambda - 3 \geq 0$,   $10 \lambda - 5 \geq 0$

which are equivalent to

$\lambda^2 \leq 10$,   $\lambda \geq \frac{3}{4}$,   $\lambda \geq \frac{1}{2}$

which yields $\lambda \in [\frac{3}{4}, \sqrt{10}]$. From the 3rd degree principal minor, we obtain the inequality

$- 2 \lambda^{3} + \lambda^{2} + 20 \lambda - 15 = - \left(\lambda -3\right) \left(2 \lambda^{2} + 5 \lambda -5\right) \geq 0$.

Hence, we have that admissible values of $\lambda$ must satisfy

$\lambda \in [\frac{3}{4}, \sqrt{10}]$,   $\left(\lambda -3\right) \left(2 \lambda^{2} + 5 \lambda -5\right) \leq 0$.

The roots of polynomial $2 \lambda^{2} + 5 \lambda -5$ are $(-5 \pm \sqrt{65}) / 4$. Unfortunately $(-5 + \sqrt{65}) / 4$ is in $[\frac{3}{4}, \sqrt{10}]$. Hence, let us partition the interval $[\frac{3}{4}, \sqrt{10}]$ into two intervals:

• for $\lambda \in [\frac{3}{4}, \frac{-5 + \sqrt{65}}{4})$, we have that $2 \lambda^{2} + 5 \lambda -5$ is negative, and thus $\left(\lambda -3\right) \left(2 \lambda^{2} + 5 \lambda -5\right) \leq 0$ implies that $\lambda \geq 3$, but

$\{ \lambda \in \mathbb{R} : \lambda \in [\frac{3}{4}, \frac{-5 + \sqrt{65}}{4}) \land \lambda \geq 3\} = \emptyset$.

• for $\lambda \in [\frac{-5 + \sqrt{65}}{4}, \sqrt{10}]$, we have that polynomial $2 \lambda^{2} + 5 \lambda -5$ is nonnegative, and thus $\left(\lambda -3\right) \left(2 \lambda^{2} + 5 \lambda -5\right) \leq 0$ implies that $\lambda \leq 3$. Hence,

$\{ \lambda \in \mathbb{R} : \lambda \in [\frac{-5 + \sqrt{65}}{4}, \sqrt{10}] \land \lambda \leq 3\} = [\frac{-5 + \sqrt{65}}{4}, 3]$.

Finally, we conclude that the set of admissible values of $\lambda$ is $\Lambda := [\frac{-5 + \sqrt{65}}{4}, 3]$. Note that $\frac{-5 + \sqrt{65}}{4} \approx 0.765$.
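The interval $\Lambda$ can be double-checked directly: recent versions of SymPy provide `Matrix.is_positive_semidefinite`, which decides definiteness exactly for rational entries. A sketch testing values inside and outside $\Lambda \approx [0.765, 3]$ (the helper name `psd_at` is mine):

```python
from sympy import Matrix, Rational, Symbol

Lambda = Symbol('Lambda')
Q = Matrix([[2, -Lambda, 1], [-Lambda, 5, 0], [1, 0, 2*Lambda - 1]])

def psd_at(val):
    """Is Q positive semidefinite at Lambda = val?"""
    return Q.subs(Lambda, val).is_positive_semidefinite

print(psd_at(Rational(1)))       # inside Lambda: True
print(psd_at(Rational(3)))       # right endpoint: True (Q is singular PSD)
print(psd_at(Rational(0)))       # outside: False (det Q = -15 < 0)
print(psd_at(Rational(7, 2)))    # outside, above 3: False
```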

__________

An ensemble of SOS decompositions

Now that we know for which values of $\lambda$ matrix $Q$ is positive semidefinite, we can compute various SOS decompositions. The following Python / SymPy script computes an SOS decomposition for $\lambda = 3$:


from sympy import *

# symbolic variables
Lambda = Symbol('Lambda')
x = Symbol('x')
y = Symbol('y')

# define polynomial
F = 2*(x**4) + 2*(x**3)*y - (x**2)*(y**2) + 5*(y**4)

# build matrix Q
Q = Matrix([[2,-Lambda,1], [-Lambda,5,0], [1,0,2*Lambda-1]])

# build vector of monomials
z = Matrix(3, 1, [x**2, y**2, x*y])

# compute Cholesky factorization
# for a particular value of Lambda
B = Q.subs(Lambda, Rational(3,1)).cholesky()

# obtain vector of SOS polynomials
f = B.T * z

print "\nSOS polynomials:"
pprint(f[0])
pprint(f[1])
pprint(f[2])

# compute residual polynomial
R = F - sum([f[i]**2 for i in [0,1,2]])

print "\nResidual polynomial: %s" % R.expand()


and it produces the SOS decomposition

$F(x, y) = \displaystyle \frac{1}{2} \left(2 x^2 -3 y^2 + x y\right)^2 + \frac{1}{2} \left(y^2 + 3 x y\right)^2$

which is the one we obtained last week. For $\lambda = 3/2$, we get

$F(x, y) = \displaystyle \frac{1}{2} \left(2 x^2 - \frac{3}{2} y^2 + x y\right)^2 + 62 \left(\frac{1}{4} y^2 + \frac{3}{62} x y\right)^2 +\frac{1302}{961} x^2 y^2$

and to verify that this is correct, we can run the following snippet (appended to the script above, so that F, x and y are already defined):

h0 = Rational(1,2) * (2*x**2 - Rational(3,2)*y**2 + x*y)**2
h1 = 62 * (Rational(3,62)*x*y + Rational(1,4)*y**2)**2
h2 = Rational(1302,961)*x**2*y**2
pprint((F - (h0 + h1 + h2)).expand())


For $\lambda = 5/2$, we obtain the SOS decomposition

$F(x, y) = \displaystyle \frac{1}{2} \left(2 x^2 - \frac{5}{2} y^2 + x y\right)^2 + \frac{30}{16} \left(y^2 + \frac{2}{3} x y\right)^2 +\frac{8}{3} x^2 y^2$.

But, Доверяй, но проверяй — trust, but verify!

h0 = Rational(1,2) * (2*x**2 - Rational(5,2)*y**2 + x*y)**2
h1 = Rational(30,16) * (Rational(2,3)*x*y + y**2)**2
h2 = Rational(8,3)*x**2*y**2
pprint((F - (h0 + h1 + h2)).expand())


__________

What happens for non-admissible values of $\lambda$?

We know that if $\lambda \in \Lambda$, then $Q \succeq 0$. What if $\lambda \notin \Lambda$? Then matrix $Q$ will have at least one negative eigenvalue (it cannot be negative definite, since its diagonal contains the positive entries $2$ and $5$) and will, or will not, have a Cholesky factorization, depending on what convention one uses.

One can think of matrices $L$ and $B$ in the Cholesky factorizations $Q = L^T L$ and $Q = B B^T$, respectively, as square roots of matrix $Q$. Do negative reals have square roots? They do if you are willing to work with complex numbers. Likewise, a $Q$ that is not positive semidefinite will have a Cholesky factorization if one is willing to work with complex matrices. For example, for $\lambda = 0$ we obtain the decomposition

$F(x, y) = \displaystyle \sum_{i=0}^2 f_i^2 (x, y)$

where the $f_i$ polynomials are:

$f_0 (x,y) = \frac{\sqrt{2}}{2} (2 x^2 + x y)$,   $f_1 (x,y) = \sqrt{5} y^2$,   $f_2 (x,y) = i \frac{\sqrt{6}}{2} x y$

where $i := \sqrt{-1}$. Note that $f_0, f_1 \in \mathbb{R}[x, y]$, but $f_2 \in \mathbb{C}[x, y]$. If we square $f_2$, we get $f_2^2 = -\frac{3}{2} x^2 y^2$. This term is being subtracted from the sum of squares, which prevents us from concluding global nonnegativity. That is why we impose the condition $Q \succeq 0$.
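We can verify with SymPy that the complex decomposition at $\lambda = 0$ does reproduce $F$, with $f_2^2$ contributing the negative cross term:

```python
from sympy import symbols, sqrt, I, expand, Rational

x, y = symbols('x y')
F = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

# the three "square" polynomials for Lambda = 0; f2 is complex
f0 = (sqrt(2)/2) * (2*x**2 + x*y)
f1 = sqrt(5) * y**2
f2 = I * (sqrt(6)/2) * x*y

print(expand(f2**2))                        # the term -(3/2) x^2 y^2
print(expand(F - (f0**2 + f1**2 + f2**2)))  # 0: the identity holds
```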

### Sum of squares decomposition

June 30, 2011

Consider the following quartic form in $\mathbb{R}[x, y]$

$F(x, y) = 2 x^4 + 2 x^3 y - x^2 y^2 + 5 y^4$

and suppose we would like to decide if $F$ is globally nonnegative. In other words, we want to know whether the proposition

$F(x, y) \geq 0, \qquad{} \forall x, y \in \mathbb{R}$

is true. A sufficient condition for $F(x,y)$ to be globally nonnegative is the existence of a sum of squares decomposition [1, 2]

$F(x, y) = \displaystyle \sum_{i} f_i^2 (x, y)$

where $f_i \in \mathbb{R}[x, y]$. If such a decomposition does indeed exist, how do we compute it? How do we find the $f_i$ polynomials?

Note that $F(x, y)$ is a quartic form, i.e., a polynomial whose monomials all have total degree 4. Using Parrilo’s method [1, 2], let us try to write $F(x, y)$ as a quadratic form in all the monomials of total degree equal to 2, as follows

$F(x,y) = \left[\begin{array}{c} x^2\\ y^2\\ x y\end{array}\right]^T \left[\begin{array}{ccc} q_{11} & q_{12} & q_{13}\\ q_{12} & q_{22} & q_{23}\\ q_{13} & q_{23} & q_{33}\end{array}\right] \left[\begin{array}{c} x^2\\ y^2\\ x y\end{array}\right]$

where the real matrix

$Q := \left[\begin{array}{ccc} q_{11} & q_{12} & q_{13}\\ q_{12} & q_{22} & q_{23}\\ q_{13} & q_{23} & q_{33}\end{array}\right]$

is symmetric. Hence, we obtain

$F(x,y) = q_{11} x^4 + 2 q_{13} x^3 y + (2 q_{12} + q_{33}) x^2 y^2 + 2 q_{23} x y^3 + q_{22} y^4$

and, matching coefficients, we get the equality constraints

$q_{11} = 2$,   $q_{13} = 1$,   $2 q_{12} + q_{33} = -1$,   $q_{23} = 0$,   $q_{22} = 5$.

Let us make $q_{12} = - \lambda$, where $\lambda \in \mathbb{R}$, so that $q_{33} = 2 \lambda - 1$. Hence, we obtain a family of symmetric $Q$ matrices parameterized by $\lambda$

$Q := \left[\begin{array}{ccc} 2 & -\lambda & 1\\ -\lambda & 5 & 0\\ 1 & 0 & 2 \lambda - 1\end{array}\right]$

For each choice of $\lambda$, we get a matrix $Q$. We then conclude that the representation $F(x, y) = z^T Q z$, where $z := (x^2, y^2, x y)$, is not unique. Since there are infinitely many admissible $Q$ matrices, if we are able to find a positive semidefinite one, then $F(x, y) = z^T Q z \geq 0$ for all $z$, and global nonnegativity of the given polynomial $F$ is guaranteed!
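That this parameterized representation is exact for every $\lambda$ can be confirmed symbolically: by construction of the matching coefficients, $z^T Q z - F$ vanishes identically. A quick SymPy check:

```python
from sympy import symbols, Matrix, expand

x, y, lam = symbols('x y lambda')

F = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

# family of Gram matrices parameterized by lambda
Q = Matrix([[2, -lam, 1], [-lam, 5, 0], [1, 0, 2*lam - 1]])
z = Matrix([x**2, y**2, x*y])

# residual is identically zero, for every lambda
residual = expand((z.T * Q * z)[0] - F)
print(residual)   # 0
```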

For instance, let us try $\lambda = 3$. We then get

$Q = \left[\begin{array}{ccc} 2 & -3 & 1\\ -3 & 5 & 0\\ 1 & 0 & 5\end{array}\right]$

which is positive semidefinite with rank $2$. Computing a factorization of $Q$, i.e., finding a matrix $L$ (here $2 \times 3$, one row per nonzero eigenvalue) such that $Q = L^T L$, we get

$L = \displaystyle \frac{1}{\sqrt{2}}\left[\begin{array}{ccc} 2 & -3 & 1\\ 0 & 1 & 3\end{array}\right]$

and, thus, $F(x, y) = z^T Q z = z^T L^T L z = \| L z \|_2^2$, which finally yields a sum of squares (SOS) decomposition

$F(x, y) = \displaystyle \frac{1}{2} \left(2 x^2 -3 y^2 + x y\right)^2 + \frac{1}{2} \left(y^2 + 3 x y\right)^2$.

To make sure that this SOS decomposition is correct, we can use a little Python / SymPy script:


from sympy import *

# symbolic variables
x = Symbol('x')
y = Symbol('y')

# define polynomial
F = 2*(x**4) + 2*(x**3)*y - (x**2)*(y**2) + 5*(y**4)

# define SOS polynomials
f1 = (sqrt(2)/2) * (2*(x**2) - 3*(y**2) + x*y)
f2 = (sqrt(2)/2) * (y**2 + 3*x*y)

# construct residual polynomial
R = F - (f1**2 + f2**2)

print "Residual polynomial: %s" % R.expand()



The residual polynomial is $R (x, y) = 0$. The SOS decomposition is correct! We can then conclude that the given polynomial $F$ is globally nonnegative.

__________

Remark: this post is based on example 4.1 of Parrilo’s doctoral dissertation [1] and also on example 3.5 of Parrilo’s paper [2].

__________

References

[1] Pablo Parrilo, Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization, Dissertation (Ph.D.), California Institute of Technology, 2000.

[2] Pablo Parrilo, Semidefinite programming relaxations for semialgebraic problems, Mathematical Programming, 2003.