
What is the significance of the vector dimension in semidefinite programming relaxations?

Problem Detail: 

Let's say that we want to design a semidefinite programming approximation algorithm for an optimization problem such as MAX-CUT or MAX-SAT or what have you.

So, we first write down an integer quadratic program that solves the problem exactly. For example, in the case of MAX-CUT we would write: $$ \max \sum_{\{u,v\} \in E}{\frac{1 - x_u x_v}{2}} \quad \text{subject to } x_v \in \{-1,+1\} \text{ for all } v \in V. $$
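For concreteness, here is a minimal brute-force sketch of this integer program on a toy graph (the 4-cycle and the helper name `cut_value` below are illustrative, not part of the question):

```python
# Brute-force evaluation of the MAX-CUT integer quadratic program on a
# small example graph (a 4-cycle); feasible only for tiny n.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # hypothetical toy graph
n = 4

def cut_value(x, edges):
    # Each edge {u,v} contributes (1 - x_u * x_v) / 2: that is 1 when the
    # endpoints receive opposite signs (the edge is cut), and 0 otherwise.
    return sum((1 - x[u] * x[v]) / 2 for u, v in edges)

best = max(product([-1, +1], repeat=n), key=lambda x: cut_value(x, edges))
print(best, cut_value(best, edges))  # e.g. (1, -1, 1, -1) cuts all 4 edges
```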

Then we relax the program by allowing the variables to take vector values on the unit sphere $S^{n-1}$ and replacing products with inner products, obtaining a semidefinite program that can be solved to any desired accuracy in polynomial time.
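In terms of the Gram matrix $X_{uv} = \langle v_u, v_v \rangle$, the relaxation can be written down directly, for instance with cvxpy (an assumed dependency; the variable names are illustrative):

```python
# The MAX-CUT SDP relaxation over the Gram matrix X of the vectors v_1..v_n:
# X is positive semidefinite, and X_vv = 1 encodes ||v_v|| = 1.
import cvxpy as cp

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # same toy graph as above
n = 4

X = cp.Variable((n, n), PSD=True)            # X[u, v] plays the role of <v_u, v_v>
objective = cp.Maximize(sum((1 - X[u, v]) / 2 for u, v in edges))
prob = cp.Problem(objective, [cp.diag(X) == 1])
prob.solve()                                 # cvxpy's default SDP-capable solver
print(prob.value)                            # an upper bound on the optimal cut
```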

Our approximation algorithm proceeds by solving the SDP and rounding its solution to obtain a feasible solution of the original problem whose value can be bounded relative to the SDP optimum.
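For MAX-CUT, the standard rounding step is the Goemans-Williamson random hyperplane; a minimal sketch, assuming `X` is the (numpy) Gram matrix recovered from the SDP above:

```python
import numpy as np

def round_solution(X, seed=None):
    # Hyperplane rounding: map the Gram matrix X back to a +/-1 labeling.
    rng = np.random.default_rng(seed)
    # Factor X = L L^T via an eigendecomposition; the optimal X is usually
    # only positive *semi*definite, so plain Cholesky can fail on it.
    w, U = np.linalg.eigh(X)
    L = U * np.sqrt(np.clip(w, 0.0, None))   # rows of L are the vectors v_i
    r = rng.standard_normal(L.shape[1])      # a random hyperplane normal
    x = np.sign(L @ r)                       # which side of the hyperplane?
    x[x == 0] = 1                            # break the measure-zero ties
    return x
```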

My question is: What is the importance of the dimension of the vectors? In all the cases I have seen, the analysis would stay the same if we assume that the vectors are all unit vectors in $\mathbb{R}^2$. So why do people always take vectors in $S^{n-1}$?

Asked By: Zur Luria

Answered By: Yuval Filmus

Vector programs are solved by solving semidefinite programs. The basic idea is that if $v_1,\ldots,v_n$ are vectors, then the matrix $V_{ij} = \langle v_i,v_j \rangle$ is positive semidefinite: $$ \alpha' V \alpha = \sum_{i,j} \alpha_i \alpha_j \langle v_i, v_j \rangle = \left\|\sum_i \alpha_i v_i\right\|^2 \geq 0. $$ The converse is also true: a positive semidefinite $n \times n$ matrix $V$ of rank $r$ can be written as $V = LL'$, where $L$ is $n \times r$ (this is known as the Cholesky decomposition). Furthermore, $L$ can be found effectively given $V$. If $\ell_1,\ldots,\ell_n$ are the $r$-dimensional rows of $L$, then $V_{ij} = \ell_i \ell'_j = \langle \ell_i, \ell_j \rangle$.

We can therefore optimize a vector program involving inner products of vectors $v_i$ by solving a semidefinite program. Rank constraints are not convex, and so we cannot really control the dimension of the vectors beyond the trivial bound $n$.
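A small numpy check of this correspondence (the matrix below is a hypothetical example): a PSD matrix of rank $r$ factors as $V = LL'$ with $L$ of size $n \times r$, and the rows of $L$ recover vectors realizing the prescribed inner products.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
V = A @ A.T                           # a 5x5 PSD matrix of rank 2

# Factor V = L L' through the eigendecomposition, keeping only the
# (numerically) nonzero eigenvalues.
w, U = np.linalg.eigh(V)
keep = w > 1e-9
L = U[:, keep] * np.sqrt(w[keep])     # n x r, here 5 x 2

assert np.allclose(L @ L.T, V)        # V_ij = <l_i, l_j> as claimed
print(L.shape)                        # (5, 2): the vectors live in R^2
```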

As you mention, the analysis in the particular case of MAX-CUT only uses two-dimensional projections. This shows that even if you could somehow guarantee that $\operatorname{rank} V = 2$ (recall that the rank corresponds to the dimension of the vectors), the analysis wouldn't get any better. On the other hand, if you could guarantee that $\operatorname{rank} V = 1$, you would have solved MAX-CUT exactly, since a rank-one solution corresponds to an assignment of scalars $x_v \in \{-1,+1\}$. Since MAX-CUT is NP-hard, the latter constraint is NP-hard to enforce. It is probably the case that any non-trivial rank constraint cannot be enforced in polynomial time, and that any constant rank constraint is NP-hard.
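The tightness of the rank-one case can be seen directly (a toy numpy illustration): a feasible rank-one $V$ with unit diagonal is exactly $xx'$ for some $x \in \{-1,+1\}^n$, so no rounding loss remains.

```python
import numpy as np

x = np.array([1, -1, -1, 1])          # a hypothetical +/-1 assignment
V = np.outer(x, x)                    # rank 1, PSD, unit diagonal

# Recover the assignment from V (up to a global sign, which does not
# change the cut): take the signs of any single column.
x_rec = np.sign(V[:, 0])
assert np.array_equal(np.outer(x_rec, x_rec), V)
```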


Question Source: http://cs.stackexchange.com/questions/26262


