
Why/how does the definition of PCP use randomness?

Problem Detail: 

I am confused by the definition of probabilistically checkable proofs.

Language $L$ has an $(r(n),q(n))$-PCP verifier if there is a probabilistic polynomial-time algorithm $V$ satisfying:

Efficiency: $V$ uses at most $r(n)$ random coins and makes at most $q(n)$ nonadaptive queries to locations of $\pi$.

Completeness: if $x \in L$, then there exists $\pi \in \{0,1\}^*$ such that $\Pr[V^{\pi}(x)=1]=1$.

Soundness: if $x \notin L$, then for every $\pi \in \{0,1\}^*$, $\Pr[V^{\pi}(x)=1]\leq\frac{1}{2}$.

The main question:

  • Why does PCP need randomness?
  • How exactly are random coins used in the PCP?
  • And what happens when these random coins are actually deterministic?
  • Where does the effective proof length bound of $2^{r(n)}q(n)$ come from?
Asked By : com

Answered By : Yuval Filmus

The way a PCP works is like so: in order to verify some property, the prover presents a proof $\pi$ which is "easily testable". The way to test it is to query the proof at a small number $q(n)$ of points. We want the PCP to satisfy two properties:

  1. If the property actually holds, the verifier should always (or almost always) accept.
  2. If the property doesn't hold, then the verifier would notice with some constant probability.

Now if the proof is long and the verifier only looks at a small number $q(n)$ of its locations, then those locations had better be random! Otherwise it is very easy to fool the verifier: a cheating prover only needs to make the fixed queried locations look consistent and can put garbage everywhere else. That should answer your first and third questions.
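To see concretely what goes wrong without randomness, here is a minimal sketch (the fixed positions and the test are hypothetical, chosen only for illustration):

```python
# If the verifier's queried positions are fixed and known (no random coins),
# a cheating prover only needs those positions to look consistent; the rest
# of the "proof" can be arbitrary garbage.

def det_verifier(proof):
    # hypothetical deterministic verifier: always reads positions 3 and 7
    return proof[3] == 0 and proof[7] == 1

fake_proof = [9] * 10                # nonsense everywhere...
fake_proof[3], fake_proof[7] = 0, 1  # ...except where the verifier looks
assert det_verifier(fake_proof)      # the verifier is fooled
```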

How are the random coins used? In an arbitrary way. The verifier chooses $r(n)$ random coins, and based on them queries the proof at $q(n)$ points. Then she applies a test which always succeeds in YES instances (the property holds), but fails with constant probability (over the coin tosses) in NO instances (the property doesn't hold). That should answer the second question.
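As a toy illustration of this flow (not a real PCP — the soundness gap per trial is only $1/|E|$ here, boosted by repetition): take the property "the graph is 2-colorable", let the proof $\pi$ be a claimed coloring, and let each trial use its random coins to pick an edge and make $q=2$ queries.

```python
import random

def verifier(edges, proof, trials=200, seed=0):
    # Each trial spends about log2(|E|) random coins to pick an edge,
    # then makes q = 2 queries into the proof (the claimed 2-coloring).
    rng = random.Random(seed)
    for _ in range(trials):
        u, v = rng.choice(edges)   # random coins select an edge
        if proof[u] == proof[v]:   # two queries into pi
            return 0               # reject: monochromatic edge found
    return 1                       # accept

# Completeness: C4 is 2-colorable, and a proper coloring always passes.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert verifier(c4, {0: 0, 1: 1, 2: 0, 3: 1}) == 1

# Soundness (amplified by repetition): C5 is not 2-colorable, so every
# claimed coloring has a bad edge, which enough random trials will hit.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert verifier(c5, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0}) == 0
```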

Since there are $r(n)$ coins, there are $2^{r(n)}$ different outcomes for the coins, and so across all outcomes the verifier queries the proof at no more than $2^{r(n)}q(n)$ different locations. Positions beyond those can never be read, so the proof can be assumed to have length at most $2^{r(n)}q(n)$. That answers your fourth question.
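The counting can be checked by brute force for small parameters; the query map below is made up for illustration (here $r=3$, $q=2$):

```python
from itertools import product

r, q = 3, 2

def queries(coins):
    # hypothetical map from a coin outcome (a bit tuple) to q proof locations
    base = sum(bit << i for i, bit in enumerate(coins))
    return [2 * base, 2 * base + 1]

touched = set()
for coins in product([0, 1], repeat=r):   # all 2^r coin outcomes
    touched.update(queries(coins))

assert len(touched) <= 2**r * q           # at most 2^r * q distinct locations
```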

What's the point of a PCP? We can encode the verifier as a CNF whose variables are the bits of the proof: for each of the $2^{r(n)}$ coin tosses, there are clauses encoding that the $q(n)$ bits sampled pass the verifier's test. If it's a YES instance, then this CNF is satisfiable. If it's a NO instance, then the verifier fails with some constant probability, and so a constant fraction of the clauses in the CNF are always unsatisfied.
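A sketch of that translation (the query map and local test below are hypothetical; clauses use DIMACS-style signed literals, where variable $i+1$ stands for bit $i$ of $\pi$):

```python
from itertools import product

def verifier_to_cnf(r, q, query_map, local_test):
    # query_map(coins)        -> the q proof positions read for that outcome
    # local_test(coins, bits) -> True iff the verifier accepts that local view
    clauses = []
    for coins in product([0, 1], repeat=r):
        positions = query_map(coins)
        for bits in product([0, 1], repeat=q):
            if not local_test(coins, bits):
                # Forbid this rejecting local view: at least one queried bit
                # must differ from it (b == 0 forbidden -> demand the bit be 1).
                clauses.append([(p + 1) if b == 0 else -(p + 1)
                                for p, b in zip(positions, bits)])
    return clauses

# Toy verifier: one coin c; query positions (c, c+1); accept iff the bits agree.
cnf = verifier_to_cnf(1, 2,
                      lambda c: [c[0], c[0] + 1],
                      lambda c, bits: bits[0] == bits[1])
assert cnf == [[1, -2], [-1, 2], [2, -3], [-2, 3]]
```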

The PCP theorem gives a PCP for 3SAT with $r(n) = O(\log n)$ and $q(n) = O(1)$. That means that given a 3CNF formula, we can construct a new CNF of length $2^{r(n)} \cdot 2^{q(n)} = n^{O(1)}$ using the construction outlined in the preceding paragraph such that (i) if the original 3CNF formula is satisfiable, so is the new one, and (ii) otherwise, at most a $1-\epsilon$ fraction of the clauses of the new CNF can ever be satisfied. That gives a hardness of approximation result for 3SAT.

Using Raz's parallel repetition theorem, Håstad obtained a construction in which, in case (ii), at most a $7/8+\epsilon$ fraction of the clauses can be satisfied, thus obtaining an optimal inapproximability result. (It's optimal since a random assignment satisfies $7/8$ of the clauses in expectation; this algorithm can be derandomized using the method of conditional expectations.)
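The derandomization mentioned in the parenthesis can be sketched as follows (a minimal Max-3SAT version; clauses are DIMACS-style signed integers, and the instance is made up):

```python
def expected_sat(clauses, assign):
    # Expected number of satisfied clauses when the variables in `assign`
    # (a dict var -> bool) are fixed and the rest are set uniformly at random.
    total = 0.0
    for clause in clauses:
        p_all_false = 1.0
        for lit in clause:
            var = abs(lit)
            if var in assign:
                if assign[var] == (lit > 0):   # literal already true
                    p_all_false = 0.0
                    break
                # literal already false: contributes factor 1
            else:
                p_all_false *= 0.5             # unassigned literal
        total += 1.0 - p_all_false
    return total

def derandomized_maxsat(clauses, n):
    # Method of conditional expectations: fix variables one at a time so that
    # the conditional expectation never drops below its initial value, which
    # is (7/8)*m for 3-clauses over distinct variables.
    assign = {}
    for var in range(1, n + 1):
        assign[var] = True
        e_true = expected_sat(clauses, assign)
        assign[var] = False
        e_false = expected_sat(clauses, assign)
        assign[var] = e_true > e_false
    return assign

clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
assign = derandomized_maxsat(clauses, 3)
satisfied = sum(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses)
assert satisfied >= 7 / 8 * len(clauses)
```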
