
[Solved]: Simulating BPP by a Pseudorandom Generator

Problem Detail:

I am using the following scribe notes to study pseudorandom generators.

The most intriguing question for me is how exactly a PRG can be used to simulate BPP.

Let $A(x,r)$ be a BPP algorithm, where $x$ is the primary input and $r$ is the randomness. Instead of $r$ we can provide $A$ with the pseudorandom input $G(U_{d(n^c)})$, and according to Claim 7.6 from the scribe notes, $A$ with pseudorandom input differs from $A$ with truly random $r$ on fewer than half of the runs. The proof of the lemma shows that if this were not the case, there would be a distinguisher that distinguishes $U_{d(n^c)}$ from $U_{n^c}$ with advantage at least $\frac{1}{2} - \frac{1}{3} > \frac{1}{8}$, which contradicts the definition of a PRG.

Q: Where does this advantage come from? I simply don't understand what these numbers mean.

Q: Construction question. Why do we need to run over all seeds of length $d(n^c)$? There are $2^{d(n^c)}$ such strings. To me this sounds like a strengthening of randomness, since afterwards we take the majority as the result. But why do we need that at all? Why can't we use a single seed of length $d(n^c)$?

Q: Philosophical question. Why do we need the PRG to be secure against non-uniform distinguishers? Why can't we use a PRG that is only indistinguishable by uniform distinguishers? I saw the idea that the primary input $x$ to $A$ can be used as advice, but I still don't understand why we need it.

The idea of a PRNG is to come up with a "deterministic" sequence that "looks random", in the sense that programs with access to it behave (roughly) as if they had access to a completely random string. The PRNG takes a seed or a key, and outputs a longer pseudorandom sequence. When simulating some probabilistic algorithm, instead of choosing a completely random string, it is enough to choose a random seed and use a PRNG.
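As a toy illustration of the stretching, here is a sketch that expands a short seed into a longer string by hashing the seed together with a counter. This is purely illustrative: SHA-256 in counter mode is a practical heuristic, not a complexity-theoretic PRG, and the function name is my own.

```python
import hashlib

def expand(seed: bytes, out_len: int) -> bytes:
    """Toy stretcher: hash (seed, counter) blocks until out_len bytes exist.
    Illustrative only -- not a proven complexity-theoretic PRG."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

# A 4-byte seed deterministically yields a 100-byte "pseudorandom" string.
r = expand(b"\x00\x01\x02\x03", 100)
assert len(r) == 100
assert r == expand(b"\x00\x01\x02\x03", 100)  # same seed, same output
```

The point is only that the output is much longer than the seed and is a deterministic function of it, which is what the simulation below exploits.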

How would you use a PRNG to simulate (say) a BPP algorithm? Suppose we had a PRNG that takes a seed of length $L = O(\log n)$ and $\epsilon$-fools every BPP algorithm. (Pseudorandom sequences don't look completely random, but in this case using them instead of a truly random sequence changes the probability of any event by at most $\epsilon$.) That means that if we pick the seed at random, then the BPP algorithm returns the correct answer with probability at least $2/3-\epsilon$ (supposing that the original success probability was $2/3$; that depends on your definition of BPP). If $\epsilon < 1/6$, then we have the following property:

• If we run the algorithm on a YES instance and a random seed, then it returns YES with probability larger than $1/2$.
• If we run the algorithm on a NO instance and a random seed, then it returns YES with probability less than $1/2$.
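To see why $\epsilon < 1/6$ is the right threshold: on a YES instance the acceptance probability is at least $2/3-\epsilon$, and on a NO instance at most $1/3+\epsilon$; both sit on the correct side of $1/2$ exactly when $\epsilon < 1/6$. A quick numeric check (my own, with $\epsilon = 0.1$):

```python
eps = 0.1  # any eps < 1/6 works
yes_lb = 2/3 - eps  # lower bound on Pr[accept] for a YES instance
no_ub = 1/3 + eps   # upper bound on Pr[accept] for a NO instance
assert yes_lb > 1/2 and no_ub < 1/2  # the 1/2 threshold separates the two cases
```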

This property assumes we choose a random seed, but we wanted a deterministic algorithm. To that end, we go over all possible seeds and calculate the probability that the algorithm returns YES. This is just the number of seeds for which the algorithm returns YES, divided by $2^L$. If the probability is larger than $1/2$ then it's a YES instance, otherwise it's a NO instance. If indeed $L = O(\log n)$, then $2^L$ is polynomial, and we obtain a deterministic polynomial-time algorithm.
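The enumeration step can be sketched as follows, assuming a hypothetical algorithm `A(x, r)` and stretcher `G`; both are placeholders of my own, and the toy `G` below (which just repeats the seed) is of course not a real PRG.

```python
from itertools import product

def derandomize(A, G, x, seed_len):
    """Deterministic simulation: run A on G(seed) for every seed of the given
    length (2**seed_len runs) and accept iff a majority of runs accept."""
    total = 2 ** seed_len
    yes_votes = sum(1 for bits in product([0, 1], repeat=seed_len)
                    if A(x, G(bits)))
    return yes_votes > total / 2

# Toy demo with placeholder A and G:
G = lambda seed: list(seed) * 4          # "stretch" by repetition (not a PRG!)
A = lambda x, r: (r[0] | r[1]) == x      # a trivial "probabilistic" test
assert derandomize(A, G, 1, 3)           # 6 of 8 seeds accept: majority says YES
assert not derandomize(A, G, 0, 3)       # only 2 of 8 accept: majority says NO
```

Note that the loop body never inspects any individual seed; only the aggregate count matters, which is exactly the point of question 2 below.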

1. I haven't looked at the scribe notes, but the advantage might come from the error probability of the BPP algorithm.
2. Looking at a fixed seed is the same as looking at a fixed "random" sequence - you don't need a PRNG for that. All you know is that on average, the performance of the algorithm on a pseudorandom sequence correlates with whether the input is a YES or a NO instance. That doesn't say anything about a single seed.
3. As to uniformity, the program that your PRNG should fool is $A(x,\cdot)$ for an arbitrary input $x$. While $A$ itself is uniform, $A(x,\cdot)$ isn't, since $x$ could be arbitrary; the input $x$ plays the role of non-uniform advice. Your PRNG should fool $A(x,\cdot)$ simultaneously for all possible inputs $x$.