
[Solved]: The membership problem for context-sensitive languages is PSPACE-complete

Problem Detail: 

I have read that the membership problem for CSL is PSPACE-complete but I couldn't find the proof anywhere. So I tried it myself.

Let's call the membership problem for CSLs $MEM$. First I have to prove that $MEM \in PSPACE$. This should be easy: just take a Turing machine that generates the words of $L$ in lexicographic order and check whether any of them equals the input word $w$. We can stop the Turing machine when we reach words of length $|w|+1$.

Second, make a reduction from some $PSPACE$-complete language. The quantified Boolean formula problem (QBF) seems suitable for this reduction. I have seen how to make a reduction from $MEM$ to QBF, but here I need the opposite. If I had a word $w$, I could build a formula based on the configurations I must go through to get $w$, and all those configurations would make the QBF true. The representation could just go from the binary code to some formulas.

But in the opposite direction, I don't know how to construct a CSL instance from an arbitrary given formula.

Asked By : Dracke

Answered By : Yuval Filmus

I'm assuming that the context-sensitive language is given to you as a context-sensitive grammar. Summarizing the Wikipedia article, here is how to show that the word problem for context-sensitive grammars is $\mathsf{PSPACE}$-complete.

First, we show that the word problem is in $\mathsf{PSPACE}$. The productions in a context-sensitive grammar cannot decrease the size of the word. Therefore, given a context-sensitive grammar $G$ and a word $w$, we can repeatedly apply productions in reverse, in the hope of eventually reaching the starting symbol $S$. We also keep a counter, and give up after $|\Sigma|^{|w|}$ steps (this ensures that the machine always halts), where $\Sigma$ contains both terminals and non-terminals; the size of this counter is $|w| \log |\Sigma| = O(n\log n)$ (here $n$ is the input size). This puts the word problem in $\mathsf{NSPACE}(O(n\log n)) \subseteq \mathsf{NPSPACE} = \mathsf{PSPACE}$, using Savitch's theorem.
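The reverse-derivation idea above can be sketched in code. This is a brute-force breadth-first version for illustration only: it stores every visited sentential form, so unlike the nondeterministic machine in the proof it is not space-efficient, but it shows why non-contracting productions bound the search. The example grammar, for $\{a^n b^n c^n : n \ge 1\}$, is my own choice and not from the answer.

```python
from collections import deque

def derives(grammar, start, w):
    """Brute-force membership test for a context-sensitive grammar.

    `grammar` is a list of (lhs, rhs) productions with len(lhs) <= len(rhs).
    We apply productions in reverse, starting from w and searching for the
    start symbol. Since productions never decrease length, every sentential
    form on the way has length <= len(w), so the search space is finite."""
    seen = {w}
    queue = deque([w])
    while queue:
        form = queue.popleft()
        if form == start:
            return True
        for lhs, rhs in grammar:
            # Replace each occurrence of rhs by lhs (one reverse step).
            i = form.find(rhs)
            while i != -1:
                prev = form[:i] + lhs + form[i + len(rhs):]
                if prev not in seen:
                    seen.add(prev)
                    queue.append(prev)
                i = form.find(rhs, i + 1)
    return False

# Example: a context-sensitive grammar for { a^n b^n c^n : n >= 1 }
grammar = [
    ("S", "aSBC"), ("S", "aBC"),
    ("CB", "BC"),
    ("aB", "ab"), ("bB", "bb"),
    ("bC", "bc"), ("cC", "cc"),
]
```

For instance, `derives(grammar, "S", "aabbcc")` succeeds while `derives(grammar, "S", "abbc")` fails after exhausting all sentential forms of length at most 4.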

In the other direction, Kuroda showed how to convert a linear bounded automaton (that is, a machine in $\mathsf{NSPACE}(n)$) into a context-sensitive grammar in such a way that the LBA accepts a word iff the CSG generates it. Given a language $L \in \mathsf{SPACE}(p(n))$ for some polynomial $p$, we consider the language $L' = \{x\#^{p(|x|)} : x \in L \}$ obtained by padding $L$, and note that $L' \in \mathsf{NSPACE}(n)$. Kuroda's theorem gives an equivalent context-sensitive grammar $G'$. In order to check whether $x \in L$, we first compute $x' = x\#^{p(|x|)}$, and then check whether $G'$ generates $x'$. This gives a polynomial-time reduction from any language in $\mathsf{PSPACE}$ to the word problem for context-sensitive grammars, and so the latter is $\mathsf{PSPACE}$-complete.
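Concretely, the padding step of the reduction is just string concatenation, clearly computable in polynomial time. A minimal sketch, where the polynomial space bound `p` is a hypothetical parameter supplied by the caller:

```python
def pad(x: str, p) -> str:
    """Map x to x' = x · #^{p(|x|)}, so that x is in L iff x' is in L'.

    The padded language L' = { x #^{p(|x|)} : x in L } can be decided in
    nondeterministic linear space, since the input x' already has length
    at least p(|x|)."""
    return x + "#" * p(len(x))

# e.g. with a hypothetical space bound p(n) = n^2, pad("101", ...) appends 9 pad symbols
```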
