Publication
ISTCS 1995
Conference paper

On the role of shared randomness in two prover proof systems


Abstract

We consider which aspects of the two-prover model are necessary for its striking language recognition and zero-knowledge capabilities. We approach this question by looking at an alternative, more symmetric model which we call the double-verifier model. We find that in this model the shared randomness of the verifiers is key to the language recognition power: if the verifiers don't share randomness, the power is PSPACE; otherwise it is MIP=NEXPTIME. We find that the shared randomness of the provers is necessary for zero-knowledge: if the provers don't share randomness, statistical zero-knowledge is only possible for languages in BPP^NP; otherwise it is possible for all of NEXPTIME. These results have immediate implications for the standard two-prover model. We see that correlation between the verifier's queries is crucial for the language recognition power of two-prover proofs. In particular, the natural analog of IP=AM does not hold in the two-prover model unless NEXPTIME=PSPACE. Similarly, we see that shared randomness, or correlation of the provers' answers, is necessary for the statistical zero-knowledge of two-prover proofs.
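
The dichotomies stated in the abstract can be restated symbolically in the sketch below. This is only a convenience summary of the claims above; the layout and wording of the cases are illustrative and are not notation taken from the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative restatement of the abstract's results for the
% double-verifier model; not notation from the paper.
\begin{align*}
  \text{no shared verifier randomness:}\quad & \text{power} = \mathrm{PSPACE}\\
  \text{shared verifier randomness:}\quad    & \text{power} = \mathrm{MIP} = \mathrm{NEXPTIME}\\
  \text{no shared prover randomness:}\quad   & \text{statistical ZK only for } L \in \mathrm{BPP}^{\mathrm{NP}}\\
  \text{shared prover randomness:}\quad      & \text{statistical ZK for all of } \mathrm{NEXPTIME}
\end{align*}
\end{document}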
