Jekyll2019-06-23T23:04:19+00:00https://unnatural-proofs.github.io/feed.xmlin search of unnatural proofscomplexity theory and moreIn Defense of Random Oracles2019-05-23T00:00:00+00:002019-05-23T00:00:00+00:00https://unnatural-proofs.github.io/2019/in-defense-of-random-oracles<div style="display:none;">
$$
\newcommand{\QSZK}{\textsf{QSZK}}
\newcommand{\SZK}{\textsf{SZK}}
\newcommand{\NP}{\textsf{NP}}
\newcommand{\P}{\textsf{P}}
\newcommand{\coNP}{\textsf{coNP}}
\newcommand{\UP}{\textsf{UP}}
\newcommand{\coUP}{\textsf{coUP}}
\newcommand{\BQP}{\textsf{BQP}}
\newcommand{\BPP}{\textsf{BPP}}
\newcommand{\PSPACE}{\textsf{PSPACE}}
\newcommand{\IP}{\textsf{IP}}
$$
$$
\newcommand{\N}{\mathbb{N}}
$$
$$
\newcommand{\A}{\mathcal{A}}
\newcommand{\poly}{\text{poly}}
\newcommand{\polylog}{\text{polylog}}
$$
$$
\newcommand{\ket}[1]{\lvert #1 \rangle}
\newcommand{\bra}[1]{\langle #1 \rvert}
\newcommand{\coloneqq}{\mathrel{:=}}
\newcommand{\dim}{\text{dim}}
$$
</div>
<p>A few days ago, I read<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<blockquote>
<p><em>The Random Oracle Model: A Twenty-Year Retrospective</em><br />
Neal Koblitz and Alfred Menezes<br />
Crypto ePrint <a href="https://eprint.iacr.org/2015/140">2015/140</a></p>
</blockquote>
<p>and it reaffirmed my longstanding belief that oracle results are useful. There is a counterexample to the “Random Oracle Hypothesis” (one can <a href="https://doi.org/10.1016/S0022-0000(05)80084-4">show</a> that relative to a random oracle $\IP \neq \PSPACE$; it is exactly what you’d expect), but used correctly, oracles are a very powerful tool for reasoning about the real world. There are similar counterexamples in the crypto world; perhaps the most famous is Shafi Goldwasser and Yael Tauman’s <a href="https://eprint.iacr.org/2003/034">proof</a> of the insecurity of the <em>Fiat-Shamir transform</em>.<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> I don’t want to take up more of your time—read the paper.</p>
<p>Some people might call me a hypocrite for liking Koblitz’s view about random oracles but not liking <a href="https://www.ams.org/notices/200708/tx070800972p.pdf">his views</a> on the foundations of cryptography. To them, I say that it is more nuanced than that. Trashing random oracles because of a few synthetic counterexamples is just as bad as trashing an entire field based on a few anecdotes. (Contrary to what most people expect, I never have claimed, and never will claim, that “foundations of crypto—or any other subfield of theoretical computer science—is immediately useful.” Also, I agree with Koblitz that “nontightness” in reductions is a huge problem, especially in lattice crypto, where people keep throwing around “our scheme is secure based on the worst-case hardness of approximating lattice problems.” Sanjit Chatterjee, Neal Koblitz, Alfred Menezes, and Palash Sarkar have a <a href="https://eprint.iacr.org/2016/360">beautiful paper</a> emphasizing this issue.)</p>
<p>On another side note, the bandwagon effect that Koblitz describes with regard to crypto in the 1990s is exactly what is happening right now with blockchain and machine learning (and, to a lesser extent, even quantum computing).</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>(on the bus) <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>Remind me to write a blog post on this. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>sankethMore Tweets: Quantum Economics2019-05-11T00:00:00+00:002019-05-11T00:00:00+00:00https://unnatural-proofs.github.io/2019/more-tweets-quantum-economics<p>This post is essentially a reference to my tweets. I will write a coherent blog post sometime in the future.</p>
<blockquote class="twitter-tweet tw-align-center" data-lang="en"><p lang="en" dir="ltr">From page 5 of the referenced paper (<a href="https://t.co/Yw3rwCGZET">https://t.co/Yw3rwCGZET</a>). <a href="https://t.co/ZSIe9Y31Oi">pic.twitter.com/ZSIe9Y31Oi</a></p>— Sanketh Menda (@sgmenda) <a href="https://twitter.com/sgmenda/status/1127289744046600192?ref_src=twsrc%5Etfw">May 11, 2019</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>See, also, <a href="https://twitter.com/sgmenda/status/1126907682626125824">this thread</a>.</p>
<p><strong>Edit (26/05):</strong> See, also,</p>
<blockquote class="twitter-tweet tw-align-center" data-lang="en"><p lang="en" dir="ltr">"Money or currency is believed by some to have a quantum nature. As we move towards a cashless economy and as digital- and crypto-currencies are on the rise, their diffusion will have commonality on which quantum physics operates." <br /> <a href="https://t.co/P0GcSg9TU4">https://t.co/P0GcSg9TU4</a></p>— Jonathan P. Dowling (@jpdowling) <a href="https://twitter.com/jpdowling/status/1132462366950600704?ref_src=twsrc%5Etfw">May 26, 2019</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>sankethQuantum Computers Could Not Have Prevented 2008!!!2019-04-09T00:00:00+00:002019-04-09T00:00:00+00:00https://unnatural-proofs.github.io/2019/quantum-computers-could-not-have-prevented-2008<p>This post is essentially a reference to my month-old tweets.</p>
<blockquote class="twitter-tweet tw-align-center" data-lang="en"><p lang="en" dir="ltr">1. Nature ≠ NPJQI<br />2. Risk measures like VaR were partly responsible for 2008. (<a href="https://t.co/t2T67Kas4m">https://t.co/t2T67Kas4m</a>)<br />3. Quadratic speedups for monte carlo are boring.</p>— Sanketh Menda (@sgmenda) <a href="https://twitter.com/sgmenda/status/1102208126986739715?ref_src=twsrc%5Etfw">March 3, 2019</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet tw-align-center" data-conversation="none" data-lang="en"><p lang="en" dir="ltr">Academics have been warning us about the dangers of VaR since 1995: (<a href="https://t.co/1NhdjgxRb2">https://t.co/1NhdjgxRb2</a>) (<a href="https://t.co/1NhdjgxRb2">https://t.co/1NhdjgxRb2</a>). If you want a more extreme take on it, see Taleb: (<a href="https://t.co/rTlJNBIXLw">https://t.co/rTlJNBIXLw</a>). The basic idea is that VaR is extremely sensitive to model specification.</p>— Sanketh Menda (@sgmenda) <a href="https://twitter.com/sgmenda/status/1115624921714044929?ref_src=twsrc%5Etfw">April 9, 2019</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>After 2008, Taleb also <a href="http://nassimtaleb.org/2010/06/nassim-taleb-speaks-to-congress-value-at-risk-var/">spoke</a> to Congress about the risks of using VaR. (Ignore the description under the video.) You can see the full hearing <a href="https://youtu.be/40Gkp0wJplU">here</a>. More generally, see the <a href="https://en.wikipedia.org/wiki/Value_at_risk#Criticism">Criticism section</a> on VaR’s Wikipedia page.</p>sankethEdmonds in 19672019-04-07T00:00:00+00:002019-04-07T00:00:00+00:00https://unnatural-proofs.github.io/2019/edmonds-in-1967<div style="display:none;">
$$
\newcommand{\P}{\text{P}}
\newcommand{\EdmondsP}{\text{EdmondsP}}
\newcommand{\NP}{\text{NP}}
\newcommand{\coNP}{\text{coNP}}
\newcommand{\BQP}{\text{BQP}}
$$
</div>
<blockquote>
<p>
I conjecture that there is no good algorithm for the traveling salesman problem. My reasons are the same as for any mathematical conjecture: (1) It is a legitimate mathematical possibility, and (2) I do not know.
</p><br />
<cite>Jack Edmonds, <a href="https://nvlpubs.nist.gov/nistpubs/jres/71b/jresv71bn4p233_a1b.pdf">Optimum Branchings</a>, J. Res. Natl. Bur. Stand. 71B, 233-240 (1967). </cite>
</blockquote>
<p>I have seen this quote many times (it appears in Papadimitriou, and in Arora and Barak) but I hadn’t read the source until today. I highly recommend anything by Edmonds; he is awesome. If you want to read just one paper, check out <a href="https://doi.org/10.4153/CJM-1965-045-4">Paths, Trees, and Flowers</a>.</p>
<p>If you are wondering, I still don’t believe that $\P = \NP \cap \coNP$. On the other hand, I wouldn’t be surprised if every combinatorial problem that is currently in $\NP \cap \coNP$—you could call this class $\EdmondsP$—turns out to be in $\P$. $\EdmondsP$, for instance, would include graph isomorphism, which I strongly believe is in $\P$. Also, if you are wondering why this would not imply $\P = \NP \cap \coNP$—after all, for $\NP$, putting the complete problems in $\P$ would give $\P=\NP$—it is because we don’t believe that $\NP \cap \coNP$ has complete problems (Sipser <a href="https://doi.org/10.1007/BFb0012797">constructed</a> a relativized world where it has none).</p>
<p><strong>Added on May 11, 2019:</strong> I heard Jack Edmonds talk about this at the <a href="http://www.fields.utoronto.ca/activities/18-19/NP50">Cook Symposium</a>. I admire him a lot more now. On a side note, a debate between Edmonds and Sipser broke out at the conference about the progress towards proving $\P \neq \NP$; you can see it for yourself <a href="http://www.fields.utoronto.ca/talks/Adventures-Complexity">here</a> (the debate starts at 10:00). I used to be in Sipser’s camp, but now I am squarely in Edmonds’s camp: the point of complexity theory is to inform real-world decisions. It doesn’t matter whether $\P = \NP$ or not if we have an efficient (in the real world) algorithm for SAT.</p>sankethMulmuley’s PRAM2019-03-17T00:00:00+00:002019-03-17T00:00:00+00:00https://unnatural-proofs.github.io/2019/Mulmuleys-PRAM<div style="display:none;">
$$
\newcommand{\P}{\text{P}}
\newcommand{\NC}{\text{NC}}
\newcommand{\NP}{\text{NP}}
\newcommand{\BQP}{\text{BQP}}
\newcommand{\BPP}{\text{BPP}}
\newcommand{\PSPACE}{\text{PSPACE}}
\newcommand{\SP}{\text{#P}}
\newcommand{\BQNC}{\text{BQNC}}
$$
$$
\newcommand{\CC}{\mathbb{C}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\NN}{\mathbb{N}}
$$
$$
\newcommand{\A}{\mathcal{A}}
\newcommand{\poly}{\text{poly}}
\newcommand{\polylog}{\text{polylog}}
$$
$$
\newcommand{\ket}[1]{\lvert #1 \rangle}
\newcommand{\bra}[1]{\langle #1 \rvert}
\newcommand{\coloneqq}{\mathrel{:=}}
\newcommand{\dim}{\text{dim}}
$$
</div>
<p>Today, I will talk about one of my favorite models of computation—Mulmuley’s PRAM. To keep this post short, avoid embarrassing myself, and not fail any of my assignments, I will stick to just the model. In a later post, I will talk more generally about GCT.</p>
<p>This post is based on my notes which in turn are based on Joshua Grochow’s <a href="https://www.cs.toronto.edu/~toni/Courses/PvsNP/Lectures/lecture7-1.pdf">lec</a><a href="https://www.cs.toronto.edu/~toni/Courses/PvsNP/Lectures/lecture7-2.pdf">tur</a><a href="https://www.cs.toronto.edu/~toni/Courses/PvsNP/Lectures/lecture8.pdf">es</a> for CSC 2429 and Mulmuley’s <a href="http://gct.cs.uchicago.edu/">GCT papers</a>.</p>
<p>But, first, why should you care about models other than Turing machines (or uniform circuits!)? Because you can <em>prove</em> stuff. Remember that time, more than a decade ago, when STOC papers had actual unconditional proofs? Those kinds of proofs. ;-p</p>
<p>Here is the punchline:</p>
<p><strong>Theorem 1</strong> (Mulmuley (1997, 1999))<strong>.</strong> In the PRAM model without bit operations (Mulmuley’s PRAM), $\P \neq \NC$.</p>
<p>If you have never seen $\NC$ before, don’t worry, we will see a definition soon. For now, think of it as problems that admit really fast ($\polylog$ time) parallel algorithms.</p>
<p>One of the reasons we care about $\P$ vs. $\NC$ is the question of fast parallel algorithms for combinatorial optimization problems like <a href="https://en.wikipedia.org/wiki/Maximum_flow_problem">max-flow</a>, which is $\P$-complete. If $\P \neq \NC$, then there is no fast parallel algorithm for max-flow. Max-flow is a particularly nice problem because it has a strongly-polynomial-time algorithm; that is, the running time is polynomial in the number of input parameters, not in the input bitlength. We don’t know whether this property holds for all problems in $\P$ (where it makes sense to ask!); a major open problem in TCS is to determine whether linear programming has a strongly-polynomial algorithm.</p>
<p>For algebraic problems like max-flow, it makes sense to ask whether there is a fast parallel algorithm that does not use bit operations. Theorem 1 unconditionally rules out this possibility. Notice that Theorem 1 is formally implied by $\P \neq \NC$—I will later argue that it is, in turn, very strong evidence in favor of $\P \neq \NC$.</p>
<p><strong>What is a bit operation?</strong> An operation that acts on the individual bits of the input/data like $\vee$, $\wedge$, <code class="highlighter-rouge">extract-bit</code>, <code class="highlighter-rouge">modify-bit</code>,… For this to make sense, think of the input as an array of integers.</p>
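<p>For concreteness, here is the kind of instruction the model forbids, sketched in Python (my own illustration, not from Mulmuley’s papers):</p>

```python
def extract_bit(x, i):
    """A bit operation: read bit i of the integer x.  This is exactly the
    kind of instruction Mulmuley's model disallows; adding, subtracting,
    and multiplying whole integers at unit cost is allowed."""
    return (x >> i) & 1

print(extract_bit(13, 0), extract_bit(13, 1), extract_bit(13, 2))  # prints: 1 0 1
```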
<h3 id="pram-model-without-bit-operations-aka-mulmuleys-pram">PRAM Model Without Bit Operations aka Mulmuley’s PRAM</h3>
<p>This model was introduced in Mulmuley (1993). Informally, it is a hybrid of algebraic models and restricted circuit models. The input is a bunch of integers. Like algebraic models, you can add and multiply these integers at unit cost. But—unlike algebraic models—the runtime and the number of processors are allowed to depend on <em>both</em> the number of inputs and their bitlength (don’t worry, this will become clearer in a second). Because of these weird characteristics, this model can do almost everything parallel algorithms can do. For example, it can do</p>
<ul>
<li>Neff’s <a href="https://doi.org/10.1016/S0022-0000(05)80061-3">specified precision polynomial root isolation</a></li>
<li>Csanky’s <a href="https://doi.org/10.1137/0205040">matrix inversion</a></li>
<li>Ben-Or et al.’s <a href="https://epubs.siam.org/doi/10.1137/0217069">determination of all roots of a polynomial with real roots</a></li>
<li>Karger and Motwani’s <a href="https://www.cs.bu.edu/faculty/gacs/courses/cs535/papers/p497-karger.pdf">min-cuts</a></li>
</ul>
<p>I don’t quite understand these results, so don’t ask me about them…</p>
<p><strong>Definition</strong> (Algebraic RAM Program over $\ZZ$)<strong>.</strong> First, think of your garden-variety RAM machine with 1 processor and infinitely many memory locations (the addresses start at <code class="highlighter-rouge">0x1</code> and go to infinity). Here, each memory location can store an integer (instead of a bit). As usual, the memory is split between input, output, and workspace. There are a constant number of distinct instructions, each of one of the following forms:</p>
<ol>
<li>$w = u \circ v$ where
<ul>
<li>$\circ \in \{+, -, \times\}$</li>
<li>$w$ is a memory location</li>
<li>$u,v$ are memory locations or constants.</li>
</ul>
</li>
<li><code class="highlighter-rouge">goto</code> $\ell$ where $\ell$ is an instruction label.</li>
<li>conditioned on $u \square 0$, <code class="highlighter-rouge">branch</code> to $\ell$, where
<ul>
<li>$\square \in \{<, \leq, =\}$</li>
<li>$u$ is a memory location</li>
<li>$\ell$ is an instruction label</li>
</ul>
</li>
<li>copy $u$ to $v$, where $u,v$ are memory locations.</li>
<li>dereference $*u$; that is, interpret the value of $u$ as a memory location and read from there.</li>
<li>address of $\&u$; that is, get address of $u$.</li>
<li><code class="highlighter-rouge">return</code></li>
</ol>
<p>If you have ever taken a computer architecture course, then the above definition should look familiar. Yes, there are some gaps in my definition; if you care, try to fill them in as an exercise. One important thing to note is that—unlike with real processors—here we assume that all these instructions take unit time (the “unit cost model”). This assumption only makes our claims stronger, since we are only going to prove lower bounds.</p>
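<p>To make the instruction set concrete, here is a toy interpreter for a fragment of it (arithmetic, branching, and return). The encoding of instructions is my own invention for illustration, and the indirect-addressing instructions are omitted:</p>

```python
def val(memory, operand):
    """An operand is either a constant ("c", value) or a memory
    location ("m", address); memory maps addresses to integers."""
    kind, x = operand
    return x if kind == "c" else memory[x]

def run(program, memory, max_steps=10_000):
    """Execute until a 'return' instruction, mutating and returning memory.
    Every instruction counts as one step: the unit-cost assumption."""
    pc = 0
    for _ in range(max_steps):
        op, *args = program[pc]
        if op == "arith":                      # w = u o v, with o in {+, -, x}
            w, o, u, v = args
            fns = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                   "*": lambda a, b: a * b}
            memory[w] = fns[o](val(memory, u), val(memory, v))
        elif op == "goto":                     # unconditional jump
            pc = args[0]
            continue
        elif op == "branch":                   # jump if u compares to 0
            u, square, target = args
            tests = {"<": lambda a: a < 0, "<=": lambda a: a <= 0,
                     "=": lambda a: a == 0}
            if tests[square](val(memory, u)):
                pc = target
                continue
        elif op == "return":
            return memory
        pc += 1
    raise RuntimeError("step limit exceeded")

# Example program: memory[2] = memory[0] * memory[1] + 1.
prog = [("arith", 2, "*", ("m", 0), ("m", 1)),
        ("arith", 2, "+", ("m", 2), ("c", 1)),
        ("return",)]
print(run(prog, {0: 6, 1: 7})[2])  # prints 43
```

Note that nothing here can look inside the binary representation of a cell: the only access to the integers is through whole-integer arithmetic and sign tests, which is the point of the model.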
<p><strong>Definition</strong> (Nonuniform Algebraic RAM over $\ZZ$)<strong>.</strong> This is similar to a nonuniform family of circuits. A sequence
\begin{equation}
\A = \{A_{n,N} : n,N \in \NN \}
\end{equation}
of algebraic RAM programs over $\ZZ$. For an input of $n$ integers with total bitlength at most $N$, we run $A_{n,N}$.</p>
<p><strong>Definition</strong> (Algebraic PRAM Program over $\ZZ$)<strong>.</strong> The P in PRAM stands for parallel. Here, the number of processors is $\poly(n,N)$. Every processor has private memory and can communicate with the other processors using shared memory. As usual, we have EREW, CREW, and CRCW modes (if you don’t know about these modes, forget that I mentioned them).</p>
<h3 id="mulmuleys-lower-bound">Mulmuley’s Lower Bound</h3>
<p>As I mentioned above, I am not going to explain this result. (I don’t quite understand it myself!) But I want to state it a little more formally.</p>
<p><strong>Theorem 1</strong> (Mulmuley (1997, 1999))<strong>.</strong> The max-flow problem on $n$ nodes, where every edge capacity is a nonnegative integer of bitlength at most $O(n^2)$, cannot be solved in $o(\sqrt{n})$ time using $2^{o(\sqrt{n})}$ processors.</p>
<p>Here we are considering the decision version of the max-flow problem: the input also includes a parameter $f_0$, and you want to decide whether the max flow exceeds $f_0$.</p>
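<p>As a point of reference, the decision version is easy sequentially; here is a minimal sketch (mine, not from Mulmuley) using Edmonds-Karp, which freely uses comparisons and arithmetic. The theorem is about parallel time, about which this sequential code says nothing:</p>

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp on an n-node graph; cap maps (u, v) -> capacity."""
    res = {}                                   # residual capacities
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}                     # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if res.get((u, v), 0) > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                    # no augmenting path left: done
            return flow
        path, v = [], t                        # recover the path, then augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

def max_flow_exceeds(n, cap, s, t, f0):
    """The decision version: does the maximum s-t flow exceed f_0?"""
    return max_flow(n, cap, s, t) > f0

cap = {(0, 1): 20, (0, 2): 10, (1, 2): 5, (1, 3): 10, (2, 3): 20}
print(max_flow(4, cap, 0, 3))                  # prints 25
print(max_flow_exceeds(4, cap, 0, 3, 20))      # prints True
```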
<p>Mulmuley’s result also holds for the constant-additive-error approximation version, and it extends to <em>PRAM with limited bit operations</em>, where parity, left shift (by 1), and right shift (by 1) are allowed. I will elaborate on this in a forthcoming GCT post, but it is super cool how you can make this model “more boolean” without fucking everything up. Roughly speaking, this is why GCT has the potential to prove boolean $\P \neq \NP$.</p>
<h3 id="random-and-quantum-pram">Random and Quantum PRAM</h3>
<p>Let us start by talking about randomized PRAM. This turns out to be not that hard: just add an instruction</p>
<ol>
<li><code class="highlighter-rouge">random-branch</code> $\ell$, which flips a fair coin and branches to label $\ell$ if the coin comes up 1.</li>
</ol>
<p>Defining quantum PRAM is equally easy: add the instruction</p>
<ol>
<li><code class="highlighter-rouge">quantum-branch</code> $\ell$ $\theta$ which
<ul>
<li>continues with amplitude $\sin(\theta)$, and</li>
<li>branches with amplitude $i\cos(\theta)$.</li>
</ul>
</li>
</ol>
<p>This gate is inspired by <a href="https://doi.org/10.1098/rspa.1989.0099">Deutsch’s (1989)</a> construction of a universal quantum gate. I am not going to get into it here, but for our purposes, it suffices to have this gate only for a fixed constant number of values of $\theta$. (For a far better definition of quantum PRAM, see <a href="https://doi.org/10.1098/rspa.2012.0686">Beals et al. (2013)</a>.)</p>
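<p>To sanity-check that <code class="highlighter-rouge">quantum-branch</code> preserves normalization, here is a quick numpy computation. The first column (continue with amplitude $\sin\theta$, branch with amplitude $i\cos\theta$) is from the description above; completing it to a symmetric $2\times 2$ matrix is my own choice, one of several valid ones:</p>

```python
import numpy as np

def quantum_branch(theta):
    """Transition matrix on the two control states (continue, branch):
    continuing carries amplitude sin(theta), branching carries i*cos(theta).
    The second column is a symmetric completion chosen for illustration."""
    s, c = np.sin(theta), np.cos(theta)
    return np.array([[s, 1j * c],
                     [1j * c, s]])

theta = 0.7
U = quantum_branch(theta)
# Unitarity (U @ U^dagger = I) means amplitudes stay normalized for any theta.
print(np.allclose(U @ U.conj().T, np.eye(2)))  # prints True
```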
<p><strong>Claim.</strong> Quantum PRAM corresponds to $\BQNC$.</p>
<p>Now, here is my conjecture (which I think I can prove):</p>
<p><strong>Conjecture 1.</strong> In the PRAM model without bit operations, $\P \neq \BQNC$.</p>
<p>The reason this conjecture might be interesting concerns the power of $\P^{\BQNC}$, which kinda models the power of near-term quantum computers. Hit me up if you want to chat about this.</p>
<h3 id="references">References</h3>
<p>Mulmuley, Ketan. “A Lower Bound for Solvability of Polynomial Equations.” In Foundations of Software Technology and Theoretical Computer Science, 13th Conference, Bombay, India, December 15-17, 1993, Proceedings, 268–83, 1993. DOI: <a href="https://doi.org/10.1007/3-540-57529-4_60">10.1007/3-540-57529-4_60</a>.</p>
<p>—. “Lower Bounds for Parallel Linear Programming and Other Problems.” In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, 23-25 May 1994, Montréal, Québec, Canada, 603–14, 1994. DOI: <a href="https://doi.org/10.1145/195058.195413">10.1145/195058.195413</a>.</p>
<p>—. “Is There an Algebraic Proof for P != NC? (Extended Abstract).” In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, El Paso, Texas, USA, May 4-6, 1997, 210–19, 1997. DOI: <a href="https://doi.org/10.1145/258533.258586">10.1145/258533.258586</a>.</p>
<p>—. “Lower Bounds in a Parallel Model without Bit Operations.” SIAM J. Comput. 28, no. 4 (1999): 1460–1509. DOI: <a href="https://doi.org/10.1137/S0097539794282930">10.1137/S0097539794282930</a>.</p>sankethWhat Does It Mean to Simulate a Quantum Computer?2018-12-01T00:00:00+00:002018-12-01T00:00:00+00:00https://unnatural-proofs.github.io/2018/what-does-it-mean-to-simulate-a-quantum-computer<p><a href="https://scholar.google.com/citations?user=GqpgudUAAAAJ&hl=en">Hakop Pashayan</a> of The University of Sydney gave an excellent talk on classical simulation of quantum circuits at the Institute for Quantum Computing yesterday. The talk was based on the following paper:</p>
<blockquote>
<p><em>From estimation of quantum probabilities to simulation of quantum circuits</em><br />
Hakop Pashayan, Stephen D. Bartlett, and David Gross<br />
<a href="https://arxiv.org/abs/1712.02806">arXiv:1712.02806 [quant-ph]</a></p>
</blockquote>
<p>The big takeaway for me was the new perspective on classical simulation (of quantum computation).</p>
<p>Normally, when we talk about classical simulation we talk about efficient algorithms for outputting an approximation to the answer; that is, if the original circuit accepts the input with high probability, then the simulation should accept the input with high probability. A self-contained paper that I really like in this direction is <a href="https://arxiv.org/abs/quant-ph/0406196v5">Aaronson and Gottesman (2004)</a>.</p>
<p>But the metric we <em>really</em> care about is <em>computational indistinguishability</em>: if we cannot tell the difference between a quantum computer and the simulator in polynomial time, it doesn’t matter which one we have. Of course, the simulator should be able to do everything in $\text{NP} \cap \text{BQP}$, but when we are talking about sampling problems (like simulating restricted quantum systems) outside $\text{NP}$, this distinction matters. Also, most restricted quantum systems cannot do stuff like factoring, the kind of problem that (conjecturally) puts $\text{NP} \cap \text{BQP}$ outside $\text{P}$.</p>
<p>So, lemme define a simulator as follows. A classical algorithm $A$ is a <em>(classical) simulator</em> of a quantum system $\mathcal{Q}$ if there does not exist a polynomially-bounded classical verifier $V$ that, given oracle access to both, can tell the difference between $A$ and $\mathcal{Q}$.</p>
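<p>The weakest such verifier just compares output statistics. As a toy illustration (my own, stdlib only): estimate the total variation distance between two samplers from samples alone. A simulator in the sense above must keep every such efficiently computable statistic, and indeed every adaptive test, close between itself and the real device:</p>

```python
import random
from collections import Counter

def empirical_tvd(sample_a, sample_b, trials=20_000, seed=0):
    """Crude distinguisher: empirical total variation distance between
    the output distributions of two samplers, estimated from samples."""
    rng = random.Random(seed)
    ca = Counter(sample_a(rng) for _ in range(trials))
    cb = Counter(sample_b(rng) for _ in range(trials))
    support = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[x] - cb[x]) / trials for x in support)

# Toy "device" vs. "simulator": a fair coin and a slightly biased coin.
fair = lambda rng: rng.random() < 0.5
biased = lambda rng: rng.random() < 0.55
print(empirical_tvd(fair, fair))    # near 0: statistically indistinguishable here
print(empirical_tvd(fair, biased))  # near 0.05: the bias is detectable
```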
<p>Now that we have this definition, a natural question is whether we can construct such simulators for near-term models like noisy IQP circuits (see <a href="https://arxiv.org/abs/1610.01808">Bremner, Montanaro, and Shepherd (2017)</a>) and noisy boson sampling circuits (see <a href="https://arxiv.org/abs/1801.06166">Oszmaniec and Brod (2018)</a>).</p>
<p>Also, now that we got interactive proofs in the picture, what about zero-knowledge proofs? Can we construct a protocol such that a quantum computer/simulator can prove its “quantumness” without “leaking” any further information?</p>
<p>Also, one can ask about the power of adaptive queries in this setting. Do there exist simulators that are indistinguishable from a quantum system in the parallel query model but are easily distinguished once we allow adaptive queries?</p>
<p>A question that I have been interested in for quite some time is lower bounds on the simulation of quantum computation. Maybe this is the right model in which to ask these questions.</p>
<p>Finally, although these problems seem super theoretical, I strongly believe that they are of practical interest.</p>sankethShannon in 19772018-11-15T00:00:00+00:002018-11-15T00:00:00+00:00https://unnatural-proofs.github.io/2018/shannon-in-1977<blockquote>
<p>Well, back in '42 ... computers were just emerging, so to speak. They had things like the ENIAC down at University of Pennsylvania. ... Now they were slow, they were very cumbersome and huge and all, there were computers that would fill a couple rooms this size and they would have about the ability of one of the little calculators that you can buy now for $10. But nevertheless we could see the potential of this, the thing that happened here if things ever got cheaper and we could ever make the up-time better, sort of keep the machines working for more than ten minutes, things like that. It was really very exciting.</p><br />
<p>We had dreams, Turing and I used to talk about the possibility of simulating entirely the human brain, could we really get a computer which would be the equivalent of the human brain or even a lot better? And it seemed easier then than it does now maybe. We both thought that this should be possible in not very long. in ten or 15 years. Such was not the case, it hasn't been done in thirty years.</p><br />
<cite>Shannon, 1977; as cited in Soni, Jimmy, and Rob Goodman. A mind at play: How Claude Shannon invented the information age. Simon and Schuster, 2017. p. 106</cite>
</blockquote>
<p><a href="https://books.google.ca/books?id=gygsDwAAQBAJ&lpg=PA107&ots=YKtABbgVEM&dq=shannon%201977%20now%20they%20were%20slow%2C%20they%20were%20cumbersome%20and%20huge%20and%20all%2C%20they%20were%20computers&pg=PA107#v=onepage&q&f=false">Here</a> is the page in Google books.</p>
<p>Also, since you are here, check out <a href="https://twitter.com/dabacon/status/1063163663815663616">this twitter thread</a> by <a href="https://twitter.com/dabacon">@dabacon</a>. The cited <a href="https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing">article</a> is infuriating; for example, look at this:</p>
<blockquote>
<p>Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!</p>
</blockquote>sankethWhat is the power of a BPP verifier with a QMA prover?2018-10-28T00:00:00+00:002018-10-28T00:00:00+00:00https://unnatural-proofs.github.io/2018/what-is-the-power-of<p>I dunno.</p>
<p>I think that it is at least BQP. Dorit Aharonov and Ayal Green <a href="https://arxiv.org/abs/1710.09078">showed</a> that PostBQP is contained in IP[BPP, PostBQP] (an interactive protocol with a BPP verifier and a PostBQP prover).</p>
<p>Before someone points it out: I know that if I assume LWE (or, technically speaking, the existence of an <em>extended trapdoor claw-free family</em>), then this follows from Urmila Mahadev’s <a href="https://arxiv.org/abs/1804.01082">breakthrough result</a> from earlier this year, but I don’t want to assume anything.</p>
<p>My current approach is to show that an additive approximation to the Jones polynomial is contained in this class. But I don’t know how to make it work. (I only spent half a day on it so maybe it is obvious and I just missed it.)</p>
<p>We know how to do this for easier problems; for instance, François Le Gall, Tomoyuki Morimae, Harumichi Nishimura, and Yuki Takeuchi <a href="https://arxiv.org/abs/1805.03385">showed</a> that computing orders of solvable groups (which John Watrous <a href="https://cs.uwaterloo.ca/~watrous/Papers/QuantumAlgorithmsSolvableGroups.pdf">put</a> in BQP) is in IP[BPP, BQP].</p>
<p>While trying to write up a different result (which is what I should be doing), I stumbled upon <a href="https://lance.fortnow.com/papers/files/thesis.pdf">Lance Fortnow’s thesis</a> and it is awesome! (I should prolly get back to writing…)</p>sankethA Question About Quantum Advice2018-10-21T00:00:00+00:002018-10-21T00:00:00+00:00https://unnatural-proofs.github.io/2018/a-question-about-quantum-advice<div style="display:none;">
$$
\newcommand{\P}{\text{P}}
\newcommand{\BQP}{\text{BQP}}
\newcommand{\BPP}{\text{BPP}}
\newcommand{\PSPACE}{\text{PSPACE}}
\newcommand{\SP}{\text{#P}}
$$
$$
\newcommand{\ket}[1]{\lvert #1 \rangle}
\newcommand{\bra}[1]{\langle #1 \rvert}
\newcommand{\coloneqq}{\mathrel{:=}}
\newcommand{\dim}{\text{dim}}
$$
</div>
<p>One of the stupidest things about being an undergrad is that I never have enough time for research—and I am only halfway through!! Anyway, without further ado, here is a question I have thought a little bit about but never solved. I apologize in advance for any errors; I haven’t seriously thought about this problem in almost a year.</p>
<h2 id="question-is-qszkqpoly-contained-in-exppoly">Question: Is QSZK/qpoly contained in EXP/poly?</h2>
<p>My interest in this question came from trying to understand the limits on the power of quantum interactive proofs with advice. <a href="https://theoryofcomputing.org/articles/v001a001/v001a001.pdf">Aaronson (2005)</a> showed that BQP/qpoly is contained in PP/poly, and <a href="https://doi.org/10.1007/s00453-007-9033-6">Raz (2009)</a> showed that QIP(2)/qpoly = ALL. Due to <a href="https://cs.uwaterloo.ca/~watrous/Papers/HonestVerifierQuantumZeroKnowledge.pdf">Watrous (2002)</a>, we know that QSZK lies between BQP and QIP(2). Moreover, QSZK has a nice complete problem, so bounding the power of QSZK/qpoly is a natural and seemingly easy question. This question is also left as an open problem in <a href="https://www.scottaaronson.com/papers/dqpqpoly.pdf">Aaronson (2018)</a>, which showed that PDQP/qpoly = ALL.</p>
<p>My conjecture is that QSZK/qpoly is contained in EXP/poly. My intuition is that a modification of Aaronson’s (2003) <a href="http://www.scottaaronson.com/talks/qpoly.ppt">original proof</a> that BQP/qpoly is contained in EXP/poly works here.</p>
<h4 id="sketch-of-aaronsons-proof">Sketch of Aaronson’s proof</h4>
<p>(Adapted from <a href="http://www.scottaaronson.com/talks/qpoly.ppt">Aaronson’s slides</a>)</p>
<p>Let $A$ be a BQP/qpoly algorithm. Fix an input length $n$ and an advice state $\ket{\psi}$. We can make the error of $A$ exponentially small by taking polynomially many copies $\ket{\psi}^{\otimes p(n)}$ of the advice.</p>
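<p>As a sanity check on the amplification step, here is the classical Chernoff-style intuition in code. This is a toy sketch I added (the function name and parameters are hypothetical, and it ignores the quantum subtleties of measuring copies of the advice): running $A$ on independent copies and taking a majority vote drives the error down exponentially in the number of copies.</p>

```python
from math import comb

# Toy sketch (names and parameters hypothetical): majority vote over
# `copies` independent runs, each erring with probability per_run_error.
def majority_error(per_run_error, copies):
    """Probability that a strict majority of the independent runs err."""
    return sum(comb(copies, k)
               * per_run_error**k
               * (1 - per_run_error)**(copies - k)
               for k in range(copies // 2 + 1, copies + 1))

e1 = majority_error(1/3, 11)
e2 = majority_error(1/3, 101)
assert e1 < 1/3        # 11 copies already beat a single run
assert e2 < e1 * e1    # the error keeps shrinking exponentially
```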
<p>Define $S_0$ to be the set of advice states that cause the algorithm to output $0$ with probability at least $1-\epsilon$, and $S_1$ to be the set of advice states that cause the algorithm to output $1$ with probability at least $1-\epsilon$.</p>
<p><strong>Claim 1.</strong> There exist orthogonal subspaces $H_0, H_1$ such that $S_0$ is exponentially close to $H_0$ and $S_1$ is exponentially close to $H_1$.</p>
<p><em>Proof.</em> Given an advice state $\ket{\varphi}$, the acceptance probability of a BQP/qpoly algorithm is given by
\begin{equation}\label{acceptance}
\bra{\varphi} \rho \ket{\varphi},
\end{equation}
where $\rho$ is a <em>measurement operator</em> (a positive semidefinite matrix with eigenvalues in $[0,1]$). Define
\begin{align}
H_0:& \;\text{Subspace spanned by eigenvectors of $\rho$ with eigenvalues in $[0,1/3]$}\\
H_1:& \;\text{Subspace spanned by eigenvectors of $\rho$ with eigenvalues in $[2/3,1]$}
\end{align}
It is known (see <a href="https://math.stackexchange.com/q/762984/460480">this proof</a> on Math StackExchange) that eigenvectors of a Hermitian matrix corresponding to distinct eigenvalues are orthogonal, so these subspaces are orthogonal as claimed. <script type="math/tex">\tag*{$\square$}</script></p>
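<p>Here is a small numerical illustration of Claim 1 that I found helpful (not part of the original proof; the dimensions and variable names are hypothetical): build a Hermitian operator $\rho$ whose eigenvalues all lie in $[0,1/3] \cup [2/3,1]$, split its eigenvectors into $H_0$ and $H_1$, and check orthogonality and the acceptance probability.</p>

```python
import numpy as np

# Toy illustration of Claim 1 (dimensions hypothetical).
rng = np.random.default_rng(0)
dim = 8

# Random orthonormal eigenbasis via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
# Half the eigenvalues "rejecting" (in [0, 1/3]), half "accepting" (in [2/3, 1]).
eigvals = np.concatenate([rng.uniform(0.0, 1/3, dim // 2),
                          rng.uniform(2/3, 1.0, dim // 2)])
rho = Q @ np.diag(eigvals) @ Q.T

# H0 / H1: spans of eigenvectors with small / large eigenvalues.
vals, vecs = np.linalg.eigh(rho)
H0 = vecs[:, vals < 0.5]
H1 = vecs[:, vals > 0.5]

# Eigenvectors for distinct eigenvalues of a Hermitian matrix are
# orthogonal, so H0 and H1 span orthogonal subspaces.
assert np.allclose(H0.T @ H1, 0.0)

# An advice state lying in H1 is accepted with probability at least 2/3.
phi = H1[:, 0]
assert phi @ rho @ phi > 2/3 - 1e-9
```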
<p>For each length $n$, define the classical advice for the EXP/poly algorithm to be a polynomial number of strings $z_i$, each of length $n$ encoding a positive integer $z_i \leq 2^n$. Call this set of strings $B$.</p>
<p>We will now describe an EXP/poly algorithm for $A$, that takes advice of the form mentioned above.</p>
<p>Fix an input $x \in \{0,1\}^n$ and loop through all $y \leq x$ in lexicographic order. (We can do this in EXP.)</p>
<p>Define $T_0$ to be the entire Hilbert space and we will iteratively define $T_y$ to be the subspace of advice states $\ket{\psi}$ compatible with the inputs $1$ to $y$ as follows. First, define
\begin{align}
\Pi_0:& \;\text{Subspace obtained by projecting $T_{y-1}$ onto $H_0$}\\
\Pi_1:& \;\text{Subspace obtained by projecting $T_{y-1}$ onto $H_1$}
\end{align}
and then for each $y$, do the following:</p>
<ol>
<li>If $y \notin B$, choose the larger subspace; that is, $T_y \coloneqq \Pi_0$ if $\dim(\Pi_0) \geq \dim(\Pi_1)$, and $T_y \coloneqq \Pi_1$ otherwise.</li>
<li>If $y \in B$, choose the smaller subspace; that is, $T_y \coloneqq \Pi_1$ if $\dim(\Pi_0) \geq \dim(\Pi_1)$, and $T_y \coloneqq \Pi_0$ otherwise.</li>
</ol>
<p>Notice that each time we pick the smaller subspace, $\dim(T_y)$ is at least halved (because $\Pi_0$ and $\Pi_1$ are projections onto orthogonal subspaces) so we only need polynomially many bits of advice to get exponentially close to $\ket{\psi}^{\otimes p(n)}$. If we assume that this works (exercise for the interested reader), then by equation \eqref{acceptance}, we get the acceptance probability of $A$.</p>
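<p>The dimension-halving argument above can be sketched numerically. This is my own toy sketch (the dimension and names are hypothetical, standing in for the exponentially large advice space): each "pick the smaller subspace" step at least halves $\dim(T_y)$, so such picks can happen only logarithmically many times before the advice state is pinned down.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16  # stands in for the exponentially large advice space
T, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # T_0 = the full space

def split(T):
    """Split T (orthonormal columns) into two orthogonal subspaces,
    standing in for the images of T_{y-1} projected onto H_0 and H_1."""
    half = T.shape[1] // 2
    return T[:, :half], T[:, half:]

halvings = 0
while T.shape[1] > 1:
    small, big = split(T)  # here the two halves have (at most) equal size
    T = small              # the "y in B" case: take the smaller subspace
    halvings += 1

# log2(dim) such picks pin the space down to one dimension, so the
# advice set B needs only polynomially many strings.
assert halvings == int(np.log2(dim))
```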
<h4 id="the-qszk-case">The QSZK case</h4>
<p>The naïve approach is to use the above procedure to learn about $\ket{\psi}$ and then use the ideas of <a href="https://cs.uwaterloo.ca/~watrous/Papers/QuantumInteractiveProofs.pdf">Kitaev and Watrous (2000)</a> to simulate the quantum interactive proof system in EXP. But that does not work in general because of the aforementioned result of <a href="https://doi.org/10.1007/s00453-007-9033-6">Raz (2009)</a>, who showed that QIP(2)/qpoly = ALL.</p>
<p>So, to prove this, one needs to take advantage of the statistical zero-knowledge restriction. It is easy to see that an analogue of <a href="https://cs.uwaterloo.ca/~watrous/Papers/HonestVerifierQuantumZeroKnowledge.pdf">Watrous’ (2002)</a> <em>Quantum State Distinguishability</em> (QSD) where the circuits take as input $\ket{\psi} \otimes \ket{0}^{\otimes n}$, which I will call <em>Advice Quantum State Distinguishability</em> (AQSD), is complete for QSZK/qpoly. I imagine that an analogue of Watrous’ proof that QSD is in PSPACE works for AQSD, which would put QSZK/qpoly in EXP/poly. (We still need EXP because the above procedure to learn $\ket{\psi}$ takes exponential time.)</p>
<p><strong>Another Question.</strong> If QSZK/qpoly is strictly less powerful than QIP/qpoly, I wonder if QSZK itself is less powerful than QIP. Right now, the best upper bound we have on QSZK is PSPACE, due to <a href="https://cs.uwaterloo.ca/~watrous/Papers/HonestVerifierQuantumZeroKnowledge.pdf">Watrous (2002)</a>. It is natural to think that QSZK is contained in PP, but <a href="https://arxiv.org/abs/1609.02888">Bouland et al. (2016)</a>, resolving a question of Watrous, showed that a proof that even SZK is in PP would require non-relativising techniques. So, what about PP<sup>PP</sup>? Or the Counting Hierarchy?</p>
<p><strong>Acknowledgements.</strong> I thank Andrew Drucker and John Watrous for helpful discussions. (and sorry for giving up!)</p>
<h1><a href="https://unnatural-proofs.github.io/2018/quantum-vs-classical">Researchers did <em>not</em> prove that quantum computers are better than classical computers!!!</a></h1>
<p>2018-10-20</p>
<div style="display:none;">
$$
\newcommand{\P}{\text{P}}
\newcommand{\BQP}{\text{BQP}}
\newcommand{\BPP}{\text{BPP}}
\newcommand{\PSPACE}{\text{PSPACE}}
\newcommand{\SP}{\text{\#P}}
$$
</div>
<p><strong>Breaking News:</strong> <a href="https://interestingengineering.com/ibms-team-proved-quantum-computers-do-impossible-things">IBM’s Team Proved Quantum Computers Do ‘Impossible’ Things</a></p>
<p><strong>Update:</strong> See <a href="https://twitter.com/quantum_aram/status/1053732821553033216">this comment</a> by <a href="https://twitter.com/quantum_aram">@quantum_aram</a>. I modified the wording of the post to reflect that this is what <em>I</em> think. As mentioned in the comment, it is wholly possible that there exist people for whom computation means constant-depth computation and problem means relational problem, and for them the result of Bravyi et al. unconditionally shows that quantum computers are better than classical computers.</p>
<p>I really don’t like to rant about the press coverage that good results receive, because the results themselves are great—but the way the traditional press is interpreting this one is insane.</p>
<p>Firstly, this is one of those late reactions from the press. (BTW did you know that during the <a href="https://en.wikipedia.org/wiki/Wall_Street_Crash_of_1929">crash of 1929</a> the ticker tape was hours late!) This paper has been on the arXiv since April 2017 and was the subject of a <a href="https://collegerama.tudelft.nl/Mediasite/Showcase/qip2018/Presentation/53e90567101440b1a9eeb308b6bd48211d">plenary lecture</a> at QIP 2018.</p>
<p>Secondly, the result is that there exists a <em>relational problem</em> which can be solved by <em>bounded fan-in, bounded fan-out, constant-depth quantum circuits</em> but cannot be solved by <em>bounded fan-in, unbounded fan-out, sub-logarithmic-depth classical circuits</em>. Yes, you heard me right, there is no oracle involved. This is an amazing result!!</p>
<p>But, here are two reasons why I think this result does not constitute a <em>proof</em> that (general) quantum computers are better than (general) classical computers:</p>
<ol>
<li>It compares constant-depth quantum circuits to sub-logarithmic depth classical circuits</li>
<li>It concerns relational problems</li>
</ol>
<p>I think a true proof that quantum computers are better than classical computers would need to show that there exists a <em>decision problem</em> that can be decided by <em>bounded-error polynomial-size quantum circuits</em> but cannot be decided by <em>bounded-error polynomial-size classical circuits</em>. Succinctly, $\BQP \supsetneq \BPP$. But, back in 1997, <a href="https://doi.org/10.1137/S0097539796300921">Ethan Bernstein and Umesh Vazirani</a> showed that $\BQP \subseteq \P^\SP \subseteq \PSPACE$, so a proof of this would also show that $\P \neq \PSPACE$ (if $\P = \PSPACE$, then $\BQP \subseteq \PSPACE = \P \subseteq \BPP$, collapsing $\BQP$ to $\BPP$). Going back, this result of Sergey Bravyi, David Gosset, and Robert Koenig takes us many steps toward this holy grail by introducing some brilliant techniques but doesn’t go all the way.</p>
<p>For more on the result I highly recommend <a href="https://doi.org/10.1126/science.aau9555">Ashley Montanaro’s perspective</a> and the <a href="https://arxiv.org/abs/1704.00690">article itself</a>. For a slight improvement of this result and an application to certified randomness expansion, see Matthew Coudron, Jalex Stark, and Thomas Vidick’s <a href="https://arxiv.org/abs/1810.04233">super recent article</a>.</p>