The cryptographic leash

This post is meant as a companion to an introductory post I wrote on the blog of Caltech’s IQIM (Institute for Quantum Information and Matter), of which I am a member. The post describes a “summer cluster” on quantum computation that I co-organized with Andrew Childs, Ignacio Cirac, and Umesh Vazirani at the Simons Institute in Berkeley over the past couple of months. The IQIM post also describes one of the highlights of the workshop we organized as part of this program: the recent result by Mahadev on classical verification of quantum computation. The present post is a continuation of that one, so I would encourage you to read it first. In this post my goal is to give additional detail on Mahadev’s result. For the real thing you should of course read the paper (you may want to start by watching the author’s beautiful talk at our workshop). What follows is my attempt at an introduction, in great part written for the sake of clarifying my own understanding. I am indebted to Urmila for multiple conversations in which she indefatigably answered my questions and cleared my confusions — of course, any remaining inaccuracies in this post are entirely mine.

The result

Let’s start by recalling Mahadev’s result. She shows that from any quantum computation, specified by a polynomial-size quantum circuit {C}, it is possible to efficiently compute a classical-verifier quantum-prover protocol, i.e. a prescription for the actions of a classical probabilistic polynomial-time verifier interacting with a quantum prover, that has the following properties. For simplicity, assume that {C} produces a deterministic outcome {o(C)\in\{0,1\}} when it is executed on qubits initialized in the state {| 0 \rangle} (any input can be hard-coded in the circuit). At the end of the protocol, the verifier always makes one of three possible decisions: “reject”; “accept, 0”; “accept, 1”. The completeness property states that for any circuit {C} there is an “honest” behavior for the prover that can be implemented by a polynomial-time quantum device and that will result in the verifier making the decision “accept, {o(C)}”, where {o(C)} is the correct outcome, with probability {1}. The soundness property states that for any behavior of the quantum prover in the protocol, either the probability that the verifier returns the outcome “accept, {1-o(C)}” is negligibly small, or the quantum prover has the ability to break a post-quantum cryptographic scheme with non-negligible advantage. Specifically, the proof of the soundness property demonstrates that a prover that manages to mislead the verifier into making the wrong decision (for any circuit) can be turned into an efficient attack on the learning with errors (LWE) problem (with superpolynomial noise ratio).

The fact that the protocol is only sound against computationally bounded provers sets it apart from previous approaches, which increased the power of the verifier by placing a miniature quantum computer at her disposal, but established soundness against computationally unbounded provers. The magic of Mahadev’s result is that she manages to leverage this sole assumption, computational boundedness of the prover, to tie a very tight “leash” around its neck, by purely classical means. My use of the word “leash” is not innocent: informally, it seems that the cryptographic assumption allows Mahadev to achieve the kind of feats that were previously known, for classical verifiers, only in the model where there are two quantum provers sharing entanglement. I am not sure how far the analogy extends, and would like to explore it further; this has already started with a collaboration with Brakerski, Christiano, Mahadev and Vazirani that led to a single-prover protocol for certifiable randomness expansion. Nevertheless, the main question left open by Mahadev’s work remains whether the computational assumption is even necessary: could a similar result hold, where the honest prover can perform the required actions in quantum polynomial-time, but the protocol remains sound against arbitrarily powerful provers? (Experts will have recognized that the existence of a protocol where the honest prover is as powerful as PSPACE follows from the classical results that BQP is in PSPACE, and that PSPACE=IP. Unfortunately, we currently don’t expect even a supercharged AWS cloud to be able to implement PSPACE-complete computations.)

Encoding computation in ground states

Let’s get to business: how does this work? Fix a quantum circuit {C} that the verifier is interested in. Assume the description of {C} is known to both the verifier and the prover. As earlier, assume further that when {C} is executed on a state initialized to {| 0 \rangle}, a measurement of the output qubit of the circuit returns either the outcome {0} or the outcome {1}, deterministically. The verifier wishes to determine which case holds.

The first step that the verifier performs is a classical polynomial-time reduction from this circuit output decision problem to the following Hamiltonian energy decision problem. In the Hamiltonian energy decision problem the input is the description of a pair of classical polynomial-time randomized circuits. The first circuit, {S}, takes as input a random string {r} and returns a string {\theta\in\{X,Z\}^n}. The second circuit, {V}, takes as input a string {\theta\in\{X,Z\}^n} of the kind returned by the first circuit, as well as an {n}-bit string {a\in\{0,1\}^n}, and returns a “decision bit” {b\in \{0,1\}}. The goal of the verifier is to distinguish between the following two cases. Either there exists an {n}-qubit state {\rho} such that, when a string {\theta} is sampled according to {S} (choosing a uniformly random {r} as input) and the {n} qubits of {\rho} are measured in the bases specified by {\theta} (i.e. the {i}-th qubit is measured in the computational basis in case {\theta_i=Z}, and in the Hadamard basis in case {\theta_i=X}), the resulting {n}-bit outcome {a} satisfies {V(\theta,a)=1} with probability at least {3/4}. Or, for any state {\rho}, the same procedure results in {V(\theta,a)=1} with probability at most {2/3}.
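To make the two cases concrete, here is a minimal numpy sketch of the verification procedure. The toy circuits {S} and {V}, the choice {n=2}, and the EPR-pair witness below are illustrations of my own, not the circuits produced by the reduction.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
n = 2
rng = np.random.default_rng(0)

def S(r):
    # Toy sampling circuit: the random string r picks a basis choice
    # uniformly in {X, Z}^n.
    return ['X' if bit else 'Z' for bit in r]

def V(theta, a):
    # Toy decision circuit: accept iff the parity of the outcomes is even.
    return int(sum(a) % 2 == 0)

def measure(state, theta):
    # Measure each qubit in the basis given by theta, by rotating the
    # X-basis qubits with a Hadamard and sampling in the computational basis.
    U = np.eye(1)
    for t in theta:
        U = np.kron(U, H if t == 'X' else np.eye(2))
    probs = np.abs(U @ state) ** 2
    outcome = rng.choice(2 ** n, p=probs / probs.sum())
    return [int(b) for b in format(outcome, f'0{n}b')]

# Candidate witness: the EPR pair (|00> + |11>)/sqrt(2).
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

trials = 20000
accept = 0
for _ in range(trials):
    theta = S(rng.integers(0, 2, size=n))
    accept += V(theta, measure(psi, theta))
print(accept / trials)
```

For this toy instance the EPR pair has even parity whenever the two bases agree (XX or ZZ) and uniformly random parity otherwise, so its acceptance probability is exactly {3/4}.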

I called this problem the Hamiltonian energy decision problem because the circuits {S} and {V} implicitly specify a Hamiltonian, whose minimal energy the verifier aims to approximate. Note that the Hamiltonian is not required to be local, and furthermore it may involve an average of exponentially many terms (as many as there are random strings {r}). The problem is still in QMA, because the verifier is efficient. It is not hard to show that the problem is QMA-hard. What the formulation above buys us, compared to using the usual QMA-complete formulation of the local Hamiltonian problem, is the constant energy gap — which comes at the cost of exponentially many terms and loss of locality. (Open question: I would like to know if it is possible to achieve a constant gap with only one of these caveats: local with exponentially many terms, or nonlocal with polynomially many terms.) Of course here we only care that the problem is BQP-hard, and that the witness can be computed by a BQP prover; this is indeed the case. We also don’t really care that there is a constant gap – the soundness of the final protocol could be amplified by other means – but it is convenient that we are able to assume it.

The reduction that achieves this is a combination of Kitaev’s history state construction with some gadgetry from perturbation theory and an amplification trick. The first step reduces the verification that {C} returns outcome {1} (resp. {0}) on input {| 0 \rangle} to the verification that a local Hamiltonian {H} (computed from {C}) has ground state energy exponentially close to {0} (resp. at least some positive inverse polynomial). The second step consists of applying perturbation theory to reduce to the case where {H} is a weighted linear combination of terms of the form {X_iX_j} and {Z_iZ_j}, where {X_i}, {Z_j} are the Pauli {X} and {Z} operators on the {i}-th and {j}-th qubit respectively. The final step is an amplification trick, which produces a nonlocal Hamiltonian each of whose terms is a tensor product of single-qubit {X} and {Z} observables, and which has ground state energy either less than {1/4} or larger than {1/3} (when the Hamiltonian is scaled to be non-negative with norm at most {1}).

These steps are fairly standard. The first two are combined in a paper by Fitzsimons and Morimae to obtain a protocol for “post-hoc” verification of quantum computation: the prover prepares the ground state of an {XZ} local Hamiltonian whose energy encodes the outcome of the computation, and sends it to the verifier one qubit at a time; the verifier only needs to perform single-qubit {X} and {Z} measurements to estimate the energy. The last step, amplification, is described in a paper with Natarajan, where we use it to obtain a multi-prover interactive proof system for QMA.

For the remainder of this post, I take the reduction for granted and focus on the core of Mahadev’s result, a verification protocol for the following problem: given a Hamiltonian of the form described in the previous paragraph, decide whether the ground state energy of {H} is smaller than {1/4}, or larger than {1/3}.
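For intuition, here is a small numerical sketch of an instance of this problem. The three-qubit size, the list of terms, and the particular scaling of {H} to a non-negative operator of norm at most {1} are arbitrary choices of mine; a real instance coming from the reduction would have exponentially many terms.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def term(pattern):
    # Tensor product of single-qubit X/Z observables (I where unspecified).
    ops = {'X': X, 'Z': Z, 'I': I2}
    out = np.eye(1, dtype=complex)
    for p in pattern:
        out = np.kron(out, ops[p])
    return out

# A toy 3-qubit instance, scaled so that 0 <= H <= Id.
patterns = ['XXI', 'IZZ', 'ZIZ', 'XIX']
P = sum(term(p) for p in patterns) / len(patterns)
Hn = (np.eye(8) - P) / 2
e0 = np.linalg.eigvalsh(Hn)[0]
print(e0, e0 < 1 / 4, e0 > 1 / 3)  # which side of the promise this instance is on
```

For these particular terms the ground state energy comes out to about {0.17 < 1/4}, so this toy instance sits on the “yes” side of the promise.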

Stitching distributions into a qubit

In fact, for the sake of presentation I’ll make one further drastic simplification, which is that the verifier’s goal has been reduced to verifying the existence of a single-qubit state {\rho} claimed by the prover. Specifically, suppose that the prover claims that it has the ability to prepare a state {| \psi \rangle} such that {\langle \psi |X| \psi \rangle = E_X} and {\langle \psi |Z| \psi \rangle=E_Z}, for real parameters {E_X,E_Z}. In other words, that the Hamiltonian {H = \frac{1}{2}(X+Z)} has minimal energy at most {\frac{1}{2}(E_X+E_Z)}. How can one verify this claim? (Of course we could do it analytically… but that approach would break apart as soon as expectation values on larger sets of qubits are considered.)
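As a quick numerical sanity check (a sketch of mine, not from the paper), one can compute the ground state of {\frac{1}{2}(X+Z)} and its expectation values:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Ground state of H = (X + Z)/2; np.linalg.eigh sorts eigenvalues ascending.
evals, evecs = np.linalg.eigh((X + Z) / 2)
psi = evecs[:, 0]

E_X = np.real(psi.conj() @ X @ psi)
E_Z = np.real(psi.conj() @ Z @ psi)
print(E_X, E_Z, (E_X + E_Z) / 2)  # all equal to -1/sqrt(2) ~ -0.707
```

The minimal energy is {-1/\sqrt{2}}, attained with {E_X=E_Z=-1/\sqrt{2}}.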

We could ask the prover to measure in the {X} basis, or the {Z} basis, repeatedly on identical copies of {| \psi \rangle}, and report the outcomes. But how do we know that all these measurements were performed on the same state, and that the prover didn’t choose e.g. {| \psi \rangle=| 1 \rangle} to report the {Z}-basis outcomes, and {| \psi \rangle=| - \rangle} to report the {X}-basis outcomes? We need to find a way to prevent the prover from measuring a different state depending on the basis it is asked for — as well as to ensure the measurement is performed in the right basis.
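Here is the cheat in numbers (again my own toy illustration): by reporting {Z}-basis outcomes obtained from {| 1 \rangle} and {X}-basis outcomes obtained from {| - \rangle}, the prover would support a claimed energy of {-1}, well below the true minimum {-1/\sqrt{2}}, even though no single-qubit state achieves both expectation values simultaneously.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

one = np.array([0, 1], dtype=complex)                   # |1>
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)   # |->

E_Z = np.real(one.conj() @ Z @ one)       # -1, reported in the Z rounds
E_X = np.real(minus.conj() @ X @ minus)   # -1, reported in the X rounds
print((E_X + E_Z) / 2, -1 / np.sqrt(2))   # claimed energy vs true minimum
```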

Committing to a qubit

The key idea in Mahadev’s protocol is to use cryptographic techniques to force the prover to “commit” to the state {| \psi \rangle} in a way that, once the commitment has been performed, the prover no longer has the liberty to “decide” which measurement it performs on the committed qubit (unless it breaks the cryptographic assumption).

I described the commitment scheme in the companion post here. For convenience, let me quote from that post. Recall that the scheme is based on a pair of trapdoor permutations {f_0,f_1:\{0,1\}^n \rightarrow \{0,1\}^n} that is claw-free. Informally, this means that it is hard to produce any pair {(x_0,x_1)} such that {f_0(x_0)=f_1(x_1)}.

The commitment phase of the protocol works as follows. Starting from a state {| \psi \rangle=\alpha| 0 \rangle+\beta| 1 \rangle} of its choice, the prover is supposed to perform the following steps. First, the prover creates a uniform superposition over the common domain of {f_0} and {f_1}. Then it evaluates either function, {f_0} or {f_1}, in an additional register, controlled on the qubit of {| \psi \rangle}. Finally, the prover measures the register that contains the image of {f_0} or {f_1}. This achieves the following sequence of transformations:

\displaystyle \begin{array}{rcl} \alpha| 0 \rangle+\beta| 1 \rangle &\mapsto& (\alpha| 0 \rangle + \beta| 1 \rangle) \otimes \Big(2^{-n/2} \sum_{x\in\{0,1\}^n} | x \rangle\Big) \\ &\mapsto & 2^{-n/2} \sum_x \big(\alpha | 0 \rangle| x \rangle| f_0(x) \rangle + \beta | 1 \rangle| x \rangle| f_1(x) \rangle\big)\\ &\mapsto & \big(\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle\big)| y \rangle\;, \end{array}

where {y\in\{0,1\}^n} is the measured image. The string {y} is the prover’s commitment string, which it reports to the verifier.
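These three steps can be simulated directly for a toy pair of functions. The pair used below, {f_b(x)=\pi(x\oplus b\cdot s)} for a random permutation {\pi} and a secret shift {s}, has claws exactly at the pairs {(x, x\oplus s)} and is of course not claw-free against any real adversary; all names and parameters here are my own placeholders, chosen only to make the states concrete.

```python
import numpy as np

n = 3
N = 2 ** n
rng = np.random.default_rng(1)
pi = rng.permutation(N)
s = int(rng.integers(1, N))  # nonzero secret shift, playing the trapdoor's role

def f(b, x):
    # Toy claw-free pair: f_0(x) = pi(x), f_1(x) = pi(x XOR s).
    return int(pi[x ^ (b * s)])

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)  # the committed qubit alpha|0> + beta|1>

# Steps 1 and 2: amplitudes of sum_x (alpha|0,x,f0(x)> + beta|1,x,f1(x)>),
# indexed by (bit, preimage, image).
state = np.zeros((2, N, N), dtype=complex)
for x in range(N):
    state[0, x, f(0, x)] += alpha / np.sqrt(N)
    state[1, x, f(1, x)] += beta / np.sqrt(N)

# Step 3: measure the image register; the outcome y is the commitment string.
probs_y = np.sum(np.abs(state) ** 2, axis=(0, 1))
y = rng.choice(N, p=probs_y / probs_y.sum())
post = state[:, :, y]
post /= np.linalg.norm(post)

# The post-measurement state is alpha|0,x0> + beta|1,x1> with f0(x0)=f1(x1)=y.
x0 = int(np.argmax(np.abs(post[0])))
x1 = int(np.argmax(np.abs(post[1])))
print(y, x0, x1, x0 ^ x1 == s)  # the two preimages form a claw
```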

The intuition for this commitment procedure is that it introduces asymmetry between prover and verifier: the prover knows {y} (it had to report it to the verifier) but not {x_0} and {x_1} (this is the claw-free assumption on the pair {(f_0,f_1)}), which seems to prevent it from recovering the original state {| \psi \rangle}, since it does not have the ability to “uncompute” {x_0} and {x_1}. In contrast, the verifier can use the trapdoor information to recover both preimages.

In a little more detail, how is this used? Note that at this point, from the verifier’s point of view the only information that has been received is the prover’s commitment string {y}. In general there are multiple ways a prover could have come up with a value {y}: for example, by selecting an {x\in\{0,1\}^n} and returning {y=f_0(x)}. Or, by directly selecting an arbitrary string {y\in\{0,1\}^n}. At this stage of the protocol, any of these strategies looks fine.

Let’s modify the commitment phase by adding a little test. With some probability, the verifier, upon receiving the commitment string {y}, decides to challenge the prover by asking it to report a valid preimage of {y}, either under {f_0} or under {f_1} (at the prover’s choice). Since both {f_0} and {f_1} are presumed to be hard to invert, the only way the prover can answer this challenge is if it already “knows” a valid preimage — or at a minimum, if it has a superposition over preimages that it can measure when tested. Thus the fact that the prover is required to succeed in the commitment test, when it is performed, guarantees that after the prover has returned the commitment string {y} we may without loss of generality assume that the prover’s state can be written as

\displaystyle | \tilde{\psi} \rangle=\tilde{\alpha}| 0,x_0 \rangle| \phi_0 \rangle+\tilde{\beta}| 1,x_1 \rangle| \phi_1 \rangle\;, \ \ \ \ \ (1)

where we have purposefully spelled out the two possible preimages that the prover could return if challenged. Note that aside from the fact that it gives the ability to obtain {x_0} or {x_1}, this format does not make any assumption on {| \tilde{\psi} \rangle}; in particular the register containing the preimage can be entangled with other private registers of the prover.

We have defined a four-message commitment protocol: the verifier sends the security parameters to the prover; the prover sends a commitment string {y} back; then an optional one-round preimage test is executed. Now is the time to give a first definition for the single qubit to which the prover has “committed” by returning {y}. This committed qubit is the state {\sigma} that we ultimately aim to show has the claimed expectation values under {X} and {Z} measurements.

Let {\sigma} be the qubit obtained from {| \tilde{\psi} \rangle} by erasing {x_0} and {x_1} (which is possible in principle given {y} and the trapdoor) and returning the first qubit of the resulting state. (Later we will slightly modify this definition, but it is a good placeholder to get us started.) Note that the verifier does not know the state {\sigma}; in fact, strictly speaking {\sigma} is not present in the prover’s workspace either. The point is that {\sigma} exists, and this is all we need. Our remaining task is to find a way for the verifier to extract from the prover measurement outcomes that are distributed as would be a measurement of {\sigma} in the {X} or {Z} basis, without the prover having the ability to deviate. If the verifier can do this, for a basis of her choice, she can choose a basis at random, estimate the expectation value, and check the prover’s claim (the values {E_X} or {E_Z}).

As already mentioned, the key point that we’ll use in order to achieve this is that at the end of the commitment phase, the verifier has obtained some leverage over the prover: given {y} and the trapdoor information, the verifier can recover both {x_0} and {x_1}. In contrast, the prover, while it holds the state {| \tilde{\psi} \rangle}, is not able to freely operate on it. Without the trapdoor, it can no longer uncompute {x_0} and {x_1} to recover the initial state {| \psi \rangle}, and so it cannot, in any obvious way, apply the unitary on {| \tilde{\psi} \rangle} that would amount to performing a single-qubit rotation on {| \psi \rangle}.

Measuring in the computational basis

We need to explain how the verifier extracts measurement outcomes in the {X} (Hadamard) or {Z} (computational) basis from the prover. For each basis there is a small sub-protocol. At the end of the sub-protocol the verifier records a single bit, which it takes to be the outcome of a measurement of the committed qubit, {\sigma}, in the corresponding basis. We call this bit the verifier’s “decoded bit” for that basis.

The protocol for extracting the outcome of a measurement in the computational basis is straightforward. Recall that by definition the prover’s state after the commitment phase has ended is the state {| \tilde{\psi} \rangle} in (1). Moreover, recall that we made a choice of basis for the prover’s space such that when the prover is challenged for a preimage of {y}, it measures the first {(n+1)} qubits of {| \tilde{\psi} \rangle} in the computational basis and returns the outcome. Now observe that the first bit of this outcome is {0} with probability {|\tilde{\alpha}|^2}, and {1} with probability {|\tilde{\beta}|^2}. This is exactly the distribution of the outcome of a measurement of the committed qubit {\sigma} in the computational basis, by definition! Thus to extract a measurement outcome in the computational basis the verifier simply executes the preimage test and records the first bit returned by the prover as the decoded bit.
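In code, this round is almost a triviality (continuing the toy example above; the claw {(x_0,x_1)} and the amplitudes are placeholders of mine):

```python
import numpy as np

alpha2, beta2 = 0.3, 0.7   # |alpha|^2 and |beta|^2 of the committed qubit
x0, x1 = 5, 6              # a claw for the commitment y, as obtained above
rng = np.random.default_rng(2)

# The prover measures all n+1 qubits in the computational basis: it sees
# (0, x0) with probability |alpha|^2 and (1, x1) with probability |beta|^2.
decoded = []
for _ in range(20000):
    b, x = (0, x0) if rng.random() < alpha2 else (1, x1)
    # The verifier checks f_b(x) = y (using the trapdoor) and records b.
    decoded.append(b)
print(np.mean(decoded))  # ~ beta2: the Z-basis statistics of the committed qubit
```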

Measuring in the Hadamard basis

Extracting a measurement outcome in the Hadamard basis is more delicate. Recall the form of the prover’s state in (1). Given our definition of the committed qubit {\sigma}, the natural way to obtain a measurement of {\sigma} in the Hadamard basis, starting from {| \tilde{\psi} \rangle}, is to first erase the register containing {x_0} and {x_1}, and then perform a Hadamard measurement of the first qubit. But even an honest prover cannot accomplish this, as it does not have the trapdoor information that would allow it to erase {x_0} and {x_1} (of course we purposefully set things up this way). What the prover can do, however, is measure all {n} qubits of the register containing {x_0} and {x_1} in the Hadamard basis. The result of this measurement is an {n}-bit string {d\in\{0,1\}^n}. The corresponding post-measurement state is, up to global phase,

\displaystyle \tilde{\alpha}| 0 \rangle| \phi_0 \rangle+(-1)^{d\cdot(x_0+ x_1)}\tilde{\beta}| 1 \rangle| \phi_1 \rangle\;,

where the addition {x_0+x_1} is taken bitwise, modulo {2}. Observe that this state is almost exactly the committed qubit {\sigma} — except for a phase flip, {Z^{d\cdot(x_0\oplus x_1)}}, applied on the first qubit. If the prover measures the remaining qubit in the Hadamard basis, the phase flip leads to a bit flip on the outcome {m} of the measurement. So the verifier can ask the prover to report both {d} and {m}; if she records the decoded bit {b=m\oplus d\cdot (x_0\oplus x_1)} then this bit matches the outcome of a measurement of {\sigma} in the Hadamard basis.
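The whole sub-protocol fits in a few lines of numpy (same toy setup and placeholder values as before). The decoded bit reproduces the Hadamard-basis statistics of {\alpha| 0 \rangle+\beta| 1 \rangle} exactly:

```python
import numpy as np

n = 3
N = 2 ** n
x0, x1 = 5, 6                        # a claw; x0 XOR x1 is the trapdoor info
alpha = np.sqrt(0.3)
beta = np.sqrt(0.7) * np.exp(0.4j)   # include a relative phase
rng = np.random.default_rng(3)

# Post-commitment state alpha|0,x0> + beta|1,x1> on n+1 qubits.
psi = np.zeros(2 * N, dtype=complex)
psi[(0 << n) | x0] = alpha
psi[(1 << n) | x1] = beta

def hadamard_all(v):
    # Hadamard on every one of the n+1 qubits (a Walsh-Hadamard transform).
    Hm = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    w = v.reshape([2] * (n + 1))
    for axis in range(n + 1):
        w = np.moveaxis(np.tensordot(Hm, w, axes=([1], [axis])), 0, axis)
    return w.reshape(-1)

probs = np.abs(hadamard_all(psi)) ** 2

decoded = []
for _ in range(20000):
    out = rng.choice(2 * N, p=probs / probs.sum())
    m, d = out >> n, out & (N - 1)
    # Verifier's decoding: b = m XOR d.(x0 XOR x1).
    b = m ^ (bin(d & (x0 ^ x1)).count('1') % 2)
    decoded.append(b)

target_p0 = abs((alpha + beta) / np.sqrt(2)) ** 2  # Pr(+) for alpha|0> + beta|1>
print(1 - np.mean(decoded), target_p0)             # empirical vs exact
```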

This completes the description of the measurement sub-protocol for the Hadamard basis. It is clear that an honest prover, performing the actions described above, will lead the verifier to record the correct outcome. Now of course in general the prover may act in an arbitrary way! It could report any values for {(m,d)}: the verifier takes the reported outcomes on faith. How could this possibly work out? There is magic in Mahadev’s proof.

Malicious provers

Let’s assume, as we already have, that the prover is arbitrary but that, if tested in the commitment phase, it succeeds with certainty. According to the discussion around (1) this implies that at the end of the commitment phase the prover holds a state of the form {| \tilde{\psi} \rangle}. Moreover, by definition, when asked for a computational basis measurement the prover measures the first {(n+1)} qubits of {| \tilde{\psi} \rangle} in the computational basis and reports the outcome; the verifier records the first bit as its decoded bit.

As we already argued, our earlier definition of the committed qubit {\sigma} ensures that the verifier’s decoded bit for the case of a computational basis measurement matches the outcome of a measurement of {\sigma} in the computational basis. Unfortunately, for the case of a Hadamard basis measurement we are in trouble. Since the prover may in principle report an arbitrary pair {(m,d)}, there is no hope of arguing that this matches (in distribution) the outcome of a measurement of {\sigma} in the Hadamard basis. To find a state that is consistent with the verifier’s decoded bit in both bases we need to change our definition of the committed qubit to take into account the prover’s action in the case it is asked for a Hadamard measurement.

Recall that the main leverage that the verifier has over the prover is that, while the prover does have the possibility of reporting arbitrary outcomes {(m,d)}, it does not have control over the verifier’s decoding, i.e. the operation {b\leftarrow m\oplus (d\cdot(x_0+ x_1))}. Let’s work a little and spell out the distribution of the verifier’s Hadamard basis decoded bit, {b}. Towards this, it is convenient to think of the prover in the following way: the prover first applies an arbitrary unitary “attack” {U} on {| \tilde{\psi} \rangle}, then “honestly” measures the first {(n+1)} qubits in the Hadamard basis, and finally reports the {(n+1)}-bit outcome {(m,d)}. An arbitrary {(n+1)}-bit-outcome measurement can always be expressed in this way. With this setup we can write the probability that the decoded bit is some value {b\in\{0,1\}} as

\displaystyle \Pr(b) \,=\, \sum_{d\in\{0,1\}^n} \langle \tilde{\psi} | U^\dagger H \big((X^{d\cdot(x_0+x_1)} | b \rangle\!\langle b |X^{d\cdot(x_0+x_1)}) \otimes | d \rangle\!\langle d |\big)HU| \tilde{\psi} \rangle\;. \ \ \ \ \ (2)
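Equation (2) is just the operator form of a direct enumeration: sum, over the possible reports {(m,d)}, the probability that the attacked-and-measured state yields that report and that the decoding maps it to {b}. Here is the enumeration on a toy instance with an arbitrary random attack {U} (all concrete choices are mine):

```python
import numpy as np

n = 2
N = 2 ** n
x0, x1 = 1, 2
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
rng = np.random.default_rng(6)

# Post-commitment state alpha|0,x0> + beta|1,x1>.
psi = np.zeros(2 * N, dtype=complex)
psi[(0 << n) | x0] = alpha
psi[(1 << n) | x1] = beta

# Arbitrary unitary "attack" on all n+1 qubits.
A = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
U, _ = np.linalg.qr(A)

# Hadamard on all n+1 qubits.
Hm = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hall = np.eye(1)
for _ in range(n + 1):
    Hall = np.kron(Hall, Hm)

probs = np.abs(Hall @ (U @ psi)) ** 2

pr = [0.0, 0.0]
for out in range(2 * N):
    m, d = out >> n, out & (N - 1)
    b = m ^ (bin(d & (x0 ^ x1)).count('1') % 2)  # the verifier's decoding
    pr[b] += probs[out]
print(pr)  # the distribution Pr(b) of equation (2)
```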

Before we can proceed we should say a little more about the computational assumptions that are placed on the pair of functions {(f_0,f_1)}. Earlier we mentioned that this pair of functions should be claw-free, but in fact a little more is needed — though all requirements can ultimately be met by a construction based on the Learning With Errors problem. Rather than state the exact assumptions, I will mention two important consequences. The first is that the pair of functions is “collapsing”, a notion introduced by Unruh in his investigations of collision-resistance against quantum attacks. In our context this property implies that it is computationally hard to distinguish between an arbitrary superposition over preimages, as in {| \tilde{\psi} \rangle}, and the “collapsed” state obtained by measuring the control register (the first qubit). The second is that for any {n}-bit string {d} that can be obtained as the outcome of an arbitrary, but computationally efficient, measurement on the collapsed state, the bit {d\cdot(x_0+x_1)} is computationally indistinguishable from uniform. (This is analogous to a “hardcore bit” property, since {x_0+x_1} encodes information about both preimages simultaneously, and such information should not be accessible if the pair {(f_0,f_1)} is claw-free.)

These two properties taken together justify the following two modifications to the expression for {\Pr(b)} in (2), which lead to a computationally indistinguishable distribution. First, we can “collapse” the first {(n+1)} qubits of {| \tilde{\psi} \rangle} by measuring them in the computational basis. Second, we can replace the bit {d\cdot(x_0+x_1)} by a uniformly random bit {r}. Using that {\sum_d | d \rangle\!\langle d |=\mathrm{Id}}, the expression simplifies to

\displaystyle {\Pr}'(b) \,=\, \frac{1}{4}\sum_{r,r'\in\{0,1\}} \langle \tilde{\psi} | (Z^{r'} \otimes \mathrm{Id})\, U^\dagger \,(Z^{r} \otimes \mathrm{Id})\, H \big( | b \rangle\!\langle b | \otimes \mathrm{Id}\big) H \,(Z^{r} \otimes \mathrm{Id})\, U \,(Z^{r'} \otimes \mathrm{Id}) | \tilde{\psi} \rangle\;, \ \ \ \ \ (3)

where the outermost {Z^{r'}} were inserted thanks to the first assumption (the collapsing property), and the innermost {Z^r} come from commuting {X^r} past the Hadamard. I should clarify that obtaining (3) formally requires more care. In particular, I made use of computational indistinguishability in an expression that involves a quantity that is hard to compute (the parity {x_0+x_1}). This is illegal, and to work around the difficulty Mahadev has to introduce some additional ingenious manipulations that I am skipping here.
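The two mechanical facts used in going from (2) to (3) are easy to check numerically (my own sanity check, not part of the proof): commuting {X^r} past a Hadamard turns it into {Z^r}, and averaging {Z^r\rho Z^r} over a uniform bit {r} dephases a qubit, which is exactly the effect of a computational-basis measurement.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Fact 1: HX = ZH, so X^r commuted past the Hadamard becomes Z^r.
print(np.allclose(H @ X, Z @ H))

# Fact 2: the average of Z^r rho Z^r over r in {0,1} equals the dephased
# (computational-basis-measured) state.
v = rng.normal(size=2) + 1j * rng.normal(size=2)
rho = np.outer(v, v.conj())
rho /= np.trace(rho)
twirled = (rho + Z @ rho @ Z) / 2
print(np.allclose(twirled, np.diag(np.diag(rho))))
```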

Note the key effect that the random {Z^r} and {Z^{r'}} operators have in (3): they effectively trivialize the action of the prover’s “attack” {U} on the first qubit with respect to the computational basis. The upshot is that the verifier’s decoded bit {b} associated with the Hadamard basis is computationally indistinguishable from the outcome of a Hadamard measurement on the state

\displaystyle \sigma' = \mathrm{Tr}_E\big((I \otimes U_I) | \tilde{\psi} \rangle\langle \tilde{\psi} | (I\otimes U_I)^\dagger+ (X\otimes U_X) | \tilde{\psi} \rangle\langle \tilde{\psi} | (X \otimes U_X)^\dagger \big) \;,

where we expanded the first qubit of the unitary {U} as {U = I\otimes U_I + X \otimes U_X + Z\otimes U_Z + XZ \otimes U_{XZ}}, and {E} represents all registers except the first qubit. Note that the second term involves an {X} on the first qubit, which has no effect on a measurement in the Hadamard basis. Thus, {\sigma'} can be updated to a state {\sigma''} where we have “erased” the {X} operator on the first qubit. Moreover, by definition, a measurement of the first (and only) qubit of {\sigma''} in the computational basis yields an outcome distributed exactly as it would on {\sigma}. In particular, it is consistent with the verifier’s decoded bit in the computational basis measurement protocol.
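The decomposition of {U} used here is simply its expansion in the operator basis {\{I,X,Z,XZ\}} on the first qubit, with blocks {U_P = \frac{1}{2}\mathrm{Tr}_1[(P^\dagger\otimes \mathrm{Id})U]}. A quick numerical check of this identity (the dimensions and names are mine):

```python
import numpy as np

rng = np.random.default_rng(5)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

d = 4  # dimension of everything but the first qubit
A = rng.normal(size=(2 * d, 2 * d)) + 1j * rng.normal(size=(2 * d, 2 * d))
U, _ = np.linalg.qr(A)  # a random unitary playing the role of the attack

def block(P):
    # U_P = (1/2) Tr_1[(P^dagger x Id) U]: the component of U acting as the
    # Pauli P on the first qubit.
    M = (np.kron(P.conj().T, np.eye(d)) @ U).reshape(2, d, 2, d)
    return np.trace(M, axis1=0, axis2=2) / 2

rebuilt = sum(np.kron(P, block(P)) for P in [I2, X, Z, X @ Z])
print(np.allclose(rebuilt, U))  # U = I x U_I + X x U_X + Z x U_Z + XZ x U_XZ
```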

We are done! The state {\sigma''} is a well-defined single-qubit state such that the distribution of decoded bits recorded by the verifier for either basis is computationally indistinguishable from the distribution of outcomes of a measurement of {\sigma''} in the same basis. Note that {\sigma''} may not “exist” at any point of the protocol. But this is beside the point: as long as {\sigma''} is a well-defined quantum state, and the verifier correctly records decoded measurement outcomes, this eventually leads to a valid certificate for the prover’s claim that the {XZ} Hamiltonian that encodes the computation has low enough energy.

Phew. Catch your breath, read this post again (and please do ask for clarifications as needed), and then move on to the beautiful paper, whose introduction already has more depth than I could provide here, and whose body fills in all the remaining gaps. (This includes how to deal with states that are more than a single qubit, an issue that my presentation of the single-qubit case may make seem thornier than it is — in fact, it is possible to express the argument given here in a way that makes it relatively straightforward to extend to multiple qubits, though there are some technical issues, explained in Mahadev’s paper.) And then – use the idea to prove something!
