These
machines are of little use to consumers. They are delicate and easily
disturbed, require cooling to just above absolute zero, and
are ruinously expensive.
But the implications are enormous for heavy
number-crunching. In theory, banks could use quantum
computers to calculate risk faster than their competitors, giving them
an edge in the markets. Tech companies could use them to figure out if their code is bug-free.
Spies could use them to crack cryptographic codes, which requires
crunching through massive calculations. A fully-fledged version of such a
machine could theoretically tear through calculations that the most
powerful mainframes would take eons to complete.
Quantum computers: a grossly oversimplified introduction
The
foundation of all computing is a logic gate—a simple yes/no switch. In
modern computers, one position of the switch represents 0; the other
represents 1. Your laptop computer contains billions of such gates, each
of which switches between 1 and 0 billions of times a second.
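
To make this concrete, here is a minimal sketch in Python (our illustration; real gates are transistor circuits, not functions): a classical gate simply maps definite bits to a definite bit, and a handful of such functions suffices to compute anything.

    # A classical logic gate is a yes/no switch: inputs and output are 0 or 1.
    # NAND alone is "universal" -- the other gates can be built from it.

    def nand(a: int, b: int) -> int:
        return 0 if (a == 1 and b == 1) else 1

    def not_(a: int) -> int:
        return nand(a, a)

    def and_(a: int, b: int) -> int:
        return not_(nand(a, b))

    def or_(a: int, b: int) -> int:
        return nand(not_(a), not_(b))

    # At any instant each gate's output is definitely 0 or definitely 1.
    print(and_(1, 1), or_(0, 0))  # -> 1 0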
Nonetheless,
your computer has a handicap, imposed by classical physics. At any
given moment it can only be in one state—one particular combination of
1s and 0s across those billions of gates. It has to step through a
sequence of such states to complete a calculation. But what if, instead,
a vast number of copies of your computer could somehow exist in
parallel, each representing one of these possible states,
and collectively perform the entire calculation simultaneously?
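
How quickly that sequence grows is just counting. A small sketch (ours) of the arithmetic:

    from itertools import product

    # n classical bits have 2**n possible joint states; a classical
    # computer occupies exactly one of them at any given moment.
    n = 3
    states = list(product([0, 1], repeat=n))
    print(len(states))   # -> 8, i.e. 2**3
    print(states[:4])    # -> [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]

    # A brute-force calculation steps through such states one at a time;
    # at 50 bits there are already about 10**15 of them.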
Essentially,
this is what quantum theory says can happen. The key is to shrink the
gates small enough that quantum physics, which describes the behavior of
extremely small objects, takes over from classical physics. (Some
quantum gates consist of a single atom, held in place by electric and
magnetic fields.) Such a tiny gate, called a “qubit,” can exist as a
kind of combination—called a “superposition”—of 1 and 0. A computer made
of qubits would, in some sense, exist in all the possible
combinations of 1s and 0s at once.
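
For the mathematically curious, the standard way to write this down is as a vector of "amplitudes," one per classical state. A rough sketch (ours; the numbers are illustrative):

    import numpy as np

    # A qubit is a unit vector of two complex amplitudes, over 0 and 1.
    zero = np.array([1, 0], dtype=complex)
    one = np.array([0, 1], dtype=complex)

    # An equal superposition: a combination of 0 and 1 at the same time.
    qubit = (zero + one) / np.sqrt(2)

    # Qubits combine by tensor product: 3 qubits carry one amplitude for
    # each of the 2**3 = 8 classical bit-strings simultaneously.
    register = qubit
    for _ in range(2):
        register = np.kron(register, qubit)
    print(register.shape)  # -> (8,)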
Physicists
differ in how they interpret this. Some literally believe that myriad
parallel universes exist, each containing a separate copy of
the computer; others have a more minimalist explanation. But the outcome
is the same: In principle, it’s possible to reap the fruits of all that
parallel processing to arrive at a result faster—so much faster that
even the hardest calculation could become pretty much instantaneous.
(For more details in something resembling English, see this interview with MIT’s Scott Aaronson—a long-time skeptic of D-Wave—in the Washington Post, this excellent blog post by Michael Nielsen, and, thus prepared, this Reddit thread.)
Yes it is. No it isn’t
But
why should it be so hard to figure out whether a computer is quantum or
not? Surely the simplest thing to do is take it apart and see how it
works. Or, failing that, run it alongside a classical computer and see
which performs better.
Alas, science, like all reality, is not always that straightforward.
The
controversy surrounding D-Wave is rooted in the fact that the
company doesn’t claim to have built a full-fledged “universal quantum
computer” that can tackle any problem. In particular, it doesn’t claim
to be able to crack secret codes; if it did, the argument would be over
quickly. The technique it uses, called “quantum annealing”—which we
won’t even try to explain—is only good for solving specific sorts of mathematical puzzles called
optimization problems. As their name implies, these involve finding the
quickest or lowest-cost way to do something. (That could make the
technique useful to banks and mining companies, but not spy agencies.)
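
To give a flavor of what such a puzzle looks like, here is a toy optimization problem (our example; a real annealer minimizes an "energy" defined by couplings between qubits): find the bit-string with the lowest cost.

    from itertools import product

    # A toy cost function over 3 bits: penalize neighbors that disagree,
    # reward setting the middle bit.
    def cost(bits):
        x0, x1, x2 = bits
        return (x0 != x1) + (x1 != x2) - x1

    # Brute force works at this size: try all 8 bit-strings.
    best = min(product([0, 1], repeat=3), key=cost)
    print(best, cost(best))  # -> (1, 1, 1) -1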
The trouble is, it’s possible to build a device that produces a similar result to
quantum annealing without any quantum behavior—i.e., without invoking
superpositions and parallel universes. So the question is: Is the D-Wave
machine doing quantum annealing, or just something that looks like it?
And this is where things get tricky.
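
That classical look-alike is easy to sketch. The loop below is bare-bones simulated annealing (ours; it reuses the toy cost function above): random bit-flips plus a falling "temperature" settle on the same low-cost answers, with no superpositions anywhere.

    import math
    import random

    def anneal(cost, n_bits, steps=5000, t_start=2.0, t_end=0.01):
        """Classical simulated annealing: thermal noise, nothing quantum."""
        bits = [random.randint(0, 1) for _ in range(n_bits)]
        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)  # cooling
            candidate = bits.copy()
            candidate[random.randrange(n_bits)] ^= 1  # flip one bit
            delta = cost(candidate) - cost(bits)
            # Always accept improvements; accept bad moves with a
            # probability that shrinks as the temperature falls.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                bits = candidate
        return bits

    print(anneal(cost, 3))  # typically -> [1, 1, 1], the same answer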
You can’t look inside
That
D-Wave’s computer contains qubits is not in dispute. The current
incarnation, the D-Wave 2, has 512 of the little things, microscopic
loops of superconducting metal chilled to a fraction of a degree above
absolute zero, which represent 0 and 1 by means of tiny currents flowing
either clockwise or counterclockwise. The question is, what are they
doing in there?
What
they should be doing, in theory, is hovering in superpositions while a
calculation is going on. When it’s time to spit out an answer, each
qubit comes to rest on either 0 or 1, the machine reads them all, and it
provides a result.
But there’s no way to tell what actually happened. It’s in the nature of quantum physics—as outlined in the famous paradox of Schrödinger’s cat—that
you cannot look at something that is exhibiting quantum behavior
without stopping its quantum behavior. If you try to observe a qubit
while it’s in a superposition of 0 and 1, you will see it as either 0 or
1, and you won’t know what it was doing before you started looking.
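
In the amplitude picture sketched earlier, looking is a random draw. A simplified sketch (ours) of what measurement does:

    import numpy as np

    rng = np.random.default_rng()

    def measure(state):
        """Collapse a qubit: return 0 or 1 with probability |amplitude|**2."""
        probs = np.abs(state) ** 2
        return rng.choice([0, 1], p=probs)

    # An equal superposition of 0 and 1...
    superposed = np.array([1, 1], dtype=complex) / np.sqrt(2)
    # ...looks, once observed, like a fair coin; the superposition is gone.
    print([measure(superposed) for _ in range(10)])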
You can’t (necessarily) do a speed test
If
quantum computers are so much faster, couldn’t you just run the same
calculation on a D-Wave and a regular machine and time them?
Unfortunately not. As MIT’s Scott Aaronson explained to the Washington Post,
whether or not a quantum computer should actually be faster depends a
lot on the kind of problem you’re solving. Breaking cryptographic codes
is so fantastically labor-intensive that it would be a pretty
unequivocal test. But “with the optimization problem, we don’t really know if we’d get a speedup,” he said.
Even if there were a speedup, it would be hard to measure. There was a good deal of hype about
an experiment last year by a scientist whom D-Wave had hired as a
consultant, showing that the D-Wave 2 did a certain calculation 3,600
times faster than a conventional machine. But as it turned out, the
conventional computer’s poor showing was likelier due to inefficient
software. Other researchers said they’d been able to make an ordinary laptop, using different software, run as fast as the D-Wave.
You have to compare the answers, and even then…
Probably
the most definitive way to tell whether or not the D-Wave is doing what
its creators claim is to run it side-by-side with classical computers
trying to solve the same problem and compare not their speeds, but the
answers they give. There are several different ways to solve
an optimization problem, and they give subtly different answers. If the
D-Wave’s answers are like those from classical computers, it’s behaving
as a classical computer. If not, it’s doing something quantum. In fact,
you can even make a classical computer run a kind of simulation of
quantum annealing, and compare that with the D-Wave’s answers too.
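
Schematically (a sketch of the method, not of the actual studies), the comparison amounts to histogramming each machine’s answers over many runs and seeing which histograms line up:

    import random
    from collections import Counter

    def answer_distribution(solver, problem, runs=1000):
        """Tally the answers a (possibly randomized) solver returns."""
        return Counter(tuple(solver(problem)) for _ in range(runs))

    # Stand-in solver for illustration; the real test would tally the
    # D-Wave, a classical annealer, and a classical simulation of quantum
    # annealing on the same problem, then compare the three tallies.
    def noisy_solver(n_bits):
        return [random.randint(0, 1) for _ in range(n_bits)]

    print(answer_distribution(noisy_solver, 2, runs=100))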
That’s
the theory, at least. Unfortunately, even this approach hasn’t been
conclusive so far. In January a group of researchers at Berkeley and IBM
published a paper finding that
the results from D-Wave’s machine could just as easily have come from a
computer that uses classical physics. Yet in March, researchers at the
University of Southern California and University College London
published a paper supporting D-Wave’s contention that its computer does indeed involve some degree of quantumness.
So how can we ever tell what’s a quantum computer?
This
scientific back-and-forth may yet reach a definitive conclusion. If
not, we might just have to wait for quantum computing to evolve
further. Conventional computers today are wonderful, general-purpose
machines. Quantum computers are very much in their infancy, and just
like the vacuum-tube machines of old, they are good only at specific
things.
Michele Mosca,
a co-founder of the Institute for Quantum Computing at the University
of Waterloo in Canada, points out that the very meaning of the words
“quantum computer” is fuzzy. He proposes five (actually five-and-a-half)
definitions, which we’ve listed at the bottom of this article for the technically minded.
The
D-Wave, Mosca says, definitely meets definition 2, and probably its
stronger variant, meaning that it is behaving somehow differently from
classical computers, but not necessarily faster. If further research can
show that it meets definition 3, that means it can be faster than
classical machines at certain tasks, provided the problems aren’t too
large. If it meets definition 4 (the best, Mosca says, that it aspires
to) then it has the potential to continue to be faster for any larger,
more complex input, though again, only for certain tasks.
That
would still be hugely important. It would mean that the D-Wave is
taking advantage of the weird properties of quantum physics to do things
that, until not so long ago, were thought literally impossible, opening
up the ability to solve problems that would otherwise remain out of
reach.
But
the full power of a quantum computer will be realized only when a
machine can satisfy definition 5—a general-purpose device that far
outstrips conventional computers at any task they can perform. That
machine has not yet been built.
Footnote: Michele Mosca’s five-and-a-half definitions of “quantum computer”
Definition 1:
Since
the world is quantum, any computer is a quantum computer. Conventional
computers are just weak quantum computers, since they don’t exploit
intrinsically quantum effects, such as superposition and entanglement.
Definition 2:
A
quantum computer is a computer that uses intrinsically quantum effects
that cannot naturally be modeled by classical physics. Classical
computers may be able to mathematically simulate instances of such
computers, but they are not implementing the same kinds of quantum
operations.
Definition 2’:
Definition 2, where there are strong tests or proofs of the quantum effects at play (e.g. by doing Bell tests).
Definition 3:
A
quantum computer is a computer that uses intrinsically quantum effects
to gain some advantage over the best known classical algorithms for some
problem.
Definition 4:
A
quantum computer is a computer that uses intrinsically quantum effects
to gain an asymptotic speed-up over the best known classical algorithms
for some problem. (The difference with definition 3 is that the
advantage is a fundamental algorithmic one that grows for larger
instances of the problem; versus advantages more closely tied to
hardware or restricted to instances of some bounded size.)
Definition 5:

A quantum computer is a general-purpose machine that uses intrinsically
quantum effects to far outstrip conventional computers at any task they
can perform.