Solving Hard Problems With Light
Scott Aaronson (Assoc. Prof., EECS)
Joint work with Alex Arkhipov
In 1994, something big happened in the foundations of computer science, whose meaning
is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: For reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for one of the central computational problems
of modern science: QUANTUM SIMULATION
Shor’s Theorem:
QUANTUM SIMULATION is not
solvable efficiently (in polynomial time),
unless FACTORING is also
Use of DoE supercomputers by area (from a talk by Alán Aspuru-Guzik)
Today, a different kind of hardness result for simulating quantum mechanics

Advantages:
• Based on more "generic" complexity assumptions than the hardness of FACTORING
• Gives evidence that QCs have capabilities outside the entire "polynomial hierarchy"
• Requires only a very simple kind of quantum computation: nonadaptive linear optics (testable before I'm dead?)

Disadvantages:
• Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
• Harder to convince a skeptic that your computer is indeed solving the relevant hard problem
• Less relevant for the NSA
Bestiary of Complexity Classes
[Diagram: inclusion structure of complexity classes P ⊆ BPP ⊆ BQP ⊆ P^#P and NP ⊆ PH ⊆ P^#P, with example problems: FACTORING (in BQP), 3SAT (NP-complete), PERMANENT and COUNTING (#P-hard), XYZ…]
How complexity theorists say “such-and-such is damn unlikely”:
“If such-and-such is true, then PH collapses to a finite level”
Our Results

Suppose the output distribution of any linear-optics circuit can be efficiently sampled by a classical algorithm. Then the polynomial hierarchy collapses.

Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, the polynomial hierarchy still collapses.

Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Then the output distribution of a linear-optics circuit can't even be approximately sampled efficiently classically, unless the polynomial hierarchy collapses.

If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from.
Particle Physics In One Slide

There are two basic types of particle in the universe…

Their transition amplitudes are given respectively by:

BOSONS:   $\mathrm{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}$

FERMIONS: $\mathrm{Det}(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}$

All I can say is, the bosons got the harder job
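The two formulas above differ only in the sign factor, yet that difference separates easy from hard: the determinant is computable in O(n³) time, while the permanent is #P-hard. A minimal numerical sketch (my own code, not the authors'; assumes NumPy) computing both directly from the permutation sums:

```python
# Sketch: the bosonic amplitude uses Per(A), the fermionic one Det(A).
# Same sum over permutations, but the permanent drops the sign sgn(sigma),
# and that is exactly what makes it #P-hard while the determinant is in P.
from itertools import permutations
import numpy as np

def permanent(A):
    """Per(A) = sum over permutations sigma of prod_i A[i, sigma(i)] (no signs).
    Brute force, O(n! * n) -- fine only for tiny n."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent(A))      # 1*4 + 2*3 = 10  (unsigned sum)
print(np.linalg.det(A))  # 1*4 - 2*3 = -2  (signed sum, poly-time via LU)
```

The brute-force loop here is just for illustration; even Ryser's formula only brings the permanent down to O(2ⁿ·n), consistent with its #P-hardness.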
High-Level Idea

Estimating a sum of exponentially many positive or negative numbers: #P-hard

Estimating a sum of exponentially many nonnegative numbers: Still hard, but known to be in PH

If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent, thereby placing #P in PH, and collapsing PH
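A loose numerical analogy (my own toy example, not from the talk) for why signs matter: a sum of nonnegative terms can be multiplicatively estimated by sampling a few terms, but for a signed sum of the same magnitudes, cancellation makes the total exponentially smaller than the sampling noise.

```python
# Toy illustration: random sampling gives a good multiplicative estimate of a
# nonnegative sum, but the same magnitudes with random signs nearly cancel,
# so no small sample can pin down the signed total multiplicatively.
import random
random.seed(0)

N = 10**5
pos = [random.random() for _ in range(N)]            # all nonnegative
signed = [x * random.choice([-1, 1]) for x in pos]   # same magnitudes, random signs

def sample_estimate(xs, k=2000):
    """Estimate sum(xs) as len(xs) * (mean of k uniformly sampled terms)."""
    return len(xs) * sum(random.choice(xs) for _ in range(k)) / k

print(abs(sample_estimate(pos) / sum(pos) - 1))  # small relative error
print(abs(sum(signed)) / sum(pos))               # signed total is a tiny fraction of the mass
```

This is only an analogy: the actual argument uses approximate counting (which places nonnegative-sum estimation in PH) against Valiant's #P-hardness of the signed case.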
So why aren’t we done?
Because real quantum experiments are subject to noise
Would an efficient classical algorithm that simulated a noisy optics experiment still collapse the polynomial hierarchy?
Main Result: Yes, assuming two plausible conjectures about permanents of random matrices (the “PCC” and the
“PGC”)
Particular experiment we have in mind: Take a system of n identical photons with m=O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U.
Then measure which modes have 1 or more photons in them
The Permanent Concentration Conjecture (PCC)

There exists a polynomial p such that for all n,

$$\Pr_{X \sim \mathcal{N}(0,1)_{\mathbb{C}}^{n \times n}}\left[\,|\mathrm{Per}(X)| < \frac{\sqrt{n!}}{p(n)}\,\right] < \frac{1}{p(n)}$$

Empirically true! Also, we can prove it with the determinant in place of the permanent
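The "empirically true" claim can be spot-checked at small n (my own quick experiment, not the authors' numerics; the threshold p(n)=10 is an arbitrary choice for illustration):

```python
# Empirical sanity check of the PCC at tiny n: sample complex Gaussian
# matrices and count how often |Per(X)| falls far below sqrt(n!).
# The conjecture says this fraction stays small (bounded by 1/p(n)).
from itertools import permutations
from math import factorial, sqrt
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

rng = np.random.default_rng(1)
n, trials = 4, 200
small = 0
for _ in range(trials):
    # i.i.d. N(0,1) complex Gaussian entries (real and imaginary variance 1/2)
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    if abs(permanent(X)) < sqrt(factorial(n)) / 10:  # threshold sqrt(n!)/p(n), p(n)=10
        small += 1
print(small / trials)  # fraction of anti-concentrated samples; should be small
```

Note that E[|Per(X)|²] = n! for this ensemble, so √(n!) is the natural scale against which the conjecture measures concentration.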
The Permanent-of-Gaussians Conjecture (PGC)

Let X be an n×n matrix of independent N(0,1) complex Gaussian entries. Then approximating Per(X) to within a 1/poly(n) multiplicative error, for a 1-1/poly(n) fraction of X, is a #P-hard problem.
Experimental Prospects

What would it take to implement the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
• Reliable single-photon sources
• Photodetector arrays that can reliably distinguish 0 vs. 1 photon

But crucially, no nonlinear optics or postselected measurements!
Our Proposal: Concentrate on (say)
n=20 photons and m=400 modes, so that classical simulation is
nontrivial but not impossible
Summary

I often say that Shor's algorithm presented us with three choices. Either
(1) The laws of physics are exponentially hard to simulate on any computer of today,
(2) Textbook quantum mechanics is false, or
(3) Quantum computers are easy to simulate classically.
For all intents and purposes?