1
00:00:07,330 --> 00:00:10,690
So, is the microphone working? All right.
2
00:00:10,700 --> 00:00:14,740
So, thank you so much for inviting me here.
3
00:00:14,830 --> 00:00:18,580
Thank you, Leslie, for the unbelievably kind introduction.
4
00:00:19,180 --> 00:00:29,409
It's an honour to give this lecture and to be here at Oxford, you know, among so many old friends and colleagues, and also,
5
00:00:29,410 --> 00:00:34,300
you know, at a place that played such a pivotal role in the birth of quantum computing.
6
00:00:35,320 --> 00:00:43,570
So when I did a Google image search for quantum computer, this is one of the first things that came up.
7
00:00:44,260 --> 00:00:47,530
So I'm not really sure whether that's what they look like.
8
00:00:47,530 --> 00:00:53,200
I mean, it may become apparent in this talk that I am a theorist rather than an engineer.
9
00:00:53,800 --> 00:00:59,980
But one of the most interesting things and what I want to tell you about today is that despite
10
00:00:59,980 --> 00:01:05,410
being about as far as you can possibly get on the theoretical end of quantum computing,
11
00:01:05,650 --> 00:01:09,580
even I am actually talking to the engineers and the experimentalists.
12
00:01:09,580 --> 00:01:12,760
And I will tell you something about why.
13
00:01:13,570 --> 00:01:18,670
So the title of the talk is Quantum Supremacy.
14
00:01:18,910 --> 00:01:24,790
So I should, you know, admit at the outset that it's a little bit of an unfortunate term.
15
00:01:25,330 --> 00:01:28,659
It's a term that's come into use.
16
00:01:28,660 --> 00:01:37,270
You know, a few years ago. I think it was John Preskill who first started using it, but I don't know.
17
00:01:38,620 --> 00:01:45,639
So, I come from the US. I don't know how many of you here follow the news,
18
00:01:45,640 --> 00:01:55,930
but in the US there's a presidential candidate by the name of Trump, and because of his candidacy,
19
00:01:55,930 --> 00:02:05,020
people have actually asked me about whether I will formally disavow quantum supremacy and quantum supremacists.
20
00:02:07,660 --> 00:02:15,250
You know, when Mr. Trump was asked whether he would disavow white supremacy,
21
00:02:15,250 --> 00:02:24,640
you probably all know his reply was that he needed to research it further. When it comes to quantum supremacy,
22
00:02:24,820 --> 00:02:27,520
I too need to research this further.
23
00:02:28,420 --> 00:02:37,270
But in this talk, I'll tell you about the research that I have done, and that my colleagues have done, on this topic.
24
00:02:37,270 --> 00:02:44,500
So what exactly do people mean by this term, quantum supremacy?
25
00:02:45,100 --> 00:02:58,540
So it's a term that's basically just used to refer to a certain task that people are, I would say, racing to achieve right now,
26
00:02:59,530 --> 00:03:14,710
which is to do some experiment in quantum computation, you know, hopefully with hardware that will be available in the near future.
27
00:03:15,010 --> 00:03:24,520
So, you know, five years or maybe even less than that. An experiment which will, you know, not break any cryptography.
28
00:03:24,740 --> 00:03:33,700
It will not, you know, factor an enormous number, will not do anything of any practical use to anyone.
29
00:03:34,240 --> 00:03:42,520
That's still a ways off, but it will do something that, while useless, is nevertheless hard.
30
00:03:42,880 --> 00:03:49,930
Okay. Something that we have some sort of confidence, or reason to believe, would
31
00:03:49,930 --> 00:03:55,810
be hard to simulate using a classical computer, for some meaning of the word hard.
32
00:03:55,900 --> 00:04:04,540
Okay. You know, it would take manifestly more time to get the same sort of result using classical computation.
33
00:04:05,230 --> 00:04:12,280
This is a milestone that, you know, despite sort of being claimed, you know,
34
00:04:12,280 --> 00:04:17,200
every week or so in the popular press, I would say, has not yet been achieved.
35
00:04:18,250 --> 00:04:27,850
But, you know, if we do a fair comparison, that is if we compare quantum computation, you know, not against a just, you know,
36
00:04:27,860 --> 00:04:37,420
a poor classical algorithm, but against the very best classical algorithms that can be devised for the same sorts of tasks.
37
00:04:37,750 --> 00:04:42,639
Okay. But it is plausible, for reasons
38
00:04:42,640 --> 00:04:45,160
I'll go into, that within a few years
39
00:04:45,160 --> 00:04:55,480
we will have the capability to do something with quantum-mechanical hardware, you know, which is a well-defined mathematical task,
40
00:04:56,080 --> 00:05:04,900
but for which, you know, we could argue that a classical simulation would take a lot longer than
41
00:05:04,930 --> 00:05:08,390
the quantum computation.
42
00:05:09,200 --> 00:05:20,750
I think that this will be a major milestone. This will sort of cast severe doubt on what we call the extended Church-Turing thesis,
43
00:05:21,050 --> 00:05:27,770
which has been sort of a foundational belief of computer science since the 1960s.
44
00:05:28,070 --> 00:05:37,100
And for me personally, you know, overthrowing this extended Church-Turing thesis is really,
45
00:05:37,100 --> 00:05:41,690
you know, the reason why I got into quantum computing in the first place.
46
00:05:42,500 --> 00:05:48,620
Sometimes people ask, well, you know, what is the biggest application of a quantum computer?
47
00:05:48,630 --> 00:05:51,020
Right. And you may have heard of some of them.
48
00:05:51,020 --> 00:06:01,850
You know, is it breaking all the cryptography that we use on the Internet today, or is it simulating quantum mechanics itself, you know,
49
00:06:02,300 --> 00:06:09,200
for designing drugs or designing new materials, which you might say is a more positive application for humanity than,
50
00:06:09,200 --> 00:06:12,560
you know, breaking codes and reading people's email.
51
00:06:12,890 --> 00:06:17,270
Okay. Or is it optimisation or machine learning?
52
00:06:17,450 --> 00:06:20,989
You know, those are certainly huge application areas, but for those,
53
00:06:20,990 --> 00:06:27,380
it's less clear how much advantage a quantum computer would really give us, even if we had one.
54
00:06:29,690 --> 00:06:37,910
My answer to that has always been that, you know, there is one application of quantum computing which is more important to me than any of those,
55
00:06:38,210 --> 00:06:42,320
and that is to disprove the people who say that it's impossible.
56
00:06:42,800 --> 00:06:46,580
Okay. That's why I am in this.
57
00:06:46,590 --> 00:06:53,930
I think that quantum computing is in some sense, the most stringent test of quantum mechanics itself,
58
00:06:54,170 --> 00:06:59,480
that, you know, we're maybe going to see in our lifetimes.
59
00:06:59,810 --> 00:07:05,810
And, you know, if it is worth building the Large Hadron Collider, or worth building LIGO, which,
60
00:07:05,810 --> 00:07:11,090
you know, are these amazing, wonderful machines that, you know, so far mostly, you know,
61
00:07:11,090 --> 00:07:17,930
sort of triumphantly confirm our existing theories, then surely it is worth it to see whether quantum mechanics,
62
00:07:18,170 --> 00:07:22,820
you know, still holds in this regime of computational intractability.
63
00:07:23,150 --> 00:07:33,470
And I spend a lot of time, you know, on my blog and elsewhere, just arguing against quantum computing sceptics.
64
00:07:33,830 --> 00:07:38,630
And, you know, I would just like to see them admit that they were wrong.
65
00:07:40,610 --> 00:07:47,269
I'll confess to that motive. Of course, you know, if they turned out to be right, in some sense,
66
00:07:47,270 --> 00:07:55,790
that's even a more exciting outcome to me, because that, I think, would force a revision in our understanding of physics itself.
67
00:07:56,090 --> 00:08:00,620
If there's really some deep and fundamental reason why you cannot build a quantum computer,
68
00:08:00,620 --> 00:08:05,270
then, you know, we would have to rewrite the textbooks to explain why.
69
00:08:05,360 --> 00:08:09,980
Right? Believing you can build a quantum computer is just the conservative option.
70
00:08:10,380 --> 00:08:14,180
Okay. But, you know.
71
00:08:14,180 --> 00:08:22,970
But either way, I would like to know the truth. Okay, so this is, you know, sort of what I've been advocating to the
72
00:08:23,330 --> 00:08:32,610
experimentalists: to disentangle, so to speak, the question of, you know,
73
00:08:32,720 --> 00:08:41,780
what a quantum computer would be practically useful for, from the question of just, can we demonstrate to everyone's satisfaction that we can achieve,
74
00:08:42,020 --> 00:08:47,270
you know, a clear quantum speed up at all and just focus on doing the latter first.
75
00:08:47,600 --> 00:08:51,670
Okay. And that just seems like the right order to me.
76
00:08:52,110 --> 00:08:59,479
And, because of developments within the last five or six years,
77
00:08:59,480 --> 00:09:06,890
I am very optimistic that this clear quantum speed up will be achievable, you know, relatively soon.
78
00:09:07,100 --> 00:09:15,380
But there's a lot more that has to happen in terms of engineering, and there is also more that has to happen in terms of theory.
79
00:09:15,680 --> 00:09:23,000
Okay. In terms of understanding what is or isn't hard to simulate using a classical computer, and why.
80
00:09:23,240 --> 00:09:27,170
Okay. And so that that's what I'll focus on more in this talk. Okay.
81
00:09:27,440 --> 00:09:33,950
So first of all, what is this extended Church-Turing thesis that I'm so interested in trying to refute?
82
00:09:34,250 --> 00:09:39,210
Well, so the original Church-Turing thesis was a statement about computability.
83
00:09:39,260 --> 00:09:49,100
Okay. And the way that I would interpret it is as a statement that everything that is computable, you know,
84
00:09:49,200 --> 00:09:55,790
by any device that you can physically build is also computable by a Turing machine,
85
00:09:56,270 --> 00:10:00,680
which is, you know, just a mathematical idealisation of what we mean by a computer.
86
00:10:01,820 --> 00:10:05,420
You know, it's not clear that Church or Turing themselves ever, you know,
87
00:10:06,790 --> 00:10:10,520
quite said that. But in any case, that's what they should have said.
88
00:10:10,850 --> 00:10:21,830
So, you know, I think that, 80 years after Church and Turing, this thesis remains on very solid ground
89
00:10:22,070 --> 00:10:28,130
as a statement about our laws of physics.
90
00:10:28,500 --> 00:10:36,230
You know, there is maybe one person who has tried to fight it:
91
00:10:37,850 --> 00:10:42,590
Sir Roger Penrose, who I guess is also here in Oxford.
92
00:10:43,550 --> 00:10:50,060
But, you know, it is almost, you could say, a backhanded compliment to the Church-Turing thesis,
93
00:10:50,330 --> 00:10:55,160
sort of how hard he has had to work to develop any sort of alternative to it.
94
00:10:56,390 --> 00:11:04,460
Now, the extended Church-Turing thesis sticks its neck out further, and it talks about what is efficiently computable.
95
00:11:04,530 --> 00:11:13,490
Okay. It says that anything that we could efficiently, feasibly compute by a physically buildable device,
96
00:11:13,760 --> 00:11:18,650
meaning let's say, you know, our rough and ready approximation in computer science.
97
00:11:18,950 --> 00:11:23,500
meaning, you know, anything we can compute in polynomial time,
98
00:11:23,540 --> 00:11:33,800
using resources that scale like the problem size to some fixed power ought to be computable in polynomial time using a Turing machine,
99
00:11:34,100 --> 00:11:39,170
meaning, you know, it should be in the complexity class P or at any rate,
100
00:11:39,440 --> 00:11:44,770
at least in the complexity class BPP, which is the probabilistic version of P.
101
00:11:44,810 --> 00:11:52,070
Okay, there should be, you know, a deterministic or probabilistic Turing machine to,
102
00:11:52,070 --> 00:11:55,490
you know, efficiently simulate whatever we can do in the physical world.
103
00:11:55,820 --> 00:12:02,600
Okay. Now, you know, just the very fact that I had to equivocate between deterministic or maybe probabilistic, you know,
104
00:12:02,600 --> 00:12:10,550
I think illustrates that, you know, this thesis from the very beginning was on shakier ground than the computability one.
105
00:12:10,890 --> 00:12:24,680
Okay. But probably most of you know that in the 1990s, this extended, or polynomial-time, Church-Turing thesis developed really, really serious fissures.
106
00:12:25,310 --> 00:12:33,330
So Peter Shor came along. And those are supposed to be cracks in the foundations of the Church-Turing thesis,
107
00:12:33,350 --> 00:12:38,630
or indeed, as some people said, they looked like horns. But I'm just not good enough at PowerPoint.
108
00:12:38,990 --> 00:12:49,850
Okay. So Peter Shor discovered a polynomial-time algorithm for factoring integers using a quantum computer,
109
00:12:50,360 --> 00:12:56,690
which is a problem, certainly, that no one has proven is hard for a classical computer, but that we,
110
00:12:56,990 --> 00:12:58,250
you know, for better or worse,
111
00:12:58,250 --> 00:13:07,580
believe it enough that we base the security of most of our electronic commerce on the belief that it and related problems are hard.
112
00:13:08,030 --> 00:13:17,390
Okay. So let me show you a different, maybe more unusual, way to state Shor's discovery.
113
00:13:17,630 --> 00:13:25,280
Okay. You know, Shor's theorem is not merely about, you know, a hypothetical future in which quantum computers are built.
114
00:13:25,580 --> 00:13:32,930
It's a hardness result for a problem that could concern us right now, namely the problem of simulating quantum mechanics.
115
00:13:33,310 --> 00:13:37,969
Okay. This is a problem of enormous concern to chemists and physicists.
116
00:13:37,970 --> 00:13:43,160
That's what, you know, a significant percentage of supercomputing time is actually used for today.
117
00:13:43,520 --> 00:13:54,890
And Shor's theorem, in some sense explains why simulating quantum mechanics seems to be hard in terms of more standard problems in computer science.
118
00:13:55,190 --> 00:14:05,240
It says that if you had an efficient algorithm for simulating nature, for simulating quantum physics using a conventional computer,
119
00:14:05,390 --> 00:14:11,060
then you would also necessarily have a fast classical algorithm for factoring integers.
120
00:14:11,750 --> 00:14:19,040
Why? Because a simulation would, in particular, have to be able to simulate a quantum computer running Shor's factoring algorithm.
121
00:14:20,210 --> 00:14:28,430
Okay. So it was, you know, an amazing connection between ideas that really sort of started off this field.
122
00:14:28,940 --> 00:14:34,129
But now, you know, because this is a computer science lecture series, I thought, you know,
123
00:14:34,130 --> 00:14:43,790
at some point, for those of you who remain innocent of it, I need to explain what quantum mechanics is.
124
00:14:44,030 --> 00:14:48,920
I don't have a lot of time to do that, but fortunately, I think I can do it in one slide.
125
00:14:49,250 --> 00:14:55,460
Okay. So, you know, I think that physicists really
126
00:14:55,970 --> 00:15:06,530
did a remarkable job of sort of making quantum mechanics look really complicated and hard to learn.
127
00:15:06,830 --> 00:15:12,979
You know, indeed it is if you really care about, you know, making predictions about actual physical systems.
128
00:15:12,980 --> 00:15:14,990
Okay. But I'm a computer scientist.
129
00:15:14,990 --> 00:15:25,010
I don't, you know. And the marvellous secret in quantum information is that quantum mechanics turns out to be just remarkably simple,
130
00:15:25,460 --> 00:15:30,800
once you take the physics out of it. And the way that I, you know,
131
00:15:30,800 --> 00:15:38,330
and many of us in this field think about quantum mechanics is that it's a certain generalisation of the rules of probability themselves.
132
00:15:38,650 --> 00:15:42,650
Okay. So I think of it as sort of not even physics in the usual sense.
133
00:15:42,890 --> 00:15:49,130
It's it's more like an operating system that physical theories can run on as application software.
134
00:15:49,490 --> 00:15:54,830
Okay. And so, you know, we all know that in probability theory,
135
00:15:54,830 --> 00:16:02,330
you would describe your knowledge of a physical system by using a list of
136
00:16:02,330 --> 00:16:08,300
non-negative real numbers which add up to one and which we call probabilities.
137
00:16:08,660 --> 00:16:16,280
Okay. And which describe the likelihood that you'll find the system in one configuration or in another one if you look at it.
138
00:16:17,360 --> 00:16:24,050
But now, in addition to looking at a system, one thing you can also do is transform it somehow.
139
00:16:24,320 --> 00:16:32,010
Like if you have a coin, then before looking at the coin, you know, you could just flip it,
140
00:16:32,060 --> 00:16:39,110
and that would correspond to taking your vector of probabilities and acting on it by a linear transformation,
141
00:16:39,440 --> 00:16:46,910
which has to have the property that it maps any normalised probability vector to another probability vector.
142
00:16:47,190 --> 00:16:53,570
Okay. So any linear transformation that does that, we call a stochastic transformation.
143
00:16:53,660 --> 00:16:58,760
Okay. It's given by, you know, a non-negative matrix where all the columns sum to one.
144
00:16:59,660 --> 00:17:02,660
It's a matrix that preserves the one-norm.
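[As a minimal illustrative sketch (in Python with NumPy; the example itself is not from the talk), here is a stochastic matrix acting on a probability vector and preserving the one-norm:]

```python
import numpy as np

# A stochastic matrix: non-negative entries, each column summing to 1.
# Acting on a probability vector, it yields another probability vector,
# i.e. it preserves the one-norm of non-negative vectors.
coin_flip = np.array([[0.5, 0.5],
                      [0.5, 0.5]])  # randomise the coin regardless of its state

p = np.array([1.0, 0.0])  # the coin definitely starts as heads
p_new = coin_flip @ p

print(p_new)        # [0.5 0.5]
print(p_new.sum())  # 1.0 -- the one-norm is preserved
```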
145
00:17:03,720 --> 00:17:12,270
Okay. Now, without knowing anything about physics, let's say you were a mathematician in the 1800s.
146
00:17:12,840 --> 00:17:19,130
If someone had come to you and said, I want you to invent a theory which is just like this one,
147
00:17:19,140 --> 00:17:25,450
just like probability theory, except it should be based on the two norm rather than the one norm.
148
00:17:25,740 --> 00:17:30,750
Because you know the two norm is the one that, you know, that sort of God prefers in every situation.
149
00:17:31,020 --> 00:17:35,040
Okay. You would be more or less forced to invent quantum mechanics.
150
00:17:35,580 --> 00:17:41,130
Okay. You know, this is not the usual story of where it comes from.
151
00:17:41,340 --> 00:17:47,040
You know, usually, you know, you tell a whole long tale about, you know,
152
00:17:47,040 --> 00:17:53,790
I guess a box that emits radiation and, you know, hydrogen atoms and so forth.
153
00:17:53,800 --> 00:17:58,200
Right. That is the story of how these rules came to be discovered, which is, you know,
154
00:17:58,200 --> 00:18:04,050
an important and fascinating story that took place between 1900 and 1925 or so.
155
00:18:04,350 --> 00:18:07,350
But if you just want to know, at the end of the day, what are the rules?
156
00:18:08,340 --> 00:18:15,870
They are like the rules of probability, except that instead of a vector of non-negative real numbers,
157
00:18:16,110 --> 00:18:22,590
now we describe the state of a system using a vector of complex numbers, which we call amplitudes.
158
00:18:23,820 --> 00:18:31,110
And if you measure, you know, a system, then these amplitudes become probabilities.
159
00:18:31,410 --> 00:18:37,629
Right. So, you know, you don't want a theory to tell you you'll have a -10% chance of,
160
00:18:37,630 --> 00:18:42,090
you know, seeing some outcome, let alone an 80i% chance.
161
00:18:42,420 --> 00:18:50,220
Okay. So, you know, each amplitude becomes a probability by taking its squared absolute value.
162
00:18:50,610 --> 00:18:55,049
Okay. And measurement, you may have heard, is a destructive process.
163
00:18:55,050 --> 00:19:00,660
So the system, you know, when you look at a system, you force it to collapse to one of the outcomes.
164
00:19:00,960 --> 00:19:09,900
You know, it makes up its mind; it becomes outcome i with probability absolute value of alpha i squared.
165
00:19:10,200 --> 00:19:13,200
And then it sticks with that choice. Okay.
166
00:19:13,290 --> 00:19:17,549
Now, in addition to looking at a system, you can also act on it,
167
00:19:17,550 --> 00:19:24,670
which now means taking this vector of amplitudes and acting on it using any linear transformation that preserves the two norm.
168
00:19:25,000 --> 00:19:33,300
Okay. That we call a unitary matrix. It's just a complex generalisation of an orthogonal matrix.
169
00:19:33,630 --> 00:19:36,740
Okay. And so those are the two rules.
170
00:19:36,750 --> 00:19:45,180
You can make a measurement that gives this probabilistic result, or you can transform your state by a unitary matrix.
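[The two rules can be sketched in a few lines of Python with NumPy (an illustrative example, not part of the original slides): a unitary transformation preserves the two-norm, and measuring gives each outcome with probability equal to its squared absolute amplitude.]

```python
import numpy as np

# A quantum state: a vector of complex amplitudes with two-norm 1.
state = np.array([1.0, 0.0], dtype=complex)  # a qubit in state |0>

# The Hadamard gate: a simple unitary matrix (its conjugate transpose
# is its inverse), so it preserves the two-norm.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state               # an equal superposition of |0> and |1>
probs = np.abs(state) ** 2      # measurement rule: |amplitude|^2

print(probs)        # [0.5 0.5]
print(probs.sum())  # ~1.0: the squared two-norm is preserved
```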
171
00:19:45,510 --> 00:19:50,219
Okay. And all of the differences, you know,
172
00:19:50,220 --> 00:19:56,790
all of the weirdnesses of the quantum world that you've ever heard about, are just ultimately traceable to the
173
00:19:56,790 --> 00:20:04,110
fact that amplitudes are based on the two norm and therefore work differently than classical probabilities do,
174
00:20:04,290 --> 00:20:08,609
which are based on the one norm. Okay. So all right.
175
00:20:08,610 --> 00:20:11,970
So now I need one slide to tell you what is quantum computing?
176
00:20:12,930 --> 00:20:24,270
So this idea dates back to the early 1980s, when a few physicists like Richard Feynman and,
177
00:20:25,140 --> 00:20:29,129
of course, David Deutsch, who's here in Oxford, started,
178
00:20:29,130 --> 00:20:37,410
you know, noticing that if you want to simulate a quantum system using a conventional computer,
179
00:20:37,650 --> 00:20:41,670
it seems to require an exponential blow up in the amount of time.
180
00:20:42,270 --> 00:20:43,490
Why? Because, you know,
181
00:20:43,710 --> 00:20:50,910
the rules of quantum mechanics say that you need to assign an amplitude to every possible configuration that your system could be in.
182
00:20:50,960 --> 00:21:02,550
Okay. So if you had n bits, or, you know, as we call them, qubits, quantum bits, there are two to the n possible configurations of those qubits.
183
00:21:02,850 --> 00:21:07,830
And the rules say that every one of those two to the N possibilities gets an amplitude.
184
00:21:08,190 --> 00:21:12,230
Okay. So just to keep track of, say, a thousand particles, you know,
185
00:21:12,240 --> 00:21:18,810
nature off to the side somewhere needs to maintain a vector of two to the thousand complex numbers.
186
00:21:19,140 --> 00:21:27,570
And whenever something is done to the particles, nature needs to take that vector of size two to the thousand and multiply it by a matrix.
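[The exponential bookkeeping just described can be made concrete with a small sketch (Python; an illustrative example, not from the talk): the state vector of n qubits has 2^n complex amplitudes, and the memory to store it blows up accordingly.]

```python
# Classical cost of tracking n qubits: the state vector has 2^n
# amplitudes; at 16 bytes per complex double, memory grows exponentially.
def sim_memory_bytes(n_qubits: int) -> int:
    """Bytes needed to store the full state vector of n qubits."""
    return (2 ** n_qubits) * 16

for n in [10, 30, 50, 1000]:
    print(n, "qubits ->", 2 ** n, "amplitudes,",
          sim_memory_bytes(n), "bytes")

# Already at 50 qubits this is roughly 18 petabytes; at 1000 qubits
# it exceeds any classical memory that could ever exist.
```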
187
00:21:27,840 --> 00:21:33,000
Okay. So, you know, that just seems like an enormous amount of effort for nature to go to.
188
00:21:33,300 --> 00:21:40,170
And, you know, it raises the, in retrospect, obvious question: why don't we, you know,
189
00:21:40,260 --> 00:21:47,069
take that lemon of, you know, how hard quantum mechanics is to simulate and turn it into lemonade, okay?
190
00:21:47,070 --> 00:21:52,200
That is, you know, build a computer that would itself exploit
191
00:21:52,210 --> 00:21:58,230
these amplitudes, that could be, as we say, in a superposition of all of these states.
192
00:21:58,560 --> 00:22:03,090
Okay. So now, of course, the question arises: supposing we built such a computer, what would it be good for?
193
00:22:03,180 --> 00:22:08,310
Feynman and Deutsch were only able to give one real answer to that question,
194
00:22:08,580 --> 00:22:12,360
which is that it would be good for simulating quantum mechanics.
195
00:22:12,850 --> 00:22:13,230
Okay.
196
00:22:13,800 --> 00:22:23,220
You know, I think that remains maybe the biggest, you know, real practical application that a quantum computer would be known to have.
197
00:22:23,670 --> 00:22:30,180
Okay. But one important point that I should clear up is that, you know,
198
00:22:30,180 --> 00:22:37,590
in like almost every popular article that's been written about this subject for the past 25 years,
199
00:22:38,460 --> 00:22:47,070
you know, a quantum computer has been described as a device that would just try each possible answer in a different parallel universe.
200
00:22:47,470 --> 00:22:54,810
Okay. Or, you know, in a different branch. Okay. That is not how it works.
201
00:22:55,110 --> 00:23:02,410
Okay. You know, and, you know, I don't care whether you believe or don't believe in, you know, the reality of parallel universes.
202
00:23:02,480 --> 00:23:02,720
Okay.
203
00:23:02,740 --> 00:23:11,250
You know, that description that you constantly see of a quantum computer is wrong for just a straightforward technical reason.
204
00:23:11,650 --> 00:23:18,390
Okay. It's wrong because, while you can create a superposition over all the possible answers to some problem,
205
00:23:18,660 --> 00:23:26,309
so giving each possible answer some amplitude, at some point, you know, you have to measure the computer to see what state it's in.
206
00:23:26,310 --> 00:23:32,040
And if you just make a measurement not having done anything else, all you're going to see is a random answer.
207
00:23:32,430 --> 00:23:37,290
Okay. And if you just wanted a random answer, you could have picked one yourself with a lot less trouble.
208
00:23:37,650 --> 00:23:44,940
So, you know, the only hope of getting a speed advantage from a quantum computer is to exploit
209
00:23:45,540 --> 00:23:49,580
the ways that amplitudes are different from conventional probabilities,
210
00:23:49,860 --> 00:23:56,730
and in particular, the fact that amplitudes can also be negative or complex or whatever.
211
00:23:57,300 --> 00:24:06,930
And so the goal in quantum computing is always to choreograph things so that the
212
00:24:07,230 --> 00:24:12,780
final amplitude for the correct answer is very, very large, close to one,
213
00:24:13,050 --> 00:24:18,540
Whereas the final amplitudes for the incorrect answers are very small, close to zero.
214
00:24:18,870 --> 00:24:25,799
Okay, if you can arrange that, then when you measure the computer, you should see the right answer with high probability, you know.
215
00:24:25,800 --> 00:24:33,510
And a computer that is merely correct 70% of the time, I should say, is perfectly fine for our purposes, because,
216
00:24:33,750 --> 00:24:41,190
you know, if you're not happy with a 70% chance of correctness, then just rerun the computer 100 times and take the majority answer.
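[That majority-vote trick can be checked numerically (a small Python sketch using the binomial distribution; the example is illustrative, not from the talk):]

```python
from math import comb

# If each run is independently correct with probability p, the chance
# that a strict majority of the runs is correct is a binomial tail sum.
def majority_correct(p: float, runs: int) -> float:
    need = runs // 2 + 1  # smallest count that is a strict majority
    return sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
               for k in range(need, runs + 1))

for runs in [1, 11, 101]:
    print(runs, "runs ->", majority_correct(0.7, runs))
# With ~100 runs of a 70%-correct machine, the majority answer is
# right with overwhelming probability.
```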
217
00:24:41,550 --> 00:24:48,060
Okay. But now, how do you boost the probability of the right answer?
218
00:24:48,900 --> 00:24:56,250
What that always involves is sort of choreographing a pattern of quantum interference.
219
00:24:56,550 --> 00:25:05,550
So the idea is that for each wrong answer, there will be sort of some contributions to its amplitude that are positive and others that are negative.
220
00:25:05,760 --> 00:25:09,050
So on the whole, they will cancel each other out. Okay.
221
00:25:09,060 --> 00:25:16,140
Whereas the contributions to the amplitude of the correct answer should all be in phase with each other,
222
00:25:16,290 --> 00:25:22,019
so all positive or all negative. If you can arrange that, then that's that.
223
00:25:22,020 --> 00:25:27,809
That is how you get a quantum speedup. It was not obvious to anyone that, you know,
224
00:25:27,810 --> 00:25:35,730
that this very strange hammer would actually hit any of the nails that we care about in computer science,
225
00:25:35,760 --> 00:25:38,819
You know, besides simulating quantum mechanics itself.
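[The interference mechanism just described shows up already in the smallest possible example (a Python sketch with NumPy; illustrative, not from the slides): applying the Hadamard gate twice to |0>, the two contributions to the amplitude of |1> have opposite signs and cancel, while the contributions to |0> add up.]

```python
import numpy as np

# Hadamard gate: sends |0> to an equal superposition with + signs,
# and |1> to an equal superposition with a - sign on |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])  # start in |0>
state = H @ (H @ state)       # apply H twice

print(np.round(state, 10))  # [1. 0.] -- the |1> contributions cancelled
# Measuring now gives outcome 0 with probability 1: a toy version of
# choreographing amplitudes so the wrong answer destructively interferes.
```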
226
00:25:38,820 --> 00:25:47,340
Okay, but, you know, being computer scientists, we can associate any concept with an inscrutable sequence of capital letters.
227
00:25:47,610 --> 00:25:55,050
So this is what was done by my advisor, Umesh Vazirani, and his student, Ethan Bernstein, in 1993,
228
00:25:55,290 --> 00:26:00,360
when they defined the class BQP, or bounded-error quantum polynomial time.
229
00:26:00,660 --> 00:26:08,580
This is the quantum generalisation of the class P, you know, of problems efficiently solvable with a classical computer.
230
00:26:08,880 --> 00:26:17,430
Okay. And this is just all the problems that are solvable efficiently by a quantum computer with a bounded probability of error.
231
00:26:17,880 --> 00:26:27,570
Okay. And so now Shor's great discovery in the nineties could be phrased as saying that the factoring problem, strictly speaking
232
00:26:27,570 --> 00:26:36,389
a decision version of the factoring problem, is in this complexity class BQP. And this was the discovery, of course,
233
00:26:36,390 --> 00:26:42,600
that made many people interested in quantum computing who had not been before, like the intelligence agencies.
234
00:26:42,960 --> 00:26:49,410
Okay. So just to show you, you know, a little map of the world to orient you a bit.
235
00:26:49,710 --> 00:26:55,440
So, you know, here is P, the problems efficiently solvable with a classical computer.
236
00:26:55,800 --> 00:27:03,020
Here's NP, the problems whose answers are efficiently checkable by a classical computer. And, you know, at the top end
237
00:27:03,390 --> 00:27:06,330
are the famous NP-complete problems. Okay.
238
00:27:06,340 --> 00:27:14,400
And, you know, already here there are profound questions like the P versus NP question that you may have heard of.
239
00:27:16,770 --> 00:27:22,020
So now BQP is a generalisation of P.
240
00:27:22,290 --> 00:27:28,010
So, you know, a quantum computer can always simulate a classical one, but it might be able to do more.
241
00:27:28,020 --> 00:27:40,230
And so what Shor showed is that BQP contains at least one NP problem, namely factoring, which is not known to be in P.
242
00:27:40,590 --> 00:27:49,470
Okay, so you know, I drew BQP with a wavy border because everything quantum is spooky and mysterious.
243
00:27:49,710 --> 00:27:53,820
Okay, but yeah, but we do know a little bit about this class now.
244
00:27:54,360 --> 00:28:01,410
So, you know, in particular, you'll notice that factoring,
245
00:28:01,680 --> 00:28:05,040
as you may know, is neither known nor believed to be NP-complete.
246
00:28:05,400 --> 00:28:09,510
There are excellent theoretical reasons why it should not be an NP-complete problem.
247
00:28:10,530 --> 00:28:14,820
And to this day, you know, we do not know whether BQP contains NP.
248
00:28:15,090 --> 00:28:19,860
Okay. We don't know if there is an efficient quantum algorithm to solve the NP-complete problems.
249
00:28:20,130 --> 00:28:26,910
Many of us would conjecture that the answer is no, that, you know, quantum computers will give you at most a limited advantage for that.
250
00:28:27,210 --> 00:28:33,720
Certainly Shor's algorithm takes advantage of very, very special structure in the factoring problem.
251
00:28:33,940 --> 00:28:40,020
You know, ironically, some of the same structure that makes factoring so useful for public key cryptography.
252
00:28:40,620 --> 00:28:47,400
Okay. In the other direction, we also don't know whether BQP is contained in NP,
253
00:28:47,670 --> 00:28:54,720
so a quantum computer might be able to solve problems for which a classical computer cannot even efficiently verify the answer.
254
00:28:55,140 --> 00:29:04,170
Okay, that will be important later in the talk. Now, we do know that BQP is contained in polynomial space,
255
00:29:04,500 --> 00:29:12,180
in particular, in a class called P to the sharp-P, or, as my students today call it, P to the hashtag-P.
256
00:29:12,420 --> 00:29:19,649
Okay. Which means all the problems that you could solve if you had an oracle for counting problems, for,
257
00:29:19,650 --> 00:29:23,610
you know, counting the number of solutions to some combinatorial problem.
258
00:29:24,570 --> 00:29:32,000
So in particular, that means quantum computers will get at most an exponential advantage over classical ones, right?
259
00:29:32,160 --> 00:29:36,780
They will not do anything that is not classically computable, like solving the halting problem.
260
00:29:37,080 --> 00:29:43,050
Okay. It also means that in our present state of knowledge,
261
00:29:43,260 --> 00:29:50,370
we really have no hope of proving unconditionally that quantum computers are more powerful than classical ones.
262
00:29:50,640 --> 00:29:53,640
Or, in other words, that BQP is different from P.
263
00:29:54,060 --> 00:30:00,780
Okay. And that's a very important point. Okay. But the reasons for it have in some sense nothing to do with quantum computing.
264
00:30:00,860 --> 00:30:05,520
Okay. It's just that we don't even know how to separate P from PSPACE, for example.
265
00:30:05,550 --> 00:30:13,020
Right. And if those two were equal, then certainly P and BQP would get sandwiched together, too.
266
00:30:13,350 --> 00:30:16,470
Okay. So any evidence that we're going to offer,
267
00:30:16,620 --> 00:30:24,690
in this talk, that a quantum computer is achieving supremacy, is going to have to be conditional on some hypothesis,
268
00:30:24,960 --> 00:30:28,830
on some kind of complexity-theoretic hardness conjecture.
269
00:30:29,010 --> 00:30:32,370
And the question that will interest us is which conjecture?
270
00:30:32,580 --> 00:30:36,450
You know, how safe and believable a conjecture can we make it?
271
00:30:37,020 --> 00:30:45,000
Okay. So now you might say, well, it sounds like quantum computers just refute this Extended Church-Turing Thesis.
272
00:30:45,180 --> 00:30:50,580
What more proof could anyone want than that they do factoring?
273
00:30:50,820 --> 00:30:58,320
Okay. There are two drawbacks, as I see it, of Shor's algorithm as evidence for quantum supremacy.
274
00:30:58,420 --> 00:31:06,270
Okay. The first is just the obvious one: as you may have heard, actually building a quantum computer,
275
00:31:06,270 --> 00:31:12,150
which is scalable, which could factor thousand-digit numbers, turns out to be rather hard.
276
00:31:12,570 --> 00:31:20,970
Okay. The world record, after well over $1,000,000,000, I guess, of investment in this field,
277
00:31:21,180 --> 00:31:26,669
and over 20 years of brilliant experimental effort, has been that there are now
278
00:31:26,670 --> 00:31:33,390
quantum computers that can factor 21 into three times seven with high statistical confidence.
279
00:31:33,890 --> 00:31:37,170
Okay. I think 35 may be on the horizon.
280
00:31:37,180 --> 00:31:43,530
Okay. But there's a huge problem called decoherence,
281
00:31:43,740 --> 00:31:49,020
which means sort of the unwanted interaction between the quantum computer and its external environment,
282
00:31:49,260 --> 00:31:52,710
which, you know, in effect prematurely measures the computer. Okay.
283
00:31:52,980 --> 00:32:02,190
And if you want to build a scalable quantum computer, you need to get this decoherence down, if not quite to zero, then to a very, very low rate.
284
00:32:03,030 --> 00:32:09,390
And so, you know, the experimentalists have made enormous progress toward that goal, but they're not there yet.
285
00:32:10,140 --> 00:32:17,219
But you might ask, well, couldn't we just take systems that already exist in the lab?
286
00:32:17,220 --> 00:32:17,500
Right.
287
00:32:17,530 --> 00:32:24,450
I mean, there are lots of molecules for which the Schrödinger equation seems kind of hard to solve.
288
00:32:24,810 --> 00:32:33,120
Right. But unfortunately, in those cases, we don't know whether the hardness is actually asymptotic
289
00:32:33,120 --> 00:32:39,299
in nature, the kind that a theoretical computer scientist can analyse, or whether it's merely that
290
00:32:39,300 --> 00:32:47,129
someone will write down the ground state of this particular physical system and win the Nobel Prize for it.
291
00:32:47,130 --> 00:32:51,540
But that Nobel Prize will take maybe only a one-time effort, you know.
292
00:32:52,190 --> 00:32:55,650
You know, maybe it won't be an exponentially scaling amount of effort.
293
00:32:55,680 --> 00:33:01,230
Okay. So then you might ask: can't we at least meet the experimentalists halfway?
294
00:33:01,530 --> 00:33:04,080
The way I like to put it is: can't we say, look,
295
00:33:04,170 --> 00:33:09,450
you don't have to build a full quantum computer that will factor numbers with Shor's algorithm.
296
00:33:09,720 --> 00:33:16,140
But just do something closer to what you can do today, something
297
00:33:16,680 --> 00:33:25,500
that we as computer scientists can then interpret as solving a problem, that we understand, that we can argue about the hardness of.
298
00:33:25,860 --> 00:33:35,240
Right. And if we forget about the problem having any practical importance, then that may dramatically expand
299
00:33:35,250 --> 00:33:41,340
the number of places we could look for that. The other issue is, of course,
300
00:33:41,340 --> 00:33:50,100
that factoring might, for all we know, have a fast classical algorithm. I mean,
301
00:33:50,280 --> 00:33:58,770
if a fast factoring algorithm were discovered, the way I like to put it is that it might collapse the world's digital economy.
302
00:33:58,980 --> 00:34:04,830
But it would not be known to collapse the polynomial hierarchy or anything like that.
303
00:34:04,830 --> 00:34:11,190
Okay. So factoring, from a theoretical standpoint, is just this one particular problem, right?
304
00:34:11,310 --> 00:34:19,500
If it had a fast classical algorithm, that's not known to have any sort of wider consequences for other things.
305
00:34:19,770 --> 00:34:27,810
So wouldn't it be great if we could prove, for example, that if you could efficiently simulate a quantum computer classically,
306
00:34:28,020 --> 00:34:33,120
so, like, if P were equal to BQP, then P would also be equal to NP?
307
00:34:33,510 --> 00:34:39,090
Okay, that would be a wonderful result. Okay. Now, we don't know how to show that, I should say.
308
00:34:39,660 --> 00:34:45,780
But, you know, in this talk, I'll show you how we can now come closer to that than one might have thought possible.
309
00:34:46,690 --> 00:34:52,180
Okay. So, you know, I think that in this quest for quantum supremacy, you know,
310
00:34:52,210 --> 00:35:00,220
physicists are really going to have to think more like applied cryptographers have been thinking for a long time.
311
00:35:00,820 --> 00:35:09,370
Which means, you know, you define a clear mathematical task that you can perform with the quantum hardware, hopefully of the near future.
312
00:35:09,640 --> 00:35:18,290
You think hard about how your worst enemy would perform that task or appear to perform it using only classical resources.
313
00:35:18,490 --> 00:35:25,870
An excellent historical analogue in physics for this is the violation of the Bell inequality.
314
00:35:26,140 --> 00:35:34,120
Right. Only just a year ago, people managed to violate it in a way that closes off essentially all the loopholes as to how
315
00:35:34,120 --> 00:35:39,100
you know, nature could have been doing this in a way that was secretly classical behind the scenes.
316
00:35:39,490 --> 00:35:45,549
Okay. So we want to do a computation where we can then convince a sceptic that
317
00:35:45,550 --> 00:35:51,310
this computation was not done in a way that was,
318
00:35:51,310 --> 00:36:01,360
secretly, just classical behind the scenes, an interpretation you can sleep easy at night with.
319
00:36:01,690 --> 00:36:08,770
Okay. So that means, for one thing, that you should publish benchmark challenges.
320
00:36:08,780 --> 00:36:13,659
You should say, here is what I did. And, you know, here is a benchmark.
321
00:36:13,660 --> 00:36:20,500
We challenge classical sceptics to do the same thing with their classical computer in any comparable amount of time.
322
00:36:21,100 --> 00:36:28,059
You should also be able to isolate a very, very clear assumption that says, look, you know,
323
00:36:28,060 --> 00:36:38,230
if you could simulate my complicated system efficiently with a classical computer, then you would have to be solving this easy to state problem.
324
00:36:38,530 --> 00:36:41,320
Okay. So then people have a clear target to aim at, right?
325
00:36:41,530 --> 00:36:47,980
That if you believe that this clear problem is hard, then my quantum system is hard to simulate.
326
00:36:48,070 --> 00:36:56,730
Okay, so it's the same kind of game of reductions that people play in cryptography. And you should leave a safety margin.
327
00:36:56,740 --> 00:37:01,990
So I'm not interested in, you know, doing something like a factor of two faster than,
328
00:37:01,990 --> 00:37:07,360
what we could do with a reasonable multicore classical computer.
329
00:37:07,540 --> 00:37:11,020
You know, I would like a factor of a billion faster. A factor of a trillion.
330
00:37:11,200 --> 00:37:15,910
Okay. So, when these are your goals,
331
00:37:16,330 --> 00:37:21,700
One of the things that we discovered within the last six or seven years is
332
00:37:21,700 --> 00:37:26,140
that you find yourself driven to consider what are called sampling problems,
333
00:37:26,440 --> 00:37:32,530
which are broader than the problems that we usually think about in theoretical computer science,
334
00:37:32,530 --> 00:37:36,220
which are just decision problems, you know, with a yes or no answer.
335
00:37:36,550 --> 00:37:43,990
Okay, so in a sampling problem, you're given an input, say X, and your task is to output a sample,
336
00:37:44,260 --> 00:37:50,440
either exactly or, better yet, just approximately, from a probability distribution,
337
00:37:50,440 --> 00:37:54,190
say, D of X, over strings.
338
00:37:54,520 --> 00:37:59,679
Say, over n-bit strings.
339
00:37:59,680 --> 00:38:03,070
So these are problems that don't have a single valid output.
340
00:38:03,310 --> 00:38:11,080
Your goal is just to sample. Many computer scientists have studied these problems as well.
341
00:38:11,830 --> 00:38:17,200
Like, you want to sample a random point in a convex body.
342
00:38:17,410 --> 00:38:22,600
You want to sample a random matching in a graph. These would be examples of sampling problems.
343
00:38:22,930 --> 00:38:29,200
Okay. And so what we found is that, you know, sampling problems can be, number one,
344
00:38:29,950 --> 00:38:35,470
much easier to solve with devices that fall short of universal quantum computers.
345
00:38:35,740 --> 00:38:40,240
Okay. You know, which might be available with technology of the near future.
346
00:38:40,540 --> 00:38:50,050
Number two, sampling problems can be much, much easier somehow to argue are hard for a classical computer to solve.
347
00:38:51,790 --> 00:38:58,840
But the sampling problems that we'll talk about don't have practical applications that I know of,
348
00:38:59,410 --> 00:39:03,760
nor is it very easy to verify the result with a classical computer.
349
00:39:03,970 --> 00:39:06,970
So you do give something up. Okay? Nothing's free.
350
00:39:07,830 --> 00:39:20,790
Okay. So an example of these sampling problems that we've learned to focus on in quantum supremacy research is what's called boson sampling.
351
00:39:21,030 --> 00:39:26,280
So this is something that my student Alex Arkhipov and I proposed in 2011.
352
00:39:26,610 --> 00:39:35,730
And it's basically a very rudimentary type of quantum computing, which involves only identical, non-interacting photons.
353
00:39:36,140 --> 00:39:45,060
Okay. So the classical analogue of our system is something called the Galton board, which you can see in many science museums.
354
00:39:45,330 --> 00:39:52,980
It's where you drop balls one by one into a lattice of pegs and the balls sort of bounce around randomly.
355
00:39:53,220 --> 00:39:59,970
And then you see that the distribution over the bins they land in at the bottom approximates a binomial distribution.
356
00:40:00,340 --> 00:40:07,230
Right. So this is not a universal computer. But if you didn't know any other way to do binomial calculations,
357
00:40:07,500 --> 00:40:12,990
you could use this thing to estimate them, so maybe not entirely useless.
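The Galton board just described is easy to simulate. Here is a minimal sketch (my own illustration, not part of the talk); the choice of 10 rows and 100,000 balls is arbitrary:

```python
import random
from collections import Counter
from math import comb

def galton(n_rows, n_balls):
    """Drop balls through n_rows of pegs; each peg deflects a ball left or
    right with probability 1/2, so the final bin follows Binomial(n_rows, 1/2)."""
    bins = Counter()
    for _ in range(n_balls):
        bins[sum(random.random() < 0.5 for _ in range(n_rows))] += 1
    return bins

counts = galton(n_rows=10, n_balls=100_000)
for k in range(11):
    # empirical frequency of bin k vs. the exact binomial probability C(10, k) / 2^10
    print(k, counts[k] / 100_000, comb(10, k) / 2 ** 10)
```

Running this, the empirical frequencies track the binomial probabilities to within sampling noise, which is exactly the "estimate" the lecture alludes to.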
358
00:40:13,320 --> 00:40:17,130
Okay, so what is the quantum analogue of the Galton board?
359
00:40:17,370 --> 00:40:22,230
Well, we could replace the pegs by beam splitters.
360
00:40:22,800 --> 00:40:25,940
We could replace the balls by photons. Okay.
361
00:40:26,010 --> 00:40:30,780
And then, physicists have known even since the eighties
362
00:40:30,930 --> 00:40:36,450
that already, with just two photons and one beam splitter, you get some very interesting phenomena.
363
00:40:36,750 --> 00:40:40,860
Here's an effect called the Hong-Ou-Mandel dip.
364
00:40:40,860 --> 00:40:45,210
You shoot two identical photons at the exact same time.
365
00:40:45,390 --> 00:40:48,810
at the two arms of a 50/50 beam splitter.
366
00:40:49,080 --> 00:40:53,080
Okay. And the photons never directly interact with each other.
367
00:40:53,110 --> 00:40:59,280
Right. I mean, in the sense that there's no force acting between them, certainly not at low energies.
368
00:40:59,580 --> 00:41:11,760
Okay. And nevertheless, what you find, if you do this experiment perfectly, is that 50% of the time the two photons both end up in this arm.
369
00:41:11,970 --> 00:41:14,990
50% of the time, they both end up in that arm. Okay.
370
00:41:15,060 --> 00:41:18,330
And they never end up in separate arms. Okay.
371
00:41:19,020 --> 00:41:23,880
So somehow they become perfectly correlated, even though they have never interacted.
372
00:41:24,210 --> 00:41:27,390
Okay. So what on earth is going on?
373
00:41:27,390 --> 00:41:32,700
So, you know, just like with everything else in quantum mechanics, there is an answer.
374
00:41:33,000 --> 00:41:36,629
It's not just mysterious and confusing; there is an answer.
375
00:41:36,630 --> 00:41:39,960
And the answer involves the amplitudes having minus signs.
376
00:41:40,170 --> 00:41:47,850
Okay. So basically what happens is that when you write down the quantum state of the system,
377
00:41:48,090 --> 00:41:54,750
and you write the amplitude for the two photons to go down separate arms,
378
00:41:54,990 --> 00:42:01,260
There are two different contributions to that amplitude, one that comes from the photons going like this,
379
00:42:01,830 --> 00:42:06,240
and the other that comes from the photons going like that and crossing each other.
380
00:42:07,300 --> 00:42:10,420
And one of the contributions is positive and the other one is negative.
381
00:42:10,750 --> 00:42:17,950
So they cancel each other out, and you never see the photons going down separate arms, while the amplitudes where they go down the same arm reinforce.
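This two-photon cancellation can be checked numerically. The sketch below (my own, not from the talk) uses the permanent formula that the lecture introduces a little later: the amplitude for an output occupation pattern is the permanent of a small matrix built from the beam splitter's unitary. The separate-arms amplitude comes out to zero:

```python
import itertools
import math

def permanent(a):
    """Brute-force permanent: sum over all permutations of products of entries."""
    n = len(a)
    return sum(math.prod(a[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]  # 50/50 beam splitter unitary

def amplitude(t):
    """Amplitude for output occupations t = (t0, t1), with one photon in each
    input mode: the permanent of the matrix that repeats column j t_j times,
    divided by sqrt(t0! * t1!)."""
    cols = [j for j in range(2) for _ in range(t[j])]
    m = [[U[i][j] for j in cols] for i in range(2)]
    return permanent(m) / math.sqrt(math.prod(math.factorial(x) for x in t))

print(abs(amplitude((1, 1))) ** 2)  # separate arms: the two terms cancel, giving 0
print(abs(amplitude((2, 0))) ** 2)  # both photons in arm 0: 1/2, up to rounding
print(abs(amplitude((0, 2))) ** 2)  # both photons in arm 1: 1/2, up to rounding
```

The (1, 1) amplitude is s * (-s) + s * s = 0: the "crossing" and "non-crossing" contributions cancel exactly, which is the Hong-Ou-Mandel dip.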
382
00:42:18,310 --> 00:42:26,800
Okay. So the basic idea is that if you have n photons, then to find the amplitude for them to end up in some given configuration,
383
00:42:27,100 --> 00:42:29,350
you have to sum over all of the
384
00:42:29,350 --> 00:42:37,989
n-factorial possible ways that these photons could be rearranged, since they're identical particles, bosons namely.
385
00:42:37,990 --> 00:42:42,100
And swapping two of them does nothing to the state of the world.
386
00:42:42,430 --> 00:42:50,440
Okay. So just to find the amplitude for one configuration, you have to add up n-factorial complex numbers.
387
00:42:50,710 --> 00:42:57,220
And if these are pointing in different directions in the complex plane, then those contributions could cancel each other out.
388
00:42:57,610 --> 00:42:58,000
Okay.
389
00:42:58,270 --> 00:43:08,530
So the formal setup is that you have a network of beam splitters, let's say with n input modes and some larger number m of output modes.
390
00:43:09,130 --> 00:43:13,000
So n identical photons enter, one per input mode.
391
00:43:13,780 --> 00:43:17,560
We assume for simplicity that they all leave in different modes.
392
00:43:17,710 --> 00:43:23,770
They don't have to, but if the number of modes is large enough, then with high probability they will,
393
00:43:23,770 --> 00:43:28,960
in which case there are m-choose-n possible configurations of the output.
394
00:43:29,290 --> 00:43:33,670
Now to find the probability of one of those output configurations.
395
00:43:34,060 --> 00:43:44,890
What you need to do is look at the network of beam splitters, and at the section of the unitary matrix that it defines.
396
00:43:44,900 --> 00:43:50,860
So it would be an m-by-n matrix whose columns are all orthogonal unit vectors,
397
00:43:51,190 --> 00:43:57,639
and then the probability of a particular outcome is just going to be given by the squared absolute
398
00:43:57,640 --> 00:44:04,480
value of the permanent of the appropriate n-by-n submatrix of that m-by-n matrix.
399
00:44:04,780 --> 00:44:08,560
Okay. So now what is the permanent of a matrix?
400
00:44:08,800 --> 00:44:14,470
Well, if you know what the determinant is, the permanent is the same thing but without the minus signs.
401
00:44:14,770 --> 00:44:21,940
Okay, so the permanent is an extremely important function in theoretical computer science and combinatorics,
402
00:44:22,270 --> 00:44:31,990
which so happens to also play an important role in physics, namely in, you know, giving you the amplitudes for identical bosons.
403
00:44:32,320 --> 00:44:41,799
Okay. And so the permanent of an n-by-n matrix is just the sum, over all n-factorial permutations, of
404
00:44:41,800 --> 00:44:45,010
you know, the product of the entries along that permutation.
405
00:44:45,430 --> 00:44:48,780
Okay. And so:
406
00:44:48,910 --> 00:44:58,480
So the amplitudes for identical bosons are permanents, and by the way, the amplitudes for identical fermions are determinants.
407
00:44:58,760 --> 00:45:08,860
Okay, so these two functions that play such a wonderful role in computer science also show up in nature.
408
00:45:09,190 --> 00:45:14,560
And so then the probability is just the absolute square of the amplitude.
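To make the permanent-versus-determinant contrast concrete, here is a minimal Python sketch of both defining formulas side by side (brute force, so only for tiny matrices; this is my illustration, not material from the talk):

```python
import itertools
import math

def permanent(a):
    """Sum over all n! permutations of the product of entries: the
    determinant formula with every sign set to +1."""
    n = len(a)
    return sum(math.prod(a[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def determinant(a):
    """The same sum, but each term weighted by the sign of its permutation
    (computed here from the permutation's inversion count)."""
    n = len(a)
    total = 0
    for p in itertools.permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sign * math.prod(a[i][p[i]] for i in range(n))
    return total

a = [[1, 2], [3, 4]]
print(permanent(a))    # 1*4 + 2*3 = 10
print(determinant(a))  # 1*4 - 2*3 = -2
```

Dropping the minus signs looks like a trivial change, but it turns a polynomial-time function (the determinant) into a sharp-P-complete one, which is the point of the lecture's next remark.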
409
00:45:14,920 --> 00:45:23,049
Okay. And so this was noticed in the 1990s, I think by Troyansky and Tishby.
410
00:45:23,050 --> 00:45:31,630
People realised that this is an amazing connection, because the permanent is very well known to computer scientists as a hard function.
411
00:45:32,170 --> 00:45:39,999
In fact, Valiant proved in the 1970s that the permanent is complete for that class sharp-P that I mentioned earlier,
412
00:45:40,000 --> 00:45:46,270
the class of all counting problems, which is even above the NP-complete problems.
413
00:45:46,570 --> 00:45:57,130
Okay. So this immediately raises a question: if photon amplitudes are given by permanents of matrices,
414
00:45:57,370 --> 00:46:04,210
then could you just use a system of identical photons to calculate the permanent of any matrix of your choice,
415
00:46:04,600 --> 00:46:11,650
and thereby solve NP-complete problems, and even more than that, in polynomial time?
416
00:46:12,010 --> 00:46:15,090
Okay, so this seems, you know, too good to be true, right?
417
00:46:15,100 --> 00:46:18,670
You ought to be suspicious of any such thing.
418
00:46:18,910 --> 00:46:22,510
Okay? And in this case, you know, the answer turns out to be no.
419
00:46:22,630 --> 00:46:27,940
Okay? This does not let you calculate the permanent of a matrix of your choice.
420
00:46:27,940 --> 00:46:36,010
The reason why not is interesting. It's because, once again, amplitudes in quantum mechanics are not directly observable.
421
00:46:36,340 --> 00:46:44,020
At some point you've got to measure this. And when you measure it, you're just going to see a single
422
00:46:44,290 --> 00:46:52,240
a configuration of the photons with probability given by the absolute square of the permanent of an associated matrix.
423
00:46:52,520 --> 00:46:55,270
Right. So you don't directly observe permanents. Okay.
424
00:46:55,270 --> 00:47:04,420
Now, it's true that if you repeated this experiment enough times, you could use this to estimate the squared permanent of a matrix of your choice.
425
00:47:04,720 --> 00:47:10,160
But in general, to embed your matrix as a submatrix of a unitary matrix,
426
00:47:10,400 --> 00:47:18,170
you need to scale it so that its permanent is exponentially small, which means that if you wanted a decent estimate for the permanent,
427
00:47:18,440 --> 00:47:22,370
then you would need to repeat this experiment an exponential number of times,
428
00:47:22,730 --> 00:47:27,680
which means that in the end you're getting no advantage over what you'd get if you just used,
429
00:47:27,680 --> 00:47:31,250
a classical computer with brute force.
430
00:47:32,540 --> 00:47:38,270
Okay. So then this raises the question: why is boson sampling interesting,
431
00:47:38,510 --> 00:47:41,270
If it doesn't let you calculate the permanent? Okay.
432
00:47:41,270 --> 00:47:51,649
So what Arkhipov and I did was look at what boson sampling does do, which is to sample a random matrix, if you like,
433
00:47:51,650 --> 00:47:59,570
in such a way that matrices with large permanents are somewhat more likely to be sampled than matrices with small permanents.
434
00:48:00,030 --> 00:48:03,770
Okay, now what is such an ability good for? Well, I have no idea.
435
00:48:04,130 --> 00:48:10,430
Okay, but we can give evidence that even that sampling task is already classically hard.
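For very small instances, this sampling task can be carried out by brute force on a classical computer, which at least makes the definition concrete. The sketch below is my own illustration, not anything from the talk: it draws a collision-free outcome with probability proportional to the squared permanent, and the Gram-Schmidt construction of Haar-random unitary columns is a standard trick.

```python
import itertools
import math
import random

def permanent(a):
    """Brute-force permanent over all permutations."""
    n = len(a)
    return sum(math.prod(a[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def random_unitary_columns(m, n):
    """First n columns of a Haar-random m x m unitary: Gram-Schmidt applied
    to complex Gaussian vectors."""
    cols = []
    for _ in range(n):
        v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(m)]
        for u in cols:
            d = sum(x.conjugate() * y for x, y in zip(u, v))
            v = [y - d * x for x, y in zip(u, v)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in v))
        cols.append([x / norm for x in v])
    return [[cols[j][i] for j in range(n)] for i in range(m)]  # m x n matrix

def boson_sample(A, n):
    """Draw n of the m output modes with probability proportional to the
    squared permanent of the corresponding n x n submatrix. Enumerates all
    m-choose-n outcomes, so only feasible for tiny n."""
    outcomes = list(itertools.combinations(range(len(A)), n))
    weights = [abs(permanent([A[i] for i in S])) ** 2 for S in outcomes]
    return random.choices(outcomes, weights=weights)[0]

A = random_unitary_columns(m=8, n=3)
print(boson_sample(A, 3))  # a tuple of 3 distinct output modes
```

The whole point of the hardness results discussed next is that this brute-force approach (and, conjecturally, every classical approach) scales exponentially, while the optical device produces samples directly.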
436
00:48:10,850 --> 00:48:19,370
Okay. So the first sort of theorem that we observed is that if there were, say,
437
00:48:19,370 --> 00:48:25,370
a polynomial time classical algorithm that could sample from exactly the same
438
00:48:25,370 --> 00:48:31,640
distribution as the system of bosons would sample from in the ideal case,
439
00:48:31,970 --> 00:48:36,950
then this would have a striking consequence for complexity theory.
440
00:48:37,460 --> 00:48:44,870
It would mean that P to the sharp-P would equal BPP to the NP, which,
441
00:48:45,080 --> 00:48:49,970
if you don't know what that means, you could take my word for it that it's bad.
442
00:48:51,060 --> 00:48:58,639
It would mean that two complexity classes that are both very powerful would be equal to each other.
443
00:48:58,640 --> 00:49:03,740
And that's a problem because one of them is supposed to be way more powerful even than the other one.
444
00:49:04,040 --> 00:49:14,720
Okay. So P to the sharp-P contains the entire polynomial hierarchy, whereas BPP to the NP is contained within the polynomial hierarchy.
445
00:49:14,750 --> 00:49:20,630
Right. If those two became equal, then the polynomial hierarchy would collapse to the third level.
446
00:49:20,960 --> 00:49:28,280
Okay. Which, you could say, is in some sense almost as bad as P equaling NP, but just at a higher level up.
447
00:49:28,790 --> 00:49:41,150
Okay. So the reason why this would happen is basically that if you had a fast classical sampling algorithm for boson sampling,
448
00:49:41,420 --> 00:49:47,360
then, you know, you could take a permanent, which is a sum of both positive and negative terms,
449
00:49:47,900 --> 00:49:53,750
and you could represent it as a sum only of non-negative terms.
450
00:49:54,140 --> 00:50:01,910
Okay. Because you could represent it as the probability that this classical algorithm will produce this output, let's say.
451
00:50:02,000 --> 00:50:10,850
Right. Which is then just counting how many different sequences of random bits
452
00:50:11,120 --> 00:50:18,860
this classical algorithm could be fed that would cause it to produce this particular outcome as its output.
453
00:50:19,160 --> 00:50:22,310
Now, that might still be an exponentially hard problem.
454
00:50:22,580 --> 00:50:30,410
But unlike directly simulating a quantum system, that is just a problem of summing a whole bunch of non-negative terms.
455
00:50:30,740 --> 00:50:37,310
Okay. And that is the sort of thing that we know how to do in a complexity class like BPP with an oracle for NP.
456
00:50:38,120 --> 00:50:44,190
That is, randomised polynomial time with an oracle for NP-complete problems, which, again, is
457
00:50:44,270 --> 00:50:51,030
unrealistically powerful, but less so than polynomial time with an oracle for counting problems.
458
00:50:51,050 --> 00:50:55,310
Okay. If these two are equal, then I think something way,
459
00:50:55,310 --> 00:51:04,820
way more perverse happens than there being a fast classical factoring algorithm, which is that the polynomial hierarchy collapses.
460
00:51:05,620 --> 00:51:15,140
Okay. Now, what we really want to do is say that even a classical algorithm that
461
00:51:15,190 --> 00:51:20,920
approximately simulated a bosonic experiment would have unlikely complexity consequences.
462
00:51:22,090 --> 00:51:30,610
That's a much more complicated problem. We conjecture that this would already imply a collapse of the polynomial hierarchy.
463
00:51:31,420 --> 00:51:36,760
But we're not able to prove that. The result that we can prove is that if there
464
00:51:36,760 --> 00:51:41,620
were an efficient classical algorithm to sample a probability distribution,
465
00:51:41,950 --> 00:51:43,960
which is even anywhere close,
466
00:51:44,260 --> 00:51:56,080
let's say in total variation distance, to the distribution involving permanents that this bosonic system is supposed to sample from,
467
00:51:56,380 --> 00:52:00,340
then that would have the following consequence for complexity theory.
468
00:52:00,760 --> 00:52:10,959
It would mean that there is an algorithm in this complexity class BPP with an NP oracle, which would estimate the permanent, or the permanent squared,
469
00:52:10,960 --> 00:52:19,150
let's say, of a matrix of independent Gaussian entries, with high probability over such matrices.
470
00:52:19,570 --> 00:52:25,959
Okay, now we suspect that that task is already sharp-P-complete, which, if so,
471
00:52:25,960 --> 00:52:32,590
would have the consequence that even a noisy simulation of our experiment, or a simulation
472
00:52:32,590 --> 00:52:38,380
of a physically realistic version of this experiment would already collapse the polynomial hierarchy.
473
00:52:38,540 --> 00:52:41,920
Okay. This remains an outstanding open problem, though.
474
00:52:41,970 --> 00:52:48,310
It's a problem about classical complexity theory, but one which is inspired by this optics experiment.
475
00:52:49,360 --> 00:52:53,700
Okay. So now an obvious difficulty with boson sampling is, you know,
476
00:52:53,740 --> 00:52:59,050
supposing that you do such an experiment, how does a classical computer even verify the result?
477
00:53:01,150 --> 00:53:04,870
So, the obvious way to do it, if you like,
478
00:53:04,870 --> 00:53:12,310
is just to calculate the permanents of all the submatrices corresponding to the outcomes that are observed,
479
00:53:12,640 --> 00:53:21,580
and then check whether these permanents are, as it were, anomalously large, consistent with your doing boson sampling.
480
00:53:21,580 --> 00:53:24,670
This works, this is just fine. But you might complain.
481
00:53:24,880 --> 00:53:27,670
Well, wait, doesn't that entail calculating permanents?
482
00:53:27,940 --> 00:53:32,680
And wasn't the whole point of everything you're doing that the permanent is exponentially hard?
483
00:53:33,190 --> 00:53:42,730
Doesn't that defeat the purpose? Well, maybe so, if you had a hundred photons or a thousand photons.
484
00:53:43,030 --> 00:53:48,340
But our claim is that there is a sweet spot for quantum supremacy experiments.
485
00:53:48,670 --> 00:53:55,090
The sweet spot would be, you know, when you're dealing with about 30 or 40 photons, okay,
486
00:53:55,090 --> 00:54:00,700
in which case we'd be dealing with the permanents of 30-by-30 or 40-by-40 matrices.
487
00:54:00,970 --> 00:54:07,990
Okay. Such permanents are difficult to calculate with a classical computer, but they're not impossible.
488
00:54:08,050 --> 00:54:17,500
Okay. With enough effort, you could calculate the permanents with a classical computer, in order to verify that the experiment was working as intended.
489
00:54:17,830 --> 00:54:22,510
But you could see that producing the result classically was taking you much more effort.
490
00:54:22,600 --> 00:54:28,959
It was a lot harder. And so this is really the sweet spot that we're hoping to aim for now.
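This "difficult but not impossible" regime rests on the fact that the fastest known classical algorithms for the permanent, such as Ryser's inclusion-exclusion formula, take time roughly 2^n rather than n!. A minimal Python sketch (my own, for illustration):

```python
import itertools
import math

def permanent_ryser(a):
    """Ryser's inclusion-exclusion formula: about 2^n * n^2 operations,
    versus n! * n for the defining sum. This gap is what makes ~30x30
    permanents feasible, with effort, on a classical computer."""
    n = len(a)
    total = 0
    for mask in range(1, 1 << n):  # every nonempty subset of the columns
        prod = math.prod(sum(a[i][j] for j in range(n) if mask >> j & 1)
                         for i in range(n))
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

def permanent_naive(a):
    """The defining sum over all n! permutations, for cross-checking."""
    n = len(a)
    return sum(math.prod(a[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(permanent_ryser(a), permanent_naive(a))  # both print 450
```

At n = 30, the 2^n term is about a billion, painful but doable for a classical verifier, while n! is astronomically out of reach, which is the asymmetry the sweet spot exploits.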
491
00:54:28,960 --> 00:54:32,830
What is the current experimental situation with boson sampling?
492
00:54:33,340 --> 00:54:47,110
Well, last summer, a year ago, the group, I guess down the road at Bristol, reported a boson sampling experiment with six photons,
493
00:54:47,380 --> 00:54:50,950
you know, beating three or four photons that had been done previously.
494
00:54:51,550 --> 00:54:57,340
And so, basically, it was an incredibly hard experiment.
495
00:54:57,340 --> 00:55:00,940
I think they only managed to see 15 events or so.
496
00:55:01,180 --> 00:55:06,190
Okay. But to within experimental limits,
497
00:55:06,190 --> 00:55:16,630
they confirmed that the amplitudes for processes involving six photons are indeed given by the permanents of six-by-six matrices of complex numbers,
498
00:55:16,840 --> 00:55:24,250
you know, as quantum mechanics said all along that they would be. Okay, but now we have direct confirmation of it.
499
00:55:24,430 --> 00:55:32,560
Okay. Now, scaling up to a larger number of photons, like 30 or 40 presents a very hard engineering problem.
500
00:55:32,800 --> 00:55:40,330
And the basic reason for that is that you need all the photons to arrive at the detectors at the same time.
501
00:55:40,600 --> 00:55:47,950
If you want to see, you know, this interference pattern, where all n factorial possibilities contribute to your amplitude.
502
00:55:48,340 --> 00:55:57,040
Okay? And right now, there just do not exist on earth photon sources that are good enough for this purpose.
503
00:55:57,370 --> 00:56:03,580
Okay? There are sources that sometimes emit a photon, but you can't control exactly when.
504
00:56:03,970 --> 00:56:11,470
And so then, you know, if you have n of these, you know, imagine how long it takes until they all happen to emit a photon at exactly the same time.
505
00:56:11,770 --> 00:56:16,360
Well, you know, the time you have to wait grows exponentially with n. Okay.
506
00:56:16,960 --> 00:56:19,770
So that's really the problem here. Okay.
507
00:56:19,930 --> 00:56:28,870
There is an amazing idea that could help with this, which on my blog I named scattershot BS: scattershot boson sampling.
508
00:56:28,870 --> 00:56:32,140
Again, it's a beautiful idea.
509
00:56:32,410 --> 00:56:37,900
It was proposed right here in Oxford by Ian Walmsley's group.
510
00:56:38,080 --> 00:56:42,940
Steve Kolthammer was the one who explained it to me.
511
00:56:43,270 --> 00:56:50,079
And, you know, the idea here is that you can have a whole bunch of photon sources which are not deterministic,
512
00:56:50,080 --> 00:56:53,170
which will, you know, emit a photon whenever they feel like it.
513
00:56:53,410 --> 00:57:00,880
But they're heralded. So each one, when it does emit a photon, it also emits a partner photon going in the opposite direction.
514
00:57:01,120 --> 00:57:05,380
That tells you that, yeah, a photon was flying out this way right at this time.
515
00:57:05,650 --> 00:57:11,290
And then, you know, let's say you have a thousand of these sources and let's say that within a given time window,
516
00:57:11,500 --> 00:57:19,150
only ten of them happen to generate a photon. Okay, well, then at least we know which ten they are, and whichever ten they are,
517
00:57:19,330 --> 00:57:24,490
we just define those after the fact to be our initial state for our boson sampling task.
518
00:57:24,790 --> 00:57:33,730
Okay. And since we didn't care anyway about exactly what distribution we were sampling, as long as it was some distribution that's classically hard,
519
00:57:33,940 --> 00:57:37,110
We don't really care which sources happened to be the initial ones. Right.
520
00:57:37,210 --> 00:57:40,810
We can let the experiment decide that for us. Okay.
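A toy Monte Carlo sketch of the heralding idea (the numbers, a thousand sources each firing with 1% probability per window, are made up purely for illustration): with ten fixed sources you would wait on the order of (1/0.01)^10 windows for all of them to fire at once, whereas with many heralded sources, some set of ten fires in almost every window, and the herald photons tell you which ten.

```python
import random

def usable_window(num_sources, p_emit, photons_needed):
    """One detection window: each heralded source fires independently
    with probability p_emit.  The herald photons reveal exactly which
    sources fired, so if the right number fired, whichever they are,
    we take them after the fact as the input modes for that run."""
    fired = [s for s in range(num_sources) if random.random() < p_emit]
    return fired if len(fired) == photons_needed else None

random.seed(1)
# Fixed sources: all 10 of 10 must fire together -> wait ~ (1/0.01)**10 windows.
# Scattershot: 1000 sources, any 10 will do -> a usable window every few tries.
windows = 0
while True:
    windows += 1
    modes = usable_window(1000, 0.01, 10)
    if modes is not None:
        break
# 'windows' comes out small, instead of astronomically large.
```

With 1000 sources at 1% each, roughly one window in eight yields exactly ten heralds, so the exponential wait disappears; that is the whole trick.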
521
00:57:40,810 --> 00:57:46,120
It's a beautiful idea. But, all right, given the difficulties in scaling this up, you know,
522
00:57:46,120 --> 00:57:53,829
it is worth considering whether boson sampling like ideas could be ported to other quantum computing architectures.
523
00:57:53,830 --> 00:58:00,190
you know, where maybe progress will come sooner than it will in optics.
524
00:58:00,430 --> 00:58:03,669
For this, I'll just take like 5 minutes or so.
525
00:58:03,670 --> 00:58:08,470
Okay. So I just want to tell you a little bit about what we've been working on recently.
526
00:58:09,430 --> 00:58:19,569
So, you know, in a few years, we are very likely to have 40 or 50 very high quality qubits with controllable couplings,
527
00:58:19,570 --> 00:58:28,180
you know, possibly in either or both of two architectures: superconducting qubits or trapped ions.
528
00:58:28,420 --> 00:58:32,950
Okay. So there's been amazing progress made along these lines.
529
00:58:33,490 --> 00:58:39,280
You know, the ones that are sort of most, I would say, laser focussed on scaling up right now are,
530
00:58:39,820 --> 00:58:48,910
you know, possibly the group of John Martinis at Google, formerly UC Santa Barbara.
531
00:58:49,090 --> 00:58:55,720
Okay. But there are other groups, you know, all over the world who are also working on these two approaches.
532
00:58:56,590 --> 00:59:02,380
And I should say, by the way, that, you know, you may have heard there's this company, D-Wave,
533
00:59:02,560 --> 00:59:08,140
it's called, which, you know, says that they have a thousand qubits.
534
00:59:08,740 --> 00:59:15,190
But we don't, you know, really know what to say about sort of the quality of those qubits or, you know,
535
00:59:15,340 --> 00:59:22,749
are they good enough to do something which is hard to simulate with a classical computer, you know, even in principle, you know?
536
00:59:22,750 --> 00:59:30,850
And so personally, I am sort of way more excited about 40 or 50 qubits that I understand,
537
00:59:31,120 --> 00:59:34,419
than about a thousand qubits that I don't understand. Okay.
538
00:59:34,420 --> 00:59:40,659
And Martinis, you know, is aiming to get, you know, within the next few years,
539
00:59:40,660 --> 00:59:47,650
sort of 40 or 50 qubits that seem like they would be good enough for a real quantum supremacy demonstration.
540
00:59:47,920 --> 00:59:55,950
And so that's sort of the exciting thing. But this is going to require thinking of new ideas beyond boson sampling, right?
541
00:59:56,200 --> 01:00:03,910
We now have both the burden and the opportunity to sort of adapt, you know,
542
01:00:04,180 --> 01:00:09,580
our complexity theory to the hardware that's going to become available within the next few years.
543
01:00:09,910 --> 01:00:16,870
Okay. So I think, you know, 40 or 50 qubits are still not going to be enough for quantum error correction, you know,
544
01:00:16,870 --> 01:00:25,090
or at least not to any substantial degree, nor to factor any interestingly large number or anything like that.
545
01:00:25,330 --> 01:00:30,190
But it may very well be enough for a convincing quantum supremacy demonstration.
546
01:00:30,520 --> 01:00:36,249
Okay. So I would say that our duty as theoretical computer scientists is to tell the
547
01:00:36,250 --> 01:00:41,139
experimenters what they ought to be doing with their existing hardware or with,
548
01:00:41,140 --> 01:00:48,400
you know, the stuff that they're planning to build within the next couple of years, and how to verify the results,
549
01:00:48,670 --> 01:00:54,820
and what can be said about the hardness of simulating the resulting system with a classical computer.
550
01:00:55,570 --> 01:01:06,670
Okay. So, you know, I have recent joint work with Lijie Chen from Tsinghua University, and we've been thinking about just,
551
01:01:06,880 --> 01:01:14,770
you know, what if you just took, you know, Martinis's 40 or 50 qubits, and all you did was apply a random quantum circuit.
552
01:01:14,890 --> 01:01:19,590
Okay? So you just did a bunch of random unitary transformations between, you know,
553
01:01:19,600 --> 01:01:26,110
neighbouring pairs of qubits, in, say, a two dimensional lattice where they're arranged on a chip.
554
01:01:26,820 --> 01:01:32,230
And then you measured the result. Okay, you're going to see a sample from some wild probability distribution.
555
01:01:32,560 --> 01:01:34,900
Okay? So, you know, not very interesting in itself,
556
01:01:35,200 --> 01:01:40,930
but what can we say about the hardness of sampling from that same distribution with a classical computer?
557
01:01:41,170 --> 01:01:45,220
Can we say anything analogous to what we said with boson sampling?
558
01:01:45,610 --> 01:01:49,989
Okay. And so, you know, we've to some extent been able to do that.
559
01:01:49,990 --> 01:01:59,230
Okay. And, you know, what we can actually do in this case is just forget about whether you are sampling from the right distribution or not.
560
01:01:59,530 --> 01:02:06,160
We can just say here is the statistical test that you would apply to the outcome of this experiment.
561
01:02:06,400 --> 01:02:09,910
Right? Like here's a test that you would do which involves, you know,
562
01:02:09,940 --> 01:02:14,950
calculating the probabilities for the observed outcomes, just like with boson sampling,
563
01:02:14,950 --> 01:02:22,660
you know, seeing if the experimental results are consistent with, you know, what the theory predicts.
564
01:02:23,920 --> 01:02:30,790
And now we will just directly reason about how hard it would be for a classical
565
01:02:30,790 --> 01:02:36,340
impostor to do anything whatsoever that passes that same statistical test?
566
01:02:36,730 --> 01:02:43,810
Okay. Regardless of whether or not they would actually sample from the same distribution or from anything close to it.
567
01:02:44,110 --> 01:02:47,229
Okay, so we'll have a purely operational thing.
568
01:02:47,230 --> 01:02:51,010
We can tell the experimentalists: okay, do this test.
569
01:02:51,310 --> 01:03:00,300
And if you see outcomes in your hardware that pass this test, then under a very plausible complexity hypothesis, you know,
570
01:03:00,660 --> 01:03:06,510
you are doing something that, you know, could not have been faked classically in any reasonable amount of time.
571
01:03:07,660 --> 01:03:13,570
Okay. So, you know, for the verification of the outputs, there are various ways you can do it.
572
01:03:14,190 --> 01:03:19,300
You know that the amplitudes should be distributed like Gaussian random variables.
573
01:03:19,630 --> 01:03:28,120
You can use that to your advantage; there are theorems about sort of mixing that you can exploit here, you know.
574
01:03:28,240 --> 01:03:32,800
So concretely, you could apply a very, very simple test.
575
01:03:33,070 --> 01:03:39,220
Like, just take all of the outcomes after you repeat this experiment, say, 50 or 100 times.
576
01:03:39,520 --> 01:03:46,390
So you see 50 or 100 outcomes. Importantly, you don't need to repeat the experiment an exponential number of times.
577
01:03:46,620 --> 01:03:53,080
All right. So you don't need to characterise the entire probability distribution, which would be way too expensive.
578
01:03:53,110 --> 01:03:58,810
Okay. You just need to repeat the experiment some small number of times and then for example, just check.
579
01:03:59,050 --> 01:04:08,080
Do at least 60% of the outcomes that you've sampled have a reasonably large probability according to this theoretical probability distribution?
580
01:04:08,410 --> 01:04:11,770
If they do, then you declare that the test has been passed.
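A toy version of such a test (my own illustrative sketch, not the exact statistic from the paper): call an outcome "heavy" if its ideal probability is above the median, and declare a pass if at least 60% of the device's samples are heavy. Sampling from the ideal distribution concentrates on heavy outcomes, while a uniform classical guesser hits them only about half the time.

```python
import random

def heavy_output_test(ideal_probs, samples, fraction=0.6):
    """Pass if at least `fraction` of the observed samples land on
    'heavy' outcomes: those whose ideal probability exceeds the median.
    A uniform sampler is heavy only ~50% of the time, while sampling
    from an exponentially-shaped ideal distribution is heavy ~85%."""
    median = sorted(ideal_probs)[len(ideal_probs) // 2]
    heavy = sum(1 for s in samples if ideal_probs[s] > median)
    return heavy / len(samples) >= fraction

# Toy ideal distribution: exponentially distributed probabilities over
# 2**10 outcomes, mimicking the output statistics of a random circuit.
random.seed(0)
weights = [random.expovariate(1.0) for _ in range(1 << 10)]
total = sum(weights)
probs = [w / total for w in weights]

honest = random.choices(range(1 << 10), weights=probs, k=1000)  # ideal sampler
spoofer = [random.randrange(1 << 10) for _ in range(1000)]      # uniform guesser
```

Here `honest` passes and `spoofer` fails, with only 1000 samples, which is the point: a small number of repetitions suffices, and no one ever has to characterise the whole distribution.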
581
01:04:12,130 --> 01:04:13,870
Okay. And then what we'll do is we'll say,
582
01:04:14,170 --> 01:04:23,530
how hard would it be for a classical sceptic to generate any samples x_1 up to x_T that would pass this same test?
583
01:04:23,950 --> 01:04:28,839
Okay. And we make a sort of admittedly strong hardness assumption, okay.
584
01:04:28,840 --> 01:04:32,380
But one that just talks about, say, an ordinary decision problem.
585
01:04:32,680 --> 01:04:40,300
Basically what it says is that there should not be any polynomial time classical algorithm that given a random quantum circuit,
586
01:04:40,600 --> 01:04:47,770
just sort of guesses whether the probability of some particular outcome is greater than or less than some threshold,
587
01:04:48,070 --> 01:04:56,470
with a probability which is even like two to the minus n better than random guessing, n being the number of qubits.
588
01:04:56,800 --> 01:05:00,070
Okay. That's admittedly a pretty strong assumption. Okay.
589
01:05:01,180 --> 01:05:07,569
You know, and in particular, there is a classical algorithm that will guess the answer with probability like a
590
01:05:07,570 --> 01:05:12,970
half plus one over exponential in m, the number of gates in the quantum circuit.
591
01:05:13,210 --> 01:05:19,820
Okay. But we'll choose a number of gates which is larger than the number of qubits, like, you know, quadratically larger, let's say.
592
01:05:19,840 --> 01:05:22,900
Right. And so then this is going to be much smaller than that.
593
01:05:23,290 --> 01:05:26,560
Okay. So, yeah, we don't know how to refute that conjecture.
594
01:05:26,830 --> 01:05:31,750
And what we can prove is that if you believe this conjecture, you know,
595
01:05:31,750 --> 01:05:37,450
if it holds, then it is hard for a classical impostor to do anything that would,
596
01:05:37,450 --> 01:05:42,460
you know, pass the same statistical test that the quantum device passes.
597
01:05:42,550 --> 01:05:47,080
Okay. So, you know, there's a relatively simple reduction to prove that.
598
01:05:47,740 --> 01:05:57,850
Okay. So we, you know, we thought about, you know, how would you simulate a quantum computation classically?
599
01:05:58,120 --> 01:06:00,990
Right. There are sort of, you know, different ways to do it.
600
01:06:01,000 --> 01:06:07,420
Like, the Schrödinger approach would say: just store the entire quantum state in memory,
601
01:06:07,690 --> 01:06:11,620
you know, and that uses two to the n time and also two to the n memory.
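In code, the Schrödinger approach is just a dense array of 2^n amplitudes that every gate updates; a minimal single-qubit-gate sketch (my own illustration):

```python
def apply_gate(state, U, q, n):
    """Apply a 2x2 unitary U to qubit q of an n-qubit state stored as a
    dense list of 2**n amplitudes: 2**n memory, O(2**n) time per gate."""
    new = state[:]
    step = 1 << q
    for i in range(1 << n):
        if not (i & step):        # pair up basis states differing in qubit q
            a0, a1 = state[i], state[i | step]
            new[i] = U[0][0] * a0 + U[0][1] * a1
            new[i | step] = U[1][0] * a0 + U[1][1] * a1
    return new

# Example: put 3 qubits into uniform superposition with Hadamards.
n = 3
H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -(2 ** -0.5)]]
state = [0j] * (1 << n)
state[0] = 1 + 0j                 # start in |000>
for q in range(n):
    state = apply_gate(state, H, q, n)
```

At 40 or 50 qubits that array alone holds 2^40 or 2^50 complex numbers, terabytes to petabytes of memory, which is exactly the wall this approach hits.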
602
01:06:12,520 --> 01:06:17,559
The Feynman approach, if you like, would just calculate each amplitude, one by one,
603
01:06:17,560 --> 01:06:23,830
as a sum of contributions. That uses vastly less memory, only a polynomial amount of memory,
604
01:06:24,070 --> 01:06:32,230
but it uses a much greater amount of time: exponential in the number of gates rather than just in the number of qubits.
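The Feynman approach, in the same toy setting (single-qubit gates only, a simplification of mine), computes one amplitude as a recursive sum over paths, with memory linear in the circuit but time exponential in the number of gates:

```python
def amplitude(gates, in_state, out_state):
    """Feynman-style path sum for <out| U_m ... U_1 |in>, where `gates`
    is a list of (U, q) pairs.  Memory is O(m + n), but the sum branches
    in two at every gate, so time grows like 2**m for m gates."""
    if not gates:
        return 1.0 if in_state == out_state else 0.0
    *rest, (U, q) = gates             # peel off the last gate
    out_bit = (out_state >> q) & 1
    total = 0j
    for b in (0, 1):                  # qubit q's value just before the last gate
        prev = (out_state & ~(1 << q)) | (b << q)
        total += U[out_bit][b] * amplitude(rest, in_state, prev)
    return total

H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -(2 ** -0.5)]]
circuit = [(H, 0), (H, 1), (H, 2)]
amp = amplitude(circuit, 0b000, 0b101)   # one amplitude, no big state array
```

No 2^n array ever appears, which is the memory saving; the price is that a circuit with m gates costs on the order of 2^m recursive calls.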
605
01:06:32,560 --> 01:06:37,330
Okay. And so you might ask, could you get the best of both worlds? We observed that you could.
606
01:06:37,810 --> 01:06:44,590
For those of you who know classical computer science, you can do it using an analogue of Savitch's theorem.
607
01:06:44,920 --> 01:06:47,630
Okay. So it's like, you know,
608
01:06:47,680 --> 01:06:57,249
you can do it using a recursive divide and conquer approach, where you can actually simulate a quantum circuit using a polynomial
609
01:06:57,250 --> 01:07:05,950
amount of memory and an amount of time that only grows like the number of gates to the power of the number of qubits.
610
01:07:06,370 --> 01:07:13,110
Okay. But, you know, this still does not falsify that hardness conjecture that I made.
611
01:07:13,120 --> 01:07:19,989
So that's the best thing we could figure out how to do. And, you know, I mean, it's a very interesting question:
612
01:07:19,990 --> 01:07:22,300
Is that optimal? Could you improve it further?
613
01:07:22,450 --> 01:07:28,060
But if this is the optimal thing, then, you know, this conjecture that I made ought to be true,
614
01:07:28,270 --> 01:07:34,330
in which case, you know, the random quantum circuit would indeed suffice for a quantum supremacy experiment.
615
01:07:35,220 --> 01:07:44,010
Okay. So, you know, there's yet another approach to quantum supremacy, which is called Fourier sampling. Because I'm running out of time,
616
01:07:44,250 --> 01:07:48,300
I think I'm going to skip over this. We have some new results about this as well.
617
01:07:48,630 --> 01:07:53,280
There's some beautiful work by Bremner, Jozsa and Shepherd about it.
618
01:07:53,580 --> 01:08:00,300
We have a sort of new way of talking about the computational hardness of Fourier sampling,
619
01:08:01,110 --> 01:08:06,390
which involves some new kind of structural complexity theory and involves pseudorandom functions.
620
01:08:06,750 --> 01:08:11,400
But again, because I'm out of time, I think I'm just going to go to the conclusions.
621
01:08:12,030 --> 01:08:17,790
So conclusions are, you know, in the very near future, we might be able to perform, say,
622
01:08:17,790 --> 01:08:24,540
random quantum circuit sampling, you know, perhaps also Fourier sampling, with 40 or 50 qubits.
623
01:08:24,970 --> 01:08:29,970
Okay. And, you know, I think a central question is, you know, once we can do this,
624
01:08:30,090 --> 01:08:35,250
how do we verify that something classically hard was achieved and that therefore,
625
01:08:35,370 --> 01:08:41,400
you know, the Extended Church-Turing Thesis was, you know, if not refuted, then at least strained.
626
01:08:41,820 --> 01:08:48,450
Okay. You know, and the problem is, you know, there is no direct physical signature of quantum supremacy.
627
01:08:48,480 --> 01:08:51,300
Right. Experimentalists keep asking me for that.
628
01:08:51,480 --> 01:08:58,560
But there's not one, because all that quantum supremacy means is that there is not an efficient classical algorithm to do the same thing.
629
01:08:58,890 --> 01:09:05,700
And if you want to talk about the non-existence of an algorithm, you are sort of forced to talk about complexity theory.
630
01:09:06,090 --> 01:09:13,200
And, you know, I think that as quantum computing theorists, we would be urgently called upon to think about this,
631
01:09:13,710 --> 01:09:17,310
even if there were nothing theoretically interesting to say about it,
632
01:09:17,490 --> 01:09:23,910
even if it were merely, you know, in the service of the experiments that are about to be done.
633
01:09:24,200 --> 01:09:29,700
Okay. But luckily for us, I think there are theoretically interesting things to say about it.
634
01:09:29,940 --> 01:09:36,120
So, you know, all the more reason to work on this. Some of the open problems here:
635
01:09:36,330 --> 01:09:40,680
You know, what happens with boson sampling when there is noise and error in your system,
636
01:09:40,680 --> 01:09:47,850
in particular when let's say a constant fraction, 10% of all your photons get lost and never get detected at all.
637
01:09:48,180 --> 01:09:54,510
In recent work with Daniel Brod, we've been able to handle the case where a constant number of photons are lost.
638
01:09:55,380 --> 01:10:02,310
If just a few photons are lost, then we can show that the experiment you're doing still retains the
639
01:10:02,310 --> 01:10:06,720
computational hardness of the original boson sampling where nothing is lost.
640
01:10:07,090 --> 01:10:14,520
Okay. But if a constant fraction of photons are lost, which is the more physically realistic thing, then we don't know what happens.
641
01:10:15,300 --> 01:10:22,680
Okay. So, you know, I would love to prove that conjecture
642
01:10:22,680 --> 01:10:33,239
I mentioned earlier, that, you know, even a fast classical algorithm for approximate boson sampling, or any kind of approximate quantum sampling,
643
01:10:33,240 --> 01:10:36,540
for that matter, would collapse the polynomial hierarchy.
644
01:10:36,780 --> 01:10:44,790
I've actually offered a $1,000 reward for that. And, you know, just for you,
645
01:10:44,790 --> 01:10:48,150
I will increase it right now to £1,000. Okay.
646
01:10:48,540 --> 01:11:00,540
But Lijie Chen and I very recently were able to prove that any implication like this would need to use what are called non-relativizing techniques.
647
01:11:00,810 --> 01:11:06,420
Okay. So it would need to use some interesting complexity theory. But, you know, this is no obstruction to its being true.
648
01:11:07,230 --> 01:11:07,650
Okay.
649
01:11:08,250 --> 01:11:17,460
And then, you know, could there be an efficient, like classical cryptographic scheme in order to verify that a boson sampler is doing the right thing?
650
01:11:17,670 --> 01:11:21,480
So, like, without having to calculate the permanent on your classical computer,
651
01:11:21,750 --> 01:11:28,740
could you sort of smuggle into the boson sampling matrix some sort of secret that then when you saw the output,
652
01:11:28,860 --> 01:11:33,330
you know, you would know that it was doing the right thing, even though you couldn't calculate the permanent.
653
01:11:34,140 --> 01:11:38,250
Bremner has given a proposal that's sort of like that for Fourier sampling.
654
01:11:38,400 --> 01:11:42,060
But, you know, we don't really know what to say about its security.
655
01:11:42,270 --> 01:11:45,450
And in the case of boson sampling, we don't even have a proposal.
656
01:11:45,720 --> 01:11:48,780
So these are all things that, you know, I hope to think about.
657
01:11:48,780 --> 01:11:51,660
I hope to get others to think about. And thanks for your attention.