A few things about the quantum computation video

Quantum computers do not "try all of the possible solutions of a problem at the same time and choose the best one". It's a common misconception but, you know, not what they actually do. See http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf for one popular treatment of the subject, or just make vague overtures in the general direction of Scott Aaronson because telling people to stop thinking that quantum computers are magical is what he seems to spend half of his time on anyway.

If you *do* want to find out what quantum computers actually can do... it turns out it's simpler than you might think! You do have to know a bit of linear algebra, but only the very basics: if you're comfortable thinking in terms of vectors and matrices, and can multiply matrices together, you've got enough mathematical background to dive right in. Umesh Vazirani teaches a course on it - offline at UC Berkeley, previously online on Coursera, and on EdX this spring, which is when I took it - and he's going to give it again in the fall, although I'm not sure exactly where, so be on the lookout. Vazirani is one of the founders of the field (and Scott Aaronson's doctoral advisor, it turns out), so you can trust that the guy actually really knows this stuff. All of that crazy notation with the weird unbalanced brackets (that's Dirac's bra-ket notation) will start making a lot more sense very soon.
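To make the "it's just vectors and matrices" claim concrete, here's a tiny sketch of my own (plain numpy, not anything from the course itself): a qubit state is a length-2 complex vector, a gate is a 2x2 unitary matrix, and applying the gate is just a matrix-vector multiplication.

```python
import numpy as np

# A qubit state is a length-2 complex vector; the |ket> notation just names basis vectors.
ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

# A gate is a unitary matrix; the Hadamard gate puts |0> into an equal superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                           # applying a gate = matrix-vector multiplication
print(psi)                               # [0.707..., 0.707...], i.e. (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(psi) ** 2)                  # [0.5, 0.5]
```

That, plus tensor products to stick several qubits together, really is most of the machinery.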

Also, just to clarify, the model taught in that course is the quantum circuit model, which is *not* what D-Wave Systems implements. D-Wave's computer can't do arbitrary computation, and there's skepticism over whether it will ever produce a speedup over classical methods on the one kind of problem it *does* solve (roughly, finding low-energy configurations of an Ising-style optimization problem). People used to be skeptical over whether it made any use of quantum effects at all, but I believe that's mostly been resolved in D-Wave's favor by now. I seem to recall statements from D-Wave to the effect that the quantum circuit model is one of the worst things that ever happened to the field of quantum computation (presumably meaning that they don't believe general-purpose quantum computers can be implemented anytime soon, and that their system is useful enough as it is), so there are probably going to be some fireworks there eventually.
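If you're curious what that kind of machine is even trying to do, here's a toy sketch of my own of the annealing idea - a closed-system, two-qubit textbook version with made-up costs, nothing like D-Wave's actual open-system hardware or its specific qubit graph: encode the cost of each bitstring as an energy, start in the easy ground state of a "driver" Hamiltonian, and interpolate slowly enough that you end up near the lowest-cost bitstring.

```python
import numpy as np
from scipy.linalg import expm

# Toy problem (made-up numbers): costs of the four 2-bit strings, indexed 0..3.
costs = np.array([3.0, 1.0, 2.0, 0.0])
H_problem = np.diag(costs)                    # costs become energies of basis states

# Driver Hamiltonian: -(X on qubit 1 + X on qubit 2).
# Its ground state is the uniform superposition over all bitstrings.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
H_driver = -(np.kron(X, I2) + np.kron(I2, X))

psi = np.ones(4, dtype=complex) / 2.0         # start in the driver's ground state

# Slowly interpolate H(s) = (1-s)*H_driver + s*H_problem and evolve under it.
T, steps = 100.0, 4000                        # longer T = more adiabatic
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps
    H = (1 - s) * H_driver + s * H_problem
    psi = expm(-1j * H * dt) @ psi

probs = np.abs(psi) ** 2
print("probabilities over bitstrings 0..3:", np.round(probs, 3))
print("most likely bitstring:", int(np.argmax(probs)))   # index 3, the lowest-cost one
```

Run it and nearly all of the probability lands on the cheapest bitstring; run it with a much smaller T and it doesn't - and how slowly you have to go as the problems get bigger is, roughly, where the whole speedup question lives.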

I'm not entirely certain of how quantum information works, but as I understand it qubits don't actually let you get any more classical information out than classical bits do - that's essentially Holevo's theorem: n qubits can't be used to convey more than n classical bits. Yes, a classical simulation of n qubits takes a whole lot of classical bits (2^n complex amplitudes) - but you can't get classical information out of qubits without measuring them, and measurement collapses them into whatever state you measured them as being in, so all of that other "detail" is lost.
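Here's that point as a quick toy calculation (my own numbers): simulating n qubits classically means tracking 2^n amplitudes, but a measurement hands you back just n classical bits and throws the rest away.

```python
import numpy as np

n = 20                                    # number of simulated qubits
dim = 2 ** n                              # the state vector needs 2^n complex amplitudes
rng = np.random.default_rng(0)

# Some arbitrary normalized state - this is the expensive part: ~a million numbers for 20 qubits.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
print("amplitudes stored:", dim)          # 1048576

# Measuring all n qubits yields a single n-bit outcome, sampled with probability |amplitude|^2...
outcome = int(rng.choice(dim, p=np.abs(psi) ** 2))
print("measured bit string:", format(outcome, f"0{n}b"))

# ...and the post-measurement state is just that one basis state; everything else is gone.
psi_after = np.zeros(dim, dtype=complex)
psi_after[outcome] = 1.0
```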

Also!!! The section on quantum networks is pretty inaccurate as well. Quantum cryptography - more precisely, quantum key distribution - has nothing to do with weak measurement, and it absolutely has nothing to do with the computer being "allowed" to see quantum states in a way that humans can't. At least quantum cryptography is reasonably easy to describe with a metaphor... though it'll be an incredibly lazy metaphor, because IMO if you're actually interested you should just learn the math anyway:

Alice and Bob want to decide on a common string of random bits. It doesn't actually matter what those bits are, only that both parties agree on them, and nobody else can see them - the point is that they'll be used as a shared key for doing classical cryptography with.

So, Alice makes up some bits and sends them to Bob in little trick boxes. The way a trick box works is that it has two handles, and you can open it by pulling either the one at the top or the one at the side. Each box can be configured by Alice to be either a top-box or a side-box - she makes a random choice for each one. If you open a top-box from the top or a side-box from the side, you get the bit that was sent; if you open a top-box from the side or a side-box from the top, you get a fresh random bit. Oh, and when you open it, the universe conspires to irreversibly make the bit inside be the one you saw, so once you've opened a box, it's useless to guess again. And there's no way to tell, just by looking at a box, whether it was made as a top-box or a side-box.

Bob receives those little trick boxes and, for each one, randomly decides to open it either from the top or from the side. Half of the time he gets it right and the bit he sees is the one Alice sent; the other half of the time it's a random bit. Once he's opened all of the boxes, he finds out which were which simply by conferring with Alice over a public channel: he says which way he opened each box, and Alice says which kind of box she sent. For each box, if they match, Bob keeps the bit; if not, he throws it away.

What makes this work so well for secrecy is that if there's an eavesdropper Eve between Alice and Bob trying to work out what the bits are, she will be found out. She can open a box and see what's inside, and she can make new boxes and send them on to Bob - but she doesn't know whether Alice's boxes were top-boxes or side-boxes, so whenever she guesses wrong she's essentially sending Bob random bits. Which means that if, at the end, Alice publicly reveals some of the bits she sent among those Bob decided to keep, and too many of them fail to match what Bob saw, they know something went wrong: those were all boxes Bob opened the right way, so the bits should have matched, and the mismatch implies somebody opened them the wrong way in between - i.e. there was an Eve listening in. If the bits Bob received do match the ones Alice published, then (with overwhelming probability) there was no Eve, and they can use the rest of the bits, trusting that there was no physical way they could have been intercepted in transit.
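If the boxes metaphor works better as code for you, here's a rough little simulation of the whole exchange - this is the BB84 protocol, with "top vs. side" standing in for the choice of photon polarization basis; the box counts and the intercept-and-resend Eve are just my own toy setup.

```python
import random

def run_protocol(n_boxes=2000, eve_listening=False, seed=1):
    rng = random.Random(seed)

    # Alice: random bits, each sent inside a randomly chosen top-box ('T') or side-box ('S').
    alice_bits  = [rng.randint(0, 1) for _ in range(n_boxes)]
    alice_types = [rng.choice("TS") for _ in range(n_boxes)]
    boxes = list(zip(alice_types, alice_bits))       # what actually travels to Bob

    if eve_listening:
        # Eve opens each box with a random handle, then re-sends a fresh box of the type
        # she guessed, containing whatever bit she saw. Wrong guesses randomize the bit.
        resent = []
        for box_type, bit in boxes:
            guess = rng.choice("TS")
            seen = bit if guess == box_type else rng.randint(0, 1)
            resent.append((guess, seen))
        boxes = resent

    # Bob opens each box from a randomly chosen direction.
    bob_handles = [rng.choice("TS") for _ in range(n_boxes)]
    bob_bits = [bit if handle == box_type else rng.randint(0, 1)
                for (box_type, bit), handle in zip(boxes, bob_handles)]

    # Public discussion: keep only positions where Bob's handle matches Alice's box type.
    kept = [i for i in range(n_boxes) if bob_handles[i] == alice_types[i]]

    # Spot check: publicly compare half of the kept bits to estimate the error rate.
    sample = kept[: len(kept) // 2]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sample)
    return errors / len(sample)

print("error rate without Eve:", run_protocol(eve_listening=False))   # ~0.0
print("error rate with Eve:   ", run_protocol(eve_listening=True))    # ~0.25
```

With an intercepting Eve, roughly a quarter of the kept bits come out wrong, which is exactly the kind of discrepancy the spot check is there to catch; without her, they all match.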

So, yeah. Quantum stuff. Honestly, you kiiind of got most of it wrong, but hey, you've only got so much time to spend per episode. Still, it would be good to see some more research before you do the next quantum-related video.

Comments

  • WilsonRosa Posts: 1
    Sorry, but which movie are you talking about and where is it so I can watch it? Thanks.
  • Relevant reddit comment (by me):

    Universal gate machines do stuff that is immediately recognizable to computer scientists. The actual computations being carried out are based on correlations between bits that can't be realized in a classical computer, but classical programmers can still make use of them by thinking of them as oracles that quickly solve problems that should scale exponentially (you can use stuff like the quantum phase estimation algorithm to cut through Gordian knots of hardness in an otherwise classical algorithm).

    The trouble with this approach is that it completely ignores most of physics (all the quantum stuff, and probably a bunch of the analog stuff), in a manner analogous (or, frankly, equivalent) to the way computer science ignores most of mathematics (all the non-computable parts). Adiabatic quantum optimization, because it's inherently probabilistic, isn't much help with stuff like Shor's algorithm (although it can probably help solve the same problem) but that's not what the D-Wave was designed to do. It's meant to tackle hard-type problems like verification and validation "in an analog fashion" over long timescales. For example:

    Verification and validation problems for models of things like jets and missiles are classically inefficient. Changing the tire thickness on the landing gear can alter weight distribution, aerodynamics, etc. All the possible interactions with other systems have to be accounted for in a computer model. That sort of graph can get very complicated very quickly, but isn't nearly as scary when you can make use of correlations between non-adjacent qubits.

    It's also worth noting that V&V is typically >25% of the R&D cost of projects like jets and missiles.

    The D-Wave can get you quantum speedup for a range of tasks that humans are good at, but that classical computers (the digital ones, at least) are bad at. I have my own suspicions about the physical reasons for this, but suffice it to say that most of our cognition boils down to running a single algorithm that doesn't scale well on any of the hardware we've tried so far. Historically, we solved problems that required this algorithm (and, pre-digital revolution, problems requiring any kind of algorithm) by coming up with a cultural role and sticking a person in it (painter, blacksmith, photographer, architect, hunter, gatherer, etc.). When cheap digital microprocessors became ubiquitous they didn't fulfill the core computational requirements that had necessitated the creation of these roles, but they did speed up the rate at which old roles were replaced by new ones. This is because much of the instruction and training that defined previous roles involved getting people to do stuff that computers are naturally good at (hippies call this "left brained nincompoopery") and as computers got good at making computers gooder (Moore's law and such) cultural roles were more frequently changed to continue making efficient use of the capacities of the new machines.

    This would be fine, except someone along the way (probably a compsci major) decided that every practical problem of human importance must be solvable with a Turing machine, and we merely have yet to find all the proper algorithms for doing so (i.e. either P=NP or nothing in NP is practical). This is an absurd and silly belief (biology and physics are rife with examples of classically impracticable stuff with real-world applicability) but it's also a widespread belief, so most people assume digital systems will be the only places where quantum speedup is useful. People don't generally think of image recognition when they hear of quantum computers, and when they do it's always in terms of the most common types of classical algorithms that already perform the same task (as opposed to an annealing approach, quantum or otherwise).

    This lecture Q&A (3/5/13) has a short summary of some of the more recent evidence of entanglement in a D-Wave chip.

    Edit: Punctuation

    Edit 2: /r/dwave has more info on AQC

    Edit 3: Added link to Penrose's lecture at Google, Dr. Lidar's lecture at USC, and Geordie's lecture at Caltech
