I'm unsatisfied with most general interest articles about quantum computation. Even those geared towards physicists and other scientists, like articles in Physics Today or IEEE Spectrum, often don't explain how the thought process behind using the thing would be different from using a classical computer. And I think that is probably what many readers are most curious about: If you had a quantum computer, what would change about the way you write code and think about things like data structures?
The purpose of this post is to give you a slightly more detailed sense for how quantum programming is different. I won't use any math, but I'll expound on areas that are often ignored in introductions to the topic.
First, why do we care about quantum computing? The motivation is captured by a single plot: how the cost of a computation grows as the problem gets bigger, for a classical algorithm versus its quantum counterpart.
For certain problems, the quantum algorithm "scales" better with respect to the problem size, which could be, for example, the size of your data set or the number of atoms in a molecular calculation. For each increment in size, the extra work required from the quantum computer is less than the extra work required from the classical computer.
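To make "scales better" concrete: for unstructured search, a classical computer needs on the order of N lookups in the worst case, while Grover's quantum algorithm needs on the order of √N queries. Here's a tiny Python sketch, purely illustrative, of how quickly those two curves diverge:

```python
import math

# Illustrative scaling comparison for unstructured search:
# classical worst case ~N lookups vs. Grover's ~sqrt(N) queries.
for n_bits in (10, 20, 30, 40):
    N = 2 ** n_bits                # size of the search space
    classical = N                  # ~N lookups in the worst case
    quantum = math.isqrt(N)        # ~sqrt(N) Grover iterations
    print(f"{n_bits}-bit problem: ~{classical:.1e} classical vs ~{quantum:.1e} quantum")
```

The constants and non-query costs are swept under the rug here; the point is only the shape of the curves.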
This isn't achieved with a simple redesign of the hardware. It's not as though you put transistors together in a more clever way, or use an element other than silicon. It's more fundamental than that. A quantum computer is faster because it's based on some counter-intuitive properties of quantum mechanics. (The hardware itself is a broad topic that I won't cover here.)
Which brings us to the question: where does this computational power come from?
Superposition gives you an initial (but incomplete) taste of where this power comes from. In a classical computer, you probably know that everything is represented as a '0' or a '1'. A bit. But in a quantum computer you can instead have combinations of '0' and '1'. This quantum equivalent of a bit is called a "qubit," and these combinations are called "superpositions" of '0' and '1'. This is closely related to the famous double-slit experiment that you may have heard of, in which a single photon goes through two slits at the same time.
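If code speaks to you more than words, here's a minimal NumPy sketch of a single qubit. To be clear, this is a classical simulation written purely for intuition; a real quantum computer doesn't store these amplitude values anywhere explicitly.

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is a length-2 vector of
# complex "amplitudes" over the basis states |0> and |1>.
zero = np.array([1, 0], dtype=complex)   # the state |0>

# The Hadamard gate turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
superposition = H @ zero
print(superposition)               # [0.707..., 0.707...]: "both" 0 and 1

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(superposition) ** 2)  # [0.5, 0.5]
```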
The discussion of the difference between bits and qubits tends to end at this point. The problem is, I've noticed that people leave with the impression that this continuum on each individual qubit is the source of the computational power. ("It can be anything in between 0 and 1. Infinite possibilities. I think I get it!") They get the impression that quantum computers are faster because they work on a continuous space, while classical computers work on a discrete space. This is not the reason for the speedup. Either way, most quantum algorithms are implicitly discussed in terms of a discrete space; indeed, the Solovay-Kitaev theorem shows that a small, discrete set of gates is enough to efficiently approximate any quantum operation.
The reality is much more interesting. Here is what happens when you use multiple qubits that can interact with one another:
With two qubits, you can have an arbitrary weighted combination of four states: 00, 01, 10, and 11. (At this point we could begin discussing entanglement, but I won't get to that.) It's as though your "computer memory" of two qubits in fact lives in a 4-dimensional space. With three qubits, you're in an 8-dimensional space. The formula is 2^N. It's exponential (not in the colloquial sense but in the literal mathematical sense), so the size of this "memory space" grows absurdly quickly. At just 300 qubits, the size of your "memory space" is larger than the number of atoms in the observable universe. Keep in mind that your phone has at least billions of transistors in it, a far smaller number than 2^300. If you used a double-precision floating-point number to store each coefficient, then somewhere past 50 qubits you couldn't even store the state in a modern supercomputer's memory.
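You can watch the blowup happen in a few lines of NumPy. The byte counts below assume one double-precision complex number (16 bytes) per coefficient, per the paragraph above:

```python
import numpy as np

# Each added qubit doubles the length of the state vector
# (a Kronecker product with the new qubit's 2-vector).
state = np.array([1, 0], dtype=complex)          # one qubit in |0>
for n in range(2, 6):
    state = np.kron(state, [1, 0])               # add a qubit in |0>
    print(f"{n} qubits -> {state.size} amplitudes")

# Extrapolating (without allocating!) to bigger machines:
for n in (30, 50, 300):
    print(f"{n} qubits -> 2^{n} = {2**n:.2e} amplitudes, "
          f"~{16 * 2**n:.2e} bytes to store classically")
```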
Now that I've covered this exponential space, I want to give people a sense of how thinking like a programmer is different with a quantum computer. This is the main purpose of the post.
There are four constraints that you will have to keep in mind while programming a quantum computer.
The first is that data decays in a short period of time on real quantum computers.
This is the only one of my four rules that isn't inevitable. One day, we hope that quantum hardware will become so robust that we can run perfect calculations no matter how many operations are required. But for now, all the quantum hardware that is likely to exist in the next ten years will have this data-decay problem.
This would be like wanting to run a calculation on your laptop, but being able to do only (say) a thousand primitive math operations before all the information went away. You would still be able to do some useful things, but you'd have to be clever with your time budget.
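A quick back-of-the-envelope calculation shows why the budget is so tight. The per-operation error rate below is a made-up round number, not a spec for any real device:

```python
# If each operation succeeds with probability (1 - error_rate), the chance
# that an entire run survives decays exponentially with the number of
# operations. The 0.1% error rate is illustrative only.
error_rate = 1e-3
for n_ops in (100, 1_000, 5_000):
    survival = (1 - error_rate) ** n_ops
    print(f"{n_ops:>5} ops -> ~{survival:.1%} chance of an error-free run")
```

With these numbers, a hundred operations almost always succeed, but by a few thousand operations the run is nearly guaranteed to be corrupted.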
Now onto the first entirely unavoidable constraint: any operation can affect your entire memory space.
Even if you told your quantum computer to do an operation on only two qubits, every single value in your "memory space" may be affected. This is in fact "a feature, not a bug," as the cliché goes. It's part of the reason you can solve problems faster. But it means that we usually don't talk about operating on just one or two of the values in our space.
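Here's what that looks like in a NumPy sketch. I use a single-qubit rotation for brevity (the point holds just as well for two-qubit gates): the gate "touches" only one qubit, yet every amplitude in the 3-qubit state vector changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized 3-qubit state: 2^3 = 8 complex amplitudes.
state = rng.normal(size=8) + 1j * rng.normal(size=8)
state /= np.linalg.norm(state)

# A rotation on just the first qubit. In the full 8-dimensional space
# this operation is (R tensor I tensor I).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
full_op = np.kron(R, np.eye(4))

new_state = full_op @ state
# Every one of the 8 amplitudes has changed.
print(np.isclose(state, new_state))   # all False for a generic state
```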
The third point I want to make is that, even though you have an exponential amount of "data," you can extract only a minuscule portion of it. The number of bits you can extract is equal to the number of qubits that you have.
Imagine running a calculation on your laptop, but then being able to read only a few bits of the result once the computation was complete. Extracting those few bits would then destroy all of the remaining data.
This isn't as horrible as it seems. For many calculations, you might be interested in only one number to begin with. The lift force exerted on a wing design, for example, or the conductivity of a new metal alloy. In the case of a quantum computer, this constraint leads to two types of algorithms: you can design an algorithm from which you need only a few bits anyway, or you can run the calculation many times in order to collect statistics on different cross-sections of the data.
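In code, the "collect statistics" strategy looks something like the sketch below: a single run of a 3-qubit computation yields just 3 bits, sampled according to the squared amplitudes, so you repeat the whole run many times ("shots") to build up a histogram. The random state and the shot count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this is the final state of some 3-qubit computation.
state = rng.normal(size=8) + 1j * rng.normal(size=8)
state /= np.linalg.norm(state)

# One measurement samples a 3-bit outcome with probability |amplitude|^2,
# and the rest of the state is destroyed. So: re-run and re-measure.
probs = np.abs(state) ** 2
shots = rng.choice(8, size=1000, p=probs)      # 1000 independent runs

for outcome, count in zip(*np.unique(shots, return_counts=True)):
    print(f"{outcome:03b}: observed {count} times out of 1000")
```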
The last constraint is that you can't make a copy of your memory.
This sounds like a weird one, but the theorem that proves it, known as the no-cloning theorem, is very simple. You can move data around, but you can't make a copy such that you're left with two identical sets of data. If you do want a second copy of the data, that's fine, but then you have to run the whole calculation a second time on a second memory space.
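I promised no math, but for the curious, the standard argument fits in a few lines (skip freely). It assumes only that quantum operations are unitary, i.e., that they preserve inner products:

```latex
% Suppose one fixed operation U could copy *any* state:
\[
  U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\psi\rangle \otimes \lvert\psi\rangle ,
  \qquad
  U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\phi\rangle \otimes \lvert\phi\rangle .
\]
% Unitaries preserve inner products, so pairing the two left-hand sides
% must give the same number as pairing the two right-hand sides:
\[
  \langle\phi\,\vert\,\psi\rangle
    = \langle\phi\,\vert\,\psi\rangle^{2}
  \quad\Longrightarrow\quad
  \langle\phi\,\vert\,\psi\rangle \in \{0,\,1\} .
\]
% Any two clonable states must therefore be identical or orthogonal,
% so no machine can clone an arbitrary, unknown state.
```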
This relates to constraint 3. Constraint 4 confirms that you really do need to re-run your whole algorithm for every small set of bits that you extract; you can't stash away a copy of the pre-measurement state and measure it again. You have to go through all the work again before you extract your next few bits.
I should also point out that you cannot just take any old classical algorithm and get exponential speedup on a quantum computer. Many problems cannot be sped up on a quantum computer (as far as we know). And problems that can be sped up require taking these constraints into account, in ways that weren't obvious until modern quantum algorithms were invented.
A comment on hardware. Classical computation is almost universally performed on silicon-based transistors, but it doesn't have to be. People have studied and built mechanical and fluidic computers, for example, which operate using the same mathematical framework as your semiconductor-based CPU. Likewise, there are a variety of quantum hardware types: ion traps, quantum dots, and superconducting qubits appear to be the most promising. Unlike in classical computing, though, no single quantum hardware type has yet declared victory.
The field is making enormous progress, and in the next few years we might have a quantum computer that can solve a useful problem (in materials science maybe, or data processing) that can't be solved by a modern supercomputer. You won't be able to design quantum algorithms using the same thought process you've used for traditional computing. But that's good news, since working around these four constraints makes algorithm design more fun.
Nice article Nico! Regarding your point on the exponential size of the space, isn't this also the case in classical computing? It seems to me that both systems have 2^N possible outcomes.

Hey Ian, congrats on your recent defense! There are 2^N possible bitstrings on a classical computer, but the dimensionality of the space is still just N: at any given moment, the state is defined by N values. In a quantum state vector, on the other hand, there are 2^N - 1 values needed to define a single state, meaning it lives in an O(2^N)-dimensional space.