
In Their Own Words: The Advantage of Uncertainty

Patrick Coles, Scientist


Advancing the art of quantum computing, and probing the edges of the quantum world

August 21, 2020

If you zoom in enough—a single water molecule, say, instead of a pond—the world loses its familiar focus. Old certainties, like the location and motion of an object, become uncertain. Distinct physical attributes, like clockwise or counterclockwise rotations, become blurry mixtures, a little of each. The equations of physics no longer tell you what will happen, but rather the probabilities for everything that could happen. And even after one of those possibilities materializes, there’s no way to explain why it happened, even with hindsight. Our usual notions of a deterministic reality are ambiguous at best, or at worst, downright meaningless. Up close, the “real” world gives way to the quantum world.

This quantum world is my world. It is where I live my professional life.

Don’t get me wrong: I’m not asking for your sympathy. I like it here. I know it sounds frustrating to live all the time without certainty, specificity, or predictability, especially if you’re trying to get work done—and I do technical, computational work. But there’s a silver lining. As in other areas of life, there’s something to be said in my work for nuance, open-mindedness, and relaxing my need for control. Even for a task as detailed and precise as mathematical optimization or simulation, it can actually be a tremendous advantage to work without certainty, specificity, or predictability. In fact, a big part of my professional work, quantum computing, expressly relies on uncertainty. And I’m proud to say that my colleagues and I at Los Alamos have made incredible progress in this field in the past couple of years.

Quantum software for quantum hardware

First things first: quantum computing is no longer just some future fantasy like starships and teleporters. Quantum computing exists. For very simple computations, it actually works right now. And quantum computers—these are physical machines, currently available—are rapidly improving. Within a few years, these early-generation quantum computers will begin outperforming the world’s most advanced nonquantum, or classical, computers when solving certain kinds of problems.

Now, as you may already know, a quantum computer bit, or qubit, is not necessarily a one or a zero; it can be some of each. In general, it is a shifting mixture, or superposition, of “oneness” and “zeroness.” Sometimes it’s more one than zero, sometimes more zero than one, and sometimes fifty-fifty. If two qubits interact with one another, they can be in a superposition together, and that’s what physicists call entanglement. This is like two light switches either both being on or both being off, but never one off and the other on. But it’s more intricate than that, because you have the freedom to put a plus sign or minus sign in the superposition, which gives rise to interference effects (much the way colliding ocean waves or sound waves interfere, with peaks and troughs amplifying or canceling each other out). 
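
If you like to see such things in the arithmetic, here is a minimal sketch of superposition, interference, and entanglement. It is my own illustration in plain NumPy, not code from any particular quantum-programming framework, with a qubit represented as nothing more than a two-component complex vector:

```python
import numpy as np

# A single qubit is a 2-component complex vector: amplitudes for |0> and |1>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Equal superpositions with a plus or a minus sign between the terms.
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

# The Hadamard gate maps |0> to |+> and back. Applying it to |+> makes the
# two pathways to |1> cancel out: interference returns the qubit to |0>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(np.round(H @ plus, 10))   # [1, 0]: the two pathways to |1> canceled
print(np.round(H @ minus, 10))  # [0, 1]: the minus sign reverses the effect

# Entanglement: a two-qubit Bell state, like the light-switch pair above.
# Squared amplitudes give measurement probabilities: 50% for |00>, 50% for
# |11>, and never |01> or |10>.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.abs(bell) ** 2)        # [0.5, 0, 0, 0.5]
```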

Taken together, superposition and entanglement lead to an enormous number of possible states for qubits (and hence a quantum computer) to be in. Imagine living in this incredibly huge space (the quantum world). To solve some important problem, like simulating the molecules in a coronavirus, you need to move from point A to point B in this huge space. With your quantum computer, which is completely free to move in this space, you simply take the straight-line trajectory from A to B. But a computer made of classical bits is like a prison: your movement is restricted to the hallways, so you can’t take the optimal path, and the calculation takes longer.

It is a common misconception that quantum computers explore many different classical pathways simultaneously. This is wrong. Ultimately, I believe that the quantum computer is the most difficult-to-explain technology ever invented, and I don’t want to give a false explanation for why it performs certain calculations quickly. Rather, I like to turn the question around and ask: why are classical computers so bad at certain tasks? It’s because using your laptop is like being confined to a prison within the vast quantum world that we live in.

But it’s not all about the hardware. Different kinds of algorithms are needed to take advantage of this very specialized hardware. For example, quantum computers offer a larger set of logical gates. Some of these mimic classical gates, such as AND, OR, or NOT, but others are exclusive to quantum computing and require new algorithms to use. You also need ways to optimize all the programs you try to run. For example, you don’t want to waste limited quantum-computing resources on subroutines that are better handled by a classical computer chip, so you need specialized algorithms to parse that out. My colleagues and I have developed several algorithms of this sort.
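
To make the gate comparison concrete, here is a small illustrative sketch (again just NumPy, and again my own example rather than anything from our codebase). The X gate mimics classical NOT and CNOT acts as a reversible XOR, while the phase gate S has no classical counterpart at all: it leaves measurement probabilities alone and only changes the sign structure that future interference depends on.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # quantum analogue of NOT
S = np.array([[1, 0], [0, 1j]], dtype=complex)  # phase gate: purely quantum
CNOT = np.eye(4, dtype=complex)[[0, 1, 3, 2]]   # flips target if control is 1

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

print(X @ one)           # [1, 0]: back to |0>, just like classical NOT
print(S @ plus)          # same probabilities as |+>, different phases

# CNOT acting on a superposed control entangles the pair, reproducing the
# "both switches on or both off" Bell state from earlier.
print(np.round(CNOT @ np.kron(plus, zero), 3))  # [0.707, 0, 0, 0.707]
```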

Why are quantum-computing resources limited? Two reasons. The first is simply that the quantum computers that exist today have few qubits to work with. The state of the art right now is a 53-qubit machine from IBM, along with similar-sized machines from Google. The number of available qubits will certainly grow with time, but for now it remains small.

In a quantum superposition, multiple possibilities are simultaneously true. Unlike problems in classical physics, such as calculating the trajectory of a baseball, future outcomes in quantum physics are uncertain; they can only be assigned probabilities.

The second reason is more fundamental. Quantum superpositions are delicate. They can be ruined (or “collapsed” in quantum lingo) not only when they are deliberately probed (for example, by a scientific measurement) but also by any number of prosaic interactions with the environment, such as encountering a stray blip of electromagnetic radiation. Stray blips of this sort are ubiquitous (heat waves, radio waves), and particles in superposition states—such as a superposition of two different spin states, representing a superposition of 0 and 1 for computational purposes—can be shielded from the environment for only so long. Better methods for isolating qubits from the environment will presumably emerge over time, but nonetheless, a quantum computation is necessarily time-limited. A superposition can’t be maintained forever, and if it collapses too early, the calculation is gone.

Quantum bits for quantum leaps

If qubit superpositions can be maintained long enough, however, they can be applied to certain classes of calculations that are severely impractical for a classical computer. A simple example is the purely mathematical operation of factoring—identifying the prime numbers that, when multiplied together, result in a given number (as 3×2×2 is 12). Modern encryption methods, like those that make it safe to enter your credit-card information on a respectable website, rely on the computational difficulty of factoring. Breaking the encryption requires factoring numbers so large that it would take centuries or more for today’s best classical computers to do. That’s because a classical computer can’t exploit the wavelike nature of the factoring problem (finding factors is like finding the periods of a wave) the way a quantum computer can.
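
To see what “finding factors is like finding the periods of a wave” means, here is a toy classical sketch of the reduction used in Shor’s algorithm. The brute-force period search below is exactly the step that blows up classically as the number grows, and exactly the step a quantum computer does exponentially faster; turning the period into factors is ordinary arithmetic. (The numbers here are deliberately tiny.)

```python
from math import gcd

N, a = 21, 2                # factor N = 21 using the base a = 2
assert gcd(a, N) == 1       # a must share no factors with N

# Find the period: the smallest r > 0 with a^r = 1 (mod N). The sequence
# a, a^2, a^3, ... (mod N) repeats like a wave, and r is its wavelength.
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period r =", r)      # r = 6 for this example

# If r is even and a^(r/2) is not -1 mod N, gcd yields nontrivial factors.
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    half = pow(a, r // 2)
    print(gcd(half - 1, N), gcd(half + 1, N))   # prints 7 3
```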

Not surprisingly, another task at which classical computers dramatically underperform quantum computers is running simulations governed by quantum physics. Suppose, for example, you want to analyze a group of 53 electrons (the same number as the IBM machine’s available qubits), and you’re only interested in the very simplest property of each electron: its spin. When measured with respect to a particular axis, electrons have only two spin states, called “spin up” and “spin down.” It’s binary—at least, in a collapsed, non-superposition state. That means the number of possible combinations is 2⁵³.

To track all that with a classical computer would require 2⁵³ bits of memory, which is roughly a petabyte. Today’s most advanced supercomputers can handle that, although even high-end consumer computers cannot. But add one more electron to the simulation—54 electrons instead of 53—and the memory requirements double to 2⁵⁴. Adding just a few more electrons takes us beyond the capability of the most powerful classical computers currently in existence—all to simulate the most simplistic binary spin states of just fifty-some particles! I mean, imagine working with the more complicated states of electrons within atoms, which have various energy levels and orbital properties in addition to spin. Then 2⁵³ could become something more like 20⁵³. Besides, 53 particles might represent a single molecule, and not a very big one; by contrast, a bit of matter you could hold in your hand and notice would be upwards of a trillion trillion particles.
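
Those numbers are easy to check for yourself. Here is the back-of-envelope version, which also previews the contrast with qubits that I describe below:

```python
# Memory for the simplest classical description: one bit per basis state
# of n spin-1/2 particles, i.e., 2**n bits total.
n = 53
print(2 ** n / 8 / 1e15)        # ~1.13 petabytes for 2^53 bits
print(2 ** (n + 1) / 2 ** n)    # adding one particle doubles it: 2.0

# The quantum side scales linearly: n spins need n qubits, so going from
# 53 to 54 particles means one more qubit, not twice as many.
for m in (53, 54, 60):
    print(m, "spins:", m, "qubits vs", round(2 ** m / 8 / 1e15, 1), "PB")
```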

Quantum computing relies on maintaining a shared superposition, called entanglement, among a small number of particles, or qubits. Unlike classical (nonquantum) computer bits—but very much like the true subatomic world—qubits are not restricted to standard binary operations, and therefore quantum computers can solve certain problems much faster than their classical counterparts.

The essential trouble here is that these types of problems scale exponentially. Add just one more digit to the number being factored or one more particle to the quantum simulation, and the amount of memory required doubles (at least). Just a 2 percent increase in complexity (from 53 to 54 electrons, say) results in a 100 percent increase in computational difficulty and memory requirements. (By now, everyone is familiar with the accelerating pace of exponential growth after seeing the horrific, rapid rise of the coronavirus pandemic.)

The same is not true with qubits. Each qubit can simulate one electron’s spin. Going from 53 to 54 electrons doesn’t double the number of qubits required; it just means adding one more qubit. And that’s the key advantage of quantum computing. It eliminates the exponential growth in computational difficulty that classical computers face with these kinds of problems. Instead, the growth is linear, so that a marginal increase in a problem’s complexity produces a correspondingly marginal increase in computational difficulty. That’s what makes quantum computing so promising. It’s also what makes the whole field so fascinating: the way to obtain mathematical certainty in the face of intractably expanding complexity is to deliberately introduce additional uncertainty with superpositions of qubits.

Quantum simulations for quantum behavior

I used to find it bizarre that physics has two distinct and seemingly incompatible ways of describing nature. When studying just one particle or a few, quantum physics is necessary and reliable. When studying larger systems of particles—bacteria, airplanes, galaxies, whatever—classical physics is obviously the way to go, and quantum physics, while perhaps still true if you could somehow account for all the constituent particles, is wildly beyond our ability to calculate and utterly unnecessary. Superimposing human perspectives onto nature invites boundaries like this, but physicists like me much prefer smooth transitions to abrupt changes.

It can actually be a tremendous advantage to work without certainty, specificity, or predictability.

Now, like most physicists, I believe that the boundary between quantum and classical lies in a process called decoherence. A quantum system “decoheres” by broadcasting information about itself to its environment, and then classical physics emerges. But how does that work? How exactly do the essential qualities of quantum behavior—superposition, entanglement, and uncertainty—conspire to produce rigid, predictable classical laws like Newton’s laws of motion or Maxwell’s equations of electromagnetism? Everyone in physics knows that understanding decoherence, and therefore the quantum-to-classical transition, would represent a major scientific discovery, but there’s never been any rigorous way to test it.

That’s about to change. Current estimates suggest that decoherence really gets underway for systems of a few hundred particles. That’s enormously beyond the capability of any foreseeable classical supercomputer, possibly forever, due to the exponential-scaling problem. But a few-hundred-qubit quantum computer is probably only a couple of years away. So I’ve been working frantically to develop algorithms by which a quantum computer can simulate quantum decoherence. That way, we’ll be ready when we, as a species, finally get the chance to watch the quantum-to-classical transition unfold for the first time in history.

Amazingly, molecules around the few-hundred-particle threshold for the quantum-to-classical transition show up often in the biochemistry of the human body. Most people think that these molecules are so closely surrounded by others that they’re effectively part of much larger systems of particles and therefore permanently decohered. But we can’t be sure. Many, many biomolecules are about the right size, such that quantum decoherence could possibly be integral to the processes they carry out—processes like sight and smell, neurotransmitter signaling, and DNA replication. We scientists may have sophisticated ways of modeling stellar interiors, rocket engines, and global climate dynamics, but we’re only just now approaching the point of having the computational technology to understand ourselves.

For some calculations, the fastest way to obtain mathematical certainty is to deliberately introduce the uncertainty of qubits.

Here’s a case in point from a paper that several Los Alamos colleagues and I published last year. Proteins, including the enzymes that catalyze all sorts of biofunctions in all sorts of lifeforms, are generally larger than just a few hundred particles, but they’re still expected to exhibit some remnants of quantum behavior. They are made from long chains of amino-acid molecules, but rather than remaining linear chains, they bunch up into a complex set of specific folds. How they fold is extremely important; the resulting shape is intimately connected to the proper function of the molecule.

So how do the proteins know how to fold properly? There’s some debate about that. One theory argues for classical determinism: the chain of amino acids will kink and fold at the same spots every time because of some complex set of built-in classical forces. But another theory argues that the proteins are solving an optimization problem from scratch, effectively running a quantum computation. They are exploring different ways to fold—at different locations along the chain, in different sequences—via some kind of composite superposition to arrive at an optimal endpoint. 

Quantum decoherence occurs when a large number of particles get involved, somehow “diluting” the superposition. A quantum computation can be maintained only as long as the entangled qubits are isolated from their surroundings, and when that is no longer true, the superposition is replaced with a single outcome. At this point, physical behavior appears classical, not quantum.

Wild conjecture? Or reasonable speculation? The only way to know for sure is to simulate all the particles involved on a quantum computer and apply a specialized algorithm to identify whether or not the relevant behavior has shifted from quantum to classical. As it turns out, we have a tremendous head start on that.

Quantum histories for the quantum-classical transition

I count myself fortunate to have worked under the tutelage of Professor Robert Griffiths during my first postdoctoral position ten years ago. Griffiths developed a powerful methodology for doing quantum physics known as the consistent histories (CH) formalism, and I learned a lot about this novel approach from him. It’s not often used, partly because it is conceptually quite different from the standard quantum mechanics methodology taught to physics students, and partly because it quickly bumps up against the exponential-scaling problem when implemented on a classical computer. However, here at Los Alamos, I realized that CH offered a particularly promising way to study the quantum-to-classical transition on a quantum computer. My colleagues and I recently developed a hybrid algorithm—part quantum computing, part classical computing—to do just that.

The details are complicated, but in essence it works like this: First, you invent what’s called a framework in the CH formalism. This is a series of observables, such as the individual atomic positions in a folding protein, measured individually in some order. This much, but no more, can be done on a classical computer. Then you use a quantum computer to check all the possible sequences of measurement outcomes, or “histories,” for “consistency”—looking for sequences that can occur without interfering with one another. (In quantum mechanics, particles have wavelike attributes and can therefore interfere; we need to suppress this interference if we want to have a consistent framework.) The algorithm repeats this entire process to evaluate many different frameworks.
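
For the mathematically inclined, here is a toy version of the consistency check, run entirely classically on a single qubit. (In our actual hybrid algorithm the heavy part runs on quantum hardware, and the random dynamics below is just a stand-in for a real system’s evolution.) The central object is the decoherence functional D(α, β): its off-diagonal entries measure interference between pairs of histories, and when they vanish, the framework is consistent and the diagonal entries behave like ordinary classical probabilities.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 2                                            # one qubit, for readability
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the state |0><0|

# The framework: ask "0 or 1?" at each of two time steps.
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# A random unitary as the evolution between questions (a stand-in for the
# system's real dynamics).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)

def class_operator(history):
    """C_alpha = P[a2] U P[a1] U for a two-step history (a1, a2)."""
    C = np.eye(d, dtype=complex)
    for a in history:
        C = P[a] @ U @ C
    return C

# Decoherence functional D(alpha, beta) = Tr[C_alpha rho C_beta^dagger].
histories = list(itertools.product(range(2), repeat=2))
D = np.array([[np.trace(class_operator(a) @ rho @ class_operator(b).conj().T)
               for b in histories] for a in histories])
print(np.round(D, 3))

# One possible "how classical" score: how small the interference terms are.
off_diag = np.abs(D - np.diag(np.diag(D))).max()
print("classicality score:", round(1 - off_diag, 3))
```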

Our algorithm identifies the onset of decoherence by constructing as-classical-as-possible descriptions of a quantum system.

Now—and this is important—I’m from Pittsburgh. So I like to think of using consistent histories in terms of answering questions like, “What are the odds that both the Steelers and the Penguins win their respective championships this year?” The consistency here comes from the fact that the two events do not interfere with one another; both teams can (and absolutely should) win in the same year. Or one or the other. Or (gasp!) neither. CH seeks out frameworks like that—those with sets of outcomes not riddled with quantum interference. Effectively, this consistency search obtains as-classical-as-possible descriptions of a quantum system, which means it can assign scores rating “how classical” some process is. By finding the most consistent, most classical behavior, the algorithm can identify when and where decoherence is occurring.

Wow, right?

This algorithm is delightful in its apparent contradictions. It represents a sort of metaphorical superposition of nonoverlapping states: relying on both quantum and classical computations, maintaining a superposition of qubits to simulate the collapse of a superposition of particles, and calling upon purely theoretical foundations (consistent histories) to advance practical applications (quantum computing). Here in the quantum world (where I live, remember?), it’s like installing a window that looks out onto the so-called real world beyond—a window that lets us see not just what the classical world looks like, but why it came to look that way.

Superposition of progress

I’m relatively new to the quantum computing team at Los Alamos, having just started my fourth year. But I can’t believe how much we have accomplished together in that time. I should note that of course we’re standing on the shoulders of giants, and one particular giant deserves special mention: our colleague Wojciech Zurek, just down the hall, has been instrumental in pioneering a great deal of quantum foundations research, especially in the area of decoherence. Other Los Alamos colleagues include Rolando Somma, who is also developing functional quantum-computing algorithms for existing and future quantum computers; Andrew Sornborger, who is doing fascinating work in quantum and neuromorphic computing and is collaborating with me on the intersection between these two topics (so-called “quantum machine learning”); and Lukasz Cincio, Carleton Coffrin, and Stephan Eidenbenz, who have all worked with me in co-organizing a prestigious quantum-computing summer school that sees students from around the world come here to Los Alamos each year for training in the theory, application, and programming of quantum computers. (And those are just a few of my fellow staff members in this amazing group; I haven’t even mentioned the many talented postdoctoral researchers we have here!)

We’re finally processing information the way nature does, no longer constrained by classical bits.

We’ve developed new codes to conserve qubit resources and improve quantum simulations and, perhaps somewhat conversely, to take advantage of qubit-based processing to compile quantum-computing algorithms—that is, translate them from a logical programming language used by programmers like me into machine-language instructions. We’re exploring the fertile intersection between quantum computing and machine learning. We’re proposing new algorithms to solve linear systems of equations (a ubiquitous task in engineering) on quantum computers with an exponential speedup. We’re improving quantum encryption technology and deploying new ways to assess its performance. And, of course, we’re attacking the problem of decoherence, hard. To me, all of this feels like a new kind of progress for humankind. We’re finally using nature’s fundamental methods, processing information the way nature does, and simultaneously exploring all possibilities in any situation.

Reality isn’t, well, real. Not in the way we normally think of it, anyway. Our reality emerges from something else, something expressly nonspecific. We’ve known this for a long time, but we’re only just now beginning to truly understand it, use it, and learn from it. We’re learning to accept ambiguity to obtain precision. We’re making unpredictability work to our advantage. We’re zeroing in on classical certainty by widening our view of quantum histories, and we’re solving difficult problems faster by expanding beyond the paths offered by classical bits. It’s a thought paradigm almost deliberately built on contradiction. And it’s completely new, except it’s not. We’re actively managing qubits and developing new ways to query them, but we’re also just obeying the standard wisdom of the ages. Embrace uncertainty. Live in the now. Hold on loosely, but don’t let go. 

If you zoom out enough, the world loses its familiar focus. The same thing happens if you zoom in.
