Wednesday, October 17, 2007

Programming the Universe by Seth Lloyd

A conventional digital computer uses bits – things that can take one of two values, conventionally 0 and 1 – to encode instructions and data. In physics terms, this is a classical concept: each item has one definite value. But on the smallest scale, the universe is more subtle than that, obeying the laws of quantum mechanics. The quantum mechanical equivalent of a bit – a qubit – is a superposition of 0 and 1; not until it is measured does it take one of those values. Until then, it can be considered 0 and 1 at the same time. Likewise, while two bits can at any one time hold only one of four possible states – 00, 01, 10, 11 – two qubits are a superposition of all four. And so on, with the number of simultaneous states growing exponentially with the number of qubits.
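For the notation-minded (my addition – Lloyd keeps the book almost equation-free), the standard way to write this is:

    % One qubit: a weighted blend of the two classical values.
    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

    % Two qubits: amplitudes over all four classical states at once.
    |\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle

In general, n qubits carry 2^n complex amplitudes – that’s where the exponential growth in simultaneous states comes from.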

Now imagine that a bit represents an instruction for a computer (in a real computer it’s a byte or bytes, but the argument is basically the same). Say 0 means “add 1 and 3” and 1 means “add 2 and 2.” A standard digital computer can do one or the other, depending upon whether that bit is 0 or 1. What does a quantum computer, using qubits instead of bits, do? Both at once. Thus a quantum computer – made up of even a relatively modest number of qubits – can perform an enormous number of operations at once. To date, we’ve only managed to use up to 20 qubits, but we’ll increase this over time. The end result is that in the not too distant future (within 20 years, most likely) we’ll be able, for example, to factor very large numbers efficiently using Shor’s algorithm. Those who know cryptography know this means our current public-key encryption schemes, which rest on the difficulty of factoring, could then be cracked.
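To make “both at once” a bit more concrete, here’s a rough classical simulation of a quantum register (my own sketch in Python with numpy, not anything from the book – and note that the simulation itself costs 2^n, which is exactly why real qubits matter):

    import numpy as np

    # Sketch: classically simulating an n-qubit register.
    # The state of n qubits is a vector of 2**n complex amplitudes.
    n = 3
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                     # start in the definite state |000>

    # A Hadamard gate on every qubit spreads the amplitude evenly
    # over all 2**n classical bit patterns.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for target in range(n):
        op = np.array([[1.0]])
        for q in range(n):
            op = np.kron(op, H if q == target else np.eye(2))
        state = op @ state

    # Every basis state now has probability 1/2**n; whatever operation
    # comes next acts on all eight inputs simultaneously.
    print(np.round(np.abs(state)**2, 3))   # [0.125 0.125 ... 0.125]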

But this is only a starting point for exploring quantum computing. Seth Lloyd does a great job of looking at the many aspects of quantum computation in Programming the Universe. He starts with information theory, exploring how the second law of thermodynamics (that entropy increases) can also be viewed in information-theoretic terms – entropy is really the hidden information of the system. On the most fundamental level, the universe is processing information. Moreover, as Lloyd argues, a quantum computer the size of the universe would be able to simulate the universe exactly: the universe and such a quantum computer are thus computationally interchangeable, and the universe itself can be viewed as a quantum computer.
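The link between the two kinds of entropy is worth writing down (my summary in standard notation – Lloyd makes the argument in prose):

    % Boltzmann: entropy counts the microstates W consistent with the macrostate.
    S = k_B \ln W

    % Shannon: information is the uncertainty over outcomes with probabilities p_i.
    H = -\sum_i p_i \log_2 p_i

    % With W equally likely microstates, H = \log_2 W, so S = (k_B \ln 2) H:
    % up to a constant, entropy is the number of bits of hidden information.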

When this is first stated, the typical reaction is “what does this mean, and what do we gain in our understanding by looking at things this way?” That was certainly my reaction. But this paradigm can yield some interesting breakthroughs in understanding. For example, quantum computing points toward a possible theory of quantum gravity – one that may do a better job of addressing this most fundamental problem of physics than string theory does (though we’re not there yet). Information theory and quantum computing also manage to address another very basic question: why is the universe we see around us so complex?

The universe at the time of the big bang was very simple. There were no complex structures. How did complexity come about? (I won’t try to define “complexity” here, though Lloyd addresses this in his book.) Some have used the “monkeys trying to type Shakespeare” analogy: a million monkeys, typing for long enough, will produce Shakespeare’s plays. But, as Lloyd points out, this doesn’t work. Even if a monkey typed the first act of Hamlet, chances are the next letter it types won’t be the first letter of act two. In a classical world, chances are that things will stay random. But, Lloyd notes, information theory and the view of the universe as a computational machine address that. Picture instead the monkeys typing into a computer. What’s the chance that they will type the first ten million digits of pi? Very, very slim. But what’s the chance they will type a simple computer program that generates pi? The program (pick your computer language) is much shorter than ten million characters. So the chance of the monkeys producing complexity (pi, e, fractals, etc.) in a computational universe is much greater than in a universe where computation doesn’t happen. It’s a fascinating way to view the evolution of the universe.
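To see just how short “a simple computer program to generate pi” can be, here’s one example (my illustration – Lloyd gives no code): Gibbons’ unbounded spigot algorithm in Python, which streams the digits of pi and fits comfortably in a few hundred characters:

    # Gibbons' unbounded spigot algorithm: yields digits of pi one at a time.
    def pi_digits():
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < m * t:
                yield m                  # the next digit is now settled
                q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
            else:
                q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                    (q * (7 * k + 2) + r * x) // (t * x), x + 2)

    gen = pi_digits()
    print([next(gen) for _ in range(12)])   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]

Compare that to ten million literal digits: a random typist is astronomically more likely to hit the program than the digits themselves.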

This is a very good book – both well written and mind bending. It’s one of those science books that alters the way you look at and think about the universe. While I’ve read quite a few physics books over the years, this was my first on the topic of quantum computing, and it’s given me a new perspective – on what we might accomplish in computer science in the coming decades and, more importantly, on how the universe works. And I’m also ready to go back and re-read several Greg Egan novels.

2 Comments:

Blogger marketwizard said...

Great article! I'll be back for more.

http://www.inQubit.com

3:31 AM  
Blogger TechTonics said...

"So the chance of the monkeys producing complexity (pi, e, fractals, etc.)
in a computational universe is much greater than their doing so in where
computation doesn’t happen."

Kolmogorov complexity (AIT) uses a definition of complexity like this one:
an object is random if it has maximal possible complexity.

I'm reading "Programming the Universe" now, and Lloyd has Gell-man as an
ancestor. The definition they use seems quite opposed to the AIT def:

"A measure that corresponds much better to what is usually meant by
complexity in ordinary conversation, as well as in scientific discourse,
refers not to the length of the most concise description of an entity
(which is roughly what AIT is), but to the length of a concise description
of a set of the entity’s regularities. Thus something almost entirely
random, with practically no regularities, would have effective complexity
near zero. So would something completely regular, such as a bit string
consisting entirely of zeroes. Effective complexity can be high only in a
region intermediate between total order and complete disorder."

So the view of AIT is that Pi isn't random, because a short algorithm can
produce a very long output. Since Pi isn't random, AIT defines Pi as
minimally complex, in contrast to the "effective complexity" that Gell-Mann
and Lloyd define. I liked Walter Jon Williams's book, "Implied Spaces".
Spandrels forever, Stephen

11:02 AM  
