Intelligence and the World

Alva Noë, a philosopher of mind, talks over at edge.org about the problem of consciousness. He argues that the old "input-output" model—where "perception is input from the world to the mind, action is output from the mind to the world, and cognition and consciousness is what happens inside the head to relate those two"—is wrong. Instead, he says, consciousness is "something we do." Like a dance, it is "locked into an environment"—it is as much in the world as it is in our heads. Divorcing the two would produce "only very impoverished experiences."

I'm reminded of the 1987 paper, "Intelligence without representation" [pdf], by Rodney Brooks (co-founder of iRobot, the company behind the Roomba), in which he develops his famous subsumption architecture. His key insight is to "use the world as its own model." No central processor, no ontologies, no "representations."

Instead, his robots have independent modules that each do their own low-level sensing, processing, and actuating; the system's intelligence emerges from one module suppressing another. So, for example, a robot might have two modules: "avoid objects" and "go towards the light." They don't talk—the high-level light-seeker doesn't care about avoiding objects, and the low-level collision detector doesn't care about the light. Yet the bot's behavior looks coordinated: it moves towards the light while avoiding objects. How?

The bot's primary directive is to seek light. But the collision detector is always working. So when the light-seeker chooses to move in a way that would cause a collision, the lower-level module kicks in and suppresses its parent. It's that simple.
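
To make that concrete, here's a minimal sketch of the suppression idea in Python. This is not Brooks's actual implementation (his modules were augmented finite-state machines wired together at the signal level); the module names, the threshold, and the canned sensor readings are all invented for illustration.

    # A toy subsumption-style controller: two independent behaviors,
    # coordinated only by a fixed suppression priority. Neither behavior
    # knows the other exists, and no messages pass between them.

    OBSTACLE_THRESHOLD = 1.0  # meters; hypothetical tuning constant


    def seek_light(light_direction):
        """Higher-level module: always steer toward the brightest reading."""
        return light_direction  # e.g. "left", "right", "forward"


    def avoid_obstacles(obstacle_distance, proposed_move):
        """Lower-level module: an inhibitory tap on the light-seeker's output.

        It knows nothing about light; it only suppresses moves that would
        cause a collision, and passes everything else through untouched.
        """
        if obstacle_distance < OBSTACLE_THRESHOLD:
            return "turn_away"  # suppress the parent module's choice
        return proposed_move


    def control_loop(sensor_readings):
        for light_direction, obstacle_distance in sensor_readings:
            move = seek_light(light_direction)
            move = avoid_obstacles(obstacle_distance, move)
            print(f"light={light_direction:>7}  "
                  f"obstacle={obstacle_distance:.1f}m  ->  {move}")


    # A canned run: the bot heads for the light until something gets close.
    control_loop([
        ("forward", 5.0),
        ("forward", 2.0),
        ("left",    0.4),  # collision imminent: avoidance suppresses seeking
        ("left",    3.0),
    ])

Note that neither function calls the other; the coordination lives entirely in the control loop's fixed priority, which is the inhibitory tap described below.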

Since the only interconnections in this model are inhibitory side-taps (which are trivial to implement), each module can devote almost all of its computational resources to a single task, instead of passing and processing messages. And that buys you a very tight loop with the environment, because there's no delay caused by a central intelligent symbol-cruncher. Sound familiar?

Brooks, like Noë, thinks intelligence comes from interaction with the world, not some intricate calculator within a mind. They both eschew "representation" in favor of a different kind of dance between your brain and the environment. (Mind you, Noë's never quite clear—in this talk at least—about what this "dance" entails.)

It turns out they're both off. Brooks, in fact, anticipates the critical flaws in the approach with two questions near the end of his paper:

  1. How many layers can be built in the subsumption architecture before the interactions between layers become too complex to continue?
  2. How complex can the behaviors be that are developed without the aid of central representations?

Here's a good answer to the first question. I'll leave it to Arnold Trehub, who wrote a response to Noë's talk, to take a shot at the second:

The rich and constantly changing content of your phenomenal world cannot exist outside of your phenomenal 3D space that encompasses it. Whatever your conscious experience, it must be an experience of something somewhere in your egocentric phenomenal world. The problem is that humans have no sensory transducers by which to detect and represent the extended coherent 3D space of the world we live in. This means that the brain must have an innate system of biological mechanisms (most likely neuronal) that provide us with a transparent representation of the world from our privileged egocentric perspective.

And as he points out later in that response, recent research has quite convincingly identified (some of) the neural correlates of phenomenal experience. Your brain does represent the world. It does crunch symbols intelligently.

Of course, we would be remiss to claim that either Noë or Brooks is wrong for thinking that intelligence is a feature of both the brain and the environment. It is. It's just that the brain creates its own world—maybe not a complete representation of reality, but something close.