The Reconstruction Problem
The Last Science, Part 1 of 10
I was twelve years old when I started wondering what it means to be conscious.
It was Hindu Heritage Summer Camp, at an ashram in the Pocono Mountains of Pennsylvania. My parents had sent me, and to be honest I’d gone mostly because it was a sleepaway camp and I was a preteen. Plus, friends were going. I didn’t expect much. But something happened there that changed me forever — set the path of my life.
Swami Parvati, an American convert to Hinduism who truly walked the walk, wasn’t interested in just teaching us the rituals. She understood the ABCD (American-born confused desi), though I’m really the full A-J: emigrated from Gujarat, house in Jersey. She was teaching us the philosophy underneath — reincarnation, the atman, enlightenment. And the questions followed naturally. If I’m reborn as a rabbit, what carries over? What part of me is the part that’s aware? What does it even mean to have a mind?
I was an introverted kid with a TRS-80 that had no operating system and a 300-baud modem with an actual cradle for an actual old-timey handset. At some point the spiritual questions got tangled with the scientific ones. The ashram was asking: what is the self? The computer was asking: what is computation? I didn’t know at the time how tangled up those questions would become.
I became an anesthesiologist. Specifically, a pediatric anesthesiologist — I put kids to sleep for surgery at Seattle Children’s Hospital. This turns out to be relevant, because anesthesia is a consciousness laboratory.
Here’s what I do every day. I take a human being — a child, a person with a rich inner world, feelings, fears, sometimes a favorite stuffy — and I give them a drug, and within seconds, there is nobody home. The lights are off. I set up our patient for success, which looks different for every surgery, and then I wait. At the end, I turn off the medicines maintaining anesthesia, and they wake up. They’re back — though, at times, the emergence process involves delirium, grogginess, or asking the same question 15 times. 99.99% of the time, the person returns.1
Where did they go?
This is the clinical question I face daily. I need to know whether someone is conscious. If I get it wrong, a child experiences surgery. That happens, rarely, and it’s potentially devastating. So I monitor brain activity, I watch vital signs, I titrate drugs. And the whole time, underneath the clinical task, there’s this question that most people in my field don’t spend a lot of time thinking about: what am I actually turning off?
Twenty years of doing this, and I can tell you — we know remarkably well how to turn consciousness off. We are shockingly bad at explaining what we’re turning off.2
So. Let’s talk neurons.
A neuron does one thing. It fires an electrical pulse — an action potential — or it doesn’t. On or off. It’s not quite that simple (there’s timing, there’s frequency, there’s a bunch of molecular machinery), but at the level that matters for this story, a neuron’s vocabulary is: spike, or no spike.
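To make the “spike or no spike” vocabulary concrete, here is a toy leaky integrate-and-fire neuron — a standard textbook caricature, not a model from this series, with threshold and leak values I chose arbitrarily for illustration. The point is simply that whatever comes in, all that comes out is a string of 0s and 1s.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: accumulate input with a leak,
    emit an all-or-nothing spike (1) when the threshold is crossed, else 0."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current   # leaky accumulation of input
        if v >= threshold:
            spikes.append(1)     # action potential: all-or-nothing
            v = 0.0              # reset after firing
        else:
            spikes.append(0)     # subthreshold: silence
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# → [0, 0, 0, 1, 0, 0, 1]
```

Rich, graded input; binary output. That asymmetry is the whole setup for what follows.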
Now think about what your brain is doing right now as you read this. You’re seeing black shapes on a white background and extracting meaning from them. You’re hearing my voice in your head (or something like it). You’re sitting somewhere, feeling the chair or the bed or the train seat. There might be music playing. You might be hungry.
All of that — every single bit of it — is built from neurons that can only say one thing: spike.
This is the reconstruction problem. Somewhere between “neurons fire electrical pulses” and “I see the color red and it looks like that,” something extraordinary happens. A brain made of cells that can only transmit binary electrical signals somehow constructs an entire experienced world. Three-dimensional space. Color. Sound. Emotion. The feeling of being you, looking out through your eyes, right now.
How?
Let’s make this concrete. You’re looking at a red apple.
Light hits your retina. Photoreceptors respond — three types of cones, sensitive to different wavelengths. They convert photons into electrical signals. Those signals travel along the optic nerve to your visual cortex, where neurons start firing in response to edges, colors, motion, depth.
But here’s the thing: by the time the signal is in your cortex, the apple is gone. There’s no picture of the apple in your brain. There’s no tiny movie screen. There are just neurons, firing sequences of spikes, and somehow from those spike trains, your brain builds — reconstructs — a three-dimensional red apple sitting on a table, and it feels like something to see it.
Your brain has to solve an inverse problem. It receives a degraded, noisy, incomplete stream of electrical pulses, and from that stream, it has to figure out what’s actually out there in the world. It’s like trying to reconstruct the shape of a building by listening to echoes. Except harder, because at least with echoes you know the medium. Neurons don’t carry labels. A spike in your visual cortex doesn’t arrive stamped “from such-and-such opponent photoreceptor group at such-and-such retinal location.” It’s just a spike. Same as a spike in your auditory cortex. Same as a spike anywhere.
So the brain doesn’t just need to reconstruct reality. It needs to reconstruct reality from a signal that doesn’t, on its face, contain enough information to specify what reality is.
Except it does contain enough information. We know this because you can see the apple.
Here’s where I want to plant a flag for the rest of this series.
The brain is a recurrent system. That means neurons don’t just fire forward — from your eyes to your visual cortex to your decision-making areas in a nice clean pipeline. They also fire backward. And sideways. Cortical neurons are massively interconnected, with recurrent loops at every scale — local loops within tiny cortical columns, long-range loops between distant brain regions, loops through subcortical structures and back.
On my view, this recurrency is in fact the key to understanding the whole trick.
Because when a neuron receives input from other neurons that received input from other neurons that were influenced by the same neuron a few milliseconds ago, something specific happens: the current signal gets mixed with echoes of the recent past. The activity right now reflects not just what’s happening at this instant, but what happened 10 milliseconds ago, 50 milliseconds ago, 200 milliseconds ago. Information coming in now is being blended with information that came in before. And before that. And before that.
This blending — the specific mathematical operation of combining a signal with time-delayed copies of itself — turns out to be extraordinarily powerful. Powerful enough, in fact, to solve the reconstruction problem.
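A one-line caricature shows what recurrence buys you. This is not the brain’s actual dynamics — just a single recurrent “state” variable with a decay factor I picked for illustration — but it makes the blending visible: the state right now is a weighted sum of everything that came in before, with older inputs echoing at fainter and fainter strength.

```python
def recurrent_state(inputs, a=0.5):
    """Minimal recurrent update h_t = a*h_{t-1} + x_t.
    Each state blends the current input with a decaying echo of all past inputs."""
    h = 0.0
    history = []
    for x in inputs:
        h = a * h + x
        history.append(h)
    return history

# A single impulse at t=0, then silence: the echo persists, fading geometrically.
states = recurrent_state([1.0, 0.0, 0.0, 0.0])
print(states)
# → [1.0, 0.5, 0.25, 0.125]
```

Even after the input is gone, the past is still present in the current state — which is exactly the raw material a delay-based reconstruction needs.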
In 1981, a mathematician named Floris Takens proved something remarkable that you've almost certainly never heard of. He showed that if you take a single measurement from a complex system — just one — and you keep taking that same measurement over and over, staggered in time, you can figure out what the whole system is doing. The full pattern. The shape of how the system moves. And that, my friends, is what this particular ballgame is all about.
The brain doesn’t know about Takens’ theorem. But, according to my State Space Theory, biology has been implementing it for as long as there’s been a cortex.
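Takens-style delay embedding is easy to sketch. The standard demonstration — mine below, not anything from SST — uses the Lorenz system: integrate the full three-variable system, throw away two of the variables, and rebuild a three-dimensional trajectory from time-delayed copies of the one measurement you kept. The integration method (crude Euler), step size, and delay are arbitrary choices for illustration.

```python
import numpy as np

def lorenz_trajectory(n_steps=20000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (illustration only)."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

def delay_embed(signal, dim=3, tau=10):
    """Stack time-delayed copies of one measurement:
    each row is [s(t), s(t + tau), s(t + 2*tau)]."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

xyz = lorenz_trajectory()
x_only = xyz[:, 0]              # keep just one measurement, as in Takens' setup
embedded = delay_embed(x_only)  # the reconstructed state space
print(embedded.shape)
```

Plot `embedded` in 3D and you get a shape that is a smooth deformation of the famous butterfly attractor — recovered from a single scalar stream. That is the theorem’s punchline: one measurement, staggered in time, carries the geometry of the whole system.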
More next time.
This is Part 1 of a 10-part series on the State Space Theory of Consciousness. All the papers are here. Part 2: “The Trick the Brain Already Knows” — where a theorem from dynamical systems theory meets the architecture evolution already built.
I’m a professor of anesthesiology at the University of Washington, where I also direct the Center for Perioperative and Pain Initiatives in Quality, Safety, and Outcomes. I’ve spent the past several years building SST, which is published and under review in peer-reviewed journals across consciousness studies, philosophy, neuroscience, and computational biology.
1. Postoperative cognitive dysfunction is a real and critically important aspect of geriatric anesthesia: we do what we can to minimize the risk, but there are factors outside of our control.
2. I’m not talking about the extensive scientific base of knowledge regarding the impacts of anesthetics at GABA receptors, NMDA receptors, mitochondrial complex I, etc. Descriptions of anesthetic mechanisms at that level of granularity are indeed quite mature. What I’m talking about is the network-level understanding of what we’re turning off. That level of description depends on an adequate theory of consciousness. Presumably, you’re here because you know we have lots of options there, but no clear winner.

