Consciousness Is a Verb
The Last Science, Part 3 of 10
Quick recap. Your brain receives spike trains from the world. From the outside in, this looks like a reconstruction problem to us: how does the brain create useful representations from neurons firing “in the dark”?1 According to SST, recurrent cortical circuits perform delay coordinate embedding on those signals — combining them with time-delayed copies of themselves — and what pops out is a reconstruction of the dynamical structure of whatever caused the signal. Color, motion, depth, sound: all built by DCE engines that embed sensory data into state spaces preserving the geometry of reality.
At the end of Part 2, I said something that I need to now make good on. I said that this process requires time — not in the boring sense that all things take time, but in the deep sense that the computation is temporal. That the time is constitutive. That without the unfolding, there’s nothing there at all.
Let me show you what I mean.
Hum a few bars of “Happy Birthday.” Just in your head is fine.
You experienced a melody. Now freeze it. Pick any single instant — any one infinitesimal slice of time — and tell me: what’s the melody at that instant?
There isn’t one. At any given instant there’s a pressure wave at a particular frequency. Maybe a note. But a note isn’t a melody. A melody exists only across time. It’s constituted by the sequence — by the relationships between notes, by the way one follows another, by rhythm and phrasing. You can’t freeze a hurricane and still have a hurricane: the ‘object’ is defined by its dynamical evolution, not its state at any given moment.
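The point can be made concrete with a toy sketch (the data structure and note values here are my illustration, not anything from the theory): represent the tune as a sequence of pitches and durations, then ask what exists at a single instant.

```python
# A fragment of "Happy Birthday" as (pitch, beats). The melody is this
# whole sequence: the order, the intervals, the timing.
melody = [("G4", 0.75), ("G4", 0.25), ("A4", 1.0),
          ("G4", 1.0), ("C5", 1.0), ("B4", 2.0)]

def pitch_at(melody, t):
    """What is sounding at instant t? At most one pitch, never the melody."""
    elapsed = 0.0
    for pitch, beats in melody:
        if elapsed <= t < elapsed + beats:
            return pitch
        elapsed += beats
    return None

print(pitch_at(melody, 2.5))  # a single note; the between-note relationships are gone
```

Freezing the sequence at any `t` returns one pitch. Every property that makes it a melody lives in the relationships across time, which the slice cannot contain.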
That’s how I think consciousness works. In my view, consciousness resembles a storm more closely than it resembles a raindrop.
The dominant approach in consciousness science — largely inherited from physics — treats consciousness as something that “supervenes” on a state: that it is an object of some kind. A configuration of neurons at an instant. A snapshot. The idea is that if you could specify every neuron’s firing state at a single moment, you’d have captured everything that matters. The consciousness is “in” the arrangement. This view filters down into our daily discourse: how’s your mood? what was your state of mind at the time?
I think that’s what’s called in philosophy a “category error” — essentially, putting something into the wrong bucket. And I think it’s the category error that has kept the field stuck for decades.
Delay coordinate embedding is defined on time series. The whole mechanism requires the blending of present signals with past signals through recurrent loops. Take away the temporal processing and you don’t have a reduced version of the process. You have nothing — the way a single frame of a film isn’t a slow movie. There’s no motion in a photograph. There’s a spatial pattern that might depict what motion looked like at one instant, but the motion isn’t in there. Motion requires time. So does consciousness.
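Delay coordinate embedding itself is simple to sketch. Here is a minimal illustration in Python (the function name and parameter choices are mine; this is the textbook construction, not a claim about cortical wiring): stack a signal with time-delayed copies of itself.

```python
import numpy as np

def delay_embed(signal, dim, tau):
    """Embed a 1-D time series into `dim` dimensions by pairing each
    sample with copies of the signal delayed by multiples of `tau`."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

# One sample of a sine wave is just a number; the delay-embedded
# trajectory traces out the full cycle as a closed loop in 2-D.
t = np.linspace(0, 8 * np.pi, 2000)
x = np.sin(t)
traj = delay_embed(x, dim=2, tau=100)
print(traj.shape)  # (1900, 2)
```

Note that the output only exists because there is a stretch of signal to embed: hand the function a single sample and there is no trajectory at all, which is the point of the paragraph above.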
I want to be precise about this because it’s easy to hear “consciousness takes time” as a trivial observation. Of course it takes time. Everything takes time. Digestion takes time. So what?
So this: digestion is a process. You wouldn’t try to understand digestion by looking at a snapshot of a stomach and asking “where’s the digestion in this configuration of molecules?” The digestion isn’t in the configuration. It is the process of chemical transformation unfolding over time. A frozen stomach isn’t slowly digesting. It isn’t digesting at all.
But for some reason, when we think about the brain and consciousness, we keep reaching for the snapshot. We ask: what’s the neural correlate of consciousness? What state is the brain in when someone is conscious? What’s the configuration? We’re looking for digestion in a frozen stomach.
Under State Space Theory, consciousness is what happens when DCE engines are running — when recurrent circuits are actively integrating signals across time, reconstructing and predicting, a trajectory flowing through state space. Stop the process and you haven’t paused consciousness. You’ve ended it. There’s no one home in a frozen brain, not because the lights went out, but because the thing that constituted the lights was the process itself.
Consciousness is a verb. Not a noun.
This reframes everything.
Take the classic hard problem of consciousness: why does this particular arrangement of matter give rise to subjective experience? Why does it feel like something to be a brain processing information? Philosophers have been grinding on this for decades. Some very smart people think it’s unsolvable in principle.
But notice the assumption buried in the question. “Why does this arrangement of matter...” — arrangement. Configuration. State. The question presupposes that consciousness is a property that a physical system either has or doesn’t, like charge or temperature. And then it asks for the bridge law: what connects the physical property to the experiential property?
What if there’s no bridge because there aren’t two sides?
Here’s my view, which I call Computational Dynamic Monism.2 Consciousness isn’t a property the brain has. It’s an activity the brain does. The experience is the process of hierarchical delay coordinate embedding, accessed from the inside. When you see red, what’s happening is that DCE engines are actively reconstructing color dynamics — a trajectory is flowing through a three-dimensional quality space built from cone-cell inputs. That flowing trajectory, that ongoing reconstruction, is the experience of red. Not correlated with it. Not giving rise to it. It is the redness.3
We don’t ask why running “has the property of” locomotion. Running is locomotion. We don’t ask why hydrogen fusion “gives rise to” the sun. Fusion is what the sun is. The hard problem asks why a physical process “gives rise to” experience. The answer, on my view: it doesn’t give rise to it. It’s the same thing described two ways — from the outside (recurrent dynamics, DCE, state space trajectories) and from the inside (what it feels like to be the system undergoing that process).
Same phenomenon. Two vocabularies. No bridge needed because there was never a gap.
I know how that sounds. It sounds like I’m dodging the question. “You just said consciousness IS the process — you didn’t explain why the process is conscious!”
Fair. But consider: that objection has the same structure as asking why water IS H2O. There’s no further explanation. Water is H2O. The identity was discovered, not derived. You can’t deduce it from the concept of water or the concept of H2O. But once you know it, asking “but why does H2O give rise to water?” is confused. There’s nothing for it to give rise to. It already is the thing.
The reason this feels unsatisfying for consciousness when it doesn’t feel unsatisfying for water is that we have two radically different ways of accessing consciousness. We can study it from the outside — brain scans, neural recordings, computational models. And we experience it from the inside — the redness, the pain, the what-it’s-like-ness. Those two access routes generate different conceptual vocabularies, and because the vocabularies feel so different, we intuit that there must be two different things. The feeling that there must be two things is an artifact of how we’re built, not of how reality is.
That’s why the hard problem seems hard. Not because it is hard. Because our architecture — the very DCE-based processing that constitutes our consciousness — gives us dual access to a single phenomenon, and we mistake the duality of access for a duality of substance.
One more point before next time. If consciousness is a process — a verb, an activity, something the brain does — then a system that isn’t doing it doesn’t have it. Not “has it but dimly.” Not “has a reduced form.” Doesn’t have it, the way a stopped engine isn’t slowly running.
I said earlier that I turn consciousness off for a living. Under SST, what I’m doing with anesthetic drugs is disrupting the high-frequency dynamics that DCE requires. Slow down the recurrent processing, break the timing of those loops, and the embedding fails. The quality spaces collapse. The process stops. Nobody home.
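There is a toy way to see how breaking the timing breaks the embedding (this illustrates the delay-coordinate point only; it is not a model of anesthetic pharmacology, and every name in it is mine): when a signal's dynamics are slowed far past the loop delay, the delayed copy stops carrying new information, and the embedding collapses onto a diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
fast = rng.standard_normal(5000)
# Heavily smoothed copy: the same process, but its dynamics are now far
# slower than the fixed loop delay below.
slow = np.convolve(fast, np.ones(200) / 200, mode="valid")

tau = 5  # the "recurrent loop" delay, in samples

def delayed_corr(x, tau):
    """Correlation between a signal and its tau-delayed copy."""
    return np.corrcoef(x[:-tau], x[tau:])[0, 1]

print(delayed_corr(fast, tau))  # near 0: the delayed copy adds a genuine new coordinate
print(delayed_corr(slow, tau))  # near 1: the delayed copy is redundant; the embedding collapses
```

When the delayed coordinate is nearly identical to the present one, the "reconstructed" state space degenerates to a line: there is no geometry left to preserve, which is one way to picture the embedding failing.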
And when I reverse the anesthetic, I’m not restoring a state. I’m restarting a process. The engine turns over again. The DCE engines spin back up, trajectories resume flowing through state spaces, and the kid wakes up and asks where their mommy is. Usually several times, and sometimes with a lot of confused screaming beforehand (something called emergence delirium, very common in preschool-age kids), because the boot-up process isn’t clean.
Consciousness wasn’t stored somewhere while they were under. It doesn’t hide in a closet. It simply wasn’t happening. And then it was again.
Next time: what makes some of this processing conscious and the rest of it not.
This is Part 3 of a 10-part series on the State Space Theory of Consciousness. Part 1: “The Reconstruction Problem.” Part 2: “The Trick the Brain Already Knows.” Part 4: “The Hierarchy and the Apex.” All the papers are here.
Consciousness science is haunted by two intertwined problems. The first is Chalmers’ (1995) notorious “hard problem” of consciousness: even a complete account of neural function seems to leave unexplained why there is subjective experience at all. One could imagine, it seems, all the neural processing occurring “in the dark” without any accompanying experience. The second is the so-called “explanatory gap” (Levine, 1983) between objective mechanism and subjective experience, which has generated divergent responses across philosophy and cognitive science.
The philosophically inclined can find the preprint, currently in peer review, here. The type-B physicalism (a posteriori identity) draws on Kripke’s arguments about necessary a posteriori truths. Four formal arguments for the constitutive temporality of consciousness — from temporal phenomenology, integration semantics, the experiencer problem, and empirical constraints — are developed in that paper. For the philosophers reading: yes, this is a process-ontological reading in the tradition of Whitehead and Rescher, but grounded in specific computational mechanism rather than speculative metaphysics. There are callouts to Sellars’ adverbial view, but, if anything, my view would be more of a “gerundial” view.
Note: we haven’t mentioned photons of a certain wavelength here. Those photons stimulate the retina, which creates neuronal spike trains, which then get processed by the relevant brain areas. But I can (theoretically) give you the experience of red by stimulating those neurons without any photons. Or by using a transcranial magnetic stimulator to apply activations to areas of your brain. The redness is not in the photons. It’s in your brain activity. We know that much. The DCE mechanism is the hypothesis about how.

