You are not conscious.

Something inside your skull is running a real-time simulation of you, and that is conscious. The real you — the neurons, the electrochemistry — has never experienced a single thing.

This is the core claim of the Four-Model Theory (FMT). It sounds radical, but it follows from a structural observation about how brains work — an observation that dissolves the so-called “hard problem” of consciousness and generates testable predictions, at least one of which has already been independently confirmed.

The setup: two levels, four kinds of modeling

Your brain does something peculiar. It doesn’t just process information — it builds simulations.

Think of a digital twin in engineering. A factory has real machines AND a real-time simulation of those machines on a screen. The simulation isn’t the machines. But it’s the only thing anyone in the control room can see.

Your conscious mind is the control room. You have never touched the factory floor.

FMT identifies four kinds of modeling that the brain performs simultaneously:

Implicit world modeling (IWM)

Your brain’s synaptic weights encode decades of experience about how the world works. This knowledge is structural — stored in connection strengths, not in a simulation. It processes reality constantly, predicting sensory input, detecting anomalies, driving reflexes. You are never directly conscious of it.

When you catch a ball, implicit world modeling handles the trajectory prediction. When you feel uneasy in a conversation without knowing why, implicit modeling has detected a social pattern you haven’t consciously registered.

Implicit self-modeling (ISM)

The same structural encoding, but turned inward. Your brain tracks your body, your capabilities, your behavioral tendencies, your autonomic state. This is the deep self-knowledge that lets you reach for a coffee cup without consciously calculating joint angles, that makes you flinch before you “decide” to, that gives you a felt sense of whether you can jump a gap.

Like implicit world modeling, this is never directly conscious. You experience its outputs, not the process itself.

Explicit world modeling (EWM)

Here’s where consciousness enters. From the implicit world model, your brain generates a real-time simulation of the world — your visual scene, your auditory landscape, your spatial sense of the room you’re in.

This simulation isn’t reality. It’s a model of reality, constructed in real time, informed by sensory input but not identical to it. Every optical illusion demonstrates this: the simulation diverges from physical reality because the simulation follows its own rules.

This is your conscious experience of the world. Not the world itself — a simulation of it.

Explicit self-modeling (ESM)

And this is the crucial piece. From the implicit self-model, your brain generates a real-time simulation of you — the experiencing subject, the “I” that seems to be watching the show.

The ESM is what it feels like to be you. Not a representation of you viewed from nowhere — a simulation that, from the inside, constitutes being someone. When you introspect — when you think about what you’re thinking — you’re the ESM examining itself.

This is why consciousness feels like something. It’s not that neurons “produce” experience through some mysterious alchemy. It’s that the explicit self-simulation is the experiencer. There’s no gap between the simulation and the experience because they’re the same thing, described at different levels.
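
To make the architecture easier to hold in mind, here is a toy sketch in Python. The class names, methods, and return values are my illustrative placeholders, not formal definitions from the FMT paper; the only point is the division of labor between the four models.

```python
# Illustrative only: a cartoon of FMT's division of labor, not its formal machinery.

class ImplicitWorldModel:
    """Structural knowledge of the world, stored in synaptic weights. Never conscious."""
    def predict(self, sensory_input):
        return sensory_input  # stand-in for a learned predictive mapping

class ImplicitSelfModel:
    """Structural knowledge of the body and its tendencies. Never conscious."""
    def body_state(self):
        return {"posture": "sitting", "autonomic": "calm"}  # stand-in values

class ExplicitWorldModel:
    """Real-time simulation of the world, built from IWM output: the conscious scene."""
    def __init__(self, iwm):
        self.iwm = iwm
    def render_scene(self, sensory_input):
        return self.iwm.predict(sensory_input)

class ExplicitSelfModel:
    """Real-time simulation of the experiencer, built from ISM output: the 'I'."""
    def __init__(self, ism):
        self.ism = ism
    def render_self(self):
        return self.ism.body_state()

# The "control room": only the two explicit simulations are ever experienced.
scene = ExplicitWorldModel(ImplicitWorldModel()).render_scene({"vision": "red cup"})
experiencer = ExplicitSelfModel(ImplicitSelfModel()).render_self()
```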

Why this matters: dissolving the hard problem

The “hard problem” of consciousness, as formulated by David Chalmers, asks: why does physical processing feel like something? Why isn’t the brain just a dark machine, processing information without any inner experience?

FMT dissolves this question rather than answering it.

Asking “why does neural firing feel like seeing red?” is like asking “why does transistor switching feel like running Windows?” It doesn’t. The transistors don’t experience Windows. The operating system exists at a different level of description from the transistor physics.

Similarly, neurons don’t experience consciousness. The explicit self-simulation operates at a different level from the neural substrate that generates it. The substrate processes. The simulation experiences. These aren’t the same thing, and asking why one “produces” the other is a category error — like asking how the number 7 smells.

The hard problem is hard because it assumes consciousness must be explained in terms of the substrate. FMT says: consciousness IS the simulation, not the substrate. The simulation is generated by the substrate, yes — but it has its own coherence, its own dynamics, its own properties. Including the property of feeling like something from the inside.

The critical regime: edge of chaos

The four kinds of modeling aren’t sufficient on their own. The system must also operate in a specific dynamical regime: edge-of-chaos criticality.

This is Wolfram’s Class 4 — the narrow band between rigid order (Class 1–2) and pure chaos (Class 3) where complex, self-sustaining computation is possible. Too ordered, and the system freezes into repetitive patterns. Too chaotic, and information is destroyed faster than it can be integrated. At criticality, the system achieves the maximum computational capacity needed to sustain a real-time self-simulation.

I predicted this in 2015, in a German-language book that sold zero copies.

In 2025, Hengen and Shew published results from 140 neural datasets confirming that waking cortex operates at criticality, while sleep and anesthesia shift the brain away from this regime. The signature I predicted tracks consciousness onset and offset with remarkable precision.

Ten years. Independent confirmation. From researchers who had never heard of me.
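
To give a feel for what a criticality signature can look like in practice, here is a toy Python sketch of one widely used proxy, the branching ratio: the average number of units each active unit activates in the next time step. A ratio near 1 marks the critical regime; below 1, activity dies out (too ordered); above 1, it explodes (too chaotic). This is an illustration of the concept only, not the analysis pipeline behind the 140-dataset result.

```python
import numpy as np

def simulate_cascade(sigma, steps=100, seed_activity=1000, rng=None):
    """Toy branching process: each active unit activates Poisson(sigma) units next step."""
    rng = rng or np.random.default_rng(0)
    activity = [seed_activity]
    for _ in range(steps):
        prev = activity[-1]
        activity.append(int(rng.poisson(sigma * prev)) if prev > 0 else 0)
    return np.array(activity)

def naive_branching_ratio(activity):
    """Crude estimator: mean ratio of successive activity counts (ignores subsampling bias)."""
    prev, nxt = activity[:-1], activity[1:]
    mask = prev > 0
    return float(np.mean(nxt[mask] / prev[mask])) if mask.any() else 0.0

for sigma in (0.9, 1.0, 1.1):  # subcritical, critical, supercritical
    estimate = naive_branching_ratio(simulate_cascade(sigma))
    print(f"true sigma = {sigma:.1f}   naive estimate = {estimate:.2f}")
```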

Nine testable predictions

FMT makes nine specific, falsifiable predictions. Here are three of the most striking:

The salvia prediction

Salvia divinorum users report literally becoming objects — a chair, a wall, a zipper. This isn’t metaphorical. They describe the experience as being the object, having the object’s perspective.

FMT explains this directly: the ESM (explicit self-model) is redirectable. Normally, it receives input from the implicit self-model — your body, your history, your identity. Salvia disrupts this input channel. With normal self-input suppressed, the ESM latches onto whatever dominant sensory signal remains. If you’re looking at a chair, the chair becomes the dominant input. The ESM doesn’t hallucinate being a chair — it is the chair, briefly, because the self-simulation has been redirected to model the chair instead of you.

No other theory of consciousness predicts this. Most can’t even describe it.
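
As a cartoon of the redirection idea (my illustration, not a model of salvia’s pharmacology and not FMT’s formal machinery), picture the ESM as binding to whichever input channel currently carries the strongest signal:

```python
def esm_binds_to(channels):
    """Return the label of the strongest input channel; a stand-in for what the self-simulation models."""
    return max(channels, key=channels.get)

normal = {"implicit_self_model": 1.0, "chair_percept": 0.4}
salvia = {"implicit_self_model": 0.05, "chair_percept": 0.4}  # self-input suppressed

print(esm_binds_to(normal))  # 'implicit_self_model': the simulated subject is you
print(esm_binds_to(salvia))  # 'chair_percept': the simulated subject is the chair
```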

The psychedelic therapy prediction

Anosognosia is a condition in which stroke patients are unaware of their own paralysis. They genuinely don’t know they can’t move their arm. This is an ESM failure — the self-model hasn’t updated to reflect the damage.

FMT predicts that psychedelics — which increase neural criticality — should be therapeutically effective for anosognosia, because they push the system into the dynamical regime where the self-model can restructure.

This prediction falls directly out of the architecture. It isn’t bolted on. Surprising cross-domain predictions are how you know a theory is doing real work.

The criticality prediction (confirmed)

As described above: consciousness requires edge-of-chaos criticality. Made in 2015, confirmed across 140 datasets in 2025 by researchers working independently.

What FMT is not

FMT is not a claim that consciousness is “just” a simulation — as if “just” diminishes it. The simulation is the most sophisticated thing the brain does. It’s everything you’ve ever experienced.

FMT is not dualism. The simulation runs on neural substrate. It’s not a separate substance — it’s a different level of description of the same physical system.

FMT is not a claim that all self-simulating systems are conscious. The dynamical regime matters. A thermostat has a crude self-model (it tracks its own temperature) but doesn’t operate at criticality. Consciousness requires the full architecture: both implicit and explicit modeling, both world and self, in the right dynamical regime.

And FMT is not proven. It’s a theory with predictions, some confirmed, some still awaiting testing. What it offers is precision: it tells you exactly what to look for, what to measure, and what would falsify it. In a field dominated by unfalsifiable speculation, that’s worth something.

The cosmological footnote

The same structural signature — Class 4 criticality, singularity-bounded, holographic encoding — appears at brain scale AND at cosmological scale. This is described in a separate paper (the SB-HC4A framework).

Either this is a coincidence, or self-referential computation has a preferred architecture that doesn’t care about scale.

Either way, it’s worth taking seriously. It’s the part of this work that keeps me up at night.

Where to go from here

If you want the full argument with all the technical details, the research paper is freely available on Zenodo.

If you prefer a book-length treatment written for a general audience, The Simulation You Call “I” covers the same ground with more context, examples, and the personal story behind the theory.

For more on what FMT says about AI consciousness specifically, see Can AI Be Conscious?

For a critique of the dominant alternative theory, see Why Global Workspace Theory Explains Nothing About Consciousness.


Matthias Gruber is a biomedical engineer, consciousness researcher, and author. He developed the Four-Model Theory independently over a decade of research outside traditional academia. He works in AI transformation by day and on neural architectures by night.