Are you more invested in asking "Why is this wrong?" than "Is this true?"
Intelligence can be deployed as a shield or as a lens
Do you deploy intelligence as a shield or as a lens? Most people, most of the time, don’t notice which one they’ve picked up, because phenomenologically, from the inside, the two feel the same. Both feel like, “I’m asking good questions, interrogating the facts.”
The shield is fundamentally defensive. The cognitive machinery is being used to maintain a prior, often unconscious, position, and the more sophisticated the person’s intellect, the more elusive the defense. Smarter people produce better rationalizations, which is why intelligence alone doesn’t correlate well with Reality Alignment on contested questions. Intelligence as a lens directs the same machinery outward, toward the structure of the claim itself, independent of what its truth would mean for the person evaluating it. To be intelligent as a truth seeker, you cannot bring along any of your preferred beliefs. Even one prior commitment distorts the field of view.
You can’t avoid falling into the distortions of your own cognition, but you can arm yourself with an epistemology that resuscitates you. A commitment to “no priors” is exactly that: it returns you to the open mind whenever you realize you have become intellectually defensive. It can be installed as an override, just like “I have faith.” Neither statement needs a referent.
Ideas that come at a perceived cost to your social status are what tend to trigger the defensive posture. People rarely switch into shield mode over emotionally neutral claims. It’s when the truth of a proposition would require some kind of costly update—to their identity, their publicly stated position, their sense of competence, their place in a social hierarchy—that intelligence gets conscripted into defense. And this happens pre-cognitively—before their conscious intelligence “comes online.” It’s a reaction to social cues, not reason. The reason is post hoc. The tell is that the quality of their reasoning suddenly drops in a very specific way: it becomes locally clever but globally incoherent, because it’s optimizing for a conclusion rather than following a process.
People use intelligence either instrumentally (to protect a position) or epistemically (to locate the truth), and the switch between the two is almost always status-driven and unconscious.
Meta-rationality
This points to a concept I call meta-rationality: the conscious or unconscious constraint over what rationality is allowed to optimize. Meta-rationality constrains expected-value calculations by designating certain commitments, values, or modes of being as boundary conditions, placed outside metabolic optimization rather than treated as variables to be optimized. Example: “I’m not a drinker” vs. “I’m not having a drink tonight.”
“I’m not having a drink tonight” places the decision inside the optimization loop—which means every new piece of information (the social pressure, the rough day, the friend who just ordered a bottle) gets fed back in as a variable, and the expected-value calculation runs again, and again, and eventually the math comes out differently because the math was always going to come out differently under enough pressure. The decision has to be re-derived from scratch in every new context, and “willpower” is just the name we give to repeatedly arriving at the same answer under worsening conditions.
“I’m not a drinker” removes the variable from the equation entirely. The calculation never runs. There’s nothing for local circumstances to update because the commitment isn’t a derived conclusion—it’s a constraint on what conclusions are reachable. It’s an early termination from the decision loop. And that’s why it’s so much more effective in practice, despite being less “rational” in a narrow sense. A pure expected-value reasoner would say you should always leave every option on the table and just calculate correctly. But that advice assumes the calculator isn’t subject to systematic distortion under pressure.
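The contrast between the two commitments can be caricatured as a toy decision model. This is a sketch only; the functions, variables, and numbers below are illustrative inventions, not anything from the essay:

```python
def decide_in_loop(pressure: int, mood: int, temptation: int) -> bool:
    """'I'm not having a drink tonight': the decision sits inside the
    optimization loop, so every contextual variable feeds back into a
    fresh expected-value calculation each time the question comes up."""
    expected_cost = 10 - pressure - temptation  # perceived cost shrinks under pressure
    expected_benefit = 5 + mood                 # perceived benefit grows after a rough day
    return expected_benefit > expected_cost     # under enough pressure, this flips

def decide_with_constraint(pressure: int, mood: int, temptation: int) -> bool:
    """'I'm not a drinker': the commitment is a boundary condition
    outside the loop. The calculation never runs, so no local
    circumstance can update it."""
    return False  # early termination; a constraint, not a derived conclusion

# A calm evening: both answers agree.
print(decide_in_loop(pressure=0, mood=0, temptation=0))        # → False
# A rough day with social pressure: only the in-loop decision flips.
print(decide_in_loop(pressure=4, mood=0, temptation=3))        # → True
print(decide_with_constraint(pressure=4, mood=0, temptation=3))  # → False
```

The point of the sketch is the shape, not the numbers: the in-loop version must be re-derived under worsening conditions, while the constraint version removes the variable from the equation entirely.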
So meta-rationality is the recognition that a reasoning system that can reason about everything, including its own commitments, will eventually reason itself out of any commitment that becomes locally costly. The only way to maintain certain positions is to place them outside the space of things you’re willing to reconsider. And this is a feature of good epistemics rather than a failure of them, because some decisions are better made through periodic, high-level audit than re-derived continuously under variable conditions.
“No priors” is a meta-rational commitment to epistemic openness—placed outside the optimization loop so that it can’t be locally overridden by the status-defense mechanism. You’re not deciding each time whether you care about the truth. You’ve already decided.
Most people, however, can’t afford this. Their identity and power in the world depend on load-bearing fictions that, if challenged, would exceed their cognitive and metabolic budget. So they live inside a fiction that serves their energy needs.
And the fiction is cheaper than Reality Alignment locally, in the short term, for the holder—that’s why they keep it. The shield works. It maintains their position, their identity, their comfort. The costs are externalized—onto everyone else. The rest of us now have to navigate around the distortion: the colleagues who can’t name the obvious problem, the field that stalls out because it cannot ask the right questions, the relationship that depends on a shared mythology to keep peace, the students who inherit theories that do not track reality.
Maintaining load-bearing fictions isn’t a survival strategy that deserves sympathy—it’s extractive. The person out of alignment with reality is drawing down on the epistemic commons, forcing everyone around them to subsidize their status and comfort by pretending their self-concept or worldview makes sense. They get the identity, the certainty, the social position, and the bill goes to the commons.
Which means “no priors” isn’t a luxury position—it’s basic epistemic hygiene. It’s the refusal to make everyone else pay for your comfort. And the refusal to pay for theirs with a loss of intellectual integrity.
The person “bravely” maintaining their constructed identity under pressure is actually engaged in parasitic extraction from the ecology. Each degree of distance away from reality comes at a cost. If the cost is externalized, then maintaining load-bearing fictions is a form of defection, and "no priors" is the cooperative move. It's not saintly, not aspirational, not a privilege of the leisure mind. It's the basic demand that you bear your own epistemic costs rather than passing them to everyone around you. The person who won't examine their premises is asking everyone else to live inside their distortion field—and calling that normal.
Courage is being willing to drop the fiction: absorbing the cost of alignment yourself rather than externalizing confusion as a byproduct of maintaining your status.
Like what you are reading? Stay tuned for my next book, WORLD DESTROYER’S HANDBOOK: The Thermodynamics of Human Coordination, A Unified Metabolic Theory of Human Social Behavior—coming soon.

