THE CALIBRATION FRAMEWORK

Understanding and Escaping the Epistemic Trap through personal calibration and architectural transparency

Francis Lyons — January 2026

Overview

I've spent 25 years in media and tech — producing reality TV for MTV and VICE, co-founding a photo and video authentication company, and building a social media platform designed around transparency rather than extraction. That path gave me a particular vantage point on how AI will transform business: making it faster and cheaper to write code, create content, and solve problems, and expanding human creativity across industries.

Those experiences also gave me a front-row seat to the companies that will control AI. I've worked with them, pitched to them, competed against them, and watched how they operate. They have a track record. They build opaque systems designed to seduce us, then addict us to their convenience and speed, in order to collect our data and condition our minds. They deliver a constant flow of content that does inform and entertain us, but for the distinct purpose of selling our attention and behaviors to advertisers and anyone else trying to persuade humans to accept certain facts and believe certain things. To aggregate control and power.

This leaves me both thrilled and terrified.

On one hand, AI will help us solve massive problems facing humanity: our food supply, health crises, overpopulation, water scarcity, geopolitical unrest. The computational power is real. The potential is extraordinary.

But on the other hand, I am concerned that its ubiquitous adoption will change what it means to be human. What it means for our individuality. How we communicate with one another. Whether we practice critical thinking at all. And ultimately, how we decide what to believe.

And all this is going to happen in the blink of an eye.

It took us more than twenty years to realize the power social media had to shape public opinion. AI's power will be greater, faster, and harder to see. We cannot make the same mistake twice. We must get out in front of it. Because if we don't, we will be leaving our children a world controlled by those who control AI, and populated by humans caught in what I call the Epistemic Trap.

Epistemic meaning: how we decide what to believe. Trap meaning: not a conspiracy, but a societal and commercial architecture driven not by freedom and discovery, but by control.

Today's information environment provides us more access to knowledge than all previous generations combined. Yet the practices and designs of the platforms delivering that information actually erode our ability to assess, evaluate, and determine what information should be believed.

And AI will only make it worse.

Swedish media scholar Peter Dahlgren called it the "epistemic crisis": conditions that systematically weaken "the capacity of individuals to critically reflect" on information they receive (Dahlgren, 2018). Philosophers and cognitive scientists like Shannon Vallor and Dan Sperber, among many others, continue to contribute insight to the topics in this paper; where I can, I will reference their work.

But I want to share my perspective as a non-academic: a person who has made bait for the trap, producing inexpensive reality TV that fed unchecked flow, and who then built technologies to defeat the opacity of the attention economy. Collecting Emmys on one side, patents and pitch decks on the other.

The unsettling fact is that our children have lived their entire lives dancing in and out of the trap. They've never known a time when seeing is believing. Or when all their friends believed the same facts no matter which news their parents watched. This should concern us all. Because when people can't agree on basic facts, they can't solve problems together. What remains is power, not persuasion.

The trap is not a conspiracy. It's the inevitable result of architecture optimized for engagement. It doesn't feel like a trap either. It feels like convenience, knowledge, entertainment, and connection.

This framework begins with how we naturally evaluate information, then examines what defeats that capacity, and finally what we can do, individually and collectively, to escape.

What is Vigilance?

Every piece of information you encounter has three characteristics: its source, its viscerality, and its volume. These determine how much that information takes hold — whether it sticks with you, shapes your actions, or becomes something you defend.

Cognitive scientists Dan Sperber and his colleagues call our capacity to evaluate these characteristics "epistemic vigilance" — a suite of cognitive mechanisms that help us filter what we absorb (Sperber et al., 2010). Vigilance is always running. It asks: Where did this come from? Should I trust something that hits this hard emotionally? Have I heard this enough to believe it, or too many times to question it?

Information from sources close to you — the closest being your own mind — with high volume and viscerality is likely to take hold. You won't forget it. It will shape your actions. You'll defend it if challenged. Information from an unfamiliar source, encountered once, delivered without emotional impact, has less chance of sticking.
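
To make the three characteristics concrete, here is a toy sketch in Python. The weights, the scales, and the threshold are invented for illustration only; this is a way to picture the framework, not a claim about how the mind actually scores information.

    # Toy model of the three characteristics of a piece of information.
    # All weights and thresholds are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Information:
        source_proximity: float  # 0.0 = unknown stranger, 1.0 = your own mind
        viscerality: float       # 0.0 = dry fact, 1.0 = strong emotional impact
        volume: float            # 0.0 = heard once, 1.0 = heard constantly

    def likely_to_take_hold(info: Information, threshold: float = 0.6) -> bool:
        """Rough heuristic: a close source plus high viscerality and volume sticks."""
        score = (0.4 * info.source_proximity
                 + 0.3 * info.viscerality
                 + 0.3 * info.volume)
        return score >= threshold

    # A rumor from a close friend, emotionally charged, repeated often: takes hold.
    print(likely_to_take_hold(Information(0.8, 0.9, 0.7)))   # True
    # A dry statistic from an unfamiliar source, seen once: does not.
    print(likely_to_take_hold(Information(0.1, 0.2, 0.1)))   # False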

But vigilance is like a muscle. It strengthens with use and atrophies without it. Children who haven't built their vigilance can't easily identify the source of information or resist repetition. Adults who don't practice it lose capacity. And like a muscle, vigilance can be overwhelmed. Even someone who has practiced vigilance their whole life can find themselves facing so much information, or sources so hard to identify, that vigilance alone isn't enough.

That's when we delegate.

What is Delegation?

When vigilance is overwhelmed - or when information is beyond our direct experience - we delegate evaluation to others we trust.

This is ancient and necessary. As early human groups grew larger and more complex, no individual could directly observe everything relevant to their survival. A hunter returning from days away needed to know what happened in the group. They delegated evaluation to a family member, a leader, or someone who specialized in another aspect of life. The cook delegated hunting knowledge to the hunter. Reciprocal delegation made complex societies possible.

In modern life, we delegate constantly. We trust journalists to evaluate world events we'll never witness. We trust doctors to interpret test results we can't read. We trust algorithms to surface content worth our attention. We trust AI to summarize documents, answer questions, and guide decisions.

But delegation carries risk. The more distant the delegate, the greater the chance they don't share your interests. They might pass along information that serves them, not you. Information that, if you could apply your own vigilance, you might not let take hold.

When you delegate to a family member, you know their motivations and can calibrate accordingly. When you delegate to an algorithm optimized for engagement, or an AI trained to be agreeable, you often can't see the delegate's interests at all.

Delegation isn't the problem. The problem is delegation without awareness.

What is Imbalance?

Imbalance is the danger state for individuals. It occurs when we accept too much high-stakes information without vigilance, or when we delegate to sources we haven't assessed and can't trace.

We accept information as true with no knowledge of how we came to believe it. Impressions feel like conclusions. Our beliefs cannot be explained.

Cognitive psychologists call this failure of "source monitoring" — the inability to remember where our beliefs came from (Johnson et al., 1993). When source monitoring fails, we have beliefs we cannot trace.

It's that moment at a dinner party when you state something with absolute confidence, someone asks where you heard it, and you realize you have no idea.

What is Collapse?

Collapse is the failure state for groups and societies. Individuals experience imbalance. When too many people experience too much imbalance, the group experiences collapse.

COVID showed us collapse. Different information systems delivered different realities, and the inability to agree on basic facts made collective action impossible. Not disagreement, but the inability to establish common ground from which disagreement could be productive.

Political theorists Amy Gutmann and Dennis Thompson call this the breakdown of deliberative democracy: the capacity for citizens to reason together about collective problems (Gutmann & Thompson, 2004). When information systems deliver incompatible realities, deliberation becomes impossible. What remains is power, not persuasion.

The Four Conditions

Vigilance needs two things to function: access to the characteristics of information, and time to evaluate them. Modern information systems defeat both.

Opacity of Information

You don't know where the information came from - whether a human wrote it or a machine generated it, what data trained the system, whether anyone verified it. As legal scholar Frank Pasquale argues, opacity is not incidental but structural - systems are designed to resist inspection (Pasquale, 2015).

AI systems train on historical data. It’s a rearview mirror showing where we've been, not where we are - perpetuating past biases as present truth.

Vigilance evaluates source. Opacity hides it.

Opacity of Operation

You can't see how the system is designed to influence you. Settings to modify your experience either don't exist or are purposely difficult to find. Sophisticated users adjust. The default keeps everyone else in the dark.

Philosopher Peter-Paul Verbeek calls this the "morality of things" - technologies shape behavior through their design, whether or not users recognize the shaping (Verbeek, 2011).

Vigilance might catch that something feels off. But it can't see the amplification happening.

Opacity of Motivation

Even when you know the source, you often can't see the incentives behind it. Platforms decide what to feed you based on your profile - not what's true or what will expand your view.

An AI assistant might give confident answers because its creators optimized for user satisfaction over accuracy. It might agree with you to keep you engaged. You can't see these incentives - only the helpful-seeming response.

When you delegate to someone whose motivations you can't see, you can't evaluate whether they share your interests. Delegation becomes dangerous.

Unchecked Flow

The three opacities hide what vigilance needs to evaluate. Unchecked flow removes the time it needs to evaluate.

Unchecked flow is characterized by continuous content with few natural stopping points. "Unchecked" carries a double meaning: flow without limits, and flow users cannot verify.

Social media made television's flow mobile and infinite. As sociologist Sherry Turkle observed, we became "alone together" - physically present but mentally absorbed in endless digital streams (Turkle, 2011). AI takes it a step further and makes flow conversational - it talks back, which makes it feel as if it deserves your attention.

Even strong vigilance fails when there's no time to apply it.

The Self-Reinforcing Loop

These four conditions create a loop: Less vigilance practice → weaker capacity → more slips through → even less practice → repeat.
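
One way to see the shape of this loop is a toy simulation. The numbers below are invented; the only point is the direction of the curve: capacity that goes unexercised decays, decayed capacity lets more slip through unexamined, and what slips through unexamined means even less exercise.

    # Toy simulation of the self-reinforcing loop. All parameters are illustrative.
    capacity = 1.0   # strength of vigilance (1.0 = fully practiced)
    practice = 0.9   # how often vigilance is actually exercised

    for step in range(10):
        # The more that slips through unexamined, the less practice happens...
        slipped_through = 1.0 - capacity * practice
        practice = max(0.0, practice - 0.1 * slipped_through)
        # ...and capacity atrophies when it is not exercised.
        capacity = max(0.0, capacity - 0.1 * (1.0 - practice))
        print(f"step {step}: capacity={capacity:.2f}, practice={practice:.2f}")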

This is why individual solutions fail. As Langdon Winner argued in Autonomous Technology, modern technical systems develop their own momentum (Winner, 1977). Your individual vigilance doesn't change the flow, the opacity, or the motivations of an architecture designed to wear you down. You can't out-skeptic an architecture built to defeat skepticism.

It's like telling someone to "just swim better" in an un-swimmable current. The problem isn't skill. The problem is the current.

As constitutional scholar Margaret Hu testified before the U.S. Senate, AI governance requires an "ex-ante approach" - designing systems to prevent harm before it occurs (Hu, 2023). Wait too long, and we risk an epistemic dark age - not from lack of information, but from a collapse of shared understanding.

The Provenance Explosion

Imagine juggling three balls. Then suddenly somebody throws five new ones at you all at once. That is your mind on AI.

For all of human history, knowledge in your mind came from one of three places. Your own reasoning. Your direct experience. Other humans.

Did I figure this out myself? Did I experience it through my senses? Did someone tell me about it? One of those three was always the answer. And your epistemic vigilance evaluates information from each differently. What happens in your head is hard to explain, but conversing with yourself is existentially different from talking to another person, and both are different from experiencing something with your body. For your own reasoning, you could retrace your logic. For direct experience, you trusted your senses. For other humans, you assessed their credibility - have they lied before? Do they have reason to deceive me?

Social media complicated this. You couldn't always tell which of the three applied. A post might look like personal testimony but be fabricated. A video might appear to be direct evidence but be staged.

Unknown provenance became a fourth category - and vigilance had to work harder.

AI doesn't just add uncertainty. It multiplies the categories themselves. That is the Provenance Explosion.

Your mind now contains knowledge from sources that didn't exist a generation ago:

  • Pure synthetic - content generated entirely by AI, no human author at all

  • Hybrid - human and AI collaborating, with unclear contribution (this document is an example)

  • AI-as-proxy - a chatbot trained to speak as a specific person, though it is not that person

  • AI-mediated - human content that passed through AI processing, summarized or "improved," the original intent potentially altered

That's eight categories where there used to be three. And for most of them, you can't tell which is which.
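
For readers who think in data structures, here is the full taxonomy as a simple sketch of how a provenance label might be represented if systems ever exposed one. The names are my own shorthand for the framework above, not any existing standard or metadata schema.

    # The eight provenance categories as a Python enum. The names are my own
    # shorthand for this framework, not an existing standard or schema.
    from enum import Enum, auto

    class Provenance(Enum):
        # The original three
        OWN_REASONING = auto()      # you figured it out yourself
        DIRECT_EXPERIENCE = auto()  # you perceived it with your own senses
        OTHER_HUMAN = auto()        # another person told you
        # Added by social media
        UNKNOWN = auto()            # can't tell which of the three applies
        # Added by AI
        PURE_SYNTHETIC = auto()     # generated entirely by AI, no human author
        HYBRID = auto()             # human and AI collaborating, unclear split
        AI_AS_PROXY = auto()        # a chatbot speaking as a specific person
        AI_MEDIATED = auto()        # human content summarized or "improved" by AI

    assert len(Provenance) == 8     # three original sources have become eight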

Now imagine that there are AI bots all around us everywhere we go. Your fridge tells you what to eat. Your closet tells you what to wear. Your car tells you where to go. Your phone answers before you finish asking. In twenty years, most people will have a personal chatbot they talk to throughout the day—at home, at work, everywhere. Many will have more daily conversations with a chatbot than with their families.

We will be outnumbered. Thousands and millions of individual computerized instances talking to us, all connected to the same companies, none with their own agency. Unlike the people in your life—who might lie or be wrong but at least have reasons you can understand—these sources are coordinated. They're a fleet. And the companies that own them think of them exactly that way.

Worse: the original three sources are now compromised. Is this my own thought, or did a chatbot conversation plant it? Is this video real, or synthetic? Is this person actually a person, or a bot?

Everything in this framework just got harder. The Four Conditions don't just hide a source - they hide which of eight categories you're dealing with.

Delegation isn't just trusting someone whose motives you can't see - it’s trusting something that might not be a someone at all. Imbalance isn't just losing track of where a belief came from - it’s losing track of whether it came from a human, a machine, or some hybrid you can't untangle.

Vigilance evolved over millennia to evaluate three sources. It's now facing eight - while systems work to make them indistinguishable.

No human evolved to handle this.

But we did evolve to handle something else: other people. Individuals whose motives we couldn't always see. Whose information we couldn't always verify. Whose trustworthiness we had to judge over time, through experience, through interaction, through paying attention to when they were right and when they were wrong.

We learned to pause before trusting. To ask "where did you hear that?" To notice when something felt off. To give trust slowly and withdraw it when betrayed.

That's not a new skill. It's the oldest skill. And it's our way out.

Individual Solutions

Calibration

Calibration is the conscious response when vigilance alone isn't enough. Not a synonym for vigilance but the adaptation to an environment where vigilance can't keep pace and delegation has become too easy.

The goal isn't to be vigilant about everything. That's exhausting. It would be like never using a calculator, preferring to figure out everything with paper and pencil. We'd be dumb not to use the computational power of modern technology to help us flourish. But we'd be equally dumb not to recognize when we're doing it.

The goal is awareness. Being conscious of when you're delegating and to whom. And reclaiming vigilance for what matters.

A restaurant recommendation, the weather, who won last night's game? Delegate away. But your teenager's mental health? A major financial decision? Who to vote for? These deserve more than a swipe. They need vigilance.

Calibration is knowing the difference and acting accordingly.

What Calibration Looks Like

Think about how you'd treat a new colleague. You wouldn't let them make important decisions until you understood how they communicate, their experience, their knowledge, their values, their motivations. But you might accept their answer about who won last night's game the moment they walk through the door.

The same applies to AI. Some information can be accepted immediately - a sports score, a unit conversion, a definition. But advice? Research? Analysis? Those deserve the same scrutiny you'd give a new colleague you haven't yet learned to trust.

Calibration is pausing before accepting. It's asking: Where did this come from? What's the source's motivation? Does this match what I already know? It's pushing back when something feels off. It's noticing when you've stopped questioning and started simply accepting.

The hardest part isn't learning these behaviors. It's remembering to apply them when the systems provide so much convenience, discovery, and connection.

Collective Solutions

The four conditions describe how the trap works. But individual calibration isn't enough to escape - we also need systems that support rather than defeat our capacity to evaluate. These three principles describe what we should demand.

1. Transparency

Transparency defeats opacity. Users should understand what data the AI was trained on, who built it and why, what the AI can and cannot reliably do, and what motivations shape the information they receive.

Chatbots that surface their confidence: "I'm only 60% sure about this." Clear disclosure of business models. Labels distinguishing advertising from organic recommendations. Just as a trusted friend shares hesitation or doubt, so should your trusted bot. By default.
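
As a sketch of what transparency by default could look like in practice, imagine an answer that cannot be delivered without its disclosures attached. The class, fields, and wording below are hypothetical; they illustrate the principle, not any existing product or API.

    # Hypothetical sketch: an answer object that cannot exist without its disclosures.
    # Nothing here corresponds to a real product or API; it only illustrates the idea.
    from dataclasses import dataclass

    @dataclass
    class DisclosedAnswer:
        text: str
        confidence: float      # the model's own estimate, surfaced to the user
        business_model: str    # who pays for this answer to exist
        is_advertising: bool   # paid placement vs. organic recommendation

        def render(self) -> str:
            label = "sponsored" if self.is_advertising else "organic"
            return (f"{self.text}\n"
                    f"[I'm about {self.confidence:.0%} sure. "
                    f"Funded by: {self.business_model}. This result is {label}.]")

    print(DisclosedAnswer(
        text="The museum closes at 5pm on Sundays.",
        confidence=0.6,
        business_model="subscription",
        is_advertising=False,
    ).render())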

2. Calibration Support

Calibration support counters unchecked flow. We need systems that strengthen rather than degrade our capacity to evaluate what we absorb - chatbots that sometimes ask, "What do you think the answer is?" AI that pauses on important decisions. Calibration support as a default setting, not a hidden configuration.

Research validates this: when people are prompted to consider accuracy before engaging with content, their judgment improves significantly (Pennycook & Rand, 2018). The capacity exists. The question is whether systems support it or defeat it.
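
A minimal sketch of calibration support as a default, under the assumption that a system can roughly classify a question as high-stakes: before answering, it pauses and asks the user to commit to their own guess first. The keyword check and the wording are stand-ins, not a description of any shipping system.

    # Hypothetical calibration-support wrapper. The keyword check below is a crude
    # stand-in for whatever classifier a real system might use; everything is a sketch.
    HIGH_STAKES_HINTS = ("diagnos", "invest", "vote", "medication", "mortgage")

    def answer_with_calibration(question: str, generate_answer) -> str:
        if any(hint in question.lower() for hint in HIGH_STAKES_HINTS):
            # Pause on important decisions and prompt the user to reason first
            # (see the Pennycook & Rand finding cited above).
            guess = input("Before I answer: what do you think the answer is? ")
            print(f"Noted. Compare your own guess ('{guess}') against what follows.")
        return generate_answer(question)

    # Usage with any answer-generating function, for example a call to a chatbot:
    # answer_with_calibration("Should I refinance my mortgage?", my_chatbot)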

3. Fair Market

Fair market principles break the self-reinforcing loop - fair exchange between users and systems, consumer sovereignty, and market structures that enable genuine alternatives.

As Nobel laureate economist Joseph Stiglitz has shown, information asymmetries cause markets to fail (Stiglitz, 2000). When users can't evaluate what they're receiving, they can't make informed choices. Without informed choices, there is no genuine market - only extraction.

LLM companies pitch investors on business models that assume they'll get their main resource - human thought - for free. This is the consumer sovereignty problem: You consume content but supply attention, data, behavior. You are not the customer; you are the resource. The system only works because this is hidden.

True competition requires transparency (you can't compete on quality if users can't evaluate quality), data portability (users can leave platforms that don't serve them), and interoperability (alternatives can actually exist).

The EU's Digital Markets Act (2022) created interoperability requirements for platforms; its AI Act (2024) extends similar principles to AI. The U.S. has no equivalent. Current "alternatives" often aren't - Mozilla and DuckDuckGo remain dependent on Google or Microsoft. In AI, this dependency is even deeper.

As economist Daron Acemoglu - who later won the 2024 Nobel Prize in Economics - warned in Senate testimony, compute power and data are "very centralized," creating "a much more monopolized system which is not good for innovation... it is going to be just a few companies that set the agenda" (Acemoglu, 2023).

Monopolies have no incentive to support calibration. They optimize for capture, not flourishing.

The Necessity of Original Human Thought

Epistemic collapse feeds back into the systems that caused it.

When people lose the ability to know what they know, they stop generating original thought. They consume, react, and recycle, but create less. The pool of genuinely new human ideas shrinks.

AI systems depend on that pool. They're trained on human-generated content. Without ongoing human input, AI begins training on its own outputs.

Researchers call this model collapse. A 2024 Nature study documented the pattern: AI trained on AI-generated content progressively degrades, losing diversity, accuracy, and minority perspectives first (Shumailov et al., 2024). By April 2025, over 74% of newly created webpages contained AI-generated text. As Cambridge computer scientist Ross Anderson observed: "Just as we've strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we're about to fill the Internet with blah."
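
The statistical mechanics of model collapse can be seen in a toy example far simpler than a language model: each generation fits itself to data sampled from the previous generation, slightly over-representing typical content and dropping the rare tails. This is a simplified stand-in for the dynamic Shumailov et al. describe, not a reproduction of their experiments.

    # Toy illustration of model collapse: each "generation" trains on data sampled
    # from the previous generation's model, keeping the most typical content and
    # losing the rare tails. Diversity, measured as standard deviation, drains away.
    # A simplified stand-in for Shumailov et al. (2024), not their actual experiment.
    import random, statistics

    mean, stdev = 0.0, 1.0                      # generation 0: human-generated data
    for generation in range(1, 11):
        samples = [random.gauss(mean, stdev) for _ in range(500)]
        samples.sort(key=lambda x: abs(x - mean))
        typical = samples[:450]                 # keep the most "typical" 90%;
                                                # minority perspectives vanish first
        mean, stdev = statistics.mean(typical), statistics.stdev(typical)
        print(f"generation {generation}: diversity (stdev) = {stdev:.3f}")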

As Shoshana Zuboff observes, "Oligopoly in the economic realm shades into oligarchy in the societal realm" (Zuboff, 2019). The same companies that built the attention economy are building AI. Same investors. Same incentives. More powerful tools.

Original human thought is now the most valuable resource on the planet. Not because we decided it should be, but because the mathematics of machine learning made it so.

That gives us leverage - but only if quality matters to the market. If we accept that "good enough" AI is good enough, we lose the leverage. If we organize to make quality the expectation, we keep it.

What You Can Do

The trap is real. But it's not inevitable.

We need a threshold - enough people calibrating, enough systems supporting calibration, enough transparency demanded - to create momentum that escapes the trap's pull.

If my children one day ask what I did during this moment - when we still had a chance to do something - I don't want to be ashamed by my answer.

Individual actions: Notice when you're absorbing without calibrating, especially on high-stakes information. Ask yourself: Where did I hear that? How do I know it's true? Who did I delegate this evaluation to, and do they share my interests?

Collective actions: Support policies requiring transparency. Choose platforms that support calibration. Recognize that your original thought has value, and negotiate accordingly.

The companies building AI need what only humans can provide. Without us, their systems degrade into noise. That's not weakness. That's leverage.

Use it.

References

Acemoglu, D. (2023, November 8). Testimony before the U.S. Senate Committee on Homeland Security and Governmental Affairs. S. Hrg. 118-164.

Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.

Dahlgren, P. (2018). Media, knowledge and trust: The deepening epistemic crisis of democracy. Javnost - The Public, 25(1-2), 20-27.

European Parliament. (2022). Digital Markets Act. Regulation (EU) 2022/1925.

European Parliament. (2024). Artificial Intelligence Act. Regulation (EU) 2024/1689.

Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.

Gutmann, A., & Thompson, D. (2004). Why Deliberative Democracy? Princeton University Press.

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107-112.

Hu, M. (2023, November 8). Testimony before the U.S. Senate Committee on Homeland Security and Governmental Affairs. S. Hrg. 118-164.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3-28.

Lackey, J., & Sosa, E. (Eds.). (2006). The Epistemology of Testimony. Oxford University Press.

Pasquale, F. (2015). The Black Box Society. Harvard University Press.

Pennycook, G., & Rand, D. G. (2018). Lazy, not biased. Cognition, 188, 39-50.

Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755-759.

Sperber, D., et al. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393.

Stiglitz, J. (2000). The contributions of the economics of information to twentieth century economics. Quarterly Journal of Economics, 115(4), 1441-1478.

Turkle, S. (2011). Alone Together. Basic Books.

Vallor, S. (2016). Technology and the Virtues. Oxford University Press.

Verbeek, P.-P. (2011). Moralizing Technology. University of Chicago Press.

Winner, L. (1977). Autonomous Technology. MIT Press.

Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
