The Provenance Explosion
Francis Lyons — January 2026
One of the least talked about dangers of AI is what I call the Provenance Explosion.
For most of human history, we have relied on what philosopher Dan Sperber called "epistemic vigilance": a suite of cognitive mechanisms we use to evaluate incoming information and decide how much of it will stick with us, affect our actions and future decisions, or become something we strongly defend throughout our lives. Shannon Vallor's work on "moral deskilling" describes a related phenomenon: how outsourcing judgment to systems atrophies our capacity to exercise it ourselves.
Three factors affect whether information takes hold - whether we remember it, base future decisions on it, or strongly defend it as true.
The first is repetition. The more you hear something, the more likely it is to stick, to shape your future decisions, and to become something you'll defend.
The second is viscerality — the emotional impact of the information. The more emotionally charged, the more likely it is to stick with us, and therefore the more likely we are to base decisions on it and defend it.
The third, and most fundamental, is source. The closer the source is to you — the more you have in common with it, the more likely it shares your values and interests — the more likely you are to let that information affect your decisions and beliefs. Obviously, the closest source is your own mind, your own cognition. That carries the greatest chance of being something you'll defend and base future decisions on.
Source is the most fundamental of the three because if the source is truly alien to you, or your other observations, experiences, and cognition tell you that source is not to be trusted, then any information it provides — no matter how repetitive or visceral — probably isn't going to stick or affect your future decision-making.
Your epistemic vigilance is constantly assessing every piece of information for these characteristics. But that takes time and effort. And epistemic vigilance can be overwhelmed.
That's why AI is so dangerous.
Its flow and volume of information make it hard to spend the time and effort that assessment requires. But more importantly, in the age of AI, we have doubled the categories of sources that epistemic vigilance must recognize and learn the nuances of.
What do I mean by that?
Throughout most of human history, information came from three sources: your own brain and cognition, your direct experiences and observations, and other humans. This was true even in the age of television, radio, and film — because "others" includes not just your daily interactions but what they wrote and you read, what they said and you heard. Even in traditional mass media, ultimately the source came from people: the writers, the editors, the owners of the platform.
However, with social media — over the last twenty years — a new source of information was introduced. A fourth category of provenance: unknown. You couldn't always tell which of the original three applied. A post might look like personal testimony but be fabricated. A video might appear to be direct evidence but be staged. Your epistemic vigilance had to work harder just to categorize what it was evaluating.
So now there are four sources of information that your epistemic vigilance must understand — four categories it must recognize, learn how they work, and then dive into more deeply to determine what it knows about a particular source within each category.
However, AI adds four more sources of provenance.
First: Purely Synthetic. Information generated entirely from predictive computations over data the system has compiled, or over data that other synthetic intelligences compiled and output before it. No human author at all.
Second: Hybrid. Information determined by a mix of human inputs and AI-created inputs, where the degree to which each influences the final output is unknown or opaque.
Third: AI-as-Proxy. When an AI speaks on behalf of a person. This is when you have your emails written by AI and don't look them over. When AI listens to your Zoom conference and creates a response that gets sent on your behalf. When AI produces a summary of a call and you distribute it without reviewing it. The source appears to be a person, but it isn't.
Fourth: AI-Mediated. Information that was condensed, improved, polished, or enhanced by AI. A human created it, then AI doctored it. This is when you write that email to your team in whatever fashion — perhaps free-talking, perhaps grammar-less, syntax-mistake-laden prose — and AI fixes it up before you send it. But your team doesn't know to what extent the AI altered your information. And if you're not careful, it might add nuance or tone you didn't intend.
Now consider the human mind in the evolutionary state it developed over millennia. Its epistemic vigilance evolved to handle three sources. Within the last twenty years, it suddenly had to juggle a fourth, and society's track record shows we have not done a good job of it. Now, within the last two years, four more sources have been thrown at us, almost all of them indistinguishable from one another at face value.
It's like asking a juggler of three balls to suddenly catch five new ones that look exactly alike but have varying weights and consistencies.
This is the Provenance Explosion.
It is frightening to contemplate, and difficult to imagine how future generations will deal with it.
However, there are solutions. There are guides for when to tell our epistemic vigilance to engage and when to let it rest. There are also conditions built into the information systems that deliver content to us — conditions that weaken and tax our epistemic vigilance to the point that it becomes overwhelmed.
To understand more about those conditions, and the environment that is damaging our epistemic vigilance and ability to evaluate information — to determine what is worth basing beliefs on, basing actions on, defending, and remembering — I've created a framework called the Calibration Framework. In it, I describe this process and these conditions in greater depth. But more importantly, that framework provides individual and societal behaviors and principles that we must build into our daily lives, our economies, our legal structures, and the policies of future civilizations.

