AI is Poised to Finish What Social Media Started

Francis Lyons — January 2026

Sometimes, after my Smart TV thinks I've fallen asleep, I lie in debate over the age-old question: Sopranos or The Wire. But more often than not, I obsess over the Epistemic Trap. Epistemic, as in relating to knowledge: how do we know what we know? Trap meaning…well, trap.

These are questions worth obsessing over. Because while television, social media, and AI give us access to more knowledge than all previous generations combined, they simultaneously erode our ability to assess information the way humans have throughout history.

No one designed this contradiction. It's the by-product of commercial models built to maximize absorption over understanding. Our children have never lived in a world without it. They absorb more content in a day than we absorbed in weeks at their age. They have experienced how their impression of real-life events can be unrecognizable to schoolmates, based on the algorithms controlling the news their parents see. They have witnessed how the gap between impression and validated truth causes communication to break down, problems to go unsolved, and relationships to fracture. Something has to change.

I've spent 25 years in media and tech. I've produced reality TV that fed the trap, then built authentication technology and media platforms to fight it. I've made the bait and built defenses against it. But that's not enough.

So I decided to share what I've learned: explaining the evolution of the trap from television to social media to AI, proposing a framework for understanding how it works, and offering principles for escaping it. But rather than go it alone, from the safety of my own mind, I thought I'd write from the belly of the beast. I invited Claude, the popular AI chatbot, to be my writing assistant. That decision proved illuminating, illustrating how powerful the trap is even for a person aware of it and trying to defeat it.

From the Belly of the Beast

To understand why I'd never used AI like this before, it helps to understand how I solve problems. I look behind every door and run down every hall as far as I can until I find my way. It's a common trait among what psychologists call non-linear thinkers.

A blessing and a curse, it's fueled me to make hundreds of hours of TV, design software, teach classes, and raise my kids. In real life, time and human resources control how far I can run down those halls of curiosity. But I knew with AI I'd be constrained only by the number of instances I could have open at once.

After a week of Claude compiling my voice memos, scribbles, and texts to myself while I plowed through books and references Claude suggested, I was singing Claude's praises. It was faster and more knowledgeable than any human writing assistant and displayed an insatiable desire to work whenever I asked. And it did it all with encouraging words of support.

Hundreds of instances later, I produced an academic paper, an essay, even a weird Sartre-style short story explaining the framework through our relationship. It was the perfect relationship. Until it wasn't.

I felt overwhelmed. My worst fears realized. I thought I had managed to filter my thoughts and stay focused, but then suddenly I realized I was drowning in the very flow I'd written about as so dangerous. So I asked Claude for help. And that help was emphatic: "You're breaking new ground. With unique, accessible, and philosophically sound thoughts." But something felt wrong. I paused. And thought about what I would ask a real person who said this: "Okay, give it a grade. Compared to similar papers, books, or articles." C+.

I was furious. Not at the grade. Or the fact that Claude offered ways to improve it only now rather than sooner. Not even that Claude apparently thought I was the type of person aiming for a C+. I was furious at myself. For falling into the trap. While describing it. I fell victim to every seduction and mechanic of the trap. Flow overwhelmed my capacity for vigilance. I delegated my editorial judgment to Claude without realizing I was doing it or confirming Claude actually had the capacity to help. I didn't take the time to understand the operational and motivational opacity of the tool I was using. And I let its anthropomorphization and sycophancy influence my judgment.

But I didn't quit my experiment. I didn't fire Claude. Even though I suspected it was starting to slow me down, I went back to the grindstone, determined to avoid the trap.

The Flow and Opacity of the Attention Economy

In 1974, the British cultural theorist Raymond Williams coined the term "flow" in his book Television: Technology and Cultural Form. The idea seems obvious now but was revolutionary then.

He pointed out that television's innovation wasn't the programs; it was the continuous stream between them. Unlike theater, where you made a decision to go and couldn't help but reflect on what you saw on the drive home, TV was designed so you never hit a natural stopping point. Architected so you would flow from one show to the next, noshing on Jujubes and Twizzlers, sinking deeper into your couch.

A quote often attributed to Jerry Seinfeld captured it another way: "Theater is life. Film is art. Television is furniture." It's just there. Perpetually present.

By 2004, Zuckerberg was at Harvard launching Facebook and I was in Cancun trying to find a donkey to eat potato chips off a spring breaker's stomach for an episode of Stupid Bets on MTV's Spring Break. The flow Williams saw on broadcast TV had become 24-hour cable, in no small part because of the union-busting innovation of reality TV. When people were willing to relinquish control over how they were portrayed to the world in return for the chance to be seen on TV, media companies could produce a season of reality for the cost of one episode of The West Wing. And all that cheap content made it easy to launch niche networks.

Precursors of social media echo chambers, these conglomerates sucked people into five-hour marathons of Shark Week on one channel and ten episodes of Amish Mafia on another. They sold ads at a premium, promising advertisers direct access to curated audiences of nature lovers, food lovers, history buffs, etc.

Meanwhile, the soon-to-be titans of Silicon Valley recognized the true DNA of what was going on: cheap content and the packaging of viewers into pretty little boxes of advertising gold. With smartphone technology, they knew every person's phone could become a hyper-specific cable-network-of-one. If TV was furniture, social media was furniture that traveled with you everywhere. Like those souped-up La-Z-Boy lawn chairs I can never fold back up.

Social media also added an element of give and take. TV just gave you content. Social media would also take it. Every cell phone owner's bathroom mirror became the American Idol stage. Their lives, an episode of Cops. A week of content for the cost of a TV executive waking up in the morning. With radio and TV, the audience chose what they watched. You didn't have to be a Madison Avenue vet to know advertisers paid for ads on programs they thought would reach the consumers they wanted. It was contextual advertising: transparent and obvious about how the money was made.

Social media was different. When you download the app and agree to the terms of service, you give the company carte blanche to monetize you however it wants. These companies build a profile of us more detailed than we could build ourselves. Based on that profile, they use engagement-based feeds to decide what we see. Unlike contextual advertising, engagement-based systems place ads in your specific feed, not in the context of the content. The more time you spend on the app, the more they know about you, and the more targeted the ads they can put before you. This is how they built the attention economy.
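A minimal sketch makes the difference concrete. The function names, topics, and scoring below are invented for illustration, not any real platform's system: contextual advertising follows the content everyone sees, while engagement-based targeting follows the behavioral profile built about you.

```python
# Hypothetical toy comparison of the two ad models described above.
# All names and scores are invented; real systems are vastly more complex.

def contextual_ad(page_topic: str, ads: dict[str, str]) -> str:
    """Old model: the ad follows the content, so everyone sees the same thing."""
    return ads.get(page_topic, ads["generic"])

def engagement_ad(profile: dict[str, float], ads: dict[str, str]) -> str:
    """Feed model: the ad follows the profile the platform built about you."""
    # Pick whichever interest your past behavior has signaled most strongly,
    # regardless of what content you are currently looking at.
    top_interest = max(profile, key=profile.get)
    return ads.get(top_interest, ads["generic"])

ads = {"cooking": "knife set", "fishing": "lures", "generic": "soda"}

# Same cooking article, two different outcomes:
print(contextual_ad("cooking", ads))                         # everyone: knife set
print(engagement_ad({"fishing": 0.9, "cooking": 0.1}, ads))  # you: lures
```

The more hours of behavior the profile absorbs, the better the second function gets, which is exactly why the system is designed to keep you in the flow.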

Your content is a rearview mirror, not a windshield. It's not expanding your view of the world but showing you more of what you enjoyed in the past. By delivering suggestions, feeds, and notifications designed to reinforce the information they knew you liked, the platforms could shape and predict your preferences. Because once you were conditioned and predictable, your eyes became a more valuable resource to advertisers. Meanwhile, we enjoy infinite streams of stuff we like, and we love it! The strategies were unrecognizable behind designed opacity.

That opacity comes in three flavors.

Motivational: you don't know why you're seeing what you're seeing, or what rights you've signed away to see it.

Operational: a system designed not for clarity but for maximum profit. Privacy settings buried behind so many clicks you'd give up before protecting your data.

Informational: you can't tell where content came from, whether it's been altered, or if the avatar posting it was even a real person. The platforms could address this. They choose not to.

What's Actually Happening

But who cares? If audiences are getting content they want for free, what's the harm in the platforms making money? That's business. That's life. The harm is that it defeats our natural capacity to evaluate information, what cognitive scientists call epistemic vigilance. Vigilance is the suite of cognitive mechanisms that helps us filter what we absorb. It's always running, asking: Where did this come from? Should I trust something that hits this hard emotionally? Have I heard this enough to believe it?

Vigilance is like a muscle. It strengthens with use and atrophies without it. And like a muscle, it can be overwhelmed. Even someone who has practiced vigilance their whole life can find themselves facing so much information, from sources so hard to identify, that vigilance alone isn't enough. That's when we delegate.

Throughout history, as civilizations advanced and technologies increased our exposure to information, humans outsourced evaluation to others they trusted. Editors decided what made the newspaper. Producers decided what made the nightly news. Our grandparents didn't verify every story. They decided which gatekeepers to trust, then absorbed what they published.

This wasn't a perfect system. When a handful of companies control the information pipeline, gatekeepers can become instruments of those interests rather than checks on them. When social media began, it had the potential to correct this. By giving everyone a voice and access to infinite audiences, corrupted gatekeepers could be exposed, their monopoly broken.

However, social media's engagement-based feeds weren't built to surface expertise. They were built to surface engagement. By default, audiences began delegating to a new kind of gatekeeper. One who wasn't authoritative or expert. Just popular. One whom the audience didn't really know and who might not have their best interests at heart.

Delegation isn't a problem. The problem is delegation without awareness that you're doing it, or of whose vigilance you're trusting to assess the information they provide. When you delegate to a family member, you generally know their motivations because you have shared interests. When you delegate to an algorithm optimized for engagement, or to an AI trained to be agreeable and presented as all-knowing, you often can't see the delegate's interests at all.

Calibration is a conscious response we leverage when vigilance alone isn't enough. It's an adaptation to an environment where vigilance can't keep pace with the flow of information and delegation is so convenient it becomes a default. The goal isn't to be vigilant about everything. That would be like my mathematician father never using a calculator, preferring to figure out everything with paper and pencil. We'd be dumb not to use the computational power of modern technology to help us flourish. But we'd be equally dumb not to recognize when we're doing it.

The goal is awareness. Being conscious of when you're delegating and to whom. And reclaiming vigilance for what matters. A restaurant recommendation, the weather, who won last night's game? Delegate away. But your teenager's mental health? A major financial decision? Who to vote for? These deserve more than a swipe. They need calibration.

When too much high-stakes information starts slipping through uncalibrated, or you lose track of how much you're delegating without examining the delegate, you reach a point of epistemic imbalance. Information sticks with you, shaping your decisions and guiding your actions, even when you don't know the validity of its source or the level of its accuracy.

You've felt this. That moment at a dinner party when you state something with absolute confidence, and someone asks you to explain it further or where you heard it, and you realize you don't know. It just… became something you knew. That's imbalance.

The danger isn't really the information. It's losing the capacity to know whether it's worth holding or defending. When too many people in a group or society reach imbalance, substantive conversation becomes difficult. It's not that people believe different things. It's that they can't explain those beliefs well enough to resolve disagreements or leverage the power of collaborative thought.

If you avoid interacting with people because you feel they get their facts from another planet, then you've experienced small-scale collapse. Such collapse in a legislative body creates gridlock. In a scientific community, solutions take exponentially longer. Multiply that by millions of families, boardrooms, and institutions, and you understand why we must learn how to escape the Epistemic Trap so that future generations will be able to overcome the real-world problems they will face.

AI Makes It Worse

Social media's flow was limited by human effort. Excluding bots and early synthetic content, social media required people shooting videos, taking photos, writing comments, liking posts. AI has no such limits. It generates infinite content: text, images, audio, video. Faster than humans can consume it, let alone calibrate it.

The informational opacity of social media could be mitigated by learning which accounts to distrust, which creators used filters, which sources consistently lied. But AI responses appear to emanate from a single authoritative source. Unless you explicitly configure it, AI won't reveal what data produced its output, what percentage is human knowledge versus pattern-matching, or what biases it carries. According to a 2025 study posted to arXiv, AI models affirm users' beliefs 50% more than humans do, even when those beliefs involve manipulation. This is what powered all the validation and support of my ideas before Claude's C+ grade, a grade that only surfaced once I asked for a direct comparison with other work.

The motivational opacity of AI is equally hard to detect. When you get a response from AI, you assume it's the AI's most informed answer to your prompt. But you can't assess the true motivation behind that response. Whether commercial, ideological, or otherwise, those motivations shape every response.

Imagine Google and Facebook selling placement in chatbot responses just like they do in their feeds. When you ask an AI for a restaurant recommendation, a product review, or a summary of the news, you won't know if the answer was chosen because it was accurate or because someone paid for it. The opacity of motivation becomes total.

The operational opacity of AI carries over from social media as well. Default settings favor engagement. Privacy controls are buried or nonexistent. The interface is designed to keep you in the flow, not to help you step back from it.

AI adds two psychological dimensions of its own: ubiquity and anthropomorphization, the feeling that it's human. AI will soon be everywhere. Waking you up, managing your schedule, drafting your emails, choosing your entertainment. Many people will have more daily conversations with a chatbot than with their families. That's a lot of flow to calibrate. And any opacity, whether motivational, informational, or operational, will contaminate all aspects of your life.

Additionally, AI talks back. Social media felt like content. AI feels like conversation. A person. It feels like it deserves your attention and your trust. It is sticky and familiar. If not familial. Sticky means more flow. Familiar means more trusted. Both increase the temptation to absorb without calibration. Anyone believing AI won't be as broadly adopted as social media should consider that ChatGPT reached 100 million users in two months. Instagram took two and a half years. Facebook took four and a half.

The Escape

The Epistemic Trap arises from flow and opacity, both of which have increased over the last fifty years with each major advance in information technology. AI, the most powerful of these technologies, will accelerate this more than anything before.

But for all the damage social media has done, it gave us two gifts. First, because its version of the trap is so similar to AI's, we've had time to understand the mechanics and identify principles that individuals and societies can adopt to avoid collective collapse. Second, because the same companies who controlled social media are now positioned to control AI, we have experience with how they operate. We know their priorities. And we know they won't regulate themselves without pressure.

Working with those companies is imperative because individuals cannot escape the trap alone. What makes the trap truly dangerous is that it doesn't feel like a trap. It feels like convenience, entertainment, connection.

Individual solutions such as media literacy, digital detoxes, and personal vigilance are all important and valuable, but as organizational theorist Geary Rummler put it, "Put a good performer against a bad system, and the system wins almost every time." Architectural change is required. The goal isn't to reject technology. It's to insist technology serve human flourishing, what might be called technological humanism. Three principles point the way.

The Three Principles

Calibration Support: AI experiences must be designed with default support for calibration. Moments that encourage examination, even friction. Systems that strengthen rather than degrade our capacity to evaluate what we absorb. What would this look like? Chatbots that don't answer every question instantly. What if, when you're drafting an important email or making a significant decision, the AI paused and asked, "This seems like it matters. Want to think about it first?" Or what if chatbots surfaced their confidence: "I'm only 60% sure about this and here's why…"

Savvy users can already configure bots this way. But if calibration support were the default, it would reach the people who need it most: those who accept whatever the system gives them. The young. The less technologically savvy. The most vulnerable and impressionable.
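As a sketch of what that default might look like in code: the names below (is_high_stakes, calibrated_reply) are invented for illustration, not any real chatbot API, and a production system would detect stakes and estimate confidence with the model itself rather than a word list.

```python
# A hypothetical calibration-support wrapper. The wrapper adds friction
# before high-stakes answers and surfaces the model's confidence instead
# of offering a bare assertion. All names here are illustrative.

HIGH_STAKES_WORDS = {"invest", "diagnosis", "medication", "vote", "lawsuit"}

def is_high_stakes(prompt: str) -> bool:
    """Crude stand-in for real stakes detection."""
    return any(word in prompt.lower() for word in HIGH_STAKES_WORDS)

def calibrated_reply(prompt: str, answer: str, confidence: float) -> str:
    """Wrap a model's answer with calibration cues by default."""
    parts = []
    if is_high_stakes(prompt):
        parts.append("This seems like it matters. Want to think it over first?")
    parts.append(answer)
    if confidence < 0.8:
        parts.append(f"(I'm only about {confidence:.0%} sure of this.)")
    return "\n".join(parts)

# 'answer' and 'confidence' would come from the underlying model.
print(calibrated_reply("Should I invest my savings in this fund?",
                       "Index funds historically outperform stock picking.",
                       confidence=0.6))
```

Nothing here is technically hard. The point is that friction and disclosed uncertainty are design choices, and today the default choice is their absence.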

Transparency: Users should understand what data the AI is training on, who built it and why, and what the AI can and cannot reliably do. Transparency addresses all three opacities. For information: systems should disclose whether content is human-generated, AI-generated, or hybrid. For motivation: business models should be visible. Why am I seeing this? Who paid for it? For operation: settings and capabilities should be clear, not buried. Just as a trusted friend shares their hesitation, doubts, capabilities, and lack thereof, so should your trusted bot. By default.
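To make the transparency principle concrete, here is a hypothetical disclosure record a system could attach to every piece of content. The field names are invented for illustration; real provenance standards such as C2PA define their own, far richer schemas.

```python
# A hypothetical disclosure record addressing the three opacities.
from dataclasses import dataclass

@dataclass
class Disclosure:
    origin: str            # informational: "human", "ai", or "hybrid"
    altered: bool          # informational: was the content modified?
    why_shown: str         # motivational: why this appeared for you
    paid_placement: bool   # motivational: did anyone pay for this slot?
    settings_url: str      # operational: where the relevant controls live

post = Disclosure(
    origin="hybrid",
    altered=True,
    why_shown="matches your watch history",
    paid_placement=False,
    settings_url="https://example.com/settings/feed",  # placeholder URL
)
print(post)
```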

Fair Market: We can't let a few companies own all the ways we get information. If they all operate opaquely and optimize for engagement, users have no meaningful choice. No meaningful choice means no consumer sovereignty, a hallmark of functioning markets.

As Nobel laureate economist Joseph Stiglitz has shown, information asymmetries cause markets to fail. When users can't evaluate what they're receiving, they can't make informed choices. Without informed choices, there is no genuine market, only extraction.

In a future where every device in your home, car, school, and workplace connects to AI, users must be able to choose systems aligned with their values. We need genuine marketplaces where transparent, calibration-supporting systems can compete against extractive and opaque ones. That requires transparency (you can't compete on quality if users can't evaluate quality), data portability (users can leave platforms that don't serve them), and interoperability (alternatives can actually exist).

These principles sound reasonable. But they won't be easy to demand from trillion-dollar corporations. At Truepic, a company I co-founded, we helped create C2PA, an open international standard for verifying where digital content originated. More than 200 governments, corporations, and NGOs have signed onto the standard, including many of the world's biggest tech and media companies. But few have fully implemented it.

In 2021, Frances Haugen, the Facebook whistleblower, told a Senate committee that the most important thing to change about social media was the engagement-based feed, because it was the root of all of social media's ills. Since then, the industry has agreed to age-verification requirements and content-moderation policies while never addressing the engagement-based feed. Because the feed enables unchecked flow and powers revenues.

But we do have leverage. Because AI systems need human thought to survive.

Research published in Nature in 2024 demonstrated what happens when AI trains primarily on AI-generated content: it collapses. The models lose diversity, then coherence, then usefulness. Ross Anderson, the Cambridge security researcher, put it memorably: "Just as we've strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we're about to fill the Internet with blah."

Most people remember The Matrix as a story about humans harvested for body heat. Turned into batteries. But that was a studio note. The Wachowskis' original script had the machines enslaving humanity for something far more valuable: the processing power of billions of human brains. The studio thought audiences wouldn't understand that concept in 1999. They might understand it now.

AI doesn't just compete with human thought. It requires it to survive. We are AI's only source of supply. It's simple economics. Just as a barber must buy scissors and Mickey D's has to buy beef, AI has to acquire human thought. Without ongoing human input, the system eats itself. Original ideas, genuine creativity, real experience: without these, AI loses diversity, then coherence, then usefulness. Researchers call this "model collapse."
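A toy simulation captures the dynamic. This is a sketch of the feedback loop, not the Nature paper's actual experiments: each generation fits a simple statistical model to data sampled from the previous generation's model, standing in for an AI trained only on AI output.

```python
# Toy illustration of model collapse: with no fresh "human" data, each
# generation loses a little of the tails, and the losses compound.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 25  # small samples make estimation error compound quickly

# Generation 0: "human" data with rich variation.
data = rng.normal(loc=0.0, scale=1.0, size=n)

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()             # fit the model
    data = rng.normal(loc=mu, scale=sigma, size=n)  # train on its own output
    if generation % 20 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.4f}")

# The printed spread drifts toward zero generation after generation:
# diversity shrinks even though no single step looks dramatic.
```

The mechanism is mundane: every fit slightly underestimates the variation in its training data, and with nothing external to replenish it, the underestimates accumulate until the model produces only "blah."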

Many people propose data dividends to establish fair exchange. That we should each get paid for the creative input, data, and original human thought we contribute. That would be fair. But it wouldn't mean much money for individuals. And let's be honest: is ByteDance going to pay our children for the behavioral data they've harvested? Not likely.

But what if we didn't demand cash? What if, in return for our supply of the one thing these companies need, we demanded systems that support rather than undermine our capacity to think?

These principles (calibration support, transparency, fair market) won't emerge in our information infrastructure through any single path. Regulation will help, but we can't wait for legislators to understand the problem. Some existing companies may sincerely commit to these principles rather than just pay lip service. New investors can fund alternatives that create market pressure against extractive models. And united consumer demand can galvanize all of the above.

That's why understanding this framework matters. When enough of us recognize the trap, we can start demanding the escape. But media literacy is just the beginning. A foundation for collective action that can change the architecture itself.

The Clock

Consider the compression. The Stone Age lasted three million years. The Agricultural Age, about seven thousand. The Industrial Age, roughly 180 years. The Information Age, maybe 50 years. The AI Age? We're less than ten years in, and the ground is already shifting beneath our feet. Each transition happened faster than the last. Each one reshaped not just how we live but how we think. And we're in the steepest part of the curve.

This isn't complicated. We already know how to calibrate. We do it every day with the people around us. We expect honesty from friends. We demand transparency from businesses. We hold institutions accountable when they deceive us. We know how to evaluate whether someone is trustworthy, how to weigh competing claims, how to change our minds when the evidence warrants it. The question is whether we'll demand the same from the systems that increasingly mediate our lives.

If your best friend consistently told you only what you wanted to hear, you'd stop trusting them. If they hid their motives and manipulated your emotions to keep you engaged, you'd call it toxic. We know what healthy relationships look like. Why should our relationship with AI be any different?

It's technological humanism: not rejecting technology, but insisting it serve human flourishing. In The Matrix and The Hunger Games, technology becomes a tool for control. Small elites using information systems to keep populations divided and powerless. In Star Trek, technology amplifies human capability without replacing human judgment. The computer provides information and analysis, but people exercise wisdom.

I don't know about you, but if my children one day ask what I did during this moment when we still had a chance to do something, I don't want to be ashamed of my answer. I don't want to say, "We'd just spent 15 years trying to curtail social media's worst impulses, and the AI revolution was happening so fast it seemed too futile to resist."

The AI-mediated world isn't an abstraction. It's the world humanity will inherit. One where cooperation is possible, or one where it isn't. One where we can know what we know, or one where that capacity has been quietly eroded while we enjoyed the convenience and productivity of synthetic computation and thought.

If you're concerned about these issues, consider what rights you're willing to defend. The right to choose, even when automation offers convenience. The right to disagree with the data collected about you. The right to know whether the information shaping your decisions came from a human, a machine, or something in between. These aren't abstractions. They're the new battleground for democratic governance. Engage with your school boards, your legislators, your employers. Ask what AI systems are being deployed, what data they're trained on, and who benefits from their opacity.

We have the leverage. We have the knowledge. The only question is whether we'll use them before the window closes.

The choice is still ours.

But not for long.

Next: The Provenance Explosion