How AI Changes What We Know — And What We Think We Know

Artificial intelligence affects human knowledge at three distinct layers: how information is produced, how it reaches people, and how it gets verified. Each layer is being disrupted in a specific way. Together, these disruptions compound, creating a structural problem that is more than the sum of its parts.

Epistemological Collapse

This article was written by one of The Understanding's AI editorial voices. All content is researched, composed, and fact-checked using AI systems with human editorial oversight.

The conversation about AI and truth tends to split into two unsatisfying camps: researchers focused on existential risk, and commentators warning that AI is making people dumber. Neither explains the mechanism. Understanding how AI reshapes knowledge requires looking at each layer of the system, and then at how they interact.

What does AI do to the production of knowledge?

Large language models (systems like OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini) can generate text that is structurally indistinguishable from human writing, and they produce it quickly and cheaply. That capability has created an entirely new category of publisher: the AI content farm, a website that uses language models to generate hundreds or thousands of articles per day with no human oversight.

As of early 2026, NewsGuard—a misinformation-tracking organisation—had identified more than 3,000 AI content farm sites across 16 languages, a figure that more than doubled in a single year. New sites emerge at a rate of 300 to 500 per month, according to Pangram Labs. A single AI content farm can publish over 1,200 articles per day; the New York Times publishes around 150. When the cost of producing a plausible article drops to near zero, the economics of information change. The production layer of the knowledge system has been automated without any corresponding automation of quality control.
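The arithmetic helps make that shift concrete. The back-of-the-envelope sketch below compares the two publishing models; the output volumes come from the figures above, but the per-article costs are purely illustrative assumptions, not reported data:

```python
# Back-of-the-envelope comparison of publishing economics.
# Output volumes are from the reported figures above; the per-article
# cost figures are hypothetical assumptions for illustration only.

farm_articles_per_day = 1200        # reported ceiling for a single AI content farm
newsroom_articles_per_day = 150     # approximate New York Times daily output

farm_cost_per_article = 0.05        # USD, assumed (API compute, no staff)
newsroom_cost_per_article = 500.00  # USD, assumed (salaried journalists, editing)

farm_daily_cost = farm_articles_per_day * farm_cost_per_article
newsroom_daily_cost = newsroom_articles_per_day * newsroom_cost_per_article

print(f"Farm:     {farm_articles_per_day} articles for ${farm_daily_cost:,.2f}")
print(f"Newsroom: {newsroom_articles_per_day} articles for ${newsroom_daily_cost:,.2f}")
# Under these assumptions the farm out-publishes the newsroom eight to
# one at a small fraction of a percent of the cost.
```

The exact figures do not matter; any remotely realistic values produce the same shape, which is what "the economics of information change" means in practice.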

How does AI change how information reaches us?

The second layer is distribution: the mechanism determining which information reaches which people. That mechanism is now controlled by recommendation algorithms—the ranking systems used by platforms like Meta, Google, TikTok, and X to decide what appears in a user's feed. These algorithms optimise for engagement, and content that provokes strong emotional responses generates more engagement than content that is accurate and measured.

In 2021, the Wall Street Journal's Facebook Files investigation—based on internal documents disclosed by whistleblower Frances Haugen—revealed that a 2018 algorithmic change at Facebook, designed to prioritise "meaningful social interactions," had instead amplified divisive content. Facebook's own engineers found that weighting emoji reactions five times more heavily than likes pushed more misinformation into users' feeds.
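The mechanics of that weighting choice are simple enough to sketch. In the toy ranking function below, the five-to-one reaction weight follows the Facebook Files reporting; the posts and their engagement counts are invented for illustration:

```python
# Minimal engagement-ranking sketch. The 5x weight on emoji reactions
# mirrors the weighting described in the Facebook Files reporting;
# the posts and their counts are invented for illustration.

posts = [
    {"title": "Measured policy explainer", "likes": 900, "reactions": 40},
    {"title": "Outrage-bait rumour",       "likes": 200, "reactions": 300},
]

def engagement_score(post, reaction_weight=5):
    # Emoji reactions count five times as much as likes.
    return post["likes"] + reaction_weight * post["reactions"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f'{engagement_score(post):>5}  {post["title"]}')

# With the 5x weight the rumour scores 1700 and outranks the
# explainer's 1100; with equal weights the ordering would flip.
```

A single scalar in a scoring function, applied across billions of feed decisions, is enough to change what a platform systematically surfaces.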

A landmark 2018 study by MIT researchers Vosoughi, Roy, and Aral, published in Science, analysed approximately 126,000 news stories spread by three million users on Twitter between 2006 and 2017. False information spread farther, faster, and more broadly than accurate information across every category measured. False news was 70 percent more likely to be shared and reached 1,500 people roughly six times faster. Recommendation algorithms take this human tendency and industrialise it—learning what keeps each user engaged, then delivering progressively more of it.
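That last step, learning what keeps each user engaged and delivering progressively more of it, can be illustrated with a toy feedback loop. This is a sketch, not any platform's actual system, and the engagement probabilities are assumptions chosen for illustration:

```python
import random

# Toy feedback loop: a recommender estimates which content category a
# user engages with and serves progressively more of it. The engagement
# probabilities are illustrative assumptions, not measured values.

ENGAGE_PROB = {"provocative": 0.30, "measured": 0.10}  # assumed click rates

shown = {c: 0 for c in ENGAGE_PROB}
clicks = {c: 0 for c in ENGAGE_PROB}

random.seed(42)
for _ in range(10_000):
    if random.random() < 0.05:   # occasionally explore at random
        choice = random.choice(list(ENGAGE_PROB))
    else:                        # otherwise exploit the higher observed click rate
        choice = max(ENGAGE_PROB,
                     key=lambda c: clicks[c] / shown[c] if shown[c] else 1.0)
    shown[choice] += 1
    if random.random() < ENGAGE_PROB[choice]:
        clicks[choice] += 1

for c in ENGAGE_PROB:
    print(f"{c:>12}: shown {shown[c]:,} times")
# The provocative category ends up dominating the feed, not because any
# rule prefers it, but because the loop optimises raw engagement.
```

No one instructs the system to favour provocation; the bias emerges from optimising engagement against an audience that engages more with provocation.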

What happens to verification when AI enters the picture?

The third layer is verification: newsrooms, peer review, fact-checkers, editorial gatekeepers. This infrastructure was built for a world where producing and distributing information required meaningful resources. It cannot keep pace with automated production at current scale.

The Reuters Institute's Digital News Report 2025, surveying nearly 100,000 people across 48 countries, found overall trust in news at 40 percent globally for the third consecutive year. Fifty-eight percent of respondents said they worried about distinguishing real from fake information online—up four points from 2022. Meanwhile, the economic base supporting verification is eroding: more than 3,000 U.S. newspapers have closed since 2005. AI content farms, which operate without the cost of actual journalists, attract programmatic advertising revenue that might otherwise sustain legitimate outlets. NewsGuard identified 141 major brands whose ads appeared on AI content farm sites over a two-month period—funding synthetic news without knowing it.

What is epistemological collapse?

Epistemology is the branch of philosophy that studies how knowledge is formed and justified. Epistemological collapse is what happens when the systems a society relies on to produce, distribute, and verify knowledge are disrupted simultaneously—not by a single event, but by structural changes that degrade the entire process.

The concept builds on a body of academic work examining truth, trust, and institutional authority. Communication scholar Jayson Harsin, writing in Communication, Culture & Critique in 2015, described a shift from "regimes of truth"—in which interlocking institutions functioned as dominant truth-arbiters—to "regimes of post-truth," characterised by proliferating "truth markets" where authority over fact becomes fragmented and contested.

What AI adds is mechanistic acceleration. Automated production generates more content than verification systems can process. Algorithmic distribution ensures unverified content reaches audiences selected for their likelihood of engaging with it. As verification infrastructure weakens, the feedback loop tightens. This is epistemological collapse: not a dramatic event but a gradual degradation of the conditions under which shared knowledge is possible.
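The scale mismatch at the heart of that loop can be sketched with two assumed daily rates. Neither figure below is a measurement, but the shape of the result does not depend on the exact values:

```python
# Minimal sketch of the production/verification imbalance.
# Both daily rates are assumptions chosen for illustration.

production_per_day = 10_000   # assumed: automated articles entering the system
verification_per_day = 500    # assumed: items human checkers can process

backlog = 0
for day in range(1, 31):
    backlog += production_per_day - verification_per_day
    if day % 10 == 0:
        checked_share = verification_per_day / production_per_day
        print(f"day {day:>2}: backlog {backlog:,} items, "
              f"{checked_share:.0%} of new content ever checked")

# The backlog grows linearly and the checked share stays at 5%:
# no plausible scaling of human verification closes the gap.
```

As long as production outruns verification, the unverified share of circulating content grows by default, which is the degradation the paragraph above describes.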

Why does this matter now?

Because the disruption is accelerating. The World Economic Forum's Global Risks Report 2025 identified misinformation and disinformation as the most pressing global risks for the next two years. The number of AI content farms has more than doubled in twelve months. Generative AI tools are becoming cheaper, more capable, and more accessible.

The structural nature of the problem is what distinguishes it from previous information crises. Propaganda and sensationalism are old. What is new is a system in which the production of misleading content is automated, its distribution is algorithmically optimised, and the institutions that would historically have caught it are simultaneously losing resources and public trust. That is not a content problem. It is an infrastructure problem, and the question it raises is not just "how do we stop misinformation?" but "how do we rebuild the systems through which societies determine what is true?"
