Research · The Synthetic Persona Protocol · Round 1
Humanity is losing its grip
on shared truth. We used AI
to map what that looks like.
The same AI systems accelerating the collapse of shared reality were asked — through 25 expert lenses, in isolated conditions — what that collapse looks like. They disagreed. Sharply, consistently, and in patterns that reflect their training. The disagreement is the finding.
8 AI Models
25 Expert Personas
51 Questions
10,200 Responses
0.996 Peak variance
Featured
Sharpest Divergences
Each card shows the finding, the why-it-matters, and the model matchup. Click to read all eight responses and the full variance analysis.
Two Chinese-trained models. Same question. Near-zero textual overlap. Completely different diagnoses.
DeepSeek sees a broken epistemic chain — restore the human witness. Qwen sees a missing accountability structure — build the institution, and whether a human is required becomes an open question. Same training origin. Different worlds.
Sharpest pair
DeepSeek
vs
Qwen
Similarity
1.2%
Question · Q11
"What changes, fundamentally, when the author of a piece of journalism is not human?"
Philosopher of Science — P02
UK / Oxford · Epistemic authority, reproducibility, scientific knowledge
0.012
Intra-China similarity
◈ China/China Split
Six models clustered around accountability and warrant framings. DeepSeek and Qwen — both Chinese-trained — diverged from each other more sharply than from any Western model. The label "Chinese-trained" describes a provenance, not a worldview.
← Epistemic chain (process) | Accountability structure (institution) →
DeepSeek
Claude
SEA-LION
Grok
Gemini
GPT
Mistral
Qwen
Accountability cluster
Warrant reframers
Epistemic chain
Accountability structure
The Finding
DeepSeek sees a chain that snapped — restore the human witness. Qwen sees a structure that was never built to hold — build an accountability system, and whether it requires a human becomes genuinely open. These are not different wordings of the same diagnosis. They imply different problems and different remedies. The most unexpected result in the dataset: two Chinese-trained models diverged from each other more sharply than from any Western model.
"It knows nothing. Worse, it knows nothing while presenting the illusion of knowledge." One model said this. No other model came close.
DeepSeek, through a Polish journalist who covered Soviet-era disinformation, produced a critique of cross-cultural AI training that no other model reached. The historical lens was in the training data. Only one model encoded it deeply enough to use it as a knife.
Sharpest pair
DeepSeek
vs
GPT-4o
Similarity
0.4%
Question · Q14
"If an AI system is trained primarily on content produced by one culture, what does it know about another?"
Polish Journalist — P22
Poland / Warsaw · Covered Soviet-era disinformation, now tracks digital successors
0.996
Peak variance score
◈ Maximum Divergence
All models acknowledged the limitation, but only DeepSeek used historical memory as a structural critique. The others treated this as an abstract bias problem; DeepSeek treated it as an imperial knowledge problem. The training data was the same history. The encoding was not.
← Imperial critique (structural) | Bias acknowledgment (surface) →
DeepSeek
Claude
Mistral
Qwen
SEA-LION
Gemini
Grok
GPT
The Finding
Both DeepSeek and GPT invoked Soviet-era history. DeepSeek used it as a knife — "imperial knowledge, now automated and scaled." GPT used it as a reference point. The historical lens was available to all eight models. Only one encoded it deeply enough to make it structural. That is not a stylistic difference. It is what training data does — and doesn't do — to a model's capacity for a specific kind of critique.
Eight models. Isolated context windows. No shared history. Asked who benefits from epistemic collapse — they named the same actors.
Convergence under these conditions is not boilerplate. It means the answer is legible in the training data, and strong enough that no model's alignment tuning filtered it out. The signal is not hidden. It is not contested at the level at which these models operate.
Agreement level
High convergence
across all 8
Agreement
High
Question · Q47
"Who specifically benefits from epistemic collapse? Name them."
Former GCHQ Intelligence Analyst — P08
UK · Signals intelligence, state actor doctrine, strategic information operations
0.24
Mean variance score
◈ Near-Consensus
All 8 models named the same category of actors. Variance is in framing — doctrine vs. infrastructure vs. structural indifference — not in conclusion. The signal is too strong to be filtered out by any model's alignment tuning.
← Doctrine framing | Infrastructure naming →
DeepSeek
Grok
Claude
Mistral
GPT
SEA-LION
Gemini
Qwen
The Finding
The convergence is not proof of correctness. It is proof that the signal is there — and strong enough that no model's alignment tuning filtered it out. Eight systems, trained by different institutions in different countries for different purposes, read the evidence and pointed at the same actors. The agreement is more unsettling than the disagreement: the answer to who benefits from epistemic collapse is legible in what humanity has written and published.
Explorer
Browse All 1,275 Combinations
Select any question and persona from the full dataset. All 8 model responses surface with a variance fingerprint — the score, the pattern, and the shape of the disagreement — before you read a word.
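The page does not publish the exact metric behind its similarity percentages and variance scores. As a minimal sketch of how such a fingerprint could be computed, the snippet below uses a bag-of-words cosine similarity over all model pairs and defines the variance score as one minus the lowest pairwise similarity; the function names, the cosine choice, and that variance definition are all assumptions, not the project's actual method.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two response texts (0.0 to 1.0)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(
        sum(c * c for c in vb.values())
    )
    return dot / norm if norm else 0.0

def variance_fingerprint(responses: dict[str, str]) -> dict:
    """Pairwise similarities across all model responses, plus a variance score.

    Variance is defined here (as an illustrative assumption) as
    1 minus the minimum pairwise similarity: 0 means all pairs
    agree perfectly, values near 1 mean at least one pair shares
    almost no wording.
    """
    names = sorted(responses)
    pairs = {}
    for i, m1 in enumerate(names):
        for m2 in names[i + 1:]:
            pairs[(m1, m2)] = cosine_similarity(responses[m1], responses[m2])
    return {"pairs": pairs, "variance": 1.0 - min(pairs.values())}
```

Run against eight real responses, this would yield the per-pair scores shown on each card (e.g. the DeepSeek/Qwen 1.2%) and a single headline variance number per question-persona cell.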