In 1996, a physics professor named Alan Sokal submitted a paper to Social Text, an academic journal of cultural studies. The paper, titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” proposed that quantum gravity is a social and linguistic construct. The journal published it.
Three weeks later, Sokal revealed that the paper was a deliberate hoax. He’d written it to test whether an academic journal would publish anything that sounded good and confirmed its editors’ ideological leanings. It did. The “Sokal affair,” as it came to be known, kicked off a debate about intellectual rigor in academia that lasted for years.
A Planet Money episode from February 2026 explored what’s known as the “replication crisis” in social science: the pattern where published studies can’t be reproduced when other researchers try to verify them. Economist Abel Brodeur, a professor at the University of Ottawa, has been organizing events called “Replication Games,” where teams of social scientists audit published papers by re-running the original code and data.
What they’re finding isn’t always fraud. Sometimes it’s honest errors in coding or data handling. But sometimes it’s something more uncomfortable: researchers who massaged their datasets until they got a statistically significant result. Brodeur admitted to doing exactly this himself as a master’s student. He ran analysis after analysis on data about smoking bans until he finally got a result worth publishing. He later decided to publish the more accurate (and less exciting) null result instead — and went on to build the Institute for Replication to address the problem at scale.
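What Brodeur confessed to is what statisticians call p-hacking: keep re-slicing the data and re-running tests until something crosses the p < 0.05 line. A minimal sketch makes the trap concrete (this is a hypothetical simulation on pure noise, not Brodeur’s actual smoking-ban analysis):

```python
# Illustrative sketch of p-hacking: run many tests on pure noise.
# Every "group" below is drawn from the SAME distribution, so there
# is no real effect to find -- yet some tests still come up significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

significant = []
for i in range(20):  # 20 arbitrary ways to slice the data
    treated = rng.normal(0.0, 1.0, size=100)
    control = rng.normal(0.0, 1.0, size=100)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        significant.append((i, p))

# With 20 independent tests at alpha = 0.05, the chance of at least one
# false positive is 1 - 0.95**20, roughly 64%.
for i, p in significant:
    print(f"slice {i}: p = {p:.3f}  <-- looks publishable, but it's noise")
```

Run enough arbitrary slices and chance alone will hand you a “significant” result. No fraud required, which is exactly what makes the practice so common and so hard to catch.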
Today, the packaging has gotten a lot more sophisticated, and the question “What is real?” has become even harder to answer.
The problem is older than AI
There’s a tendency to talk about AI-generated misinformation as though we were living in some golden age of accuracy before large language models arrived. We weren’t.
This problem is as old as research itself. Sokal showed that we’re willing to believe what we want to believe, even from seemingly credible sources. Brodeur’s work shows that research is sometimes quietly manipulated. And that’s to say nothing of the endless stream of disinformation on the internet.
Now consider what happens when AI enters the process, which is already our reality. In December 2025, Sam Rodriques, CEO of FutureHouse and Edison Scientific, claimed that his AI agent, Kosmos, had accomplished six months of doctoral-level research in a single 12-hour run. Rodriques walked through how the tool identified a genetic mechanism for type 2 diabetes (connecting a variant, a binding protein, and a gene involved in pancreatic function) by analyzing massive amounts of raw data that would take a human researcher far longer to sort through.
Stories like the one Rodriques shared are genuinely impressive. And it’s easy to imagine how tools like this could accelerate scientific discovery in ways that matter: drug development, disease research, climate modeling.
But the same qualities that make AI useful for research also make it dangerous. AI models hallucinate, and they present those hallucinations with the same confidence as factual information.
A Stanford RegLab/HAI study found that state-of-the-art general-purpose AI models hallucinate between 69% and 88% of the time on specific legal queries. The researchers noted that these models “often lack self-awareness about their errors and tend to reinforce incorrect legal assumptions and beliefs.”
The lack of self-awareness is the alarming part. A human researcher who massages data is making a conscious choice (even if it’s a rationalized one). A journalist who spins a story knows the angle they’re taking. AI has no clue that it’s wrong. It presents fabricated information with the exact same tone it uses when presenting accurate information.
The Sokal hoax was discovered only because Sokal himself revealed it. Academic replication errors can take years or decades to surface. AI can generate plausible-sounding misinformation instantly, at scale, with no one around to reveal the errors. The same dynamics that made research vulnerable in the first place (confirmation bias, incentive structures, lack of verification) now operate at the speed of typing into a chatbot. And systems that claim to “democratize access” also make it easier for misinformation to propagate (like the man who claimed he cured his dog’s cancer with ChatGPT).
We’re right to be skeptical
None of this means AI is useless. But it does mean the question of “what is real?” now applies to virtually every piece of information we encounter — including (maybe especially) the information that sounds the most authoritative.
Cory Doctorow is a science fiction writer and tech journalist, well known for coining the phrase “the enshittification of the internet.” He put it bluntly on Offline with Jon Favreau:
“The big problem with AI is that it’s just not real. No one’s ever lost as much money as they have on AI. AI is the losingest proposition in business in the history of the world.”
AI companies are selling a story — that AI can replace human workers — because that story is what investors want to hear. Whether or not AI can actually do the work is almost beside the point. The narrative has become as important as the product.
Companies are making claims about AI that are extraordinarily difficult to verify. When a company says “AI replaced 10 people,” what does that mean, exactly? What’s the output comparison? What’s the error rate? What’s the timeline? In most cases, we have no idea, because the data either doesn’t exist or isn’t shared. A Harvard Business Review analysis from early 2026 laid it out clearly: companies are laying off workers based on AI’s potential, not its actual performance.
The question of “what is real?” has always required effort to answer. Academic papers require peer review (which can be lacking). News stories require fact-checking (but may still carry bias). Corporate claims require scrutiny (and rarely get it). What’s changed isn’t the need for verification. It’s that the effort required has grown enormously, because AI produces information at such speed and scale. The tools for manufacturing a wholly convincing unreality have never been easier to use.
When the people making the tools say one thing, and the people using them experience something else entirely, it fuels AI’s credibility problem.
I think far too few people (and even fewer corporations) share real, tangible, honest examples of how AI has made their work better. Even in examples of scientific research, we’re right to ask, “Can those results be trusted?”
Personally, I use AI a lot. I try to share specific examples of my use cases, because I realize that I’m fighting the “AI can do everything! It’s amazing!” narrative and a proliferation of slop. But I’m also one person, and I don’t claim anything at the scale of “AI has changed my life and made my work 10,000% better.”
The best defense is the same one it’s always been: question the source, verify what you can, and be especially skeptical of claims from people who have an incentive to demonstrate a specific result. That’s the lesson from Sokal, 30 years later.