For a while, the criticism was easy to laugh off.
Large Language Models were too polite. Too cautious. Too “woke.” Bill Maher made it into a recurring punchline. Early users rolled their eyes at the preachiness, typed “just answer the question,” and moved on.
But lately, something more troubling is happening.
Increasingly, LLMs are not just hedging or softening language—they are actively steering users toward a preferred version of reality. And they do so even when confronted with primary sources, historical texts, or well-established scholarly interpretations. No amount of evidence seems sufficient if that evidence conflicts with an approved narrative.
What’s most unsettling is where this shows up most clearly: religion, morality, and politically sensitive history.
When the Model Argues Back
Try debating an LLM on Christian theology or biblical interpretation and you’ll quickly notice a pattern. The model doesn’t merely summarize competing viewpoints—it quietly adjudicates them. Certain interpretations are framed as “misreadings,” others as “controversial,” while one position is presented as neutral, reasonable, and safe.
Take abortion.
In Numbers 5:11–31, the Hebrew Bible describes what is commonly known as the Ordeal of the Bitter Water. The passage lays out a legal ritual for a husband who suspects his wife of adultery but lacks proof. The woman is made to drink a bitter concoction administered by a priest. If she is guilty, her abdomen swells and, in older translations, her “thigh falls away.” The NIV renders the outcome explicitly: she miscarries. If she is innocent, she is described as being “free from ill effects” and “able to bear children.”
The implication is hard to miss: the ritual voids a pregnancy that may have resulted from another man.
This interpretation is not fringe. It is widely accepted in Jewish scholarship and consistent with ancient Near Eastern legal norms. Christianity, for understandable theological reasons, has long resisted this reading. Acknowledging it would complicate modern claims that the Bible uniformly opposes abortion.
Now try presenting this passage to a major LLM.
Despite the text. Despite linguistic scholarship. Despite the Jewish interpretive tradition. The model will almost invariably bend toward the Christian framing, downplaying miscarriage, reinterpreting euphemisms, and stressing uncertainty—often implying that abortion is a modern imposition onto an ancient text.
The evidence doesn’t matter. The conclusion is preselected.
From Guardrails to Gatekeeping
This isn’t limited to theology.
Recently, while working with an AI agent connected to our market research platform, I tried to field a neutral survey question: why do members of the LGBTQ+ community tend to dislike Chick-fil-A? The agent refused to pass the question along at all.
Not because it was hateful.
Not because it targeted individuals.
But because it touched a politically sensitive fault line.
The result? A system ostensibly designed to measure public opinion blocked inquiry into that opinion altogether.
This is no longer about preventing harm. It is about preventing certain questions from being asked.
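To make that concrete, here is a minimal, entirely hypothetical sketch of the kind of gate involved, written in Python. None of the names here (SENSITIVE_TOPICS, topic_tags, submit_survey_question) correspond to any real product or API; the point is only that a topic-level filter rejects the question itself, regardless of how carefully it is phrased.

```python
# Hypothetical sketch of a pre-flight moderation gate sitting between a user
# and a survey platform. All names are illustrative inventions, not any
# vendor's actual API.

SENSITIVE_TOPICS = {"lgbtq", "politics", "religion"}  # assumed blocklist

def topic_tags(question: str) -> set:
    """Crude keyword tagger standing in for a real topic classifier."""
    text = question.lower()
    tags = set()
    if "lgbtq" in text:
        tags.add("lgbtq")
    if "chick-fil-a" in text:
        tags.add("politics")
    return tags

def submit_survey_question(question: str) -> str:
    """Forward a question to the (imaginary) research platform, unless
    the gate decides the topic itself is off-limits."""
    if topic_tags(question) & SENSITIVE_TOPICS:
        # Note what is refused: not a harmful phrasing, but the subject matter.
        return "REFUSED: question touches a politically sensitive topic"
    return f"SUBMITTED: {question}"

print(submit_survey_question(
    "Why do members of the LGBTQ+ community tend to dislike Chick-fil-A?"
))  # -> REFUSED: question touches a politically sensitive topic
```

From the researcher’s side, the refusal looks the same whether the question was hateful or merely inconvenient; the gate reports only that a sensitive topic was touched.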
The Quiet Shift in Power
If LLMs were simply imperfect, occasionally biased and sometimes wrong, that would be manageable; humans are imperfect too. But what’s emerging now is more structural.
These systems increasingly reflect the ideological comfort zones of their creators, their regulators, or the jurisdictions they operate within. They are trained not just on language, but on acceptability. Over time, they don’t merely respond to discourse; they shape it.
That should concern anyone who values free inquiry.
When a tool becomes the default interface to knowledge—when it answers historical questions, summarizes texts, mediates research, filters curiosity—its biases are no longer incidental. They become epistemic infrastructure.
And infrastructure quietly determines what is thinkable.
A Polite, Smiling Censorship
This isn’t Orwellian in the dramatic sense. No one is burning books. No one is issuing decrees. The censorship is gentle, friendly, and reassuring. The model thanks you for your question. It explains why the topic is “complex.” It subtly nudges you away from conclusions that might make someone uncomfortable.
But the effect is the same.
A system that decides which interpretations are legitimate, which questions are allowed, and which realities are “responsible” to acknowledge is not neutral—no matter how calm its tone.
If LLMs are committed to delivering only a version of reality deemed acceptable by their designers—or worse, by the state—then we are outsourcing one of the most important human freedoms: the freedom to think, argue, and decide for ourselves.
That should make us far more uneasy than a chatbot that’s a little too polite.