False balance — presenting two "sides" of a question as equally credible when the weight of evidence strongly favours one — is a structural tendency of AI-generated content. Models trained to avoid appearing biased will often present a "both sides" framing even when scientific consensus, documented evidence, or legal findings make this framing misleading.
Recognising AI-Generated False Balance
Signals include: "While some experts believe X, others argue Y" when X has 95%+ expert consensus. "Critics argue A; supporters counter B" when A and B are not equivalently evidenced. "The question remains debated" for questions with established scientific consensus (climate change, vaccine efficacy, evolution). These phrasings appear frequently in AI drafts because they mirror the hedging language used in balanced training data — but they misrepresent the actual state of knowledge on settled questions.
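These signal phrases are regular enough that an editor can screen drafts for them mechanically. The sketch below is a minimal illustration, not a validated detector: the pattern list is an assumption drawn from the examples above, and a human should review every hit.

```python
import re

# Illustrative false-balance signal patterns (assumption: a starting
# point drawn from the phrasings above, not an exhaustive lexicon).
FALSE_BALANCE_PATTERNS = [
    r"while some experts believe .+?, others argue",
    r"critics argue .+?; supporters counter",
    r"the question remains debated",
    r"both sides of the (debate|issue)",
]

def flag_false_balance(text: str) -> list[str]:
    """Return any signal phrases found in a draft, for human review."""
    hits = []
    for pattern in FALSE_BALANCE_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits

draft = ("While some experts believe warming is human-caused, others argue "
         "natural cycles dominate. The question remains debated.")
print(flag_false_balance(draft))
```

A hit does not prove false balance; it flags a passage where the underlying evidence should be checked before the "both sides" framing is allowed to stand.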
The Fix
For settled scientific or empirical questions: report the consensus accurately ("97% of climate scientists attribute warming to human activity") and separately note genuine remaining uncertainties, without falsely equating fringe positions with the consensus. Prompt the AI: "If this question has scientific consensus, report it accurately. Do not present fringe minority positions as equivalently credible to scientific consensus."
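One way to apply this prompt consistently is to bake it into every request as a standing system instruction rather than retyping it. A minimal sketch follows; the message format mirrors common chat-completion APIs but assumes no specific provider, and `build_messages` is a hypothetical helper that makes no network call.

```python
# Standing instruction against false balance, taken from the prompt above.
CONSENSUS_INSTRUCTION = (
    "If this question has scientific consensus, report it accurately. "
    "Do not present fringe minority positions as equivalently credible "
    "to scientific consensus."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the anti-false-balance instruction to a chat request."""
    return [
        {"role": "system", "content": CONSENSUS_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Summarise the evidence on vaccine efficacy.")
print(msgs[0]["role"])
```

Keeping the instruction in a system message means every draft is generated under the same constraint, instead of relying on the writer to remember it per prompt.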