False equivalence is a structural problem in AI-generated balanced writing. LLMs trained to present "both sides" may treat a position held by 97% of experts as equivalent to a fringe minority view, producing balanced-sounding prose that fundamentally misrepresents the actual state of the evidence. Editors who don't catch this publish credibility-damaging articles under the guise of balanced journalism.

Omniscient AI helps editors detect false equivalence by revealing how the three engines respond to specific claims. If all three engines confidently support one position and express uncertainty or disagreement about an opposing claim, that asymmetry is a meaningful signal: the draft may be giving undue weight to the minority position.
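The asymmetry check described above can be sketched in code. This is a hypothetical illustration, not the actual Omniscient AI implementation: the engine response schema, the `stance`/`confidence` fields, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch of the asymmetry signal: flag a claim pair as
# possible false equivalence when all engines confidently support one
# claim while any engine disputes or is uncertain about the other.
# The schema and threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EngineResponse:
    engine: str        # e.g. "engine_a" (illustrative name)
    stance: str        # "support" or "dispute"
    confidence: float  # 0.0 (uncertain) to 1.0 (confident)

def equivalence_flag(claim_a, claim_b, threshold=0.7):
    """Return a warning string if the engines' responses to the two
    opposing claims are asymmetric, else None."""
    def consensus(responses):
        # Consensus: every engine supports the claim with high confidence.
        return all(r.stance == "support" and r.confidence >= threshold
                   for r in responses)

    def contested(responses):
        # Contested: at least one engine disputes or is uncertain.
        return any(r.stance == "dispute" or r.confidence < threshold
                   for r in responses)

    if consensus(claim_a) and contested(claim_b):
        return "possible false equivalence: claim B may be getting undue weight"
    if consensus(claim_b) and contested(claim_a):
        return "possible false equivalence: claim A may be getting undue weight"
    return None

# Usage: three engines back claim A; one is uncertain about claim B.
claim_a = [EngineResponse("engine_a", "support", 0.95),
           EngineResponse("engine_b", "support", 0.91),
           EngineResponse("engine_c", "support", 0.88)]
claim_b = [EngineResponse("engine_a", "dispute", 0.80),
           EngineResponse("engine_b", "support", 0.40),
           EngineResponse("engine_c", "dispute", 0.75)]
flag = equivalence_flag(claim_a, claim_b)
```

The point of the sketch is the shape of the signal: a flag is raised only when confidence is lopsided across the two claims, which mirrors the editorial heuristic the text describes.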

This signal doesn't make the editorial decision; editors retain judgment. But it provides a data point that makes the false equivalence problem visible and discussable, rather than leaving it hidden inside the equally confident prose that AI generates for both positions.