Factual drift occurs when a claim is cited, summarized, and re-cited repeatedly through AI intermediaries, with each iteration introducing small semantic shifts that cumulatively distort the original meaning. A statistic that was "approximately 40%" in the primary source becomes "nearly half" in the first citation, "a majority" in the second, and "most" in the third: a drift from a specific number to a directional claim with materially different implications. Lawyers whose cases involve media-cited factual claims need to detect this drift before opposing counsel exploits it.
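To make the pattern concrete, here is a minimal sketch of how a drift chain might be modeled and checked programmatically. The `CitationNode` structure, the interpreted numeric ranges, and the overlap test are illustrative assumptions, not a description of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class CitationNode:
    source: str   # where the claim appeared
    claim: str    # the wording used at that hop
    low: float    # lower bound of the claim's plausible numeric reading
    high: float   # upper bound of the claim's plausible numeric reading

# Hypothetical drift chain for the "approximately 40%" example above.
chain = [
    CitationNode("primary study",   "approximately 40%", 0.35, 0.45),
    CitationNode("first citation",  "nearly half",       0.40, 0.50),
    CitationNode("second citation", "a majority",        0.50, 1.00),
    CitationNode("third citation",  "most",              0.60, 1.00),
]

def first_drifted_node(chain: list[CitationNode]) -> CitationNode | None:
    """Return the first hop whose numeric reading no longer overlaps the primary source's."""
    origin = chain[0]
    for node in chain[1:]:
        overlap = min(node.high, origin.high) - max(node.low, origin.low)
        if overlap <= 0:
            return node
    return None

drifted = first_drifted_node(chain)
if drifted:
    print(f'Drift at "{drifted.source}": "{drifted.claim}" excludes the original ~40%.')
```

Note the design choice: drift is judged against the primary source, not the immediately preceding hop, because each individual hop can look innocuous while the cumulative shift does not.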
Omniscient AI helps lawyers trace factual drift by verifying claims against the primary-source knowledge embedded in AI training data. If a media claim has drifted significantly from what the underlying sources actually support, disagreement among the verification engines, or elevated uncertainty, flags the drift. The lawyer can then trace the claim back to the primary source and document the drift chain, which is useful both for challenging the media citation's accuracy and for understanding how the public narrative about the case has evolved from the original facts.
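A rough sketch of what disagreement-based flagging could look like follows. The engine names, the `verify()` scoring stub, and the thresholds are all assumptions for illustration; Omniscient AI's actual interface is not shown here.

```python
import statistics

# Illustrative support scores (0 = unsupported, 1 = fully supported) that a
# hypothetical verify() call might return for the drifted claim "most".
STUB_SCORES = {"engine_a": 0.9, "engine_b": 0.3, "engine_c": 0.4}

def verify(engine: str, claim: str) -> float:
    """Placeholder: a real system would query the engine's primary-source knowledge."""
    return STUB_SCORES[engine]

def flag_drift(claim: str, spread_threshold: float = 0.3, support_floor: float = 0.5) -> bool:
    """Flag a claim when engines disagree widely or median support is weak."""
    scores = [verify(engine, claim) for engine in STUB_SCORES]
    spread = max(scores) - min(scores)
    return spread > spread_threshold or statistics.median(scores) < support_floor

print(flag_drift("most companies were affected"))  # True: spread 0.6, median 0.4
```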
In litigation involving media coverage of corporate actions, regulatory findings, or public statements, factual drift detection is a concrete litigation tool. If a plaintiff's case rests on a repeatedly cited media claim that has drifted from its primary source, documenting that drift with Omniscient AI verification evidence lays the foundation for a source-reliability challenge.
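For the documentation step itself, rendering the chain as a plain-text exhibit may be all that is needed. A minimal sketch follows, with illustrative (source, claim) pairs echoing the example above.

```python
# Illustrative (source, claim) pairs echoing the 40% example above.
hops = [
    ("primary study",   "approximately 40%"),
    ("first citation",  "nearly half"),
    ("second citation", "a majority"),
    ("third citation",  "most"),
]

def drift_exhibit(hops: list[tuple[str, str]]) -> str:
    """Render the drift chain as a numbered plain-text exhibit, primary source first."""
    lines = [f'  {i}. {source}: "{claim}"' for i, (source, claim) in enumerate(hops, 1)]
    return "Drift chain (primary source first):\n" + "\n".join(lines)

print(drift_exhibit(hops))
```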