AI-driven factual drift in evidentiary records occurs when claims are repeatedly summarized, paraphrased, and re-cited through AI intermediaries, each iteration introducing small changes in meaning. By the time a claim reaches a court filing or expert testimony, it may have drifted significantly from the primary source — not through deliberate falsification, but through the cumulative effect of AI summarization errors.

Detecting factual drift requires comparing the cited claim against the primary source and against the intermediate citations through which it traveled. Omniscient AI provides the verification framework for this comparison: run the claim as it appears in evidence through the three-engine check, then compare the engine outputs against the primary source. Significant disagreement between the claim as cited and the engine consensus grounded in the primary source is a factual drift signal.
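A minimal sketch of that comparison, with every name an assumption for illustration: `check_factual_drift`, `DriftReport`, the token-overlap `similarity` stand-in, and the 0.6 threshold are not part of any real product API, and the engine restatements are assumed to have already been produced by running the primary source through each verification engine. A real implementation would use embeddings or an entailment model for the comparison rather than string overlap.

```python
from dataclasses import dataclass
from statistics import mean


def similarity(a: str, b: str) -> float:
    """Crude token-overlap similarity in [0, 1]. A real check would use
    embeddings or an entailment model rather than string matching."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    return len(tokens_a & tokens_b) / max(len(tokens_a | tokens_b), 1)


@dataclass
class DriftReport:
    claim_as_cited: str
    engine_restatements: list[str]
    agreement: float      # mean similarity between the cited claim and the restatements
    drift_flagged: bool   # True when agreement falls below the threshold


def check_factual_drift(claim_as_cited: str,
                        engine_restatements: list[str],
                        threshold: float = 0.6) -> DriftReport:
    """Compare the claim as it appears in evidence against each engine's
    primary-source-grounded restatement; low agreement with the consensus
    is the drift signal."""
    scores = [similarity(claim_as_cited, r) for r in engine_restatements]
    agreement = mean(scores)
    return DriftReport(claim_as_cited, engine_restatements, agreement,
                       drift_flagged=agreement < threshold)


if __name__ == "__main__":
    # Hypothetical example: repeated AI summarization has hardened a hedged
    # laboratory finding into an unconditional claim about field performance.
    cited = "the device failure rate was 12 percent in field use"
    restatements = [
        "the study estimated a 12 percent failure rate under laboratory conditions only",
        "a 12 percent failure rate was observed in lab testing, not field use",
        "laboratory testing suggested roughly 12 percent failures",
    ]
    report = check_factual_drift(cited, restatements)
    print(f"agreement={report.agreement:.2f} drift_flagged={report.drift_flagged}")
```

In this sketch a single scalar threshold stands in for what would, in practice, be a reviewer's judgment call; the useful output is less the flag itself than the side-by-side record of the cited wording and the primary-source-grounded restatements.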

Lawyers who identify factual drift in opposing evidence have a specific litigation opportunity: demonstrating to the court that the opposing party's factual assertions have drifted from their claimed primary sources through AI summarization undermines the evidentiary weight of those assertions. Courts that have already encountered AI citation errors find this line of argument increasingly familiar, making it more accessible than it was three years ago.