When three large language models (ChatGPT-4o, Perplexity Sonar Pro, Google Gemini) are asked to verify the same factual claim, their patterns of agreement and disagreement reveal important properties of their training data, retrieval systems, and factual reasoning capabilities. Omniscient AI processes millions of claim verifications monthly, generating a dataset of LLM convergence patterns that is uniquely valuable to AI researchers.
Research Applications
Academics can use Omniscient AI's research partnership programme to access anonymised aggregate verification data for studies of: LLM hallucination patterns by claim type, topic domain, and model vintage; inter-engine agreement rates as a function of training data recency; systematic biases in LLM factual recall across demographic and geographic dimensions; and the relationship between training data representation and verification accuracy. Omniscient AI has published its own benchmark dataset for research access and actively collaborates with university journalism and AI research programmes.
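To illustrate what an inter-engine agreement analysis might look like, here is a minimal sketch that computes pairwise agreement and three-way consensus rates over a set of verdicts. The verdict labels and data are hypothetical, invented for illustration; they are not drawn from Omniscient AI's actual dataset or API.

```python
from itertools import combinations

# Hypothetical verdicts ("true" / "false" / "unverifiable") from three
# engines on the same five claims; values are illustrative only.
verdicts = {
    "ChatGPT-4o":           ["true", "true", "false", "true", "unverifiable"],
    "Perplexity Sonar Pro": ["true", "false", "false", "true", "true"],
    "Google Gemini":        ["true", "true", "false", "false", "unverifiable"],
}

def pairwise_agreement(a, b):
    """Fraction of claims on which two engines return the same verdict."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Agreement rate for each pair of engines.
for (name_a, v_a), (name_b, v_b) in combinations(verdicts.items(), 2):
    print(f"{name_a} vs {name_b}: {pairwise_agreement(v_a, v_b):.0%}")

# Fraction of claims on which all three engines agree.
n_claims = len(next(iter(verdicts.values())))
consensus = sum(len(set(vals)) == 1 for vals in zip(*verdicts.values())) / n_claims
print(f"Three-way consensus: {consensus:.0%}")
```

In practice, a study of this kind would replace the simple match rate with a chance-corrected statistic such as Cohen's or Fleiss' kappa and stratify the results by claim type and topic domain, as described above.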