The Taxonomy of False Information

Media scholar Claire Wardle, in her influential 2017 framework developed for the nonprofit First Draft, introduced a three-part taxonomy of information disorder that has become a standard reference in media literacy and journalism education:

- Misinformation: false information shared without intent to cause harm, such as a relative forwarding an inaccurate health claim they believe to be true.
- Disinformation: false information created or shared deliberately to deceive or cause harm.
- Malinformation: genuine information shared with intent to cause harm, such as leaked private material or accurate facts stripped of context.

The distinction matters because appropriate responses differ. Misinformation requires education and correction. Disinformation may require legal intervention and platform enforcement. Malinformation raises different ethical questions about privacy and press freedom.

Why the Term "Fake News" Is Inadequate

The term "fake news," popularised during the 2016 US presidential election and widely used in political rhetoric, is analytically insufficient and increasingly counterproductive. Its original meaning (fabricated news stories designed to generate clicks and advertising revenue) has been diluted by political weaponisation: the label is now applied to any journalism an actor disagrees with, regardless of factual accuracy. Media literacy researchers and journalism organisations, including the Reuters Institute, the Society of Professional Journalists (SPJ), and UNESCO, now recommend avoiding the term "fake news" in favour of the more precise vocabulary of misinformation, disinformation, and malinformation.

The Scale of the Problem

The global information disorder problem is substantial and growing. The 2024 Edelman Trust Barometer found that 63 percent of respondents across 28 countries were worried about disinformation. MIT Media Lab research found that false news stories spread roughly six times faster than true stories on Twitter (now X) and diffused farther, deeper, and more broadly. The World Economic Forum's 2024 Global Risks Report ranked "misinformation and disinformation" as the world's top short-term risk, above extreme weather events and geopolitical conflict.

AI plays a dual role in this problem: AI-powered tools such as Omniscient AI help detect and counter misinformation at scale, but AI also dramatically lowers the cost and increases the sophistication of disinformation production, including synthetic media (deepfakes) and AI-generated content farms.

Detection and Countermeasures

Effective countermeasures against misinformation and disinformation operate at multiple levels:

- Platform level: content moderation, labelling, and reduced algorithmic amplification of unverified content.
- Organisational level: newsroom fact-checking, verification policies, and source authentication.
- Individual level: media literacy education, lateral reading, and use of fact-checking tools.
- Technological level: AI-powered claim detection and verification tools, deepfake detection systems, and provenance tracking.
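As one concrete illustration of the technological level, the core idea behind provenance tracking can be sketched with a cryptographic fingerprint recorded at publication time and checked later. This is a deliberately simplified toy: real provenance systems (for example, the C2PA content-credentials approach) use signed, tamper-evident manifests rather than a bare hash lookup.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest identifying the exact published bytes."""
    return hashlib.sha256(content).hexdigest()

# At publication, the publisher records a fingerprint alongside the item.
published = b"Video of the mayor's speech, 1 May 2024"
record = {"id": "item-001", "sha256": fingerprint(published)}

def verify(content: bytes, record: dict) -> bool:
    """True only if the bytes match the fingerprint recorded at publication."""
    return fingerprint(content) == record["sha256"]

print(verify(published, record))                 # True: untouched original
print(verify(published + b" [edited]", record))  # False: content was altered
```

Even a single altered byte changes the digest completely, which is what lets downstream platforms and fact-checkers distinguish an original item from a doctored copy, provided the original fingerprint was recorded by a trusted party.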