What Is Synthetic Media?

Synthetic media is any media content — text, images, video, audio, or multimodal combinations — that has been generated or substantially transformed by artificial intelligence. The category includes AI-generated images (Midjourney, DALL-E 3, Stable Diffusion), AI-generated video (Sora, Runway ML, Pika), AI-generated voice (ElevenLabs, Play.ht, Resemble AI), AI-written text (GPT-4o, Claude 3.5), and composite synthetic content that combines multiple modalities.

Synthetic media is not inherently problematic — AI-generated illustrations in news articles, synthetic voice for audio accessibility, and AI-written financial summaries are all legitimate applications. The problems arise when synthetic media is used deceptively — to present AI-generated or AI-manipulated content as authentic real-world documentation.

Legitimate Applications in Journalism

Synthetic media offers several genuinely valuable applications in journalism:

- Data visualisation and illustration: AI image generation can create high-quality illustrative content for stories where stock photography is unavailable or inappropriate.
- Accessibility: AI voice synthesis can provide audio versions of print articles for audiences with visual impairments.
- Historical reconstruction: AI image and video enhancement can restore clarity to degraded archival footage, with appropriate disclosure.
- Interactive storytelling: generative AI enables personalised interactive narratives for data journalism at a scale of individual customisation that human creators could not match.

The Integrity Risks

The primary integrity risks of synthetic media in news contexts are:

- Source deception: presenting AI-generated imagery as documentary photography.
- Quote fabrication: using voice cloning to create convincing fake audio statements attributed to public figures.
- Event fabrication: generating synthetic video of events that never occurred.
- Document forgery: using LLMs to generate plausible but false official documents.
- Scale deception: using AI to create the appearance of widespread grassroots support (astroturfing) through synthetic social media content.

Content Credentials (C2PA) as a Solution

The Coalition for Content Provenance and Authenticity (C2PA) has developed an open technical standard that embeds cryptographically signed provenance metadata directly into media files, recording who created the content, when, with what tools, and whether AI generation was involved. Backed by Adobe, Microsoft, Google, Sony, Canon, Nikon, and the BBC, C2PA is gaining adoption as the leading industry-wide response to synthetic media deception. A C2PA-verified photograph from a news photographer carries a cryptographic chain of custody that a fabricated image cannot forge: the signature binds the metadata to the exact pixels, so any alteration to either breaks verification.
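The core idea, signed metadata bound to a hash of the content, can be illustrated with a deliberately simplified sketch. Real C2PA manifests use COSE public-key signatures and X.509 certificate chains, not the shared-secret HMAC used here; the function names, the manifest fields, and the signing key are all hypothetical and chosen only to show why tampering with either the pixels or the claims breaks verification.

```python
import hashlib
import hmac
import json

# Placeholder shared secret; a real C2PA signer uses a private key and
# an X.509 certificate chain, not an HMAC key.
SIGNING_KEY = b"newsroom-signing-key"

def sign_manifest(image_bytes: bytes, creator: str, tool: str,
                  ai_generated: bool) -> dict:
    """Build a provenance manifest whose claims are bound to the image hash."""
    manifest = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature over the claims, then the claimed content hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was altered after signing
    return claims["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes..."
m = sign_manifest(photo, creator="Jane Reporter", tool="Nikon Z9",
                  ai_generated=False)
assert verify_manifest(photo, m)             # untouched image: passes
assert not verify_manifest(photo + b"x", m)  # edited pixels: fails
```

The two assertions show the tamper-evidence property: changing the image invalidates the content hash, and (as a third case) editing any claim, such as flipping `ai_generated`, invalidates the signature. C2PA achieves the same property with public-key cryptography, so verifiers do not need access to the signer's secret.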