The terms "AI-assisted" and "AI-generated" are used interchangeably in public debate, but they describe fundamentally different editorial products with different legal, ethical, and quality implications. Getting this distinction right is a prerequisite for sound AI policy in any newsroom.
AI-Assisted Journalism
AI-assisted journalism uses AI tools to support a human journalist's work: research acceleration, first-draft scaffolding, fact-checking, translation, headline optimisation, metadata generation. The journalist retains full editorial control, makes all substantive decisions, and takes responsibility for the final article. The AI is a tool, equivalent in principle to a search engine or a spreadsheet.
AI-Generated Journalism
AI-generated journalism produces the entire article autonomously from structured data or a brief, with minimal or no human editorial involvement. The Associated Press has used this approach for financial results and sports box scores since 2014, domains where the "story" is essentially a structured data readout. The ethical risks multiply sharply when AI generation moves beyond data-dense formats into analysis, interpretation, or sensitive topics.
Why the Distinction Matters
From a legal perspective, AI-generated content may not qualify for copyright protection in most jurisdictions, which generally impose a human authorship requirement. From a reputational perspective, audiences associate "AI-generated" with lower quality and higher hallucination risk, even when the content is accurate. Disclosure policies should therefore distinguish clearly between the two: "AI-assisted" signals quality enhancement of human work; "AI-generated" signals a fundamentally different editorial process.