Why Prompt Engineering Matters for Journalists

The quality of output from a large language model is heavily determined by the quality of the input. Two journalists asking the same AI the same underlying question can get dramatically different answers depending on how they phrase the query, what context they provide, and what constraints they impose. Prompt engineering is the practice of systematically designing inputs to maximise output quality — and it is now as essential a skill for modern journalists as Boolean search operators were in the pre-AI era.

Unlike programming, prompt engineering doesn't require technical expertise — it requires clear thinking, precise language, and an understanding of how LLMs process context. A journalist who understands five core prompting techniques will consistently get more accurate, useful, and reliable results from AI tools than one who uses natural language questions without structure.

Five Core Prompting Techniques for Journalists

1. Role Assignment

Begin prompts by assigning the AI a specific role that activates relevant knowledge and output style: "You are an experienced investigative journalist specialising in financial reporting. Your task is to..." This simple framing technique consistently improves output quality for domain-specific tasks by priming the model to apply appropriate vocabulary, analytical frameworks, and epistemic standards.
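As a minimal sketch (the function and variable names here are illustrative, not from any particular AI SDK), role assignment is easy to make reusable by wrapping it in a small template function:

```python
def role_prompt(role: str, task: str) -> str:
    """Prefix a task with an explicit role assignment so the model
    applies that role's vocabulary and analytical standards."""
    return f"You are {role}. Your task is to: {task}"

# Example: the role framing from the text applied to a concrete task
prompt = role_prompt(
    "an experienced investigative journalist specialising in financial reporting",
    "flag any unusual patterns in the attached quarterly filing",
)
```

Keeping the role separate from the task makes it trivial to reuse one well-tested role across many queries, or to swap roles while holding the task constant.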

2. Chain-of-Thought Instruction

For research and verification tasks, instruct the model to show its reasoning: "Think step by step. First identify what type of claim this is. Then list the evidence you would need to verify it. Then assess whether the evidence supports, contradicts, or is inconclusive about the claim. Then state your conclusion." Requiring visible reasoning tends to reduce hallucination, because the model must justify each step rather than jumping to a plausible-sounding conclusion.
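The verification steps above can be encoded once and reused for any claim. A sketch (helper names are illustrative):

```python
# The four reasoning steps from the text, kept as data so the
# sequence can be audited or edited without touching the code.
COT_STEPS = [
    "Identify what type of claim this is.",
    "List the evidence you would need to verify it.",
    "Assess whether the evidence supports, contradicts, "
    "or is inconclusive about the claim.",
    "State your conclusion.",
]

def chain_of_thought_prompt(claim: str) -> str:
    """Build a prompt that forces the model to walk through
    each verification step in order before concluding."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(COT_STEPS, 1))
    return f"Think step by step.\n{steps}\n\nClaim: {claim}"
```

Because the steps live in a list, a newsroom can standardise its verification sequence once and apply it consistently across stories.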

3. Citation Requirements

Always explicitly require citations for factual claims: "For every factual assertion you make, provide the specific source (name, URL if available, date) from which you have retrieved that information. If you cannot cite a source, state that you are uncertain rather than asserting the fact." This constraint forces the model to distinguish between what it can support with evidence and what it is generating from statistical patterns.
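A citation requirement works best as a fixed suffix appended to every factual prompt, so it is never forgotten. A sketch, assuming nothing beyond the wording in the text:

```python
# The citation constraint from the text, stored once and reused.
CITATION_RULE = (
    "For every factual assertion you make, provide the specific source "
    "(name, URL if available, date). If you cannot cite a source, state "
    "that you are uncertain rather than asserting the fact."
)

def with_citations(prompt: str) -> str:
    """Append the standing citation requirement to any prompt."""
    return f"{prompt}\n\n{CITATION_RULE}"
```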

4. Persona and Perspective Triangulation

For analysis tasks, get multiple perspectives by prompting the model to analyse from different viewpoints: "Analyse this claim from three perspectives: (1) a sceptic who doubts it, (2) a proponent who believes it, and (3) a neutral fact-checker." This technique surfaces counterarguments and contextual nuance that single-perspective prompts miss.
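The three-perspective structure above can likewise be templated. A sketch (perspective wording taken from the text; the function name is illustrative):

```python
# The three viewpoints named in the text.
PERSPECTIVES = [
    "a sceptic who doubts the claim",
    "a proponent who believes it",
    "a neutral fact-checker",
]

def triangulated_prompt(claim: str) -> str:
    """Ask for the claim to be analysed from each perspective in turn,
    with each analysis clearly labelled."""
    views = "\n".join(f"({i}) {p}" for i, p in enumerate(PERSPECTIVES, 1))
    return (
        "Analyse this claim from each of the following perspectives, "
        f"labelling each section:\n{views}\n\nClaim: {claim}"
    )
```

The list can also be extended, e.g. adding a domain expert or an affected stakeholder, without changing the prompt-building code.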

5. Output Format Specification

Specify exactly the format you want: "Return your response in the following format: CLAIM: [the exact claim being assessed] | VERDICT: [True/False/Mostly True/Mostly False/Opinion/Unverifiable] | CONFIDENCE: [High/Medium/Low] | EVIDENCE: [list of supporting evidence with sources] | CAVEATS: [any important limitations]." Structured output prompts eliminate ambiguity and make AI outputs easier to scan, compare, and cite.
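One payoff of the pipe-delimited format above is that responses become machine-readable. A minimal parser sketch (it assumes the model followed the format and that field values themselves contain no "|" characters):

```python
EXPECTED_FIELDS = {"CLAIM", "VERDICT", "CONFIDENCE", "EVIDENCE", "CAVEATS"}

def parse_verdict(response: str) -> dict:
    """Split a 'CLAIM: ... | VERDICT: ...' response into a dict.

    partition(":") splits on the FIRST colon only, so values such as
    URLs that contain colons survive intact.
    """
    result = {}
    for part in response.split("|"):
        key, _, value = part.partition(":")
        key = key.strip().upper()
        if key in EXPECTED_FIELDS:
            result[key] = value.strip()
    return result
```

With outputs parsed into dicts, a reporter can collect verdicts across many claims into a table and sort by confidence or verdict at a glance.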

Common Prompting Mistakes in Journalism Contexts

The most common journalistic prompting mistakes are:

- asking vague, open-ended questions that give the model too much latitude;
- failing to specify the desired output format, leading to inconsistent responses;
- omitting relevant context the model needs to give accurate answers;
- requesting summaries of complex topics without specifying the intended audience or purpose; and
- treating a single AI response as definitive without multi-model verification or human review.
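These mistakes are largely the inverse of the five techniques. As a purely illustrative contrast (the wording is an example, not a fixed recipe), here is a vague prompt next to one that applies all five:

```python
# A vague prompt versus one applying role assignment, chain of thought,
# citation requirements, triangulation, and output format specification.
vague = "What's going on with Company X's finances?"

structured = "\n".join([
    "You are an experienced financial journalist.",            # role
    "Think step by step: identify the claim type, list the",   # chain of thought
    "evidence needed, assess it, then conclude.",
    "Cite a specific source for every factual assertion;",     # citations
    "say you are uncertain if you cannot.",
    "Analyse from three perspectives: a sceptic, a proponent,",  # triangulation
    "and a neutral fact-checker.",
    "Return: CLAIM: ... | VERDICT: ... | CONFIDENCE: ... |",   # output format
    "EVIDENCE: ... | CAVEATS: ...",
    "",
    "Question: What's going on with Company X's finances?",
])
```

The structured version is longer to write once, but it yields answers that are scoped, sourced, multi-perspective, and directly comparable across queries.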