Clickbait headlines ("You Won't Believe What This AI Did") are engineered to exploit curiosity gaps in human psychology. LLM retrieval systems are immune to curiosity gaps — they evaluate headlines as evidence of topical relevance and content quality. A vague or sensationalised headline signals low information density and reduces citation probability.

What LLM-Preferred Headlines Look Like

The optimal headline for LLMO is a clear, specific claim or question that closely matches a query a user might type into an AI assistant. Compare: "Shocking AI Study Will Change How You Think About News" vs. "AI Reduces News Verification Time by 65%, Stanford Study Finds." The second headline tells the LLM exactly what the article contains (a quantified finding, an institution, a topic), making it far more likely to be retrieved and cited.
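
To see why specificity wins at the retrieval stage, here is a minimal sketch that scores both headlines against a plausible user query. It uses simple bag-of-words cosine similarity as a stand-in for the dense-embedding scoring a real retrieval pipeline would use; the query string and the scoring heuristic are illustrative assumptions, not a measured result.

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts using raw token counts."""
    tokens_a = Counter(text_a.lower().split())
    tokens_b = Counter(text_b.lower().split())
    shared = set(tokens_a) & set(tokens_b)
    dot = sum(tokens_a[t] * tokens_b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in tokens_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tokens_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# A query a reader might put to an AI assistant (an assumed example).
query = "how much does AI reduce news verification time"

headlines = [
    "Shocking AI Study Will Change How You Think About News",
    "AI Reduces News Verification Time by 65%, Stanford Study Finds",
]

for headline in headlines:
    print(f"{cosine_similarity(query, headline):.2f}  {headline}")

# The second headline scores higher because it shares the query's substantive
# terms (news, verification, time) that the clickbait headline lacks.
```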

The Three-Part Headline Formula

[Specific Subject] + [Specific Action/Finding] + [Context/Source]. Examples: "ChatGPT Hallucinates at 15x Higher Rate Without RAG, MIT Study Shows." "Journalists Cut Fact-Checking Time by 70% Using Multi-Engine AI Tools." "AI-Generated News Articles Down 40% After 2024 Regulation Changes." Each headline contains a specific subject, a measurable claim, and a credibility signal — the three elements LLMs prefer.
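
As a rough illustration of how those three elements can be checked mechanically, the sketch below flags each one with simple regular expressions. The specific patterns (teaser openers, a digit as a proxy for a measurable claim, attribution words like "study" or "finds") are illustrative assumptions, not a definitive quality test.

```python
import re

# Illustrative patterns only; real editorial judgement is still required.
TEASER_OPENERS = r"(?i)^(this|you|shocking|unbelievable|why you)\b"
CREDIBILITY_WORDS = r"(?i)\b(study|report|survey|finds|shows|according to)\b"

def check_headline(headline: str) -> dict:
    """Flag whether a headline contains each element of the three-part formula."""
    return {
        # Specific subject: opens with a concrete noun phrase, not a teaser word.
        "specific_subject": not re.match(TEASER_OPENERS, headline),
        # Measurable claim: contains a number (a percentage, multiplier, or count).
        "measurable_claim": bool(re.search(r"\d", headline)),
        # Credibility signal: a named study, report, or attribution verb.
        "credibility_signal": bool(re.search(CREDIBILITY_WORDS, headline)),
    }

for h in [
    "You Won't Believe What This AI Did",
    "ChatGPT Hallucinates at 15x Higher Rate Without RAG, MIT Study Shows",
]:
    print(check_headline(h), "-", h)
```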

SEO and LLMO Compatibility

Fortunately, LLM-optimised headlines are also better for SEO than clickbait. Google's helpful content guidelines explicitly penalise "exaggerated or shocking" headlines. The headline formula above satisfies both LLMO and SEO simultaneously — making it the right approach regardless of which channel you're optimising for.