AI adoption in newsrooms has produced a predictable set of mistakes, most of which have been made publicly and documented in media criticism. The following list is compiled from documented AI failures in newsrooms between 2023 and 2026, categorised by type and severity.
The Ten Most Common Mistakes
1. Publishing AI-generated content without human review. CNET, Sports Illustrated, and dozens of others published AI articles with significant factual errors in 2023–2024.
2. Having no disclosure policy. Readers who discover undisclosed AI use react far worse than readers given transparent disclosure up front.
3. Using AI for sourcing. AI-generated source suggestions are not vetted; every source must be confirmed by a human journalist.
4. Over-relying on a single AI engine. No single engine is reliable enough to serve as the sole verification source; cross-check across engines and escalate disagreements to a human (see the first sketch after this list).
5. Not auditing AI tool data handling. Sensitive documents submitted to external AI APIs may end up in those companies' training data (see the second sketch after this list).
6. Ignoring staff anxiety. Mandating AI adoption without change management creates resistance that slows adoption and damages morale.
7. Having no correction process for AI errors. AI errors need the same corrections pipeline as human errors.
8. Using AI tools for breaking news without additional verification. AI tools are trained on historical data and are unreliable for unfolding events.
9. Providing no AI literacy training before tool access. Untrained journalists make the same mistakes repeatedly.
10. Setting and forgetting AI governance policies. A policy written in 2024 is outdated by 2026; schedule regular reviews.
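On mistake 4, here is a minimal sketch of what cross-engine verification can look like, assuming each engine is wrapped in a callable that returns a verdict string. The function name `cross_check`, the verdict labels, and the stub engines are hypothetical illustrations, not any vendor's API; the point is the consensus-or-escalate logic, not the plumbing.

```python
from collections import Counter
from typing import Callable

def cross_check(claim: str,
                engines: dict[str, Callable[[str], str]],
                quorum: int = 2) -> str:
    """Return a verdict only when at least `quorum` engines agree;
    anything short of consensus is escalated to a human fact-checker."""
    verdicts = Counter(check(claim) for check in engines.values())
    verdict, votes = verdicts.most_common(1)[0]
    return verdict if votes >= quorum else "escalate-to-human"

# Demo with stub engines; in practice each callable wraps one vendor API.
engines = {
    "engine_a": lambda claim: "supported",
    "engine_b": lambda claim: "supported",
    "engine_c": lambda claim: "unverifiable",
}
print(cross_check("The mayor resigned on Tuesday.", engines))  # supported
```

The design choice worth noting is the default behaviour: disagreement never resolves automatically, it always routes to a person.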
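And on mistake 5, a sketch of a pre-submission gate, assuming the newsroom maintains an allowlist of tools whose contracts prohibit training on submitted material. The tool names and regex patterns below are placeholders that a real policy would replace with its own list.

```python
import re

# Tools with contractual no-training guarantees (hypothetical names).
APPROVED_TOOLS = {"internal-summarizer"}

# Illustrative markers of sensitive material; a real gate would be broader.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\bsource:\s*\S+"),    # named-source markers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
]

def may_submit(document: str, tool: str) -> bool:
    """Allow submission to an external AI tool only if the tool is
    approved, or the document contains no sensitive markers."""
    if tool in APPROVED_TOOLS:
        return True
    return not any(p.search(document) for p in SENSITIVE_PATTERNS)

print(may_submit("Draft story. Source: whistleblower X.", "public-chatbot"))  # False
```

A gate like this does not replace the audit itself; it simply encodes the audit's conclusions so they are enforced at the point of submission rather than remembered by each journalist.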