AI disclosure labels are rapidly becoming a standard journalism practice — and their quality varies enormously. Vague labels ("Some AI tools were used in the preparation of this article") provide minimal information and, in some cases, raise more questions than they answer. Specific, functional labels build trust by telling readers exactly what AI did and didn't do in producing the article.
The Anatomy of a Good AI Disclosure Label
An effective label specifies three things: 1) which AI tools were used, 2) for which specific tasks, and 3) what human oversight was applied. Example: "Perplexity Pro was used to conduct background research on legislative history. ChatGPT-4o assisted with initial draft structure. All facts, quotes, and conclusions were verified by [Reporter Name] and edited by [Editor Name], who take full editorial responsibility for this article." This label is specific enough to be informative, honest about the extent of AI involvement, and clear about the human oversight applied.
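The three-part anatomy above can be treated as a structured record rather than free text, which makes labels consistent across a newsroom. The sketch below is a minimal illustration, not a standard schema: the class and field names (`AIUse`, `DisclosureLabel`, `verified_by`, `edited_by`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIUse:
    """One record per AI tool used on the article. Illustrative, not a standard."""
    tool: str  # which AI tool was used (e.g. "Perplexity Pro")
    task: str  # the specific task it performed

@dataclass
class DisclosureLabel:
    uses: list          # list of AIUse records
    verified_by: str    # reporter responsible for verifying facts and quotes
    edited_by: str      # editor responsible for the final copy

    def render(self) -> str:
        """Render the structured record as reader-facing label text."""
        parts = [f"{u.tool} was used to {u.task}." for u in self.uses]
        parts.append(
            f"All facts, quotes, and conclusions were verified by "
            f"{self.verified_by} and edited by {self.edited_by}, "
            f"who take full editorial responsibility for this article."
        )
        return " ".join(parts)
```

Rendering the example label from this section would then look like: `DisclosureLabel(uses=[AIUse("Perplexity Pro", "conduct background research on legislative history")], verified_by="[Reporter Name]", edited_by="[Editor Name]").render()`. Keeping the record structured also makes the label auditable later, since each tool-task pair is stored separately.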
What Disclosure Labels Should Not Claim
Avoid labels that overstate AI involvement ("This article was written by AI") or understate it ("Minor AI assistance was used"); either misrepresents the actual workflow. Also avoid labels that make verification impossible ("AI tools were used in accordance with our editorial standards") without specifying what those standards require. The test: could a reader familiar with AI capabilities tell, from the label alone, exactly how AI was used?
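The specificity test above can be partially automated as a pre-publication check that flags known-vague phrasings and missing oversight language. This is a rough heuristic sketch under assumed phrase lists, not a real editorial tool; a human editor still makes the final call.

```python
import re

# Phrases drawn from the vague-label examples in this section;
# a real newsroom list would be longer and locally maintained.
VAGUE_PHRASES = [
    "some ai tools were used",
    "minor ai assistance",
    "in accordance with our editorial standards",
]

def flag_vague_label(label: str) -> list:
    """Return reasons a disclosure label fails the specificity test.
    An empty list means no heuristic flag fired, not that the label is good."""
    reasons = []
    lowered = label.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            reasons.append(f"contains vague phrase: '{phrase}'")
    # A specific label names a human oversight step somewhere.
    if not re.search(r"\b(verified|reviewed|edited)\b", lowered):
        reasons.append("does not name any human oversight step")
    return reasons
```

For example, `flag_vague_label("Some AI tools were used in the preparation of this article.")` returns two flags (a vague phrase and no oversight step), while the specific example label earlier in this section returns none.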