The growing use of AI tools like ChatGPT in drafting press releases, commentary, and bios is creating new challenges for journalists and PR professionals tasked with distinguishing authentic writing from machine-generated text.
Tech reporter Chris Stokel-Walker points to recurring linguistic “tells”, such as the phrase “flip the script” and formulaic structures beginning “They’re not just… they’re”. Freelance journalist Harry Wallop says spotting such stock phrases is often an instant giveaway, while Dr Roger Miles notes that AI copy frequently avoids concrete verbs in favour of abstractions, preferring “profitability” to “making money.”
Other signals include near-uniform paragraph lengths, puffed-up importance (e.g. “stands as a testament”), and overly smooth tone. Content strategist James Snodgrass likens AI’s non-committal style to a “2.2 undergraduate essay.” According to Tom’s Guide, additional hallmarks include vague generalisations, formulaic openings (“Have you ever wondered…”), and overly upbeat, jargon-filled prose that reads more like a press release than authentic commentary.
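None of the commercial detectors mentioned later publish their methods, but tells like these translate naturally into simple heuristics. As a minimal illustrative sketch only (not how Pangram or QuillBot actually work, and with a hypothetical phrase list), the following Python counts stock phrases, “not just… they’re” constructions, and how uniform paragraph lengths are:

```python
import re
import statistics

# Illustrative list of stock phrases cited as common AI "tells";
# a real checker would use a much larger, curated lexicon.
STOCK_PHRASES = [
    "flip the script",
    "stands as a testament",
    "delve into",
    "in today's fast-paced world",
]

# The formulaic "not just X... they're Y" construction.
NOT_JUST = re.compile(r"\bnot just\b.{0,80}?\bthey'?re\b", re.IGNORECASE)

def tell_report(text: str) -> dict:
    """Return crude counts of stylistic signals; high counts merely
    suggest the copy deserves a closer human read."""
    lowered = text.lower()
    phrase_hits = {p: lowered.count(p) for p in STOCK_PHRASES if p in lowered}

    # Near-uniform paragraph lengths: a low coefficient of variation
    # (stdev / mean of word counts) means suspiciously even paragraphs.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    variation = (
        statistics.stdev(lengths) / statistics.mean(lengths)
        if len(lengths) >= 2 else None
    )

    return {
        "stock_phrase_hits": phrase_hits,
        "not_just_constructions": len(NOT_JUST.findall(text)),
        "paragraph_length_variation": variation,  # lower = more uniform
    }

if __name__ == "__main__":
    sample = (
        "They're not just launching a product, they're out to flip the script.\n\n"
        "This launch stands as a testament to bold innovation.\n\n"
        "We will delve into what the results mean for the sector."
    )
    print(tell_report(sample))
```

Signals like these are noisy on their own, which is one reason such automated checks remain, at best, a prompt for closer human scrutiny.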
The issue is particularly acute in public relations. Julie Thomson Dredge of Frame PR says clients increasingly rely on ChatGPT because they lack confidence in their own writing, leaving PR professionals to rewrite AI copy into credible, human language. She warns that sending AI-generated text to journalists wastes their time, undermines credibility, and can lead to blacklisting.
To cope, newsrooms are turning to AI-detection tools such as Pangram and QuillBot, which scan text for machine-generated patterns, though they are far from perfect. Similar technology, such as SightEngine, is used to spot AI-created images.
The consensus across UK journalism and PR is clear: while AI can support writing, genuine human insight, judgement, and authenticity remain irreplaceable. The industry must balance AI's efficiency against the integrity of its content in an era of rapid digital transformation.
Created by Amplify: AI-augmented, human-curated content.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative presents recent insights into identifying AI-generated text, with references to events and publications from 2023 and 2024. The earliest known publication date of similar content is May 15, 2024, in the Genetic Literacy Project. ([geneticliteracyproject.org](https://geneticliteracyproject.org/2024/05/15/is-ai-infiltrating-scientific-publishing-rise-of-suspicious-tell-words-popping-up-in-published-papers/?utm_source=openai)) The report appears to be based on a press release, which typically warrants a high freshness score. No significant discrepancies in figures, dates, or quotes were found. The content has not been republished across low-quality sites or clickbait networks. The inclusion of updated data alongside older material suggests an effort to provide current information, though the presence of recycled material may slightly reduce the freshness score.
Quotes check
Score: 9
Notes: The direct quotes from Chris Stokel-Walker, Harry Wallop, James Snodgrass, and Dr Roger Miles are unique to this report. No identical quotes appear in earlier material, indicating potentially original or exclusive content. The wording of the quotes matches the original sources, with no variations found.
Source reliability
Score: 7
Notes: The narrative originates from Press Gazette, a reputable UK-based publication focusing on journalism and media. This adds credibility to the report. However, the report's reliance on a press release introduces some uncertainty, as press releases can present information in a biased or promotional manner. The individuals mentioned in the report—Chris Stokel-Walker, Harry Wallop, James Snodgrass, and Dr Roger Miles—are all verifiable professionals with public profiles, lending further credibility to the content.
Plausibility check
Score: 8
Notes: The claims about identifying AI-generated text are plausible and align with current discussions in the field. The narrative is consistent with other reputable outlets covering similar topics. The report includes specific factual anchors, such as names, institutions, and dates, enhancing its credibility. The language and tone are appropriate for the UK audience and the subject matter. There is no excessive or off-topic detail, and the tone is neither unusually dramatic nor vague.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The report provides timely and original insights into identifying AI-generated text, supported by credible sources and specific details. While the reliance on a press release introduces slight uncertainty, the overall content is consistent with reputable outlets and presents plausible claims. The language and tone are appropriate for the UK audience and the subject matter.