A growing divide is emerging in how organisations adopt artificial intelligence. Some firms are confidently AI-native, others are cautiously experimenting, and many are still struggling to implement the technology effectively. Pleo CTO Meri Williams captured the mood bluntly, describing companies “flailing around and spending a lot of money” without a clear AI strategy.
OpenAI’s latest usage data reflects this uncertainty. Work-related use of ChatGPT is declining, even as personal use rises. Milda Bayer, VP of Marketing at Lepaya, attributes this trend to confusion at leadership level over which tools best fit their business needs. She warned that without investment in workplace-ready AI tools, adoption will remain low—few employees are willing to fund access themselves.
Yenny Cheung, VP of Product Engineering at BlueFish AI, advocates deeper community engagement to stay ahead of AI’s rapid evolution. She recommends joining WhatsApp or LinkedIn groups, attending conferences and taking part in hackathons. Cheung co-founded ‘Speed AI Build,’ a hackathon designed to keep participants close to the cutting edge.
Structured learning also has its place. Bayer points to curated content such as the “How I AI” podcast by Claire Vo, and MIT Sloan’s “Me, Myself, and AI,” which offers insights from diverse AI leaders. Other notable sources include “20VC” for business trends and “In Pursuit of Good Tech” for ethical perspectives.
Vjera Orbanic, founder of The Coaching Body, calls for wider AI literacy to democratise access to the technology. Working with Ethical Intelligence, she stresses that understanding AI’s capabilities and limits is essential to ensure it is used for good—and not just by tech elites.
To manage information overload, industry experts recommend blended strategies: focus on AI tools tied to business goals, allocate regular learning time, and use AI to summarise and streamline content. A weekly AI check-in can help teams stay updated without burning out.
Marketers are also urged to prioritise human oversight. According to Forbes Council members, maintaining transparency and aligning AI use with brand values are key to preserving customer trust.
Ethical and safety considerations remain front and centre. Users must understand how AI works, guard against bias, and avoid over-reliance. Responsible use—backed by robust privacy safeguards and continuous oversight—supports safe adoption.
Practical applications of generative AI continue to grow. Tools are already helping users summarise emails and videos, practise new languages and enhance everyday workflows.
As the UK seeks to lead in AI, experts stress that the path forward lies in blending community, education and ethics with focused engagement. Grounded, responsible participation will be key to unlocking AI’s full potential.
Created by Amplify: AI-augmented, human-curated content.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
9
Notes:
The narrative was published on September 24, 2025, indicating recent content. It includes updated data, such as OpenAI's usage report showing a decline in ChatGPT's work-related use alongside a rise in personal use. The article also draws on a press release, which typically warrants a high freshness score. No discrepancies in figures, dates, or quotes were identified; no recycled content, earlier versions with different details, or republishing across low-quality sites was found; and no similar content appeared more than seven days earlier. The inclusion of updated data alongside older material does not significantly affect the freshness score.
Quotes check
Score:
10
Notes:
The direct quotes from Pleo CTO Meri Williams, OpenAI's usage report, and Milda Bayer, VP of Marketing and New Business Sales at Lepaya, appear to be original and exclusive to this narrative. No identical quotes were found in earlier material, and no variations in quote wording were identified. No online matches were found for these quotes, indicating potentially original content.
Source reliability
Score:
8
Notes:
The narrative originates from Sifted, a reputable organisation known for its coverage of European startups and technology. The article includes insights from established professionals in the AI industry, such as Meri Williams, Milda Bayer, and Yenny Cheung. However, it draws in part on a press release, so some claims originate from the organisations themselves rather than independent reporting. No unverifiable entities or fabricated information were identified.
Plausibility check
Score:
9
Notes:
The claims made in the narrative are plausible and align with current trends in AI adoption and usage. The article provides specific details, such as the decline in ChatGPT's work-related use and the rise in personal use, which are supported by OpenAI's usage report. The recommendations for staying informed about AI developments, including joining AI communities and attending events, are practical and widely recognised strategies. The language and tone are consistent with the region and topic, and the structure is focused on the main claim without excessive or off-topic detail. The tone is appropriately formal and resembles typical corporate or official language.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is recent, with no recycled content or discrepancies identified. The quotes appear original and exclusive, and the source is reputable. The claims are plausible, supported by specific details, and the language and tone are appropriate. No significant credibility risks were identified, leading to a PASS verdict with high confidence.