A divide is opening up in how organisations adopt artificial intelligence. Some firms are confidently AI-native, others are cautiously experimenting, and many are still struggling to implement the technology effectively. Pleo CTO Meri Williams captured the mood bluntly, describing companies “flailing around and spending a lot of money” without a clear AI strategy.

OpenAI’s latest usage data reflects this uncertainty: work-related use of ChatGPT is declining even as personal use rises. Milda Bayer, VP of Marketing at Lepaya, attributes the trend to confusion among leaders about which tools best fit their businesses. She warned that without investment in workplace-ready AI tools, adoption will stay low, since few employees are willing to fund access themselves.

Yenny Cheung, VP of Product Engineering at BlueFish AI, advocates deeper community engagement to stay ahead of AI’s rapid evolution. She recommends joining WhatsApp or LinkedIn groups, attending conferences and taking part in hackathons. Cheung co-founded ‘Speed AI Build,’ a hackathon designed to keep participants close to the cutting edge.

Structured learning also has its place. Bayer points to curated content such as Claire Vo’s “How I AI” podcast and MIT Sloan’s “Me, Myself, and AI,” which features insights from a wide range of AI leaders. Other notable sources include “20VC” for business trends and “In Pursuit of Good Tech” for ethical perspectives.

Vjera Orbanic, founder of The Coaching Body, calls for wider AI literacy to democratise access to the technology. Working with Ethical Intelligence, she stresses that understanding AI’s capabilities and limits is essential to ensure it is used for good—and not just by tech elites.

To manage information overload, industry experts recommend blended strategies: focus on AI tools tied to business goals, allocate regular learning time, and use AI to summarise and streamline content. A weekly AI check-in can help teams stay updated without burning out.
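To illustrate the kind of lightweight automation this points to, here is a minimal Python sketch of a weekly digest built with the OpenAI API. The model name, the prompt wording and the collected items are illustrative assumptions rather than anything prescribed by the experts above, and any hosted LLM could fill the same role.

```python
"""
A minimal sketch of the "weekly AI check-in" idea: use an LLM to condense a
batch of AI-related updates into a short team digest. The model choice, the
example items and the prompt are assumptions for illustration only.
"""
from openai import OpenAI  # assumes the official openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical inputs: notes or links the team collected during the week.
weekly_items = [
    "Podcast episode on AI adoption gaps between firms.",
    "Internal pilot: AI assistant for summarising customer emails.",
    "Conference talk on bias checks and human oversight.",
]

prompt = (
    "Summarise these AI updates into a five-bullet digest for a weekly "
    "team check-in, flagging anything that needs a decision:\n- "
    + "\n- ".join(weekly_items)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model your plan includes
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```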

Marketers are also urged to prioritise human oversight. According to Forbes Council members, maintaining transparency and aligning AI use with brand values are key to preserving customer trust.

Ethical and safety considerations remain front and centre. Users must understand how AI works, guard against bias, and avoid over-reliance. Responsible use—backed by robust privacy safeguards and continuous oversight—supports safe adoption.

Practical applications of generative AI continue to grow. Tools are already helping users summarise emails and videos, practise new languages and enhance everyday workflows.

As the UK seeks to lead in AI, experts stress that the path forward lies in blending community, education and ethics with focused engagement. Grounded, responsible participation will be key to unlocking AI’s full potential.

Created by Amplify: AI-augmented, human-curated content.