In October 2025, Jesus College, Oxford, hosted more than 200 leaders from across industry, government, academia, media and law at the Oxford Generative AI Summit (OxGen AI 25), a major gathering to explore the impact and trajectory of generative artificial intelligence.

Over two days, more than 25 panels, keynotes and fireside chats addressed the transformative role of AI in reshaping not just technology, but society itself. Three themes emerged as central to future progress: change management; AI sovereignty; and trust and truth.

Effective change management, speakers said, now requires inclusive strategies that engage all levels of an organisation, particularly those with limited technical expertise. Delegates warned against allowing AI vendors to dictate workflows or imposing top-down rollouts. Instead, successful adoption depends on collaborative internal dialogue and openness to experimentation. With the expected rise of agentic AI—systems capable of independent decision-making—attendees noted a likely shift in workforce dynamics, particularly for early-career workers.

The theme of sovereignty captured growing unease around reliance on dominant US and Chinese tech firms. Delegates discussed how regions such as the UK and EU must balance the need for domestic innovation with the benefits of collaboration. Sovereignty in AI, speakers argued, is increasingly a matter of national security, encompassing cultural, environmental and health considerations.

New research was also presented on building ‘Sovereign AI’ systems—especially within emerging 6G networks—that retain operator-level control while enabling global partnerships. Such systems are part of a wider push to ensure that countries maintain agency in how AI is developed and deployed.

Trust in AI outputs remains a critical challenge. Large language models still produce unverified or inaccurate content, with one speaker likening them to “drunken graduate students”. AI-generated text and multimedia content increasingly blur the line between fact and fabrication, raising unresolved questions around ownership and legal liability.

In response, some developers now anchor outputs with citations, though concerns persist over how bad actors might exploit convincing fakes. Academic work is also evolving, with frameworks like LoBOX promoting trust through institutional accountability rather than full system transparency—a recognition of AI’s intrinsic complexity.

This shift aligns with newer governance models such as Human-AI Governance (HAIG), which focus on dynamic trust relationships and the distribution of authority as AI gains autonomy. These approaches are intended to ensure meaningful human oversight as AI systems assume more decision-making power.

The broader context is one of geopolitical tension and industrial realignment. As AI becomes central to national strategies, competition for talent, data and compute is fragmenting the global digital economy. Generative AI and autonomous systems are now seen as defining pillars of Industry 5.0—technology that is sustainable, resilient and centred on human needs.

OxGen AI 25 offered a rare space for integrated dialogue across technical, ethical, legal and strategic domains. It also reinforced the UK’s ambition to lead in responsible AI development, even as regulatory and practical challenges mount.

By managing technological change, asserting strategic autonomy and fostering trustworthy systems, the UK has the opportunity to set global standards for AI governance—balancing innovation with societal benefit.

Created by Amplify: AI-augmented, human-curated content.