When Demis Hassabis, co-founder and chief executive of DeepMind, tells a national newspaper that artificial intelligence “could be 10 times bigger than the Industrial Revolution — and maybe 10 times faster,” it signals both the scale of change he expects and the urgency of preparation. Speaking to The Guardian, Hassabis said artificial general intelligence could arrive within five to ten years, offering “radical abundance” but also the risk of rapid social dislocation.
The warning comes amid visible advances. DeepMind’s AlphaFold system for predicting protein structures has been credited with accelerating scientific discovery, releasing predicted structures for more than 200 million proteins for open use in drug discovery, sustainability and disease research. According to the company, predictions that once took months or years can now be made in seconds. That kind of leap explains why many business and academic leaders now describe disruption in terms of years, not decades.
Optimism about AI’s potential is tempered by projections of economic strain. Investment-bank analysis cited by the BBC suggests generative AI could affect the equivalent of 300 million full-time jobs worldwide, with administrative, legal and clerical roles among the most exposed — even as the same report forecasts significant productivity and GDP gains if change is well managed. UK ministers have signalled a preference for adoption that complements, rather than replaces, human work.
Industry leaders echo that duality. Former OpenAI chief technology officer Mira Murati told WIRED that progress is likely to continue rapidly, stressing the need for interdisciplinary input, safety engineering and regulation so benefits are broadly shared. “The technology is not intrinsically good or bad,” she said, adding that society must “collectively keep steering the models toward good.”
The pace is tangible. Reuters has reported that OpenAI’s GPT-5 is nearing release, with early testers noting gains in coding, reasoning and problem-solving. But scaling, safety evaluation and energy demand remain major considerations, with knock-on effects for cloud infrastructure, electricity grids and public policy.
Legal and ethical pressures are also rising. Copyright and data-use lawsuits from publishers, artists and authors are challenging model-training practices, with courts beginning to set precedents that will influence licensing strategies. Reuters’ coverage suggests this will force companies to rethink data provenance and commercial models, potentially slowing parts of the race while improving accountability.
For the UK — and any jurisdiction seeking to shape the transition — the opportunity is to pair ambition with institution-building. Government has promoted AI investment for productivity gains while emphasising complementarity with work. A practical strategy could include national reskilling programmes for workers most exposed to automation; clear licensing and data-use frameworks to reduce legal uncertainty; and incentives for open science and shared infrastructure, following AlphaFold’s model, to direct AI power toward public-good applications.
The industry itself is calling for coordinated regulation and safety guardrails to manage risks from misinformation to energy bottlenecks. Hassabis has urged international cooperation to ensure that radical productivity gains do not lead to radical inequality.
The UK has strong research institutions, a biotech sector ready to apply tools like AlphaFold, a growing AI ecosystem and policymakers engaged in regulatory design. If these assets are marshalled through targeted investment, data and IP clarity, and inclusive rollouts, the UK could set a global example of responsible, high-impact AI innovation.
Time is short: the industry moves fast, and public policy must match that pace. But with the right mix of science, stewardship and social policy, AI’s transformational potential — from accelerating medical breakthroughs to boosting productivity — can be realised in ways that broaden opportunity rather than narrow it.
Created by Amplify: AI-augmented, human-curated content.