A major study from the University of Cambridge has revealed deepening anxiety among British novelists about the growing influence of generative AI, with more than half believing it could eventually replace human fiction writers.
The research surveyed 332 figures in the UK fiction industry, including 258 published novelists. It found that 51 per cent now see AI as a potential successor in fiction writing, while 59 per cent believe their work has already been used to train AI systems without consent.
Writers report mounting financial losses, with 39 per cent attributing a drop in income to the spread of generative AI tools, and 85 per cent expecting their earnings to decline in future. Genre writers in romance, crime and thrillers appear particularly vulnerable, as AI-generated books flood marketplaces, reducing visibility and readership.
Several authors say AI-generated titles have been falsely published under their names on platforms like Amazon, sometimes using stolen character names and plot elements. In response, Amazon has introduced upload limits for Kindle Direct Publishing, though plagiarised and fraudulent titles persist.
The disruption is spreading across the creative economy. Marketers, publishers and content strategists face a saturated market where AI-generated content drives down prices and undermines differentiation. The volume of low-cost content also raises concerns about audience trust and the perceived value of human-made storytelling.
Cambridge researchers found widespread dissatisfaction with current copyright protections. A large majority of authors oppose government proposals to require rights holders to opt out of having their work used for AI training. Instead, 86 per cent support a consent-based, opt-in system. Nearly half favour licensing being managed by a dedicated industry body.
Some authors now view human-written fiction as at risk of becoming a niche luxury product. Independent publishers have started using “AI-free” labels to reassure readers and reinforce authenticity.
These concerns extend beyond the UK. A recent US court ruling found that AI company Anthropic’s use of copyrighted books for training qualified as fair use, but held that its unlicensed copying of millions of pirated titles was not protected. Meta Platforms has also faced criticism for allegedly using pirated books to train its AI systems, prompting further debate over corporate responsibility and ethical standards.
UK creative groups have taken a firm stance. In late 2024, organisations including the British Phonographic Industry and the Society of Authors opposed a government proposal to allow AI companies to use copyrighted material unless creators explicitly opt out. They argued for enforcing existing laws, not weakening them.

For content professionals, the findings offer clear signals. The economics of storytelling are shifting. Provenance, authorship verification and transparency are becoming essential. Stronger IP protection and ethical AI use are now strategic imperatives.
While the creative sector faces significant pressure, those who lead with responsible practices and respect for human creativity can still thrive in an AI-driven age.
Created by Amplify: AI-augmented, human-curated content.