Opinium’s new polling, carried out for The Observer, reinforces a clear public instinct about artificial intelligence in schools: use it to help pupils learn, not to do their learning for them. The survey of 2,050 UK adults, conducted between 6 and 8 August 2025, found a striking split in public sentiment. Half of respondents (50%) think pupils should be allowed to use AI for research, but nearly three quarters (73%) oppose pupils using AI to complete essays or homework; just 17% say that should be permitted. Support for explicit teaching about AI is strong — two thirds (66%) want schools to educate pupils about emerging technologies — but people draw a firm line between “assist” and “replace” when it comes to classroom use.

That central judgment — AI as an aid, not a substitute for human responsibility — runs through a series of recent studies and sector reports. Opinium’s broader analysis of public responses to the government’s AI Opportunities Action Plan shows acceptance tends to track task risk: routine administrative automation attracts far more public support than AI-led planning or high‑stakes decisions. The same analysis flags the conditions the public wants before entrusting AI with sensitive roles: demonstrable accuracy, robust regulation, clear lines of accountability and strong data security.

The ambivalence about teachers’ use of AI is notable. The Observer poll found opinion narrowly split over teachers using AI for lesson planning (41% in favour, 42% opposed) and slightly more opposed to its use for administrative paperwork (38% in favour, 43% opposed). Opposition hardens around assessment: 58% of respondents oppose teachers using AI to mark essays or homework, suggesting the public views marking as an inherently human responsibility tied to professional judgement and accountability.

Those public instincts align with evidence from schools and leaders. Browne Jacobson’s School Leaders Survey reports that around half of leaders say their schools use AI tools to some extent and about one in five use them regularly. Many leaders report efficiency gains in lesson preparation and in tailoring material to pupils, but also flag shortfalls in staff expertise and concerns about plagiarism, safeguarding and legal compliance. Similarly, Cambridge University Press & Assessment’s nationally representative polling of over 2,200 adults found broad backing for AI to assist with routine tasks but strong distrust of its use in assessment, with younger respondents consistently more receptive than older ones.

Practice on the ground is moving faster than policy in some respects. The Department for Education’s omnibus surveys show that many secondary pupils already use generative AI in lessons and for homework, underlining that classroom familiarity with these tools has risen sharply. Internationally, educators report similar trends: a survey of US K‑12 educators and administrators found rising optimism about AI’s potential to reduce workload and help with drafting communications and lesson materials, even as concerns about cheating, privacy and the limited reliability of detection tools persist.

Taken together, the evidence points to a pragmatic middle way. There is appetite for using AI to lift routine burdens from teachers and to equip pupils with the skills they will need in an AI-shaped economy — but only if that adoption is governed, resourced and explained. Several recurring policy priorities emerge from the data and sector reports:

  • Clear national standards and guidance. The public and school leaders want a consistent framework that sets out what AI may and may not be used for in classrooms, how assessment should be handled, and minimum data‑protection and transparency requirements.
  • Investment in teacher training and capacity‑building. Where schools report benefits — in lesson planning, personalised learning and efficiency — those gains are often tied to leaders who have invested in staff training. Government funding for continuing professional development would help spread good practice and reduce uneven adoption between better‑resourced and less‑resourced schools.
  • Assessment redesign and academic‑integrity measures. Given the strong public resistance to AI marking and the widespread concern about cheating, policymakers and exam boards should accelerate work on assessment formats and integrity tools that remain meaningful in an AI era, while ensuring detection technologies and sanctions are proportionate and fair.
  • Vendor accountability, transparency and data security. The public’s conditions for trust — accuracy, regulation, accountability and data security — imply procurement standards that require vendors to publish model capabilities, limitations, training data provenance where feasible, and clear contractual protection for pupil data.
  • Curricular literacy for pupils and parents. With many pupils already using generative AI, the surveys point to an urgent need for schools to teach responsible use, digital literacy and the ethical dimensions of AI, and for parents to be brought into the conversation so home use is consistent with school expectations.
  • Equity of access. To realise the promise of personalised learning and reduced teacher workload, policymakers must tackle disparities in access to devices, connectivity and staff expertise so AI does not widen existing attainment gaps.

These measures would also help the UK position itself as a leader in responsible educational innovation. The public’s cautious optimism — supportive of education about AI and selective practical uses but wary of automation that erodes professional judgement — gives policymakers a clear mandate: foster the positive opportunities while putting firm guardrails around riskier applications. As Opinium’s findings make plain, that balance is both politically sensible and pedagogically defensible.

There are practical steps schools and local authorities can take now. School leaders who have already piloted AI tools advise focusing on small, well‑scoped pilots that measure impact on teacher workload and pupil outcomes; investing in staff CPD; publishing transparent local policies on permitted and prohibited uses; and engaging parents and governors in policy design. Exam boards and inspection bodies should signal quickly how assessment and accountability frameworks will evolve, so teachers can plan with confidence.

If the UK matches the technical and regulatory ambition on display elsewhere — and pairs it with sustained investment in teachers and pupils — the outcome could be broadly positive. Across the surveys and reports, a consistent theme emerges: people want the next generation to understand AI and to be prepared for an AI-rich future. They are not opposed to innovation; they want it done responsibly. That is a clear opportunity for education leaders and policymakers to shape a model of AI in schools that enhances learning, preserves human judgement where it matters most, and establishes the UK as a global exemplar in responsible edtech.

Source: Noah Wire Services