Artificial intelligence is rapidly reshaping the cyber threat landscape, creating new vulnerabilities for organisations across the UK and US. While offering transformative potential, AI is also enabling more sophisticated attacks and amplifying the complexity of securing digital infrastructure.

Internally, the swift adoption of AI tools often outpaces security reviews. Integrating these systems into existing IT environments introduces vulnerabilities—from unpatched software to excessive access permissions—that threat actors can exploit. These risks are magnified by the speed at which AI is being embedded into core business processes.

Externally, attackers are weaponising AI to automate and scale operations. AI enables faster scanning for network weaknesses and launches parallel attacks that outpace traditional defences. Advanced threats now include polymorphic malware that rewrites its code to evade detection, and AI-powered phishing capable of mimicking voices and creating deepfakes. AI is also being used to mine stolen data for targeted fraud and blackmail.

AI systems themselves are also being attacked. Techniques such as prompt injection and training data poisoning can distort AI outputs, leak sensitive information, or degrade performance. The UK’s National Cyber Security Centre warns that advanced “frontier AI” tools will significantly impact cyber resilience by 2027.

Regulators are responding. In the UK, while no AI-specific cybersecurity law exists, obligations under GDPR and the Network and Information Systems Regulations apply. The NCSC has published guidance to help organisations mitigate AI-related risks. Proposed legislation such as the Cyber Resilience Bill suggests a move towards heightened accountability, with senior executives increasingly exposed to personal liability.

The US framework is more fragmented but similarly evolving. Federal agencies including the FTC and SEC are targeting executives over cyber failings, while state-level laws—like California’s AI transparency rules—mandate incident reporting where AI is implicated in cyberattacks. Legal exposure is rising, and litigation over AI-driven security breaches is expected to grow.

The impact is already visible. The NCSC reported a 16% rise in hostile cyber activity in 2024, with AI-enhanced methods playing a central role. Cabinet Office Minister Pat McFadden noted that AI is increasing both the volume and sophistication of attacks, prompting new cybersecurity strategies and regulatory plans.

Business readiness remains a concern. A global Lenovo survey found 65% of IT leaders doubt their defences can counter AI-powered threats such as polymorphic malware and insider misuse. Many organisations lack effective safeguards for AI assets—models, data and prompts—and face challenges linked to legacy systems, limited resources and skills shortages.

The legal consequences for weak cyber governance are growing. UK law allows for regulatory fines, civil claims and, in some cases, criminal prosecution. The Online Safety Act adds further obligations, especially around harmful or misleading AI content.

Legal teams now play a critical role. In-house counsel must map regulatory duties across jurisdictions, educate senior leaders, and ensure governance structures address AI and cyber risk. Policies should mandate cybersecurity reviews for all AI use, while board-level oversight must be documented.

Vendor agreements require scrutiny. Legal teams should confirm that cloud-based AI providers meet recognised security standards and report incidents promptly. Given that no defence is foolproof, liability mitigation strategies—covering vendor obligations, insurance and financial risk planning—are essential.

Experts advise investing in AI-enhanced threat intelligence, advanced detection tools, and network segmentation. For critical infrastructure, innovations such as remote kill switches may become vital. As AI becomes embedded in business operations, organisations that embed robust, legally compliant cybersecurity frameworks will be better positioned to gain customer trust and unlock new opportunities.

AI is both a powerful asset and a complex risk. The legal landscape is moving toward stricter accountability for cybersecurity and AI governance. Organisations that act now to strengthen their defences and compliance frameworks will be best placed to lead in an AI-driven economy.

Created by Amplify: AI-augmented, human-curated content.