AI systems in the workplace could soon be the leading source of data leaks, researchers warn, as organisations race to adopt autonomous agents without adequate security controls.

The rapid deployment of AI assistants is introducing new vulnerabilities, with systems often granted excessive access or left unmonitored. These agents, operating as independent identities within IT networks, risk exposing sensitive data due to misconfigurations and poor governance.

A report by Proofpoint predicts that AI assistants will need to be treated like human employees for identity management purposes, with individual profiles, trust scores and closely monitored privileges. “Security teams will no longer focus solely on human actors; they will be forced to treat their AI agents as first-class identities,” said Ravi Ithal, Chief Product and Technology Officer for AI Security at Proofpoint.
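As a rough illustration of what “first-class identity” treatment could look like in practice, the sketch below is hypothetical and not drawn from the Proofpoint report: an agent record with an accountable human owner, a trust score and a deny-by-default permission check. The class, field names and scope strings are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: a minimal identity record for an AI agent,
# mirroring how a human employee might be modelled in an IAM system.
@dataclass
class AgentIdentity:
    agent_id: str                 # unique identity, not a shared service account
    owner: str                    # human accountable for the agent's actions
    trust_score: float            # e.g. 0.0-1.0, updated from audit findings
    allowed_scopes: set = field(default_factory=set)  # least-privilege permissions

    def can_access(self, scope: str) -> bool:
        """Deny by default; grant only explicitly assigned scopes."""
        return scope in self.allowed_scopes


# Example: an agent scoped to read CRM data and nothing else.
crm_agent = AgentIdentity(
    agent_id="agent-crm-summariser-01",
    owner="jane.doe@example.com",
    trust_score=0.8,
    allowed_scopes={"crm:read"},
)

assert crm_agent.can_access("crm:read")
assert not crm_agent.can_access("finance:read")  # access outside scope is refused
```

The point is the shape of the controls (an individual identity, an accountable owner, least-privilege scopes) rather than any particular implementation.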

Instances of data exposure linked to AI are already widespread. In 2025, 84% of AI tools experienced data breaches, with over half involving credential theft. Much of this was attributed to ‘shadow AI’, where employees use personal accounts or consumer platforms like ChatGPT, Google Gemini and Microsoft Copilot for work. A related survey showed 57% of staff admitted entering confidential company data into generative AI tools.

“This kind of informal behaviour creates major blind spots in data protection,” said Adrian Covich, Vice President of Systems Engineering for Proofpoint in Asia-Pacific and Japan. He urged organisations to audit AI usage and bolster data protocols ahead of regulatory reforms expected in 2026.

Further research by GitGuardian found that 6.4% of AI-assisted code repositories leaked secrets – 40% more than the average for public repositories. Developers using LLMs may be prioritising speed over security, analysts said.

Meanwhile, cyber espionage is becoming more discreet. Rather than phishing, attackers now infiltrate networks through encrypted messaging and trusted platforms. “The most effective espionage in 2026 won’t be loud or flashy,” said Alexis Dorais-Joncas, Head of Espionage Research at Proofpoint. “It’ll be invisible, hiding in plain sight behind the tools and platforms we trust every day.”
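To illustrate the kind of secret leakage GitGuardian describes, here is a minimal, hypothetical sketch (not taken from the report) contrasting a hard-coded credential with the safer pattern of reading it from the environment at runtime. The variable name EXAMPLE_API_KEY and the endpoint URL are placeholders.

```python
import os
import urllib.request

# Risky pattern often flagged by secret scanners: a credential committed to source.
# API_KEY = "sk-live-..."   # anyone who can read the repository can read this too

# Safer pattern: keep the secret out of the repository and load it at runtime.
api_key = os.environ.get("EXAMPLE_API_KEY")  # placeholder variable name
if api_key is None:
    raise RuntimeError("EXAMPLE_API_KEY is not set; refusing to run without a credential")

# Placeholder endpoint, for illustration only.
request = urllib.request.Request(
    "https://api.example.com/v1/data",
    headers={"Authorization": f"Bearer {api_key}"},
)
with urllib.request.urlopen(request, timeout=10) as response:
    print(response.status)
```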

The rise of AI is also pushing governments to rethink cybersecurity laws. Australia is reviewing its AI governance framework, with officials calling for stronger data controls and compliance with evolving standards. Certifications such as ISO 42001 provide a baseline, but may fall short of addressing the full scope of AI-related risks.

As oversight increases, experts say organisations must adapt quickly. Measures such as expanding identity and access management to cover AI agents, curbing shadow AI, and aligning with regulatory expectations will be vital to ensuring AI is deployed safely and ethically.

Created by Amplify: AI-augmented, human-curated content.