The Cloud Security Alliance (CSA) released a comprehensive research report on April 21, 2026, revealing that 65% of organizations, nearly two-thirds, have experienced cybersecurity incidents linked to unchecked AI agents within the past year. The study, titled "Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises," was conducted in collaboration with Token Security and highlights a growing gap between the rapid deployment of autonomous AI agents and the governance frameworks required to manage them. The findings indicate that the risk posed by AI agent scope violations has shifted from a theoretical concern to a routine operational challenge for most modern enterprises.
According to the research, these incidents have resulted in significant enterprise-wide consequences. Data exposure was the most prevalent outcome, reported by 61% of affected organizations. Furthermore, 43% of firms cited operational disruptions, while 41% experienced unintended actions within critical business processes. Financial losses were reported by 35% of respondents, and 31% noted delays in both internal and customer-facing services. The report indicates that these disruptions are no longer isolated technical glitches but are increasingly affecting core enterprise functions, financial performance, and service delivery timelines.
A critical finding in the report is the visibility gap regarding AI agent activity. While 68% of IT and security professionals expressed high confidence in their ability to see AI agents on their networks, 82% of all respondents admitted to discovering previously unknown agents over the last 12 months. These unsanctioned agents were most frequently located within internal automation environments and large language model platforms. The CSA noted that this lack of visibility makes it nearly impossible for infrastructure teams to ensure secure deployment or apply necessary controls, as many agents are deployed by employees without centralized approval or oversight.
The report also identifies "retirement debt" as a growing security risk. The term refers to AI agents that remain active on a network long after their intended purpose has been served, often retaining high-level permissions and active credentials. The survey found that only 21% of organizations have established formal decommissioning processes for AI agents. Without these protocols, agents continue to hold access to sensitive data and systems, creating a persistent and unmonitored attack surface. The research suggests that current identity and access management systems, typically designed for human users, are frequently inadequate for managing self-directed, API-driven agents that operate continuously at runtime.
Hillary Baron, AVP of Research at the Cloud Security Alliance, stated that AI agent governance has shifted from a technical oversight issue to a primary business risk management concern. Baron emphasized that as these agents gain greater autonomy, governance must evolve into a unified operational model to sustain control at scale. Currently, only 21% of firms maintain a real-time registry of their AI agents, while 32% rely on non-real-time records. The CSA concludes that without a shift toward real-time inventory and lifecycle management, the accumulation of shadow AI agents will continue to create structural exposures for global enterprises.
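The shift the CSA recommends, from periodic records to a real-time registry, can be sketched minimally as follows. Every class, method, and field name here is a hypothetical illustration: the point is simply that agents register and check in as they run, so the inventory reflects current state rather than the last audit snapshot.

```python
from datetime import datetime

class AgentRegistry:
    """Minimal real-time registry sketch: agents register when deployed,
    heartbeat while running, and deregister when retired, so the
    inventory always reflects what is currently active."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, owner, scopes):
        # Record who owns the agent and what it is permitted to touch.
        self._agents[agent_id] = {
            "owner": owner,
            "scopes": list(scopes),
            "last_seen": datetime.utcnow(),
        }

    def heartbeat(self, agent_id):
        # A running agent that was never registered is "shadow AI".
        if agent_id not in self._agents:
            raise KeyError(f"unregistered agent: {agent_id}")
        self._agents[agent_id]["last_seen"] = datetime.utcnow()

    def deregister(self, agent_id):
        self._agents.pop(agent_id, None)

    def active_agents(self):
        return sorted(self._agents)
```

The design choice worth noting is that registration is event-driven rather than survey-driven: an agent that skips registration fails its first heartbeat, which is exactly the visibility signal the 32% of firms relying on non-real-time records lack.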