On April 21, 2026, Kroll released its comprehensive research report, The State of AI Security: Innovation vs. Risk, which highlights a significant disconnect between the rapid adoption of artificial intelligence and the implementation of robust security protocols. According to the findings, 76% of organizations globally have experienced a security incident involving AI applications or models in the past two years. The report, based on a survey of 1,200 senior IT and security decision-makers across North America, Europe, and Asia-Pacific, underscores a widespread lack of foundational governance frameworks as businesses rush to integrate AI into their operational workflows.

The technical data provided by Kroll indicates that the most common type of incident involves data leakage, with 48% of affected companies reporting that sensitive corporate information was inadvertently exposed through public generative AI interfaces. Additionally, 34% of organizations faced prompt injection attacks, in which malicious inputs were used to manipulate model outputs or bypass safety filters. The research also identifies API security as a critical weak point: 29% of incidents were traced back to insecure connections between third-party AI services and internal enterprise systems.

Kroll's analysis reveals that while AI innovation is surging, security infrastructure remains underdeveloped. Only 22% of surveyed businesses have established a formal AI risk management committee, and 61% do not maintain a centralized inventory of the AI models in use within their environment. This lack of visibility extends to technical controls: 55% of respondents admitted they do not perform regular red-teaming or adversarial testing on their AI deployments. The report notes that the average duration of service disruption following an AI-related breach is 98 hours, often requiring extensive data sanitization and model retraining.

In terms of sector-specific data, the financial services industry reported the highest rate of incidents at 83%, followed by the technology sector at 80% and healthcare at 77%. Kroll attributes these high figures to the complexity of the data sets involved and the high stakes of the automated decisions being made. Jason Smolanoff, President of Kroll's Cyber Risk division, stated that the current landscape is characterized by a security debt that could undermine the long-term viability of AI initiatives. The report concludes that while 88% of organizations intend to expand their AI capabilities by the end of 2027, fewer than 20% of them have a dedicated budget for AI-specific security tools or personnel training, suggesting that the gap between innovation and protection may continue to widen.