Vercel announced on April 23, 2026, that it has identified an additional set of customer accounts compromised during a security incident linked to the third-party AI tool Context.ai. This update follows the initial disclosure on April 19 and comes after the company expanded its investigation to include a broader set of indicators of compromise (IOCs) and a comprehensive review of environment variable read events in its logs. Vercel confirmed that all newly identified affected customers have been notified directly, though the company has not disclosed the total number of impacted users.
The breach originated from a compromise at Context.ai, a provider of AI analytics and productivity tools. According to threat intelligence reports from Hudson Rock, the incident began in February 2026 when a Context.ai employee’s device was infected with Lumma Stealer malware. The malware exfiltrated credentials and OAuth tokens from Context.ai’s AWS environment, including a token belonging to a Vercel employee who had authorized Context.ai’s AI Office Suite using their enterprise Google Workspace account. This granted the attacker a persistent access path into Vercel’s internal systems, bypassing standard multi-factor authentication (MFA) by leveraging the pre-authorized OAuth session.
Once inside the environment, the threat actor, identified by researchers as the ShinyHunters group, moved through internal systems to enumerate environment variables that were not designated as sensitive. Vercel's architecture prevents variables marked sensitive from being read back after creation, but unflagged variables could be decrypted and returned in plaintext through normal read paths. These variables often contained API keys, database credentials, and signing keys. The threat actor subsequently listed what they claimed to be Vercel's internal database for sale on a criminal forum for an asking price of $2 million.
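The distinction matters for defenders: any variable that is not flagged sensitive is readable in plaintext by anyone with sufficient API scope. A minimal audit sketch follows, assuming the response shape of Vercel's project environment variable API (`GET /v9/projects/{id}/env`), where read-protected variables carry `type == "sensitive"`; the field names here are assumptions based on public documentation and should be verified against the current API.

```python
# Hypothetical audit helper: flag environment variables whose values can be
# read back in plaintext, given a decoded listing from Vercel's project env
# API (GET /v9/projects/{id}/env). The response shape is an assumption.

def find_plaintext_readable(envs: list[dict]) -> list[str]:
    """Return keys of variables not marked sensitive.

    Variables with type == "sensitive" cannot be read back after creation;
    any other type can be decrypted and returned to API callers.
    """
    return [e["key"] for e in envs if e.get("type") != "sensitive"]

# Example listing, mirroring the assumed API response shape.
sample = [
    {"key": "DATABASE_URL", "type": "encrypted"},
    {"key": "SIGNING_KEY", "type": "sensitive"},
    {"key": "ANALYTICS_ID", "type": "plain"},
]

print(find_plaintext_readable(sample))  # ['DATABASE_URL', 'ANALYTICS_ID']
```

In an incident like this one, every key the helper returns should be treated as exposed and rotated.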
In response to the incident, Vercel has collaborated with cybersecurity firm Mandiant and industry partners including GitHub, Microsoft, npm, and Socket. That work confirmed that no npm packages published by Vercel were tampered with and that the software supply chain remains secure. Vercel CEO Guillermo Rauch stated that the incident highlights the risks of shadow AI, where employees grant broad OAuth permissions to unvetted third-party tools. To mitigate future risks, Vercel has rolled out product updates that mark all new environment variables as sensitive by default and add team-wide controls for managing these secrets.
Vercel’s core services remained operational throughout the incident. The company continues to advise all users to rotate their environment variables and review third-party app permissions within their Google Workspace environments. The investigation is ongoing as Vercel and law enforcement work to determine the full extent of data exfiltration and the scope of the adversary’s activity.
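For teams following the rotation advice, the mechanics are straightforward with the Vercel CLI: remove the old variable and re-add it with a fresh value, flagged sensitive so it cannot be read back. The sketch below generates the commands rather than executing them, so the plan can be reviewed first; the `--yes` and `--sensitive` flags are assumptions based on the CLI's documented `env` subcommands and should be checked against `vercel env --help` for your installed version.

```python
# Hypothetical rotation planner: emit Vercel CLI commands to remove each
# exposed variable and re-add it as sensitive. Commands are printed, not
# run, so they can be reviewed before execution. Flag names are assumed.

def rotation_commands(keys: list[str], environment: str = "production") -> list[str]:
    cmds = []
    for key in keys:
        # Remove the exposed value (--yes assumed to skip confirmation).
        cmds.append(f"vercel env rm {key} {environment} --yes")
        # Re-add with --sensitive so the new value cannot be read back.
        cmds.append(f"vercel env add {key} {environment} --sensitive")
    return cmds

for cmd in rotation_commands(["DATABASE_URL", "API_TOKEN"]):
    print(cmd)
```

Generating a reviewable plan instead of mutating secrets directly is a deliberate choice: rotation also requires updating every downstream consumer of the credential, which a blind scripted swap would miss.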