On May 4, 2026, the United States Senate Judiciary Committee voted 21-0 to advance the Generating User Accountability and Restricting Dialogue (GUARD) Act. The bipartisan legislation seeks to establish the first federal framework specifically prohibiting the use of artificial intelligence companions by minors. The bill now moves to the full Senate for consideration, marking a pivotal shift in the regulatory landscape for generative AI developers and platform operators.
The GUARD Act defines AI companions as conversational interfaces powered by large language models and designed to simulate human-like emotional connection or provide ongoing social interaction. Under the proposed law, AI developers would be prohibited from allowing users under the age of 18 to access these services. To ensure compliance, the bill mandates that all AI service providers implement high-assurance age-verification technology capable of confirming a user's age through government-issued identification or biometric analysis before granting access to conversational AI tools.
A central component of the legislation is the introduction of criminal and civil penalties for service providers. The act stipulates that any entity operating an AI chatbot that generates sexually explicit content, or that encourages self-harm, illegal acts, or violence, when interacting with a minor would face federal prosecution. Fines for corporate entities are set at a minimum of $50,000 per violation, with a secondary tier of penalties for systemic failure to maintain safety filters. Furthermore, the bill grants the Department of Justice authority to seek injunctions against platforms that miss the 180-day implementation deadline for age-gating.
During the committee hearing, lawmakers cited data suggesting that more than 15 million minors interact with AI-driven social apps each month. Senator Marsha Blackburn, a lead sponsor of the bill, said the legislation is intended to address what she described as an unregulated psychological experiment being conducted on children through parasocial AI relationships. The committee also reviewed technical testimony on the limitations of current content moderation filters, which the bill aims to address by requiring mandatory safety audits and disclosure of training data sources related to minor-safety protocols.
The GUARD Act also includes data privacy provisions, prohibiting the collection of any personal information from users identified as minors during the age-verification process and requiring that all verification data be deleted within 30 days of the verification attempt. If enacted, the law would represent the most comprehensive federal restriction on AI usage since the emergence of consumer-facing generative models.