On April 20, 2026, U.S. Representative Blake Moore (R-UT) introduced the AI Children's Toy Safety Act, a federal legislative proposal designed to prohibit the integration of artificial intelligence chatbots into products marketed for children. The bill seeks to ban the manufacture, importation, sale, and distribution within the United States of any children's toy or childcare article that incorporates generative AI or chatbot technology. Congressman Moore cited a combination of data privacy risks, the potential for addictive engagement patterns, and the danger of exposing minors to explicit content as the primary drivers for the legislation.

The proposed act targets a growing segment of the toy industry that utilizes large language models (LLMs) to create interactive play experiences. According to the bill's supporting documentation, the use of AI chatbots in toys poses significant privacy challenges, as these devices often collect and process voice and text data from children. Congressman Moore emphasized that many of the underlying AI platforms—including those developed by OpenAI, Google, Perplexity AI, xAI, and Anthropic—maintain terms of service that explicitly prohibit use by unsupervised children under the age of 13. Despite these internal policies, the technology is frequently licensed to third-party toymakers for use in products specifically designed for that age demographic.

Technical concerns highlighted in the legislation include the unpredictable nature of chatbot interactions. Because many AI models are trained on vast datasets generated by adults, there is a documented risk of the systems producing age-inappropriate or explicit content during unscripted conversations with children. The bill also addresses the psychological impact of AI-driven toys, with Moore stating that such technology can lock children into addictive patterns that interfere with the development of relational maturity and self-discipline. The congressman argued that AI companies should not be permitted to use children's toys as a vessel for data collection or as a means of influencing minors.

The introduction of the AI Children's Toy Safety Act comes amid increasing international competition and domestic regulatory pressure. Congressional records indicate that over 1,500 AI toy companies are currently operating in China, many of which export products to the U.S. market. Domestically, the federal bill follows similar legislative efforts at the state level, most notably in California, where lawmakers recently moved to establish safeguards and potential bans on AI-enabled toys. Moore's federal proposal aims to create a uniform national standard to prevent the spread of AI chatbots into the children's toy and childcare markets.

Under the terms of the bill, the Consumer Product Safety Commission would be tasked with overseeing enforcement of the ban. The legislation defines childcare articles broadly to include products intended to facilitate sleep, feeding, or the sucking or teething of children. With the act, which Moore described as drawing a line in the sand, the congressman seeks to prioritize human-centric AI development and basic ethics over the rapid commercialization of generative tools in the youth market. The bill has been referred to the House Committee on Energy and Commerce for further review.