On April 24, 2026, a wave of artificial intelligence legislation reached critical milestones across several U.S. states, signaling an intensifying shift toward localized governance of the technology. Lawmakers in Alabama, Hawaii, Maryland, and Tennessee advanced measures targeting high-risk AI applications in healthcare, the safety of minors in digital environments, and the transparency of synthetic media. These developments reflect a growing trend of state-level intervention as a comprehensive federal AI framework remains absent.

In Alabama, the legislative focus on April 24 centered on the implementation of Senate Bill 63, the AI Prior Authorization Oversight Bill. Recently signed into law, the act establishes new compliance standards for health insurance carriers using automated systems for coverage determinations. Effective October 1, 2026, the law prohibits insurers from relying exclusively on AI to deny or reduce medical claims. It mandates that a qualified healthcare professional must review and finalize any adverse determination based on medical necessity. Additionally, the bill requires insurers to certify annually to the Department of Insurance that their algorithms are applied fairly and do not discriminate based on group data.

Hawaii’s legislature moved on April 24 to reconcile three significant AI bills that have cleared both chambers. House Bill 1782 and Senate Bill 3001 establish a regulatory framework for AI companion systems and conversational services, specifically targeting interactions with users under the age of 18. These measures require operators to implement protocols to prevent the generation of harmful content, such as material promoting suicidal ideation, and to provide clear disclosures to account holders. Simultaneously, House Bill 2137 addresses the commercial use of deepfakes by requiring the disclosure of synthetic performers in advertising and prohibiting the non-consensual distribution of realistic digital imitations.

Maryland lawmakers have recently sent four AI-related bills to Governor Wes Moore for final approval. This package includes Senate Bill 8 and Senate Bill 141, which provide protections against deepfakes in personal and political contexts. Furthermore, the Maryland Artificial Intelligence Toy Safety Act has advanced, proposing rigorous pre-market safety assessments for AI-enabled products marketed to children. Under this act, manufacturers must encrypt child user data and are prohibited from using such data to train unrelated AI models. Violations are classified as deceptive trade practices, with potential civil penalties reaching $50,000 per instance.

In Tennessee, the General Assembly entered the final hours of its regular session on April 24 with several AI bills on the cusp of passage. The state Senate recently approved SB 1700, a chatbot safety bill, while the House continues to deliberate on HB 1898, a measure that would impose new child-protection mandates on AI developers. These efforts follow the state’s earlier enactment of the ELVIS Act, which protects an individual’s voice and likeness from unauthorized AI replication. Collectively, these state actions on April 24, 2026, establish a complex regulatory environment for technology firms operating across the United States.