Artificial Intelligence (AI) has rapidly evolved, prompting governments worldwide to establish regulatory frameworks to manage its societal impact. This article provides a comparative analysis of recent AI legislation in the United States, European Union, and Asia, highlighting the diverse approaches to AI governance.
United States: State-Level Initiatives
In the U.S., AI regulation has predominantly been state-driven, with several states enacting laws to govern AI development and deployment.
California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
In 2024, California's legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), authored by State Senator Scott Wiener. The bill aimed to impose safety requirements on advanced AI models. However, on September 29, 2024, Governor Gavin Newsom vetoed it, arguing that its regulatory framework, which targeted only large AI models based on computational thresholds, could overlook smaller models posing equally significant risks. Newsom emphasized the need for adaptable regulation in the rapidly evolving AI landscape. (en.wikipedia.org)
Texas Responsible Artificial Intelligence Governance Act (TRAIGA)
In June 2025, Texas Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law, effective January 1, 2026. The legislation applies to developers and deployers of AI systems used by Texas residents and prohibits developing or deploying AI systems intended to incite violence, encourage self-harm, engage in unlawful discrimination, or facilitate other illegal activity. (en.wikipedia.org)
European Union: The Artificial Intelligence Act
The European Union has taken a comprehensive approach to AI regulation with the enactment of the Artificial Intelligence Act (AI Act) in 2024. The AI Act establishes a risk-based framework, categorizing AI systems according to the risk they pose to fundamental rights and safety. It imposes strict requirements on high-risk AI applications, including mandatory conformity assessments, transparency obligations, and human oversight measures. The Act also requires member states to designate national supervisory authorities responsible for compliance and enforcement. (en.wikipedia.org)
Asia: China's AI Governance
China has advanced AI governance through a combination of laws, administrative regulations, and policy instruments. In August 2025, China unveiled the "AI Plus" Action Plan, which aims to integrate AI across technology, industry, consumption, public services, governance, and international cooperation. The plan lays out a three-step roadmap whose final stage, targeting 2030, envisions AI as a broad driver of high-quality development. Additionally, in October 2025, China amended its Cybersecurity Law to promote the safe development of AI, marking a new phase of regulatory maturation. (chambers.com)
Comparative Analysis
The approaches to AI regulation in the U.S., EU, and Asia reflect differing priorities and governance models.
Risk-Based vs. Context-Specific Regulation
The EU's AI Act exemplifies a risk-based regulatory approach, imposing stringent requirements on high-risk AI applications to mitigate potential harms. In contrast, the U.S. has adopted a more context-specific approach, with state-level regulations such as TRAIGA focusing on prohibiting AI systems that incite harm or engage in unlawful discrimination. China's "AI Plus" Action Plan reflects a state-guided strategy that promotes AI integration across sectors while emphasizing safe development.
International Cooperation and Standardization
The EU's AI Act and China's "AI Plus" Action Plan both emphasize standardization and cross-border coordination in AI governance: the EU's Act harmonizes rules across member states and requires each to designate a national supervisory authority, while China's plan includes international cooperation as an explicit pillar of its AI integration roadmap. The U.S. approach, by contrast, remains fragmented, with regulations varying from state to state and no unified federal framework.
Conclusion
The global landscape of AI regulation is diverse, with each region adopting frameworks that align with its unique priorities and governance structures. The EU's comprehensive, risk-based approach, China's state-guided strategy, and the U.S.'s state-driven initiatives illustrate the varied paths toward managing AI's societal impact. As AI continues to evolve, ongoing international dialogue and cooperation will be essential to harmonize regulations and ensure responsible AI development worldwide.