As we advance into 2024, artificial intelligence (AI) continues to evolve rapidly, prompting legislators around the globe to formulate new laws to govern its expansive reach. Last year marked a significant turning point in the regulation of AI, with various regions enacting laws to harness AI’s innovative potential while mitigating its social and economic risks. This comprehensive overview examines the latest legislative efforts in the European Union, Canada, California, and beyond, providing insights for businesses to adapt to these emerging legal landscapes.
EU, Canada, and California: Pioneers in AI Regulation
The EU AI Act
The European Union’s groundbreaking AI Act, expected to apply in phases through 2026, introduces a risk-based framework for AI. The Act prohibits AI practices deemed to pose unacceptable risk, imposes strict requirements on high-risk systems, including transparency, technical documentation, and governance structures, and adds specific transparency obligations for generative AI tools like ChatGPT. Non-compliance could lead to fines as high as €35 million or 7% of global annual turnover, whichever is higher.
Canada’s Artificial Intelligence and Data Act (AIDA)
Canada’s AIDA, on track to become the nation’s first AI regulatory framework, centers on the regulation of “High-Impact Systems.” The act broadly aligns with the EU AI Act and also imposes specific obligations on generative AI systems that fall outside the “high-impact” classification. It further establishes the role of the AI & Data Commissioner, underscoring robust enforcement and penalties for non-compliance.
California’s Draft AI-related Rules under the CCPA
California is poised to adopt some of the most consequential AI rules in the U.S. through draft regulations under the California Consumer Privacy Act (CCPA). These rules aim to protect consumers when businesses employ automated decision-making technology (ADMT). Key provisions include “Pre-use Notices,” consumer opt-out rights, and access to detailed information about ADMT logic and outputs. The draft rules could profoundly affect online advertising and data-scraping practices.
The Global Impact of Local AI Legislation
State and Local Initiatives
In the absence of federal AI legislation in the U.S., states such as Texas, Connecticut, and Illinois have passed their own AI laws, while cities including Seattle and New York City are developing their own AI policies. These local actions reflect a proactive stance on AI governance, addressing concerns ranging from traffic safety to employment.
Frontier AI Models and Local Concerns
Lawmakers are increasingly focused on “frontier” AI systems, those capable of posing significant public-safety risks. Proposed regulations would impose transparency requirements and restrict deepfakes, particularly in the context of election security.
The Challenge of Congressional Inaction
Persistent inaction at the federal level is catalyzing state-driven AI regulation. The diverse nature of state laws could complicate compliance for companies operating across multiple jurisdictions, highlighting the need for a unified national approach.
AI Governance: A Framework for Risk Assessment
Amidst this regulatory patchwork, Dominique Shelton Leipzig, a partner at Mayer Brown, emphasizes proactive AI governance. Her framework, aligning with proposed global legislation, categorizes AI use cases into red, yellow, and green lights, providing a structured approach for businesses to assess and manage AI risks effectively.
Corporate Strategies for AI Compliance
Businesses must develop comprehensive AI policies, update privacy disclosures, identify and document AI-specific risks, comply with existing laws, and review vendor processes and agreements. Together, these steps mitigate risk and prepare organizations for evolving AI regulations.
Conclusion: Preparing for the AI-Regulated Future
As 2024 progresses, business leaders must stay informed about the dynamic legal environment surrounding AI. The evolving landscape demands a strategic approach to AI governance, one that ensures compliance while leveraging the technology’s transformative potential. By understanding and adapting to these new regulations, businesses can navigate the complexities of AI use and continue to innovate within legal and ethical bounds.