AI legislation and regulation hub

Welcome to our AI legislation and regulation hub, your go-to source for the latest updates on AI-related bills, regulations, and legislative developments from around the world. Here, we provide comprehensive and up-to-date information on how different countries and regions are navigating the evolving landscape of artificial intelligence governance. Whether you’re a policymaker, industry professional, or simply interested in the legal frameworks shaping the future of AI, this page offers invaluable insights and detailed analyses to keep you informed and ahead of the curve.

Last update on:

29/09/2024

Australia

  • Snapshot
    • The Australian government has emphasized using existing regulatory frameworks for AI. In 2021, it launched an AI Action Plan to enhance AI capability and speed up the development and adoption of trusted, secure, and responsible AI technologies in Australia. In June 2023, the government released a discussion paper on safe and responsible AI, seeking public consultation on whether Australia has the appropriate governance arrangements for the safe and responsible use and development of AI. In January 2024, the government provided its interim response on this issue.
  • In force?
    • No
  • Relevant links

Brazil

  • Snapshot
    • Brazil is moving towards regulating artificial intelligence with Bill No. 2,338/2023, known as “Brazil’s Proposed AI Regulation.” At present, there are no specific laws or regulations in place that directly govern AI in Brazil. The timeline for the regulation’s implementation and its final content is still uncertain. The bill must undergo review and voting in both the Federal Senate and the House of Representatives before receiving presidential approval. As of now, there is no set date for the next steps in the legislative process, and the details of the regulation may change as it progresses.
  • In force?
    • No
  • Relevant links

Bill No. PL 21/2020 regarding the use of artificial intelligence in Brazil (2020): Click here to open the link

Canada

  • Snapshot
    • Canada’s forthcoming Artificial Intelligence and Data Act (AIDA), part of Bill C-27, aims to safeguard Canadians from high-risk AI systems, promote the development of responsible AI, and position Canadian firms and values at the forefront of global AI advancements. The AIDA will:
      • Ensure that high-impact AI systems adhere to existing safety and human rights standards.
      • Prohibit reckless and malicious uses of AI.
      • Grant the Minister of Innovation, Science, and Industry the authority to enforce the act.
    • In preparation for AIDA and to ensure compliance, Canada has published a code of practice for the development and use of generative AI. Additionally, the country has issued a Directive on Automated Decision-Making, which sets out several requirements for the federal government’s use of automated decision-making systems.
  • In force?
    • No

China

  • Snapshot
    • On August 15, 2023, China implemented its first administrative regulation for Generative AI services through the joint release of the Interim Measures for the Management of Generative Artificial Intelligence Services (the “AI Measures”). This regulation was collaboratively issued by the Cyberspace Administration of China, the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration.
  • In force?
    • Yes
  • Relevant links

English translation of the Interim Measures for the Management of Generative Artificial Intelligence Services (2023): Click here to open the link

European Union

  • Snapshot
    • In December 2023, the EU AI Act completed the political trilogue process, with the European Commission, Council, and Parliament reaching agreement on a consolidated text. This pivotal act establishes uniform rules for placing AI systems on the EU market, applicable to both EU and third-country providers and deployers. It adopts a risk-based framework, prohibiting certain AI practices while setting stringent requirements for high-risk systems. Additionally, it imposes standardized transparency rules for specific AI systems. This follows the 2018 communication from the European Commission to the Parliament, Council, Economic and Social Committee, and Committee of the Regions on Europe’s AI strategy.
  • In force?
    • Yes. The European Parliament adopted the Artificial Intelligence Act on 13 March 2024, and the Act entered into force on 1 August 2024. However, many of its provisions will only apply from 2026.
  • Link to the adopted text

India

  • Snapshot
    • The proposed Digital India Act aims to replace the IT Act of 2000 and regulate high-risk AI systems. The Indian government is advocating for a robust, citizen-centric, and inclusive “AI for all” environment. A task force has been established to make recommendations on ethical, legal, and societal issues related to AI and to set up an AI regulatory authority. According to its National Strategy for AI, India aspires to become an “AI garage” for emerging and developing economies, creating scalable solutions that can be easily implemented and designed for global deployment.
  • In force?
    • No
  • Relevant links

United Kingdom

  • Snapshot
    • The U.K. government has proposed a context-specific, proportionate approach to AI regulation, relying on existing sectoral laws to establish guardrails for AI systems. Key resources for policy guidance include:
      • A pro-innovation approach to AI regulation.
      • The Algorithmic Transparency Recording Standard Hub.
      • The AI Standards Hub, a new initiative focused on international standardization for AI technologies.
      • A government guide to using AI in the public sector.
      • Guides from the Government Digital Service and the Office for AI on understanding AI ethics and safety.
      • The Centre for Data Ethics and Innovation’s AI Governance research report.
      • The Information Commissioner’s Office’s guidance on the AI auditing framework.
      • The ICO and Alan Turing Institute’s resource on explaining decisions made with AI.
  • In force?
    • No
  • Relevant links
    • The Office for Artificial Intelligence became part of the Department for Science, Innovation and Technology in February 2024: Click here to open the link

United States

  • Snapshot
    • The U.S. has introduced various frameworks and guidelines to maintain its leadership in AI research and development and to regulate government use of AI. In May 2023, the Biden-Harris administration updated the National AI Research and Development Strategic Plan, focusing on a principled and coordinated approach to international collaboration in AI research. The Office of Science and Technology Policy issued a request for public input on AI’s impact, while the National Telecommunications and Information Administration sought feedback on policies to foster trust in AI systems through an AI Accountability Policy Request for Comment.
  • In force?
    • No. There is currently no comprehensive federal legislation or regulation governing the development of AI. Nonetheless, existing bills and federal laws with limited application are linked below.
  • Relevant links
  • California
    • On 29 September 2024, Governor Gavin Newsom’s office unveiled a series of initiatives, including the signing of several bills related to AI regulation and the veto of SB 1047, a bill that would have imposed stricter safety requirements on developers of large frontier AI models. The signed legislation is listed below.
    • AB 1008: Clarifies that personal information under the California Consumer Privacy Act can exist in various formats, including AI-stored information.
    • AB 1831: Expands child pornography laws to include AI-generated content.
    • AB 1836: Prohibits the production or distribution of AI-generated replicas of deceased individuals without consent.
    • AB 2013: Requires AI developers to disclose the data used to train their AI systems.
    • AB 2355: Mandates that political advertisements featuring AI-generated content include a disclosure.
    • AB 2602: Provides protections for the use of an individual’s voice or likeness in digital replicas.
    • AB 2655: Requires large platforms to label or remove AI-generated election content.
    • AB 2839: Expands the period during which materially deceptive AI-generated election materials are prohibited from 60 days to 120 days before an election.
    • AB 2876: Asks the State’s curriculum commission to consider AI literacy in education.
    • AB 2885: Establishes a uniform definition of AI in California law.
    • AB 3030: Requires health care providers to disclose AI-generated communications to patients.
    • SB 896: Directs the California Office of Emergency Services (Cal OES) to assess AI threats to California’s critical infrastructure.
    • SB 926: Criminalizes the distribution of sexually explicit AI-generated images without consent.
    • SB 942: Requires AI developers to include provenance disclosures in their AI-generated content.
    • SB 981: Requires social media platforms to create mechanisms for reporting sexually explicit AI-generated images.
    • SB 1120: Regulates the use of AI in health care decision-making processes.
    • SB 1288: Calls for the Superintendent of Public Instruction to explore AI applications in education.
    • SB 1381: Expands child pornography laws to include AI-altered content.
  • Colorado
    • Colorado AI Regulation: Colorado has enacted the first comprehensive legislation in the US to govern the use of AI by both companies and government entities in making critical decisions that affect individuals. Set to take effect in 2026, this law mandates transparency and grants people the right to contest unjust AI decisions. This initiative, inspired by a similar proposal in Connecticut, marks a significant milestone. While signing the bill, Democratic Governor Jared Polis expressed some reservations and called for further refinements before its implementation. Experts believe this law will enhance transparency and accountability in AI applications.
    • National Conference of State Legislatures: The Colorado AI Act focuses on consumer protections in interactions with AI systems and requires developers of high-risk AI systems to exercise reasonable care to prevent algorithmic discrimination.
    • Senate Bill 24-205: Click here to open the link
  • Florida
    • Political Advertisement Disclaimer: Florida has passed a law requiring specific disclaimers on certain political advertisements and electioneering communications. The law outlines requirements for these disclaimers and imposes both criminal and civil penalties for non-compliance. It also allows individuals to file complaints and ensures expedited hearings.
    • Bill Text: Click here to open the link
    • Government Technology Modernization: Florida has established a Government Technology Modernization Council tasked with providing recommendations to the Legislature. Additionally, it has enacted laws prohibiting the possession or viewing of AI-generated child pornography, with stringent criminal penalties.
    • Bill Text: Click here to open the link
  • Oregon
    • AI Task Force: Oregon has formed a Task Force on Artificial Intelligence to examine and define terms related to AI used in technology fields and potential legislation. This task force will start by reviewing terms and definitions used by the U.S. government and relevant federal agencies.
    • Bill Text: Click here to open the link
    • AI in Campaign Ads: Oregon’s new legislation requires that any campaign communication involving synthetic media must include a disclosure indicating that the media has been manipulated. The Secretary of State is authorized to enforce this requirement and impose civil penalties for violations.
    • Bill Text: Click here to open the link
  • Tennessee
    • AI Advisory Council: Tennessee has created an artificial intelligence advisory council to recommend strategies for integrating AI into state government operations. This council’s goal is to align AI use with state policies and improve public service efficiency.
    • Bill Text: Click here to open the link
  • Utah
    • AI Policy Act: Utah’s new legislation, the Artificial Intelligence Policy Act, holds AI users liable for violations of consumer protection laws if AI usage is not properly disclosed. The act establishes the Office of Artificial Intelligence Policy and a regulatory AI analysis program, alongside a learning laboratory program to assess AI technologies, risks, and policies.
    • Bill Text: Click here to open the link
  • Virginia
    • AI Analysis and Commission: Virginia has passed legislation directing the Joint Commission on Technology and Science (JCOTS) to analyze AI use by public bodies and to establish a Commission on Artificial Intelligence. JCOTS must report its findings to various legislative committees by December 1, 2024.
    • Bill Text: Click here to open the link
  • Washington
    • AI Task Force: Washington has established a task force to evaluate current AI uses and trends, and to make legislative recommendations. The task force’s findings will include a literature review on AI public policy issues, including benefits, risks, racial equity, workforce impacts, and ethical considerations.
    • Bill Text: Click here to open the link
  • West Virginia
    • AI Task Force: West Virginia has created a state Task Force on Artificial Intelligence. The task force is responsible for reporting its findings and recommendations and will operate until a specified termination date.
    • Bill Text: Click here to open the link
  • Wisconsin
    • AI in Political Ads: Wisconsin’s new law requires disclosures regarding AI-generated content in political advertisements. This law grants rule-making authority and imposes penalties for non-compliance.
    • Bill Text: Click here to open the link

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations. The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.