
The EU AI Act came into force on 1 August – what does this mean for your company?

As of 1 August 2024, the much-anticipated European Union Artificial Intelligence Act (EU AI Act) has come into force. As the leading jurisdiction in AI regulation, the European Union (EU) has long set the benchmark for international AI regulation. Accordingly, the commencement of the EU AI Act’s legal force and effect is a major development not only for the EU, but also at an international level.

It is common cause that the EU AI Act is the first and most comprehensive piece of AI legislation. It is equally clear that the obligations contained in the Act are not insignificant and that, although the manner in which the EU will enforce the Act remains to be seen, companies that want to avoid contravening it should take adequate steps now. This article highlights the key implications of the EU AI Act for companies, and what general compliance would look like.

The EU AI Act Risk Categories

The EU AI Act categorizes AI systems into four risk categories or “levels”, i.e., minimal, limited, high, and unacceptable risk. Every category comes with specific requirements and obligations:

  1. Minimal Risk: AI applications that pose negligible risk to rights or safety, such as spam filters and AI-enabled video games. These systems are encouraged to follow voluntary codes of conduct but face minimal regulatory burden.
  2. Limited Risk: AI systems subject to transparency obligations, such as chatbots and image-generating technologies. These systems must be clearly labelled to inform users that they are interacting with AI.
  3. High Risk: AI systems with significant potential to affect individuals’ rights or safety, such as those used in credit scoring, employment decisions, border control, and biometric identification. These systems must comply with stringent regulatory standards, including risk management, data governance, documentation, and human oversight.
  4. Unacceptable Risk: AI practices that threaten fundamental rights or pose severe safety risks, such as social scoring, predictive policing, and certain types of biometric categorisation. These practices are explicitly prohibited by the Act.
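The four tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names and example systems are drawn from this article, not from any official classification tool, and real classification under the Act turns on detailed legal criteria.

```python
# Illustrative sketch of the EU AI Act's four risk tiers (not an official taxonomy).
RISK_TIERS = {
    "minimal": {
        "examples": ["spam filter", "AI-enabled video game"],
        "obligation": "voluntary codes of conduct",
    },
    "limited": {
        "examples": ["chatbot", "image generator"],
        "obligation": "transparency: users must know they are interacting with AI",
    },
    "high": {
        "examples": ["credit scoring", "employment screening", "biometric identification"],
        "obligation": "risk management, data governance, documentation, human oversight",
    },
    "unacceptable": {
        "examples": ["social scoring", "predictive policing"],
        "obligation": "prohibited outright",
    },
}

def tier_for(system):
    """Return the risk tier whose example list contains the given system, if any."""
    for tier, info in RISK_TIERS.items():
        if system in info["examples"]:
            return tier
    return None

print(tier_for("chatbot"))         # limited
print(tier_for("social scoring"))  # unacceptable
```

In practice, a system's tier determines which chapter of the Act applies to it, so even this rough mapping is a useful first triage question for a compliance audit.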

Implications for Companies

The EU AI Act places obligations on two broad groups: providers (developers) of AI systems, with additional obligations on providers of general-purpose AI (GPAI) models, and deployers (users) of AI systems. In light of the widespread present-day use of AI, the application of the EU AI Act is extensive.

Operational Changes and Cost Implications: Companies may need to invest in compliance infrastructure, including hiring legal and technical experts to navigate the EU AI Act’s requirements. This may require companies to effect major changes to their existing AI systems and processes to meet the requirements of the Act. In 2021, a preliminary analysis for the EU, based on the assumption that approximately 10% of AI systems would fall into the “high-risk” category, estimated industry compliance costs at around EUR 29,000 per model, and between EUR 1.6 and 3.3 billion annually in total. Performing the requisite conformity assessment (i.e., an external audit) was estimated to add another EUR 20,000, on average, to this figure, with a substantially higher figure where a full internal process is required.
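The per-model figures from that 2021 analysis can be combined into a rough budgeting sketch. The arithmetic is illustrative only; the estimates are from a preliminary study and actual costs vary widely by company and system.

```python
# Rough per-model compliance cost, per the 2021 EU preliminary analysis
# cited above (illustrative arithmetic only; actual costs vary widely).
BASE_COMPLIANCE_EUR = 29_000   # estimated compliance cost per high-risk model
EXTERNAL_AUDIT_EUR = 20_000    # average added cost of an external conformity assessment

def estimated_cost(models, external_audit=True):
    """Rough total compliance cost for a number of high-risk models."""
    per_model = BASE_COMPLIANCE_EUR + (EXTERNAL_AUDIT_EUR if external_audit else 0)
    return models * per_model

print(estimated_cost(3))  # 147000
```

Even as a back-of-the-envelope figure, this makes clear why the first compliance step recommended below is an audit: a company cannot budget until it knows how many of its systems are likely to be classified as high-risk.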

Data Governance and Transparency: High-risk AI systems must adhere to strict data governance standards. For example, an AI system is considered “high-risk”, irrespective of whether it is marketed or implemented independently by a company, if (i) the AI system is a safety product by function, or is a product’s safety component; and (ii) the product referred to in (i) is required to undergo a third-party conformity assessment with a view to its placing on the market or putting into service pursuant to the Union harmonization legislation listed in Annex I of the EU AI Act. Companies will further need to ensure that their data collection, storage, and processing mechanisms are transparent and compliant with the Act. This includes maintaining detailed documentation and records of AI system operations.

Risk Management and Human Oversight: The EU AI Act prescribes robust risk management frameworks for high-risk AI systems. The risk management framework entails a continuous iterative process, planned and conducted for the entire duration of the relevant high-risk AI system’s life cycle. Therefore, companies must implement comprehensive risk assessment and mitigation strategies, including regular systematic review and updating to identify and analyse any potential risks to health, safety or fundamental rights, and to deal with such risks in an appropriate manner. Additionally, to ensure that AI decisions can be reviewed and, if necessary, corrected, high-risk AI systems must be designed and developed with appropriate human oversight.

Prohibition of Certain AI Practices: Companies should be vigilant in respect of, and avoid, AI practices deemed unacceptable by the EU AI Act. Article 5 of the Act delineates the prohibited practices. The overarching theme of Article 5 is the prohibition of AI systems that manipulate human decision-making or exploit human vulnerabilities, AI systems that classify people based on social behaviour or personal traits, and systems that predict individuals’ risk of committing a crime. For example, Article 5(1)(a) prohibits “the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm”.

Global Impact and Competitive Advantage: The EU AI Act is expected to set (and arguably already does) a global standard for AI governance. Companies that comply with the Act may gain a competitive advantage by demonstrating their commitment to ethical AI practices. The EU has also stated that the Act aims to decrease administrative and financial burdens for companies, with a particular focus on small and medium-sized enterprises. Conversely, non-compliance could result in hefty fines and reputational damage – and the consequences are potentially significant, as contraventions of Article 5 can attract administrative fines of up to EUR 35 million or, where the offender is an undertaking, up to 7% of its global turnover for the preceding financial year, whichever is higher.
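The Article 5 penalty ceiling described above – the greater of EUR 35 million or 7% of worldwide annual turnover – reduces to a one-line calculation. The turnover figures below are invented for illustration; the actual fine in any case is set by the relevant authority and the cap is only an upper bound.

```python
def article5_fine_cap(global_turnover_eur):
    """Upper bound on an Article 5 administrative fine: the higher of
    EUR 35 million or 7% of the preceding financial year's worldwide
    turnover (the turnover limb applies to undertakings)."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# An undertaking with EUR 1 billion turnover: 7% (EUR 70 million) exceeds the flat cap.
print(article5_fine_cap(1_000_000_000))  # 70000000.0
# A smaller undertaking: the EUR 35 million figure is the higher of the two.
print(article5_fine_cap(100_000_000))    # 35000000.0
```

Note that the “whichever is higher” rule means large undertakings cannot treat EUR 35 million as a worst case: the exposure scales with turnover.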

What Does Achieving Compliance Look Like?

Conduct a Compliance Audit: Companies should start by conducting a thorough audit of their AI systems to identify areas of non-compliance. This involves assessing the risk level of each AI application and determining the necessary steps to meet the Act’s requirements. Useful tools for ensuring compliance with the EU AI Act include the EU AI Act Compliance Checker, the capAI procedure, and OneTrust.

Implement Data Governance Policies: Establish clear data governance policies that ensure transparency, accountability, and compliance with the Act. This includes documenting data sources, processing methods, and AI model operations.

Develop Risk Management Frameworks: Create robust risk management frameworks that include regular testing, validation, and human oversight of AI systems. Ensure that these frameworks are integrated into the company’s overall governance structure.

Train Employees and Stakeholders: Provide training for employees and stakeholders on the requirements of the EU AI Act and the company’s compliance policies. This helps to ensure that everyone involved in the development and deployment of AI systems understands their responsibilities.

Engage with Regulators and Industry Groups: Stay informed about updates to the EU AI Act and engage with regulators and industry groups to share best practices and insights. This can help companies stay ahead of regulatory changes and maintain compliance.

As is evident from the above, the EU AI Act presents both challenges and opportunities for companies. By investing in compliance and adopting ethical AI practices, businesses can not only avoid penalties but also build trust with consumers and gain a competitive edge in the global market. Join our newsletter for the easiest way to stay up to date on key developments across the AI industry.

For more information, access the full EU AI Act here: https://artificialintelligenceact.eu/the-act/

Nicola Taljaard
Lawyer - Associate in the competition (antitrust) department of Bowmans, a specialist African law firm with a global network. She has experience in competition and white-collar crime law in several African jurisdictions, including merger control, prohibited practices, competition litigation, corporate leniency applications and asset recovery. * The views expressed by Nicola belong to her and not Bowmans, its affiliates or employees

This content is labeled as created by a human.