
AI Regulation’s Critical Moment: Leadership to Influence the GPAI Code of Practice

As the European Union (EU) moves closer to implementing its AI Act, a significant decision looms. Within the next three weeks, the AI Office will likely appoint a group of external leaders to oversee a crucial component of the Act: the chairs and vice-chairs for the Code of Practice for General-Purpose AI (GPAI) models. These appointments will play a vital role in shaping the EU’s approach to regulating AI, ensuring that the European vision for a trustworthy AI ecosystem prevails.

A Crossroads in AI Regulation

The rise of generative AI, with applications like OpenAI’s ChatGPT, has been a game-changer not just economically but also politically. During the final negotiations of the AI Act, member states such as France, Germany, and Italy expressed concerns that regulating the foundational aspects of AI might stifle innovation and hinder startups like Mistral and Aleph Alpha. France arrived at this position only after several shifts in its stance.

On the other hand, the European Parliament was more worried about market concentration and the potential for AI to infringe on fundamental rights. It advocated for a comprehensive legal framework specifically for generative AI, referred to in the legislation as GPAI models.

Caught between these conflicting perspectives, the EU co-legislators opted for a middle ground—a co-regulatory approach. This approach lays out obligations for GPAI model providers, defined through codes of practice and technical standards. Commissioner Thierry Breton championed this strategy, drawing inspiration from the 2022 Code of Practice on Disinformation.

Drawing from Past Experience

The upcoming appointments for the chairs and vice-chairs of the GPAI Code of Practice reflect lessons learned from previous initiatives like the Code of Practice on Disinformation. This earlier effort initially faced criticism, with many auditors feeling that companies were doing the bare minimum. A stern review by the European Commission’s disinformation team led to improvements, including the involvement of civil society and the appointment of an independent academic to chair the process.

This experience has informed the AI Office’s current co-regulatory strategy for GPAI. On June 30, the Office proposed a governance system for drafting the GPAI Code of Practice, organized around four working groups. Stakeholders are given multiple opportunities to contribute, including public consultations and plenary sessions. However, GPAI companies will still play a dominant role, with exclusive workshops and the option to opt out of the final outcomes, as the codes are voluntary.

The Role of (Vice) Chairs: Balancing Ambition with Feasibility

The independence and expertise of the chairs and vice-chairs will be crucial for maintaining the credibility and balance of the drafting process. These individuals will wield significant influence, as they will effectively act as the primary authors of the codes, chairing the working groups and guiding the discussions. An additional ninth chair may even take on a coordinating role.

The goal for these leaders will be to find the right balance between implementing ambitious rules that address systemic risks and ensuring that the obligations remain technically feasible and conducive to innovation. The GPAI Code should reflect a pragmatic understanding of current technology while setting high standards for safety and trustworthiness.

To achieve this, the AI Office should prioritize selecting chairs and vice-chairs based on their technical, socio-technical, or governance expertise in GPAI models, as well as their experience in leading committee work on a European or international level.

Navigating a Complex Selection Process

Choosing the right leaders for these roles is no small feat. AI safety is still an emerging field characterized by rapid change and experimentation. The AI Office must consider a diverse range of professional backgrounds, balance various interests, and adhere to EU commitments to country and gender diversity. At the same time, it’s important to recognize that many of the leading experts in AI safety are based outside of Europe.

While the GPAI Code should emphasize EU values, incorporating internationally recognized experts could enhance the code’s legitimacy and encourage non-EU companies to align with its standards. The global nature of AI challenges means that a broad, inclusive approach will likely yield the best results.

Setting the Tone for the Future of AI Regulation

The selection of chairs and vice-chairs for the GPAI Code is a pivotal moment for AI regulation in the EU. Their leadership will shape the evolution of this co-regulatory effort, especially as it tackles complex socio-technical issues and sensitive policy areas such as intellectual property rights, child sexual abuse material (CSAM), and defining the critical thresholds that will determine the obligations GPAI models must meet.

The expertise and vision of these leaders will be instrumental in guiding the development of GPAI rules that not only address immediate concerns but also ensure that the European approach to AI trustworthiness and safety endures. As the AI landscape continues to evolve, the choices made now will have lasting implications, setting the standard for AI governance in Europe and beyond.

What’s next?

The AI Office’s upcoming appointments for the GPAI Code of Practice represent a critical step in the EU’s journey to establish a robust and trustworthy AI ecosystem. By selecting leaders with the right blend of expertise and vision, the EU can create a regulatory framework that balances innovation with safety, ensuring that the European approach to AI trustworthiness not only endures but sets a benchmark for the rest of the world to follow.

AI was used to generate part or all of this content.