Competition regulators are becoming increasingly aware of, and concerned about, the challenges posed by algorithms in the digital economy. However, as we at The Legal Wire have previously discussed, the emergence of generative AI and foundation models has introduced even greater complexities for competition regulators to consider. With AI technologies now transforming everything from media to legal and financial services, competition regulators are taking a closer look at the possible risks, particularly those arising from partnerships between tech giants and specialized AI companies.
Market Concentration and Control of Key Inputs
Building and deploying foundation models requires extensive resources, including enormous datasets, specialized hardware, and massive computational power. These requirements are so resource-intensive that a handful of powerful players now hold the vast majority of market share (i.e., the market is highly concentrated), which raises concerns about monopolistic behavior and stifled competition.
According to the UK’s Competition and Markets Authority (CMA), there are three interrelated risks, namely that:
- Firms that control critical inputs for the development of foundation models (such as cloud computing infrastructure and proprietary datasets) could restrict access to those inputs in order to shield themselves from competitors, which could have a particularly detrimental impact on smaller players;
- Powerful players could abuse their power in business or consumer markets to distort choice in foundation model services and restrict competition in deployment; and
- Firms that already enjoy market power could reinforce that power in the foundation model value chain by entering into partnerships with key players.
These conditions could make it nearly impossible for new entrants to compete, effectively monopolizing the AI landscape in a way that threatens market diversity. In this regard, CMA Chair Sarah Cardell observed that “[t]he essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences.”
Vertical Integration and Market Power
A further challenge arises from the vertical integration strategies of major technology companies. Big tech firms often incorporate AI functionality into their existing product suites, leveraging established market power to gain an advantage in AI. This strategy can create formidable entry barriers for competitors, potentially restricting consumer choice and setting the stage for conduct that undermines fair competition.
Google’s partnership with Anthropic, for example, exemplifies these concerns. Through its investment in and cloud arrangement with Anthropic, Google has expanded the AI capabilities offered across its services, consolidating its position in the AI sector. In another instance, Microsoft’s substantial investment in OpenAI has led to the integration of OpenAI’s advanced language models, the technology behind ChatGPT, into Microsoft’s Office suite and Azure cloud services, giving Microsoft a competitive edge and potentially curtailing access to these models for competitors in cloud computing or business software.
The European Commission has flagged such vertical integration as a significant risk to competition in the AI sector. By tying generative AI models to exclusive ecosystems, companies like Google and Microsoft create formidable barriers for competitors who wish to leverage similar technologies in other markets. Interestingly, in a joint statement, the European Commission, the CMA, the US Department of Justice, and the Federal Trade Commission stated the following:
“Given the speed and dynamism of AI developments, and learning from our experience with digital markets, we are committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”
While the authorities have emphasized that their decisions will remain sovereign and independent, the respective regulators have demonstrated increasing concern and an inclination to adopt a more interventionist approach as AI evolves. That said, it should also be borne in mind that AI policy in both the US and the EU may change quite dramatically in the coming months, with the election of President Trump and the replacement of European Commissioner Margrethe Vestager, respectively.
The Competitive Dangers of Big Tech Partnerships in AI
One of the most notable examples of competitive risk in AI partnerships involves Microsoft’s multi-billion-dollar stake in OpenAI, which grants Microsoft unique advantages. By offering OpenAI’s models through its Azure cloud services, Microsoft not only secures its market share but also limits competitors’ access to OpenAI’s generative AI. According to recent studies, exclusivity clauses in such partnership agreements have intensified regulatory scrutiny, as they can limit the availability of AI advancements to smaller firms and even entire industries.
The Nvidia and Adobe partnership offers another critical example. Nvidia’s dominance in the graphics processing unit (GPU) market, hardware essential for training AI models, gives Adobe a competitive advantage in providing cutting-edge AI-powered creative tools. As part of the partnership, Adobe has access to Nvidia’s latest AI technology, which could prevent other creative software firms from gaining equal access to these essential hardware resources. This exclusivity raises potential red flags under competition law, as the partnership may create unequal access to essential inputs in the creative software market.
Partnerships between Meta and other data providers also illustrate this risk. Meta’s collaborations allow it to access high-quality datasets essential for training foundation models, creating further barriers to entry for new players that do not have access to similar resources. This lack of accessible data is one reason many companies are finding it challenging to develop competitive alternatives in the AI space.
The CMA has highlighted that these types of partnerships and collaborations could solidify the positions of dominant companies by creating an “ecosystem lock-in” effect. Smaller firms unable to afford similar partnerships or infrastructure may be shut out of the market, raising concerns about stifled innovation and reduced consumer choice.
Regulatory Responses: Addressing the Power Imbalance in AI
Regulators are actively working to address these competition concerns. The CMA, for instance, has proposed guidelines that advocate for transparency, fair access, and accountability in AI collaborations. By promoting principles for fair competition, the CMA aims to prevent monopolistic behaviors while ensuring that smaller players and consumers benefit from AI advancements.
Similarly, the European Commission, in partnership with global regulatory bodies, is developing guidelines that would compel AI developers to disclose the parameters of their partnerships, offering transparency into the advantages conferred by exclusive arrangements. The Commission’s approach aims to ensure that the transformative power of AI is accessible across industries and that no single entity wields disproportionate influence.
Challenges Ahead: Balancing Innovation with Fair Competition
The Legal Wire has previously observed that, with the continued expansion of the global AI market, striking a balance between innovation and fair competition is an increasingly difficult task. AI partnerships between powerful firms like Microsoft and OpenAI or Nvidia and Adobe demonstrate that while collaborations can drive technological advancement, they also risk entrenching market power or monopolistic positions. As more competition and digital markets regulators turn their attention to these issues, many are recognizing that careful regulatory oversight is vital to ensuring that AI’s benefits are widely distributed without compromising market diversity. A commitment to transparency, equitable access, and ethical practices in AI partnerships will be essential to fostering a competitive and innovative AI ecosystem that serves all stakeholders.