US Investigations Into AI Giants Can't Keep Up with Rapid Development

Microsoft, NVIDIA, and OpenAI Face Regulatory Scrutiny as US Agencies Divide Oversight Responsibilities

Last week, the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) reportedly agreed on a division of labor for investigating the major AI companies Microsoft, OpenAI, and NVIDIA. According to The New York Times, the DOJ will investigate NVIDIA, while the FTC will oversee the probes into OpenAI and Microsoft.

On the surface, this might seem like just another regulatory step. But historically, such moves have led to significant outcomes. For instance, the 2019 division of responsibilities for investigating Apple, Amazon, Google, and Meta led to major antitrust lawsuits. This new agreement signals that regulators are taking the potential dangers of modern AI seriously. The big question is whether they can act swiftly enough to keep pace with the rapid advancements in AI technology.

The Big Players in AI

It’s no coincidence that Microsoft and NVIDIA, two of the world’s most valuable publicly traded companies, are leading the charge in the Generative AI market:

  • Microsoft: With a market value of $3.15 trillion, Microsoft has invested $13 billion in OpenAI, closely integrating AI capabilities into its products.
  • NVIDIA: Valued at $3.01 trillion, NVIDIA has seen unprecedented growth due to its dominance in the AI chip market, even surpassing Apple in market value.

Although OpenAI is not publicly traded, it is the powerhouse behind the current AI revolution. Together, these three entities form the “holy trinity” of the GenAI market:

  • OpenAI: Develops models and services that others emulate.
  • Microsoft: Integrates these models into a wide range of products.
  • NVIDIA: Provides the processing power needed to train and run these models.

Until now, these companies have largely avoided significant antitrust scrutiny.

Regulatory Moves and Past Precedents

In July 2023, the FTC began investigating whether OpenAI’s data collection practices harm consumers. By January 2024, the FTC was also looking into the collaboration between OpenAI and Microsoft, as well as investments by Google and Amazon in Anthropic. However, these investigations have been relatively limited in scope, and the U.S. lags behind the European Union in AI regulation.

The new division of responsibilities indicates a readiness to escalate into a broad antitrust investigation. According to The New York Times, the DOJ’s investigation into NVIDIA could examine how its software locks users into using its chips and the distribution methods of these chips. Meanwhile, the FTC’s probe could focus on Microsoft’s integration of OpenAI’s models into its products, and how their close collaboration impacts technological development.

Potential Implications and Historical Context

This early-stage agreement is a significant signal of intent. In 2019, a similar division of labor led to sweeping antitrust lawsuits against Apple, Amazon, Google, and Meta. A successful outcome for the government could mean a loss of tens of billions in annual revenue for these companies, and potentially even their breakup.

If history repeats itself, we might see serious antitrust cases against NVIDIA, Microsoft, and OpenAI. These could include requiring Microsoft to unwind its alliance with OpenAI on the grounds that it amounts to an effective acquisition. However, such proceedings are time-consuming. The lawsuits against Apple and Amazon, filed in March 2024 and September 2023 respectively, are still far from trial. The Meta lawsuit, filed in late 2020, has not yet gone to trial, and the Google case’s closing arguments were only heard recently, with a lengthy appeals process expected.


The Challenge of Rapid Technological Evolution

The regulatory system is slow and cumbersome, especially in a field like AI, which evolves at an extraordinary pace. New developments and capabilities emerge almost monthly, making it challenging to execute effective regulation. By the time regulators address current issues, they might already be dealing with outdated concerns.

Already, one could argue that regulators are doing too little, too late. To truly regulate AI, prevent market dominance misuse, and impose real oversight on potentially destructive products, a more proactive and faster approach is necessary.

EU’s Proactive Approach: Launching the AI Office

While the U.S. grapples with its regulatory framework, the European Union is far ahead. Last month, the EU announced the establishment of an official AI Office, the culmination of a comprehensive AI legislation process. The office will oversee compliance with the bloc’s AI rules and begins operating on June 16, ahead of the AI Act’s entry into force.

Structure and Function of the AI Office

  • Leadership: Lucilla Sioli, Director for AI at the European Commission, will lead the new office.
  • AI Board: Comprising regulators from each of the 27 member states, the board will assist in regulatory processes and hold its inaugural meeting this month.
  • Divisions: The office will have five divisions, employing over 140 people, including technical experts, lawyers, political scientists, and economists. It will expand as needed.

Key Units and Their Roles

  • Regulations and Compliance Unit: Liaises with member states, coordinates enforcement actions, ensures implementation of the AI Act, and handles investigations and punitive measures.
  • AI Safety Unit: Identifies systemic risks in general AI models and develops risk assessment methods.
  • AI for Social Good: Focuses on projects related to climate simulations, cancer diagnosis, and urban development solutions.
  • Excellence in AI and Robotics: Supports and funds AI research and development.
  • AI Innovation and Policy Coordination: Oversees policy implementation, monitors trends and investments, and promotes AI applications across European Digital Innovation Hubs.

Enforcement and Penalties

According to reports from Reuters and CNBC, fines for violating the AI Act will range from 7.5 million euros or 1.5% of a company’s global annual revenue up to 35 million euros or 7% of global annual revenue, whichever amount is higher, depending on the type of violation.
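As a rough illustration of how such a ceiling works in practice, here is a minimal Python sketch. It assumes the "whichever is higher" reading of the caps and uses only the two figures quoted above; the tier names and the `FINE_TIERS` mapping are illustrative placeholders, not taken from the Act's text.

```python
# Hypothetical sketch of the EU AI Act's fine ceilings: the cap is the
# greater of a fixed euro amount or a share of global annual revenue,
# with the tier depending on the type of violation. Tier names are
# illustrative, not the Act's own terminology.

FINE_TIERS = {
    # violation tier: (fixed cap in euros, share of global annual revenue)
    "lower_tier": (7_500_000, 0.015),
    "upper_tier": (35_000_000, 0.07),
}

def max_fine(tier: str, global_revenue_eur: float) -> float:
    """Return the maximum possible fine: the higher of the two ceilings."""
    fixed_cap, revenue_share = FINE_TIERS[tier]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# For a company with 2 billion euros in global revenue, an upper-tier
# violation is capped at 7% of revenue (140 million euros), well above
# the 35-million-euro fixed amount.
print(max_fine("upper_tier", 2_000_000_000))  # 140000000.0
```

The percentage-based ceiling is what gives the rule teeth against the largest companies: for a small firm the fixed amount dominates, while for a tech giant the revenue share quickly dwarfs it.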

Conclusion

The U.S. faces significant challenges in keeping pace with the rapid development of AI technology. The current regulatory framework is slow and cumbersome, making it difficult to implement timely and effective regulations. Meanwhile, the European Union’s proactive approach, exemplified by the establishment of the AI Office, serves as a model for comprehensive and forward-thinking AI regulation. To effectively oversee the burgeoning AI industry, the U.S. must adopt a more agile and proactive regulatory strategy, ensuring that technological advancements are balanced with robust oversight and consumer protection.

AI was used to generate part or all of this content.
