
Trump’s AI Policy Shift Sparks Global Compliance Challenges

On Monday, 20 January 2025, President Donald Trump made a pivotal change to U.S. AI policy by revoking former President Joe Biden’s 2023 executive order on AI. The original order aimed to establish safety standards for AI systems, including content watermarking and other safeguards to mitigate potential risks. Trump’s decision marks a significant departure from the cautious, regulation-heavy approach embraced by former President Biden, which aligned with that of the European Union. It could signal the start of a fragmented global regulatory landscape for AI.

But what does this mean for multinational companies operating across borders? And how will businesses navigate the increasingly divergent regulatory frameworks of the U.S. and EU?

Trump’s AI Vision: Deregulation for Innovation

Trump’s AI policy focuses on deregulation to stimulate innovation and economic growth. His administration has framed the executive order’s repeal as a victory for U.S. competitiveness in the global AI race, particularly against China, whose rapid AI advancements have stoked geopolitical concerns.

In a statement, Trump called Biden’s order “an unnecessary shackle” on American tech companies. The administration’s new policy removes the requirement for AI safety standards, does away with mandatory watermarking of AI-generated content, and eliminates several compliance measures designed to address national security risks. Instead, the focus is on creating an environment where businesses can innovate freely without the burden of federal oversight.

Trump’s approach has been praised by some U.S. companies and investors, who argue that a lighter regulatory framework will accelerate AI development and deployment. However, critics warn that this laissez-faire strategy could increase risks, including misinformation, biased AI systems, and vulnerabilities in critical infrastructure.

The EU AI Act: A Cautious Balancing Act

Across the Atlantic, the European Union’s AI Act represents a starkly different approach. Passed in 2024, the legislation imposes strict requirements on companies deploying AI systems in the EU. It categorizes AI applications by risk level, ranging from minimal to high, and mandates extensive compliance measures for high-risk applications, such as biometric identification and AI in healthcare.

The Act also includes transparency requirements, prohibitions on certain AI practices deemed to pose unacceptable risk, and hefty fines for non-compliance, with penalties reaching up to 7% of a company’s global annual turnover for the most serious violations. The EU aims to establish itself as the global leader in “ethical AI,” prioritizing human rights and consumer protection over speed of innovation.

International Reception: A Tale of Two Philosophies

Globally, the U.S. and EU approaches have drawn mixed reactions. Trump’s deregulation has been lauded by some emerging economies and tech companies for its potential to drive innovation, but it has also raised alarms about unchecked risks. In contrast, the EU AI Act has been praised by human rights organizations but criticized by tech firms for its complexity and cost of compliance.

The contrasting policies highlight a growing divide in global AI governance: one prioritizing innovation at any cost, the other focused on minimizing harm through precautionary regulation.

The Dual Compliance Dilemma

For multinational companies, this divergence creates a regulatory maze. A company operating in both the U.S. and the EU must now navigate two fundamentally different regimes. How do businesses reconcile Trump’s hands-off approach with the EU’s strict compliance requirements?

The challenges are daunting and could have significant consequences for companies. For instance, a U.S.-based AI company developing a high-risk AI system, such as facial recognition software, might have to design two versions of its product: one that meets the EU’s stringent standards and another tailored to the U.S.’s deregulated environment. This dual compliance obligation not only increases costs but also slows down deployment timelines, reducing overall efficiency.

Moreover, companies face strategic decisions about where to allocate resources. Should they focus on the U.S. market, where regulations are lax but risks are higher, or on the EU market, where compliance is costly but confers a stamp of ethical credibility?

A Fragmented Global AI Landscape

The divergence between the U.S. and EU highlights a larger issue: the potential fragmentation of AI regulation worldwide. With other countries taking cues from either the U.S. or EU, the result could be a patchwork of conflicting rules. This fragmented landscape poses significant risks, including barriers to global trade, inconsistent safety standards, and the inability to establish a unified approach to address AI’s societal challenges.

From a business perspective, this disjointed environment could stifle innovation by creating uncertainty and increasing costs. Yet, it also offers opportunities for companies agile enough to adapt. Businesses that can build compliance frameworks flexible enough to meet multiple regulatory requirements may gain a competitive edge, positioning themselves as leaders in the global AI market.

The Road Ahead: Risks and Rewards

The benefits and risks of each approach are distinct. Trump’s deregulation could unleash a wave of AI-driven innovation, giving U.S. companies a competitive advantage in global markets. However, the lack of oversight increases the likelihood of AI-related risks, from data misuse to economic dislocation.

The EU’s cautious strategy, on the other hand, may slow innovation but ensures a more controlled rollout of AI technologies. By prioritizing ethics and safety, the EU positions itself as a global standard-bearer, appealing to consumers and countries concerned about the unchecked power of AI.

For businesses, the stakes couldn’t be higher. Navigating these dual regulatory regimes requires strategic foresight, operational flexibility, and a keen understanding of the risks and rewards in each market. As the U.S. and EU chart their divergent paths, companies must ask themselves: Can they afford to play by two sets of rules? And more importantly, can they afford not to?

While the debate over AI regulation rages on, one thing is clear: the global AI landscape is at a crossroads, and the choices made today will shape the future of technology, and the world, for decades to come.

Nicola Taljaard, Lawyer
Associate in the competition (antitrust) department of Bowmans, a specialist African law firm with a global network. She has experience in competition and white-collar crime law in several African jurisdictions, including merger control, prohibited practices, competition litigation, corporate leniency applications and asset recovery. * The views expressed by Nicola belong to her and not Bowmans, its affiliates or employees.
