As artificial intelligence (AI) evolves rapidly, the legal and ethical implications of its use grow increasingly complex. Speaking at the EmTech MIT conference, Dominique Shelton Leipzig, a partner at Mayer Brown and leader of the firm’s global data innovation practice, emphasized the critical need for proactive AI governance. Her insights, drawn from early drafts of proposed legislation around the world and from her book “Trust: Responsible AI, Innovation, Privacy and Data Leadership,” provide a nuanced framework for assessing and addressing AI risk.
Assessing AI Risk: A Traffic Light Approach
Shelton Leipzig proposes a red-light, yellow-light, and green-light system, inspired by international legislative efforts, to categorize AI use cases by risk level:
Red-Light Use Cases (Prohibited)
Certain AI applications are considered too hazardous to permit at all. These include AI used in democratic processes such as voting, continuous surveillance of public spaces, remote biometric monitoring, and social scoring that drives financial decisions. These areas are judged too prone to abuse and harm.
Green-Light Use Cases (Low Risk)
Conversely, AI applications in customer service, product recommendations, chatbots, and video gaming are considered low risk: they have accumulated a multi-year track record of safe use with little evidence of bias or safety problems.
Yellow-Light Use Cases (High Risk)
The majority of AI applications fall into this category and require rigorous governance. From HR and family-planning tools to financial services such as credit evaluation and investment management, these use cases demand careful monitoring because of the significant harm they can cause when they fail.
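To show how an organization might operationalize this triage, here is a minimal Python sketch. It is not drawn from Shelton Leipzig’s talk: the tier names, the USE_CASE_TIERS inventory, and the triage function are all hypothetical, and the conservative default of routing unknown use cases to the yellow tier is one possible design choice.

```python
from enum import Enum

class RiskTier(Enum):
    RED = "prohibited"      # banned outright
    YELLOW = "high_risk"    # permitted only with rigorous governance
    GREEN = "low_risk"      # routine review is sufficient

# Hypothetical inventory mapping use-case labels to tiers, following the
# categories described above; a real inventory would be far richer.
USE_CASE_TIERS = {
    "voting": RiskTier.RED,
    "public_surveillance": RiskTier.RED,
    "remote_biometrics": RiskTier.RED,
    "social_scoring": RiskTier.RED,
    "hr_screening": RiskTier.YELLOW,
    "credit_evaluation": RiskTier.YELLOW,
    "investment_management": RiskTier.YELLOW,
    "customer_service": RiskTier.GREEN,
    "product_recommendations": RiskTier.GREEN,
    "chatbot": RiskTier.GREEN,
    "video_gaming": RiskTier.GREEN,
}

def triage(use_case: str) -> RiskTier:
    """Return the risk tier for a use case, defaulting to YELLOW.

    Treating unknown use cases as high risk is deliberately cautious:
    anything unreviewed gets governance until someone classifies it.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.YELLOW)

if __name__ == "__main__":
    for case in ("chatbot", "credit_evaluation", "voting", "new_feature"):
        print(f"{case}: {triage(case).value}")
```

The point of the default is worth noting: a use case no one has assessed yet is treated like a yellow-light case, so governance applies first and relaxation comes only after review.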
Governing High-Risk AI
For AI deemed high risk, Shelton Leipzig outlines key steps for governance, drawing from the EU Artificial Intelligence Act and the White House’s “Blueprint for an AI Bill of Rights”:
- Ensure High-Quality Data: Data used must be accurate, relevant, and legally compliant.
- Embrace Continuous Testing: Regular testing pre- and post-deployment is essential to identify and mitigate algorithmic biases, ensuring safety and regulatory compliance.
- Maintain Human Oversight: When outcomes deviate from expectations, a human must step in to correct course and mitigate risk.
- Create Fail-Safes: Establish clear protocols to halt an AI application when a deviation cannot be corrected effectively (see the sketch after this list).
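To make these steps concrete, the following minimal sketch wires continuous testing, human oversight, and a fail-safe halt into a single loop. It is an illustration rather than anything prescribed by the EU Artificial Intelligence Act or the Blueprint: the thresholds, the run_evaluation stub, and the human_review placeholder are all assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class TestResult:
    bias_score: float   # e.g., a demographic-parity gap; 0.0 means none
    accuracy: float

BIAS_THRESHOLD = 0.10   # hypothetical tolerance, set by policy
ACCURACY_FLOOR = 0.90   # hypothetical minimum acceptable accuracy

def run_evaluation() -> TestResult:
    """Stand-in for a real pre- and post-deployment test suite."""
    return TestResult(bias_score=random.uniform(0.0, 0.2),
                      accuracy=random.uniform(0.85, 0.99))

def human_review(result: TestResult) -> bool:
    """Placeholder for a human reviewer deciding whether the deviation
    is correctable (retraining, reweighting, narrowing scope, etc.)."""
    return result.bias_score < 0.15  # reviewer approves only mild drift

def governance_cycle(max_cycles: int = 5) -> None:
    for cycle in range(1, max_cycles + 1):
        result = run_evaluation()
        ok = (result.bias_score <= BIAS_THRESHOLD
              and result.accuracy >= ACCURACY_FLOOR)
        print(f"cycle {cycle}: bias={result.bias_score:.2f} "
              f"acc={result.accuracy:.2f} ok={ok}")
        if ok:
            continue
        # Deviation detected: escalate to a human (the oversight step).
        if human_review(result):
            print("  reviewer: correctable, remediate and continue")
            continue
        # Fail-safe: halt the application when correction is not possible.
        print("  reviewer: not correctable, halting application")
        break

if __name__ == "__main__":
    governance_cycle()
```

In practice the evaluation stub would be replaced by real fairness and safety test suites, and the halt would trigger an incident and escalation process rather than a print statement; the structure of test, escalate, then halt is what the sketch is meant to convey.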
Proactive Steps Towards AI Governance
Shelton Leipzig advises organizations not to delay implementing these governance measures, even as AI legislation remains in development. AI governance should be collaborative, involving key stakeholders and informed by continuous updates to the board of directors, general counsel, and CEO.
Conclusion: AI Governance as a Preemptive Strategy
The framework proposed by Shelton Leipzig highlights the importance of preemptive and ongoing governance of AI applications. By adopting these guidelines, companies can navigate the complex landscape of AI with greater confidence, ensuring their technology aligns with ethical standards and regulatory expectations. This approach not only safeguards against legal and ethical pitfalls but also fosters trust in AI among consumers and business partners. As AI continues to reshape industries, responsible governance will be key to harnessing its potential without compromising safety, privacy, or equity.