US Government Tightens AI Use with New Safeguards

In a bold move to ensure the ethical use of artificial intelligence (AI) within federal operations, the White House announced on Thursday a comprehensive plan to integrate “concrete safeguards” into the government’s expanding AI applications. This initiative is aimed at safeguarding Americans’ rights and maintaining safety across various sectors where AI is increasingly becoming a staple.

Key Requirements for Federal Agencies:

  • Monitoring and Assessment: Agencies are tasked with closely monitoring AI’s impact on the public, particularly focusing on preventing algorithmic discrimination.
  • Risk Assessments: Conducting thorough risk evaluations is now mandatory, alongside establishing clear operational and governance benchmarks.
  • Public Transparency: In an effort to foster public trust, agencies will detail their AI use in public disclosures, ensuring citizens are well-informed of the AI technologies at play in government operations.

Executive Orders and National Security:

President Joe Biden’s executive order from October plays a crucial role in this new directive. It calls for AI developers, especially those whose systems may pose risks to national security or public welfare, to disclose safety test outcomes to the U.S. government before any public release.

Specific AI Safeguards:

  • Opt-out Mechanisms: Air travelers can now decline AI-based facial recognition by the Transportation Security Administration without facing delays in their screening.
  • Human Oversight in Healthcare: When AI tools support diagnostic decisions in the federal healthcare system, a human professional must verify the results to ensure accuracy.

Generative AI: A Double-Edged Sword

The rapid advancement of generative AI has stirred a mix of excitement and concern, prompting the government to act swiftly to address potential negative outcomes ranging from employment disruptions to threats against democracy.

Promoting Transparency and Safe AI Use:

  • AI Inventories: Agencies will publicly share their AI use cases, along with relevant metrics and, where feasible, government-owned AI code and models.
  • AI in Action: Examples of current federal AI applications include FEMA’s use of AI in disaster assessment and the CDC’s AI-powered disease spread predictions.

Investment in AI Expertise:

The White House plans to hire 100 AI professionals to guide the responsible and safe adoption of AI technologies across federal agencies. Additionally, each agency is expected to appoint a chief AI officer within the next 60 days, marking a significant step toward institutionalizing AI governance.

This initiative represents a significant step toward balancing innovation with ethical considerations and public welfare, reflecting the Biden administration’s commitment to leading by example in the responsible use of AI technology.
