The US AI Safety Institute (US AISI) at the National Institute of Standards and Technology (NIST) has released the second public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outline voluntary best practices for identifying, measuring, and mitigating risks to public safety and national security across the AI lifecycle. The updated draft improves on the initial public draft (released July 2024) by (1) detailing best practices for model evaluations; (2) expanding domain-specific guidance on cyber, chemical, and biological risks; (3) underscoring a marginal-risk framework; (4) addressing open models; and (5) managing risk across the AI supply chain. The comment period on the updated guidelines is open until 15 March 2025.
Disclaimer
The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations. The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.