California is pushing forward with the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a proposed bill designed to regulate the development and deployment of advanced AI models. Central to this legislation is the creation of a new regulatory body, the Frontier Model Division.
The Need for AI Regulation
The rise of artificial intelligence has generated both excitement for its potential and fear of its misuse. Across the country, some 200 bills have been proposed to place guardrails on this powerful emerging technology. Among these, California’s proposed bill could have the most significant impact, especially in Silicon Valley, home to tech giants like Google and Meta and to the startup accelerator Y Combinator.
“The goal here is to get ahead of the risks instead of waiting for the risks to play out, which is what we’ve always done,” said state Sen. Scott Wiener, the San Francisco Democrat who authored SB 1047.
Potential Benefits and Risks
While AI promises advances in fields like medicine, climate science, wildfire forecasting, and clean power development, it also poses significant risks. Concerns range from the development of autonomous weapons to AI-assisted phishing attacks and malware. Advocates often point to earlier software-driven disasters as cautionary precedents, including the 2017 WannaCry ransomware attack and the 2010 Flash Crash, in which automated trading contributed to a sudden market plunge.
Wiener’s bill aims to mitigate these risks by placing the development and deployment of advanced AI models under the oversight of the proposed Frontier Model Division. It also calls for CalCompute, a publicly funded computing cluster to support large-scale AI model development and foster equitable AI innovation.
“It is a very basic requirement to perform the safety evaluations that these large labs have already committed to perform,” Wiener explained.
Opposition from Tech Industry
However, the bill has faced strong opposition from the tech industry. Critics argue that it misunderstands how advanced AI systems are built and assigns liability in the wrong place, potentially stifling innovation and open-source AI development.
Rob Sherman, vice president and chief privacy officer for Meta, highlighted these concerns in a letter to Wiener, stating that the bill “fails to take this full ecosystem into account and assign liability accordingly, placing disproportionate obligations on model developers for parts of the ecosystem over which they have no control.”
Despite these objections, Wiener maintains that the bill’s requirements are reasonable and necessary. “If a significant risk of catastrophic harm is identified, the developers must take steps to mitigate this risk, making it harder and less likely for these dangers to materialize,” he said.
Scope and Enforcement
The bill specifically targets developers of large-scale AI models, generally those trained with computing power costing more than $100 million. It would penalize companies that fail to report AI safety incidents or to prevent the use of models with hazardous capabilities. Violations could result in civil penalties, including the deletion of a model and its data if the violation leads to severe harm such as death or property damage.
Only the California attorney general would have the authority to bring enforcement actions for violations under the proposal.
Legislative Journey
The bill has garnered support from various AI safety advocacy groups, including Open Philanthropy, Encode Justice, and the Center for AI Safety. A May poll by David Binder Research, sponsored by the Center for AI Safety, found strong bipartisan support for such legislation among likely voters.
After passing the Senate in May, the bill moved through the Assembly’s Privacy and Consumer Protection and Judiciary committees. It now awaits a budgetary assessment in the Appropriations Committee, followed by a floor vote, before potentially reaching the governor’s desk.
The Bigger Picture
Sunny Gandhi of Encode Justice emphasized the importance of proactive regulation, referencing lessons learned from the spread of disinformation on social media. “It’s better to get ahead of the curve to stop an AI Chernobyl from happening,” Gandhi said, referring to the 1986 nuclear disaster. “It is a nascent technology that moves extremely rapidly.”
As AI continues to evolve, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act represents a crucial step in balancing innovation with public safety, setting a precedent for future AI governance.