As the end of September approaches, California Governor Gavin Newsom faces a critical decision on SB 1047, a landmark AI safety bill that could have wide-ranging effects on the tech industry and AI development in the state. The legislation, part of a broader push to regulate AI, would impose stricter safety standards on developers of advanced AI models, particularly those that could pose catastrophic risks such as aiding bioweapons creation or destabilizing financial systems.
If signed into law, SB 1047 would hold developers of these advanced models to stringent requirements: undergoing third-party audits starting in 2026, reporting significant AI-related safety incidents, and complying with new rules designed to guard against AI misuse. The bill also places strong emphasis on protecting whistleblowers who raise concerns about corporate compliance with these safety standards.
While the bill has gained support from some AI safety advocates and tech companies such as Anthropic, major industry players including Google, Meta, and OpenAI have voiced opposition. Their concerns center on the potential to stifle innovation and competitiveness, especially for open models and smaller developers. Critics argue that the bill's requirements could make it harder for California to maintain its standing as a hub of tech innovation.
Governor Newsom, who has previously shown interest in AI safety, remains undecided and has signaled concern that the bill could have unintended consequences for the state's tech economy. With just days left before the September 30 deadline, his decision will likely set a precedent for AI regulation across the U.S.
All eyes are on Sacramento as Newsom prepares to make this pivotal choice, with both supporters and detractors eagerly awaiting the outcome.