By Shirin Ghaffary, Bloomberg
A California lawmaker is making another effort to regulate artificial intelligence in the state after legislation that would have held large companies liable for harm caused by their technology was vetoed last year by Governor Gavin Newsom.
State Senator Scott Wiener, a San Francisco Democrat, has introduced a bill that would require companies developing AI models above a certain computing performance threshold to publicly release safety and security protocols that assess the potential catastrophic risks to humanity from the technology. Under the bill, AI companies also would need to report any “critical safety incidents,” such as theft of sensitive technical details, to the state attorney general. Companies that may be affected by the proposed legislation include OpenAI, Alphabet Inc.’s Google and Anthropic.
The first-in-the-nation transparency requirements are part of amendments released Tuesday to SB 53, which Wiener introduced earlier this year. The legislation includes protections for whistleblowers within AI companies and the creation of a public cloud to provide low-cost access to computing power for startups and academic researchers.
“As AI continues its remarkable advancement, it’s critical that lawmakers work with our top AI minds to craft policies that support AI’s huge potential benefits while guarding against material risks,” Wiener said in remarks prepared for the release of the amendments. “SB 53 strikes the right balance between boosting innovation and establishing guardrails to support trust, fairness and accountability in the most remarkable new technology in years.”
The proposed requirements come after the US Senate recently blocked an effort to bar individual states from enacting AI regulations in the quickly advancing field. Wiener’s renewed push despite his earlier failed attempt shows how AI regulation continues to be a popular issue among legislators who say they want to protect the public from the technology’s potentially devastating harms.
Wiener previously authored AI regulation bill SB 1047, which drew fierce opposition from some AI companies, tech industry leaders and Silicon Valley venture capitalists for a requirement that would have held companies liable for catastrophic harm to people caused by their technology. While SB 1047 passed California’s legislature, it was ultimately vetoed by Newsom, who called it too burdensome.
While SB 53 has transparency requirements and potential penalties if companies don’t disclose critical safety incidents, it doesn’t include SB 1047’s provision on liability.
Newsom created a working group on cutting-edge AI models following his veto of SB 1047. The panel, which includes well-known AI researcher Fei-Fei Li — who opposed last year’s legislation — was asked to develop policy guidelines for the state. The new language around transparency requirements in Wiener’s SB 53 mirrors the broad recommendations from the panel in a recent report. The working group didn’t propose any specific legislation.
Already, companies like Meta Platforms Inc., Google, OpenAI and Anthropic regularly release safety guidelines for their models; SB 53 aims to codify and standardize these procedures.
The amended bill is expected to go before a California Assembly committee later this month. Wiener called it a “work in progress” and said he will be talking with stakeholders in the coming weeks to refine the proposal.
©2025 Bloomberg L.P.