California lawmaker behind SB 1047 reignites push for mandated AI safety reports

Posted on 10 July 2025 By jobuzo

California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world’s largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.

If signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California’s governor then called for a group of AI leaders — including the leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve — ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group, Encode, in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society — defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.

Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for the harms of their AI models. SB 53 was also designed not to pose a burden on startups and researchers that fine-tune AI models from leading AI developers, or use open source models.

With the new amendments, SB 53 is now headed to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to pass through several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit a “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

“Ensuring AI is developed safely should not be controversial — it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they’ve been less consistent in recent months. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model to date, until months after the model was made available. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; a third-party study later suggested GPT-4.1 may be less aligned than previous OpenAI models.

SB 53 represents a toned-down version of previous AI safety bills, but it still could force AI companies to publish more information than they do today. For now, they’ll be watching closely as Senator Wiener once again tests those boundaries.
