As the home of many tech giants, California could play a pivotal role in setting AI regulation standards—and experts say the state needs to act fast.
A new report released Tuesday, titled The California Report on Frontier AI Policy, outlines a proposed framework for regulating artificial intelligence that could become state law. The report follows Governor Gavin Newsom’s veto of SB 1047 in September 2024. The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California’s most ambitious attempt to regulate AI.
Since the release of the report’s initial draft in March, researchers say new evidence shows AI can pose serious threats, including risks related to chemical, biological, radiological, and nuclear (CBRN) weapons. They warn that without proper safeguards, the technology could cause “irreversible harm.”
Still, the report’s authors argue that regulation must be balanced—strong enough to reduce risk but not so strict that it stifles innovation.
Clash with Federal Bill to Ban State-Level AI Rules
The report lands amid debate in Congress over a Republican-led budget bill that would block states and local governments from creating their own AI laws for the next decade. Supporters of the bill claim a single national framework would avoid regulatory confusion. Critics say it would favor tech companies seeking less oversight while stripping away state-level protections against AI bias.
The authors of the California report argue that the state can design “carefully targeted policy” that both supports national alignment and fulfills its duty to keep residents safe.
Why Size Alone Shouldn’t Decide Regulation
One of Newsom’s key criticisms of SB 1047 was its focus on regulating AI models based solely on size. The new report agrees, stating that regulation should also account for each model’s risk level and impact.
The report recommends requiring third-party risk evaluations for AI systems, especially since developers often withhold information on data collection, testing, and safety procedures. Some companies operate with such secrecy that even their own leaders can't fully explain how their systems work; Anthropic's CEO, for instance, has acknowledged that his team doesn't completely understand its own AI models.
Third-party audits, especially from diverse teams outside the tech industry, could improve transparency and help protect communities most vulnerable to AI bias. These evaluations could also encourage companies to improve safety measures and reduce their legal exposure.
However, conducting these evaluations requires access to internal company data, something developers may be reluctant to share. A 2024 evaluation by AI safety organization METR found that OpenAI gave only limited access to information about its o3 model, making a full safety assessment difficult.
SB 1047 and the Push for Guardrails
After vetoing SB 1047, Newsom promised to develop new AI safeguards and formed an advisory group of experts—including Stanford’s Fei-Fei Li—to shape future policy. Their recommendations are reflected in this report.
SB 1047 would have been the most aggressive AI regulation in the U.S., requiring AI companies to implement strict safety protocols, protect whistleblowers, and maintain the ability to shut down their models in an emergency. While the new report supports whistleblower protections, it does not endorse the proposed “kill switch.”
Major tech firms like OpenAI, Meta, Google, and Hugging Face opposed SB 1047, calling its requirements unrealistic and damaging to innovation. Elon Musk publicly backed the bill, and two former OpenAI employees criticized the company’s resistance in a letter to Governor Newsom.
As the debate continues, California finds itself at a crossroads—torn between tech industry pressure and growing calls for urgent, enforceable AI safeguards.