Welcome to the AITX Community AI Policy Update

I’m Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.

Every month, I’ll share analysis of the most important AI policy news, with a focus on the regulatory shifts that matter most to developers and builders.

I also plan to share some recommended sources for good policy deep dives. Your feedback is welcome! If you’d ever like to dive further into AI policy, give me a shout!

TL;DR - AI Policy Update #3

Governor Newsom signaled his intent to sign a major frontier AI bill. Though the legislation carves out startups, it will likely accelerate the adoption of similar laws across the states, creating the sort of regulatory patchwork known to chill innovation.

Full Breakdown

THE BASICS OF SB 53

SB 53 aims to establish some ground rules for the companies building advanced models, focusing on preventing worst-case scenarios before they happen. The bill is built on three key pillars: transparency, incident reporting, and whistleblower protection.

Pulling Back the Curtain: 

At its core, SB 53 is about making sure AI developers are open about the potential dangers of their creations. The bill zeroes in on what it calls a "catastrophic risk"—a single AI-related incident that could lead to more than 50 deaths or over $1 billion in property damage. Think of scenarios straight out of science fiction: an AI helping to create a bioweapon, launching an autonomous cyberattack, or simply going rogue and evading human control. 

To manage these risks, the bill requires frontier AI developers to: 

Publish a Safety Playbook: Companies must create and publicly share a "frontier AI framework." This document will outline their safety protocols, what they consider dangerous capability thresholds, and how their internal governance is set up to prevent disasters. 

Keep it Updated: This isn't a one-and-done assignment. The framework must be reviewed annually and updated within 30 days of any changes.

Issue Transparency Reports: For each new frontier model, developers must publish a report detailing its technical specs and their assessment of its catastrophic risks.

Check in with the State: Developers must share risk assessments from their internal AI use with California’s Office of Emergency Services (OES) every three months. 

No Lying Allowed: The bill explicitly prohibits developers from making false or misleading statements about the risks of their models or their safety practices. 

Sounding the Alarm: A Hotline for AI Incidents 

When something goes wrong with a powerful AI, the state government wants to know immediately. SB 53 establishes a system to ensure that critical safety incidents, such as an unauthorized person gaining access to a model's core programming or an AI agent acting uncontrollably, are reported quickly. The bill directs OES to set up a hotline for reporting these incidents. Developers must report an incident within 15 days; if there is an imminent threat of death or serious injury, that window shrinks to just 24 hours. To keep the public informed, OES will publish an annual report summarizing the types of incidents reported, with all identifying information removed. The bill also looks ahead: if the federal government creates a similar or stricter reporting system, California will defer to the national standard.

Protecting the Insiders: 

Whistleblower regulations tend to be sectoral, and AI is a very new sector with few protections tailored to it yet. That's why the bill relies on employees within AI companies to raise red flags when they see something concerning. Recognizing that speaking up can be risky, SB 53 creates strong whistleblower protections.

Specifically, the bill forbids companies from retaliating against employees whose job involves managing risk when they report a belief that the company's activities pose a "specific and substantial catastrophic risk." These new protections supplement existing California laws that protect all employees who report any violation of the law. To ensure these protections have teeth, the bill empowers employees to sue their employers for retaliation. Furthermore, the Attorney General can enforce the bill's transparency and reporting rules with civil penalties of up to $1 million per violation.

While this may sound great to folks concerned about worst-case AI scenarios, critics point out that whistleblower protections can lead to unintended consequences, such as discouraging hiring in the first place and making it difficult to fire disgruntled employees.

UPSHOT

I’ve described SB 53 as the “least bad bill” I’ve seen from the states on frontier AI regulation. Policy folks at the largest labs will mention that they already do many of the tasks required by SB 53. More generally, there’s a sense that more information from the labs can inform better policy making down the road. Still, the devil is always in the details (a tired line that’s nevertheless an accurate one). How this law is enforced, how burdensome it proves to be on labs, and whether the gathered information is effective in mitigating the speculative harms behind the bill are all open questions. 

California continuing to “lead” on AI regulation also carries a significant risk of causing others to follow. While SB 53 may not pose significant hurdles to AI innovation in isolation, if this bill is copied by other states, labs may soon face a complex and varied set of related requirements.
