Governor Newsom signs Transparency in Frontier Artificial Intelligence Act

On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA). Authored by Senator Scott Wiener, TFAIA follows the release of the Governor’s California Report on Frontier AI Policy, which was drafted by the Joint California Policy Working Group on AI Frontier Models. The legislation also builds on insights from recent legislative hearings on the potential catastrophic risks that could arise from AI technology. Passed by the Legislature two weeks earlier, TFAIA introduces new transparency and safety obligations for companies developing frontier artificial intelligence models.

In his press release following the signing of the bill, Governor Newsom emphasized the state’s effort to balance safety and progress: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.” His statement reflects the broader intent behind TFAIA, which is to create a framework that addresses emerging risks while maintaining California’s leadership in technological advancement.

TFAIA goes into effect on January 1, 2026, and is expected to have a significant impact on frontier AI developers.

Scope

TFAIA applies to developers of frontier models, which are defined as foundation models trained using more than 10²⁶ computational operations (whether floating-point or integer operations). The law defines foundation models as artificial intelligence models that are:

(1) Trained on a broad data set,

(2) Designed for generality of output, and

(3) Adaptable to a wide range of distinctive tasks.

While the law applies broadly to all frontier model developers, it imposes stricter requirements on large frontier model developers, defined as those with annual revenue exceeding $500 million.
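
For a rough sense of what the 10²⁶-operation threshold means in practice, the sketch below applies the widely used rule of thumb that training a dense transformer takes roughly 6 × parameters × training tokens operations. The heuristic, the function names, and the example model figures are illustrative assumptions for this post, not anything prescribed by the statute.

```python
# Back-of-the-envelope check against TFAIA's 10^26-operation threshold.
# Assumption: the common "6 * N * D" heuristic (~6 operations per parameter
# per training token for a dense transformer). The statute counts total
# integer or floating-point operations; this heuristic is only a rough
# community estimator, not statutory methodology.

TFAIA_THRESHOLD_OPS = 10**26  # frontier-model compute threshold under TFAIA


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * parameters * training_tokens


def exceeds_tfaia_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the compute estimate crosses TFAIA's frontier-model threshold."""
    return estimated_training_ops(parameters, training_tokens) > TFAIA_THRESHOLD_OPS


# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens.
ops = estimated_training_ops(1e12, 20e12)
print(f"Estimated training compute: {ops:.2e} operations")  # 1.20e+26
print("Exceeds 10^26 threshold:", ops > TFAIA_THRESHOLD_OPS)  # True
```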

Catastrophic Risks & Critical Safety Incidents

TFAIA focuses primarily on addressing scenarios involving catastrophic risks and critical safety incidents.

Catastrophic risk is defined, in part, as a risk that a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than one billion dollars ($1,000,000,000) in damage to, or loss of, property, arising from:

  • creation or release of a chemical, biological, radiological, or nuclear weapon,

  • engaging in a cyberattack or conduct (e.g., murder, assault, or theft) that would be a crime if committed by a human, or

  • evading the control of its frontier developer or user.

Critical safety incident is defined, in part, as:

  • unauthorized access to, modification of, or exfiltration of, model weights of a frontier model that results in death or bodily injury,

  • harm resulting from a catastrophic risk,

  • loss of control of a frontier model causing death or bodily injury, or

  • a frontier model using deceptive techniques to subvert its frontier developer’s controls or monitoring in a manner that demonstrates materially increased catastrophic risk.

Transparency Requirements

As TFAIA’s title suggests, transparency is one of the law’s core pillars. The law introduces a series of disclosure and reporting obligations aimed at ensuring clear oversight of frontier artificial intelligence development and establishing means to address catastrophic risks.

Frontier AI Framework

Under TFAIA, large frontier developers must clearly and conspicuously publish a frontier AI framework on their websites and comply with it. As part of this framework, developers must explain how they:

  • incorporate national and international standards,

  • follow industry best practices,

  • assess and manage catastrophic risks (including thresholds and mitigations),

  • conduct third-party evaluations of risks,

  • secure unreleased model weights,

  • identify and respond to critical safety incidents, and

  • maintain internal governance processes.

This framework must be updated annually, with major changes disclosed within 30 days.

Transparency Report

Before deploying a new or significantly modified model, all frontier developers must publish a transparency report on their websites. The report must contain key details such as the release date, supported languages, and intended uses and restrictions. Large frontier developers must also disclose summaries of catastrophic risk assessments and mitigation steps taken in their transparency reports.
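
Purely as an illustration, the sketch below models the report contents described above as a simple data structure. TFAIA does not prescribe a report format; every field name here is an assumption made for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: TFAIA does not prescribe a transparency-report format.
# Field names are assumptions mirroring the disclosures described above.


@dataclass
class TransparencyReport:
    model_name: str
    release_date: str  # e.g., "2026-02-01"
    supported_languages: list[str]
    intended_uses: list[str]
    restrictions: list[str]
    # Additional summaries required of large frontier developers only:
    catastrophic_risk_assessment: Optional[str] = None
    mitigation_steps: Optional[str] = None


# Hypothetical report for a newly deployed model.
report = TransparencyReport(
    model_name="example-frontier-model-v1",
    release_date="2026-02-01",
    supported_languages=["en", "es"],
    intended_uses=["general-purpose text generation"],
    restrictions=["prohibited for autonomous cyberattack tooling"],
)
print(report)
```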

California’s Office of Emergency Services Reporting

Catastrophic Risk Reporting

Large frontier developers must also assess catastrophic risks stemming from internal use of their models. Such developers must send summaries of these assessments to California’s Office of Emergency Services (Cal OES) every three months or on a reasonable alternative schedule communicated to Cal OES in writing.

Incident Reporting

The law also directs Cal OES to create a reporting mechanism that both frontier developers and members of the public can use to submit incident reports. This will serve as a formal channel for reporting critical safety incidents related to frontier models, as outlined by TFAIA.

All frontier developers, regardless of size, must report to Cal OES any critical safety incident within 15 days of discovering it. If the incident poses an imminent risk of death or serious physical injury, it must be reported within 24 hours to the appropriate authority based on the nature of the threat.

Penalties and Enforcement

A large frontier developer that fails to comply with TFAIA may be subject to a civil penalty of up to one million dollars per violation, depending on the severity of the offense.

These penalties apply to specific failures: not publishing or transmitting required documents such as the frontier AI framework or transparency reports, failing to report critical safety incidents, making materially false or misleading statements about catastrophic risk or compliance, or not adhering to the developer’s own published framework. The California Attorney General has authority to enforce these penalties. The law does not create a private right of action.

Whistleblower Protections

Notably, TFAIA also creates whistleblower protections under the California Labor Code for employees who disclose health and safety risks of frontier models. It prohibits frontier developers from preventing employees from reporting, and from retaliating against employees who report, concerns about catastrophic risks or violations of the law.

CalCompute

In addition to the requirements for frontier developers, TFAIA creates a consortium within the Government Operations Agency to develop “CalCompute,” a public computing cluster. The aim of CalCompute is to support safe and ethical AI development by fostering research and innovation and expanding access to computational resources. The consortium is required to take steps to ensure that the CalCompute platform and workforce are established within the University of California system.

Continued Updates and Regulations

TFAIA also directs California’s Department of Technology to recommend updates to the law to keep pace with technological advancements.

The law also authorizes Cal OES to adopt regulations designating federal laws, regulations, or guidance documents as meeting California’s standards for safety incident reporting. To qualify, a federal rule must set requirements that are substantially equivalent to or stricter than those outlined in TFAIA. These rules do not need to mandate reporting directly to the state. Additionally, the federal standard must be designed to assess, detect, or reduce the risk of catastrophic harm from AI systems.

Conclusion

Developers working on frontier AI models should review TFAIA closely to understand its implications and prepare for compliance. With 32 of the world’s top 50 AI companies based in California, these obligations are likely to shape industry norms and influence how developers approach risk management and compliance going forward.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.

Clara De Abreu E Souza is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.