California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

On October 13, 2025, Governor Gavin Newsom signed into law AB 853, which amends the California Artificial Intelligence Transparency Act (SB 942), a law that places obligations on makers of generative AI systems with the aim of increasing transparency so individuals can more easily assess whether digital content was generated or modified using AI. AB 853 expands the scope of the existing law by extending transparency obligations not only to those that develop generative AI systems but also to platforms that distribute generative AI content. It also covers entities that do not make or distribute AI content at all, namely manufacturers of devices that record audio and visual content, which must allow individuals to include information that would presumably indicate non-AI-generated content.

The AI Transparency Act’s stated aim is to enhance trust in AI by addressing concerns about the prevalence of increasingly realistic AI-generated content. “The proliferation of AI-generated content is having a profound effect on all of us, particularly as the rapidly evolving technology becomes increasingly easy to access and distribute, and the content becomes more and more difficult to distinguish from reality,” said Newsom in his letter to members of the California State Assembly.

The same day he signed these amendments into law, Newsom also signed two other AI laws: the Artificial Intelligence: Defenses Act (AB 316), discussed below, which prohibits a defense that the AI “autonomously” caused harm to an individual, and the Companion Chatbots Act (SB 243), which is also aimed at transparency and safety regarding certain AI companion chatbots. Newsom also recently signed into law the Health Advice from Artificial Intelligence Act (AB 489) and the Transparency in Frontier Artificial Intelligence Act (SB 53).

Covered Provider Obligations

Existing Law

The current AI Transparency Act applies to “covered providers,” meaning any person that:

(A) “creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users,” and

(B) “is publicly accessible [in California].”

Under existing law, covered providers must (in sum):

  • Make available a free AI detection tool meeting certain criteria, including details about provenance of the data;

  • Include an option to have an easily recognizable (i.e., “manifest”) disclosure in the AI-generated image, video, or audio content that is being created or altered by GenAI systems;

  • Include a latent (defined as present but not manifest; “manifest” defined as “easily perceived, understood, or recognized by a natural person”) disclosure or watermark, conveying certain information, in AI-generated image, video, or audio content created by or altered by AI to the extent that it is technically feasible; and

  • Contractually require third-party licensees of GenAI systems to maintain a latent disclosure in the content they create.

New Amendments

AB 853 delays the effective date of the existing provisions under the law to August 2, 2026 (previously January 1, 2026).

Under other new amendments, starting January 1, 2027, covered providers must not make available any generative AI system lacking the required transparency disclosures stipulated under the Act.

Large Online Platform Obligations

While the current law only applies to covered providers, AB 853 adds obligations that also apply to large online platforms. “Large online platform” is defined as:

“[A] public-facing social media platform, file-sharing platform, mass messaging platform, or stand-alone search engine that distributes content to users who did not create or collaborate in creating the content that exceeded 2,000,000 unique monthly users during the preceding 12 months.”

Large online platforms do not include broadband internet access service providers or telecommunications service providers.

Under the new amendments, by January 1, 2027, any large online platform, in connection with content it distributes, must (in sum):

  • Detect and disclose whether any provenance data (i.e., data that is embedded into digital content or that is included in the digital content’s metadata, for the purposes of verifying the digital content’s authenticity, origin, or history of modifications) is embedded into or attached to content distributed on the large online platform;

  • Provide a user interface that provides users with information about content authenticity, origin, and modification history such as whether any digital signatures are available;

  • Allow users to inspect all available system provenance data in an easily accessible way via information posted on the platform, in downloadable form, or through a link; and  

  • Not knowingly remove provenance data or digital signatures from content distributed on their services, where technically feasible.

Capture Device Manufacturer Obligations

The new amendments also include obligations on capture device manufacturers. “Capture device manufacturer” means:

“a person who produces a capture device for sale in the state [, but] does not include a person exclusively engaged in the assembly of a capture device.”

A “capture device” means:

“a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.”

Starting January 1, 2028, to the extent technically feasible, capture device manufacturers must:

(1) Provide users with the option to include a latent disclosure in content captured by a device that includes: (i) the name of the capture device manufacturer, (ii) the name and version number of the capture device that created or altered the content, and (iii) the time and date of the content’s creation or alteration; and

(2) Embed latent disclosures in content captured by the device by default.

Remedies/Defenses

Penalties under the Act remain the same. Violators of the Act can be held liable for a civil penalty of $5,000 per violation, with each day of non-compliance counted separately. The Act also allows for injunctive relief and recovery of attorney’s fees and costs. The Act is enforceable by a civil action filed by the Attorney General, a city attorney, or a county counsel.

Under California’s new Artificial Intelligence: Defenses Act (AB 316), however, those who develop, modify, or use AI are prohibited from asserting a defense that the AI “autonomously” caused harm to an individual. Therefore, anyone who develops, alters, or uses AI can be held directly responsible for any harm caused by AI technology, such as the outputs of an AI chatbot.

Key Takeaways

To address the requirements of the AI Transparency Act:

Developers of Generative AI systems should:

  • Develop solutions to create and distribute compliant generative AI content disclosures.

  • Ensure all vendor and third-party licensee contracts cover compliance with new AI transparency requirements.

Large online platform entities, such as social media platforms, should:

  • Develop means to detect provenance data in AI generated content.

  • Develop means to display this information to users of their platforms.

Entities such as smartphone makers, camera makers, and makers of video and audio recording devices should:

  • Develop solutions to create compliant latent disclosures in audio and visual content captured by users, including device information and the time and date of content capture and editing.

  • Ensure that devices embed this required information into content by default, while giving users the option to disable it.

Companies should also be on the lookout for follow-up legislation expected in 2026 to address any implementation challenges posed by the AI Transparency Act.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.

Leslie Veloz is an Associate at Hintze Law PLLC. Her areas of expertise include AI/ML technologies, U.S. state comprehensive and federal privacy laws, vendor risk management, privacy assessments, privacy by design, data protection agreements, and data breach notification.