Artificial Intelligence

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

On October 13, 2025, Governor Gavin Newsom signed into law AB 853, which amends the California Artificial Intelligence Transparency Act (SB 942) (AI Transparency Act), a law that imposes obligations on makers of generative AI systems and aims to increase transparency so that individuals can more easily assess whether digital content was generated or modified using AI. AB 853 expands the scope of the existing law to impose transparency obligations not only on those that develop generative AI systems but also on platforms that distribute generative AI content. It also extends to those that do not make or distribute AI content at all, namely manufacturers of devices that record audio and visual content, who must allow individuals to embed information that would presumably indicate content is not AI-generated.

The AI Transparency Act’s stated aim is to enhance trust in AI by addressing concerns about the prevalence of increasingly realistic AI-generated content. “The proliferation of AI-generated content is having a profound effect on all of us, particularly as the rapidly evolving technology becomes increasingly easy to access and distribute, and the content becomes more and more difficult to distinguish from reality,” said Newsom in his letter to members of the California State Assembly.

The same day he signed these amendments into law, Newsom also signed into law two other AI laws: the Artificial Intelligence: Defenses Act (AB 316), discussed below, which prohibits a defense that the AI “autonomously” caused harm to an individual, and the Companion Chatbots Act (SB 243), also aimed at transparency and safety regarding certain AI companion chatbots. Newsom also recently signed into law the Health Advice from Artificial Intelligence Act (AB 489) and the Transparency in Frontier Artificial Intelligence Act (SB 53).

Covered Provider Obligations

Existing Law

The current AI Transparency Act applies to “covered providers,” meaning any person that:

(A) “creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users”, and

(B) “is publicly accessible [in California]”.

Under existing law, covered providers must (in sum):

  • Make available a free AI detection tool meeting certain criteria, including details about provenance of the data;

  • Include an option for an easily recognizable (i.e., “manifest”) disclosure in AI-generated image, video, or audio content created or altered by GenAI systems;

  • Include a latent disclosure or watermark (“latent” defined as present but not manifest; “manifest” defined as “easily perceived, understood, or recognized by a natural person”), conveying certain information, in AI-generated image, video, or audio content created or altered by AI, to the extent technically feasible; and

  • Contractually require third-party licensees of GenAI systems to maintain a latent disclosure in the content they create.

New Amendments

AB 853 delays the effective date of the existing provisions under the law to August 2, 2026 (previously January 1, 2026).

Under other new amendments, starting January 1, 2027, covered providers must not make available any generative AI system lacking the required transparency disclosures stipulated under the Act.

Large Online Platform Obligations

While the current law only applies to covered providers, AB 853 adds obligations that also apply to large online platforms. “Large online platform” is defined as:

“[A] public-facing social media platform, file-sharing platform, mass messaging platform, or stand-alone search engine that distributes content to users who did not create or collaborate in creating the content that exceeded 2,000,000 unique monthly users during the preceding 12 months.”

Large online platforms do not include broadband internet access service providers or telecommunications service providers.

Under the new amendments, by January 1, 2027, any large online platform, in connection with content it distributes, must (in sum):

  • Detect and disclose whether any provenance data (i.e., data that is embedded into digital content or that is included in the digital content’s metadata, for the purposes of verifying the digital content’s authenticity, origin, or history of modifications) is embedded into or attached to content distributed on the large online platform;

  • Provide a user interface that gives users information about content authenticity, origin, and modification history, such as whether any digital signatures are available;

  • Allow users to inspect all available system provenance data in an easily accessible way via information posted on the platform, in downloadable form, or through a link; and  

  • Not knowingly remove provenance data or digital signatures from content distributed on their services, where technically feasible.

Capture Device Manufacturer Obligations

The new amendments also include obligations on capture device manufacturers. “Capture device manufacturer” means:

“a person who produces a capture device for sale in the state [,but] does not include a person exclusively engaged in the assembly of a capture device.”

A “capture device” means:

“a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.”

Starting January 1, 2028, to the extent technically feasible, capture device manufacturers must:

(1) Provide users with the option to include a latent disclosure in content captured by a device that includes: (i) the name of the capture device manufacturer, (ii) the name and version number of the capture device that created or altered the content, and (iii) the time and date of the content’s creation or alteration; and

(2) Embed latent disclosures in content captured by the device by default.

Remedies/Defenses

Penalties under the Act remain the same. Violators of the Act can be held liable for a civil penalty of $5,000 per violation, with each day of non-compliance counted separately. The Act also allows for injunctive relief and recovery of attorney’s fees and costs. The Act is enforceable by a civil action filed by the Attorney General, a city attorney, or a county counsel.

Under California’s new Artificial Intelligence: Defenses Act (AB 316), however, those who develop, modify, or use AI are prohibited from asserting a defense that the AI “autonomously” caused harm to an individual. Therefore, anyone who develops, alters, or uses AI can be held directly responsible for any harm caused by AI technology, such as the outputs of an AI chatbot.

Key Takeaways

To address the requirements of the AI Transparency Act:

Developers of Generative AI systems should:

  • Develop solutions to create and distribute compliant generative AI content disclosures

  • Ensure all vendor and third-party licensee contracts cover compliance with new AI transparency requirements

Large online platform entities, such as social media platforms, should:

  • Develop means to detect provenance data in AI-generated content

  • Develop means to display this information to users of their platforms

Entities such as smartphone makers, camera makers, and makers of video and audio recording devices should:

  • Develop solutions to embed compliant disclosures in audio and visual content captured by users, covering device information and the time and date of content capture or editing

  • Ensure that devices embed this required information into content by default, while giving users the choice to disable it

Companies should also be on the lookout for follow-up legislation expected in 2026 to address any implementation challenges posed by the AI Transparency Act.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.

Leslie Veloz is an Associate at Hintze Law PLLC. Her areas of expertise include AI/ML technologies, U.S. state comprehensive and federal privacy laws, vendor risk management, privacy assessments, privacy by design, data protection agreements, and data breach notification.

California Passes Law on AI Companion Chatbot Safety

On Oct. 13, 2025, California Governor Gavin Newsom signed into law Senate Bill 243 – Companion Chatbots. SB 243, authored by Senator Steve Padilla, requires operators of companion chatbot platforms to notify users that the chatbot is AI, provide specific disclosures to minors, and restrict harmful content. The law also includes a private right of action.

The law is in response to mounting public concerns about children’s online interactions with companion chatbots. In his press release following the signing of multiple children’s online safety bills, Newsom highlighted this public concern. “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

The law goes into effect January 1, 2026, with reporting requirements starting on July 1, 2027.

Scope

This law applies to operators, which is defined as a person who makes a companion chatbot platform available to a user in California. The law defines companion chatbots as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”

The law excludes the following from the definition of “companion chatbot”:

  • A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance.

  • A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game.

  • A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.

Key Provisions

Notice and Disclosure Obligations

The law outlines specific disclosure requirements for both general users and minors.

General Users

The law requires that if a reasonable person would be misled to believe that they are interacting with a human, operators must issue a clear and conspicuous notification that the companion chatbot is artificially generated and not human.

Minors

For users that operators know are minors, operators must not only disclose that the user is interacting with artificial intelligence, but must also provide by default a clear and conspicuous notification, at least every three hours during continuing companion chatbot interactions, reminding the user to take a break and that the chatbot is artificially generated and not human.

Additionally, the law requires operators to disclose, on the application, the browser, or any other format through which users can access the chatbot platform, that the companion chatbot may not be suitable for some minors.

Safety Protocols and Transparency Measures

In addition to its disclosure requirements, the law mandates that operators implement, and publish on their websites, safety protocols and transparency measures.

Under the law, companion chatbots may not engage with users unless the operator maintains a protocol that:

  • prevents the production of content related to suicidal ideation, suicide, or self-harm; and

  • provides notice to users referring them to crisis services, such as a suicide hotline or crisis text line, if they express suicidal thoughts or self-harm.

Content Restrictions for Minors

The law requires operators to implement reasonable measures to prevent companion chatbots from producing visual material depicting sexually explicit conduct or from directly stating that a minor should engage in such conduct.

Reporting Requirements

Effective July 1, 2027, operators must submit an annual report to California’s Office of Suicide Prevention detailing:

  • The number of times they have issued a crisis service provider referral notification in the preceding calendar year.

  • Protocols put in place to detect, remove, and respond to instances of suicidal ideation* by users.

  • Protocols put in place to prohibit a companion chatbot response about suicidal ideation* or actions with the user.

*The law requires that suicidal ideation be measured using evidence-based methods.

The law specifies that such reports must exclude any user identifiers or personal information. Once compiled, California’s Office of Suicide Prevention will publish data from this report on its website.

Private Right of Action

The law creates a private right of action for any person who suffers injury in fact as a result of a violation of the law and allows them to pursue:

  • Injunctive relief.

  • Damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation.

  • Reasonable attorney’s fees and costs.

Key Takeaways

Companion chatbot operators should develop protocols to ensure compliance with the law, including:

  • providing required user notification and disclosures,

  • identifying and responding to user expressions of self-harm,

  • identifying and restricting content in scope, and

  • compiling and submitting required reporting.

This legislation was signed alongside a broader package of child online safety laws, including the Digital Age Assurance Act (AB 1043), which establishes new online age-assurance requirements. Together, these measures contribute to a growing framework of children’s online safety laws in California.

See our blog post on the Digital Age Assurance Act.

Clara De Abreu E Souza is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Governor Newsom signs Transparency in Frontier Artificial Intelligence Act

By Clara De Abreu E Souza

On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA). Authored by Senator Scott Wiener, TFAIA follows the release of the Governor’s California Report on Frontier AI Policy, which was drafted by the Joint California Policy Working Group on AI Frontier Models.

Read More

Hintze Lawyers Recognized in 2026’s Best Lawyers in America

This year, eight of Hintze Law’s attorneys have been recognized by Best Lawyers® across a variety of categories, marking a significant milestone for the firm. Every one of our associates earned recognition, reflecting both the breadth of talent within our team and the dedication each attorney brings to their practice.

Read More

California Adopts Privacy, Cybersecurity, ADMT Regulations and Amendments

By Sam Castic

The California Privacy Protection Agency (CPPA) has adopted final regulations on privacy risk assessments, cybersecurity audits, and automated decisionmaking technology (ADMT), as well as amendments to existing CCPA regulations.  Final publication of the regulations is pending review by the Office of Administrative Law, and depending on when that occurs, the regulations will likely take effect 10/1/2025 or 1/1/2026.  Some key concepts from these regulations, and actions to consider, are below.

Read More

GenAI in the Workplace: Hong Kong PCPD Releases Checklist for Employer Policies

By Leslie Veloz and Jennifer Ruehr

The Hong Kong Office of the Privacy Commissioner for Personal Data (“PCPD”) recently published its Checklist on Guidelines for the Use of Generative AI by Employees (“Checklist”). The goal of the Checklist is to help organizations draft internal policies and procedures governing employee use of generative AI (“GenAI”) tools, especially where GenAI is used to process personal data.

Read More

10 areas for US-based privacy programs to focus in 2025

By Sam Castic

The post below was originally published by the IAPP at https://iapp.org/news/a/10-areas-for-privacy-programs-to-focus-in-2025.

This past year was another jam-packed one for privacy teams, and it was not easy to stay on top of all the privacy litigation, enforcement trends, and new laws and regulations in the U.S.

Read More

The EDPB Releases an Opinion on AI Model Development and Deployment

By Emily Litka

On December 18th, in response to a request from the Irish Supervisory Authority (“SA”), the European Data Protection Board (the “EDPB”) published an opinion (the “Opinion”) on the application of the GDPR to certain aspects of AI model development and deployment.

Read More

California Enacts "genAI" Laws That Introduce New Privacy and Transparency Requirements, Amongst Others

By Emily Litka

In September 2024, California Governor Gavin Newsom signed a number of new generative AI (“genAI”) bills into law. These laws address risks associated with deepfakes, training dataset transparency, use of genAI in healthcare settings, privacy, and AI literacy in schools. California is the first US state to enact such sweeping genAI regulations.

Read More

FTC Introduces Novel Ban in Its Settlement with NGL Labs and Scrutinizes AI Representations

By Emily Litka

On July 9, 2024, the Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office (LA DA) reached a settlement with NGL Labs, the maker of the “NGL: ask me anything” app, and its co-founders. The complaint alleged violations of the Federal Trade Commission Act (FTC Act), the Children’s Online Privacy Protection Act (COPPA), the Restore Online Shoppers’ Confidence Act (ROSCA), and similar California state laws. In the complaint, the FTC and LA DA also brought claims against NGL’s co-founders individually.

Read More