State Legislation

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

On October 13, 2025, Governor Gavin Newsom signed into law AB 853, which amends the California Artificial Intelligence Transparency Act (SB 942), a law placing obligations on makers of generative AI systems with the aim of increasing transparency so that individuals can more easily assess whether digital content was generated or modified using AI. AB 853 expands the scope of the existing law, extending transparency obligations beyond developers of generative AI systems to platforms that distribute generative AI content. It also reaches entities that neither make nor distribute AI content at all: manufacturers of devices that record audio and visual content, who must allow individuals to embed information that would presumably indicate content is not AI-generated.

The AI Transparency Act’s stated aim is to enhance trust in AI by addressing concerns about the prevalence of increasingly realistic AI-generated content. “The proliferation of AI-generated content is having a profound effect on all of us, particularly as the rapidly evolving technology becomes increasingly easy to access and distribute, and the content becomes more and more difficult to distinguish from reality,” said Newsom in his letter to members of the California State Assembly.

The same day he signed these amendments into law, Newsom also signed two other AI laws: the Artificial Intelligence: Defenses Act (AB 316), discussed below, which prohibits a defense that AI “autonomously” caused harm to an individual, and the Companion Chatbots Act (SB 243), aimed at transparency and safety regarding certain AI companion chatbots. Newsom also recently signed into law the Health Advice from Artificial Intelligence Act (AB 489) and the Transparency in Frontier Artificial Intelligence Act (SB 53).

Covered Provider Obligations

Existing Law

The current AI Transparency Act applies to “covered providers,” meaning any person that:

(A) “creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users,” and

(B) “is publicly accessible [in California].”

Under existing law, covered providers must (in sum):

  • Make available a free AI detection tool meeting certain criteria, including details about provenance of the data;

  • Include an option to have an easily recognizable (i.e., “manifest”) disclosure in AI-generated image, video, or audio content created or altered by GenAI systems;

  • Include, to the extent technically feasible, a latent disclosure or watermark conveying certain information in AI-generated image, video, or audio content created or altered by AI (“latent” is defined as present but not manifest; “manifest” as “easily perceived, understood, or recognized by a natural person”); and

  • Contractually require third-party licensees of GenAI systems to maintain a latent disclosure in the content they create.

New Amendments

AB 853 delays the effective date of the existing provisions under the law to August 2, 2026 (previously January 1, 2026).

Under other new amendments, starting January 1, 2027, covered providers may not make available any generative AI system that lacks the transparency disclosures required under the Act.

Large Online Platform Obligations

While the current law only applies to covered providers, AB 853 adds obligations that also apply to large online platforms. “Large online platform” is defined as:

“[A] public-facing social media platform, file-sharing platform, mass messaging platform, or stand-alone search engine that distributes content to users who did not create or collaborate in creating the content that exceeded 2,000,000 unique monthly users during the preceding 12 months.”

Large online platforms do not include broadband internet access service providers or telecommunications service providers.

Under the new amendments, by January 1, 2027, any large online platform, in connection with content it distributes, must (in sum):

  • Detect and disclose whether any provenance data (i.e., data that is embedded into digital content or that is included in the digital content’s metadata, for the purposes of verifying the digital content’s authenticity, origin, or history of modifications) is embedded into or attached to content distributed on the large online platform;

  • Provide a user interface that provides users with information about content authenticity, origin, and modification history such as whether any digital signatures are available;

  • Allow users to inspect all available system provenance data in an easily accessible way via information posted on the platform, in downloadable form, or through a link; and  

  • Not knowingly remove provenance data or digital signatures from content distributed on their services, where technically feasible.

Capture Device Manufacturer Obligations

The new amendments also include obligations on capture device manufacturers. “Capture device manufacturer” means:

“a person who produces a capture device for sale in the state [but] does not include a person exclusively engaged in the assembly of a capture device.”

A “capture device” means:

“a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.”

Starting January 1, 2028, to the extent technically feasible, capture device manufacturers must:

(1) Provide users with the option to include a latent disclosure in content captured by a device that includes: (i) the name of the capture device manufacturer, (ii) the name and version number of the capture device that created or altered the content, and (iii) the time and date of the content’s creation or alteration; and

(2) Embed latent disclosures in content captured by the device by default.

Remedies/Defenses

Penalties under the Act remain the same. Violators of the Act can be held liable for a civil penalty of $5,000 per violation, with each day of non-compliance counted separately. The Act also allows for injunctive relief and recovery of attorney’s fees and costs. The Act is enforceable by a civil action filed by the Attorney General, a city attorney, or a county counsel.
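Because each day of non-compliance is counted as a separate violation, potential exposure compounds quickly. The following sketch (illustrative arithmetic only, not legal analysis; the violation counts and durations are hypothetical) shows one plausible reading of how that structure multiplies:

```python
# Illustrative only: rough penalty-exposure arithmetic under the Act's
# $5,000-per-violation structure, where each day of non-compliance is
# counted as a separate violation. All scenario figures are hypothetical.

PENALTY_PER_VIOLATION = 5_000  # dollars, per the Act


def potential_exposure(violations: int, days_noncompliant: int) -> int:
    """One plausible reading: each violation accrues the penalty once
    per day of non-compliance."""
    return violations * days_noncompliant * PENALTY_PER_VIOLATION


# e.g., 3 distinct violations left unremedied for 30 days
print(potential_exposure(3, 30))  # 450000
```

Even a single unremedied violation accrues $5,000 for every day it persists, which is why prompt remediation matters as much as avoiding violations in the first place.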

Under California’s new Artificial Intelligence: Defenses Act (AB 316), however, those who develop, modify, or use AI are prohibited from asserting a defense that the AI “autonomously” caused harm to an individual. Therefore, anyone who develops, alters, or uses AI can be held directly responsible for any harm caused by AI technology, such as the outputs of an AI chatbot.

Key Takeaways

To address the requirements of the AI Transparency Act:

Developers of Generative AI systems should:

  • Develop solutions to create and distribute compliant generative AI content disclosures.

  • Ensure all vendor and third-party licensee contracts cover compliance with new AI transparency requirements.

Large online platform entities, such as social media platforms, should:

  • Develop means to detect provenance data in AI-generated content.

  • Develop means to display this information to users of their platforms.

Entities such as smartphone makers, camera makers, and makers of video and audio recording devices should:

  • Develop solutions to create compliant disclosures, in audio and visual content captured by users, that include device information and the time and date of content capture and editing.

  • Ensure that devices embed this required information into content by default, while giving users the option to disable it.

Companies should also be on the lookout for follow-up legislation expected in 2026 to address any implementation challenges posed by the AI Transparency Act.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.

Leslie Veloz is an Associate at Hintze Law PLLC. Her areas of expertise include AI/ML technologies, U.S. state comprehensive and federal privacy laws, vendor risk management, privacy assessments, privacy by design, data protection agreements, and data breach notification.

California Passes Law on AI Companion Chatbot Safety

On Oct. 13, 2025, California Governor Gavin Newsom signed into law Senate Bill 243 – Companion Chatbots. SB 243, authored by Senator Steve Padilla, requires operators of companion chatbot platforms to notify users that the chatbot is AI, provide specific disclosures to minors, and restrict harmful content. The law also includes a private right of action.

The law is in response to mounting public concerns about children’s online interactions with companion chatbots. In his press release following the signing of multiple children’s online safety bills, Newsom highlighted this public concern. “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

The law goes into effect January 1, 2026, with reporting requirements starting on July 1, 2027.

Scope

This law applies to operators, defined as a person who makes a companion chatbot platform available to a user in California. The law defines companion chatbots as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”

The law excludes the following from the definition of “companion chatbot”:

  • A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance.

  • A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game.

  • A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.

Key Provisions

Notice and Disclosure Obligations

The law outlines specific disclosure requirements for both general users and minors.

General Users

The law requires that if a reasonable person would be misled to believe that they are interacting with a human, operators must issue a clear and conspicuous notification that the companion chatbot is artificially generated and not human.

Minors

For users that operators know are minors, operators must not only disclose that the user is interacting with artificial intelligence, but must also provide, by default, a clear and conspicuous notification at least every three hours during continuing companion chatbot interactions that reminds the user to take a break and that the chatbot is artificially generated and not human.

Additionally, the law requires operators to disclose, on the application, the browser, or any other format through which users can access the chatbot platform, that the companion chatbot may not be suitable for some minors.

Safety Protocols and Transparency Measures

In addition to its disclosure requirements, the law mandates that operators implement, and publish on their websites, safety protocols and transparency measures.

Under the law, companion chatbots may not engage with users unless the operator maintains a protocol that:

  • prevents the production of content related to suicidal ideation, suicide, or self-harm; and

  • provides notice to users referring them to crisis services, such as a suicide hotline or crisis text line, if they express suicidal thoughts or self-harm.

Content Restrictions for Minors

The law requires operators to implement reasonable measures to prevent companion chatbots from producing visual material depicting sexually explicit conduct or from directly stating that a minor should engage in such conduct.

Reporting Requirements

Effective July 1, 2027, operators must submit an annual report to California’s Office of Suicide Prevention detailing:

  • The number of times they have issued a crisis service provider referral notification in the preceding calendar year.

  • Protocols put in place to detect, remove, and respond to instances of suicidal ideation* by users.

  • Protocols put in place to prohibit a companion chatbot response about suicidal ideation* or actions with the user.

*The law requires that suicidal ideation be measured using evidence-based methods.

The law specifies that such reports must exclude any user identifiers or personal information. Once compiled, California’s Office of Suicide Prevention will publish data from this report on its website.

Private Right of Action

The law creates a private right of action for any person who suffers injury in fact as a result of a violation of the law and allows them to pursue:

  • Injunctive relief.

  • Damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation.

  • Reasonable attorney’s fees and costs.

Key Takeaways

Companion chatbot operators should develop protocols to ensure compliance with the law, including:

  • providing required user notification and disclosures,

  • identifying and responding to user expressions of self-harm,

  • identifying and restricting content in scope, and

  • compiling and submitting required reports.

This legislation was signed alongside a broader package of child online safety laws, including the Digital Age Assurance Act (AB 1043), which establishes new online age-assurance requirements. Together, these measures contribute to a growing framework of children’s online safety laws in California.

See our blog post on the Digital Age Assurance Act.

Clara De Abreu E Souza is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.


California Passes Digital Age-Assurance Act Into Law

By Hansenard Piou

On October 13th, 2025, Governor Newsom signed the Digital Age Assurance Act (AB 1043) into law. Introduced by co-authors Assembly Member Buffy Wicks and Senator Tom Umberg, the law establishes age-assurance requirements for computer and mobile operating system providers and app stores as well as app developers with an aim to protect children’s online safety. The Digital Age Assurance Act enters into effect on January 1, 2027.

Read More

California’s Social Media Account Cancellation Act Signed into Law

By Clara De Abreu E Souza

On October 8, 2025, California Governor Gavin Newsom signed into law Assembly Bill 656 — Account Cancellation. AB 656, authored by Assembly member Pilar Schiavo, focuses on social media platforms and requires them to provide users with a clear and accessible way to delete their accounts. This action must also trigger the complete deletion of the user’s personal data.

Read More

California Opt Me Out Act Signed into Law

By Cameron Cantrell

On October 8, 2025, California’s Governor Newsom signed AB 566—the California Opt Me Out Act—into law. The California Opt Me Out Act, using the same definitions as the CCPA, requires any business that develops or maintains an internet browser to build in an opt-out preference signal (“OOPS”) functionality. The law takes effect on January 1, 2027.

Read More

California Further Amends its Data Broker Registration Law

By Hansenard Piou

On October 8, 2025, Governor Gavin Newsom signed SB 361 into law. Introduced by Senator Josh Becker, the bill amends California’s Data Broker Registration Law (and amendments to the law under the Delete Act) with additional disclosure requirements for data brokers.

Read More

Governor Newsom signs Transparency in Frontier Artificial Intelligence Act

By Clara De Abreu E Souza

On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA). Authored by Senator Scott Wiener, TFAIA follows the release of the Governor’s California Report on Frontier AI Policy, which was drafted by the Joint California Policy Working Group on AI Frontier Models.

Read More

California Adopts Privacy, Cybersecurity, ADMT Regulations and Amendments

By Sam Castic

The California Privacy Protection Agency (CPPA) has adopted final regulations on privacy risk assessments, cybersecurity audits, and automated decisionmaking technology (ADMT), as well as amendments to existing CCPA regulations.  Final publication of the regulations is pending review by the Office of Administrative Law, and depending on when that occurs, the regulations will likely take effect 10/1/2025 or 1/1/2026.  Some key concepts from these regulations, and actions to consider, are below.

Read More

California’s Healthline.com Enforcement Action Shows CCPA’s Teeth – and Sensitive Data Reach

By Mason Fitch and Kate Black

The California Attorney General’s Office (“OAG”) announced an enforcement action against Healthline.com on July 1 that marks a significant development in California Consumer Privacy Act (CCPA) enforcement. This action, accompanied by the largest fine under CCPA yet at $1.55 million, highlights critical areas of consideration for any company engaging in the advertising ecosystem as well as any company that processes sensitive personal information.

Read More

State Privacy Regulators Announce Formation of Collaboratory Consortium

by Felicity Slater and Susan Hintze

On April 16, 2025, the California Privacy Protection Agency (CPPA) and state Attorneys General from California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon announced the formation of the bipartisan "Consortium of Privacy Regulators." The focus of the Consortium will be to foster multi-state coordination, including the sharing of expertise and resources, in the investigation of potential violations and enforcement of their states' respective comprehensive privacy laws.

Read More

Virginia Governor Signs Reproductive Health Data Restrictions into Law

by Cameron Cantrell and Felicity Slater 

On March 24, 2025, Governor Youngkin (R) of Virginia signed SB 754—which amends the Virginia Consumer Protection Act (VCPA) to restrict the collection and processing of “reproductive or sexual health information” and is enforceable through a private right of action—into law. The law will take effect July 1, 2025. 

Read More

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night

By Felicity Slater and Kate Black

The Maryland Online Data Privacy Act (“MODPA” or the “Act”), which takes effect October 1, 2025, establishes a set of novel requirements that will have a particular impact for companies operating in the health and wellness sectors. 

Read More

New York Legislature Passes Extraordinarily Restrictive Health Data Privacy Bill

By Mike Hintze and Felicity Slater

Last year, we wrote about a proposed New York State law that would have significant impacts for entities that process health and wellness related data. That bill failed to pass before the 2024 legislative session ended. But today, in the early days of the 2025 session, the New York State legislature has passed Senate Bill S929 (SB S929), which is essentially unchanged from last year’s bill.  

Read More

10 areas for US-based privacy programs to focus in 2025

By Sam Castic

The post below was originally published by the IAPP at https://iapp.org/news/a/10-areas-for-privacy-programs-to-focus-in-2025.

This past year was another jammed one for privacy teams, and it was not easy to stay on top of all the privacy litigation, enforcement trends, and new laws and regulations in the U.S.

Read More

In ‘Holy Redeemer’ Settlement Agreement, OCR Continues to Prioritize Privacy Protections for Reproductive Health Information

by Felicity Slater and Kate Black

On November 26, 2024, the Office of Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) announced a resolution agreement and corrective plan with Pennsylvania’s Holy Redeemer Hospital (Holy Redeemer). The agreement settles OCR’s claim that Holy Redeemer disclosed a patient’s protected health information (PHI)—including intimate reproductive health details—without a permissible purpose or valid authorization from the patient in violation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.

Read More

California Legislature Passes the Delete Act

By Taylor Widawski

On September 15, 2023, the California Legislature passed Senate Bill 362, known as the Delete Act, which amends the California data broker law. The bill now awaits a signature from the governor. If signed, certain aspects of the law will go into effect as soon as January 31, 2024.

Read More

Washington My Health My Data Act - Part 8: Notice Obligations

By Mike Hintze

When it comes into effect, the Washington My Health My Data Act (MHMDA or the Act) will impose new privacy notice obligations on regulated entities. The Act requires specific privacy disclosures relating to data that meets the very broad definition of “consumer health data.” It appears to require regulated entities to draft, post, link to, and maintain a separate “Consumer Health Data Privacy Policy” that will be largely, but not entirely, redundant of their existing privacy statement(s).

Because the Consumer Health Data Privacy Policy will be publicly available and easily scrutinized by plaintiffs’ lawyers and the Washington Attorney General, mistakes implementing this obligation are likely to be a key source of costly and disruptive litigation. Regulated entities will therefore need to take great care in meeting the Act’s notice requirements, which are, in some respects, unusual and unexpected.

Read More

Washington My Health My Data Act – Part 7: Biometric Data

By Mike Hintze & Jevan Hutson

Biometric data is among the broad range of “consumer health data” regulated by the Washington My Health My Data Act (MHMDA). In light of MHMDA’s broad definition of biometric data, GDPR-level consent requirements, new obligations, and private right of action, the Act dramatically changes and complicates the regulation of biometric data in Washington state and is poised to become the most disruptive change in U.S. biometric privacy law since Illinois’ BIPA.

Read More

Washington My Health My Data Act - Part 6: Data Subject Rights

By Mike Hintze

The Washington My Health My Data Act provides consumers with several rights, including a right of access, a right to delete, a right to withdraw consent, and a right to not be discriminated against for exercising their rights. While each of these rights can be found in other privacy laws and so, at a high level, do not seem particularly surprising here, the ways they are included in this Act are unique, create uncertainty, and in some cases go well beyond what exists in any other privacy law.  As a result, regulated entities seeking to comply with them will face difficult, costly, and disruptive implementation challenges (and with respect to the deletion right, the potential for catch-22 situations where full legal compliance may be impossible). These challenges, along with the Act’s private right of action, set up a significant risk of expensive legal claims and litigation.

Read More