9th Circuit Rules on Injunction of California Age-Appropriate Design Code, Upholding Coverage Definition

On March 12th, the 9th Circuit Court of Appeals ruled on parts of the injunction of the California Age-Appropriate Design Code Act (CA AADC). The circuit court overturned the injunction against the definition determining scope of covered businesses and affirmed the injunction against five of the six provisions that NetChoice challenged, sending the age estimation provision back to the district court.  

CA AADC was passed in September 2022 and was slated to go into effect July 1, 2024. The law was modeled after the UK Children’s Code and imposed several privacy-by-design and documentation requirements on for-profit businesses offering services that are “likely to be accessed” by children under 18. For a breakdown of the law’s requirements, please see our 2022 post analyzing the CA AADC and its impacts. 

Litigative History 

Shortly after the law’s passage, the CA AADC has been subject to prolonged litigation. NetChoice, a technology trade organization representing Google, Meta, Airbnb, and other technology companies, filed its initial complaint in 2022, alleging that the CA AADC was unconstitutionally vague and failed strict scrutiny under the First Amendment for compelling protected speech and content. The District Court granted NetChoice’s motion for preliminary injunction in September 2023, enjoining the act in full. (For more information about the decision, see our 2023 post.) 

California appealed to the 9th Circuit Court of Appeals which ruled in August 2024. The court applied the standard from the Supreme Court’s recent ruling in Moody v. NetChoice and remanded to the district court to conduct the proper First Amendment analysis. However, it enjoined the CA AADC’s data protection impact assessment (DPIA) requirement for likely violating the First Amendment, which is still enjoined. 

On remand, NetChoice filed an amended complaint in October 2024, asserting that the Act’s coverage of online businesses “likely to be accessed by children” was content based and failed strict scrutiny. NetChoice then filed a motion for preliminary injunction in November. The District Court granted the motion in March 2025, fully enjoining enforcement of the AADC. California appealed in April 2025, leading to the current decision. 

This appeal focuses on the CA AADC’s coverage definition (1798.99.30(b)(4)(A)–(F)), age estimation requirement (1798.99.31(a)(5)), and data use and dark patterns restrictions (1798.99.31(b)(1)–(4), (7)).  

CA AADC’s Coverage Definition 

The circuit court found that NetChoice was unlikely to succeed in its facial challenge that the law as a whole is content-based regulation targeting speech. The finding hinged on the definition of a key term determining coverage of the law -- “likely to be accessed by children” -- which is defined by a list of separate factors that can indicate whether it would be reasonable to expect a business to be accessed by children and, thus, covered by the CA AADC (“the coverage definition”). Under the Moody standard for a facial challenge, courts must first assess the law’s scope by examining what activities the law regulates and by what actors and then decide whether the number of the law’s applications that violate the First Amendment outweigh the law’s otherwise legitimate applications. 

Disagreeing with the district court and NetChoice, the 9th Circuit found the coverage definition -- “likely to be accessed by children” -- doesn’t speak to the nature of the business or its content. Because coverage by the CA AADC does not require all or even a combination of the definition’s factors and because the factors include age-based factors that don’t consider the content that a business publishes (e.g., looking at demographic user data received by the business), the circuit court held that the coverage definition would not raise the same First Amendment issues in a substantial number of applications of the law. 

While NetChoice focused on possible applications of the CA AADC that may impact covered speech, the circuit court found that it did not meet its burden for a facial challenge to consider whether a substantial proportion of applications would impact protected speech. As a result, the circuit court found the Act as a whole was unlikely to violate the First Amendment. 

Data Use, Dark Patterns, and Age Estimation 

While the 9th Circuit disagreed with the theory that the CA AADC should be fully enjoined due to the unconstitutionality of the coverage definition, the circuit court examined the challenged provisions individually. The circuit court upheld the injunctions against the data use and dark patterns provisions finding that the challenges were likely to succeed, but did not uphold the injunction against age estimation requirement. 

Age Estimation 

The circuit court found that the requirement to “estimate the age of child users with a reasonable level of certainty or apply the privacy and data protections afforded to children to all consumers” was unlikely to facially violate the First Amendment, since the provision “says nothing about restricting content on its face.” 

However, the provision requires the level of certainty to match “the risks that arise from the data management practices of the business.” In previously enjoining the DPIA requirement, the circuit court had found that “data management practices” was statutorily defined to cover the content of the online service, product, or feature. As a result, the circuit court remanded the issue to the district court to apply the Moody standard and consider whether “data management practices” cross-references the factors listed under the DPIA provision (1798.99.31(a)(1)(B)) which trigger First Amendment review. 

Data Use and Dark Patterns 

The circuit court held that the restrictions against use of data that is to children’s “material detriment” and that impacts their “best interests and well-being” were unconstitutionally vague as defined under the CA AADC. The circuit court noted that these terms have no established meaning or proscribed guidance as to what conduct may be “material detriment to the physical health, mental health, or wellbeing of a child.” 

While California argued that “best interest of the child” pulled from family law, the circuit court disagreed that family law’s case-by-case, child-specific standard translates to data privacy, especially when the CA AADC gauges whether data practices are in the best interest of child users as a group. 

The circuit court extended this reasoning to the dark pattern provision. The CA AADC prohibits using dark patterns to encourage children to provide more personal information than reasonably expected, to forgo privacy protections, or “to take any action that the business knows, or has reason to know, is materially detrimental to the child’s physical health, mental health.” Even if “dark pattern” is a defined term under the California Consumer Privacy Act (CCPA), the circuit court found that “has reason to know” and “materially detrimental” were also unconstitutionally vague. 

For these vagueness reasons, the circuit court upheld the injunction against the data use and dark patterns provisions. 

Notice-and-Cure Provision 

The 9th Circuit vacated the district court’s determination that CA AADC’s notice and cure provision is not volitionally severable. It remanded for the district court’s consideration of whether the remaining valid CA AADC provisions could be severed and enforced separately. 

Takeaways 

Even after the remand to the district court and the probable appeal of its result, litigation over the CA AADC will likely continue, as NetChoice may challenge specific elements of the coverage definition and age estimation provisions as enforced against specific covered entities. 

While children and teen online safety laws have shifted to age-verification/estimation and harmful content, NetChoice v. Bonta and the CA AADC have influenced how legislatures draft their children privacy laws for constitutional validity, and NetChoice has alleged that the “best interest of the child” requirements of similar AADC laws are also unconstitutionally vague. As we await the results of such litigation, the 9th Circuit rulings provide some takeaways for businesses.  

Key Takeaways. Although CA AADC’s coverage definition remains intact and the age estimation provisions seems likely to survive, the Act’s other unenjoined provisions currently apply to covered businesses. The requirements include: 

  • configuring all default privacy settings for children to a high level of protection, 

  • providing privacy tools for children and parents,  

  • restricting the default collection of precise geolocation information for children, and 

  • signaling to the child any collection of precise geolocation information.  

(Special thanks to Cobun Zweifel-Keegan at IAPP for his helpful graphic)  

Covered businesses should update their privacy programs to comply with these requirements and begin to consider how it may comply with age estimation requirements. 

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on data protection. Hintze attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of AI, privacy, and data security. 

Hansenard Piou is an Associate at Hintze Law PLLC with experience in global data protection issues, including kids’ and teens online safety global privacy laws, AADC, COPPA, privacy impact assessments, and GDPR.  

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

Hintze Law Global Privacy Updates

The Hintze team continuously tracks privacy and security updates around the world to bring you a regular update on the latest developments. Below is a snapshot of updates from mid-February 2026 to date. Please also check out our latest AI Global Legal Updates post.

US Privacy Updates

State Privacy Enforcement

CalPrivacy Settles with PlayOn Sports for $1.10M over Alleged CCPA Violations​

On March 2nd, CalPrivacy announced a settlement with 2080 Media Inc (d/b/a PlayOn Sports), a sports ticketing platform widely used by schools, for allegedly violating the California Consumer Privacy Act (CCPA) by not offering effective opt-out options of sales and “sharing” or honoring the GPC (Global Privacy Control) opt-out signal. In addition to a $1.1M penalty, PlayOn must scan its websites and mobile apps at least quarterly to maintain an inventory of tracking technologies, complete CCPA risk assessments and have the Board of Directors review such assessments, and​ update its privacy notices for understandability by its student audience. See our blog post by Cameron Cantrell for more details. ​

CalPrivacy Settles with Ford for Alleged CCPA Do Not Sell/Share Violations

On March 5th, CalPrivacy announced a stipulated final order with Ford Motor Company relating to a discontinued practice of requiring consumers to confirm their email addresses before it processed do not sell and do not share requests. Consumers submitted a webform to make their requests, but Ford did not process the request until the consumer received a verification email and clicked a “confirm email” button in that email. Customers who did not do this did not have their requests processed. While this practice only impacted dozens of consumers during the time it was the process Ford followed, Ford agreed to pay a $375K penalty. See Sam Castic’s LinkedIn post for key takeaways.

CalPrivacy February Board Meeting​

On February 27th, the CalPrivacy Board met to discuss DROP (Delete Request and Opt-Out Platform) and a petition for requirements for “essential consumer devices” (phones, laptops, wearables).​ DROP went live for consumers on January 1, 2026, and received 100K+ consumer requests, with plans to integrate the data broker registry, provide API documentation to assist in DROP access, and establish a sandbox to test DROP integration. CalPrivacy voted to deny a petition to start CCPA rulemaking for data minimization, purpose limitation, and the consent requirements for “essential consumer devices,” but to explore the feasibility and impacts of such concepts described in the petition.

Texas AG Actions Targeting Companies with Chinese Ties​

In February, Texas Attorney General Ken Paxton filed five separate lawsuits against TP-Link, Anzu Robotics, Temu, Shein, and Lorex for violations of the Texas Deceptive Trade Practices Act (DTPA). While these companies operate in different sectors and product areas—including e-commerce, drones, baby monitors, and home networking devices—the lawsuits are part of a focused effort targeting companies with alleged ties to China. The lawsuits allege the companies misrepresented their products as “safe and secure” while concealing known ties to Chinese manufacturing and data processing. Furthermore, the complaints allege that the companies failed to disclose compelled data disclosure obligations under Chinese law that risked exposing American consumer data to foreign adversaries.

Connecticut Attorney General Enforcement Report​

On February 5th, the Connecticut Attorney General issued its 2025 enforcement report. The report focused on privacy complaints for cookie banners and simplicity of opt-out, the data breach notification clock beginning at “awareness of suspicious activity,” and explanation of consumer rights in privacy notice. The report explained that key areas of focus and enforcement will be youth privacy, cookies and data sales, data brokers, treatment of consumer health data, healthcare, pharma, telecom, and ed-tech sectors; and use of AI for employment-related decisions and AI-assisted pricing.

Iowa AG Files Suit Against GM and OnStar

On February 26th, Iowa’s Attorney General filed a lawsuit against General Motors (GM) and OnStar, alleging that the companies engaged in deceptive business practices in violation of the Iowa Consumer Fraud Act (ICFA). Building on the allegations from the recent FTC settlement against GM and OnStar, see our blog post by Elizabeth Crooks and Susan Hintze, the Iowa complaint adds allegations that GM incentivized salespeople to use misleading and deceptive techniques to obtain customer consent to enroll in connected vehicle services that sold personal data to data brokers and third parties. The complaint highlights that one of the data brokers GM sold data to partnered with a Chinese data broker, underscoring a growing enforcement focus on U.S. companies that allow personal data to go to China or companies with perceived ties to China.

State Legislation Updates

New York Health Information Privacy Act Reintroduced​

On Feb. 20th, an amended version of the NY Health Information Privacy Act (HIPA) was reintroduced by the bill's original sponsor, Liz Krueger (D).​​ The 2026 version revises the definition of “Regulated Health Information” to more closely align with the Washington My Health My Data Act and similar laws, expands the scope of processing permitted without authorization to include "providing, maintaining, developing, improving, or repairing a specific product, feature, or service requested by such individual, or functionality thereof," removes the 24-hour waiting period for requests for authorization, and adds a definition of “verifiable" for agents’ exercise of data subject rights to address past security concerns. See Felicity Slater’s LinkedIn post for further analysis and our Health + Biotech Group’s  post “A Few Current Trends in Health Privacy & AI.”

Virginia Legislature Passes Bill Banning Location Data Sales

Virginia’s legislature passed a bill that amends the state’s comprehensive privacy law to ban sales of precise geolocation data.  If the governor signs this bill, Virginia will join Maryland and Oregon in banning the sale of precise geolocation data.

Children’s Privacy

FTC Issues COPPA Policy Statement Regarding Age Verification Technologies​

On February 25, the FTC announced that it will not bring COPPA (“Children’s Online Privacy Protection Act”) enforcement actions against companies based on the collection children’s personal information for the purpose of age verification. The policy statement elaborates that since collection of such personal information without parental consent may be a violation of the COPPA Rule, non-enforcement is dependent on the company otherwise complying with the Rule, including purpose limitation and reasonable security measures.​ This policy statement follows concerns raised during the FTC age verification workshop and Chairman Ferguson’s opinion concurring with the Jan 2025 COPPA Rule amendments.

Alabama Enacts App Store Age Verification Law

On February 17th, Alabama joined Texas, Utah, Louisiana and California in enacting laws that require mobile app stores to verify user age information and to make it available to app developers. The law takes effect on January 1, 2027 (although some requirements don’t take effect until October 1, 2027). It imposes obligations on both app store providers and app developers, including specific requirements for developers once they receive age data. Similar laws have faced constitutional challenges: the Texas law has been enjoined and the Utah law is currently being challenged.

Utah App Store Age Verification Act Lawsuit​

On February 5, 2026, the Computer & Communications Industry Association (CCIA) sued Utah to block SB142, the App Store Accountability Act. The lawsuit alleges that the law violates the First Amendment for being overly broad in the affected apps and insufficiently tailored for the stated goal of child protection.

DOJ Rule

Google Lawsuits Allege DOJ Rule Violations

Following the February 5th lawsuit against Lenovo, three new class action lawsuits were filed against Google alleging that its use of tracking technologies on other companies’ websites, and subsequent sharing of data with Chinese companies, violates the DOJ Rule on Access to U.S. Sensitive Personal Data and Government-Related Data By Countries of Concern or Covered Persons. While the DOJ Rule does not have a private right of action, plaintiffs claim that the practices that allegedly violate the DOJ Rule help establish claims under the federal Electronic Communications Privacy Act (ECPA) and state statutes and torts. The cases were each filed in the U.S. District Court for the Northern District of California, and are: McGrath v. Google LLCNadeu v. Google LLC, and Jenkins v. Google LLC. See Sam Castic’s LinkedIn commentary on state claims relating to the DOJ rule.

Google ReCAPTCHA

Change to ReCAPTCHA Processing Role​

Google announced that it is updating its terms for its ReCAPTCHA service to reflect a change in Google's processing role. Previously, Google took the position that it was a controller with respect to the service. Starting April 2, 2026, Google will take the position that it is a processor and subject to Google's Cloud Data Processing Addendum. Customers are instructed to remove references to Google's Privacy Policy and Terms of Use from their website (likely in a privacy statement) to the extent they are placed there in connection with use of ReCAPTCHA.

International Updates

Children’s Privacy

UK ICO Fines Reddit £14.47m for Children’s Privacy Failures

On February 24th, the UK ICO (Information Commissioner’s Office) announced that it fined Reddit £14.47m for alleged children’s privacy failures citing the ICO’s Age Appropriate Design Code (AADC) - also known as the “Children’s Code”. The ICO alleged, among other things, that Reddit had terms preventing users under 13, but no age verification measures to check ages, had no lawful basis for U13 personal data,​ and had not performed a DPIA on the risk of using children’s personal data. Notably, the ICO indicated that self-declaration was not a sufficient form of age verification given risks posed by Reddit. The fine was based on the number of U13s impacted, the degree of the potential harm, duration of the failing, and global revenue. Reddit has announced plans to appeal.

UK ICO Fines Imgur Owner MediaLab £247,590 for Children’s Privacy Failures

On February 5th, the ICO fined MediaLab (owner of Imgur) £247,590 for failing to use children’s personal data lawfully. ​​The ICO concluded that MediaLab breached the UK GDPR by failing to implement any measures to check the age of users, processing the personal data of children under 13 without parental consent or any other lawful basis when offering online services, and failing to carry out a data protection impact assessment to identify and reduce privacy risks to children. The fine was based on the ​number of U13s impacted, degree of potential harm, duration of failing, global revenue, and acceptance of the provisional findings and commitment to address the allegations. 

Brazil’s Digital Statute of Child and Adolescents Enters into Effect March 17th

On March 17th, six months after its September 17th publication, Brazil’s Digital Statute of Child and Adolescents (Digital ECA) (Law 15.211) enters into effect. The law can be described as combination of COPPA and various AADC laws in and applies to internet applications, app stores, operating systems, games, and online services likely to be accessed by children and adolescents. Its requirements include setting privacy settings to a high level of protection by default, prohibiting techniques for targeted advertising at minors and techniques for profiling minors, prohibiting dark patterns and loot boxes, establishing parental tools, requiring content filtering and harmful or illegal content removal procedures, and completing risk assessments for children.

Spain DPA Fines Age Verification Provider Yoti Ltd €950,000 For Biometric Data and Consent Failures 

On March 10th, the Spanish Data Protection Agency (AEPD) fined Yoti for a total of €950,000 across three violations of the GDPR. For its digital ID app, Yoti requires users to confirm their age and consent to a facial scan to create an age token, arguing that the facial scan was for authenticating users. First, the AEPD disagreed and found that the scan is biometric data for the purpose of uniquely confirming individuals with 1:1 matching operations. Since this processing of special information was not justified under Article 9.2, the AEPD issued a €500,000 fine. Second, Yoti provided a prefilled checkbox for the processing of biometric data for internal research purposes instead of obtaining opt-in consent under Article 7, resulting in a €200,000 fine. Finally, Yoti retained personal data for longer than necessary for the purpose of its collection, specifically retaining live-video recordings of individuals after verifying that the recorded persons were real and retaining fraudulent official identification documents for training. The AEPD held that such retention violated Article 5.1(e), resulting in a €250,000 fine. Yoti has indicated that it intends to appeal the decision to the Spanish High Court. 

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on data protection including AI, privacy, and data security. Hintze attorneys and data consultants support technology, advertising, media, fintech, health, biotech, ecommerce, and mobile industries.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

CalPrivacy Issues $1.1M Order Alleging Opt-Out Failures by PlayOn Sports

By Cameron Cantrell 

On March 3, 2026, CalPrivacy announced an enforcement decision against 2080 Media Inc., doing business as PlayOn Sports, a sports ticketing platform widely used by schools. The stipulated order settles CalPrivacy’s three claims against PlayOn, each alleging PlayOn’s tracking technology practices violated CCPA’s opt-out requirements.  

PlayOn’s Alleged Violations: Opt-Out Methods, OOPS Recognition, and Notice of Opt-Out Rights 

CalPrivacy first alleged that PlayOn sold personal information and disclosed personal information for cross-context behavioral advertising (known as “sharing” under CCPA) via targeted advertising cookies deployed on its digital properties, while “fail[ing] to offer an effective [opt-out method]” for such practices as required by CCPA. According to CalPrivacy, PlayOn only allowed consumers to opt-out of sale/sharing by (1) contacting a toll-free phone number and email address, which were allegedly not implemented to “sufficiently address” sale/sharing performed tracking technologies, and (2) directing consumers to opt-out directly with third parties (e.g., the Network Advertising Initiative). CalPrivacy stated these methods are “insufficient means” for a consumer to opt-out of sale/sharing that takes place via tracking technologies.  

The decision noted it was not relevant that “PlayOn ran only one targeted advertising campaign on its ticketing platform during the relevant time period,” as the single campaign alone constituted CCPA “sharing.” 

CalPrivacy next alleged that, in connection with the same alleged sale/sharing via tracking technologies, “PlayOn failed to configure its [d]igital [p]roperties to recognize and honor... Opt-out Preference Signals [‘OOPS’]”, causing PlayOn to not honor consumer opt-out requests made via OOPS, in violation of CCPA. 

Finally, CalPrivacy alleged that PlayOn’s required notices regarding opt-out rights, including its privacy policy and “Your Privacy Choices” page, failed to inform consumers of their opt-out rights generally and with respect to OOPS, violating CCPA in its own right and in each sale/sharing that took place without the required notices. 

Settlement Terms: Cookie Inventory, Audience-Appropriate Notices, and More 

PlayOn must pay a $1.1M penalty and comply with injunctive terms under the decision. In addition to standard terms requiring PlayOn to resolve the alleged violations and comply with CCPA, PlayOn must also (1) scan its digital properties “at least quarterly, to maintain a full and current inventory” of tracking technologies, (2) update its privacy notices as required and with consideration to ease-of-use based on age of the intended audience (e.g., “disclosures made on services selling tickets to high school events must be easy to read and understandable to attendees of those events”), and (3) post the metrics related to consumer rights described in 11 CCR § 7102, though it is not clear whether PlayOn otherwise would meet the threshold for such reporting.  

The injunctive terms also discuss the application of CCPA’s newly-effective risk assessment requirements, which are triggered by PlayOn selling personal information and disclosing it for cross-context behavioral advertising. For example, as part of weighing negative impacts these practices have on consumer privacy, CalPrivacy states PlayOn must consider “whether users are required to consent to [selling/sharing] … in order to participate in certain events.” 

Key Takeaways 

Businesses that utilize tracking technologies and are subject to CCPA should consider taking the following steps: 

Update cookie inventories. Though not explicitly stated, the settlement terms imply that part of PlayOn’s alleged failure to honor opt-out requests was based on PlayOn not having accurate and up-to-date information about the tracking technologies being implemented on its digital properties. Awareness of which cookies are being used, and whether there are sales or “sharing” involved in each cookie, will help ensure (1) opt-out requests are implemented with respect to all tracking technologies, and (2) opt-out notices accurately describe cookie practices. 

Review cookie opt-out methods. CalPrivacy emphasized the disconnect between how sale/sharing was allegedly taking place (tracking technologies) and how consumers could opt-out of that sale/sharing (phone or email). If you haven’t implemented a consumer-facing consent management tool aligned to your methods of tracking, revisit whether it’s an option for your business, and ensure that your opt-out intake methods will stop all sale/sharing via tracking technologies. 

Check OOPS configurations. PlayOn allegedly failed to configure its websites to recognize OOPS signals it received from consumer browsers. If your business is subject to CCPA, work with your technical team to make sure that your digital properties are recognizing and honoring opt-out requests submitted via OOPS. 

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on data protection including AI, privacy, and data security. Hintze attorneys and data consultants support technology, advertising, media, fintech, health, biotech, ecommerce, and mobile industries.

Cameron Cantrell is an Associate at Hintze Law PLLC recognized by Best Lawyers. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

Hintze Law Global AI Legal Updates - February 2026

Hintze Law’s monthly Global AI Update by our AI + ML Group provides a curated overview of key AI‑related legal and regulatory updates from the past month. We spotlight new developments and emerging trends to help organizations that are developing, deploying, or relying on AI technologies stay ahead of what’s next. 

If you’d like to receive alerts for our blog posts, visit our blog to sign up.

US Updates

Proposed Class Action Alleges AI Discrimination in Hiring Practices

On January 20, 2026, a proposed class action was filed in California state court against Eightfold AI, a company offering AI-powered candidate recruitment and hiring tools. The lawsuit alleges that Eightfold is a “credit reporting agency” under the federal Fair Credit Reporting Act and California’s state-law counterpart, and that it discloses “consumer reports” to be used for “employment purposes” without providing the required notices and consumer rights in violation of those same laws. The lawsuit further alleges that such a violation is an unlawful and unfair business practice under California’s consumer protection law. Organizations that offer or use AI-powered recruiting and hiring tools should consider reviewing their own practices against the allegations raised in this complaint, as well as FCRA and similar state laws to identify any additional obligations or risks that have not yet been addressed by the organization.

IAB Publishes AI Transparency and Disclosure Framework

On January 15, 2026, the IAB published its first “AI Transparency and Disclosure Framework” for members. This Framework is designed to standardize best practices for AI transparency in the advertising industry. It addresses consumer-facing disclosures, machine-readable metadata, and format-specific guidance across images, video, audio, text, and synthetic influencers. Notably, the Framework does not require consumer-facing disclosures of AI uses wholesale; such disclosures are only required where nondisclosure would risk misleading consumers about someone’s identity or character. 

CA AG Announces Surveillance Pricing Investigative Sweep

On January 27, 2026, California’s Attorney General announced an investigative sweep focused on surveillance pricing. The California DOJ will be sending letters to businesses with significant online presence in retail, grocery and hotel sectors. These letters will request information regarding how businesses use consumers’ shopping and internet browsing history, location, demographics, inferential, or other data to set the prices of goods or services. 

NY AG Demands Information from Instacart about Algorithmic Pricing

On January 8, 2026, the New York Attorney General's Office announced that it had sent a letter to Instacart asking for more information about the company's use of algorithmic pricing. This signals that the state is prepared to enforce the recently enacted New York Algorithmic Pricing Disclosure Act. If in scope for this law, and your company uses algorithms to help set pricing, note that, amongst other obligations, the law requires a disclosure near any algorithm-set price stating: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” This may create additional risk, and further scrutiny of the data used as it may invite scrutiny from additional regulators. Read our post on the New York Algorithmic Disclosure Act by Felicity Slater.

Kentucky AG Sues Character.AI Under its Newly Effective Privacy Law

On January 8, 2026, eight days after the Kentucky Consumer Data Protection Act (KCDPA) went into effect, the Kentucky AG announced its first lawsuit under the new law. The lawsuit was filed against Character Technologies, Inc., owner of Character.AI, a platform offering companion chatbots. The complaint asserts multiple claims under both the Kentucky Consumer Data Protection Act (KCDPA) and Kentucky’s general consumer protection law, including allegations that Character.AI collected and used personal data of children under the age of 13 without obtaining verifiable parental consent. The allegations also suggest that sole reliance on users’ self-declared age was insufficient for age verification. This lawsuit highlights the continued regulatory focus on protecting children and minors, as well as the growing scrutiny of companion chatbot platforms. Organizations that offer products or services that may be used by children or minors should make sure to: (i) calibrate age assurance measures based on the types of risks children and minors can face; and (ii) implement and test features and controls to protect children and minors from identified risks.

Global Updates

Ontario Privacy Regulator Publishes Responsible AI Principles

On January 21, 2026, Ontario’s Information and Privacy Commissioner (“OIPC”) and Human Rights Commission (“OHRC”) released joint Principles for the Responsible Use of Artificial Intelligence. The Principles borrow their definition of in-scope “AI systems” from Ontario’s Enhancing Digital Security and Trust Act of 2024, and explicitly apply to automated decision-making systems, systems designed to undertake activities typically performed using human intelligence and skills, generative AI systems, foundational large language models and their applications, traditional AI technologies, and any other emerging innovative uses of AI technologies. Collectively, the Principles require that governed AI Systems are used in a way that is (1) valid and reliable, (2) safe, (3) privacy-protective, (4) human rights affirming, (5) transparent, and (6) accountable. While organizations are not required to comply with these Principles, OIPC and OHRC indicate that doing so will help ensure compliance with Ontario’s human rights and privacy laws.

British Columbia Privacy Regulator Publishes Guidance on AI Scribes in Healthcare

On January 28, 2026, British Columbia’s Information and Privacy Commissioner published guidance addressing key considerations for using AI scribes in the health sector. The guidance details how British Columbia’s Personal Information Protection Act (BCPIPA) applies to “tools that use generative AI to listen to, transcribe, and summarize real-time conversations between patients and healthcare providers.” Healthcare organizations considering adopting AI scribes should review this guidance to ensure they are accounting for all legal requirements under BCPIPA (if applicable). More generally, healthcare organizations can also review the guidance to benchmark approaches and understand key issues with AI scribe use related to output accuracy, vendor agreements, patient consents, and cybersecurity.

Taiwan’s Basic Law on AI Takes Effect

On January 14, 2026, Taiwan’s Basic Law on Artificial Intelligence took effect after being passed only a few weeks prior, on December 24, 2025. The law establishes a fundamental AI framework, sets out core principles for Taiwan’s own AI usage (i.e. in the public sector), and provides high-level policy objectives. Taiwan’s National Science and Technology Council will serve as the law’s central competent authority, and the country’s Ministry of Digital Affairs is tasked with creating a risk-based classification framework to implement under the law, to be based on international standards. While the law does not include any obligations for organizations in the private sector at this time, detailed sectoral regulations are expected to be forthcoming.

South Korea’s Basic Act on AI Takes Effect

On January 22, 2026, South Korea’s Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness (the “Act”) took effect, available in English here. The Act applies to AI across its lifecycle, including AI development and implementation. There are special obligations for generative, high-impact, and high-performance (high-computational power) AI, including unique notice, safety, and risk assessment requirements. Certain in-scope entities may also need to designate a South Korean agent. The Ministry of Science and ICT has provided some information about its interpretation of the Act, though more details are expected to be forthcoming. The Act applies broadly to AI that has “an impact on the domestic [South Korean] market or users.” Entities that believe they may meet this threshold should review the Act closely to determine whether they are in-scope for the Act’s requirements.

Spain’s DPA Publishes Guidance on AI Voice Transcription Issues

On January 14, 2026, Spain’s DPA published a blog covering the data protection implications of AI voice transcription services under the GDPR, EU AI Act, and similar laws. The guidance highlights several considerations for controllers implementing these technologies, applying each GDPR data protection principle to AI voice transcription. For example, controllers should be aware of the distinct but related processing activities in many of these services, such as the recording of audio, the creation of the transcription itself, and the act of validating or fine-tuning underlying speech-to-text models. Data subjects should be adequately notified of all involved processing, including “whether third parties will listen to their conversation (for example, in retraining).” Organizations processing personal data about residents of Spain should review this and other recent AI-related guidance from Spain’s DPA, including a look into how the DPA handles generative AI use internally.

Singapore Publishes Model AI Governance Framework for Agentic AI

On January 22, 2026, Singapore’s telecom regulator announced its “Model AI Governance Framework for Agentic AI.” The Framework provides an overview of agentic AI as a technology (primarily focused on large language model-based agents), standardized terminology to discuss agentic AI, and sources and types of risk in using agentic AI. It also outlines best practices for organizations considering using agentic AI, including: (1) assessing and bounding the involved risks upfront, (2) making humans meaningfully accountable in agentic workflows, (3) implementing technical and non-technical controls and processes across the agent life cycle, and (4) informing end-users of their responsibilities vs. the agent’s. The Framework is not strictly keyed to Singapore law and may serve as a useful reference point for any organization as it develops or refines its internal governance framework for agentic AI.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Alex Schlight is a Partner at Hintze Law PLLC recognized by Best Lawyers & Super Lawyers. Alex is co-chair of our AI + ML Group and counsels US and international clients on data privacy & AI compliance and risk management strategies.

Taylor Widawski is a Partner at Hintze Law PLLC recognized by Best Lawyers & Super Lawyers. Taylor is co-chair of our AI + ML Group and advises clients on privacy and security matters and has experience providing strategic advice on AI & privacy programs as well as AI & privacy product counseling across a variety of industries and topics.

Cameron Cantrell is an Associate at Hintze Law PLLC recognized by Best Lawyers. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

Hintze & Partners Recognized by Chambers in 2026 Global Rankings

Hintze & Partners Recognized by Chambers in 2026 Global Rankings

Hintze Law and its lawyers have once again been recognized in Chambers & Partners for expertise in Privacy and Data Security in the 2026 Chambers Global Guide. These recognitions include Hintze Law’s fifth year being ranked as an Elite Law Firm for Privacy and Data Security as well as the firm’s third year receiving recognition for Privacy and Data Security: Healthcare.

Read More

Hintze Law Global Privacy Updates

blue and black digitized globe

The Hintze team continuously tracks privacy and security updates around the world to bring you a regular update of the latest developments. Below is a snapshot of updates from late December 2025 to February 16, 2026.



US Privacy Updates

Data Broker Regulation

Implementing Regulations of the California DELETE Act take Effect

On January 1, 2026, the regulations for California’s DELETE Act officially took effect. These regulations significantly broaden the scope of the state’s data broker laws by narrowing the definition of what constitutes a “direct relationship” with a consumer. By excluding certain businesses from this definition, the law now captures a much wider array of entities as data brokers. Consequently, these newly classified companies were required to register with the California Privacy Protection Agency (CPPA) as a data broker by the January 31 deadline.

CalPrivacy Data Broker Enforcement Actions

On January 8, 2026, the California Privacy Protection Agency (CPPA) fined two data brokers for failing to register as required by the Delete Act.  Datamasters (Rickenbacher Data LLC) was fined $45,000 for failing to register as a data broker after alleged inadequate screening out of California residents (despite Datamasters asserting that it did so). S&P Global was fined $62,000 for failing to register on time due to an administrative oversight. You can read Jennifer Ruehr’s Linked In post on these actions here.

FTC PADFAA

The FTC recently sent letters reminding a number of data brokers of their obligations under the Protecting Americans’ Data From Foreign Adversaries Act.  PADFAA prohibits data brokers from selling, releasing, disclosing, or allowing access to personally identifiable sensitive data (which is very broadly defined—inclusive of things like web browsing data) about Americans to any foreign adversary, including North Korea, China, Russia, and Iran, or any entity “controlled by” those countries (with control being determined based on factors like where the company is incorporated, based, its ownership structure, etc.).

 

Children’s Privacy

Disney Pays 10M for COPPA Settlement

On December 31st, 2025, the FTC announced that a federal judge approved a $10 million dollar settlement with Disney for alleged violations of the Children’s Online Privacy Protection Rule (COPPA Rule).  The settlement centers on allegations that Disney allowed personal data to be collected from children under 13 who viewed kid-directed videos on YouTube without notifying parents or obtaining their consent as required by COPPA. For further analysis and key takeaways, see our blogpost.

South Carolina Enacts AADC

On February 5, 2026, South Carolina enacted its Age-Appropriate Design Code Act which took effect immediately. The law applies to companies that do business in South Carolina and provide websites, apps, and other online services that are "reasonably likely to be accessed" by minors (people younger than 18). While there is no age assurance or verification requirement, the “likely to be accessed” standard includes online services directed to children as defined in COPPA, and those with users who are known to be minors (including both actual knowledge, and knowledge based on inferences that users are minors). Netchoice is challenging the law.

Oregon OCPA Amendments Add Child and Geolocation Restrictions

On January 1st, 2026, amendments to the Oregon Consumer Privacy Act (HB2008) entered into effect. The amendments increase the age for the sale/advertising/profiling prohibition from under 13 to under 16 and add a prohibition on the sale of precise geolocation data.

Washington State Attorney General Introduces New Social Media Legislation

On January 12, 2026, the Washington attorney general introduced legislation (HB 1834 and SB 5708) that would prohibit addictive feeds for minors and place time limitations on push notifications. While this bill is still in its early stages, this one is worth keeping an eye on as AG‑requested legislation in Washington often gets an extra push (for example, the MHMDA was an AG-requested bill). If passed, Washington would join the growing list of states with social media laws focused on kids and teens.

TX App Store Age Verification Law Blocked

On December 23, 2025, a federal court blocked the Texas App Store Accountability act from being enforced. This came just days before the law was set to take effect on January 1, 2026. The court held that the law is content-based and failed to satisfy strict scrutiny. While it is a preliminary injunction at this stage, the judge indicated that the law was unlikely to withstand pending constitutional challenge. With the Attorney General currently appealing the ruling, the act’s future remains legally uncertain.

FTC Workshop on Age Assurance

On January 28th, the FTC held a workshop discussing age verification technologies. Commissioners and staff expressed strong support for expanding the use of age‑verification technologies. They acknowledged that the shifting legal landscape and the wide range of verification methods create complexity for companies and noted that different levels of assurance may be appropriate depending on risk. The FTC also highlighted that COPPA could pose a barrier, since many verification methods require processing a child’s age before obtaining parental consent. Chairman Ferguson indicated that new guidance is coming, saying the workshop will inform future policy statements and potential COPPA rule amendments.

State Comprehensive Privacy Laws

California AG Reaches $2.75M CCPA Settlement with Disney for Do Not Sell/Share Issues. The California AG announced a CCPA settlement with Disney relating to allegations that it violated “sale” and “sharing” (for cross-context behavioral advertising) opt-out rights in violation of the CCPA and California’s unfair competition law, arguing Disney’s streaming service opt-out methods were ineffective, deceptive, and incorporated dark patterns, see our blog post.

Florida Targets Companies with Ties to China The Florida AG created a unit to focus on companies with ties to China.  Its focus will include data privacy as well as other topics.  Its first action was to issue a sweeping subpoena to Shein requesting an expansive list of documents and information about Shein business practices, including on a number of topics related to its data privacy and data security practices.  This illustrates the continued focus from regulators on companies with ties to China.  Chinese companies and other companies with ties to China should consider the increasing focus state and federal regulators are dedicating to them when making risk decisions about their efforts to comply with state and federal privacy laws.

Indiana Comprehensive Law

Indiana’s comprehensive privacy law, the Indiana Consumer Data Protection Act (ICDPA) took effect on January 1, 2026.. Indiana’s law largely mirrors the Virginia Data Protection Act.

Minnesota Consumer Data Privacy Act Cure Period Ends

The cure period for the Minnesota Consumer Data Privacy Act ended on January 31, 2025. This means the Attorney General is no longer required to provide 30 days notice before bringing an enforcement action.

DOJ Rule

Lenovo Lawsuit Alleges DOJ Rule Violations. 

On February 5, 2026, a class action lawsuit was filed against Lenovo alleging federal ECPA claims and California statutory and common law claims for Lenovo’s alleged practice of using tracking technologies on its website and transmitting customer data to its China-based parent company. The complaint in the case (Christy v. Lenovo (United States) Inc., Case No 3:26-cv-01133 (N.D. Cal)) makes a number of allegations about how the practices are a violation of the DOJ Rule on Access to U.S. Sensitive Personal Data and Government-Related Data By Countries of Concern or Covered Persons. While there is no private right of action under the DOJ Rule, there have now been a few lawsuits alleging that violations of the DOJ Rule support federal and state law claims. You can see the complaint in Sam Castic’s Linked In post here.

FTC / Connected Cars

FTC Finalizes Connected Cars / Location Data Settlement with GM and OnStar

On January 14, 2026, the Federal Trade Commission (FTC) finalized a settlement order with General Motors (GM) and OnStar regarding the collection and disclosure of driver behavioral and location data. The complaint alleged violations of the Federal Trade Commission Act (FTC Act), including the collection, use, and disclosure of such data without notice to consumers and without consumers’ informed consent. For further analysis and key takeaways, see our blogpost.

SCOTUS / VPPA

SCOTUS to Consider Definition of "Consumer" Under VPPA

On January 26, 2026, the Supreme Court granted a petition (Salazar v. Paramount Global) that could decide whether the VPPA applies more broadly to modern digital services and not just traditional video subscriptions. The Court is taking up a circuit split over whether someone becomes a VPPA “consumer” simply by subscribing to any product or service from a company that provides video, even if the subscription itself is for something dissimilar (such as a newsletter). If the Court adopts a broader interpretation, the VPPA could apply more widely in modern digital contexts, including to sites that blend video with newsletters, memberships, or accounts.

 

International Updates

Children’s Privacy

India proposes AI bill and DPDPA Amendments for Child Protections

On January 15th, the EDPB adopted a cooperative procedure establishing an informal framework among EEA supervisory authorities to authorize ad hoc contractual clauses and the adoption of SCCs to facilitate data transfers across EU member states.

Netherlands launches DSA investigation into Roblox over child safety

The Dutch competition and consumer authority, ACM (Netherlands Authority for Consumers and Markets), launched a formal investigation into the Roblox on January 30, 2026, specifically focusing on compliance with the EU's Digital Services Act (DSA) regarding the protection of minors. It is examining whether the gaming platform, which has tens of millions of daily users, 40% under age 13, complies with the EU Digital Services Act's requirements to protect minors from violent/sexual content, inappropriate contact, and dark patterns that manipulate children into purchases.

UK ICO fines Imgur for processing children’s data in violation of UK GDPR

On February 5, 2026, the ICO fined MediaLab (owner of Imgur) £247,590 for failing to use children’s personal information lawfully. The ICO concluded that MediaLab breached the UK GDPR by failing to implement any measures to check the age of users, processing the personal information of children under 13 without parental consent or any other lawful basis when offering online services, and   failing to carry out a data protection impact assessment to identify and reduce privacy risks to children.  The ICO’s press release further emphasized that online platforms must tailor age checks to their specific risk levels or face similar enforcement.

Regulatory Enforcement and Audits

The Office of the Australian Information Commissioner (OAIC) Privacy Compliance Sweep.

As of January 1, 2026, the Office of the Australian Information Commissioner (OAIC) has commenced its first privacy compliance sweep. This initiative reviews the privacy policies of businesses that collect personal information in person, specifically targeting the rental, pharmaceutical, hospitality, automotive, and second-hand dealer sectors. The audit will evaluate compliance with APP 1.4 requirements regarding mandatory policy content. In tandem with the sweep, the OAIC has also updated its official APP 1 guidance.

French CNIL fined Mobius Solutions 1 Million Euros for failing to comply with its GDPR obligations[CD5] 

On December 11, 2025, France’s CNIL fined Mobius Solutions €1 million for GDPR violations while acting as a non-EU processor for a music-streaming platform. The authority asserted jurisdiction under Article 3(2) because Mobius monitored EU users’ behavior to build audience segments. Specifically, Mobius failed to delete data post-contract, used controller data for its own purposes without authorization, and neglected to maintain a Record of Processing Activities (ROPA).

Austrian DPA orders Microsoft to stop tracking students

On January 21, 2026, the Austrian data protection authority DSB found that Microsoft had been allegedly tracking students by installing advertising and analytics cookies through Microsoft 365 Education without consent or a valid legal basis. The Austrian DPA ordered Microsoft to stop using all cookies that are not technically necessary in the product within four weeks and to cease processing data collected from these cookies. 

Italian DPA to probe Amazon workplace monitoring

On February 9th, the Italian Data Protection Authority (the Garante) in partnership with Italy's National Labour Inspectorate announced that they had launched a joint supervisory inquiry into Amazon's collection and processing of worker personal data and use of video surveillance systems in its main Italian logistics hubs. Italy's Worker's Statute requires covered entities to take specific steps in conjunction with their use of video surveillance systems. The press release indicates that these regulators believe Amazon may not have taken these steps. The Garante's states that the inquiry aims, "to ensure effective institutional supervision...where the impact of monitoring systems and data processing processes is particularly significant, in order to ensure adequate protection of workers' rights."

European Commission preliminarily finds TikTok's design in breach of the DSA

On February 6th, the EC announced that it preliminarily found TikTok in breach of the Digital Services Act for design features including infinite scroll, autoplay, push notifications, and its highly personalized recommender system. The investigation, launched on February 19, 2024, indicated allegations that TikTok did not adequately assess how the design features could harm its users, including minors and vulnerable adults or implement adequate risk mitigation measures, citing the low friction and easy dismissal of existing screentime management and parental control tools.

International Data Transfers & Cooperation

EDPB adopts cooperative procedure for ad hoc and standard contractual clauses

On January 15th, the EDPB adopted a cooperative procedure establishing an informal framework among EEA supervisory authorities to authorize ad hoc contractual clauses and the adoption of SCCs to facilitate data transfers across EU member states.

Brazil EU adequacy

On January 28, 2026, the EU Commission and Brazil adopted mutual adequacy decisions. The decisions cover both the private and public sector and will greatly facilitate the personal data flow between the EU and Brazil.

 Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on data protection. Hintze attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of AI, privacy, and data security


Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

California AG Settles Disney’s Alleged CCPA Opt-Out Violations for $2.75M

California AG Settles Disney’s Alleged CCPA Opt-Out Violations for $2.75M

On February 11, 2026, California’s Office of the Attorney General (“OAG”) settled with the Walt Disney Company to resolve four alleged CCPA violations related to the media giant’s streaming service business. The OAG’s complaint against Disney alleged violations of the CCPA and California’s unfair competition law, arguing Disney’s streaming service opt-out methods were ineffective, deceptive, and incorporated dark patterns.

Read More
Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

FTC Finalizes Order Against GM and OnStar Over Driver Data

By Elizabeth Crooks and Susan Hintze

Two roads crossing and overlay of round icons with images depicting vehicles and location information

On January 14, 2026, the Federal Trade Commission (FTC) finalized a settlement order with General Motors (GM) and OnStar regarding the collection and disclosure of driver behavioral and location data. The complaint alleged violations of the Federal Trade Commission Act (FTC Act), including the collection, use, and disclosure of such data without notice to consumers and without consumers’ informed consent.

The Complaint

In its complaint claiming deceptiveness and unfairness under the FTC Act, the FTC made the following allegations.

GM and OnStar gave consumers false assurance that the driving data collected would only be used for consumers for their own safety and to assess their own driving habits. Instead, GM and OnStar sold this data to third parties, including consumer reporting agencies, auto insurance companies, and others for unrelated purposes and without appropriate notice or consent.

Consumers were not informed that constantly collected precise geolocation data; detailed driving events such as seat-belt usage, hard braking, and speeds over 80 mph; and data about which radio stations consumers listened to would be shared with these entities. These entities used the data for unexpected purposes including denying or canceling insurance, increasing insurance premiums, and for advertising analytics. Many consumers were, therefore, unaware of what exactly they had opted into when giving their consent. Based on the incomplete information GM and OnStar had provided to consumers, those consumers had no reason to expect that their consent to collection and use of their driving data might have real-world, negative financial consequences.

In addition to inappropriate notice and consent about sharing, consent for different features were bundled together inappropriately. Consent for safety and maintenance alerts were bundled with a consent to enroll in OnStar Smart Driver, a service unrelated to vehicle maintenance. There was only one ‘accept’ or ‘decline’ choice for such features and the choice was described in such a way that consumers did not understand what maintenance and safety features and alerts they would lose by not consenting to the OnStar service.

Further, GM did not provide a setting that allowed consumers the ability to mask location data on all vehicles. Where the setting was available, it was defaulted to “off,” and GM did not widely communicate the availability of the setting to consumers. Moreover, because of the lack of adequate disclosures at consent about the constant collection and sharing of precise location data, consumers did not appreciate the importance of the setting.

The complaint alleged that as a result of GM and OnStar’s business practices, consumers experienced loss of auto insurance, unexpected increases in insurance premiums, and loss of privacy about sensitive data, including locations visited and day-to-day movements.

The Order

In its order, the FTC defines location data more broadly than in past orders. For the first time, the definition of ‘location data’ includes data that reveals the precise location of not only a mobile device or consumer but also of their vehicle.

In its definition of Covered Driver Data, the FTC also describes a car’s vehicle identification number (VIN), or an alternative identifier that can be linked to VIN, as “reasonably linkable” to a consumer. It further describes data linked to a VIN as not included in its definition of “Deidentified.” Both definitions suggest a willingness to treat VIN as personal information.

The FTC’s order requires GM and OnStar to, in sum:

·         Not disclose driver data to a Consumer Reporting Agency.

·         Obtain affirmative express consent prior to collecting, using, or disclosing driver data to a third party; obtain separate consent for each separate, unrelated service or feature; and not place limits on withholding or withdrawing consent, such as by degrading the quality or functioning of a product or service as a penalty.

·         Give consumers a means to disable collection of 1) location data and 2) all vehicle data if they decline OnStar.

·         Honor consumer requests to access and delete their driver data.

·         Minimize data collection to what is reasonably necessary to fulfill the specific purpose for which it was collected.

·         Document, adhere to, and publish an up-to-date data retention schedule.

·         Delete or destroy all prior-retained driver data within 180 days of the order and instruct third parties to destroy data.

·         Not misrepresent collection, use, and disclosure of data or purposes for the same.

The order has a typical 20 year termination date. However, the FTC departed slightly from its standard duration, limiting the requirement not to disclose driver data to a Consumer Reporting Agency to only five years.

Key Takeaways

We highlight several key takeaways below, particularly for any organization collecting telemetry or location data:

Choice Mechanisms. Ensure that consents for unrelated services and features are not bundled together. And make sure that effects of consents are described clearly and thoroughly and not in a way that might cause confusion.

Treatment of ‘Location Data.’ Present consumers with a way to opt-in to and disable the collection and use of precise geolocation data separate from other choices and clearly inform consumers how to do so.
Ensure that your definitions and application of rules regarding precise geo-location data extends not only to the consumer but also those things a consumer has with them or travels in.

Notice. Ensure that consent disclosures and privacy statements are presented accurately and with enough detail that consumers can understand the impact of choices. Train those responsible for handling agreements to understand privacy commitments made to consumers and to ensure that agreements do not violate those commitments.

VIN and Other Unique IDs as Identifiable Data. If you collect VIN associated with data about an individual, ensure that you protect it as you would other personal data. Consider treating other unique identifiers that, like VIN, could be linked to individuals as personal data.

Third Party Accountability. Review data sharing agreements with third parties to ensure that limitations are clearly outlined and that continued access to data is conditioned on agreeing to, and having a process in place to, delete data upon your instruction. Verify that contractual commitments with third parties about consumer data do not conflict with promises made to consumers and that adequate consents are obtained before agreeing to share sensitive data with third parties.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on AI, privacy, and data security. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Elizabeth Crooks is Senior Privacy Analyst at Hintze. Elizabeth has a Masters of Science in Information Management and guides global companies on privacy, cybersecurity, and data protection matters. 

Susan Hintze is Co-Managing Partner at Hintze Law PLLC, on the IAPP’s Board of Directors, and a Westin Emeritus Fellow with the IAPP.

Hintze Law Global AI Legal Updates

Hintze Law’s monthly Global AI Update provides a curated overview of key AI‑related legal and regulatory updates from the past month. We spotlight new developments and emerging trends to help organizations that are developing, deploying, or relying on AI technologies stay ahead of what’s next. 

If you’d like to receive alerts for our blog posts, visit our blog to sign up.

US Updates

xAI Sues to Block Enforcement of California’s AB 2013

On December 29, 2025, xAI (parent company of X, formerly Twitter) filed suit against California’s Attorney General to enjoin enforcement of California’s AB 2013, a generative artificial intelligence (“Gen AI”) transparency law that requires “developers” of GenAI systems or services to publicly disclose information about the training data behind their Gen AI products. The lawsuit came just two days before the law took effect on January 1, 2026. xAI’s complaint primarily alleges that the law’s training disclosure requirements are an unconstitutional taking under the Fifth Amendment, forcing xAI to disclose valuable trade secrets without fair compensation, and that it is compelled speech that violates the First Amendment. The law is still in effect and enforceable, but companies subject to AB 2013 should watch these developments closely.

FTC Reverses Consent Decree for AI Service

On December 22, 2025, the Federal Trade Commission reopened and set aside a 2024 consent decree against Rytr, LLC, which offered an AI-enabled writing assistance service for subscribers to use to generate product and service reviews. The FTC’s 2024 action suggested that the reviews generated by the service could contain errors, and if they were subsequently posted by the service subscribers, could mislead other consumers. In its new order, the FTC reasoned that the service did not in fact violate Section 5 of the FTC Act, that it burdened AI innovation, was not in the public interest, and thus merited setting aside. This latest action signals that the current FTC will not view AI products and services with skepticism merely because of how users may choose to use them. Notably, this reversal was issued pursuant to recommended policy actions in the White House’s July 2025 AI policy statement and that statement’s underlying January 2025 executive order.

 

NY Enacts AI Frontier Model Law (the RAISE Act)

On December 19, 2025, New York’s Governor signed the Responsible AI Safety and Education (RAISE) Act into law, effective January 1, 2027. Critically, the version signed by the Governor (S6953B) is not the final text. The Governor agreed to sign it on the condition that chapter amendments would be introduced in the next legislative session. Those amendments (A9449) were published on January 6, 2026 , and were written to more closely mirror California’s Frontier Model law (SB 53).

The RAISE Act applies to “Large Frontier Developers” that build “Frontier AI Models” and requires Large Frontier Developers to (i) implement and publicly share a Frontier AI Framework, which must detail things like how it adopts recognized standards, assesses and mitigates catastrophic risks, uses third-party evaluations, maintains cybersecurity, responds to safety incidents, and governs internal processes, (ii) review such disclosures annually, and (iii) report certain “critical safety” incidents within 72 hours.  

Companies should determine whether they are in scope for the law, and in addition to building a compliance plan, ensure they have a plan in place to report critical safety incidents, which may borrow from or be included in existing incident response plans.  

NIST Invites Comments on Draft Cybersecurity AI Framework

On December 16, 2025, the National Institute of Standards and Technology (“NIST”) published a draft internal report NIST-IR 8596, setting out a preliminary Cybersecurity Framework Profile for Artificial Intelligence, or the NIST “Cyber AI Profile.” The Cyber AI Profile is designed to assist organizations in thinking about how to strategically adopt AI while also addressing emerging cybersecurity risks, addressing three main focus areas: (1) securing AI systems, (2) conducting AI-enabled cyber defense, and (3) thwarting AI-enabled cyberattacks. The draft report is open for public comment until January 30, 2026.

Trump Administration Issues Executive Order to Further a Standard National Policy for AI

On December 11, 2025, President Trump issued an executive order (“EO”) titled “Ensuring a National Policy Framework for AI.” The EO sets out that it is “the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” To achieve that end, the EO requires the following: (1) the Attorney General must set up an AI litigation task force with the sole purpose of challenging state AI laws on grounds that they violate rules on interstate commerce, are preempted, or are otherwise unlawful, (2) the Secretary of Commerce must publish an evaluation of laws that conflict with the stated policy and issue a policy notice making states with onerous AI laws ineligible for funding under the Broadband Equity Access and Deployment program, (3) the FCC must begin process to consider establishing a federal AI reporting and disclosure standard, (4) the FTC must clarify how its rules against unfair and deceptive practices apply to AI models and when state laws requiring changes to truthful AI outputs are pre-empted by the FTC Act, and (5) the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must recommend establishing a Federal AI policy framework that pre-empts state AI laws that conflict with the stated policy. 

Notably, EO’s can only direct the executive branch (i.e. federal agencies) to help effectuate the president’s Article II constitutional power to “take care that the laws be faithfully executed.”  EOs cannot override laws, direct agencies to act unlawfully, or dictate how state and local government may act.  To this end, this EO does not create new law, and any federal AI law must still be passed by Congress.

State Audit Finds Limited Enforcement and Noncompliance with NYC Job Applicant AI Law

On December 5, 2025, the New York State Comptroller shared the results of its audit of enforcement under New York Local Law 144 of 2021 (“NY LL 144”), which governs the use of “automated employment decision tools” in New York City. The audit found the state’s Department of Consumer and Worker Protection (“DWCP”), responsible for the law’s enforcement since July 5, 2023, has failed to implement an effective program to enforce the law. The audit results provide several corrective recommendations for DWCP, such as improving its process for receiving NY LL 144-related consumer complaints and implementing mechanisms to proactively address non-compliance with NY LL 144 through research, tools-testing, and DWCP audits of public-facing materials.

Employers and employee agencies subject to NY LL 144 should review their operations, and related compliance documentation, ahead of a potential enforcement wave. 

Multi-Law Class Action Filed Against AI Transcription Company

On December 5, 2025, a consolidated class action was filed against Otter.ai, the maker of AI transcription tools, in California federal court. The amended complaint, which before consolidation was focused on California’s Invasion of Privacy Act (“CIPA”), brings together claims under federal wiretap and computer fraud laws, state law counterparts in California and Washington, and Illinois’s Biometric Information Privacy Act. The plaintiffs allege that Otter.ai’s transcription service violated these laws by intercepting, accessing, recording, and copying conversational data and participants’ voiceprints without participant consent. The complaint also argues the alleged acts, including Otter.ai’s use of communications to train underlying AI models, further violate common law tort (such as intrusion upon seclusion), and similar state laws (such as unlawful business acts and “theft” of conversational data). These theories aren’t necessarily novel, having previously been employed against pixels and other tracking technologies to varied success, but this appears to be the first high-profile case applying them to an AI service.

Organizations developing or deploying AI transcription tools or related AI tools should closely review their notice, consent, and data use practices to help mitigate the risk of threatened wiretap (and related) litigation.

Draft Regulations under Illinois’s New HR and Recruiting AI Law

In early December 2025, the Illinois Department of Human Rights informally circulated draft regulations to implement recent amendments to the state’s Human Rights Act addressing AI use in recruiting and employment contexts. The regulations build on the amendment’s requirement to provide notice to employees and prospective employees before using AI for employment decisions (such as hiring, promotion, employment opportunities, discipline, etc.). Amongst other requirements, the draft regulations give specific directions as to where, when, and how these notices must be provided, and what they must contain.

These draft regulations have not yet entered formal rulemaking, but requirements are not currently expected to change significantly. Companies covered by the law should review requirements now and consider updating applicable disclosures accordingly.

Washington State AI Task Force Provides AI Regulation Blueprint to Legislature

On December 1, 2025, Washington state’s AI Task Force published an interim report describing eight categories of recommendations for the state legislature to consider as the state moves to fill the regulatory gap left by federal inaction. The Task Force’s recommendations focus on transparency, accountability, and enabling innovation across both AI development and use. For example, they specifically recommend the state legislature enact laws that would require (1) certain disclosures concerning training data involved in AI development, (2) employers to give notice of AI use in the workplace (such as use for employee monitoring and termination decisions), (3) law enforcement to attest that AI-assisted reports have been reviewed by humans, (4) periodic impact assessments and independent audits for AI systems used to respond to healthcare prior authorization requests, and (5) high-risk AI systems be implemented within a governance framework that tracks NIST’s ethical AI principles. The Task Force’s final report to the legislature is due July 1, 2026.

Global Updates

EU AI Act Code of Practice

On December 17, 2025, the European Commission released the first draft of the “Code of Practice on marking and labeling of AI-generated content.” The Code of Practice outlines detailed steps for signatories relating to the obligations under Articles 50(2) (Providers must ensure that “outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated”) and 50(4) (Deployers of content that constitutes a “deep fake” “shall disclose that the content has been artificially generated or manipulated.”) Adherence to the Code of Practice is voluntary, and is a way to demonstrate compliance, but is not necessarily required for compliance. That said, organizations looking to address AI marking requirements (whether under the EU AI Act or otherwise) can look to this draft as a resource to understand more about possible solutions. The European Commission invites feedback on the draft, which is due on January 23, 2026.

Vietnam Enacts National AI Legislation

On December 10, 2025, Vietnam’s National Assembly passed a Law on Artificial Intelligence. The law will begin to take effect on March 1, 2026, and explicitly applies to foreign entities engaging in AI research, development, provision, deployment, or use within Vietnam. Obligations under the law vary across actors (developers, suppliers, implementers) as well as the AI system’s “risk level” under the law’s classification scheme (high, medium, low). For example, suppliers of high-risk AI systems must complete a “conformity assessment” before deployment, and implementers (who actually deploy such systems) are responsible for ensuring the system is operated and used for its intended purposes.

Organizations operating in Vietnam or with Vietnamese customers should review the law to determine the full scope of their obligations.

UK Cybersecurity Office Guidance Warns of AI Prompt Injection Risks

On December 8, 2025, the UK’s National Cyber Security Centre published guidance aimed at organizations who are vulnerable to AI prompt injections, namely, organizations offering LLM-based products. The guidance contains technical explanations about the distinct risks that AI prompt injection pose over SQL injection and why existing measures against SQL injection may not be sufficient. It also provides mitigation steps organizations with LLM-based products should take, including applying privilege limitations to LLMs, incorporating emerging techniques to reduce the risk of an LLM acting on instructions hidden in data, and monitoring usage data for suspicious activity.

Organizations with LLM-based products should review this guidance to ensure current risk documentation and applied mitigations are appropriately addressed.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Alex Schlight is a Partner at Hintze Law PLLC. Alex counsels US and international clients on data privacy & AI compliance and risk management strategies.

Taylor Widawski is a Partner at Hintze Law PLLC. Taylor advises clients on privacy and security matters and has experience providing strategic advice on AI & privacy programs as well as AI & privacy product counseling across a variety of industries and topics.

Cameron Cantrell is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

Wen Tseng Joins Hintze Law as Principal Privacy Consultant

Today, Hintze Law warmly welcomes Wen Tseng as our new Principal Privacy Consultant! For nearly two decades, Wen has been helping organizations develop and implement scalable and practical cybersecurity and privacy programs. He is a trusted advisor to organizations navigating the ever-evolving landscape of data protection and risk management, leveraging his expertise in assessing, building, and maturing GRC programs, transforming strategic vision into operational reality, and helping teams manage their compliance obligations under complex data protection and AI laws and regulations.

Before joining Hintze Law, Wen served as Director of Privacy at Microsoft, where he led the program operations team to ensure ongoing compliance with Data Subject Rights requests and supported Microsoft’s global marketing and sales activities with robust privacy reviews. Wen’s valuable ad tech and cybersecurity expertise helped navigate complex advertising technologies and privacy requirements while strengthening Microsoft’s privacy and security posture. Wen’s leadership extended to the Cloud Security Alliance as Interim Research Director, and earlier, Wen played a pivotal role at Washington Mutual Bank (now JPMorgan Chase), leading cybersecurity investigations and forensics, and also helped launch ShareBuilder, serving as its head of information security.

We’re thrilled to have Wen’s expertise and leadership on our team. Please join us in welcoming him to Hintze Law!

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

California’s Jam City Enforcement Action Highlights Importance of Opt-Out Mechanisms

On November 21st, 2025, the California Attorney General announced a $1.4 million dollar settlement with the mobile app gaming company, Jam City, Inc., the sixth such settlement by California regulators under the California Consumer Privacy Act (CCPA). The AG had sued Jam City, whose mobile gaming apps collect personal information such as device identifiers, IP addresses, and usage data, alleging that it had failed to offer appropriate methods to opt out of sale and sharing of personal data in violation of the CCPA.

The Complaint

In May 2024, an AG investigation found that 20 of Jam City’s 21 apps did not provide a link or setting for consumers to opt-out of the sale of their personal information or sharing of such data for behavioral advertising across Jam City’s apps and other apps and platforms.

The complaint thus alleges that Jam City did not provide CCPA compliant opt-out methods on its apps or its website. In addition to the lack of controls on the 20 apps, the 21st app provided a “Data Privacy” setting that allegedly did not reference the CCPA and was unclear about whether enabling the setting would effectuate an opt-out request. Additionally, the “Cookies and Interest Based Advertising” section of privacy policy on Jam City’s website “told consumers that they could email Jam City at ccpaoptout@jjamcity.com to stop targeted advertisements,” a method the AG claimed was allegedly insufficient under the CCPA.

The complaint further alleges that Jam City did not acquire opt-in consent to sell or share the personal information of consumers it knew to be less than 16 years old. Jam City allegedly age-gates several of its apps and provides “child-versions” which do not collect or share personal information with third parties. However, Jam City allegedly failed to properly age-gate six of its apps, only providing the child-versions to consumers who declared they were under 13. As a result, Jam City was improperly selling or sharing the data of consumers between 13 and 16 years old, including via cross-context behavioral advertising without obtaining opt-in consent.

The Settlement

The settlement orders Jam City to comply with the CCPA’s opt-out provisions, specifically requiring:

  • Implementing a consumer-friendly, easy to execute opt-out process with minimal steps and in the case of mobile apps or connected devices, such opt-out process being available in a setting or menu option that leads the consumer to a page, setting, or control that enables the consumer to opt-out the sale and sharing of the consumer’s personal information either immediately, or in the alternative, via a link to the notice of right to opt-out of sale/sharing in the privacy notice,;

  • Effectuating of a consumer opt-out l across all of Jam City’s mobile apps for any personal information associated with the consumer,;

  • Providing means by which the consumer can confirm the processing of their opt-out request; and

  • Avoiding language or design likely to confuse a reasonable consumer that choices related to the collection of personal information, other than the opt-out process, constitute a compliant opt-out method or must be selected to opt-out.

Additionally, the settlement also requires compliance with special rules for consumers under 16 years old:

  • Where Jam City implements an age-screening mechanism,

    • Designing the mechanism in a neutral manner that does not default to 16+ and does not suggest that certain features are unavailable to consumers under 16 years old;

    • Directing consumers who submit an age under 13 years old to a child-version of the app; and

    • Directing consumers who submit an age of at least 13 years old and less than 16 years old to a child-version of the app or obtain their affirmative authorization to sell or share their personal information before directing them to a non-child-version of the app.

  • Directing all third parties to whom Jam City sold or shared personal information collected prior to October 1, 2024, from consumers who submitted ages under 16 years old in any Jam City mobile apps to delete such personal information.

Takeaways

With its recent investigations and settlement actions, the California Privacy Protection Agency has shown its willingness to enforce the CCPA, especially its opt-out provisions. The Jam City settlement order to effectuate opt-outs wherever the business identifies the consumer is similar to the California’s AG recent settlement order against Sling TV, which was ordered to “provide an opt-out mechanism within the Sling TV app on various living-room devices, so consumers accessing Sling TV on various devices do not need to go to Sling TV’s website to opt-out.” This robust enforcement of implementation of opt-out measures comes from the CCPA regulation requiring businesses to comply with a customer’s previously given opt-out signal “where the consumer is known to the business."

Moreover, recent California legislation is a part of a national trend of increased concern for children’s online privacy and safety. Laws with additional requirements for processing minors’ data are being complemented with app store age-verification laws, such as California’s Digital Age Assurance Act, which provide developers knowledge of whether consumers are minors.

This enforcement action highlights the political momentum for minors’ online privacy and the CCPA’s increased enforcement activity. Consider the following actions to address the concerns raised in this enforcement action:

  • Review all platforms, both apps and websites where you collect personal information to confirm choice mechanisms for consumer rights are clear and conspicuous so that users can easily effectuate those rights and understand those requests are being processed.

  • Implement choice mechanisms to properly regulate processing in accordance with data protection law and the consumer’s age.

  • Effectuate opt-out requests so that the consumer is opted out of such processing across apps, devices, and services where the business has information connecting the identity of the consumer.

  • Ensure age-gating processes comply with regulatory guidance, including not defaulting to an age above the relevant age range or suggesting a particular age range is required to access certain features.

  • Be mindful of data practices and obligations with respect to minors’ data, especially as more states pass legislation protecting children and teens’ privacy, in particular, if you are an app publisher, be prepared to put in place processes to properly handle child and teen data as you may gain knowledge of age under coming age assurance laws.


Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Hansenard Piou is an Associate at Hintze Law PLLC with experience in global data protection issues, including kids’ global privacy laws, AADC, privacy impact assessments, GDPR, and privacy statements.  

New York’s Algorithmic Pricing Disclosure Act Takes Effect

New York’s Algorithmic Pricing Disclosure Act Takes Effect

By Felicity Slater,Sam Castic, Clara De Abreu E Souza

New York's Algorithmic Pricing Disclosure Act, signed into law by Governor Kathy Hochul on May 9th, 2025, officially took effect this week. The act regulates algorithmic pricing and requires covered entities to clearly and conspicuously disclose to consumers when such pricing methods are used.

Read More

Washington Marijuana Retailer Sued Under My Health My Data Act for Website Pixel Use

Washington Marijuana Retailer Sued Under My Health My Data Act for Website Pixel Use

by Sam Castic and Felicity Slater

A class action suit was recently filed against the companies that operate Uncle Ike's, a Seattle-area marijuana retailer. The suit filed in Washington federal court alleges common law tort claims, ECPA claims, and a claim under the My Health My Data Act (‘MHMDA’ or ‘the Act’). 

Read More

What is Government-Related Data Under the DOJ Rule?

What is Government-Related Data Under the DOJ Rule?

By Hansenard Piou and Sam Castic

This is the third in a series of blog postsabout the DOJ Rule regarding Access To U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons(the “DOJ Rule”). It provides an overview of the second type of data that the DOJ Rule focuses on: government-related data.

Read More

Hintze Law Recognized in 2026 Best Law Firms® Rankings

Hintze Law Recognized in 2026 Best Law Firms® Rankings

We are pleased to announce that Hintze Law has been recognized for excellence in the 2026 edition of Best Law Firms®, in both the national and Seattle area rankings for the firm’s work in Information Technology Law, Technology Law, and Advertising Law.

Read More

Federal District Court Dismisses VPPA Case, Ruling Apartments.com "Not a Videotape Business"

Federal District Court Dismisses VPPA Case, Ruling Apartments.com "Not a Videotape Business"

By Cameron Cantrell

On Monday, October 20, 2025, the Eastern District of Missouri dismissed a proposed class action based on the federal Video Privacy Protection Act ("VPPA") against CoStar, the company behind apartments.com. It isn't clear at this point whether the plaintiff will appeal.

Read More
Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

California Prohibits AI Misrepresentations about Health Care Licenses

California Prohibits AI Misrepresentations about Health Care Licenses

By Cameron Cantrell

On October 11, 2025, California’s Governor Newsom signed AB 489, a law designed to address health advice from artificial intelligence (“AI”). It will take effect on January 1, 2026.

Read More

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

By Leslie Veloz

On October 13th, 2025, Governor Gavin Newsom signed into law AB 853, which amends the California Artificial Intelligence Transparency Act (AI Transparency Act (SB 942)), a law placing obligations on makers of generative AI systems aimed at increasing transparency to allow individuals to more easily assess whether digital content is generated or modified using AI.

Read More

California Passes Law on AI Companion Chatbot Safety

California Passes Law on AI Companion Chatbot Safety

By Clara De Abreu E Souza

On Oct. 13, 2025, California Governor Gavin Newsom signed into law Senate Bill 243 – Companion Chatbots. SB 243, authored by Senator Steve Padilla, requires operators of companion chatbot platforms to notify users that the chatbot is AI, provide specific disclosures to minors, and restrict harmful content. The law also includes a private right of action.

Read More