Hintze Law Global AI Legal Updates

Hintze Law’s monthly Global AI Update provides a curated overview of key AI‑related legal and regulatory updates from the past month. We spotlight new developments and emerging trends to help organizations that are developing, deploying, or relying on AI technologies stay ahead of what’s next. 

If you’d like to receive alerts for our blog posts, visit our blog to sign up.

US Updates

xAI Sues to Block Enforcement of California’s AB 2013

On December 29, 2025, xAI (parent company of X, formerly Twitter) filed suit against California’s Attorney General to enjoin enforcement of California’s AB 2013, a generative artificial intelligence (“Gen AI”) transparency law that requires “developers” of GenAI systems or services to publicly disclose information about the training data behind their Gen AI products. The lawsuit came just two days before the law took effect on January 1, 2026. xAI’s complaint primarily alleges that the law’s training disclosure requirements are an unconstitutional taking under the Fifth Amendment, forcing xAI to disclose valuable trade secrets without fair compensation, and that it is compelled speech that violates the First Amendment. The law is still in effect and enforceable, but companies subject to AB 2013 should watch these developments closely.

FTC Reverses Consent Decree for AI Service

On December 22, 2025, the Federal Trade Commission reopened and set aside a 2024 consent decree against Rytr, LLC, which offered an AI-enabled writing assistance service for subscribers to use to generate product and service reviews. The FTC’s 2024 action suggested that the reviews generated by the service could contain errors, and if they were subsequently posted by the service subscribers, could mislead other consumers. In its new order, the FTC reasoned that the service did not in fact violate Section 5 of the FTC Act, that it burdened AI innovation, was not in the public interest, and thus merited setting aside. This latest action signals that the current FTC will not view AI products and services with skepticism merely because of how users may choose to use them. Notably, this reversal was issued pursuant to recommended policy actions in the White House’s July 2025 AI policy statement and that statement’s underlying January 2025 executive order.

 

NY Enacts AI Frontier Model Law (the RAISE Act)

On December 19, 2025, New York’s Governor signed the Responsible AI Safety and Education (RAISE) Act into law, effective January 1, 2027. Critically, the version signed by the Governor (S6953B) is not the final text. The Governor agreed to sign it on the condition that chapter amendments would be introduced in the next legislative session. Those amendments (A9449) were published on January 6, 2026 , and were written to more closely mirror California’s Frontier Model law (SB 53).

The RAISE Act applies to “Large Frontier Developers” that build “Frontier AI Models” and requires Large Frontier Developers to (i) implement and publicly share a Frontier AI Framework, which must detail things like how it adopts recognized standards, assesses and mitigates catastrophic risks, uses third-party evaluations, maintains cybersecurity, responds to safety incidents, and governs internal processes, (ii) review such disclosures annually, and (iii) report certain “critical safety” incidents within 72 hours.  

Companies should determine whether they are in scope for the law, and in addition to building a compliance plan, ensure they have a plan in place to report critical safety incidents, which may borrow from or be included in existing incident response plans.  

NIST Invites Comments on Draft Cybersecurity AI Framework

On December 16, 2025, the National Institute of Standards and Technology (“NIST”) published a draft internal report NIST-IR 8596, setting out a preliminary Cybersecurity Framework Profile for Artificial Intelligence, or the NIST “Cyber AI Profile.” The Cyber AI Profile is designed to assist organizations in thinking about how to strategically adopt AI while also addressing emerging cybersecurity risks, addressing three main focus areas: (1) securing AI systems, (2) conducting AI-enabled cyber defense, and (3) thwarting AI-enabled cyberattacks. The draft report is open for public comment until January 30, 2026.

Trump Administration Issues Executive Order to Further a Standard National Policy for AI

On December 11, 2025, President Trump issued an executive order (“EO”) titled “Ensuring a National Policy Framework for AI.” The EO sets out that it is “the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” To achieve that end, the EO requires the following: (1) the Attorney General must set up an AI litigation task force with the sole purpose of challenging state AI laws on grounds that they violate rules on interstate commerce, are preempted, or are otherwise unlawful, (2) the Secretary of Commerce must publish an evaluation of laws that conflict with the stated policy and issue a policy notice making states with onerous AI laws ineligible for funding under the Broadband Equity Access and Deployment program, (3) the FCC must begin process to consider establishing a federal AI reporting and disclosure standard, (4) the FTC must clarify how its rules against unfair and deceptive practices apply to AI models and when state laws requiring changes to truthful AI outputs are pre-empted by the FTC Act, and (5) the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must recommend establishing a Federal AI policy framework that pre-empts state AI laws that conflict with the stated policy. 

Notably, EO’s can only direct the executive branch (i.e. federal agencies) to help effectuate the president’s Article II constitutional power to “take care that the laws be faithfully executed.”  EOs cannot override laws, direct agencies to act unlawfully, or dictate how state and local government may act.  To this end, this EO does not create new law, and any federal AI law must still be passed by Congress.

State Audit Finds Limited Enforcement and Noncompliance with NYC Job Applicant AI Law

On December 5, 2025, the New York State Comptroller shared the results of its audit of enforcement under New York Local Law 144 of 2021 (“NY LL 144”), which governs the use of “automated employment decision tools” in New York City. The audit found the state’s Department of Consumer and Worker Protection (“DWCP”), responsible for the law’s enforcement since July 5, 2023, has failed to implement an effective program to enforce the law. The audit results provide several corrective recommendations for DWCP, such as improving its process for receiving NY LL 144-related consumer complaints and implementing mechanisms to proactively address non-compliance with NY LL 144 through research, tools-testing, and DWCP audits of public-facing materials.

Employers and employee agencies subject to NY LL 144 should review their operations, and related compliance documentation, ahead of a potential enforcement wave. 

Multi-Law Class Action Filed Against AI Transcription Company

On December 5, 2025, a consolidated class action was filed against Otter.ai, the maker of AI transcription tools, in California federal court. The amended complaint, which before consolidation was focused on California’s Invasion of Privacy Act (“CIPA”), brings together claims under federal wiretap and computer fraud laws, state law counterparts in California and Washington, and Illinois’s Biometric Information Privacy Act. The plaintiffs allege that Otter.ai’s transcription service violated these laws by intercepting, accessing, recording, and copying conversational data and participants’ voiceprints without participant consent. The complaint also argues the alleged acts, including Otter.ai’s use of communications to train underlying AI models, further violate common law tort (such as intrusion upon seclusion), and similar state laws (such as unlawful business acts and “theft” of conversational data). These theories aren’t necessarily novel, having previously been employed against pixels and other tracking technologies to varied success, but this appears to be the first high-profile case applying them to an AI service.

Organizations developing or deploying AI transcription tools or related AI tools should closely review their notice, consent, and data use practices to help mitigate the risk of threatened wiretap (and related) litigation.

Draft Regulations under Illinois’s New HR and Recruiting AI Law

In early December 2025, the Illinois Department of Human Rights informally circulated draft regulations to implement recent amendments to the state’s Human Rights Act addressing AI use in recruiting and employment contexts. The regulations build on the amendment’s requirement to provide notice to employees and prospective employees before using AI for employment decisions (such as hiring, promotion, employment opportunities, discipline, etc.). Amongst other requirements, the draft regulations give specific directions as to where, when, and how these notices must be provided, and what they must contain.

These draft regulations have not yet entered formal rulemaking, but requirements are not currently expected to change significantly. Companies covered by the law should review requirements now and consider updating applicable disclosures accordingly.

Washington State AI Task Force Provides AI Regulation Blueprint to Legislature

On December 1, 2025, Washington state’s AI Task Force published an interim report describing eight categories of recommendations for the state legislature to consider as the state moves to fill the regulatory gap left by federal inaction. The Task Force’s recommendations focus on transparency, accountability, and enabling innovation across both AI development and use. For example, they specifically recommend the state legislature enact laws that would require (1) certain disclosures concerning training data involved in AI development, (2) employers to give notice of AI use in the workplace (such as use for employee monitoring and termination decisions), (3) law enforcement to attest that AI-assisted reports have been reviewed by humans, (4) periodic impact assessments and independent audits for AI systems used to respond to healthcare prior authorization requests, and (5) high-risk AI systems be implemented within a governance framework that tracks NIST’s ethical AI principles. The Task Force’s final report to the legislature is due July 1, 2026.

Global Updates

EU AI Act Code of Practice

On December 17, 2025, the European Commission released the first draft of the “Code of Practice on marking and labeling of AI-generated content.” The Code of Practice outlines detailed steps for signatories relating to the obligations under Articles 50(2) (Providers must ensure that “outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated”) and 50(4) (Deployers of content that constitutes a “deep fake” “shall disclose that the content has been artificially generated or manipulated.”) Adherence to the Code of Practice is voluntary, and is a way to demonstrate compliance, but is not necessarily required for compliance. That said, organizations looking to address AI marking requirements (whether under the EU AI Act or otherwise) can look to this draft as a resource to understand more about possible solutions. The European Commission invites feedback on the draft, which is due on January 23, 2026.

Vietnam Enacts National AI Legislation

On December 10, 2025, Vietnam’s National Assembly passed a Law on Artificial Intelligence. The law will begin to take effect on March 1, 2026, and explicitly applies to foreign entities engaging in AI research, development, provision, deployment, or use within Vietnam. Obligations under the law vary across actors (developers, suppliers, implementers) as well as the AI system’s “risk level” under the law’s classification scheme (high, medium, low). For example, suppliers of high-risk AI systems must complete a “conformity assessment” before deployment, and implementers (who actually deploy such systems) are responsible for ensuring the system is operated and used for its intended purposes.

Organizations operating in Vietnam or with Vietnamese customers should review the law to determine the full scope of their obligations.

UK Cybersecurity Office Guidance Warns of AI Prompt Injection Risks

On December 8, 2025, the UK’s National Cyber Security Centre published guidance aimed at organizations who are vulnerable to AI prompt injections, namely, organizations offering LLM-based products. The guidance contains technical explanations about the distinct risks that AI prompt injection pose over SQL injection and why existing measures against SQL injection may not be sufficient. It also provides mitigation steps organizations with LLM-based products should take, including applying privilege limitations to LLMs, incorporating emerging techniques to reduce the risk of an LLM acting on instructions hidden in data, and monitoring usage data for suspicious activity.

Organizations with LLM-based products should review this guidance to ensure current risk documentation and applied mitigations are appropriately addressed.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Alex Schlight is a Partner at Hintze Law PLLC. Alex counsels US and international clients on data privacy & AI compliance and risk management strategies.

Taylor Widawski is a Partner at Hintze Law PLLC. Taylor advises clients on privacy and security matters and has experience providing strategic advice on AI & privacy programs as well as AI & privacy product counseling across a variety of industries and topics.

Cameron Cantrell is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

Wen Tseng Joins Hintze Law as Principal Privacy Consultant

Today, Hintze Law warmly welcomes Wen Tseng as our new Principal Privacy Consultant! For nearly two decades, Wen has been helping organizations develop and implement scalable and practical cybersecurity and privacy programs. He is a trusted advisor to organizations navigating the ever-evolving landscape of data protection and risk management, leveraging his expertise in assessing, building, and maturing GRC programs, transforming strategic vision into operational reality, and helping teams manage their compliance obligations under complex data protection and AI laws and regulations.

Before joining Hintze Law, Wen served as Director of Privacy at Microsoft, where he led the program operations team to ensure ongoing compliance with Data Subject Rights requests and supported Microsoft’s global marketing and sales activities with robust privacy reviews. Wen’s valuable ad tech and cybersecurity expertise helped navigate complex advertising technologies and privacy requirements while strengthening Microsoft’s privacy and security posture. Wen’s leadership extended to the Cloud Security Alliance as Interim Research Director, and earlier, Wen played a pivotal role at Washington Mutual Bank (now JPMorgan Chase), leading cybersecurity investigations and forensics, and also helped launch ShareBuilder, serving as its head of information security.

We’re thrilled to have Wen’s expertise and leadership on our team. Please join us in welcoming him to Hintze Law!

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

California’s Jam City Enforcement Action Highlights Importance of Opt-Out Mechanisms

On November 21st, 2025, the California Attorney General announced a $1.4 million dollar settlement with the mobile app gaming company, Jam City, Inc., the sixth such settlement by California regulators under the California Consumer Privacy Act (CCPA). The AG had sued Jam City, whose mobile gaming apps collect personal information such as device identifiers, IP addresses, and usage data, alleging that it had failed to offer appropriate methods to opt out of sale and sharing of personal data in violation of the CCPA.

The Complaint

In May 2024, an AG investigation found that 20 of Jam City’s 21 apps did not provide a link or setting for consumers to opt-out of the sale of their personal information or sharing of such data for behavioral advertising across Jam City’s apps and other apps and platforms.

The complaint thus alleges that Jam City did not provide CCPA compliant opt-out methods on its apps or its website. In addition to the lack of controls on the 20 apps, the 21st app provided a “Data Privacy” setting that allegedly did not reference the CCPA and was unclear about whether enabling the setting would effectuate an opt-out request. Additionally, the “Cookies and Interest Based Advertising” section of privacy policy on Jam City’s website “told consumers that they could email Jam City at ccpaoptout@jjamcity.com to stop targeted advertisements,” a method the AG claimed was allegedly insufficient under the CCPA.

The complaint further alleges that Jam City did not acquire opt-in consent to sell or share the personal information of consumers it knew to be less than 16 years old. Jam City allegedly age-gates several of its apps and provides “child-versions” which do not collect or share personal information with third parties. However, Jam City allegedly failed to properly age-gate six of its apps, only providing the child-versions to consumers who declared they were under 13. As a result, Jam City was improperly selling or sharing the data of consumers between 13 and 16 years old, including via cross-context behavioral advertising without obtaining opt-in consent.

The Settlement

The settlement orders Jam City to comply with the CCPA’s opt-out provisions, specifically requiring:

  • Implementing a consumer-friendly, easy to execute opt-out process with minimal steps and in the case of mobile apps or connected devices, such opt-out process being available in a setting or menu option that leads the consumer to a page, setting, or control that enables the consumer to opt-out the sale and sharing of the consumer’s personal information either immediately, or in the alternative, via a link to the notice of right to opt-out of sale/sharing in the privacy notice,;

  • Effectuating of a consumer opt-out l across all of Jam City’s mobile apps for any personal information associated with the consumer,;

  • Providing means by which the consumer can confirm the processing of their opt-out request; and

  • Avoiding language or design likely to confuse a reasonable consumer that choices related to the collection of personal information, other than the opt-out process, constitute a compliant opt-out method or must be selected to opt-out.

Additionally, the settlement also requires compliance with special rules for consumers under 16 years old:

  • Where Jam City implements an age-screening mechanism,

    • Designing the mechanism in a neutral manner that does not default to 16+ and does not suggest that certain features are unavailable to consumers under 16 years old;

    • Directing consumers who submit an age under 13 years old to a child-version of the app; and

    • Directing consumers who submit an age of at least 13 years old and less than 16 years old to a child-version of the app or obtain their affirmative authorization to sell or share their personal information before directing them to a non-child-version of the app.

  • Directing all third parties to whom Jam City sold or shared personal information collected prior to October 1, 2024, from consumers who submitted ages under 16 years old in any Jam City mobile apps to delete such personal information.

Takeaways

With its recent investigations and settlement actions, the California Privacy Protection Agency has shown its willingness to enforce the CCPA, especially its opt-out provisions. The Jam City settlement order to effectuate opt-outs wherever the business identifies the consumer is similar to the California’s AG recent settlement order against Sling TV, which was ordered to “provide an opt-out mechanism within the Sling TV app on various living-room devices, so consumers accessing Sling TV on various devices do not need to go to Sling TV’s website to opt-out.” This robust enforcement of implementation of opt-out measures comes from the CCPA regulation requiring businesses to comply with a customer’s previously given opt-out signal “where the consumer is known to the business."

Moreover, recent California legislation is a part of a national trend of increased concern for children’s online privacy and safety. Laws with additional requirements for processing minors’ data are being complemented with app store age-verification laws, such as California’s Digital Age Assurance Act, which provide developers knowledge of whether consumers are minors.

This enforcement action highlights the political momentum for minors’ online privacy and the CCPA’s increased enforcement activity. Consider the following actions to address the concerns raised in this enforcement action:

  • Review all platforms, both apps and websites where you collect personal information to confirm choice mechanisms for consumer rights are clear and conspicuous so that users can easily effectuate those rights and understand those requests are being processed.

  • Implement choice mechanisms to properly regulate processing in accordance with data protection law and the consumer’s age.

  • Effectuate opt-out requests so that the consumer is opted out of such processing across apps, devices, and services where the business has information connecting the identity of the consumer.

  • Ensure age-gating processes comply with regulatory guidance, including not defaulting to an age above the relevant age range or suggesting a particular age range is required to access certain features.

  • Be mindful of data practices and obligations with respect to minors’ data, especially as more states pass legislation protecting children and teens’ privacy, in particular, if you are an app publisher, be prepared to put in place processes to properly handle child and teen data as you may gain knowledge of age under coming age assurance laws.


Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Hansenard Piou is an Associate at Hintze Law PLLC with experience in global data protection issues, including kids’ global privacy laws, AADC, privacy impact assessments, GDPR, and privacy statements.  

New York’s Algorithmic Pricing Disclosure Act Takes Effect

New York’s Algorithmic Pricing Disclosure Act Takes Effect

By Felicity Slater,Sam Castic, Clara De Abreu E Souza

New York's Algorithmic Pricing Disclosure Act, signed into law by Governor Kathy Hochul on May 9th, 2025, officially took effect this week. The act regulates algorithmic pricing and requires covered entities to clearly and conspicuously disclose to consumers when such pricing methods are used.

Read More

Washington Marijuana Retailer Sued Under My Health My Data Act for Website Pixel Use

Washington Marijuana Retailer Sued Under My Health My Data Act for Website Pixel Use

by Sam Castic and Felicity Slater

A class action suit was recently filed against the companies that operate Uncle Ike's, a Seattle-area marijuana retailer. The suit filed in Washington federal court alleges common law tort claims, ECPA claims, and a claim under the My Health My Data Act (‘MHMDA’ or ‘the Act’). 

Read More

What is Government-Related Data Under the DOJ Rule?

What is Government-Related Data Under the DOJ Rule?

By Hansenard Piou and Sam Castic

This is the third in a series of blog postsabout the DOJ Rule regarding Access To U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons(the “DOJ Rule”). It provides an overview of the second type of data that the DOJ Rule focuses on: government-related data.

Read More

Hintze Law Recognized in 2026 Best Law Firms® Rankings

Hintze Law Recognized in 2026 Best Law Firms® Rankings

We are pleased to announce that Hintze Law has been recognized for excellence in the 2026 edition of Best Law Firms®, in both the national and Seattle area rankings for the firm’s work in Information Technology Law, Technology Law, and Advertising Law.

Read More

Federal District Court Dismisses VPPA Case, Ruling Apartments.com "Not a Videotape Business"

Federal District Court Dismisses VPPA Case, Ruling Apartments.com "Not a Videotape Business"

By Cameron Cantrell

On Monday, October 20, 2025, the Eastern District of Missouri dismissed a proposed class action based on the federal Video Privacy Protection Act ("VPPA") against CoStar, the company behind apartments.com. It isn't clear at this point whether the plaintiff will appeal.

Read More
Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze

California Prohibits AI Misrepresentations about Health Care Licenses

California Prohibits AI Misrepresentations about Health Care Licenses

By Cameron Cantrell

On October 11, 2025, California’s Governor Newsom signed AB 489, a law designed to address health advice from artificial intelligence (“AI”). It will take effect on January 1, 2026.

Read More

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

California Amends Artificial Intelligence Transparency Act and Passes AI Defenses Act

By Leslie Veloz

On October 13th, 2025, Governor Gavin Newsom signed into law AB 853, which amends the California Artificial Intelligence Transparency Act (AI Transparency Act (SB 942)), a law placing obligations on makers of generative AI systems aimed at increasing transparency to allow individuals to more easily assess whether digital content is generated or modified using AI.

Read More

California Passes Law on AI Companion Chatbot Safety

California Passes Law on AI Companion Chatbot Safety

By Clara De Abreu E Souza

On Oct. 13, 2025, California Governor Gavin Newsom signed into law Senate Bill 243 – Companion Chatbots. SB 243, authored by Senator Steve Padilla, requires operators of companion chatbot platforms to notify users that the chatbot is AI, provide specific disclosures to minors, and restrict harmful content. The law also includes a private right of action.

Read More

California Passes Digital Age-Assurance Act Into Law

California Passes Digital Age-Assurance Act Into Law

By Hansenard Piou

On October 13th, 2025, Governor Newsom signed the Digital Age Assurance Act (AB 1043) into law. Introduced by co-authors Assembly Member Buffy Wicks and Senator Tom Umberg, the law establishes age-assurance requirements for computer and mobile operating system providers and app stores as well as app developers with an aim to protect children’s online safety. The Digital Age Assurance Act enters into effect on January 1, 2027.

Read More

California’s Social Media Account Cancellation Act Signed into Law

California’s Social Media Account Cancellation Act Signed into Law

By Clara De Abreu E Souza

On October 8, 2025, California Governor Gavin Newsom signed into law Assembly Bill 656 — Account Cancellation. AB 656, authored by Assembly member Pilar Schiavo, focuses on social media platforms and requires them to provide users with a clear and accessible way to delete their accounts. This action must also trigger the complete deletion of the user’s personal data.

Read More

California Opt Me Out Act Signed into Law

California Opt Me Out Act Signed into Law

By Cameron Cantrell

On October 8, 2025, California’s Governor Newsom signed AB 566—the California Opt Me Out Act—into law. The California Opt Me Out Act, using the same definitions as the CCPA, requires any business that develops or maintains an internet browser to build in an opt-out preference signal (“OOPS”) functionality. The law takes effect on January 1, 2027.

Read More

California Further Amends its Data Broker Registration Law

California Further Amends its Data Broker Registration Law

By Hansenard Piou

On October 8, 2025, Governor Gavin Newsom signed SB 361 into law. Introduced by Senator Josh Becker, the bill amends California’s Data Broker Registration Law (and amendments to the law under the Delete Act) with additional disclosure requirements for data brokers.

Read More

What is “Bulk U.S. Sensitive Personal Data”?

What is “Bulk U.S. Sensitive Personal Data”?

By Emily Litka

This is the second in a series of blog posts about the DOJ Rule regarding Access To U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons (the “DOJ Rule”). It provides an overview of one of the categories of data that is in scope under the DOJ Rule: bulk U.S. sensitive personal data.

Read More

Governor Newsom signs Transparency in Frontier Artificial Intelligence Act

Governor Newsom signs Transparency in Frontier Artificial Intelligence Act

By Clara De Abreu E Souza

On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA). Authored by Senator Scott Wiener, TFAIA follows the release of the Governor’s California Report on Frontier AI Policy, which was drafted by the Joint California Policy Working Group on AI Frontier Models.

Read More

IAPP Publishes EU Digital Laws Report 2025

IAPP Publishes EU Digital Laws Report 2025

By Hansenard Piou

On September 30th, the IAPP (formerly the International Association of Privacy Professionals) released its EU Digital Laws Report 2025, a comprehensive analysis explaining and synthesizing the requirements of core EU digital laws. The report aims to provide a resource to help the broadest possible class of organizations, platforms, and developers comply with the Data Governance Act, the Data Act, the Digital Markets Act, the Digital Services Act, the EU AI Act, and the NIS2 Directive.

Read More

Does the DOJ Rule Apply?

Does the DOJ Rule Apply?

By Hansenard Piou and Sam Castic

This is the first in a series of blog posts about the DOJ Rule regarding Access To U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons (the “DOJ Rule”).  It provides a high-level overview of the kinds of cross-border data transfers that are regulated by the DOJ Rule. Future blog posts will more closely examine the DOJ Rule, its requirements, potential impacts, and strategies to address compliance.

Read More