Hintze Law Global AI Legal Updates

Hintze Law’s monthly Global AI Update provides a curated overview of key AI‑related legal and regulatory updates from the past month. We spotlight new developments and emerging trends to help organizations that are developing, deploying, or relying on AI technologies stay ahead of what’s next. 

If you’d like to receive alerts for our blog posts, visit our blog to sign up.

US Updates

xAI Sues to Block Enforcement of California’s AB 2013

On December 29, 2025, xAI (parent company of X, formerly Twitter) filed suit against California’s Attorney General to enjoin enforcement of California’s AB 2013, a generative artificial intelligence (“Gen AI”) transparency law that requires “developers” of GenAI systems or services to publicly disclose information about the training data behind their Gen AI products. The lawsuit came just two days before the law took effect on January 1, 2026. xAI’s complaint primarily alleges that the law’s training disclosure requirements are an unconstitutional taking under the Fifth Amendment, forcing xAI to disclose valuable trade secrets without fair compensation, and that it is compelled speech that violates the First Amendment. The law is still in effect and enforceable, but companies subject to AB 2013 should watch these developments closely.

FTC Reverses Consent Decree for AI Service

On December 22, 2025, the Federal Trade Commission reopened and set aside a 2024 consent decree against Rytr, LLC, which offered an AI-enabled writing assistance service for subscribers to use to generate product and service reviews. The FTC’s 2024 action suggested that the reviews generated by the service could contain errors, and if they were subsequently posted by the service subscribers, could mislead other consumers. In its new order, the FTC reasoned that the service did not in fact violate Section 5 of the FTC Act, that it burdened AI innovation, was not in the public interest, and thus merited setting aside. This latest action signals that the current FTC will not view AI products and services with skepticism merely because of how users may choose to use them. Notably, this reversal was issued pursuant to recommended policy actions in the White House’s July 2025 AI policy statement and that statement’s underlying January 2025 executive order.

 

NY Enacts AI Frontier Model Law (the RAISE Act)

On December 19, 2025, New York’s Governor signed the Responsible AI Safety and Education (RAISE) Act into law, effective January 1, 2027. Critically, the version signed by the Governor (S6953B) is not the final text. The Governor agreed to sign it on the condition that chapter amendments would be introduced in the next legislative session. Those amendments (A9449) were published on January 6, 2026 , and were written to more closely mirror California’s Frontier Model law (SB 53).

The RAISE Act applies to “Large Frontier Developers” that build “Frontier AI Models” and requires Large Frontier Developers to (i) implement and publicly share a Frontier AI Framework, which must detail things like how it adopts recognized standards, assesses and mitigates catastrophic risks, uses third-party evaluations, maintains cybersecurity, responds to safety incidents, and governs internal processes, (ii) review such disclosures annually, and (iii) report certain “critical safety” incidents within 72 hours.  

Companies should determine whether they are in scope for the law, and in addition to building a compliance plan, ensure they have a plan in place to report critical safety incidents, which may borrow from or be included in existing incident response plans.  

NIST Invites Comments on Draft Cybersecurity AI Framework

On December 16, 2025, the National Institute of Standards and Technology (“NIST”) published a draft internal report NIST-IR 8596, setting out a preliminary Cybersecurity Framework Profile for Artificial Intelligence, or the NIST “Cyber AI Profile.” The Cyber AI Profile is designed to assist organizations in thinking about how to strategically adopt AI while also addressing emerging cybersecurity risks, addressing three main focus areas: (1) securing AI systems, (2) conducting AI-enabled cyber defense, and (3) thwarting AI-enabled cyberattacks. The draft report is open for public comment until January 30, 2026.

Trump Administration Issues Executive Order to Further a Standard National Policy for AI

On December 11, 2025, President Trump issued an executive order (“EO”) titled “Ensuring a National Policy Framework for AI.” The EO sets out that it is “the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” To achieve that end, the EO requires the following: (1) the Attorney General must set up an AI litigation task force with the sole purpose of challenging state AI laws on grounds that they violate rules on interstate commerce, are preempted, or are otherwise unlawful, (2) the Secretary of Commerce must publish an evaluation of laws that conflict with the stated policy and issue a policy notice making states with onerous AI laws ineligible for funding under the Broadband Equity Access and Deployment program, (3) the FCC must begin process to consider establishing a federal AI reporting and disclosure standard, (4) the FTC must clarify how its rules against unfair and deceptive practices apply to AI models and when state laws requiring changes to truthful AI outputs are pre-empted by the FTC Act, and (5) the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must recommend establishing a Federal AI policy framework that pre-empts state AI laws that conflict with the stated policy. 

Notably, EO’s can only direct the executive branch (i.e. federal agencies) to help effectuate the president’s Article II constitutional power to “take care that the laws be faithfully executed.”  EOs cannot override laws, direct agencies to act unlawfully, or dictate how state and local government may act.  To this end, this EO does not create new law, and any federal AI law must still be passed by Congress.

State Audit Finds Limited Enforcement and Noncompliance with NYC Job Applicant AI Law

On December 5, 2025, the New York State Comptroller shared the results of its audit of enforcement under New York Local Law 144 of 2021 (“NY LL 144”), which governs the use of “automated employment decision tools” in New York City. The audit found the state’s Department of Consumer and Worker Protection (“DWCP”), responsible for the law’s enforcement since July 5, 2023, has failed to implement an effective program to enforce the law. The audit results provide several corrective recommendations for DWCP, such as improving its process for receiving NY LL 144-related consumer complaints and implementing mechanisms to proactively address non-compliance with NY LL 144 through research, tools-testing, and DWCP audits of public-facing materials.

Employers and employee agencies subject to NY LL 144 should review their operations, and related compliance documentation, ahead of a potential enforcement wave. 

Multi-Law Class Action Filed Against AI Transcription Company

On December 5, 2025, a consolidated class action was filed against Otter.ai, the maker of AI transcription tools, in California federal court. The amended complaint, which before consolidation was focused on California’s Invasion of Privacy Act (“CIPA”), brings together claims under federal wiretap and computer fraud laws, state law counterparts in California and Washington, and Illinois’s Biometric Information Privacy Act. The plaintiffs allege that Otter.ai’s transcription service violated these laws by intercepting, accessing, recording, and copying conversational data and participants’ voiceprints without participant consent. The complaint also argues the alleged acts, including Otter.ai’s use of communications to train underlying AI models, further violate common law tort (such as intrusion upon seclusion), and similar state laws (such as unlawful business acts and “theft” of conversational data). These theories aren’t necessarily novel, having previously been employed against pixels and other tracking technologies to varied success, but this appears to be the first high-profile case applying them to an AI service.

Organizations developing or deploying AI transcription tools or related AI tools should closely review their notice, consent, and data use practices to help mitigate the risk of threatened wiretap (and related) litigation.

Draft Regulations under Illinois’s New HR and Recruiting AI Law

In early December 2025, the Illinois Department of Human Rights informally circulated draft regulations to implement recent amendments to the state’s Human Rights Act addressing AI use in recruiting and employment contexts. The regulations build on the amendment’s requirement to provide notice to employees and prospective employees before using AI for employment decisions (such as hiring, promotion, employment opportunities, discipline, etc.). Amongst other requirements, the draft regulations give specific directions as to where, when, and how these notices must be provided, and what they must contain.

These draft regulations have not yet entered formal rulemaking, but requirements are not currently expected to change significantly. Companies covered by the law should review requirements now and consider updating applicable disclosures accordingly.

Washington State AI Task Force Provides AI Regulation Blueprint to Legislature

On December 1, 2025, Washington state’s AI Task Force published an interim report describing eight categories of recommendations for the state legislature to consider as the state moves to fill the regulatory gap left by federal inaction. The Task Force’s recommendations focus on transparency, accountability, and enabling innovation across both AI development and use. For example, they specifically recommend the state legislature enact laws that would require (1) certain disclosures concerning training data involved in AI development, (2) employers to give notice of AI use in the workplace (such as use for employee monitoring and termination decisions), (3) law enforcement to attest that AI-assisted reports have been reviewed by humans, (4) periodic impact assessments and independent audits for AI systems used to respond to healthcare prior authorization requests, and (5) high-risk AI systems be implemented within a governance framework that tracks NIST’s ethical AI principles. The Task Force’s final report to the legislature is due July 1, 2026.

Global Updates

EU AI Act Code of Practice

On December 17, 2025, the European Commission released the first draft of the “Code of Practice on marking and labeling of AI-generated content.” The Code of Practice outlines detailed steps for signatories relating to the obligations under Articles 50(2) (Providers must ensure that “outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated”) and 50(4) (Deployers of content that constitutes a “deep fake” “shall disclose that the content has been artificially generated or manipulated.”) Adherence to the Code of Practice is voluntary, and is a way to demonstrate compliance, but is not necessarily required for compliance. That said, organizations looking to address AI marking requirements (whether under the EU AI Act or otherwise) can look to this draft as a resource to understand more about possible solutions. The European Commission invites feedback on the draft, which is due on January 23, 2026.

Vietnam Enacts National AI Legislation

On December 10, 2025, Vietnam’s National Assembly passed a Law on Artificial Intelligence. The law will begin to take effect on March 1, 2026, and explicitly applies to foreign entities engaging in AI research, development, provision, deployment, or use within Vietnam. Obligations under the law vary across actors (developers, suppliers, implementers) as well as the AI system’s “risk level” under the law’s classification scheme (high, medium, low). For example, suppliers of high-risk AI systems must complete a “conformity assessment” before deployment, and implementers (who actually deploy such systems) are responsible for ensuring the system is operated and used for its intended purposes.

Organizations operating in Vietnam or with Vietnamese customers should review the law to determine the full scope of their obligations.

UK Cybersecurity Office Guidance Warns of AI Prompt Injection Risks

On December 8, 2025, the UK’s National Cyber Security Centre published guidance aimed at organizations who are vulnerable to AI prompt injections, namely, organizations offering LLM-based products. The guidance contains technical explanations about the distinct risks that AI prompt injection pose over SQL injection and why existing measures against SQL injection may not be sufficient. It also provides mitigation steps organizations with LLM-based products should take, including applying privilege limitations to LLMs, incorporating emerging techniques to reduce the risk of an LLM acting on instructions hidden in data, and monitoring usage data for suspicious activity.

Organizations with LLM-based products should review this guidance to ensure current risk documentation and applied mitigations are appropriately addressed.

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law. 

Alex Schlight is a Partner at Hintze Law PLLC. Alex counsels US and international clients on data privacy & AI compliance and risk management strategies.

Taylor Widawski is a Partner at Hintze Law PLLC. Taylor advises clients on privacy and security matters and has experience providing strategic advice on AI & privacy programs as well as AI & privacy product counseling across a variety of industries and topics.

Cameron Cantrell is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.

Don’t Sleep on Maryland: The Maryland Online Data Privacy Act Will Keep Health and Wellness Companies Up at Night — Hintze