By: Alex Schlight, Taylor Widawski, and Cameron Cantrell
Hintze Law’s monthly Global AI Update by our AI + ML Group provides a curated overview of key AI‑related legal and regulatory updates from the past month. We spotlight new developments and emerging trends to help organizations that are developing, deploying, or relying on AI technologies stay ahead of what’s next.
If you’d like to receive alerts for our blog posts, visit our blog to sign up.
US Updates
Proposed Class Action Alleges AI Discrimination in Hiring Practices
On January 20, 2026, a proposed class action was filed in California state court against Eightfold AI, a company offering AI-powered candidate recruitment and hiring tools. The lawsuit alleges that Eightfold is a “consumer reporting agency” under the federal Fair Credit Reporting Act (FCRA) and California’s state-law counterpart, and that it discloses “consumer reports” to be used for “employment purposes” without providing the notices and consumer rights those laws require. The lawsuit further alleges that such violations constitute unlawful and unfair business practices under California’s consumer protection law. Organizations that offer or use AI-powered recruiting and hiring tools should consider reviewing their practices against the allegations in this complaint, as well as against the FCRA and similar state laws, to identify any obligations or risks they have not yet addressed.
IAB Publishes AI Transparency and Disclosure Framework
On January 15, 2026, the Interactive Advertising Bureau (IAB) published its first “AI Transparency and Disclosure Framework” for members. The Framework is designed to standardize best practices for AI transparency in the advertising industry. It addresses consumer-facing disclosures, machine-readable metadata, and format-specific guidance across images, video, audio, text, and synthetic influencers. Notably, the Framework does not call for consumer-facing disclosure of every AI use; such disclosures are called for only where nondisclosure would risk misleading consumers about someone’s identity or character.
CA AG Announces Surveillance Pricing Investigative Sweep
On January 27, 2026, California’s Attorney General announced an investigative sweep focused on surveillance pricing. The California DOJ will be sending letters to businesses with a significant online presence in the retail, grocery, and hotel sectors. The letters will request information about how businesses use consumers’ shopping and internet browsing history, location, demographic, inference, or other data to set the prices of goods or services.
NY AG Demands Information from Instacart about Algorithmic Pricing
On January 8, 2026, the New York Attorney General's Office announced that it had sent a letter to Instacart asking for more information about the company's use of algorithmic pricing. The letter signals that the state is prepared to enforce the recently enacted New York Algorithmic Pricing Disclosure Act. If your company uses algorithms to help set pricing and is in scope for this law, note that, among other obligations, the law requires a disclosure near any algorithm-set price stating: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” The disclosure itself may create additional risk, as it may invite scrutiny of the underlying data from other regulators. Read our post by Felicity Slater on the New York Algorithmic Pricing Disclosure Act.
Kentucky AG Sues Character.AI Under the State’s Newly Effective Privacy Law
On January 8, 2026, eight days after the Kentucky Consumer Data Protection Act (KCDPA) went into effect, the Kentucky AG announced its first lawsuit under the new law. The lawsuit was filed against Character Technologies, Inc., owner of Character.AI, a platform offering companion chatbots. The complaint asserts multiple claims under both the KCDPA and Kentucky’s general consumer protection law, including allegations that Character.AI collected and used personal data of children under the age of 13 without obtaining verifiable parental consent. The allegations also suggest that sole reliance on users’ self-declared age was insufficient for age verification. This lawsuit highlights the continued regulatory focus on protecting children and minors, as well as the growing scrutiny of companion chatbot platforms. Organizations that offer products or services that may be used by children or minors should: (i) calibrate age assurance measures to the types of risks children and minors can face; and (ii) implement and test features and controls that protect children and minors from identified risks.
Global Updates
Ontario Privacy Regulator Publishes Responsible AI Principles
On January 21, 2026, Ontario’s Information and Privacy Commissioner (“OIPC”) and Human Rights Commission (“OHRC”) released joint Principles for the Responsible Use of Artificial Intelligence. The Principles borrow their definition of in-scope “AI systems” from Ontario’s Enhancing Digital Security and Trust Act, 2024, and explicitly apply to automated decision-making systems, systems designed to undertake activities typically performed using human intelligence and skills, generative AI systems, foundational large language models and their applications, traditional AI technologies, and other emerging, innovative uses of AI technologies. Collectively, the Principles require that in-scope AI systems be used in a way that is (1) valid and reliable, (2) safe, (3) privacy-protective, (4) human rights affirming, (5) transparent, and (6) accountable. While organizations are not required to comply with the Principles, the OIPC and OHRC indicate that doing so will help ensure compliance with Ontario’s human rights and privacy laws.
British Columbia Privacy Regulator Publishes Guidance on AI Scribes in Healthcare
On January 28, 2026, British Columbia’s Information and Privacy Commissioner published guidance addressing key considerations for using AI scribes in the health sector. The guidance details how British Columbia’s Personal Information Protection Act (BCPIPA) applies to “tools that use generative AI to listen to, transcribe, and summarize real-time conversations between patients and healthcare providers.” Healthcare organizations considering adopting AI scribes should review this guidance to ensure they account for all applicable requirements under BCPIPA. More generally, healthcare organizations can also use the guidance to benchmark approaches and understand key issues with AI scribe use related to output accuracy, vendor agreements, patient consents, and cybersecurity.
Taiwan’s Basic Law on AI Takes Effect
On January 14, 2026, Taiwan’s Basic Law on Artificial Intelligence took effect after being passed only a few weeks prior, on December 24, 2025. The law establishes a fundamental AI framework, sets out core principles for Taiwan’s own AI usage (i.e., in the public sector), and provides high-level policy objectives. Taiwan’s National Science and Technology Council will serve as the law’s central competent authority, and the country’s Ministry of Digital Affairs is tasked with creating a risk-based classification framework under the law, based on international standards. While the law does not currently impose obligations on private-sector organizations, detailed sectoral regulations are expected to follow.
South Korea’s Basic Act on AI Takes Effect
On January 22, 2026, South Korea’s Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness (the “Act”) took effect; an English translation is available here. The Act applies to AI across its lifecycle, including development and implementation. It imposes special obligations on generative, high-impact, and high-performance (high-computational-power) AI, including unique notice, safety, and risk assessment requirements. Certain in-scope entities may also need to designate a South Korean agent. The Ministry of Science and ICT has provided some information about its interpretation of the Act, though more details are expected to be forthcoming. The Act applies broadly to AI that has “an impact on the domestic [South Korean] market or users.” Entities that believe they may meet this threshold should review the Act closely to determine whether they are in scope for its requirements.
Spain’s DPA Publishes Guidance on AI Voice Transcription Issues
On January 14, 2026, Spain’s DPA published a blog post covering the data protection implications of AI voice transcription services under the GDPR, EU AI Act, and similar laws. The guidance highlights several considerations for controllers implementing these technologies, applying each GDPR data protection principle to AI voice transcription. For example, controllers should be aware of the distinct but related processing activities in many of these services, such as the recording of audio, the creation of the transcription itself, and the act of validating or fine-tuning underlying speech-to-text models. Data subjects should be adequately notified of all involved processing, including “whether third parties will listen to their conversation (for example, in retraining).” Organizations processing personal data about residents of Spain should review this and other recent AI-related guidance from Spain’s DPA, including a look at how the DPA handles generative AI use internally.
Singapore Publishes Model AI Governance Framework for Agentic AI
On January 22, 2026, Singapore’s Infocomm Media Development Authority (IMDA), the country’s telecom and media regulator, announced its “Model AI Governance Framework for Agentic AI.” The Framework provides an overview of agentic AI as a technology (focused primarily on large language model-based agents), standardized terminology for discussing agentic AI, and the sources and types of risk in using agentic AI. It also outlines best practices for organizations considering agentic AI, including: (1) assessing and bounding the involved risks up front, (2) making humans meaningfully accountable in agentic workflows, (3) implementing technical and non-technical controls and processes across the agent life cycle, and (4) informing end users of their responsibilities versus those of the agent. The Framework is not strictly keyed to Singapore law and may serve as a useful reference point for any organization developing or refining its internal governance framework for agentic AI.
Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized boutique law firm that provides counseling exclusively on privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.
Alex Schlight is a Partner at Hintze Law PLLC recognized by Best Lawyers & Super Lawyers. Alex is co-chair of our AI + ML Group and counsels US and international clients on data privacy & AI compliance and risk management strategies.
Taylor Widawski is a Partner at Hintze Law PLLC recognized by Best Lawyers & Super Lawyers. Taylor is co-chair of our AI + ML Group and advises clients on privacy and security matters, with experience providing strategic advice on AI & privacy programs as well as AI & privacy product counseling across a variety of industries and topics.
Cameron Cantrell is an Associate at Hintze Law PLLC recognized by Best Lawyers. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.
