By Emily Litka
On July 9, 2024, the Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office (LA DA) reached a settlement with NGL Labs, the maker of the “NGL: ask me anything” app, and its co-founders. The complaint alleged violations of the Federal Trade Commission Act (FTC Act), the Children’s Online Privacy Protection Act (COPPA), the Restore Online Shoppers’ Confidence Act (ROSCA), and similar California state laws. The complaint also named NGL’s co-founders individually.
NGL (short for “not gonna lie”) is an app that enables its users to receive anonymous messages. A user can prepare a question and share a link to that question via social media. The link directs the user’s friends, social contacts, and anyone else with access to the link to respond anonymously on NGL’s website. The FTC and LA DA took issue with several of NGL’s practices; we highlight two key takeaways below:
Regulators are scrutinizing false AI claims
Recognizing the potential harms of other anonymous messaging apps, NGL claimed that its “world class AI content moderation” and “deep learning … algorithms” could prevent cyberbullying. It even claimed that its AI could detect the semantic meaning of emojis to help “filter out harmful messages.” NGL received numerous complaints from concerned parents and educators that students were using the app to send threatening and sexually explicit content. Media outlets that tested the app reported that the harms NGL purported to prevent using AI were, instead, rampant. According to the complaint, NGL made no changes to its marketing or its AI in light of the concerns raised.
The FTC and LA DA alleged that NGL’s marketing claims were deceptive and unfair because NGL’s representations that its AI prevented cyberbullying and created a safe environment for children and teens were false.
The FTC is willing to ban minor users under COPPA
In addition to the unfairness and deception claims, the FTC alleged that NGL actively marketed the app to children and teens, identifying them as a key market to win. The complaint alleged that NGL violated COPPA by failing to provide notice, obtain verifiable parental consent, provide reasonable means for parents to delete the personal information collected from their children, and delete personal information once it was no longer necessary.
As part of the settlement order, NGL is permanently banned from offering its service to individuals under 18. NGL is further required to have a “neutral age gate” and bar new and existing users who indicate that they are under 18 from accessing the service.
This order is notable for several reasons. It is the first time the Agency has barred an online service from offering its service to minors. The FTC also mandated that these protections apply to children and teens under 18, a departure from COPPA, which applies only to children under 13. COPPA also doesn’t typically require organizations to age-gate to ensure that children aren’t accessing a service (relying instead on an “actual knowledge” or “directed to children” standard); here, however, the Agency mandated that NGL enforce the restriction via an age gate. Overall, the order reflects the FTC’s broader agenda of strong enforcement in the name of child and teen online safety, and the levers it may pull in furtherance of that agenda.
Practice Tips
Before making a representation about what your AI can or cannot do, ensure that the claim has been substantiated. For as long as you maintain the representation, ensure that you have documentation and data to support it. This might include regular testing or red-teaming of a model to demonstrate that the AI is meeting defined confidence levels.
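For teams that want to operationalize this, here is a minimal Python sketch of what a recurring substantiation check might look like, assuming a simple binary moderation model, a labeled held-out test set, and hypothetical thresholds tied to the public claim. None of this reflects NGL’s actual system; the model interface and numbers are illustrative assumptions.

```python
# A minimal sketch, not a real system: a recurring check that a public claim
# such as "our AI catches >= 95% of harmful messages" is still substantiated.
# The model interface, test set, and thresholds below are all assumptions.
from dataclasses import dataclass

# Documented thresholds that back the marketing claim (hypothetical values).
CLAIMED_RECALL = 0.95
CLAIMED_PRECISION = 0.90

@dataclass
class EvalResult:
    precision: float
    recall: float

def evaluate(model, labeled_messages) -> EvalResult:
    """Score a binary moderation model against a held-out, labeled test set."""
    tp = fp = fn = 0
    for text, is_harmful in labeled_messages:
        flagged = model.predict(text)  # assumed interface: True if flagged harmful
        if flagged and is_harmful:
            tp += 1
        elif flagged:
            fp += 1
        elif is_harmful:
            fn += 1
    return EvalResult(
        precision=tp / (tp + fp) if (tp + fp) else 0.0,
        recall=tp / (tp + fn) if (tp + fn) else 0.0,
    )

def claim_still_supported(result: EvalResult) -> bool:
    """True only if current performance still supports the public claim."""
    return result.recall >= CLAIMED_RECALL and result.precision >= CLAIMED_PRECISION
```

Running a check like this on a schedule and archiving each result with a timestamp produces the documentation trail described above; if the check ever fails, that is the signal to update the model or the marketing copy.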
Consider having mechanisms and processes for receiving complaints about the efficacy of your AI, and ensure that those complaints are routed to the right individuals and that resolutions are documented. You might accomplish this with a reasonably accessible customer support email and procedures for routing concerns to the appropriate teams to investigate. You might also consider a “thumbs-up/down” button or another in-experience mechanism for your customers to report concerns.
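As a rough illustration, a minimal Python sketch of a complaint-intake record that routes concerns to a responsible team and documents the resolution; the team names, categories, and fields here are all hypothetical:

```python
# A minimal sketch, assuming hypothetical team names and complaint categories:
# an intake record that routes an AI-efficacy complaint to a responsible team
# and documents its resolution so there is an auditable record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical routing table: complaint category -> owning team.
ROUTING = {
    "ai_moderation": "trust-and-safety",
    "minors": "child-safety",
    "billing": "payments",
}

@dataclass
class Complaint:
    source: str        # e.g., "support-email", "thumbs-down"
    category: str      # key into ROUTING
    details: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    assigned_team: str | None = None
    resolution: str | None = None

def route(complaint: Complaint) -> Complaint:
    """Assign the complaint to its owning team, falling back to triage."""
    complaint.assigned_team = ROUTING.get(complaint.category, "support-triage")
    return complaint

def resolve(complaint: Complaint, note: str) -> Complaint:
    """Record how the complaint was addressed so the outcome is documented."""
    complaint.resolution = note
    return complaint
```

An in-product “thumbs-down” press would then simply create a Complaint with source="thumbs-down" and hand it to route(), giving every report the same documented path from intake to resolution.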
So long as your service is not primarily directed to children, consider managing risk using age-gating, especially where you have actual knowledge that children are using your service or your service is reasonably likely to be accessed by children. Where you use an age gate, make sure that it’s “neutral”: don’t default to an age of 18 or over, don’t encourage users to misrepresent their age by stating that certain features won’t work if they are under 18, and prevent users from back-buttoning or “trying again” to evade the age gate.
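To make those “neutral” requirements concrete, here is a minimal Python sketch of server-side age-gate logic, assuming the client form asks for a full birthdate with no pre-filled default and no hint that an under-18 answer is penalized; the storage and user keying are simplified stand-ins:

```python
# A minimal sketch of a "neutral" age gate, under these assumptions: the form
# collects a full birthdate (nothing pre-selected), and each user's first
# answer is persisted so reloading or back-buttoning can't evade the gate.
from datetime import date

MIN_AGE = 18
_answers: dict[str, bool] = {}  # stand-in for durable storage: first answer wins

def age_on(birthdate: date, today: date) -> int:
    """Whole years between birthdate and today."""
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def passes_age_gate(user_key: str, birthdate: date) -> bool:
    """The first answer is binding; a rejected user can't simply try again."""
    if user_key in _answers:
        return _answers[user_key]
    result = age_on(birthdate, date.today()) >= MIN_AGE
    _answers[user_key] = result
    return result
```

The design choices mirror the settlement’s requirements: a birthdate prompt rather than a leading “Are you over 18?” question, no default value, and a persisted first answer so retries change nothing.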
Emily Litka is a Senior Associate at Hintze Law PLLC focusing her practice on global privacy and emerging AI laws and regulations.
Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.