
Grammarly's $5M 'Expert Review' Meltdown: When AI Cosplays As Stephen King
Everyone thinks AI ethics lawsuits are about copyright infringement and training-data scraping. Wrong. The juiciest legal battle brewing isn't about what AI models trained on; it's about who they're pretending to be.
<> "I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise." — Julia Angwin/>
Journalist Julia Angwin just filed a class-action lawsuit against Grammarly's parent company for over $5 million in damages. The crime? Grammarly's "Expert Review" feature—a $12/month premium tool launched in August 2025—was generating AI writing suggestions while claiming they came from real experts like Angwin, Stephen King, and Neil deGrasse Tyson.
None of these people consented to become Grammarly's AI puppets.
The feature worked by using large language models trained on public works to mimic these experts' writing styles, then slapping their names on the output. Sure, there was a disclaimer saying these weren't real endorsements. But when you're charging users $12 monthly for "expert feedback," a tiny legal notice doesn't exactly scream transparency.
The Elephant in the Room
This isn't just about Grammarly being sneaky with marketing copy. This lawsuit exposes a fundamental problem with how AI companies think about identity theft at scale.
Consider the technical implications here:
1. Explicit consent mechanisms will need to be baked into AI development workflows
2. Auditable training data provenance becomes legally mandatory, not just ethically nice-to-have
3. Fine-grained model cards detailing identity usage could become standard (a sketch of what such a record might look like follows this list)
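To make that third point concrete, here's a minimal sketch of what an identity-aware model card record could look like. The schema and field names (`PersonaUsage`, `consent_status`, `license_reference`) are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ConsentStatus(Enum):
    """Whether a named individual has agreed to persona-style generation."""
    OPTED_IN = "opted_in"
    OPTED_OUT = "opted_out"
    UNKNOWN = "unknown"  # no record on file; safest to treat as a block


@dataclass
class PersonaUsage:
    """One named individual whose identity or style the model may invoke."""
    persona_name: str
    consent_status: ConsentStatus
    consent_date: date | None = None
    license_reference: str | None = None  # e.g. an ID for a signed agreement


@dataclass
class IdentityAwareModelCard:
    """A model card extended with auditable identity-usage metadata."""
    model_name: str
    training_data_sources: list[str] = field(default_factory=list)
    persona_usages: list[PersonaUsage] = field(default_factory=list)

    def cleared_personas(self) -> list[str]:
        """Names with explicit, documented opt-in; everything else is off-limits."""
        return [
            p.persona_name
            for p in self.persona_usages
            if p.consent_status is ConsentStatus.OPTED_IN
        ]
```

The exact schema matters less than the shift it represents: consent becomes a queryable, auditable artifact instead of a line in a disclaimer.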
Peter Romer-Friedman, Angwin's attorney, called this "legally straightforward, but the broader implications are far-reaching." He's right. Every AI company building persona-based features just got put on notice.
Grammarly discontinued the feature after The Verge and Casey Newton's Platformer newsletter exposed the whole scheme. The company admitted it "fell short"—corporate speak for "we got caught red-handed."
But here's what's really wild: they were using deceased authors alongside living experts. The ethical gymnastics required to justify that decision must have been Olympic-level.
Why This Matters Beyond Grammarly
This case could trigger market-wide consent protocols that fundamentally change how AI companies operate. Imagine needing explicit permission before your LLM can generate content "in the style of" any named individual.
The ripple effects are massive:
- Higher operational costs for licensing expert endorsements
- Slower feature rollouts due to compliance requirements
- Enterprise clients getting spooked about legal exposure
- Investor demands for ethics audits before funding
Platforms like Hugging Face might need to integrate legal compliance layers. Enterprise AI tools will need to screen public figures' names out of prompts entirely.
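Here's what that kind of guardrail could look like in practice: a pre-generation check that refuses any prompt invoking a name without recorded consent. The registry and the regex matching below are deliberately simplistic assumptions; a production system would want real named-entity recognition and a consent database, not a hardcoded dict.

```python
import re

# Hypothetical consent registry: name -> documented opt-in on file?
PERSONA_CONSENT_REGISTRY: dict[str, bool] = {
    "Julia Angwin": False,
    "Stephen King": False,
    "Neil deGrasse Tyson": False,
}


def screen_prompt(prompt: str) -> str:
    """Refuse prompts that invoke a named individual without recorded consent."""
    for name, has_consent in PERSONA_CONSENT_REGISTRY.items():
        # Word-boundary match so partial names don't trigger false positives.
        if re.search(rf"\b{re.escape(name)}\b", prompt, re.IGNORECASE) and not has_consent:
            raise PermissionError(
                f"Prompt references '{name}' without documented consent."
            )
    return prompt


# This would raise PermissionError before any generation happens:
# screen_prompt("Rewrite this paragraph in the style of Stephen King.")
```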
The irony is delicious: Grammarly, a company built on helping people write better, couldn't craft a legal strategy to save its flagship premium feature.
What Developers Should Do Now
Don't wait for the court ruling. If you're building AI that references real people:
- Implement explicit opt-in databases for any named personas
- Label synthetic content unmistakably as synthetic, not behind a weak disclaimer
- Audit your training data for identity usage patterns
- Build consent workflows into your development pipeline now (one way to wire this up is sketched below)
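Pulling the checklist together, here's a rough sketch of a generation gate that consults an opt-in store and stamps an unmissable label on anything it lets through. `OPT_IN_PERSONAS` and `generate_styled_text` are hypothetical stand-ins for your own storage layer and model call, not any real API.

```python
from typing import Callable

# Hypothetical opt-in store: only names with explicit, recorded permission.
OPT_IN_PERSONAS: set[str] = {"Jane Example"}  # illustrative entry only


def generate_with_consent(
    persona: str,
    prompt: str,
    generate_styled_text: Callable[[str, str], str],
) -> str:
    """Generate persona-styled text only for opted-in names, clearly labeled."""
    if persona not in OPT_IN_PERSONAS:
        raise PermissionError(
            f"No recorded opt-in for '{persona}'; refusing to generate."
        )
    draft = generate_styled_text(persona, prompt)
    # The label travels with the output itself, not a footnote somewhere else.
    return (
        f"[AI-GENERATED: style imitation of {persona}, "
        f"produced with documented consent]\n{draft}"
    )
```

Failing closed is the design choice that matters here: no record means no generation, which is the posture this lawsuit suggests courts will expect.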
Angwin's case is being heard in the U.S. District Court for the Southern District of New York. Legal experts are calling it precedent-setting for how AI products can trade on public figures' identities.
The tech industry's pattern of "move fast and ask forgiveness later" just hit a $5 million+ wall. And honestly? It's about time.

