Blake Stockton's AI Tell Became a $50M Content Crisis
Have you noticed how every LinkedIn post sounds like it was written by the same overly dramatic intern?
Blake Stockton spotted it first: the "it's not just X—it's Y" construction, now so pervasive in AI-generated content that it's basically a confession of synthetic writing. What began as ChatGPT's favorite rhetorical flourish has become the most reliable tell for spotting AI slop.
<> "One of the most beloved writing techniques of AI," Stockton calls it in his "Don't Write Like AI" series, urging developers to explicitly ban it with prompts like "Avoid any sentence structures that set up and then negate..."/>
But here's where it gets wild. This isn't just about bad writing anymore.
The detection arms race is heating up. Content creators are now using cheat sheets with 24+ banned patterns and 100+ forbidden phrases. Tom Orbach's "Anti-AI Writing Cheat Sheet" warns that these patterns are "invisible to the writer but obvious to everyone reading."
Companies like Duolingo got caught red-handed using this exact pattern: "AI isn't just a productivity boost—it helps us get closer to..." The examples are everywhere once you start looking.
The Technical Rabbit Hole Goes Deep
Developers are scrambling to patch their prompts. The negation pattern sits alongside other dead giveaways:
- Excessive em dashes (AI's favorite punctuation)
- Formulaic rule-of-three structures
- Generic proper nouns like "Emily" or "Sarah" appearing in 60-70% of AI stories
- Uniform paragraph lengths that scream "algorithm"
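A few of these tells are mechanical enough to scan for directly. Here's a minimal sketch of such a linter; the specific regexes and the em-dash threshold are my own illustrative choices, not patterns from Stockton's or Orbach's actual cheat sheets:

```python
import re

# Illustrative regexes for a few mechanical AI tells. These patterns
# and thresholds are assumptions for demonstration, not a published list.
TELLS = {
    "negation setup": re.compile(r"\b(?:isn't|is not)\s+just\b.{1,80}?—", re.IGNORECASE),
    "testament phrase": re.compile(r"\bis a testament to\b", re.IGNORECASE),
    "crucial to note": re.compile(r"\bit'?s crucial to note\b", re.IGNORECASE),
}

def scan_for_tells(text: str, em_dashes_per_100_words: float = 1.0) -> list[str]:
    """Return the names of tells found in `text`."""
    hits = [name for name, rx in TELLS.items() if rx.search(text)]
    words = max(len(text.split()), 1)
    # Flag drafts whose em-dash density exceeds the (arbitrary) threshold.
    if text.count("—") / words * 100 > em_dashes_per_100_words:
        hits.append("excessive em dashes")
    return hits

sample = "AI isn't just a productivity boost—it helps us get closer to our mission."
print(scan_for_tells(sample))  # ['negation setup', 'excessive em dashes']
```

A real cheat-sheet checker would carry dozens of these patterns; the point is that the tells are regular enough for a twenty-line script to catch them.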
Pangram Labs found that AI consistently underuses semicolons and parentheses while overusing phrases like "is a testament" and "it's crucial to note." The patterns are so predictable that detection tools can spot them with scary accuracy.
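Frequency findings like these are easy to sanity-check against your own drafts. A rough sketch that counts punctuation marks per 1,000 words; this is just raw counting, not Pangram Labs' actual methodology, and any baseline comparison is left to the reader:

```python
def punctuation_density(text: str, marks: str = ";()") -> dict[str, float]:
    """Occurrences of each mark per 1,000 words. Per the underuse claim,
    AI drafts tend to score low on ';' and '(' versus human baselines."""
    words = max(len(text.split()), 1)
    return {m: text.count(m) / words * 1000 for m in marks}

human = "He paused; the idea (half-formed) needed work; still, he wrote."
print(punctuation_density(human))
```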
But the real kicker? Post-editing workflows now require explicit "humanization" steps. Mix sentence lengths. Add imperfections. Throw in some tense shifts. The irony is delicious—we're teaching machines to write badly on purpose.
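The "mix sentence lengths" step can even be quantified: uniform rhythm shows up as low variance in sentence length. A sketch of that check, using a naive sentence splitter (the splitting regex is a simplification, and there is no standard threshold; you'd calibrate against your own writing):

```python
import re
import statistics

def sentence_length_stdev(text: str) -> float:
    """Standard deviation of sentence lengths in words.
    Low values suggest the uniform cadence typical of unedited AI drafts."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The committee deliberated for three hours before anyone spoke. Then silence."
print(sentence_length_stdev(uniform) < sentence_length_stdev(varied))  # True
```

A humanization pass that raises this number without changing the meaning is exactly the "write badly on purpose" step described above.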
The $50M Content Credibility Crisis
Here's what nobody's talking about: brands are hemorrhaging credibility. When your marketing content gets flagged as AI-generated, you've lost the authenticity game before it started. The rise of detection tools like those from Pangram Labs is creating a parallel market for "humanized" AI outputs.
Marketing professionals are now hiring human editors specifically to strip out AI tells. It's like having a reverse Turing test—can you make the machine writing less perfect?
Critics argue AI strips voice, forcing generic patterns that "elevate writing" superficially while delivering vague information; humans demand specifics.
LinkedIn has become ground zero for this phenomenon. Every "hot take" follows the same template: provocative setup, negation structure, buzzword finale. The platform's edge-lord AI posts are so formulaic they've become self-parody.
Hot Take: We're Fighting the Wrong Battle
Everyone's obsessing over detection when the real problem is creative bankruptcy.
The "it's not just X—it's Y" pattern became ubiquitous because it works. It creates drama. It reframes problems. It sounds sophisticated while saying nothing.
But here's my controversial take: maybe we needed this wake-up call. The fact that AI defaulted to this pattern exposes how much human writing was already formulaic. We were just better at hiding it.
The solution isn't better detection—it's better humans. Specificity over vagueness. Personal experience over generic wisdom. Actual opinions instead of diplomatic non-statements.
Because when Blake Stockton can identify your writing as synthetic based on sentence structure alone, the problem isn't the AI. It's that we forgot how to write like humans in the first place.

