
# LLM Tropes: Exposing the Bland Soul of AI Writing—and How to Hack It
AI-generated text isn't just detectable—it's insultingly predictable. Dive into tropes.md, a brutal Markdown catalog of LLM writing sins that's blowing up on Hacker News with 203 points and 83 comments. Created with AI assistance (irony noted), this single file at tropes.fyi lists hallmarks like the egregious "It's not X—it's Y" negative parallelism, bold-first bullets, and invented buzzwords like "supervision paradox." It's not subtle: LLMs regurgitate RLHF-trained blandness, converging on statistically average slop that no human writes at scale.
> "The single most commonly identified AI writing tell. Man I fcking hate it."
That's the raw fury from tropes.fyi's creator on negative parallelism—em-dashes faking depth, turning every insight into a "surprise reframe." I've seen it infest blog posts: one is quirky; ten scream "AI slop." HN devs nail it: RLHF rewards this as "good writing," optimizing for low-perplexity mush over spark. Base models lack these tics; instruction tuning injects them, per researchers studying style shifts. It's a prompt engineering nightmare—LLMs regress to training data means, amplifying safe, emotionless prose.
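The negative-parallelism tell is mechanical enough to grep for. Here's a minimal sketch of a detector for the "It's not X—it's Y" reframe; the regex and its coverage are my own illustration, not anything shipped by tropes.fyi:

```python
import re

# Illustrative pattern for the "It's not X—it's Y" em-dash reframe.
# Matches "it's/this's/that's not <short phrase>—it's ...".
NEG_PARALLELISM = re.compile(
    r"(?:it|this|that)['’]?s not [^.—-]{1,60}—\s?(?:it|this|that)['’]?s",
    re.IGNORECASE,
)

def count_negative_parallelism(text: str) -> int:
    """Return how many 'not X—it's Y' reframes appear in the text."""
    return len(NEG_PARALLELISM.findall(text))

sample = "It's not a bug—it's a feature."
print(count_negative_parallelism(sample))  # → 1
```

One hit in a paragraph is a shrug; a hit per paragraph is the "ten scream AI slop" threshold the post describes.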
## Why Developers Should Care (and Act)
This isn't academic trivia. tropes.md is your new system prompt weapon. Append it to suppress patterns, forcing rarity: unique words, jagged structures, real variability. But beware the arms race—detectors like tropes.fyi/vetter flag tropes in comments and articles, yet false positives hit human text hard. HN tester ghgr got flagged on legit writing; reliability's shaky.
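The "append it to suppress patterns" move is a few lines of glue. A minimal sketch, assuming you've saved tropes.md locally; the instruction wording is my own, not text from the file:

```python
from pathlib import Path

def build_suppression_prompt(tropes_path: str = "tropes.md") -> str:
    """Build a system prompt that front-loads an avoid-these-patterns
    instruction, then pastes in the full trope catalog verbatim."""
    tropes = Path(tropes_path).read_text(encoding="utf-8")
    return (
        "Avoid every writing pattern listed below. Prefer uncommon words, "
        "varied sentence lengths, and concrete specifics over generic "
        "reframes.\n\n" + tropes
    )

# system_prompt = build_suppression_prompt()  # hand this to your LLM client
```

Pasting the whole file costs context-window tokens, so trimming the catalog to the tropes your model actually commits is a reasonable variant.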
- Prompt hacks: Enforce rules like "respond in 4-12 hours" or ban midnight replies to dodge timing tells.
- Self-editing loops: Make LLMs critique their output against tropes.md for human-like quirks.
- RLHF fixes: Retrain to reward edge, not bland means—echoing EMNLP research on trope clustering for bias detection.
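The self-editing loop above reduces to a draft–critique–rewrite cycle. A sketch with plain Python stubs standing in for the two LLM calls; the function names and the toy em-dash fix are illustrative only:

```python
from typing import Callable

def self_edit(draft: str,
              critique: Callable[[str], list[str]],
              rewrite: Callable[[str, list[str]], str],
              max_rounds: int = 3) -> str:
    """Rewrite `draft` until the critique pass finds no tropes,
    or until max_rounds is exhausted."""
    for _ in range(max_rounds):
        findings = critique(draft)
        if not findings:
            break
        draft = rewrite(draft, findings)
    return draft

# Toy stand-ins: flag em-dash reframes, "fix" them with a period.
def toy_critique(text: str) -> list[str]:
    return ["negative parallelism"] if "—it's" in text else []

def toy_rewrite(text: str, findings: list[str]) -> str:
    return text.replace("—it's", ". It's")

print(self_edit("It's not a bug—it's a feature.", toy_critique, toy_rewrite))
# → It's not a bug. It's a feature.
```

In practice both callbacks would be LLM calls: one prompted with tropes.md as a rubric, the other asked to rewrite only the flagged passages.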
Business-wise, it's gold for stealth content in journalism or wikis, but a trust-killer if fakes flood HN or edits fabricate notability. lcamtuf's unease rings true: unconsented data scraping dooms creators, birthing deceptive AIs.
## The Ugly Truth: AI Writing Sucks (For Now)
Critics cry foul—tropes.md "helps AIs go invisible," fueling deception. Fair, but humans share patterns too; the sin is scale and sterility. Wikipedia editors log rigid "Challenges" sections and formulaic endings as LLM rot. Yet this cat-and-mouse? Thrilling for devs. Run Vetter over your user content, fine-tune for variability, and analyze tropes to unearth biases (demographics flip PCT stances). It's not collapse—it's evolution. Grab tropes.md, rewrite the game, and make AI write like it gives a damn.

