Why You Say 'Unalive' Now (And Why That's Exactly the Point)
The Resistance Sabotage Manual: Day 2 of 12
August 15, 2025
Want to find videos of the LA protests? Don't search "LA protests."
Search "LA Music Festival."
That's what works this week. By next week, when the algorithm catches up, it'll be something else. "LA street gathering." "Los Angeles walking tour." "LA community event."
In China, they call it "harmony" when posts disappear. In Russia, you learn which words make you "unfindable." In Turkey, journalists use symbols instead of politicians' names. Every authoritarian system trains its citizens to self-censor. America just learned to do it faster.
There's no law against posting protest footage. No regulation. No policy. But creators know, somehow they just know, that "protest" is a dangerous word now.
Just like you know "suicide" is now "unalive." Dead is "d*ad." Kill is "k!ll" or "unaliving someone else." Sex is "seggs." Lesbian is "le dollar bean."
You learned this without anyone teaching you. You absorbed it through a thousand shadowbans and suppressed posts. And now — this is the insidious part — you do it everywhere. Not just online. In emails. In texts. In actual conversation.
A college professor in Michigan received an email in July that stopped her cold. A student was writing about mental health resources, describing prevention programs for those who might "unalive themselves."
In an academic paper. At a university.
The word "suicide" hadn't been banned. No law prohibited it. No administrator issued a warning. But the student had been so thoroughly trained by TikTok's algorithm to avoid certain words that she carried that self-censorship into the real world.
She's not alone. You probably do it too.
The Language You Didn't Realize You Lost
Maybe you write "seggs" instead of sex. Maybe you use asterisks, "k*ll" or "d*e." Maybe you've said "PDF file" when you meant pedophile, or "le dollar bean" for lesbian. You learned to speak in code because an algorithm might suppress your post otherwise.
But ask yourself: What law requires this? What regulation? What actual consequence beyond a slightly smaller view count?
The answer: None. There's no law against saying "suicide." No fine for typing "protest." No jail time for "lesbian." You're censoring yourself to avoid... what exactly? Getting in trouble? What trouble? With whom?
You can't answer because there is no answer. You're following rules that don't exist, issued by an authority that was never appointed, to avoid punishments that haven't been defined.
A 2023 study in Sage Journals tracked this linguistic evolution in real time. Researchers documented 250+ terms creators invented to dodge algorithmic censorship. PBS confirmed in 2024 that these made-up words now appear in formal emails, academic papers, and workplace communications.
We're witnessing linguistic contamination on a massive scale. And it's not accidental.
69% of college students now self-censor, according to Knight Foundation data. Not because of laws. Not because of policies. Because they've internalized invisible boundaries about what can and cannot be said.
This is self-censorship, the second most effective way populations enable authoritarianism. You police yourself so they don't have to.
How Algorithms Trained You to Surrender
TikTok's content moderation doesn't just hide certain words — it teaches you to hide them from yourself. When sex educator Evie Plumb lost her entire account in July 2023 for educational content about human anatomy, the message was clear: even academic discussion is dangerous.
So creators adapted. "Corn" replaced porn. "Accountant" became code for sex worker. "Spicy time" meant anything intimate. Each euphemism represented a small surrender.
The real genius? The platforms never published official banned word lists. Users had to guess, creating ever-expanding circles of self-censorship. You avoided not just banned words, but words that might be banned, words that sound like banned words, concepts adjacent to banned words.
Deutsche Welle journalists admitted using symbols and asterisks to circumvent TikTok's moderation when covering terrorism, LGBTQ+ issues, and drug education. Journalists. Self-censoring news. To appease an algorithm.
The Historical Pattern We're Repeating
This happened before. We have the receipts.
Weimar Germany, 1926: The "Law for the Protection of Youth from Trashy and Filthy Writings" didn't ban specific books. It created vague categories of "harmful" content. Publishers, terrified of crossing invisible lines, began extensive self-censorship.
America, 2025: Trump's "Restoring Truth and Sanity to American History" executive order demands removal of "divisive concepts" from museums. No specific list. Curators now removing exhibits about Japanese internment, Native genocide, and slavery "just to be safe."
Germany, 1933: The Editor's Law made journalists responsible for "weakening the strength of the German Reich." Reporters, unsure what qualified, stopped investigating anything controversial.
America, Now: The Archivist of the United States, fired. The Smithsonian under review for "unpatriotic" content. Museums told to emphasize "American greatness." Define greatness. You can't? That's the point.
Germany, 1935: Writers had already internalized what couldn't be said. The regime didn't need to ban much.
America, Today: Teachers removing rainbow stickers without being asked. Libraries pulling books no one challenged. Museums "updating" exhibits to be "less controversial." Everyone guessing what might get them in trouble. Everyone overcorrecting.
They're not even hiding the playbook anymore. In March, officials literally said museums should tell "patriotic" history. Define patriotic. You can't? That's the point.
You're supposed to guess. And in guessing, you'll censor far more than they ever could.
This Week's Linguistic Casualties
Right now, August 2025, you can't find:
LA protests (search "LA Music Festival" instead)
Palestine content (watermelon emoji only)
Trump criticism (shadowbanned within hours)
Gaza coverage (try "Mediterranean coastal region")
Police accountability videos (use "community safety discussion")
Climate protests (hidden unless tagged "outdoor gathering")
But here's the insidious part: creators don't just avoid these topics on TikTok. They carry that self-censorship to Instagram, Twitter, real-life conversations. The algorithmic training follows them everywhere.
Your friend who used to post about police reform? Now they post about "community safety." Your cousin who shared climate articles? They're sharing "weather updates." Everyone's speaking in code, even when they don't need to.
And the platforms never published a list of banned words. They trained you to guess. To overcorrect. To censor yourself harder than they ever would.
42% of students now self-censor on gender topics. 36% on racial issues. Not on social media. In classrooms.
How to Reclaim Your Language
Easy Mode: The Translation Practice
Write down every euphemism you use online. Next to each, write the real word. Practice saying the real words out loud, in private. "Suicide," not "unalive." "Sex," not "seggs." "Protest," not "music festival." Your brain needs to remember these words exist.
Start with one platform where you use real words. Just one. See what actually happens.
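If it helps to make the list concrete, here's a minimal sketch in Python: your euphemism list as a lookup table, plus a function that swaps the coded words back for the real ones in a draft. The word list, the function name, and the sample draft are all illustrative, not any platform's actual vocabulary.

```python
import re

# Illustrative starter table: coded word -> real word.
# Extend it with the euphemisms you actually catch yourself using.
EUPHEMISMS = {
    "unalive": "suicide",
    "seggs": "sex",
    "le dollar bean": "lesbian",
    "k!ll": "kill",
    "d*ad": "dead",
}

def restore_real_words(text: str) -> str:
    """Replace each euphemism in `text` with the word it stands in for."""
    for coded, real in EUPHEMISMS.items():
        # re.escape makes punctuation like "!" and "*" match literally;
        # \b keeps short codes from matching inside longer words.
        pattern = re.compile(rf"\b{re.escape(coded)}\b", re.IGNORECASE)
        text = pattern.sub(real, text)
    return text

print(restore_real_words("Posting unalive prevention resources for le dollar bean teens."))
# -> Posting suicide prevention resources for lesbian teens.
```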
Medium Mode: The Reality Check
For one week, document every time you self-censor. Every asterisk. Every euphemism. Every "I better not say that." Then ask: What am I actually afraid of? A shadowban? Fewer likes? Someone disagreeing?
Most users discover their fears exceed reality: posts may get less reach, but real punishment rarely follows. One therapist told me: "I lost 20% of my reach but gained 100% of my integrity back."
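If you want the week's audit to produce a number instead of a vibe, a few lines of Python will tally it. A minimal sketch, assuming you keep a plain-text log with one incident per line; the filename and the category labels are my inventions.

```python
from collections import Counter
from pathlib import Path

# Assumed log format: one line per incident, e.g.
#   asterisk
#   euphemism
#   avoided topic entirely
#   pre-edited a thought
LOG_FILE = Path("self_censorship_log.txt")

def tally(log_path: Path) -> None:
    """Count how often each kind of self-censorship showed up this week."""
    lines = [ln.strip().lower() for ln in log_path.read_text().splitlines() if ln.strip()]
    counts = Counter(lines)
    print(f"{sum(counts.values())} incidents logged:")
    for kind, n in counts.most_common():
        print(f"  {kind}: {n}")

tally(LOG_FILE)
```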
Hard Mode: Full Linguistic Resistance
Use real language everywhere. Say "protest," not "gathering." Write "suicide prevention," not "unalive prevention." Post about "police accountability," not "community safety issues."
Yes, your reach might drop. But you're training others that these words are normal, necessary, and worth preserving. Run for Something maintains consistent messaging across all platforms and refuses to self-censor core content. The results? 100,000+ candidates recruited, 488+ victories. Authenticity beats algorithmic optimization.
Your Self-Censorship Inventory
Check which of these you do:
☐ Add asterisks to "controversial" words
☐ Use cutesy substitutes for serious topics
☐ Avoid entire subjects online
☐ Carry online censorship into real conversations
☐ Assume you "can't say that" without checking
☐ Pre-edit thoughts before they're fully formed
The average person does four of these. How many do you do?
The Victory Hiding in Plain Sight
Here's what the platforms don't want you to know: organized resistance works.
When TikTok mass-suppressed #BlackLivesMatter posts in 2020 (showing zero views during peak protests), users didn't switch to euphemisms. They flooded every alternative hashtag until the platform reversed course. The posts that survived? The ones using real language, posted repeatedly, across multiple accounts.
When LA protest videos vanished this month, creators didn't give up. They tagged them "LA Music Festival" and kept posting. By the time you read this, they've probably moved to a new code. But they're still saying "protest" in the videos. Still documenting. Still resisting.
A therapist in Portland told me she returned to using clinical terms on social media six months ago. "I lost some reach," she said, "but gained credibility. And my clients stopped saying 'unalive' in sessions."
Small rebellion. Massive impact.
The word "protest" isn't illegal. Yet. But if we stop saying it now, it won't need to be. We'll have done their work for them.
Say the real words. Today. Before you forget they exist.
Tomorrow: The $25 Million Deepfake Meeting (And Why You Can't Tell What's Real Anymore)
The Resistance Sabotage Manual is a 12-day series examining the specific ways we accidentally collaborate with authoritarianism — and how to stop. Based on analysis of democratic collapses from Weimar Germany to present day.
What words have you stopped saying? What language have you surrendered? Reclaim one word in the comments.