How Algospeak Changed the Way We Talk to Each Other Online

About a month ago, when I wrote an explainer about the viral TikTok trend Cute Winter Boots, I knew I wanted to come back to the topic of algospeak. A portmanteau of “algorithm” and “speak”, algospeak is a type of coded language used online to avoid being flagged by a social media platform’s content moderation algorithm.

The most popular platforms, like TikTok, YouTube and Twitch, don’t have official lists of words to avoid, but creators have learned from experience that saying certain things can get their content demonetised or taken down. To avoid the loss of revenue and visibility – and even a potential ban from the platform – users have started self-censoring with algospeak, just to be safe.

Over time, however, digital slang has started to bleed out of social media platforms and into Reddit threads, forums and other spaces where self-censorship isn’t needed, confusing users in the process. This is likely because the moderation rules on many of these platforms are opaque to creators and appear to be constantly shifting. TikTok has been accused of removing or hiding tags for LGBTQ+, racial, political and sex-related topics, but so far hasn’t confirmed or denied that these tags were censored on purpose as part of a broader content moderation effort. Usually, when users complain, the platform reinstates the tags and blames their absence on a temporary glitch.

As tech historian Mar Hicks explained to The Verge in 2023, creators feel they have to be extra-careful, as TikTok’s rules around sanctioned and “forbidden” words could change at any moment. The sudden disappearance of tags can have serious negative effects on groups that are already marginalised. As a result, algospeak is everywhere on the platform – and it’s taking over online discourse.

Though users might be doing this out of a misplaced abundance of caution, the result is a blunting of language. This also has the effect of splitting the dictionary into “marketable” and “unmarketable” words. When users speak as if their content could get demonetised, that implies it could just as easily be monetised, even when that isn’t the case. This turns them into guardians of a platform’s brand safety rules at their own expense. And it gives even the most sincere comments a commercial feel that almost negates the point of being vulnerable online. 

Another concern is the potential erasure of entire topics and communities by forcing them under the umbrella of frivolous or infantile language. A 2023 research paper from Gordon College, Washington and Lee University and Carnegie Mellon University compiled a list of the most common algospeak terms, and what stood out to me was the abundance of funny euphemisms and cartoonish language used to refer to very serious issues. The list is worth reading in its entirety, but notable mentions include: “kermit sewer slide” (meaning “commit suicide”), “shmex” and “seggs” (“sex”), “le$bean” and “le dollar bean” (“lesbian”, later turned into its own meme), “grape” (“rape”) and “unalive” (“to kill”, “die” or “commit suicide”). Some examples not included in the paper, but which became popular later on the platform, include “mascara”, referring to sexual assault, and “cute winter boots”, tagging tutorials on protesting Donald Trump’s second term.

The problem here is twofold. On the one hand, algospeak trivialises very real issues such as suicide and sexual assault, and can enforce an attitude of secrecy and shame towards them. With topics like suicide, that is the opposite of what research says is best practice. On the other hand, many people use social media platforms to connect with others about some of the worst things they have ever been through. Having to use algospeak instead of calling things by their name can feel significantly less supportive. And when users can’t discuss mental health or politics outright, they won’t just shut up and go away. They will resort to creative alternatives – even if these don’t fulfil the same need.

Algospeak is now ubiquitous not just on TikTok, but also across YouTube, Twitch, Snapchat, Reddit and comment sections. The tendency to self-censor has bled into almost every online space, whether it’s needed or not, regardless of the presence of content moderation algorithms. Even some online publishers have taken it up, like this article where “suicide” was bleeped out as if it were a swear word. 

Reddit has no official guidelines regarding banned words, and every community obeys slightly different rules, as enforced by its human and bot moderators. The site’s Contributor Program specifies that NSFW, sexual and violent content doesn’t qualify for monetisation, along with contributions related to firearms, gambling, drugs and illegal or harmful practices. However, there is no mention of words to be avoided and no explanation of what counts as a harmful practice. The community is left to self-censor as it sees fit, just in case.

Finally, overusing algospeak arbitrarily splits language into “good” and “bad”, leaving the distinction up to machines and advertisers’ bottom lines. The boundary between acceptable and unacceptable speech on TikTok shifts all the time, depending on the day’s buzzwords and crises. And from there, this divide spreads to other online spaces like an invading language goo. 

The “good” language is stripped of unpleasantness (mentions of death, violence, guns, suicide, assault, porn, addiction, drugs) and worldliness (global conflict, racial and gender issues, sex work, health and climate crises), while the “bad” must flatten itself into algospeak or risk a ban. This happens whether or not that threat is real on a given site. The result is online communication that feels uncanny and tense with the unsaid, the implied. And conversations where, in order to talk about “real-world” problems, users feel they must play by the algorithm’s rules.
