Voiding words as a disinformation process

Cassiano Gobbet
3 min read · Mar 21, 2021

“We have to cease to think, if we refuse to do it in the prison house of language; for we cannot reach further than the doubt which asks whether the limit we see is really a limit.” (Friedrich Nietzsche)

Most of the time, the establishment does not need to make an effort to defend its own interests. It is far easier to put the brakes on a ten-ton mass than to set it in motion. By choosing which words live and which die, actors can alter the fingerprint of reality and change the colour of the sky without anyone noticing.

Disinformation can rely on elusive resources. It does not need to bring new, unreliable information to the table: simple handling of language can be the most efficient limiter ever created. Whoever picks the keywords leads the conversation.

Think about the switch from “global warming” to “climate change”, or how US right-wing politicians managed to frame Democrats as “left-wing”, “socialists” or “communists” in a country whose era of global dominance was born out of the fight against communism. When Senator Bernie Sanders labels himself a “socialist”, he essentially loses the entire share of socialism-fearing Americans.

Some words are linked to concepts and perceptions lodged so deep in people’s minds that they simply won’t change. “Warming”, “oil”, “military”, “army” and “herbicide” are intrinsically connected to threatening perceptions; “change”, “defence” and “agricultural products” are not. Replacing the words of a discussion changes the nature of the engagement.

The tactic is widespread. Oil companies rebranded themselves as “energy companies”. Military contractors are “defence” companies. Governments label their apparatus of repression “defence forces”. It is not a coincidence. Semantic manipulation has helped industries, companies and organisations that should have been dead for decades to stay alive and kicking.

Disinformation actors exploit audiences’ fears to play with words. Once the environment has absorbed the desired terms, the effort needed to manipulate concepts shrinks. The distortion changes the fingerprint of the debate. It rewrites the rules of engagement to the point that opponents repeat their broken mantras uselessly, because the audience is no longer listening.

Word manipulation is not disinformation as we know it, but its effects on the fairness of a discussion are massive. The inability to reference the desired concept breaks any rationale. Once in captivity, the word becomes a pariah. It is no coincidence that a preferred modus operandi of disinformation actors is to ruin a source’s credibility: even if, in the long run, the source proves reliable, the discussion is long gone and its consequences have solidified.

Decentralised, anonymous fake-news engines work hard to isolate and nullify concepts, words and meanings. Before engaging in combat, bad actors corrode their targets’ foundations, so the targets lose resources without even realising it; when they do appeal to those resources, the effort is fruitless. Just look at how media companies in the US (and many other Western democracies) try to establish the “facts” without realising that the concept no longer resonates inside the bubbles under disinformation’s effect.

Machine learning and natural language processing will make it possible to spot words that have been voided by one of the parties to a debate, but they will be useless for fixing the issue. Communities, audiences and other groups tied together by a purpose are the places where the word-kidnapping happens. That is why collective organisations are central to any process intended to quash unreliable information. Algorithms are powerful, but only when working in the right place; for now, they are almost certainly barking up the wrong tree. A rough illustration of what such spotting could look like follows.
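One common way this kind of spotting is approached in NLP research is to train word embeddings separately on each community’s corpus and compare how the same word behaves in each space. The sketch below is purely illustrative, not the author’s method: the corpora, sentences and the target word “climate” are invented placeholders, and it uses gensim’s Word2Vec with a simple neighbour-overlap heuristic rather than the embedding-alignment techniques a real system would need.

```python
# A toy sketch of semantic-drift detection, not a production tool.
# Two tiny hand-made corpora stand in for posts from two communities;
# every sentence and the target word are illustrative placeholders.
from gensim.models import Word2Vec

corpus_a = [
    ["climate", "change", "is", "supported", "by", "scientific", "evidence"],
    ["the", "evidence", "for", "climate", "change", "keeps", "growing"],
] * 50  # repeated so the tiny corpus gives the model enough samples

corpus_b = [
    ["climate", "change", "is", "a", "hoax", "pushed", "by", "elites"],
    ["the", "climate", "change", "agenda", "is", "a", "hoax"],
] * 50

# Train one embedding space per community (fixed seed, single worker
# for reproducibility on corpora this small).
model_a = Word2Vec(corpus_a, vector_size=50, min_count=1, seed=1, workers=1)
model_b = Word2Vec(corpus_b, vector_size=50, min_count=1, seed=1, workers=1)

def neighbours(model, word, topn=5):
    """Return the set of nearest-neighbour tokens for `word`."""
    return {w for w, _ in model.wv.most_similar(word, topn=topn)}

word = "climate"
shared = neighbours(model_a, word) & neighbours(model_b, word)
union = neighbours(model_a, word) | neighbours(model_b, word)

# Low Jaccard overlap between the two neighbour sets signals that the
# same word lives in very different contexts in each community.
print(f"'{word}' neighbour overlap: {len(shared) / len(union):.2f}")
```

A low overlap score only says that the word sits in different contexts in each corpus; deciding whether it has been deliberately voided still requires the human, community-level judgement this paragraph argues for.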


Cassiano Gobbet

Founder @Troovr. Data ownership advocate, life-long digital media user, seeking ways to fight disinformation with tech & collective intelligence.