Social Media, Disinformation and Behaviour Modification
Social networks are designed to be addictive, eliciting Pavlovian responses in their users. Rather than training us to salivate at the sound of a bell, however, social media has trained us to respond and adapt our behaviour to notifications. That is to say, we modify our behaviour on social platforms to generate positive outcomes. For instance, when we post to Twitter, Facebook, or Instagram, we wait for ‘likes’, comments, or retweets, all forms of positive reinforcement, to come flooding in. If they do, we continue as normal; but if we receive criticism, fewer likes, backlash, or other forms of negative feedback, we modify our behaviour and content in pursuit of better outcomes. This phenomenon has led public figures like Jaron Lanier to rebrand tech giants like Facebook and Google, as well as social media platforms like Instagram, as ‘behaviour modification empires’. These ‘empires’ exploit the loop in which users are caught between positive and negative stimuli, and according to behavioural theory, negative stimuli not only spread faster than positive stimuli but also have greater impact and are cheaper to produce. Online, this phenomenon enables not only social networks but also advertisers to modify our behaviour to their advantage and, in a sense, manufacture our consent.
Social media, according to former Facebook executive Chamath Palihapitiya, exploits the natural human tendency to seek and crave feedback. Users are caught in a fast-paced feedback loop of learning what works online and what doesn’t. This loop, commonly referred to as the dopamine feedback loop, drives people to scroll through social platforms in anticipation of finding something pleasurable. As they scroll, the loop manifests through the content they see and post, but given the system’s bias towards negative stimuli, users are exposed to negative emotions faster than positive ones. More often than not, this system favours popularity over truth, and consequently the two become confused. Worse, the ability of companies to pay for certain viewpoints to be amplified generates the perception that what is popular is also truthful. This feature of social platforms is what makes them dangerous: they have become the perfect medium for disinformation campaigns.
Disinformation is the deliberate dissemination of false or inaccurate information to discredit an individual or organisation. When done well it is not easy to spot: it may take the form of completely false statements, information removed from its original context, or manipulated photos and videos (so-called deepfakes). Regardless of the form it takes, disinformation attempts to portray reality in a distorted manner that causes or intensifies conflict, undermines trust in state institutions, and generates emotional responses – in other words, it creates division. For most of the twentieth century, disinformation campaigns were waged effectively in analogue form, with the Cold War serving as the arena in which they were perfected. However, the development of the Internet in 1983 created a new platform for exploitation: one developed enough to enable global outreach and support disinformation, but not advanced enough for it to be detected or traced. Over the next forty years, as information spread faster and more uncontrollably, disinformation became easier and cheaper for perpetrators to deploy, as shown by the Russian Internet Research Agency’s influence on the 2016 US Presidential Election – a case which clearly demonstrates how the same techniques social platforms use to change human behaviour can be used to sway political opinion.
The Internet Research Agency’s (IRA) influence on the upcoming election began in 2014 with the creation of a ‘troll farm’ – a group of internet trolls who conduct disinformation and propaganda activities, usually targeting the political and economic spheres. In the case of the US, the IRA’s function was to spread distrust towards the candidates and the political system. It essentially became an industrial assembly line, turning out hundreds of lies and untruths in the form of tweets and Facebook posts every day. In the same way that tech giants analyse our searches to serve ads tailored to our likes, the IRA built its disinformation strategies on an analysis of habits across the US political spectrum. For instance, it found that liberals responded more strongly than conservatives to infographics. Another finding concerned the average daily online activity of each political affiliation: liberals were more active at night, whereas conservatives were more active in the morning.
This information drove the IRA’s operations: it produced and disseminated tailored disinformation products, including videos, memes, infographics, false reports, interviews, and even fake events, aimed at its target audiences. Its highest-engagement content focused on building communities rather than polarising them, and by Election Day in 2016 its outreach on Facebook had achieved 12 million shares, just under 15 million ‘likes’, and over 1 million comments. Despite these stark figures, the IRA’s success was uneven. One of its earliest sponsored Facebook posts, depicting Jesus arm-wrestling the Devil with the caption “SATAN: IF I WIN CLINTON WINS! JESUS: NOT IF I CAN HELP IT”, received only seventy interactions.

This post became famous, however, when The New York Times ran a front-page article describing the image, and from there both national and international news agencies reported on it. The ad soon became iconic, amplifying the IRA’s disinformation operation and showing that social media could be used to influence not only individuals but also traditional journalism.
The same design features that enable tech giants to act as behaviour modification empires, influencing our likes, dislikes, and actions, have made the art and craft of disinformation easier. The digital age shapes news and cultural narratives, and as we scroll through Facebook in search of pleasing content and of information that aligns with our viewpoints, we leave ourselves vulnerable to disinformation. It’s a problem that is only growing, as nowadays anyone with a smartphone and an internet connection can create and spread disinformation, whether for political or personal motives. Therefore, to navigate this environment we must proceed with caution and take the information we’re exposed to not with a pinch of salt, but a handful.