What’s the propaganda link between Brexit and the US Presidential race – and what role did Russia play?

Social media bots endanger democracy

By Olivia Gordon

A form of mass propaganda more insidious than anything used in the 20th century is being used to manipulate global politics, according to the latest research. The culprit is social media and the lax regulation that allows voters to be bombarded with politically slanted misinformation — fake news.

‘Ruling elites have often used propaganda to sustain their power, but this latest wave is different,’ says Professor Philip Howard, Director of Research at the Oxford Internet Institute and a Fellow of Balliol College. ‘Targeted messaging over social media is deeply personalised compared to the messaging that “all communists are evil” that used to be distributed by mass films. It’s much more difficult to source who’s generating the content. Users think the messaging may be coming from their family and friends; it’s about particular issues the programmer knows you care about. And it appears to be pretty effective.’

‘Bots’ or ‘cyborgs’ — automated social media accounts which churn out propaganda — were first developed by marketing companies selling products through spam, but are now also used by political groups.

Professor Howard’s analysis of how ‘bots’ helped win the vote for Brexit and the election of US President Donald Trump made news in 2016. Brexit and Trump supporters seemingly had bots tweeting in much higher volume than campaigners for Remain or Hillary Clinton.

Now, says the Montreal-raised researcher, with Brexit in motion and Trump in place, these same propaganda accounts are posting about the Italian referendum.


Professor Howard (above) and his team have found it difficult to link such ‘bot’ accounts directly to politicians, but they have drawn lines to campaign managers as well as ‘ordinary’ citizens who call themselves ‘patriotic programmers’ — political spammers who pre-write propaganda and release it through thousands of automated tweets, day and night, or every six seconds for an hour. Fake news doesn’t get far without being ‘pushed around’ by these bots, Professor Howard observes. For example, if spammers want to persuade you that Hillary Clinton is corrupt and should go to jail, they write accusations with links to fake news stories — claims that have not been fact-checked — and get bots to spread the message.
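
One telltale pattern Howard describes is mechanical cadence: posts arriving at fixed intervals, around the clock, at a pace no human keeps up. As a rough illustration of how that signal could be checked (a minimal sketch, not a method from Howard's team, with function names and thresholds chosen purely for illustration), the Python snippet below flags an account whose posting gaps are suspiciously uniform.

```python
from datetime import datetime, timedelta
from statistics import pstdev

def looks_automated(timestamps, max_jitter_seconds=2.0, min_posts=50):
    """Rough heuristic: flag an account whose posts arrive at a near-constant
    interval (e.g. one tweet every six seconds), the kind of round-the-clock
    cadence the article attributes to bots. Thresholds are illustrative
    assumptions, not tuned values.

    timestamps: the account's post times as datetime objects, oldest first.
    """
    if len(timestamps) < min_posts:
        return False  # too little data to judge
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    # A near-zero spread in posting intervals suggests a scheduler, not a person.
    return pstdev(gaps) <= max_jitter_seconds

# Example: 100 posts spaced exactly six seconds apart would be flagged.
start = datetime(2016, 11, 1)
machine_like = [start + timedelta(seconds=6 * i) for i in range(100)]
print(looks_automated(machine_like))  # True
```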

Many bots appear to originate in Russia. ‘We know that the Russians have spent money on propaganda efforts to improve Trump’s profile over Twitter,’ says Howard, ‘and that seems to have included creating bots that follow Trump and re-tweet a lot of what he says, as well as news from Russia. They often tweet stories about Democrats, western elites and corruption.’ There are links between propagandists for Brexit and for Trump, he adds. ‘My speculation would be that Russia would like to see the EU smaller and further from consensus. There’s a handful of accounts that were passionate about getting the UK to leave Europe and then they suddenly became interested in American politics.’

This form of political marketing is increasingly sophisticated. A few years ago, if you wanted to make yourself look very popular on social media, you would buy followers: £200–300 would buy you between a thousand and two thousand followers from a company in Singapore. If you had a bit more money you could pay for bots, automated accounts, to say things for you — you’d type the content yourself and over time the bots would release your messages. But these days, Howard says, you can rent hundreds or even thousands of ‘shadow profiles’.


These are fake Facebook or Twitter accounts which appear to be genuine because the user has been posting for five or six years, including, for example, pictures of their children (in reality these are taken from image banks such as Flickr). The profiles’ apparent legitimacy is also boosted by the way they are attached to Gmail addresses and mobile phone SIM cards. The rented dummy profiles will post or tweet propaganda messages in support of any cause, for a fee. The only real person involved is the mastermind at the political consultancy firm, which ‘grooms’ (creates) tens of thousands of these fake personalities over an extended period.

‘If you look at these profiles, they seem like coherent people,’ says Howard. ‘Then they’ll suddenly start tweeting about a certain medicine because a pharmaceutical firm has hired them.’ Fake accounts used to be easily detectable with names like ‘marco249z’, but now people are easily duped into befriending or following a far more genuine-seeming construct — say, Marco Zillick, online since 2012, who loves his kids, plays soccer, and drives a BMW.

Take a closer look at your Twitter followers, says Professor Howard — some of them are possibly bots. Howard explains that ‘people in the public eye, and journalists, tend to attract more bots — and bots are especially targeting feminists, female journalists, and female politicians.’ The bots hope you will follow them back and suck up their messages, and he adds: ‘If you were to write something the bot owner didn’t like, the bot might start spamming you, trying to goad you into an argument.’
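
For readers who want to try this on their own follower list, two cues mentioned in this piece lend themselves to a quick automated check: handles that look auto-generated (the ‘marco249z’ pattern, which Howard notes is becoming less common) and follower and following counts that are nearly equal, a mass-follow signature one commenter below describes. The sketch that follows is a hypothetical scoring helper, not any platform’s real API; the field names and thresholds are assumptions for illustration.

```python
import re

def bot_suspicion_score(profile):
    """Toy score built from cues mentioned in the article and comments.
    `profile` is a plain dict with 'handle', 'followers' and 'following'
    keys (a made-up structure, not a real API schema). Higher is more
    bot-like; both cues are weak on their own, and the article warns the
    handle cue is increasingly unreliable."""
    score = 0
    # Cue 1: an auto-generated-looking handle such as 'marco249z'.
    if re.search(r"\d{2,}", profile["handle"]):
        score += 1
    # Cue 2: follower and following counts within roughly 10% of each other,
    # a sign of mass-follow tactics.
    followers, following = profile["followers"], profile["following"]
    if following and abs(followers - following) / max(followers, following, 1) < 0.1:
        score += 1
    return score

print(bot_suspicion_score({"handle": "marco249z", "followers": 1980, "following": 2000}))  # 2
```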

Facebook does not share its data — unlike Twitter — and we don’t understand the algorithms Facebook uses to determine what we see on our newsfeeds. Nor do we see everything our network posts, but rather a selection of it — for example, Howard speculates, posts from ‘people who’ve got news about something Facebook knows you watched on YouTube’. Facebook ‘has much greater reach’ than Twitter, he says, and posts on Facebook are more likely to win trust because its networks are made of family and friends. Bots are increasingly exploiting it to spread fake news.


The dangers of misinformation are compounded by the fact we are increasingly living in self-imposed ‘bubbles’ — Clinton and Trump supporters, and Brexiteers and Remainers, are unfriending one another and losing access to news from outside each bubble.

This segregation means that ‘for elections going forward, the bubbles will be more bubbly’, Howard warns. Trump’s current criticism of the media as biased is a smart strategy along these lines, Professor Howard says, because ‘he gets his followers to stop following credible media organisations’. He has even turned the term ‘fake news’ against them.

Why are political conservatives winning the propaganda war? The Oxford Internet Institute team’s research has found that the conservative bots campaigned ‘more negatively’, for example by using messages about immigrants taking jobs. ‘Negative campaigning goes much further with computational propaganda,’ says Howard — in other words, on social media negative messages grab people more effectively than positive ones. Meanwhile, he believes, ‘Conservative groups tend to be more aggressive and creative in applying new technologies to target voters, and more willing to violate privacy norms to get messaging across.’ They are, in other words, more likely to play dirty with spam emails and calls, direct mail campaigns, push polling (where the question pushes the voter towards a given response) and, now, social media messaging.

The solution must lie in preventing the spread of misinformation during elections, Professor Howard thinks. Outrageous lies could one day be punishable in court. The German government is proposing a €500,000 fine for Facebook every time it fails to take down junk news. ‘That would make Facebook move fairly quickly to stop it,’ says Howard.

But in the first instance, Professor Howard says, we need governments to organise juries of randomly sampled citizens representing the diversity of society to meet with experts, thoroughly evaluate both sides of an argument and then produce a public document outlining the facts. ‘Voters can still ignore or disagree with the document, but this would be one of the few ways of improving referendum outcomes.’

But how can we distinguish between freedom of speech and freedom to spread fake news? Professor Howard admits that this is a difficult question, but says it boils down to distinguishing between comment and checked facts. Perhaps Facebook will one day create a reliable ‘news’ feed separate from a ‘commentary’ feed for opinion essays, he muses. For now, though, he believes we are losing sight of truth in an era of fake news.

Portrait courtesy of Philip Howard. Social media image by nopporn, Donald Trump by Andrew Cline, Brexit protest by Ms Jane Campbell, all via Shutterstock.

Comments

By Andrew Gauntlett

Oh come on, enough with this "The Russians are coming!" already.

I've been writing social media bots for years. Am I to blame if certain readers cannot discern fact from fiction? Where was all this brouhaha when television was poisoning our minds? Didn't my colleague Mr Matthias just get a CBE for doing on TV precisely what you are accusing - without evidence - the Russians of doing on Twitter?

Andrew/
Jesus, 1997

By RHFindlay

Sad to see that another piece of technology has become a double-edged sword. However "botting" is probably less damaging than interfering in elections through inspiring violent military action, such as that which saw the fascist General Pinochet overthrow a legitimately elected government in Chile back in 1973, or the action which prevented the national vote for reunification in Vietnam in 1956, or perhaps those actions which have seen the overthrow of legitimate governments in Iran, Iraq and Guatemala (1950s). Not to mention the meddling in eastern Europe which saw the elections/imposition of happy comrades in those governments.

One may suppose that it is natural human behaviour for great powers to meddle in other peoples' governments; and who needs "bots" when we have the continual pernicious drip-feed of international right-wing newspaper-owners? If we, the electors, are too lazy to separate fact from fiction, and continue to be too lazy to vote, we will continue to get the governments we deserve. Trump is a classic example; Brexit could well have been another as 28% of those eligible to vote did not, and in that instance the alleged majority was particularly underwhelming.

By Ian Phillips

What a load of sour grapes! What does "improving a referendum outcome" mean? Let me guess.....getting the result you want, perhaps? Which way did you vote last June, I wonder, Prof. Howard?
I don't touch mobile technology/social media and the rest, but we did campaign as Brexiteers last June in the lovey-dovey area around Totnes in Devon. Our 8'x4' Vote Leave billboard was one of the very few not slashed by remainers, being high on a wall. But we caught plenty of abuse hurled by EU-philes in the street and had eggs hurled at our home.
It's the 20s/30s generation who are particularly prone to this kind of behaviour.....they believe they are the chosen ones who will save us all from war and climate change. If you don't agree, you're 'no platformed' = contemptuously snubbed.
The one thing the remainers (and climate change fearlings) will never do is actually debate the issue. They simply can't bear to know that the whole world is not falling apart outside the EU, it's such an identity/ego trip for them.....world government with peace enforced...wow!!
We fear for the future.....a dark age approaches unless this rigid mindset can be exposed to light of proper dialogue and reality.
Ian P. Magd 1964.

By SR

I shouldn't need to point out that disseminating 'politically slanted misinformation' is not something engaged in only on social media but is an increasing, and increasingly obvious, feature of 'credible media organisations'. This is intimately related to the difficulty of discerning between 'comment and checked facts'. Much of what passes for 'fact', as delivered by the established media, by the new media outlets and by individuals posting comments on various fora, is merely opinion (comment), no matter how authoritatively or dogmatically presented. And even when it (apparently) isn't mere comment, this difference of opinion is often based on differing interpretations of the same data. Perhaps it was ever thus but it seems to me that we have now reached a dangerously divisive moment in the rhetoric of freedom of speech and the associated freedom to hold different opinions. Having been told that everyone is entitled to an opinion (because everyone's opinion matters), a goodly number of people are now routinely told that they're actually too stupid to have an opinion. Some are even going so far as to suggest that the franchise be revised to exclude 'stupid' people.

The polarisation that is now increasingly played out on the streets and on comment sites of various sorts cannot be laid solely and irreducibly at the feet of bots, Trump and Brexit. These things arise out of a wider context, one that includes the traditional media and traditional politics, and I rather doubt that trust in either of those things will be increased by the production of citizen documents.

By Steven Edrich

Great article that demonstrates the threat we are facing. Yes, there has always been biased reporting and even misinformation, but the nature of our general use of social media is changing the game. It is creating massively greater opportunity for the spread of such information, it is doing it in a way that is much harder to identify, and it has the potential to bombard us with supposedly real people supporting it who never existed before. Oh, and in response to Andrew Gauntlett: yes, I think you are to blame - millions of people in their 'bubbles' of social media cannot discern fact from fiction, and you, as one of the perpetrators described by Professor Howard, are absolutely taking advantage of this.

By Khaled Soubani

Good effort. The unease about this subject is due to the fact that this is a transition to a more realistic understanding of the Internet and social media, in particular. The two major events cited, and others possibly on the way, are making it a very costly transition. The Internet has to fight these bots and algorithms in order to maintain user confidence.

By Thomas Sm

How is this even a respectable field of research, much less a respectable POV?

Most "fact-checking" organisations are highly biased and the fact-checkers are not unique experts but typically simple people with four-year degrees in English or Journalism. Often times, facts are "checked" against government statistics or the statistics produced by a single think tank or report from a consultancy firm (not a range of them). For example, I often see "fact-checks" on issues like the unemployment rate, citing merely the headline government stat (e.g., U3 in the United States) without any deeper discussion of what that means and how such statistics are kept.

In short, the "fact-checkers" are a typical club of people with mediocre intelligence who fancy themselves a type of intellectual élite for having a simple university degree and the "right" political opinions. I have a strong suspicion a more advanced version of the same applies to the esteemed "Professor of Internet Studies".

In any case, the goal of such propaganda as this is censorship. Populist and nationalist opinions were forced to use alternative media not because they had been "fact-checked" out of existence by honest and respectable people at the Beeb, CNN, NBC, CBS, ABC, ITV, Guardian, NYT, WaPo, etc. (the very same forces who routinely lie and mislead about foreign affairs to promote illegal wars). They used alternative channels because an oppressive and exclusive media cartel with an ideological bias already existed there. What is amazing is the claim that the disadvantaged party won via foreign interference in both the Brexit referendum and the American election when the advantaged party, we were told, was the responsible choice because most of the world supported it.

By Michael

Thank you for continuing to write about this topic. I read some quotes from you in an article that appears on Politico while researching fake MAGA accounts online.

I have been thinking about this a lot lately, due to personal experience. I'm currently running a few largish Twitter accounts. A couple of weeks ago I accidentally followed a hashtag MAGA account. (For those who don't know, the acronym MAGA stands for "Make America Great Again" and is used in close conjunction with Trump supporters, both real and robotic.) Anyway, soon afterwards I was "followed" by a swarm of at least a hundred accounts all using this, and other pro-Trump, hashtags.
After inspecting a bunch of these accounts I realized a large number of them were fake. Most of the fake accounts were "dressed up" to look like real (and attractive) people, and most were being followed by thousands of other accounts.

All of the accounts were zealously defending the Trump administration and attacking any idea or user they crossed paths with that did not align with the administration's agenda.

All of these puppet accounts followed nearly the same number of accounts as followed them, which is a sure sign that an account is using the mass-follow techniques usually employed by marketing people to artificially inflate their audience and give them the appearance of authority.

A few years ago I would have scoffed at the notion that this was an actual problem, but after seeing the craziness of 2016 it is glaringly obvious that a lot of people are being swayed by this endless online barrage of misinformation and propaganda. The puppet accounts seem rather benign at first glance, until you realize that they actually hold power to sway REAL public opinion among people who are easily convinced by marketing tactics which reinforce existing bias.

I really think this issue deserves close scrutiny, and needs a huge spotlight to be shined upon it.

By peter west

If I want "fake news" or propaganda, I turn to the "Independent" or the BBC.

And neither of those supported Trump or Brexit.
