
Social media and the propagation of far-right hate

Categories: Latest News

Monday November 19 2018

Humanity has constantly strived to develop new and more advanced tools to satisfy our insatiable appetite to communicate with each other, and with the advent of social media and instantaneous messaging, it seems we are finally on the cusp of achieving this, at our own peril. Social media platforms have emerged as powerful tools for communicating with individuals across the world, allowing us to disseminate knowledge; develop powerful campaigns that transform geopolitical landscapes; tap into international start-ups; and, of course, maintain better contact with friends and family. However, the instantaneous nature of these platforms, their accessibility, and the anonymity they provide have also left them rife with hateful and divisive rhetoric that is deeply damaging to community cohesion.

A particularly insidious problem is that, within the shadows of the net, networks have developed that aim to capitalise on hateful rhetoric and methodically sow further discord by propagating far-right populist narratives.

A recent report by the New York-based research institute Data & Society, entitled “Alternative Influence: Broadcasting the Reactionary Right on YouTube”, set out to map this network, described as the Alternative Influence Network (AIN), by investigating 81 YouTube channels that gave a platform to around 65 political influencers. The report describes “political influencers” as individuals “who shape public opinion and advertise goods and services through the ‘conscientious calibration’ of their online personae”, building audiences and ‘selling’ them far-right ideology.

The report argues that the AIN acts methodically to collaborate and reinforce its narratives, ultimately aiming to normalise far-right rhetoric and shift the ‘Overton window’ (the range of political views considered acceptable to the majority of society at any given time) into dangerous territory.

These influencers deploy the tactics of “brand influencers”, such as developing “highly intimate relationships with their followers”, which are then exploited to pass on political opinions and views under the guise of being “light-hearted, entertaining, rebellious, and fun”.

Members of this network include infamous far-right activists such as: Stephen Yaxley-Lennon, also known as Tommy Robinson, founder of the English Defence League (EDL), a group described as engaging in “extreme right-wing activity” by Max Hill QC, the former Independent Reviewer of Terrorism Legislation; Richard Spencer, a prominent American white supremacist; and Lauren Southern, a Canadian far-right activist who was denied entry to the UK because of her anti-Islamic views.

However, what is notable is that the network also contains individuals who self-describe as “libertarians” yet connect with political influencers who are self-described “white supremacists”. One instance illustrating this problem is the YouTube ‘debate’ on scientific racism between Richard Spencer and Carl Benjamin, a self-described libertarian. The debate, streamed live on 4 January 2018, trended as the top video worldwide, with “over 10,000 active viewers”. With Spencer having years of experience in spouting far-right rhetoric and justifying it with pseudo-science, many viewers were left with the impression that Spencer had not only won the debate, but that his views on scientific racism were justifiable. Indeed, one fan eagerly commented: “I’ve never really listened to Spencer speak before but it is immediately apparent that he’s on a whole different level”. By engaging with Spencer, Benjamin essentially gave the white supremacist a platform and allowed him access to his followers. Through this pattern of connecting and collaborating, the AIN both deliberately and inadvertently propagates and reinforces far-right rhetoric.

Other social media platforms experience similar problems, with coordinated groups acting in synchrony to sow discord and propagate hateful rhetoric.

A recent report by Demos, a UK-based cross-party think-tank, entitled “Russian Influence Operations on Twitter”, considers the exploitation of ‘Twitter bots’ by the Russian state. The report examined datasets released by Twitter in October 2018, comprising around “9 million tweets from 3,841 blocked accounts” associated with the Internet Research Agency (IRA), a Russian organisation founded in 2013 that has been heavily criticised for exploiting social media platforms to push pro-Russian propaganda both at home and abroad. The report found that the network of accounts expended significant effort propagating hateful rhetoric against Muslims in particular. Indeed, the “most widely-followed and visible troll account” shared more than 100 tweets, 60% of which related to Islam. One such tweet read: “London: Muslims running a campaign stall for Sharia law! Must be sponsored by @MayorofLondon! #BanIslam”; another read: “Welcome To The New Europe! Muslim migrants shouting in London “This is our country now, GET OUT!” #Rapefugees”. The report also found that the most frequent topics of tweets sent during the six months prior to the Brexit referendum were “Islam” and “Muslims”.

What is most worrying is that the technologies currently utilised by such networks on these social media platforms are rudimentary and fairly easy to spot compared to those now being developed.

“Deep fakes” are a new form of technology that uses machine learning techniques to generate video and audio that appear, at a perplexingly realistic level, to show real people saying or doing things they never did. One example is the fake video of Donald Trump released in May by the Flemish Socialist Party sp.a, which led to hundreds of users on Twitter commenting on the President’s seemingly outrageous statements.

The problem we face as a society, and that governments face across the world, is therefore to grapple with fake news and the misinformation currently spreading across social media platforms, so that we stand a chance of confronting deep fakes once they become a more ubiquitous technology.

However, the problem the Government faces is a significant one.

Cyberspace is not the same plane of existence as the physical world; regulations are far more difficult to enforce there. Governments originate from ideas of centralised power and concrete objects, whereas cyberspace propagates ideas of decentralised power. Indeed, groups operating in cyberspace share the mindset of John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace: “Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders”.

The Government must act to assert its authority over social media companies and hold them accountable for the hate prevalent on their platforms, because whilst we, as a society, have never been more connected, we are drowning in discourse and bombarded with politically charged rhetoric that constantly propagates populist narratives. It is imperative that the Government pays far greater attention to the digital plane and the role of social media platforms in influencing the psyche of the nation.

In Joshua Kopstein’s words, “it’s no longer ok to not know how the internet works”.
