
Why are we so hateful online?

Categories: Latest News

Friday February 22 2019

In recent months there have been several viral videos depicting Islamophobic hate crimes. In January 2019, a clip was widely circulated of a man filming himself hurling abuse at visibly Muslim schoolgirls walking down a street in East London. Similarly, footage of Syrian schoolboy Jamal being bullied attracted significant public and media attention. The videos themselves, as well as the comments left on them, were riddled with the Islamophobic hate that has become all too familiar to users of social media. The dissemination of these videos illustrates the power and reach of social media platforms and the danger of leaving hate speech unregulated.

Many people take to social media to express anti-Muslim sentiments following a terror incident. After the London Bridge attack in June 2017, the Guardian reported that 32 of the 100 most shared tweets expressed negative sentiments about Muslims. Demos found in 2017 that 143,920 Islamophobic tweets had been sent in a year – approximately 400 a day. These statistics show that social media has become a safe haven for the spread of hate-filled ideas, and if this trend is not reversed there is a risk that such rhetoric will become normalised and acted upon in society.

The lack of accountability and regulation of social media platforms has even given prominent figures the green light to propagate hate speech. Tory MP Bob Blackman, who has previously been accused of advocating Islamophobia, was found to be a member of several far-right and Islamophobic social media groups. UKIP councillor Eric Kitson shared Islamophobic and antisemitic pictures on his personal Facebook page. Similarly, Karen Sunderland, a 2018 candidate for Lewisham Council, tweeted that “Islam has become the new Nazism” and faced no repercussions.

A 2018 Amnesty International report entitled ‘Toxic Twitter’ encapsulated how online spaces can be harmful for women, particularly those from marginalised communities. The report highlighted that minorities are often targeted for violence and abuse because they are perceived as representative of an entire community, with women of colour being 34% more likely to be targeted. Shaista Aziz explained that there are not many visibly Muslim women with public platforms, “so when you do have one, you become the individual that everything is targeted to.”

Women’s rights campaigner Talat Yaqoob said that she experiences sexist, racist and Islamophobic abuse online in particular because “people know they can get away with it [online abuse] more. If people do it, it’s behind an anonymous Twitter profile.”

Amnesty concluded that the abuse women face online infringes their human rights, and that social media platforms need to publish meaningful data on how they handle violence and abuse, improve reporting mechanisms, provide clarity on how abuse reports are handled, and improve security and privacy features.

At the international level, the UN Human Rights Council (UNHRC) recognised in 2012 that the “same rights that people have offline must also be protected online”. Likewise, in 2018 Theresa May said that “what is illegal offline is illegal online”, stated the UK’s commitment to legislate on online harms, and encouraged world leaders to take action.

However, there is a lack of legislation covering hate speech online. The main instruments are the Malicious Communications Act 1988 and Section 127 of the Communications Act 2003; both are outdated, having been created before the rise of social media platforms. Furthermore, the Communications Act only extends to messages that are “grossly offensive”, which makes Islamophobic content difficult to prosecute. Carl Miller, research director at Demos, said: “If you talk to lawyers about this, most of them will say they don’t even know which Act really applies here”.

The right legislation has the potential to deter people from engaging in such hateful activity by making it clear that it is a crime. In 2017, Sadiq Khan, Mayor of London, launched a new Online Hate Crime Hub, staffed by five Met police officers and a Detective Inspector, to help tackle online hate crime and improve support for victims. Khan said “there must be zero tolerance of this behaviour… We need to encourage more victims to report incidents and explore new ways of identifying, preventing and challenging hate crime in all forms.”

While initiatives like this should be applauded, they do not place an onerous enough obligation on social media service providers to take a more proactive approach towards eliminating hate speech on their platforms. Facebook currently addresses hate speech through its Community Standards and bullying policies, and Twitter has a hateful conduct policy that aims to curb dehumanising language; however, neither imposes meaningful repercussions on perpetrators. Essentially, there is not enough legislation to govern hate speech online. The first step is for social media companies to take ownership of and responsibility for the hate-filled content shared on their platforms, and then to develop a strategy to combat it.

Working with credible organisations from communities affected by hatred is crucial to the effectiveness of any strategy to combat hate speech. Their understanding of the challenges faced by their respective communities puts them in a strong position to differentiate between hate speech and free speech. Ultimately, the need to protect free speech is a reason often cited by social media service providers for their inaction in tackling hate speech. However, by collaborating with experienced human rights organisations, social media service providers can uphold this fundamental freedom while safeguarding society from becoming an incubator for hatred.

 
