Wiley: Social media websites need to change after Twitter failed to swiftly remove grime artist's antisemitic posts, MPs say

Facebook, Twitter, and Instagram suspended the grime artist but it is unclear what further action can be taken against racism and antisemitism

Adam Smith
Wednesday 29 July 2020 14:03 BST

Members of parliament and political groups have called for greater regulation of social media sites after grime artist Wiley posted a series of antisemitic remarks.

Wiley referred to Jewish people as “cowards” and “snakes”, as well as comparing Jewish people to the Ku Klux Klan.

The rapper was eventually suspended from Twitter and Facebook, as well as Facebook-owned Instagram.

The rapper’s Twitter account, @WileyCEO, says that it has been suspended for having “violated the Twitter rules”.

In response, many prominent figures boycotted Twitter including Green MP Caroline Lucas, Labour's Rosena Allin-Khan and David Lammy, former Labour MPs Neil Coyle and Caroline Flint, Conservative MPs Chris Clarkson and Jane Stevenson, and acting Liberal Democrats leader Ed Davey.

Prime Minister Boris Johnson also said that the tweets were “abhorrent” and that Twitter’s response was “not good enough”, although he did not take part in the boycott.

“Social media companies need to go much further and faster in removing hateful comment such as this”, a spokesperson for the PM said.

Home secretary Priti Patel wrote to Twitter and Instagram asking for a “full explanation” of why the posts were not removed sooner – they remained live on his profiles for 12 hours after first being posted.

“Abuse and harassment have no place on our service and we have policies in place – that apply to everyone, everywhere – that address abuse and harassment, violent threats, and hateful conduct. If we identify accounts that violate any of these rules, we’ll take enforcement action,” Twitter said in a statement, although it would not comment on the boycott.

“There is no place for hate speech on Facebook and Instagram. After initially placing Wiley’s accounts in a seven day block, we have now removed both his Facebook and Instagram accounts for repeated violations of our policies,” a Facebook spokesperson said in a statement.

Last year the government published the Online Harms White Paper, which proposed that Ofcom should have regulatory powers to police what is posted on the internet and could issue fines to companies that fail to meet its standards.

The regulator will have power to enforce a “duty of care” on companies such as Facebook and Twitter “to protect users from harmful and illegal terrorist and child-abuse content”.

Shadow culture secretary Jo Stevens criticised the government over claims that the Online Harms Bill is being delayed. The bill itself, however, has been criticised for its wide remit and the potential to become “a direct attack on the fundamental right to freedom of expression”, according to Big Brother Watch.

Twitter, Facebook, Instagram, and other social media sites have been routinely criticised for systems that do not adequately manage hateful content. Twitter recently updated its policy to stop people circumventing its rules by saying it will take stronger action against harmful websites shared on its platform.

Facebook, meanwhile, has reportedly shelved research that would have made its platform less divisive, and has been criticised for outsourcing content moderation in ways that have seriously damaged the mental health of the workers who keep its platforms relatively safe.

It is unclear what action these companies will take in future. Some have suggested that greater use of artificial intelligence could moderate content on the platforms; currently, however, those algorithms struggle with extreme content.

“The things that you never see are the successes,” Tata Communications’ future technologist, David Eden, told The Independent in 2017, when videos of two murders were watched by thousands of people on Facebook before being removed.

“What you don’t see are the things that have been removed. You only see the things Facebook’s AI left, and they tend to be massive, glaring mistakes.”

Similarly, Facebook’s AI system struggled to detect the video of the Christchurch shooting because it could not distinguish it from similar media, such as video game footage.

“If thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground”, the company said.

It is also clear that Twitter, and other social media sites, benefit culturally and financially from hosting controversial people and discussions.

Their existence as de facto public spaces, despite being private companies, means they must create exceptions to their own policies for divisive politicians such as Donald Trump.

Questions have repeatedly been raised about the benefits and drawbacks of such a policy, and experts have found few easy answers for how to police websites with more users than the populations of many countries.
