This month the Home Affairs Select Committee produced a report stating that social networks including Facebook, Twitter and Google are “consciously failing” to combat users promoting extremism. These networks were quick to reject these claims, stating they are committed to removing accounts and content that support terrorist activity. Simon Milner, Director of Policy at Facebook UK, explained that “In the rare instances that we identify accounts or material as terrorist, we’ll also look for and remove relevant associated accounts and content,” while YouTube responded that “We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law.”
The reality is that we live in a world where technology is soaring off into the future and everything else – business, culture, art and now national security – has to use it as a vessel to realise its aims. Removing extremist content from the likes of Facebook, Twitter, Instagram and Snapchat is an arduous task for anyone. What is even more difficult for social media giants is knowing where to draw the line between extremism and unfettered critical engagement with social issues. That is not to say social media giants should not prioritise this. They should and they are, but more must still be done. What we should seek more than anything is clarification on what guides their decision-making and on the importance they place on communicating this to the everyday user.
I looked for Twitter, Facebook, Snapchat and Instagram policies that relate specifically to measures taken to counter online extremism. Amid many terms-and-conditions pages and community-guideline links, I found one blog post from Twitter dated February 2016, the Twitter Rules, and some Facebook commentary on what it terms “dangerous organisations”. In the Twitter Rules, users are forbidden from making “threats of violence or promot[ing] violence including threatening or promoting terrorism”. I did not find any links to the UK Government’s webpages on countering extremism for users who wanted to find out more about this issue. The condemnation of such threats of violence is relevant, but when users are at the centre of distributing, sharing, liking and reacting to various forms of content, sites should think consciously about the kinds of discussion they wish to see flourish.
With a combined 1.83 billion daily active users, Facebook, Instagram, Snapchat and Twitter need to use their position of power to educate their users on where they stand on online extremism and what to do when they come into contact with it. Counter-extremism cannot move forward without their cooperation, so the more information made available to users, the more users can help to secure their own online safety. Social networks are communities, and those communities can be harnessed to push back against the extremists who abuse them.
Rooting out the extremism that we see on social media starts with collaborating with representative governments to deliver an educational conversation on what extremism is, how to spot it and how to react when you do. Those who seek to use social media networks to undermine national security strategy, or to promote speech that is hateful, undemocratic and damaging to civic society, will be brought to justice, not only by law enforcement but also by everyday people in these online communities. It can no longer be the case that social media is a green room for those seeking to connect with others promoting hateful, extremist propaganda. Your move, Facebook.