As if facing the crushing defeat to Italy wasn’t enough, online trolls bombarded Marcus Rashford, Jadon Sancho and Bukayo Saka with vile abuse on social media after England’s Euro 2020 final on Sunday.
Twitter said it had taken down more than 1,000 tweets and permanently suspended several accounts following the “abhorrent” racist abuse directed at the players. Yet analysis by the i found that accounts that had shared abuse widely had not been suspended on Twitter. Similarly, on Instagram, dozens of users who had posted racist abuse – from monkey emojis and bananas to derogatory slurs and telling the players to “go back to where they came from” – had not had their accounts disabled. Some users reported this abuse to the site’s moderators but were told it did not qualify for a ban. These findings reveal cracks in social media companies’ policies for eradicating racism, abuse, and vile slurs on their platforms.
In response, Downing Street has referred to the UK’s flagship draft Online Safety Bill, introduced in May, as its primary way of tackling online abuse. Under the current plans, a “duty of care” would be imposed on social media platforms enforced by the communications regulator Ofcom, and there would be fines of up to £18 million on companies who fail to comply.
Yet significant questions remain as to how far the Bill will deal with the proliferation of harmful content online. Jo Stevens, Labour’s Shadow Culture Secretary, told POLITICO that the current plans are not satisfactory, as they do not stamp out anonymous abuse and contain no codes of practice for tackling racism.
She said: “If you were Marcus Rashford, Jadon Sancho and Bukayo Saka, you would have to do what you currently do, which is basically make a complaint to the platform themselves, wait a very long time for them to decide whether or not what was posted breaches their community guidelines and they might at some point get that person to take that stuff down or get them off the platform. But that would be it. It is still essentially a system of self-regulation.”
Since the furore, a petition on the Parliament.UK website to make ID a legal requirement for opening a social media account has gained traction. The petition – proposed by former glamour model Katie Price to tackle abuse directed at her son, Harvey, who has Prader-Willi syndrome – has received almost 700,000 signatures (600,000 more than is needed to trigger a debate in Parliament).
The thinking is that by requiring identification, the cloak of anonymity can be lifted from these trolls, and they can then be traced and held to account. Yet the government has responded to the petition, arguing that introducing compulsory user verification for social media could disproportionately impact users who rely on anonymity to protect their identity.
A spokesperson for the Department for Digital, Culture, Media and Sport said: “These users include young people exploring their gender or sexual identity, whistleblowers, journalists, sources and victims of abuse. Introducing a new legal requirement, whereby only verified users can access social media, would force these users to disclose their identity and increase the risk to their personal safety.
“Furthermore, users without ID, or users who are reliant on ID from family members, would experience a serious restriction of their online experience, freedom of expression and rights. Research from the Electoral Commission suggests that there are 3.5 million people in the UK who do not have access to a valid photo ID.”
This argument has been supported by sociologists and digital rights campaigners, who believe there would be a risk of data breaches by malicious actors, and that the fixation on “real-name policies” distracts from the reality that racism is embedded within society and is simply manifested by social media algorithms.
Imran Ahmed, CEO of the Centre for Countering Digital Hate (CCDH), says his organisation’s research suggests that a significant proportion of Instagram offenders – perhaps the majority – hid behind anonymous accounts whilst spewing abuse at the footballers after the match, but that the case for ID verification is a catch-22.
“It is reasonable to assume that racists would be more reluctant to publicly voice their hatred if they had to put their names to it, with all the potential social and professional costs associated”, he says. “But we also cannot ignore arguments for anonymity – especially for those living in politically repressive regimes elsewhere in the world, who may face terrible consequences for expressing certain opinions.”
Clearly, ID verification for social media is a double-edged sword. On the one hand, anonymity is crucial for protest, for whistleblowing, and for survivors of abuse. What’s more, ID verification would deny 3.5 million people in the UK, and millions more worldwide, the fundamental right to express themselves and access information. On the other hand, ID verification could expose these vile trolls and hold them to account for what they truly are.
Whatever the merits of banning anonymous accounts, part of the solution is obvious, according to Ahmed: “What is beyond debate is that tech companies should close the accounts of racist abusers. No one has a human right to abuse others racially on social media. Britain needs legal mechanisms to hold platforms to account for their appalling inaction on racism by imposing proper financial penalties, as other countries already have done or are seeking to do.”