It’s a sad truth of our modern interconnected lives that tragedies are often defined by the platitudes they evoke. A terrible event, like that in New Zealand last week, makes room for “thoughts and prayers” as well as demands that “this cannot happen again”. Many are heartfelt, but most remain mere signalling; few touch on the practical efforts needed to stop these things from happening. Therefore, in the spirit of “what might be done”, let me offer a few thoughts about “echo chambers”, widely regarded as one of the reasons we’re in such a dire place.
It has become a standard warning when using social media that we shouldn’t just follow people who agree with us. The argument runs like this: if we follow people we agree with, they will only reinforce our opinions. If we never expose ourselves to the other side of any argument, then we deny that important context that allows us to develop and grow as human beings. The result is an “echo chamber” or “bubble”, where you only hear your own words spoken back at you.
It’s an admirable sentiment. Except, perhaps we’ve been getting it all wrong…
The problem is that the argument ignores how the system works. It ignores the fact that social media isn’t passive. It works through algorithms, and those algorithms, for all the talk of their cleverness, aren’t that bright. The actual machinery of social media (posting, reposting, liking, commenting, etc.) isn’t that complicated; the complexity arises from naïve rules being applied billions of times, producing results we could never anticipate.
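To make that concrete, here is a toy simulation of my own devising (it is a sketch of the principle, not any platform’s actual ranking code): ten identical posts and one naive rule, “show posts in proportion to the engagement they already have”, applied a hundred thousand times.

```python
import random

# A toy feedback loop (invented for illustration, not any platform's
# real ranking code): ten identical posts, one naive rule applied
# over and over.
posts = {f"post_{i}": 1 for i in range(10)}  # every post starts equal

for _ in range(100_000):
    # pick a post to show, weighted by its current score
    shown = random.choices(list(posts), weights=list(posts.values()))[0]
    posts[shown] += 1  # being shown earns it another engagement point

# A few posts end up with most of the attention purely through
# feedback; none of them was ever "better" than the others.
for post, score in sorted(posts.items(), key=lambda kv: -kv[1]):
    print(post, score)
```

Run it and a couple of posts hoover up most of the points while the rest languish. The rule is trivial; the runaway winners it produces are written nowhere in it.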
In order to understand how simple systems can go wrong, consider a comparable but much smaller system. Goodreads is a website that tracks what books users are reading. You might have seen it integrated into Amazon’s Kindle as well as other devices – the moment you finish your eBook, you are asked to rate it. It’s a clever system, producing voluminous data that algorithms can chew up. What could possibly go wrong?
Well, what “goes wrong” is perhaps surprising: great books often get the worst ratings and bad books get the highest ratings.
Does that mean we’ve been rating our literary classics wrong all these years? Is this a true reflection of a book’s quality? Well, this isn’t an argument about literary merits and what constitutes a “good book”. What this argument is about is how a system designed to promote “merit” inadvertently does the opposite because it hasn’t properly considered the habits of its users or the quality of the data it collects. Indeed, it doesn’t take much figuring to work out how the system causes this anomaly. It has nothing to do with the algorithms and everything to do with human nature.
People who want to read erotic vampire fiction involving cowboys taking their shirts off will, on the whole, want to read more erotic vampire fiction involving cowboys taking their shirts off. They won’t suddenly leap from “Dead Sexy” (“Tall, dark and delicious, he could be the ideal man—if he wasn’t a vampire”) to Proust or Charlotte Brontë. Extrapolate that a little further and nearly all the people who read those books will confine themselves to books that are similar in style and content. The same is true for most genre fiction. A person who reads Joanna Trollope might never read Anthony Trollope.
This is an example of an inherent bias in the system, which would be insignificant if we were only dealing with erotic vampire fiction involving cowboys taking their shirts off. It only becomes a problem when you look at the bigger picture. That’s when you need to understand that not all books are read in the same way. The literary classics, for example, are pushed into the hands of indifferent students or book-club members who would never choose to read them.
The result is that “David Copperfield”, rated by 181,229 readers, has a rating of 3.98. Joseph Conrad’s “Heart of Darkness” sits at 3.42 after 357,800 reviews, though “Hearts in Darkness” (“Makenna James thinks her day can’t get any worse, until she finds herself stranded in a pitch-black elevator with a complete stranger”) has a more respectable score of 3.86 from 33,751 reviews. Then there are the thousands upon thousands of books with muscled lumberjack types on their covers, rated in the mid-to-high 4.0s.
This flaw is typical of many rating systems. No two bits of data are equal, but algorithms treat them as though they are in order to produce an average score. Amazon has a similar problem: its algorithms cannot tell whether you’re marking down that cordless power drill because it’s a bad cordless power drill or because it arrived too late for Christmas.
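A minimal sketch of that self-selection effect, with invented numbers (the “enthusiasm” score is my assumption, not anything Goodreads measures): two books of identical quality, one read only by fans, one also assigned to reluctant readers.

```python
import random
from statistics import mean

random.seed(0)

# A back-of-the-envelope model (the numbers are assumptions, not
# Goodreads data): a reader's star rating loosely tracks how much
# they wanted to read the book in the first place.
def star_rating(enthusiasm):
    return max(1, min(5, round(random.gauss(enthusiasm, 1.0))))

# A niche genre title is only ever read by people who sought it out...
genre_fans = [star_rating(4.5) for _ in range(1_000)]

# ...while a classic of equal quality is also pushed on students and
# book-club members who would never have chosen it.
classic_readers = ([star_rating(4.5) for _ in range(600)]
                   + [star_rating(2.5) for _ in range(400)])

print(f"niche genre title: {mean(genre_fans):.2f}")       # roughly 4.3
print(f"literary classic:  {mean(classic_readers):.2f}")  # roughly 3.6
```

The plain average cannot tell a bad book from a conscripted audience; it simply reports that the classic scored lower.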
This is why I find myself wondering if we misunderstand social media and especially the arguments about “echo chambers”. The kind of people who worry about “echo chambers” are not the extremists currently sitting in their bedrooms surrounded by Hitler memorabilia. Those people continue to exist entirely in echo chambers and will always be locked inside their bubbles. The people who take the echo chamber argument seriously are people like you and me, who think democracy is about a plurality of voices, often even terribly wrong voices. They are the free speech advocates who say, as I’ve often argued, that it’s better to hear bad arguments and counter them with good answers than to allow them to ferment in isolation where they get even worse.
All that remains true. Yet I increasingly wonder if social media is the wrong platform on which to advocate that.
Social media is too much at the mercy of its algorithms. It means that what we do matters, and that even the actions we feel are passive are, in fact, skewing the operation of that media. In other words: we affect how social media sees the objects that we consume. You might not agree with everything that “Norbert Dingleberry” is posting about “running with scissors”, but you accept that it’s his opinion, so you continue to allow it in your timeline.
By not banishing Norbert, you’re nobly accepting the logic that stops you from creating a Dingleberry-free echo chamber. Yet the social media engine doesn’t understand this. To the social media engine, Norbert Dingleberry is +1 in the great equation and his post about running with scissors is now +1 in popularity. And because you’ve carried on tolerating Norbert Dingleberry’s nonsense, you’ve just given him +1 nudge towards acceptance. His tweet about running with scissors will spread a little bit further because you tolerated it. (And this is especially true of high-profile media types who retweet, ironically or not, the worst kind of abuse directed at them. Well done! You’ve just given some troll a huge social media boost!)
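Sketched in code, the asymmetry looks something like this (the signals and weights are invented; real ranking formulas are secret, but additive engagement scoring of this general shape is the standard design):

```python
# A caricature of engagement-based scoring (invented for illustration,
# not any real platform's formula). Every signal the engine can see is
# additive; "I saw this and silently disapproved" is not a signal at all.
SIGNAL_WEIGHTS = {
    "impression": 1,  # merely appearing in a timeline earns a point
    "like":       2,
    "reply":      3,  # an angry rebuttal counts the same as praise
    "retweet":    5,  # ironic retweets score exactly like sincere ones
}

def post_score(signals):
    """Sum whatever the engine observed; note there is no negative
    entry for tolerated-but-disliked."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

# Norbert's scissors post, seen by ten followers who disagree with it
# but never mute or unfollow him:
print(post_score(["impression"] * 10))  # 10 points of "popularity"

# The same post after one high-profile user retweets it to mock it:
print(post_score(["impression"] * 10 + ["retweet", "reply"]))  # 18
```

Nothing in that scoreboard distinguishes tolerance, mockery, or approval; every reaction the engine can see pushes the post further.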
Now, the way I describe this makes it sound trivial and, on the scale of talking about you or me tolerating “bad” ideas, it remains trivial in the very same way that people innocently go about rating hunky cowboy vampire erotica. Yet social media doesn’t work on the scale of just you and me. It doesn’t even operate on the scale of our families, our book club, or even our local communities. There are millions of users producing billions – even hundreds of billions – of discrete data points.
And the result? That’s a system skewed to overpromote bad ideas simply because those with moderately liberal worldviews are only too willing to tolerate bad ideas. Those who have bad ideas do not tolerate anything that’s notionally liberal.
None of this should be taken as an argument against freedom of speech. What it does amount to, however, is an argument in favour of that freedom to listen which we all enjoy. I’ve written before that so-called “no-platforming” is often mischaracterised as intolerance when it is really our fundamental choice not to listen to arguments we feel are beneath us, whether that’s astrologers demanding to be heard by astronomers, flat-earthers wasting the time of geographers, or anti-vaxxers expecting doctors to respond to holy scripture or bad journalism.
In terms of social media, we shouldn’t be afraid to turn off bad ideas and, in fact, it’s probably our moral responsibility to do so. If you believe that sensible rational arguments will prevail, then it’s our duty to let the algorithms know what sensible rational arguments look like. Without that, we are mislabelling the data. Simply by allowing bad ideas to live in our timelines, we tacitly empower them. And that’s the deeper truth about social media. Your mere presence is as much a statement as everything you do there.