A developing thread of the Russian hacking story is (and will increasingly become) our susceptibility to disinformation. I’ve said it before but I think it’s worth repeating: Russia certainly hacked the American election and the outcome was affected, at least in terms of votes cast. Opinions were changed because people (quite possibly millions of people) believed that disinformation. It is too easy to simply dismiss what happened because the final result might not have been any different. What happened in America last year is important because it happened and will happen in other countries too.

Today Facebook rolled out a new tool that will “tag” certain stories with red alerts to warn readers when a story has been disputed by “fact checking” websites. It seems like an easy fix, yet it is not even a band-aid on what has quickly become a gushing wound. It doesn’t begin to address the scale of this problem, which really lies not in technology but in people: in the way that information spreads through a social network, how lies propagate, how untruths harden into convictions, and how we all casually ingest information with barely a thought for its provenance.

The scale of the problem cannot be overstated. It’s unlikely that “disinformation” will be just the “story of 2017”, something we move past before worrying about whatever comes next in 2018. We are most likely in the Age of Disinformation, and it’s hard, at this early point, to see how or when we will emerge.

To understand why it’s such a problem, we have to understand that social media is not a powerful mechanism simply because it is grounded in technology. It is powerful because it is grounded in people, and it is inherent to our natures that we believe things that our friends tell us. Communication has been the cornerstone of our civilisation for thousands of years. Yet, for the first time, it is under direct attack.

It is now obvious that we are all being targeted with disinformation. There has been discussion this week about Russia’s use of “bots” to help disseminate the stories that are most useful to their goals. “Bots” are effectively the same technology as the old “web crawlers” and “spiders” that work quietly away indexing the internet. They are simply specialised code that autonomously prowls the internet looking for places where they can do their work.

In the case of Googlebot, the work is to add the data it finds to the Google search index. At some point, Googlebot will scan this article and notice that I’ve used the series of letters “Fnngarri” and index the page under that search term. At the time of writing, there are no search results in the Google index for “Fnngarri”. Soon there will be one.
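To make the idea concrete, the heart of an indexing bot can be sketched in a few lines. This is only an illustration – the pages and URLs below are invented for the example – but it shows how a crawled word like “Fnngarri” ends up pointing back at the page that contains it:

```python
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> page text. Returns word -> set of URLs."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)   # record that this page contains this word
    return index

# Invented pages, purely for illustration.
pages = {
    "example.com/article": "the age of disinformation fnngarri",
    "example.com/other": "the age of information",
}
index = build_index(pages)
print(index["fnngarri"])  # only the page containing the made-up word
```

A search engine’s real index is vastly more sophisticated, of course, but the principle – words mapped back to the pages a bot found them on – is the same.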

Yet bots need not be so passive as to simply “read” the web. Bots can be aggressive. They can even be aggressive while only being passive: if enough of them read data from a web server at once, they can crash a website in what’s called a Denial of Service attack, the swarm of bots being known as a “botnet”. Bots can also be set not just to trawl data but to write it. What is known as a “captcha” – those strange series of letters and numbers you have to enter before posting a comment on some sites – is there to stop bots from taking over. Without that layer of protection, bots can effectively destroy a website, blog, or forum in seconds by filling it with spam.
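Stripped to its essentials, any bot – indexer or spammer – is a loop of the same shape: fetch a page, extract links, queue them, repeat. A sketch, with the “web” reduced to an in-memory map of invented pages so it runs standalone; a real bot would fetch over HTTP instead:

```python
# Invented site structure: each "page" lists the pages it links to.
WEB = {
    "/home": ["/about", "/news"],
    "/about": ["/home"],
    "/news": ["/home", "/about"],
}

def crawl(start):
    """Visit every page reachable from `start`, never revisiting a page."""
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)                    # a real bot would "work" here:
        queue.extend(WEB.get(url, []))   # index, scrape, or post spam
    return seen
```

Point the same loop at comment forms rather than links and you have a spam bot; run thousands of copies in parallel and you have a botnet.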

These examples tend, naturally, to make us think that bots are an irritant and that their effects are obvious. Bots are mechanical in what they do and, so we think, they lack the sophistication to really affect us in any deep or meaningful way. Yet what is now emerging is that the threat from bots is much more insidious. To understand why, we have to think on a much bigger scale. It helps, in fact, to think of the Earth…

In Douglas Adams’ Hitchhiker’s Guide to the Galaxy, there’s a wonderful comic conceit that increasingly feels less like a conceit. The planet Earth is not, it turns out, a planet at all. It is an artificial planet that works as a computer. Every life form on the planet (of which humans are only the third most intelligent, after the mice and the dolphins) is a small but vital component in a vastly complex computer program that has been working away for millions of years to work out the Ultimate Question of Life (the answer, they already know, is 42).

The Hitchhiker’s Guide is, of course, fiction, but part of Adams’ genius is that the science behind this concept of the Earth is not too far-fetched. There are coherent arguments that describe vastly complicated systems such as the Earth in terms of their programmability. The whole concept of “introduced species” is, for example, a result of viewing an ecology as a system capable of being influenced by new inputs to produce different outputs.

Not that all of our systems are as complicated as an ecology or an entire planet. In many respects, our simplest systems are computers, which work in binary. Deep inside your machine, there are switches being set to either a “0” or a “1”. That, in essence, is all that a computer does. Yet from this very simple rule something as complicated as the technological revolution is fashioned.
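As a small illustration of how much can be fashioned from that single rule, here is addition built from nothing but single-bit operations – a sketch, in software, of the same trick a processor’s circuitry performs with its switches:

```python
def add_bits(a, b):
    """Add two non-negative integers using only bitwise logic, no '+'."""
    while b:                   # repeat until no carry bits remain
        carry = (a & b) << 1   # a carry appears wherever both bits are 1
        a = a ^ b              # XOR gives the sum of each bit pair, sans carry
        b = carry              # feed the carries back in as a new addend
    return a

print(add_bits(19, 23))  # 42
```

Nothing here but 0s, 1s and three bit operations – yet from components this simple, layered millions deep, you get a computer.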

Computational theories of mind take this argument a step further to suggest that human consciousness is itself a product of a vastly complicated system that can be thought of as a machine. Neurons might individually work relatively simply and exhibit nothing like what you or I would think of as consciousness. Yet when they work on the scale of the many billions, the result is that staggeringly complex “I” that’s currently reading this (or, on this side of the screen, writing it).

Without getting into this science in a hard way, it’s worth pointing out that even simple sets of rules can produce huge complexity if enough agents follow those rules. (Conway’s Game of Life is a popular example, though I won’t get into the details here.)
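For the curious, Conway’s rules really do fit in a few lines. A minimal sketch: each cell lives or dies according to a count of its eight neighbours, and from that rule alone, gliders, oscillators and entire “machines” emerge:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is already alive and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal
# and vertical, returning to the start every two generations.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Three rules, a handful of cells – and the system is already capable of patterns its rules never mention.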

In the case of social media, however, we have a system which is thought to contain a relatively small number of agents. Twitter, for example, is said to have about 317 million active users compared, say, with the brain, which contains something like 86 billion neurons, or a modern processor, which contains 1.4 billion transistors. Of course, there is a difference between neurons, transistors and the users of social media. Each user is a fully developed human consciousness in itself. Yet if we also add in the number of bots increasingly swarming around the globe, we begin to recognise the scale and the complexity of the problem.

Human activity in the world of social media is already being dwarfed by that of artificial entities out to shape opinion. As nation states begin to recognise the nature of the threat, human individuality will be exposed to enormous pressures of information engineering for which there is, as yet, no bulwark. In 2016, it was calculated that just over half of the world’s entire web traffic was the product of bot activity. The Russian disinformation campaign is not simply a problem of putting in the right safeguards. On the scale of the social network, disinformation is becoming a form of psychosis that is affecting an entire system. In those terms, the deep intractability of the problem becomes apparent.

The problem of disinformation affecting a social network is beginning to resemble an aberration of psychology affecting the individual. When you consider that Freud published his seminal works at the turn of the twentieth century, and realise how little we’ve really come to understand those problems since, you begin to understand why the psychosis of disinformation might not be easily fixed. The problem of 2017? It might well become the defining problem of our century.