Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler…
We live in a world obsessed with the “road not taken”. It’s an understandable fixation. As Robert Frost perhaps recognised, the unknown maintains a powerful hold on our imaginations, and not just where paths through shaded woodland are concerned. History is our forest. What if Hitler had died in 1914, or JFK had bent down to tie his shoelaces at that fateful moment in 1963? Forget the constant “whataboutism” of the current tedious culture war. We have always been in thrall to “whatifism”, and it shows no signs of easing anytime soon.
Doubt seems to be hard-wired into our brains, and our capacity to believe we’ve made the wrong choice rooted in our DNA. Yet this feeling that the grass is always greener is particularly suited to a media age in which alternate histories have become synonymous with partisan bickering. American politics is routinely viewed through the prism of what-if rather than what-is. Democrats spent four years living in the darkest timeline, set against the ideal of Hillary’s America, just as Republicans now act as if they’re living in the alternative history to Trump’s second term. There’s always good money to be had by imagining the untaken path.
The inherent problem is that the untaken path always exists in the realm of hypothesis and, therefore, of prediction. That’s where the trouble begins. If you want to beat a politician with a particularly big stick, there is no stick bigger than the alternate reality that would have existed but for their decisions. For the most part, it is a case of argumentum ad ignorantiam: an appeal to ignorance. What would the Covid numbers have looked like without government intervention, or with more of it? It’s impossible to say, because intervention happened at one specific level and no other. We can speculate as much as we like, because there will never be contrary evidence to prove us wrong.
In The Lancet last October, a review of the computer model developed by Professor Neil Ferguson and his team at Imperial College concluded that “the initial projections were never going to be 100% accurate with a novel coronavirus. Initial projections built worst-case scenarios that would never happen as a means of spurring leadership into action.” That conclusion might be both accurate and damning. Throughout the pandemic, worst-case scenarios have become the stuff of political bat and ball. Ferguson’s regular interviews and TV appearances popularised his work, but often to the discredit of his profession. Models came to be treated as versions of history, their projections read either as absolute guarantees of outcomes or as hysterical hand-waving by some nerdy Nostradamus.
These past two years should have made us acutely aware of the difficulties with prediction, but they do not appear to have made us any wiser. As the famed baseball manager Casey Stengel once said: “Never make predictions, especially about the future.” Certainly, in terms of data models and the statistics of prediction, hypothesising a “could” has become a fool’s errand. Supposedly “inaccurate predictions” become “failures” in the eyes of the general public. Yet, in terms of the models, these “failures” aren’t necessarily failures at all. “Being wrong” is a feature, not a flaw, of statistical forecasts. A model might accurately predict that there is only a 1% chance of an event happening, and the model has not necessarily failed should that event occur. A 1-in-100 event is, after all, expected to happen about once in every hundred trials.
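The point about the 1% event can be made concrete with a small simulation (a hypothetical illustration with made-up numbers, not any real epidemiological model): a perfectly calibrated forecast that assigns a 1% probability to an event will still see that event occur roughly once in every hundred trials, and each occurrence looks like a “failure” when viewed in isolation.

```python
import random

random.seed(42)

# A perfectly calibrated forecaster: the event really does have a 1% chance.
FORECAST_PROB = 0.01
TRIALS = 100_000

# Count how often the "unlikely" event actually happens.
hits = sum(random.random() < FORECAST_PROB for _ in range(TRIALS))
observed_rate = hits / TRIALS

# The event occurs roughly a thousand times in 100,000 trials. None of
# those occurrences means the forecast was wrong; the 1% figure was correct.
print(f"forecast: {FORECAST_PROB:.1%}, observed: {observed_rate:.2%}")
```

The forecast and the observed frequency agree almost exactly, even though the “1% event” happened again and again, which is precisely why a single occurrence cannot, on its own, falsify the model.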
The danger of all this is that it turns the public away from science because it misrepresents science. Nassim Nicholas Taleb and Yaneer Bar-Yam, writing in The Guardian in March of 2020, blamed the government for inaction but also, more astutely, for committing a fundamental mistake in its understanding of the science around the relatively new subject of data modelling. “No 10 appears to be enamoured with ‘scientism’ – things that have the cosmetic attributes of science but without its rigour”. They concluded that “risk management – like wisdom – requires robustness in models.” Robustness is needed, too, in how journalists report and politicians understand the function of models.
Covid-19 has apparently made us all experts in epidemiology, virology, data modelling, forecasting, and computational statistics. Yet, in broad terms, too much of this expertise amounts to picking and choosing the evidence that backs up our prejudices. For the best part of two years, the objective nature of science has been bent to the subjective nature of our lived experiences. We need reminding, often, that “being wrong” is itself a precondition of the scientific method. Science is Bayesian: scientific consensus is merely the point at which the greatest weight of evidence stacks in favour of a theory. Science is therefore always an approximation of reality.
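That Bayesian picture can be sketched in a few lines (the probabilities here are invented purely for illustration): each new piece of evidence shifts belief in a theory via Bayes’ rule, and “consensus” is simply a posterior that has grown very large, never a certainty of 1.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Start agnostic about a theory, then fold in five successive studies, each
# of which is three times likelier under the theory than under its negation.
posterior = 0.5
for _ in range(5):
    posterior = bayes_update(posterior, p_e_given_h=0.6, p_e_given_not_h=0.2)

print(f"{posterior:.4f}")  # → 0.9959
```

Belief climbs towards 1 without ever reaching it, which is the sense in which consensus remains an approximation of reality rather than a final verdict.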
Which takes us back to the road not taken. Frost’s poem ends with the speaker confessing that “I took the one less traveled by, / And that has made all the difference.” And so we end where we began. The other path might represent a significant “difference”, but it is entirely hypothetical. Obsess as he might, Robert Frost could never know what lay down that other path, even if he had tried to model it.