Too big to handle: Why we are so bad at preventing catastrophes

15 October 2014


Financial crises, genocides, environmental catastrophes, epidemics, wars – time and again things happen that we knew beforehand would a) occur with some likelihood or even certainty, and b) be absolutely horrible. And still we let them happen. And not just because we could not help it, but because somehow, all things considered, we did not want to. We haven’t done what we could have done. We didn’t want to know what we could have known.

Why? How come? What is this strange phenomenon about? And how can we improve ourselves? To find answers to those questions, an extraordinarily illustrious group of scholars from all sorts of disciplines assembled last week at the Wissenschaftskolleg in Berlin for a workshop under the (manifestly self-referential) title „Too big to handle“. Subtitle: „Why Societies ignore Looming Disasters“. I had the privilege to sit in the back, take notes, ponder and wonder. Rarely have I experienced so much intellectual stimulation and so many reasons for despair at the same time.

The conveners were the international lawyer Anne van Aaken (St. Gallen) and the evolutionary biologist Janis Antonovics (Virginia). Both were fellows at the Wissenschaftskolleg* in 2011. Among the many merits of this praiseworthy institution is the fact that people who do research on – at first sight – entirely different questions get to engage with each other and thus hit upon completely new research perspectives. The van Aaken/Antonovics encounter is a prime example: Antonovics was despairing over his observation that we don’t do enough against the „looming disaster“ of antimicrobial drug resistance, and in van Aaken he found a conversation partner troubled by rather similar worries, albeit for totally different reasons.

This was how the idea for this workshop came into existence. And beyond that, as Antonovics half jokingly announced, possibly even a new academic discipline, maybe under the nicely ambiguous name „disastrology“.

Idleness in the face of horror

Facing a „looming disaster“, why do we so often opt for idleness or other inadequate ways to react? The psychologist Andreas Glöckner (Göttingen) tried to explain this discrepancy by the necessity to decide quickly in a complex world full of uncertainties. In fact, information is often far too abundant, and time far too scarce, to weigh all pros and cons (or even to follow some heuristic rule of thumb) in order to find out which option promises the most profit. Instead, we choose the option that best fits a coherent interpretation of what is happening. If we think we know what is going on, we derive from that which information is relevant and which isn’t, and from that in turn we derive what to do. We blind ourselves to the possibly catastrophic consequences of our own actions and to warning signals incongruous with our thus constructed world view. Only when that world view collapses does our behavior flip, and it does so in a sudden, abrupt and uncoordinated manner – which can by itself have catastrophic consequences such as bank runs or mass panics.

The inclination towards wrong decisions in the face of a catastrophe hence appears to be hard-wired in individuals. But what does that tell us about the way society’s division-of-labour institutions handle catastrophic risks?

From climate change to Ebola, most big-time catastrophes are defined by, first, the fact that it takes a common coordinated effort by many to prevent them and, second, that resources, interests and costs are distributed highly unevenly among them. Plus, thirdly, every individual might be best off if he or she can get away with having the others shoulder all the costs. From that perspective, idleness can be a perfectly rational choice of action (or, rather, inaction).

Economists build complex game-theory models to reconstruct such decision-making constellations and to rank them by their chances of success. Todd Sandler (Dallas) and Scott Barrett (Columbia) put such models up for debate. They all assume that it is possible to calculate all positive and negative consequences in dollars and cents and thus make them comparable. But is it?
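The free-rider constellation behind such models can be illustrated with a toy public-goods game. The payoff numbers below are purely illustrative assumptions, not taken from the models Sandler or Barrett presented:

```python
# Toy n-player public-goods game: each player either contributes to
# disaster prevention (at cost `cost`) or free-rides. All contributions
# are multiplied and the benefit is shared equally among all players.
def payoff(contributes: bool, n_contributors: int, n_players: int = 10,
           cost: float = 1.0, multiplier: float = 3.0) -> float:
    """Payoff of one player, given the total number of contributors."""
    shared_benefit = multiplier * cost * n_contributors / n_players
    return shared_benefit - (cost if contributes else 0.0)

# Collectively, universal contribution beats universal idleness:
print(payoff(True, 10))   # 2.0 if everyone contributes
print(payoff(False, 0))   # 0.0 if no one does

# Individually, free-riding dominates: whatever the other 9 players do,
# not contributing pays more than contributing.
others = 5
print(payoff(False, others) > payoff(True, others + 1))  # True (1.5 > 0.8)
```

Because the individual return on one’s own contribution (multiplier/n_players = 0.3) is smaller than its cost (1.0), idleness is the dominant strategy even though everyone contributing would make everyone better off – exactly the structure described above.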

Costs and benefits of Cost/Benefit Analysis

The lawyer and economist Susan Rose-Ackerman (Yale) tackled the attempts by the EU Commission and the OECD to propagate cost/benefit analysis in order to rationalize politics in Europe. With respect to „looming disasters“, such an „applied utilitarianism“ that tries to balance all expectable benefits against all expectable costs is not just useless but downright detrimental. For big catastrophes that evolve over long stretches of time, cost/benefit analysis rewards political myopia, since future costs and benefits are discounted over time and thus tend towards zero in the long run. From that perspective it looks like a fairly good idea to remain idle and leave it to future generations to pick up the bill.
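The myopia effect of discounting is simple arithmetic; here is a minimal sketch, in which the 5% rate and the cost figure are illustrative assumptions:

```python
# Present value of a cost incurred `years` from now, under standard
# exponential discounting at annual rate `rate`.
def present_value(cost: float, years: int, rate: float = 0.05) -> float:
    return cost / (1.0 + rate) ** years

# A 100-billion disaster cost a century away, discounted at 5%,
# shrinks to well under one billion in today's terms:
print(round(present_value(100e9, 100) / 1e9, 2))  # 0.76
```

Under these assumptions, prevention measures costing more than about 0.76 billion today would fail the cost/benefit test – which is the sense in which the method builds myopia in.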

Still: cost/benefit analysis makes alternatives clear (Scott Barrett) and facilitates communication with political decision makers (Todd Sandler). The lawyer Christopher McCrudden (Belfast) chimed in that the closely related method of impact assessment makes it possible to bring non-marketable values such as human rights into the calculation.

Most around the table were sceptical, though. Economist Martin Hellwig (MPI Bonn) lashed out against cost/benefit analysis as „fake positivism“ that abstracts from value decisions and promises cost compensation that in practice never occurs: „This is what gives economics a bad name!“ Experts of cost/benefit analysis, said Susan Rose-Ackerman, should not pretend to know ex cathedra the right answers to societal value questions that are for democratic bodies to negotiate. Lumping costs and benefits together is a way to mask tradeoffs that need to be discussed publicly, added Anne van Aaken.

Speaking of Expertocracy

Expert knowledge versus popular will: when it comes to disasters and their prevention, science routinely finds itself confronted with that old dilemma, and not just rationalizing economists. This became visible when the lawyer Jonathan Wiener (Duke) held his presentation, allusively titled „The Tragedy of the Uncommons“.

By „uncommons“ Wiener means very, VERY rare, but also very, VERY catastrophic risks. Rare as in large-hadron-collider-produces-black-hole rare. Or asteroid-hits-earth-and-destroys-all-life rare. Or mars-mission-returns-with-extraterrestrial-life-on-board rare. All of these were examples Wiener quoted without any signal of ironic distance. While with familiar yet still rare risks (plane accidents, mine collapses) the public usually is more scared than the experts, with completely unfamiliar risks, according to Wiener, it is the other way around: those risks are, compared to what is at stake, woefully underestimated by the public. Experts who do research on those „uncommons“ and develop prevention mechanisms ought to be supported and their warnings heeded.

Anyone around the table could lay claim to the label of expert. But this was too much for most of them. In a world with a highly unequal distribution of life chances, it is in fact entirely rational for many to decide that there are more urgent problems to take care of than asteroids and Mars microbes, argued the philosopher Philip Kitcher (Columbia). The economist and historian Deirdre McCloskey (Illinois/Chicago) recalled that experts had predicted catastrophes, from nuclear winter to Saddam Hussein’s weapons of mass destruction, that never materialized in the end. In modern times, the imagination of possible catastrophes has increased enormously. Isn’t the danger much more on the other side – that we all end up hysterically worried about (what used to be called) God knows what?

Too bad that there was no sociologist of religion present who could have enriched the discussion with insights about the role of apocalyptic redemption and end-of-time fantasies. The climate scientist Detlef Sprinz (Potsdam) at least raised the issue of the Book of Revelation. The literary scholar Françoise Lavocat (Paris III) contributed the observation that contemporary literature, as opposed to 18th-century literature, has a marked preference for catastrophes of mythical dimensions – among them some of Wiener’s examples, from asteroids („Armageddon“) to world dominance by artificial intelligence („Matrix“).

Another reason why experts should occasionally exercise humility came up in a different context. The biologist Janis Antonovics, complaining about the stubborn refusal of medical science to take the findings of evolutionary biology on antimicrobial drug resistance into account, raised the issue of the self-referential way most scientific communities communicate. What counts in science is the applause you get from your own community, and propagating knowledge from outside is not necessarily the best way to get it. This is a general problem, remarked Susan Rose-Ackerman, and Martin Hellwig, too, was strongly reminded of the ways of his own discipline of economics. Occasionally, what science recognizes as a big disaster and what it doesn’t appears to be less a matter of cognition than of academic herd instinct.

The Power of Narrative

And yet: the problem that scientists’ warnings are of little avail in the face of looming disaster if nobody listens to them is real. Anne van Aaken wondered whether science shouldn’t develop more storytelling skills in order to reach people emotionally. The literary scholar Françoise Lavocat curbed her enthusiasm: that people like watching disaster movies doesn’t necessarily imply that they wish to be lectured. Thank God it’s fiction, after all.

A vigorous encouragement to make better use of the „power of narrative“ came from the biologist Peter Kareiva. He was the only researcher from outside academia in the room: he works as chief scientist for the world’s largest environmental organization, the Nature Conservancy (and, in this capacity, is apparently in the eye of an environmentalist-scholarly hurricane right now). His advice to science: don’t raise alarm about some remote catastrophe nobody feels individually responsible for (cue: „2 degrees of temperature rise!!“). Rather, address problems that hit the very people you want to get to take action. If you want to mobilize the management of Dow Chemical in the middle of arch-conservative Texas against climate change, you had better stay away from tipping points and the like and tell them about drinking water getting scarce in Texas if they don’t do anything about it.

Now, as the editor of Verfassungsblog I am the last person to object if somebody calls for more interaction between science and the political public – that goes without saying. But there is one thing I would like to contribute from my journalistic experience. During my time as an editor at a large newspaper I had the privilege of experiencing several epidemiological-cum-media boom-and-bust cycles: BSE, bird flu, SARS, you name it. Another example: does anybody remember the truly millenarian „Y2K problem“, when we all believed that at the stroke of midnight on January 1, 2000, the entire electronic world would come to an end?

At the beginning of all those episodes stood experts warning of catastrophes: very powerful narratives, and very successful ones, too. The public responded, a feedback loop occurred, and the more the public was gripped by fear, the more suction it developed, demanding ever more information to further feed the fear – and this went on and on until all of a sudden somebody remarked that actually nothing really bad had happened at all, and the whole gigantic wave collapsed within hours.

I don’t mean to be cynical, quite the opposite. My point is: scientists shouldn’t saddle the „power of narrative“ horse naively. You never know where it takes you, and it can turn out to be quite hard to control at times. And this much I know as a journalist: the public is a good old sport; time and again it happily joins every mass hysteria in town. But at some point it has had enough, too. Its willingness to be alarmed is a copious yet limited good, and scientists should think carefully about how much of it they want to consume, and what for. If they don’t, they might find out some gloomy millennial day that there isn’t actually anything left of it.

And that would be truly a catastrophe.

*Full disclosure: Verfassungsblog has been cooperating with the Wissenschaftskolleg since 2011.
