There's a piece in New Scientist about Martin Rees' new project at Cambridge focusing on so-called "existential" threats to humanity -- in particular, those caused by human technology: think Terminator-style robots, nuclear war, man-made plagues, and human-induced climate change. I just hope the project pays adequate attention to the human factor, informed by insights from the fields of risk communication and risk perception. It is a well-documented human foible to focus on extremely rare but dramatic threats (plane crashes) while thinking little about far more mundane and common -- and statistically far more likely -- threats (car crashes).
What concerns me is Rees' citation of the financial crisis as a Black Swan that nobody predicted. The problem is, the financial crisis was predicted (ever heard of Nouriel Roubini?), but policy makers and politicians ignored the warnings, just as they did with the warnings that preceded the 9/11 terrorist attacks. ("Mr. President, would you like to read today's Presidential Daily Briefing?" "No thanks, I'm going to my fake ranch in Texas to clear some fake brush.")

I am also concerned by Rees' statement in the New Scientist piece that we focus too much on "tiny risks that are widespread." Actually, the opposite is probably true. Experts in risk communication will tell you that people tend to have far greater fear of exactly the kinds of risks Rees thinks are too easily dismissed right now: man-made risks more than natural ones, and catastrophic risks that threaten large numbers of people at one time more than risks spread out over time and geography (which therefore appear small, though they are large in the aggregate). This isn't to dismiss Rees' project as unnecessary, but it should be grounded in a clear understanding of the realities of human risk perception.