Climate Change Science Is Biased
Scientists have been vastly understating the degree of global warming for decades.
Scholars Naomi Oreskes, Michael Oppenheimer, and Dale Jamieson are promoting their new book about the history of the study of climate change in an essay at Scientific American. They start off with the results, which are rather shocking:
Recently, the U.K. Met Office announced a revision to the Hadley Centre historical analysis of sea surface temperatures (SST), suggesting that the oceans have warmed about 0.1 degree Celsius more than previously thought. The need for revision arises from the long-recognized problem that in the past sea surface temperatures were measured using a variety of error-prone methods such as using open buckets, lamb’s wool-wrapped thermometers, and canvas bags. It was not until the 1990s that oceanographers developed a network of consistent and reliable measurement buoys.
Then, to develop a consistent picture of long-term trends, techniques had to be developed to compensate for the errors in the older measurements and reconcile them with the newer ones. The Hadley Centre has led this effort, and the new data set—dubbed HadSST4—is a welcome advance in our understanding of global climate change.
From a layman’s point of view, it’s rather remarkable that we were only off by “about 0.1 degree Celsius” using decades-old measuring techniques on something that would seem rather obviously challenging to measure. Then again, from a layman’s point of view, “about 0.1 degree Celsius” sounds like a rounding error. Not so much, it seems.
Because the oceans cover three fifths of the globe, this correction implies that previous estimates of overall global warming have been too low. Moreover it was reported recently that in the one place where it was carefully measured, the underwater melting that is driving disintegration of ice sheets and glaciers is occurring far faster than predicted by theory—as much as two orders of magnitude faster—throwing current model projections of sea level rise further in doubt.
A hundred times faster?! How could they be off by that much?
To me, that’s the more fascinating part of the essay. It turns out, science—particularly on a topic as controversial as this—is rather political.
These recent updates, suggesting that climate change and its impacts are emerging faster than scientists previously thought, are consistent with observations that we and other colleagues have made identifying a pattern in assessments of climate research of underestimation of certain key climate indicators, and therefore underestimation of the threat of climate disruption. When new observations of the climate system have provided more or better data, or permitted us to reevaluate old ones, the findings for ice extent, sea level rise and ocean temperature have generally been worse than earlier prevailing views.
Consistent underestimation is a form of bias—in the literal meaning of a systematic tendency to lean in one direction or another—which raises the question: what is causing this bias in scientific analyses of the climate system?
The question is significant for two reasons. First, climate skeptics and deniers have often accused scientists of exaggerating the threat of climate change, but the evidence shows that not only have they not exaggerated, they have underestimated. This is important for the interpretation of the scientific evidence, for the defense of the integrity of climate science, and for public comprehension of the urgency of the climate issue. Second, objectivity is an essential ideal in scientific work, so if we have evidence that findings are biased in any direction—towards alarmism or complacency—this should concern us. We should seek to identify the sources of that bias and correct them if we can.
In our new book, Discerning Experts, we explored the workings of scientific assessments for policy, with particular attention to their internal dynamics, as we attempted to illuminate how the scientists working in assessments make the judgments they do. Among other things, we wanted to know how scientists respond to the pressures—sometimes subtle, sometimes overt—that arise when they know that their conclusions will be disseminated beyond the research community—in short, when they know that the world is watching. The view that scientific evidence should guide public policy presumes that the evidence is of high quality, and that scientists’ interpretations of it are broadly correct. But, until now, those assumptions have rarely been closely examined.
We found little reason to doubt the results of scientific assessments, overall. We found no evidence of fraud, malfeasance or deliberate deception or manipulation. Nor did we find any reason to doubt that scientific assessments accurately reflect the views of their expert communities. But we did find that scientists tend to underestimate the severity of threats and the rapidity with which they might unfold.
Among the factors that appear to contribute to underestimation is the perceived need for consensus, or what we label univocality: the felt need to speak in a single voice. Many scientists worry that if disagreement is publicly aired, government officials will conflate differences of opinion with ignorance and use this as justification for inaction. Others worry that even if policy makers want to act, they will find it difficult to do so if scientists fail to send an unambiguous message. Therefore, they will actively seek to find their common ground and focus on areas of agreement; in some cases, they will only put forward conclusions on which they can all agree.
How does this lead to underestimation? Consider a case in which most scientists think that the correct answer to a question is in the range 1-10, but some believe that it could be as high as 100. In such a case, everyone will agree that it is at least 1-10, but not everyone will agree that it could be as high as 100. Therefore, the area of agreement is 1-10, and this is reported as the consensus view. Wherever there is a range of possible outcomes that includes a long, high-end tail of probability, the area of overlap will necessarily lie at or near the low end. Error bars can be (and generally are) used to express the range of possible outcomes, but it may be difficult to achieve consensus on the high end of the error estimate.
The push toward agreement may also be driven by a mental model that sees facts as matters about which all reasonable people should be able to agree versus differences of opinion or judgment that are potentially irresolvable. If the conclusions of an assessment report are not univocal, then (it may be thought that) they will be viewed as opinions rather than facts and dismissed not only by hostile critics but even by friendly forces. The drive toward consensus may therefore be an attempt to present the findings of the assessment as matters of fact rather than judgment. [emphases all mine—jj]
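The authors’ 1-to-10-versus-100 example is easy to make concrete. Here’s a minimal sketch in Python (the expert ranges are invented purely for illustration) of why the area of agreement lands at the low end whenever a few experts hold out for a much higher tail:

```python
# A minimal sketch of the consensus-overlap mechanism described above.
# The expert ranges below are made up; they mirror the authors'
# "1-10, but possibly as high as 100" example.

expert_ranges = [
    (1, 10),   # most experts: the value is somewhere between 1 and 10
    (1, 10),
    (2, 9),
    (1, 100),  # a minority thinks the high end could be far larger
]

# The "univocal" consensus is the range every expert can sign on to:
# the intersection of all the individual ranges.
consensus_low = max(low for low, _ in expert_ranges)
consensus_high = min(high for _, high in expert_ranges)

# The full range of expert views includes the long high-end tail.
full_low = min(low for low, _ in expert_ranges)
full_high = max(high for _, high in expert_ranges)

print(f"Reported consensus: {consensus_low}-{consensus_high}")   # 2-9
print(f"Full range of expert views: {full_low}-{full_high}")     # 1-100
```

Whatever numbers you plug in, the consensus range can never rise above the most conservative expert’s upper bound, which is exactly the systematic lean toward the low end the authors describe.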
I spend a lot of time over the course of the academic year talking to my students about various forms of bias in research, organizations, bureaucracies, military planning, and the like. Many of these biases are well known to those who study the phenomena but persist nonetheless.
None of what the authors say here is the least bit surprising to me given what I know about how groups of people work. And yet, because I don’t study this particular group, I naturally revert to a “rational actor model” assumption. That is, I tend to think of those in the physical sciences as simply reporting their findings without bias and never consider that they’re under political pressure to bias their estimates low to present the image of consensus. But, upon a moment’s reflection, of course they are.
Indeed, this is likely an intractable problem. Even if this book vaults to the top of the bestseller list and makes an impact on the public discourse, the underlying pressures will remain. Lay people simply don’t understand how science works and will naturally interpret uncertainty expressed as “It’s probably somewhere between 7 and 12 but could be as high as 27” as “We have no idea what the number is.”