Global Warming Bet
A professor at the University of Pennsylvania's Wharton School of Business has challenged Al Gore to a bet over global temperatures. The bet is for $10,000, and the professor, Scott Armstrong, is betting that global mean temperatures will be pretty much as they are now, while Al Gore would have to take the position that global mean temperatures will be (dangerously) higher than they are today.
‘Gore says there are scientific forecasts that the Earth will become warmer very rapidly. But I have not found a scientific forecast that supports that view. There are forecasts made by scientists, of course, but they are very different from a scientific forecast’, says Armstrong.–italics in the original
That is some pretty strong stuff. But Armstrong has quite the background in forecasting (Armstrong’s CV). He is a very well regarded expert on forecasting methods. Much like Steve McIntyre has done with several pieces of research into global warming, Armstrong and his colleague Kesten Green have audited a large number of forecasts that have gone into the Fourth Assessment Report of the IPCC on Global Warming. Their conclusion in regards to the forecasts,
In 2007, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme issued its updated, Fourth Assessment Report, forecasts. The Intergovernmental Panel on Climate Change’s Working Group One Report predicts dramatic and harmful increases in average world temperatures over the next 92 years. We asked, are these forecasts a good basis for developing public policy? Our answer is “no”.
Interestingly, one of their objections is the following,
Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems. (As is the case with the IPCC process).
In other words, one of the most touted aspects of the pro-global warming side of the issue is actually not as good as it often sounds. I have to admit I had not heard of this problem with forecasts before; it is a rather counter-intuitive claim.
Another problem which isn’t counter-intuitive was this one,
Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. That is, they tend to magnify one another. Ascher (1978), refers to the Club of Rome’s 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, “in our model about 100,000 relationships are stored in the computer.” (The first author was aghast not only at the poor methodology in that study, but also at how easy it was to mislead both politicians and the public.) Complex models are also less accurate because they tend to fit randomness, thereby also providing misleading conclusions about prediction intervals. Finally, there are more opportunities for errors to creep into complex models and the errors are difficult to find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review of long-term energy forecasts for the US made between 1950 and 1980.
And this one,
Given even modest uncertainty, prediction intervals are enormous. For example, prediction intervals expand rapidly as time horizons increase so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.
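A quick way to see why intervals balloon, under the simplest assumption I can pick (mine, not Armstrong and Green's): if a quantity evolves as a random walk with step standard deviation sigma, the uncertainty h steps ahead has standard deviation sigma * sqrt(h), so the ~95% prediction interval widens with the square root of the horizon.

```python
import math

SIGMA = 1.0  # standard deviation of a single step (illustrative units)

def interval_halfwidth(h, sigma=SIGMA, z=1.96):
    """Half-width of the ~95% prediction interval h steps ahead
    for a random walk: z * sigma * sqrt(h)."""
    return z * sigma * math.sqrt(h)

# The interval is 2x as wide at 4 steps out, 5x as wide at 25 steps out.
for h in (1, 4, 25):
    print(h, interval_halfwidth(h))
```

And a random walk is the friendly case; with model error and parameter uncertainty stacked on top, the intervals grow faster still.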
In regards to climate forecasting in particular they offer some pretty grim news,
- What will happen to the mean global temperature in the long-term (say 20 years or longer)?
- If accurate forecasts of mean global temperature changes can be obtained and these changes are substantial, then it would be necessary to forecast the effects of the changes on the health of living things and on the health and wealth of humans. The concerns about changes in global mean temperature are based on the assumption that the earth is currently at the optimal temperature and that variations over years (unlike variations within years) are undesirable. For a proper assessment, costs and benefits must be comprehensive. (For example, policy responses to Rachel Carson’s Silent Spring should have been informed by forecasts of the number of people who might die from malaria if DDT use were reduced).
- If reliable forecasts of the effects of the temperature changes on the health of living things and on the health and wealth of humans can be obtained and the forecasts are for substantial harmful effects, then it would be necessary to forecast the costs and benefits of alternative policy proposals.
- If reliable forecasts of the costs and benefits of alternative policy proposals can be obtained and at least one proposal is predicted to lead to net benefits, then it would be necessary to forecast whether the policy changes can be implemented successfully.
In other words, even if you could simply say, “Temperatures are going to rise by quite a bit, and we are pretty certain of these forecasts,” you’d still be left with a boat load of forecasting to do.
Has any of this been done? In a word, “No.” William Nordhaus has done some work on this and his conclusions are quite different from Al Gore’s “We are destroying the Earth!” claims. The Stern Report also did some work in this area and it was not well received by some notable people in the areas of economic growth theory.
Some might be tempted to say, “Yeah, but these are scientists. They are experts and are supposed to be objective, so can’t we listen to their recommendations, forecasts, and advice?” In a word, “No.” Why not? Well, Armstrong and Green give us the answer,
Comparative empirical studies have routinely concluded that judgmental forecasting by experts is the least accurate of the methods available to make forecasts. For example, Ascher (1978, p. 200), in his analysis of long-term forecasts of electricity consumption found that judgment was the least accurate method.
In other words, going to the expert and asking him for his expert opinion really isn’t all that great. Armstrong and Green point to one study comparing forecasts by experts with forecasts by non-experts, which found that generally there was no difference in predictive accuracy. In short, we should probably rely on something other than just expert opinion. And this gets back to the agreement amongst experts noted above: it is only weakly correlated with forecast accuracy.
Here are some particularly interesting parts, at least to me,
Pilkey and Pilkey-Jarvis (2007) concluded that the long-term climate forecasts that they examined were based only on the opinions of the scientists. The opinions were expressed in complex mathematical terms. There was no validation of the methodologies. They referred to the following quote as a summary on their page 45: “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation and eventually build a structure which has no relation to reality. (Nikola Tesla, inventor and electrical engineer, 1934.)”
Carter (2007) examined evidence on the predictive validity of the general circulation models (GCMs) used by the IPCC scientists. He found that while the models included some basic principles of physics, scientists had to make “educated guesses” about the values of many parameters because knowledge about the physical processes of the earth’s climate is incomplete. In practice, the GCMs failed to predict recent global average temperatures as accurately as simple curve-fitting approaches (Carter 2007, pp. 64–65) and also forecast greater warming at higher altitudes when the opposite has been the case (p. 64). Further, individual GCMs produce widely different forecasts from the same initial conditions and minor changes in parameters can result in forecasts of global cooling (Essex and McKitrick, 2002). Interestingly, modeling results that project global cooling are often rejected as “outliers” or “obviously wrong” (e.g., Stainforth et al., 2005).–bold added
Actually, including one’s opinions, or as I like to call them, beliefs into one’s research is fine. However, one cannot stop there. One’s beliefs are fine as a starting off point, but those beliefs should be updated according to data. There are ways to do this in a scientifically rigorous fashion, but it doesn’t seem that in regards to forecasting models that this was done.
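One standard, rigorous way to do that updating is conjugate Bayesian inference. A minimal Beta-Binomial sketch of my own (the numbers are purely illustrative): start from a prior belief that some event is likely, observe data that disagrees, and let the posterior shift toward the data.

```python
def update_beta(prior_a, prior_b, successes, failures):
    """Beta-Binomial conjugate update: a Beta(a, b) prior combined with
    s successes and f failures yields a Beta(a + s, b + f) posterior."""
    return prior_a + successes, prior_b + failures

# A prior belief that the event occurs with probability ~0.8 ...
a, b = 8, 2
prior_mean = a / (a + b)  # 0.8

# ... confronted with data in which it occurred in only 3 of 10 trials.
a, b = update_beta(a, b, successes=3, failures=7)
posterior_mean = a / (a + b)  # 11 / 20 = 0.55
```

The belief is the starting point, but after the data arrives the estimate has moved substantially toward what was actually observed, and it keeps moving as more data comes in. That is the step the forecasting models apparently skipped.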
Overall, it is more evidence that prior to implementing any policy in regards to global warming the research behind things like the IPCC needs to be closely audited. We do this with various utilities in this country. There are small armies of auditors at various investor owned utilities that look at money, usage, customers, where the revenues go, whether rules are being followed, etc. And how much money do all of these utilities account for? Maybe several hundred billion dollars? Chump change when compared to the potential costs of dealing with climate change. Taking the time and the resources to make sure that the forecasts are good and follow acceptable methodologies and procedures seems like a no brainer. Of course, despite this, I imagine there will be a huge backlash against such an idea.
See also the website Climate Bet, a site that is run by Prof. Armstrong about the bet he is proposing to Al Gore.