HadCRU and Rumsfeld [2004]

Wegman observed last summer that climate scientists had failed to involve statisticians to an appropriate degree in their work. Yesterday Simon Tett drew our attention to Brohan et al 2006 as an explanation of the uncertainties in HadCRUT3. Brohan et al, of which Tett is a coauthor, cite the prominent statistician Donald Rumsfeld as an authority for their uncertainty model. Brohan et al:

A definitive assessment of uncertainties is impossible, because it is always possible that some unknown error has contaminated the data, and no quantitative allowance can be made for such unknowns. There are, however, several known limitations in the data, and estimates of the likely effects of these limitations can be made [Rumsfeld, 2004].

Rumsfeld, 2004 has the following reference (to a 2002 press conference):

The Acronym Institute. Disarmament documentation. Back to disarmament documentation, June 2002. Defense secretary Rumsfeld press conference, June 6. “Secretary of Defense Donald H. Rumsfeld, press conference at NATO headquarters, Brussels, Belgium, June 6, 2002,” US Department of Defense transcript. http://www.acronym.org.uk/docs/0206/doc04.htm

The salient quote appears to be:

Question: Regarding terrorism and weapons of mass destruction, you said something to the effect that the real situation is worse than the facts show. I wonder if you could tell us what is worse than is generally understood.

Rumsfeld: Sure. All of us in this business read intelligence information. And we read it daily and we think about it and it becomes, in our minds, essentially what exists. And that’s wrong. It is not what exists. I say that because I have had experiences where I have gone back and done a great deal of work and analysis on intelligence information and looked at important countries, target countries, looked at important subject matters with respect to those target countries and asked, probed deeper and deeper and kept probing until I found out what it is we knew, and when we learned it, and when it actually had existed. And I found that, not to my surprise, but I think anytime you look at it that way what you find is that there are very important pieces of intelligence information that countries, that spend a lot of money, and a lot of time with a lot of wonderful people trying to learn more about what’s going in the world, did not know some significant event for two years after it happened, for four years after it happened, for six years after it happened, in some cases 11 and 12 and 13 years after it happened. Now what is the message there?

The message is that there are no “knowns.” There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. So when we do the best we can and we pull all this information together, and we then say well that’s basically what we see as the situation, that is really only the known knowns and the known unknowns. And each year, we discover a few more of those unknown unknowns. It sounds like a riddle. It isn’t a riddle. It is a very serious, important matter. There’s another way to phrase that and that is that the absence of evidence is not evidence of absence. It is basically saying the same thing in a different way. Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn’t exist. And yet almost always, when we make our threat assessments, when we look at the world, we end up basing it on the first two pieces of that puzzle, rather than all three.

Now there is considerable practical sense in what Rumsfeld says here, but I don’t see that these comments justify Brohan (Tett) et al citing them as authority for the claim:

There are several known limitations in the data, and estimates of the likely effects of these limitations can be made [Rumsfeld, 2004].

Also, merely as a bit of practical advice to the authors of Brohan et al 2006: despite the renown of Rumsfeld as a statistician, there is a distinct possibility that some, if not most, readers may think that the “estimates of the likely effects of these limitations” were not accurately made in this particular case. It’s a very odd citation. One wonders if anyone actually reads these papers.

17 Comments

  1. cbone
    Posted Jan 30, 2007 at 10:43 AM | Permalink

    That’s priceless.

    On a more important note: did you see the quick dog-and-pony show they did to try to justify the ridiculously low 0.0055°C/decade UHI adjustment? But the ultimate is the conclusion:

    “The same value is used over the whole land surface, and it is one-sided: recent temperatures may be too high due to urbanisation, but they will not be too low.”

    So they basically admit to biasing their sample on the high side by refusing to calculate the actual impact of the UHI.
    It must be nice to be able to pick and choose your data like that. I wish I could have done that in my engineering classes; it would have made my projects so much easier. You know, pick the inputs that give me the results I want instead of dealing with the results that the actual DATA gives me.
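    The one-sided nature of such an adjustment can be illustrated with simple arithmetic. A minimal sketch, in which every number other than the 0.0055 figure quoted above is hypothetical:

```python
# Sketch: if the true urban-warming contamination exceeds a uniform
# one-sided adjustment, a residual warm bias remains and never
# averages out, however many stations are combined.
true_uhi_per_decade = 0.02        # hypothetical contamination, degrees C/decade
adjustment_per_decade = 0.0055    # the uniform value discussed above
decades = 5

# Residual warm bias accumulated over the period
residual_bias = (true_uhi_per_decade - adjustment_per_decade) * decades
print(f"residual warm bias after {decades} decades: {residual_bias:.4f} C")
```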

  2. PHE
    Posted Jan 30, 2007 at 10:51 AM | Permalink

    This subject relates to a recent comment of mine on this site (part of which I will repeat).

    The ‘Science’ Editor of the UK’s Independent newspaper stated the following in a recent article on ‘global warming hysteria’:

    “The IPCC recognises that there are many “positive feedbacks” in the climate system – more apparently than the negative feedbacks that tend to modulate climate change – which could make matters worse as levels of carbon dioxide and global temperatures continue to rise. Some of these feedbacks are pretty well understood, but many are not. And there may even be some that we don’t even know about. This is one of the reasons why there are still many levels of uncertainty when it comes to the future.” Thus, the greater the uncertainty, the more we have to fear. It’s the unknown unknowns that could get us in the end!

    This is a regular approach of the AGW faithful. Uncertainty allows you to speculate. The greater the uncertainty, the greater the potential for speculation. Thus, you can suggest all sorts of extreme possibilities, and then imply that because we can’t disprove them, we have to assume they might happen, and that we need to take precautions. “We can’t risk doing nothing” is a common mantra.

    At the risk of being ‘political’, this was exactly the type of argument used for the Iraq War, and it is why Rumsfeld made those comments. Personally, I found the parallels between the ‘case’ for WMD in Iraq and the ‘case’ for AGW significant. The policymakers are leading the way, but bringing compliant ‘experts’ with them. Inevitably, when AGW proves to be a damp squib, it’s the experts who will get the blame for getting it wrong.

  3. Jean S
    Posted Jan 30, 2007 at 11:10 AM | Permalink

    I’m almost speechless. Someone with access, please check whether the citation also appears in the printed version:
    http://www.agu.org/pubs/crossref/2006/2005JD006548.shtml

    Seriously, what is the point of this citation? Is it just me, or is there really not even any humour value in it? Or are they trying to mock statisticians and/or the scientific practice of citing relevant literature?

  4. pochas
    Posted Jan 30, 2007 at 1:01 PM | Permalink

    #3:

    Rumsfeld’s discussion brought to mind another commonly heard expression, “Garbage in – garbage out.” What he is saying is that until you have reduced the “unknown unknowns” to the point where they cannot affect your conclusions, you have garbage and will have garbage for your conclusions.

  5. Posted Jan 30, 2007 at 1:08 PM | Permalink

    Congratulations to Prof Rumsfeld. Sean Carroll, although he is otherwise a somewhat obnoxious propaganda-maker, had a nice comment, which I endorse, about Rumsfeld’s analysis: Rumsfeld forgot the “unknown knowns”. 😉

    http://cosmicvariance.com/2006/11/10/toward-a-unified-epistemology-of-the-natural-sciences/

    Citing Rumsfeld could be funny but otherwise I think that he is a bright Gentleman.

  6. Steve McIntyre
    Posted Jan 30, 2007 at 1:14 PM | Permalink

    Sunlight is one of the best ways to reduce the unknown unknowns. These are the same characters as: We have 25 years invested in this work. Why should we let you see our data when your only objective is to find fault with it?

    Lubos, that’s a nice link. The link to “known unknowns” is rather pretty, although the article would have done better by linking that to Satchel Paige (?), the legendary pitcher of 50 years ago, who put it: It ain’t the things that you don’t know that cause you problems; it’s the things that you know that ain’t so.

  7. Sara Chan
    Posted Jan 30, 2007 at 1:14 PM | Permalink

    Re #3 (by Jean S). Here is the sentence in the published paper.

    There are, however, several known limitations in the data, and estimates of the likely effects of these limitations can be made (Defense secretary Rumsfeld press conference, June 6, Back to disarmament documentation, June 2002, London, The Acronym Institute (available at http://www.acronym.org.uk/docs/0206/doc04.htm)).

  8. Posted Jan 30, 2007 at 1:17 PM | Permalink

    …and I read it ‘[Rutherford, 2004]’. Too predictive a reading, I guess 😉

  9. Sara Chan
    Posted Jan 30, 2007 at 1:21 PM | Permalink

    Just to clarify my previous comment (#7): the text of the sentence that cites Rumsfeld is identical in the published and accepted versions. Only the referencing format has changed: the reference is now inline with the text. (The change was probably made by AGU copy editors, because AGU publication guidelines require that formal references be to formally published works; everything else goes inline.)

    I think that this could be really funny, if it didn’t show so much contempt for the peer-review system and readership.

  10. Paul Penrose
    Posted Jan 30, 2007 at 1:31 PM | Permalink

    Astonishing.

  11. Sara Chan
    Posted Jan 30, 2007 at 1:51 PM | Permalink

    Since the work of Brohan et al. appeared in one of the AGU’s most prestigious journals, it seems only fair to ask what sort of standards the AGU applies before accepting a paper for publication. Fortunately, that question has recently been answered.

    Scientific papers appearing in our journals are subject to rigorous scrutiny by scientific peers prior to acceptance….—John Orcutt, President of AGU [Letter to The Honorable Joe Barton, 08 August 2005]

  12. Derek Walton
    Posted Jan 30, 2007 at 1:57 PM | Permalink

    It reminds me of a story published in New Scientist a few years ago. A doctoral student went to defend his thesis and took in a bottle of malt whisky. He put it under his chair. The defense went well, and the student passed. He picked up the bottle and made to leave, but was asked what the whisky was for. He replied that the thesis committee should turn to a particular page and line. It read “To whichever examiner reads this line, I will give a bottle of malt whisky”.

    So in this case the question might be: Did the reviewers actually look at the references?

  13. per
    Posted Jan 30, 2007 at 4:41 PM | Permalink

    Forgive me if I am a bit at odds with the tone here; I read the authors as quoting Rumsfeld to make the point that the statistical model will be all fine and dandy, but will fail to take account of the real errors in the dataset. Which seems reasonable.

    The sad thing is that if you were to make that case explicitly, the referees might well reject your paper.

    None of which matters; issues regarding the availability of the underlying data, so that you could do your own analysis, seem more salient.
    per

  14. EP
    Posted Jan 30, 2007 at 9:40 PM | Permalink

    The trick is to find limits to the “unknown unknowns” if you can, which outside pure mathematics is impossible.

  15. Mark T.
    Posted Jan 31, 2007 at 9:46 AM | Permalink

    One wonders if anyone actually reads these papers

    Apparently you do. I’d venture that if you were on that committee, you’d also have been drinking some whisky.

    If they are using his quote as some sort of evidence that “yeah, you can never know everything,” I’ll bite, i.e. “we expect there to be unknowable uncertainties in the data.” If, however, they expect this as validation that their methods are correct because of the unknowable uncertainties, they’re just joking, right?

    Mark

  16. Posted Jan 31, 2007 at 10:27 AM | Permalink

    There may be additional sources of uncertainty as yet unquantified

    It seems likely that different groups of observations may be measuring SST in different ways even in recent decades, and therefore there may be unresolved bias uncertainties in the modern data. Quantifying such effects will be a priority in future work on marine data.

    Please hurry.

  17. Posted Feb 2, 2007 at 8:03 AM | Permalink

    Citing Rumsfeld on uncertainty is particularly inapt because he was remarking on knowledge (intelligence) gained against an enemy with intentions, whereas science attempts to gain knowledge about a mysterious but impartial Nature. It wasn’t always this way. As Carl Sagan wrote in his book “The Demon-Haunted World”, it used to be that unpleasant natural phenomena were considered to be literally the work of the devil. The whole scientific revolution has been a retreat from that world view. In the world of military intelligence, however, there really are demons and devils. That was the world that Rumsfeld was referring to.

    Most inferential statistics trades error for bias. We get sample sizes large enough that we can treat the uncertainties of estimates mathematically. This only works if the sample is unbiased. In measuring natural phenomena we can usually treat variation statistically because we are playing against Nature, which has no stake in the game. In financial auditing we have to worry about intentions. People, unlike Nature, can have bad intentions: pennies skimmed from millions of accounts are not just random errors if they all accumulate in one employee’s account.
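    The error-versus-bias point can be sketched numerically: averaging more measurements shrinks random error, but a systematic bias of the same size in every measurement survives any sample size. A minimal sketch with illustrative numbers only:

```python
import random

# Sketch: random error shrinks as the sample grows; a constant
# systematic bias in every measurement does not.
random.seed(0)
true_value = 10.0
bias = 0.5   # systematic offset present in every measurement

for n in (100, 10_000, 1_000_000):
    # each measurement = truth + bias + random noise
    mean = sum(true_value + bias + random.gauss(0, 2.0) for _ in range(n)) / n
    print(f"n={n:>9}: mean error = {mean - true_value:+.3f}")
# the mean error converges to the bias (about +0.5), not to zero
```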

    The reason this blog exists is that Steve decided to treat the Hockey Stick statistics not as one would treat data from Nature, but as data from a source with an agenda. That’s why it’s called Climate Audit, not Climate Analysis.