North has a Texas A&M seminar presentation here (since deleted; an archived copy is available here). North is a nice and decent guy, but this is a frustrating presentation in a lot of ways. At about minute 55, he describes the panel's operating procedure by saying that they “didn’t do any research”, that they just “took a look at papers”, that they got 12 “people around the table” and “just kind of winged it.” He said that’s what you do in these sorts of expert panels. Obviously I suspected as much, but it’s odd to hear him say it.
I win the CA Psychic Prediction Award.
Folks,
invest in us, trust us, we will make you rich without your ever having to do any work.
Or how some share brokers analyse data.
Here’s an extract from an article in The Chronicle of Higher Education, quoting North:-
You can read the whole article here, if you’re interested (not found by me, BTW, found by http://www.junkscience.com).
Doh! Should have read further down the front page before I pulled the posting trigger, shouldn’t I?
Delaying tactic? Maybe scientists publishing uncertainty-free hockey stick graphs in IPCC reports is a fast-tracking tactic?
From the link in #3:
This shows why I have backed JMS’s view that Bradley is a highly competent dendroclimatologist. Now if only that BCP chronology were updated …
Gerry North is online right now at http://www.chronicle.com. Send him a question.
RE:
“Question from Patrick Frank, Stanford University:
The original hockey stick has been shown not just flawed but wrong. Why was the NAS committee unable to clearly state that? A more basic question: There is no analytical theory that can extract a growth temperature from tree ring widths or tree ring densities. On what scientifically valid grounds, therefore, can anyone possibly “calibrate” a tree ring series using temperature vs. time data?
Gerald North:
I am not a tree ring expert, but we had an expert on our panel. We also listened to several experts and I read most of Fritts’s book on the subject. I concluded that there is something to the signals extracted from tree rings. I suppose we disagree on this matter. Tree rings have been studied for about a century now and I believe that they are probably the best indicator we have at this time. Other proxies will undoubtedly become better as they are developed. When this happens we will have a check on the tree rings. So far the tree ring estimates of surface temperature changes do agree with borehole temperatures in ground and ice and with glacial length data for the last 400 years.”
********
Notice how he completely dodged “The original hockey stick has been shown not just flawed but wrong. Why was the NAS committee unable to clearly state that?”
That’s what happens when you bundle questions. Sub-questions are easily dodged.
Question from Stephen McIntyre:
The NRC Panel stated that strip-bark tree forms, such as found in bristlecones and foxtails, should be avoided in temperature reconstructions and that these proxies were used by Mann et al. Did the Panel carry out any due diligence to determine whether these proxies were used in any of the other studies illustrated in the NRC spaghetti graph?
Gerald North:
There was much discussion of this matter during our deliberations. We did not dissect each and every study in the report to see which trees were used. The tree ring people are well aware of the problem you bring up. I feel certain that the most recent studies by Cook, D’Arrigo and others do take this into account. The strip-bark forms in the bristlecones do seem to be influenced by the recent rise in CO2 and are therefore not suitable for use in the reconstructions over the last 150 years. One reason we place much more reliance on our conclusions about the last 400 years is that we have several other proxies besides tree rings in this period.
Question from Stephen McIntyre:
Is it your view that meaningful error bars can be obtained from calibration period residuals using Mann et al methodology, considering both their regression and principal components methodology?
Gerald North:
This is a difficult question (you always pose difficult questions!). My own view (not necessarily the committee’s) is that the verification period is misleading. I do not think there is enough data in that period to really nail down the matter. There are also the questions about using the mean (low frequency) versus the variations (higher frequency) parts in the verification procedure. Personally, I like the way Mann did it better than the others, because it is the long term stuff we want to check on. But this is a personal opinion. The fact is, there is no one way to do this — especially when we have so little data. That is why the committee was reluctant to put error bars on the early part of the record (or even the late part).
C’mon folks. Ask North a question.
I’ve always admired your psychic abilities, John A.
Mark
re: #11
What exactly does he mean here? I’m certainly not the expert here, but isn’t this precisely the sort of thing which gets messed up because of either overfitting or colored errors?
Some of the answers are weird. For example, he says that the Committee was reluctant to put error bars on the reconstruction because they can’t be calculated, but he states in his article that the WSJ’s omission of those error bars was an error.
This statement alone highlights the uncertainty. I.e., “we are so uncertain of the results we cannot make valid claims to the level of uncertainty!”
Mark
Question from Concerned public servant scientist:
Dr. North, I, like you, have the utmost confidence in dendroclimatologists such as Dr. Malcolm Hughes, co-author on the original hockey-stick paper. But given the importance of bristlecone/foxtail pines (and Dr. Gordon Jacoby’s Gaspé cedars) to the “global” temperature reconstruction, what is one to make of the fact that these chronologies have not been updated for what is now 8 years? If new samples have been taken – and I understand they have – why do you think the data have not been published? Doesn’t this suggest to you that they are dragging their heels? Why would they do that?
Gerald North:
Sorry, I do not know these individuals more than as acquaintances. Hence, I cannot answer any questions about motivation. I can say, however, that if they could prove the hockey stick or spaghetti graphics wrong, I am sure they would jump at the opportunity — and what scientist wouldn’t?
North says:
(a) “The problem is that in these kinds of reconstructions, the errors are not always quantifiable as they are in purely statistical sampling errors where we can really quantify the error margins. Here we are really into the unknown and the biases are not well understood.”
(b) “When we put the forcing into our models (even with their uncertainties) we are able to link the cause and effect pretty certainly.”
How on earth did he put the uncertainties into the models in (b) if they are not in fact quantifiable in (a)? That was a clever dodge. Fact is, the uncertainties are probably so large that nothing conclusive can be said about the magnitude and precision of the estimate of the A in AGW.
My question :
In your NRC report and the subsequent press conference, you described the MBH hockey stick as plausible. To me, this means that it is not obviously wrong.
You also made reference to the “cartoon” chart in the first IPCC report which showed a Mediaeval Warm Period that was warmer than today. Would you also regard this chart as plausible? If not, why not?
Keeping it simple …
Question from Joel McDade, bystander:
Greetings Dr. North: I am curious what you thought of the primary part of the Wegman Report, that dealing with the statistical issues in Mann, et al. Specifically, the statement (or similar), “Incorrect mathematics + correct result = bad science.” I must say that the NAS Report appeared, to me, to find fault with the Mann methodology but then went on to seemingly endorse the result. The latter was the media’s take, anyway. TIA
Gerald North:
There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al., results were not ‘wrong’ and the science was not ‘bad’. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).
“The minor technical objections serve as a weapon for those special interests who want to delay any action on GW.”
This gets my vote for understatement of the year. I find it morbidly amusing that serious questions that undermine the methodology, data integrity, and overall validity of a peer-reviewed study only rise to the level of “minor technical objections.” I didn’t realize that eviscerating a paper of any valid conclusions is only a minor technical objection. I got a good laugh out of that one.
Huh. I’ve been censored.
oops, already posted
#23. I don’t think that you were “censored”; they had to pick and choose and it wasn’t screening by realclimate. I would have liked to see the answer as it was a good question. Not a lot of meat in the responses.
If I were to ask a question, it would be:
I’m not yet strong enough with the particulars of the impact to back this up. I am working on it, albeit slowly.
Mark
My reading of Rutherford code is the same as yours. I think that you’re on the right track. It’s pretty amazing to run into another centering issue.
The following questions did not make it to the discussion:
Q: It is often said that the scientific process, under peer-review, is “self-correcting”. As a scientist I recognize this. However it also seems to me that the rate of self-correction is painfully slow compared to the fast wheels of policy. If bad science is fast-tracked to the level of policy, and that science is ultimately reversed, should the victims of that policy be compensated?
Q: I sense an important contradiction on your statement about how uncertainties are handled. You say:
How do the uncertainties get put into the models in (b) if they are not in fact quantifiable in (a)? (And I realize that (a) was in reference to multiproxies, and (b) GCMs and the like. Still …) Don’t you agree that the uncertainties may be so large that nothing conclusive can be said about the magnitude and precision of the estimate of the A in AGW? For example a CO2 warming projection of 3±4°C dictates a very different policy than one of 3±1°C.
(Delurk) I asked the following questions (not in time):
(1)
2. (paraphrase) did the committee agree to the changed objective (broadened focus, specific-Mann-examination omitted)?
3. (paraphrase) how do we get Steve to finish thoughts, disaggregate issues, and publish.
This didn’t make it either:
Nice to have you back, Jean S.
re #26: I think you are on a right track 😉
re #31: Thanks. I should be preparing my lectures and finishing papers, but I couldn’t help coming here to see what’s going on 🙂
How ’bout inviting Dr. North to come and answer some of these questions that didn’t get answered (I had one myself)? After all, he’s not such a bad egg:)
He does seem like an OK guy, at least, he’s not nearly as antagonistic as those he supports.
BTW, I HAVE done some tests with the regem code, and I have also plotted the means out at various stages of the algorithm. The results are interesting, though I’m not ready to comment as I don’t fully understand WHY the plots look the way they do. One note, “zero mean” does not ever exist unless by fluke for a single proxy. The largest means are always in the proxy, rather than instrumental, data (which makes sense because the instrumental data is inherently +/- from some mean value, near zero to begin with).
Mark
I submitted a question, but the discussion was already closed.
I loved the question by Joel McDade:
Greetings Dr. North: I am curious what you thought of the primary part of the Wegman Report, that dealing with the statistical issues in Mann, et al. Specifically, the statement (or similar), “Incorrect mathematics + correct result = bad science.” I must say that the NAS Report appeared, to me, to find fault with the Mann methodology but then went on to seemingly endorse the result. The later was the media’s take, anyway. TIA
And North answered that, yes indeed, one actually can get the “right” answer from faulty methods. Unbelievable!!
RE: #36 – “And North answered that, yes indeed, one actually can get the “right” answer from faulty methods. Unbelieveable!!”
The similarities between AGW obsession and uniformitarianism are uncanny. Interestingly, one of the straws that broke the latter’s back was a book funded by Ciba Geigy on plate tectonics; I think it came out about 1970 or ’71. Not making any predictions here, for it seems that there are no corporations with the courage or even the knowledge to fund a similar thing in this case.
Re #35
It’s easy to be an “OK guy” when you’re not the protagonist. The issue here is competence, not congeniality. And one should not be too hasty judging North’s competence. Everyone loves a “grandpa” figure. Everyone, that is, except the cold, hard scientist who loves only the facts.
Oh, I’m not. I was just noting that at least he’s not an ass, which gets to be tiresome.
Competence is another issue altogether. Incompetence, or at least, an unbending will to believe flawed science just because “it must be right” is equally tiresome, though you don’t want to punch a nice, but incompetent grandpa. The same cannot be said for an incompetent ass.
Mark
The nice thing is that no one need punch grandpa in this case. Because it’s his arguments that are flawed, not his being. Every rational scientist since Socrates understands that the reason arguments are made public is to maximize the opportunities for punching. So punch away. North has surely punched a few in his time. Maybe even some grandpas.
The NAS panel should have edited one of its conclusions.
“- 30 year averages highest in 400 years” – should be rewritten to:
– 30 year averages highest in 400 years but part of the increase is simply recovery from a relatively cold period over the past 400 years;
This is a much more accurate statement in terms of explaining the situation to the general public.
Re#17:
Could he possibly be serious? People are champing at the bit to take on the hockey stick and face the slander and attacks of the hockey team – not lining up to publish the status quo and be a part of the “consensus”?
Re #42:
Cute: rhetorical question as dodge. I laughed at this one too.
The fact is, for an insider taking on the team, there would be much to lose, materially, and little to gain, in terms of honor – especially among the “save the earth” moral majority. Better to ask “what scientist would?” [Ans: Someone very bold, with a huge ego – a lone wolf with all the necessary stats training, climatology training, access to data and codes, etc … Fair to ask: Does such a person even exist?]
“”At minute 55 or so, he describes panel operating procedure by saying that they “didn’t do any research”, that they just “took a look at papers”, that they got 12 “people around the table” and “just kind of winged it.”””
Probably they were led by the nose by permanent staffers juiced into the Kyoto lobbies. First question – who literally wrote the report? Its narrative structure reads like corporate public-relations clean-up, not scientific study.
Was it Ken Fritsch or some equally cogent contributor who mentioned bureaucratic inertia & policy entrenchment as one force to fear, regardless what direction is chosen?
And somehow this qualifies as a “peer reviewed” report yet Wegman’s does not. Oy vey.
Mark
He really did say that. Wegman did some checking of code, which the North panel did not.
Would they even be qualified to check the code that RM05 used anyway? Heck, it’s bad enough even for those of us who can read Matlab and understand the methodology, let alone for those with neither.
Mark
The scientist that can’t get a paper past the hockey team peer review roadblock probably wouldn’t, but not for the right reasons…
Wow! As a non-scientist I would like to say just how much confidence this inspires in me regarding the trillion dollar public policy decisions being based on this ‘research’.
I’m glad to know that they take the prospect of the world’s economy being stood on its ear seriously enough to do no research and ‘wing it’. Why not just put on Carnac’s turban and hold the data up to their foreheads? Amazing.
RE 26, Mark T, aren’t they just making an ergodicity assumption? What line are you looking at in the script? Have you looked at the use of the filtfilt function to lowpass filter the data to form the high- and low-frequency terms? For the non-Matlab folks, filtfilt is a noncausal, zero-phase filter: it filters the data in the forward direction, then flips the filtered data and filters that flipped data to get the result. The one-pass filter call had been commented out.
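For readers without Matlab, the forward-backward mechanism described above is easy to demonstrate. Here is a minimal Python/numpy sketch using a toy first-order lowpass (a stand-in, not the filter in the actual RM05 script): the one-pass filter delays the peak of a symmetric pulse, while the forward-backward version leaves it in place.

```python
import numpy as np

def causal_lowpass(x, alpha=0.1):
    """One-pass (causal) first-order lowpass: y[n] = (1-alpha)*y[n-1] + alpha*x[n]."""
    y = np.zeros_like(x)
    acc = 0.0
    for n, v in enumerate(x):
        acc = (1.0 - alpha) * acc + alpha * v
        y[n] = acc
    return y

def zero_phase_lowpass(x, alpha=0.1):
    """Filter forward, flip, filter again, flip back -- the filtfilt idea."""
    forward = causal_lowpass(x, alpha)
    return causal_lowpass(forward[::-1], alpha)[::-1]

# A symmetric pulse: the one-pass filter delays its peak; the two-pass one does not.
t = np.arange(400)
pulse = np.exp(-0.5 * ((t - 200) / 20.0) ** 2)

one_pass = causal_lowpass(pulse)
two_pass = zero_phase_lowpass(pulse)
print(np.argmax(pulse), np.argmax(one_pass), np.argmax(two_pass))
```

This is why filtfilt is called zero-phase: the phase lag of the forward pass is cancelled by the backward pass, at the cost of squaring the magnitude response.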
Ergodicity w.r.t. which part of my statement? Do you mean the E{E{X}} portion? If so, then maybe they are, but it is not relevant to that implementation since each proxy a) measures something different (not all are tree-rings) and b) each has differing statistics (very easy to show).
As for filtfilt, it does more or less work. At least, what they refer to as the high- and lowpass pairs really are uncorrelated after the filtfilt function. You can perform the function on white noise, for example, then subtract that result from the original data to get the highpass portion, and do a cross-correlation between them. The r is near 0 with a p near 1. Now, how that mucks with the data used later on, particularly for reconstruction, I have not investigated.
What filtfilt does is provide very steep filtering walls (the response is squared), which separates out the low-frequency data better. However, typical dyadic decompositions use something akin to a quadrature-mirror filter, though their specific implementation obviously is not looking for an exact pi/2 cutoff (I don’t recall, offhand, their actual cutoff point). I.e., typically you would take the “mirror” highpass filter and apply it to the original data to get the proper output. It is definitely a non-standard implementation, and I think its usefulness needs to be proven.
Mark
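The decorrelation check described in the comment above is easy to reproduce. A hedged sketch in Python/numpy, using a simple 21-point symmetric moving average as the zero-phase lowpass instead of the steep Butterworth/filtfilt pair in the actual script, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 20000)    # white noise stand-in for a proxy series

# A symmetric (hence zero-phase) lowpass: a 21-point moving average, standing in
# for the much steeper filtfilt-based lowpass in the RM05 code.
k = 21
h = np.ones(k) / k
low = np.convolve(x, h, mode="same")
high = x - low                      # the "highpass" residual described above

# The correlation between the two pieces comes out near zero.
r = np.corrcoef(low, high)[0, 1]
print(r)
```

For this particular kernel the two pieces are uncorrelated in expectation; a steeper (filtfilt-style) filter leaves only a small positive correlation concentrated in its transition band, so the qualitative picture is roughly the same.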
Phil B., there’s also a stationarity argument. In a non-stationary environment, block methods (no, not their stepwise method) or online (adaptive) implementations can overcome _some_ of the varying statistics. If they vary too rapidly (there is an equation I’ve seen in the CA text, I think), then it becomes an impossible task to track the changes.
Mark
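The point about block methods coping with slowly varying statistics can be illustrated with a toy example (pure numpy, nothing to do with the actual proxy data): a single full-record mean badly misfits a series whose mean drifts, while per-block means track it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonstationary series: a slowly drifting mean plus white noise.
n = 2000
drift = np.linspace(0.0, 4.0, n)     # the underlying mean drifts from 0 to 4
x = drift + rng.normal(0.0, 1.0, n)

# One global mean for the whole record vs. one mean per 100-sample block.
global_err = np.mean(np.abs(np.mean(x) - drift))

block = 100
block_means = x.reshape(-1, block).mean(axis=1)
block_est = np.repeat(block_means, block)   # piecewise-constant estimate
block_err = np.mean(np.abs(block_est - drift))

print(global_err, block_err)
```

If the statistics vary too quickly relative to the block length, of course, even the block estimates fail, which is the tracking limit mentioned above.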
Look at their functions center and center2. They’ve taken Schneider’s original script, which seems to be correct, and modified it with an additional mean/nanmean on the output vector. I don’t recall the specifics offhand. I can probably comment further tomorrow evening when I’m home again (tonight is pool night).
Mark
Re #52, Mark T., Yes, the E{E{x}}; I thought you were referring to a single proxy or gridcell temp. Re #53, RegEM assumes the time series are wide-sense stationary. I haven’t dug deep enough myself, but does RM05 use their reconstructed gridcell temps to bootstrap the stat calculations and also reconstruct earlier gridcell temperatures?
Is that Beunadonna?
Nope, they take the mean of each proxy, then the mean of the means and use that to create a zero mean _matrix_, not zero mean _vectors_.
They aren’t, however, from what I can tell. Maybe over small blocks, say 100 years or so, but in general, no.
I can’t answer this question yet.
Mark
Their initial “centering,” btw, is done on the years 1899-1959, not even the whole vector length.
That’s the gist of the “centering issue” that has been discussed here.
Mark
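The distinction drawn in the last few comments, zero-mean columns versus a zero-mean matrix, and full-record versus sub-period centering, can be made concrete with a toy numpy example (hypothetical numbers, not the Rutherford/RM05 data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "proxy matrix": 200 years x 3 proxies with very different baselines.
X = rng.normal(0.0, 1.0, (200, 3)) + np.array([0.0, 5.0, -3.0])

# (a) Center each proxy by its own full-period mean: every column is zero-mean.
col_centered = X - X.mean(axis=0)

# (b) Subtract a single grand mean (the mean of the column means): the matrix as a
#     whole is zero-mean, but individual columns keep large offsets.
grand_centered = X - X.mean(axis=0).mean()

# (c) Center on a sub-period only (cf. the 1899-1959 base period): columns are
#     zero-mean over the sub-period but generally not over the full record.
sub = slice(0, 60)
sub_centered = X - X[sub].mean(axis=0)

print(np.abs(col_centered.mean(axis=0)).max())    # ~0: zero-mean vectors
print(np.abs(grand_centered.mean(axis=0)).max())  # large: zero-mean matrix only
print(np.abs(sub_centered.mean(axis=0)).max())    # small but nonzero
```

Whether any of this matters downstream depends on what the algorithm does with the residual column offsets, which is exactly the open question in the comments above.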
Hey, TCO. Nice to hear from you.
AR4 anyone?
I didn’t understand this one:
I think the hockey stick says ‘climate is extremely sensitive to CO2’.
Dear Dr Gerald North,
That is a disgraceful answer, and you of all people should be ashamed to endorse faulty methodology which somehow gets the “right answer”. Would you allow any of your students to make such a statement? Would a PhD thesis ever be accepted if the candidate were to make such a statement?
The endorsement of bad scientific methodology that somehow gets the “right answer” is a statement of blinkered religious dogma, not of science. Such a response could equally endorse the cold fusion claims of Pons and Fleischmann or the stem cell results claimed by Hwang Woo-suk.
Good science goes hand-in-hand with good scientific ethics, and it is simply unethical to endorse the demonstrably bad and wrong methodology of MBH98/99 because it got “the right answer”.
Is this the same Gerald North?
http://www.met.tamu.edu/people/faculty/north.php
I repeat; “The group also collaborates with statisticians and mathematicians on problems of observing system error analysis.”
Could somebody do me a favor – could someone make an audio clip of North’s comment about “just sort of winged it”, which occurs at minute 55 or so, in a form that I can use in a PowerPoint presentation? If so, email it to me. Thanks. (I’m editing my presentation and watching Federer v Roddick.)
#64. One of you sent me the clip – thanks very much.
I found this overview on a site unrelated to climate science; it sure fits:
FIRST, corrupt science is science that moves not from hypothesis and data to conclusion but from mandated or acceptable conclusion back to selected data in order to reach the mandated or acceptable conclusion. That is to say, it is science that uses selected data to reach the “right” conclusion, a conclusion that by the very nature of the data necessarily misrepresents reality.
SECOND, corrupt science is science that misrepresents not just reality, but its own process in arriving at its conclusions. Rather than acknowledging the selectivity of its process and the official necessity of demonstrating the right conclusion, and rather than admitting the complexity of the issue and the limits of its evidence, it invests both its process and its conclusions with a mantle of indubitability.
THIRD, and perhaps most important, whereas normal science deals with dissent on the basis of the quality of its evidence and argument and considers ad hominem argument as inappropriate in science, corrupt science seeks to create formidable institutional barriers to dissent through excluding dissenters from the process of review and contriving to silence dissent not by challenging its quality but by questioning its character and motivation. In effect then, corrupt science is science that is flawed in both its substance and its process and that seeks to conceal these essential flaws. It is essentially science that wishes to claim the policy advantages of genuine science without doing the work of real science.
Sadly, you are correct. It does fit.
#61
This keeps raising its head, and was the subject of a delightfully confused piece some while ago at RC. The argument is that if CO2 is the most significant factor, then large-ish changes in temperature over a period when we know CO2 to have been relatively stable imply absolutely horrendous changes in temperature in prospect from the current and future projected levels.
Looking at it another way, a significant MWP would cast huge doubt on the whole CO2-AGW hypothesis since, from the argument above, it would imply that temperatures should already be massively higher than they are in reality.
Looking at it yet another way, a significant MWP would imply that there is more than one thing that affects the global temperature. Maybe that big shiny thing in the sky that keeps us warm?
North says (quote in #61):
Nonsense. The fact that the PDO exists does not imply that the PDO is driven by, or sensitive to, “external perturbations”.
And the fact that the temperature naturally varies by tens of degrees every twenty-four hours does not mean that it is therefore more sensitive to, say, CO2.
w.
2 Trackbacks
[…] Surface Temperature Reconstructions is cited in the EPA Technical Support Document. In a CA thread here, I quoted comments by the panel chairman, Gerry North, in which he stated that they […]
[…] I am also pleased by the new interest of these scientists in due diligence. Because journals have such limited capacity for due diligence, archiving data and code is obviously one effective measure of protecting the public interest by ensuring quality control of information disseminated to the public through journal articles. And yet complainant Phil Jones has refused requests to provide station data and even the identity of stations. The complaining scientists cite the NAS Panel apparently without considering North’s description of their manner of carrying out “due diligence: that they “didn’t do any research”, that they just “took a look at papers”, that they got 12 “people around the table” and “just kind of winged it.” He said that’s what you do in these sort of expert panels. See CA post here . […]