Gerry North Presentation on NAS Report

North has a Texas A&M seminar presentation here (original link deleted; available here). North is a nice and decent guy, but this is a frustrating presentation in a lot of ways. At minute 55 or so, he describes the panel's operating procedure by saying that they “didn’t do any research”, that they just “took a look at papers”, that they got 12 “people around the table” and “just kind of winged it.” He said that’s what you do in these sorts of expert panels. Obviously I suspected as much, but it’s odd to hear him say it.

69 Comments

  1. John A
    Posted Sep 6, 2006 at 4:41 AM | Permalink

    I win the CA Psychic Prediction Award.

  2. Louis Hissink
    Posted Sep 6, 2006 at 4:44 AM | Permalink

    Folks,

invest in us, trust us, we will make you rich without ever having to do any work.

    Or how some share brokers analyse data.

  3. BradH
    Posted Sep 6, 2006 at 7:48 AM | Permalink

Here’s an extract from an article in The Chronicle of Higher Education, quoting North:

    Gerald North, a professor of atmospheric sciences at Texas A&M University at College Station, served as chairman of the National Research Council committee that has investigated the hockey-stick curve. Despite finding some problems in the seven-year-old study that was the basis of the curve, the panel determined that it is basically correct in its conclusions, which have been corroborated by more-recent work. The scrutiny by Congress “is kind of a delaying tactic to find little things like this to slow down government action on greenhouse-gas limitations,” says Mr. North.

    You can read the whole article here, if you’re interested (not found by me, BTW, found by http://www.junkscience.com).

  4. BradH
    Posted Sep 6, 2006 at 7:51 AM | Permalink

    Doh! Should have read further down the front page before I pulled the posting trigger, shouldn’t I?

  5. bender
    Posted Sep 6, 2006 at 7:54 AM | Permalink

    Delaying tactic? Maybe scientists publishing uncertainty-free hockey stick graphs in IPCC reports is a fast-tracking tactic?

  6. bender
    Posted Sep 6, 2006 at 8:05 AM | Permalink

    From the link in #3:

    Mr. Bradley notes that even though he has published with Mr. Jones and Mr. Mann, he disagrees with some of their recent work together in which they have pushed the temperature record back 2,000 years. “I don’t buy any of that, frankly,” he says. “I think there’s too much uncertainty.”

    This shows why I have backed JMS’s view that Bradley is a highly competent dendroclimatologist. Now if only that BCP chronology were updated …

  7. Steve McIntyre
    Posted Sep 6, 2006 at 10:06 AM | Permalink

    Gerry North is online right now at http://www.chronicle.com. Send him a question.

  8. Steve Sadlov
    Posted Sep 6, 2006 at 10:18 AM | Permalink

    RE:

    “Question from Patrick Frank, Stanford University:

    The original hockey stick has been shown not just flawed but wrong. Why was the NAS committee unable to clearly state that? A more basic question: There is no analytical theory that can extract a growth temperature from tree ring widths or tree ring densities. On what scientifically valid grounds, therefore, can anyone possibly “calibrate” a tree ring series using temperature vs. time data?

    Gerald North:

    I am not a tree ring expert, but we had an expert on our panel. We also listened to several experts and I read most of Fritts’s book on the subject. I concluded that there is something to the signals extracted from tree rings. I suppose we disagree on this matter. Tree rings have been studied for about a century now and I believe that they are probably the best indicator we have at this time. Other proxies will undoubtedly become better as they are developed. When this happens we will have a check on the tree rings. So far the tree ring estimates of surface temperature changes do agree with borehole temperatures in ground and ice and with glacial length data for the last 400 years.”

    ********

    Notice how he completely dodged “The original hockey stick has been shown not just flawed but wrong. Why was the NAS committee unable to clearly state that?”

  9. bender
    Posted Sep 6, 2006 at 10:28 AM | Permalink

    That’s what happens when you bundle questions. Sub-questions are easily dodged.

  10. Steve McIntyre
    Posted Sep 6, 2006 at 10:32 AM | Permalink

    Question from Stephen McIntyre:

The NRC Panel stated that strip-bark tree forms, such as found in bristlecones and foxtails, should be avoided in temperature reconstructions and that these proxies were used by Mann et al. Did the Panel carry out any due diligence to determine whether these proxies were used in any of the other studies illustrated in the NRC spaghetti graph?

    Gerald North:

There was much discussion of this matter during our deliberations. We did not dissect each and every study in the report to see which trees were used. The tree ring people are well aware of the problem you bring up. I feel certain that the most recent studies by Cook, D’Arrigo and others do take this into account. The strip-bark forms in the bristlecones do seem to be influenced by the recent rise in CO2 and are therefore not suitable for use in the reconstructions over the last 150 years. One reason we place much more reliance on our conclusions about the last 400 years is that we have several other proxies besides tree rings in this period.

  11. Steve McIntyre
    Posted Sep 6, 2006 at 10:35 AM | Permalink

    Question from Stephen McIntyre:

    Is it your view that meaningful error bars can be obtained from calibration period residuals using Mann et al methodology, considering both their regression and principal components methodology?

    Gerald North:

    This is a difficult question (you always pose difficult questions!). My own view (not necessarily the committee’s) is that the verification period is misleading. I do not think there is enough data in that period to really nail down the matter. There are also the questions about using the mean (low frequency) versus the variations (higher frequency) parts in the verification procedure. Personally, I like the way Mann did it better than the others, because it is the long term stuff we want to check on. But this is a personal opinion. The fact is, there is no one way to do this — especially when we have so little data. That is why the committee was reluctant to put error bars on the early part of the record (or even the late part).

  12. Steve McIntyre
    Posted Sep 6, 2006 at 10:38 AM | Permalink

    C’mon folks. Ask North a question.

  13. Mark T.
    Posted Sep 6, 2006 at 10:44 AM | Permalink

    I win the CA Psychic Prediction Award.

    I’ve always admired your psychic abilities, John A.

    Mark

  14. Dave Dardinger
    Posted Sep 6, 2006 at 10:45 AM | Permalink

    re: #11

    Personally, I like the way Mann did it better than the others, because it is the long term stuff we want to check on.

    What exactly does he mean here? I’m certainly not the expert here, but isn’t this precisely the sort of thing which gets messed up because of either overfitting or colored errors?

  15. Steve McIntyre
    Posted Sep 6, 2006 at 10:48 AM | Permalink

Some of the answers are weird. For example, he says that the Committee was reluctant to put error bars on the reconstruction because they can’t be calculated, yet he states in his article that the omission of these error bars by the WSJ was an error.

  16. Mark T.
    Posted Sep 6, 2006 at 10:58 AM | Permalink

    That is why the committee was reluctant to put error bars on the early part of the record (or even the late part).

This statement alone highlights the uncertainty. I.e., “we are so uncertain of the results that we cannot make valid claims about the level of uncertainty!”

    Mark

  17. Steve McIntyre
    Posted Sep 6, 2006 at 10:58 AM | Permalink

Question from Concerned public servant scientist:

Dr. North, I, like you, have the utmost confidence in dendroclimatologists such as Dr. Malcolm Hughes, co-author on the original hockey-stick paper. But given the importance of bristlecone/foxtail pines (and Dr. Gordon Jacoby’s Gaspé cedars) to the “global” temperature reconstruction, what is one to make of the fact that these chronologies have not been updated for what is now 8 years? If new samples have been taken – and I understand they have – why do you think the data have not been published? Doesn’t this suggest to you that they are dragging their heels? Why would they do that?

    Gerald North:

    Sorry, I do not know these individuals more than acquaintances. Hence, I cannot answer any questions about motivation. I can say, however, that if they could prove the hockey stick or spaghetti graphics wrong, I am sure they would jump to the opportunity — and what scientist wouldn’t?

  18. bender
    Posted Sep 6, 2006 at 11:03 AM | Permalink

    North says:

(a) “The problem is that in these kinds of reconstructions, the errors are not always quantifiable as they are in purely statistical sampling errors where we can really quantify the error margins. Here we are really into the unknown and the biases are not well understood.”

    (b) “When we put the forcing into our models (even with their uncertainties) we are able to link the cause and effect pretty certainly.”

    How on earth did he put the uncertainties into the models in (b) if they are not in fact quantifiable in (a)? That was a clever dodge. Fact is, the uncertainties are probably so large that nothing conclusive can be said about the magnitude and precision of the estimate of the A in AGW.

  19. fFreddy
    Posted Sep 6, 2006 at 11:04 AM | Permalink

My question:
In your NRC report and the subsequent press conference, you described the MBH hockey stick as plausible. To me, this means that it is not obviously wrong.
You also made reference to the “cartoon” chart in the first IPCC report, which showed a Mediaeval Warm Period that was warmer than today. Would you also regard this chart as plausible? If not, why not?

    Keeping it simple …

  20. Steve McIntyre
    Posted Sep 6, 2006 at 11:09 AM | Permalink

    Question from Joel McDade, bystander:

Greetings Dr. North: I am curious what you thought of the primary part of the Wegman Report, that dealing with the statistical issues in Mann, et al. Specifically, the statement (or similar), “Incorrect mathematics + correct result = bad science.” I must say that the NAS Report appeared, to me, to find fault with the Mann methodology but then went on to seemingly endorse the result. The latter was the media’s take, anyway. TIA

    Gerald North:

There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al. results were not ‘wrong’ and the science was not ‘bad’. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).

  21. cbone
    Posted Sep 6, 2006 at 11:09 AM | Permalink

    “The minor technical objections serve as a weapon for those special interests who want to delay any action on GW.”

This gets my vote for understatement of the year. I find it morbidly amusing that serious questions that undermine the methodology, data integrity, and overall validity of a peer-reviewed study only rise to the level of “minor technical objections.” I didn’t realize that eviscerating a paper of any valid conclusions is only a minor technical objection. I got a good laugh out of that one.

  22. Joel McDade
    Posted Sep 6, 2006 at 11:09 AM | Permalink

    Question from Joel McDade, bystander:

Greetings Dr. North: I am curious what you thought of the primary part of the Wegman Report, that dealing with the statistical issues in Mann, et al. Specifically, the statement (or similar), “Incorrect mathematics + correct result = bad science.” I must say that the NAS Report appeared, to me, to find fault with the Mann methodology but then went on to seemingly endorse the result. The latter was the media’s take, anyway. TIA

    Gerald North:

There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al. results were not ‘wrong’ and the science was not ‘bad’. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).

  23. fFreddy
    Posted Sep 6, 2006 at 11:10 AM | Permalink

    Huh. I’ve been censored.

  24. Joel McDade
    Posted Sep 6, 2006 at 11:10 AM | Permalink

    oops, already posted

  25. Steve McIntyre
    Posted Sep 6, 2006 at 11:13 AM | Permalink

    #23. I don’t think that you were “censored”; they had to pick and choose and it wasn’t screening by realclimate. I would have liked to see the answer as it was a good question. Not a lot of meat in the responses.

  26. Mark T.
    Posted Sep 6, 2006 at 11:14 AM | Permalink

    If I were to ask a question, it would be:

Dr. North, you state “Despite finding some problems in the seven-year-old study that was the basis of the curve, the panel determined that it is basically correct in its conclusions, which have been corroborated by more-recent work”; however, more recent works such as Rutherford and Mann ’05 have again corrupted the use of standard statistical analysis methods. In particular, RM05 redefines the expected value of a random vector to be the mean of the expected values of the vector, rather than a vector of individual expected values. I.e., they compute E{E{X}}, which results in a single value (say m_x), then replicate that value for each random variable in X (which would be each individual proxy) and subtract to generate a zero-mean random variable. The proper method for “centering” a random vector is to take E{X}, which results in a vector M_x = [m_x1, m_x2, …, m_xn], and subtract _that_ from the random vector. If each of the means is nearly the same, this corruption will have subtle effects; however, as is the case with the proxy data, not all means are created equal, or even within the same ballpark, which can severely bias the results. How, then, can “recent works” be used to corroborate previously corrupt methods if they employ equally corrupt methods themselves?

    I’m not yet strong enough with the particulars of the impact to back this up. I am working on it, albeit slowly.

    Mark
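
A minimal numpy sketch of the centering issue Mark T. describes may help here (illustrative only: the RM05 code itself is Matlab, and the proxy mean levels below are made up):

```python
# Toy example: "centering" a data matrix by its grand mean E{E{X}}
# versus by per-series means E{X}. Columns play the role of proxies.
import numpy as np

rng = np.random.default_rng(1)
# three hypothetical proxy series with very different mean levels
X = rng.standard_normal((500, 3)) + np.array([0.0, 5.0, -3.0])

grand_centered = X - X.mean()          # E{E{X}}: one scalar for the whole matrix
column_centered = X - X.mean(axis=0)   # E{X}: each series gets its own mean

print(grand_centered.mean(axis=0))     # offsets remain, roughly [-0.67, 4.33, -3.67]
print(column_centered.mean(axis=0))    # essentially [0, 0, 0]
```

When the series means differ as much as the proxy means apparently do, the grand-mean version leaves each “centered” series sitting well away from zero, which is the bias Mark is pointing to.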

  27. Steve McIntyre
    Posted Sep 6, 2006 at 11:15 AM | Permalink

My reading of the Rutherford code is the same as yours. I think that you’re on the right track. It’s pretty amazing to run into another centering issue.

  28. bender
    Posted Sep 6, 2006 at 11:16 AM | Permalink

    The following questions did not make it to the discussion:

    Q: It is often said that the scientific process, under peer-review, is “self-correcting”. As a scientist I recognize this. However it also seems to me that the rate of self-correction is painfully slow compared to the fast wheels of policy. If bad science is fast-tracked to the level of policy, and that science is ultimately reversed, should the victims of that policy be compensated?

    Q: I sense an important contradiction on your statement about how uncertainties are handled. You say:

(a) “The problem is that in these kinds of reconstructions, the errors are not always quantifiable as they are in purely statistical sampling errors where we can really quantify the error margins. Here we are really into the unknown and the biases are not well understood.”

    (b) “When we put the forcing into our models (even with their uncertainties) we are able to link the cause and effect pretty certainly.”

    How do the uncertainties get put into the models in (b) if they are not in fact quantifiable in (a)? (And I realize that (a) was in reference to multiproxies, and (b) GCMs and the like. Still …) Don’t you agree that the uncertainties may be so large that nothing conclusive can be said about the magnitude and precision of the estimate of the A in AGW? For example a CO2 warming projection of 3±4°C dictates a very different policy than one of 3±1°C.

  29. TCO
    Posted Sep 6, 2006 at 11:17 AM | Permalink

    (Delurk) I asked the following questions (not in time):

    (1)

    One criticism of the Mann work has been that it is very complex and in some cases novel math and statistics (not vanilla even within PCA) but that the methods are not adequately proved as valid. For instance, (1) that the methods were not proven theoretically or on a known case, prior to application to the subject case of reconstruction, and (2) that the algorithm was not adequately described in the publication or SI (for instance the off-centering was not listed, among various items). I am used to novel methods in other fields (e.g. crystallography) being either well defended theoretically or characterized on known examples prior to use on new cases (e.g. a new molecule). Comments?

    2. (paraphrase) did the committee agree to the changed objective (broadened focus, specific-Mann-examination omitted)?

3. (paraphrase) How do we get Steve to finish thoughts, disaggregate issues, and publish?

  30. Jean S
    Posted Sep 6, 2006 at 11:19 AM | Permalink

    This didn’t make it either:

    You stated in an earlier answer:
    “When we put the forcing into our models (even with their uncertainties) we are able to link the cause and effect pretty certainly.”
    Since the climate models are extremely complex, how do you guard against overfitting with only 100 years of instrumental data? Wouldn’t it be exactly here where we need reliable temperature reconstructions?

  31. bender
    Posted Sep 6, 2006 at 11:24 AM | Permalink

    Nice to have you back, Jean S.

  32. Jean S
    Posted Sep 6, 2006 at 11:25 AM | Permalink

re #26: I think you are on the right track 😉

  33. Jean S
    Posted Sep 6, 2006 at 11:27 AM | Permalink

re #31: Thanks. I should be preparing my lectures and finishing papers, but could not help coming here to see what’s going on 🙂

  34. BKC
    Posted Sep 6, 2006 at 11:29 AM | Permalink

How ’bout inviting Dr. North to come and answer some of these questions that didn’t get answered (I had one myself)? After all, he’s not such a bad egg :)

  35. Mark T.
    Posted Sep 6, 2006 at 11:39 AM | Permalink

He does seem like an OK guy; at least he’s not nearly as antagonistic as those he supports.

BTW, I HAVE done some tests with the RegEM code, and I have also plotted the means out at various stages of the algorithm. The results are interesting, though I’m not ready to comment as I don’t fully understand WHY the plots look the way they do. One note: a “zero mean” never exists, except by fluke, for a single proxy. The largest means are always in the proxy data rather than the instrumental data (which makes sense, because the instrumental data is inherently +/- from some mean value near zero to begin with).

    Mark

  36. John Hekman
    Posted Sep 6, 2006 at 11:47 AM | Permalink

    I submitted a question, but the discussion was already closed.

    I loved the question by Joel McDade:

Greetings Dr. North: I am curious what you thought of the primary part of the Wegman Report, that dealing with the statistical issues in Mann, et al. Specifically, the statement (or similar), “Incorrect mathematics + correct result = bad science.” I must say that the NAS Report appeared, to me, to find fault with the Mann methodology but then went on to seemingly endorse the result. The latter was the media’s take, anyway. TIA

And North answered that, yes indeed, one actually can get the “right” answer from faulty methods. Unbelievable!!

  37. Steve Sadlov
    Posted Sep 6, 2006 at 11:58 AM | Permalink

RE: #36 – “And North answered that, yes indeed, one actually can get the “right” answer from faulty methods. Unbelievable!!”

The similarities between AGW obsession and uniformitarianism are uncanny. Interestingly, one of the straws that broke the latter’s back was a book funded by Ciba-Geigy on plate tectonics; I think it came out about 1970 or ’71. Not making any predictions here, for it seems that there are no corporations with the courage or even the knowledge to fund a similar thing in this case.

  38. bender
    Posted Sep 6, 2006 at 11:58 AM | Permalink

    Re #35
It’s easy to be an “OK guy” when you’re not the protagonist. The issue here is competence, not congeniality. And one should not be too hasty judging North’s competence. Everyone loves a “grandpa” figure. Everyone, that is, except the cold, hard scientist who loves only the facts.

  39. Mark T.
    Posted Sep 6, 2006 at 12:50 PM | Permalink

    Oh, I’m not. I was just noting that at least he’s not an ass, which gets to be tiresome.

Competence is another issue altogether. Incompetence, or at least an unbending will to believe flawed science just because “it must be right,” is equally tiresome, though you don’t want to punch a nice but incompetent grandpa. The same cannot be said for an incompetent ass.

    Mark

  40. bender
    Posted Sep 6, 2006 at 1:13 PM | Permalink

    The nice thing is that no one need punch grandpa in this case. Because it’s his arguments that are flawed, not his being. Every rational scientist since Socrates understands that the reason arguments are made public is to maximize the opportunities for punching. So punch away. North has surely punched a few in his time. Maybe even some grandpas.

  41. Jeff Weffer
    Posted Sep 6, 2006 at 1:32 PM | Permalink

    The NAS panel should have edited one of its conclusions.

“30-year averages highest in 400 years” should be rewritten as:

– 30-year averages highest in 400 years, but part of the increase is simply recovery from a relatively cold period over the past 400 years;

    This is a much more accurate statement in terms of explaining the situation to the general public.

  42. Michael Jankowski
    Posted Sep 6, 2006 at 1:41 PM | Permalink

    Re#17:

    Gerald North:

Sorry, I do not know these individuals more than acquaintances. Hence, I cannot answer any questions about motivation. I can say, however, that if they could prove the hockey stick or spaghetti graphics wrong, I am sure they would jump to the opportunity — and what scientist wouldn’t?

Could he possibly be serious? People are chomping at the bit to take on the hockey stick and face the slander and attacks of the hockey team – not lining up to publish the status quo and be a part of the “consensus”?

  43. bender
    Posted Sep 6, 2006 at 2:10 PM | Permalink

    Re #42:

    What scientist wouldn’t?

    Cute: rhetorical question as dodge. I laughed at this one too.

    The fact is, for an insider taking on the team, there would be much to lose, materially, and little to gain, in terms of honor – especially among the “save the earth” moral majority. Better to ask “what scientist would?” [Ans: Someone very bold, with a huge ego – a lone wolf with all the necessary stats training, climatology training, access to data and codes, etc … Fair to ask: Does such a person even exist?]

  44. Follow the money
    Posted Sep 6, 2006 at 2:24 PM | Permalink

“At minute 55 or so, he describes panel operating procedure by saying that they ‘didn’t do any research’, that they just ‘took a look at papers’, that they got 12 ‘people around the table’ and ‘just kind of winged it.’”

Probably they were led by the nose of permanent staffers juiced into the Kyoto lobbies. First question – who literally wrote the report? Its narrative structure read like corporate public-relations clean-up, not scientific study.

  45. bender
    Posted Sep 6, 2006 at 2:32 PM | Permalink

    Probably they were led by the nose of permanent staffers juiced into the Kyoto lobbies.

Was it Ken Fritsch or some equally cogent contributor who mentioned bureaucratic inertia & policy entrenchment as one force to fear, regardless of what direction is chosen?

  46. Mark T.
    Posted Sep 6, 2006 at 2:36 PM | Permalink

“At minute 55 or so, he describes panel operating procedure by saying that they ‘didn’t do any research’, that they just ‘took a look at papers’, that they got 12 ‘people around the table’ and ‘just kind of winged it.’”

And somehow this qualifies as a “peer-reviewed” report, yet Wegman’s does not. Oy vey.

    Mark

  47. Steve McIntyre
    Posted Sep 6, 2006 at 3:08 PM | Permalink

He really did say that. Wegman did some checking of code, which the North panel did not.

  48. Mark T.
    Posted Sep 6, 2006 at 3:16 PM | Permalink

Would they even be qualified to check the code that RM05 used anyway? Heck, it’s bad enough even for those of us who can read Matlab and understand the methodology, let alone for those with experience in neither.

    Mark

  49. Spence_UK
    Posted Sep 6, 2006 at 3:20 PM | Permalink

I can say, however, that if they could prove the hockey stick or spaghetti graphics wrong, I am sure they would jump to the opportunity — and what scientist wouldn’t?

    The scientist that can’t get a paper past the hockey team peer review roadblock probably wouldn’t, but not for the right reasons…

  50. Barney Frank
    Posted Sep 6, 2006 at 4:21 PM | Permalink

“At minute 55 or so, he describes panel operating procedure by saying that they ‘didn’t do any research’, that they just ‘took a look at papers’, that they got 12 ‘people around the table’ and ‘just kind of winged it.’”

Wow! As a non-scientist, I would like to say just how much confidence this inspires in me regarding the trillion-dollar public-policy decisions being based on this ‘research’.
I’m glad to know that they take the prospect of the world’s economy being stood on its ear seriously enough to do no research and ‘wing it’. Why not just put on Carnac’s turban and hold the data up to their foreheads? Amazing.

  51. Phil B.
    Posted Sep 6, 2006 at 4:28 PM | Permalink

RE #26, Mark T: aren’t they just making an ergodicity assumption? What line are you looking at in the script? Have you looked at the use of the filtfilt function to lowpass filter the data to form the high- and low-frequency terms? For the non-Matlab folks, the filtfilt function is a non-causal, zero-phase filter implemented by filtering the data in the normal direction, then flipping the filtered data and filtering that flipped data to get the result. The filt function had been commented out.
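
For readers without Matlab, here is a minimal Python/scipy sketch of the forward-backward filtering Phil B. describes (the filter order and cutoff are arbitrary choices for illustration, not RM05’s):

```python
# Zero-phase (non-causal) filtering: filter forward, then filter the
# time-reversed output, which is what scipy's filtfilt does internally.
import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.1)        # 4th-order Butterworth lowpass (illustrative)
x = np.random.randn(1000)           # stand-in data series

y_causal = signal.lfilter(b, a, x)  # one-pass causal filter: introduces phase lag
y_zero = signal.filtfilt(b, a, x)   # two-pass filter: zero phase shift, and the
                                    # magnitude response is effectively squared
```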

  52. mark
    Posted Sep 6, 2006 at 5:17 PM | Permalink

Ergodicity w.r.t. which part of my statement? Do you mean the E{E{X}} portion? If so, then maybe they are, but it is not relevant to that implementation since each proxy (a) measures something different (not all are tree rings) and (b) has differing statistics (very easy to show).

As for the filtfilt, it does more or less work. At least, what they refer to as the high- and lowpass pairs really are uncorrelated after the filtfilt function. You can perform the function on white noise, for example, then subtract that result from the original data to get the highpass portion, and do a cross-correlation between the two. The r is near 0 with a p near 1. Now, how that mucks with the data used later on, particularly for reconstruction, I have not investigated.

What filtfilt does is provide very steep filtering walls (the response is squared), which separates out the low-frequency data better. However, typical dyadic decompositions use something akin to a quadrature-mirror filter, though their specific implementation obviously is not looking for an exact pi/2 cutoff (I don’t recall, offhand, their actual cutoff point). I.e., typically you would take the “mirror” highpass filter and apply it to the original data to get the proper output. Definitely a non-standard implementation, and I think it needs proving to be of use.

    Mark
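
A hedged sketch of the white-noise check mark describes, again in Python/scipy (the cutoff is arbitrary; as he says, RM05’s actual cutoff point is not recalled here):

```python
# Lowpass white noise with filtfilt, take the residual as the "highpass"
# part, and check that the two components are nearly uncorrelated.
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)        # white-noise stand-in series

b, a = signal.butter(4, 0.1)         # illustrative cutoff, not RM05's
low = signal.filtfilt(b, a, x)       # low-frequency component
high = x - low                       # residual = high-frequency component

r, p = stats.pearsonr(low, high)
print(r, p)                          # r close to 0, consistent with what mark reports
```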

  53. mark
    Posted Sep 6, 2006 at 5:20 PM | Permalink

    Phil B., there’s also a stationarity argument. In a non-stationary environment, block methods (no, not their stepwise method) or online (adaptive) implementations can overcome _some_ of the varying statistics. If they vary too rapidly (there is an equation I’ve seen in the CA text, I think), then it becomes an impossible task to track the changes.

    Mark

  54. mark
    Posted Sep 6, 2006 at 5:39 PM | Permalink

    What line are you looking at in the script?

    Look at their functions center and center2. They’ve taken Schneider’s original script, which seems to be correct, and modified it with an additional mean/nanmean on the output vector. I don’t recall the specifics offhand. I can probably comment further tomorrow evening when I’m home again (tonight is pool night).

    Mark

  55. Phil B.
    Posted Sep 6, 2006 at 6:14 PM | Permalink

Re #52, Mark T.: Yes, the E{E{X}}; I thought you were referring to a single proxy or gridcell temp. Re #53: RegEM assumes the time series are wide-sense stationary. I haven’t dug deep enough myself, but does RM05 use their reconstructed gridcell temps to bootstrap the stat calculations and also reconstruct earlier gridcell temperatures?

  56. TCO
    Posted Sep 6, 2006 at 6:20 PM | Permalink

    Is that Beunadonna?

  57. mark
    Posted Sep 6, 2006 at 6:26 PM | Permalink

Yes, the E{E{X}}; I thought you were referring to a single proxy or gridcell temp.

    Nope, they take the mean of each proxy, then the mean of the means and use that to create a zero mean _matrix_, not zero mean _vectors_.

    Re #53, RegEM assumes the time series are wide sense stationary.

    They aren’t, however, from what I can tell. Maybe over small blocks, say 100 years or so, but in general, no.

I haven’t dug deep enough myself, but does RM05 use their reconstructed gridcell temps to bootstrap the stat calculations and also reconstruct earlier gridcell temperatures?

    I can’t answer this question yet.

    Mark

  58. mark
    Posted Sep 6, 2006 at 6:27 PM | Permalink

    Their initial “centering,” btw, is done on the years 1899-1959, not even the whole vector length.

That’s the gist of the “centering issue” that has been discussed here.

    Mark
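
To see what sub-period centering does, here is a toy Python sketch (the series is synthetic; only the 1899-1959 window comes from mark’s description above):

```python
# Center a long series on a short calibration window (1899-1959) versus
# on its full length. An unrepresentative window leaves a net offset.
import numpy as np

years = np.arange(1400, 1981)
x = np.sin(2 * np.pi * (years - 1400) / 300.0)  # toy series with slow swings

window = (years >= 1899) & (years <= 1959)
x_sub = x - x[window].mean()    # centered on the 1899-1959 window only
x_full = x - x.mean()           # centered on the full record

print(x_sub.mean())             # clearly nonzero: the window mean is not
                                # representative of the whole record
print(x_full.mean())            # ~0 by construction
```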

  59. Steve McIntyre
    Posted Sep 6, 2006 at 6:54 PM | Permalink

    Hey, TCO. Nice to hear from you.

  60. Jeff Norman
    Posted Sep 6, 2006 at 7:21 PM | Permalink

    they got 1,000 “people around the table” and “just kind of winged it”

    AR4 anyone?

  61. Posted Sep 6, 2006 at 11:57 PM | Permalink

    I didn’t understand this one:

    Question from Georg Hoffmann, LSCE, Paris:
Wouldn’t we all be better off if MBH were right, and millennial variability thus rather small? Larger variability indicates more sensitivity of the Earth system, and so also more sensitivity to greenhouse gases. If we tripled MBH variability, for example, would this still be in agreement with variability on longer timescales, such as the last glacial or the Holocene?

    Gerald North:
    This is a good point. The Mann et al. studies probably made the handle of their hockey stick a bit too straight. Some of the later studies showed more low frequency variability. Of course, you are right, the larger the natural variability the larger the sensitivity of climate to external perturbations. So we might prefer the Mann et al. hockey stick to the ‘spaghetti graphic’ of our report. I cannot answer the last question on my feet. It does appear that present moderate sensitivities are in pretty good agreement (cf., recent papers by James Hansen) with the last glacial max.

I think the hockey stick says ‘climate is extremely sensitive to CO2’.

  62. John A
    Posted Sep 7, 2006 at 2:15 AM | Permalink

There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al. results were not ‘wrong’ and the science was not ‘bad’. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).

    Dear Dr Gerald North,

That is a disgraceful answer, and you of all people should be ashamed to endorse faulty methodology which somehow gets the “right answer”. Would you allow any of your students to make such a statement? Would a PhD thesis ever be accepted if the candidate were to make such a statement?

The endorsement of bad scientific methodology that somehow gets the “right answer” is a statement of blinkered religious dogma, not of science. Such a response could equally endorse the cold fusion claims of Pons and Fleischmann or the stem cell results claimed by Hwang Woo-suk.

    Good science goes hand-in-hand with good scientific ethics, and it is simply unethical to endorse the demonstrably bad and wrong methodology of MBH98/99 because it got “the right answer”.

  63. Jeff Norman
    Posted Sep 7, 2006 at 7:57 AM | Permalink

    Is this the same Gerald North?

    http://www.met.tamu.edu/people/faculty/north.php

    North and his research group are interested in climate change and the determination of its origins. We work with simplified climate models which lend themselves to analytical study, estimation theory as applied to observing systems, and the testing of all climate models through statistical approaches. Often all three themes are combined for a particular application.

    Over a period of 25 years North and associates have studied a hierarchy of simplified models known as Energy Balance Climate Models (EBCMs). Both linear, nonlinear, and stochastic versions of these models have been shown to be good analogs to the real climate of the surface temperature field including the two dimensional seasonal cycle and the field of fluctuations. These models have very interesting properties from mathematical as well as physical points of view. For instance, multiple solutions occur for the present external conditions and their stability properties are amenable to analysis. Stochastic versions of the models are useful analogs to more comprehensive models making them a useful laboratory for preliminary analyses before expensive experiments are performed.

    The group also collaborates with statisticians and mathematicians on problems of observing system error analysis. For example, we continue to be interested in the ground validation program and the sampling error problems for the Tropical Rainfall Measuring Mission. We also are interested in the problem of estimating climate parameters (e.g., global average, spherical harmonic coefficients, space-time power spectra, EOFs, etc.) from observing systems consisting of a finite number of point gauges distributed over the globe or from satellite orbital observing systems. We also want to know how data from disparate sources can be optimally combined.

    I repeat; “The group also collaborates with statisticians and mathematicians on problems of observing system error analysis.”

  64. Steve McIntyre
    Posted Sep 10, 2006 at 3:30 PM | Permalink

Could somebody do me a favor – could someone make an audio clip of North’s comment about “just sort of winged it”, which occurs at minute 55 or so, in a form that I can use in a PowerPoint presentation? If so, email it to me. Thanks. (I’m editing my presentation and watching Federer v. Roddick.)

  65. Steve McIntyre
    Posted Sep 10, 2006 at 10:55 PM | Permalink

    #64. One of you sent me the clip – thanks very much.

  66. welikerocks
    Posted Sep 11, 2006 at 7:18 AM | Permalink

    I found this overview on a site unrelated to climate science; it sure fits:

    FIRST, corrupt science is science that moves not from hypothesis and data to conclusion but from mandated or acceptable conclusion back to selected data in order to reach the mandated or acceptable conclusion. That is to say, it is science that uses selected data to reach the “right” conclusion, a conclusion that by the very nature of the data necessarily misrepresents reality.

SECOND, corrupt science is science that misrepresents not just reality but its own process in arriving at its conclusions. Rather than acknowledging the selectivity of its process and the official necessity of demonstrating the right conclusion, and rather than admitting the complexity of the issue and the limits of its evidence, it invests both its process and its conclusions with a mantle of indubitability.

    THIRD, and perhaps most important, whereas normal science deals with dissent on the basis of the quality of its evidence and argument and considers ad hominem argument as inappropriate in science, corrupt science seeks to create formidable institutional barriers to dissent through excluding dissenters from the process of review and contriving to silence dissent not by challenging its quality but by questioning its character and motivation. In effect then, corrupt science is science that is flawed in both its substance and its process and that seeks to conceal these essential flaws. It is essentially science that wishes to claim the policy advantages of genuine science without doing the work of real science.

  67. Brooks Hurd
    Posted Sep 11, 2006 at 8:13 AM | Permalink

    Sadly, you are correct. It does fit.

  68. John Davis
    Posted Sep 11, 2006 at 10:12 AM | Permalink

    #61
This keeps raising its head, and was the subject of a delightfully confused piece some while ago at RC. The argument is that if CO2 is the most significant factor, then large-ish changes in temperature over a period when we know CO2 to have been relatively stable imply absolutely horrendous changes in temperature in prospect from the current and future projected levels.
    Looking at it another way, a significant MWP would cast huge doubt on the whole CO2-AGW hypothesis since, from the argument above, it would imply that temperatures should already be massively higher than they are in reality.
    Looking at it yet another way, a significant MWP would imply that there is more than one thing that affects the global temperature. Maybe that big shiny thing in the sky that keeps us warm?

  69. Willis Eschenbach
    Posted Sep 11, 2006 at 2:11 PM | Permalink

    North says (quote in #61):

    Of course, you are right, the larger the natural variability the larger the sensitivity of climate to external perturbations.

    Nonsense. The fact that the PDO exists does not imply that the PDO is driven by, or sensitive to, “external perturbations”.

    And the fact that the temperature naturally varies by tens of degrees every twenty-four hours does not mean that it is therefore more sensitive to, say, CO2.

    w.

2 Trackbacks

  1. […] Surface Temperature Reconstructions is cited in the EPA Technical Support Document. In a CA thread here, I quoted comments by the panel chairman, Gerry North, in which he stated that they […]

  2. […] I am also pleased by the new interest of these scientists in due diligence. Because journals have such limited capacity for due diligence, archiving data and code is obviously one effective measure of protecting the public interest by ensuring quality control of information disseminated to the public through journal articles. And yet complainant Phil Jones has refused requests to provide station data and even the identity of stations. The complaining scientists cite the NAS Panel apparently without considering North’s description of their manner of carrying out “due diligence: that they “didn’t do any research”, that they just “took a look at papers”, that they got 12 “people around the table” and “just kind of winged it.” He said that’s what you do in these sort of expert panels. See CA post here . […]