Reply to Ritson

A few weeks ago, I mentioned here that the new editor-in-chief of GRL, Jay Famiglietti, had removed James Saiers as our editor, had made remarks about our papers to Environmental Science & Technology that can be construed as critical, and had pulled two rejected Comments out of the garbage can (including one that had been press-released by Ammann of UCAR). He also advised us that one of the Comments, by David Ritson of Stanford, had been sent out for refereeing without an accompanying Reply (in breach of AGU policies on reviewing Comments) and had been accepted, and gave us 3 weeks notice to submit a reply. Our 3 weeks was up today and we submitted a Reply to Ritson, posted up here. It is now under review – all rights are reserved to the American Geophysical Union – not to be reproduced.

You’ll have to try to figure out the Ritson Comment from our Reply, but you’ll see that we don’t think much of it. Our Reply is written fairly strongly. The unfortunate thing for Famiglietti is that all our comments are correct and there is no way to sugarcoat the Reply. I previously expressed my astonishment at Famiglietti pulling the Ritson Comment out of the garbage can and my equal astonishment at Famiglietti breaching AGU review policies for an already rejected Comment. One would almost think that there had been pressure from the "community", but I don’t believe in the Easter Bunny.

What Famiglietti is probably going to find out is that there’s usually a good reason for policies, that they protect editors as well as authors and, if you have a controversial file, it’s usually a good idea to do it by the numbers. I can’t imagine that this file is going to be any fun for him, but he made his own bed by circumventing AGU policies in the first place. None of the choices for him right now will seem very appetizing and it’s hard to figure out where it will all end. For us, it’s been a total waste of time having to deal with the Ritson Comment all over again and I wish Famiglietti had done things by the numbers.

He’s going to find that the Ammann and Wahl file will be even worse – it will be interesting to see how he deals with their bile and with their withholding adverse cross-validation statistics – just as their mentors did before them.

41 Comments

  1. Ed Snack
    Posted Oct 18, 2005 at 11:14 PM | Permalink

    Steve, no pdf behind that link, error is "file not found". Look forward to reading it when available.

    Steve: Fixed.

  2. IL
    Posted Oct 19, 2005 at 12:12 AM | Permalink

    Are you saying that GRL rejected your reply within a few hours?? Whoo, that’s some going for peer review – or was it some sort of technicality like it was too long?

  3. Steve McIntyre
    Posted Oct 19, 2005 at 12:25 AM | Permalink

    No, just that it’s under review. I’ve changed the wording, which wasn’t very clear.

  4. Gil Pearson
    Posted Oct 19, 2005 at 2:57 AM | Permalink

    Steve

    A quick question from someone able to follow only in generalities. (a pre-apology in case I am off the mark!) I understand that the MBH98 hockey stick shape representing NH temperature is produced by a combination of a non-centered PC methodology and the use of a contaminated proxy series (bristlecones). What percentage of the original raw data are bristlecones and from a geographic point of view, how diverse are they?

  5. Paul Gosling
    Posted Oct 19, 2005 at 6:26 AM | Permalink

    You would think that these guys had never heard the phrase ‘flogging a dead horse’.

  6. John A
    Posted Oct 19, 2005 at 6:39 AM | Permalink

    Re #6

    Somebody re-animated the dead horse, and someone had to put a stake through its evil heart to kill it again. It’s never pretty and children may not want to watch.

  7. Paul
    Posted Oct 19, 2005 at 6:52 AM | Permalink

    I remember Ritson from my days in experimental particle physics. Seems not to have changed much.

  8. Steve McIntyre
    Posted Oct 19, 2005 at 9:24 AM | Permalink

    Re #4: the number of proxies varies by period. The 15th century period is the one in controversy, so it’s the one that I’ll summarize. The bristlecones/foxtails account for about 20 (by memory) of the 70 sites in the North American tree ring network. They are all located in the U.S. Southwest in very high, very arid locations on poor soils, so it’s quite possible that non-temperature factors could systematically affect them. There are a total of 95 proxies used altogether in the 15th century network. Using PCs, the 70-site network was summarized into 2 PCs, with the bristlecones dominating one PC. A total of 22 series (including the PC series) were used in the regression-inversion step. The series, other than the PC1 and the Gaspe series, essentially function as white noise. If you look at my Reply to Huybers #2, you’ll see that the MBH98 regression-inversion method using one simulated PC1 and 21 white noise series regularly yields RE statistics as good as MBH98. If they do their regression-inversion method without a PC summarization, as they’ve argued (they could also use a summarization other than PCs, e.g. taking means over regions, which would give a different result), then the regression-inversion method, which has a mining effect of its own, imprints the bristlecones.

    So the flawed PC method mines for hockey stick shaped series, creating a lethal interaction with the flawed bristlecones. But this is not the ONLY way to create bristlecone dominance. MBH98 regression-inversion applied to a network with more than 70 out of 95 series being U.S. tree ring series, including 20 bristlecones, will also create bristlecone dominance. Once you are aware of the dependence on potentially flawed proxies, you can’t simply try to get them in through the back door (especially without explicit notice and analysis showing what you are doing). realclimate uses code words for bristlecones – they say that anything that downweights bristlecones is “throwing out” data.
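    A minimal sketch of the mining effect on simulated data (the network size, seed and “blade” offset are all arbitrary assumptions for illustration – this is not the actual MBH98 network or code). It shows how centering on only the late calibration period loads PC1 onto series that happen to trend in that period:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_series, cal = 500, 50, 100      # hypothetical sizes
X = rng.standard_normal((n_years, n_series))
X[-cal:, :5] += 1.0                        # give 5 series a late-period "blade"

def pc1_loadings(X, mean):
    # PC1 loadings of the data after subtracting the given mean vector
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return np.abs(vt[0])

w_full = pc1_loadings(X, X.mean(axis=0))           # conventional (full) centering
w_short = pc1_loadings(X, X[-cal:].mean(axis=0))   # short centering on calibration period

ratio_full = w_full[:5].mean() / w_full[5:].mean()
ratio_short = w_short[:5].mean() / w_short[5:].mean()
print(ratio_full, ratio_short)   # short centering concentrates PC1 on the blade series
```

    On this toy network, the short-centered PC1 concentrates its loadings on the five “blade” series far more heavily than conventional centering does, which is the mining effect described above.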

  9. John Hekman
    Posted Oct 19, 2005 at 10:56 AM | Permalink

    Relevant to this discussion of “peer review” and “editorial review” is an article today by Kim Strassel on the WSJ editorial page. It concerns the questionable science that underlay the shutdown of logging in the US Northwest to save the spotted owl. A new bill is being proposed to “fix” the endangered species act by requiring the science underlying its findings to be good science. How will it accomplish this? By insisting on “peer reviewed” science. It makes you want to laugh. No, cry. No, laugh.

  10. John A
    Posted Oct 19, 2005 at 12:22 PM | Permalink

    We’re going to have to produce something less open to political abuse and more rigorous than the peer review system. The behavior of some climate scientists has broken the honor system, on which it is based, completely.

  11. Jo Calder
    Posted Oct 19, 2005 at 12:57 PM | Permalink

    Re 10. Open and non-anonymous peer review is one possible answer. The BMJ apparently uses this system now. The downside is, of course, that you get far fewer reviewers — egg-on-face risk demands more thorough work.

    Cheers, — Jo Calder

  12. John A
    Posted Oct 19, 2005 at 1:04 PM | Permalink

    Re #11

    Jo,

    Does that also mean fewer submissions because nobody likes to look like a fool for missing something obvious?

    What about the quality of the work submitted? Has it improved?

  13. Posted Oct 19, 2005 at 2:11 PM | Permalink

    Re 11 and 12
    Talking about quality of peer review.
    Has anyone seen a worse example of an error-ridden shambles slipping through peer review than the 3-panel colour map I critique at:
    http://www.warwickhughes.com/climate/easterling.htm
    Look at the list of “blue blood” authors and the paper could fairly be described as a landmark re the DTR issue for its time.

  14. Dave Dardinger
    Posted Oct 19, 2005 at 2:13 PM | Permalink

    How about a layered blog review? Layer one would be only for editors and invited guests to review a preliminary paper. Layer 2 would allow those who’d shown past expertise or ability to reason logically on a subject to comment. Layer 3 would be for anyone, other than the continually abusive. Any of the layers might allow anonymity if requested. After a certain length of time the preliminary report would either be taken down by request of the authors or the presiding editor, or converted into a permanent format which could then become part of a permanent online-journal collection or be published.

  15. John Hekman
    Posted Oct 19, 2005 at 2:38 PM | Permalink

    Re 14. Those are good ideas. But the dismantling of MBH98/99 would not have occurred without the extensive work done by M&M. That quality and quantity cannot be counted on to be available gratis. Don’t we need a mechanism whereby government funding is given to more than one group to do analysis of a problem, with some kind of competition to dig out the real issues related to the validity of their empirical methods?

  16. Hans Erren
    Posted Oct 19, 2005 at 2:41 PM | Permalink

    How about a real science debate for invitees, like a judo match, on http://www.physicsforums.com/

    or perhaps ukweatherworld could be a venue for this type of debate?
    http://www.ukweatherworld.co.uk/forum/forums/forum-view.asp?fid=11

    Peter H?

  17. Dave Dardinger
    Posted Oct 19, 2005 at 3:53 PM | Permalink

    Well, combining all the suggestions, the level 1 guests / invitees could be paid, perhaps out of the funds allocated for publication by a grant, and those who were qualified but not invited might have to pay their own way, but again that could come from their own grants, etc. Then the people on levels two and three would be a ready market for advertisements for books, on-line courses, etc. Of course this would be moot for 99% of such blogs, which would only have a relative few visitors, but the occasional well publicized one might draw quite a crowd. After all, this blog has had over a million visitors.

  18. John A
    Posted Oct 19, 2005 at 4:15 PM | Permalink

    Well, I wouldn’t wish to have a free-for-all because it would attract crackpots who would, and can, waste lots and lots of time and delay publication.

    I think the layers should be:

    1. Full disclosure of methodology including the archiving of all source code and datasets used. These must be done before any review of the work takes place. If the author(s) publish or announce their results before these steps are done, the work is immediately thrown out.
    2. Anonymous refereeing of editorial issues: language, length of article, genuine novelty of results, etc
    3. Refereeing of paper’s scientific points and results to be by expert reviewers who are genuinely independent. At publication, the paper and the expert reviewer’s comments should both be published.

    Now I know that “expert reviewers who are genuinely independent” is an extremely difficult step, and climate science is a very politically contentious field, but the result of all this must be the instilling of a proper and enforced scientific ethic.

    Michelle Wie turned professional in golf last week. In her first professional tournament, she was disqualified for a very small infringement of a rule that was spotted by a spectator. No prize money. She was brought to the referee’s office and told of his decision to disqualify her. She accepted this saying:

    “I am really sad but rules are rules and I respect them.

    “I’ve been through so many unplayables, I know what to do.

    “But I learned a great lesson. From now on, I’ll call a rules official no matter where it is, whether it’s three inches or 100 yards.”

    There’s a very strong ethic in golf against any form of cheating or gaining unfair advantage, even inadvertently. Would you play some climate scientists at golf, knowing the way they conduct themselves at work?

  19. Ed Snack
    Posted Oct 19, 2005 at 9:44 PM | Permalink

    Thanks Steve. An interesting read in one way. I can see why you were pleased initially that the comment was to be rejected – almost none of it seems to have been on topic.

  20. Dave Eaton
    Posted Oct 20, 2005 at 9:01 AM | Permalink

    Re 18:

    Once the trust that is at the heart of peer review is lost, I think that making everyone lay all the data and methods and code out is the only way to go. We rely (in chemistry at least) on a lot of assumptions, shortcuts etc, and no one really wants to review stuff because it eats research time. We rely on trust, likely more than we should.

    It seems to me that in places where science will influence policy (read: where serious public money will be spent), a coherent policy of making everything available for critique is sound governance, let alone sound science. Clearly, we can’t expect an M&M to arise spontaneously in every case, but having everything archived and accessible would make an ‘audit’ much easier. It would prevent the debate from hinging on the hidden.

  21. Paul Gosling
    Posted Oct 20, 2005 at 9:31 AM | Permalink

    John A

    RE #18

    A couple of practical considerations

    Point 1. “if the author(s) publish or announce their results before these steps are done, the work is immediately thrown out” Does this include presenting work at a conference, discussing it with colleagues etc?

    Point 2. Not sure I understand this.

    Point 3. What do you mean by “expert reviewer”? Most people will send back a paper they do not feel “expert” enough to review, if only to lighten the work load. How do you assess independence? In order to review a paper I must be an “expert” in the field, therefore I am going to have an opinion – indeed it’s difficult to review a paper if you don’t. Papers often go through more than one round of review; do you publish all comments, including those such as “the sentence beginning blah blah blah on line x page y is not clear, please rephrase”? Who decides what comments to include?

    Proposals such as these are all well and good, but mean a lot more work, which someone will have to pay for.

  22. John A
    Posted Oct 20, 2005 at 11:34 AM | Permalink

    Point 1. “if the author(s) publish or announce their results before these steps are done, the work is immediately thrown out” Does this include presenting work at a conference, discussing it with colleagues etc?

    Yes. The situation would be analogous to being under NDA. You obviously would discuss the work with colleagues as long as you are sure they are under the same non-disclosure rules. Announce the result and forfeit the publication. End of Story.

    The purpose of this is to stop the rampant abuse of the scientific process known as "scientific publication by press conference" before any review has even been received.

    Point 3. What do you mean by “expert reviewer”? Most people will send back a paper they do not feel “expert” enough to review, if only to lighten the work load. How do you assess independence? In order to review a paper I must be an “expert” in the field, therefore I am going to have an opinion, indeed it’s difficult to review a paper if you don’t. Papers often go through more than one round of review, do you publish all comments, including those such as “the sentence beginning blah blah blah on line x page y is not clear, please rephrase”? Who decides what comments to include?

    A: The editor would decide. Obviously the purely editorial issues would not need to be published, but the expert opinions and criticisms would need to be published. I would envisage that only review comments which refer only to the final published report would be published. The other criticisms and policy statements should be archived.

    The critical part is the critical part. The editorial policy should be adhered to, and should be skeptical in nature. Make the researchers think their answers through.

    In the world of mathematics, every page is gone over in great detail before publication, because nobody wants to be the person whose great mathematical theorem gets debunked because of a silly error. In mathematical proofs the reviewers all turn skeptical in order to make sure the logic is watertight.

    Additionally there should be an independent Ethics Committee whose sole function is to declare papers invalid if there are violations of ethical standards. I mean, as in golf – you adhere to the rules or you get disqualified. If there’s any doubt about whether something is ethical or not, do what Michelle Wie learned:

    But I learned a great lesson. From now on, I’ll call a rules official no matter where it is, whether it’s three inches or 100 yards.

    The ethics board should be fully independent of the editor, reporting only to the publisher. If necessary, the ethics board should be able to call on the services of an auditor or investigator.

  23. John A
    Posted Oct 20, 2005 at 11:38 AM | Permalink

    I would add that the reason the general quality of scientific papers is so poor is that there are so many, and so many are published, because that’s how scientists get assessed.

    It might be an idea to allocate a certain amount of funding in order to defray the cost of proper scientific review. (I’ve no idea if this already happens)

  24. fFreddy
    Posted Oct 20, 2005 at 12:08 PM | Permalink

    Ummm, what is the budget of the IPCC ?

  25. John A
    Posted Oct 20, 2005 at 12:59 PM | Permalink

    I would add that there’s another layer to all of this which is where (Lord) Nigel Lawson made the most sense.

    If scientific or economic reports are used to form public policy then audit and open review must be performed on them.

  26. Jo Calder
    Posted Oct 20, 2005 at 1:08 PM | Permalink

    A few quick comments.

    Any improvements on the current peer review system must be incremental, and live within the system where, in effect, reviewers do their work on time paid for by students, research grants or the taxpayer (or combinations of all three), as a contribution to the general good. It must, since academics are involved, be essentially self-organizing. So, a contribution by review needs to be recognized as such. One could make a certain level of activity as a reviewer a requirement for promotion.

    Adding in an ethics board to verify good behaviour is, I think, a non-starter, and probably just as good a way of entrenching orthodoxy as the current system. On the other hand, requiring reviewers to sign off publicly on reviews goes a long way towards the prevention of cliques where mutual signing off occurs. Journals where the set of authors and set of reviewers largely overlap would be immediately suspect, and so editors have the incentive to increase the pool of reviewers. In the case of acceptance, reviewers could lodge agreed comments as part of the SI. The history of the article including previous rejections could also form a part of this record.

    Finally, one could ask editors to record any cases where someone otherwise unconnected with the article has intervened in the decision (or attempted to do so). Yes, I know that’s a branch of academia with a long and fascinating history, but progress has a price.

    Cheers, — Jo Calder

  27. Steve McIntyre
    Posted Oct 20, 2005 at 2:00 PM | Permalink

    My take on peer review is different. I agree with Jo that you can’t expect much more from peer reviewers than you’re currently getting – after all, people are doing it unpaid. It’s just that the public, abetted by IPCC, THINKS that it’s much more – that it’s more like an audit. I don’t think that audit-level verification is appropriate simply for publication in a journal.

    The immediate steps that I see that could be readily implemented seamlessly within the current system are :
    1) require archiving of data, methods (including source code) either at the journal or a permanent archive like WDCP. Authors would have to sign a web-based form confirming that they’ve done so prior to publication.
    2) in medical studies, advance notice of methodology now has to be sent to a journal before a study commences. Adopting the same practice here would do no harm and would help avoid cherry-picking.
    3) a confirmation statement (again web-based) of full disclosure, affirming that any material adverse results or data have been disclosed. I’m not sure that the implications of full, true and plain disclosure are fully understood by climate scientists, but at least it would be a start.

    Climate models seem to be the main IPCC focus, and some form of due diligence on the climate models is surely needed. I’m not sure what form that would take, but a check of 1 or 2 studies by independent engineers would be a good start. This would need a real budget.

  28. Ian Castles
    Posted Oct 20, 2005 at 4:16 PM | Permalink

    Re #27. The climate models have socio-economic inputs, and the IPCC lacks competence in this area. This is not confined to the Special Report on Emissions Scenarios authors. In the 2001 Report, the Panel felt competent to make pronouncements such as the following, without citing a source for the claim:

    “In addition, real-world comparisons must account for many commodities, services, and attributes. This causes enormous index number problems in computing conversion factors such as the PPP. Indeed, one country’s income can be higher or lower than another depending on which country is used as the base for the PPP index” (s. 2.5.5.2, “Comparisons across nations” in Chapter 2, “Methods and Tools”, Report of IPCC Working Group II, Third Assessment Report).

    This is an elementary factual error. The correct statement of the position is as follows (the authors are the founders of the UN International Comparisons Project):

    “[T]he choice of the United States as the numeraire country and the expression of values in terms of international dollars do not determine the per capita quantity relationships… Any one of the sixteen countries [in ICP Phase One] can legitimately be selected as the base country and the real per capita GDPs of the other countries can be expressed as a ratio to the real per capita GDP of that country. The full matrix of the relationships is presented in a convenient way in Table 1.3, which takes each country in turn as the base country” (Kravis, Heston and Summers, 1978, International comparisons of real product and purchasing power, p. 11-12).

    There followed a table showing the relative gross domestic product per capita for all pairs of countries, i.e., 16 x 16 cells. In World Product and Income, 1982, Kravis, Heston and Summers repeated the argument exactly and provided a similar tabulation for 34 countries (i.e., a 34 x 34 matrix). The IPCC’s statement that the relationship between one country’s income and another’s depends on which country is used as the base is simply wrong.

    Last July I decided to try and discover how such a statement could have survived the IPCC’s vaunted review processes. In a letter to the IPCC Secretary, I noted that the “Procedures for the Preparation, Review, Acceptance, Adoption, Approval and Publication of IPCC Reports” (Appendix A to the Principles Governing IPCC Work) state that “All written expert, and government review comments will be made available to reviewers on request during the review process and will be retained in an open archive in a location determined by the IPCC Secretariat on completion of the Report for a period of at least five years.” I requested access to the review comments that had been made on the section of the IPCC Third Assessment Report quoted above (and some other sections). I’ve had no reply.

    So far as I know, no one associated with the IPCC knows that the error was made. It’s entirely on the cards that the error will be repeated in the next Assessment Report.

  29. mikep
    Posted Oct 20, 2005 at 4:37 PM | Permalink

    Re 28,
    It sounds like a confusion about the price base. It is the case that a pair of countries can change rank if a different price base is used. Most implementations of PPP use some kind of weighted world average price, and the problem will not arise with that approach. But if an individual country’s price structure were used, the possibility of a change in ranking is there. Of course the two countries would have to produce a different mix of goods, with Country A having greater physical output of one set of goods and smaller output of another set than Country B, as well as a very different relative price structure. But it could happen. The statement from the IPCC as it stands is of course wrong.
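    A toy numeric illustration of that bilateral rank reversal (the quantities and prices are entirely made up for the example): each country is relatively good at producing the good that is cheap at home, so the comparison flips with the price base.

```python
# Two hypothetical countries, two goods; quantities and prices are invented.
qa, qb = [10, 1], [1, 10]   # outputs of goods 1 and 2 in countries A and B
pa, pb = [1, 10], [10, 1]   # domestic prices in countries A and B

def value(q, p):
    """Total output valued at price vector p."""
    return sum(qi * pi for qi, pi in zip(q, p))

print(value(qa, pa), value(qb, pa))  # at A's prices, B looks richer
print(value(qa, pb), value(qb, pb))  # at B's prices, A looks richer
```

    A base-country-invariant multilateral index, which values all countries at a common price vector, removes this ambiguity – which is the point of the ICP methods discussed in #28.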

  30. Ian Castles
    Posted Oct 20, 2005 at 5:10 PM | Permalink

    Re #29. Thanks mikep. Of course I agree with you in relation to bilateral comparisons. But the whole point of multilateral comparisons such as those developed by the ICP over the past 40 years is that they must be base-country-invariant (which of course also means price-structure-invariant). There is a careful explanation of the attribute of base country invariance underlying multilateral intercountry comparisons in SNA 1993 (paras. 16.90-91). And in his masterly paper “Axiomatic and Economic Approaches to International Comparisons” in the NBER Studies in Income and Wealth vol. 61 (1999), Erwin Diewert, Professor of Economics at the University of British Columbia, articulated a “Multilateral country reversal test (Symmetrical Treatment of Countries)” and explained it as follows:

    “[This test] means that no country can play an asymmetrical role in the definition of the country’s share functions. This property of a multilateral system was termed base country invariance by Kravis et al (1975). When multilateral indexes are used by multinational agencies such as the European Union, the OECD, or the World Bank, it is considered vital that the multilateral system must satisfy [this test]” (p. 17).

    The IPCC is itself a multilateral (intergovernmental) panel, created by two multilateral organisations (UNEP and WMO) to assess an inherently multilateral (cross-country) issue. Surely it is extraordinary that they should make a statement purporting to be about “the real world” without seeing the need to check its validity with those actually producing these estimates in the multinational agencies such as the EU, OECD or the World Bank. In fact they’ve done exactly what the SRES authors wrongly accused Castles and Henderson of doing: creating a problem that doesn’t exist.

  31. Ian Castles
    Posted Oct 20, 2005 at 7:04 PM | Permalink

    I can’t resist adding a postscript to my #28. The following statement comes from the same section of Chapter II of the IPCC WG II report as the error about the attributes of multilateral PPP measures:

    “First principles of economic theory offer two approaches for comparing situations in which different people are affected differently. In the first – the utilitarian approach attributed to Bentham (1822) and expanded by Mills (1861) – a situation in which the sum of all individual utilities is larger is preferred.”

    By “Mills” the IPCC authors probably mean “Mill”, though one can’t be sure because no works by Bentham or Mill appear in the reference list at the end of the chapter. In the new Oxford Dictionary of National Biography John Stuart Mill gets a slightly longer entry than Isaac Newton, but the IPCC experts misspelled his name and forgot to list his book. Surely the review procedures of which the IPCC boasts in http://www.ipcc.ch/press/pr08122003.htm should have been proof against such sloppy scholarship?

    One of the lead authors of the IPCC chapter in question was Stephen Schneider of Stanford, who excoriated Bjorn Lomborg’s The Skeptical Environmentalist in his contribution to the notorious suite of articles (“Science fights back…”) in the January 2002 issue of “Scientific American” (for an insight into this journal’s shameful behaviour in the matter, see http://www.greenspirit.com/lomborg/ ). In the course of his diatribe, Schneider abused Cambridge University Press for having published Lomborg’s supposedly error-filled volume and, along the way, sneered at Lomborg for giving the appearance of scholarship in “this massive 515-page tome with a whopping 2930 endnotes.” The IPCC reports with their thousands of authors and reviewers fall way short of Lomborg’s sole-authored work in terms of accuracy of detail and care in citation of sources.

  32. mikep
    Posted Oct 21, 2005 at 7:58 AM | Permalink

    And I can’t resist adding that Bentham was mummified and can still be seen at University College London, which I believe he founded….

  33. Mike N
    Posted Oct 21, 2005 at 4:00 PM | Permalink

    I think there are some good ideas here on the integrity of science. But the quote of Michelle Wie hit an important nail on the head for me. She said: “From now on, I’ll call a rules official…” That is what science doesn’t have – a rules official, and I don’t mean a government one.

    In a private economy, such an official would be provided by philosophy. Sadly, today’s university departments of philosophy are mired in a swamp of irrationality thanks to guys like Hume and Kant et al. Nevertheless, they are the ones making the rules by which you gentlemen are supposed to discover truth. “You can’t be certain of anything”, “reality is an illusion”, “reason is flawed”, “what’s true today won’t be tomorrow”, they tell us. Is it any wonder science is suffering?

    Under such an anti-intellectual barrage, it becomes clear to me at least, that the late philosopher Ayn Rand was right on in her essay “The Establishing of an Establishment” when she wrote: “Governmental encouragement does not order men to believe that the false is true: it merely makes them indifferent to the issue of truth or falsehood.” She’s right you know.

    So that is what science is facing today. Not some powerful evil bent on destroying it, but an intellectual vacuum of irrelevance. I don’t know how to go about filling it, but I think reducing government funding of science would be a good goal. Perhaps the next time you testify before congress you could not suggest but demand that some of the ideas on this post be adopted. Science doesn’t have any rules officials right now except, maybe, you.

  34. John A
    Posted Oct 22, 2005 at 3:08 AM | Permalink

    Mike N

    I don’t believe there’s an intellectual vacuum so much as an ethical vacuum. In the political sphere, there are laws of conduct in regard to what can and cannot be done and by what means – with checks and balances which are independently enforced (this is what’s happening to Tom DeLay, for example).

    In science, I think there is a vital need for proper checks and balances to be instituted to prevent poor work, overreaching claims, announcing scientific results by press conference and other such abuses. In that sense government can help by making full disclosure of data and relevant materials so as to permit replication before journal submission can even begin, as a mandatory requirement of funding. There also needs to be some cultural change in scientific bodies to arrest the current abuses of the journal publication system, in particular by de-coupling the link between journal publication and advancement. Additionally there must be a requirement to disclose conflicts of interest and to recuse oneself from even the appearance of such a conflict.

    In all of this, a strong ethic against bending the rules needs to be established. I don’t know where this is going to come from, because science is by its very nature a rather fragmented intellectual enterprise. But in the realm of public policy, the need for full and plain disclosure of all relevant facts, as is currently the standard for raising finance from investors, helps reduce the possibility of fraud. In public policy, there must be full and independent audit and open review of scientific evidence, because the stakes for democracy and market economies are that much higher. In the case of funding, scientific endeavor should be done through blind trusts so that government and the private sector can fund science without compromising the integrity of the research or the reputations of the scientists involved.

    I don’t know where all of this is going, but the incentives to cheat are too great and the incentives for ethical behavior are too poor. In that regard there must be detection, audit, enforcement and disqualification as a final resort, to arrest the abuses of the scientific method.

  35. per
    Posted Oct 22, 2005 at 5:34 AM | Permalink

    All due respect to John A, but I think there is an element of hyperbole creeping in here.

    I think the full disclosure idea is inherently problematical. Sure, journals could make you sign a form requiring you to make full, true and plain disclosure, but where would you disclose all the data to the relevant level of detail? Certainly not in paper journals, with their tight page limits. Maybe this will change with the rapid move to e-publication.

    There is already a strong scientific ethic against bending the rules. I suggest that is one reason why this site gets so much interest. How you force the implementation of perfect ethics everywhere is, however, a substantive problem.

    “de-coupling the link between journal publication and advancement”; this is verging on bizarre. Why is this even desirable?

    I think there is a tendency to invent more rules to cure problems, but this will bring its own crop of problems. Science is a self-righting process, so I am quite sure that the message here is being absorbed and slowly mobilised. For me, the messages are to do with having an auditable data collection, and full and open disclosure of that information. It is immediately evident that these are essentials for the scientific process.

    yours
    per

  36. John A
    Posted Oct 22, 2005 at 6:53 AM | Permalink

    Per,

    I always admire the caveat of “with all due respect…”, which means “you’re completely wrong”.

    I think the full disclosure idea is inherently problematical. Sure, journals could make you sign a form requiring you to make full, true and plain disclosure, but where would you disclose all the data to the relevant level of detail? Certainly not in paper journals, with their tight page limits. Maybe this will change with the rapid move to e-publication.

    No, that is not what I meant. What I meant was that prior to journal submission, the data, methodology and all relevant materials (including adverse data) should be archived and made available for inspection. At first, that key information should be available for the editors and reviewers to refer to; otherwise what reviewers are doing is little more than a litmus test. Then, once publication is granted, full public disclosure to permit replication should be made.

    The journals should insist on this as a condition of submission – why don’t they?

    The funding authorities (such as the NSF) should insist on this as a condition of funding – why don’t they?

    The IPCC, which allowed a situation to develop where one study could bring down the entire operation, did not and still does not independently audit the studies submitted – why not?

    The US and UK governments, in particular, have funded studies whose results are seriously affecting the present and future prosperity of their own nations, if not the world, and yet there is no mechanism of enforcement when scientific reports seriously mislead. Dr Andrew Wakefield, for example, after producing an entire scare over the MMR vaccine which turned out to be false, still carries on in his job. There appears to be no independent audit function to which government can turn to check the reliability of the science fed to it.

    There is already a strong scientific ethic against bending the rules. I suggest that is one reason why this site gets so much interest. How you force the implementation of perfect ethics everywhere is, however, a substantive problem.

    That’s correct. But when money is at stake, as in business and investments in the stock market, everyone is forced to take the issues of ethics and full disclosure very seriously. There is enforcement through laws like Sarbanes-Oxley and agencies like the SEC, making fraud difficult to achieve and easy to prosecute. Everyone knows that when money is involved, ethics can get pushed into the background, because money corrupts, right?

    Which brings me neatly to the next bit.

    “de-coupling the link between journal publication and advancement”; this is verging on bizarre. Why is this even desirable?

    Because the institutions that people work for make journal publication, and not quality or accuracy, the key metric that decides who gets preferment and who does not. The institutions are also drawn toward work that scares people rather than work that reassures. So it’s not in their best interests to turn around to government through the funding agencies, saying “we took your money, investigated the problem thoroughly and found there was little to worry about”, because the funding agency may conclude the institution is not doing a good enough job. Much better to produce a report that includes such phrases as “much worse than previously thought” and “this requires much more detailed work to assess the scale of the problem”.

    It’s this fundamental asymmetry that is driving science of all kinds to push the “Panic!” button rather than “Don’t Panic”.

    I think there is a tendency to invent more rules to cure problems, but this will bring its own crop of problems. Science is a self-righting process, so I am quite sure that the message here is being absorbed and slowly mobilised. For me, the messages are to do with having an auditable data collection, and full and open disclosure of that information. It is immediately evident that these are essentials for the scientific process.

    You are correct. Science is a self-righting process. But my reading of history is that for science to right itself usually takes a very long time. I believe one scientist once described the progress of science as proceeding “one funeral at a time”.

    I also think that a catastrophic failure of confidence in science will cause a resistance to science for a generation. Consider what happened to the nuclear industry after Three Mile Island: only now, a generation later, is a younger generation that didn’t go through that experience beginning to embrace nuclear power as a solution to future energy needs.

  37. Steve McIntyre
    Posted Oct 22, 2005 at 9:21 AM | Permalink

    I don’t think that more elaborate peer review at journals would accomplish very much or even necessarily be a good thing. Sometimes cures can be as bad as the disease. The Bre-X scandal increased regulation of mining stocks fantastically, but most of the regulations are irrelevant to the original fraud. Meanwhile, little public companies now face even more formidable and expensive compliance burdens.

    Right now one of the major desiderata in climate science seems to me to be a really thorough outside “audit” of at least one of the big climate models by completely independent engineers or engineer-equivalents. This would be a pretty big project and would need some real funding – but why not?

    Archiving data and methodology seems like such a simple thing to do (and it is already required in econometrics, showing that it is a reasonable best-practices target) that there’s no reason not to do it. Sometimes it’s a good idea to start with the easily implemented measures and see what they accomplish.

  38. per
    Posted Oct 22, 2005 at 5:06 PM | Permalink

    Hi John
    you are covering a lot of ground!

    I merely point out that full, true and plain disclosure is a humongous amount of work. For example, MBH is a relatively simple area to audit, and it is only recently that you have got the complete listing of data and the computer programme. That doesn’t mean it is in working order, nor does it cover the steps that were implemented to get the whole programme to that final stage.

    While I am appalled by the tardy disclosure in MBH, I note the severe difficulties in implementing improved audit. For example, implementing the audit standard of “GLP” in the pharmaceutical industry has been suggested to increase the cost of studies by ~50%. Even so, the more imaginative parts of the pharmaceutical industry (drug discovery) are generally excluded from this regulation; I don’t even know whether it is a feasible proposition there. So that is a massive increase in costs, without even considering the costs of audit, or of replication, or of maintaining data archives, etc.

    As a mere side note, I believe that Wakefield was a medic (and not primarily a researcher), and that he moved on from his UK position.

    More substantially, the issue of how, or even whether, you “audit” cutting-edge science remains extremely difficult, given the scope, breadth and complexity of what is there.

    Regarding journal publication and advancement, you are now conflating this with publishing scare stories to push your issue up the agenda. That is a different issue. I don’t see that you are making a convincing argument here, and while I may sympathise with your general position, it doesn’t address the fact that high-impact publication brings recognition and grant income.

    cheers
    per

  39. TCO
    Posted Nov 7, 2005 at 5:12 PM | Permalink

    Steve, what’s the latest dirt on the GRL editor’s effort to give favorable treatment to your tendentious critics?

    • Steve McIntyre
      Posted Aug 12, 2011 at 6:59 AM | Permalink

      In the Climategate emails, on Nov 15, 2005, a week after this comment, Michael Mann wrote to Phil Jones (591. 1132094873.txt):

      The GRL leak may have been plugged up now w/ new editorial leadership there

  40. Steve McIntyre
    Posted Nov 7, 2005 at 5:22 PM | Permalink

    No news.