Back from Georgia Tech

First, let me thank Judy Curry for inviting me to make a presentation at their seminar series and for both spending so much time and energy showing me around the department and hosting me so hospitably. I was the guest at many interesting presentations by able young scientists and at splendid lunches and dinners on Thursday and Friday. I also wish to thank Julien Emile-Geay for his role in initiating the invitation.

Readers of this blog should realize that Judy Curry has been (undeservedly) criticized within the climate science community for inviting me to Georgia Tech. Given the relatively dry nature of my formal interests and presentation (linear algebra, statistics, tree rings, etc.) and the fact that I’ve been invited to present to a panel of the National Academy of Sciences, it seems strange that such a presentation to scientists should provoke controversy, but it did. Readers here should recognize the existence of such a controversy before making ungracious remarks to my hosts. I must say that I was disappointed by many comments on the thread in which I announced that I was going to Georgia Tech (many of which broke blog rules during a period when I was either too busy or too tired to moderate and have now been deleted).

For critics of the invitation, I wish to assure them that neither Julien nor Judy ever explicitly or implicitly agreed with anything that I said, and that I do not interpret a failure to rebut any particular point or claim as acquiescence. Quite the opposite. However, any climate scientists who stridently criticized Judy Curry for the invitation should also consider the possibility that she was one chess move ahead of them in what she was trying to do and how my visit was organized.

Right now I have two related but functionally distinct “hats” in the climate debate.

One role is that of conventional (or in my case, slightly unconventional) scientific author, with a few articles and conference presentations on millennial reconstructions. This role is, of course, made livelier both by my unconventional route to writing these articles and by the interesting events that followed them, not least of which was consideration by the NAS and Wegman panels and an appearance before a House of Representatives subcommittee.

The other role is that of proprietor of a climate blog with a big, lively and vociferous audience, arguably a distinct role by now. The emergence of blogs is a media phenomenon in itself but, in the climate community, blogs are unusually active. (This is interesting in its own right and deserves a little reflection.) Within that community and even within the larger blog community, Climate Audit has established both a noticeable presence and a unique voice. I don’t want this post to turn into a reflection on Climate Audit (we can reflect on that on another occasion), but there was little doubt in my mind that scientists at Georgia Tech were far more familiar with Climate Audit than with MM 2005 (GRL) etc.

I’m pretty sure that Judy Curry perceived that, because so much of my personal exposure to climate scientists has been through the dross and bile of the Hockey Team, the representation and perception of third party climate scientists at a popular blog has been affected, and that it would be beneficial to the portrayal of climate scientists in this new media form for me to meet sane non-Hockey Team climate scientists doing valid and interesting work. I’m sure that other presenters to the EAS Friday afternoon seminar are also treated hospitably, but I suspect that most of them don’t get to spend two days meeting such a wide variety of Georgia Tech climate scientists in small meetings, or that their visits are quite like mine.

On Thursday, I spent most of the day seeing interesting and substantive work in areas unrelated to anything that I’d written about – things like establishing metrics for aerosols using Köhler Theory or laboratory procedures for speleothems. And whatever other criticisms people may have of me, I don’t think anyone has ever criticized me for not finding interest in details and methods. On Friday, I heard an extremely interesting exposition on the physical basis of hurricanes and their role in the overall balance of nature. An interesting context here (and one that I was previously unaware of) is Peter Webster’s interest in monsoons and Bangladesh.

On Thursday, I was also guest at a seminar on climate and the media (including blogs); on Friday early afternoon prior to my EAS seminar, there was a short Q and A session with the Hockey Stick class. At 3.30 Friday afternoon, I presented to the EAS seminar. I didn’t count the crowd, but it looked like there were about 100 people there, including a couple of (non-GA Tech) CA readers from Atlanta. There was a short question period after the presentation and then a beer-and-wine reception.

Readers who were worried about protests and fireworks at the EAS presentation can disabuse themselves of such fevered imaginations. On the one hand, the audience was polite. On the other hand, it would be hard for a student or uninvolved faculty to think up a technical question that hasn’t been raised previously. So there were no fireworks at the seminar, or for that matter, about the Hockey Stick on any occasion. I’ll review the questions below, but I really wasn’t asked very much at any of the public sessions about statistics or proxies. I’m not going to report or discuss any one-to-one sessions since the line between private scientist and blog reporter was not clearly discussed at the meetings; I am therefore treating them as private, even if they were scheduled on-campus meetings – other than to say that there was relatively little specific discussion of the statistics and proxy issues that directly concern me. Not that there wasn’t much lively discussion – just not about partial least squares, spurious regression, bristlecones, data mining, etc. If any of the parties wishes to put any views on such matters on the record here (or elsewhere), they are welcome to do so. Below I’ll limit my discussion to matters raised at the public seminar or in a classroom setting.

Not everything was sweetness and light. There were a couple of rough patches, not about my analysis of MBH or proxies, but about some incidents here at climateaudit. I’ll discuss blog manners and perceptions on another occasion and mention only one point right now. I regularly discourage people from being angry in their posts for a couple of reasons – even if you feel that the angry outburst is justified, it never convinces anyone of anything; and it gives people an excuse to ignore non-angry posts. Regular readers tend to filter out the angry posts and pay attention to the more substantive posts. However, consider the possibility that visitors have the reverse filter – they tend to pay attention to the angry posts and ignore the substantive ones. As people know, I’ve modified my attitudes towards comments over time and now try to delete angry posts when I notice them (and these angry posts are 99% of the time condemning climate scientists and the horse that they rode in on, rather than this blog). It places an unreasonable burden on me to weed out these angry posts and I reiterate one more time my request that readers refrain from making angry posts as they are entirely counter-productive.

After that long preamble, I’ll review my presentation to the EAS seminar (which I’ve now put online) and questions arising at the seminar or in the classroom.

EAS Seminar Presentation
I haven’t made a long presentation in nearly 18 months and I’ve only made a couple altogether. I actually haven’t spent a lot of time on HS matters during the past year and I used my preparation as an opportunity to pull together some lines of thought that I’ve presented from time to time on the blog, but not previously pulled together.

In addition to the usual things that a speaker deals with, in my case it was necessary to provide a little personal history, something that most speakers can skip. (I could give a pretty lively talk consisting only of such stories.) While it’s an interesting segment, it quickly eats into time allotments. Combined with the fact that there are more things that I want to say than I can practically cover, and that I haven’t had previous audience feedback on what works and what doesn’t, I tend to end up being rushed. (By contrast, Ross McKitrick has a nice easy way about him when he does this sort of presentation.) There were some style defects in the PPT (remedied in the online version), e.g. some missing y-axis labels and incomplete short-form citations (not just Yule 1926 but Yule 1926, J Roy Stat Soc).

I also prefaced my talk with a few disclaimers, e.g. that I did not argue that anything in the talk disproved global warming; that, if I had a big policy job, in my capacity as an office holder I would be guided by the reports of institutions such as IPCC rather than any personal views (a point I’ve made on a number of occasions); and that I believed that policy decisions could be made without requiring “statistical significance” (such decisions are made in business all the time, and, in all my years in business, I never heard the words “statistical significance” pass anyone’s lips as a preamble to a business decision).

I then attempted to place temperature reconstruction recipes in a broader statistical context – first by showing, in relation to MBH, that all the MBH operations were linear; and that the MBH reconstruction (like other reconstructions) was thus a linear combination of underlying proxies. I showed a graphic (previously shown at CA, like most of the material) in which AD1400 MBH weights were represented by dot area on a world map, showing the tremendous influence of bristlecones. I posited that it should be possible to calculate weights for the RegEM of Mann et al 2007 and that its weights would look pretty similar to MBH weights – with a very high bristlecone weighting. I noted, but did not dwell on, the curious PC error in MBH98. While this particular MBH error has attracted much attention, it is only one of a number of problems and I spent 99% of my time on issues that had nothing to do with principal components.
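
As an online editorial aside, the point that a multi-step linear recipe collapses to a single weight on each proxy is easy to verify numerically. Here is a minimal sketch in Python (illustrative only, not code from my presentation); the toy recipe in fit_linear_recon – standardize, correlation-weight, rescale – is hypothetical, but any chain of linear operations folds into one weight vector and offset in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
years, n = 200, 5
X = rng.normal(size=(years, n)).cumsum(axis=0)   # five autocorrelated "proxies"
temp = 0.5 * X[:, 0] + rng.normal(size=years)    # target dominated by proxy 0

def fit_linear_recon(Xc, yc):
    """Toy recipe (hypothetical): standardize, correlation-weight against
    the target, rescale to target variance. Every linear step folds into
    a single weight vector a and offset b."""
    mu, sd = Xc.mean(axis=0), Xc.std(axis=0)
    Xs = (Xc - mu) / sd
    w = Xs.T @ (yc - yc.mean()) / len(yc)        # correlation-type weights
    scale = yc.std() / (Xs @ w).std()            # simple variance matching
    a = scale * w / sd
    b = yc.mean() - a @ mu
    return a, b

a, b = fit_linear_recon(X[:79], temp[:79])   # MBH-like 79-year calibration
recon = X @ a + b                            # the recon IS weighted proxies
print(np.round(a, 2))                        # one dominant weight
```

The dot map in my presentation is just such a vector of weights drawn on a map, with dot area proportional to weight.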

I showed that, for the one-PC case of the MBH AD1400 and AD1000 steps, the proxies were correlation-weighted; that correlation weighting was equivalent to a technique actually known in the broader statistical community (one-step Partial Least Squares regression); that PLS coefficients were a rotation of OLS coefficients; that, in a situation (such as MBH) where there was little multicollinearity between the proxies, the rotation matrix was “near”-orthogonal. Given that it’s trivially easy to picture overfitting from a multiple “inverse” OLS regression of temperature (or temperature PC1) onto 65-90 non-collinear proxies in a period of only 79 years, it therefore follows (from the near-orthogonality of the rotation) that overfitting will occur in a PLS regression where there is little multicollinearity in the underlying proxies. In such cases, of course, you’re going to get a good fit in the calibration period, but confidence intervals calculated from such calibration residuals have no scientific meaning – a simple point that seems to have eluded far too many. I argued that the “no-PC” MBH98 variant that Wahl and Ammann put forward in an effort to salvage MBH falls prey to these overfitting problems (among others) and merely goes from the frying pan into the fire.
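
As another online aside, readers can check the overfitting point in a few lines. A minimal sketch, assuming nothing but white-noise “proxies” and an unrelated target, so that any apparent calibration skill is overfitting by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_ver, p = 79, 79, 70    # MBH-like: 79 calibration years, ~70 proxies
X = rng.normal(size=(n_cal + n_ver, p))   # near-orthogonal noise "proxies"
y = rng.normal(size=n_cal + n_ver)        # target unrelated to every proxy

Xc, yc = X[:n_cal], y[:n_cal]
beta_ols = np.linalg.lstsq(Xc, yc, rcond=None)[0]  # multiple "inverse" OLS
beta_pls = Xc.T @ yc                               # one-step PLS: corr weights

def r2(Xa, ya, beta):
    return np.corrcoef(ya, Xa @ beta)[0, 1] ** 2

for name, beta in (("OLS", beta_ols), ("PLS", beta_pls)):
    print(name,
          "calibration r2 = %.2f" % r2(Xc, yc, beta),
          "| verification r2 = %.2f" % r2(X[n_cal:], y[n_cal:], beta))
# Both achieve an excellent calibration fit on pure noise and both collapse
# out of sample: calibration residuals say nothing about real skill.
```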

I referred to Stone and Brooks 1990, which showed that there is a one-parameter “continuum” between PLS coefficients and OLS coefficients via ridge regression – a slightly different, but equivalent, one-parameter mixing. (I skipped over a diagram showing another interesting arrangement of methods derived from an approach of Magnus Borga.) Because ridge regression is in a sense “intermediate” between OLS and PLS, overfitting problems that plague both OLS and PLS (such as the overfitting problem discussed above) would also affect ridge regression.
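
Here is a quick numerical illustration of the endpoints – a sketch using the standard ridge estimator, not the Stone and Brooks parameterization itself, though the limiting behavior is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(79, 20))
y = rng.normal(size=79)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_pls = X.T @ y                              # one-step PLS direction

def ridge(lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for lam in (0.0, 1.0, 100.0, 1e6):
    b = ridge(lam)
    print("lambda=%8g  cos(OLS)=%.3f  cos(PLS)=%.3f"
          % (lam, cosine(b, b_ols), cosine(b, b_pls)))
# lambda -> 0 recovers OLS exactly; lambda -> infinity rotates the
# coefficients into the correlation-weighted (one-step PLS) direction.
```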

I showed a graphic from my 2006 CA post on VZ pseudoproxies showing what happened to the coefficients in an overfit network of very “tame” pseudoproxies. I’m convinced that this diagram is the most cogent explanation of the loss of low-frequency variance that was at the root of the earlier (still unresolved) controversy. As an online editorial aside, the coefficients from ridge regression would go gradually from PLS coefficients to OLS coefficients along a one-parameter path in coefficient space. Their ability to preserve low-frequency variance will therefore be intermediate between PLS and OLS and will deteriorate as they approach OLS – something that one would expect to affect Rutherford et al 2005, which combined ridge regression with RegEM. (As another online editorial comment, Smerdon et al observed that, properly calculated, there was a substantial low-frequency variance loss in Rutherford et al 2005, which one might well expect from the above diagnosis.)
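
As a further online aside, the variance loss itself can be reproduced in a few lines. This is a deliberately crude pseudoproxy sketch in the spirit of (but not equivalent to) the VZ experiments: a slow “climate” signal buried in proxy noise, calibrated by inverse OLS over a short modern window.

```python
import numpy as np

rng = np.random.default_rng(3)
years, p = 1000, 30
t = np.arange(years)
signal = np.sin(2 * np.pi * t / 500)             # slow "climate" variation
X = signal[:, None] + 2.0 * rng.normal(size=(years, p))  # noisy pseudoproxies

cal = slice(years - 79, years)                   # short modern calibration
b = np.linalg.lstsq(X[cal], signal[cal], rcond=None)[0]  # inverse OLS fit
recon = X @ b

def smooth(s):                                   # crude low-pass filter
    return np.convolve(s, np.ones(101) / 101, mode="valid")

print("low-frequency variance  truth: %.3f   recon: %.3f"
      % (smooth(signal).var(), smooth(recon).var()))
# The calibration fit looks respectable, but outside the calibration window
# the slow component is heavily damped: the low-frequency variance loss.
```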

I didn’t comment on truncated total least squares as proposed in Mann et al 2007. It’s never a whole lot of fun wading through Mannian methodology, but, now that I’ve picked the file up again, I’m going to spend a little time in the next few weeks trying to work through this method and figure out what it does in the MBH98 proxy network recycled in Mann et al 2007. As another online editorial comment, it looks to me like there’s a need to disentangle the relative impacts of changing from (1) partial least squares (MBH) to ridge regression (R05) to truncated total least squares (M07); (2) temperature principal components to gridcell matrices; (3) stepwise splicing to EM (Reg or otherwise). On another occasion, Tapio Schneider confirmed my surmise that, at the end of the day, RegEM would yield coefficients, although the coefficients in Mann et al 2007 were not reported. Because the Mann et al 2007 recon looks so much like the MBH98 reconstruction, I surmise that bristlecones and Gaspe are heavily weighted in the early segments as they were in MBH98, but this surmise needs to be demonstrated.

My only objective in discussing the linear algebra was to de-mystify the recon process by showing that the recon methods (including MBH) could be fit into a statistical framework. I didn’t expect that people would necessarily accept this merely by flashing a few PPT slides; my objective was merely to put this on the table, so that third party scientists might at least draw a breath whenever they heard phraseology like “overdetermined relationship between the optimal weights”, and be cautious in relying on results from novel and poorly understood statistical methodology.

In my opinion, it’s long past time to move away from such esoterica as “overdetermined relationship between the optimal weights” and strained signal processing metaphors in general, and time to re-formulate the proxy debate in the form of standard statistical questions, e.g. is there a valid linear relationship between bristlecone ring widths and temperature such that it can actually be used to estimate past temperatures? If I do another presentation along these lines, I’ll try to express this even more forcefully.

Even for seemingly simple statistical questions, such as whether there is a relationship between temperature (or temperature PC1) and bristlecone ring widths – I should have mentioned “teleconnections” here – I tried to show the audience that these could not always be easily resolved on purely statistical grounds (e.g. using simple statistics such as correlation or RE). While we touched on this topic in MM2005 (GRL), where “spurious significance” is used in the title, and included some good references there, I’m now in a position to frame the issues more precisely. In my PPT, I mentioned Yule 1926; Keynes 1940; Granger and Newbold 1974; Hendry 1980; Phillips 1986, 1998; Ferson et al 2003; Greene 2000 – all of which have been previously discussed at CA (most of which I’ve placed online as well). This literature was unfamiliar to the audience, although the autocorrelation problems that plague proxy studies have also been faced in econometrics – a small branch of statistics, but one which may well have tools that transport better into the proxy world.

Briefly, econometricians have pondered for many years why important and widely used statistics (correlation, t-statistics) can be “significant” and even “strongly significant” for “nonsense” (or “spurious”) relationships. Sometimes a “nonsense” relationship can have a Pearson correlation (r) that is “99.999% significant” (in the strange sense of Juckes et al 2007), such as the examples of mortality versus the proportion of Church of England marriages (Yule 1926) or cumulative rainfall versus inflation (Hendry 1980). Is there any statistical way – i.e. some quantitative calculation – by which “spurious”/“nonsense” relationships can be culled from valid relationships?
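
As an online aside, the phenomenon is easy to reproduce – a sketch, not any published example, using independent random walks that have no relationship by construction:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, hits = 100, 1000, 0
for _ in range(trials):
    x = rng.normal(size=n).cumsum()   # two independent random walks:
    y = rng.normal(size=n).cumsum()   # no relationship, by construction
    r = np.corrcoef(x, y)[0, 1]
    t = abs(r) * np.sqrt((n - 2) / (1 - r ** 2))  # naive iid t-statistic
    hits += t > 1.98                  # nominal 5% two-sided threshold
print("nominally 'significant' in %.0f%% of trials" % (100 * hits / trials))
# Typically well over half the trials, far above the nominal 5%.
```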

The Juckes et al 2007 approach obviously accomplishes nothing in this respect. It is understood that nonsense relationships can have very high r values.

A statistic to which the multiproxy community has seemingly attached strong magical properties in this respect is the RE test, which, in the hands of Mann and his associates, becomes virtually a talisman. However, the RE test has limited “power” (using the word as statisticians use it) to reject nonsense regressions; I observed that the RE test is unable to detect either the Yule 1926 or the Hendry 1980 nonsense relationship, both of which pass with flying colors under standard splits of calibration and verification periods. In passing, as another online comment, the RE statistic is mentioned under another name in the econometric literature in the 1970s, prior to its adoption by dendros (Theil mentions the test – see Granger and Newbold 1973, not the more famous 1974), but it has never really caught on in econometrics for some reason.
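
For readers unfamiliar with the statistic, here is the RE calculation in a nutshell – a sketch with synthetic series sharing nothing but a trend, not the Yule or Hendry data themselves:

```python
import numpy as np

def re_stat(obs_ver, pred_ver, calib_mean):
    """Reduction of Error: skill relative to predicting the calibration mean."""
    return 1 - np.sum((obs_ver - pred_ver) ** 2) \
             / np.sum((obs_ver - calib_mean) ** 2)

rng = np.random.default_rng(5)
t = np.arange(120)
y = 0.02 * t + 0.3 * rng.normal(size=120)   # "temperature": trend plus noise
x = 0.05 * t + 0.5 * rng.normal(size=120)   # unrelated series with a trend

cal, ver = slice(0, 79), slice(79, 120)
coef = np.polyfit(x[cal], y[cal], 1)        # calibrate y against x
pred = np.polyval(coef, x[ver])
print("RE = %.2f" % re_stat(y[ver], pred, y[cal].mean()))
# Comfortably positive: the shared trend alone "passes" the RE test.
```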

Another statistic that has been proposed for the identification of nonsense regressions is the Durbin-Watson test (which quantifies first-order autocorrelation in residuals). Granger and Newbold 1974 argued that this could be used to distinguish one form of nonsense regression – regressions between random walks – where impressive correlation and t-statistics frequently occur, but which were found to fail the Durbin-Watson test. Phillips 1986 explained this phenomenon in a remarkable and seminal article.
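
The Durbin-Watson statistic is equally simple to compute; a sketch on the same kind of nonsense pair:

```python
import numpy as np

def durbin_watson(resid):
    """DW is approximately 2(1 - rho1); values near 0 flag strong positive
    autocorrelation in the residuals."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n).cumsum()   # independent random walks once more
y = rng.normal(size=n).cumsum()

coef = np.polyfit(x, y, 1)
resid = y - np.polyval(coef, x)
print("DW = %.2f" % durbin_watson(resid))   # typically far below 2
# The impressive-looking correlation comes with residuals so autocorrelated
# that the Durbin-Watson test flags the regression as suspect.
```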

Other econometric tests have been proposed. Nonsense regressions between random walks are hardly the only way in which spurious significance can rear its ugly head, but it’s a model that is tractable mathematically and has yielded insight into at least one class of problems.

At this point, it’s fair to say that there is no talisman that can be relied upon to separate “nonsense” from valid relationships (definitely not the RE statistic). Passing any individual statistical test does not guarantee that a relationship is not spurious, but failing any test (including verification r2) should raise red flags all over the place (see the NAS panel report for a specific comment on the impact of such failures on the ability to calculate confidence intervals). As I observed at AGU in 2006 and repeated in this talk, virtually all the canonical multiproxy reconstructions fail the Durbin-Watson test and the verification r2 test, something that would raise alarm bells for any reader familiar with the econometric literature.

As another online comment, I note that it’s not that climate scientists are inattentive to the phenomenon of spurious correlation – any attempt to link temperature to indices of solar activity typically instigates prompt statistical investigation by climate scientists. One Georgia Tech scientist criticized me for not applying myself to this particular topic, arguing, as others have, that my failure to do so made the blog one-sided. My response to him, as to others, was that it’s impossible for me to do everything, that I’m already overcommitted and that my priority is to deal with mainstream papers that are relied on by IPCC and that, other than our work, no comparable effort seems to have been made on the canonical multiproxy reconstructions and their key components such as bristlecone ring widths. Having said that, I said at that meeting (and again here) that it’s the sort of analysis that appears within my scope and I’ll try to organize the data and analysis on some future occasion.

Continuing on with spurious correlations, in the econometrics literature it has also been observed: (1) where data mining has taken place (either in an individual study or cumulatively in a discipline), the risks of spurious correlation increase, and (2) these risks are exacerbated when series are highly autocorrelated (as with series in the recons). A particular problem for the canonical multiproxy studies is that many of the multiproxy studies said to be “independent” actually use many of the same proxies over and over (bristlecones, Tornetrask, Polar Urals), so that problems affecting a repetitively-used proxy (e.g. spurious correlation) will affect multiple multiproxy studies – a point that I illustrated in a slide.

Greene (2000) observed that standard statistical distributions ceased to apply when a data set had been subject to prior mining or snooping. Since studies like Osborn and Briffa 2006, Hegerl et al 2007 and Juckes et al 2007 overtly re-cycle data, any purported results from these studies are compromised by the inapplicability of standard distributions to data-mined networks.
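
As an online illustration of the snooping point (a sketch with hypothetical noise “proxies”): screen a large pool of candidate series against a target, keep the best, and the standard t distribution flatters the winner.

```python
import numpy as np

rng = np.random.default_rng(7)
n, candidates = 79, 100
y = rng.normal(size=n)                 # target series
X = rng.normal(size=(n, candidates))   # 100 pure-noise candidate "proxies"

r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(candidates)])
best = np.abs(r).argmax()              # keep only the best correlate
t = abs(r[best]) * np.sqrt((n - 2) / (1 - r[best] ** 2))
print("best of %d: r = %+.2f, naive t = %.1f" % (candidates, r[best], t))
# Screening and testing on the same data: the winner looks "significant"
# under a t distribution that no longer applies after selection.
```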

Greene (2000) offered an interesting suggestion on a way to work around data mining concerns, which also deals in a straightforward way with concerns about whether bristlecone ring widths are a reliable proxy (through teleconnection) for world temperature. Greene observed that one effective way of checking econometric relationships for which data mining was a concern was, for data sets ending in 1980, simply to wait 30 years, update the data and see if the proposed relationship still held up. By sheer coincidence, the date used in Greene’s example, 1980, is the termination date of some of the key multiproxy reconstructions – which makes the segue especially apt. One of the most obvious questions for me when I first encountered proxy reconstructions was why authors had not updated the standard proxy series into the 2000s to verify that they responded to the warm 1990s and 2000s. This issue was very much on Kurt Cuffey’s mind in the oral NAS panel hearings, although the panel didn’t face up to it very squarely in the written report.

As opposed to arguing back and forth about whether a relationship between bristlecone ring widths and temperature in the period leading up to 1980 could be projected to apply to warm periods, why not simply update the records and find out? In econometrics, such an update would be viewed as a test of the model – if the relationship failed to hold up, then the model (e.g. that there is a linear relationship between temperature and bristlecone ring widths or PC1s or whatever) would be rejected. And, as readers of CA well know, looming over such a discussion is the “Divergence Problem”.
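
As an online aside, here is what such an update test looks like in miniature (synthetic data only; the post-1980 “divergence” is built in by construction):

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(1900, 2011)
temp = 0.01 * (t - 1900) + 0.2 * rng.normal(size=t.size)   # rising target
proxy = 0.012 * (t - 1900) + 0.3 * rng.normal(size=t.size) # tracks it to 1980
proxy[t > 1980] -= 0.02 * (t[t > 1980] - 1980)             # then diverges

pre, post = t <= 1980, t > 1980
coef = np.polyfit(proxy[pre], temp[pre], 1)       # model fitted pre-1980
err = temp[post] - np.polyval(coef, proxy[post])  # post-sample errors
print("mean post-1980 error = %+.2f (about 0 if the model held)" % err.mean())
# A systematic post-sample error rejects the fitted linear model:
# Greene's "wait and update" test applied to a diverging proxy.
```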

I did a very quick survey of recent work at 4 sites of particular interest: 1) our own update to 2007 of the Graybill Almagre bristlecones; 2) the Ababneh (2006, 2007) update of the Graybill Sheep Mt bristlecones (the most important site in MBH and Mann and Jones 2003); 3) the Grudd (2006, 2008) update of Tornetrask; and 4) the unreported Esper/Schweingruber update of Polar Urals.

There are, in effect, not one but two Divergence Problems (I didn’t make that distinction in my talk, but it’s logical).

The First Divergence Problem is the decline in ring widths (and MXD) in the 2nd half of the 20th century despite rising temperatures. For an econometrician, this “divergence” would be viewed as a contradiction of the hypothesis that RW (or MXD) can be used in linear temperature reconstructions and evidence that many supposed relationships were spurious or data mined. The Divergence issue was mentioned – but ineffectively handled – by both the NAS Panel and IPCC AR4. Both argued that the Divergence Problem was limited to high latitudes – IPCC making a more extreme and less accurate statement than the NAS Panel. However, results at Almagre and at Sheep Mountain (and elsewhere e.g. Woodhouse’s limber pines) provide evidence that divergence is not limited to high latitudes (see also the post on young dendros at AGU 2007).

In addition, there is a Second Divergence Problem – the “divergence” between recent chronology updates at Sheep Mt, Tornetrask and Polar Urals and earlier chronologies used in the canonical multiproxy studies. In each case, the more recent chronologies have resulted in substantial increases of medieval relative to modern values, with, in some cases, medieval proxy values outstripping modern values. I observed (as I have on many occasions here) that canonical multiproxy studies are typically not robust even to the version selected for key proxies – for example, the use of Polar Urals Update instead of Yamal reverses the medieval-modern relationship in Briffa 2000; the use of the Ababneh Sheep Mt version eliminates the HS in the Mann and Jones 2003 PC1; etc. Many such variations have been reported at CA.

The substantial changes from one version to another in these important series should, in my opinion, be very troubling to third party scientists. Until there is a thorough and comprehensive reconciliation of the different chronologies – Ababneh versus Graybill, Grudd versus Briffa, Polar Urals Update versus Yamal – I don’t see how any version of these chronologies can be used in a reconstruction; this places multiproxy authors in an untenable position and makes advances in the field extremely difficult and perhaps impossible.

As an online editorial comment, I don’t say this to mean that people should throw their hands up in the air. It seems to me that there should be some way of making climate reconstructions. But at present, third party scientists presented with reconstructions that purport to establish temperature in AD1000 to within 0.2 deg C or even 0.5 deg C should take such calculations with a grain of salt until the various “divergence” problems are resolved.

I’ve posted up a ppt (9 MB) of my presentation here, very slightly edited to add any y-axis descriptions (in italics) that had been left out and to improve the referencing.

Questions and Criticisms

I really didn’t get much, if any, feedback on any statistical or proxy interpretations either in the public forum at the presentation or in smaller groups or in private. Mostly people asked about “big picture” questions or matters that were not raised in my presentation. I don’t interpret that as meaning that people necessarily accepted my views on these matters, only that the audience had their own specialties and that my topics were sufficiently technical that it was hard for someone unfamiliar with the detailed terrain to really find a foothold.

One person asked me about borehole reconstructions, which are not really material to my analysis. The borehole recon in the IPCC AR4 spaghetti graph was referred to, but I observed that it does not go back to the MWP and so doesn’t really shed much light on the medieval-modern issue. One long borehole recon – the Dahl-Jensen recon – has a very elevated MWP in Greenland. I’m not sold on borehole recons and said so, but it’s not a topic that I’ve evaluated in depth. Thus this discussion was really just chitchat.

A second person asked me about (aboriginal) oral traditions in the high Arctic. I’m personally doubtful that any aboriginal oral traditions would really shed much light on things, for a variety of reasons (and the existence of Viking traditions would also have to be weighed in the balance). Again, this had nothing to do with my presentation, and the discussion, while interesting, was just chitchat.

One person asked me about the supposed “Inquisition” in which Mann’s financial records and bank accounts had been subpoenaed. I replied that Mann’s financial records had not been subpoenaed; he had been asked to list federal and private financial support for his work. It was my understanding at the time that such questions were relatively pro forma for the committee. I had to provide information about federal support when I testified – which took me only one line to answer. Judy Curry volunteered that she had to provide similar information as well when she testified before a different committee.

I don’t remember much about what the hockey stick class asked me. I do recall that my assertion regarding the dependence of the MBH98 hockey stick on bristlecones was challenged but I don’t recall whether the questioner provided any reasoning for the challenge. A graphic in my PPT (the one that shows the contributions of bristlecones relative to the almost white noise of other proxy classes) would have been useful in answering this question, but this question was asked prior to my EAS presentation.

Other than that, I’m drawing a blank in terms of remembering any questions about statistics, regression, principal components, bristlecones, tree ring chronologies, verification r2 or multivariate methodologies. Maybe someone will remind me and I’ll amend this note accordingly.

I was asked in one session about dealing with the media and how climate scientists could better get their message across. It was interesting to chat about this, but it’s not something about which I claim particular knowledge or expertise.

My emphasis on archiving data was endorsed at the presentation and my highlighting of problems in the field was acknowledged as a contribution. I had made a point of noting both privately and in my public presentation that the speleothem data for Partin, Cobb et al 2007 (both at Georgia Tech) was placed online at WDCP concurrent with journal publication – an excellent example of best practices. One scientist was surprised at archiving problems in paleoclimate, as apparently NSF Ocean and Polar programs have very strict compliance enforcement – data must be archived prior to application for the next grant. So it might be worthwhile to spend less attention on the general principles of archiving and more attention on ineffective administration at the NSF unit that deals with paleoclimate.

On a number of occasions, I was asked (in different ways) whether I endorsed IPCC findings. I’ve said on many occasions (including the preamble to my talk at Georgia Tech), that, if I had a senior policy making job, I would be guided by the views of major scientific institutions like IPCC and that, in such a capacity, I would not be influenced by any personal views that I might hold on any scientific issue. Many people seemed to want me to make a stronger statement, but I’m unwilling to do so. In the area that I know best – millennial climate reconstructions – I do not believe that IPCC AR4 represents a balanced or even correct exposition of the present state of knowledge. I don’t extrapolate from this to the conclusion that other areas are plagued by similar problems.

My presentation could undoubtedly have been improved (and many such improvements would occur even to me on a 2nd occasion). One person who had privately given me a particularly hard time about climateaudit came up to me at the reception and said that he found the presentation “compelling”; another said that he followed the linear algebra and complimented me on the approach. I’m sure that there were some, perhaps even many, who didn’t like my presentation, but were too polite to tell me.

I presume that most people in the audience have derived their perspective on these disputes from realclimate, from which they would believe that we had made elementary and even stupid errors and that any minor points on which we had been accidentally correct didn’t “matter”. I would like to think that such a person, even if not overwhelmed by my argument or presentation, would have left with the impression that I was not merely trifling, that I had certainly not got everything completely “wrong” in a trivial way, and that they should not necessarily accept Hockey Team assertions as the last word on the topic.

There are some other issues that I plan to re-visit on another occasion, not least of which will be posting rules for Climate Audit. Again, I appreciated both the invitation and the hospitality.

919 Comments

  1. MrPete
    Posted Feb 18, 2008 at 5:34 PM | Permalink

    Good summary, Steve.

    On the major challenge of moderating high volume “vigorous discussion”… I think I’ll have a chat with a few friends who have dealt with computer analysis of high volume phone calling — i.e., automagically determining which callers to an auto-answering system are upset/irate/etc.

  2. steven mosher
    Posted Feb 18, 2008 at 5:59 PM | Permalink

    good job, well put. Kudos to Dr. Curry, JEG, and Dr. Cobb

  3. MJW
    Posted Feb 18, 2008 at 6:05 PM | Permalink

    Readers of this blog should realize that Judy Curry has been (undeservedly) criticized within the climate science community for inviting me to Georgia Tech.

    If merely inviting a climate-change agnostic to speak provokes such criticism because he questions some of their methods, imagine the hostility that would be directed toward a climate scientist who didn’t join the consensus.

  4. Roger Pielke Jr.
    Posted Feb 18, 2008 at 6:11 PM | Permalink

    I don’t always agree with Judy Curry, but I have an awful lot of respect for her leadership here. This goes for the other Ga Tech faculty as well.

    And kudos to Steve for returning the cordiality with this post.

    Here is hoping that the goodwill rubs off on some of the more enthusiastic posters here and helps to focus comments on the interesting questions that can be resolved via analysis.

  5. jae
    Posted Feb 18, 2008 at 6:14 PM | Permalink

    Sounds like a great adventure. Kudos are deserved by all the participants.

  6. jeez
    Posted Feb 18, 2008 at 6:21 PM | Permalink

    I apologize to Judith for my angry post.

  7. BarryW
    Posted Feb 18, 2008 at 6:46 PM | Permalink

    While you were treated with courtesy, comments that were made by JEG and Dr. Curry were perceived as dismissive and discourteous by the denizens of the blog, and they reacted to them. If there was an overreaction, it was due to the constant sneering that is directed at anyone who disagrees with the AGW proponents. To get courtesy you also need to give it; when JEG and others argue the science they will and should get courteous responses; when they sneer, they only get what they are giving.

  8. Max
    Posted Feb 18, 2008 at 6:58 PM | Permalink

    Questions for Steve M:
    Do you have any western Canada presentations in the works? We’ve had the Gore love-in, and Suzuki has done his best Goebbels routine. As dry as much of the discussion may be to the general public, I do believe a scientific presentation with little to no shock and awe would be a popular event in the West. I guess the decider is whether the institutions here would prop up such a presentation; they seem to like purchasing hockey sticks.

  9. Bernie
    Posted Feb 18, 2008 at 7:24 PM | Permalink

    Steve:
    It sounds like you jammed a semester-long set of lectures into one seminar presentation. I suspect that many in your audience were simply overwhelmed. Great job. I concur totally with your comments on civility. I think we must all develop slightly thicker skins, particularly when confronted with what I hope are JEG’s attempts at Gallic wit and humor. I trust you found him pleasant in person.

  10. Francois Ouellette
    Posted Feb 18, 2008 at 7:45 PM | Permalink

    Andrew, try to locate these two:

    Wagner, F., Kouwenberg, L. L. R., van Hoof, T. B., Visscher, H., 2004, Reproducibility of Holocene atmospheric CO2 records based on stomatal frequency, Quart. Science Rev. 23, 1947-1954.

    Kouwenberg, L. Wagner, R., Kürschner, W. Visscher, H., 2005, Atmospheric CO2 fluctuations during the last millennium reconstructed by stomatal frequency analysis of Tsuga heterophylla needles, Geology, vol. 33, pp. 33-36.

    Also, I think you can find Thomas van Hoof’s Ph.D. thesis on-line. It’s entitled “Coupling between atmospheric CO2 and temperature during the onset of the Little Ice Age”, by Thomas Bastiaan van Hoof, from the University of Utrecht

    good luck!

  11. Bill Derryberry
    Posted Feb 18, 2008 at 8:01 PM | Permalink

    Kudos Steve, not being a scientist I do appreciate your “report” of events at the seminar. I wish I could have attended as it is less than 100 miles from home.

    I wish all blogs concerning science and climate were as cordial as yours
    here at climateaudit is.

    Again Kudos,

    Bill

  12. NeedleFactory
    Posted Feb 18, 2008 at 8:25 PM | Permalink

    Steve McIntyre says

    [I] now try to delete angry posts when I notice them

    When you delete a comment, could you please leave its number? Some threads on Climate Audit, especially those containing simultaneous discussions, are hard to follow when comments refer to previous comments by number and preceding comments disappear. As a simple example, in the post where you announce your trip to Georgia Tech, your own comment #10 allegedly responds to subsequent comment #11.

    Steve: As I’ve said on many occasions, given the software that I have, it’s much faster for me to delete than to snip if there’s a foodfight or more than a singleton. If readers would simply not rise to every barb and ignore offtopic posts, then I would be prepared to snip singletons. But I can’t spend the time to snip multiple posts. I’d encourage readers to request OT and angry posters to go to Unthreaded or BB or to settle down.

  13. deadwood
    Posted Feb 18, 2008 at 8:32 PM | Permalink

    While I agree with Steve that flaming does occur here and should be discouraged, I also note that the flames are set on low. This is quite unlike the reception I see given to skeptics over at RC and “Open Mind”. Over there that kind of treatment is the norm and, more often than not it seems, dished out by the host.

    Steve: Yeah, it’s inconsistent for people to object to the relatively mild flaming here and not to object to much worse behavior elsewhere. But there it is. I don’t set my standards by their standards. I’ve asked people not to post angry comments here. So don’t.

  14. MarkW
    Posted Feb 18, 2008 at 8:37 PM | Permalink

    I don’t know how references to barking dogs and being funded by Exxon can be regarded as polite.

    Steve:
    No one said that it was. But no one said that you were obliged to rise to the bait either. If someone says something foolish or impolite, you will accomplish more by being unctuously and annoyingly polite in return than by getting into a food fight.

  15. Geoff Sherrington
    Posted Feb 18, 2008 at 8:51 PM | Permalink

    Steve,
    Nothing I write here is to be regarded as negative.

    An analogy comes to mind. You are fishing and you are getting bites. You convince a sceptical audience that you are getting bites. No argument. But they want to see you land a big fish. In this case the big fish is (perhaps) the formation of a group to investigate IPCC claims formally.

    This was always the difficulty in the public opinion work I did in the past. To change public opinion, you have to convince the people who first posed the question, then those who experiment with solutions and spread them, then those who review and weigh diverse possibilities, then those who advise policy makers, then the policy makers or politicians themselves, as a majority.

    This is a tough road for a person or group starting with dissenting evidence. I wish I had an answer on how to travel it, but I don’t. The main words that come to mind are accuracy, encouragement, persistence. You have caused me to add an extra one, politeness. Congratulations.

  16. Raven
    Posted Feb 18, 2008 at 8:55 PM | Permalink

    MarkW says:

    I don’t know how references to barking dogs and being funded by Exxon can be regarded as polite.

    I believe that Steve’s point is that casual readers are not going to parse long threads and figure out that 10 angry replies were provoked. Civility in the face of provocation is still a virtue.

  17. yorick
    Posted Feb 18, 2008 at 9:04 PM | Permalink

    I look forward to reading about spurious correlations of solar to temp. I have been fascinated by the long running coincidence that both Svensmark and the Max Planck Institute have documented. It is an especially timely topic with solar cycle 24.

  18. Jeff A
    Posted Feb 18, 2008 at 9:47 PM | Permalink

    I showed that, for the one-PC case of the MBH AD1400 and AD1000 steps, the proxies were correlation-weighted; that correlation weighting was equivalent to a technique actually known in the broader statistical community (one-step Partial Least Squares regression); that PLS coefficients were a rotation of OLS coefficients; that, in a situation (such as MBH) where there was little multicollinearity between the proxies, the rotation matrix was “near”-orthogonal. Given that it’s trivially easy to picture overfitting from a multiple “inverse” OLS regression of temperature (or temperature PC1) onto 65-90 non-collinear proxies in a period of only 79 years, it therefore follows (from the near-orthogonality of the rotation) that overfitting will occur in a PLS regression where there is little multicollinearity in the underlying proxies. In such cases, of course, you’re going to get a good fit in the calibration period, but confidence intervals calculated from such calibration residuals have no scientific meaning – a simple point that seems to have eluded far too many. I argued that the “no-PC” MBH98 variant that Wahl and Ammann put forward in an effort to salvage MBH falls prey to these overfitting problems (among others) and merely goes from the frying pan into the fire.

    My brain hurts.

  19. Ron Cram
    Posted Feb 18, 2008 at 9:47 PM | Permalink

    Steve,

    Thank you for this recap. It went pretty much the way I expected it to, except on one point. I would have expected at least one person to have challenged you on at least one issue. I think it unfortunate no one was prepared to discuss the proxies, methods or statistics with you.

  20. Ian McLeod
    Posted Feb 18, 2008 at 10:11 PM | Permalink

    Good summary Steve, I’m not sure if it’s just me, but it seemed like you’ve been gone for a long time. Nice to see that you’re back and wiser for the experience.

    #18 Indur Goklany
    I’ve recently read your brilliant book The Improving State of the World: Why we’re living longer, healthier, more comfortable lives on a cleaner planet. I want to say what a refreshing world view it offers when juxtaposed against the incessant liberal doom and gloom we’re constantly bombarded with on a daily basis. Bravo! I might add to all readers that this book is a must-read and should be prominently displayed in your library.

  21. Tom C
    Posted Feb 18, 2008 at 10:20 PM | Permalink

    I apologize for any comments that were not constructive.

    I’m curious about this:

    I then attempted to place temperature reconstruction recipes in a broader statistical context – first by showing, in relation to MBH, that all the MBH operations were linear; and that the MBH reconstruction (like other reconstructions) was thus a linear combination of underlying proxies. I showed a graphic (previously shown at CA, like most of the material) in which AD1400 MBH weights were represented by dot area on a world map, showing the tremendous influence of bristlecones. I posited that it should be possible to calculate weights for the RegEm of Mann et al 2007 and that its weights would look pretty similar to MBH weights – with a very high bristlecone weighting.

    Apparently nobody asked any questions. Did you get the feeling, though, that the above point was understood, or just bored stares all around?

  22. Steve McIntyre
    Posted Feb 18, 2008 at 10:27 PM | Permalink

    I’m getting annoyed with people posting off-thread topics. It takes me time to move them to Unthreaded. Or go to the Bulletin Board. What do stomata or HadSST2 have to do with Georgia Tech? C’mon folks.

    Also there’s no need to try to get the last word in on various disputes. I’ve read the thread. I can figure out who said what to whom. I don’t rise to every bait. If rising to bait merely results in an annoying record here, please ignore it.

  23. Kenneth Fritsch
    Posted Feb 18, 2008 at 10:41 PM | Permalink

    I can only speak for myself when I say I read and post here at CA to learn something about climate science and the disciplines that it uses like statistics. Most of that is derived by way of the analyses and reviews of climate science papers that are carried out here and to a much lesser degree, unfortunately, by those scientists who come here with the consensus point of view. The blog appeal of the analyses is the real time exchanges it allows and in sometimes providing some background on and personalities of the involved climate scientists.

    While the visit to GT was no doubt, at a minimum, a public relations coup for Steve M, and perhaps even for some on the GT staff, it was unfortunate that posters (including myself) here rose to the bait as Steve M was traveling there. For my selfish interests, I can only say I learned little new about climate science and precious little (as I had hoped) about the students at GT. Not all is lost in these circumstances in blogging, as one can often learn something, at least in this case, about the climate scientists involved. My only comment there would be to reiterate my preference for teaching over preaching.

  24. Gerald Machnee
    Posted Feb 18, 2008 at 10:44 PM | Permalink

    Steve:
    You got a lot into your summary. As a result of your visit and presentation, do you believe that it will be somewhat easier to communicate analyses of papers to Georgia Tech or similar institutions, and to have them take a more serious look at your presentations and work? You had many ears there who now know you are real. Congratulations on a great technical and PR job!

  25. Steve McIntyre
    Posted Feb 18, 2008 at 10:57 PM | Permalink

    In terms of millennial reconstructions, I suspect that some people in the audience would probably conclude that it’s a much harder job than people are making out. As I’ve mentioned before, two young scientists at 2006 AGU told me that they thought that the HS-type reconstructions had been killed and that it might take 10-20 years and much better data to get anywhere. These were third party scientists familiar with the matter.

    It’s possible that a few people in the audience might be added to that list. But don’t kid yourself – nobody is going to re-think views on the impact of doubled CO2 on climate as a result of this, nor am I asking them to.

  26. Harry Eagar
    Posted Feb 19, 2008 at 12:06 AM | Permalink

    Steve, I can’t say I am surprised that you didn’t get any (many) questions about statistical methods.

    Many years ago, I was the only non-engineer in a crowd of about 1,500 engineers who had paid $10,000 a head for a seminar on quality control.

    They were getting a presentation by a Ph.D. statistician/engineer who was in charge of a multibillion manufacturing project.

    Suddenly he stopped, looked out over the audience and said, “Come on, people. This is the way we have to do it now.”

    It was obvious to him — and to me — that hardly any of these PEs were able to follow his presentation.

  27. AndyL
    Posted Feb 19, 2008 at 1:33 AM | Permalink

    In comments about the original invite, I recall Judith Curry / JEG stated that they would post their own release or write-up somewhere.

    Has this happened, and can someone provide a pointer? Similarly, are there any other discussions about this event elsewhere?

  28. Posted Feb 19, 2008 at 2:20 AM | Permalink

    Steve, can’t get Eq. 3 (Rescaling in 1-D Case) on page 31 to work, is it correct? Next eq., 1-D Case, works fine.

  29. David
    Posted Feb 19, 2008 at 2:46 AM | Permalink

    Steve,

    Once the current theories fail to hold up to prediction, then those of the consensus may start to pay more attention to your work. However, at that point many will have gotten all of what they want: Policies that fit their world view, more funding, and the facts straightened out after the fact. All you will have done is to speed up the postmortem analysis. So I must ask, is what you are doing all for naught?

  30. Posted Feb 19, 2008 at 3:15 AM | Permalink

    #32

    Number of AGW theories falsified if temps go down: 0
    Number of skeptical theories confirmed if temps go down: 0
    Number of climate scientists who pay more attention to Steve’s work if temps go down: 0
    Steve’s contribution to science: priceless

  31. peter
    Posted Feb 19, 2008 at 3:16 AM | Permalink

    Steve, don’t you think you should maybe consider using a volunteer editor? He/she could change every so often, or it could rotate regularly. It is a real waste of your time to have to edit and delete posts. It has to be done however, just a pity for you to be distracted by doing it personally. Given clear guidelines, they could probably do pretty much what you’d want.

  32. AlanB
    Posted Feb 19, 2008 at 3:18 AM | Permalink

    RE: 28.

    See jeg’s comment (18/02/2008) on his Blog at Strange Weather

    “Hi TCO !
    Good to have you here. Please forgive me for not posting your earlier comment on Steve McIntyre’s visit to GaTech last week, which I found needlessly aggressive.

    So, the visit, was… err… interesting, as we say in technical jargon. I know Steve is preparing a post on it, and I was waiting for him to shoot first, which he hasn’t done, surprisingly. We have had quite a bit of back channel communication on this to avert a blog bloodshed…

    I will tell you this much : at GaTech he was treated as an equal by everyone he met with over his 2-day visit. He will be the first to say he was well-received, and I think a good time was had by all.

    At the seminar, Steve presented his beef against the Hockey Stick reconstruction in a more coherent and self-contained manner than one could ever gather from Climate Audit (or his articles). I personally was very curious to see his mathematical results on the genealogy of regression methods used for such reconstructions, but he only had enough time to flip through a few equations-filled slides. When his post is up, I think he will link to the Powerpoint file so that you can have a look if you wish. He will also summarize his story better than I can.
    I really appreciated him going into some details about spurious regressions : it is an area of applied mathematics that has been abundantly covered in econometrics, but most of our field is hopelessly naive about it. I think Steve’s most original contribution to the debate is to frame paleoclimate reconstructions in that well-known mathematical context.

    His presentation did run significantly overtime, despite repeated advice and despite having been loaned Judy Curry’s watch to keep track of minutes. But I can’t say he was the first speaker to do so… I had qualms about a few other things, which need not be stated here – I shared those comments with him in private.
    All in all it was a pretty successful visit : people did get to hear what he had to say, and some of them spent time to show him around the labs and discuss various aspects of climate science with him. That is not to say that everyone went for his shtick, but that holds true for every visitor.

    As for him, I hope he will come out of the experience with a slightly more positive view of what we do, and hopefully (but I am bordering on utopia here) that this will be reflected in his future online behavior regarding climate “scientists” (with the dirty quotes).

    So the visit was by no means the “setup” that some of his more paranoid followers were trying to make of it. But nothing is new here : some dogs at Climate Audit will do nothing but bark all day. In truth, I think the most Steve could be upset about is that Kim Cobb actually called him a “climate scientist”, which must have been an unbearable insult given the contempt in which he holds our profession.

    For the rest… what happens in Atlanta stays in Atlanta. We were originally thinking of writing a public statement given the amplitude of the uproar on both sides (NSF managers getting angry that we invite Steve within a university context, CA dogs barking because we are not electing Steve head of the Department of Earth and Atmospheric Sciences). But like all blog frenzies, it’s as quickly come as it’s gone, so we’ll keep it mellow.

    I think it is a fruitful dialogue… but only time will tell.”

  33. Stephen Richards
    Posted Feb 19, 2008 at 3:21 AM | Permalink

    Steve Mc

    I agree, no excuse for anger. I note that since the blog award competition it has steadily got worse, but we should all be self-aware. Remember, smilies were invented because we all read text with our own eyes and not those of the author. English text can be very easily misread or misinterpreted. So, more smilies means less misunderstanding and less excuse for anger. Keep it to yourself.
    We all owe a great debt to Steve Mc, and part of that debt now transfers to the team at GT.

  34. PhilA
    Posted Feb 19, 2008 at 4:36 AM | Permalink

    #35. “not everyone went for his schtick” … “hope it will be reflected in his future online behaviour” … “a more coherent manner than one could ever gather from his blog or his articles”. Not to mention fundamental scientific criticism being described as a “beef”.

    If that is how he thinks he should “treat people as an equal”, I’d hate to see how he refers to those he looks down on. A great shame and one that to me loses many of the respect points he’d otherwise have gained from having given the invite and the hospitality.

    Sad. One can only presume he was pre-emptively anticipating a discourteous blast from Steve, so I hope he is now duly shamed by Steve’s admirable courtesy and respect. Kudos Steve, as always your behaviour is (or is that a “bordering on utopia” hope?) an example to everyone else.

  35. AlanB
    Posted Feb 19, 2008 at 5:24 AM | Permalink

    Re PhilA #37

    I feel you have misjudged jeg’s tone. That was a return comment on his own blog to a most vituperative TCO. Julien has a good sense of humour… Stephen Richards #36 is quite right. Alan

  36. John A
    Posted Feb 19, 2008 at 5:40 AM | Permalink

    I find it disappointing that Georgia Tech students, post grads and post docs didn’t ask any tough technical questions about the math and seemed more interested in touchy-feely subjects of personal motivation and “big picture issues”.

    Is anyone doing mathematical research at Georgia Tech on forensic issues in statistics? On spurious regressions and data-quality input into climate models?

    Steve:
    I said that I drew no conclusions from the questions other than people had other specialties. Nor should others.

  37. Posted Feb 19, 2008 at 5:48 AM | Permalink

    @Alan– It looks like, on further reflection, JEG took that down. JEG can have an unfortunate way of phrasing things on his blog. It often takes bloggers a little time to realize that ironic and acerbic words sound worse in posts than in real life where tone of voice often conveys a more positive emotive content. (Even those who realize this sometimes have difficulties– I do.)

    @SteveM,
    I’m glad it went well. As I said, I admire Judy Curry a lot for inviting you.

    Listening to, or reading, the actual arguments, questions and comments of those you wish to convince is the first step in overcoming this problem. It appears Judy realizes this and acted on it. Normally, this would not be characterized as a bold move, but oddly enough, in the context of climate-blog wars, it was.

    I do wish we had a youtube video! (I’d like to see the talk.)

  38. AlanB
    Posted Feb 19, 2008 at 6:07 AM | Permalink

    Lucia #40

    It is still there (Noon GMT 19/02/08) in the comments on his post “The Heisenberg Principle of Climatology”. When I cut and pasted, I lost the emoticons he put in – so he was trying!

  39. Dave Dardinger
    Posted Feb 19, 2008 at 7:01 AM | Permalink

    re: #39 John A,

    I find it disappointing that Georgia Tech students, post grads and post docs didn’t ask any tough technical questions about the math

    Well, there’s nothing stopping them from having gone home, boning up on the math and asking the tough questions here now. If present / future Climate Scientists can’t or won’t do that, what conclusions can we draw?

  40. Judith Curry
    Posted Feb 19, 2008 at 7:02 AM | Permalink

    A few brief comments. I enjoyed my personal interactions with Steve; he is an interesting and intelligent person with a sense of humor. He appears to be a decent listener, and as per his post he clearly took away some of what I hoped he would. In our discussion in the hurricane group, he asked a provocative question, which deserves some further thought. With regards to his seminar, there were too many slides (80) with too much information for much to be gleaned about the nuances, particularly of his statistical arguments, but his main conclusion in this regard certainly came across clearly. I learned much about bristlecones (a topic which normally makes my eyes glaze over, but now I have a better appreciation).

    Students in the hockey stick class are quite familiar with Steve’s arguments, so there was probably a core group of about 15 people that are quite familiar with this subject area. In particular, Steve had lunch in a group that included a student working specifically on the statistical aspects. I don’t know what specific questions the most knowledgeable of the students had for Steve. I do know that there was much interest in the sociology of Steve’s role in all of this.

    There were definitely some flareups regarding blog content. I would make a subtle distinction regarding the angriness issue. Generic angriness against AGW proponents or political skeptics is far less inflammatory than angriness (or ad homs, motivational attacks) directed at an individual (both are best avoided, I agree). With regards to my ExxonMobil comment (in hindsight it would have been better to use the generic label political skeptic), this was not said in anger; it was an attempt to provide some historical context for the situation that Steve walked into ca. 2003. While I don’t want Steve to burden himself with such editing, I would be happy to have my comment on the other GT thread edited to change “ExxonMobil” to “political skeptics”, in the interests of calming the dialogue here (and generating a more productive dialogue with a larger number of climate scientists), which I hope is a major outcome of Steve’s visit.

  41. PhilA
    Posted Feb 19, 2008 at 7:04 AM | Permalink

    Re: AlanB #38

    It would be nice to think I was misjudging the tone. But to me that looks very much like “playing to the gallery” of CA-despisers.

    Which I suppose only lends more weight to the idea that getting this sort of visit/dialogue is a Good Thing, since then both sides should have a better idea of the spirit in which future words are put to pixel.

  42. fFreddy
    Posted Feb 19, 2008 at 7:29 AM | Permalink

    Re #43, Judith Curry

    (in hindsight it would have been better to use the generic label political skeptic)

    Do you accept the existence of scientific skeptics?

  43. Hoi Polloi
    Posted Feb 19, 2008 at 7:39 AM | Permalink

    Great report, all’s well that ends well. It shows nothing beats the personal approach; there are already too many keyboard warriors in cyberspace…

  44. AlanB
    Posted Feb 19, 2008 at 7:42 AM | Permalink

    Re Phila #44. In case anyone misses JEG’s comment:

    I really appreciated him going into some details about spurious regressions : it is an area of applied mathematics that has been abundantly covered in econometrics, but most of our field is hopelessly naive about it.

    The “CA despisers” will love that!

  45. Posted Feb 19, 2008 at 7:44 AM | Permalink

    I’d like to apologize to Steve for any frivolous or undignified bandwidth I may have generated. He has far better things to do with his time than ride herd on a pack of unruly pups! :=)

    It sounds like the visit was a big success, one that widened the horizons of students, faculty, and speaker alike. We’re looking forward to his visit to OSU in the spring!

  46. Posted Feb 19, 2008 at 7:53 AM | Permalink

    AlanB – OK, I see what you posted in comments. You missed an informational bit:

    NSF managers getting angry that we invite Steve within a university context,

    You know… if NSF managers were calling Judy, then she is really, truly, amazingly brave! Also, shame on the NSF managers for their lack of insight.

  47. AlanB
    Posted Feb 19, 2008 at 8:14 AM | Permalink

    Lucia #49

    I see what you mean!

    The National Science Foundation promotes and advances scientific progress in the United States by competitively awarding grants and cooperative agreements for research and education in the sciences, mathematics, and engineering

    from NSF Funding

  48. Joe Black
    Posted Feb 19, 2008 at 8:25 AM | Permalink

    These same NSF “managers” would be the ones who don’t hold the “climate scientists” (no known licensing body AFAIK) to the legal standards of data archiving?

    Perhaps these “managers” should come out of the shadows to speak to their own performance(s)?

  49. Joe Black
    Posted Feb 19, 2008 at 8:37 AM | Permalink

    http://www.nsf.gov/funding/aboutfunding.jsp

    “About Funding

    “The National Science Foundation funds research and education in most fields of science and engineering. It does this through grants, and cooperative agreements to more than 2,000 colleges, universities, K-12 school systems, businesses, informal science organizations and other research organizations throughout the United States. The Foundation accounts for about one-fourth of federal support to academic institutions for basic research.

    “NSF receives approximately 40,000 proposals each year for research, education and training projects, of which approximately 11,000 are funded. In addition, the Foundation receives several thousand applications for graduate and postdoctoral fellowships.

    “The agency operates no laboratories itself but does support National Research Centers, user facilities, certain oceanographic vessels and Antarctic research stations. The Foundation also supports cooperative research between universities and industry, US participation in international scientific and engineering efforts, and educational activities at every academic level.

    “NSF FUNDING OPPORTUNITIES/PROGRAMS

    “Most NSF funding opportunities are divided into broad program areas:

    “Biology
    Computer and Information Sciences
    Crosscutting Programs
    Education
    Engineering
    Geosciences
    International
    Math, Physical Sciences
    Polar Research
    Science Statistics
    Social, Behavioral Sciences

    “Program deadline and target date information can be found on the Upcoming Due Dates list. It also appears in individual program announcements and solicitations. These publications can be found through the Funding page. To receive rapid notification of new program information by email you may subscribe to National Science Foundation Update.”

  50. Judith Curry
    Posted Feb 19, 2008 at 8:48 AM | Permalink

    ffreddy ffreddy, see previous GT thread for my definition of scientific skeptic, political skeptic, armchair skeptic.

    Re NSF, I have confidence that the paleo program managers want to get to the bottom of scientific issues surrounding the hockey stick and preserve the integrity of paleoclimate science. This is not as easy as it might sound given resource limitations and politicization of the issue. Open communication in my opinion will help the situation (things are too adversarial at the moment). In the interests of opening communication (and perhaps even of helping the resource base for paleoclimatology), I propose that at some point we have a thread here where we design a strategy for sampling of multiproxies towards doing regional and global time series of surface temperature for the past 2000 years, taking a broad look at this in terms of what it would take in terms of the actual data to assemble a convincing time series. Hopefully this would attract some climate researchers to engage in this dialogue.

  51. Ross McKitrick
    Posted Feb 19, 2008 at 9:02 AM | Permalink

    Steve – congratulations on your visit. I was a bit nervous sending you out unchaperoned, but seems like you did a good job and, as always, had fun. I am not surprised that you don’t recall hearing a lot of technical questions. Your presence answered the one that people really were curious about: “Does he have horns and a tail?” Having answered that one satisfactorily, you might get the more technical ones sent your way in the future by email.

  52. fFreddy
    Posted Feb 19, 2008 at 9:06 AM | Permalink

    Re #53, Judith Curry
    Right, this one, I assume.
    Do you have a similar categorisation for AGW believers?

  53. Posted Feb 19, 2008 at 9:33 AM | Permalink

    Steve, thank you kindly for the GT update. I come away from this experience of yours with two salient issues on which I would like to comment, and possibly have answered:

    First – why is there such anger within the scientific community at people such as yourself who will fly in the face of convention? Is science not the investigation and exposure of every boundary of the particular field in which the discussion is placed? I see it as very limiting, indeed utterly closed-minded, for the scientific fields in question that university academics should have to ATONE for inviting people with varying opinions to speak.

    Second – of all of your comments, I find one actionable statement (I paraphrase): that no studies have been completed that verify the “old” proxies against the period from ~1980 to the present.
    How much would this cost ($$) to complete? Is it possible to commission such a project under your supervision? How long would such a study take to complete?

    Thank you again Steve, for taking the time and energy to present to us a cogent and meaningful dissertation on the current state of affairs in climate science.

  54. John Lish
    Posted Feb 19, 2008 at 9:48 AM | Permalink

    #53, while I feel that such a thread may be useful in taking things forward from Steve’s GT presentation, I feel that the phrase “preserve the integrity of paleoclimate science” needs to be contextualised. Part of the politicisation of paleoclimate science has come from a hardening of meaning out of analysis. I personally view paleoclimatology as having integrity within the local environment and climate. It’s the attempts to extend this to regional and global meanings that make it lose integrity.

    So while I don’t have any issue with exploring this subject, any attempt to produce a convincing regional and global time series of surface temperature for the past 2000 years will, IMO, end up as flawed as MBH98+99 and others. “Convincing” is too optimistic, as there are not enough proxies and you get into extrapolation. Instead any mapping process should be open-ended, so that it can be built upon and evolved by future researchers. That would be a project that I could support. I hope this isn’t too far off-topic.

  55. JP
    Posted Feb 19, 2008 at 9:54 AM | Permalink

    Judith,
    Are any of your undergrads or grad students interested in taking advanced courses in statistics? Did Steve M’s visit pique their interest in statistical analysis?

    Steve:
    The issue isn’t just statistics. I had a nice chat with von Storch at the time of the House hearings about statistics and he expressed great frustration at his attempts to get useful perspectives from academic statisticians, whom he found locked into modes of thinking that were not helpful to his problems. The econometric texts that I mentioned are ones that would not appear on any statistics curriculum and might not be familiar to most statisticians. Whether we are right or wrong, we are noticing aspects of the problem that other people weren’t noticing, and by being too quick to call Ross and me “amateurs”, people have missed that. One of the disappointing aspects of the NAS panel for me was that they did not include an econometrician on the panel – I wrote to them proposing that they needed to add one in order to ensure that they complied with their mandate for panel selection, but my suggestion was ignored – to the loss of the panel’s perspective IMO.

  56. Bruce
    Posted Feb 19, 2008 at 10:00 AM | Permalink

    Why do AGW proponents lecture “skeptics” on politeness at the same time as their “gurus”, like David Suzuki (the Al Gore / James Hansen of Canada), are suggesting that politicians be sent to jail if they don’t drink the AGW kool-aid?

    http://www.canada.com/theprovince/news/editorial/story.html?id=d8d435aa-ef76-41a4-bb75-7daebeb155dd

    B.C.’s very own David Suzuki, a Companion of the Order of Canada, wants to throw politicians who question his climate-change thesis in prison.

    The celebrity scientist dropped the bombshell while addressing an audience at McGill University. He urged students to seek ways to incarcerate elected officials who are “committing a criminal act by ignoring science.”

    This was no slip of the tongue. According to the National Post, he said very much the same thing during a speech at the University of Toronto last month.

    I think anger aimed towards AGW proponents is very justified.

    Steve:
    I don’t. If you want to indulge in anger, which I discourage you from doing, please do it elsewhere.

  57. yorick
    Posted Feb 19, 2008 at 10:08 AM | Permalink

    Your presence answered the one that people really were curious about: “Does he have horns and a tail?”

    I was going to make a comparison of Steve’s trip to GT to bringing an exotic beast into Hogwarts in order to teach the students how to master it, but you sort of beat me to it.

    As for resource issues and paleoclimate: I can’t imagine that the political situation isn’t perfect for funding new programs to get to the bottom of this, from observation platforms to climate networks to paleoclimate expeditions. HOWEVER, it is not difficult to imagine such a program being attacked by the alarmists as just a way to postpone action, especially if it came from Bush. Because the science is claimed to be “settled”, funding is cut for everything but impact-type studies, which, if the science truly were settled, would be logical. If we can afford this stimulus package, we can afford better climate data.

  58. John V
    Posted Feb 19, 2008 at 10:35 AM | Permalink

    I’ve gone back to lurking for the past couple of months, but SteveMc’s comments on civility have brought me back. It seems that many people (myself included) find the “opposing” communities to be angrier than their own communities. Both sides are easily fired up, so everyone (myself included) has to be careful to avoid adding fuel to the fire.

    #57 Bruce:
    Quoting someone from outside this discussion and this community adds absolutely nothing to the conversation. There are extremists on both sides who say crazy things. Pitting them against each other is a waste of time.

  59. Francois Ouellette
    Posted Feb 19, 2008 at 10:36 AM | Permalink

    Judith,

    Of course you should be commended for inviting Steve. But in any other circumstances, he would have been invited earlier, and it wouldn’t have caused such a fuss. So it all comes back to the “politicization” of the issue, as you say.

    [snip – too angry]

    I still don’t get it.

  60. Steve McIntyre
    Posted Feb 19, 2008 at 10:41 AM | Permalink

    Judy, you say:

    Re NSF, I have confidence that the paleo program managers want to get to the bottom of scientific issues surrounding the hockey stick and preserve the integrity of paleoclimate science. This is not as easy as it might sound given resource limitations and politicization of the issue. Open communication in my opinion will help the situation (things are too adversarial at the moment).

    I’ve been very critical of NSF on this blog for their failure to implement high-level US federal archiving policy. Not many people are in a position where they can criticize NSF without worrying about repercussions, but I can. It’s unconscionable that they should have attempted to put pressure on you.

    Maybe it would be a good idea for them to spend a little more time doing their own job – spend a little time trying to get paleoclimate scientists to archive data – and a little less time worrying about what people at Georgia Tech might learn about spurious regression and bristlecones.

  61. Posted Feb 19, 2008 at 10:52 AM | Permalink

    Good post on an event that hopefully will push the whole narrative much further forward. What has set CA apart from just about all other blogs on climate-related matters has been the willingness of some secure academics to engage in the various debates and discussions: both Judith Curry and Rob Wilson spring to mind as quality scientists who have sifted through posts and comments, dealt with some of the harsher rhetoric, and still come back to add their contribution. It is only sad that Steve’s invite to Georgia Tech should have stirred up so much controversy in the first place. Three things come to mind: (1) what most people do not realize is how fragile an ego most academics have: it is by practice a very ego-driven profession, and for experts to accept a non-professional into their club is hard for many who are less emotionally secure; (2) much of the invective is fuelled by a blogosphere more concerned with the political spin to be placed on science than with the provenance of the science per se; and (3) climate science is over-subscribed by what can best be termed “research managers” (see the IPCC structure), wherein a lot of opinion and decision-making clout rests with persons whose livelihood is derived from the policy arena they participate in but not from any empirical, scientific research that they conduct. This last group is the least tolerant of non-sanctioned, critical commentary.

    Hopefully Steve’s activities and the initiative taken by those at Georgia Tech will act to inspire further dialogue that is constructive, informed and precedent-setting in not resorting to the ad hominem rhetoric so rampant on other blogs.

  62. Steve McIntyre
    Posted Feb 19, 2008 at 10:53 AM | Permalink

    #54. You’re talking relatively small dollars to update a Graybill bristlecone site (they are all near roads). Altogether, there were 3 days of sampling, plus you have the costs of a 4-wheel-drive vehicle. We borrowed a tree ring borer, but it wouldn’t cost much to buy one. Measurement and analysis cost about $5000. Writing up an article and presenting it at conferences adds to the cost.

    If someone gave Pete Holzmann and me $200,000 to update 5 Graybill bristlecone sites and submit the measurements to ITRDB, we’d make a handsome profit for ourselves – that works out to $40,000 per site, against direct costs not far above the $5000 measurement-and-analysis figure plus field expenses. There aren’t any big ticket expenses.

  63. Howard
    Posted Feb 19, 2008 at 11:19 AM | Permalink

    Steve:

    I rarely post (nothing meaningful to contribute) and have posted some sarcastic stuff as well, for which – and for wasting your time – I apologize. If you made the comments open only to registered members, you could avoid deleting posts from habitual offenders.

    Judith: Thanks for bridging the gap between the RC and CA folks. Shows real guts. I also appreciate your putting up with the insults, etc. There is also another category of skeptic: the gut-instinct skeptic. Most geologists I know who do not practice paleoclimatology are quite skeptical, based on similar experiences of grand conceptual and numerical models built on error-ridden data collection, spurious correlations, cherry picking and data mining. By the way, in my field of science, hiding or altering raw data can land you in jail. Until the climate science community starts to police itself and force all of the information into the light of day, color me a gut-instinct skeptic.

    Cheers

  64. MarkW
    Posted Feb 19, 2008 at 11:55 AM | Permalink

    If the paleo program managers are so concerned about the politicizing of this issue, maybe they should stop trying so hard to politicize it.

  65. Gary
    Posted Feb 19, 2008 at 1:02 PM | Permalink

    I’ll add my endorsement of SteveMc’s efforts to cool the rhetoric and expand the dialog between the professionals and the non-professionals. Steve, as you did with the open invitation last year to the Dendros to contribute to the discussion here, this trip to GT is another example of what makes this blog more compelling than most – your efforts seek understanding of the science and of each other rather than one-upmanship. Dr. Curry is of the same mind on this point, and thanks to her for following through on it.

    Her proposal (#51) “that at some point we have a thread here where we design a strategy for sampling of multiproxies towards doing regional and global time series of surface temperature for the past 2000 years, taking a broad look at this in terms of what it would take in terms of the actual data to assemble a convincing time series” is sensible and a blog is a great place to explore the idea. Myopia affects everyone and the more input the better.

  66. Peter D. Tillman
    Posted Feb 19, 2008 at 1:11 PM | Permalink

    Re #32, Peter, volunteer moderators

    It is a real waste of your time to have to edit and delete posts. It has to be done however, just a pity for you to be distracted by doing it personally.

    I second that, and I hereby volunteer.

    Steve, you need to delegate more of the routine housekeeping.

    Glad to hear the Ga Tech trip went well. It does sound like you need to trim down the presentation for next time.

    Best regards, Pete Tillman

  67. Bruce
    Posted Feb 19, 2008 at 1:13 PM | Permalink

    [Steve – snip. Please stop this. I’ve asked people to be polite on my blog. If you don’t like it, post elsewhere.]

  68. Allen C
    Posted Feb 19, 2008 at 1:27 PM | Permalink

    With all the praise on this posting for gentle and civil responses to the AGW proponents, I fear that Steve M is becoming the climate science equivalent of Neville Chamberlain. Steve returns waving the GW equivalent of “Peace in our time”, while the global warmers pay no attention to the good work of Steve and others and in fact continue their work toward a policy “Blitzkrieg” that will horribly damage the world economy and destroy the trust in real science.

    Steve: This blog is about science, not policy. Look, I’m as annoyed as anybody about the bile of the Hockey Team and the inability of third party climate scientists to understand the problems with emanations from the Hockey Team, but it’s silly to say that this will destroy the “trust in real science” – take a valium. And I’m serious about stamping out angry posts. If you want to persuade somebody of something, there’s no point yelling at them. I don’t and I don’t like people doing so at my blog.

  69. Posted Feb 19, 2008 at 1:36 PM | Permalink

    I post here very rarely and I don’t believe I have ever posted anything in anger. I see Steve as a hero of sorts, not because I think he might be right but for his stance in a world where there are people in positions of influence and power who would quite happily suppress blogs such as CA if they thought they could get away with it. We live in a dangerous world and scientists have a duty to everyone to be open in their scientific discussions. When a silly politician with no scientific knowledge or expertise states that the science is settled, he should be derided by all scientists for such stupidity. Good start, Judith and JEG, and I hope very much that whatever your scientific opinion you continue to stand up for such open debate and discussion.

  70. Joe Black
    Posted Feb 19, 2008 at 1:40 PM | Permalink

    “you need to trim down the presentation”

    2 minutes per slide, including questions, is a standard I’ve heard – more time per slide if it’s complicated. 80 slides in 60 minutes (45 seconds each) is a sprint.

  71. Willis Eschenbach
    Posted Feb 19, 2008 at 1:42 PM | Permalink

    Bruce, I could not disagree with you more when you say:

    I find it sad that you are giving a platform for Judith Curry to crap on AGW skeptics and to browbeat the people who don’t share her beliefs into submission using the fake “civility” argument.

    I guess their invite has really paid off.

    Judith and I have disagreed many times. However, that is the nature of science. I have never seen her “browbeat” anyone. She holds her beliefs strongly, and good on her for that. She and JEG have been among the few scientists willing to come here and defend those beliefs. Yes, they may have not always acted flawlessly, but who among us has? Passions run strong on these issues.

    Finally, the idea that their invitation to Steve involved some “quid pro quo” whereby now Steve will be induced to be nice to them is laughable. They’re not that Machiavellian, and Steve is not that stupid.

    w.

  72. Larry Sheldon
    Posted Feb 19, 2008 at 1:47 PM | Permalink

    Regarding deleting postings, to paraphrase myself and others in another universe “Your blog, your rules”.

    I do wish there was a way around the missing number problem. Maybe we could develop a convention that quotes name and posting time instead of posting number.

    And I do not agree with Allen C at 2/19 1:27 (CST, in case that time is localized).

    The warmers don’t pay attention to anything. Not even a walk to the mailbox.

  73. Pat Frank
    Posted Feb 19, 2008 at 1:51 PM | Permalink

    Steve M. wrote: “it’s long past … time to re-formulate the proxy debate in the form of standard statistical questions e.g. is there a valid linear relationship between bristlecone ring widths and temperature such that this can actually be used to estimate past temperatures.”

    Establishing a linear relationship between ring widths and temperature is not at all a statistical question, it’s a scientific question. A linear relationship between temperature and ring widths can’t be determined until some theory is available to extract a valid temperature from ring-widths. Any statistical correlation absent such a theory is inductive and ad hoc, and would have no physical relevance past the fitted interpolation limits. And that physical relevance would only be an inferred causal possibility, not a causal demonstration. The reason is that the physical determinants for tree ring widths are multiple and presently non-quantifiable. The phase space has many dimensions and no one knows how to physically orthogonalize its vectors.

    Steve, following your extremely important work, it’s become very obvious that there is no physical content in any of the tree ring width/temperature proxy reconstructions. The entire business is false science. It’s scandalous.
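
    Pat Frank’s point about correlation absent a physical theory is, in statistical terms, the spurious-regression trap that several comments in this thread mention. A minimal sketch in Python makes it concrete – the series are synthetic and the variable names purely illustrative, not anyone’s actual data or method: regress one random walk on another, unrelated one, and you will routinely see a sizeable R² alongside heavily autocorrelated residuals (Granger and Newbold 1974).

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        # Two independent random walks: by construction, neither series
        # has anything to do with the other.
        x = np.cumsum(rng.standard_normal(n))
        y = np.cumsum(rng.standard_normal(n))

        # Ordinary least squares of y on x, with an intercept.
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()

        # Durbin-Watson statistic: near 2 for uncorrelated residuals,
        # near 0 for the heavily autocorrelated residuals typical of a
        # spurious regression.
        dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

        print(f"R^2 = {r2:.2f}, Durbin-Watson = {dw:.2f}")

    On most seeds the R² is substantial while the Durbin-Watson statistic sits near zero: correlation between trending series, by itself, establishes nothing about a physical relationship, which is why the econometric literature keeps getting cited here.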

  74. Posted Feb 19, 2008 at 1:52 PM | Permalink

    Allen C: You are quite wrong. The stance that Steve and many others are taking is slowly making more and more people think. Here in NZ, despite the efforts of our own particularly silly politicians, people are discussing the issues more than they were two years ago. People are slowly realizing that the settled science actually equates to higher taxes and policies which will hurt people. Of course it is sometimes extremely frustrating when the media, in particular, seem unable to see both sides of the argument, but for those who read history, both ancient and recent, the tide will turn because the pendulum has swung (been pushed) too far. There will not be a victory day, but there will be a slow turning of the tide, and I personally believe the tide is about to start turning.

  75. Dave Dardinger
    Posted Feb 19, 2008 at 1:56 PM | Permalink

    Oh come on, Bruce, (no # as you’ll probably be snipped anyway)

    There are ways to get your point across without being obnoxious.

  76. Dave Dardinger
    Posted Feb 19, 2008 at 1:58 PM | Permalink

    Oops! His message didn’t even last till I responded.

  77. Sam Urbinto
    Posted Feb 19, 2008 at 1:59 PM | Permalink

    Maybe it would be a good idea for them to spend a little more time doing their own job and a little less time worrying about what people at Georgia Tech might learn about spurious regression and bristlecones.

    If the paleo program managers are so concerned about the politicizing of this issue, maybe they should stop trying so hard to politicize it.

    It often takes bloggers a little time to realize that ironic and acerbic words sound worse in posts than in real life where tone of voice often conveys a more positive emotive content.

    If you want to persuade somebody of something, there’s no point yelling at them.

    The econometric texts that I mentioned are ones that would not appear on any statistics curriculum and might not be familiar to most statisticians. Whether we are right or wrong, we are noticing aspects of the problem that other people weren’t noticing, and by being too quick to call Ross and me “amateurs”, people have missed that.

    If someone says something foolish or impolite, you will accomplish more by being unctuously and annoyingly polite in return than getting into a food fight.

    I really appreciated him going into some details about spurious regressions : it is an area of applied mathematics that has been abundantly covered in econometrics, but most of our field is hopelessly naive about it.

  78. Joe Black
    Posted Feb 19, 2008 at 2:01 PM | Permalink

    Pat Frank says:
    February 19th, 2008 at 1:51 pm

    “It’s scandalous.”

    Do ya THINK?

    Dendrochronology is a great tool for site chronology. Using it to determine anything beyond “good” growth years vs. “bad” is highly problematic.

  79. Sam Urbinto
    Posted Feb 19, 2008 at 2:09 PM | Permalink

    Pat said:

    Any statistical correlation absent such a theory is inductive and ad hoc, and would have no physical relevance past the fitted interpolation limits. And that physical relevance would only be an inferred causal possibility, not a causal demonstration. The reason is that the physical determinants for tree ring widths are multiple and presently non-quantifiable. The phase space has many dimensions and no one knows how to physically orthogonalize its vectors.

    You mean those speaking and/or acting as if carbon dioxide causes warming, rather than saying it absorbs IR and adds heat to the system that may contribute some net percentage to a rise in energy levels in the overall system?

    It’s science that needs to use statistical methods, so it’s a little bit of both. Hopefully this visit to GA (and what we’ve learned here on the blog from the entire experience) will advance such a blended multi-discipline approach in the future.

  80. Kenneth Fritsch
    Posted Feb 19, 2008 at 2:20 PM | Permalink

    When SteveM says the following:

    I’ve been very critical of NSF on this blog for their failure to implement high-level US federal archiving policy. Not many people are in a position where they can criticize NSF without worrying about repercussions, but I can. It’s unconscionable that they should have attempted to put pressure on you.

    It makes me think that these circumstances point not only to Steve M’s independence in these matters, and therefore his unimpeded capability to deal with them, but also to the less impeded analysis that can be performed on general matters of climate science by non-participating persons here at CA.

    Let us face it, though: an exchange limited, more or less, to the scientific and statistical matters under analysis and discussion, even in the less formal and more personal mode of blogging compared to the published literature, would not in itself create the kind of controversy that we have seen, or at least the perception of controversy posted here. It is rather obvious to me that the major controversy involves the perceptions each of the divided sides (and a divide that can have many divisions) holds of the other side’s political and policy motivations. Dr. Curry’s perception has certainly been evident in her preaching on that subject here. Others posting here have intimated a political agenda behind the consensus view. It is here that I agree wholeheartedly with Francois O’s comments on the matter in post #60, and would only add that those who are truly interested in the scientific discussions can readily ignore the political comments and innuendos. Even personal comments can be set aside.

    That the CA owner has clearly indicated he takes no political or policy side in these matters would make one wonder why that does not ease the minds of climate scientists who might come here or who might comment from afar. That brings forth another issue to which Dr. Curry, JEG, Rob Wilson and other climate scientists have made reference: the perception that climate scientists in general are held in low regard by Steve M and numerous posters here. Part of that apparently comes from Steve M’s rather specifically stated frustrations with and references to The Team and, from time to time, other climate scientists, as in “but that’s climate science”, and from the posters here who go along with that stand. The only way around that perception would be either to carry the tone and language of the published paper over to the blog discussions, or to deal specifically with the instances where Steve M and others here have expressed frustrations about climate science methodologies, archiving of information and the writing of articles – and, of course, in doing so to name names.

    In the end it is generalizing that bogs down the blogs.

  81. DeWitt Payne
    Posted Feb 19, 2008 at 2:29 PM | Permalink

    Larry Sheldon,

    One more time: there is a simple way around the deleted-post renumbering problem for posts on the same thread. It’s called a permalink. If you right-click on the post number to which you are replying and select “copy link location” from the menu, you get the URL for that post in the clipboard. That URL doesn’t change unless the post is moved to a different thread. Then in your response, select the text for the link (in this case I used your name), click the link button in the Quicktags line, paste the URL into the box and click OK. Even if the post is moved, the comment number remains the same, but the page number changes to the new thread. I haven’t tried searching on comment number to find a relocated post, but I suspect it will work.


    Steve:
    You’re quite right. It works. Readers should be able to tell whether there is a re-numbering risk – many threads don’t have foodfights – and if you have reason to think that there will be a risk, do what DeWitt suggests here.

  82. Joe Black
    Posted Feb 19, 2008 at 2:30 PM | Permalink

    Kenneth Fritsch says:
    February 19th, 2008 at 2:20 pm

    “others here have expressed frustrations about climate science methodologies, archiving information and writing articles — and, of course, in doing so naming names.”

    So just WHO are the “Climate Scientist” “managers” at the NSF (and any other gov’t agencies involved in supporting “Climate Scientists”)?

  83. Posted Feb 19, 2008 at 2:31 PM | Permalink

    I unreservedly apologise for my “almost perfect exposition of the Argument from Authority”. It was not my intention to be rude, nor do I recall being angry when I wrote it.

  84. Steve McIntyre
    Posted Feb 19, 2008 at 2:31 PM | Permalink

    #81. Kenneth,

    people at GA Tech generally realized that my own language was usually measured. However, not always. One person said that he agreed with everything in one of my posts about data archiving (and many other posts), but that it grated when I used the phrase ‘climate “scientist” ’. I agreed immediately. It was a pointless snark that would be offputting to a third party. As Oscar Wilde famously said (and the person in question laughed when I said this), the definition of a gentleman is not someone who never insults anyone, but someone who never insults anyone unintentionally. I have enough substantive issues in play that I have no need to get into accidental frays. I try to watch snarkiness and I often tone things down, but it’s something that I have to be vigilant about. I’m not going to be using the phrase “hey, it’s climate science” any more. I don’t think that I’m going to give up snark against the Team, but I’m going to ensure that it is correctly focused.

    At least I try. The annoying and undisciplined lack of politeness by so many climate bloggers and commentators IMHO detracts from their effectiveness as spokesmen.

  85. Posted Feb 19, 2008 at 2:32 PM | Permalink

    Thanks for the write-up.

    Any chance there’s a youtube or other video of the event in existence?

  86. Posted Feb 19, 2008 at 2:35 PM | Permalink

    c29,

    Got it now – only one transpose is incorrect: the numerator should be \rho^T C_{yy} \rho (I skipped the P-matrix; that shouldn’t make any difference anyway). The fact that Mann scales the Ys by the detrended std makes verification of these equations a bit difficult (your \rho is not exactly a correlation vector then). Also, shouldn’t C_{uu} be C_{uu}^{-1} in the first equation?

    Anyway, great slides! You say Mann’s method is PLS; I’ve classified it as CCE (with S = identity matrix), but I treat variance matching as a separate issue, so there seems to be no contradiction.

    Steve: UC, I’m going to do a separate post on these things reprising the derivations in more detail, as I want to chat about them. Much of it we’ve chatted about before, but I want to push it along a little.
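
    For readers following this exchange without the slides, a rough LaTeX sketch of the two estimator families being compared. The notation here (proxy vector y, temperature target u, calibration covariance vector \rho, regression coefficients \beta, covariance matrices C_{yy} and C_{ee}) is assumed for illustration and is not taken from the slides:

        \hat{u}_{\mathrm{PLS}} = c\,\rho^T y,
        \qquad c = \frac{\sigma_u}{\sqrt{\rho^T C_{yy}\,\rho}},
        \qquad
        \hat{u}_{\mathrm{CCE}} = \left( \beta^T C_{ee}^{-1} \beta \right)^{-1} \beta^T C_{ee}^{-1} y .

    The first scales a single covariance-weighted composite of the proxies so that its variance matches the target’s (one place a \rho^T C_{yy} \rho term naturally appears); the second is the classical calibration estimator in its usual GLS form. Whether either form matches the slides exactly is left to the separate post promised above.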

  87. Jon
    Posted Feb 19, 2008 at 2:45 PM | Permalink

    Steve: Where is the slide set?

  88. John B
    Posted Feb 19, 2008 at 2:59 PM | Permalink

    I come from a land down under,
    A rockstar leads our policies,
    and his bile flows like chunder.
    The CSIRO science community,
    Again forecasts a calamity.

    http://www.abc.net.au/science/articles/2008/02/18/2165549.htm?site=catalyst

    Fifteen years of research and over 8000 measurements – we have got to be doomed. It may be worth having Georgia Tech examine the statistical significance of the latest block in the wall of doom provided by our public broadcaster and the science elite in Australia.

  89. Mike B
    Posted Feb 19, 2008 at 3:02 PM | Permalink

    Congratulations to Steve. It is certainly an honor to be invited to speak at Georgia Tech under any circumstances. To be invited after lots of hard work against long odds must be particularly gratifying. I hope the trip was rewarding.

    Also congratulations to Judith Curry. I commend you for the leadership and courage you displayed in bringing Steve to Atlanta. I find it a bit sad to recognize that it was a professional risk for you to invite Steve, but that is an unfortunate fact over which you had no control. I would like to think I would do the same thing in a similar situation.

    I am personally going to withhold judgement with regard to how Steve’s trip to Atlanta ultimately figures in the big picture. Under the best of circumstances, it will take some time to see how or even if things change. At the very least I hope the Georgia Tech crowd continues their visits to CA; I always welcome an opportunity to learn from experts.

  90. TonyN
    Posted Feb 19, 2008 at 3:12 PM | Permalink

    Re:#85

    I’m not going to be using the phrase “hey, it’s climate science” any more.

    Some of us will miss it!

  91. Bruce
    Posted Feb 19, 2008 at 3:13 PM | Permalink

    Steve, while at Ga. Tech, did you ask Judith Curry about this statement:

    Exactly. But you can’t use hurricanes to prove that there is global warming. What you can do is show an unambiguous link between the increase in hurricane intensity and the warming sea surface temperatures. And if you look for why the sea surface temperatures are warming since the 1970s, you don’t have any explanation other than greenhouse warming.

    http://pubs.acs.org/subscribe/journals/esthag-a/40/i01/html/010106interview.html

  92. MarkW
    Posted Feb 19, 2008 at 3:14 PM | Permalink

    Sam,

    It’s GaTech. GA is that other school which must remain un-named.

  93. Bill
    Posted Feb 19, 2008 at 3:30 PM | Permalink

    NSF:

    Are they funding climate research projects equitably?

  94. Sam Urbinto
    Posted Feb 19, 2008 at 3:47 PM | Permalink

    I meant the state of GA, but that wasn’t clear at all. Right, Georgia Tech.

  95. jeez
    Posted Feb 19, 2008 at 4:22 PM | Permalink

    I propose that the phrase “Hey, it’s Climate Science” be replaced with “I’ll buy that for a dollar”

    Other suggestions?

  96. Neal J. King
    Posted Feb 19, 2008 at 4:38 PM | Permalink

    As an occasional reader and poster to ClimateAudit, I am glad to see a growing channel of communication between it and what I consider mainstream climate studies.

    I appreciate Steve McIntyre’s declared intention to cut back on snarkiness: if everyone goes along with that goal, I think it will improve the quality of the discussion on all sides.

    With regards to questions: from my experience with large colloquia, people rarely ask piercing questions. Unless you have a “killer question”, there’s little to gain. Perhaps there will be some long-term interest in looking more carefully into the statistical methodologies of temperature proxies. Probably the best result would be more interest in what non-mainstream students of climate questions have found in the way of conceptual holes or issues. If Steve’s announced policy of politeness prevails throughout the blog, it will be easier for these issues to find their way into research studies.

  97. Darwin
    Posted Feb 19, 2008 at 4:41 PM | Permalink

    I congratulate both Steve and Judith for their attempts to bridge a rather large chasm in the climate debate with civility.
    Judith deserves recognition for standing up for science against name-calling and consensus thumping not only by inviting Steve, but also by speaking up at Andy Revkin’s Dot Earth blog about the AGU statement, noting: “…consensus statements and statements by professional organizations don’t help things; we need to do more and better science, and more extensive assessments. We are wasting time attacking each other’s credentials and motives.” Amen. I do think there has been too much of that. But I also think that many scientists, including Judith, haven’t done enough of what SteveM has done — fully examine the basis for assertions about climate, whether it be Michael Mann’s about the 1990s being the warmest decade and 1998 the warmest year in the last 1,000 years with a certainty of 95% or that there is a 66% probability of warming between 1.5 and 4.5 C from a doubling of CO2g in the next 100, as Judith did on the thread preceding SteveM’s visit to GT. There is too much bowing to authority by most scientists (and laymen as well), indicated by Judith’s own opinion on dot earth: “For the record, I view the IPCC 4th Assessment Report to be the best available statement of the state of climate science at the time it was written. Policy makers do not have a better document or analysis from which to work with in grappling with the myriad of issues associated with climate change. … There is a very big difference between assessment reports (such IPCC and NAS/NRC) and consensus statements. Assessment reports involve a large number of carefully selected people by an unbiased group, seeking to represent a diversity of opinions. A very careful assessment of the literature is undertaken, a very extensive review (both peer and public) is done, and in the case of the IPCC policy makers carefully scrutinize the exact wording (at least of the SPM) to make sure the statements aren’t misleading to policy makers and the public.” I think Roger Pielke Sr. might raise some concerns about that along with many other scientists whose work was used, and sometimes abused, by the IPCC. For this blog, though, I think the issue was best raised by William Briggs in Statistic’s Dirtiest Secret: “It is important to understand that all results (like saying ‘statistically significant’, computing p-values, confidence or credible intervals) are conditional on the model that chosen being true.” I would like to hear from SteveM. and Judith about whether the Georgia Tech experience, for both of them, provided any new ideas about how science can better test whether a chosen model is true, rather than relying upon authority to say it is so. Personally, I would think that it is not only a matter of accessibility to data but, as you two have attempted, dealing with the data and arguments about it civilly. But what more?

  98. Larry Sheldon
    Posted Feb 19, 2008 at 5:04 PM | Permalink

    DeWitt said there was a better way. And it is, unless you suffer from Dictionary Derangement Syndrome as I do. (The manifestation here: I did the right-click thing but then had to go back and look, because my short-term memory for names is bad (not quite as bad as the long-term, but I digress). While looking for it, I got distracted by other postings and….)

    What is really annoying is that I am supposed to be knowledgeable about such things and didn’t know that.

    It will be interesting to see if I got it right.

  99. Larry Sheldon
    Posted Feb 19, 2008 at 5:09 PM | Permalink

    Dang! That is sweet!

    (Is GA Tech where George Gray is or was? Fascinating UNIVAC historian.) (Not so off topic – Air Force Global Weather ran on huge UNIVAC machines, last I heard. Those were models that must have worked.)

  100. Dave Dardinger
    Posted Feb 19, 2008 at 5:18 PM | Permalink

    Larry, it does indeed work, and it’s easier than I’d been led to believe here before; in fact it’s easier than what I have been doing. The only problem I see is getting back to your message if you’d linked to another thread, but, of course, cross-threading is frowned on in the first place.

  101. Jud Partin
    Posted Feb 19, 2008 at 5:24 PM | Permalink

    Hey Steve! Thanks for coming out to GaTech. It was nice chatting with you. Nice post/summary and very respectful. You are a very thorough person, and I would expect nothing less. (and I can’t figure out how you find the time to do all of this!!!)

    And while I have the floor, I would like to give many thanks to Judy (and Julien) for the invitation. It was not only a bold move, but a wise one as well. Only by hearing all sides of an issue can one form a sound conclusion.

  102. Judith Curry
    Posted Feb 19, 2008 at 5:26 PM | Permalink

    Darwin, thanks for your thoughtful post. I still think that assessments by institutions such as the IPCC are very very important. Each of the IPCC assessments has improved relative to the previous one in terms of process, and I would expect the 5th assessment to further refine the process. The issue of “whether a chosen model is true” isn’t really the relevant issue. Conceptual models are theories that have some empirical (data) support and some predictive ability. Science never absolutely “proves” theories, although theories with repeated predictive ability become accepted and are used in further scientific studies, engineering, decision making, etc.

    Climate models are diverse, ranging from low-order models such as toy models and radiative-convective models to the complex global climate models that couple the atmosphere, ocean, land surface, cryosphere, etc. in very large computer codes. There are about 20 different global climate models that participated in the IPCC, each reflecting different choices in terms of model resolution, physical parameterizations, numerical solution method, and number of members in the ensembles. This diverse group of climate models reflects the predictive capability of the greenhouse warming theory. These models are evaluated against the historical data record. Unfortunately, evaluating the model projections with, say, a decade’s worth of data is misleading owing to individual events associated with natural climate variability, such as a large volcanic eruption or a large El Nino. So we can verify elements of the models; e.g. the atmospheric component is evaluated each day in the context of weather predictions.

    So what can be done by the likes of the climateauditors, broadly defined? Make public your expectations for publicly accessible data and metadata, plus climate model codes. This is really important: climate researchers and their institutions should not defend their work or assessments by saying “we are the experts, we all agree, trust us.” Everything needs to be documented and defensible.

    I think the blogosphere can play a unique role in increasing the public understanding of the issues (particularly among the educated and interested public, which characterizes much of the readership at climateaudit), and in promoting a dialogue whereby research can be discussed and critiqued both by the scientific experts and by scientific skeptics (both “regular” and freelance scientists) – people who are prepared to do work towards clarifying some aspects of the science and thereby contribute to the credibility of climate research – without noise from political and armchair skeptics. I spend more time at climateaudit than at RC because I think I can be more effective at interacting with “unconverted” skeptics than in preaching to the converted, and it’s a fun challenge.

  103. MrPete
    Posted Feb 19, 2008 at 5:48 PM | Permalink

    Note to skeptics: when Judith says

    …I think I can be more effective at interacting with “unconverted” skeptics than in preaching to the converted, and it’s a fun challenge.

    …please don’t read it as a put-down, but rather as an admission that she is not herself a skeptic. She’s being up-front about her own perspective.

  104. Jeff A
    Posted Feb 19, 2008 at 5:54 PM | Permalink

    I appreciate Steve McIntyre’s declared intention to cut back on snarkiness: if everyone goes along with that goal, I think it will improve the quality of the discussion on all sides.

    It won’t really, since one side believes there is nothing to discuss.

  105. Judith Curry
    Posted Feb 19, 2008 at 6:09 PM | Permalink

    Mr Pete, a clarification: I am skeptical of many aspects of the science; if I weren’t, I wouldn’t have anything to do as a climate researcher. I am not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate.

  106. Bruce
    Posted Feb 19, 2008 at 6:10 PM | Permalink

    snip

  107. Neal J. King
    Posted Feb 19, 2008 at 6:10 PM | Permalink

    105: Jeff A.:

    Snarkiness won’t convince scientists who have spent decades studying a subject that the interlocutor has anything interesting to say.

    Whereas honest questions, respectfully presented, can elicit thoughtful responses.

  108. steven mosher
    Posted Feb 19, 2008 at 6:18 PM | Permalink

    RE 108. Well put, Neal. Old timers here have appreciated your help in facilitating communication. Well, I did! Good to see you again.

  109. Sam Urbinto
    Posted Feb 19, 2008 at 6:39 PM | Permalink

    106 Judith Curry says:
    I am not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate.

    Are you sure you’re not instead convinced that AGHG add a net warming to the climate as evidenced by the anomaly trend as a proxy for global temperature, given the apparent relationship between the levels of AGHG and the anomaly trend?

    Are you sure the anomaly isn’t a reflection of measurement and/or combinational methods? Are you sure that AGHG are a cause and not an effect? Are you sure that the population going from 1 billion to 8 billion over the same period isn’t responsible for whatever warming there is? Are you sure the anomaly isn’t under-reporting rises in energy levels?

  110. steven mosher
    Posted Feb 19, 2008 at 6:43 PM | Permalink

    I would propose a distinction between climate science, climate modelling and climate morality.

    A distinction between Climate IS; climate WILL BE; and climate SHOULD BE.

    Climate IS.

    The world is warming. We have multiple lines of evidence – some independent, others less so – but the world is warming. There are open, honest questions about the RATE of warming: basically, concerns about the accuracy of the data.

    Climate WILL BE.

    Guesswork based on good models. Projections of future climate based on projections of CO2 emissions: PRECISE physical models driven by guesswork emission scenarios known as the SRES.

    Climate SHOULD BE:

    Based on sound science (climate IS) and questionable projections (climate WILL BE), climate morality whines that certain people should change their behavior for the benefit of other people.

  111. John A
    Posted Feb 19, 2008 at 6:47 PM | Permalink

    Judith Curry:

    I am not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate.

    Then could you write an article (I’m sure that Steve would host it) setting out in detail why you believe that anthropogenic greenhouse gases are warming the climate, and why warming the climate is, on balance, a bad thing?

    Because so far, I haven’t seen a coherent argument that doesn’t involve trite syllogisms like:

    1. Carbon dioxide is a greenhouse gas.
    2. Increasing greenhouse gases causes the atmosphere to warm
    3. The atmosphere is warming – this is bad
    4. Greenhouse gases are rising
    5. Therefore greenhouse gases are causing warming


    Steve:
    John A, this is a pointless request. It’s reasonable to ask for a specific reference to an existing article as I do. I’ve had an outstanding request for references setting out how you get from A (doubled CO2) to B (2.5 deg C) in which all the relevant steps are spelled out. No one’s provided such a reference; Judy’s aware of these requests and hasn’t suggested anything before. That doesn’t mean that it can’t be done, just that IPCC hasn’t seen fit to include that in its mandate. I think that this is a major and regrettable IPCC shortcoming and is the largest single factor in present controversy.

  112. Kenneth Fritsch
    Posted Feb 19, 2008 at 6:48 PM | Permalink

    Re #103

    I still think that assessments by institutions such as the IPCC are very very important. Each of the IPCC assessments has improved relative to the previous one in terms of process, and I would expect the 5th assessment to further refine the process.

    Judith, just to give you the flavor of someone who sees the statement by you above as an opinion – from a well-respected member of the climate science community, to be sure, but nevertheless an opinion that remains an opinion without a more detailed exposition. It is my opinion that the IPCC, as in the example of the AR4, gives a good review of the science that supports the case for immediate mitigation of AGW, but when all is said and done it does not present a case in the manner of a review under US jurisprudence, where both sides of the case are presented. I view the IPCC in these matters more as a marketing approach for selling immediate AGW mitigation.

    Recent past AR reviews dwelt heavily on temperature reconstructions, and specifically on the Mann HS, in pushing for mitigation. The AR4, in my view, perceptibly switched emphasis from reconstructions to climate models without admitting any major problems with the reconstructions. Progressively the AR reports have concentrated on potential specific and more immediate adverse effects of AGW, which in my judgment is aimed at better marketing immediate mitigation. The models unfortunately continue to have large +/- projections of 21st century temperatures, have admitted problems with regional projections, and depend on even more uncertain economic inputs for their scenarios.

    I spend more time at climateaudit than at RC because I think I can be more effective at interacting with “unconverted” skeptics than in preaching to the converted, and it’s a fun challenge.

    Judy, Judy, Judy, (doing my best Cary Grant imitation) please spare us the preaching. What I, and I think most participants here, want is teaching not preaching. The devil is in the details and any conversions, I believe, will dwell in that exorcism.

  113. Pat Frank
    Posted Feb 19, 2008 at 6:57 PM | Permalink

    Judith wrote: “This diverse group of climate models reflects the predictive capability of the greenhouse warming theory.”

    Which is zero. WGI Chapter 8, Figure 8.2, shows the per-year GCM error in global surface temperatures, averaging outputs of about 22 different models. That error is about (+/-)1 degree, except for Antarctica, which is about 5 degrees too warm. The individual model errors are larger.

    The per-year average error is larger than the entire canonical 100-year 20th century global average surface temperature rise of 0.7 C.

    It is beyond me how any physical scientist can suppose that climate models are capable of validating the attribution of that modest rise to anthropogenic CO2. They clearly are not so capable.

    Carl Wunsch is also on record, in the peer-reviewed literature, as observing that ocean models are not integrable. He wrote that climate modelers brush aside his questioning of the meaning of a non-converged solution, on the grounds that the outputs “look reasonable.” This, I think, is the problem with the entire field, in a nutshell. They’ve been mesmerized by reasonable-looking outputs that have no testable physical meaning.

  114. Andrew
    Posted Feb 19, 2008 at 7:47 PM | Permalink

    Judith, #106: I submit that hardly anyone is skeptical of that, but the devil is in the details. For me the question is not whether greenhouse gases will warm or are warming the climate, but how much, and I don’t believe it is possible yet to say for sure. At any rate, I am also skeptical of the ideas proposed to “solve” the problem. I was planning on writing something called “Why I’m pretty sure mitigation isn’t the solution to Global Warming” because, well, I really don’t see anyone suggesting something that would actually work, even in theory (I’m sure we’ve all heard by now how pathetic Kyoto is). One of the reasons I’m so adamant about pinning down climate sensitivity and getting accurate predictions is that I believe we will need to adapt: most models predict about 0.5 C of further warming even if atmospheric concentrations of GHGs are held at 2000 levels (which CO2 has obviously long since surpassed). I want to know if I’ll be needing an umbrella or a raincoat and galoshes, so to speak. I’m actually more afraid of what we don’t know, and that we may make terrible mistakes based on wrong assumptions. Sure, if you bring an umbrella and it doesn’t rain, you don’t really care, but this is where that analogy breaks down.

  115. Neil Fisher
    Posted Feb 19, 2008 at 8:44 PM | Permalink

    Good post, Steve.

    Kudos once again to Judith Curry for her support in getting her students and colleagues to listen to alternative viewpoints – that’s vital, and that she has been discouraged from doing so is, IMO, unforgivable in an academic environment.

    Judith, I would ask – if I may – for a summary of commentary from those who attended. Nothing formal, just a summary of what (if anything) your students gained from Steve’s talk. If you have a link to a blog etc. with this type of thing, that would be great. I’m not trying to “tear down” anything (or support it, for that matter), but I am curious as to how this has been perceived by the people it was aimed at. Did it make them pause? Did it make them ask questions in subsequent lectures that you hadn’t heard before? That sort of thing. An “impact statement”, if you will. Probably highly useful if Steve is asked to do this sort of thing again (and, of course, I am curious).

    Thanks in advance (and apologies if this has already been provided and I missed it)

  116. John A
    Posted Feb 19, 2008 at 9:02 PM | Permalink

    Steve:

    John A, this is a pointless request. It’s reasonable to ask for a specific reference to an existing article as I do. I’ve had an outstanding request for references setting out how you get from A (doubled CO2) to B (2.5 deg C) in which all the relevant steps are spelled out. No one’s provided such a reference; Judy’s aware of these requests and hasn’t suggested anything before. That doesn’t mean that it can’t be done, just that IPCC hasn’t seen fit to include that in its mandate. I think that this is a major and regrettable IPCC shortcoming and is the largest single factor in present controversy.

    Really? It’s a pointless request to ask a climate modeller why she is not skeptical of the notion of anthropogenic greenhouse gases causing warming?

    Why is this unreasonable? Dr Curry came to her conclusion presumably on the basis of scientific evidence which she could share with the rest of us. If you’re telling us that even the basis of trite syllogism #2 has yet to be discovered, where is that actual confidence coming from?

    When I ask an astrophysicist why he or she believes that the General Theory of Relativity is correct (and I have), they are delighted to give me chapter and verse on the experimental results which have so far failed to falsify a single prediction of that theory despite all attempts.

    If Dr Curry has a reasonable explanation of her lack of skepticism on anthropogenic greenhouse warming, is it unreasonable to ask her to share it?

    Steve: If you want to engage someone who may not be particularly interested in chatting with you, you’re better off asking much more focused questions that can be answered in a sentence or two. Try to think about what you can reasonably expect from this sort of engagement.

  117. Bruce
    Posted Feb 19, 2008 at 9:19 PM | Permalink

    Steve

    snip

    Pointing out Closed Minded statements in a polite way gets snipped!

    They got great value for their invitation Steve.

  118. Dave Dardinger
    Posted Feb 19, 2008 at 10:27 PM | Permalink

    Bruce,

    You weren’t polite. Drop it!

  119. bender
    Posted Feb 19, 2008 at 10:54 PM | Permalink

    No surprises from this GT visit. As predicted. A different university might supply a different response. One wonders about Ohio State? Or MSU? Or the Universities of Toronto or Calgary, in Canada?

  120. Posted Feb 19, 2008 at 10:56 PM | Permalink

    Judith Curry says:
    February 19th, 2008 at 5:26 pm

    Climate models are diverse, ranging from the low order models such as toy models and radiative convective models to the complex global climate models that couple the atmosphere, ocean, land surface, cryosphere, etc in very large computer codes.

    Dr Curry, my particular interest is the response of the climate if the warming were due in large part to albedo change — Palle’s work showed how albedo change dwarfs the theoretical warming potential of AGHGs, at least for a short time. Has anyone run a large model using a different assumption for the primary heating mechanism? Would a model based on albedo drop be possible, or are the knobs set in such a way that a model would have to be retuned from scratch to match the record? That, no doubt, would be expensive. I’m interested in the match to lower and upper troposphere warming results in such a run and, in particular, whether it would eliminate the need for the Folland and Parker correction.

    JF

    Steve: Please stop presenting your own bright ideas to Judith Curry as though she’s some kind of truth machine.

  121. Posted Feb 19, 2008 at 11:55 PM | Permalink

    #72

    Finally, the idea that their invitation to Steve involved some “quid pro quo” whereby now Steve will be induced to be nice to them is laughable. They’re not that Machiavellian, and Steve is not that stupid.

    Willis: thank you. Thank you very much.

    And BTW thanks to everyone who issued “kudos”. It’s not really why we did it but it never hurts to hear.

    Steve says (#87)

    I’m going to do a separate post on these things reprising the derivations in more detail, as I want to chat about them.

    What’s wrong with a paper? I do think it would be extremely useful.

  122. Steve McIntyre
    Posted Feb 20, 2008 at 12:11 AM | Permalink

    What’s wrong with a paper? I do think it would be extremely useful.

    Nothing’s wrong with a paper. But I like to make sure that I have things right so that I don’t have to “move on” all the time. Chatting about some of these things with UC at our online seminar is helpful that way.

    In addition, my understanding of these matters is a little different than it was a year ago. I was stuck on one point at this time last year and didn’t really want to push it until I clarified the matter, which I did while I was preparing for my G Tech presentation.

  123. Gerald Browning
    Posted Feb 20, 2008 at 12:20 AM | Permalink

    John A (117),

    Here I agree with you and not Steve M. I have asked Judith Curry very specific mathematical questions on this topic and all she responded with
    was generalities. I also refer you to Pat Frank’s upcoming article in Skeptic. It is a real eye opener.

    Jerry

  124. Gerald Browning
    Posted Feb 20, 2008 at 12:25 AM | Permalink

    Steve M,

    Shame on you for deleting my remark about Pat Frank’s comment.
    It was completely true. You really have been corrupted.

    Jerry

    Steve: Jerry, I’ve asked people not to make angry posts at my blog. What’s so hard about that? I’ve never liked these food fights; I haven’t always reined them in because I get tired of editing them, but I’m determined to try to stamp them out. There are lots of ways of making a point far more effectively than people do in these food fights.

  125. Geoff Sherrington
    Posted Feb 20, 2008 at 12:26 AM | Permalink

    Re # 114 Pat Frank

    We both entered computing when memories of a few K were the norm so I guess we have both seen a lot of life in science.

    Frankly, I’m amazed that there is so much serious writing about getting a capable person to talk at a Uni. Used to be normal, not newsworthy, not inducing ideas of quid pro quo or seduction of science.

    I’m on your side. You can’t build a solid theory on wobbly data.

    From Judith Curry –

    Exactly. But you can’t use hurricanes to prove that there is global warming. What you can do is show an unambiguous link between the increase in hurricane intensity and the warming sea surface temperatures. And if you look for why the sea surface temperatures are warming since the 1970s, you don’t have any explanation other than greenhouse warming.

    This is a rather dangerous statement from a prominent scientist. It would not surprise me if the error in measurement of SST (at the “known” depth that increases hurricanes) were greater than the effect postulated. Next, 30 years of episodic events is not a long time for a confident data base. Finally, several alternatives to greenhouse warming (a term I do not accept) remain unresolved.

    In my 66 years on Earth so far, “global warming” has affected me not one bit – except for annoyance at the refusal or inability of people to answer the critical Steve McIntyre question on doubling CO2. I don’t like seeing science being damaged by premature assertions – and I dislike dangerous statements being used to frighten the public.

  126. Steve McIntyre
    Posted Feb 20, 2008 at 12:43 AM | Permalink

    John A, Jerry, Geoff and others,

    At a blog called climateaudit, remember the “audit” part of the name. James Annan sent in a response on sensitivity, but it didn’t answer my question and completely missed the point. The only exposition that should interest anyone is a complete exposition by IPCC or an equivalent organization, or one endorsed by them. Any other exposition by Judy or anyone else, prepared after the fact and not endorsed by IPCC, is completely irrelevant to how IPCC got to its conclusions.

    Maybe Judy can provide us with a reference available to IPCC AR4 which provides a complete derivation of 2.5-3 deg C for doubled CO2, together with the derivation of all parameterizations used in the analysis – the sort of thing that an engineering quality study would provide. My guess is that if she could, she would have already.

    I have no qualms about asking Judy for such a reference, but that’s all that I’d ask for and it’s all that I think that it’s reasonable for visitors here to ask for. It’s a simple request, but it doesn’t seem easy to answer.

    I don’t regard this as giving Judy or anyone else a free pass. Hopefully she’ll give us a reference to the long-sought engineering-quality derivation. But if she can’t, so be it. Realistically, she’ll probably just point to the chapter on Models in AR4 and say – there you go. It’s not what people here are looking for, because it is a literature review rather than a first-principles derivation. Maybe she’ll give something more useful.

    But there’s no point badgering her with requests to do first-principles analyses; it goes nowhere. I’ve never asked anyone to do this sort of thing on my account, and I get tired of people using this blog to make requests that I wouldn’t make. And for people like Jerry who don’t seem to respect that approach, I would submit that this sort of methodical narrow focus can accomplish things; but, regardless, it’s how I do things and, if people don’t like it, it’s a big world and there are many other places where you are free to try doing things some other way.

  127. John Baltutis
    Posted Feb 20, 2008 at 1:20 AM | Permalink

    Re: #33, wherein JEG is quoted as stating:

    NSF managers getting angry that we invite Steve within a university context…

    This old adage comes to mind: “He who pays the piper, calls the tune.”

  128. Geoff Sherrington
    Posted Feb 20, 2008 at 1:47 AM | Permalink

    Re Steve # 127

    I was not asking Judith Curry to provide the doubling figure or to do a first principles analysis. I was pointing to the danger of making assertions before all of the facts are in – as an auditor would, if a ledger was not complete when presented for review of financial details.

    Am I sensing a divide, where the older contributors are expressing deeper concerns than the younger? There is loose evidence that those taught early maths using Cuisenaire Rods now perform less well. By analogy, has there been a similar effect in the acceptance of the rigour of scientific proof? Are standards of proof deteriorating?

    If you state “you don’t have any explanation other than greenhouse warming” then you should complete your argument with a complete dismissal of alternative explanations. I have yet to see one written by anyone.

    I am in complete agreement with your often-stated remarks on engineering quality reporting. I apologise if I expressed this in a way that caused you to react.

  129. PhilA
    Posted Feb 20, 2008 at 4:36 AM | Permalink

    Judith Curry, quoted in #126. “And if you look for why the sea surface temperatures are warming since the 1970s, you don’t have any explanation other than greenhouse warming.”

    Which is where what Howard so accurately describes above as “gut instinct skepticism” cuts right in. Many outside the Hockey Stick team believe that the Earth has been warming for most of the last 300 years. Ergo since most of that warming was demonstrably *not* due to man-made CO2, one can’t simply use “we don’t have any other explanation for the recent warming” as “proof” or even significant probability that the recent warming must have been man-made.

    And given this sort of argument is precisely what triggers such skepticism, it pretty much by definition won’t “persuade” such skeptics that their doubts are misplaced.

    (I’m aware this quote may well be being taken out of context or may have been a simplification for a particular audience. But I think it’s a good illustration of the sort of comment that sets off some skeptics’ mental alarms.)

  130. Posted Feb 20, 2008 at 5:18 AM | Permalink

    Re: #106, Judith Curry

    …. a clarification, I am skeptical of many aspects of the science, if i weren’t i wouldn’t have anything to do as a climate researcher. I am not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate.

    As an unconverted sceptic, it’s statements like this from prominent climate scientists that feed my scepticism. It sounds awfully like saying, ‘I’ve got doubts about the evidence, but not about the conclusions’. I accept that science is seldom certain, and that opinions must be formed on the balance of evidence rather than conclusive proof, but too few climate scientists are being as vocal about their doubts as they are about their convictions. Silence can be as misleading as exaggeration or false statements, and those who want to cherry-pick research that will sway public opinion are having a field day.

    Judith Curry’s engagement with CA is very welcome and, as this is a rather sceptical blog, I hope that she will feel able to express her doubts freely here as well as defending her convictions.

  131. John A
    Posted Feb 20, 2008 at 5:23 AM | Permalink

    I must have missed the course in logic where it was ever applicable to say “we can’t imagine what else could have caused it, therefore it was caused by X” in a scientific context. For some reason this “Argument from Collapsed Imagination” appears to be a critical piece of rhetorical armour for some climate scientists.

    And Steve, I’m not asking for detailed first causes here. I’m asking Judith Curry to explain in her own words, why she is “not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate”.

    Now this might involve some armwaving or not; perhaps she has access to some information that would help me understand the whole greenhouse gas issue, because at the moment I’m baffled. Something must have moved Judith Curry from her former position of agnosticism on the subject to one of Belief, and I’d just like to know what that Something is.

  132. Brian Johnson
    Posted Feb 20, 2008 at 5:40 AM | Permalink

    Steve: snip – Brian, this is the sort of venting that I’m trying to have people stop doing. If you get cross at things, that’s fine, all of us do. But bite your lip.

  133. MarkW
    Posted Feb 20, 2008 at 5:59 AM | Permalink

    The problem with “reasonable looking outputs” is that it is subject to your personal biases regarding what “reasonable” should look like.

  134. MarkW
    Posted Feb 20, 2008 at 6:02 AM | Permalink

    In all my time here, I’ve only seen one or two people who take the position that CO2 has no effect on temperature.

    In that sense, Judith is already preaching to the choir. We are all believers in AGW. The devil, as they say, is in the details: how MUCH warming are we to expect from a doubling of CO2? Most here believe that it will be much smaller than the IPCC’s most recent guess. If you want us to believe that the IPCC is closer to the truth, do something to show that water vapor is a strong positive feedback, and not a negative one, as most recent studies have concluded.

  135. MarkW
    Posted Feb 20, 2008 at 6:08 AM | Permalink

    I like to differentiate between the two camps as AGW and CAGW (Catastrophic Anthropogenic Global Warming), since we pretty much all agree that at least some of the warming of the past 100 years is due to man’s influence. Secondly, if the results of AGW aren’t going to be catastrophic, why should we put the entire world’s economy at risk in order to stop it?

    It is my belief that the small amount of warming that we have seen and are going to see is on net beneficial.
    It’s been proven to my satisfaction that enhanced CO2 is good for plant growth, which makes it beneficial.

    On the whole, I see no reason why we should want to slow down the growth of atmospheric CO2, much less endure much pain to do so.

  136. MarkW
    Posted Feb 20, 2008 at 6:13 AM | Permalink

    It’s long been a principle of science that an inability to think of an alternative explanation is not proof that an alternative explanation does not exist. It may just prove that there is a lot more to learn before definitive statements can be made.

  137. Andrew
    Posted Feb 20, 2008 at 6:17 AM | Permalink

    MarkW, or how about the mysterious cloud feedback? Can we be certain nothing like an “iris” operates? Can we be certain AGW will increase cirrus clouds and not decrease them (Spencer et al. 2007 would seem to suggest not) etc.?

    Maybe we are asking too much?

  138. Posted Feb 20, 2008 at 6:27 AM | Permalink

    Steve: Please stop presenting your own bright ideas to Judith Curry as though she’s some kind of truth machine.

    Sorry about that: I tend to equate a cooperative attitude and deep specialist knowledge with omniscience.

    Mind you, it could have been worse: I could have asked her to derive a CO2 sensitivity of 2 to 4.5 deg C from first principles…

    JF

  139. MrPete
    Posted Feb 20, 2008 at 6:29 AM | Permalink

    Perhaps I can try another bridge-building statement.
    Judith said:

    I am not a skeptic when it comes to being convinced that anthropogenic greenhouse gases are warming the climate.

    I am willing to say:
    1) I’m convinced increased greenhouse gases add warmth to the climate. (But is that factor more important than other factors, and does it cause overall warming? I dunno.)
    2) I’m convinced anthropogenic sources add to the total level of greenhouse gases. (But is that factor more important than other factors, and does it cause a net increase in greenhouse gases? I dunno.)
    To falsify #1, someone would have to show that increased greenhouse gases have no impact on warmth or cause a decrease in warmth. I’m comfortable that one is covered.
    To falsify #2, someone would have to show that anthropogenic sources are removing greenhouse gases overall, or are currently at zero-impact. I’m comfortable that one is covered (although when the bigger/long-term picture is seen, I even wonder about that. We are incredibly blind to “unknown factors.”)
    Am I a skeptic about whether increased greenhouse gases drive climate warming overall? You bet. I don’t have a conclusion about it.
    Am I a skeptic about whether anthropogenic sources are driving the total greenhouse gas picture? You bet. I don’t have a conclusion about it.
    I don’t see that we have enough understanding or enough historical data precision about either one. And I see ClimateAudit providing a valuable multidisciplinary service and forum for scraping away the sludge of easy believe-ism to get to some bedrock understanding. What do we know that will not need to be adjusted or moved-on-from in a year, or five, or ten?
    We still need to make those trillion-dollar decisions. But let’s not fool ourselves into thinking our decisions are built on a stable foundation if they’re actually based on expansive soil — not a sure foundation.

  140. Judith Curry
    Posted Feb 20, 2008 at 6:36 AM | Permalink

    Andrew #115, I agree with your statement. While I am not skeptical about increasing CO2 causing warming, there is much to be skeptical about in future projections regarding how much warming. The IPCC makes no pretense of having nailed down “how much warming”; it gives a range of temperature increases even for a specific scenario (Steve, your request for proof of 2.5C sensitivity doesn’t make sense in this context, which is why no one has responded).

    What to do about the warming, given the scientific uncertainties, is a great challenge. However, decision making under uncertainty is something that is routinely faced in all aspects of our life, from government policy to individual decisions. The challenge is to come up with policies and strategies that make sense even if the warming turns out to be less than expected, and that will cover us even if the warming is greater than expected. In the U.S. there is a national mandate for energy security, which is almost totally consistent with reducing greenhouse gases. There are numerous health concerns associated with continued pollution of our environment from energy generation. What can we do about it? There is much to be gained from energy efficiency and conservation (for Georgia Tech’s efforts in this, the largest power user in Atlanta, see http://www.stewardship.gatech.edu/2007stewardshipV3.pdf). There are existing alternative energy technologies that are not quite cost competitive with the subsidized fossil fuels we currently use (change the carrots and sticks, and these technologies are cost competitive). There is much promise in a number of new technologies that need further investment.

    The other thing we need to do is focus on the so-called adaptation strategies. Whether or not global warming is increasing hurricanes, surely it makes sense to make our coastal cities more resilient to hurricanes. Whether or not global warming is going to increase droughts, surely Georgia needs to figure out how to manage its water resources better and make itself more resilient to drought. Etc. The bottom line is that such policy decisions don’t hinge on the science of whether the sensitivity is 2 or 3 or 4 degrees.

    Steve: Judith, this blog is not about policy, but science. There are many places where people can discuss policy and people who want to discuss policy can go elsewhere. In present circumstances, there are many policies that people can agree on, some perhaps being more concerned about energy security and others about climate change, but reaching similar decisions. All interesting questions and issues, but OT for this blog. I’ve left your comment stand, but am deleting responses to this comment.

  141. MrPete
    Posted Feb 20, 2008 at 6:54 AM | Permalink

    The expansive soil map I linked to reminds me of a strange thought I had. I’m sure this is not original, but I don’t recall seeing it here:
    People tend to be sure of what they know, and tend to base much of what they know on experience.
    Has anyone ever attempted to correlate the climate perspective of scientists with the climate they live in or with which they have the most experience?
    Just for example, and not picking on anyone in particular (I understand both sides of this one!)
    In much of the Western USA, growth of flora and fauna is obviously precipitation-limited. Many wonderfully “green” areas (SF Bay Area, Colorado Front Range, etc) get about 15 inches of precipitation a year.
    In much of the Eastern USA, growth is rarely precipitation-limited. Precipitation is typically 300-400 percent higher than the green-dry areas of the West.
    Could it be that scientists tend to be biased in their interpretation of field data, based on their “home base” climate experience? Could “Eastern” gardeners or scientists tend to assume temperature-limitation while “Western” gardeners or scientists would assume precipitation-limitation?
    Just askin’ 🙂
    [OK, not “just” askin’. Here’s one line of evidence: the USDA publishes hardiness zone maps based on minimum temperature. Such climate zones have some significant value in the Eastern USA. In the West, they’re pretty useless. Supposedly, the same things can grow here in Colorado (and on Almagre) as grew in Upstate New York where I used to live. Hardly! Our drier climate changes everything, including cold/heat tolerance. So… why would the USDA persist in publishing such material? Why does Home Depot ship plants to our region that can’t possibly survive here? Hmmm….]

    Steve: Much of the early initiative in dendrochronology as a discipline came from Arizona (Schulman, Douglass, LaMarche, Fritts, etc.) and the role of precipitation hardly eluded them. The development of attempts to apply dendro information to extract temperature information, as opposed to precipitation, is fairly interesting – the key figures would probably be Jacoby-D’Arrigo and Briffa-Schweingruber, Jacoby being from the East, I think, and Schweingruber from Switzerland. However, it also looks to me like a considerable part of this effort was “market-driven” in the sense that they were trying to meet the demands of NSF and IPCC for temperature proxies.

  142. Judith Curry
    Posted Feb 20, 2008 at 6:57 AM | Permalink

    Julien, land use changes are very important for regional climate, but don’t have much of an influence on global climate unless they trigger something else. An example would be something that caused the release of methane locked up in high latitude permafrost. The bottom line is that land is only 30% of the earth’s surface. Other than the ice covered portions having a large albedo, changing the land surface (deforestation, urbanization, whatever) doesn’t change the surface albedo all that much, and likely only for a small portion of the 30% that is land surface. So land use changes are a very important issue in regional climate and sustainability, but most aren’t on a large enough scale, or don’t have associated large positive feedbacks, to make much impact on global climate.
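
    To put rough numbers on that point, here is a back-of-envelope sketch (the land fraction is from the comment above; the modified fraction and local albedo change are assumptions picked purely for illustration):

        LAND_FRACTION = 0.30       # land share of Earth's surface
        MODIFIED_FRACTION = 0.10   # assumed share of land with altered surfaces
        DELTA_ALBEDO_LOCAL = 0.05  # assumed local albedo change (e.g. deforestation)

        delta_albedo_global = LAND_FRACTION * MODIFIED_FRACTION * DELTA_ALBEDO_LOCAL
        print(f"global-mean albedo change ~ {delta_albedo_global:.4f}")  # ~0.0015

    Even with generous assumptions, the global-mean albedo shifts by only about 0.0015, small against a planetary albedo of roughly 0.3, which is the quantitative core of the argument above.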

    p.s. Of course I am no truth machine, but i am happy to engage in dialogue with interested people when i have the time. I am very interested in understanding what kinds of arguments/data/analyses would be convincing to the educated skeptic, so i do take these questions seriously when they are made in good faith. I am not willing to respond to challenges such as the one issued by Gerald over on the Jablonsky thread, to derive fluid dynamics equations that i did in graduate school 30 years ago. I don’t pull arguments from authority, and won’t pay much attention to people who seek to challenge my “authority”; it’s pointless. My arguments are what they are, and i always look to improve them (and it seems like i managed to earn most of the quatloos over on the Jablonsky thread without deriving equations that have been sitting in textbooks for decades).

  143. Dave Dardinger
    Posted Feb 20, 2008 at 7:16 AM | Permalink

    Mark M (#135),

    Dittos, but I would add that we need to differentiate between “feedbacks” in the technical sense, warming via CO2/H2O per se, and the effects of adding things like clouds and snow/ice albedo changes. A lot of times the pro-AGWers will try to steer the conversation to just the initial CO2 forcing plus water vapor feedback, while not including clouds and ice. Water vapor feedback will be positive, though there are questions about just how strong it really is. This is where discussions of overlap of bands and the various parameterizations of spectral lines come into play. But what AGW skeptics are generally referring to is the net feedback of H2O in all its forms and manifestations.

    For myself, I’d also like to discuss the amount of H2O feedback as a function of distance from equilibrium. In engineered systems, it’s generally not the case that feedback is strictly linear. Normally, as a system moves farther from equilibrium, it is designed to encounter stronger and stronger negative feedbacks. From a strict physics POV, I’d think the same is the case with the atmosphere: smallish movements will have a smallish feedback, while larger movements will have larger, even exponentially larger, feedbacks. If this sort of thing is not examined, we’re naturally led to some of the extreme warming positions which are touted in the broadsides and by dishonest politicians (which I hope isn’t redundant).
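
    For what it’s worth, the idea can be made concrete with a toy model (the cubic restoring term below is an assumed functional form, chosen only to make the state-dependence visible; it is not any published parameterization):

        import numpy as np

        def equilibrium_response(forcing, lam=1.0, cubic=0.2):
            """Solve forcing = lam*T + cubic*T**3 for the real equilibrium T."""
            roots = np.roots([cubic, 0.0, lam, -forcing])
            return float(roots[np.abs(roots.imag) < 1e-9].real.max())

        for F in (0.5, 1.0, 2.0, 4.0):             # forcing, arbitrary units
            linear = F / 1.0                       # response if feedback stayed linear
            nonlinear = equilibrium_response(F)    # restoring force grows off-equilibrium
            print(f"F={F}: linear {linear:.2f}, nonlinear {nonlinear:.2f}")

    Small forcings give nearly the linear answer; large forcings are increasingly damped, which is exactly the behavior described above.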

  144. Posted Feb 20, 2008 at 7:18 AM | Permalink

    re: #140

    I will say:

    1) Increased greenhouse gases have the potential to provide a net increase in the energy content of the climate system of the Earth.

    How’s that for going way out to the very end of the limb?

  145. MarkW
    Posted Feb 20, 2008 at 7:28 AM | Permalink

    While it is true that land use changes do not automatically trigger global changes, it is also true that the areas where land use changes have occurred are also the places where our climate sensors are located.

    A pretty significant fraction of the world’s land surface has been modified by man. While it’s true that one single location will not have much impact on the entire planet, as the fraction of land that’s been modified goes up, so does the global impact.

    Anything that changes albedo will have global impact.

  146. Posted Feb 20, 2008 at 7:28 AM | Permalink

    Judith, #103 says,

    These models are evaluated against the historical data record.

    To this meteorologist’s mind that is a major issue. A number of us contend that we will never know the predictive value of the models until they are used for short term (average conditions over a year, not specific daily forecasts) prediction and verified. Until they demonstrate consistent skill at one year, there is no reason to believe they have skill at, say, 50 years.

    Yes, I am aware of the contention that “random weather errors” cancel themselves out over 50 years, which would be true if there were no bias in the models. But how do you know there is no predictive bias until the models are verified over a period of time?
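
    A minimal sketch of the kind of scoring this would involve (the numbers below are hypothetical annual means, made up purely for illustration; the point is that bias, unlike random error, does not cancel with averaging):

        import numpy as np

        # Hypothetical one-year-ahead annual-mean forecasts vs. observations (deg C).
        predicted = np.array([14.3, 14.5, 14.4, 14.6, 14.5])
        observed = np.array([14.2, 14.3, 14.4, 14.3, 14.4])

        errors = predicted - observed
        print(f"mean bias: {errors.mean():+.2f} C")               # systematic part
        print(f"RMSE:      {np.sqrt((errors**2).mean()):.2f} C")  # total error

    Random errors shrink as the averaging period grows; a nonzero mean bias carries straight through to a 50-year projection.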

    Roger Pielke, Jr. has also written about this recently on his blog at:

  147. Posted Feb 20, 2008 at 7:29 AM | Permalink

    Roger’s blog URL: http://sciencepolicy.colorado.edu/prometheus/

  148. MarkW
    Posted Feb 20, 2008 at 7:31 AM | Permalink

    There is no reason to assume that climate sensitivity will be closer to 2C rather than 0.2C. The only things that give the high numbers are the climate models – models that are filled with so many assumptions that they are closer to works of fiction than to scientific instruments. The biggest of these assumptions is that water vapor is a strong positive feedback, an assumption that has never been backed up by any real-world science. In fact, most of the studies have reached the conclusion that water vapor is a negative feedback.

  149. MarkW
    Posted Feb 20, 2008 at 7:35 AM | Permalink

    These models are evaluated against the historical data record.

    Two problems.
    1) They are evaluated against the historical data record, and the match is poor. At continental scales and below, the match is almost nonexistent.
    2) The historical data record is nothing to cheer about, either in terms of quality or of coverage.

  150. Raven
    Posted Feb 20, 2008 at 7:45 AM | Permalink

    MarkW says:

    These models are evaluated against the historical data record.

    Two bigger problems:

    1) The models depend on estimates for parameters like aerosols which cannot be measured accurately.
    2) Historical estimates of aerosols are derived by assuming the CO2 sensitivity estimates are correct and then calculating the amount of aerosols required to make the models match.

    Here is an excellent example of how CO2 sensitivity assumptions affect the amount of aerosols:
    http://www.ferdinand-engelbeen.be/klimaat/oxford.html
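
    The circularity described in point 2 can be sketched with a toy equilibrium energy balance (my own illustration, not the calculation on the linked page; the forcing numbers are assumed round values and ocean heat uptake is ignored):

        # For an assumed sensitivity S (deg C per CO2 doubling), back out the
        # aerosol forcing needed to reconcile the model with observed warming:
        #   dT = (S / F_2X) * (F_GHG + F_AEROSOL)
        F_2X = 3.7    # W/m^2 per CO2 doubling (commonly cited value)
        F_GHG = 2.6   # assumed 20th-century greenhouse forcing, W/m^2
        DT_OBS = 0.7  # observed 20th-century warming, deg C

        for S in (1.5, 3.0, 4.5):
            lam = S / F_2X                 # deg C per W/m^2
            f_aer = DT_OBS / lam - F_GHG   # implied aerosol forcing
            print(f"S = {S:.1f} C -> implied aerosol forcing = {f_aer:+.2f} W/m^2")

    The higher the assumed sensitivity, the more negative the aerosol forcing needed to fit the same temperature record – the tuning degree of freedom described above.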

  151. Joe Black
    Posted Feb 20, 2008 at 7:49 AM | Permalink

    “In much of the Western USA, growth of flora and fauna is obviously precipitation-limited. Many wonderfully “green” areas (SF Bay Area, Colorado Front Range, etc) get about 15 inches of precipitation a year.”

    I find it ironically humorous that those in GA call “drought” when they are still getting >30 in./yr.

  152. Posted Feb 20, 2008 at 8:14 AM | Permalink

    Judith Curry:

    With regards to my ExxonMobil comment (in hindsight it would have been better to use the generic label “political skeptic”): this was not said in anger; it was an attempt to provide some historical context for the situation that Steve walked into ca. 2003.

    Judith, this is the only part of the exchange that I found troublesome, because it demonstrated that you were forming part of your opinion based on popularized accounts rather than on factual records. Eventually we are all private individuals, free to form our opinions on politics and the like as we please, but when we are dealing with scientific issues relating to our own fields, I believe we have a greater responsibility to fact-check these sorts of claims.

    Conventional political wisdom aside, it is not factually accurate to state that ExxonMobil only funds contrarians; in fact, they mostly fund proponents of AGW. It is closer to the truth that AGW proponents get miffed because contrarians get any funding whatsoever, even when that funding is nothing more than a literature review (as was the case here).

    Also, I would encourage a better choice of word besides “skeptic” – maybe “doubter”. Skepticism is a necessary facet of empirical science. Once you have dropped skepticism, you are no longer practicing empirical science. You can be a “believer” in an empirical hypothesis while still allowing skepticism to play a positive role in refining that particular hypothesis over time.

  153. Judith Curry
    Posted Feb 20, 2008 at 8:15 AM | Permalink

    For discussions on energy, i suggest you head for Joe Romm’s blog at
    http://gristmill.grist.org/user/Joseph%20Romm

    Totally agree that historical aerosol forcing is rather a large wild card, not sure how this can be better nailed down.

    Totally agree that the most interesting issue in all this is the total water feedback (vapor, clouds, land/sea ice). At some point I would like to organize a thread on that.

    Another issue of great relevance to the skeptic is the issue of verifying the predictions of climate models. The challenge is that these models aren’t really making predictions, since they don’t pretend to know what the volcanic activity or exact solar variability will be in the coming century. Hence the climate models are really being used for scenario simulations. There should be a way to design experiments to verify climate models in a predictive mode, say by preserving a frozen model version and then doing a hindcast with the observed solar and volcanic forcing, and interpreting the differences from the original simulation and from the observations. So I think we should figure out some sort of protocol for doing this. This is a much greater challenge than verifying weather models, as Mike Smith points out. This is why much of the verification has been done for component models on shorter time scales, rather than for the coupled model on climate time scales.
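
    The frozen-model protocol can be written down as a short sketch (the function and the run_model callable are hypothetical stand-ins for real model infrastructure, not an existing API):

        import numpy as np

        def frozen_model_verification(run_model, scenario_forcing, observed_forcing, obs):
            """run_model: callable wrapping the frozen model version, mapping a
            forcing series to a simulated temperature series."""
            scenario = run_model(scenario_forcing)  # the original scenario run
            hindcast = run_model(observed_forcing)  # same frozen model, actual forcings
            forcing_error = scenario - hindcast     # error due to forcing assumptions
            physics_error = hindcast - obs          # error attributable to the model
            return float(np.mean(np.abs(forcing_error))), float(np.mean(np.abs(physics_error)))

    Separating the two error terms is the point of the exercise: only the second term measures the model itself, which is what a verification in predictive mode would need to isolate.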

  154. steven mosher
    Posted Feb 20, 2008 at 8:18 AM | Permalink

    i buy agw as the best explanation of the data. i havent seen a reasonable scientific theory
    about policy. that’s ethical theory.

  155. Posted Feb 20, 2008 at 8:31 AM | Permalink

    Joe Black:

    I find it ironically humorous that those in GA call “drought” when they are still getting >30 in./yr.

    There is also a certain amount of irony when the majority of the shortage of rainfall can be attributed to a dearth of tropical cyclones making landfall during the summer months in the SE United States.

    Judith does raise some valid questions with respect to policy issues of course. What one defines as a drought usually depends on whether that rainfall shortage has any significant ecological or economic impact. (The natural ecological impact has been minimal so far.) Changing water use patterns in the SE have had the effect of increasing the sensitivity of local economies to what amount to natural variations in rainfall over time, and I believe that often this increased sensitivity is mistaken for a shift in natural underlying rainfall patterns.

  156. David Smith
    Posted Feb 20, 2008 at 8:43 AM | Permalink

    Steve’s request is for greater civility in posts. That’s reasonable and productive. It also helps the process if skins thicken a bit and if occasional politeness is added to the mix.

    Willis Eschenbach is an excellent example of a person with good posting techniques.

    This doesn’t come easily. It takes effort. When something irritates me I sometimes correspond with other CA participants offline and do my venting there, rather than here in public. In a pinch I ‘splain things to the family cat. Sometimes, despite those efforts, the frustration slips through, but it’s not for lack of trying to contain things.

    Why go to this effort? Because CA is at its best when it examines specific aspects of issues, in detail, in language that’s accessible to technically-proficient people and puts it all in context. When the threads get derailed by hot-button issues this process breaks down. Everybody loses.

    CA also has a valuable potential role as a “sandbox” for people of varying backgrounds to explore a topic, with help from the professionals. Excellent examples of this are the threads run by Leif Svaalgard. He explains things in context, takes on all comers, says what’s known and what’s not known with clarity, and gives his POV. He deals in details, not hand-waving. He’s gentle with novices. I’ve never sensed sleight of hand or ideological marketing. He doesn’t get derailed by hot-button issues or go away in a huff. Leif is a role model for the professionals. Perhaps we posters can help develop other Leifs, an effort that starts with what I mentioned at the start.

  157. Posted Feb 20, 2008 at 8:46 AM | Permalink

    re: #154

    Judith, you sent us to a site where the first two paragraphs contain these:

    “Its authors described a way for the United States to obtain nearly 100 percent of its electricity and 90 percent of its total energy, including transportation, from solar, wind, biomass, and geothermal resources by end-of-century. Electricity would cost a comfortable 5 cents per kilowatt hour.

    U.S. carbon emissions would be reduced 62 percent from their 2005 levels. Some 600 coal and gas-fired power plants would be displaced.”

    Now I consider myself to be an objective kind of person who looks at the data and not the messengers, but those sentences are very difficult to get past. I have worked in one sector of the energy industry for decades, so I think I have some knowledge of the general subject. The statements listed above are not in agreement with anything that I have ever encountered in my career.

    How about pointing us to some information that is rather more reasonable before you send us to read about states of energy utopia? Or do you agree with the statements as presented?

    If the statements are correct, how can there be any discussions whatsoever about 80% reductions in carbon emissions below 1990/2000 levels?

    Steve: Dan, no more policy please.

  158. David Smith
    Posted Feb 20, 2008 at 8:50 AM | Permalink

    Re #157 Leif, my apology for mis-spelling your last name 🙂

  159. Steve McIntyre
    Posted Feb 20, 2008 at 9:12 AM | Permalink

    Let me add a little additional explanation of Judith’s Exxon comment and how she expressed it to me. She was trying to explain the total hostility of the climate science community to what I would describe as fairly straightforward criticisms of the Mann study – a hostility that, in my opinion, continues to cloud a proper evaluation. Her rationale was that in the 1990s Exxon funded people to foment dissent, so people in the trade had guns drawn. Thus when I wandered into the debate from out of left field in 2003, whatever my motives were, people assumed that I was another hired gun from Exxon and started shooting in all directions, paying little attention to anything that Ross and I actually said. She meant this only as a sort of historical explanation of the gunfight. Extending the metaphor, I’d say that one of the results of the gunfight in the saloon was that Mann and his associates were able to take advantage of the situation and used it to poison objective appraisal of the criticisms.

    Regardless of the history, it’s time for the climate science community to grow up. I came at this problem quite innocently as do most readers here. Forget about the 1990s, forget about fights with Seitz, deal with the 2000s and 2010s.

    It’s also long past time to grow up about data. If you want to influence policy, then everything has to be open kimono. If climate scientists want to keep data and methods private and stay in the academic journals and seminar rooms, then fine; but once public policy is engaged, you have to be prepared for a new level of due diligence. So stop acting like prima donnas. No data hoarding. And what about the third party climate scientists who do archive their data and don’t agree with people like Michael Mann, Phil Jones and Lonnie Thompson acting like prima donnas?

    Well, speak out. Even if you’re worried about what NSF will say. From the total silence of the broad climate science community, it’s not unreasonable for the public to assume that the broad community endorses bad behavior by the prima donnas.

  160. Posted Feb 20, 2008 at 9:26 AM | Permalink

    Steve – snip, sorry about that. Re-read your post and see why I deleted it.

  161. Steve McIntyre
    Posted Feb 20, 2008 at 9:28 AM | Permalink

    #157. David, I agree with you about Leif. I’m happy to offer him access to the audience here and similar arrangements can be made for others.

    Nobody has to tread a party line here. Leif can say what he wants.

    If Michael Mann or Gavin Schmidt want to put up threads, they’re welcome as well. Or for that matter, someone less engaged in the disputes.

    CA has a big and active audience. Having developed this audience, I find it hard to keep feeding it, and I’m glad when others step up. There are some things that I’m not interested in – I’m not interested in discussing policy, and I wish to discuss mainstream papers that are relied on, rather than personal theories.

  162. MarkW
    Posted Feb 20, 2008 at 9:36 AM | Permalink

    So if I were to try and defend my theory that global warming is caused by an overabundance of kumquats, you wouldn’t be interested in hearing it?

  163. Scott-in-WA
    Posted Feb 20, 2008 at 9:38 AM | Permalink

    SteveM; It’s also long past time to grow up about data. If you want to influence policy, then everything has to be open kimono. If climate scientists want to keep data and methods private and stay in the academic journals and seminar rooms, then fine; but once public policy is engaged, you have to be prepared for a new level of due diligence.

    Because the consequences of a strong carbon management policy have yet to be felt in any serious way, there is no external demand for high quality climate analysis processes and for a disciplined and professional approach to climate data management.

    Will that situation change once the impacts of an all-pervasive carbon management policy are fully evident?

    Maybe, maybe not. But if one is a member in good standing of the pro-AGW climate science community, one would be well advised to remember that the AGW public policy debate may have only just begun, and that it will intensify significantly once people realize how carbon management policies will actually affect them personally.

  164. Posted Feb 20, 2008 at 9:46 AM | Permalink

    re #161, Steve’s Comment

    I’m confused. That post was on topic and far more moderate than your #162.

  165. Gary
    Posted Feb 20, 2008 at 9:56 AM | Permalink

    Way back in #51 Dr. Curry said:

    In the interests (of) opening communication (and perhaps even to help the resource base for paleoclimatology), I propose that at some point we have a thread here where we design a strategy for sampling of multiproxies towards doing regional and global time series of surface temperature for the past 2000 years, taking a broad look at this in terms of what it would take in terms of the actual data to assemble a convincing time series. Hopefully this would attract some climate researchers to engage in this dialogue.

    This is an excellent idea suitable for “crowdsourcing” but probably better done over at the CA Bulletin Board. Dr. Curry, why not start a thread over there?

  166. Lance
    Posted Feb 20, 2008 at 9:59 AM | Permalink

    Dr. Curry, I applaud your, and JEG’s, participation here at climateaudit. While I think John A’s request for an “in your own words” explanation of the reasons you believe we face dangerous warming from anthropogenic greenhouse gases is well intentioned, it does have a somewhat inquisition-like tone to it (unintentionally, I’m sure). It is also somewhat broad in its scope.

    I think Steve’s suggestion to ask you to respond to individual scientific points will prove to be more constructive in the long run and in the process we will get a better understanding of your insights and motivations.

    Do you know of any current paleo-climate proxy studies that demonstrate, to your satisfaction, that the temperatures of the current warm period are outside of the values one might expect from natural variability?

  167. Darwin
    Posted Feb 20, 2008 at 10:05 AM | Permalink

    I want to thank Judith for her thoughtful reply to my post. I am pondering the “20” models and their origins back to three men out of Tokyo University — Manabe, Arakawa and Kasahara. It is remarkable to me that the range for the models has remained fairly static since the 1970s, when Charney split the difference between Manabe’s GFDL 2 C and GISS’s UCLA II (Arakawa) 4 C, adding error bands to make it 1.5 to 4.5. I still don’t see how they derive a 66% probability for the range now. Of course, decades of time will tell, but in the meantime the need for verification of models would seem vital, and that requires accessibility to data and codes. Is it time that Congress, here, and the United Nations, globally, set rules for accessibility to data and codes for studies to be funded and used in their reports, and that they set up repositories for archiving such information so that robust verification can occur? And wouldn’t this be something that the AGU and other associations should be pursuing, rather than “consensus” statements? Would such actions also help end the suspicions about hiding information that underlie much of the acrimony found in web discussions?

  168. Posted Feb 20, 2008 at 10:20 AM | Permalink

    Judith Curry says:
    February 20th, 2008 at 6:57 am

    Julien, land use changes are very important for regional climate, but don’t have much of an influence on global climate unless they trigger something else. … The bottom line is that land is only 30% of the earth’s surface. Other than the ice covered portions having a large albedo, changing the land surface (deforestation, urbanization, whatever) doesn’t change the surface albedo all that much, and likely only for a small portion of the 30% that is land surface.

    Thank you. I’m mostly interested in the 70% non-land albedo, which would be included in Palle’s Earthshine. I will not expand further — our host does not like speculation on matters which will only become important when his proxy evaluations make it necessary to search for a new paradigm. One step at a time…

    Julian Flood
    (but I’m pleased to read this:

    http://sciencenow.sciencemag.org/cgi/content/full/2008/125/1?rss=1

    I will now find somewhere to nip off to and say ‘just as I predicted’ to anyone listening.)

  169. Matthew
    Posted Feb 20, 2008 at 10:22 AM | Permalink

    As a non-scientist who reads this blog faithfully, I become hopeful reading (subtracting the rancor) that concerned and learned individuals from both sides of this issue have come together to talk. Each side of the scientific debate could look to the true form of Skepticism (both from a philosophical and scientific standpoint) and spend time vigorously searching for evidence that contradicts their current theories and hypotheses.

    Thank you Steve for all your time moderating, and thanks to all who contribute here.

    For policy, perhaps a middle ground where people can meet is the Breakthrough Institute.

  170. theduke
    Posted Feb 20, 2008 at 10:26 AM | Permalink

    As my high-school physics teacher admonished us in those we-shall-conquer-the-world-with-a-slide-rule days, “Begin all of your scientific pronouncements with ‘At our present level of ignorance, we think we know . . .'”

    John R. Christy, in musings on winning “0.0001” of the Nobel Peace Prize for his work with the IPCC.

  171. Kenneth Fritsch
    Posted Feb 20, 2008 at 10:35 AM | Permalink

    When Judith Curry says:

    Exactly. But you can’t use hurricanes to prove that there is global warming. What you can do is show an unambiguous link between the increase in hurricane intensity and the warming sea surface temperatures. And if you look for why the sea surface temperatures are warming since the 1970s, you don’t have any explanation other than greenhouse warming.

    I think this comment shows why blogging can bring to the fore a one or two line comment that has to be considered opinion — informed opinion, but nevertheless opinion without further exposition. One might want to consider (a) that the 1970s were at or near the “bottom” of what appears to be a multi-decadal TC cycle in the NATL, while currently we are at a peak, (b) the evidence that changing detection capabilities have something to do with the NATL trends we see over the longer term, and (c) the lack of a global trend that includes the NATL (to which I have to assume Curry was specifically referring) that was shown in the Kossin reanalysis from the 1980s forward.

    Another weakness of the two line blog comment is that one has to closely parse the wording of these comments or come away with the wrong impression or an out of context perception of what was said. The juxtaposition of the words increasing hurricanes and SST (and AGW, for that matter) seems to imply a complete relationship that is not actually shown. Stating that what you can do is show an unambiguous link between hurricane intensities and SST is not to be confused with stating that there is a demonstrable unambiguous link.

  172. Kenneth Fritsch
    Posted Feb 20, 2008 at 10:56 AM | Permalink

    In http://www.climateaudit.org/?p=2708#comment-214158 Scott-In_WA says:

    Because the consequences of a strong carbon management policy have yet to be felt in any serious way, there is no external demand for high quality climate analysis processes and for a disciplined and professional approach to climate data management.

    I agree that much of the sloppy science will be honed only on demand when and if the mitigating policies begin to be felt adversely and in earnest by the public. My analogy would be that what we currently are seeing are exhibition games leading up to the official season when things get much more serious and expectations are significantly different.

    Steve: Be careful with the nuances. There’s a difference between “sloppy science” and “prima donna behavior”. Merely because someone is acting like a prima donna doesn’t mean that the science is bad or sloppy. It may have been done perfectly and the person simply has a self-inflated ego and doesn’t like being asked questions, or is annoyed that anyone should dare to ask him to archive data. Because Thompson hasn’t archived his data doesn’t mean that his analyses were done wrong. So don’t go a bridge too far in how you express this. If the matter is important, there’s no room for prima donna behavior any more.

  173. yorick
    Posted Feb 20, 2008 at 11:06 AM | Permalink

    In fairness to Judy, I do recall as a teenager reading ads in magazines – not sure if it was Exxon then, but I am pretty sure it wasn’t Esso; could have been Mobil – arguing that if scrubbers were installed on smokestacks, by 1990 we would all be waist deep in some kind of sludge.
    There is a history there; my advice: get over it. Using ExxonMobil attacks wins no converts; it is a groupthink-type mindguard tactic. The left, which is susceptible to this kind of anti-corporation rhetoric, is on board. Those of us on the right are highly suspicious of it, and use of it says “My mind is made up, and I am now moving over to the politics side of it, and on the politics side I am coming from the Left.” Fair or not, that’s what we hear, just as, when the “settled science” is questioned, the other side hears “ExxonMobil.” Look at Hansen’s rhetoric for a textbook case of how to keep your support under fifty percent.

  174. Jim Arndt
    Posted Feb 20, 2008 at 11:19 AM | Permalink

    Hi,

    The issue I have with the models used to support AGW is the limit used for TSI. It is true the TSI change in W/m2 is small, but it is the other effects linked to TSI that matter. For instance, the models don’t include CRF (cosmic ray flux) and UV. I also think that Leif is exploring the effect of gravity waves on climate. Both of these have been shown experimentally to change the atmosphere: CRF by increasing the albedo through clouds, and UV by its interaction with O3. I guess my question is: does the IPCC or Judith Curry even explore these forcings?

  175. jae
    Posted Feb 20, 2008 at 11:20 AM | Permalink

    Well, speak out. Even if you’re worried about what NSF will say. From the total silence of the broad climate science community, it’s not unreasonable for the public to assume that the broad community endorses bad behavior by the prima donnas.

    Yes, the failure to release data and the censorship that occurs in some climate forums are the primary drivers of my skepticism, and probably many others.

  176. jae
    Posted Feb 20, 2008 at 11:28 AM | Permalink

    Judith:

    Totally agree that the most interesting issue in all this is the total water feedback (vapor, clouds, land/sea ice). At some point I would like to organize a thread on that.

    Me, too! Please do.

  177. Pat Cassen
    Posted Feb 20, 2008 at 11:28 AM | Permalink

    MarkW (#149) said:
    “In fact most of the studies have reached the conclusion that water vapor is a negative feedback.”

    References please? (I am already familiar with Spencer et al., GRL, 34, L15707, doi:10.1029/2007GL029698, 2007)

    Thanks,
    Pat Cassen

  178. Andrew
    Posted Feb 20, 2008 at 11:36 AM | Permalink

    Since most of my comment was lost, I’d like to at least repost the relevant part, which was commenting on the policy comment Judith made: if land use is important for regional climate, then it is important, because regional climate is what people live with and must adapt to.

  179. Severian
    Posted Feb 20, 2008 at 11:52 AM | Permalink

    Good point Andrew, plus, the summation of many regional climate impacts becomes a global climate impact.

  180. Kenneth Fritsch
    Posted Feb 20, 2008 at 11:57 AM | Permalink

    Steve M states :

    Be careful with the nuances. There’s a difference between “sloppy science” and “prima donna behavior”. Merely because someone is acting like a prima donna doesn’t mean that the science is bad or sloppy. It may have been done perfectly and the person has a self-inflated ego and doesn’t like being asked questions or is annoyed that anyone should dare to ask him to archive data. Because Thompson hasn’t archived his data doesn’t mean that his analyses were done wrong. So don’t go a bridge too far in how you express this. If the matter is important, there’s no room for prima donna behavior any more.

    I might be inclined to reword and detail my sloppy reference as follows:

    If prima donna attitudes that affect the availability of data that have been used in published papers are tolerated and ignored generally in a discipline, then I would call that discipline’s action, not by commission but by omission, sloppy. I am not implying that sloppiness necessarily means the science is wrong, but I do know that if the public, or for that matter a vocal part of that public, became concerned by potential adverse effects of that science’s applications, they would be much less tolerant of prima donna attitudes and sloppy disregard for its potential bad effects.

    Since we do not know in detail or with certainty the motivations for withholding data, one can conjecture that it stems from a prima donna attitude or from embarrassment at showing sloppy data and/or sample handling. We have the case of the GISS code. We have the case of USHCN station quality control (the lack of which brings to mind the term sloppy) that Watts and his team have shown evidence for. The standard answer that it does not matter only reinforces the sense of a sloppy attitude towards the science.

  181. Joe Black
    Posted Feb 20, 2008 at 11:59 AM | Permalink

    ‘At our present level of ignorance, we think we know . . .’

    True genius at work.

  182. Joe Black
    Posted Feb 20, 2008 at 12:11 PM | Permalink

    Severian says:
    February 20th, 2008 at 11:52 am

    “…the summation of many regional climate impacts becomes a global climate impact.”

    Well, not really. Land represents less than 30% of the Earth’s surface, so it could (should) be neglected in Global Concerns. 😉

  183. Keith Herbert
    Posted Feb 20, 2008 at 12:20 PM | Permalink

    Steve,
    My post was one of the many deleted, though I am certain it was for some reason other than being angry or disrespectful, as I have been neither on this site. I appreciate that you are trying to maintain a balance here that allows people with many differing viewpoints to participate.
    Those who use terms such as “armchair skeptic” or “denier” are referring to some participants here in a derogatory manner. I think this is understood. I believe most readers and posters at this site are skilled in science, engineering, academics or some other field that would allow them to comprehend the discussions here on some level. It is unfair to belittle those who are not climate scientists.
    I am an expert in my field of engineering and I certainly know when science encroaches on the realm of engineering, just as you know when science is in the realm of statistics. We don’t have the option of excluding people outside our fields from expressing their viewpoints or even from determining policy for how we are to conduct ourselves in our fields. I think this is something climate science does not yet accept, though it has broadened into engineering and politics.

  184. MarkW
    Posted Feb 20, 2008 at 12:31 PM | Permalink

    Almost all of the climate network sensors are on land.

  185. J.Hansford.
    Posted Feb 20, 2008 at 12:36 PM | Permalink

    Steve McIntyre says:

    February 20th, 2008 at 9:12 am

    “Her rationale was that in the 1990s Exxon funded people to foment dissent.”

    I usually lurk, but I just had to make an observation about this mindset of Judy’s and other proponents of AGW, as it relates to corporations.

    Exxon Mobil is a legal business, run, organized and worked by people who are just like anyone else. They and Exxon Mobil have legal obligations as well as protections. Just like everyone else. As a corporate entity, it is subject to regulation.

    However, the last time I looked, I was unaware that they could manipulate the laws of physics….. It doesn’t matter if Exxon Mobil pays someone obscene amounts of money to say that the Moon is made of Green Cheese… The Moon will still be the same as it ever was.

    Facts are the only things that need be worried about…. as well as access to data, open dialog, etcetera, etcetera.

    Just thought I’d add that observation…. I like clarity rather than excuses. : )

    Great Blog Guys.

  186. Judith Curry
    Posted Feb 20, 2008 at 12:58 PM | Permalink

    For the record, I totally agree that data and metadata plus climate model code should be publicly available; I have said this numerous times, including on CA. For the most part it is available, but there are some obvious examples where it is either not available or not appropriately documented. Some of this is data created outside the U.S., where we have little control (such as the Jones surface data set). Some of the datasets are not simple to interpret and shouldn’t be released without appropriate metadata. I can’t speak for the situation regarding any of the specific paleo data sets of concern on this site. I certainly agree that the data should be made publicly available with suitable metadata.

    Steve: The Jones data set was funded by the US Department of Energy. To my knowledge they continue to fund Jones and, even if Jones beat them on earlier contracts, their ongoing funding is a pretty big carrot to bring Jones into line.

  187. Severian
    Posted Feb 20, 2008 at 1:03 PM | Permalink

    Well, not really. Land represents less than 30% of the Earth’s surface, so it could (should) be neglected in Global Concerns.

    Well, gee, then I guess we shouldn’t take temperature measurements over land. 😉

    If CO2 is supposed to be a “well mixed” gas in the atmosphere, then considering that the NH is warming a lot more than the SH, and that more land mass, people and UHIs are in the NH, I am suspicious that the summation of many regional changes does indeed have at least a pseudo-global (NH dominant) effect.

  188. Joe Black
    Posted Feb 20, 2008 at 1:33 PM | Permalink

    If CO2 is supposed to be a “well mixed” gas in the atmosphere, then considering that the NH is warming a lot more than the SH, and that more land mass, people and UHIs are in the NH, I am suspicious that the summation of many regional changes does indeed have at least a pseudo-global (NH dominant) effect.

    From what I’ve seen, the CO2 levels in Antarctica and at Mauna Loa are essentially equivalent. The NH temp anomalies are higher than in the SH. Maybe it’s not the CO2 then? Maybe the “climate change” is different over the Oceans than over the Land. That would make it not exactly “global” wouldn’t it?

  189. Severian
    Posted Feb 20, 2008 at 1:41 PM | Permalink

    From what I’ve seen, the CO2 levels in Antarctica and at Mauna Loa are essentially equivalent. The NH temp anomalies are higher than in the SH. Maybe it’s not the CO2 then? Maybe the “climate change” is different over the Oceans than over the Land. That would make it not exactly “global” wouldn’t it?

    Maybe not global, but teleconnected!

    But this whole line of discussion cuts to the heart of one of the main arguments against CO2 driven AGW in my mind. I don’t doubt for a second that land use changes can and are driving climate change on regional and summed regional levels, and outstripping CO2. And, unlike CO2, we probably can (if we decide to drop this CO2 nonsense and pay attention to it) do something about land use changes to ameliorate the effects if we study them and think of options. I’ve long been of the opinion that addressing regional climate changes and warming makes a lot of sense, much like paying attention to a factory spewing real pollution in your backyard is both doable and makes sense.

  190. John V
    Posted Feb 20, 2008 at 1:50 PM | Permalink

    Severian and Joe Black:
    The difference between NH and SH warming gets brought up again and again and again. An explanation is always given, but it doesn’t seem to matter. Oh well, I’m a glutton for punishment — here are a few points to ponder:

    1. The NH has more land. Land warms faster than ocean. Therefore the NH warms faster than the SH.
    2. Modern climate models all predict less warming in the SH.

    RealClimate recently did a write up on Antarctica:
    http://www.realclimate.org/index.php/archives/2008/02/antarctica-is-cold/

    Here’s the money quote:
    “Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.”

  191. Lance
    Posted Feb 20, 2008 at 1:52 PM | Permalink

    Dr. Curry,

    Did you see the question I asked in post #167 ?

  192. Andrew
    Posted Feb 20, 2008 at 1:56 PM | Permalink

    Oh, I see that quarter century was discreetly changed to “a long time” when they realized Hansen had a warm Antarctica.

    Also, why do the Northern land areas warm more than the Southern land areas, and the Northern Oceans more than the Southern ones? Soot anyone?

  193. Joe Black
    Posted Feb 20, 2008 at 2:02 PM | Permalink

    Here’s the money quote:
    “Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.”

    Gee, that would make it not global, wouldn’t it? It’s a semantic issue confounded by a poor system definition.

    To be measuring “global warming” it seems to me that one would try to measure/estimate the heat contained by the lower atmosphere, the land mass down 5 or 10 feet, lake and river temps, and the ocean heat down to whatever level is important to the biotic mass of direct or indirect importance to humans.

  194. Mike B
    Posted Feb 20, 2008 at 2:04 PM | Permalink

    Here’s the money quote:
    “Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.”

    John, part of the entire context is the Realclimate paleos who claim that the MWP was not “global” because it was restricted to the NH.

    With regard to the current warming period, the surface record shows that the warming is concentrated not just in the NH, but in the upper latitudes of the NH. In that sense, the current warming period isn’t “global” either.

    The team can’t have it both ways, with one standard for “global” warming in past periods and a different one now.

  195. Sam Urbinto
    Posted Feb 20, 2008 at 2:10 PM | Permalink

    Pat Frank: Before we can even get to the question of the models having what seem to be margins of error (or whatever they call them) greater than the anomaly trend for the entire period, we have to answer the underlying questions. Does such a thing as a global temperature exist; that is, can we distill all the sampling and averaging et al. into a single number? Does the anomaly trend itself reflect this in a meaningful, physical way?

    Then we can ask about its accuracy, which of course then leads to defining possible non-natural variability causes to quantify in some weighted fashion, some ratio, to then find out the cost/benefit and risk/reward of actions intended to fix each specific issue.

    bender: University of Toronto Mississauga?

    Julian Flood: I tend to attach a lot of this suspected warming to lower albedo; if the anomaly trend is indeed reflecting some rise in temperatures in the overall net system, I would hazard a guess that albedo accounts for 90% of whatever it is. Certainly, it’s not unlikely it has warmed about 1 C, and if it’s non-natural variability, albedo change in conjunction with an 800% rise in population explains it better than the impact of AGHG on the weather system/greenhouse effect.

    But like Steve, I don’t know why Judith would be the person to ask about it. However, her later comment that land is only 30% is a reasonable one; so is MarkW’s point that the land is where we have the air sensors.

    At the end of the day, the biggest change from then to now is technology and population.

    Julien Emile-Geay: I also thought it was a very nice and helpful thing for the two of you, and everyone else involved, regardless of the reasons.

    PhilA: I have no problem with accepting that the anomaly trend might reflect that something is going on, but my gut instinct says whatever it is is mostly due to population and technology. Think back to 1800ish: the number of large cities, cars, farms, trains, airplanes, factories? So while the three main AGHG are up 35%, 150% and 16% respectively, population is up 800% and anything that didn’t exist before 1880 is up infinity.

    TonyN: I would like to see more on the doubts than the convictions also, but people don’t really operate that way. They focus on convictions and try and move things forward in line with them. Nothing wrong with it, and expected. What’s the issue sometimes is ignoring the doubts or pretending you don’t even think they exist. I don’t like that much. But I’m not surprised.

    MrPete:

    1) I’m convinced increased greenhouse gases add warmth to the climate. (But is that factor more important than other factors, and does it cause overall warming? I dunno.)
    2) I’m convinced anthropogenic sources add to the total level of greenhouse gases. (But is that factor more important than other factors, and does it cause a net increase in greenhouse gases?
    I dunno.)

    Excellent.

    Steve:

    From the total silence of the broad climate science community, it’s not unreasonable for the public to assume that the broad community endorses bad behavior by the prima donnas.

    Excellent.

  196. John V
    Posted Feb 20, 2008 at 2:27 PM | Permalink

    #195 Mike B:

    The team can’t have it both ways, with one standard for “global” warming in past periods and a different on now.

    I try to stay out of the MWP stuff, but here’s my understanding. Over time scales appropriate for climate (10+ years?) the temperature is currently rising in every reasonably sized region you can define. NH, SH, tropical, land, ocean, every continent — they are all warming. There is *more* warming in the NH, and particularly the Arctic, but there is warming everywhere.

    Apparently there is some evidence that the MWP was more regional. I know very little about that and won’t argue it either way.

    (Yes, I know that the 1998 peak temperature is warmer than any single year since. The trend lines are all, or very nearly all, still positive).

  197. Tim Ball
    Posted Feb 20, 2008 at 2:29 PM | Permalink

    Excluding computer generated data, where it is programmed that CO2 increases drive temperature increases, could somebody provide me with a record of any time period, of essentially any length, which shows CO2 increases preceding temperature increases?

  198. LadyGray
    Posted Feb 20, 2008 at 2:36 PM | Permalink

    Judith, you posted that

    The challenge is that these models aren’t really making predictions, since they don’t pretend to know what the volcanic activity or exact solar variability will be in the coming century. Hence the climate models are really being used for scenario simulations. There should be a way to design experiments to verify climate models in a predictive mode, say by preserving a frozen model version and then doing a hindcast with the observed solar and volcanic forcing, and interpreting the difference with the original simulation and the observations. So I think we should figure out some sort of protocol for doing this.

    Would it be fair to say that you don’t believe the science is settled? It would seem that you are calling for experimental verification of climate models. It would even seem that you are encouraging patience and not jumping to conclusions, while waiting for such verification. If you had not declared that you believe that AGW is actually occurring, I would think you were an AGW-skeptic (and I mean that as a compliment, regardless of how it may sound). I would guess that you are a moderate on these issues, and probably don’t completely fit within either of the extreme camps. You might even be closer in thinking to Steve than to Gavin.

  199. yorick
    Posted Feb 20, 2008 at 2:39 PM | Permalink

    Tim Ball,
    My bet is that the comeback will be “Snowball Earth.”

  200. MarkW
    Posted Feb 20, 2008 at 2:44 PM | Permalink

    I thought the models were only good for global level guesses. Since when were they producing useable continent level data?

  201. Pat Cassen
    Posted Feb 20, 2008 at 3:07 PM | Permalink

    MarkW,
    Can you give me some references for your statement that “…most of the studies have reached the conclusion that water vapor is a negative feedback”?

    Thanks,
    Pat Cassen

  202. Pat Cassen
    Posted Feb 20, 2008 at 3:08 PM | Permalink

    MarkW,
    Can you give me some references for your statement that water vapor provides a negative feedback?
    Thanks,
    Pat Cassen

  203. Andrew
    Posted Feb 20, 2008 at 3:09 PM | Permalink

    JohnV, you’ve got to be kidding me: you know of this blog and you aren’t familiar with the whole MWP situation? That’s basically the main topic here!

    BTW, NAS says there is evidence for a MWP in many places, except (coincidence?) Antarctica. Henrik Svensmark says that Antarctica should be different if warming is at least partly effected by clouds (because Antarctica is white, the change in albedo has the opposite effect), i.e. Cosmic Rays (or perhaps circulation pattern changes). Coincidence? I’m inclined to think not.

  204. John V
    Posted Feb 20, 2008 at 3:28 PM | Permalink

    #198 Tim Ball:

    Excluding computer generated data, where it is programmed that CO2 increases drive temperature increases…

    That’s not what’s programmed into the models. They have a radiative forcing model which causes CO2 increases to change the radiative energy balance of the earth. Temperature increases are a side-effect. That is, rising temperatures are a model *output* not a model *input*.

    …could somebody provide me with a record of any time period, of essentially any length, which shows CO2 increases preceding temperature increases?

    I expect you won’t accept this as your example of CO2 preceding temperature, but the solar effects of the Milankovitch Cycle (ice-age cycle) are not alone sufficient to drive the temperature changes. The difference (~40%) is made up by the increased CO2 in the atmosphere. Of course, the increase in CO2 is also caused by the warming temperatures in a classic feedback loop.

    Prior to recent centuries, there were few if any causes for CO2 levels to change other than temperature-dependent cycles. On very long time scales, geological processes affect CO2 concentrations. There is evidence that the world was quite warm hundreds of millions of years ago (despite a weaker sun) because of high levels of CO2.

    Would you accept Venus as an example?
    Would you accept the last 30 years on Earth as an example?

    =====
    #204 Andrew:
    I’m familiar with the MWP-vs-CWP situation. What I said is that I try to stay out of it, and that I don’t know enough to argue either way. I’m more interested in the last 30-100 years. Why is it warming now? That seems more important than what happened 500-1000 years ago (depending on whose definition of medieval you prefer), particularly given the huge uncertainty on temperatures that long ago.

  205. Neal J. King
    Posted Feb 20, 2008 at 3:31 PM | Permalink

    Judith,

    As you have remarked, a straightforward calculation of the amount of temperature change expected from a 2X in atmospheric CO2 is not available, because of the complexity of the Earth’s response to such a 2X.

    However, an intermediate step in evaluating the impact of a 2X is the calculation of a 3.7 W/m^2 imbalance in the Earth’s radiative budget. This is a widely quoted figure that seems to have few caveats littering the landscape. Based upon my understanding of the general argument, there should not be too many fudge-factors available for it. However, I am still trying to track down a specific calculation for this number.

    I have looked at Pierrehumbert’s in-progress textbook (http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html)
    and find its explanation very helpful qualitatively; but he doesn’t do the actual calculation to arrive at 3.7. I’ve also looked through Weart’s website (http://www.aip.org/history/climate/summary.htm) for sources; right now, I’m looking at Hulburt’s article from 1931. Conceptually, the article is fairly clear, but also sketchy; and it was written at a time when less information on absorption coefficients was available.

    Do you know of a complete and explicit calculation of the impact of a 2X in CO2 on the radiative imbalance? Perhaps a textbook?

    Steve: We haven’t had much luck with this sort of question. I spent a lot of time trying to track down this number. IPCC reports are pretty sloppy about this, as 3.7 W m-2 is used in TAR without attribution. The best treatment that I’ve seen is in Ramanathan’s articles in the late 1970s and early 1980s – some of which are online at Am Met Soc – which I’ve been thinking about commenting on for some time. Also look at some of the references from when I reviewed Simplified Forms in connection with Hansen et al 1988. For people interested in coming to grips with how AGW works, I think that the radiative-convective models are quite useful and I think that this line of argument should have been reviewed in detail in one of the IPCC reports. Because of their focus on literature review and mentioning every researcher in the field, they skip over key steps in the fundamental exposition. Of course then people say – if you want to know more, take an undergraduate course. That’s a silly response on their part. The basics should be compiled in one of the IPCC reports so that interested people can access an authoritative version.

  206. Neil Fisher
    Posted Feb 20, 2008 at 3:52 PM | Permalink

    MarkW:

    I thought the models were only good for global level guesses. Since when were they producing useable continent level data?

    I thought so too – and it’s what worries me about climate models. Maybe there is something “different” about climate that I’m unaware of, but in any other field such a “failure” would raise many eyebrows and people would be treating the results with a very large grain of salt. Can anyone point me to either 1) an explanation of why climate is different in this respect or 2) some other modelling exercise where such significant disparity of detail is ignored?

  207. John V
    Posted Feb 20, 2008 at 3:52 PM | Permalink

    #204 Andrew:

    Henrik Svensmark says that Antarctica should be different if warming is at least partly effected by clouds (because Antarctica is white, the change in albedo has the opposite effect), i.e. Cosmic Rays (or perhaps circulation pattern changes). Coincidence? I’m inclined to think not.

    Honest question:
    If that’s the case, shouldn’t we expect the Arctic to behave similarly since it is also white?

  208. Jud Partin
    Posted Feb 20, 2008 at 4:01 PM | Permalink

    Steve – here’s a paper I didn’t mention to you while you were here at GT. It may be a good one to discuss. I haven’t gotten all the way thru it, so I can’t comment on it right yet. Since people were talking about medieval times, I thought I’d throw it out there.

    Graham et al., 2007 “Tropical Pacific – mid-latitude teleconnections in medieval times” Climatic Change (2007) 83:241–285, DOI 10.1007/s10584-007-9239-2

    It has a lot of meat in it to satiate some appetites around here… for a short time at least.

    Steve: I’ve got a copy of that and went to one of his AGU presentations in 2006. He showed pictures of submerged medieval trees in Sierra lakes that overlap some pictures that I’ve shown here. I’ve tried to collate some of Graham’s data to look at, but its provenance is spotty. I’ve got notes and maybe I’ll throw a quick post together.

  209. Jim Arndt
    Posted Feb 20, 2008 at 4:02 PM | Permalink

    Hi,

    John V #205, Venus is a bad example. Venus is much closer to the sun and its atmosphere is totally different from ours. It has internal heat that also adds to the surface temperature, not to mention that its atmosphere is hundreds of times denser than ours. Hundreds of millions of years ago is another bad example: the Earth was closer to the sun, the orbit and rotation were different, the continents were in different places, and the ocean currents were different. The moon was closer, which affected the tides. The TSI change in W/m2, I agree, is small, but it is the indirect relationship with TSI, as in CRF and possibly UV, that is responsible for most of the climate drive, with AMO and PDO moderating and CO2 a small part of the equation. If CO2 were the main driver of climate then you would have a runaway effect, because the oceans would outgas more CO2 with temperature, which would then raise the temperature even higher and outgas more CO2.

  210. Mike B
    Posted Feb 20, 2008 at 4:14 PM | Permalink

    I try to stay out of the MWP stuff, but here’s my understanding. Over time scales appropriate for climate (10+ years?) the temperature is currently rising in every reasonably sized region you can define. NH, SH, tropical, land, ocean, every continent — they are all warming. There is *more* warming in the NH, and particularly the Arctic, but there is warming everywhere.

    According to GISS Land-Ocean data, the Southern Tropics (EQU-24S) and the Southern Ocean (44S-64S) both have (slightly) negative temperature trends over the last 11 years. The Northern Tropics and the Southern Temperate Zones have very slight positive trends (the GISS map hardly paints a foreboding picture of the future of the planet).

    BTW, the analysis I had handy used 11 and 22 year trends because I was looking at some solar cycle stuff. 10, 20, or 30 year trends will show essentially the same thing.

    Again, keep in mind that it was the RealClimate paleos who first raised the issue of what constitutes a “global” warm period.

  211. Tim Ball
    Posted Feb 20, 2008 at 4:16 PM | Permalink

    205
    None of these answer my question. The argument about how the models react to CO2 increase is specious. The Venus argument is put down by 210; it also raises the issue of geothermal energy, which I think is overlooked in the climate issue, especially as it affects ocean temperatures.
    No, the last thirty years show temperatures changing before CO2; the only debate is about the length of the lag time.

  212. Pat Cassen
    Posted Feb 20, 2008 at 4:26 PM | Permalink

    Jim Arndt (#210)
    Can you supply a reference for the statement that Venus’ internal heat is adding to the surface temperature of that planet? Also, for the statement that the Earth was closer to the Sun 100 million years ago? Likewise, for the statement “If CO2 were the main driver of climate then you would have a runaway effect, because the oceans would outgas more CO2 with temperature, which would then raise the temperature even higher and outgas more CO2”.

    (The last is contradicted by Kasting, J.F., 1988. Runaway and moist greenhouse atmospheres and the evolution of Earth and Venus. Icarus 74:472–494.)

    Thanks,
    Pat Cassen

    Steve:
    take the Venus stuff and lead-lag stuff to the Bulletin Board.

  213. Mike B
    Posted Feb 20, 2008 at 4:32 PM | Permalink

    My previous post got partially eaten. I’ll try again.

    According to GISS Land-Ocean data, the Southern Tropics (EQU-24S) and the Southern Ocean (44S-64S) both have (slightly) negative temperature trends over the last 11 years. The Northern Tropics and the Southern Temperate Zones have very slight positive trends (the GISS map hardly paints a foreboding picture of the future of the planet).

    BTW, the analysis I had handy used 11 and 22 year trends because I was looking at some solar cycle stuff. 10, 20, or 30 year trends will show essentially the same thing.

    Again, keep in mind that it was the RealClimate paleos who first raised the issue of what constitutes a “global” warm period.

  214. JamesG
    Posted Feb 20, 2008 at 4:38 PM | Permalink

    Judith Curry said “Science never absolutely “proves” theories”. This type of comment keeps being repeated ad nauseam by the climate science community and it’s quite bogus. They mean that you cannot get a mathematical proof, which is correct, but that is a complete red herring, because we prove things experimentally all the time. I admit this experiment is difficult, but this particular argument, which attempts to justify both the dearth of proof of AGW and the attitude of not even bothering to seek proof, is completely unscientific. Now I agree 100% with Judith that we are over-polluting our environment and that we should continue seeking renewable, clean energies regardless of AGW. In fact, I think it is really not too important WHY we think clean, renewable energy is a good idea, only that we DO think that way, because that is the only path towards multi-party support. All other arguments are divisive and ultimately pointless if most of us actually agree on the same end-point. But frankly, Judith, if you want to persuade people, then being unscientific by parroting such ridiculous arguments from realclimate.org, who are collectively as far from objective, Feynman-style scientists as they could possibly be, really doesn’t help.

  215. John V
    Posted Feb 20, 2008 at 4:43 PM | Permalink

    Jim Arndt, Mike B, Tim Ball:
    I’ve been down this road before.
    We can continue to argue past each other or I can stop now.
    I choose the latter so that others can have a turn.

  216. Andrew
    Posted Feb 20, 2008 at 5:02 PM | Permalink

    JohnV, in short, no. Most of the ice in the Arctic melts in summer, and the snow which doesn’t melt (i.e., the interior of Greenland) covers a much smaller area than Antarctica. Additionally, the Northern Hemisphere, and the Arctic along with it, appears to be experiencing at least some warming due to anthropogenic soot (which is why the pattern of industrialized areas warming more, not less, seems to occur, counter to what you’d expect for aerosol cooling). Additionally, there certainly has been some weird Arctic behavior, since the Arctic shows a pronounced warm-thirties phenomenon.

  217. DeWitt Payne
    Posted Feb 20, 2008 at 5:11 PM | Permalink

    Steve Mc. in #206

    For people interested in coming to grips with how AGW works, I think that the radiative-convective models are quite useful and I think that this line of argument should have been reviewed in detail in one of the IPCC reports. Because of their focus on literature review and mentioning every researcher in the field, they skip over key steps in the fundamental exposition. Of course then people say – if you want to know more, take an undergraduate course. That’s a silly response on their part. The basics should be compiled in one of the IPCC reports so that interested people can access an authoritative version.

    You don’t have to take a course, just read the textbook. That’s the most likely place to find detailed basic information on fundamentals. There is every incentive not to put that stuff in the primary literature (page charges, word limits, etc.). I remain unconvinced that the IPCC report is the correct venue. Of course, I haven’t found a textbook yet that fills the bill entirely either. It’s disappointing that there are better popular explications of string theory (Brian Greene comes to mind) than of climate science.

  218. Sam Urbinto
    Posted Feb 20, 2008 at 5:40 PM | Permalink

    Other atmospheres: Go to wikipedia, look up the planet, and write yourself a compare/contrast list of all the differences; you’ll see why you can’t use any of them as examples and won’t have to discuss it anywhere.

    John V:

    “(Yes, I know that the 1998 peak temperature is warmer than any single year since. The trend lines are all, or very nearly all, still positive).”

    You are correct of course, the trend is still up (depending upon the exact period). However, GISTEMP has 2005 as the highest anomaly (you didn’t specify which data set, so I thought I’d point that out again). Then again, 2007 ended (the usual up-and-down sawtooth aside) right about where 1998 was. If the last 10 years are any indication, nothing much is happening (at least on the anomaly side of things), so I don’t know.

    But yes, if the past is any indication of the future, the anomaly will continue trending up, I don’t dispute that.

    JamesG: That is the key.

    we should continue seeking renewable, clean energies regardless of AGW. In fact, I think it is really not too important WHY we think clean, renewable energy is a good idea, only that we DO think that way, because that is the only path towards multi-party support. All other arguments are divisive and ultimately pointless if most of us actually agree on the same end-point.

  219. Jim Arndt
    Posted Feb 20, 2008 at 5:50 PM | Permalink

    Hi,

    Sam U. #219, only GISS shows an upward trend line; HadCRUT, UAH and RSS show flat or declining temperatures over the last ten years. Also, GISS is the odd man out for temperatures if you compare the tracks from month to month. See Watts Up With That here:
    http://wattsupwiththat.wordpress.com/2008/02/19/january-2008-4-sources-say-globally-cooler-in-the-past-12-months/

  220. Neal J. King
    Posted Feb 20, 2008 at 5:55 PM | Permalink

    #218, DeWitt Payne:

    I have been looking in several places for a derivation of the 3.7 (see #206). I have gone to the trouble of buying a few textbooks to look for the number. I can’t claim that I’ve exhausted the field, but so far I’ve not found an explicit calculation; at best, qualitative sketches of what could be fleshed out into a full-on calculation (but wasn’t). I’m still looking: I have a few older books on order on atmospheric radiative transfer.

    It’s true that there are better popular explications of string theory than of climate science (and in particular, of radiative transfer). However, keep in mind that a description of string theory does not need to calculate results (a damn good thing, since they don’t seem to be very successful at that, either!): Typical readers don’t really care about why there are 11 instead of 10 dimensions, do they? But the interest in climate science has to be in the numbers as well as the concepts. And explaining the numbers means talking about the calculations – which means that you have lost the normal readers for popular science.

    Possibly the best popular/historical presentation of the science behind the GW studies is by Weart – who frankly gives up on trying to give an intelligible explanation of the enhanced greenhouse effect to a non-technical audience. I sympathize with him: the best qualitative explanation I’ve seen is by Pierrehumbert. But you can only follow his qualitative explanation if you have some idea of what it would take to flesh it out. If you don’t have some familiarity with the thermodynamics of gases, some kind of understanding of radiative transfer, and a general feel for partial differential equations, his “qualitative” explanation won’t make any sense, because it’s not so much “qualitative” as “high-level”: if you’re acquainted with that realm of physics, you can follow the argument. However, you can never arrive at the hard-core numbers without the hard-core calculation, which would require integration over spectral absorption functions; and he doesn’t present these. (Neither do the older papers that Steve McIntyre mentioned above, as far as I can tell.)

    I’m hoping that a professional in the field can remember where that 3.7 came from, and point to an explicit calculation. If I had nothing else to do, I’d try to do the calculation myself. But I think it’s a lot of work.

  221. Sam Urbinto
    Posted Feb 20, 2008 at 6:07 PM | Permalink

    Jim Arndt: I prefer GISTEMP, but they’re all about the same as far as I’m concerned (within a few tenths or hundredths).

  222. Judith Curry
    Posted Feb 20, 2008 at 6:20 PM | Permalink

    #167 Lance, I would refer you to my colleague Kim Cobb’s paper (corals) at

    Click to access ENSOLastMillenium.pdf

    Steve: this paper was discussed previously here at http://www.climateaudit.org/?p=770

  223. steven mosher
    Posted Feb 20, 2008 at 6:30 PM | Permalink

    Re 223. Judith, are you suggesting, by referring to Dr. Cobb’s paper, that she would assent to the gauntlet? Testing both her paper and our newfound sense of civility?

    Sorry, it’s my inner Don King.

  224. Judith Curry
    Posted Feb 20, 2008 at 6:35 PM | Permalink

    The 3.7 W m-2 is determined in a very straightforward way. If you take a radiative transfer model and make a calculation for the infrared part of the spectrum for average atmospheric conditions, and then make another calculation where you double CO2, then the one with double CO2 will have a surface flux that is 3.7 W m-2 greater than in the original calculation.

    Here is an online radiative transfer code to use:
    http://www.icess.ucsb.edu/esrg/pauls_dir/
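
    A minimal sketch of that two-run differencing in Python, with the simplified logarithmic fit ΔF = 5.35 ln(C/C0) of Myhre et al. (1998) standing in for the full spectral integration a real radiative transfer code would perform (the fit itself was derived from exactly this kind of paired model run):

        import math

        def co2_forcing(c_ppm, c0_ppm=280.0):
            # Simplified fit Delta-F = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998);
            # a stand-in here for calling a full radiative transfer code.
            return 5.35 * math.log(c_ppm / c0_ppm)

        # "Run" once at the baseline, once at doubled CO2, and difference:
        print(round(co2_forcing(560.0) - co2_forcing(280.0), 2))   # 3.71 W/m^2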

  225. Judith Curry
    Posted Feb 20, 2008 at 6:42 PM | Permalink

    Wikipedia has a great list of publicly available radiative transfer codes:
    http://en.wikipedia.org/wiki/List_of_atmospheric_radiative_transfer_codes

  226. John Lish
    Posted Feb 20, 2008 at 6:48 PM | Permalink

    Judith Curry said:

    The 3.7 W m-2 is determined in a very straightforward way. If you take a radiative transfer model and make a calculation for the infrared part of the spectrum for average atmospheric conditions, and then make another calculation where you double CO2, then the one with double CO2 will have a surface flux that is 3.7 W m-2 greater than in the original calculation.

    My problem with this is that the IPCC 2007 Report shows CO2 doubling as 2.3 W m-2, based on a calculation of forcing to 2005. It’s somewhat odd that the report gives a different forcing, yet this hasn’t been picked up by the editors.

    Personally, this lower 2.3 W m-2 figure makes sense when using Willis’s 2-shell atmospheric model rather than the more simplistic 1-shell model used by IPCC modellers. It also reduces the messiness of the aerosol masking claim.

  227. Judith Curry
    Posted Feb 20, 2008 at 6:55 PM | Permalink

    The difference could be a clear-sky calculation vs. an all-sky (with clouds) calculation.

  228. Tom Gray
    Posted Feb 20, 2008 at 6:58 PM | Permalink

    re 205

    I expect you won’t accept this as your example of CO2 preceding temperature, but the solar effects of the Milankovitch Cycle (ice-age cycle) are not alone sufficient to drive the temperature changes. The difference (~40%) is made up by the increased CO2 in the atmosphere. Of course, the increase in CO2 is also caused by the warming temperatures in a classic feedback loop.

    Has the contribution of the current warming to CO2 concentration been quantified? How much CO2 is added to the atmosphere in response to the warming produced by anthropogenic CO2 emissions?

  229. DeWitt Payne
    Posted Feb 20, 2008 at 7:13 PM | Permalink

    John #227,

    Remember that the 3.7 W/m2 is for doubling CO2. It hasn’t doubled yet, even if you include forcings from other GHGs, so the forcing will be less than 3.7. A rough calculation would be 3.7 × ln(380/280)/ln(2). That’s 1.63 W/m2. Add in forcing from CFCs etc. and 2.3 seems a reasonable estimate for the total forcing so far, as measured from the 18th century.
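
    In code, for anyone who wants to check the arithmetic (the 3.7 figure and the 380/280 ppm values are simply taken from this thread):

        import math

        # Fraction of a CO2 doubling realized so far, times the doubling forcing:
        forcing_so_far = 3.7 * math.log(380 / 280) / math.log(2)
        print(round(forcing_so_far, 2))   # 1.63 W/m^2, about 44% of a doubling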

  230. DeWitt Payne
    Posted Feb 20, 2008 at 7:23 PM | Permalink

    Judith #225,

    The 3.7 W m-2 is determined in a very straightforward way. If you take a radiative transfer model and make a calculation for the infrared part of the spectrum for average atmospheric conditions, and then make another calculation where you double CO2, then the one with double CO2 will have a surface flux that is 3.7 W m-2 greater than in the original calculation.

    Define average atmospheric conditions. What percentage clouds, how many layers at what altitudes, average droplet size, percentage of ice crystals, temperature lapse rate, water vapor mixing ratio etc. Then show that the known diurnal, seasonal and geographic variations of these conditions do not significantly affect the calculation. That’s what I’m looking for and what I assume that Steve Mc is looking for.

  231. John Lish
    Posted Feb 20, 2008 at 7:58 PM | Permalink

    DeWitt 230
    Yes, the figure used by the IPCC is approx. 1.6 W m-2 for current anthropogenic CO2 forcing. Given that, due to the logarithmic relationship between CO2 concentration and temperature, approx. 70% of the forcing from a doubling of CO2 has already occurred, this gives a forcing from a doubling of CO2 of approx. 2.3 W m-2.

    The difference between the two, Judith, is that the 1.6 W m-2 is worked out using current data (temperature, concentration of greenhouse gases in the atmosphere, known natural forcings etc.) and the 3.7 W m-2 is based on a simplistic 1-shell model of the atmosphere. So modellers are using the output of a model as an input to their models.

    If we can start by admitting that the 3.7 W m-2 figure is entirely arbitrary, that would be progress.

  232. yorick
    Posted Feb 20, 2008 at 8:08 PM | Permalink

    the solar effects of the Milankovitch Cycle (ice-age cycle) are not alone sufficient to drive the temperature changes.

    This is one of those proofs that begins with “assuming that the models are correct”.

  233. DeWitt Payne
    Posted Feb 20, 2008 at 8:21 PM | Permalink

    John #232,

    Maybe you missed that I used the natural log of the CO2 concentration ratio in the calculation. Your figure of 70% for the percentage change compared to doubling is incorrect if my calculation is correct. I calculate the actual amount of forcing so far from CO2 as about 44% of doubling, not 70%. Now that I think about it, I’ve seen that 70% figure before, but never saw how it was derived. Do you or anyone else have a source for the 70% figure?

  234. Posted Feb 20, 2008 at 8:27 PM | Permalink

    #232 John Lish:

    approx. 70% of the forcing from a doubling of CO2 has already occurred

    I believe your math is mistaken. Given the current CO2 at 380ppm and pre-industrial at 280ppm, then log2(380/280) = 0.44. That is, only 44% of a doubling forcing is currently felt. 3.7Wm-2 * 44% = 1.6Wm-2. Thus the IPCC numbers are consistent with a value of 3.7Wm-2 for doubling CO2.

    For a complete arm-waving, totally indefensible estimate of the temperature effect, use 1C for the warming in the last century. Assume a linear relationship between radiative forcing and temperature change. Further assume that there is no more warming “in the pipeline”. (Neither assumption is valid, but that’s the fun of arm-waving). 1C for 1.6Wm-2 would be equivalent to ~2.3C for 3.7Wm-2. That is, doubling CO2 would yield ~2.3C. (As I said, that’s total arm-waving and not a proof of anything — it is comforting that the numbers are all self-consistent though).
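
    Spelled out in code, under the same stated (and admittedly invalid) assumptions:

        import math

        frac = math.log(380 / 280) / math.log(2)   # ~0.44 of a doubling so far
        sensitivity = 1.0 / frac                   # scale 1 C observed up to a full doubling
        print(round(frac, 2), round(sensitivity, 1))   # 0.44, ~2.3 C per doubling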

  235. Peter Thompson
    Posted Feb 20, 2008 at 9:15 PM | Permalink

    John V.,

    Bravo:

    For a complete arm-waving, totally indefensible estimate of the temperature effect, use 1C for the warming in the last century. Assume a linear relationship between radiative forcing and temperature change. Further assume that there is no more warming “in the pipeline”. (Neither assumption is valid, but that’s the fun of arm-waving). 1C for 1.6Wm-2 would be equivalent to ~2.3C for 3.7Wm-2. That is, doubling CO2 would yield ~2.3C. (As I said, that’s total arm-waving and not a proof of anything — it is comforting that the numbers are all self-consistent though).

    That is an excellent summary of the state of climate science. Which of your points would you call the most egregious example of arm waving?

  236. Paul Linsay
    Posted Feb 20, 2008 at 9:55 PM | Permalink

    Notice that whether the forcing by doubled CO2 is 2.3 W/m^2 or 3.7 W/m^2, it is smaller than the errors in the outgoing SW and LW radiation calculated by the models, as shown in WGI Chapter 8, Figure 8.4. Some models have a combined SW+LW error that is a factor of 10 larger than this.

  237. John V
    Posted Feb 20, 2008 at 9:58 PM | Permalink

    #229 Tom Gray:

    Has the contribution of the current warming to CO2 concentration been quantified?

    I’m in an arm-waving mood, so I’ll guesstimate for you:
    Ice age cycles have a peak-peak temperature swing of ~4C globally (more at the poles). The CO2 concentration varies about 100ppm (180 to 280ppm). Presumably the CO2 increase is entirely due to the temperature change. Linearizing, I get 25ppm/degC. So, over the last century about 25ppm CO2 has been an effect of warming.
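
    In code, with the numbers as stated above (and the stated assumption that the entire glacial CO2 swing was temperature-driven):

        ppm_per_degC = (280 - 180) / 4.0   # ~100 ppm of CO2 per ~4 C of global swing
        warming_last_century = 1.0         # C, round number
        print(ppm_per_degC * warming_last_century)   # ~25 ppm attributable to warming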

    =====
    #236 Peter Thompson:
    My arm-waving may have been an excellent summary of your opinion of the state of climate science. Like any good skeptic, I find it useful to perform a quick sanity check of complex results — this one worked out pretty well, as do most.

  238. John A
    Posted Feb 21, 2008 at 12:27 AM | Permalink

    Re #238
    I too am in an arm-waving mood, so I’ll guesstimate the reverse: the global temperature (whatever that may mean) is some 4C higher than during the last Ice Age.
    So the linear sensitivity of climate to carbon dioxide warming is: 4/(400-180) = 0.018 C/ppm CO2. This makes the climate sensitivity to a doubling 0.018 * 280 ~ 5.1C.
    But I can wave my arms at almost any silly calculation I like; since the carbon dioxide followed the warming, the above calculation (and yours) is utterly meaningless. Whatever was driving temperature out of the Ice Age wasn’t carbon dioxide but something far more potent.
    So then we’d have to look at the temperature vs CO2 relationship as carbon dioxide responded to the previous warmth 800 years later. Did the warming get a) quicker, b) slower, or c) remain about the same?
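
    The reverse arm-wave, in code (same deliberately naive premise: pretend the full ice-age-to-present 4C was driven linearly by CO2 going from 180 to ~400 ppm):

        per_ppm = 4.0 / (400 - 180)        # ~0.018 C per ppm of CO2
        print(round(per_ppm * 280, 1))     # ~5.1 C for a 280 ppm doubling increment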

  239. Raven
    Posted Feb 21, 2008 at 12:52 AM | Permalink

    John A says:

    So then we’d have to look at the temperature vs CO2 relationship as carbon dioxide responded to the previous warmth 800 years later. Did the warming get a) quicker, b) slower, or c) remain about the same?

    Kristen already looked at this:
    http://www.ncdc.noaa.gov/paleo/icecore/antarctica/domec/domec_epica_data.html
    It appears that the temperature trend remained exactly the same after CO2 joined the party.
    This calculation would need to be done for all of the ice data to see if this example is representative or not.

  240. Raven
    Posted Feb 21, 2008 at 12:53 AM | Permalink

    Sorry wrong link: http://home.earthlink.net/~ponderthemaunderf/id11.html

  241. DeWitt Payne
    Posted Feb 21, 2008 at 12:58 AM | Permalink

    Ill-posed problem. Neither the data, temperature and CO2 as a function of time, nor the models are good enough to be able to calculate the “true” climate sensitivity to CO2 from the glacial to interglacial transitions. I’m pretty sure this falls under the classification of an inverse problem to which there are an infinite number of solutions. It is possible to solve these problems, x-ray crystallography is a good example, if there is enough other data to constrain the possible solutions. That is far from true in this case. There are too many unknowns, the magnitude of gain from the ice/albedo effect for example, not to mention the poor data quality (insufficient resolution and the reliability of the transform from isotope data to temperature among other things).

  242. DeWitt Payne
    Posted Feb 21, 2008 at 1:24 AM | Permalink

    Let me add that ill-posedness applies equally to the use of the data by AGW believers or skeptics. You can’t prove, IMO, that the climate sensitivity to CO2 is either high or low.

  243. Andrey Levin
    Posted Feb 21, 2008 at 1:30 AM | Permalink

    On land surface modified by men:

    In civil engineering there is a parameter quite important to city planning of stormwater systems: the amount of paved surface. In a nutshell, a paved surface is one where rain is not absorbed by soil and contributes 100% of precipitation to runoff. Roads, parking lots, and roofs of industrial and residential buildings qualify as paved surface, while the lawn in front of a house or a community garden does not. This parameter is well defined and measured, including from satellite and aerial photography. Incidentally, it could be used to estimate anthropogenic changes in albedo.

    For the continental US, the amount of paved surface is only 2% of the total area, roads being the overwhelming contributor. This is quite a trivial amount, especially compared with agriculture, which uses 25% of the continental US area.

    Keeping in mind that only relatively small parts of the world have a higher percentage of pavement (Japan, Europe), that land is only 30% of the Earth, and that oceans have much lower albedo than land, Judith Curry’s notion that “land use changes are a very important issue in regional climate and sustainability, but most aren’t on a large enough scale or have associated large positive feedbacks to make much impact on global climate” is quite convincing.

  244. Raven
    Posted Feb 21, 2008 at 1:50 AM | Permalink

    Andrey Levin says:

    Keeping in mind that only relatively small parts of the world have a higher percentage of pavement (Japan, Europe), that land is only 30% of the Earth, and that oceans have much lower albedo than land, Judith Curry’s notion that “land use changes are a very important issue in regional climate and sustainability, but most aren’t on a large enough scale or have associated large positive feedbacks to make much impact on global climate” is quite convincing.

    Most of the thermometers are on land near pavement.

    Deforestation has a significant impact on albedo over a large area:
    http://climatesci.org/2008/01/29/important-new-research-paper-published/
    Note that albedo in the eastern US has decreased in the last 100 years, most likely a result of forests growing back. This would result in an increase in temperatures.

    I have seen other papers arguing that development and irrigation affect regional precipitation patterns; do this in enough regions and you have a global effect.

  245. Andrey Levin
    Posted Feb 21, 2008 at 3:29 AM | Permalink

    Re#245, Raven:

    Interesting article, thanks. I am wondering how they estimate the albedo of the SE US in 1650…

    One thing I wanted to emphasize: urban sprawl is not a significant factor affecting global climate. Agriculture and de/re-forestation could be.

  246. Geoff Sherrington
    Posted Feb 21, 2008 at 4:08 AM | Permalink

    Re # 242 DeWitt Payne

    I’m pretty sure this falls under the classification of an inverse problem to which there are an infinite number of solutions. It is possible to solve these problems, x-ray crystallography is a good example, if there is enough other data to constrain the possible solutions.

    This is a good example. In practice, the breakthrough came from the synthesis of similar crystals in which a simple change had been made, like the insertion of a heavy metal atom. It was clever. (It did not rely on a greater mass of data – it was an advance in method).

    What I do not understand is the failure of climate scientists to be clever in a like way. There are simple experiments that could yield a lot of information, but there is a reluctance to perform them (so Steve goes to the Bristlecone Pine Mountain). Am I tilting at windmills? Is the derivation of the doubling factor really so difficult that a solution is permanently eluding us?

    BTW, we chemists did not ASSUME we could draw complicated crystal structures correctly before DEMONSTRATING a credible likelihood using an explainable method.

  247. Tom Gray
    Posted Feb 21, 2008 at 5:58 AM | Permalink

    re 238

    I’m in an arm-waving mood, so I’ll guesstimate for you:
    Ice age cycles have a peak-peak temperature swing of ~4C globally (more at the poles). The CO2 concentration varies about 100ppm (180 to 280ppm). Presumably the CO2 increase is entirely due to the temperature change. Linearizing, I get 25ppm/degC. So, over the last century about 25ppm CO2 has been an effect of warming.

    One thing that puzzles me about calculations like this is that there is a positive feedback relationship between CO2 concentration and temperature. Increasing one factor will increase the other until the concentration of CO2 saturates and blocks all IR. The positive feedback will end at that point. This would seem to create a system in which the CO2 concentration would be maintained by the feedback at this saturation level for a wide range of solar input.

    I can see how feedback would raise the concentration of CO2 and the temperature. The other issue, of declining CO2 levels, is puzzling to me if CO2 concentration is determinative of temperature and the CO2 concentration rises to saturation.

    What causes the CO2 concentration to decline from saturation?

  248. Posted Feb 21, 2008 at 6:03 AM | Permalink

    As Steve M is unwilling to have some of the political issues that Judith Curry raised in #141 discussed here – for reasons that I fully understand – I have re-posted it at Harmless Sky if anyone would like to comment. What she says seems to be very revealing of the symbiotic relationship that has developed between science and politics in the field of climate research.

  249. MarkW
    Posted Feb 21, 2008 at 6:06 AM | Permalink

    JohnV,

    The Arctic is mostly white during the winter, when the sun isn’t shining anyway. During the rest of the year, the sun melts the ice.

  250. MarkW
    Posted Feb 21, 2008 at 6:16 AM | Permalink

    Raven,

    Irrigation also affects the amount of water vapor in the air, as does the watering of lawns and golf courses.

  251. MarkW
    Posted Feb 21, 2008 at 6:18 AM | Permalink

    Andrey,

    The problem is that we don’t know what the global climate is. We only know what the climate around our sensors is. And those sensors are on land and, for a large part, near or in cities.

  252. MarkW
    Posted Feb 21, 2008 at 6:24 AM | Permalink

    What causes the CO2 concentration to decline from saturation?

    There are several mechanisms by which CO2 is removed from the atmosphere. The first and most obvious is plants. Plants absorb CO2 as they grow. When they die, they either rot, which returns the CO2 to the air, or they get buried, which removes the CO2 from the biosphere. (You can also consider the animals, which eat the plants, but the results are similar when they die.) There are also chemical processes. Certain sea animals use CO2 from seawater to create their shells. This lowers the CO2 concentration in the water, which enables the water to absorb CO2 from the atmosphere. There are also weathering processes. One of them, which I don’t fully understand, involves granite and carbonic acid (CO2 dissolved in water).

  253. Posted Feb 21, 2008 at 7:00 AM | Permalink

    re 243

    Ill-posed problem. Neither the data, temperature and CO2 as a function of time, nor the models are good enough to be able to calculate the “true” climate sensitivity to CO2 from the glacial to interglacial transitions. I’m pretty sure this falls under the classification of an inverse problem to which there are an infinite number of solutions. It is possible to solve these problems, x-ray crystallography is a good example, if there is enough other data to constrain the possible solutions. That is far from true in this case. There are too many unknowns, the magnitude of gain from the ice/albedo effect for example, not to mention the poor data quality (insufficient resolution and the reliability of the transform from isotope data to temperature among other things).

    Agreed; you can do forward modeling, though, of what the contribution of CO2 to temperature is in the ice age, given a sensitivity of 1 or 3 degrees for CO2 doubling. It then shows that this contribution disappears in the noise, i.e. it is not detectable.


    Steve:
    Hans, can you spell out how you did this forward modeling step by step?

  254. Andrew
    Posted Feb 21, 2008 at 7:37 AM | Permalink

    Guys, might I recommend giving your favorite sensitivity derivations on the thread I started here:
    http://www.climateaudit.org/phpBB3/viewtopic.php?f=4&t=128
    I’m interested in any ways you know of!

  255. Andrew
    Posted Feb 21, 2008 at 8:06 AM | Permalink

    Pat Cassen

    MarkW (#149) said:
    “In fact most of the studies have reached the conclusion that water vapor is a negative feedback.”

    References please? (I am already familiar with Spencer et al., GRL, 34, L15707, doi:10.1029/2007GL029698, 2007)

    Thanks,
    Pat Cassen

    I couldn’t find precisely what you’re looking for (and I’m not sure what MarkW means) but

    Minschwaner, K., Dessler, A.E., 2004. Water Vapor Feedback in the Tropical Upper Troposphere: Model Results and Observations. Journal of Climate, 17, 1272-1282.

    Found the tropical water vapor feedback to be weaker than previously thought.

    Described here:
    http://www.worldclimatereport.com/index.php/category/climate-models/page/3/

  256. Posted Feb 21, 2008 at 9:53 AM | Permalink

    #248 Tom Gray:

    What causes the CO2 concentration to decline from saturation?

    I hear this question a lot. It’s actually very simple and is common to any system with feedback. I’ll try to describe it using step changes in solar output (instead of following the nearly sinusoidal curve of the real cycles):

    1. Ice-age
    a. Temp is Tice
    b. CO2 concentration is Cice
    2. Solar output steps up
    a. Temperature increases from Tice
    b. CO2 concentration increases
    c. Some ice melts
    d. Temperature increases further (leading to more CO2 and more ice melting)
    e. Repeat a-d
    f. Eventually, equilibrium is reached at Twarm and Cwarm
    3. Solar output steps down
    a. Temperature decreases from Twarm
    b. CO2 concentration decreases
    c. Some water freezes
    d. Temperature decreases further (leading to less CO2 and more water freezing)
    e. Repeat a-d
    f. Eventually, equilibrium is reached at Tice and Cice
    4. Repeat 2-3

    The process is the same with a sinusoidal solar forcing, but it is harder to visualize each of the steps.
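
    For anyone who wants to see the loop actually settle, here is a minimal sketch in Python. Every number in it is illustrative (chosen to make the arithmetic easy to follow), not a fitted climate value:

        import math

        # Toy feedback loop; all coefficients below are assumptions for illustration.
        S_CO2 = 1.5    # deg C per doubling of CO2 (assumed)
        ALPHA = 25.0   # ppm outgassed by the ocean per deg C of warming (assumed)
        C0    = 180.0  # ice-age CO2 concentration, ppm
        STEP  = 3.0    # direct warming from the solar step, deg C (assumed)

        def equilibrate(solar, C=C0):
            """Iterate the temperature/CO2 loop until it settles."""
            T = 0.0
            for _ in range(200):
                T = solar + S_CO2 * math.log(C / C0, 2)  # solar term plus CO2 greenhouse term
                C = C0 + ALPHA * T                       # ocean outgassing tracks temperature
            return round(T, 2), round(C, 1)

        print(equilibrate(STEP))  # warm equilibrium, roughly (3.9, 279): Twarm, Cwarm
        print(equilibrate(0.0))   # solar steps back down: (0.0, 180.0), i.e. Tice, Cice

    Because the logarithm flattens as CO2 rises, the loop gain stays below one and the iteration converges to Twarm/Cwarm rather than running away; remove the solar term and it converges straight back to Tice/Cice, which is the answer to Tom Gray’s question about why the concentration comes back down.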

  257. John V.
    Posted Feb 21, 2008 at 9:55 AM | Permalink

    #254 Hans Erren:
    Your plot of temperatures seems to be for the poles (~11C for ice-age cycle). If you used the global temperatures (~4C for ice-age cycle) you would see that the contribution of CO2 is nearly 50%. That’s surprisingly close to the accepted number of ~40% which yields a sensitivity of ~3C for doubling CO2.

    I suspect your intent was not to back up the consensus. Remember I did some basic simulations of the feedback cycle a few months ago? Our results are very similar.

    SteveMc, considering that Hans’ results back up the IPCC, are you still interested in how they were derived?

  258. Peter Thompson
    Posted Feb 21, 2008 at 10:15 AM | Permalink

    John V. #191,

    A money quote,

    Here’s the money quote:
    “Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.”

    Deserves a money link:

    Which, exactly, of these models from the AR4 predicts Antarctic cooling? Let’s also remember that Hansen’s 1988 models predict that the Antarctic will warm more than anywhere else on earth.

    Quoting a falsehood doesn’t make it the truth.

  259. bender
    Posted Feb 21, 2008 at 10:27 AM | Permalink

    considering that Hans’ results back up the IPCC, are you still interested in how they were derived?

    All auditors are interested in all aspects of accountability for all calculations and inferences, regardless of which side the chips fall on. This is made clear by the attitudes espoused by all those who are willing to post their data and code, including Anthony Watts, Allan MacRae, and Steve M.

    Who will audit the auditors? The people will. The internet is the end of authority as we know it.

  260. bender
    Posted Feb 21, 2008 at 10:35 AM | Permalink

    #259 There are model runs that allow for a temporary cooling 1999-2008. However (1) the effect is temporary and (2) it is not in the ensembles (which are collections of many stochastic runs). I wondered about that “money quote” from John V at the time he made it. Your explanation, John V?

  261. John V.
    Posted Feb 21, 2008 at 10:36 AM | Permalink

    #239 John A:

    I too am in an arm-waving mood so I’ll guesstimate the reverse: The global temperature (whatever that may mean) is some 4C higher than during the last Ice Age.

    So the linear sensitivity of climate to carbon dioxide warming is: 4/(400-180) = 0.018C/ppm CO2. This makes the climate sensitivity to doubling 0.018*280 ~ 5.1C

    If all of the temperature change was due to CO2, then I suppose that would be in the ballpark. Since we know that solar plays a major role in the ice age cycle, the CO2 sensitivity derived above is obviously too high. I realize that was your point.

    I suppose a calculation like the above could yield an upper limit for CO2 sensitivity. I would tweak the numbers a bit, but for arm-waving it’s close enough.

    Another way to look at it would be that the ice age cycle varies from 180 to 280ppm with a 4C temperature swing. If this was all caused by CO2 (it’s not), and assuming a linear relationship (as you did), I get:

    4C/100ppm = 1C/25ppm = 11C/280ppm (upper limit)

    Thus using the ice-age cycle, neglecting solar, and linearizing the CO2 response, the *upper limit* for CO2 sensitivity would be 11C/doubling from pre-industrial levels.

    Using a logarithmic response:

    4C / (log2(280/180)) = 4C/0.64 doublings = ~6C/doubling (upper limit)

    So, based on the ice age cycle it would appear that the *upper limit* for CO2 sensitivity is 6C/doubling.
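
    Both bounds are a two-line sanity check (same numbers as above):

        import math

        dT, c_lo, c_hi = 4.0, 180.0, 280.0
        print(dT / (c_hi - c_lo) * 280.0)    # linear: ~11.2 C per doubling from 280 ppm
        print(dT / math.log2(c_hi / c_lo))   # logarithmic: ~6.3 C per doubling

    (4/log2(280/180) is closer to 6.3 than 6, but the rounding doesn’t change the conclusion.)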

  262. Tom Gray
    Posted Feb 21, 2008 at 10:39 AM | Permalink

    re 257

    1. Ice-age
    a. Temp is Tice
    b. CO2 concentration is Cice
    2. Solar output steps up
    a. Temperature increases from Tice
    b. CO2 concentration increases
    c. Some ice melts
    d. Temperature increases further (leading to more CO2 and more ice melting)
    e. Repeat a-d
    f. Eventually, equilibrium is reached at Twarm and Cwarm
    3. Solar output steps down
    a. Temperature decreases from Twarm
    b. CO2 concentration decreases
    c. Some water freezes
    d. Temperature decreases further (leading to less CO2 and more water freezing)
    e. Repeat a-d
    f. Eventually, equilibrium is reached at Tice and Cice
    4. Repeat 2-3

    This is all very well, but it leaves out one factor: it is not describing a system that contains feedback. There is nothing in this analysis that deals with increasing CO2 concentration causing an increase in temperature. The analysis describes a CO2 concentration that is dependent on temperature and leaves out the factor of temperature being dependent on CO2 concentration.

    So
    a) temperature rises for some reason
    b) as a result of a) CO2 concentration rises and greenhouse effect is enhanced
    c) as a result of b) temperature rises
    d) as a result of c) CO2 concentration rises
    e) …

    So CO2 and temperature rise together until the CO2 concentration saturates at a level at which IR is effectively blocked and further greenhouse effect is nullified.

    Why doesn’t the concentration of CO2 remain at the saturation level if CO2 is determinative of temperature?

  263. Andrew
    Posted Feb 21, 2008 at 10:40 AM | Permalink

    John V, isn’t it possible we are missing some important forcings in the ice age cycles? Or, here’s a thought: climate sensitivity and feedbacks vary with time/time scale/temperature? Just a thought or two.

    Hmm, quick back-of-the-envelope calculation: 0.8 C of warming in 150 years, with a positive forcing of 3.3 W/m2 and a negative forcing anywhere from -0.9 to -2.9 W/m2, implies a climate sensitivity range from 1.2 to 7.4ºC for 2xCO2 (3.7 W/m2). Curiously,

    Douglass, D.H., and R. S. Knox, 2005. Climate forcing by volcanic eruption of Mount Pinatubo. Geophysical Research Letters, 32, doi:10.1029/2004GL022119.

    Find a CS for doubling CO2 of (wait for it):

    0.6 C

    Now there are obviously questions as to whether volcanoes can be used to estimate CS (Roy Spencer thinks not), but that is still an interesting result.
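
    For what it’s worth, the stated range reproduces directly from those numbers, if one treats the 0.8 C as an equilibrium response (itself an assumption):

        f_pos, f_2x, dT = 3.3, 3.7, 0.8
        for f_neg in (0.9, 2.9):
            print(round(dT / (f_pos - f_neg) * f_2x, 1))  # prints 1.2, then 7.4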

  264. Tom Vonk
    Posted Feb 21, 2008 at 10:48 AM | Permalink

    DeWitt wrote

    Let me add that ill-posedness applies equally to the use of the data by AGW believers or skeptics. You can’t prove, IMO, that the climate sensitivity to CO2 is either high or low.

    That is right.
    The reason why AGW-believing scientists are wrong is that they simply postulate the predictability of the time averages of the climate system’s dynamics.
    Obviously everybody knows that the climate parameters (temperature distribution, pressure distribution, concentration distribution, radiation distribution) are not predictable.
    Having established this fundamental truth both empirically and theoretically, we could simply stop there and move on.
    Some parts of the system can be understood/described (multidecadal oscillations, radiative transfer, boundary layer) but the sum of all of them would still stay unpredictable.
    The rational behaviour when facing unpredictability is to stop predicting.
    I believe nobody is trying to predict accurately both the position and the momentum of an electron.

    So now the AGWers (I include in this generic description both the scientifically and the politically motivated) come at us with a very strange theory.
    This theory is much more fundamental than the derivation that 2x CO2 gives 4.5°C, and yet it is much less discussed than the latter.

    This theory pretends that, while being quite unable to predict the dynamical state of the system (that is, to give a set of i functions Fi(x,y,z,t)), it is possible to predict the time and space AVERAGES of the same functions, e.g. |Fi(t)| and |Fi(x,y,z)|.
    Yet as an average of a function is an integral of the same, it is obviously impossible to integrate something you don’t know.

    The explicit calculus being impossible, you must then come up with a theory that is both physical and contains only time and space averages of the dynamical parameters.
    Here too, bad luck: the laws of nature are local and instantaneous, so you have no laws available to calculate the averages directly.

    Having failed twice, there is only one last possibility.
    Suppose that all dynamical parameters are randomly fluctuating around their own average and that the probability laws of these fluctuations are known.
    This approach is not very different from the RANS approach to the Navier-Stokes equations which, as everybody knows, leads to an underdetermined system with unphysical parameters (the Reynolds stress tensor) that can’t be closed.
    It can give limited information for limited times in limited conditions through empirical ad hoc closures, but it gives no information about the behaviour of the system in the general case and for large times.
    The first reason why this approach can’t work is that the dynamical parameters are not random.
    Temperatures, pressures, velocity fields, albedos, cloudiness, humidity are neither random nor randomly fluctuating around their averages.
    The second reason is that even if one admitted that the randomness hypothesis is right, there remains the question of around WHAT average these parameters fluctuate.
    What average: 10 years? 20 years? 37.43 years? 84 years?
    There is no physical reason to choose one of the above values, or to explain why a 5th value should be preferred to the 4 I propose.
    The belief in the artefact of a “typical” time scale at which “globally relevant” new phenomena emerge, which would be called climate, leads to many strange statements that would be immediately questioned in other science domains, but not so in “climate science”.

    An example is the statement you’ll see everywhere in the AGW literature: “The Earth must radiate the same amount of energy as it absorbs.”
    Must it?
    Of course not, and actually it never does. The Earth is not in radiative equilibrium locally, and not globally either.
    The day half absorbs more than it emits and the night half the opposite.
    There is no reason that they cancel each other. The difference between the 2 is called a change of internal energy.
    OK, then they will say, “Right, but it cancels over long times.”
    What are “long times”? A year? There is no reason that 2 states of the Earth system separated by 1 year have the same internal energy, and actually they don’t. The same is true for any other time interval.
    There is also no reason that this difference be “small” or stay “small”.

    What happens in reality is that the Earth system chaotically moves through different states of internal energy that don’t stay constant over any special time period, and it continually adapts its internal parameters through highly non-linear interactions even if the external flow of energy stays constant.
    What one has to realise is that the internal energy of the Earth system is NOT defined by temperature only, let alone by so primitive a thing as a single “global temperature”.
    The internal energy is given by clouds, humidities, pressure and velocity distributions, sea currents, ice masses and of course temperatures at every point.
    There is clearly no reason that all this stays constant or that the sum of contributions to the internal energy be magically zero.

    As the third and last attempt fails too, only one question remains worth asking: “What observations must be made to falsify the AGW theory?”
    For me it is theoretically already falsified for the above reasons, so the only important thing is how long it will take before the public notices that the apocalypse predicted for 20 years now is obviously not coming, and that if it comes, it will be something completely unexpected.
    That is my scientific conviction, and there is no number of computers, be it 100 or 1000, that will ever be able to bring proof of the predictability of climate parameter averages.

    Nature is much smarter than computers, and it would be typical of Her (but very unpleasant for us) if She threw us into a too-early ice age in 1 or 2 centuries while everybody was thinking that we are warming up 🙂

  265. Lance
    Posted Feb 21, 2008 at 10:59 AM | Permalink

    Dr. Curry,

    Thank you for pointing me to Dr. Cobb’s study. I will evaluate it and perhaps comment later.

  266. Lance
    Posted Feb 21, 2008 at 11:09 AM | Permalink

    Steve Mc,

    Thanks for pointing to the thread on Dr. Cobb’s study.

  267. Posted Feb 21, 2008 at 11:20 AM | Permalink

    #265 compressed: Does the “weather noise” cancel when integrated over space and time?

    Or: Is global mean temperature 1/f-like or hockeystick-like?

    (Don’t ask me, I don’t know.)

  268. John V.
    Posted Feb 21, 2008 at 11:22 AM | Permalink

    #259 Peter Thompson:

    Which, exactly, of these models from the AR4 predicts Antarctic cooling?

    Why would you expect them to predict southern cooling? The southern hemisphere is *not* cooling (even from 60S to the pole). It is warming less quickly than the north (over any time period that is reasonable for discussing climate).

    The original quote was given in response to Severian and Joe Black asking about why the SH is warming less than the NH. It’s well understood and not surprising.

    If the 1988 GISS model showed the Antarctic warming the most, then I suppose it was wrong. It is good news that the models have advanced significantly since then. From the RealClimate article I quoted above, it appears that a 1988 paper and model (Bryan et al.) showed that warming would be delayed around Antarctica. Since that was a new insight in 1988, it is not surprising that it was not incorporated in the GISS model.

    =====
    #261 bender:
    See above.
    I have been looking for measured data showing an Antarctic cooling trend but can’t find any other than very regional cooling in East Antarctica. Please provide a link. Thanks.

  269. Lance
    Posted Feb 21, 2008 at 11:32 AM | Permalink

    Tom Vonk,

    Your post summarizes my thoughts on the subject exactly, although in a much more scientifically precise manner than I could hope to manage. The side show of tracking an “average global temperature” is as pointless as it is arcane. Your comment about no one trying to simultaneously measure the position and velocity of an electron is quite apropos.

    That this folly has attracted the attention of the world’s press and most of its governments is a testament to the fact that a political issue cleverly framed as a scientific one can come to prominence despite being completely nonphysical at its core.

  270. John V.
    Posted Feb 21, 2008 at 11:35 AM | Permalink

    #263 Tom Gray:
    CO2 warming was included in my little example.
    Feedback does not imply unstable feedback.

    CO2 causes temperature to rise, which causes more CO2, etc. Eventually a steady state is reached because each bit of extra CO2 has a smaller forcing. Do you accept that steady state will be reached?

    Once steady state is reached, any perturbation towards cooler temperatures will cause some CO2 to leave the atmosphere (by dissolving in the oceans, for example). The warming effect of that CO2 is lost, so the temperature drops a bit more. And so on.

    I did a very basic simulation in October 2007:
    http://www.climateaudit.org/?p=2220#comment-151934

    And here are some results using sinusoidal solar forcing:
    http://www.climateaudit.org/?p=2220#comment-152103

    Don’t worry about the scale or the relative contributions of solar and CO2. The simulation is extremely simple and intended as a proof of concept only. (The concept being that stable CO2 feedback is possible and not unusual).

    The posts linked above include links to my spreadsheet so you can play with the parameters if you like.

    As a side note, this little simulation also shows many characteristics of the ice age cycle that are supposedly problematic:

    1. CO2 lags temperature
    2. CO2 continues to increase when temperature starts to decrease
    3. The temperature slope does not change appreciably when CO2 starts to rise

  271. Andrew
    Posted Feb 21, 2008 at 11:35 AM | Permalink

    I don’t buy the North warms more because of greater land area/less ocean thing. For one thing, difference in the rate of warming doesn’t disappear when we look at land SH versus land NH, or the oceans in each. The explanation, for me, is obvious: Soot from industrialization! Why the resistance to non GHG anthropogenic effects?

  272. John V.
    Posted Feb 21, 2008 at 11:39 AM | Permalink

    I’m out for a while.
    Must… do… real…. work…

  273. MarkW
    Posted Feb 21, 2008 at 11:48 AM | Permalink

    JohnV,

    Positive feedback is by definition unstable. It’s not just implied; it’s inherent in the definition.

  274. Mike B
    Posted Feb 21, 2008 at 11:55 AM | Permalink

    Why would you expect them to predict southern cooling? The southern hemisphere is *not* cooling (even from 60S to the pole). It is warming less quickly than the north (over any time period that is reasonable for discussing climate).

    Funny John. In an earlier post, you specified at least 10 years as “relevant for climate”. I immediately pointed out two large SH regions with negative temperature trends over the past 11 years. Your reaction in #216 was to lay down, but now you come back in the same thread and repeat your error.

    Download the GISS data yourself and calculate the slopes if you don’t believe me. Otherwise you’re just repeating inaccuracies you’ve read elsewhere.

  275. Andrey Levin
    Posted Feb 21, 2008 at 12:09 PM | Permalink

    Bravo, Tom Vonk, you nailed it.

    In a nutshell, what GCM climate models are doing is predicting daily local weather a century ahead and calculating averages over the globe and time. It is well known that weather predictions totally diverge from reality after about 5 days, yet we are supposed to believe that the average of a quadrillion incorrect components will somehow give us the correct answer.

    It is like predicting the S&P 500 a century ahead by predicting the daily share price of each of the 500 companies and summarizing them into the S&P index.

  276. Peter Thompson
    Posted Feb 21, 2008 at 12:59 PM | Permalink

    John V. #158,

    Your plot of temperatures seems to be for the poles (~11C for ice-age cycle). If you used the global temperatures (~4C for ice-age cycle) you would see that the contribution of CO2 is nearly 50%. That’s surprisingly close to the accepted number of ~40% which yields a sensitivity of ~3C for doubling CO2.

    You will pardon me if “gut instinct” skepticism kicks in when someone casually throws out a “global” temperature for tens of thousands of years ago, during the last and preceding ice ages. Which record did you check? GISTEMP or HadCRUT3? Maybe Grog Hansen-Jones wrote it on the wall of his teleconnected cave after having it explained via grunt by Ugh Mann-Hughes.

  277. Peter D. Tillman
    Posted Feb 21, 2008 at 1:03 PM | Permalink

    Dewitt Payne, http://www.climateaudit.org/?p=2708#comment-213787

    There is a simple way around the deleted post renumbering problem for posts on the same thread. It’s called a permalink. If you right click on the post number to which you are replying and select copy link location from the menu you get the URL for that post in the clipboard. That URL doesn’t change unless the post is moved to a different thread. Then in your response, select the text for the link (in this case I used your name), click the link button in the Quicktags line, copy the URL to the box and click OK. Even if the post is moved, the comment number remains the same, but the page number changes to the new thread.

    Thanks, DeWitt — I’ll add your comment to the FAQ page, http://climateaudit101.wikispot.org/FAQ

    Incidentally, for some reason I’ve never been able to get the quicktag link feature to work. But this slightly less elegant cut & paste works fine too.

    Cheers — Pete Tillman

  278. Pat Cassen
    Posted Feb 21, 2008 at 1:27 PM | Permalink

    Andrew (#256):

    Thanks for the reference.

    I just wonder where people are getting their info from when they say things like “…most of the studies have reached the conclusion that water vapor is a negative feedback.”

    Pat Cassen

  279. Spence_UK
    Posted Feb 21, 2008 at 1:32 PM | Permalink

    Re #265

    We are in agreement as ever 🙂

    This type of system incorporates two seemingly contradictory elements: great diversity yet a robustness against external forcing. A counter-intuitive combination of internal instability and external stability; perfect for the formation, development, and sustenance, of life.

    And this model is supported by the evidence of LTP in a wide range of climate parameters, especially elements of the hydrological cycle which are capable of dwarfing CO2 effects on temperature on all spatial and temporal scales.

    Yet despite having a solid mathematical, physical foundation and supporting evidence in the observations, it is very rarely discussed in either the peer-reviewed literature. Just remind me – who is in denial?

    Side note: denial might not be a river in Egypt, but that river might just be able to teach us more about our planet than a GCM ever could 😉

  280. DeWitt Payne
    Posted Feb 21, 2008 at 1:45 PM | Permalink

    Peter,

    I think the problem with the quicktag link button not working is actually in the preview window. I always add a space to the link between the equals sign and the quote mark (href= “). That seems to make all the text in the preview appear correctly; although the link won’t appear correct if you mouse over it in the preview window, it will be correct in the actual post. You also have to select the text you want to appear as the link before clicking the button, or close the link by clicking the button again after entering the text immediately following the HTML tag.

  281. Jim Arndt
    Posted Feb 21, 2008 at 1:55 PM | Permalink

    Hi,

    Tom V #265

    An example is the statement you’ll see everywhere in the AGW literature: “The Earth must radiate the same amount of energy as it absorbs.”
    Must it?
    Of course not, and actually it never does. The Earth is not in radiative equilibrium locally, and not globally either.
    The day half absorbs more than it emits and the night half the opposite.

    This statement by the AGW literature also leaves out chemical responses to energy. Plant and animal life depend on the energy given by the sun and use it for a multitude of processes that take the energy out of the system. Chemical responses in the atmosphere take even more out. BTW nice rant.

  282. Jim Arndt
    Posted Feb 21, 2008 at 2:04 PM | Permalink

    Hi,

    Pat C., my bad on #210: I meant that the Earth’s rotation was faster, and mixed it up with the Moon’s orbit being closer. Need to proofread better. The Venus literature is pretty straightforward.

  283. John V
    Posted Feb 21, 2008 at 2:26 PM | Permalink

    #272 Andrew:
    Soot from industrialization most likely also contributes. I’ve got no problem with that.

    =====
    #274 MarkW:
    Positive feedback in an entirely linear system is indeed unstable. Positive feedback when the feedback term has a logarithmic effect is not unstable:
    http://www.climateaudit.org/?p=2220#comment-151934
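
    A compact way to see why (illustrative only): treat the feedback as a loop gain g, so each degree of direct warming returns g degrees more. The total response is then dT*(1 + g + g^2 + ...) = dT/(1 - g), which is finite for any g < 1; runaway requires g >= 1. A logarithmic CO2 term makes the incremental gain shrink as CO2 accumulates, so the loop settles.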

    =====
    #275 Mike B:
    Sorry about that. I did not read your post prior to #216 closely enough. However, other than the two *regions* you identified, I still believe the trend for the SH is most definitely positive. I assume you have a region-by-region breakdown available. Would you mind sharing the trends for all of the regions?

    BTW, do not interpret it as “laying down” if I stop posting for a few hours. I know from experience that once the pile-on begins it’s best to back away for a while.

    =====
    #277 Peter Thompson:
    Your sarcasm aside, the average global temperature swing over an ice-age cycle is generally well accepted and is smaller than at the poles. Did your “gut instinct” kick in when Hans Erren suggested that the effect of CO2 on ice ages was “in the noise”?

    =====
    #265 Tom Vonk:

    Example is the statement you’ll see everywhere in AGW litterature : “The Earth must radiate the same amount of energy as it absorbs .”

    There is no reason that they cancel each other . The difference between the 2 is called a change of internal energy .

    Exactly. And a change of internal energy can take a few forms:
    – latent heat (melting and evaporation)
    – temperature changes
    – pressure? (maybe a little)
    – kinetic energy (ocean currents)? (maybe a little)

    So, if the radiative balance changes due to an enhanced greenhouse effect there is a change of internal energy which results in melting, evaporation, and warming. It looks like we agree.

  284. DeWitt Payne
    Posted Feb 21, 2008 at 2:37 PM | Permalink

    Hans #254,

    I was hoping that someone would do this and would have suggested it if you hadn’t already done it. I assume that you plugged the ice core CO2 data into an equation like T(t) = T(0) + SF*log2(CO2(t)/CO2(0)), where t is time, log2 is the logarithm base 2 (log2(2) = 1), SF is the climate sensitivity factor for doubling CO2, T(0) is the initial temperature, CO2(0) is the initial CO2 concentration, and CO2(t) is the CO2 concentration at time t.
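
    In code form that forward equation is one line; with illustrative glacial/interglacial endpoints and an assumed SF of 3 it gives:

        import math

        def temp(co2, t0=0.0, co2_0=180.0, sf=3.0):
            # T(t) = T(0) + SF*log2(CO2(t)/CO2(0)), per the equation above
            return t0 + sf * math.log2(co2 / co2_0)

        print(round(temp(280.0), 2))  # ~1.91 C attributable to CO2 at SF = 3

    which, set against a ~4 C global glacial/interglacial swing, is the “nearly 50%” contribution John V mentioned above.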

    John V #258,

    Actually, Hans’ post doesn’t back up the IPCC at all. The point is that the ice core data is not sufficient to determine the climate sensitivity to CO2 at all. You can put an upper bound on the climate sensitivity, as you demonstrated above, but there is no reason to prefer any value less than or equal to the upper bound, including zero. There are physical reasons that make it plausible and even likely that it isn’t zero and is positive, but we can’t prove it. The design of the models constrains the range of solutions to give a climate sensitivity in the IPCC range, but we have no reason yet to believe that any of the models gives a correct description of climate, and Tom Vonk has given a whole raft of reasons why they should not be correct.

  285. John V
    Posted Feb 21, 2008 at 2:50 PM | Permalink

    Mike B (and others):
    I downloaded the GISTEMP data (again) as you suggested:
    http://data.giss.nasa.gov/gistemp/tabledata/ZonAnn.Ts+dSST.txt

    I looked at 10- to 15-year trends. Here’s a summary in degC/decade. Each row has trends for 10, 11, 12, 13, 14, and 15 years:

    Latitude: 10yr 11yr 12yr 13yr 14yr 15yr
    64N – 90N: 1.60 1.46 1.32 1.03 1.07 0.25
    44N – 64N: 0.51 0.42 0.59 0.44 0.47 0.04
    24N – 44N: -0.10 0.09 0.26 0.28 0.26 0.05
    EQU – 24N: 0.14 0.08 0.12 0.10 0.13 0.02
    EQU – 24S: 0.00 -0.01 0.05 0.05 0.09 0.11
    24S – 44S: 0.05 0.07 0.11 0.14 0.18 0.02
    44S – 64S: 0.05 -0.01 -0.06 -0.01 0.00 0.00
    64S – 90S: 0.66 0.74 0.19 0.29 0.40 0.21
    NorthHemi: 0.28 0.28 0.37 0.31 0.33 0.13
    SouthHemi: 0.08 0.08 0.06 0.09 0.13 0.04

    Looking at these numbers, I see some slight negative trends in the SH from 44S to 64S. The trend is most definitely positive in the Antarctic region from 64S to 90S.

    Many posts ago, I said: “The southern hemisphere is *not* cooling (even from 60S to the pole). It is warming less quickly than the north (over any time period that is reasonable for discussing climate).”

    This prompted a response from you: “Funny John… I immediately pointed out that two large SH regions with negative temperature trends over the past 11 years… now you come back in the same thread and repeat your error.”

    It appears that my original point was correct, so I stand by it.
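
    For anyone who wants to reproduce the table, the calculation is just a least-squares slope per latitude band. A minimal sketch (the parsing of the GISTEMP file is left out, and the demo series below is synthetic):

        import numpy as np

        def decadal_trend(years, anoms):
            """Least-squares slope, converted to deg C per decade."""
            return 10.0 * np.polyfit(years, anoms, 1)[0]

        # Synthetic stand-in series; the real input would be the annual
        # anomalies for one latitude band from ZonAnn.Ts+dSST.txt.
        years = np.arange(1997, 2008)                        # an 11-year window
        anoms = 0.02 * (years - 1997) + 0.05 * np.sin(years)
        print(round(decadal_trend(years, anoms), 2))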

  286. Peter Thompson
    Posted Feb 21, 2008 at 2:53 PM | Permalink

    John V.

    Your sarcasm aside, the average global temperature swing over an ice-age cycle is generally well accepted and is smaller than at the poles. Did your “gut instinct” kick in when Hans Erren suggested that the effect of CO2 on ice ages was “in the noise”?

    My gut instinct skepticism kicks in when people make fantastic claims with no proof, nor any way to test or falsify them, and the best they can offer is that it is “well accepted”. Interesting that you bait and switch so quickly: we have moved from approximately 4C to an average smaller than at the poles. Trouble is, that was not the crux of your contention; 4C was, and 2C either way blows it up. Lawyering this debate will not serve it.

  287. DeWitt Payne
    Posted Feb 21, 2008 at 2:54 PM | Permalink

    Jim Arndt #282,

    To be completely fair to the other side, the amount of solar energy consumed by photosynthesis is about 0.04% of the total received from the sun (1.7 out of 3850 zettajoules/year). Only a fraction of that, and probably a small fraction, will be buried and lost. As far as ice melting, the global sea ice anomaly has not changed significantly from zero for the last 30 years. I predict that last year’s low Arctic sea ice extent will be shown to be a fluke and that the Arctic sea ice anomaly will go positive within ten years because the AMO and PDO are now in negative phase.

  288. DeWitt Payne
    Posted Feb 21, 2008 at 2:59 PM | Permalink

    John V. #286,

    Would you please put confidence limits on those trends? My bet is that for most of the SH the lower confidence limit will be less than zero.

  289. John V
    Posted Feb 21, 2008 at 3:20 PM | Permalink

    #287 Peter Thompson:
    It *is* well accepted that the global ice-age temperature swing is ~4C. I’m not going to waste time arguing over whether it’s actually 4C or 6C. If you disagree with me, find a reference and prove me wrong. I’m happy to learn.

    My point was that Hans Erren’s claim that the CO2 effect is lost in the noise was based on comparing polar temperature swings to global CO2 sensitivity. The comparison is not valid. 2C either way of a 4C global average (your number) would not make a difference to my point. The effect of CO2 is far from lost in the noise at a 3C sensitivity.

    =====
    #289 DeWitt Payne:
    I agree that the SH trends over 10-15 years may not be statistically significant. (Although I suspect the polar region is significant). I’ll leave the calculation of confidence limits to someone with more statistical prowess.

  290. rafa
    Posted Feb 21, 2008 at 3:30 PM | Permalink

    Dear Steve, during your time at Georgia Tech, did you have the opportunity to ask whether there is a refereed paper on CO2 doubling leading to a 2.5C rise?

    thank you

  291. Peter Thompson
    Posted Feb 21, 2008 at 3:50 PM | Permalink

    #290 John V.

    In #257, it would seem that you make a case for an “interdependence” of CO2 and temperature. Certainly temperature has an effect on the ability of oceans to absorb CO2; I’ll buy this because, unlike most climate science, it can be demonstrated experimentally. It would seem to me, then, that polar temperature would be the relevant metric, as the Vostok ice core is from the Antarctic.

  292. DeWitt Payne
    Posted Feb 21, 2008 at 4:02 PM | Permalink

    John V. #290,

    My point was that Hans Erren’s claim that the CO2 effect is lost in the noise was based on comparing polar temperature swings to global CO2 sensitivity. The comparison is not valid. 2C either way of a 4C global average (your number) would not make a difference to my point. The effect of CO2 is far from lost in the noise at a 3C sensitivity.

    I think you are still missing the point. The question was: can we tell whether CO2 is contributing to the temperature swing from glacial to interglacial by analyzing the rate of change of temperature, not the absolute change in temperature? Even for the ice core data, a change of 2 C for a climate sensitivity of 3 C per doubling is not trivial, but the change in slope is too small to detect. Is that true for proxy data from the tropics? I don’t know. Do you (or anyone else) have a link to a good data set for testing?

    I forgot to add to my other post that after you calculate the temperature change with time from the CO2 concentration with time, you subtract it from the temperature data, renormalize to the same delta T and compare slopes. Any difference in slope must be statistically significant.
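
    Here is a sketch of that procedure (synthetic series only, since no real core data is attached here, and the renormalization step is my reading of the description above):

        import numpy as np

        t = np.linspace(0.0, 10.0, 200)            # time through a deglaciation, kyr
        temp = 4.0 / (1.0 + np.exp(-(t - 5.0)))    # stand-in 4 C warming ramp
        co2 = 180.0 + 25.0 * temp                  # stand-in CO2 tracking temperature

        SF = 3.0                                   # assumed sensitivity, C per doubling
        t_co2 = SF * np.log2(co2 / co2[0])         # forward-modeled CO2 contribution

        residual = temp - t_co2                      # subtract the CO2 term...
        residual *= np.ptp(temp) / np.ptp(residual)  # ...renormalize to the same delta T

        # If the two slopes are indistinguishable within the noise of real data,
        # the CO2 contribution cannot be detected from the shape of the record.
        print(np.polyfit(t, temp, 1)[0], np.polyfit(t, residual, 1)[0])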

  293. John V
    Posted Feb 21, 2008 at 4:41 PM | Permalink

    #292 Peter Thompson:
    The CO2 sensitivity is generally interpreted as the change in global average temperature for a doubling of CO2. The change in temperature at the poles is larger than the global average temperature, but the CO2 concentration is essentially the same everywhere.

    A comparison of the global CO2 sensitivity to polar temperature will make the sensitivity seem small.

    =====
    #293 DeWitt Payne:
    I don’t believe Hans Erren’s analysis was related to the rate of change, but I see that there was some conversation about rates of change. I’m not going to pursue it right now.

    In fact, I’m out for the night.

  294. Peter Thompson
    Posted Feb 21, 2008 at 6:11 PM | Permalink

    John V.

    The CO2 sensitivity is generally interpreted as the change in global average temperature for a doubling of CO2. The change in temperature at the poles is larger than the global average temperature, but the CO2 concentration is essentially the same everywhere.

    Everything you say here is hypothesized; not a shred of it can be experimentally demonstrated. Period. CO2 has risen at a virtually linear rate for decades; temperature, global or otherwise, has done anything but.

  295. John V
    Posted Feb 21, 2008 at 7:20 PM | Permalink

    #295 Peter Thompson:
    I understand that you don’t agree with much or all of global warming. However, the statements that you quoted and then denounced are not controversial:

    #1: The CO2 sensitivity is generally interpreted as the change in global average temperature for a doubling of CO2 — true by definition.

    #2: The change in temperature at the poles is larger than the global average temperature… — not controversial at all, look at the temperature trends for the last century

    #3: …but the CO2 concentration is essentially the same everywhere — not controversial, there are measurements from around the globe and they are all very similar

    The difference in shape between the CO2 and temperature trends is expected. CO2 is not the only driver of temperature, and I’m not saying it is. It is one of many. I suspect our only difference of opinion is that I say there are many climate drivers and CO2 is one of them. You say there are many climate drivers and CO2 can *not* be one of them. If I have stated your position accurately, please explain why CO2 can not be a climate driver.

  296. DeWitt Payne
    Posted Feb 21, 2008 at 7:34 PM | Permalink

    John V.,

    #2: The change in temperature at the poles is larger than the global average temperature… — not controversial at all, look at the temperature trends for the last century

    The trend at the North Pole has been higher on occasion, especially since 1995, but not the South Pole. The Vostok ice core data shows zero change in temperature for at least a century before the present and there’s been very little change since it was taken.

    Polar amplification during glacial/interglacial transitions is a given. But apparently it has nothing to do with greenhouse. See my posts in Unthreaded #31.

  297. Pat Frank
    Posted Feb 21, 2008 at 9:14 PM | Permalink

    Sam, I agree. However, people do use GCMs to calculate something they call global average temperature, and that is the canonical value used to flog everyone about AGW. So whatever that ‘global average temperature’ may be — even if it’s physically meaningless — the reliability of that value with respect to the credibility of GCMs will be affected by the integrated error of the calculation. That’s my only point.

    A couple of my responses here have been flagged as spam and put in the stockade. I’ve not tried to repost.

  298. Pat Frank
    Posted Feb 21, 2008 at 9:33 PM | Permalink

    Judith wrote: “The 3.7 W m-2 is determined in a very straightforward way. If you take a radiative transfer model and make a calculation for the infrared part of the spectrum for average atmospheric conditions, and then make another calculation where you double CO2, then the one with double CO2 will have a surface flux that is 3.7 W m-2 greater than the original calculation.”

    This is a core difficulty. Why would anyone believe that a calculation using an incomplete model will produce a number that actually predicts one of the observables of an extremely complex dynamical system?

    In a dynamical chemical system, with which I am more familiar, one can use straightforward equilibrium theory to “predict” the level of some intermediate, given certain input concentrations. But when the full system is modeled and the feedback dynamics and rate constants are entered, there is no reason to expect that the concentration of that intermediate will ever come anywhere close to the predicted value.

    Likewise the Earth’s response to that radiatively transferred 3.7 Wm^-2. It’s not even clear that the full 3.7 Wm^-2 will be realized if the emissivity or the albedo of the Earth changes as a consequence of the climate effects of CO2. Likewise, even if the 3.7 Wm^-2 of forcing is realized, there’s no reason to expect that its effect on global climate will be linear, again due to dynamical adjustments. The feedback dynamics of the system are too complex (as Tom Vonk outlined nicely).

    I can’t imagine that this understanding is opaque to climate physicists.

  299. Peter Thompson
    Posted Feb 21, 2008 at 10:15 PM | Permalink

    @296 John V.,

    Allow me to help you with your understanding of me.

    I understand that you don’t agree with much or all of global warming.

    Global warming is a fact every time it happens. Global cooling is a fact every time it happens. Facts are not things you agree or disagree with, for they are facts…..observable, quantifiable, i.e. physical.

    Causes are different. In dynamic, chaotic systems, I think they are idealistic, bordering on faith based belief systems. CO2 is a perfect example. It has been linearly rising for decades. During this period, global temperatures have fallen and risen in arguably historically normal patterns, yet for some reason CO2 is going to cause dangerous warming….soon. Or within 100 years, or sooner….you’ll see. People who question this theory, which has no experimental evidence, are then called heretics……..I mean deniers.

    The difference in shape between the CO2 and temperature trends is expected. CO2 is not the only driver of temperature, and I’m not saying it is. It is one of many. I suspect our only difference of opinion is that I say there are many climate drivers and CO2 is one of them. You say there are many climate drivers and CO2 can *not* be one of them. If I have stated your position accurately, please explain why CO2 can not be a climate driver.

    The difference in shape is explained now, after observational evidence blew apart the models. This was done, however, by ex post, ad hoc tuning and unphysical flux adjustments, which make a mockery of actual science.

    I would say there are myriad climate drivers, which act differently depending on the unique mix of them at a given time, sort of Heisenberg on steroids. I think CO2 might be one, but I also think that since whatever other factors are at play in our world today are demonstrably larger in effect, CO2 should not get the attention, or any attention, w.r.t. global temperature.

  300. Neal J. King
    Posted Feb 22, 2008 at 2:15 AM | Permalink

    #299, Pat Frank:

    – The 3.7 W/m^2 should not be affected by clouds, significant chemical reactions, etc.
    – It should be affected by the distribution of CO2 in the atmosphere, which is assumed to be well-mixed.
    – It should be affected by water vapor as a 2nd-order effect, once the temperature impact has been assessed and modifies the distribution and concentration.

    Therefore, I don’t think the radiative-forcing issue has nearly the degree of uncertainty that the temperature-change issue has.
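
    (As a cross-check on the number itself: the widely used simplified fit to the detailed radiative transfer results, from Myhre et al. 1998, is dF = 5.35 ln(C/C0) W/m^2, which gives 5.35 x ln 2 ~ 3.7 W/m^2 per doubling. Different codes and assumed base atmospheres shift that by a tenth or so either way, which is presumably where quotes like 3.8 come from.)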

    I will look over Judith’s references; I will be looking for an explanation of the code: not a line-by-line, but some description of how the calculation works, high-level, to confirm my own understanding of how it should be done.

    Steve: Neal, you might look at some posts examining some of these matters http://www.climateaudit.org/?p=2596 2594 2586 2570 , where I started looking at one of the Modtran models in the context of the radiative-convective models.

  301. Tom Vonk
    Posted Feb 22, 2008 at 5:18 AM | Permalink

    UC wrote # 268

    #265 compressed: Does the “weather noise” cancel when integrated over space and time?

    Or: Is global mean temperature 1/f-like or hockeystick-like?

    (Don’t ask me, I don’t know.)

    I can’t answer the second question, but I take your word that it is equivalent/similar to the first question.
    However, I can suggest an answer to the first, and it is a clear no.
    The main arguments can be given by going back to the variable change transforming the original N-S equations into RANS.
    You obtain a PDE system featuring both the averages and the “fluctuations” (what some would call “noise”).
    Then you ask yourself what the conditions are for the solutions of this new system to be independent of the fluctuations (aka for them to “cancel out by integration”).

    You will easily see that there are 3 of them.
    First, that the fluctuations be small and stay small LOCALLY compared to the averages.
    Second, that the fluctuations be random with known probability distributions.
    Third, that you can close the system, because as already mentioned this new system contains 6 new variables and is underdetermined.

    To the first: that is the reason why the N-S problem has not been solved after more than a century of hard trying. It is impossible to prove that the fluctuations stay reasonably “small” for any t. Observation of, say, a stormy sea surface shows that they don’t. The mathematical conjecture, which might also be proven theoretically one day, is that they never do.

    To the second: it is impossible to prove that the fluctuations (so the N-S solutions) are random. The rest is a circular fallacy. If I suppose that the fluctuations are random, I will be able to show that the “noise” cancels. That is equivalent to saying “If I suppose that the fluctuations are random, then they are random.” The N-S equations being deterministic continuous PDEs, there is no room for a randomness hypothesis.

    To the third: what could the 6 additional equations constraining the 6 additional variables be? Conservation laws? Well, energy and momentum conservation IS already in N-S. There are no known additional natural laws that could be used to close the system.
    The system has too many degrees of freedom and therefore has an infinity of solutions depending on an infinite number of arbitrary (aka unphysical) functions. With a confidence of 99.99% we may say that there are no still-unknown natural laws enabling this closure.

    Other numerical methods like LES etc. are variations on the same theme.

    All of that, despite being generally ignored by climate scientists, has been studied in other scientific domains, because it is indeed important for understanding at least part of fluid dynamics.
    I’d advise interested people to look at an important and relevant paper by Ruelle and Takens, “On the nature of turbulence” (I unfortunately have no link handy, but it must be easy to find on the web).
    This paper demonstrates mathematically that turbulence is governed by deterministic chaos under certain assumptions, which is a result already known experimentally.
    It is not general, because otherwise they’d have solved the N-S problem.
    But it is a strong mathematical proof, supported by the observations, that “noise doesn’t cancel out”.

    The unpredictability of the evolution of fluid systems, both locally and on average, is a consequence of all the above.

  302. Jim C
    Posted Feb 22, 2008 at 8:18 AM | Permalink

    It has been 28 years since my last course in physics, so please be patient.

    My limited understanding is that this debate is not about whether humans can influence climate, but to what degree. Is that close?

    Ok, that being the case there must be a physical explanation for all things physical.

    In order to convert radiant energy to temperature, the Stefan-Boltzmann constant is used, is that not correct? Yes, I am aware of black body vs. grey body. Reviewing IPCC AR4, much to my surprise, S-B is not referenced anywhere in any chapter, including the SPM. Thinking I may have misspelled it, a second search found nothing.

    This is puzzling, as S-B is quoted in nearly every thermodynamic article I’ve read related to this subject. More questions: Are the climate models so often discussed devoid of S-B in their code? Why isn’t it discussed in the IPCC?

    Further, since CO2 is reportedly trapping LW IR, it would seem logical for the heat to accumulate somewhere. As I comprehend the matter, that should be in the tropical troposphere, thus creating a “hot spot”, no? Satellite data does not show this, unless I’m misunderstanding the data. Still, why isn’t the Stefan-Boltzmann constant used?

    I’ve read it hypothesized that LW IR is being re-emitted into the oceans, yet reviewing the literature indicates that IR in the 15 micron range (CO2) apparently cannot penetrate water beyond the very top layer, 0.02-0.03 mm(?). How then does LW IR trapped by CO2 warm the oceans to any significance, and why do they not continually warm from now until infinity? Are there experiments that can be referenced?

    Finally, with respect to the various IPCC charts displaying what appears to be a linear rise in temperature to 2100, does not Stefan-Boltzmann apply there as well, by the law of diminishing returns limiting the effect of CO2?

    Along with my views on the reliability of the surface station thermometers, it is these matters that give me pause about the entire AGW hypothesis; observations do not support many of the claims, or at least do not make sense…..to me anyway.

    Dum engineer

  303. Tom Vonk
    Posted Feb 22, 2008 at 9:30 AM | Permalink

    Jim C wrote # 303

    In order to convert radiant energy to temperature, the Stefan-Boltzmann constant is used, is that not correct? Yes, I am aware of black body vs. grey body. Reviewing IPCC AR4, much to my surprise, S-B is not referenced anywhere in any chapter, including the SPM. Thinking I may have misspelled it, a second search found nothing.

    This is puzzling, as S-B is quoted in nearly every thermodynamic article I’ve read related to this subject. More questions: Are the climate models so often discussed devoid of S-B in their code? Why isn’t it discussed in the IPCC?

    Because it doesn’t apply. Gases are neither black nor grey – they have absorption and emission bands. The S-B law is irrelevant for them.
    To calculate the radiative properties of gases, quantum mechanics is necessary.

    For this and your other questions, I would suggest you go to the discussion boards here (http://www.climateaudit.org/phpBB3/).
    There are already discussions and explanations of several basics of climate dynamics, and most of them answer your questions.

  304. Andrew
    Posted Feb 22, 2008 at 10:26 AM | Permalink

    Neal J. King, I’ve also heard 3.8 W/m2, varying slightly around a value close to four. What’s the difference?

  305. Phil.
    Posted Feb 22, 2008 at 11:18 AM | Permalink

    Re #265

    The Earth is not in radiative equilibrium locally, and not globally either.
    The day half absorbs more than it emits and the night half the opposite.

    Of course ‘The Earth’ consists of both halves!

  306. Andrew
    Posted Feb 22, 2008 at 11:25 AM | Permalink

    Phil, is radiative equilibrium a good perturbative approximation or not?

  307. jae
    Posted Feb 22, 2008 at 12:39 PM | Permalink

    Tom Vonk, 265, 302: Those are the clearest expositions (at least to me) of what is wrong with climate modeling and the AGW hypothesis that I have ever seen. Bravo.

  308. Earle Williams
    Posted Feb 22, 2008 at 1:02 PM | Permalink

    Re #306, 265

    The assumption of radiative balance is flawed to the degree that energy is captured and stored in the chemical bonds of hydrocarbons, cellulose, etc. I don’t accept the notion of radiative balance without at least considering this potential energy sink. Certainly at one point the earth was not in balance, as evidenced by our current worldwide dependence upon fossil fuels for energy. Granted, the amount of energy imbalance due to the formation or combustion of these chemical bonds may be negligible, but absent any consideration of the process the assumption is flawed.

  309. Posted Feb 22, 2008 at 1:33 PM | Permalink

    @Phil

    Of course ‘The Earth’ consists of both halves!

    No! It consists of four quarters! 🙂

  310. Posted Feb 22, 2008 at 1:43 PM | Permalink

    @Andrew,
    Whether or not “the earth is in radiative balance” is correct to leading order in a perturbation analysis depends on what you are analyzing.

    I suspect you want to know if radiative balance holds sufficiently well for S-B to describe the energy emitted from the earth’s surface at any point and time. I’d say yes. We certainly do apply S-B when working in engineering applications and with great success.

    Those who say ‘no’ need to clarify what they mean by “S-B doesn’t hold” and explain the precise analysis they are doing, and describe the way in which they think it doesn’t hold.
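
    For reference, the conversion Jim C asked about is a one-liner in its blackbody form (real surfaces need an emissivity factor; the 239 W/m^2 below is the usual globally averaged absorbed-solar figure):

        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def sb_temperature(flux, emissivity=1.0):
            """Temperature (K) whose radiation matches `flux`, via F = eps*sigma*T^4."""
            return (flux / (emissivity * SIGMA)) ** 0.25

        print(round(sb_temperature(239.0)))  # ~255 K, the effective radiating temperature

    Tom Vonk’s caveat above still applies: this works for surfaces, not for the band-structured absorption and emission of gases, which is why the detailed work is done with radiative transfer codes rather than with S-B directly.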

  311. Sam Urbinto
    Posted Feb 22, 2008 at 2:37 PM | Permalink

    John Lish: The 3.7 figure is not arbitrary, it’s an estimate based upon assumptions and guesses of unknowns.

    Yorick: We always have to start anything involving models out with the assumption they’re correct, because if they’re not, we have nothing.

    John V: “relationship between radiative forcing and temperature change.” I’ll give you proxies (tree, ice core, anomaly) are temperature change on this one.

    Given the AR4 IPCC radiative forcing chart (which of course does not account for water except in the stratosphere vis-a-vis methane reactions, and which I believe greatly underestimates land albedo) — given that chart, it is impossible for carbon dioxide to be all of just the radiative forcing part of the temperature change. So any calculation that attempts to use what doubling of carbon dioxide does has to take into account what percentage of overall temperature change it is accountable for in the system. If the number is 100%, the result has to be wrong.

    DeWitt: Yes, the estimates are a two-edged sword; the question is; is the statement “The doubling of carbon dioxide can not be X” falsifiable?

    Raven: That says it all; the thermometers and land.

    Andrey: Why couldn’t huge areas blocking wind and absorbing more sunlight and changing weather patterns there and elsewhere — it’s not like what happens above Chicago doesn’t affect anywhere else — why can’t urbanization be a significant factor? For example, what happened after the steam pressure blew the top off of the reactor at Chernobyl to the radioactive materials ejected? And what do you call “significant”? I’d say it’s at least, in the system as a whole, 5-25% of the land use part (then add in the roads between farms and forests and such, agriculture, de/re-forestation, railroads, freeways) The trouble is, as with estimates of 2xOCO, it can’t really be proven either way. (I never have liked ‘seems as if’ or anecdotal evidence.)

    MarkW: Yes. Also, irrigation, parks, suburbs, golf courses, lakes, etc. Although I don’t agree with Andrey that “urbanization” (as a whole, including what you brought up) can’t have some significant effect upon climate, to be fair nothing was said about temperature readings or the anomaly: “urban sprawl is not significant factor affecting global climate” But I do agree (as I said) we can’t really prove or disprove it.

    John V: “SteveMc, considering that Hans’ results back up the IPCC, are you still interested in how they were derived?” Snarky. I would think you know by now that Steve is interested in all the results and doesn’t care what they are; he just reports them (and wishes others would do the same) and doesn’t go about things to find an answer he’s already decided upon before looking. Not that the post by Hans really does anything but tell you the signal pretty much is noise.

    Tom: “primitive thing like a single ‘global temperature’”. Nicely put.

    I can stop commenting on this thread after that one!

  312. John V
    Posted Feb 22, 2008 at 2:56 PM | Permalink

    #312 Sam Urbinto:
    Ya, I was a little snarky yesterday. I probably would’ve skipped that line today.

    I would think you know by now Steve is interested in all the results and doesn’t care what they are

    How can I say this without being snarky? Umm…. do you really believe that?

    From *my* perspective, SteveMc is very selective in what gets promoted. Any number of analyses that support the IPCC position never make it out of the comments, but any whiff of IPCC mistakes is quickly written up as a full article. I could cite a number of examples but I’ve already said too much — let the flames begin.

  313. steven mosher
    Posted Feb 22, 2008 at 3:16 PM | Permalink

    313. Hi JohnV. Of course SteveMc is selective. One must be selective. The more selective the focus, the stronger the beam. The other way to look at it is two teams: a blue team proposes, a red team opposes. It’s a discipline.

    The notion is that truth evolves from the competition of ideas. No flames. Many people expect blogs to be fair and balanced.

    I prefer to see the contest of ideas.

    Hey, do you have the file of stations you used for CRN123R?

  314. Sam Urbinto
    Posted Feb 22, 2008 at 3:16 PM | Permalink

    I’m not going to flame you. I usually at least agree with some of what you say, although more often I don’t reach the same conclusions myself. But since so much of this subject is opinion and conjecture, I have no problem with any conclusions. Although at times some folks seem a bit dense over the specifics, such as they are.

    That said, yes, I do. What makes you think Steve isn’t just doing what he said: investigating the things that interest him, things that are mainstream? If the mainstream is reaching these conclusions, using this data, making these claims, what else would you logically expect him to dig into? Certainly you can understand that if scientific reports are ignoring certain parts of the science, there will be some who question that. It certainly doesn’t help when at times the mainstream is hostile, evasive, vague and uncooperative. Don’t you ever wonder why some people involved with climate-related matters act like that?

    All I really see is that the bulk of regulars who specialize in certain science or science-related fields are looking for the truth, asking questions about various conclusions or data or methods.

    So I’ll ask a question: if you are advocating for mitigation strategies because you believe that carbon dioxide is the main issue, that the anomaly trend reflects a rise in temperature levels, and that the first is the main cause of the second, aren’t you going to be less critical of things that support that world view, and not spend much time digging into them or going into the negatives rather than accentuating the positives? And if you’re of the opinion that the science is hidden and vague, and when you check you keep finding sloppy or incorrect bits and pieces, and you don’t care whether they give you an X or a Y or a -Z, just that the answer is correct, why would you go looking at non-peer-reviewed or obviously incorrect material?

    So yes, I believe Steve is doing a crossword puzzle. He’s just not interested in solving some as much as others. Others don’t interest him at all. And he can’t fill out every one of them, so he only does the ones he’s especially interested in, and they have to be put out by reputable producers.

    I don’t understand why you don’t see that, but if you don’t, fine.

  315. John V
    Posted Feb 22, 2008 at 4:11 PM | Permalink

    #315 Sam Urbinto:
    I absolutely accept that SteveMc is doing what he says in terms of which papers or studies he chooses to attack investigate. I can completely accept his interest in attacking auditing the mainstream.

    (I just discovered the strikeout font — sorry for overusing it above).

    My original snarky statement was in response to his interest in pursuing Hans Erren’s method for purportedly showing that the effect of CO2 on the ice age cycle had to be small. It reminds me of some past cases of promoting questionable analyses, and they always seem to be one-sided.

    My original question — whether SteveMc is still interested in pursuing Hans Erren’s analysis given that it actually supports the IPCC — was rhetorical. I knew the answer was no. If I understand your comment, it appears that you agree. You think that’s entirely justified. I agree that it may be justified, but also believe it undermines any claims of indifference.

    Incidentally, I’ve been following the global warming story for a long time. As I’ve said many times, I don’t want it to be real. I am a very skeptical person and I’ve spent a lot of time investigating the science and investigating claims made here. Almost without exception, I find the IPCC science to be much closer to the truth than one would believe from reading this site. (OpenTemp and the USA48 temperature record is the best example). The science is not perfectly right, but it’s closer to right than wrong.

    =====
    #314 steven mosher:
    Selective is fine, deceptive is not.
    Being overly selective can have the net effect of being deceptive.
    It’s essentially the cherry-picking problem.

    SteveMc often advocates for the publication of negative findings. I support that position. However, in the ~6 months I’ve been here I have not seen a single negative result promoted out of the comments.

    The CRN123R stations are listed here:
    http://www.opentemp.org/_results/20071011_CRN123R/stations_crn123r.txt

    And, just for old time’s sake, here’s the comparison with GISTEMP for the USA48:

    Ah, that brings back memories…

  316. Sam Urbinto
    Posted Feb 22, 2008 at 4:48 PM | Permalink

    John V: Ah, but you see, I don’t care if it’s there or not, to the point where if it’s real it won’t bother me at all. I never said that most of the IPCC info wasn’t solid, but it’s not always presented in the detached, neutral manner I would expect, and it often seems not honest enough in being open and direct about the degree to which certain things are uncertain and rest on assumptions. So if 80% of it (or whatever) is taken as correct, why would any of that get questioned? Although sometimes it is, and found to be correct as stated, and then the next oddity is checked, seemingly correct or not. So I guess we agree. You just seem to be a little upset about how things get done, perhaps a bit too critical. Not that I agree with everything he does or how he does it, of course. It seems to happen a little too often, where you and Steve get into it.

  317. Neal J. King
    Posted Feb 22, 2008 at 5:27 PM | Permalink

    #303, Jim C.:
    #311, lucia:

    – The Stefan-Boltzmann constant relates the total power, integrated over frequency, of blackbody radiation to the temperature of the body. It does not apply to a system which has frequency-dependent features, such as emission and absorption lines. But the greenhouse effect depends explicitly on absorption lines. In fact, the most basic hint as to the existence of the greenhouse effect comes from applying the Stefan-Boltzmann theorem to planet Earth, and realizing that the resulting expected surface temperature is 33 degrees C too low (a quick version of that arithmetic is sketched below). That tells us that the S-B theorem doesn’t apply: you need to worry about what the atmosphere is doing, and it has different temperatures at different altitudes. If you start thinking seriously about that, you find that the S-B theorem is essentially useless in this situation.

    – IR radiation emitted into the oceans is absorbed and converted to heat. The IR does not need to penetrate into the depths of the ocean: heat is conducted by the water.
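
    (A minimal numerical check of the 33-degree figure mentioned above; the solar constant and Bond albedo are assumed textbook values, so treat this as a sketch, not a derivation:)

        S = 1361.0        # solar constant, W/m^2 (assumed textbook value)
        albedo = 0.3      # Bond albedo (assumed)
        sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

        # A sphere intercepts pi*r^2 of sunlight but radiates from 4*pi*r^2,
        # hence the factor of 4 in the balance S*(1 - albedo)/4 = sigma*T^4:
        T_eff = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
        print("effective temperature: %.1f K" % T_eff)                # ~255 K
        print("gap to observed ~288 K surface: %.0f K" % (288.0 - T_eff))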

  318. Neal J. King
    Posted Feb 22, 2008 at 5:31 PM | Permalink

    #305, Andrew:

    The differences among 3.7, 3.8 and 4 W/m^2 are not particularly significant. The question is, how to obtain a quantitative estimate of the radiative imbalance due to a 2X in CO2 concentration? I believe I’ve seen the general arguments, but have never seen them carried through to the final numbers.
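
    (For reference, the commonly cited simplified expression of Myhre et al. 1998 reproduces numbers in that range; a one-liner in Python, if that approximation is accepted:)

        import math

        def co2_forcing(c_ratio):
            # Simplified CO2 forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2
            return 5.35 * math.log(c_ratio)

        print("%.2f W/m^2" % co2_forcing(2.0))   # doubling: ~3.71 W/m^2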

    Judith and Steve McIntyre have both provided some leads, that I hope to get to soon.

  319. maksimovich
    Posted Feb 22, 2008 at 5:33 PM | Permalink

    re 296

    #2: The change in temperature at the poles is larger than the global average temperature… — not controversial at all, look at the temperature trends for the last century

    The meteorological data from Vostok station is a rather large black swan in your argument, or indeed in any GCM!

    SAT mean: −55.2 °C
    600 hPa mean: −42.5 °C
    500 hPa mean: −41.8 °C

    Please provide the algorithms from the GCM that account for this.

  320. Neal J. King
    Posted Feb 22, 2008 at 5:40 PM | Permalink

    #265, Tom Vonk:

    I don’t agree that it’s impossible to predict the average/integral of something you don’t know. Example: If I have a bathtub shaped like a rectangular prism with floor area A, and am running water into it at a rate of W (volume/second), the average water level will be increasing at the rate of W/A. This will be the spatial average of the water levels over the tub, and I can predict it confidently, even though the instantaneous water level at any point over the tub is changing unpredictably due to turbulence, etc.

    This analogy is actually quite close to the GHE that is under discussion. The physical constraints on the radiative transfer imply that a sudden increase in CO2 will result in more radiant energy coming into the Earth (land/ocean/atmosphere) system than leaving it. So something is happening to all that extra power. If it’s not leaving, it must result in a build-up of energy.
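
    (A toy numerical version of the bathtub, for anyone who wants to play with it; all numbers are arbitrary. The local level is noisy, but the spatial average rises at exactly W/A:)

        import numpy as np

        rng = np.random.default_rng(0)
        A, W, dt = 1.0, 0.01, 1.0      # floor area, inflow (vol/s), timestep (arbitrary units)
        n_cells, n_steps = 100, 1000

        level = np.zeros(n_cells)      # water level sampled at 100 points over the tub
        for _ in range(n_steps):
            level += W / A * dt                  # uniform rise from the inflow
            slosh = rng.normal(0.0, 0.05, n_cells)
            level += slosh - slosh.mean()        # "turbulence": redistributes water, conserves volume
        print("predicted mean rise: %.2f" % (W / A * dt * n_steps))
        print("simulated mean rise: %.2f" % level.mean())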

  321. Neal J. King
    Posted Feb 22, 2008 at 5:54 PM | Permalink

    #265, Tom Vonk:
    #309, Earle Williams:

    Yes, if there is a radiative imbalance, there has to be build-up of energy somewhere in the system. So why do you think we are talking about global warming? Because a very straightforward way of accommodating more energy in a system is for the temperature to increase!

    Is it possible that some of this extra energy is being captured in chemical bonds of hydrocarbons, cellulose, etc.? Yes, but let’s think about that: You’re basically talking about more vegetation growing, and then that vegetation turning into oil/gas/coal.

    – With regards to vegetation: Where is it that you believe this extra vegetation is hiding? In the rain-forests that are being burnt down? In the woody forests, which aren’t expanding either? In the farmlands, where the production is harvested and then eaten or again burned? Where is all this extra cellulose hiding?

    – With regards to fossil fuels: Most people seem to think that it takes millions of years for hydrocarbons to form from vegetative matter under pressure. Given an excess CO2 input over a timescale of about 150 years, do you think that a significant amount of hydrocarbon could have been produced? And where would it be located: near the surface, where there is no great pressure? Or down in the depths of the Earth (and then please explain how the vegetative matter got to those depths within that 150-year period)?

  322. Kenneth Fritsch
    Posted Feb 22, 2008 at 5:56 PM | Permalink

    http://www.climateaudit.org/?p=2708#comment-215291

    Incidentally, I’ve been following the global warming story for a long time. As I’ve said many times, I don’t want it to be real. I am a very skeptical person and I’ve spent a lot of time investigating the science and investigating claims made here. Almost without exception, I find the IPCC science to be much closer to the truth than one would believe from reading this site. (OpenTemp and the USA48 temperature record is the best example). The science is not perfectly right, but it’s closer to right than wrong.

    John V, I sometimes think that you have rather selective vision yourself. The IPCC, as in AR4, makes a good case for immediate mitigation of AGW but unfortunately it does not give equal weight to the consensus and the “other side(s)”. You look at the CRN 12345 data one way and I looked at it another and came to different (tentative) conclusions. If I went over to RC I would expect them to give me a view from the consensus side and not expect much in the way of an exposition from the “other sides”. Steve M has not been into policy (as is more evidently the case at RC) and as a private person picks and chooses the puzzles he wants to analyze and perhaps eventually solve. Why should that bother you, who come here with the consensus on your side?

  323. steven mosher
    Posted Feb 22, 2008 at 6:32 PM | Permalink

    RE 316. Hi JohnV.

    “My original snarky statement was in response to his interest in pursuing Hans Erren’s method for purportedly showing that the effect of CO2 on the ice age cycle had to be small. It reminds me of some past cases of promoting questionable analyses, and they always seem to be one-sided.”

    I don’t mind snark. Sometimes I give people BONUS points for being snarky. It’s a pixel world, not a meat world. Also, I usually SKIM the CO2 threads. Why? I haven’t seen an interesting alternative hypothesis to well understood radiative physics.

    “My original question — whether SteveMc is still interested in pursuing Hans Erren’s analysis given that it actually supports the IPCC — was rhetorical. I knew the answer was no. If I understand your comment, it appears that you agree. You think that’s entirely justified. I agree that it may be justified, but also believe it undermines any claims of indifference.”

    Huh? I’m not really interested in defending SteveMC on anything. This is a nice house. Interesting people show up. Sometimes the conversation is nice, sometimes there are food fights. I thank our host. Is he utterly consistent and logical? He’s Canadian, not Vulcan.

    I find it utterly funny that people expect more consistency out of people than they do out of models or theories. My latest favorite:

    Nightlights says that Bratford, population 90,000, is DARK. Nightlights says that Mount Shasta, population 3,600, is BRIGHT. So the theory says that DARK sites don’t have UHI and the BRIGHT sites do have UHI, and therefore: adjust the bright sites and leave the dark alone.

    I think we can give people as much latitude as this. Simply: when it comes to consistency we are all better than nightlights.

    “Incidentally, I’ve been following the global warming story for a long time. As I’ve said many times, I don’t want it to be real. I am a very skeptical person and I’ve spent a lot of time investigating the science and investigating claims made here. Almost without exception, I find the IPCC science to be much closer to the truth than one would believe from reading this site. (OpenTemp and the USA48 temperature record is the best example). The science is not perfectly right, but it’s closer to right than wrong.”

    I don’t know how you weigh or average “claims” made here. To be sure, lots of people crash the party, overstay their welcome and say silly things. But our host SteveMc has made FEW claims. Mostly he espouses an ethic. And that ethic is: DOUBLE CHECK authority. There is no leverage in double checking lunatics. For the record, AGW is the best explanation of the data we have. I won’t extrapolate beyond that.

    Other than that, I think Anthony could have another batch of files in a few weeks and we have some more station metadata. Could be fun, method-wise. Let’s talk in due course. If everyone can mind their manners it might make an interesting multi-blog seminar (I suspect that’s a bit ambitious).

  324. steven mosher
    Posted Feb 22, 2008 at 6:44 PM | Permalink

    RE 316. JohnV, in your file, when you say the RURAL designation is taken from GHCN, what did that refer to?

  325. Sam Urbinto
    Posted Feb 22, 2008 at 6:45 PM | Permalink

    How do you know he’s not Vulcan and Canadian, but doesn’t follow the logic thing 100%? Maybe he’s Romulan. Maybe half human.

    The half dog half cat wizard of the statistical and auditing world.

  326. Steve McIntyre
    Posted Feb 22, 2008 at 6:49 PM | Permalink

    I asked Hans if he would write up some analysis. What’s wrong with that? People must be pretty desperate in their attempts to slime, if that warrants snark. Get a life, John V.

  327. steven mosher
    Posted Feb 22, 2008 at 7:11 PM | Permalink

    RE 326. How do I know he is not Vulcan? That’s not very damn funny…..

    As for being a Romulan

  328. jae
    Posted Feb 22, 2008 at 10:36 PM | Permalink

    321, Neal:

    #265, Tom Vonk:

    I don’t agree that it’s impossible to predict the average/integral of something you don’t know. Example: If I have a bathtub shaped like a rectangular prism with floor area A, and am running water into it at a rate of W (volume/second), the average water level will be increasing at the rate of W/A. This will be the spatial average of the water levels over the tub, and I can predict it confidently, even though the instantaneous water level at any point over the tub is changing unpredictably due to turbulence, etc.

    I am certainly no expert on the NS subject, but I sense that your example is overly simplistic. The turbulence in a bathtub is probably several orders of magnitude lower than the turbulence of the atmosphere, and may be entirely predictable. Nice try, though.

  329. bender
    Posted Feb 22, 2008 at 10:40 PM | Permalink

    #290 John V

    I’ll leave the calculation of confidence limits to someone with more statistical prowess.

    You say this. Then you turn around and make inferences freely, as though the confidence limits are incidental. This is called “paying lip service”, and it is bad practice to make a habit of it.

  330. johnlichtenstein
    Posted Feb 22, 2008 at 11:44 PM | Permalink

    Sounds like a great trip Steve and I hope some of the people you had private conversations with will write up those talks or ask you to. I’m not surprised nobody in the big presentation asked stats questions. Most people don’t want to slow down a big meeting to clarify notation and terms so they can ask a question. The goal for a big talk is to give the overview and hope for good questions to come later over coffee or via email.

    For presentations to small groups like the HS class, maybe you need to ask some questions to get people going. I dunno. Prof Curry and JEG might have good suggestions about that.

  331. Neal J. King
    Posted Feb 23, 2008 at 5:14 AM | Permalink

    #329, jae:

    My point is that, even under the assumption that the Navier-Stokes equations are too complex to be accurately integrated, I can STILL predict the average water-level as a function of time, because it depends on something else that I can calculate: the total amount of water that has entered the bathtub, and the area of the bottom of the tub.

    But in fact, the calculation of the radiative transfer issue in the atmosphere is much less complex than a hydrodynamic problem. The calculation depends on three profiles: temperature, CO2 concentration, and water-vapor concentration. The average temperature profile cannot be very different from the adiabatic profile, and the CO2 concentration is considered to be well-mixed in the atmosphere. The water-vapor concentration is going to be more variable, but if the question is, what is the impact of CO2?, it also doesn’t matter very much: it can be considered constant until the impact on the temperature can be evaluated.
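
    (To make the three profiles concrete, here is a minimal sketch of the inputs such a calculation starts from. The 6.5 K/km lapse rate, the CO2 mole fraction and the water-vapor scale height are illustrative assumptions, not fitted values:)

        import numpy as np

        z = np.linspace(0.0, 11000.0, 12)     # altitude, m (troposphere, illustrative)
        T0, lapse = 288.0, 6.5e-3             # surface temperature (K), lapse rate (K/m)
        T = T0 - lapse * z                    # near-adiabatic temperature profile
        co2 = np.full_like(z, 385e-6)         # well-mixed CO2 (mole fraction, assumed)
        h2o = 0.01 * np.exp(-z / 2000.0)      # water vapor, assumed ~2 km scale height

        for zi, Ti, qi in zip(z, T, h2o):
            print("%6.0f m   %6.1f K   %.5f" % (zi, Ti, qi))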

  332. Geoff Sherrington
    Posted Feb 23, 2008 at 5:17 AM | Permalink

    Re 158 Dan Hughes

    You refer to a Judith Curry reference being:

    “Its authors described a way for the United States to obtain nearly 100 percent of its electricity and 90 percent of its total energy, including transportation, from solar, wind, biomass, and geothermal resources by end-of-century. Electricity would cost a comfortable 5 cents per kilowatt hour.

    U.S. carbon emissions would be reduced 62 percent from their 2005 levels. Some 600 coal and gas-fired power plants would be displaced.”

    I live in Australia, which has no nuclear power generation because of US-inspired resistance.

    If we reduced carbon emissions by 62%, in future years, from where would our electricity come when the sun did not shine and the wind did not blow? Answer – we would have none except a small base load of hydro.

    The silly talk about reductions of this magnitude is the same silly talk that left us without nuclear. Some might cheer at that, but I don’t.

    So, Judith, if you endorse reductions like this, should you not give alternatives? Otherwise, it is propaganda, not science.

    One costing for Australia at present puts the cost of preventing a tonne of CO2 going into the air by using windmills at approximately $1,100. The cost to avoid a tonne by using nuclear is about $22.

    Can you see why propaganda is unpopular among those with access to data for these calculations? Does your conscience not worry you about helping to commit a prosperous, enjoyable nation to poverty with partly-worked radical ideas?

  333. John Lang
    Posted Feb 23, 2008 at 6:20 AM | Permalink

    John V 316 – (reminds me of the guy at sporting events with the rainbow wig holding the John 3:16 sign, lol).

    Your chart of GISTEMP 5 year running mean for the lower US seems a little smoothed and a little adjusted compared to the current GISTEMP chart from the GISS website.

    How much playing around with the data did you do to get it to match so closely? Audit anyone?

  334. Kenneth Fritsch
    Posted Feb 23, 2008 at 11:18 AM | Permalink

    How much playing around with the data did you do to get it to match so closely? Audit anyone?

    I for one am not suggesting that John V is playing around with the data. Suggesting so would only derail us from the real issues. What struck me about some of John V’s early analyses was that they used categories that contained relatively few stations and uncorrected measurements. The statistical significance of such an analysis has to be questioned (and measured). When one sees the station-to-station variability even for stations in close proximity, one becomes even more cautious about using small samples of stations.

    I found differences in the trends when I grouped CRN 1, 2 and 3 stations and compared them to the group consisting of CRN 4 and 5 stations. I used corrected station data and was careful to note that, as one goes back in time, the data set has much missing data, particularly when using unadjusted station data, which makes no attempt to fill in the missing data. I also did some random selections of stations for trend comparison amongst the Watts-rated CRN 1, 2, 3, 4 and 5 stations to determine whether the trend differences I found were significant. That analysis indicated the differences were significant. I looked at other confounding variables such as latitude, longitude and elevation and was not able to find any confounding.
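
    For anyone wanting to repeat the random-selection check, the logic is an ordinary permutation test. A sketch in Python; the per-station trends below are made-up placeholders, not my data:

        import numpy as np

        rng = np.random.default_rng(1)
        crn123 = rng.normal(0.15, 0.10, 40)   # placeholder trends, deg C/decade
        crn45 = rng.normal(0.25, 0.10, 30)    # placeholder trends, deg C/decade

        observed = crn45.mean() - crn123.mean()
        pooled = np.concatenate([crn123, crn45])
        n_perm, count = 10000, 0
        for _ in range(n_perm):
            rng.shuffle(pooled)               # random relabeling of the stations
            diff = pooled[len(crn123):].mean() - pooled[:len(crn123)].mean()
            if abs(diff) >= abs(observed):
                count += 1
        print("observed difference: %.3f   permutation p-value: %.4f"
              % (observed, count / n_perm))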

    I did this analysis as the personal exercise of a layperson and I make no claims for its statistical authenticity. It needs more analysis (and particularly with regards to change points) and by a qualified statistician.

    By the way, I made an unforgivable and egregious error in an early analysis of differences between the Watts-rated CRN stations in neglecting to factor in the effects of elevation on temperature. Thanks to Steven Mosher the error was found and corrected, but I am certain that not all my errors are so easily found.

  335. steven mosher
    Posted Feb 23, 2008 at 12:33 PM | Permalink

    334. JohnV has his code online. It’s open source. Download it and run it.

    As JohnV, Kenneth, Clayton B and I worked on this project (each doing different things) we all just helped each other. You’re welcome to check John’s work. I have. The code is very well written. JohnV used an input dataset that was open and selected stations for his study, publishing them. It’s all open. It’s all free. So, join in.

    Here is the deal. JohnV contributed his time and code. Now others contribute their time
    or modify the code, or run results, lots of things.

  336. John V
    Posted Feb 23, 2008 at 1:41 PM | Permalink

    It looks like I’ve got a lot of responses to write.
    I have tried not to provoke any new arguments. If anything written below appears to be picking a fight, please clarify before firing back.

    =====
    #317 Sam Urbinto:

    You just seem to be a little upset about how things get done, perhaps a bit too critical.

    It was an off-hand, single-sentence comment. Would SteveMc be interested in promoting results that back up the IPCC? Sometimes I do get upset, and I’ll try to restrain myself, but it definitely goes both ways (in general, not necessarily from you). This can be a hostile place for outsiders.

    =====
    #323 Kenneth Fritsch:

    You look at the CRN 12345 data one way and I looked at it in another and came to different (tentative) conclusions.

    (I’m going from memory, so correct me if I’m wrong).
    As I recall we both approached the data from different directions, looking for different things, but our results did not contradict each other. My approach was to build the best possible reconstruction using the best stations (as identified by SurfaceStations). Your approach was to look for differences between the best and worst stations. Both are valid.

    I believe you found that the worst stations (urban and/or CRN5) were statistically different than the best. I agree. In fact, I even sent results to SteveMc showing the differences.

    I compared my reconstruction using the best stations to the GISTEMP reconstruction and found them to be similar. At the time there were a lot of disparaging comments about GISTEMP, so it seemed an appropriate comparison. It is also in the spirit of an audit to compare GISTEMP to my independent analysis.

    =====
    #324 steven mosher:
    Thanks for the reasoned comments. I don’t think there’s anything that requires a response. Except perhaps that when I wrote of “claims”, I was referring to articles written/approved by SteveMc.

    =====
    #325 steven mosher:
    For the GHCN rural stations, I believe I chose stations with an ‘R’ in the ‘R/S/U’ column from this list:
    http://data.giss.nasa.gov/gistemp/station_data/station_list.txt
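
    If you want to reproduce the filter, something like this works. Caution: the character position of the R/S/U flag below is an assumption from memory; verify it against the file layout before trusting the output:

        # Sketch of the rural-station filter for the GISS station list.
        RSU_COL = 67   # hypothetical character offset of the R/S/U flag -- verify!

        rural = []
        with open("station_list.txt") as f:
            for line in f:
                if len(line) > RSU_COL and line[RSU_COL] == "R":
                    rural.append(line.rstrip("\n"))
        print(len(rural), "stations flagged rural")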

    =====
    #327 Steve McIntyre:

    I asked Hans if he would write up some analysis. What’s wrong with that?

    There’s nothing wrong with that.
    I simply asked if you’d still be interested if his results backed up the IPCC position. What’s wrong with asking?

    People must be pretty desperate in their attempts to slime, if that warrants snark.

    Given that a substantial portion of your M.O. is directing snark at authorities, this is an ironic statement.

    Get a life, JOhn V.

    Again ironic given the time and energy you have volunteered to your efforts. Of course you are free to choose how to spend your time. I can assure you that I have a life outside my comments, and occasional snarkiness, here. My snarkiness was not productive though, and I will try to refrain in the future.

    =====
    #330 bender:

    Then you turn around and make inferences freely, as though the confidence limits are incidental. This is called “paying lip service”, and it is bad practice to make a habit of it.

    I believe it was you and Mike B who made comments about Southern Hemisphere and Antarctic cooling. I asked you for a reference to backup your claim of Antarctic cooling. I made the effort to download the data and calculate the trends, as opposed to just making claims.

    The only inference I made was that the claim of Southern Hemisphere or Antarctic cooling in the last 10-15 years could not be statistically supported.

    =====
    #334 John Lang:

    (reminds of the guy at sporting events with the rainbow wig holding the John 3:16 sign lol.)

    That’s unproductive. I won’t rise to your bait.

    Your chart of GISTEMP 5 year running mean for the lower US seems a little smoothed and a little adjusted compared to the current GISTEMP chart from the GISS website.

    How much playing around with the data did you do to get it to match so closely? Audit anyone?

    My chart shows only the 5yr mean for clarity.
    Perhaps you weren’t around when I originally posted these charts — search for them and you’ll find links to my data and source code. I encourage you to audit them, as I have encouraged the community since the beginning.

    =====
    #335 Kenneth Fritsch:

    I for one am not suggesting that John V is playing around with the data. In doing so, we only get derailed from the real issues.

    My data and code are freely available. Kristen Byrnes and others put some effort into finding problems with it (which was a good thing), but no problems were found. If there are problems with my data or methods, I welcome corrections.

    What struck me about some of John V’s early analyses were that they used categories that contained but a relatively few stations and uncorrected measurements.

    For the record, I used the best stations (as identified by SurfaceStations.org) and included TOBS adjustments. Further GISTEMP adjustments (such as urban trends) were not required since I used only rural stations. Also, the adjustments themselves were the subject of controversy so it did not seem appropriate to use them.

    You and I had differences of opinion in our approaches, but as I said above I don’t believe our results were contradictory. There was a lot of interesting conversation and I think we were making real progress.

  337. bender
    Posted Feb 23, 2008 at 3:38 PM | Permalink

    I won’t go all Socratic on you John V; I’ll just give it to you straight. One doesn’t get to pick and choose when to put the error bars on a trend line when comparing model predictions and observations – a common Team tactic.

    Regarding Antarctic short-term trends, it is YOU who say the model matches observations. So YOU show us the goods. YOU show us that the so-called match is not the sort of ridiculous match that Gavin likes to trumpet: model trend “matching” the observational trend just because the error bars are really wide on both. i.e. Slopes not significantly different from zero. That may be a “match”, but not the sort of match that supports GW theory. When you show me that the slopes in both model and data are significantly non-zero with equal slopes, then I will buy your argument that the models fit the data.

    I, on the other hand, argue nothing. The burden of proof is on you, so you get to it.
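
    To be concrete about what the goods look like: a slope with its confidence limits, and a check that two slopes are each non-zero and mutually indistinguishable, is a few lines. A sketch with placeholder series; note these are plain OLS intervals, which are themselves too narrow when the residuals are autocorrelated:

        import numpy as np

        def slope_ci(t, y):
            # OLS slope with an approximate 95% confidence interval (no autocorrelation correction).
            t, y = np.asarray(t, float), np.asarray(y, float)
            b, a = np.polyfit(t, y, 1)
            resid = y - (a + b * t)
            se = np.sqrt(resid @ resid / (len(t) - 2) / ((t - t.mean()) ** 2).sum())
            return b, se, (b - 1.96 * se, b + 1.96 * se)

        rng = np.random.default_rng(2)
        t = np.arange(11)                              # an 11-"year" window
        model = 0.03 * t + rng.normal(0, 0.1, 11)      # placeholder model series
        obs = 0.00 * t + rng.normal(0, 0.1, 11)        # placeholder observed series
        b1, se1, ci1 = slope_ci(t, model)
        b2, se2, ci2 = slope_ci(t, obs)
        # A strong "match" needs both slopes non-zero AND no significant difference:
        z = (b1 - b2) / np.hypot(se1, se2)
        print("model slope CI:", ci1, " obs slope CI:", ci2, " z of difference: %.2f" % z)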

  338. bender
    Posted Feb 23, 2008 at 3:46 PM | Permalink

    And let’s move this to an Antarctic thread or to unthreaded. This doesn’t have anything to do with Georgia Tech.

  339. John V
    Posted Feb 23, 2008 at 4:01 PM | Permalink

    #338 bender:
    I did not make any claims of my own. I investigated a couple of statements made here that surprised me. Firstly, that less warming in the SH than the NH refutes AGW theory. Secondly, that the SH and/or Antarctica are cooling.

    Using your approach, the burden of proof is not on me as the original claims were not mine. Nevertheless, I did a little research and found that the claims were incorrect. The ball is now in your court if you wish to back up the claim that Antarctica is cooling.

    I’ve been investigating claims made by both sides to satisfy my curiosity and skepticism. I am not going to produce any results that will satisfy your skepticism, but when I see a questionable statement I will feel free to investigate and correct it if necessary.

  340. bender
    Posted Feb 23, 2008 at 4:02 PM | Permalink

    #337

    The only inference I made was that the claim of Southern Hemisphere or Antarctic cooling in the last 10-15 years could not be statistically supported.

    Caught back-pedaling away from an untruth again. Witness:

    #191

    Modern climate models all predict less warming in the SH.

    You see? You have indeed made other inferences. Inferences that require error bars.

    And yet you suggest that calculating confidence limits is not your job:

    #290

    I leave the calculation of confidence limits to someone with more statistical prowess.

    If you don’t have the “prowess” to run a script that’s already been provided to you, then you have no right pretending you have the skill to compare two trend lines.

    This is the second time a lurking bender has caught you making erroneous inferences while playing the authority. My recommendation is to get some prowess and learn how to calculate those trend line confidence limits. And be prepared to provide data plots yourself before daring others to do the same.

  341. John V
    Posted Feb 23, 2008 at 4:27 PM | Permalink

    #341 bender:
    You’re quoting me out of context. Let’s review:

    In #188, Severian said:

    If CO2 is supposed to be a “well mixed” gas in the atmosphere, and when you consider the fact that the NH is warming a lot more than the SH, and more land mass and people and UHIs are in the NH, that makes me suspicious that the summation of many regional changes does indeed have at least a pseudo-global (NH dominant) effect.

    Then, in #189 Joe Black said:

    From what I’ve seen, the CO2 levels in Antarctica and at Mauna Loa are essentially equivalent. The NH temp anomalies are higher than in the SH. Maybe it’s not the CO2 then? Maybe the “climate change” is different over the Oceans than over the Land.

    That would make it not exactly “global” wouldn’t it?

    Both were claiming that the difference between NH and SH warming refutes AGHG as a cause of the warming.

    To them I responded in #191:

    Severian and Joe Black:
    The difference between NH and SH warming gets brought up again and again and again. An explanation is always given, but it
    doesn’t seem to matter. Oh well, I’m a glutton for punishment — here are a few points to ponder:

    1. The NH has more land. Land warms faster than ocean. Therefore the NH warms faster than the SH.
    2. Modern climate models all predict less warming in the SH.

    Their claims. My investigation to counter the claims.

    =====
    Regarding Antarctic and SH cooling, I believe it started with

    Peter Thompson in #259:

    Which, exactly, of these models from the AR4 predicts Antarctic cooling? Lets also remeber that hansen’s 1988 models predict that the antarctic will warm more than anywhere else on earth.

    Followed by yourself in #261:

    There are model runs that allow for a temporary cooling 1999-2008. However (1) the effect is temporary and (2) it is not in the ensembles (which are collections of many stochastic runs). I wondered about that “money quote” from John V at the time he made it. Your explanation, John V?

    To which I responded in #269:

    #259 Peter Thompson:
    Why would you expect them to predict southern cooling? The southern hemisphere is *not* cooling (even from 60S to the pole). It is warming less quickly than the north (over any time period that is reasonable for discussing climate).

    The original quote was given in response to Severian and Joe Black asking about why the SH is warming less than the NH. It’s well understood and not surprising.


    #261 bender:
    See above.
    I have been looking for measured data showing an Antarctic cooling trend but can’t find any other than very regional cooling in East Antarctica. Please provide a link. Thanks.

    Mike B then came back in #275:

    Funny John. In an earlier post, you specified at least 10 years as “relevant for climate”. I immediately pointed out that two large SH regions with negative temperature trends over the past 11 years. Your reaction in #216 was to laydown, but now you come back in the same thread and repeat your error.

    Download the GISS data yourself and calculate the slopes if you don’t believe me. Otherwise you’re just repeating innaccuracies you’ve read elsewhere.

    I thought that was a good idea, so I did as Mike B suggested. (I note that you did not question Mike B for error bars when he found two regions with negative trends). The trends were in #286.

    Claims made by you and Mike B. My investigation.

    Finally, in #290, when questioned by DeWitt Payne about the error bars I said:

    I agree that the SH trends over 10-15 years may not be statistically significant. (Although I suspect the polar region is significant). I’ll leave the calculation of confidence limits to someone with more statistical prowess.

    I conceded the point that the trends were probably not significantly positive. And that’s what got you all upset?

    Your “second time” presumably refers to the time when we misunderstood each other when I asked about splicing the instrumental record onto the proxy record.

    I realize you don’t like me. That’s fine. Quoting me out of context and making unfounded accusations is crossing the line.

  342. bender
    Posted Feb 23, 2008 at 10:16 PM | Permalink

    #340 You’re not getting it, as usual. That’s fine.

  343. bender
    Posted Feb 23, 2008 at 10:48 PM | Permalink

    If a very weak trend or no trend at all in observed temps in the Antarctic over an 11-yr period is consistent with the dangerous warming predicted by the models (over a comparable 11-yr period), then tell me what sort of observations would NOT be consistent with the models. e.g. What slope over what length of time? Zero slope over 20 years? Weakly negative slope over 15 years? As has been said many times before here, it seems the models are awfully permissive. Which raises the question: what would it take to refute the hypothesis? If everything is permissible, then the hypothesis is irrefutable. That ain’t science.

  344. John V.
    Posted Feb 23, 2008 at 11:33 PM | Permalink

    #343 bender:
    Oh I get it. You’re here to raise the uncertainty flag on results you don’t like. To lure me into trying to defend all of climate science. That’s fine. That’s your role.

    Ideally all analyses and results would have error bars. Since you’re so vigilant about policing my comments, it’s surprising that error bars are nearly impossible to find on this site.

    At some point, when the vast majority of the science community is against you, you will need to accept that you are actually the one that needs to provide evidence. Spreading uncertainty is not enough. Running for the error bars is not enough.

    If you decided to be agnostic about the theory of plate tectonics, would the onus
    really be on me to defend the theory? Or would it be up to you to provide an
    alternative hypothesis?

    Step up. Make a statement. Defend it. Then we can all learn something.
    Otherwise you’re just heckling from the cheap seats.

    =====
    #344 bender:
    You claim a weak trend over the Antarctic for 11 years. The GISTEMP data (see #286) actually shows the Antarctic trend to be second only to the Arctic trend and similar to the 44N-64N trend. I have to ask again for a reference.

    For me, it’s the pattern of warming that is convincing. Models show the most warming in the Arctic. Models show more warming in the NH than the SH. Models show the Antarctic warming more than southern mid-latitudes. Models show the poles warming more than the equator. There are some issues with equatorial tropospheric-vs-surface trends (although the pattern does not match solar forcing either). The stratospheric-vs-tropospheric trends seem to support GHG as the cause of warming.

    Furthermore, the models give different results with and without GHG forcing. The differences are large enough to be statistically significant. Reality is bound by the GHG-forced models and outside the range of the non-GHG models. It seems to me that the models have made testable predictions, and have passed those tests.

  345. maksimovich
    Posted Feb 23, 2008 at 11:54 PM | Permalink

    re 344

    The GISTEMP data (see #286) actually shows the Antarctic trend to be second only to the Arctic trend and similar to the 44N-64N trend.

    The Scientific Committee on Antarctic Research has consigned GISTEMP to the dustbin for the following reasons:

    • Data from different stations have been combined to create single time series.

    • No metadata were provided, so it is not clear when stations have moved or when new observing instruments were introduced.

    • It is often not stated what quality control has been carried out on the observations.

    • It is unclear how the daily mean temperatures were produced prior to the monthly means being computed. At some stations the daily mean is calculated from the three- or six-hourly synoptic observations, whereas at other stations it is taken as the mean of the daily maximum and minimum temperatures.

    Better to not use “wrong way” Hansen in Antarctic references.

  346. Posted Feb 24, 2008 at 2:57 AM | Permalink

    John V wrote:

    Land warms faster than ocean

    I’m rather puzzled by the Hadcrut SST data. If you take off the Folland and Parker correction — there is a thread on this blog which argues, convincingly to me, that that uncorrection of a correction is a valid treatment of the data — then the oceans have been warming at .14 deg/decade for the last 100 years. This is more than the land warming, which is, after much manipulation and correction, shown to be .06 deg/decade. This is obviously not consistent with your statement. The current cooling excursion looks like a slow return to that .14 slope after the abrupt El Nino warming of the 90s.

    The oceans are 70% of the surface. We need an Anthony C. Watts to have a look at them with an eye to recovering all their temperature data, which is largely ignored by the media. There’s no asphalt or air-conditioning exhaust or UHI out there and, give or take a few cool buckets, examination should yield a truth which is less contentious.

    JF

  347. Philip Mulholland
    Posted Feb 24, 2008 at 3:04 AM | Permalink

    bender & John V

    Are you guys talking about or just demonstrating signal to noise ratio? 🙂

  348. steven mosher
    Posted Feb 24, 2008 at 8:43 AM | Permalink

    re 337. Thanks JohnV. Looks like you got yourself in the center of a circular firing squad.

    a quick comment, no reply needed. I think we collectively got it wrong by using the R/S/U flag, that is, if nightlights is a better proxy. Anyway, there is something constructive here that relates to the bender argument. Now I have six screens I can use to look for the best sites: rural, nightlights, ghcn brightness, brightness index, landcover and crn.

    Can I hunt around in the data using these different criteria until I find a mismatch?

    Actually, I could do it backwards. Since I have a trend for every station, I could select those that have negative trends (there are many), then do a multiple linear regression and DISCOVER the criteria that picked out the negative trends. Then I could use those criteria as a selection rule and create a series with a negative trend.
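
    Something like this sketch shows the trap. The metadata below are random numbers by construction, which is the point: a rule “discovered” on the same data it is tested on looks real in-sample and typically evaporates out-of-sample:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        X = rng.normal(size=(n, 6))        # hypothetical site metadata: six screening variables
        trend = rng.normal(0.1, 0.2, n)    # station trends, unrelated to X by construction

        # "Discover" criteria for negative trends on the first half of the stations...
        half = n // 2
        A = np.c_[np.ones(n), X]
        neg = (trend < 0).astype(float)
        coef, *_ = np.linalg.lstsq(A[:half], neg[:half], rcond=None)

        # ...then see how the discovered rule fares in- and out-of-sample:
        for name, idx in (("in-sample", slice(0, half)), ("out-of-sample", slice(half, n))):
            score = A[idx] @ coef
            r = np.corrcoef(score, neg[idx])[0, 1]
            print("%-13s correlation of rule with negative trends: %+.3f" % (name, r))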

  349. bender
    Posted Feb 24, 2008 at 10:07 AM | Permalink

    Yes, heckling from the cheap seats. Just like you. Get it yet?

  350. bender
    Posted Feb 24, 2008 at 10:12 AM | Permalink

    You’re here to raise the uncertainty flag on results you don’t like

    So, uncertainty is a “flag” to be raised when it suits one’s purpose. Yes, that is a very Team-like view. Good for you.

    I think you will find I am very even-handed about my views on scientific uncertainty. It is you who appears to flag selectively.

  351. Posted Feb 24, 2008 at 10:40 AM | Permalink

    Neal,

    I don’t agree that it’s impossible to predict the average/integral of something you don’t know. Example: If I have a bathtub shaped like a rectangular prism with floor area A, and am running water into it at a rate of W (volume/second), the average water level will be increasing at the rate of W/A. This will be the spatial average of the water levels over the tub, and I can predict it confidently, even though the instantaneous water level at any point over the tub is changing unpredictably due to turbulence, etc.

    That’s true; in this case physics gives a solution, and some independent, stationary noise is added to that solution. So the physicist gives a solution, and only a small part of the work is left to the statistician: to estimate the variance of that noise.

    Next we will add a drain to the opposite side of the tub, add an overflow opening, and tilt the bathtub so that water level samples (with missing data and moving stations) have to be combined using the anomaly method. Then make the input water rate time-dependent, W_in(t), and likewise the output rate of the drain, W_out(t). Plus some other difficult-to-predict effects (storms, random-ish oscillations) on the water level. After that, the statistician may note that a sample variance from a short data set may not be sufficient to obtain valid prediction intervals. Unless there’s an accurate 1000-year reconstruction that shows that the ‘noise’ part averages out very rapidly, and only the astronomical cycle and A-CO2 effect remain. That’s the hockey stick, and there’s a very interesting presentation about it linked in the main post 😉

    There are two distinct uncertainties involved here: the power of the ‘weather noise’ after averaging, and the uncertainty of the CO2 sensitivity. The hockey stick debate is about the former.
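
    A toy illustration of that last point, assuming AR(1) ‘weather noise’ (parameter values arbitrary): the familiar iid formula for the standard error of a short-sample mean is far too small when the noise is persistent:

        import numpy as np

        rng = np.random.default_rng(4)
        phi, n, n_sim = 0.9, 30, 2000             # persistence, sample length, replicates
        sd_marg = 1.0 / np.sqrt(1.0 - phi ** 2)   # marginal sd of a unit-innovation AR(1)

        means = []
        for _ in range(n_sim):
            x = np.empty(n)
            x[0] = rng.normal(0.0, sd_marg)       # start in the stationary distribution
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal()
            means.append(x.mean())

        print("iid-formula SE of the mean: %.2f" % (sd_marg / np.sqrt(n)))
        print("actual SD of the mean:      %.2f" % np.std(means))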

  352. Pat Cassen
    Posted Feb 24, 2008 at 10:56 AM | Permalink

    UC (#352), your point is well-taken. But Neal J. King’s rebuttal (#321, 332) of Tom Vonk’s argument (#265) still stands. Tom Vonk essentially asserted that the intractability of the NS eqns. precludes predictability in systems with chaotic components (e.g., turbulent flows, climate), an assertion which ignores decades of progress in analyzing such systems. I like Neal’s simple example, but examples in more complicated problems of engineering, geofluidynamics and astrophysics (turbulent heat transfer, material transport, self-organizing flows, etc.) abound. Although the climate system has many internal degrees of freedom, it is by no means obvious, a priori, that macroscopic behavior (e.g., the response of well-defined averages to changes in CO2) exhibits chaotic behavior. One might argue that paleo evidence suggests that some climatic variables evolve in and out of chaotic states in response to slowly varying inputs. In any event, numerical analysis with GCMs is an entirely appropriate means of addressing such questions.

  353. Steven Mosher
    Posted Feb 24, 2008 at 11:26 AM | Permalink

    RE 351. Bender, the other day I was thinking: how many predictions does a GCM make, crossing over from the hindcast to the forecast?

    Teleconnected.

  354. bender
    Posted Feb 24, 2008 at 11:30 AM | Permalink

    it is by no means obvious, a priori, that macroscopic behavior exhibits chaotic behavior

    1. It does not need to be “obvious” to be true.
    2. It is an undeniable possibility.
    3. Many alarmist warmers accept the chaotic climatic postulate.

  355. Pat Cassen
    Posted Feb 24, 2008 at 11:39 AM | Permalink

    bender, #355:
    1 and 2: Sure.
    3. So what?

  356. Kenneth Fritsch
    Posted Feb 24, 2008 at 11:42 AM | Permalink

    I oversimplify here, but these debates that John V and Neal J. King bring to the fore in this thread revolve, I judge, around the adage that the devil is in the details. One can get a fuzzy view of the effects operating in AGW, but the real contention is in the details.

    The Antarctic is supposed to warm, but less than the Arctic. But is it cooling?

    The tropospheric temperatures in the tropics are warmer than models generally predict but we can show the uncertainty of the models overlaps with those measurements.

    The historical MSU satellite readings have been corrected in the direction of the trends from surface temperature measurements, but in reality the measurements remain different, and that leaves an implication that future adjustments will put the satellite measurements closer to those of the surface, which are tacitly assumed to be correct.

    What is needed is a discussion of the details and certainly not a bath tub analogy to a climate system.

  357. Kenneth Fritsch
    Posted Feb 24, 2008 at 11:49 AM | Permalink

    The tropospheric temperatures in the tropics are warmer than models generally predict but we can show the uncertainty of the models overlaps with those measurements.

    The devil made me say it when I should have said: The tropospheric temperatures are cooler than the models generally predict.

  358. Posted Feb 24, 2008 at 12:51 PM | Permalink

    #352, me

    There are two distinct uncertainties involved here, power of ‘weather noise’ after averaging, and uncertainty of the CO2 sensitivity. Hockey stick debate is about the former.

    …distinct unless CO2 sensitivity is estimated from the (instrumental/reconstructed) temperature record. Or if the system is not linear, and ‘weather noise’ and A-CO2 interact (water rate depends on water level, etc.). But I’ve heard that the hockey stick doesn’t matter in this CO2 sensitivity issue (the physicist can derive CO2 sensitivity without the past temperature record), so these uncertainties can be dealt with separately.

  359. bender
    Posted Feb 24, 2008 at 4:02 PM | Permalink

    All: This is precisely the wicket John V is getting stuck on.

    Someone argues the models are crap because they don’t mimic reality: too much warming predicted, not enough observed.
    The warmer insists the models are mimicking reality because the models make certain allowances.
    Look under the hood and you find these allowances are indeed very permissive.

    The reality is that these model uncertainties are suppressed, until they are needed to make an argument. Gavin Schmidt.

    For me – unlike John V – the uncertainties do not come into existence only when I need them. They are always there.
    For me – unlike John V – the uncertainties must be estimated BEFORE an inference can be made, not after.

    John V demands I (1) take a stand, and (2) show some proof. I take no stand because of these uncertainties. My proof is over at RC. Cited many times. Look at Gavin’s confidence limits on his model runs. Sufficiently huge when he needs them to be. I’m talking about tropospheric tropical temperatures, of course. Which Ken Fritsch alluded to. The thread where their only avenue to refute Christy & Spencer is to show just how much uncertainty there is on the model runs. Lame. Moreover, this admission has consequences.

    So what? Well, if there is that much uncertainty on an 11-year trend, why is there not that much uncertainty on two 30-year trends? ENSO is a Texas sharpshooter’s fallacy. Of course John V probably has no clue what I’m saying here. I’ve said it a half dozen times and he hasn’t replied yet. Weather noise, John V. It’s the reason why you need to look at model ensembles, John V – not individual runs, or selectively chosen runs.
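
    To make the ensemble point concrete, a toy sketch: a fixed forced trend plus AR(1) weather noise, with every parameter invented. Individual 11-year runs wander all over the place, some of them cooling, while the ensemble mean recovers the forced trend:

        import numpy as np

        rng = np.random.default_rng(5)
        n_runs, years, phi = 50, 11, 0.6     # ensemble size, window, persistence (invented)
        forced = 0.02                        # "true" forced trend, deg C/yr (invented)

        t = np.arange(years)
        trends = []
        for _ in range(n_runs):
            noise = np.empty(years)
            noise[0] = rng.normal(0.0, 0.15)
            for i in range(1, years):
                noise[i] = phi * noise[i - 1] + rng.normal(0.0, 0.15)
            trends.append(np.polyfit(t, forced * t + noise, 1)[0])

        trends = np.asarray(trends)
        print("forced trend:           %+.3f" % forced)
        print("single-run trend range: %+.3f to %+.3f" % (trends.min(), trends.max()))
        print("ensemble-mean trend:    %+.3f" % trends.mean())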

    Now tell me: where does weather noise stop and climatic noise start? That’s what’s “so what”, Pat Cassen. Draw me that line. Tell me why. Once you admit that chaotic weather noise includes 3-7 year variability, you’ve left the barn door open to 10-, 20- and 30-year variability.

    Trouble in modeling paradise once you start asking what is signal and what is noise.

  360. Posted Feb 24, 2008 at 5:24 PM | Permalink

    @Jae

    I am certainly no expert on the NS subject, but I sense that your example is overly simplistic. The turbulence in a bathtub is probably several orders of magnitude lower than the turbulence of the atmosphere, and may be entirely predictable. Nice try, though.

    Jae, you are correct with regard to Neal’s understanding of the NS. The turbulence in the tub happens to be irrelevant to the problem he posed, calculating how fast the tub fills; the turbulence in the pipe matters.

    After all, to predict from first principles how much water is flowing through the pipe that fills his tub, Neal likely needs to solve the NS. (Though we could design hardware to avoid this problem. But in that case, Neal couldn’t predict the pressure drop across the relatively expensive pump he used to supply the water, nor could he predict the power requirement.)

    But, alas, I fear this thread has developed cross talk, because I can tell from Neal’s answer to me that either:
    a) my earlier comment didn’t actually address the question Andrew actually asked, or
    b) Neal thought I was interjecting on something entirely different.

  361. Pat Cassen
    Posted Feb 24, 2008 at 6:17 PM | Permalink

    bender (#360):

    Once you admit that chaotic weather noise includes 3-7 year variability, you’ve left the barn door open to 10-, 20- and 30-year variability

    No barn doors shutting over here, bender. Still, I notice that the journals are full of studies that see signal where you see noise…

  362. Severian
    Posted Feb 24, 2008 at 6:33 PM | Permalink

    Still, I notice that the journals are full of studies that see signal where you see noise…

    Just like the EVP folks…;)

  363. bender
    Posted Feb 24, 2008 at 7:11 PM | Permalink

    #362 Signal? Where? In everything that’s not ENSO? Don’t you think it’s a bit odd to suppose that all these teleconnection centres behave orthogonally to one another? Is this not an artifact purely of human convenience? Like the Texas sharpshooter’s target? What is this “signal”? What regulates its behavior? Why is it not predictable? What authority can prove that the two 30-year warming blips are not low-frequency noise? You do realize that even the staunchest alarmists at RC recognize that outside possibility – including Gavin and Mike?

  364. Pat Cassen
    Posted Feb 24, 2008 at 7:20 PM | Permalink

    #364:

    You do realize that even the staunchest alarmists at RC recognize that outside possibility – including Gavin and Mike?

    Yep.

  365. Spence_UK
    Posted Feb 24, 2008 at 7:55 PM | Permalink

    Neal J King’s response to Tom Vonk and Pat Cassen’s endorsement seem to have misunderstood Tom’s post.

    Tom did not say it was impossible to derive an average behaviour; he said it was impossible to provide the definite integral of the climate equations at our present capability. I think we can all agree on this. On the average behaviour, he asked: what is the timescale and what is the justification? Bathtubs filling with water do not address Tom’s question. Try reading #265 more carefully. Post hoc observations, especially those with just 3-4 degrees of freedom (30 year trends over 100 years of data?), are not a convincing answer either.

    Complex natural systems are fundamentally difficult to predict, especially those exhibiting self-similarity. The example to help people understand the difficulty in predicting these environments given by Per Bak is the case of the sandpile. Consider a constant flow of sand into a sandpile. The sandpile becomes unstable and a landslide (sandslide?) redistributes the sand to be at a lower energy level, and with more stability. Landslides are occurring continuously, and at all scales. Small slides, just a few grains of sand, happen all the time. Larger slides happen, but less frequently. Huge slides do occur, but less frequently still.

    Now here’s the rub: large slide events are dictated by the shape of the pile. But the shape of the pile is dictated by smaller scale events. If you can’t predict the small scale events, you can’t predict the large scale events.
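
    (For anyone who wants to see this numerically, the Bak–Tang–Wiesenfeld toppling rules fit in a few lines of Python; the grid size and grain count are arbitrary, and the avalanche sizes come out spanning orders of magnitude:)

        import numpy as np

        rng = np.random.default_rng(6)
        N = 20
        grid = np.zeros((N, N), int)
        sizes = []

        for _ in range(10000):                   # drop grains one at a time
            i, j = rng.integers(0, N, 2)
            grid[i, j] += 1
            topples = 0
            while (grid >= 4).any():             # relax: each unstable site sheds 4 grains
                for a, b in zip(*np.nonzero(grid >= 4)):
                    grid[a, b] -= 4
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < N and 0 <= y < N:
                            grid[x, y] += 1      # grains falling off the edge are lost
                    topples += 1
            sizes.append(topples)

        sizes = np.asarray(sizes)
        for s in (1, 10, 100):
            print("avalanches with >= %3d topplings: %d" % (s, (sizes >= s).sum()))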

    So, we have two bad analogies for climate. The bathtub and the sandpile, neither of which reflect the detailed behaviour of the climate. Which of these two bad analogies is actually correct? You can’t tell from the analogies, you have to look at the actual climate system. That is what Tom is getting at. So this should be a hot topic question for climate scientists, right? Wrong. It hardly gets addressed at all, either in the peer-reviewed literature or in blogs. Furthermore, the answers I have seen have shown such a woefully poor understanding of chaos and self-similarity that even I could see the gaping holes in the claims. It seems to me that climate science is unequipped to address these issues.

  366. Willis Eschenbach
    Posted Feb 24, 2008 at 8:25 PM | Permalink

    JohnV, you say:

    Furthermore, the models give different results with and without GHG forcing. The differences are large enough to be statistically significant. Reality is bound by the GHG-forced models and outside the range of the non-GHG models. It seems to be that the models have made testable predictions, and have passed those tests.

    I fear that you think the difference between

    a) A model that has been tuned to replicate the historical climate with GHGs, and

    b) The same model with GHGs removed

    means something more than

    c) ONCE YOU’VE TUNED THE MODEL, ADDING OR SUBTRACTING ANYTHING GENERALLY MAKES THE RESULTS FROM YOUR TUNED MODEL WORSE …

    John, suppose I make you a very simple tuned model that does not include GHGs, and it does a very good job replicating the past. It’s not that hard to do. However, when I add GHGs to the model, the results don’t match reality at all.

    By your lights, would that show us that GHGs do not improve the accuracy of models?

    Of course not. You’d be the first to point out that the model is tuned to produce results without GHGs, so there’s no reason to expect it would do well with them.

    But in your statement above, you’re making exactly the same argument. Won’t fly. If you think it will fly, think about it some more. Repeat as necessary.

    w.

    PS – for a tuned model, hindcasting temperatures (no matter how accurately) is not a “testable prediction” in any case. See the “pre” part of “prediction”? … think about what it means.

  367. Posted Feb 25, 2008 at 1:52 AM | Permalink

    bender,

    Once you admit that chaotic weather noise includes 3-7 year variability, you’ve left open the barn door to 10-, 20-, and 30-year variability.

    And then the Hockey Stick closes that door. That’s why it is important to go through MBH9x equation by equation, figure by figure, to show people what was actually done in that study. How did they calculate CIs? Why didn’t they use conventional methods to obtain CIs? Where does that astronomical trend come from? How did they select the number of TPCs for each step? How did they select the TPCs for each step? Is it an accident that the result corroborates AGW theory so well, after all those big mistakes in it? Was the answer in mind when the question was phrased? This blog and Steve’s presentation are excellent sources, but the information is scattered around quite a lot. And there are still people who think MBH9x are mathematically sound studies, so the work is not done.

  368. mikep
    Posted Feb 25, 2008 at 2:13 AM | Permalink

    I still haven’t managed to find where Steve’s presentation at the EAS seminar is online. No doubt I am being stupid, but could someone point me to it?

  369. AlanB
    Posted Feb 25, 2008 at 5:31 AM | Permalink

    mikep

    Here is the link

    I’ve posted up a ppt (9 MB) of my presentation here, very slightly edited to add any y-axis descriptions (in italics) that had been left out and to improve the referencing

    or http://data.climateaudit.org/pdf/gatech.ppt

    Great photo of Steve and his hockey stick!

  370. MarkW
    Posted Feb 25, 2008 at 6:33 AM | Permalink

    JohnV,

    I think you are projecting more than a little.

    SteveMc is not interested in promoting anything. He has stated time and again that he is interested in auditing. When you learn to appreciate the difference, you will become a much better scientist.

  371. MarkW
    Posted Feb 25, 2008 at 6:35 AM | Permalink

    I wonder why JohnV gets so upset over the very concept of data uncertainty?

  372. MarkW
    Posted Feb 25, 2008 at 6:46 AM | Permalink

    The tropospheric temperatures in the tropics are warmer than models generally predict but we can show the uncertainty of the models overlaps with those measurements.

    What was that comment about only caring about error bars when it suited your purpose?

  373. Tom Vonk
    Posted Feb 25, 2008 at 8:53 AM | Permalink

    Neal J King’s response to Tom Vonk and Pat Cassen’s endorsement seem to have misunderstood Tom’s post.

    Tom did not say it was impossible to derive an average behaviour; he said it was impossible, with our present capabilities, to provide the definite integral of the climate equations. I think we can all agree on this. On the average behaviour, he asked: what is the timescale and what is the justification? Bathtubs filling with water do not address Tom’s question. Try reading #265 more carefully. Post hoc observations, especially those with just 3-4 degrees of freedom (30-year trends over 100 years of data?), are not a convincing answer either.

    Complex natural systems are fundamentally difficult to predict, especially those exhibiting self-similarity. The example Per Bak gives to help people understand the difficulty of predicting such systems is the sandpile. Consider a constant flow of sand onto a sandpile. The pile becomes unstable and a landslide (sandslide?) redistributes the sand to a lower, more stable energy state. Landslides occur continuously, and at all scales. Small slides, just a few grains of sand, happen all the time. Larger slides happen, but less frequently. Huge slides do occur, but less frequently still.

    All that is perfectly right, Spence_UK.
    Indeed, neither has carefully read the post or got the point.
    Analogies are just that … analogies, and in this bathtub case not even wrong.
    Mathematical treatment of equations and the behaviour of solutions are a completely different game.
    Substituting easy pictures and simple examples for a complex system is all fine as long as nobody confuses the pictures with the reality or, even worse, tries to convey the illusion that the reality behaves like the picture.

    Lucia has already said in a few words why the “bathtub” example was irrelevant and misleading to casual readers who don’t really know what chaotic systems are.
    Indeed the turbulence doesn’t “matter” for the average level (if the bathtub is large enough, if the flow is constant, if the fluid is incompressible, if, if, if …); it matters for the pipe.
    However, because this irrelevant example has been given, I will take it to show how, even in this (apparently) easy case, each and every statement that Neal made becomes wrong.

    It is enough to have the incoming flow come from above, with a jet whose cross-section is comparable to the cross-section of the bathtub.

    Three consequences follow.
    First, the “average level” is no longer clearly defined.
    Second, the vertical perturbations are of the same or a bigger order of magnitude than the horizontal dimensions.
    Third, the volume is no longer equal to A x h.
    So even in this simplistic linear example with constant flow, where V = W·t and I can define a “level” by h = V/A, this unphysical quantity h has no relationship with the distribution of the vertical coordinates of the surface.
    Indeed it is trivial to say that this thing h grows at the rate W/A, because it has been defined that way, but it has no physical meaning.

    As for Pat Cassen’s examples of “turbulence tractability” in engineering, they miss the point by a vast margin too.
    Deterministic chaos in the climate is not reduced to turbulence, or to N-S alone for that matter.
    Of course turbulence is there, but a million other non-linear phenomena come on top – oscillations at all frequencies from an hour to thousands of years (and probably more that we don’t yet know of), phase changes, cloudiness and aerosols, etc etc.
    So it is not because engineers have established semi-empirical formulas that work quite well for calculating pressure drops in pipes of different geometries that this has any relevance to the predictability of the climate.
    If you knew a bit more about that than only a couple of words, you would know that PRECISELY the pressure-drop formulas are the best example of data fitting.
    The difference between the engineers and the so-called climatologists is that an engineer would apply a formula only for a strictly defined range of Reynolds and Prandtl numbers and never dream of using it outside that range, while a climatologist doesn’t even know the range of validity of his models.

    What stays is that there is no reason whatsoever that, as UC nicely put it, “noise cancels itself”; and to be clear, “noise” here is not restricted to turbulence, nor has it actually anything to do with the statistical meaning of “noise”.
    If anything, the proofs we DO have (e.g. Ruelle and Takkens, already quoted) say that it doesn’t.

  374. bender
    Posted Feb 25, 2008 at 9:06 AM | Permalink

    #372 Because he doesn’t have the “prowess” to compute confidence limits on his trend lines, that’s why.

    He is right, BTW, to accuse me of jumping in selectively on the uncertainty issue. I won’t attack newcomers or occasional commenters, or people obviously outside their field. Whereas John V himself is such an irresistible target. Playing at authority when he can’t even compute his own confidence limits. Tsk.

    You see, John V, when you are a real scientist you are not just trying to rebut all known, current opponents. Being sworn to the statistically defensible truth, you also accept to take on all lurkers and all future opponents.

    When you have equipped yourself to do that, John V, then you can start playing the authority. Until then, watch out for lurking eyes.

    lurk /on

  375. John V
    Posted Feb 25, 2008 at 9:06 AM | Permalink

    bender:
    You have some misconceptions about my level of understanding, but that’s fine. I’m not here to prove anything to you.

    This all started with comments here claiming that the SH and Antarctica were cooling. I did a quick analysis and found the data did not back up that claim. I conceded that a cooling trend was in the error bars. You’re still making the claim of Antarctic cooling with no data to back it up. I understand that you feel you don’t need to argue anything because you are not claiming anything, but you don’t get to make up negative trends that don’t exist.

    I understand you have some issues with Gavin and “the team”. There’s no need to project your issues with them on to me. Similarly, I avoid projecting my issues with others on to you. We have enough issues with each other.

    =====
    #367 Willis E:
    You seem to be espousing the view that climate models are nothing more than statistical models — exercises in curve fitting. I suggest that a GCM is fundamentally different.

    John, suppose I make you a very simple tuned model that does not include GHGs, and it does a very good job replicating the past. It’s not that hard to do.

    I have my doubts but will suspend my disbelief as this gets to the core of the question I’ve been asking. Please direct me to some resources or your own thoughts on how such a model could be built. Even a statistical model that fits a curve from global temperature to known forcings (excluding GHGs) would be a start. (A science-based model would of course be preferable).

    =====
    (#371 to #373) MarkW:
    I do not accept the view that “SteveMc is not interested in promoting anything”. If he was truly only auditing, then both positive and negative results (from his perspective) would be shown.

    I wonder why JohnV gets so upset over the very concept of data uncertainty?

    Um, I don’t. I conceded the original point about data uncertainty.

    What was that comment about only caring about error bars when it suited your purpose?

    You may not be aware but you were quoting Kenneth Fritsch. Your comments leave the impression (perhaps unintentionally) that the comment was mine.

    Steve: John V, one of the few blog rules is that I ask people not to impugn one another’s motives. I don’t impugn your motives; please don’t impugn mine. There are a wide variety of topics covered here. I’m quite happy to have opposite points of view represented here – I rather like it. If Gavin Schmidt or Michael Mann wants to post a thread providing opposite audit results on some study, they’re welcome to do so. I’ve posted threads for Judith Curry. You allege that I’ve failed to show “both positive and negative results”. If you’re alleging this about, for example, my posts about MBH, Juckes, Briffa, please provide some evidence for this assertion or withdraw it.

  376. John V
    Posted Feb 25, 2008 at 9:19 AM | Permalink

    bender:
    As you say, there is variability in the global climate at all time scales. Our ability to resolve and understand the variability depends on the physical and temporal range. Small scale, short duration variability is beyond our ability to forecast for more than a few days.

    At a much larger scale, ENSO is seen through a large part of the Pacific Ocean on a time scale of about 1 year. Models can reproduce the ENSO cycle but not its timing or the intensity of a single cycle. Cycles at this scale are a key reason why model ensembles are required to capture the range of physical variability. Admittedly, the error bars on the ensembles are sometimes too large to be useful.

    On a much larger scale, we are now in the midst of a 30-year warming trend that covers most of the globe. IMHO, such a large scale variation is unlikely to be caused by random chance or a mystery phenomenon. IMHO, a 30-year trend is well within our ability to forecast and understand. As I read Leonard, at your suggestion, I believe he would agree.

    If your argument is merely that there is a *possibility* that it is random or a natural cycle, then I can’t argue. Of course there’s a possibility. It comes down to the odds.

  377. Raven
    Posted Feb 25, 2008 at 9:28 AM | Permalink

    John V says:

    You seem to be espousing the view that climate models are nothing more than statistical models — exercises in curve fitting. I suggest that a GCM is fundamentally different.

    I could build a nifty science-based model that would describe a sled going down a hill; however, this model would only be useful if I have the correct data to feed into it (i.e. the mass of the sled, the slope of the hill or the coefficient of friction). GCMs may be based on sound mathematical theory but they rely on data with a lot of uncertainty. In many cases, the data is not available and has to be estimated by the modelers (e.g. aerosols). This means that GCMs quickly turn into curve-fitting exercises even if they were not intended to be.

  378. John V
    Posted Feb 25, 2008 at 9:31 AM | Permalink

    bender:

    When you have equipped yourself to do that, John V, then you can start playing the authority. Until then, watch out for lurking eyes.

    Sheriff bender is watching. Gotcha.
    I only ask that your response be proportional.
    If I’m refuting the low-hanging fruit of common misconceptions, don’t assume that I’m also trying to support all of climate science in the same comment.

    BTW, do you have a reference on the supposed Antarctic cooling yet?

  379. Tom Vonk
    Posted Feb 25, 2008 at 9:39 AM | Permalink

    Bender # 375

    He is right, BTW, to accuse me of jumping in selectively on the uncertainty issue. I won’t attack newcomers or occasional commenters, or people obviously outside their field. Whereas John V himself is such an irresistible target. Playing at authority when he can’t even compute his own confidence limits. Tsk.

    Bender, this one has been puzzling me for far too long, so I must ask.
    Why do you bother?
    As much as I would not miss reading a post from UC, Spence_UK, Dan Hughes, you and some others, the posts of John V are only a nuisance, because they take too much space and one has to scroll to get past them.
    It has been a long time since everybody saw that his skills were relatively low and his contributions more rhetorical than useful.
    Long ago I thought that he had something interesting to say in a discussion about radiation transfer, but then I realized that he was only parroting half-digested readings, thereby furiously reminding me of that Steve Bloom character at the former R. Pielke boards.

    Why would you bother reading?

  380. Raven
    Posted Feb 25, 2008 at 9:44 AM | Permalink

    John V says:

    BTW, do you have a reference on the supposed Antarctic cooling yet?

    Would GISS be acceptable or would you rather have a more reliable source?
    http://www.climateaudit.org/?p=2721#comment-213085

  381. John V
    Posted Feb 25, 2008 at 10:39 AM | Permalink

    #381 Raven:
    Thanks for the link. However, I think you misunderstood the question. The link shows the January 2008 (single month) anomaly vs the 1951-1980 reference period. A single month does not a trend make. The error bars on monthly anomalies are huge.

    As for GISS data, I have already calculated the trends for 10- to 15-year periods in #286 above:
    http://www.climateaudit.org/?p=2708#comment-214825

    To review, here are the Antarctic trends:
    Latitude: 10yr 11yr 12yr 13yr 14yr 15yr
    64S – 90S: 0.66 0.74 0.19 0.29 0.40 0.21

    I concede that the positive Antarctic trends may not be statistically significant. Showing that their mean is positive is sufficient to refute any unqualified statement that the Antarctic is cooling.

  382. Steven Mosher
    Posted Feb 25, 2008 at 10:46 AM | Permalink

    RE 377. What I want out of a climate MODELLER is something rather simple.

    It is what I have to do before running models.

    1. Specify the measures I would use to test my model. For example, if I use Bomb A
    I will kill more people than if I use Bomb B.

    Then I run my model. I have to specify my MOM (Measures of Merit) before the test,
    have a hypothesis before the test, and test that measure. And report that result.

    Afterwards, when that prediction fails, we mine the data for ways in which
    we got some things right. Hey, we knocked down more buildings; hey, we created
    more dust. The explosion craters were the right size. The blast wave was right.
    So, we have a good model. Look at all the things we got right!!!
    Stopped-clock analysis.

    Simple test. A GCM in hindcast mode, from 1850 to 2007. One measure: average surface temp trend.

    How far apart do they have to be before you question one of two things?

    1. the model
    2. the historical temp record

    The test of a model isn’t finding a few things right with it, or many things.
    It’s finding the right thing right with it. More precisely, it’s specifying AHEAD OF TIME
    what you will measure, what test you will use, and when you will claim success or
    failure.

    I don’t see that discipline in GCM work. It may be there. I haven’t seen it.
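
    A minimal sketch of the discipline Mosher is asking for: declare the measure of merit and the pass/fail criterion before looking at the comparison. The tolerance and trend values below are placeholders, not real model or instrumental numbers.

        # Pre-specified measure of merit, declared before the comparison.
        MEASURE = "surface temperature trend, 1850-2007 (degC/century)"
        TOLERANCE = 0.2   # pre-declared gap at which we question model or record

        def evaluate(model_trend, observed_trend, tolerance=TOLERANCE):
            """Return a pass/fail verdict against the pre-declared criterion."""
            gap = abs(model_trend - observed_trend)
            return ("PASS" if gap <= tolerance else "FAIL"), gap

        verdict, gap = evaluate(model_trend=0.75, observed_trend=0.55)  # placeholders
        print(MEASURE, "->", verdict, f"(gap = {gap:.2f})")

    The code is trivial; the discipline is in the fact that TOLERANCE is frozen before anyone looks at the result.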

  383. Sam Urbinto
    Posted Feb 25, 2008 at 10:51 AM | Permalink

    John V: I appreciate the work you’ve done with your alternative code as a way to check up on things. I would think the pervasive uncertainty in this field means that very little can be stated without qualification.

    “Would SteveMc be interested in promoting results that back up the IPCC?”

    Why bother? We know not everything from the IPCC is wrong; what’s the point of bringing up what’s correct? Yeah, yeah, yeah, methane absorbs IR. Sure, burning it creates carbon dioxide and water. Okay. We need to reinforce that CFCs are greenhouse gases? We need to prove that it’s warmer with GHGs than without? No. We don’t need to be RC; it is already there. Rabett and Tamino are already there to cheerlead what’s “correct”.

    “If your argument is merely that there is a *possibility* that it is random or a natural cycle, then I can’t argue. Of course there’s a possibility. It comes down to the odds.”

    That is the problem. Nobody has a firm grip on the odds. So it boils down to investigating the uncertainty. I don’t mind people focusing on what we know; the issue is this:

    ” A single month does not a trend make. The error bars on monthly anomalies are huge.”

    And that makes the longer-term information, built up from such error-ridden chunks, reliable how?

    “I concede that the positive Antarctic trends may not be statistically significant. Showing that their mean is positive is sufficient to refute any unqualified statement that the Antarctic is cooling.”

    Since the data is so suspect, how can anyone really make a statement that the (Ant)arctic is cooling, warming, or staying the same? I certainly wouldn’t. Isn’t checking into the details of what seems odd the way to go?

  384. MarkW
    Posted Feb 25, 2008 at 11:33 AM | Permalink

    Given the large number of tunable parameters used by the models, saying that they are nothing more than statistical models is a quite defensible claim.

  385. MarkW
    Posted Feb 25, 2008 at 11:37 AM | Permalink

    TomVonk,

    I’m still amused by the time JohnV declared that since, for a couple of decades, TSI did not match the temperature record, this proved that the sun played little role in the recent warming. A few posts later he declared that it didn’t matter that, for a few decades at a time, CO2 increases did not match the temperature record, since over the longer record the trends matched, hence proving that CO2 was the major climate driver.

    In one record, a few decades of mismatch was a disqualifier. In the other record, a few decades of mismatch was irrelevant.

  386. Posted Feb 25, 2008 at 12:01 PM | Permalink

    JohnV.

    This all started with comments here claiming that the SH and Antarctica were cooling.

    Gosh, I thought I first read that Antarctica is cooling at Real Climate! Plus, I thought they said they’ve been predicting that!

    Even a statistical model that fits a curve from global temperature to known forcings (excluding GHGs) would be a start. (A science-based model would of course be preferable).

    Here’s “Lumpy!”

    I don’t think modelers chose their forcings to tune GCMs by secretly developing “lumpy-like” models. But it’s not impossible, or even difficult, to do.

    The problem is that, given how slowly real earth data streams in, it can be difficult to avoid inadvertent tuning. You do some runs; you know your model predicts low during the ’70s; you look for a reason. You incorporate what you found.

    Of course this fix continues to explain the ’70s in future runs. It will work if it’s “science” — that IS the scientific method. The problem is that it will also work if it’s tuning to match the ’70s.

    That’s why validation using data that comes out after a model is run is important. (It’s also why I prefer simpler models. They predict warming too. So, this shouldn’t be a big deal.)
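
    For readers unfamiliar with the genre, a lumped-parameter model of the kind lucia is describing usually takes the one-box form C dT/dt = F(t) − λT. This is a sketch of the genre, not lucia’s actual Lumpy code, and the heat capacity, feedback and forcing numbers are assumptions:

        # Generic one-box energy balance model: C dT/dt = F(t) - lam * T
        import math

        C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (assumption)
        lam = 1.25  # feedback parameter, W m^-2 K^-1 (assumption)

        def forcing(year):
            # Illustrative CO2-only forcing, 5.35*ln(C/C0), ~0.5%/yr growth.
            co2 = 280.0 * math.exp(0.005 * (year - 1900))
            return 5.35 * math.log(co2 / 280.0)

        T = 0.0
        for year in range(1900, 2001):          # 1-year explicit time steps
            T += (forcing(year) - lam * T) / C

        print(f"anomaly in 2000: {T:.2f} K")

    With two free parameters and one forcing series, such a model is easy to “tune” to a century of data, which is exactly lucia’s point about validating on data that arrives after the run.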

  387. John V
    Posted Feb 25, 2008 at 12:09 PM | Permalink

    #386 MarkW:
    I’m not going to bother digging through all the old posts to clarify your out-of-context quotes. Your logic would make some sense if I was saying that GHGs were the only climate driver — I have never said that.

    =====
    #387 lucia:

    Gosh, I thought I first read that Antarctica is cooling at Real Climate! Plus, I thought they said they’ve been predicting that!

    What I read was that the models predict less warming, not cooling. That’s an important distinction. Maybe we read different pages.

    Can you elaborate or provide a link for the Lumped Parameter Model? Thanks.

  388. Posted Feb 25, 2008 at 12:21 PM | Permalink

    JohnV,
    We read the same article. They do say parts of the Antarctic are cooling, don’t quantify anything, and mention that cooling was predicted in one particular model.

    Frankly, the more you read it, the more vague and nuanced all the statements are.

    Lumpy is my toy model. I need to finish her up– but keep getting distracted. You can see one discussion here.

    The article contains more links.

  389. Sam Urbinto
    Posted Feb 25, 2008 at 12:40 PM | Permalink

    Less warming is cooling, isn’t it? But that’s the trouble; what are we talking about: a reduction in the rate of change, a downward movement, going under the zero line, what? To be more precise, I’d put it as a reduction in the rate of positive change of the anomaly. Or some other specific phrase that explains what’s going on. Is not the rate of positive change in the anomaly less?

    This is much like the semantic issue of whether going inside “warms you up” or not (e.g. it’s 80 F outside and 100 F inside), or whether a reduction in alkalinity (becoming less alkaline) is “acidification”, or a reduction in acidity (becoming less acid) is “alkalinification”.

  390. kim
    Posted Feb 25, 2008 at 12:40 PM | Permalink

    The article by Lorne Gunter in today’s National Post quotes two climate modelers who now believe that wind moving heat from the tropics to the Arctic provoked last summer’s melt, and not CO2-caused warming.
    =======================================

  391. John V
    Posted Feb 25, 2008 at 1:55 PM | Permalink

    #390 Sam Urbinto:
    My interpretations of warming/less warming:
    – “Less warming” means “warming at a slower rate”, not “cooling”;
    – Cooling means the temperature goes down;
    – Warming means the temperature goes up;
    – “Less warming in the SH than the NH” means the SH temperature increases less than the NH temperature, not that the SH temperature decreases.

    I don’t see any ambiguity in these particular statements.

  392. bender
    Posted Feb 25, 2008 at 2:01 PM | Permalink

    #392 How does the rate of change predicted by the models for the Antarctic compare to the rate of change observed?
    You have admitted now that the slope of observed temperature change in the Antarctic may not be different from zero. Is this what the models say as well?

    [See how you bring the Socratic approach upon yourself?]

  393. mikep
    Posted Feb 25, 2008 at 2:29 PM | Permalink

    Re http://www.climateaudit.org/?p=2708#comment-216442

    Thanks Alan, I have now found it and enjoyed it.

  394. Sam Urbinto
    Posted Feb 25, 2008 at 2:42 PM | Permalink

    For the sake of discussion, we should describe things in a neutral manner; unambiguity is a good thing.

    – A rise in the anomaly trend
    – The rise in greenhouse gases suspected to be from anthropogenic causes. {Or give a percentage of likelihood if you’re going to be more than suspecting things; if I say it’s 90% sure that the greenhouse gas rise is anthropogenic, then I should be able to give a reason for that number, and give others the opportunity to show that it’s not true, or that there’s no way to prove or disprove it, etc.}
    – The (tentative working hypothesis) that anthropogenic greenhouse gases are mainly responsible for the anomaly trend.
    – The assumption that the anomaly trend reflects rising overall temperatures
    – A decrease in the rate of rise in the anomaly trend
    – The oceans’ pH is becoming less alkaline
    – The anomaly falling from its high point
    – The anomaly reaching a new high point
    – The anomaly was down this month/year, but the trend is still up

    etc

    Look, we all know, I would think, that when talking about models, “model X says Y” or “this very likely thing” rather has a built-in disclaimer that the model or estimate may be underestimating or overestimating the number, or that the number may not even exist; and that, in the face of a lack of concrete numbers, something put forth as an opinion is known to be just an opinion. For heaven’s sake, can we stop arguing between the opinion that the number is more likely correct versus most likely incorrect? All this is doing is pitting “I’m pretty sure it’s right” against “We don’t know” (and the occasional “I’m pretty sure it’s wrong”). Do we endlessly need to rehash questions with no answers in a battle over opinions and perceptions and unstated assumptions?

  395. John V
    Posted Feb 25, 2008 at 2:47 PM | Permalink

    #393 bender:
    My comments were not related to the models. We were merely talking about definitions of less warming vs cooling. I was careful not to step outside those boundaries.

    *You* claimed a negative trend (cooling) in the Antarctic. For the third time, do you have a reference?

  396. bender
    Posted Feb 25, 2008 at 2:52 PM | Permalink

    My comments were not related to the models. We were merely talking about definitions of less warming vs cooling. I was careful not to step outside those boundaries.

    I know. Now I’m asking you for the information that actually matters: how do the observed “trends” compare to the modeled trends.

  397. Sam Urbinto
    Posted Feb 25, 2008 at 2:58 PM | Permalink

    I would contend that grouping samplings of temperatures into averages of averages of averages … is just about the same as modeling things.

    Especially since we have pretty much lost 5% or more of the surface at the two poles (sparse measurements, no satellite coverage, etc.).

  398. bender
    Posted Feb 25, 2008 at 2:58 PM | Permalink

    *You* claimed a negative trend (cooling) in the Antarctic. For the third time, do you have a reference?

    Did I? I thought it was someone else who said that. You can ask a 17th time for a reference if you like. If I didn’t give it to you the first time, I likely don’t have one handy.

  399. bender
    Posted Feb 25, 2008 at 3:13 PM | Permalink

    At a much larger scale, ENSO is seen through a large part of the Pacific Ocean on a time scale of about 1 year. Models can reproduce the ENSO cycle but not its timing or the intensity of a single cycle. Cycles at this scale are a key reason why model ensembles are required to capture the range of physical variability.

    You are talking about heat flow through a pathway over time. I am talking about the pathway itself. All such pathways, in fact. You think these are fixed?

    Admittedly, the error bars on the ensembles are sometimes too large to be useful.

    If the error bars on the model trends are huge, then what does this say about an observed “trend”? As important: what proof do you have that the model error bars are large enough? [You saw how this played out with the paleo data and Loehle & McCulloch (2008). I dare say you’ve got the same problem here.]

    On a much larger scale, we are now in the midst of a 30-year warming trend that covers most of the globe. IMHO, such a large scale variation is unlikely to be caused by random chance or a mystery phenomenon. IMHO, a 30-year trend is well within our ability to forecast and understand.

    Guess how much your HO is worth to me. Guess the trend as well.

    As I read Leonard, at your suggestion, I believe he would agree.

    You think YOU can GUESS what Leonard thinks? He is a genius, and you are John V. I think you should ask him, not assume you can read his mind. When John V BELIEVES something, my skepticism mounts.

  400. bender
    Posted Feb 25, 2008 at 3:33 PM | Permalink

    *You* claimed a negative trend (cooling) in the Antarctic.

    Hmm. Looking over the thread – no, I didn’t. I echoed someone else’s sentiment in #259, merely stating that I had wondered about your earlier “money quote” – where you indeed did attempt to make an inference about model trends vs observed trends.

    So once again: it is you, John V, caught back-pedaling from a half-truth, trying to support what little is left of your argument by claiming that I should provide data whereas you need not. Dude, you’re the one with the “money quote”. Not me. Your double standard is showing.

    And to answer Tom Vonk’s #380 (“why bother?”): yes, John V is a waste of time. Normally I ignore him too. But I could not resist tearing a strip off him when he tried to suggest Steve M was not even-handed. I will stop tearing when he apologizes. And admits that he is guilty of the exact crime he accuses Steve M of.

  401. MarkW
    Posted Feb 25, 2008 at 3:52 PM | Permalink

    Averaging a bunch of models, each of which individually can’t reproduce the climate, is able to reproduce the climate?????

    A new application of the law of large numbers?

  402. MarkW
    Posted Feb 25, 2008 at 3:53 PM | Permalink

    The PDO is a well-known source of 30-year trends.
    How do you know there aren’t longer-period oscillations yet to be discovered? The PDO was unknown only a decade ago.

  403. John V
    Posted Feb 25, 2008 at 3:58 PM | Permalink

    I appear to have offended many by questioning Steve McIntyre’s motives.
    In the interest of raising the signal-to-noise ratio I humbly apologize.

  404. bender
    Posted Feb 25, 2008 at 4:01 PM | Permalink

    #405
    Read all of #401:

    *And* admits that he is guilty of the exact crime he accuses Steve M of.

  405. John V
    Posted Feb 25, 2008 at 4:10 PM | Permalink

    bender:
    Gracious as always.
    You keep up the noise, I’m going back to signal.

  406. bender
    Posted Feb 25, 2008 at 4:15 PM | Permalink

    Yes: John V = signal; bender = noise. How gracious of you.
    There’s only one problem with your signal, John V: it’s so polluted with error that it is nothing but noisy pseudo-signal. Admit your errors and I will leave you alone.

  407. Peter Thompson
    Posted Feb 25, 2008 at 4:53 PM | Permalink

    John V:

    #277 Peter Thompson:
    Your sarcasm aside, the average global temperature swing on an ice-age cycle is generally well accepted and is smaller than at the poles. Did your “gut instinct” kick in when Hans Erren suggested that the effect of CO2 on ice ages was “in the noise”?

    John, prior to reading Bender’s and Tom Vonk’s dismissal of you, I spent (too much) time trying to ascertain whether you were accurate or talking through your hat (you were). It is by no means well accepted what the “global” temperature is during an ice age, and by extension what the peak-to-peak variance is. That was arm-waving. Using that number of 4C to do any math claiming significance greater than +/- 5C is unphysical, unscientific and unsupportable.

    When looking at the exposé of Hansen’s GISS slop going on in the most recent threads, it is clear that “global” temperature is currently useless for doing math with any significance beyond the ones place.

    I humbly apologize.

    That is the most risible statement you have made to date.

  408. Jud Partin
    Posted Feb 25, 2008 at 4:57 PM | Permalink

    #400 bender

    You are taking about heat flow through a pathway over time. I am talking about the pathway itself. All such pathways in fact. You think these are fixed?

    Are you suggesting that there are no dynamic modes of variability in the earth’s climate system? Only random variability?

    (meant as a serious question that hopefully will not elicit a one-word answer)

  409. bender
    Posted Feb 25, 2008 at 5:13 PM | Permalink

    #409
    This is very far OT. The topic was GT … until John V led the thread astray.

    In general Steve M does not encourage people to discuss their pet theories – unless they are grounded in the literature. Ask a question about a particular paper on this topic in unthreaded and I will answer in a thread.

    [The short answer would go something like: yes, there are dynamic modes of variability. But my question back to you would be whether these dynamic modes behave realistically. Which would then lead you to review that literature for us and write a thread opener for Steve M to post.]

  410. Sam Urbinto
    Posted Feb 25, 2008 at 5:30 PM | Permalink

    If you want to know how much energy is in the air at some spot (taking it for granted that the measuring device itself is not directly affected by wind, sun, rain etc.), the value of that data becomes moot if you don’t know what the temperature, humidity and wind are at that spot.

    If I read a 50F min at 40% humidity and 5 MPH NNE winds, and a 70F max, calm, at 80% humidity on day X of year Y, and the next year that same day is 50F, 60% humidity, calm, and 70F, 20% humidity, 20 MPH SSE, what does the fact that both days had a mean of 60F tell me? How about the next year being 50 F, 80%, 25 MPH and 70 F, 100%, 5 MPH? And the next 50 100 20 and 70 100 20, and the next 50 0 0 and 70 50 5, and so on and so forth.

    Or even ask yourself: what if, at 10 feet up and 50 feet away, there’s a 5 F difference?

    This is just a bunch of meaningless noise as far as anyone knows.

    Here, random metro area; home of The Masters, Martinez GA (Yes, it’s pronounced mar-tin-ez) and Augusta/North Augusta GA:

    62.3 F 55% S 1
    64.4 F 33% S 0
    62.8 F 49% S 2
    64.4 F 49% SSW 1
    66.1 F 45% S 0
    66.6 F 30% S 0

    Now, in nearby Edgefield SC, it’s 62.1, 53%, NNE 1. In fact, today the high was 68.9 (9% humidity, 3 MPH wind), and the low is 41.9 at 39%.

    So the question is what the heck you get out of what “the temperature” is right now in this area in the first place. Then: does the air have as much energy at 64.4 F, 49%, 2 MPH S, or at 64.4 F, 33%, calm?

    But hey.
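
    Sam’s energy question has a standard back-of-envelope answer via moist enthalpy, h = cp·T + L·q. A minimal sketch, with the fixed surface pressure and the constants as illustrative assumptions:

        # Moist enthalpy of air: same thermometer reading, different energy.
        import math

        def moist_enthalpy_kj(temp_f, rh_percent, pressure_hpa=1000.0):
            """Approximate moist enthalpy, kJ per kg of dry air."""
            t_c = (temp_f - 32.0) * 5.0 / 9.0
            # Saturation vapor pressure (hPa), Magnus approximation:
            e_sat = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))
            e = rh_percent / 100.0 * e_sat
            q = 0.622 * e / (pressure_hpa - e)    # mixing ratio, kg/kg
            return 1.005 * t_c + 2501.0 * q       # cp*T + L*q

        print(f"64.4 F at 49%: {moist_enthalpy_kj(64.4, 49):.1f} kJ/kg")
        print(f"64.4 F at 33%: {moist_enthalpy_kj(64.4, 33):.1f} kJ/kg")

    The two readings differ by roughly 5 kJ/kg: identical temperature, measurably different energy content, which is Sam’s point.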

  411. Sam Urbinto
    Posted Feb 25, 2008 at 5:33 PM | Permalink

    Oh, and I should mention also that monthly readings for Edgefield were

    High 85.5 100% 6 MPH SSW
    Low 28.3 20% –

    Average 52.2 64.3% 1.3 MPH

    Today’s been as low as 41.9 and high as 68.9 — Average 53.1

    So tell me about the current temp of 62.1 — OMG what do we do, it’s above average!!!

  412. DeWitt Payne
    Posted Feb 25, 2008 at 6:21 PM | Permalink

    If I had known that a simple request for confidence intervals to back up an assertion would take up this much bandwidth, I wouldn’t have asked. *sigh*

    John V #286:

    Many posts ago, I said: “The southern hemisphere is *not* cooling (even from 60S to the pole). It is warming less quickly than the north (over any time period that is reasonable for discussing climate).”

    This statement is technically correct if you include ‘not warming at all’ in the meaning of ‘warming less quickly’, which seems reasonable. No one has yet posted confidence intervals that would cause the rejection of the hypothesis that the SH is not warming at all. So no one can say that the SH is either warming or cooling. The whole issue is moot anyway, imo, because not warming at all does not falsify model results or enhanced greenhouse warming theory.

  413. bender
    Posted Feb 25, 2008 at 6:42 PM | Permalink

    #413 Disagree, DWP.
    1. That the request takes this long and *still* goes unfulfilled is very informative. I won’t say of what.
    2. The weak Antarctic warming appears to falsify Hansen’s 1988 scenario A, possibly his B as well. I suspend judgement on B. You can say this is “no surprise” after the fact. But it would have surprised the modelers back in 1988, no? So what did they have wrong? Has this since been “fixed”?

  414. Neal J. King
    Posted Feb 25, 2008 at 9:19 PM | Permalink

    lucia, #311, #361:
    jae, #329:
    UC, #352:
    Tom Vonk, #265, #374:
    Spence_UK, #366:

    – The Stefan-Boltzmann formula would roughly apply to the Earth if you were talking about the radiation very near the surface. At any given frequency in the IR, if you get away from the surface of the Earth by optical depth ~ 1, the intensity will not match the Planck formula, and hence the total emitted power will not match the SB. lucia, you know this. I don’t know why it’s a point of contention.

    – The point is not my understanding of the Navier-Stokes equation. My point is that sometimes the answer to a problem doesn’t depend on the fine details of behavior of the dynamical variables you happen to first be thinking about, but upon their integrals. That is why they are called “integrals of motion”: momentum, energy, angular momentum, etc. In my example of the bath tub, the relevant integral was the amount of water, from which I could determine the average water-level, without any concern for the Navier-Stokes equation. In the famous joke, a mathematician quipped, “The way I overcome an obstacle is to go around it.”

    My interest happens to be in the question of the radiative forcing: When you double the concentration of C-O2, how much does that imbalance the radiation budget? My conclusion is that it depends on the C-O2 density profile, the temperature profile, and to some extent on the water-vapor profile. These profiles in turn depend on the adiabatic lapse rate, which is a matter of dynamical stability/instability. It also depends on the cloud cover, but this can be bracketed. It does not depend on the chaotic details of the turbulent behavior of the gases, considered as fluids. It just doesn’t.
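
    For readers who want a number to anchor this paragraph, the widely used simplified expression of Myhre et al. (1998) for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m², is itself a fit to detailed radiative-transfer calculations rather than a first-principles result:

        # Simplified CO2 radiative forcing (Myhre et al., 1998 fit).
        import math

        def co2_forcing(c_ppm, c0_ppm=280.0):
            """Approximate forcing (W/m^2) for a CO2 change from c0 to c."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        print(f"doubling CO2: {co2_forcing(560.0):.2f} W/m^2")  # about 3.7 W/m^2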

    There is also, of course, the question of how the Earth/oceanic/atmospheric system responds to the extra energy input of the radiation. This is a much more complex problem, about which I have little to say, since I haven’t worked with these models, running them over a range of parameters, etc. Maybe someday. But in the meantime: Analysis = divide & conquer.

    Finally: It should be remembered that the phenomenon of global warming was not an empirical observation, but was first a theoretical prediction, going back over 100 years. It was a prediction that increase in C-O2 would lead to an increase in the global average temperature (not, as some would have it, in a mythical “temperature of the Earth”). This prediction has largely been borne out, taking into account:
    – volcanic eruptions
    – sulfate aerosols emitted by the unscrubbed burning of coal
    – (rather minor) solar-flux variations
    – the “noise level” due to weather variations

    Thus, the climate scientists have set forth their theory. According to the philosopher of science Karl Popper, opponents have the duty of disproving it. Or, in other words, as they say in my line of work, “You can’t beat something with nothing.”

  415. Raven
    Posted Feb 25, 2008 at 10:12 PM | Permalink

    Neal J. King says:

    This prediction has largely been borne out, taking into account:
    – volcanic eruptions
    – sulfate aerosols emitted by the unscrubbed burning of coal
    – (rather minor) solar-flux variations
    – the “noise level” due to weather variations

    The prediction only appears to be borne out because:
    – aerosols are very difficult to measure in retrospect, which gives modellers a lot of latitude when it comes to estimating their values
    – solar effects are assumed to be insignificant despite the strong historical evidence that suggests otherwise
    – climate “noise” is assumed to occur over decades but not over centuries or millennia

    A reasonable person looking at the same data but with different preconceptions could easily come up with an explanation that does not include a significant CO2 factor.

  416. Neal J. King
    Posted Feb 25, 2008 at 10:37 PM | Permalink

    #416, Raven:

    Here are the calculations from the IPCC reports: http://www.grida.no/climate/ipcc_tar/wg1/figspm-4.htm

    – Please explain your confidence in the latitude of modeling aerosols.
    – Solar effects have been taken into account explicitly.
    – Even “noise” has a cause: for example, if I have a pendulum clock with a rusty mechanism sitting in an elevator, when the elevator goes up, the average z-coordinate of the pendulum bob will rise, even though the bob oscillates (irregularly) up & down. But the oscillation will be limited by the extension and the period of the pendulum.
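
    Neal’s pendulum-in-an-elevator picture is easy to simulate: a steady rise plus a bounded, irregular oscillation. The numbers below are illustrative only:

        # Trend plus bounded irregular oscillation: the mean rises anyway.
        import math
        import random

        random.seed(2)
        phase = 0.0
        for step in range(0, 60, 10):
            phase += random.uniform(0.5, 1.5)        # "rusty", irregular swing
            z = 0.1 * step + 0.5 * math.sin(phase)   # elevator rise + bounded bob
            print(f"t = {step:2d}   z = {z:+.2f}")

    The oscillation never exceeds its bound, so the rise dominates at long times; whether climate “noise” is bounded in this way is, of course, exactly what Tom Vonk and bender dispute.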

    If a reasonable person could easily come up with an explanation that does not involve C-O2, then why hasn’t anyone done it? Oh, there is a simple requirement: The explanation has to be consistent with everything else we know about physics, chemistry, and the universe.

  417. bender
    Posted Feb 25, 2008 at 10:43 PM | Permalink

    opponents have the duty of disproving it

    No. Proponents have equal responsibility. It is a shared responsibility.

  418. John V
    Posted Feb 25, 2008 at 10:49 PM | Permalink

    #413 DeWitt Payne:
    I got distracted from your original request. Looking back, it was a pleasant conversation in the beginning.

    I took a few minutes tonight to calculate the 95% confidence intervals for the trends. These ranges are calculated using the annual data I originally linked in #286:

    http://www.climateaudit.org/?p=2708#comment-214825

    The confidence intervals consider the auto-correlation in the data but not the uncertainty in the measurements themselves. (That’s a whole other issue). The NH warming is statistically significant on both 10- and 15-year timescales. The SH warming is not statistically significant on either time scale, as you suggested in #289.

    The Antarctic warming is also not statistically significant (t=~1.8 for 15yr trend, t=~1.9 for 10yr trend), so I was wrong on that hunch.

    The difference between NH and SH trends is not significant at the 95% level.

    I am curious about the longer-term trends, but have spent too much time on this (both directly and tangentially) already.

    =====
    15-Year Trends (degC/decade):

    Latitude__Trend_(Limits)
    NorthHemi: +0.37 (+0.24 to +0.51)
    64N – 90N: +1.01 (+0.53 to +1.49)
    44N – 64N: +0.54 (+0.30 to +0.79)
    24N – 44N: +0.36 (+0.12 to +0.60)
    EQU – 24N: +0.14 (-0.03 to +0.31)
    EQU – 24S: +0.12 (-0.08 to +0.32)
    24S – 44S: +0.21 (+0.12 to +0.29)
    44S – 64S: +0.00 (-0.13 to +0.13)
    64S – 90S: +0.49 (-0.06 to +1.05)
    SouthHemi: +0.16 (-0.03 to +0.28)

    =====
    10-Year Trends (degC/decade):
    Latitude__Trend_(Limits)
    NorthHemi: +0.28 (+0.06 to +0.51)
    64N – 90N: +1.60 (+0.69 to +2.52)
    44N – 64N: +0.51 (+0.23 to +0.78)
    24N – 44N: -0.10 (-0.30 to +0.10)
    EQU – 24N: +0.14 (-0.26 to +0.53)
    EQU – 24S: 0.00 (-0.47 to +0.47)
    24S – 44S: +0.05 (-0.05 to +0.15)
    44S – 64S: +0.05 (-0.22 to +0.31)
    64S – 90S: +0.66 (-0.04 to +1.36)
    SouthHemi: +0.08 (-0.18 to +0.34)
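
    For readers who want to reproduce this kind of calculation, here is a minimal sketch of an autocorrelation-adjusted trend confidence interval, assuming the common AR(1) effective-sample-size correction. This is the standard textbook adjustment, not necessarily John V’s exact method, and the data below are synthetic:

        # OLS trend with an AR(1)-adjusted ~95% confidence interval.
        import numpy as np

        def trend_ci(y, z=1.96):
            n = len(y)
            t = np.arange(n, dtype=float)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
            n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
            se = np.sqrt(np.sum(resid ** 2) / (n_eff - 2)
                         / np.sum((t - t.mean()) ** 2))
            return slope, slope - z * se, slope + z * se

        rng = np.random.default_rng(0)
        y = 0.02 * np.arange(15) + rng.normal(0, 0.1, 15)   # synthetic anomalies
        b, lo, hi = trend_ci(y)
        print(f"trend = {b:+.3f} ({lo:+.3f} to {hi:+.3f}) per year")

    With strong positive autocorrelation, n_eff shrinks and the interval widens, which is exactly why short polar series so rarely clear significance.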

  419. bender
    Posted Feb 25, 2008 at 10:57 PM | Permalink

    The Antarctic warming is also not statistically significant (t=~1.8 for 15yr trend, t=~1.9 for 10yr trend), so I was wrong on that hunch.

    Hunch?! That was a vehement, bandwidth-consuming assertion!

  420. bender
    Posted Feb 25, 2008 at 11:36 PM | Permalink

    “Hey bender, thanks for urging me on to calculate those confidence intervals. It didn’t take as much effort as I thought. And you were right about the trend. My bad.”

    “No problem, John V. I’m sure you won’t make that kind of mistake again. Now you see why it’s important to be self-critical, not just criticizing others.”

    “Yep. I see Neal J. King making that very mistake over in another thread.”

    “Yep. Say, did you ever locate those Antarctic model trend data that you were hunching about?”

    “Good question, bender, I’ll get back to you on that. With confidence intervals this time! :)”

    “You’re all class, John V. :)”

  421. Bruce
    Posted Feb 25, 2008 at 11:49 PM | Permalink

    HADCRUT3 says DOWN.

  422. John V
    Posted Feb 25, 2008 at 11:56 PM | Permalink

    #421 bender:
    I’m trying to ignore you, but this is getting ridiculous.

    I admitted from the start that the trends were likely not significant. No vehemence, at least not from me. My point was that the claims of cooling made by others (I don’t remember or care who anymore) were incorrect.

    Three days prior to you entering the conversation, I stated that modern climate models predict less warming in the SH than in the NH. My reference was the article written by Spencer Weart at RealClimate that I originally linked.

    I did not investigate if the differences in the model predictions were statistically significant.

    Have your last word and then leave it alone already.

    =====
    #422 Bruce:
    Where have you been? SH trends came up days ago. You’re usually here with your graphs within a few minutes.

    bender, do you want to ask about the uncertainty on a 4-5 year negative trend that includes only 1 month in the final year, or should I?

  423. bender
    Posted Feb 25, 2008 at 11:57 PM | Permalink

    Suppose these “trends” (=absence of trend south of 44N) in #419 do not contradict the models. Does this mean that the models therefore “fit” the observations? And if the models “fit” the observations in this way, would this in turn mean AGW is therefore true?

    What’s wrong with this picture?

  424. bender
    Posted Feb 26, 2008 at 12:20 AM | Permalink

    From #191:

    Severian and Joe Black:

    RealClimate recently did a write up on Antarctica:
    http://www.realclimate.org/index.php/archives/2008/02/antarctica-is-cold/

    Here’s the money quote:
    “Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.”

    I didn’t really like your reply to Joe Black and Severian, John V. I thought you were dismissive, autocratic.

    You called this a “money quote”. I took this to mean a ringing endorsement. Are you now backing away from that characterization? Are you no longer endorsing the RC statement? Please clarify.

    I have noted elsewhere that a neutral Antarctic is inconsistent with Hansen et al. (1988). Do you dispute this? Or do you suggest that Hansen (1988) is now irrelevant?

  425. bender
    Posted Feb 26, 2008 at 12:42 AM | Permalink

    When you read the “money quote”, John V, were you aware of the Hansen (1988) result? Or were you – as usual – simply parroting what you’d read on RC without questioning? There is a reason, you know, why they worded the “money quote” the way they did. They were dodging an issue through pre-emptive word-smithing. “Long time” was meant to exclude Hansen ’88 – for a reason. Failed hypothesis tests are tests that can never happen.

  426. bender
    Posted Feb 26, 2008 at 1:02 AM | Permalink

    Joe Black #189 asks:

    From what I’ve seen, the CO2 levels in Antarctica and at Mauna Loa are essentially equivalent. The NH temp anomalies are higher than in the SH. Maybe it’s not the CO2 then? Maybe the “climate change” is different over the Oceans than over the Land. That would make it not exactly “global” wouldn’t it?

    [Ans: The CO2 effect is global, but, yes – it is overridden locally by other non-global effects. Land cover is one such factor. These effects are strong, but they have a cap on them and they are reversible.]

    #190 Severian offers:

    But this whole line of discussion cuts to the heart of one of the main arguments against CO2 driven AGW in my mind. I don’t doubt for a second that land use changes can and are driving climate change on regional and summed regional levels, and outstripping CO2. And, unlike CO2, we probably can (if we decide to drop this CO2 nonsense and pay attention to it) do something about land use changes to ameliorate the effects if we study them and think of options. I’ve long been of the opinion that addressing regional climate changes and warming makes a lot of sense, much like paying attention to a factory spewing real pollution in your backyard is both doable and makes sense.

    [Ans: It is a good question how much cooling could be induced by greening up the asphalt environment we’ve created in the past 50 years. I’m not sure at this point we even know for sure how much heating is caused by UHI effects.]

    And next, in #191, things mysteriously go off the tracks. But see how easily #189 and #190 could have been answered, without dredging up unrelated RC junk in #191?

    Why did he do it?

  427. bender
    Posted Feb 26, 2008 at 1:21 AM | Permalink

    John V in #191 missed Severian’s basic original point in #188 that with NH dominated by land, maybe there is a stronger UHI effect contaminating the NH series. Given what we have now seen of Peru and Brazil, maybe John V would care to revisit Severian’s original point.

    I note that Severian and Joe Black were addressing Judith Curry’s #143:

    land use changes are very important for regional climate

    Which is pretty hard to dispute. S & JB were basically wondering if anyone had convincingly pinned down the NH-wide UHI contribution to the overall NH warming trend.

    And that IMO is a damn good question that ought not be dismissed with RC dribble.

  428. Neal J. King
    Posted Feb 26, 2008 at 1:56 AM | Permalink

    #418, bender:

    Not really: It’s a proponent’s role to make the best case s/he can for it – but the key point of Popper’s insight is that a scientific theory can never be proven true – it can, at best, fail to be proven false.

    And it is obviously going to be much easier for an opponent to disprove the theory than for the proponent. The proud parent of the theory is going to have blind spots, and more invested in its “correctness”, than anyone else. And even the most self-critical scientist is going to have taken his/her best shot at proving himself/herself wrong before publication, not after.

  429. Tom Vonk
    Posted Feb 26, 2008 at 4:35 AM | Permalink

    Neal King # 415

    Sorry, you are still not getting it. It goes much deeper than bathtubs.

    My point is that sometimes the answer to a problem doesn’t depend on the fine details of behavior of the dynamical variables you happen to first be thinking about, but upon their integrals. That is why they are called “integrals of motion”: momentum, energy, angular momentum, etc. In my example of the bath tub, the relevant integral was the amount of water, from which I could determine the average water-level, without any concern for the Navier-Stokes equation

    I already falsified your bathtub example by adding ONE simple assumption in # 374. Everything you say becomes wrong
    with this one assumption added. I am not sure whether you understood the point, but I observe that you are not able to comment on it.
    You also completely misinterpret the meaning of “integrals”.
    They are precisely called integrals because a process of integration is involved. A trajectory doesn’t depend on an integral, it is an integral. A statement about integrals is always much stronger and more constrained than a statement about the laws of nature (differential local forms), because it can only be made if you are able to integrate. All of this is basic ODE and PDE material.
    Whether you like it or not, all laws of nature describe what happens on a very small scale during a very short interval of time, and these “details” are the laws of nature.
    Do you realize that saying “without concern for N-S” is equivalent to saying “without concern for energy and momentum conservation”?
    Because that’s all N-S is about 🙂
    Sometimes the laws of nature integrate easily, sometimes with great difficulty, and sometimes not at all. All is in the “sometimes”.
    Did you know that the 3-body gravitational problem was chaotic? Well, it is.
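
    Tom’s point about small-scale detail can be made concrete without N-S at all. A minimal sketch using the canonical Lorenz-63 system (chosen only for brevity; the Euler step, duration and 1e-8 perturbation are illustrative assumptions) shows two nearly identical trajectories diverging completely:

        # Deterministic chaos: two Lorenz-63 trajectories 1e-8 apart diverge.
        def step(x, y, z, dt=0.01, s=10.0, r=28.0, beta=8.0 / 3.0):
            return (x + dt * s * (y - x),
                    y + dt * (x * (r - z) - y),
                    z + dt * (x * y - beta * z))

        a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
        for i in range(4000):                     # 40 time units, crude Euler
            a, b = step(*a), step(*b)
            if i % 1000 == 0:
                print(f"t = {i * 0.01:5.1f}   separation = {abs(a[0] - b[0]):.2e}")

    The separation grows from 1e-8 to the size of the attractor itself; the “details” are not a perturbation you can average away from the trajectory.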

    It does not depend on the chaotic details of the turbulent behavior of the gases, considered as fluids. It just doesn’t.

    This is a touching declaration of faith, but not more than that.
    Natural convection, precisely, is governed by strongly non-linear processes. As you can’t decouple radiation from convection, you are of course unable to support this faith with anything substantial. Ever heard of Rayleigh-Bénard? Probably not. No examples of empirical relationships that work in extremely limited validity domains help here.

    But in the meantime: Analysis = divide & conquer.

    A very wrong illusion, yet one that often infects people who never got to understand strongly non-linear problems. The statement itself postulates that problems can be separated and that the combination of solutions of sub-problems is a solution of the problem.
    Sorry to disappoint you, but that only works if the problem is linear. In a weaker sense it may work if the variables can be separated.
    Why do you think that we don’t know whether a unique continuous solution of N-S exists? Because the variables can’t be separated and the problem is strongly non-linear.
    Postulating that any non-linear problem can be solved by a linear perturbation theory is incredibly naive.
    You should visit Dan Hughes’ site, because I am afraid that you don’t really understand the subject.

    It was a prediction that increase in C-O2 would lead to an increase in the global average temperature (not, as some would have it, in a mythical “temperature of the Earth”). This prediction has largely been borne out taking into account:
    – volcanic eruptions
    – sulfate aerosols emitted by the unscrubbed burning of coal
    – (rather minor) solar-flux variations
    – the “noise level” due to weather variations

    I won’t elaborate much, because I already said everything in # 265, and it is certainly not an irrelevant bathtub example that added or subtracted anything scientific to the argument.
    As everybody should know by now, the “global average temperature” is unphysical in the sense that there is no physical law that depends on it. On top of that, it is irrelevant for the dynamics, because there is an infinity of temperature distributions with the same average corresponding to different dynamics. Lastly, there is no reason that an arbitrary integration domain (the earth’s surface) be more significant than any other arbitrary integration domain. You can bet that if somebody were able to prove that there is a reason, we would all hear about it.
    It will not happen, because it is impossible, and it will only stay hand-waving.
    Of course you did right to put “” around “noise level”, because, as I have also already shown in # 265, there is no randomness (or “noise”) in the climate. It is not by repeating the same unproven claim that you will make it true.
    And to avoid repeating the same things ad nauseam, read at last the Ruelle and Takkens paper “On the nature of turbulence” and then come back, hopefully better informed about what happens in the atmosphere and with something more than declarations of faith.

  430. Neal J. King
    Posted Feb 26, 2008 at 4:48 AM | Permalink

    #430, Tom Vonk:

    I’m afraid that you are missing the point.

    Once something is known to be an integral of motion, it can be used to solve a problem. It’s not necessary to prove it over and over again.

    I’m quite familiar with the concepts of turbulence and chaos, and the 3-body problem. The question of radiative forcing does not have to be solved to that level of detail to be usefully pinned down.

    And I read Takens & Ruelle (which you keep misspelling) decades ago.

  431. MarkW
    Posted Feb 26, 2008 at 6:06 AM | Permalink

    Didn’t a recent study find that the various models used aerosol loadings that varied by a factor of 7?

  432. Tom Vonk
    Posted Feb 26, 2008 at 6:19 AM | Permalink

    And I read Takens & Ruelle (which you keep misspelling) decades ago.

    Apologies to Floris Takens if he comes by, as for some reason I might mispel his name time and again 🙂
    I doubt your assertion, because as I have already shown, you make so many uninformed or outright wrong statements that you have obviously never looked at strongly non-linear processes; besides, somebody who admits that he is not familiar with N-S would hardly be familiar with the more complex aspects of chaos theory.
    Why you confuse particular invariant functions (integrals of motion) in particular cases with the problem of N-S solutions, which has nothing to do with it, is anybody’s guess.
    Why you say that people don’t need to be concerned with energy and momentum conservation is stranger still.
    In any case I notice that you avoid dealing with the scientific arguments addressing computability and stability questions in the climate debate, focusing only on strawmen and irrelevant examples.
    Naive faith that any problem can be solved by a deterministic linear perturbation theory, hand-waving with “noise that cancels” and bathtubs simply won’t do.
    However, if you have something scientifically substantial to say about the 3 main points I have already developed, I am all ears.

  433. Raven
    Posted Feb 26, 2008 at 6:23 AM | Permalink

    Neal J. King says:

    – Please explain your confidence in the latitude of modeling aerosols.
    – Solar effects have been taken into account explicitly.
    – Even “noise” has a cause: for example, if I have a pendulum clock with a rusty mechanism sitting in an elevator, when the elevator goes up, the average z-coordinate of the pendulum bob will rise, even though the bob oscillates (irregularly) up & down. But the oscillation will be limited by the extension and the period of the pendulum.

    See Figure 2.23 in http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter2.pdf

    You should notice that the cloud albedo effect is the dominant aerosol effect. If you scroll up to Figure 2.22 you will see the huge uncertainty bars (-0.25 to -2.0) for that effect. The caption of Figure 2.23 notes that different models use different values, which are likely within the uncertainty range.

    Here is an example of a simple analysis that matches the historical trend by choosing aerosol forcings that are within the uncertainty range: http://www.ferdinand-engelbeen.be/klimaat/oxford.html

    As you can see, the sensitivity to CO2 is very dependent on the assumptions regarding aerosols.

    WRT solar effects – recent research by Dr Svalgaard suggests that TSI almost never changes – even during the Dalton minimum. This implies either that TSI is the wrong way to measure the effect of the sun on climate, or that the climate’s sensitivity to TSI variations is huge. Either case demonstrates that the current treatment of solar effects in climate models is grossly inadequate.

    People are working on alternate explanations such as cosmic rays; however, scientists who attempt this must be extremely persistent, because the science establishment is not interested in hearing alternate explanations at this time.

  434. Neal J. King
    Posted Feb 26, 2008 at 6:28 AM | Permalink

    #433, Tom Vonk:

    As I stated several times, I am interested in the radiative forcing issue, which is very little affected by turbulence issues.

    The issues you are raising don’t apply to the question of radiation, and don’t currently interest me.

  435. Raven
    Posted Feb 26, 2008 at 6:35 AM | Permalink

    Neal J. King says:

    Even “noise” has a cause:

    The planet is never in radiative equilibrium. TSI varies daily, yearly and over millennia – due to direct effects or random volcanic events. On top of this we have a huge heat reservoir called the oceans, which can absorb/release heat over decades. AGW alarmists have no problem accepting a 1000-year lag when it comes to releasing CO2 once the temperature rises at the end of an ice age. It makes no sense to argue that similar lags in heat release cannot occur.

    CO2 theory ignores these long-term noise effects because they are incalculable and inconvenient. However, ignoring them does not mean they don’t exist.

  436. Peter Thompson
    Posted Feb 26, 2008 at 6:37 AM | Permalink

    Neal,

    “As I stated several times, I am interested in the radiative forcing issue, which is very little affected by turbulence issues”.

    When you talk of “pinning down” radiative forcing, do you mean quantifying it?

  437. Spence_UK
    Posted Feb 26, 2008 at 7:01 AM | Permalink

    Neal says:

    My point is that sometimes the answer to a problem doesn’t depend on the fine details of behavior of the dynamical variables you happen to first be thinking about, but upon their integrals.

    Right. Sometimes it does. Sometimes it doesn’t. So doesn’t it make sense to spend some time investigating whether it applies in this case or not?

    It does not depend on the chaotic details of the turbulent behavior of the gases, considered as fluids. It just doesn’t.

    Oh. Apparently we’re not going to do any investigating. You just “know”. Just like John V’s “IMHO”. Articles of faith, not science. OK then, no reason to continue debating the point with you.

    But the sentence prior to this one was just gold:

    It also depends on the cloud cover, but this can be bracketed.

    Right. Bracketed with power-law scaling and exponential error growth from initial conditions, as observations and analyses seem to support. The power-law scaling renders this magical “weather noise” vs. “climate signal” breakpoint between 7 and 30 years moot. So the only possible saving grace would be if the exponential error growth, tamed by averaging at scale x (i.e. something like (e^x)/x), became small for large x. Good luck with that: (e^x)/x diverges as x grows.

  438. MarkW
    Posted Feb 26, 2008 at 7:12 AM | Permalink

    Solar effects have only been partly taken into account in the models. TSI is not the sum total of the sun’s impact on earth’s climate.

  439. beng
    Posted Feb 26, 2008 at 8:03 AM | Permalink

    Steve_M in 160 says:

    Her rationale was that in the 1990s Exxon funded people to foment dissent. So people in the trade had guns drawn. Thus when I wandered into the debate from out of left field in 2003, whatever my motives were, people assumed that I was another hired gun from Exxon and started shooting guns in all directions paying little attention to anything that Ross and I actually said.

    Seems like the ever-present vein of paranoia in the academic scene. My first question would be: how many of these hired guns were really in the pay of Exxon or some other “evil” organization?

    From what I’ve gathered over many yrs, prb’ly close to none. The omnipresent Exxon “hired gun” instead seems like a convenient & habitual label for any dissenter from the cause, even against fellow academics. Steve_M was obviously misidentified. How many others are simply trying to get to the truth? If academia is so easily self-deceived, how can their objectivity on science in general be trusted?

    This paranoia present in academia (& other tight-knit organizations) is troubling, but is common thru history. The abhorrent treatment over decades of Pat Michaels for speaking his mind at the Univ of Virginia is a perfect example.

  440. John V
    Posted Feb 26, 2008 at 8:52 AM | Permalink

    bender:
    I acknowledge that my original tone was dismissive. I had heard the “SH cooling disproves AGW” theory one too many times, but could and should have done better.

    As for the rest, it’s a bunch of tangential arguments that I was not making. I’m doing my best to resist. My statement referred to “modern” climate models. I would not consider Hansen1988 modern. That does not mean I don’t think a 20-year old model should be investigated and critiqued — only that I was talking about something else.

    You and I have a way of annoying each other. I’m going to take Anthony Watts’ advice from another thread and return to civility.

    Please drop this. Thanks.

  441. bender
    Posted Feb 26, 2008 at 9:28 AM | Permalink

    #429
    If you know Popper then you know that science progresses i.e. “self-corrects” fastest when self-criticism occurs at the individual level. Both before AND after publication.

    Self-deception (i.e. firm belief) is the biggest threat to rapid progress in science. The problem with your scheme of duelling laboratories is that it allows for too much self-deception. Science isn’t law. He who knows his data best is in the best position to criticize it, and to imagine alternative hypotheses to explain it. The majority of scientists will agree with me that a proponent has a responsibility to explore alternatives, and not just shabbily assembled straw men – real alternatives. But, hey, this might not be the custom in the speshul case of climate science.

    If your interest is narrowly focused on radiation, then you’re not interested in the AGW debate. Convection is the issue.

  442. Raven
    Posted Feb 26, 2008 at 9:39 AM | Permalink

    Bender,

    I generally find your posts quite interesting but I think you are being way too hard on John V.

    It is tough to be a lone voice in a forum of critics, but he is trying to be civil and admits mistakes (unlike some of the trolls that stop by).

  443. Bruce
    Posted Feb 26, 2008 at 9:40 AM | Permalink

    JohnV

    Not just the SH.

    January 2007 NH to January 2008 NH.

    A drop of .963C.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3nh.txt

  444. cce
    Posted Feb 26, 2008 at 9:42 AM | Permalink

    Unless it is their practice to include partial years, Hadley’s graphs in 422 are wrong. They include 2008, which is only January.

    Re: Hansen ’88
    In the paper, Hansen notes a difference between the GISS model and Manabe’s (which “probably arise from different heat transports by the oceans” of the models). Manabe’s model showed cooling around Antarctica “for the first few decades after an instant doubling of atmospheric CO2”

  445. bender
    Posted Feb 26, 2008 at 9:43 AM | Permalink

    #444 Ok, Raven. I have had my say. Battle’s over and the dust is clearing.

  446. John V
    Posted Feb 26, 2008 at 9:45 AM | Permalink

    #442 bender:
    Is this the claim you would like me to back up?
    “Modern climate models all predict less warming in the SH.”

    Let’s define the terms:

    1. “modern” models are those that were used in IPCC AR4;

    2. For testing significance, should we use the uncertainty in the mean of the ensemble (per Douglass et al 2007), or the 95% confidence interval of the ensemble values? In my mind, the ensemble range is more correct although it is a much stricter test for significance.

    3. What is the interval for calculating trends? 20 years?

    =====
    #445 Bruce:
    Yep, it sure was cold last month.
    Single month comparisons? Really?

  447. John V
    Posted Feb 26, 2008 at 9:46 AM | Permalink

    #447 bender:
    D’oh!
    We posted at the same time.
    Disregard my comment above if you’re willing to let it go. I don’t want to re-ignite anything.

  448. bender
    Posted Feb 26, 2008 at 9:49 AM | Permalink

    #446

    Hansen notes a difference between the GISS model and Manabe’s (which “probably arise from different heat transports by the oceans” of the models). Manabe’s model showed cooling around Antarctica “for the first few decades after an instant doubling of atmospheric CO2″

    “Probably”?

    What, did they figure it wasn’t really worth figuring out FOR SURE what caused the difference?

  449. John V
    Posted Feb 26, 2008 at 10:16 AM | Permalink

    #450 bender:
    Hey — what happened to #450.
    It was here a minute ago.
    SteveMc, did you delete it?

    Here are a couple of images from IPCC AR3 model runs which show less warming in the SH than in the NH:

    http://www.ipcc.ch/ipccreports/tar/wg1/fig9-2.htm
    http://www.ipcc.ch/ipccreports/tar/wg1/fig9-10.htm

    I do not know if the differences are statistically significant.
    The corresponding figures in IPCC AR4 are not linkable.

  450. bender
    Posted Feb 26, 2008 at 10:28 AM | Permalink

    #450 This from the guy who’s tired of hearing how the models deviate from observations in the SH.

    John V, do your linked maps of model output in #450 match Ken Fritsch’s Antarctic maps of observations?
    (If you need to put them on a common basis to make the comparison, then do it, o tired one.)

    And can we puh-leeze now move this to an Antarctic thread?

  451. Bruce
    Posted Feb 26, 2008 at 10:44 AM | Permalink

    JohnV

    single month comparisons

    Not single month. January was the most dramatic.

    Dec 2006 to Dec 2007 – A drop of .354
    Nov 2006 to Nov 2007 – A drop of .203
    Oct 2006 to Oct 2007 – A drop of .193

  452. John V
    Posted Feb 26, 2008 at 11:01 AM | Permalink

    #451 bender:
    This from the guy who’s tired of hearing how the models deviate from observations in the SH.
    What I said was “I had heard the “SH cooling disproves AGW” theory one too many times”. The argument about agreement between models and observations was yours.

    You had problems with two of my statements. First, that the SH is not cooling. Second, that models show less warming in the SH than the NH. I have dealt with both of them. If you would like to demonstrate that modern or old models do not agree with observations, please do.

  453. John V
    Posted Feb 26, 2008 at 11:03 AM | Permalink

    Bruce, I can only imagine the punishment I’d get doing those kinds of comparisons.
    You’re a lucky guy. 🙂

  454. Steven Mosher
    Posted Feb 26, 2008 at 11:36 AM | Permalink

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds.

  455. bender
    Posted Feb 26, 2008 at 11:36 AM | Permalink

    A dislike of criticism is a sure sign of a weak scientist.

  456. bender
    Posted Feb 26, 2008 at 11:39 AM | Permalink

    #455 What’s that from?

  457. Steven Mosher
    Posted Feb 26, 2008 at 11:42 AM | Permalink

    BENDER. The Arctic will warm without cause.

    Guess who wrote this:

    “It may be fruitless to search for an external forcing to produce peak warmth around 1940. It is shown below that the observed maximum is due almost entirely to temporary warmth in the Arctic. Such Arctic warmth could be a natural oscillation (Johannessen et al. 2004), possibly unforced. Indeed, there are few forcings that would yield warmth largely confined to the Arctic. Candidates might be soot blown to the Arctic from industrial activity at the outset of World War II, or solar forcing of the Arctic Oscillation (Shindell et al. 1999; Tourpali et al. 2005) that is not captured by our present model. Perhaps a more likely scenario is an unforced ocean dynamical fluctuation with heat transport to the Arctic and positive feedbacks from reduced sea ice.”

  458. bender
    Posted Feb 26, 2008 at 11:47 AM | Permalink

    #459 Hansen 1999 or 2006?

  459. Posted Feb 26, 2008 at 12:07 PM | Permalink

    @Neil–

    The Stefan-Boltzmann formula would roughly apply to the Earth if you were talking about the radiation very near the surface.

    Yes. That’s what I was saying, and I thought I was responding to someone who says this is not so. At the surface SB applies. That’s how S-B is used when discussing AGW.

    In the air itself, it doesn’t apply. So, it’s not used there.

    With regard to the NS, you just picked a bad example because you were mixing flow with flow.

    If your point is you don’t need a decent solution for the flow to get a good handle on the radiation: I believe you are correct.

  460. cce
    Posted Feb 26, 2008 at 12:12 PM | Permalink

    449:

    The purpose of the ’88 paper was not to compare and contrast different models. The cooling of Antarctica modeled by Manabe and Bryan and described in Weart’s RC article was specifically mentioned in Hansen’s ’88 paper as a difference between the two models. The Antarctic warming in Scenario B corresponds primarily to the Antarctic Peninsula, which is one of the fastest warming places on earth, if not the fastest.

  461. Neal J. King
    Posted Feb 26, 2008 at 1:01 PM | Permalink

    #437, Peter Thompson:

    Yes.

  462. Neal J. King
    Posted Feb 26, 2008 at 1:12 PM | Permalink

    #442, bender:

    My point is that one can’t depend upon the proponent of a theory to find the holes: It goes against human nature. What’s so hard to understand about that?

    As I stated before, analyzing a problem means breaking it into parts that can be separately attacked, rather than getting them all mixed up. Convection is one part of the problem, which can be taken into account by using the adiabatic lapse rate for the temperature profile.
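
    (For concreteness, the dry adiabatic lapse rate mentioned here is Γ = g / c_p; a two-line check with standard values for dry air:)

    g, cp = 9.81, 1004.0  # m/s^2 and J kg^-1 K^-1 for dry air
    print(1000 * g / cp)  # ~9.8 K per km of altitude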

  463. Neal J. King
    Posted Feb 26, 2008 at 1:20 PM | Permalink

    #460, lucia:

    I think we see eye-to-eye on this situation. My analogy to the bathtub was not intended to show how to replace a Navier-Stokes computation, but how to avoid it, if the problem you are interested in doesn’t require it.

    And, as we’ve touched upon before, the radiative transfer problem alone is challenging enough to be worth some study without dragging all other aspects of the problem in as well. Different aspects can be isolated and analyzed on their own.

  464. Neal J. King
    Posted Feb 26, 2008 at 1:35 PM | Permalink

    #434, Raven:

    I believe the aerosols that are used as input for calculations are not clouds, but sulfate aerosols, because they describe them as “man-made”: that term does not apply to clouds.

    This applies certainly to your second link, as well as to the graph from the IPCC TAR I posted earlier.

    I think a major problem with the cosmic-ray explanation is the lack of correlation between the putative cause and the putative effect. How do you explain a rising temperature from an oscillating flux?

    #436:

    I don’t see that anything is being ignored: If more energy is coming into the system than is leaving it, things should warm up. And that is the issue of carbon-dioxide: the amount of sunlight is continuing to come in at about the same rate, but the amount of IR that is leaving is being reduced. Hence the global warming. The calculations are done for equilibrium in order to find out how far from equilibrium we actually are.
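
    (A toy zero-dimensional energy-balance sketch of the in-versus-out argument above; every number is illustrative, not a climate-model value: when absorbed input exceeds outgoing IR, the temperature rises until balance is restored.)

    sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S_abs = 240.0      # absorbed solar flux, W/m^2 (assumed)
    eps = 0.61         # effective emissivity, tuned so equilibrium is ~288 K
    C = 2.0e8          # heat capacity per unit area, J m^-2 K^-1 (rough mixed layer)

    T = 278.0          # start 10 K below equilibrium
    dt = 86400.0 * 30  # one-month time steps
    for _ in range(1200):                        # ~100 years
        imbalance = S_abs - eps * sigma * T**4   # W/m^2, in minus out
        T += dt * imbalance / C
    print(T)  # relaxes toward ~288.7 K, where in = out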

  465. Posted Feb 26, 2008 at 1:53 PM | Permalink

    How do you explain a rising temperature from an oscillating flux?

    Give it enough power and pass it through a non-linear system. We’ve discussed this earlier; let’s see, here .

  466. bender
    Posted Feb 26, 2008 at 2:05 PM | Permalink

    #466 Ray Pierrehumbert refuted that argument. Or did he?

  467. Posted Feb 26, 2008 at 2:32 PM | Permalink

    @Neil

    My point is that one can’t depend upon the proponent of a theory to find the holes: It goes against human nature. What’s so hard to understand about that?

    Sure. But even if they won’t look for their own holes, others don’t have to believe a scientist’s proposed theory if the scientist won’t, doesn’t or can’t bring forward sufficient evidence to support it in the first place. We don’t, for example, just accept the theory of “ESP” because someone advances it and someone else publishes it.

    The problem we have in these discussions is that the proponents of AGW assume they have already met the hurdle of proving their hypothesis, and insist others must now disprove it. The skeptics think the hypothesis is not yet proven, and so the burden still remains on the proponents to prove it.

    There is also the hurdle of the ever shifting hypotheses that are supposedly proven.

    For example, is the “proven” hypothesis: a) AGW happens and CO2 is the cause “the hypothesis”?

    Or does the hypothesis that may or may not be “already proven” have this tagged on: b) “And the temperature rise in the next century will be at least 4.0C ± 0.1C if we follow business as usual?”

    Or is the proven hypothesis: c) “GCMs are grrrreeat! I can forecast climate to within ±0.?C, but no one should ever dare ask me to put a number on ‘?’; just believe me when I say we do it and the number is small.”

    Seriously, on some blogs, you get the impression that if you don’t accept all of a-c, and claim every one of those things has been proven and accepted since the time of Darwin, you are a denialist and probably a flat-earther too.

  468. Posted Feb 26, 2008 at 2:44 PM | Permalink

    #466 Ray Pierrehumbert refuted that argument. Or did he?

    That with uncorrelated Gaussian forcing, the output cannot be Hurst-like? Or something more climate-specific? At RC I guess, do you have a link?

  469. bender
    Posted Feb 26, 2008 at 3:08 PM | Permalink

    Re #469

    See this post by raypierre:
    http://www.realclimate.org/index.php/archives/2007/12/les-chevaliers-de-l%e2%80%99ordre-de-la-terre-plate-part-ii-courtillots-geomagnetic-excursion/
    where he says:

    “Say it three times every night before going to sleep: Temperature goes up. Solar stuff goes up and down and up and down and up and down. You can no more make a trend out of that than you can make a silk purse out of a sow’s ear.”

    Richard Sycamore asked him for a proof, but never got it. Eventually this reply from Gavin Schmidt:

    http://www.realclimate.org/index.php/archives/2007/12/live-almost-from-agu%e2%80%93dispatch-3/#comment-77919

    It goes on and on. But it ended inconclusively.

  470. Neal J. King
    Posted Feb 26, 2008 at 3:38 PM | Permalink

    #468, lucia:

    – A chemist came to the conclusion that increasing C-O2 should give rise to a temperature increase before there were any measurements (and, by the way, he thought it was a good thing, too). So the prediction came well before the evidence. Generally speaking, you get points for predictions, in this game.

    – There have been a couple of generations of looking for holes and fixing them. The early students did not have a sophisticated idea of radiative transfer, and fell into the same conceptual errors that many websites do today: questions about “saturation”, etc. Now, one doesn’t have to take for granted that every hole has been fixed, but wouldn’t it make sense to study the history of the topic, to see what previous generations have found? How many folks have read Weart’s book or website, The Discovery of Global Warming? Even Newton credited his success to standing on the shoulders of giants; and I don’t think many of us would claim to have the insight or originality of Newton.

    – It is not so much that the case is proven (something which, according to Popper’s philosophy, is technically impossible) as that they have a theory that accords with the facts and is consistent with what is known of physical science as applied to other phenomena. The second part is the hard part, not the first. It’s relatively easy to dream up mechanisms that might be behind any particular phenomenon, but it’s hard to find one that doesn’t, ultimately, predict things that don’t happen.

    (An interesting read on this point: http://www.physicstoday.org/vol-60/iss-1/8_1.html?type=PTALERT )

    – You can be “in accord” with experiment by avoiding experiment. To me, the idea of a trend-less cause (like cosmic-ray flux) giving rise to a trend (like global warming) is like that. It could conceivably be true, but how do you pin it down? I would want something a bit more definite to associate: like, some other trend that should be stimulated by this cyclic phenomenon, and should therefore correlate with GW. Otherwise, it really is “proof by free association”.

  471. Posted Feb 26, 2008 at 4:01 PM | Permalink

    @Neil

    Generally speaking, you get points for predictions, in this game

    Agreed. I’ve said the same, right here at CA.

    There have been a couple of generations of looking for holes and fixing them.

    There have been generations of scientists looking for holes in the greenhouse phenomenon and in the radiative properties of CO2.

    There have definitely not been generations of scientists looking for holes in GCMs, or even in Svante’s specific hypothesis about enhanced greenhouse warming due to industrialization. And there have certainly not been generations of scientists developing cloud models that are good enough to predict feedback.

    This is an important issue with regard to the “generations” argument, because arguing back to Svante only works if you are countering a strawman that almost no one claims.

    The real arguments between skeptics and warmers are about magnitude, not the baseline effect. (Yes, there are people on the fringes. I agree. But they get slapped down by most here too.)

    that they have a theory that accords with the facts and is consistent with what is known of physical science as applied to other phenomena

    Neil.. which theory? “A” above? That there is some effect? Or “B” or “C”? (Please do not tell me I must believe GCMs just because I believe in warming. I’ve gotten that from too many people, and it’s a crock.)

    Practically everyone here agrees with A. (I’d say everyone… but well… we know that’s not most people.) The fact is, practically everyone who is labeled a denialist agrees that CO2 has some effect.

    Many doubt B and C. Many doubt C.

    But the argument about generations only supports “A”; it doesn’t get you to B and C. We need far more than Svante for that. And for that, we really DO need models, and then Navier-Stokes starts to matter. (Judy ran LES to develop cloud models. She wouldn’t do that if N-S didn’t matter!)

    The real arguments are over the sensitivity, which is all driven by feedbacks, including clouds. And even the most ardent warmer agrees that CO2 doesn’t get you to the catastrophic scenarios without water. And water makes clouds.

    Given the context of the real argument, “generations” and “Svante” are counterarguments to strawmen.

    To me, the idea of a trend-less cause (like cosmic-ray flux) giving rise to a trend (like global warming) is like that.

    Hey… I don’t personally buy the idea that the climate from 1970 until now did what it did without CO2. I don’t even think it was due to the interesting slowdown of volcanic activity in this century compared to the last. And it’s not the sun. Or cosmic rays… well, that could be, but I need proponents to connect that more specifically to warming before I’ll accept it as at all likely.

    I think it’s mostly CO2 and other GHGs.

    But you can’t just post claims that the whole burden is on the skeptics. It’s not. We must always take up the burden of convincing others of our position; they must take up the burden of convincing us.

    If neither of us can prove our case sufficiently, then we all have to agree to disagree. And if there is sufficient disagreement on some point, then there is no consensus on that point.

  472. Andrew
    Posted Feb 26, 2008 at 4:38 PM | Permalink

    There seems to be some confusion over the real point of the CRF low-cloud link. It isn’t that it causes all the warming (Nir says 2/3 of 20th-century warming, and claims that there is a trend, but I can’t verify that). The link really just increases the forcing attributed to the sun, in theory, by a very large amount, both over the solar cycle and in the long term. This helps because the general temperature difference between solar minima and maxima is about 0.1 C, roughly ten times what it should be theoretically. Any increase in the total forcing over the twentieth century necessarily reduces the implied climate sensitivity (unless you want to argue that the heat is just miraculously disappearing). A bit of arithmetic on this point follows.
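
    (Illustrative arithmetic only, with assumed numbers: holding the observed 20th-century warming fixed, the implied sensitivity parameter falls as the total forcing rises.)

    dT = 0.7  # K, approximate observed 20th-century warming (assumed)
    for dF in (1.6, 2.4, 3.2):        # W/m^2, candidate total forcings (assumed)
        print(dF, round(dT / dF, 2))  # implied K per W/m^2 shrinks as dF grows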

  473. Spence_UK
    Posted Feb 26, 2008 at 4:41 PM | Permalink

    Generally speaking, you get points for predictions, in this game.

    You get points for STATISTICALLY SIGNIFICANT predictions, in this game.

    You don’t get any points for predicting a 50/50 event (“temperatures at some point in the future will go up”), then, when a 30-year uptick occurs after a hundred years of waiting (did you notice the 30-year downtick in between?), claiming your prediction came true.

    One of the problems with systems that exhibit LTP is that you can demonstrate pretty much anything you want by cherrypicking the scale at which you carry out your analysis. Why do you think climate scientists are so wedded to 30 year scales? This is the scale on which their hypothesis works. The scale at which we have just a few degrees of freedom of instrumental data. And even then they have to invoke fudge factors to explain the correlation failures.

    You want a simple answer? LTP from complex nonlinear dynamics is a simple, elegant answer. Unfortunately it is also quite counter-intuitive to most scientists, and many people who don’t understand are unwilling to accept it. But the theory is well founded; try tracing some of UC’s links.

    I’m certainly willing to accept the possibility that climate scientists are correct; but when, specifically probed on topics such as LTP and self-similarity, the arguments they give show no understanding of the issues, duck most of the questions, or respond (like you did, like John V did) with articles of faith, it doesn’t inspire confidence.

    To me, the idea of a trend-less cause (like cosmic-ray flux) giving rise to a trend (like global warming) is like that.

    Just for the record, my view is that the solar cause is as much of a sharpshooter’s fallacy as A-CO2. Both CO2 and solar variability are comfortably within the power-law scaling of natural variability; and the longer the scales these operate on, the more difficulty they have breaking through the natural variability. Could solar be influencing clouds? Again, possible, but once again highly unlikely to break through the self-similar natural variability.

  474. Craig Loehle
    Posted Feb 26, 2008 at 4:47 PM | Permalink

    When raypierre dismisses the cosmic-ray theory by saying: “Say it three times every night before going to sleep: Temperature goes up. Solar stuff goes up and down and up and down and up and down. You can no more make a trend out of that than you can make a silk purse out of a sow’s ear.”,
    it seems he is referring to the 11-yr sunspot cycle, but the cosmic-ray theory is not a sunspot theory. Direct measures of cosmic-ray intensity, as well as the geomagnetic aa index, correlate with the ups and downs of 20th-century climate better than the GCMs do. Sunspots are a red herring, as is TSI.

  475. Andrew
    Posted Feb 26, 2008 at 5:02 PM | Permalink

    Also (Craig kinda mentioned this already, but to add weight to it):

    Calculate the more recent trend yourself from the annual counts below (Oulu Neutron Monitor); a trend-fit sketch follows the data:

    1965 6517
    1966 6296
    1967 6034
    1968 5877
    1969 5773
    1970 5810
    1971 6191
    1972 6294
    1973 6344
    1974 6275
    1975 6408
    1976 6447
    1977 6421
    1978 6239
    1979 5946
    1980 5794
    1981 5639
    1982 5583
    1983 5771
    1984 5869
    1985 6141
    1986 6342
    1987 6388
    1988 6013
    1989 5444
    1990 5380
    1991 5396
    1992 5883
    1993 6162
    1994 6238
    1995 6315
    1996 6429
    1997 6471
    1998 6327
    1999 6139
    2000 5732
    2001 5826
    2002 5754
    2003 5710
    2004 6044
    2005 6206
    2006 6426
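
    (A minimal sketch of that trend calculation, pasting the counts above into an array and fitting an ordinary least-squares line; the data are exactly as listed.)

    import numpy as np

    years = np.arange(1965, 2007)
    counts = np.array([6517, 6296, 6034, 5877, 5773, 5810, 6191, 6294, 6344,
                       6275, 6408, 6447, 6421, 6239, 5946, 5794, 5639, 5583,
                       5771, 5869, 6141, 6342, 6388, 6013, 5444, 5380, 5396,
                       5883, 6162, 6238, 6315, 6429, 6471, 6327, 6139, 5732,
                       5826, 5754, 5710, 6044, 6206, 6426])

    slope, intercept = np.polyfit(years, counts, 1)
    print(slope)  # OLS slope, counts/yr; compare with the ~1000-count solar-cycle swing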

    Additionally, Henrik Svensmark had a paper called “The Persistent Role of the Sun in Climate Forcing” where he showed that by removing ENSO, volcanoes and a trend (0.14 C a decade, not much) he could get a remarkable correlation. Anyone got the link?

  476. Posted Feb 26, 2008 at 5:11 PM | Permalink

    Craig–
    Raypierre often gives mind-numbingly idiotic retorts.

    I realize he may get tired of giving real answers and real explanations. But posting at blogs is entirely elective. I don’t understand why he doesn’t just take up gardening instead of wasting time posting snarky comments that detract from his ability to persuade skeptics.

  477. Andrew
    Posted Feb 26, 2008 at 5:18 PM | Permalink

    Lucia, I suspect he’d annoy the flowers to death. 😉

    More seriously, RealClimate finds it easier (and faster, speed sometimes being necessary to combat the fast-paced media) to concoct out-of-hand dismissals of theories and to avoid quantitative assessments at all costs (unless perhaps they understand a topic well enough to construct a real argument; over the CRF low-cloud link, they don’t), thus alienating number lovers like me.

    Lucia is a far more reasonable “warmer” if you ask me.

  478. jae
    Posted Feb 26, 2008 at 5:20 PM | Permalink

    465, Neil:

    I think a major problem with the cosmic-ray explanation is the lack of correlation between the putative cause and the putative effect. How do you explain a rising temperature from an oscillating flux?

    What rising temperature? How do you get a constant or lowered temperature for 10 years with a rising amount of CO2?

  479. Posted Feb 26, 2008 at 5:34 PM | Permalink

    Andrew,
    I think RayPierre is in Chicago. It’s too cold to garden right now, and even the dogs are wearing sweaters. Why do you think my blog doggy mascot is bundled up?

  480. Andrew
    Posted Feb 26, 2008 at 6:08 PM | Permalink

    Jae, see my posts above (where I show that there does indeed appear to be a trend).

    Lucia, I thought it was just trying to be cute! Where I come from (Florida) it doesn’t usually get so cold, so I hope you’ll excuse this ignorance on my part.

  481. MarkW
    Posted Feb 26, 2008 at 6:48 PM | Permalink

    Craig,

    Not just that, but temperatures have also been going up and down, up and down, up and down.

  482. Neal J. King
    Posted Feb 26, 2008 at 7:04 PM | Permalink

    Spence_UK,

    It’s not an article of faith to see that the radiative forcing due to a 2X in C-O2 won’t be affected by turbulence, etc. It’s a matter of thinking about the physics of how molecules interact with radiation:
    – the Doppler shifts by speeds as slow as that expected for atmospheric turbulence won’t affect the absorption lines
    – the clouds are made of water vapor, so they won’t affect the interaction between C-O2 molecules and IR
    – etc.

    Now, if I were trying to calculate the impact of a 2X in water vapor on radiative forcing, I would be worrying about the clouds; or if the upper atmosphere were so cold that turbulent speeds began to catch up to molecular speeds (but this would be ridiculously extreme), I might want to think about how the absorption lines would be affected. But I’m not, so I don’t.

    The more relevant questions are:
    – What is the C-O2 concentration profile
    – What is the atmospheric temperature profile
    – How good is the adiabatic lapse rate as a guide

    It’s not a matter of faith. It’s a matter of thinking about the relevant physical mechanisms.

    The toughest part of the calculation depends on the detailed line spectra, and the software packages to calculate them which Judith pointed me to. It is always a little opaque to rely on someone else’s software, of course. But the alternative is to do it yourself – rather time-consuming!
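
    (For reference, the detailed line-by-line results for CO2 are commonly summarized by the simplified fit of Myhre et al. 1998; the sketch below just evaluates that published formula and is no substitute for the spectral codes discussed above.)

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        # Myhre et al. (1998) simplified expression, W/m^2
        return 5.35 * math.log(c_ppm / c0_ppm)

    print(co2_forcing(560.0))  # doubling from 280 ppm gives ~3.7 W/m^2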

  483. Neal J. King
    Posted Feb 26, 2008 at 7:29 PM | Permalink

    #479, jae:

    There are other factors going on than C-O2: weather oscillations like ENSO, volcanoes, etc. In the meantime, the glaciers keep melting all over the world; except Antarctica, where you get extra precipitation. My example of a pendulum clock in a slowly rising elevator is relevant: the average height of the bob is rising, but for short intervals, the motion will go back and forth.

    As you may recall, about a year ago, on one of these threads, the physicist Lubos Motl estimated that it would take about 20 years of flat temperatures before you could credibly claim that GW is not happening, given the statistics and the noise. I did not have time to read his estimation process carefully, but I was surprised that the number was so high. As you may know, Motl is not exactly a fan of global-warming scenarios.
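
    (A rough sketch of the kind of trend-detection arithmetic at issue, with assumed numbers: under a naive white-noise model, a 0.02 K/yr trend clears a 2-sigma test in about a decade; autocorrelated noise lengthens the wait considerably, which is where estimates like the 20 years attributed to Motl, and the disagreement below, come from.)

    import numpy as np

    trend, sigma = 0.02, 0.1  # K/yr trend and K of annual scatter (assumed)
    for n in range(5, 40):
        t = np.arange(n) - (n - 1) / 2            # centered years
        se_slope = sigma / np.sqrt((t**2).sum())  # OLS slope std. error, white noise
        if trend > 2 * se_slope:
            print(n)  # first record length at which the trend clears 2 sigma
            break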

  484. Jesper
    Posted Feb 26, 2008 at 7:48 PM | Permalink

    Neal – real convection doesn’t just happen like in a pot simmering on a stove. In the climate system convection is strongly sensitive to dynamics – horizontal flows in the troposphere which alter the SST fields and convergence zones in the tropics. The entire pattern of tropical convection is constantly being reorganized by dynamics (i.e. ENSO), and these dynamics have major effects on global temp. We see for instance a 0.5 to 1 deg spike in global temp in 1 year from the 1998 El Nino. Is it unreasonable to consider that more subtle dynamical changes over the course of a century could produce warming at 1/100th this rate? Are you really sure that the dynamical effects on global clouds over this span sum to less than a few W/m squared? There is no arbitrary reason why the dynamical influence on global temps should be limited to the ENSO time scale or ENSO mechanisms.

    Very few will argue against the basic greenhouse physics. The question is, why does the global temperature change, and how much real effect does CO2 have? Is the purely radiative ~1 deg C amplified or muted? When we see low-frequency dynamical effects like the PDO warm phase and increased El Nino frequency and intensity, which coincide with the post-1980 warming trend, it suggests that other things are indeed going on. Very little is known concerning the true origins of things like the PDO and AMO.

    Hansen and friends are already stretching the time scale of the ‘noise’ now out to a decade. How much longer will it stretch? How much longer will temperatures stagnate? If the next El Nino fails to produce a record temperature, like the 2002 event did, what ad hoc excuses will be made?

    A philosophical question: if another natural 50-year megadrought occurred today in California, would it still be man’s fault? Of course it would! Just imagine the headlines… it would make a fantastic novel.

  485. Spence_UK
    Posted Feb 26, 2008 at 7:52 PM | Permalink

    Neal (#483),

    If you’re just interested in the radiative forcing, I agree with you. If you want to analyse a tightly controlled laboratory experiment, yes.

    If you want to know the average temperature of the earth and how that varies, then no your analysis is unhelpful because you cannot disaggregate the two effects in a coupled non-linear system. The turbulence fundamentally affects the hydrological cycle which then affects the radiative forcing of the earth, both water vapour and clouds (which are not trivial, linear feedbacks, but complex, self-similar systems). The clouds and water vapour in turn affect the turbulence. Even if you claim the turbulence is like uncorrelated gaussian noise (unlikely), this can lead to self-similarity and complex noise structure through the non-linear climate system – see UC’s post above. Because of this you cannot isolate the radiative effect of CO2 in the atmosphere by measuring temperature of the earth (or even its radiative properties) averaged over time. The loss of information is twofold: the signals are confounded and inseparable in the proxy measure for radiative forcing that is temperature, plus the presence of clouds and water vapour directly change the efficacy of CO2 forcing. This is just one example of coupling; there will be many more.

    So your inquiry is an interesting lab experiment, and has been an interesting lab experiment for the 110 years since Arrhenius first conducted it, but it cannot either falsify or support the AGW hypothesis. If it could, the experiment would have answered the question once and for all 110 years ago (or at least when the experiment was first conducted correctly).

    On the topic of radiative codes: I have some experience of MODTRAN; it is widely used in my community (remote sensing) as a benchmarking tool. If I punch in a 1976 standard US atmosphere, I get the same set of extinction coefficients when analysing my sensor as Bob over at JPL did when he analysed his sensor, so we have an apples-for-apples comparison. However, whilst it is useful for benchmarking, nobody that I know in the remote sensing community actually thinks it does a particularly good job of modelling the real world; it is actually pretty poor, and nobody is surprised when direct measurements give very different values from model predictions, even when the model is carefully set up with detailed “ground truth”. I have relatively little experience of tools such as HITRAN and FASCODE, though, so I can’t really comment on those.

    Postscript: I was about to submit when I saw this in your post:

    – the clouds are made of water vapor, so they won’t affect the interaction between C-O2 molecules and IR

    I hope I’ve misunderstood you here. Clouds are quite opaque in the IR region, they are made of water droplets not vapour, and entirely disrupt the radiative transfer between the surface of the earth and space, making GHGs beneath them irrelevant. What do you mean when you say they have no effect?

  486. jae
    Posted Feb 26, 2008 at 9:08 PM | Permalink

    Neil:

    There are other factors going on than C-O2: weather oscillations like ENSO, volcanoes, etc.

    Yeah, this is the mantra of both the deniers and the deniers of the deniers. I’ll turn another characteristic AGW strawman argument on you: If CO2 can’t explain it, then it must be something else. LOL.

  487. Posted Feb 27, 2008 at 12:49 AM | Permalink

    bender,

    thks. It was more climate-specific, refuting the “it’s the sun” argument. Next he should plot the Earth’s incoming and outgoing energy (daily values, say the last 1000 years); that would be a good start for refuting the LTP / 1/f arguments. Not sure if he is interested.

    lucia,

    Raypierre often gives mind-numbingly idiotic retorts.

    Ray “A certain amount of hysteria is justified, indeed encouraged” Pierrehumbert seems to have interesting comments from time to time. It would be nice to share thoughts at RC, but the red button that protects the Hockey Stick prevents me from posting there.

  488. Posted Feb 27, 2008 at 1:12 AM | Permalink

    Spence

    If you want to know the average temperature of the earth and how that varies, then no your analysis is unhelpful because you cannot disaggregate the two effects in a coupled non-linear system.

    But if the A-CO2 effect and ‘weather noise’ are components of a linear system, we can separate them. Then you can compute the CO2 sensitivity, add weather noise (1/f, white noise, whatever seems most reasonable), and make predictions with valid prediction intervals. In this case, the 1/f group and the A-CO2 group are not in disagreement. However, with 1/f ‘weather noise’ it means that statements like ‘we smoothed/downsampled the data to get rid of weather noise’, or ‘recent warming cannot be explained without taking into account changes in greenhouse gases’, would not be meaningful, because a white-noise assumption is implicitly included in such statements.
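
    (A minimal simulation of that point, with everything assumed for illustration: the same OLS trend test that rejects at the nominal ~5% rate under white noise rejects far more often when the ‘weather noise’ is 1/f-like, because the white-noise assumption understates the trend uncertainty.)

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 100, 1000

    def one_over_f(n):
        # crude 1/f noise via spectral shaping of white noise
        f = np.fft.rfftfreq(n)
        f[0] = f[1]  # avoid division by zero at DC
        spec = (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)) / np.sqrt(f)
        x = np.fft.irfft(spec, n)
        return x / x.std()

    def t_stat(x):
        # OLS slope divided by its white-noise standard error
        t = np.arange(n)
        slope, intercept = np.polyfit(t, x, 1)
        resid = x - (slope * t + intercept)
        se = resid.std(ddof=2) / np.sqrt(((t - t.mean()) ** 2).sum())
        return slope / se

    for name, gen in [("white", lambda: rng.standard_normal(n)),
                      ("1/f", lambda: one_over_f(n))]:
        rejects = sum(abs(t_stat(gen())) > 1.98 for _ in range(trials))
        print(name, rejects / trials)  # near 0.05 for white, much larger for 1/f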

  489. Spence_UK
    Posted Feb 27, 2008 at 4:41 AM | Permalink

    Re #489

    This is true, and normally I try to make it clear that I leave a window open for this.

    Even for a chaotic attractor, a sufficiently large external independent forcing will overcome the attractors and rise above the noise. Whilst I was critical of the solar option above (because of the low variability wrt natural variability), if the sun suddenly started pumping out an extra 100W/m2 you can be pretty sure it would be immediately visible in the global average temperature, chaos or not. The sun is a nice example because it can be viewed as external to climate, i.e. internal climate variability does not cause changes to the solar output.

    The A-CO2 option is more complex for two reasons; firstly, it is more closely coupled to climate, as temperature can drive the carbon cycle (albeit at larger scales). Secondly, because the response is logarithmic to concentration, you would have to put a very large amount into the atmosphere to really step above the natural variability.

    So we consider two cases: the chaotic case, and the non-chaotic case. In the former, because of the coupling, we are likely to kick the attractor on to a different trajectory, but this can cause cooling as much as it causes warming as a response to the hike in CO2, sensitive to the exact concentration change. In the non-chaotic case, yes, we should see a response to the forcing in a predictable manner above the natural variability.

    There seem to be a lot of “ifs” and questions here which merit investigation but are not being discussed: that is my main contention with climate science. That said, with the logarithmic behaviour in the latter case, I suspect the toxicology of CO2 will become a concern before climate change kicks in.

  490. Tom Vonk
    Posted Feb 27, 2008 at 6:23 AM | Permalink

    UC # 489

    However, with 1/f ‘weather noise’ it means that statements like ‘we smoothed/downsampled the data to get rid of weather noise’, or ‘recent warming cannot be explained without taking into account changes in greenhouse gases’, would not be meaningful, because a white-noise assumption is implicitly included in such statements.

    Yes, that is the point I am making, though I come to it by another path.
    Allow me to go off on a short tangent concerning the relevance of statistical arguments applied to the climate problem.
    There is a well-known chaotic system called the Rayleigh-Taylor instability, which consists of a heavy liquid on top of a light liquid.
    You can buy decorative devices with 2 liquids in different colors that produce fascinating chaotic forms when the device moves.
    Thermodynamically it is a simple isothermal system where only gravity plays a role.
    I happened on a paper where people used a diamagnetic heavy liquid, which enabled extremely accurate control of the initial conditions.
    Applying N-S to the initial conditions, where the interface was a horizontal plane, basically said 0 = 0, and nothing happened.
    So they applied a numerical perturbation by deforming the plane slightly, and things began to happen.
    By applying the same perturbation to the experimental plane they could observe the difference.
    As expected, the numerical simulation quickly diverged from observation as the initial plume amplified and the whole interface began to break up into secondary plumes and swirls.
    Now N-S is a continuous deterministic PDE system; there is nothing statistical in it – you can’t use any statistical argument to make it “predict” the evolution of the plumes better. Probability distributions don’t conceptually fit in systems where a point either belongs to the solution or it doesn’t.
    Yet statistics naturally appear in statistical thermodynamics. So they constructed a model on the molecular scale only (nanometers). Of course, due to computing limitations, the number of molecules was limited.
    Interestingly, the model not only agreed with the N-S continuum at short time scales, but reproduced longer time scales much better before also finally diverging.
    On top of that, the model spontaneously broke the initial plane conditions through microscopic instabilities, as reality does.
    What this suggests is that the unpredictability of macroscopic fluid dynamics is a fundamental feature emerging from the randomness of molecular motions and collisions.
    Therefore the only way to get a (partial) grip on the stochastic elements of macroscopic fluid behaviour (micrometer scales and above) is to model the dynamics at the molecular level.
    From what I remember, the authors write that “this will not happen in any foreseeable future”.

    Neal # 484

    As you may recall, about a year ago, on one of these threads, the physicist Lubos Motl estimated that it would take about 20 years of flat temperatures before you could credibly claim that GW is not happening, given the statistics and the noise. I did not have time to read his estimation process carefully, but I was surprised that the number was so high.

    It is not the only thing that you didn’t read carefully, if you read it at all.
    On the contrary, what L. Motl did was to show how naive and unfounded those ideas are (http://motls.blogspot.com/2008/01/weather-climate-and-noise.html).
    3 quotes from his memo:

    There is no sharp boundary between the short term and the long term. A choice of such a boundary is a pure convention and one must always be aware that there exist “faster” as well as “slower” effects that influence reality.

    When they talk about noise, they clearly imagine some kind of non-autocorrelated noise in which the temperature anomaly for one day is uncorrelated to the anomaly from the previous day (or year compared to the previous year). In reality, the “noise” always has some autocorrelation, regardless of the time frame we use for averaging, and it is important to know the color of the noise.

    Some laymen (and even poor-quality scientists) think that if they create a graph of x vs. y and the correlation is nonzero – if there is some increasing or decreasing underlying trend in it – it means that they have discovered an important signal from God or Nature that must be taught and used by the society. Some scientists, especially those who promote the climate alarm, like to abuse this irrational feeling of the laymen.
    In reality, the correlation coefficient never ends up being exactly zero.

    Spence_UK #486

    Even if you claim the turbulence is like uncorrelated gaussian noise (unlikely), this can lead to self-similarity and complex noise structure through the non-linear climate system – see UC’s post above. Because of this you cannot isolate the radiative effect of CO2 in the atmosphere by measuring temperature of the earth (or even its radiative properties) averaged over time. The loss of information is twofold: the signals are confounded and inseparable in the proxy measure for radiative forcing that is temperature, plus the presence of clouds and water vapour directly change the efficacy of CO2 forcing. This is just one example of coupling; there will be many more.

    Of course, all of that should be obvious. Please also note that L. Motl substitutes “wrong” for your “(unlikely)”.

  491. bender
    Posted Feb 27, 2008 at 9:01 AM | Permalink

    UC, Spence_UK, Tom Vonk #489-#491:
    I am glad you comment here. Climate science never made any sense to me for all the reasons you cite. Now I don’t feel so alone.

  492. bender
    Posted Feb 27, 2008 at 9:08 AM | Permalink

    #491 TV says:

    On top of that, the model spontaneously broke the initial plane conditions through microscopic instabilities, as reality does. What this suggests is that the unpredictability of macroscopic fluid dynamics is a fundamental feature emerging from the randomness of molecular motions and collisions. Therefore the only way to get a (partial) grip on the stochastic elements of macroscopic fluid behaviour (micrometer scales and above) is to model the dynamics at the molecular level. From what I remember, the authors write that “this will not happen in any foreseeable future”.

    Now THAT is a money quote. Anyone who claims that weather noise is chaotic but climate noise is not (i.e. climatic time-series averages are interchangeable with climatic ensemble means) can come here and address this point. I do NOT buy the ergodicity assumption in climate modeling.

  493. bender
    Posted Feb 27, 2008 at 9:15 AM | Permalink

    Speaking of money quotes.

    My plea is for Alan Robock, Carl Wunsch and Leonard Smith to develop an essay apiece on this issue to submit to Steve M for publication at CA.

  494. Mark T.
    Posted Feb 27, 2008 at 10:36 AM | Permalink

    I do NOT buy the ergodicity assumption in climate modeling.

    It’s used in at least one proxy reconstruction incorrectly as well, probably more if not most.

    From Tom’s quote of Lubos:

    In reality, the correlation coefficient never ends up being exactly zero.

    If I ever saw a zero correlation I’d be worried something was wrong with my math.

    Mark

  495. Posted Feb 27, 2008 at 3:18 PM | Permalink

    @Andrew–
    I knew you were being cute. This is what a Chicago garden looks like right now: [photo]

    Speaking of Lubos’s calculation estimating the number of years needed to falsify a climate claim, I focused on a more concrete issue: how long must a flat trend persist to be inconsistent with the short-term IPCC prediction of 2.0C/century? Falsifying predictions.

  496. MarkR
    Posted Feb 27, 2008 at 7:34 PM | Permalink

    the unpredictability of macroscopic fluid dynamics is a fundamental feature emerging from the randomness of molecular motions and collisions

    But surely, if quantum theory works, it can be applied here?

  497. Posted Feb 27, 2008 at 8:25 PM | Permalink

    MarkR–

    But surely, if quantum theory works, it can be applied here?

    With respect to those aspects of the problem affected by NS issues, no. There is not enough computational power.

    The NS can be solved exactly. But not for the entire atmosphere, and not with heat transfer, mass transfer and other things going on at the same time.

  498. bender
    Posted Feb 27, 2008 at 10:31 PM | Permalink

    The link by lucia in #496 on falsifying predictions is worth a read. It’s consistent with what Gavin Schmidt says (when repeatedly pressed), but provides a lot more clarity and transparency on the question of how much cooling, over what time frame, it would take to falsify the AGW hypothesis. Nice job, as usual, lucia.

  499. Ian Castles
    Posted Feb 27, 2008 at 10:42 PM | Permalink

    I agree that the post by Lucia (link at #496) adds clarity to the ‘Falsifying predictions’ issue. However I have added some details which change the conclusion somewhat (Comment 868).

  500. Gerald Browning
    Posted Feb 28, 2008 at 12:34 AM | Permalink

    lucia,

    Answer the questions I posed on the Jablonowski thread that Judith and you have continued to avoid. Using incorrect dissipation operators requires unphysical forcing terms. The mathematical proof is shown on the Growth in Physical Systems thread (two lines of calculus), and the point is clearly demonstrated in the manuscript by Sylvie Gravel (on this site) and in Dave Williamson’s manuscript, available on request. There is no mathematics that shows otherwise – only tuning games.

    Jerry

  501. Posted Feb 28, 2008 at 1:39 AM | Permalink

    lucia, in your post, you write

    For current purposes, I will be neglecting serial autocorrelation in the residuals. (People can argue about that later.)

    This can be linked to the discussion here; see #489.

    Neglecting serial correlation in the residuals is in fact the ‘white noise’ assumption. So, now we have a recipe for making IPCC predictions falsifiable (*):

    1) Assume that the A-CO2 effect and weather noise are linear components of the whole system, i.e. they can be separated

    2) Assume that weather noise is white. Or, if not white, you need to know the properties of noise before proceeding from this point.

    3) Proceed as lucia writes

    (*) Statistics and falsification: I think even Popper stumbled over this issue… 🙂
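    [Step 2 is checkable in practice. A minimal sketch of a whiteness check; the AR(1) residual series and its coefficient are invented for illustration:]

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, max_lag=10):
    """Portmanteau whiteness test; a small p-value rejects white noise."""
    r = resid - resid.mean()
    n = len(r)
    lags = np.arange(1, max_lag + 1)
    acf = np.array([np.sum(r[k:] * r[:-k]) for k in lags]) / np.sum(r * r)
    Q = n * (n + 2) * np.sum(acf**2 / (n - lags))
    return Q, chi2.sf(Q, df=max_lag)

# Invented example: "residuals" that are actually AR(1) with coefficient 0.8
rng = np.random.default_rng(0)
e = np.zeros(120)
for t in range(1, len(e)):
    e[t] = 0.8 * e[t - 1] + rng.normal()
print(ljung_box(e))   # tiny p-value: step 2 fails for these residuals
```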

  502. Tom Vonk
    Posted Feb 28, 2008 at 3:09 AM | Permalink

    Lucia # 498

    The NS can be solved exactly. But not for the entire atmosphere, and not with heat transfer, mass transfer and other things going on at the same time.

    This can be very misleading for a casual reader.
    Actually the contrary is true: N-S can NOT be solved exactly. The true assertion is even stronger: it has not been proven that N-S admits a unique continuous solution for arbitrary initial and boundary conditions.
    What is true is that simplified versions of N-S under specific initial and boundary conditions may be solved.
    Typically those simplifications involve going 2D and/or assuming various linearity constraints (e.g. laminar flows) and/or assuming steady flows.
    None of these simplifications applies to the atmosphere, or more generally to chaos transitions.

    As for “falsifying predictions” I make the same comment as UC.
    Assuming white noise and linearity is not an obvious, self-explanatory assumption.
    Actually for many, me among them, it is a wrong assumption.

  503. bender
    Posted Feb 28, 2008 at 9:32 AM | Permalink

    #500-#503
    Agree strongly with the criticism of the white noise assumption.
    lucia’s counter will be: “ok, tell me how we make it better”.

  504. bender
    Posted Feb 28, 2008 at 9:44 AM | Permalink

    Jerry,
    I don’t see anyone as “walking on water”. But, yes, I am satisfied with the effort that Judith Curry and lucia put into following arguments, including yours, at CA. Your expectations, as you surely know by now, are very high. Unreasonably high? I guess that’s a judgement call. My expectations are lower.

    Keep commenting. You have a lot that we can learn from.

  505. bender
    Posted Feb 28, 2008 at 9:55 AM | Permalink

    Can anyone show how the noise structure in the GCM output might change as the GCMers chuck out the runs they’re not happy with, in generating the sub-ensembles that are presented to us in IPCC reports? Does the noise structure go from, say, 1/f to Gaussian? Good question for GS at RC.

  506. Tom Vonk
    Posted Feb 28, 2008 at 10:35 AM | Permalink

    Bender # 504

    Agree strongly with criticism of white noise assumption.
    lucia’s counter will be: “ok, tell me how we make it better”.

    In that case the answer would be obvious: determine the topology of the climate attractor (note that we are talking about a space with several million dimensions).
    Check that the model output, given the right initial and boundary conditions, lies within the attractor in the neighbourhood of the real trajectory for a reasonable time, let’s say at least a century.
    But I know that this is not a satisfactory answer, for it is like saying “To reconcile QM and general relativity, find a good theory of quantum gravity.”
    It is extremely hard to do but it is unfortunately the right answer.
    As the “noise” hypothesis is probably wrong yet completely consubstantial to ALL climate models, what exactly can we falsify?
    If the model is wrong and the data is treated in a consistent but also wrong way (e.g. taking averages and calculating confidence intervals with Gaussian assumptions), what conclusion can we draw from a difference between data and model?

    Lucia’s proposal is however not completely circular: she might be using a wrong test, but even if a wrong test falsifies a model, you don’t escape the rejection of either the test or the model.
    In the present case the former would lead to rejecting the “noise” assumption, while the latter would lead to rejecting something that all models have in common, which might very well be their claims of predictive skill.
    In both cases it would constitute progress.
    Now of course if a wrong test doesn’t falsify a model, you stay with a wrong model until mother Nature decides to show you, generally in a completely unanticipated way, how wrong it was.

  507. Posted Feb 28, 2008 at 11:03 AM | Permalink

    re:#507
    This’ll be a toughy 🙂

    In that case the answer would be obvious – determine the topology of the climate attractor (note that we are talking about a space with several millions dimensions) .
    Check that the model output given the right initial and boundary conditions lies within the attractor in the neighbourhood of the real trajectory for a reasonable time – let’s say at least a century .

    And exactly how many untestable assumptions are needed just to formulate the procedure?

    I recall GS stated something like: weather is chaotic, climate is not, but a given trajectory in a climate simulation is. Cleared that up for me.

  508. Dave Dardinger
    Posted Feb 28, 2008 at 11:24 AM | Permalink

    re: #503 Tom,

    The true assertion is even stronger : it is not proven that N-S admits a unique continuous solution for any initial and boundary conditions.

    Could you be more specific? I thought N-S was a set of differential equations. Isn’t it necessarily true that there can only be a unique solution, provided we’re talking classical physics? Of course, if we’re talking quantum mechanics, it’s a given that we’re not going to have the same results in any two runs (real world or model) if we have sufficient measurement accuracy.

  509. Posted Feb 28, 2008 at 12:41 PM | Permalink

    bender,

    lucia’s counter will be: “ok, tell me how we make it better”.

    If the covariance matrix of the errors, say \sigma^2 V, is known, then the covariance matrix of the estimator becomes

    Var(\hat{\beta}) = \sigma^2 (X^T V^{-1} X)^{-1}

    where

    \hat{\beta} = (X^T V^{-1} X)^{-1} X^T V^{-1} y

    instead of the currently used

    Var(\hat{\beta}_I) = \hat{\sigma}^2 (X^T X)^{-1}

    But we don’t know V, and if we use an estimate of V from past experience, the optimum properties of the estimator cannot be guaranteed (and the 1/f case would probably lead to floor-to-ceiling CIs).
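    [A small numerical sketch of these formulas, assuming, purely for illustration, AR(1) errors so that V has entries rho^|i-j|; rho and n are invented values:]

```python
import numpy as np

# Compare the white-noise (nominal) OLS covariance, the true OLS covariance
# under correlated errors (sandwich form), and the GLS covariance above.
n, rho, sigma2 = 30, 0.8, 1.0
t = np.arange(n)
X = np.column_stack([np.ones(n), t])      # intercept + linear trend

V = rho ** np.abs(np.subtract.outer(t, t))
XtX_inv = np.linalg.inv(X.T @ X)

cov_nominal = sigma2 * XtX_inv                             # white-noise claim
cov_ols_true = sigma2 * XtX_inv @ X.T @ V @ X @ XtX_inv    # OLS under V
cov_gls = sigma2 * np.linalg.inv(X.T @ np.linalg.inv(V) @ X)

for name, c in [("nominal OLS", cov_nominal),
                ("true OLS   ", cov_ols_true),
                ("GLS        ", cov_gls)]:
    print(name, "trend s.e. =", round(float(np.sqrt(c[1, 1])), 4))
# The nominal OLS standard error badly understates the true uncertainty.
```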

    TV,

    Lucia’s proposal is however not completely circular –

    It is a good proposal for the white noise case. Another interesting approach would be to compute recursive residuals (which are independent when the model is correct) for monthly data; a sketch follows. The last few (and upcoming) months might look interesting…
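    [A sketch of the recursive-residual construction; the design matrix and data below are invented for illustration:]

```python
import numpy as np

def recursive_residuals(y, X):
    """Standardized one-step-ahead prediction errors (the standard recursive
    construction): independent with equal variance when the model is correct."""
    n, k = X.shape
    w = []
    for t in range(k, n):
        beta = np.linalg.lstsq(X[:t], y[:t], rcond=None)[0]
        h = X[t] @ np.linalg.inv(X[:t].T @ X[:t]) @ X[t]
        w.append((y[t] - X[t] @ beta) / np.sqrt(1.0 + h))
    return np.array(w)

# Invented usage: monthly-style data against intercept + linear trend
n = 60
X = np.column_stack([np.ones(n), np.arange(n)])
y = 0.02 * np.arange(n) + np.random.default_rng(1).normal(0, 0.1, n)
print(recursive_residuals(y, X)[:5].round(3))
```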

  510. Craig Loehle
    Posted Feb 28, 2008 at 1:01 PM | Permalink

    Neal King: Sorry, you can not ignore turbulence/convection/whatever and “just” think about radiation. The rate at which heat is transferred to the poles from the tropics influences the temperature of the poles which in turn affects the radiation balance there. The question is: will this transfer rate speed up? If so, that is a negative feedback. Spencer believes he has found a cloud negative feedback whereby warmer temperatures speed up the conversion of clouds into rainclouds which then become clear sky. Low clouds are impenetrable to IR, as noted above, and warm the surface, esp at night. High clouds reflect light. Turbulence itself (convective clouds) helps maintain the vertical lapse rate: will turbulence change in a warmer world to change the lapse rate such that more heat is lost (a negative feedback)? I bet it will. Just focusing on radiation is ok for the moon, but not the earth.

  511. Posted Feb 28, 2008 at 2:34 PM | Permalink

    Tom V:

    Yeah… if you are going to go all “mathy” on me, it’s true that no one has proven the existence of a unique continuous solution for the full 3D time dependent NS with arbitrary BC’s or IC’s.

    Still, I consider fully 3D DNS a full numerical solution.
    DIRECT NUMERICAL SIMULATION: A Tool in Turbulence Research. DNS as a technique exists. It’s been compared to experiments. It’s pretty much bang on correct.

    As a practical engineering matter, does it really matter whether or not anyone has proven the existence of a unique continuous solution for the fully 3D time dependent NS? We know there is something that happens that forces certain properties to converge. The spectral shapes, moments, means etc. all converge in many problems. This is observed empirically and in DNS. They agree.

    However, GCM’s are nothing like DNS.

    Jerry-
    I am not undertaking the silly quiz you assigned to Judith on another thread. If you think the answers to those questions teach us something, you answer the questions and make whatever point you think needs to be made. Preferably, you should make your point on that thread.

    I would like to remind you that you proactively chased Judy, Roger Jr. and Roger Sr. to review a paper of your choosing. SteveM has been more than willing to create a thread for you. Long ago, you promised that after one of them reviewed the thing, you would post your review of Jablonowski. You didn’t.

    You now complain that Judy’s review somehow failed to make whatever important point you feel we all should infer from that paper. Not one person inferred “your” point in that paper. I realize you wish to reveal your point by forcing the two people you dubbed “the ladies” to undergo obscure quizzes that would consume our time and energy, while sparing yours.

    Please, stop with the juvenile pretense that you are Socrates and Judy and I are your students who must be assigned homework and quizzes.

    If you think the Jablonowski paper hides some important message between its lines, or, even more obscurely, in the text the authors chose to leave out, you post your review. Stop indulging in the vain hope that if you ask probing questions, and demand answers, people will somehow see “the truth” you discovered in that paper but are too busy to actually explain in your own words.

    In short: Post the answers to your own darn quiz.

    UC–
    Yep. I did the white noise case. Based on historic data, the autocorrelation for adjacent residuals is… ummmm about 0.8 (if memory serves me correctly).

    My estimate is all done using past data — same as I would do if I did a pilot study and later wanted to design the full experiment, making sure I take enough data. (Of course, when I design experiments, I either make sure the time between intervals is longer than a few integral time scales or much shorter than the Kolmogorov time scale. Which depends on what I want to learn!)

    So… do I really just divide the variance by the covariance… for what? I’m a bit vague on what you’re saying, because I know that will approach a constant as N->infinity. And… obviously, I can’t just divide by that. Soo…. If you want, email me, or stop by my blog and give me details.

    I think it’s important to discuss falsification before the data are in! That prevents the possibility of cherry picking and so minimizes the accusations.

    Ian– Yes. Thanks for the info. That was useful, as it means that starting a comparison in 2001 is NOT cherry picking.

  512. kim
    Posted Feb 28, 2008 at 3:08 PM | Permalink

    Here’s something amusing, lucia. Tamino has been allowing most of my comments for the last couple of days. One that didn’t make the cut was a suggestion that people look at your Blackboard for how close falsification of the IPCC’s hypothesis of CO2 warming is getting.

    Also amusing is how much better you made my question appear on your blog. Thanks.
    ===========================================

  513. Posted Feb 28, 2008 at 3:31 PM | Permalink

    Kim– Well, Tamino’s not letting that through won’t make any difference to the number of people who read the post. I’m getting LOTS of visits — about 500 so far today.

  514. Kenneth Fritsch
    Posted Feb 28, 2008 at 4:25 PM | Permalink

    http://www.climateaudit.org/?p=2708#comment-218487

    During the late 1990s, when the tech stock boom was in full force, a web site was promoting a mechanical stock-picking strategy, some versions of which were showing phenomenal returns versus the S&P 500 index. I posed the question: given those phenomenal gains over those few years, how many years would it take for the strategy to show no long-term statistically significant gain over the S&P 500 if it merely matched the S&P 500 going forward? My calculation showed something on the order of 700 or 800 years, as I recall, and was met with much derision until a favorite of the web site did the calculation and determined I was wrong: it would take only 650 years. It was a simple calculation that assumed “white noise” and little or no autocorrelation, but it was the calculation that the proponents of the strategies were using to evaluate their results.
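    [The arithmetic behind that kind of answer is one line; all numbers below are invented stand-ins, not the web site’s:]

```python
import numpy as np

# k boom years with mean annual excess return mu and annual std s, followed
# by zero excess. Over n total years the t-statistic on the mean excess is
# t = (k*mu/n) / (s/sqrt(n)) = k*mu / (s*sqrt(n)), so ~95% significance
# survives until roughly n = (k*mu / (1.96*s))^2.
k, mu, s = 4, 1.0, 0.2     # invented: 4 years of 100% excess, 20% annual std
n = (k * mu / (1.96 * s)) ** 2
print(f"significant for roughly {n:.0f} years of index-matching returns")
```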

    Lucia, my wife had the same inclination to take some winter pictures of our Chicago area backyard the other morning with the snow covering all the tree branches. She thought it was a beautiful scene, but my view of it was that it would be beautiful in December but not so beautiful after a long hard Midwestern winter.

  515. steven mosher
    Posted Feb 28, 2008 at 5:07 PM | Permalink

    RE 514. My plan is working.

    Side question. You have historical forcings (for volcanoes et al.); can you post those forcings as ± C? Please and thank you, with sugar on it.

    Also, http://fumapex.dmi.dk/Pub/Docu/Reports/FUMAPEX_D4.6.fv.pdf

    HELP!!!!!!!!!!!!!!!!!!!!

    when you get a minute.

  516. Posted Feb 28, 2008 at 5:39 PM | Permalink

    steven… you mean me? No. I plotted a file gavin posted at RC. That corresponds to some number GISS used… somewhere, sometimes… (I’d have to read details to say more.) But the file didn’t have any +/- values, so I can’t provide them.

  517. steven mosher
    Posted Feb 28, 2008 at 6:01 PM | Permalink

    Re 517. Gavin gave you forcings (in watts per whatever); I want delta C.

    So simply, let’s take volcanoes. From 1880 to today, delta C due to volcanoes.

  518. Posted Feb 28, 2008 at 6:11 PM | Permalink

    Oh… I’ve never run Lumpy with only one forcing at a time. Is that what you mean?

    To get delta C from forcing, you need a model.

  519. Posted Feb 28, 2008 at 9:07 PM | Permalink

    Re #496 lucia, here is a photo of the poinsettia bush (and a few pink rose blooms) in my backyard this afternoon. It’s warmer than Chicago at the moment but your backyard will be much more pleasant than mine this summer.

  520. Kenneth Fritsch
    Posted Feb 28, 2008 at 9:47 PM | Permalink

    The wife’s pix of the backyard – that white stuff on the trees ain’t cherry blossoms.

  521. Posted Feb 28, 2008 at 10:13 PM | Permalink

    Re #521 Kenneth that’s an awe-inspiring sight to a southerner.

  522. Gerald Browning
    Posted Feb 29, 2008 at 1:03 AM | Permalink

    lucia (#51),

    I know the answers and have repeatedly stated the consequences of those answers on the Exponential Growth in Physical Systems thread and on the Jablonowski thread. They undermine any confidence anyone should have in any climate model.

    Let us choose just one of those questions.
    You have explicitly stated that an incorrect dissipation operator (hyperviscosity based on fourth order derivatives) produces a different spatial spectrum than that of the second order derivative type dissipation operator specified in the Navier-Stokes equations. (This difference has been proved mathematically.) Given that you have so stated, please state how that incorrect nonlinear cascade of enstrophy due to an incorrect type of dissipation operator can be forced to provide a realistic looking spatial spectrum in the forced case.

    As you continue to avoid the answers to the specific questions that you said I did not ask, I will provide the answer to this particular question again at this point.

    The answer is that only by using unphysical forcing terms, as demonstrated by Sylvie Gravel’s manuscript on the Canadian global NWP model (on this site) and Dave Williamson’s manuscript on the NCAR CAM3 atmospheric component of the NCAR climate model, can the spatial spectrum of the enstrophy be made to appear realistic when the nonlinear cascade is incorrect. But that spectrum is not due to the correct nonlinear cascade or accurate approximations of the real forcings, i.e. both the nonlinear dynamics and the physical parameterizations (tunings) are incorrect.

    It has been proved mathematically (and demonstrated in Sylvie’s manuscript) that in the case of a very short term weather forecast, the periodic updating of the winds can keep the forecast from going off track for a short period of time (a matter of hours rather than days). But that data must be inserted often (on the order of every 6-12 hours) to keep the model on track, because the inaccurate physical parameterizations cause the model to become unphysical in less than 36 hours.

    In the case of a climate model, these periodic updatings of the winds are not possible, and the full impact of the inaccurate forcings and inappropriate nonlinear cascade of the enstrophy cannot be corrected. Thus the model becomes unphysical in a few days.

    Jerry
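    [As a toy illustration of why the two operator types shape the spectrum differently — a damping-rate comparison only, not the mathematical proof referenced above:]

```python
import numpy as np

# Per-mode damping rates for Fourier mode k under second-order dissipation
# (nu2 * k^2, the NS form) vs fourth-order hyperviscosity (nu4 * k^4), with
# the coefficients matched (an arbitrary choice) at a reference wavenumber k0.
k = np.array([2, 8, 32, 64, 128])
k0, nu2 = 32, 1e-3
nu4 = nu2 / k0**2            # equal damping rate at k = k0

ratio = (nu4 * k**4.0) / (nu2 * k**2.0)   # = (k / k0)^2
print("k:            ", k)
print("rate4 / rate2:", ratio)
# Hyperviscosity barely touches scales larger than k0 and hammers smaller
# ones, so the equilibrium enstrophy spectrum cannot match without
# compensating forcing terms.
```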

  523. Gerald Browning
    Posted Feb 29, 2008 at 1:28 AM | Permalink

    lucia (#512),

    I also would like to remind you that it was you who said that I did not ask specific scientific questions, and that is when I posed the set of very specific questions that you then avoided on the other thread. That set of questions can easily be expanded into a review of the Jablonowski manuscript, i.e. they are the scientific questions that should have been asked by a reviewer and were very apparent to me when I read the manuscript.

    Jerry

  524. Tom Vonk
    Posted Feb 29, 2008 at 5:07 AM | Permalink

    Lucia # 512

    Yeah… if you are going to go all “mathy” on me, it’s true that no one has proven the existence of a unique continuous solution for the full 3D time dependent NS with arbitrary BC’s or IC’s.

    Still, I consider fully 3D DNS a full numerical solution.
    DIRECT NUMERICAL SIMULATION: A Tool in Turbulence Research. DNS as a technique exists. It’s been compared to experiments. It’s pretty much bang on correct.

    As a practical engineering matter, does it really matter whether or not anyone has proven the existence of a unique continuous solution for the fully 3D time dependent NS? We know there is something that happens that forces certain properties to converge. The spectral shapes, moments, means etc. all converge in (many) some problems. This is observed empirically and in DNS. They agree.

    This is precisely at the heart of the issue.
    When the issue is the existence and properties of solutions of a continuous PDE system, it is hard not to go all “mathy”.
    I will not pay $20 for the paper linked, but I think that I know what it says because I have done some DNS anyway.
    There are two problems – one concerns convergence and the other predictability. The former is much weaker than the latter.

    On convergence.
    If a set of discrete equations (DNS) converges, which is after all a necessary condition for the thing to make sense, what does it prove?
    That given an algorithm and a constant arbitrary parameter (step size), a set of numerical values (IC and BC) allows one to generate a set of other numerical values that don’t go to infinity during the finite time of the calculation.
    Does this set of numerical values depend on the step size? You bet.
    Does this set of numerical values depend on IC and BC? Of course.
    Does this set of numerical values uniformly converge to THE solution of the continuous PDE? Perhaps. It depends. We can’t say. Some aggregates or specific functions of (Xi, Yi, Zi, Ti) may converge to their continuous equivalents, others not.
    There are papers on Dan Hughes’ site that show that numerical methods applied to known chaotic systems exhibit strong dependence on the step size (and of course on IC/BC), so that they don’t converge to the (unknown) solution.
    Dan goes even so far as to call the results “numerical artefacts” and I’d agree with this definition, adding that even artefacts can have a small number of not completely absurd features.
    The paper I mentioned about the Rayleigh-Taylor instability (and there are certainly many more on the same issue) shows that DNS diverges from the molecular-scale model and from the observations.

    Of course for engineering purposes it sometimes doesn’t matter whether the simulated velocity field converges to the continuous velocity field solving N-S.
    What matters is that some interesting parameters in some specific conditions, over small spaces and short times, be shown to depend little on the local variations of the velocity field.
    After all you don’t even need (and can’t do it anyway) to go to Kolmogorov scales to make planes fly or to build pumps moving water.
    This ability shows certain partial properties of N-S, but it certainly doesn’t prove anything about convergence of simulated velocity fields.
    Mother Nature is sometimes very tolerant and allows people to make pretty absurd assumptions, like infinite-speed propagation of interactions at infinite distances, and rewards you with what seems a “bang on” description of trajectories.
    But as soon as people get a big head and begin to take the assumptions literally, they get a tap on the fingers because the thing is not Lorentz invariant.

    On predictability.
    That is actually much more my issue than the previous one, because what interests me most in a theory is whether it can predict and how it can be falsified.
    And here DNS fails.
    In chaotic systems it doesn’t predict anything, but that’s OK and expected. In simplified systems postulated non-chaotic, it can calculate some features of the fields that may have a statistical interpretation.
    I agree that making a statement about probabilities, be it exclusively empirical and only valid for short times, is better than making no statement at all.
    But introducing means and standard deviations of assumed probability distributions that can’t be inferred in any way from the properties of (unknown) N-S solutions doesn’t a prediction make.

    Last but not least, and as you said, the climate models are not even DNS.
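    [A toy version of the step-size sensitivity described above, using the Lorenz system as a stand-in for a chaotic PDE; it shows trajectory non-convergence only and says nothing about any particular GCM:]

```python
import numpy as np

def lorenz_rk4(x0, dt, n_steps, s=10.0, r=28.0, b=8.0 / 3.0):
    """Fixed-step RK4 integration of the Lorenz system."""
    def f(v):
        x, y, z = v
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])
    v = np.array(x0, dtype=float)
    for _ in range(n_steps):
        k1 = f(v); k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

# Same initial condition, two step sizes, same physical end time T = 20:
v1 = lorenz_rk4([1.0, 1.0, 1.0], 0.01, 2000)
v2 = lorenz_rk4([1.0, 1.0, 1.0], 0.005, 4000)
print(np.abs(v1 - v2))   # differences of the order of the attractor size
```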

  525. bender
    Posted Feb 29, 2008 at 8:52 AM | Permalink

    Re #525 Another T.V. money quote:
    “assumed probability distributions”

    A fact that RC refused to acknowledge when I once asked: “Do these probability distributions in fact exist?”

  526. bender
    Posted Feb 29, 2008 at 9:03 AM | Permalink

    Re #526 Like unstable elements, these probability distributions seem to exist … for short periods of time, but then are destroyed by the nonlinear cascade referred to by GB in #523. Climate is spatiotemporally chaotic. Time-series expectations for a given point are not equivalent to the short-term time-series averages for that point. As different portions of the attractor are explored, so the climate at a given point will appear to shift. Climate is not shifting; its full variability is being realized. Weak effects of CO2 are layered on top of these powerful internal machinations.

    Any climatologist who wants to argue that climate is non-chaotic and that the GCMs are valid should come to CA and explain themselves.

  527. Tom Vonk
    Posted Feb 29, 2008 at 9:54 AM | Permalink

    Bender # 526

    A fact that RC refused to acknowledge when I once asked: “Do these probability distributions in fact exist?”

    I am not surprised.
    With that question you kick any, and I really mean any, climatologist into the corner.
    The less bright will unintelligibly mumble something about Gaussians.
    The brighter would say that they tried several distributions with this or that result, and that their superior insight allowed them to eliminate all but one.
    The brightest would say nothing.
    So what if there are none at all because the whole system is governed by deterministic chaos?

  528. Pat Keating
    Posted Feb 29, 2008 at 10:07 AM | Permalink

    528 Tom V

    …..governed by deterministic chaos

    Well, that’s an interesting phrase. Serious or sarcasm?

  529. Posted Feb 29, 2008 at 10:20 AM | Permalink

    lucia,

    Yep. I did the white noise case. Based on historic data, the autocorrelation for adjacent residuals is… ummmm about 0.8 (if memory serves me correctly).

    And that’s just the beginning: the sample first-lag autocorrelation is an estimate of just one descending diagonal of V.

    So… do I really just divide the variance by the covariance… for what? I’m a bit vague on what you’re saying, because I know that will approach a constant as N->infinity. And… obviously, I can’t just divide by that. Soo…. If you want, email me, or stop by my blog and give me details.

    Sure, those were equations from the textbook. Your formulas seem to be correct (just in a different form), except for a sign error in Eq. (3) 🙂 And white-ish noise + A-CO2 + astronomical forcing seems to be the hockey stick / IPCC model, so your story is good in that sense. I’d add a 1/f term to the model, and, in Koutsoyiannis’ terms, 1/f noise ‘forgets’ its mean. Thus, a short-sample variance from past data cannot be used for the future. A problematic case indeed…
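    [One way to see the ‘forgets its mean’ point: synthesize noise with a 1/f spectrum and watch how slowly the spread of block averages shrinks with averaging length. This is generic spectral synthesis, not Koutsoyiannis’ own construction:]

```python
import numpy as np

def synth_noise(n, slope, rng):
    """Spectral synthesis of noise with power spectrum ~ f^(-slope):
    slope = 0 gives white noise, slope = 1 gives 1/f noise."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-slope / 2.0)
    phases = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    x = np.fft.irfft(amp * phases, n)
    return x / x.std()

rng = np.random.default_rng(2)
N = 2**16
for slope, label in [(0.0, "white"), (1.0, "1/f  ")]:
    x = synth_noise(N, slope, rng)
    for m in (64, 1024):
        bm = x[: (N // m) * m].reshape(-1, m).mean(axis=1)
        print(label, f"block length {m:5d}: std of block means = {bm.std():.3f}")
# White: the spread falls like 1/sqrt(m). 1/f: it barely falls -- long
# averages never settle, so a short-sample variance from the past misleads.
```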

  530. Posted Feb 29, 2008 at 10:23 AM | Permalink

    Jerry:
    I never said you did not ask questions; please reread comment 275 on Judy’s Jablonowski thread. Going forward, I will ignore you until such time as you get around to posting your full evaluation of Jablonowski, and explaining whatever the heck you think is important in that paper. Your post should be presented in the form of answers, given by you, not questions or pop quizzes assigned to others.

    Socrates used the Socratic method; he was forced to drink Hemlock: Justice was served.

    TomV: I didn’t expect you to read the paper. I linked it so people could see the dates, know this has been going on a long time, and see that this isn’t just climate modeling.

    I think it’s worth writing something rather long in response, because I know lots of readers (including bender) here are interested in precisely how and why the intractability of the NS is or is not important. The length of the following shouldn’t be read to suggest I disagree with you. Other than identifying where the “heart” of the matter lies, I disagree with nothing you say.

    The problem is, where the heart lies depends on what “the matter” is.

    If our concern is climate models, rather than arcane details about the Navier-Stokes equations, the heart of the matter is not the “mathyness”. The heart of the matter is: Can GCM’s do what climate modelers claim they do? Which is predict climate to some required level of accuracy. Even if “average” is ill-defined, we need to remember that climate is an average of some sort; individual trajectories or realizations are “weather”.

    In this context, when those debating GCMs bring up the intractability of the Navier-Stokes (NS), the question is: Is there something about the NS that means climate scientists clearly can’t predict some sort of average behavior? Are climate scientists claiming to do something others can’t do because the NS are too intractable?

    The answer is: engineers can and have done what climate scientists claim to do — predict average behavior of turbulent flows, and they’ve done it exactly. They can predict probability distribution functions, spectra etc. They do it using research codes called “Direct Numerical Simulation of the Navier Stokes”. (No one uses this in real problems; this is purely a research tool.)

    In your response to me, to use statistical terms that many here will prefer, you point out modelers can’t predict individual trajectories or realizations. Translating this to the AGW / GCM debate, this means modelers can’t predict weather. They can’t.

    The “mathyness” tells us the weather (i.e. individual trajectories, or realizations) can’t be predicted.

    Everyone is in violent agreement on this: We can’t predict weather after some amount of time. If we make mistakes on the boundaries, the errors will propagate into the dynamical core and cause weather predictions to go wrong. Quickly.

    But this particular aspect of not being able to predict the weather is, oddly enough, not relevant to GCM’s. Climate scientists only claim to predict average behavior – climate – with GCMs.

    So, all the attempts to prove GCM’s can’t work by proving they can’t predict weather are arguments against a strawman.

    And now, we must move on to the issue that, quite likely, is the one that zillions of CA readers have been dying to learn more about. Yes, we have read everyone’s favorite climate modeler mantra:

    “We can’t predict weather, but we can predict climate”.

    Is this statement true? Maybe. Or not.

    What would be more truthful is this: “We know we will provably never be able to predict weather for very long, but that doesn’t prove we can’t predict climate.”

    Note all the double negatives and qualifiers: They matter.

    So, how does the irritating mantra relate to DNS?

    DNS cannot be proven to reproduce individual realizations of an ensemble: True.

    Moreover, if I had to guess, I’d say it probably doesn’t reproduce individual realizations of an ensemble. As a practical matter, we would never, ever be able to reproduce an individual realization (or trajectory) for a long time. In fact, if our goal is to predict the evolution of an individual trajectory (or storm), mistakes in boundary conditions propagate into dynamics cores, and problems with initial conditions are expected to grow… possibly even exponentially! So, the predicted storm will quickly look different from a real storm.

    This is the pure fluid dynamics equivalent to not being able to make weather predictions.

    Translation to climate: DNS, if extended to a climate model, cannot be proven to reproduce weather; moreover, I bet we will never be able to predict weather for very long.

    But what about climate— which is some sort of average?

    The fact is– and it’s a well known fact– with respect to the NS by themselves, failure to predict individual trajectories or realizations (weather) is not inconsistent with predicting averages — (i.e. climate).

    The technique of Direct Numerical Simulation of the Navier Stokes (DNS) has been shown to reproduce correct probability distribution functions, spectral characteristics etc. These computations include long times– in fact, by definition, they must be run for long enough to compute convergent pdf, spectra, averages and what-not.

    With regard to probability distribution functions, spectra etc., these computations validate perfectly against very good, accurate data collected in physical experiments. The results are not overly sensitive to small variations in initial conditions (ICs) or boundary conditions (BCs).

    So, for *DNS*, the claim (to make an analogy) that we “can’t predict weather” (because we can’t predict individual realizations), but we “can predict climate” (because we can get the probability distribution functions), would make sense.

    I’ll return in a moment to that claim– but for now, I think many of the statisticians here will appreciate what I’m saying with this. The claim that we can predict averages, but can’t predict individual realizations has some basis in reality. The basis is Direct Numerical Simulation of the Navier Stokes. (DNS.) Physicists will recognize the parallel claim that we can’t predict the behavior of individual molecules, but we can say the ideal gas law applies etc.

    So.. I’m satisfied with DNS, and when I say the NS can be solved, this is what I mean. I realize that, from a mathematical POV, this is not precisely true because from a math point of view, the NS aren’t ‘solved’ unless I can predict individual realizations (that is, weather).

    I think our conversation is communicating to readers the difference between the two ideas: I’m talking about the implications of the NS to predicting average behavior – climate. If I am not mistaken, your discussion of the mathematical issues pertains to individual realizations– weather.

    But, I can’t leave this here, because now it sounds like I’m saying the GCM modelers can predict climate. Maybe they can. Or maybe they can’t.

    All I’ve really said is: If they could DNS the whole planet, then, if the climate prediction problems were due solely to the problems of solving the NS, yes, they could predict climate. (But not weather!)

    But of course, GCM’s don’t do DNS. They can’t. There is not enough computer power in the world. Also, GCMs contain some uncertain parametrizations that fall outside the Navier Stokes. (These include parameterizations for clouds.)

    But, just as engineers don’t need to use DNS for engineering problems, climate scientists probably don’t need to use DNS to predict climate.

    Yes, you are correct that engineers don’t need to go down to the Kolmogorov (smallest) scales for engineering. For one thing, one almost never cares about individual realizations. We don’t care when any individual vortex is shed behind the wing of an aircraft — ever. More importantly, we can even get sufficient accuracy for means, moments and other features of interest without resolving the smallest scales. This is why we don’t use DNS in engineering.

    But this doesn’t make engineering different from climate modeling: It makes it exactly the same. This means engineers who use models can look at climate models and grasp where they could be largely correct and where they could be largely incorrect.

    As an engineer, I’d wager climate modelers probably don’t need to capture the smallest scales of turbulence to predict climate. Why not? Well, just as aeronautical engineers don’t give a hoot about when an individual vortex is shed behind a wing and certainly don’t care about anything smaller than the largest scales of a vortex, climate modelers don’t give a hoot about an individual hurricane, and they certainly don’t care about predicting each instantaneous gust during the storm.

    In contrast, weather forecasters would need DNS to work perfectly to eventually make very long-range, very precise weather forecasts. They’d also need really detailed data for initial conditions, and they will probably never have that.

    But, even with all that: the fact that modelers don’t need DNS to predict climate tells us there is some hope that GCM’s could hypothetically be accurate; it tells us nothing about the actual accuracy of GCMs at doing what they claim to do in practice. There are at least two problems:

    1) The types of parameterizations in GCMs, of which there are many. While in principle, if GCM’s were “DNS-like” (or just good enough), they could predict climate, it’s not obvious they are good enough to do so in practice. They may introduce errors that cause the GCM predictions to be wrong on average. And

    2) What GCM’s are really, really trying to do: which is to predict the “average future” for a specific individual realization, and given uncertain forcings. (They try to deal with this through ensemble averaging of runs, starting with slightly different initial conditions, slightly different forcings and slightly different parameterizations. BTW, this goes toward bender’s comment, 527.)

    I don’t personally know how much uncertainty these things introduce. I should think enough that validation against data collected after predictions are made is required. Moreover, the validation should be quantitative, providing the sort of numerical metrics bender would like to see. (RMS errors for surface temperatures? Precipitation? Whatnot.)

    I know, based on various conversations, the issue of parameterization alone is enough to inject doubt. Judy suggested cloud parameterizations introduce the largest uncertainty. So… lucky for all of us, Munchkin, the global climate change dog, is hard at work, guarding the instruments at the ARM site used to collect the data to help improve cloud models. If only the other ARM sites had such a loyal dog overseeing the equipment. We could lick this AGW problem in a jiffy!
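    [A toy of the weather/climate distinction in the comment above, with the Lorenz system standing in only for the structure of the argument: nearly identical initial conditions decorrelate pointwise while long-time averages agree. Nothing here validates an actual GCM:]

```python
import numpy as np

def f(v, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def run_z(x0, dt=0.01, n=100_000):
    """RK4-integrate Lorenz and record the z coordinate at every step."""
    v, zs = np.array(x0, dtype=float), np.empty(n)
    for i in range(n):
        k1 = f(v); k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        zs[i] = v[2]
    return zs

za = run_z([1.0, 1.0, 1.0])
zb = run_z([1.0, 1.0, 1.0 + 1e-9])    # tiny perturbation of the IC
print("late-time pointwise correlation:",
      np.corrcoef(za[-5000:], zb[-5000:])[0, 1])   # ~0: "weather" lost
print("long-time means of z:", za.mean().round(2), zb.mean().round(2))  # agree
```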

  531. Posted Feb 29, 2008 at 10:39 AM | Permalink

    @UC–
    Yeah. I make ‘consistent’ sign errors with ‘t’ because… well… Excel and tables only show positive ‘t’s for some problems, and then make you swap the signs. So… I just do that to make it work out instead of shoving in a 2 paragraph explanation. Actually, Kreyszig, my math book, does it that way too. I suspect anyone who can cope with the problem knows how to deal with the ± issue, and has even experienced it in the past. (Did you know in engineering, some books say the first law is Q+W = dH/dt and some say Q-W = dH/dt? What’s worse is most American thermo books use one form, the fluids books use the other, and students in some disciplines take their first course in each subject at the same time! It all goes back to Watt and Carnot using different conventions for defining positive work on a system.)

    Can you suggest a specific statistics text that discusses how to correct for the a

  532. Jonathan Schafer
    Posted Feb 29, 2008 at 10:44 AM | Permalink

    After reading so many of these threads regarding the ability of GCM’s to determine future climate states, etc, I am beginning to wonder if it wouldn’t be easier to make a model that is based on probability distributions of certain events occurring plus initial conditions and just use that to make your future climate predictions.

    It seems to me that if you took the current average temperature, added in occasional ENSO events, PDO/AMO oscillation flips, volcanic eruptions, etc., ran them multiple times, varying the probabilities/strengths of events, and then generated an ensemble from all the runs, you would probably end up with a climate that matched reasonably well with reality, and probably forecasts just as well as current GCM’s (a sketch of the idea follows). You could even assign error bars to it, although they would in reality be meaningless, just as the current GCM forecasts are. But at least no one would have to argue over whether the NS equations are resolved at all levels at all times, whether there are unphysical parameters and flux adjustments applied to GCMs, and whether there is an appropriate confidence level assigned to their output.
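    [A rough sketch of such an emulator; every number (trend, event probabilities, magnitudes) is an invented placeholder, and the point is only the structure: forced trend + stochastic events + noise, run as an ensemble:]

```python
import numpy as np

rng = np.random.default_rng(3)
years, runs = 100, 500
t = np.arange(years)
sims = np.empty((runs, years))
for i in range(runs):
    trend = 0.015 * t                              # assumed C/yr trend
    enso = rng.normal(0.0, 0.1, years)             # ENSO-ish interannual term
    volcano = np.where(rng.random(years) < 0.05,   # ~1 eruption per 20 years
                       -rng.exponential(0.3, years), 0.0)
    sims[i] = trend + enso + volcano + rng.normal(0.0, 0.05, years)

# Ensemble summary with percentile "error bars" for the final year:
lo, mid, hi = np.percentile(sims[:, -1], [5, 50, 95])
print(f"year-{years} anomaly: {mid:.2f} C ({lo:.2f} to {hi:.2f})")
```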

  533. jae
    Posted Feb 29, 2008 at 10:49 AM | Permalink

    531, Lucia: Thanks for that explanation. I even understood most of it. It seems to me that some folks overemphasize the chaotic nature of climate. Weather is definitely chaotic, but the chaotic effects are expressed over relatively short time scales (weeks for most effects, maybe 20 years for some). But the average climate is not so unpredictable. I can be quite confident, within a couple of degrees, of what the summer and winter averages will be next year where I live.

  534. steven mosher
    Posted Feb 29, 2008 at 10:51 AM | Permalink

    re 531. “The debt shall be paid, “is there anything else?”

  535. Posted Feb 29, 2008 at 10:52 AM | Permalink

    UC– oops…

    Can you suggest a specific statistics text that discusses how to correct for the autocorrelation? With equations I can read? (I keep finding all sorts of things that tell me how to use a statistical package, but no actual equations or math! I like to see the math.)
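    [For what it’s worth, the usual textbook result for the mean of an AR(1) series (stated here as a standard fact, not as a pointer to any particular text) is

    Var(\bar{x}) \approx (\sigma^2 / n) (1+\rho)/(1-\rho)

    i.e. an effective sample size n_{eff} = n (1-\rho)/(1+\rho). With \rho = 0.8 that cuts n by a factor of 9. The correction for a trend coefficient is messier, which is what the GLS formulas earlier in the thread handle in general.]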

  536. John V
    Posted Feb 29, 2008 at 10:53 AM | Permalink

    #531 lucia:
    Great post.
    You should consider writing that up as an article in your blog.

  537. Posted Feb 29, 2008 at 10:59 AM | Permalink

    Jae–
    Climate may well be chaotic, and on geological time scales. But the cause may not be the Navier-Stokes.

  538. gb
    Posted Feb 29, 2008 at 11:02 AM | Permalink

    Lucia,

    A small correction. Even if you could do a full DNS of the atmosphere you still couldn’t predict the weather for very long, since you don’t know the exact initial conditions. Furthermore, models such as LES are also able to describe the mean and other statistics of turbulent flows, even the time scales and the intensity of the fluctuations. For the rest, a very good post!

  539. Posted Feb 29, 2008 at 11:15 AM | Permalink

    John V–
    I’m considering organizing it and posting. The ‘debate’ does need to get beyond:

    Denialist: I’ve proven you can’t predict the weather. That proves you can’t predict climate in any way what-so-ever!
    Alarmist: We can’t predict weather. That proves we can predict climate! And down to a gnats-hindquarters!
    Denialist: Cannot predict weather!
    Alarmist: Can so predict climate!
    Denialist: Cannot!
    Alarmist: Can So!
    Denialist: Cannot!
    Alarmist: Can So!

    Also, everyone needs to recognize when their argument has been transformed into a strawman and the strawman then rebutted. If you see it’s been done, it’s easier to call out and eventually get clarification.

  540. Kenneth Fritsch
    Posted Feb 29, 2008 at 11:38 AM | Permalink

    Re: http://www.climateaudit.org/?p=2708#comment-218834

    But this doesn’t make engineering different from climate modeling: It makes it exactly the same. This means engineers who use models can look at climate models and grasp where they could be largely correct and where they could be largely incorrect.

    I think many of us understand where Lucia, Jerry and Tom are coming from, and that the disagreement breaks on the level of how well numerical solutions can approximate nonlinear physical and deterministic processes.

    Lucia, this layperson has problems with your projection of your engineering experience with models onto climate modeling. Surely you have the advantage of testing (and adjusting) your model in real time — or are you saying that your numerical solutions perfectly predict the experimental results all the time?

    I like these more philosophical discussions, such as the three of you have posed here with comments from others, but to my mind the devil in climate modeling remains in the details, and it is those details that too often are given short shrift or sidetracked.

    Lucia, when will you have the numerical solutions that explain why a drop of alcohol does not splash in an environment of reduced pressure?

  541. Posted Feb 29, 2008 at 11:46 AM | Permalink

    gb–
    Agreed. On both counts. I wrote a few more paragraphs, and edited down. Guess what I cut out? 🙂

  542. Dekarma Spence
    Posted Feb 29, 2008 at 11:59 AM | Permalink

    Re #540

    If that is what you see in the discussion between Tom, bender, UC and myself, then I’m very disappointed – what is being discussed is based soundly on theory and, in many cases, supported by observations.

    Your comparison to engineers is poorly reasoned. Engineers need funding to work. It costs 5 billion for a new civil airliner, 2 billion for a nuclear reactor, 500 million for a chemical engineering works. If the engineer puts a black box with “then a miracle happens” in his proposal, no-one is going to commit that kind of money. That’s why engineers work on predictable stuff. They choose predictable solutions by design.

    Nobody designed the climate. Different ball game.

  543. Posted Feb 29, 2008 at 12:36 PM | Permalink

    Dekarma Spence
    That’s not the argument I see here. That’s the argument I see at blogs with, shall we say, highly moderated comments. Not knowing if a comment is going to get through, people post short comments. Then, the return quip is equally short. I find those types of comment threads counterproductive.

    I like comments threads here.

    Of course everyone needs funding to do good work.

    Of course, engineers often have the advantage of also working on tractable problems; some with adequate funding. But, sometimes not. At Hanford, engineers often got stuck with intractable problems we couldn’t design around. Often, the projects were woefully underfunded because it was difficult to communicate why solutions were hard.

    But these problems fell in engineers’ laps because the phenomenology of multiphase flow is a topic in engineering, not physics. No one would dream of transferring the computations to nuclear physicists or climate scientists or some other group who are described as “scientists”.

    @Ken….

    Lucia, this layperson has problems with your projection of your engineering experience with models onto climate modeling. Surely you have advantages of testing (and adjusting) your model in real time

    Engineers have this advantage, and it’s an important one when we are discussing all real engineering codes. It’s not all that relevant to DNS – which is not used in nuts-and-bolts engineering applications. It’s too computationally intensive for real problems even at the engineering scale.

    The discussion comes up with respect to what absolutely, positively can or cannot be modeled.

    — or are you saying that your numerical solutions perfectly predict the experimental results all the time.

    For the types of problems that can be solved using the NS, DNS will predict results with correct probability density functions, spectral characteristics etc. Yes. Every time.

    It can’t predict the splash problem. That involves factors outside the Navier-Stokes equations. At a minimum, it involves surface tension, which is outside the NS! Also, I don’t think they can do gas-liquid interface problems in a way that’s considered “Direct” or “full”. (I could be wrong on that though.)

    So.. you see where many problems you can dream up are hard– but the difficulty is not due to the impossibility of dealing with Navier-Stokes. There are other impossible problems “out there”.

    There are, undeniably, devils other than the NS in climate modeling. In some sense, that’s part of the point. Even if we could solve the NS, modeling problems would remain. But, with regard to the NS “we can’t predict the weather” issue: that specific problem is not necessarily “the” problem, or even “a” problem.

    The NS are difficult. Modelers need to make approximations. That does matter. And the fact that we can’t re-run the climate of the earth itself over and over is a disadvantage relative to engineering. So, yes – there can be, shall we say, “issues” with GCMs.

    But it’s still best to know which things are issues and which aren’t.

  544. Kenneth Fritsch
    Posted Feb 29, 2008 at 2:57 PM | Permalink

    http://www.climateaudit.org/?p=2708#comment-218873

    It can’t predict the splash problem. That involves factors outside the Navier-Stokes equations. At a minimum, it involves surface tension, which is outside the NS! Also, I don’t think they can do gas-liquid interface problems in a way that’s considered “Direct” or “full”. (I could be wrong on that though.)

    The splash problem does involve nonlinearities like the Navier-Stokes equations and I guess I was looking at the NS discussion as a special case of the problems involving numerical solutions to nonlinear equations – realizing, of course, that NS can loom large in climate modeling. I was led to believe that the splash problem was hung on numerical solutions to nonlinear equations.

    I guess another question your reply poses could be: Does the success of numerical solutions to NS depend on the application?

  545. Sam Urbinto
    Posted Feb 29, 2008 at 3:10 PM | Permalink

    I put no more stock in the last 13 months than I do in the last 10 years (or any 10 years or 20 or whatever) for the anomaly, but some of you have talked about it. It is trending down. But it’s still higher than the base period each month. So if anything, it’s “not warming as much” if you put stock in such unphysical things as global anomalies.

    Here’s a graph of the last 13 months of gistemp with linear and log trend.

    http://www.climateaudit.org/phpBB3/download/file.php?id=17

    Oh, and if 1999 and 2000 had both had an unusually high anomaly (equal to 2005) rather than an unusually low one, the 10-year trend would be down.

    http://www.climateaudit.org/phpBB3/download/file.php?id=20

    Neal: “an increase in the global average temperature (not, as some would have it, in a mythical ‘temperature of the Earth’).”

    No, an increase in the global mean temperature anomaly.

    Tom: ” the ‘global average temperature’ is unphysical in the sense that there is no physical law that would depend on it”

    That’s what “mythical ‘temperature of the Earth'” means, perhaps?

    beng: “The omnipresent Exxon ‘hired-gun’ instead seems like a convenient & habitual label for any dissenters to the cause, even against fellow academics.”

    Right, it’s the boogie man (no, not disco stu). 🙂 How about this: On the BB I quoted

    “Some corporations whose revenues might be adversely affected by controls on carbon dioxide emissions have also alleged major uncertainties in the science.[2]”

    She gets the year wrong in the reference; this is it, with the title, something she ‘forgot’ to put in:
    van den Hove, S., Le Menestrel, M. & de Bettignies, H.-C. (2002): “The oil industry and climate change: strategies and ethical dilemmas.” Climate Policy 2(1): 3-19.

    Here’s an interesting tidbit on BP from the paper:

    In a recent development {2002}, BP’s CEO announced that the company would to end all its political donations world-wide: “We must be particularly careful about the political process because the legitimacy of that process is crucial both for society and for us, a company working in that society. That is why we’ve decided, as a global policy, that from now on we will make no political contributions from corporate funds anywhere in the world. We’ll engage in the policy debate, stating our views and encouraging the development of ideas, but we won’t fund any political activity or any political party”

    bender: “It goes on and on. But it ended inconclusively.”

    Sounds like RC, that’s for sure. Seems most of the discussions earlier on when Gavin came here just faded into nothingness with no resolution.

    lucia: “I think it’s mostly CO2 and other GHG’s.”

    I think that if the anomaly is representing some rise in energy levels (something I’m not convinced of, obviously), the AGHGs (no need to break them down) are 20%, with water vapor the unknown X factor that could make that number zero after all is said and done. That would leave 80% for land-use. You’re not just going to get rid of the power stations, agriculture, biomass and residential, much less coal, natural gas, cement and petroleum. But maybe it’s 50/50 or 80/20. Does it matter anyway?

    BTW, even forgetting everything else, even some of the GHG are from land-use change anyway.

    The clearance of grassland releases 93 times the amount of greenhouse gas that would be saved by the fuel made annually on that land, said Joseph Fargione, lead author of the second paper, and a scientist at the Nature Conservancy.

    Regardless. If (since) the root causes are technology and population numbers, what we can do is limited in any case, so the anomaly and what it means is really not important. That makes the (A)GHG and land-use split unimportant to a large degree, so the percentages can be anything one wishes to make up, I mean, model.

    Pielke Sr seems to think about the same way I do also, http://climatesci.org/2008/02/20/a-new-york-times-report-by-elisabeth-rosenthal-biofuels-deemed-a-greenhouse-threat/

    I’m more than willing to have somebody prove me wrong, and would graciously accept any non-hand waving comments on my ideas. 🙂

    http://commons.wikimedia.org/wiki/Image:GHG_intensity_2000.svg
    http://cait.wri.org/

    jae: “What rising temperature?”

    There’s a rising temperature?

    Spense_UK: But I want to disaggregate the two effects in a coupled non-linear system, I wanna wanna wanna.

    bender: How can you have weather noise, use that to create something else, and not have it noisy too? It’s like baking a cake and saying there’s no flour in it.

    Or maybe it isn’t….

  546. Posted Feb 29, 2008 at 3:16 PM | Permalink

    Ken–

    I guess another question your reply poses could be: Does the success of numerical solutions to NS depend on the application.

    Well… sort of no. If you could do DNS, and the problem was strictly a Navier-Stokes problem, you could predict average results.

    But… that’s deceptive because DNS can only be applied to a relatively small subset of problems of interest to scientists or engineers.

    This is mostly because DNS is way, way too computationally intensive for most problems.

    But in the drop problem there are other complications, unrelated to the NS, and being able to compute solutions to the NS doesn’t get around them.

    But the fact is, some of these issues have nothing to do with the insolvability or intractability of the NS. Sometimes, you have NS problems plus other intractable problems!

  547. Gerald Browning
    Posted Feb 29, 2008 at 3:53 PM | Permalink

    Tom (#525)

    As usual we are in agreement. But it seems that the importance of convergent numerical solutions (without continual alterations of the unphysical dissipation operators and forcings) as a necessary requirement for accurate numerical models of the atmosphere and oceans is beyond some people’s comprehension. This is not a great surprise when it is relatively easy to build a numerical model and tune it to obtain any solution one wants. It is quite a bit more difficult to properly analyze the continuum PDE system and provide an accurate and stable numerical method that will converge without tuning the dissipation operators and the forcing, especially for the initial-boundary value problem.

    Jerry

  548. Craig Loehle
    Posted Feb 29, 2008 at 4:06 PM | Permalink

    Lucia: very clear and excellent post. The crux of the problem is that we want to apply our model (the GCM) to a change in forcing (a perturbation) but have only been able to test it with a very short run (past 30 yrs) of a small forcing. It would be like a model for turbulence for aircraft which we only tested in a light breeze, but not for a plane getting in the vortex of another plane or flying through a thunderstorm or landing in a crosswind. Would anyone want to get on a plane that had only been tested with computer codes validated for mild spring days?

  549. Gerald Browning
    Posted Feb 29, 2008 at 4:23 PM | Permalink

    lucia (#531),

    Clearly you have not answered any of the specific questions because you are unable to do the mathematical analysis mentioned in the first question. (I have gone all “mathy” on you).

    I suggest you read Tom Frank’s upcoming article in the March issue of Skeptic, which shows just how lame the climate models really are. He shows statistically that he is able to produce a better solution than all of the climate models that have wasted untold numbers of computer resources, by using a simple linear formula based on some quality science. Quite illuminating.

    You have also chided me about providing references that were not easily accessible. Is that what you call the reference above that can only be accessed by payment?

    Jerry

  550. Kenneth Fritsch
    Posted Feb 29, 2008 at 4:37 PM | Permalink

    Re: http://www.climateaudit.org/?p=2708#comment-218960

    Well… sort of no. If you could do DNS, and the problem was strictly a navier stokes problem, you could predict average results.

    By the way, I second Loehle’s comments on your post, but I still would question your reply above with regard to its application to climate models. Average results bother me like the guy who on average was OK when falling from the 20th floor to the ground.

    The Chicago Tribune article on the splash problem was rather specific that the problem was solving nonlinear equations. Have you ever known the Trib to get things wrong — after Truman and Dewey, that is?

  551. maksimovich
    Posted Feb 29, 2008 at 4:37 PM | Permalink

    In an interesting chapter entitled “Engineers’ Dreams” from his book Infinite in All Directions, Freeman Dyson explains the reasons for the failure of Von Neumann and his team at the prediction and control of hurricanes.

    Von Neumann’s dream

    “As soon as we have good enough computers we will be able to divide the phenomena of meteorology cleanly into two categories, the stable and the unstable. The unstable phenomena are those which are upset by small disturbances, and the stable phenomena are those that are resilient to small disturbances. All disturbances that are stable we will predict; all processes that are unstable we will control.”

    Freeman Dyson page 183.

    What went wrong? Why was Von Neumann’s dream such a total failure? The dream was based on a fundamental misunderstanding of the nature of fluid motions. It is not true that we can divide fluid motions cleanly into those that are predictable and those that are controllable. Nature as usual is more imaginative than we are. There is a large class of classical dynamical systems, including non-linear electrical circuits as well as fluids, which easily fall into a mode of behavior that is described by the word “chaotic”. A chaotic motion is generally neither predictable nor controllable. It is unpredictable because a small disturbance will produce exponentially growing perturbations of the motion. It is uncontrollable because small disturbances lead only to other chaotic motions, and not to any stable and predictable alternative.

    Or as Vladimir Arnold said

    For example, we deduce the formulas for the Riemannian curvature of a group endowed with an invariant Riemannian metric. Applying these formulas to the case of the infinite-dimensional manifold whose geodesics are motions of the ideal fluid, we find that the curvature is negative in many directions. Negativeness of the curvature implies instability of motion along the geodesics (which is well known in the Riemannian geometry of infinite-dimensional manifolds). In the context of the (infinite-dimensional) case of the diffeomorphism group, we conclude that the ideal flow is unstable (in the sense that a small variation of the initial data implies large changes of the particle positions at a later time). Moreover, the curvature formulas allow one to estimate the increment of the exponential deviation of fluid particles with close initial positions and hence to predict the time period when the motion of fluid masses becomes essentially unpredictable.

    For instance, in the simplest and utmost idealized model of the earth’s atmosphere (regarded as two-dimensional ideal fluid on a torus surface), the deviations grow by the factor of 10^5 in 2 months. This circumstance ensures that a dynamical weather forecast for such a period is practically impossible (however powerful the computers and however dense the grid of data used for this purpose)

    I.e., we can only predict when unpredictability occurs: an interesting conundrum (or paradox), depending on one’s POV.

    The NS equations are not the major problem with GCMs. The problem is the dimension of the GCM, for which there is no mathematical theorem in a spherical coordinate system with “correct” boundary conditions.

    PS: The FD quote above is not intended as an argument against engineers per se; it lets one understand our limitations.
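
    As a back-of-the-envelope check of Arnold’s number (my arithmetic, not his): a growth factor of 10^5 in 2 months corresponds to

    \lambda = \ln(10^5) / 60 \text{ days} \approx 0.19 \text{ day}^{-1}

    i.e. an e-folding time of about 5 days and an error-doubling time of \ln 2 / \lambda \approx 3.6 days, the same order as classical estimates of forecast-error doubling.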

  552. Gerald Browning
    Posted Feb 29, 2008 at 10:10 PM | Permalink

    All,

    I have one last rebuttal to lucia’s comment about Judith Curry’s less than impressive review of the Jablonowski manuscript. Judith had claimed that increased computer power would lead to improved climate simulations. I disagreed with that assessment and asked her, Pielke Jr., and Pielke Sr. (an expert on mesoscale modeling) to review the Jablonowski manuscript. Pielke Jr. advised that he was not competent to do so, and Pielke Sr. never responded, even when I asked Pielke Jr. to ask him. To Judith’s credit, she took a crack at a review, but the review is brief and certainly not thorough. And it turned out that Judith is on a panel seeking funds to acquire a new computer at Georgia Tech to perform better climate simulations. Thus Judith’s glowing review came as no surprise to me; in fact I expected exactly that type of review. But neither lucia nor Judith would answer the specific mathematical questions I asked them. Instead they both addressed tangential topics, even when I asked Steve M to keep them from doing so. The thread was supposed to be about the problems with dynamical cores and the subsequent implications for climate models. But lucia went down the code-checking path and Judith down the climate-model-forcing path, although neither had anything to do with dynamical core problems.

    Jerry

  553. Gerald Browning
    Posted Feb 29, 2008 at 10:21 PM | Permalink

    snip – Jerry: please, no squabbling.

  554. bender
    Posted Feb 29, 2008 at 10:41 PM | Permalink

    #553 There are some important bits in #553 that are not what I would call squabbling.

    Some people may be put off by Jerry’s challenging, Socratic style. I’m not.

  555. bender
    Posted Feb 29, 2008 at 11:08 PM | Permalink

    Alright you PhD climatologists, you GCMers …
    It is time for you to step up and address this reasonable skepticism. Why is Jerry’s argument (1) wrong or (2) irrelevant?

    It seems to me that if you can’t predict fluid flows on Earth, then you cannot predict weather and you cannot characterize “weather noise”. And if you can’t characterize weather noise, how do you characterize “internal climate variability”? And if you can’t characterize “internal climate variability”, you cannot distinguish it from an externally forced trend. So Jerry’s argument is not irrelevant.

    Where is this reasoning going awry?

    We have reached a watershed moment, where RC will continue to dwindle in relevance and scientific defenders of the hypothesis will come to CA to explain where the reasonable skeptics are wrong.

    Which of you has the guts to step out and tackle Jerry’s argument? Explain to me how you have come to know “internal climate variability” so well.

  556. gb
    Posted Mar 1, 2008 at 2:37 AM | Permalink

    Re # 556:

    Bender, a question for you then: who is right, Jerry (weather is predictable) or Lorenz (weather is unpredictable after some finite time)? They can’t both be right.

  557. bender
    Posted Mar 1, 2008 at 3:19 AM | Permalink

    Yes they can.

  558. bender
    Posted Mar 1, 2008 at 3:44 AM | Permalink

    For the humorless: “going mathy” is a joke.

  559. Posted Mar 1, 2008 at 3:56 AM | Permalink

    lucia,

    Can you suggest a specific statistics text that discusses how to correct for the autocorrelation? With equations I can read? (I keep finding all sorts of things that tell me how to use a statistical package, but no actual equations or math! I like to see the math.)

    My list of favorites is here,

    http://www.climateaudit.org/phpBB3/viewtopic.php?f=11&t=68#p756

    With the hockey stick model as H0 you can try to estimate \sigma^2 V (the covariance matrix of the errors) from the reconstruction during pre-industrial times. One simple structure for V is the one based on an AR1 model (IPCC AR4 Ch 3 goes this path).

    In general, without the hockey stick, the situation is problematic. See for example http://www.climateaudit.org/?p=317 :

    It has been well known for some time now that if one performs a regression and finds the residual series is strongly autocorrelated, then there are serious problems in interpreting the coefficients of the equation.

    Another example is the case where we have a random walk or trend + AR1 noise: how do we tell which is the true case? I think these problems are statistically unsolvable, so we need a hockey stick or a good GCM to find out what is going on.

    Yeah. I make ‘consistent’ sign errors with ‘t’ because

    I meant Eq 3 in http://rankexploits.com/musings/2008/can-ipcc-projections-be-falsified-sample-calculation/
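
    Since lucia asked for math she can actually read: here is a minimal sketch of the standard “effective sample size” correction for an OLS trend with AR1 residuals. This is my own toy code with made-up numbers (the 0.002, 0.6, and 0.1), not taken from any of the references above:

        import numpy as np

        def ar1_corrected_trend(y):
            # OLS slope with the AR(1) "effective sample size" adjustment:
            # n_eff = n * (1 - rho) / (1 + rho), where rho is the lag-1
            # autocorrelation of the residuals.
            n = len(y)
            t = np.arange(n, dtype=float)
            X = np.column_stack([np.ones(n), t])
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            n_eff = n * (1 - rho) / (1 + rho)
            s2 = resid @ resid / (n_eff - 2.0)              # adjusted dof
            se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # slope std. error
            return beta[1], se, rho

        # toy series: small trend plus AR(1) noise
        rng = np.random.default_rng(0)
        e = np.zeros(120)
        for i in range(1, 120):
            e[i] = 0.6 * e[i - 1] + rng.normal(scale=0.1)
        print(ar1_corrected_trend(0.002 * np.arange(120.0) + e))

    The same rho that widens the naive confidence interval here is the quantity you would estimate from pre-industrial residuals for the \sigma^2 V structure mentioned above.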

  560. Posted Mar 1, 2008 at 4:49 AM | Permalink

    Speaking of hockey stick, Spot the Hockey Stick #18

    http://www.fmi.fi/kuvat/Aurinkovari.pdf ( 1MB, strange language 😉 )

    page 38. The text says that ‘McInture & McIntosh (*)’ criticisms of MBH are unfounded.

    (*) McInture, S. and McKitrick, R., 2003. Corrections to the Mann et al. (1998) proxy data base and northern hemispheric avaerage temperature series. Energy & Environment, 14, 751-771.

  561. TAC
    Posted Mar 1, 2008 at 5:13 AM | Permalink

    #561 UC, what point are you trying to make by referring us to an obscure 4-year-old Finnish-language report?

  562. TAC
    Posted Mar 1, 2008 at 6:32 AM | Permalink

    #560 UC:

    Other example is the case where we have random walk or trend+AR1 noise, how to tell which is the true case? I think these problems are unsolvable statistically, so we need hockey stick or good GCM to find out what is going on.

    AR1 and random walks (both are special cases of ARIMA models) do not seem to provide good models for climate processes. The more interesting, and likely more relevant, case is to assume that climate is stationary with long memory (LTP; fGn; FARIMA; etc.). Koutsoyiannis, among others, has written a lot on this (e.g. here).

    Koutsoyiannis’s approach is both parsimonious and realistic. Synthetic samples generated from such models exhibit patterns — both in time and frequency domains — that are remarkably similar to what we see in real climate data.
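
    To make the AR1-vs-LTP contrast concrete, here is a toy fGn generator using the exact autocovariance and a Cholesky factor. All names and numbers are illustrative assumptions of mine, not Koutsoyiannis’ code:

        import numpy as np

        def fgn(n, H, rng):
            # fractional Gaussian noise via a Cholesky factor of its exact
            # autocovariance; O(n^3), so practical only for modest n
            k = np.arange(n)
            acf = 0.5 * (np.abs(k + 1.0) ** (2 * H)
                         - 2.0 * np.abs(k) ** (2 * H)
                         + np.abs(k - 1.0) ** (2 * H))
            cov = acf[np.abs(k[:, None] - k[None, :])]
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
            return L @ rng.standard_normal(n)

        rng = np.random.default_rng(1)
        x = fgn(500, H=0.8, rng=rng)   # H > 0.5 gives long-term persistence
        # unlike AR1, the autocorrelation decays hyperbolically (~ k^(2H-2)),
        # so even distant lags stay correlated
        for lag in (1, 10, 50):
            print(lag, round(float(np.corrcoef(x[:-lag], x[lag:])[0, 1]), 3))

    Synthetic samples from this kind of model show exactly the slow, wandering excursions described above.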

  563. Neal J. King
    Posted Mar 1, 2008 at 6:56 AM | Permalink

    Separation of the radiative-forcing issue from the GCM in general:
    #485, Jesper:
    #511, Craig Loehle:
    #486, Spence_UK:

    When I say I’m focusing on the radiative aspect of the problem, that’s exactly what I’m doing: focusing. I am not saying that the other aspects of GCMs are unworthy of discussion, I’m just not focusing on them for the time being.

    I don’t see any real problem with this:
    – The physics and equations behind the radiative aspects are largely separable from the Navier-Stokes hydrodynamic stuff.
    – The question of how the planetary climate responds to a 2X in C-O2 is logically separable into two questions: a) How much radiative forcing is implied by a 2X? The generally accepted answer is about 3.7 W/m^2. b) What will 3.7 W/m^2 (or whatever) do to the climate? Question a) is largely a question of radiative-transfer theory, b) is largely a hydrodynamical problem.
    – In my opinion, b) is a lot harder, particularly if you don’t have ready access and time to modify & run large GCMs. I can read about that stuff (I have a couple of books I haven’t cracked yet), but I’m not sanguine about trying to draw conclusions about giant simulations (positive or negative) that I don’t have my hands on. I have had some experience with simulations of free-electron lasers (FELs) which I did have my hands on: that was hard enough.
    – Whereas a) seems to have a simpler conceptual structure. It still depends on numerical models of absorption-line structure, but I think these are checked with lab measurements as well. And radiative transfer is not a chaotic problem.
    – An analogy: In a multivariable differential equation, sometimes you can do a separation of variables. That makes it easier to solve the problem, because you can work on one aspect at a time.

    With regard to clouds:
    – Clouds most certainly affect IR transmission; however
    – The most significant impact of C-O2 on the enhanced greenhouse effect is precisely in that region of the IR band which is not absorbed by water. In other regions, the impact of even a 2X in C-O2 should be minimal, given that there is so much more water than C-O2.
    – Conceivably, C-O2 could be interesting even in IR regions overlapping the water absorption band, if there were so much of it that the optical depth due to C-O2 were = 1 (as measured from outer space radially inward). But I don’t know the numbers. Spence_UK, do you know if there are databases of the optical depth of C-O2 (and other GHGs) as a function of frequency & altitude?
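
    On a): for what it’s worth, the generally accepted 3.7 W/m^2 comes straight out of the widely used simplified expression of Myhre et al. (1998). A two-line check (the function name is mine):

        import math

        # simplified CO2 forcing expression (Myhre et al. 1998):
        # delta_F = 5.35 * ln(C / C0), in W/m^2
        def co2_forcing(c, c0):
            return 5.35 * math.log(c / c0)

        print(co2_forcing(2.0, 1.0))      # doubling: ~3.71 W/m^2
        print(co2_forcing(560.0, 280.0))  # same ratio, same forcing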

  564. Neal J. King
    Posted Mar 1, 2008 at 6:58 AM | Permalink

    Correlation vs. Explanation
    #515, Kenneth Fritsch:
    #487, jae:
    #472, lucia:

    In my view, correlation is a consideration that follows upon having a reasonable theory. Based on a theory that is consistent with what is known of the rest of the universe, one can make predictions about observable measurements. If they match, to within the known uncertainties, that’s encouraging. If they don’t, you have to look further; and if you have to look too far, you better give up the theory.

    For example, when tobacco companies were fighting against the concept that cigarette-smoking causes lung cancer, they came up with some remarkable correlations: It turned out that the number of telephone poles in a region correlates to the frequency of lung-cancer cases. Should we then propound the theory that telephone poles cause cancer? Of course, the flaw in the argument is that no one has an even semi-reasonable theory of how telephone poles would cause lung cancer, but there are lots of obvious avenues by which smoking cigarettes would do so: toxic chemicals, radioactivity in tobacco, etc. It makes sense to subject the cigarette-cause proposal to a correlation test, but it does not make a lot of sense to attribute much weight to the telephone-pole-cause correlation.

    In the case of the AGW situation, there is a reasonable theory as to how extra C-O2 could give rise to radiative imbalance and thus to global warming. But there are also other factors which have to be taken into account: the IPCC report indicates that its explanation for the cooling in the period 1940-1970 is sulfate aerosols from the unscrubbed burning of coal. All of these issues have to be taken into account, to the extent possible. Science is not a game show or debate where there is a “moment of truth”; it is more of an ongoing discussion. If a certain line of explanation needs to be stretched indefinitely, eventually people will look for other explanations. As Planck stated, in science, revolutions take place not by the definitive defeat of the old guard, but by their retirement, and by the lack of interest among their colleagues and students in pursuing or defending the old explanations.

    As I mentioned earlier, the physicist Motl (who is a vehement anti-AGWist) stated somewhere among the ClimateAudit threads that it would take 20 years for the IPCC’s expectation about global average temperature to be proven wrong. I did not read his explanation carefully, and indeed was surprised at how long he was willing to be patient. On the other hand, lucia (#496) has been trying to calculate that number some other way, and is getting closer to 10 years. I haven’t studied the matter; but from what I can see so far, the known noise level in the “climate signal” is high enough that there is not going to be a mass defection from C-O2. Even if it turns out that the sun starts to cool down (as some solar physicists have suggested it is doing), that does not get C-O2 “off the hook” as a problem; because the sun can also heat back up, and the C-O2 will still be in the atmosphere.

    For some useful background on theory vs. proof, I recommend http://www.physicstoday.org/vol-60/iss-1/8_1.html?type=PTALERT . No equations; it’s philosophy of science.

  565. Neal J. King
    Posted Mar 1, 2008 at 7:01 AM | Permalink

    Quantum theory won’t help:
    #497, MarkR:
    #509, Dave Dardinger:

    The mathematics of fluid dynamics will not be smoothed out by quantum mechanics:
    – Quantum-mechanical equations are usually not any nicer than classical equations.
    – The differences in the results of different trials are not a result of different solutions to the equations, but of the range of different measurement outcomes (for the specific question posed) available within the same solution.

    There is an old joke I remember:
    Erwin Schrödinger, the inventor of quantum wave mechanics, dies. As a great physicist, and a reasonably reasonable man, of course he goes to heaven. He is greeted by God Himself at the gates to heaven.

    As a special favor, God allows him to ask three questions before entering the gates to eternal bliss. Erwin asks, “Can you explain General Relativity to me?” God answers, “Sure, it’ll take 15 minutes.” Erwin goes further, “What about Quantum Mechanics?” God replies, “That’s a bit more complicated, but I can do it in half an hour.”

    So then Erwin asks, “Well, I realize this is old classical stuff, but I’m still curious: What about hydrodynamics? All this turbulence?” God shakes his head, and says: “I’m sorry, that stuff is so complicated that I don’t understand it Myself. Try another question.”

  566. Posted Mar 1, 2008 at 7:03 AM | Permalink

    UC
    Thanks!

    Other example is the case where we have random walk or trend+AR1 noise,

    I’ll definitely need to look at this.

    Jean S, at some point, said something like “it would help if we knew a model equation”… I interpreted that to mean that it would help if, instead of assuming “linear trend plus noise”, we assumed the data fit some known phenomenological equation. I fit to the IPCC equation (conservation of energy), using ‘known’ forcings for the ‘x’ and temperature for the ‘y’.

    Thus, Lumpy was born. Lumpy fits pretty well. But I’m having trouble getting real statisticians to tell me what’s right or wrong about Lumpy from a statistical point of view. Maybe it IS Koutsoyiannis’ approach in some sense. There is a time constant – AKA “memory”. That’s justifiable by heat capacity. “Lumpy” weather isn’t stationary; it’s “pulled” by the forcings. (Which, supposedly, we know back to 1880.)

    I hope reading how to test the trend stuff will tell me how to do Lumpy right!
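
    For readers wondering what a “Lumpy”-style model even looks like, here is a minimal one-box sketch: C dT/dt = F(t) − λT, integrated with forward Euler. This is my own toy discretization with invented forcing and parameter values, not Lumpy’s actual code or the real forcing series:

        import numpy as np

        C_heat = 8.0   # effective heat capacity, W yr m^-2 K^-1 (assumed)
        lam = 1.3      # feedback parameter, W m^-2 K^-1 (assumed)
        years = np.arange(1880, 2001)
        F = 0.01 * (years - 1880)       # toy forcing ramp, W m^-2
        T = np.zeros(len(years))
        for i in range(1, len(years)):  # forward Euler, 1-year steps
            T[i] = T[i - 1] + (F[i - 1] - lam * T[i - 1]) / C_heat
        # the "memory" is the time constant tau = C / lambda
        print("tau =", round(C_heat / lam, 1), "yr; T(2000) =", round(T[-1], 2), "K")

    In a fit, C_heat and lam would be estimated by regressing observed temperatures against the known forcings; the time constant is what distinguishes this from a plain “trend plus noise” model.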

  567. Neal J. King
    Posted Mar 1, 2008 at 7:03 AM | Permalink

    #546, Sam Urbinto:

    Whether we talk about an increase in global mean temperature anomaly or in the global average temperature, the average of the temperature, over the surface of the earth, is going up, isn’t it? I fail to see your distinction.

    Also, as far as I can tell, the discussion on how a cyclical cause could give rise to a secular effect was not inconclusive: I don’t believe the proponent could give any specific proposal of an equation, or of a plausible physical mechanism leading to an equation, that would behave this way.

  568. Neal J. King
    Posted Mar 1, 2008 at 7:06 AM | Permalink

    #491, Tom Vonk:

    – As most readers would be able to tell from context and from reading my earlier comments in the thread, I don’t take Motl as an authority on AGW. I quoted him because: a) many readers here do; and b) his estimate of 20 years seemed surprisingly large.
    – Therefore, my failure to call attention to the issues raised in your reference to Motl’s memos was not due to my lack of reading skills, but to simple disregard. For issues other than string theory, I’m not particularly interested in Motl’s opinion; and for string theory, there are actually better people I know.

  569. Posted Mar 1, 2008 at 7:08 AM | Permalink

    @Craig–
    I agree. Because GCMs are NOT DNS, we need lots of validation, using data collected after predictions are made. If and when that is done, it needs to be communicated clearly to decision makers – aka the voting public. I don’t mean “trust us, it’s right”, or “if you think it’s wrong, repeat it.”

    I don’t want people to get the impression that what I say about DNS – that it could hypothetically be used for climate – somehow magically “blesses” GCMs. It absolutely doesn’t.

  570. Judith Curry
    Posted Mar 1, 2008 at 7:48 AM | Permalink

    I’ve quickly glanced at the discussion going on here, and I’ve found a figure that is relevant to the discussion of why weather and climate models work even though they do not explicitly simulate the smallest scales of motion, or necessarily parameterize them very well. The figure can be found in the book “Synoptic and Dynamic Climatology” by Roger Barry, section 1.2, figure 1.6. The book is available online in Google Book Search. The figure is a plot of atmospheric kinetic energy vs. frequency. The spectrum is not continuous; there are “gaps”.

    Apart from the fidelity of a numerical solution to the NS equations in a weather prediction model, weather is not predictable in a deterministic sense, owing to its chaotic nature. This is why ensembles of simulations are run for weather prediction. There is another book to refer to (online in Google Book Search), “Predictability of Weather and Climate” by Tim Palmer. This is a must-read for anyone trying to understand this issue.

    P.S. I am not going to repeat the exercise I did on the Jablonowski thread; I will restrict my comments on this topic to referring people to literature on the topics being discussed.

  571. Posted Mar 1, 2008 at 9:22 AM | Permalink

    Neal–

    On the other hand, lucia (#496) has been trying to calculate that number some other way, and is getting closer to 10 years. I haven’t studied the matter; but from what I can see so far, the known noise level in the “climate signal” is high enough that there is not going to be a mass defection from C-O2.

    Neal, you misunderstand what I have claimed. I have said that:
    a) The IPCC consensus claim is 2.0 C/century of warming over the next two decades. They evidently based this on predictions first published sometime during 2000.
    b) It makes sense to validate this with data that come in after the claim. So we start the clock with the first year of weather after the first published predictions: 2001.
    c) IF a 10-year run of GMST is fit with a linear regression using OLS, and the slope turns out to be 0 for the 10 years, that “falsifies” any prediction of 2.0 C/century or higher, in the sense that a zero slope is inconsistent with a slope of 2.0 C/century or higher at the 95% confidence level.

    There are a few things to note:
    * The 10-year run starting in 2001 hasn’t yet ended – the next few years could warm as expected by the IPCC consensus. The trend has been flat for 7 years – but even if the real trend is 2 C/century, you would expect to see 7-year runs with zero slope from time to time. (I haven’t run the numbers – but the probability is not negligible.) That’s why 7-year runs of zero slope don’t falsify. If, in fact, the underlying warming trend is much larger than 2.0 C/century, this warming will happen. People like Raven will be disappointed. (Others are rooting for no warming.)

    * Even if 2.0 C/century ends up falsified by the data, falsifying 2.0 C/century is not falsifying AGW in its entirety. AGW could still be true, but the GCMs could overpredict the magnitude of the effect. Maybe the “real” underlying trend is 1.0 C/century. Maybe 1.5 C/century. Maybe 0.5 C/century. Maybe zero. All those hypotheses will remain untouched.

    * If we “fail to falsify ‘X’”, that doesn’t mean “X” is proven true. (But if it’s our provisional, null, or consensus hypothesis, we behave as if it’s true, because we believed it in the first place before even one scrap of data came in.) But the concept that “fail to falsify” is not “proven true” is important when comparing what Lubos says to what I said. And, in fact, because it’s difficult to falsify things that are false, there is a good chance we won’t falsify 2.0 C/century in only 10 years, even if it’s false! (This is called β error. You’ll see α all over in everyone’s analyses, but have you seen β yet? It’s important too.)

    * Lubos’ calculations and mine are consistent. He just asked a different question of himself. I’ll be addressing the sort of question Lubos is answering later. (It would help if he’d explained it more precisely, but I also get that, spoken imprecisely, in roughly 20 years, if AGW is mostly false, we’d have a really good chance of falsifying the false claim. This is a double-edged sword.)
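
    A minimal sketch of the test in c), with an invented flat decade of AR(1) “weather noise” (all magnitudes are made up, and the naive OLS interval shown here would still need the autocorrelation correction discussed earlier in the thread):

        import numpy as np

        rng = np.random.default_rng(2)
        months = np.arange(120) / 12.0            # ten years, in years
        noise = np.zeros(120)
        for i in range(1, 120):                   # AR(1) "weather noise"
            noise[i] = 0.5 * noise[i - 1] + rng.normal(scale=0.08)
        y = 0.0 * months + noise                  # a flat decade
        X = np.column_stack([np.ones(120), months])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / 118.0 / np.sum((months - months.mean()) ** 2))
        lo, hi = beta[1] - 1.96 * se, beta[1] + 1.96 * se
        # (naively) falsified if 0.020 C/yr = 2.0 C/century lies above hi
        print("slope %.4f C/yr, 95%% CI (%.4f, %.4f)" % (beta[1], lo, hi))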

  572. kim
    Posted Mar 1, 2008 at 10:18 AM | Permalink

    With any luck, when we start warming again in 50 years, we’ll have figured out the effect of CO2 and can deal with it then.

    ::grins::
    =====

  573. Peter Thompson
    Posted Mar 1, 2008 at 10:22 AM | Permalink

    Neal #565,

    This statement is untenable: In the case of the AGW situation, there is a reasonable theory as to how extra C-O2 could give rise to radiative imbalance and thus to global warming.

    Imbalance as opposed to what? To say that the earth is in radiative balance (whatever that is), as if it matters, is to trivialize the theory of AGW. We’ve been having warming and cooling for millions of years, on fairly regular schedules, with an ice age every 100k years or so. The mean estimate for doubling, which looks like a WAG to a critical eye, is 2 C. Since AGW didn’t cause that past warming and cooling, it should be sent to the cheap seats as an annoyance worth no further scientific time. There are clearly bigger fish to fry.

    On another note, when I asked you if your goal was to quantify the radiative effect of doubled CO2, I meant to get around to asking you: why bother? If you see that the climate system is dynamical and chaotic, then quantifying it is a purely academic exercise, with nothing useful to tell us. If a given forcing today causes “x”, next year “y”, and the year after “z”, etc., and x, y, and z are not related, who cares? If, on the other hand, you don’t believe it is truly chaotic, fine, but you are now operating in a faith-based (not science-based) belief system.

  574. Posted Mar 1, 2008 at 10:38 AM | Permalink

    In the context of this discussion, I reproduced the figure and circled a few things:

    DNS, were it ever run for a flow of this type, resolves all scales of motion in the graph. If done, the computations would reproduce this curve. These are the sorts of models I say “solve the NS”, in the sense of giving correct probability distribution functions, spectra, means, etc.

    (For full climate modeling, DNS takes more computer resources than are available in the world.) Climate models don’t do this. Weather models don’t do this.

    When I was discussing climate vs. weather with Tom V etc., one question was:

    If we ran these, could we be sure the individual realizations were correct? No. We can’t. No one has proven that. This means even DNS may never be good enough to truly predict weather. Never.

    However, if it could be done it would predict climate.

    This is why saying “We can’t predict weather because of the Navier Stokes”, doesn’t prove we can’t predict climate because of the Navier Stokes.

    But– with regard to the people who then bring up engineering models, advantages of engineering, etc.: DNS is not an ‘engineering model’ in the sense of being a model used to solve engineering applications. (A decent chunk of early development work was done by engineering faculty, generally mechanical engineers – but that doesn’t make it an ‘engineering model’. A lot of its validation is done in engineering flows.) It’s not a climate model. It’s a research tool.

    ===========
    LES (Large Eddy Simulation) resolves the structures to the left of some cut off line chosen by a modeler, but models stuff to the right.

    Depending on who/how/for what reason, one might expect to get good enough results by modeling stuff to the right of the green line, the blue line, or only the stuff in the red circle. The models for this stuff will tend to look like “turbulent dissipation” because, in particular, for very small scales of motion, that’s what it looks like.

    This does introduce parameterizations. In general, all parameterizations require validation. However, we know that LES, particularly with cut-offs further and further to the right, tends to be pretty good right out of the box. (Engineers will agree.)

    Question: are climate models LES-like? If yes, how far to the right is the cut-off for a given model? I have no idea, but these are valid questions to ask.

    =============
    RANS resolves only the mean flow and models all fluctuations. That means, pretty much, everything in that diagram is modeled. RANS has difficulties that are too numerous to discuss in blogs. RANS is used. It works for some flows, but breaks down, and needs lots and lots of validation to give results that extrapolate to other flows.

    Climate models may be sort of RANS-like. Or they may be LES-like, but with the cut-off so high that they are nearly RANS-like. I don’t know. When I ask people like Judy questions, I’m often trying to pinpoint this sort of thing to get a “gut feeling” for how likely things are to be fairly decent out of the box, or how… well, cr*ppy they might be. (Not that RANS is entirely cr*ppy… but… well. Over to you TomV! 🙂 )
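
    One way to see why the position of the LES cut-off matters: with a toy inertial-range spectrum E(k) = k^(-5/3) (the limits and numbers are invented for illustration), the fraction of kinetic energy resolved below a cutoff is analytic:

        # fraction of energy resolved below cutoff kc for E(k) = k^(-5/3),
        # using the antiderivative -(3/2) k^(-2/3); all numbers illustrative
        def resolved_fraction(kc, kmin=1.0, kmax=1.0e4):
            total = kmin ** (-2.0 / 3.0) - kmax ** (-2.0 / 3.0)
            part = kmin ** (-2.0 / 3.0) - kc ** (-2.0 / 3.0)
            return part / total

        for kc in (10.0, 100.0, 1000.0):
            print("cutoff %6.0f: %.1f%% of energy resolved"
                  % (kc, 100 * resolved_fraction(kc)))

    In this toy spectrum roughly 79% of the energy already sits below kc = 10, which is why modeling only what lies far to the right can work “out of the box”.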

  575. Posted Mar 1, 2008 at 10:39 AM | Permalink


    From Roger Barry’s book

  576. Posted Mar 1, 2008 at 12:24 PM | Permalink

    I started this last night while reading the several comments above in this thread about Navier-Stokes, DNS, turbulence, GCMs, and stuff. I was getting concerned about an issue that lucia addressed in #570 above: GCMs are not DNS. And while lucia has now clarified the situation, this comment will provide another outlook on the issues.

    The Basic Equations
    Much is known about the basic nature of the continuous form of the Navier-Stokes equations. Very much, in fact; hundreds of books have been written, along with an uncountable number of reports and papers.

    Nothing is known about the continuous system of PDEs plus ODEs plus algebraic equations that makes up the basic continuous equations for GCMs. I’ll go so far as to say that for some GCMs very few people even know what the complete system of continuous equations is. Chaotic response is a hypothesis that has been attached to GCM models/codes strictly by osmosis and extreme extrapolation from simple systems of ODEs about which much is known.

    Turbulent Fluid Flows
    Very much is known about the basic nature of turbulent fluid flows at all scales.

    Little, if anything, is known about the fundamental basic nature of ‘the climate’ as expressed in the GCM equation systems mentioned above, at any scale. The system of PDEs plus ODEs plus algebraic continuous equations has received almost no analysis. Generally, GCM results for a single scale, or at most a few very large spatial scales, are reported. Temporal scales are generally ignored. Additionally, it is a known and often-repeated fact that GCM models/codes were never intended to produce valid results at small spatial and temporal scales.

    Validation
    Numerical solutions of the Navier-Stokes equations have produced Validated results at almost all temporal and spatial scales of interest. The primary focus of solutions of the discrete approximations to the equations is very deep investigation to ensure that the calculated numbers are not artifacts of the numerical methods. Given that parameterizations and flow-field models/approximations are not needed, this is about the only aspect that needs investigation for potential problems.

    Numerical solutions of the GCM equations are generally ‘validated’ by comparison of a single solution meta-functional: The Global Average Temperature (The Totem with The Most Mojo ever in The Entire History of The Planet). Solution functionals, global functionals, and especially solution meta-functionals can easily be calculated by an enormous number of incorrect models/equations/codes/applications. This is not Validation. Equally important, Verification must always precede Validation.

    Model Error
    There are no model errors in DNS of the Navier-Stokes equations, insofar as the equations and BCs and ICs provide for existence and uniqueness of solutions.

    GCMs, on the other hand, are more nearly akin to rough approximate process models of a few important physical phenomena and processes. In this sense, GCM model equations are very much more like engineering approaches to turbulence modeling in that the flow field is modeled. This approach to modeling is in stark contrast to the fundamental Navier-Stokes equations for which the constitutive equations (models) are fluid properties. So long as flow field or processes are modeled and not fundamental properties of the fluid, model errors are always present. Flow-field and process modeling will always have limitations associated with them and extrapolation is to be avoided.

    Summary
    Based on these few matters, a compare/contrast exercise between Navier-Stokes equation solutions by DNS and GCMs is not possible.

    And all of this does not begin to touch on the very serious issues associated with computer software, relative to both Navier-Stokes solvers and GCM codes. The coding and numerical solution methods of any GCM code have yet to be Verified. The actual real-world-application order of the solution methods is unknown. Convergence is known and acknowledged to be unattainable. Verification must precede Validation; there are no exceptions.

    GCMs are not DNS in any sense whatsoever. GCMs are not even engineering approaches to turbulence closures. GCMs are not even fundamental models of climate processes; many significant physical phenomena and processes are represented by ad hoc, heuristic algebraic EWAGs that do not represent constitutive properties of the materials of interest.

    That the calculated numbers from GCMs represent the chaotic response of a complex, non-linear dynamical system has never been demonstrated. In this regard it is a mistake to invoke such behavior as a property of the physical climate system. In contrast, there is a large probability that the calculated numbers are merely the results of error-filled numerical solutions of error-containing, simplistic process models of a complex system. This possibility requires deep investigation, to the same degree as has been done for DNS solutions of the Navier-Stokes equations, prior to invoking ensemble averaging.
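
    For readers unfamiliar with what Verification means concretely, a minimal example of the kind of check being described: confirm the observed order of accuracy of a discretization against its theoretical order on a problem with an exact solution (a toy ODE here, obviously not a GCM):

        import numpy as np

        # forward Euler on dy/dt = -y, y(0) = 1; exact solution is exp(-t).
        # A first-order scheme should show its error halving when dt halves.
        def solve(dt, t_end=1.0):
            y = 1.0
            for _ in range(int(round(t_end / dt))):
                y += dt * (-y)
            return y

        exact = np.exp(-1.0)
        e1 = abs(solve(0.010) - exact)
        e2 = abs(solve(0.005) - exact)
        print("observed order =", np.log2(e1 / e2))  # should be close to 1

    If the observed order does not match the theoretical order, the calculated numbers are artifacts of the implementation, and Validation against data is premature.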

  577. Neal J. King
    Posted Mar 1, 2008 at 1:24 PM | Permalink

    #572, lucia:

    My point is that there is no particular reason, in the immediate future, to expect even the most data-circumspect climate scientists to abandon the sense that globally averaged temperature is on the increase, based on the last few globally averaged temperature measures.

    In fact, even the GAT is only one indicator. Trends in polar-ice area coverage and in worldwide glacial retreat, as well as other indices, point in this direction as well; zoological trends also hint at it, and I expect they will continue to do so into the future.

  578. Posted Mar 1, 2008 at 1:34 PM | Permalink

    Neal:

    My point is that there is no particular reason, in the immediate future, to expect even the most data-circumspect climate scientists to abandon the sense that globally averaged temperature is on the increase, based on the last few globally averaged temperature measures.

    But who are you rebutting? I agree.

    The only reason anyone would abandon the current consensus is if the trend changed in some way that is inconsistent with the theory. So far, that hypothetical trend hasn’t happened.

    I don’t understand what statement of mine you are trying to rebut.

    Dan Hughes,

    Numerical solutions of the Navier-Stokes equations have produced Validated results at almost all temporal and spatial scales of interest… this is about the only aspect that needs investigation for potential problems.

    This is what I mean by “DNS solves the NS”.

    GCMs are not DNS.

    Violent agreement.

    GCMs are not DNS in any senses what so ever.

    Violent agreement.

    GCMs, on the other hand, are more nearly akin to rough approximate process models of a few important physical phenomena and processes. In this sense, GCM model equations are very much more like engineering approaches to turbulence modeling in that the flow field is modeled….

    Glad to see we engineers are in violent agreement on this! GCMs are like what engineers call “engineering codes”. 🙂

    Based on these few matters, a compare/contrast exercise between Navier-Stokes equation solutions by DNS and GCMs is not possible.

    And that was not my intent. My intent is to discuss the idea that predicting climate is impossible per se because the N-S are intractable, or that predicting climate is impossible per se because weather predictions go awry a few hours after we start.

    That issue is the only place I know of where the existence of DNS is relevant. Because DNS works, we know that predicting climate is not impossible per se. If we went all mathy (and yes, that’s a joke), we would say:

    Predicting weather is not a necessary condition to predicting climate.

    This doesn’t mean climate models are accurate. It just means that proving we need weather balloons to keep weather predictions on track, or that we can’t predict individual weather trajectories, doesn’t mean we can’t predict climate.

    On balance, I’m not so sure I’d go as far in skewering GCMs as Dan does. But I want to be certain people understand: my point in bringing up DNS is absolutely not to say that “DNS works, therefore GCMs work”. That would be like saying “apples are fruits, so grass is green” or “apples are fruits, so grass is purple”. The fruitiness of apples is entirely unrelated to the color of grass.

    And the only two ways in which DNS is relevant to GCMs are:
    a) It is hypothetically possible to solve the NS in a way that could hypothetically permit us to predict climate. The intractability of the NS does not make predicting climate impossible per se. It could be done with DNS. (It might require less, but at a minimum, it could hypothetically be done.)

    b) GCMs are not DNS. The existence of DNS does not “bless” GCMs and propel what they are doing into the realm of accuracy.

    These two things are important in terms of the blog climate wars because:

    a) There are some who think that if they prove we can’t predict weather, that proves we can’t predict climate. This is wrong. We don’t have to predict weather to predict climate.

    b) There are others who think that because DNS shows you don’t have to predict weather to predict climate, the difficulties with randomness, or the intractability of the NS, go away. We hear coin-flipping arguments, with allusions to fair coins. This is wrong too: GCMs may not be “fair coins”. If not validated and verified, for all we know, they may be biased. You can flip till the cows come home; you’ll get the wrong answer.

  579. Neal J. King
    Posted Mar 1, 2008 at 1:53 PM | Permalink

    #574, Peter Thompson:

    Draw a Gaussian surface around the Earth, at a radius beyond the top of the atmosphere. Measure the net radiation from the Sun entering that spherical surface. Measure the net radiation leaving that spherical surface. Subtract the second number from the first. If the average over time of that number is 0, the system is in radiative balance. If the average is positive, the system is out of balance, and more radiant energy is entering the system than is leaving it. This concept is not based on some mystical or holistic view of the earth.

    – The period of averaging need not be millions of years. It would probably be meaningful even to do an average over a few minutes, I would think.
    – As far as I’ve seen, the generally accepted estimate for a 2X in C-O2 is 3.7 W/m^2; converting that into climate sensitivity is a hard problem that this whole thread has been about; and much more expert knowledge has been applied to it than that, as well. If you can dismiss it so easily with logical arguments, I’m sure that Nature will be excited about publishing it.
    – It’s a basic aspect of a dynamical system whether its internal energy is more-or-less stable or increasing/decreasing. Even chaotic systems with stable energy behave differently than chaotic systems with increasing energy. And, as you might also gather from lucia’s discussion in #531, the fact that a system is chaotic does not mean that it’s impossible to make predictions about it. One’s predictions will have to be statistical, rather than specific trajectories, however.

  580. Neal J. King
    Posted Mar 1, 2008 at 2:09 PM | Permalink

    #578, lucia:

    The original comment was directed to jae (#487).

  581. Posted Mar 1, 2008 at 2:17 PM | Permalink

    Neal–
    Ok. But you keep putting ‘lucia’ in lots of these, and I couldn’t ken. . .

  582. Neal J. King
    Posted Mar 1, 2008 at 2:31 PM | Permalink

    #582, lucia:

    My general practice is to specify the comment to which I am replying, both by name and #.

    So this note is a reply to you at #582; but this was a reply to my #578; but that was a reply to your #572; in which you claimed that I misrepresented you in #565 (which was the rebuttal to jae (#487)).

  583. Posted Mar 1, 2008 at 2:37 PM | Permalink

    Neal– #565 lists my #472, not just jae. In fact, it cites several people all in one lump.

    I don’t see how your #565 relates to anything I actually said in #472. But I commented on your #565, which seemed intended to rebut something I said, somewhere.

    Threads get confusing. . .

  584. Gerald Browning
    Posted Mar 1, 2008 at 2:55 PM | Permalink

    gb (#557),

    I wish that people would quit misquoting me. I never said that weather is predictable. In fact, I said quite the opposite. The weather models are so bad that unless new observational data (winds) are inserted every 6-12 hours, the arbitrary tunings destroy the “accuracy” of the model relative to the observations. Please read Sylvie Gravel’s manuscript on this site to understand the updating process in more detail.

    Jerry

  585. Neal J. King
    Posted Mar 1, 2008 at 3:03 PM | Permalink

    In #472, you asked me “Which theory?”.

    In #565, I addressed that point and a few related points. In general, I felt that the coherent response to related points was to tie them together; and to reference all the comments that inspired this response, rather than either write a separate response to each person, or to omit a response.

    I guess this could have been avoided if you had not interpreted the reference to your question in #565 as something requiring specific rebuttal.

  586. Gerald Browning
    Posted Mar 1, 2008 at 3:19 PM | Permalink

    Judith Curry (#571),

    There is not sufficient observational data to prove that the spatial spectrum is disjoint, especially at the smaller scales of motion. In fact, the Jablonowski manuscript and the runs I made on the Exponential Growth in Physical Systems thread show just how fast the cascade of enstrophy occurs, i.e. in a matter of hours. There is no indication in those runs of discrete jumps in the spectrum. Nor is there any mathematical proof that the smaller scales have no impact on the larger scales over longer periods of time. The arguments you are citing have been around a long time in meteorology, but they are wishful thinking. In fact, in our 2002 manuscript we carefully analyze how the change from a balanced large-scale flow to the next scale down (the mesoscale) can occur in a continuous manner, depending on the nature of the total heating (condensational heating and evaporative cooling). There needs to be less hand waving in meteorology.

    Also, you always seem to manage to jump in and out of a thread instead of resolving the difficult questions with any mathematics. How about answering the mathematical questions I asked on the Jablonowski thread? If you answer those, maybe we can all have more (or less) confidence in a climate model.

    Jerry

  587. Posted Mar 1, 2008 at 3:28 PM | Permalink

    Oh–Heh. I wasn’t rebutting.

    I read the lucia/Lubos comparison and thought you were suggesting we contradicted each other in a particular way. I thought I was clarifying. Clearly, I failed!

    Usually, you and I don’t end up in great disagreement in the end. After a few posts are exchanged, points get clarified and there is some overlap. So I got confused when things seemed to be heading toward “Huh?”

  588. Gerald Browning
    Posted Mar 1, 2008 at 3:32 PM | Permalink

    All,

    If you want a really good text on atmospheric data, I would suggest Roger Dasley’s book. On page 7 he shows a spatial spectrum plot of kinetic energy, but then honestly states, “The observed spectra are not very reliable for the shorter scales (dashed line) but seem to fall somewhere between k^(-2) and k^(-3).”

    And I would add that, based on Anthony Watts’ studies, I doubt that there is sufficient observational data even for the global scale.

    Jerry

  589. Gerald Browning
    Posted Mar 1, 2008 at 3:33 PM | Permalink

    I mistyped Daley.
    Sorry about that.

    Jerry

  590. Posted Mar 1, 2008 at 3:48 PM | Permalink

    TAC,

    sorry about that. I’ll just send Steve’s last slide to the author (or maybe the full ppt).

    AR1 and random walks (both are special cases of ARIMA models) do not seem to provide good models for climate processes.

    I know, just an example that I’ve used before.

    Koutsoyiannis, among others, has written a lot on this

    Yes yes, I know.

  591. Kenneth Fritsch
    Posted Mar 1, 2008 at 4:00 PM | Permalink

    http://www.climateaudit.org/?p=2708#comment-219485

    Dan Hughes, thanks for the explanations and insights, which to this layperson come across as on point and crystal clear. I would think that your post should become the focus of future discussions going forward and draw some attention away from the bickering and side issues.

  592. maksimovich
    Posted Mar 1, 2008 at 5:13 PM | Permalink

    The Roger Barry graph, taken from (P and O 1992) and after (Vinnichenko 1970), is a first-order structure based on (Lorenz 1955).

    Oort (1964) found that K = 1.5 x 10^6 J/m^2 is only 25% of the available kinetic energy.

  593. Sam Urbinto
    Posted Mar 1, 2008 at 5:44 PM | Permalink

    Neal:

    Whether we talk about an increase in global mean temperature anomaly or in the global average temperature, the average of the temperature, over the surface of the earth, is going up, isn’t it? I fail to see your distinction.

    So what is the temperature in Boulder, Colorado right now? 72? 77.4? 64?

    The distinction is this: if your bowl of soup is at 180 F and the thawing shrimp is at 30 F, is the temperature 105 F? If your feet are in ice and your head is in steam, are you a comfy 50 C?

    This anomaly is not a measure of the surface of the Earth. Hardly. It’s a conglomeration of air samplings and sea-surface readings. The bottom line: even if that all meant something, do you care that the globe is at 15 C if it’s 35 C or 0 C outside?

    The anomaly is a derived, adjusted average of averages of averages of averages of spot checks. Even if all the readings and adjustments are perfect, it is not “the temperature” of the Earth, any more than an accurate, known-quality average of the temperatures of a house’s garage, kitchen, attic, one north-facing window, one east-facing window, and one bedroom is “the temperature” of the house. And it is much less indicative of a house across the street or across town.

  594. Neal J. King
    Posted Mar 1, 2008 at 6:38 PM | Permalink

    #594, Sam Urbinto:

    Neither I nor anyone else I know has referred to anything as “the temperature of the Earth”; although I have seen a few skeptics try to skewer people on this scarecrow. However, I have not found a precise definition of the temperature anomaly, although there is usually a reference to some specific time frame.

    An average over temperature values does have some validity as an index, because:
    – To the extent that you can neglect phase transitions (and thus latent heat), a temperature increase is linearly related to an energy increase;
    – Energy increases throughout a system add up linearly;
    – So an increase in the temperature averaged over this system is an indication of an increase in internal energy; actually an underestimate.

    It’s not perfect; but few indices of complex systems are perfect.

    So if you look at a big terrarium, with little ponds, rocks, plants, bits of wood, and an overhead light, the temperature will also vary from point to point; an average temperature can be calculated by measuring temperature in a 3D grid, or over the ground, or some reasonable subset. Now, over a week, if that average were to increase a couple of degrees, I would argue that it would be in the best interests of the critters that have to live there to investigate why that is happening, and to stop this trend; else you will start to have sick critters. The trend itself is more meaningful than the exact definition of the index, because there is an important message there for the owner of the terrarium, no matter what the precise meaning of the index.

  595. Craig Loehle
    Posted Mar 1, 2008 at 7:51 PM | Permalink

    Dan Hughes #577: Excellent summary, though I am afraid that many in the climate modelling community do not even understand what you are saying. To give an analogy, I recently reviewed a ms of a model of forest dynamics that was validated by running a 10-year simulation from an initial forest inventory and comparing it to the inventory taken 10 years later. BUT: a model that said nothing happened would match pretty well, because the forest doesn’t change much in 10 years. So the test was useless, and the predictive power of the model is completely unknown.

  596. Gerald Browning
    Posted Mar 1, 2008 at 8:02 PM | Permalink

    I have a question that I would like answered by someone.

    I believe that the average temperature is mathematically defined to be the integral of the temperature over the area or volume of interest divided by the integral of the unit function over the area or volume of interest, i.e.

    \bar{T} = \frac{\int_V T \, dV}{\int_V dV}

    where V is the domain of interest. How can that mean be determined when there are insufficient observational stations to provide an accurate approximation of the integrals, even if the observations were perfect at those stations?

    Jerry

  597. Posted Mar 1, 2008 at 8:35 PM | Permalink

    Jerry Browning (#598 — maybe this thread has gone on too long!) writes,

    How can that mean be determined when there are insufficient observational stations to provide an accurate approximation of the integrals even if the observations were perfect at those stations?

    Good question, but as long as there is sufficient spatial autocorrelation in the local temperatures about the global mean, i.e. positive nearby correlation, a finite number of spatially distributed readings can give a good estimate of the global average.

    CA regular Hans Erren, at http://www.climateaudit.org/?p=2711#comment-210492, shows that there is indeed a strong positive spatial autocorrelation to temperature anomalies, so that the concept of global temperature may not be completely vacuous.

    Presumably his graphs are for monthly or annual anomalies. Daily anomalies must decay at a much faster rate with respect to distance.

  598. Bernie
    Posted Mar 2, 2008 at 12:09 AM | Permalink

    Craig #596
    But the physics, biology and botany are correct!! 😉

  599. Gerald Browning
    Posted Mar 2, 2008 at 12:30 AM | Permalink

    Hu (#598),

    I am not buying this explanation. I do agree that pressure is one of the smoother fields in mathematical terms, i.e. large-scale pressure is the inverse Laplacian of the vorticity above the lower turbulent boundary layer in the midlatitudes, so it is much smoother than the vorticity in that area (that is one reason the geopotential at 500 mb is always shown in forecast plots and in error computations, but that is not very convincing). But that still is not a proof that the sparse observational system is sufficient to accurately approximate the necessary integrals for the mean temperature, especially at the surface. Are there any other suggestions?

    Jerry

  600. Gerald Browning
    Posted Mar 2, 2008 at 12:40 AM | Permalink

    Hu (#598),

    I also noticed that Hans mentions that the correlation is much worse near the equator, where the balance equation involves all of the terms, i.e. the divergence is important near the equator. And given how crucial the equatorial region is to determining the mean temperature, I think that alone raises doubts about the approximation of the integrals.

    Jerry

  601. Posted Mar 2, 2008 at 2:25 AM | Permalink

    #595

    How can that mean be determined when there are insufficient observational stations to provide an accurate approximation of the integrals even if the observations were perfect at those stations?

    See Shen et al ( 700 KB ),

    Globally averaged temperature is defined as

    \bar{T}_{\tau}(t) = \frac{1}{4\pi}\int_{4\pi} T_{\tau}(\hat{r}, t) \, d\Omega

    where

    T_{\tau}(\hat{r}, t) = \frac{1}{\tau}\int_{t-\tau/2}^{t+\tau/2} T(\hat{r}, t') \, dt'

    is the \tau -length time average, \hat{r} expresses location, and t time. This average temperature is estimated using a sparse station network: replace the integrals with sums, and rely on spatial correlation so that you won’t need millions of stations. Shen concludes that 60 will do (in the abstract of the 98 paper).
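
    A toy numerical check of why this can work at all (my own invented field, grid, and smoothing; this is not Shen’s method, which uses EOFs and optimal weights):

        import numpy as np

        rng = np.random.default_rng(3)
        nlat, nlon = 36, 72                      # a 5-degree grid
        lat = np.linspace(-87.5, 87.5, nlat)
        w = np.cos(np.radians(lat))              # area weight ~ cos(latitude)

        def rms_error(n_stations, trials=200):
            errs = []
            for _ in range(trials):
                f = rng.standard_normal((nlat, nlon))
                for _ in range(8):               # crude smoothing -> spatial corr.
                    f = (f + np.roll(f, 1, 0) + np.roll(f, -1, 0)
                         + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 5.0
                truth = np.average(f.mean(axis=1), weights=w)
                i = rng.integers(0, nlat, n_stations)
                j = rng.integers(0, nlon, n_stations)
                errs.append(np.average(f[i, j], weights=w[i]) - truth)
            return float(np.sqrt(np.mean(np.square(errs))))

        for n in (10, 60, 500):
            print("%4d stations: rms error %.3f" % (n, rms_error(n)))

    With strong spatial correlation the error falls quickly at first and then saturates: beyond a few dozen reasonably distributed stations, additional stations add little, which is the intuition behind the “60 will do” conclusion.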

  602. Posted Mar 2, 2008 at 6:44 AM | Permalink

    @Bernie

    Craig #596
    But the physics, biology and botany are correct!! 😉

    Or, as the more precise person might say:

    But the physics, biology and botany are not inconsistent with what we know about physics, biology and botany!

  603. Scott-in-WA
    Posted Mar 2, 2008 at 7:06 AM | Permalink

    UC #602: … This average temperature is estimated using a sparse station network: replace the integrals with sums, and rely on spatial correlation so that you won’t need millions of stations. Shen concludes that 60 will do (in the abstract of the 98 paper).

    A further question: If that philosophy is appropriate, then should these 60 stations be all that are needed to intercept, capture, and process the all-important climate signals?

    Another related question: are these all-important climate signals — whatever they are and however they are defined, mathematically or otherwise — being estimated as well?

  604. Ron Cram
    Posted Mar 2, 2008 at 7:52 AM | Permalink

    re: 579

    lucia,
    You expressed some interesting thoughts about predicting climate. I am not sure I agree with you completely. Roger Pielke has opined on the issue as well. You might find his thoughts interesting here and here.

    The claim by some that climate prediction is easier than weather prediction can only be true if CO2 is the sole driver of climate. Since we had very rapid global warming prior to WW II (1910 to 1940), this is clearly not so.

  605. Posted Mar 2, 2008 at 8:21 AM | Permalink

    Scott,

    Exact quote, Shen98 (500 KB) Intro:

    Using EOFs to take account of spatial inhomogeneity, Shen et al. (1994) developed such an optimal method for the average on the entire globe. They concluded that about 60 well-distributed stations can yield a global average annual mean surface air temperature with an error less than 10% compared with the natural variability.

    I don’t know how natural variability is defined, but if all variability has a std of 0.25 K, I assume that the sampling error with only 60 stations would be less than 0.025 K (with an imaginary ‘Earth full of stations’ solution as the reference).

  606. Posted Mar 2, 2008 at 9:45 AM | Permalink

    Ron Cram–
    I don’t see any major contradiction between what Roger Sr. says and what I say. I simply say that some of the difficulties in weather prediction are different from the difficulties in climate prediction. (Some difficulties are shared.)

    The fact that some difficulties affect weather prediction more than climate prediction (and vice versa) means that some particular difficulties in weather prediction may not exist in climate models (and vice versa). That doesn’t magically make climate modeling easier than weather modeling. I’ve never claimed that, and in fact I think climate modeling is very difficult. It is particularly difficult to predict future changes that represent very small changes in the average temperature of the planet.

    When reading what I wrote, you need to bear in mind that my comments were in response to a specific discussion with TomV. We were discussing the intractability of solving the Navier-Stokes, and the fact that we can’t prove it possible to predict individual trajectories. That is a problem for weather models. It’s not necessarily a problem for climate models – or if it is, the problem is not necessarily rooted in the Navier-Stokes.

    In contrast, Dan Hughes’s comments describe some very real problems for climate models. These real problems can’t be waved away by claims that one runs ensembles and averages. Some of those problems are very difficult – but affect weather prediction somewhat less.

    So: the sources of uncertainty in predicting weather and climate are not entirely the same.

  607. Ron Cram
    Posted Mar 2, 2008 at 10:06 AM | Permalink

    re: 607

    lucia,

    I agree the uncertainties in predicting weather and climate are not the same. Pielke is of the opinion that predicting climate one hundred years in the future is much more difficult than predicting weather two weeks in the future. Predicting weather two weeks out is pretty much impossible. I submit that the number of factors affecting climate is not even known.

    Have you had a chance to read Orrin Pilkey’s book “Useless Arithmetic?”

  608. Sam Urbinto
    Posted Mar 2, 2008 at 11:09 AM | Permalink

    Neal: Perhaps a good point about the terrarium, but not for the idea that there’s a global temperature; rather, against it. The analogy that a small glass encasement’s air temperature could apply to the entire dynamic climate system (even just the atmosphere portion) is not a good one, I don’t think. Much like a fish tank: between the thermometer at the top right back, fairly close to the heating element, and the bottom left front, far away from it, there may be a slight difference, but it’s water inside a glass casing, not the ocean surface versus the depths at even one point, much less an average of all the water area of the planet. And so on and so forth. The temperature in the center of a 20-gallon fish tank is one thing. The surface of the ocean is another.

    Scenario. I put an MMTS in the middle of the living room, halfway between the floor and ceiling. I put another one outside in the middle of my grass-covered backyard, which I do not water but keep mowed down to 2″.

    I track both of these every ten minutes, down to 0.01 °C, and calibrate them every week. I weight the readings of each one by the number of observations at each temperature per day. Every month, I take a mean of those. Then I get the mean of the year based upon the months. I do that for 60 years. I then use the mean of the center 20 years as the base against which each of the 720 individual months is expressed as an anomaly.
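
    (In code, that bookkeeping looks roughly like the minimal sketch below, with fake numbers, and skipping the observation-count weighting for brevity: readings go to monthly means, the centre 20 of the 60 years define the per-calendar-month baseline, and the 720 monthly anomalies are offsets from it.)

        import numpy as np

        def monthly_anomalies(monthly_means, base=slice(20, 40)):
            # Baseline: per-calendar-month mean over the centre 20 of 60 years.
            baseline = monthly_means[base].mean(axis=0)
            return monthly_means - baseline          # 60 x 12 = 720 monthly anomalies

        rng = np.random.default_rng(1)
        # Fake readings: 60 years x 12 months x (30 days x 144 ten-minute samples).
        samples = 15 + 10 * rng.standard_normal((60, 12, 30 * 144))
        anoms = monthly_anomalies(samples.mean(axis=2))
        print(anoms.shape)                           # (60, 12)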

    Does the inside reading tell me the temperature of the house, and does the outside reading tell me the temperature of the city?

  609. Kenneth Fritsch
    Posted Mar 2, 2008 at 11:53 AM | Permalink

    Re: http://www.climateaudit.org/?p=2708#comment-219892

    This average temperature is estimated using a sparse station network: replace integrals with sums, and make sure there’s spatial correlation so that you won’t need millions of stations. Shen concludes that 60 will do (in the abstract of the 98 paper).

    How big an assumption is spatial correlation, and how would it be determined without accurate measurements from a huge number of stations? The US makes up approximately 2% of the earth’s surface, which I assume means it would be covered by roughly one of these hypothetical stations. I have to assume that the Shen et al. authors are competent in the linear algebra required to do the interpolations, but it appears that the end result, an accurate measure of the global average temperature, would have little or no value in tracking the localized effects of GW (the important measures for where people live most of the time, except perhaps for those like the globe-hopping environmentalists bent on saving the world).

  610. Scott-in-WA
    Posted Mar 2, 2008 at 12:52 PM | Permalink

    UC #606

    I remember reading the Shen paper some time ago, and upon rereading it, I was reminded once more of the problems I faced thirty years ago in the mining industry, when I had to perform an end-of-life feasibility design for a proposed hard rock gold & silver mine while having only limited drilling/sampling data, far less than what past experience told us would be necessary for that type of deposit and that type of economic feasibility analysis.

    The Big Boss pulled my copy of Geostatistical Ore Reserve Estimation (1977) off my shelf and said to me, “We don’t need more drilling and the great expense which goes with it. What we need is for you to start applying the techniques that are in this book. That will give us the answers that I’m willing to pay for, recognizing that with the data we have, some number of uncertainties are unresolvable for all practical purposes.”

    Here was someone who understood the issue: without either more drilling or actually mining the deposit, we wouldn’t get the kind of quality information that would be most valuable for us to have. These facts were explicitly documented in our internal analysis, and the Big Boss took formal responsibility for his decisions.

    This is not what we see in climate science, of course, where everything—science, process, and data alike—is publicly sold as being settled for all practical purposes.

  611. Posted Mar 2, 2008 at 1:21 PM | Permalink

    #610, 611

    I share your concerns. Shen94 uses UK monthly data, and the ‘entire globe’ term comes into play in the Shen98 intro. I haven’t seen the average of 60 well-distributed stations anywhere. Global annual data probably looks something like this before the anomaly method is applied. Add some 1/f elements on both the spatial and temporal side, and I see some problems. I’d buy the science-is-settled argument, but I have only recently found out what MBH9x really is!

    This gets a bit OT, but phpBB3 has a topic, Surface Record, where we amateurs can wonder about the accuracy of the global data sets 😉

  612. Ron Cram
    Posted Mar 2, 2008 at 2:03 PM | Permalink

    re: 607

    lucia,

    Please forgive me if my last comment seemed snippy. I was walking out of the door when I decided I should take two minutes and respond. I just reread what I wrote and it appears to me my reply could be misinterpreted as having a certain “tone.” This was not intentional. I now wish I had taken more time to shape my reply.

    Let me lay out some facts. I respect you a great deal. I am not trying to win an argument. I know if I were to challenge you – brain cell for brain cell – I would lose because I am completely out of my class. I come to ClimateAudit mainly to read Steve McIntyre but am completely thrilled whenever you or Roy Spencer post. There are several other bright posters here, but you and Spencer are right at the top of my list.

    It is my belief that I may have been interested in this topic a little longer than you have and therefore it is possible I have read relevant literature you have not read. I mention certain articles or web pages on occasion in case you are interested in reading them sometime. Please do not think I am trying to lecture you or give you a homework assignment. I understand we are all busy and you probably already have a lengthy reading list. But also know I would not mention an article if I did not think it was important.

    I think of you as an honest seeker after truth. Because of that I want to make sure you have access to certain information. Please consider me an asset in your search for truth and not some kind of opponent. I may or may not agree with you on a particular point, but I will always respect you.

    Fair enough?

  613. Neal J. King
    Posted Mar 2, 2008 at 2:42 PM | Permalink

    #609, Sam Urbinto:

    When I average the temperature, what is of interest is not the resulting value, but the trend over time, which is really just the difference from a reference value.

    In other words:
    – Let T(t,x) = the temperature at time t, position x.

    – Let Tr(x) = the reference-value for temperature at position x. The reference-value may be an average over a specified time interval, at position x.

    – Then I guess what you mean by the anomaly would be a(t,x) = T(t,x) – Tr(x).

    Thus the spatially averaged anomaly would be [a(t,x)] = [T(t,x)] – [Tr(x)],
    where the operation [] = (integral over x)/(range of x)
    (Normally I would write this with angle brackets, but the HTML reader swallows whatever I put between them.)

    So the globally averaged temperature (GAT) is [T(t,x)],
    which is [a(t,x)] + [Tr(x)]

    So I guess your point is that [T(t,x)] does not have a direct physical meaning. But my point is that if one is interested in trends and thus in changes, then:
    delta-[T(t,x)] = delta-[a(t,x)],
    since by definition [Tr(x)] is a constant; so the trends of the two are the same.

    So even if I take the global average of the temperature, when I consider its trend over time, I am not asserting any interest in the actual value, but in the difference from the reference-value. I guess what you are pointing out is that it would be conceptually more economical not to have this constant [Tr(x)], since it doesn’t have a significance other than as a starting point.
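
    (Since averaging and differencing are both linear, the identity can be checked with a toy example; made-up numbers below: the mean of the deltas equals the delta of the means, so the two trends are identical.)

        import numpy as np

        rng = np.random.default_rng(7)
        T = rng.uniform(-30.0, 40.0, 1000)    # temperatures at 1000 points
        Tr = rng.uniform(-30.0, 40.0, 1000)   # reference values at the same points

        # [a] = [T] - [Tr]: spatial averaging commutes with subtracting Tr.
        assert np.isclose((T - Tr).mean(), T.mean() - Tr.mean())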

  614. Posted Mar 2, 2008 at 3:01 PM | Permalink

    Ron Cram

    Please forgive me if my last comment seemed snippy.

    No explanation required.

    I didn’t think you were being snippy. In fact, I found the article interesting– but not contradicting what I think I said.

    I tend to type long answers– to explain what I mean. That just means I tend toward long-windedness– not that I thought someone was snippy.

  615. Scott-in-WA
    Posted Mar 2, 2008 at 3:06 PM | Permalink

    #614 Neal King: So even if I take the global average of the temperature, when I consider its trend over time, I am not asserting any interest in the actual value, but in the difference from the reference-value. I guess what you are pointing out is that it would be conceptually more economical not to have this constant [Tr(x)], since it doesn’t have a significance other than as a starting point.

    Excuse me, isn’t the future absolute value of the world’s average global temperature being used right now as an indicator of the potential environmental and economic damage which could result from an increase in said average temperature?

    The potential environmental and economic damage is what makes the question of 2.5 C warming for 2xCO2 an issue, does it not?

  616. Neal J. King
    Posted Mar 2, 2008 at 3:40 PM | Permalink

    #616, Scott-in-WA:

    As far as I know, the indicator that people talk about is the delta. What you mention is a 2.5-C increase: a delta.

    So whether you want to talk about a 2.5-C delta in global average temperature or a 2.5-C delta in the average temperature anomaly, it amounts to the same thing. The important thing is the change.

  617. Ron Cram
    Posted Mar 2, 2008 at 4:36 PM | Permalink

    re: 615
    lucia,

    I was referring to my comment in 608 referring you to Pilkey’s book. Since you had not yet responded to 608, you could not have said anything to make me think you were offended. I simply reread my comment and thought I sounded snippy even though that was not my intention. Anyway, enough of that.

    Pilkey’s book is an interesting read. He is impressed with the IPCC and likes the fact that they discuss the scientific uncertainty of their predictions, but he bemoans the fact that the uncertainty does not seem to be communicated to policymakers or the public. Pilkey is more familiar with environmental issues around the coastline than he is with global warming. Computer models of the coastline have never performed well.

  618. Sam Urbinto
    Posted Mar 2, 2008 at 4:49 PM | Permalink

    Neal:

    So whether you want to talk about a 2.5-C delta in global average temperature or a 2.5-C delta in the average temperature anomaly, it amounts to the same thing. The important thing is the change.

    The global average temperature would be a point value. It’s like starting a washer on hot with the water heater set to 140 F: at first the water in the washer is cold, then it’s warmer and warmer, and some time after agitation starts we hit some settled temperature. The anomaly, by contrast, is a value derived over time: even at one location it is a mean of days averaged into a mean of 30, and if you talk about a year, it’s a mean of 12 of those.

    Just a “what if”. Nothing physical. An eighty-thousand-gallon pool being at 115 F on the top doesn’t make me warmer if I dive down to the bottom and it’s 35 F, passing through various temperature gradients, even if the mean temperature is 75 F. And that’s just one pool. What does it tell you about the average temperature of the pool next door that’s frozen, or the average of the two?

    “He only got shot 92 times, but none of them alone was life threatening.” doesn’t make the widow very happy. Well, unless she disliked the guy.

    The point is that the scale we’re dealing with makes less and less sense as we go up the levels; 1 cup of water is not 20 gallons which is not the Gulf of Mexico.

    “But officer, my average speed in the last 3 hours was only 55, how can you give me a ticket for going 95?”

    So even if I take the global average of the temperature, when I consider its trend over time, I am not asserting any interest in the actual value, but in the difference from the reference-value. I guess what you are pointing out is that it would be conceptually more economical not to have this constant [Tr(x)], since it doesn’t have a significance other than as a starting point.

    I fully understand the concept of the trend itself; all I am saying is that it is not “the temperature”. I don’t care about the actual value, either, but I am questioning the concept of meaning in the reference value, as well as how it is measured and how it is derived. The simple answer is that if I measure 10 spots in a house and take the mean, it may give me an idea whether the environment is more like a working freezer than a working oven. But it doesn’t tell me what the total energy levels are, nor does it tell me whether it will freeze at point A versus boil at point B, unless I know the specifics.

    Over time, the average going up or down as a delta may tell me if things are getting warmer or colder, but it does me no good if the measurements are not complete, not indicative of the area, or cover too large an area.

    If I know the wind speed is an average of 1 MPH from the South from 20 locations nearby, it does me no good while trying to light a cigar where it’s 30 MPH from the North, nor does it help if where I am is calm but currently raining.

  619. Neal J. King
    Posted Mar 2, 2008 at 5:13 PM | Permalink

    #619, Sam Urbinto:

    When I say, “So even if I take the global average of the temperature”, the temperature I am talking about is the temperature as a function of space, at a given time: T(t,x). This is well-defined, as a scalar field; and the GAT is [T(t,x)].

    The significance of global climate change can be meaningfully discussed in terms of delta-[T(t,x)], as an index.

    You seem to want to interpret what I am saying in ways that don’t match how I am using the terms. What I mean is clearly explained by the equations of #614. If you have a problem with the equations, please explain that. Otherwise, I am not really interested in further nitpicking over wording: I’m explaining what I’m talking about, not trying to get your signature on a document.

  620. Posted Mar 2, 2008 at 5:43 PM | Permalink

    @Ron– In that case, you might be happy to know I’m planning to get the book from the library tomorrow.. 🙂

  621. Gerald Browning
    Posted Mar 2, 2008 at 6:06 PM | Permalink

    UC (#602),

    For numerical examples, Shen et al. use U.K. data sets that are on grids of 5 degrees latitude by 5 degrees longitude. Without looking at the references for what the U.K. data means, I would assume that they are so-called reanalysis-type data, i.e. a combination of observational data and model information used to interpolate irregularly spaced observational data to a regular grid.

    My first thought was that such data are not accurate at the smaller scales of motion in the midlatitudes (neither the smaller scales of vorticity nor of heating are well known in the midlatitudes), and the observational data (winds and heating) are extremely sparse in equatorial regions. In the latter case almost all of the information is derived from model parameterizations that are known to be inaccurate. Thus in both of these cases the reanalysis data are suspect.

    My second observation is that the initial resolution chosen is very crude and would tend to favor large scales, since interpolation over long distances is enhanced by the choice of resolution. Then, to even further enhance this selection of the very largest scales, the authors only use data from T11 and T15 spectral truncations, i.e. effectively even larger wavelengths.

    Why would the authors not use data from the very highest resolutions available from ECMWF and try their proposed scheme on those? Could it be that there the scheme would not work?

    All of these choices lead me to be very skeptical of these results. Is there a rational argument I am missing for these peculiar choices?

    Jerry

  622. Ron Cram
    Posted Mar 2, 2008 at 6:43 PM | Permalink

    lucia,

    Yes, I am happy you plan to look at it! I look forward to your comments on it, if you choose to comment.

  623. Gerald Browning
    Posted Mar 2, 2008 at 11:42 PM | Permalink

    UC (#602)

    Here is an example of my previous concerns.
    Here is an example of my previous concerns. Consider a straight line. It can be accurately determined by two points any distance apart. That does not mean that two points will suffice for more complicated functional variation.

    Jerry

  624. Tom Vonk
    Posted Mar 3, 2008 at 9:50 AM | Permalink

    Lucia # 607

    When reading what I wrote, you need to bear in mind that my comments were in response to a specific discussion with TomV. We were discussing the intractability of solving the Navier-Stokes equations, and the fact that we can’t prove it possible to predict individual trajectories. That is a problem for weather models. It’s not necessarily a problem for climate models– or if it is, the problem is not necessarily rooted in the Navier-Stokes equations

    Actually I also like the discussion style on CA, because the long, argued posts are preferable to the one-line comments on other blogs. However, it may be that even long posts don’t prevent misunderstandings. The motivation of my first post was neither to say that predicting weather is a necessary condition for “predicting” climate, defined as spatial and temporal averages of the same parameters as weather over some intervals, nor that the problem is necessarily rooted in N-S. That’s why I specified in the second post that for me the heart of the matter is the question of convergence and predictability. N-S is for me merely a convenient and relevant example of convergence and predictability problems, without needing to go “all mathy”.

    That is why I mentioned the Rayleigh-Taylor instability, which is a very good example and not the only one. It happens to be governed by N-S, but that is not necessary; it could have been any other non-linear system governed by another set of PDEs. DNS fails there, and that is why I do not consider DNS a universal method that “solves N-S”, even in cases where the dimensions are very small. The root question of predictability can’t be answered by DNS. Or, better said, one has to specify very carefully WHAT we are trying to predict, and that is missing in most posts dealing with the climate. If that is not specified, then we might be talking about very different things.

    In the specific case of N-S (and I stress one more time that N-S is only a convenient illustration of the problems that Dan Hughes has also rightly exposed in his post), everybody agrees that it can be solved neither analytically nor numerically (DNS or otherwise). By a solution I mean giving T, V, P, Rho as functions of (x,y,z,t). So here is a point everybody agrees on: we don’t know the functions that solve N-S for given IC/BC, and we don’t even know whether such functions exist. Now this true statement applies exactly to what it says it applies to. It follows, and this is also a trivially true statement, that an integral of an unknown function is also unknown, unless we have an independent relation explicitly allowing us to calculate this integral. In the case of N-S this independent relation must be different from energy and momentum conservation, because those are already covered by N-S. Such an independent relation doesn’t exist.

    Is that all there is? No. We have the specific case of energy. Energy going as V², it has a more tractable form than V, because it is a strictly positive scalar instead of a vector. On top of that, it is conserved. Of course it is a very degenerate form of V, because I lose almost all the information about V by squaring it. Now, everybody who has played a little with Fourier transforms applied to N-S knows that, because of this simplification, there are many statements that can be made about the function V²(x,y,z,t); if there are not 10,000 papers about it, then there are 100,000. Spectra of energy dissipation can be estimated, and there are many papers quoted by Jerry that do exactly that. It is even possible to prove statements about the properties of V² while ignoring most properties of V. It is this feature that allows engineers to get answers to the questions they ask, because they ask energy questions. It is also this feature that allows one to say that DNS “solves” N-S: whatever velocity field is calculated, it always obeys the right restrictions on V². Now, did it really “solve” something? Either the system admits a unique solution, and then obviously DNS or any other method did NOT find that unique solution; or the system admits an infinity of solutions, and then DNS might have found some combination of those solutions. In both cases we will be able to make valid statements about energy, and in both cases we will remain unable to make any accurate predictions about the velocity field, only about its norm.

    What is necessary is to explain how the jump can be made from unknown and unpredictable functions to some form of the same functions that would exhibit some, yet to be defined, form of predictability. We have seen that it works for V² but doesn’t work for V. We will never be able to predict the velocity field of cigarette smoke, neither instantaneously nor on average (not that the velocity average makes much sense in this particular case anyway). Pressure and temperature are equivalently extremely local parameters. For them there are no conservation laws and no conserved functionals; there is therefore no theoretical possibility to derive or bound temperature and pressure time averages in a way similar to what was done with V². However, as Jerry said, pressure is a sympathetic, smooth, non-vectorial variable; this may simplify the case over short times (and indeed does, in engineering simulations). Over long times there exists no relation featuring temperature or pressure time averages, which therefore remain unpredictable.

    This shows that the velocity fields, temperatures and pressures are predictable neither as such nor as temporal averages. V² and related quantities, as well as (in a much weaker form) the vorticity, can be both bounded and predictable in some form.

    As predictability in a strictly physical sense is not given, is it possible to make the jump to probabilities? I suppose that everybody realizes that giving the probability of an event is not the same thing as predicting the event, and that we very significantly change our paradigm with respect to what precedes. There are three ways to introduce probabilities.
    – The first can be dismissed fast, because it is self-contradictory. It consists of saying that the dynamical parameters are random with known probability distributions. That contradicts the use of PDE systems, which say exactly the contrary, namely that the evolution of the parameters is strictly deterministic.
    – I mentioned the second in the frame of the Rayleigh-Taylor instability. Probabilities appear naturally when considering the phenomena at the molecular scale. Here, however, in order to avoid the self-contradiction, the price to pay is to drop all deterministic PDEs. Such a theory, despite its obvious superiority, will not be developed in any foreseeable future, and if it is one day, it will probably be numerically intractable.
    – The third is to assume that the phenomenon is chaotic. The advantage is that chaos theory is not interested in individual trajectories, which remain unpredictable, but in attractors, which are topological invariants in phase space. Strictly speaking it is not exactly probabilities that get introduced but the metrics on the attractors. The distance between individual trajectories is bounded and given by the attractor’s metric. We still can’t predict the exact evolution, but we can indicate “trends”, “authorised evolutions”, “limits”. Of course, as there is a whole zoology of possible attractor families, the difficulty is to find THE right attractor and its metric.

    By considering those three possibilities, I observe that none applies to GCMs. Therefore Bender’s simple question, “Are those probability distributions you use real?”, stays unanswered.

    And to finish, I leave the N-S example and go over to the climate. Dan Hughes has already said the essential, so I will only say that the climate is much, MUCH more complex, non-linear and self-correlated than N-S, so the odds are great that it is a chaotic system governed by an infinity of phenomena on all time scales. Its parameters, with few exceptions (energy might be one), are then not predictable.

  625. Sam Urbinto
    Posted Mar 3, 2008 at 1:42 PM | Permalink

    Neal:

    The significance of global climate change can be meaningfully discussed in terms of delta-[T(t,x)], as an index.

    I understand you. I just don’t agree it’s a particularly meaningful index in the first place, so conclusions derived from it are questionable at best.

    But it’s all we have, so it’s what we use. Doesn’t make it appropriate for the purpose for which we’re using it. Doesn’t mean it’s not meaningful. So basically, no comment.

  626. Neal J. King
    Posted Mar 3, 2008 at 2:18 PM | Permalink

    #626, Sam Urbinto:

    Ah, but as I pointed out earlier: if delta-[T(t,x)] is positive, it indicates an increase in the summed-up internal energy of the relevant spatial expanse (surface of the world, or whatever); and if it is twice as big (for example), the corresponding internal-energy increase is more than twice as big.

    An increase in internal energy should be meaningful and have some kind of impact, yes? And if it is bigger, whatever impact it has should be bigger.

  627. Neal J. King
    Posted Mar 3, 2008 at 2:26 PM | Permalink

    #627, Andrew:

    But the purpose of the IPCC studies is not primarily to predict the future but to provide some input to planning for it. All they can do is to give projections based on stated scenarios. It is up to the rest of the world to decide which scenario will actually be followed. Perhaps more scenarios should be considered, not fewer; but unless the implications of the different scenarios are spelled out, there’s no input from climate science.

  628. Andrew
    Posted Mar 3, 2008 at 2:53 PM | Permalink

    But I get the sense that most people believe that the IPCC actually is set up to predict the future. If not, then this is a misconception that badly needs correcting!

  629. Neal J. King
    Posted Mar 3, 2008 at 4:08 PM | Permalink

    #629, Andrew:

    What can you do about such people? The IPCC AR4 runs some 20 scenarios: In the best possible case, under such an interpretation, 19 of them would have to be wrong.

    Obviously, that’s the wrong interpretation.

  630. Sam Urbinto
    Posted Mar 3, 2008 at 5:31 PM | Permalink

    If delta-[T(t,x)] is positive, it indicates an increase in the summed-up internal energy of the relevant spatial expanse (surface of the world, or whatever); and if it is twice as big (for example), the corresponding internal-energy increase is more than twice as big.

    No, it tells me that, overall, the readings are up or down for some period: air samplings as proxies for ground temperature, as an indication of what surface energy levels happened to be at those locations, and surface-water samplings as proxies for whatever the water energy levels happen to be. I think you don’t get that I am not working from the assumption that anyone’s conclusion is correct, which is why I say the anomaly is trending up. It is. What it represents is not something that’s been demonstrated true.

    Even if the readings are accurate (doubtful) and consistent over time as to what is measuring them (doubtful), all we are seeing is an increase at the spots at which we’re looking.

    Now, that might reflect an increase in energy levels overall, or it could be that we’re looking at point X, which rose, and not looking at point Y, where it happened to fall. I don’t think we can say it is some proven fact that we actually know what we’re looking at.

    Which is why I say the anomaly is rising, because it is. If you wish to take the anomaly itself as accurately and meaningfully representing a rise in temperature overall, from these point samples, at a certain location and altitude for land and to a certain depth for sea, as means, you’re perfectly free to do so, but it’s still correctly put as a rise or fall in the anomaly. I prefer to put out the fact and let the observer reach his own conclusion, just as I prefer to reach my own conclusion on what the anomaly may or may not mean.

    Or in other words, I reject your assumption that anomaly = temperature = energy levels.

    One question that I believe invalidates accepting the anomaly as the temperature change:

    Even if accurate, if I get a min of 10 and a max of 70 for an average of 40, what does that 40 F tell me of the energy levels for the day even in that one spot, much less however many tens or hundreds of kilometers it represents?
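
    (To make the question concrete, two made-up days with the same 40 F mean but very different hour-by-hour readings; the daily mean erases the distribution.)

        # made-up hourly temperatures, deg F
        day_a = [10] * 12 + [70] * 12            # swings between the extremes
        day_b = [40] * 24                        # flat all day
        print(sum(day_a) / 24, sum(day_b) / 24)  # both 40.0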

    If you can’t get past that question, there is no point in even asking the further questions about combining multiple ones of those into a month, multiple months into a grid, and multiple grids into the total. And that’s not even taking the seas into consideration.

    Which data would be the ERSST, constructed from the ICOADS SST data and “improved statistical methods that allow stable reconstruction using sparse data”:

    Regarding the ERSST:

    The extended reconstructed sea surface temperature (ERSST) was constructed using the most recently available International Comprehensive Ocean-Atmosphere Data Set (ICOADS) SST data and improved statistical methods that allow stable reconstruction using sparse data.

    ERSST.v3 is an improved extended reconstruction over version 2. Most of the improvements are justified by testing with simulated data. The major differences are caused by the improved low-frequency (LF) tuning of ERSST.v3 which reduces the SST anomaly damping before 1930 using the optimized parameters. Beginning in 1985, ERSST.v3 is also improved by explicitly including bias-adjusted satellite infrared data from AVHRR.

    Regarding ICOADS:

    The original COADS project, and the continuing US contribution toward the new international database, ICOADS, is the result of a cooperative project between the National Oceanic and Atmospheric Administration (NOAA)–specifically its Earth System Research Laboratory (ESRL), its National Climatic Data Center (NCDC), and the Cooperative Institute for Research in Environmental Sciences (CIRES, conducted jointly with the University of Colorado)–and the National Science Foundation’s National Center for Atmospheric Research (NCAR).

    ICOADS data are made available in two primary forms:

    Observations: Surface marine reports from ships, buoys, and other platform types. Each report contains individual observations of meteorological and oceanographic variables, such as sea surface and air temperatures, wind, pressure, humidity, and cloudiness.
    Monthly summary statistics: Ten statistics (such as the mean and median) are calculated for each of 22 observed and derived variables, using 2° latitude x 2° longitude boxes back to 1800 (and 1°x1° boxes since 1960).
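
    (A minimal sketch, with entirely fake reports, of the box-averaging those monthly summaries describe: scattered marine observations binned into 2° latitude x 2° longitude boxes and averaged per box.)

        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000
        lat = rng.uniform(-60.0, 60.0, n)     # fake report positions
        lon = rng.uniform(-180.0, 180.0, n)
        sst = 20.0 - 0.2 * np.abs(lat) + rng.standard_normal(n)  # fake SSTs (C)

        boxes = {}
        for i, j, t in zip(((lat + 90) // 2).astype(int),
                           ((lon + 180) // 2).astype(int), sst):
            boxes.setdefault((i, j), []).append(t)
        box_means = {k: float(np.mean(v)) for k, v in boxes.items()}
        print(len(box_means), "occupied 2x2 boxes out of", 90 * 180)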

    Which do you think is the conversation:

    Neal: “The ERSST V.3 is up this month.”
    Sam: “The ocean must be getting warmer.”

    or

    Neal: “The ocean is getting warmer.”
    Sam: “Why do you say that?”
    Neal: “ERSST V.3 is up.”
    Sam: “So?”

  631. Neal J. King
    Posted Mar 3, 2008 at 5:43 PM | Permalink

    #631, Sam Urbinto:

    I never said or implied:
    “anomaly = temperature = energy levels”

    Instead, what I said was equivalent to:
    average energy change
    roughly proportional to
    average temperature difference
    equal to
    difference in average temperature

  632. Sam Urbinto
    Posted Mar 3, 2008 at 6:19 PM | Permalink

    Neal: there is no way I’d argue that a temperature difference doesn’t show increased energy levels. I’m just, um, skeptical 🙂 that a temperature reflects anything except in an environment where we expect the same temperature throughout (a stove, freezer, terrarium, fish tank, tub of water or whatever at equilibrium).

    The entire Earth from air and water samples? No. A living room? Maybe. A bedroom? Probably.

    I am totally unconcerned that it is 64 F / 73% in Bogotá, Colombia, 52 F / 53% in Bogota, NJ and 41 F / 81% in Bogota, TN. Just like I don’t care that it’s -23 in Barrow, -6 in Fort Yukon, and 43 in Wrangell in Alaska, or 75 in Mexico, Philippines and 33 in Mexico, IN.

    1.2 F, so what. What are the error bars on that over 120 years, again?

    😀

  633. Neal J. King
    Posted Mar 3, 2008 at 6:30 PM | Permalink

    #632, me:

    Actually, I do see a hole: If the heat capacity is not roughly constant throughout, there will be a problem with the proportionality between average energy change and difference in average temperature:
    1) Given three lumps of matter, with heat capacities, temperatures and energies (C1,T1,E1), (C2,T2,E2), (C3,T3,E3)

    2) Under temperature changes dT1, dT2, dT3, the corresponding energy changes are dE1, dE2, dE3, with total energy change
    dE = dE1 + dE2 + dE3
    = C1*dT1 + C2*dT2 + C3*dT3

    3) The average temperature is [T] = (T1 + T2 + T3)/3, so the change in average temperature is d[T] = (dT1 + dT2 + dT3)/3

    4) If C1 = C2 = C3 = Ca, then dE = Ca*(dT1 + dT2 + dT3) = 3*Ca*d[T];
    so dE = 3*Ca*d[T]

    5) Or if dT1 = dT2 = dT3 = dTa, then dE = (C1 + C2 + C3)*dTa;
    so dE = (C1 + C2 + C3)*d[T]

    6) However, if the C’s are different and the dT’s are different, there will not be a general proportionality between dE and d[T].

    So that does reduce the significance of d[T].
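
    (Numerically, point 6 looks like this, with made-up heat capacities: two scenarios with the same change in average temperature d[T] give very different total energy changes dE.)

        import numpy as np

        C = np.array([1.0, 2.0, 10.0])            # arbitrary heat capacities
        for name, dT in [("warm lump 1", np.array([3.0, 0.0, 0.0])),
                         ("warm lump 3", np.array([0.0, 0.0, 3.0]))]:
            print(name, " d[T] =", dT.mean(), " dE =", float(C @ dT))
        # Both give d[T] = 1.0, but dE = 3.0 versus dE = 30.0.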

  634. Neal J. King
    Posted Mar 3, 2008 at 6:38 PM | Permalink

    #633, Sam Urbinto:

    I’m afraid I don’t understand how you can not care.

    If everywhere in the world the temperature were to go up by 1 degree; and 5 years later this were to repeat; and 5 years later, again; so that after 25 years the temperature were 5 degrees higher (the same difference from today as we are now from the middle of the last ice age); I think it would be extremely solipsistic of you to assume that this didn’t matter, even if it still turned out that it were colder in Antarctica than in Ecuador.

  635. Neal J. King
    Posted Mar 3, 2008 at 6:39 PM | Permalink

    I am giving an extreme and unrealistic example, of course.

  636. Andrew
    Posted Mar 3, 2008 at 9:45 PM | Permalink

    Suppose it were just some parts of the globe warming a lot (say, ten percent of it warming ten times as much; an admittedly extreme example) and most warming little or even cooling. Obviously not what’s happening, but in this hypothetical scenario, I don’t see why the mean matters. It’s impersonal anyway.

  637. Neal J. King
    Posted Mar 4, 2008 at 12:48 AM | Permalink

    #637, Andrew:

    – The mean indicates a change. It’s an index.
    – A big change in a small area can have a large impact worldwide. Possible case: Antarctica. A concern I have seen just recently is that, should the Antarctic Ocean’s temperature rise a degree or two more, sharks and crabs would be able to invade. If they do, they are likely to have a tremendous impact on the current species, which are not at all adapted to such predators. See, for example, http://news.bbc.co.uk/2/hi/science/nature/7248025.stm .

  638. Posted Mar 4, 2008 at 1:31 AM | Permalink

    Jerry,

    For numerical examples, Shen et al. use U.K. data sets that are on grids of 5 degrees latitude by 5 degrees longitude. Without looking at the references for what the U.K. data means, I would assume that they are so-called reanalysis-type data, i.e. a combination of observational data and model information used to interpolate irregularly spaced observational data to a regular grid.

    Jones 86 seems to be the source. And they assume it is OK to use gridded data:

    Starting with such a gridded dataset entails some error being introduced into our procedure from the outset, since smoothing has already been applied in putting the data into gridded form. In the present application we consider this to be unimportant. We simply are considering the U.K. data to be an example of a dataset with nearly the same statistical properties as the real temperature anomalies.

    Later, in the conclusions, it is noted that the Hansen-Lebedeff data gives a different result, doh..

    Interesting paper anyway, and maybe worth a thread at some point. My question: do the paper’s results mean that we can find those 60 stations only post hoc (i.e. useless)? Or can they point me now to 60 stations that will produce global averages for the next 10 years with accuracy better than 0.025 K? IOW, is the temporally smoothed covariance matrix stationary? That would help with lucia’s falsification work; we could just monitor 60 stations instead of the full network.

  639. Tom Vonk
    Posted Mar 4, 2008 at 3:49 AM | Permalink

    6) However, if the C’s are different and the dT’s are different, there will not be a general proportionality between dE and d[T].

    So that does reduce the significance of d[T].

    What reduces it further is that dE = dm·L for a phase change; it doesn’t even depend on temperature. There is also, of course, the vibrational term dE = d[N·(Th/2 + Th/(exp(Th/T) - 1))] (in units of k, with Th the characteristic vibrational temperature), which is not proportional to dT. The proportionality is generally not valid locally, so making global averages only makes a wrong assumption wronger.

  640. Neal J. King
    Posted Mar 4, 2008 at 5:01 AM | Permalink

    #640, Tom Vonk:

    – Your first point was already taken into account explicitly in #595; also subsequently, when I mentioned that this expression is an under-estimate.

    – So long as one is using the Taylor-series approximation:
    dE = (dE/dT) * dT + (higher-order terms) ~ C*dT
    (as is explicitly being done when we bother to talk about heat capacity), Boltzmann-factor population effects are automatically included. I don’t quite see why you are bringing in a Planck-spectrum-like factor: I’m not trying to include the heat capacity of radiation. It would be clearer if you would define your notation.

    On the other hand, if the measurements are all taken in air (rather than including stone, water, etc.) the approximation that the heat capacities relevant to each measurement are the same should not be so far off, being proportional to the local number density. This will vary with altitude, so higher altitudes will be disproportionately represented. But an index is just intended to indicate, not to define.

  641. Tom Vonk
    Posted Mar 4, 2008 at 7:43 AM | Permalink

    Well, if you eliminate all terms that are not proportional to dT, then it is a tautology to say that something is proportional to dT. In non-linear systems reacting extremely sensitively to small perturbations, showing that a linear approximation is legitimate is quite another cup of tea, and generally it is not legitimate. It is precisely because of that that chaotic systems exist. The second example of my previous post was the contribution of the vibrational energy to the average energy of molecules, because temperature only measures their average kinetic energy.

    What you actually say is that if we have some function f(T) = A·T, with A constant and T some C1 function of x,y,z over some volume V, then the average of f over V is A times the average of T over V. This is trivial for every V where f is linear, and wrong for every V where f is not linear. In practice it means that if we take for V an ideal-gas layer a couple of metres thick and have temperature measurements of this layer, then its internal energy will vary proportionally to its temperature (both locally and on average). That of course doesn’t mean the same holds for mixtures of non-ideal gases with phase changes over a V with thickness scales of km. Not that we have temperature measurements on scales of km anyway.

  642. Andrew
    Posted Mar 4, 2008 at 7:50 AM | Permalink

    I don’t know how likely it is that the Antarctic Ocean will suddenly start behaving differently than it has, but I see your point.

  643. Neal J. King
    Posted Mar 4, 2008 at 8:02 AM | Permalink

    #642, Tom Vonk:

    No, the point is that this kind of approximation is used, and legitimately, whenever people talk about the heat capacity of gases. It is not always necessary, or useful, to worry about the change in the occupancy of the vibrational states of the constituent molecules. Cv = (dU/dT) at constant volume: it already includes that in the definition. It is absolutely not sensible to try to apply statistical mechanics when you can so much more easily use thermodynamics: It’s like applying a laser when you need a flashlight.

  644. Neal J. King
    Posted Mar 4, 2008 at 8:16 AM | Permalink

    #643, Andrew:

    The concern for the Antarctic is that an increase in the water temperature there has been observed. If this keeps going, the crab/shark tipping point could be reached: biological systems are famously nonlinear.

    I don’t know enough about marine biology to draw definite conclusions about the implications for the rest of the oceans. Obviously, there has been some isolation due to temperature (or the crabs and sharks would already be there), but I don’t know if there are other isolating factors. I guess some penguins go in and out of the Antarctic Ocean: if it undergoes some kind of collapse, it would likely affect them. An analogy: How many other trees will a falling sequoia take down with it?

    From my point of view, these consequences are far more significant than absolute amounts of sea-level rise.

  645. Mike B
    Posted Mar 4, 2008 at 8:49 AM | Permalink

    But that still is not a proof that the sparse observational system is sufficient to accurately approximate the necessary integrals for the mean temperature, especially at the surface. Are there any other suggestions?

    Jerry

    Rather than getting dragged into a classic “bring me a rock” exercise, what exactly do you have in mind, Jerry?

  646. Tom Vonk
    Posted Mar 4, 2008 at 9:33 AM | Permalink

    It is not always necessary, or useful, to worry about the change in the occupancy of the vibrational states of the constituent molecules. Cv = (dU/dT) at constant volume: it already includes that in the definition.

    Partial derivatives. That doesn’t mean that dU = Cv·dT. When people talk about heat capacity, they talk only about a part of the gases’ internal energy. The point is that what you wrote is trivial for a linear relationship depending only on temperature, and wrong for other cases. So I guess nobody sees what you are trying to say, when U of the real atmosphere is neither proportional to T nor dependent only on T, and the goal is to see whether U of the whole Earth/atmosphere system is somehow related to surface temperature averages.

  647. Neal J. King
    Posted Mar 4, 2008 at 10:03 AM | Permalink

    #647, Tom Vonk:

    As I made clear originally, the point of the index is to indicate what happens with incremental changes. It’s rough – but it can be useful, when dealing with a complex situation.

    Partial derivatives are often used in that way. If you haven’t seen it done before, well, what can I say? The really good physicists that I knew applied anything that came to hand. They didn’t worry unduly about questions of rigor or overly-fine detail, until they had an idea of how things were coming out.

  648. Andrew
    Posted Mar 4, 2008 at 10:28 AM | Permalink

    Ah, I think I get it now. Seems to me, however, that if (I can’t say whether this would happen) the ocean circulation patterns around Antarctica change as a result (or, perhaps, a cause) of temperature shifts, Antarctica’s ocean could experience changes completely different from those you are anticipating (I don’t claim they necessarily will, but I hope they do, since the consequences of the ongoing behavior you describe would be bad, obviously). I don’t know, though: why is change always good for bad species and bad for good species? Seems fishy to me.

  649. Andrew
    Posted Mar 4, 2008 at 10:33 AM | Permalink

    Speaking of sharks in Antarctica:
    http://www.worldclimatereport.com/index.php/2008/02/20/more-bad-for-good-and-good-for-bad/#more-309

    😛

    Incidentally (and it’s not that I doubt it (yet)), how does sea ice expand if water temperatures rise? I must be missing some important physics here… Perhaps them ol’ ocean cycles?

  650. Posted Mar 4, 2008 at 10:35 AM | Permalink

    re: #648

    Partial derivatives are often used in that way. If you haven’t seen it done before, well, what can I say?

    LOL.

    Neal, I suggest you recall/review who you are having a discussion with (or, more nearly correctly, with whom you are having a discussion).

    BTW, Neal, does thermodynamics allow a decrease in temperature of a material while energy content is increasing?

    OIC now,

    They didn’t worry unduly about questions of rigor or overly-fine detail, …

    And in #641:

    – So long as one is using the Taylor-series approximation:
    dE = (dE/dT) * dT + (higher-order terms) ~ C*dT

    Will you point me to the thermodynamics textbooks in which the ‘development’ of the concept of specific heat, or any other thermophysical property of materials, employs such ‘approximations’? How about S = S(U,V)?

  651. Neal J. King
    Posted Mar 4, 2008 at 11:01 AM | Permalink

    #651, Dan Hughes:

    a) I’m not a respecter of persons: I call it as I see it. If someone wants to miss the point, all I can do is point out that he’s missing the point. If someone wants to get upset about not-crossing t’s, he’s entitled to do that, but I’m not going to waste much time dealing with it. I’ve had interesting bi-directional discussions on physics with folks who are completely out of this league, so I’ve come to some conclusions about how to tell when a discussion is going somewhere and when it’s not.

    b)
    – Heat Capacity at constant volume = (dU/dT) at constant V

    – Heat Capacity at constant pressure = (dH/dT) at constant P, with H the enthalpy

    The equation you ask about follows from freshman calculus.

  652. Sam Urbinto
    Posted Mar 4, 2008 at 11:25 AM | Permalink

    Neal:

    I didn’t say I didn’t care; I said I was unconcerned. Why? Picking a weather station in a town that tells me it’s X degrees gets me what, exactly, even if I track it over time?

    Even then, we’re talking about 1.2 F over 125 years, using different measurement equipment and methods. Seems more than likely that’s within the margin of error of the devices and methods and time frame involved.

    I understand your example of 1 degree everywhere every 5 years, but how do you measure everywhere at once, constantly, to track it? Looking at one place at one height every hour for every 100 square kilometers and getting min and max for the day hardly does that.

    So, point X had an anomaly of +0.5 or -0.5 or 0 for month Y. What does that tell me about the distribution of daily means? What does a daily mean of Z tell me about the distribution of readings, picked from the highest and lowest of 24 (hourly) or even 144 (10-minute) readings, even assuming it’s accurate and unbiased by anything? Or adjusted properly by people who don’t even know the status of the station or its environment? Even better, what does that reading at 1.5 meters tell me about the 100 square kilometer area it’s in, or the mean of each of those in a 5×5 grid?

    If you can’t take a simultaneous reading of the infinite points from the ground to the top of the stratosphere at every 100th of a second of lat/long, all I get is an idea of what energy levels might be. My own extreme and unrealistic example. 🙂

    I think you got it with “If the heat capacity is not roughly constant throughout, there will be a problem with the proportionality between average energy change and difference in average temperature.” Well, as Tom said “The proportionality is generally not valid locally so making global averages only makes a wrong assumption wronger.”

    The Antarctic Ocean rising a degree or two seems far-fetched, even if we know what the surface is doing to some extent (or not). What about the other variables involved? What about the water going in and out of the area? It’s not like this is some static swimming pool or something. In any case, why would we be able to do anything to stop that 1-2 degree rise even if it is in the works, or know we wouldn’t mess something else up trying to do something? And if new predators move in, well, it’s not like it hasn’t happened before to a species. Heck, it has happened to all species; humans!

    “But an index is just intended to indicate, not to define.” If it’s not defining anything, why trust it?

    As a practical matter, take for example San Francisco, California. Right now I’m looking at a few blocks near downtown (around where 101/280/80 meet) and I have 10 stations, ranging in temp from 47 to 64 F.

    Castro/San Francisco KPCASANF2 47.3° F
    China Basin Landing KCASANFR14 54.7° F
    Eureka Valley KCASANFR21 62.1° F
    Jewish Community HS/Western Addition KCASANFR57 64.0° F
    Mission Bay KCASANFR53 53.4° F
    Mission: Even the weather is hip KCASANFR79 54.1° F
    Noe Valley KCASANFR3 47.0° F
    North Mission (Valencia & Market) KCASANFR49 49.8° F
    Potrero Hill – High Above Hospital Curve KCASANFR35 51.0° F
    SOMA – Near Van Ness KCASANFR58 52.1° F

    http://www.wunderground.com/stationmaps/gmap.asp?lat=37.76644&lon=-122.41542&zoom=14

    Which one do I pick to derive the anomaly? What does it mean to me that the average is 53.6 F for that small section?
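
    (The ten readings above, averaged; the mean hides a 17 F spread across a few city blocks.)

        temps_f = [47.3, 54.7, 62.1, 64.0, 53.4, 54.1, 47.0, 49.8, 51.0, 52.1]
        print(sum(temps_f) / len(temps_f))    # 53.55
        print(max(temps_f) - min(temps_f))    # 17.0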

    I think you’re putting too much weight on the anomaly as a proxy for energy levels on even a scale such as the above, much less over the entire globe.

    So, as I said, the anomaly is rising. Attach what significance to that as you wish.

  653. Kenneth Fritsch
    Posted Mar 4, 2008 at 11:26 AM | Permalink

    Re: http://www.climateaudit.org/?p=2708 and Neal J. King linked article:

    In the last 50 years, sea surface temperatures around Antarctica have risen by 1 to 2C, which is more than twice the global average.

    In the interest of keeping these claims in perspective of the details (where the devil to be exorcised lies in wait), I was wondering whether the claim in the excerpted comment above, and that for the plight of the bottom-dwelling animals in question, was confined to the sea around the Antarctic Peninsula. The sea in that area has warmed considerably more than it has in other areas around Antarctica, from the maps I have seen.

    uiuc.edu/ANTARCTIC/TRENDS/trends.1958-2002.html

    The sea ice extent would indicate cooling in those areas around Antarctica since the late 1970s (with the exception of the Peninsula). Or are there 50-year historic measures of temperatures at the bottom of the sea where these animals live that might explain what the article was talking about — specifically? I am getting that fuzzy image again.

  654. Sam Urbinto
    Posted Mar 4, 2008 at 11:29 AM | Permalink

    That should have been: the anomaly trend is rising. The anomaly itself is a single monthly or yearly offset, some amount above or below (or at) the base period, and it rises and falls.

  655. Sam Urbinto
    Posted Mar 4, 2008 at 11:46 AM | Permalink

    Regardless of what the anomaly means or tells us, isn’t the question whether the average of the surface can tell us what the non-surface is doing?

  656. Neal J. King
    Posted Mar 4, 2008 at 12:00 PM | Permalink

    #653, Sam Urbinto:

    An important point you’re missing: Mathematically, the only real difference between the global mean anomaly and the difference in global average temperature field (to be clear) is whether you take the average before the delta or after the delta. If before the delta, you get delta-GAT; if after the delta, GMA.
    – Mathematically, the numbers are the same.
    – In practice, procedurally, do you know anybody who goes around trying to measure/calculate the GAT (un-delta-ed)? I doubt anybody does it: It would involve a lot of cross-continental/cross-methodological calibration of temperature measures. And to what end? So that you could make sure that it gets properly put into the current measure and properly subtracted out in the constant.
    – So we are literally just talking about what you want to call it: delta-GAT or GMA.

  657. Neal J. King
    Posted Mar 4, 2008 at 12:06 PM | Permalink

    #656, Sam Urbinto:

    If the surface is considered representative, the answer is “yes”.

    If the surface is not representative, the answer is “no”, unless you can find some clever way to meaningfully extrapolate.

    If you need to make progress, then the answer is to try to become more clever.

  658. Posted Mar 4, 2008 at 1:16 PM | Permalink

    re: #652

    Neal, you did not respond to a single technical issue. It’s clear who is not capable of dotting i’s and crossing t’s.

  659. Neal J. King
    Posted Mar 4, 2008 at 1:35 PM | Permalink

    #659, Dan Hughes:

    a) “Neal I suggest you recall/review who you are having a discussion with (or more nearly correctly, with whom you are having a discussion).”

    No technical issue here.

    b) “BTW, Neal, does thermodynamics allow a decrease in temperature of a material while energy content is increasing?”

    If you go back and look at the thread, this is so many left-turns from what we have been talking about that I really didn’t want to try to fix it. We are talking about whether it makes sense to even talk about a delta-[T(t,x)], not what the specific behavior is; plus, I stated at the beginning of the discussion that there is no such thing as a “global temperature”. Frankly, I thought it was kinder to let your question die, as an irrelevance.

    c) “Will you point me to the thermodynamics textbooks in which the ‘development’ of the concept of specific heat, or any other thermophysical property of materials, employs such ‘approximations’. How about S=S(U,V)?”

    I defined the heat capacity for you. This is basic freshman/sophomore physics; I don’t know why you need a textbook for it: any basic thermo book that uses calculus will do. If you need a name, try Fermi’s little book on Thermodynamics.

  660. Sam Urbinto
    Posted Mar 4, 2008 at 2:04 PM | Permalink

    “Regardless of what the anomaly means or tells us, isn’t the question can the average of the surface tell us what the non-surface is doing?”

    Actually that was a rhetorical question about the surface of the seas around Antarctica.

    is whether you take the average before the delta or after the delta.

    I’m not talking about how or when it’s averaged. I’m saying the number is very possibly meaningless on a physical level, because it’s air samples, and I don’t see them as representative of the energy levels of the Earth’s surface area. At best, it’s an assumption that they are. I refuse to work with them on that basis or call them the temperature.

    Or in other words, if there’s no global temperature, how can the anomaly represent something that doesn’t exist?

    How do you think most people parse calling it the temperature? They infer the number represents actual temperature! They don’t know it’s samples, put into a mean for the day, put into a mean for the month and then compared to some base period to come up with an offset. And then the rest of the process.

    Is the anomaly trend reflecting physical warming, that is, a rise in energy levels? Maybe. Maybe not.

    It’s not the temperature. Why do you think it gets spoken of in terms of warming and cooling in the media and popular culture? Think maybe that’s a purposeful implication designed to foster a certain mindset and get people thinking it’s warming, so they believe things like AIT?

    All I’m asking is for a little more exactness when discussing what we call the anomaly, is that so wrong?

  661. Posted Mar 4, 2008 at 3:08 PM | Permalink

    re:#660

    You’re still dodging, Neal.

    The ‘Taylor-series approximation’ has absolutely nothing whatsoever to do with defining any thermophysical property of materials. You should have admitted that. Instead you throw around absolutely nonsensical comments about freshman calculus and physics. Show me where in ‘Fermi’s little book on Thermodynamics’, or in any ‘basic freshman/sophomore physics’ text, any approximations are employed to obtain the defining equations for the thermophysical properties of materials. Are you saying that these fundamental thermodynamic considerations are based on a series approximation?? That is wrong. That would be saying, for example, that the thermophysical properties are approximations, or at least that thermophysical properties are obtained by use of approximate equations. Is this the freshman/sophomore physics and thermodynamics that you use?

    All state properties of materials are based on derivatives of the fundamental relationship for the material. First derivatives give the state properties, and the thermophysical properties are derivatives of the state properties; second derivatives of the fundamental relation are related to the stability of thermodynamic phenomena and processes under local thermodynamic equilibrium.

    No ‘Taylor-series approximation’ required. None! You have not defined anything for me. Instead you insist on hand-waving attempts to divert attention from the technical issues. And if this is the best you have to offer, you cannot define anything for me.

  662. Neal J. King
    Posted Mar 4, 2008 at 3:09 PM | Permalink

    #661, Sam Urbinto:

    If the anomaly were increasing everywhere, would that indicate something to you?

    If it were increasing everywhere except 1 square kilometer, would that indicate something?

    If it were increasing over half the earth’s surface, and stable over half, would that indicate something?

    At what point does it cease to indicate anything to you?

  663. Neal J. King
    Posted Mar 4, 2008 at 3:58 PM | Permalink

    #662, Dan Hughes:

    In #641, I said:

    – So long as one is using the Taylor-series approximation:
    dE = (dE/dT) * dT + (higher-order terms) ~ C*dT

    So, looking at Fermi’s book:

    Cv = (dQ/dT)v = (dU/dT)v ……….(25)

    So when we calculate a finite energy change using a finite temperature change, we can use Taylor’s theorem, which I borrow from the eponymous Wikipedia article:

    f(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2 + …

    Substituting in: let f = U, x = T, a = To:

    U(T) = U(To) + U'(To)(T-To) + U''(To)(T-To)^2/2 + …
    = U(To) + Cv(To)(T-To) + Cv'(To)(T-To)^2/2 + …

    or
    dU = U(T) – U(To)
    = Cv(To)(T-To) + Cv'(To)(T-To)^2/2 + …
    = Cv(To) dT + Cv'(To) dT^2/2 + …
    = Cv(To) dT + (higher-order terms)

    which bottom-lines to:

    dU ~ Cv(To) dT

    and thus does not need anything related to change in the occupation of the vibrational states of diatomic molecules – which was the point. (I notice on second reading that Tom Vonk stated in #642, “The second example of my previous post [#640] was the contribution of the vibrationnal energy to the average energy of molecules because temperature is only measuring their average kinetic energy” (emphasis added); and I have to point out that the statement in boldface is absolutely false: if all the dynamical degrees of freedom are not contributing to the full extent allowed by quantum mechanics, then these molecules are not in a well-defined state of thermal equilibrium and do not have a well-defined temperature: the opposite of a temperature inversion.) This reinforces my original point that the heat capacity takes into account the entire energy change associated with a differential increase in temperature.
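
    A quick numerical check of the first-order claim (my own sketch, not anything from Fermi): take a U(T) with the same Einstein-oscillator shape as the quoted expression, with an assumed characteristic temperature Th = 1000 K, and compare the exact dU against Cv(To)*dT:

    import numpy as np

    Th = 1000.0                                  # assumed characteristic temperature, K
    def U(T):                                    # Einstein-oscillator-shaped U(T), in units of k_B*K
        return Th/2 + Th/(np.exp(Th/T) - 1.0)
    def Cv(T, h=1e-3):                           # numerical dU/dT
        return (U(T + h) - U(T - h)) / (2*h)

    To = 288.0
    for dT in (0.1, 1.0, 10.0):
        exact = U(To + dT) - U(To)
        approx = Cv(To) * dT                     # the first-order Taylor term
        print(dT, exact, approx, abs(exact - approx))

    The discrepancy shrinks quadratically as dT shrinks, which is exactly what “+ (higher-order terms)” asserts.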

  664. Gerald Browning
    Posted Mar 4, 2008 at 4:05 PM | Permalink

    UC (#639),

    The authors make a number of assumptions that they just pass off as being no issue. This is very typical in many manuscripts. But usually it is those exact issues that are critical (or issues that the authors fail to mention or address). If in fact the issues I have mentioned are a problem and the sparse network cannot accurately determine a global mean, then that needs to be known, as it is the basis for much of the warming argument.

    Jerry

  665. Sam Urbinto
    Posted Mar 4, 2008 at 6:46 PM | Permalink

    Neal:

    The daily mean isn’t even weighted, nor are (were) most readings meant to track climate, come on now.

    “At what point does it cease to indicate anything to you?”

    At the point a non-quality controlled sample at one location 1.5 meters up is taken to represent an entire area. Even if the areas weren’t variable.

    I was going to say the point min/max are averaged into a simple mean for the day, but really before even that.

    So my answer is “Immediately: I doubt its worth at any given location, even before it’s processed into a daily mean in the first place, because it assumes too much about the actual reading itself and what it represents.”

    Unreasonable?

  666. Willis Eschenbach
    Posted Mar 4, 2008 at 7:12 PM | Permalink

    Jerry and Neal and Sam and UC, please correct me if I am wrong, but isn’t the question you are discussing whether or not the air temperature is a good metric for the thermal energy of the entire planet?

    If that is in fact the question, I would answer with a resounding “no”, because the thermal mass of the air is so tiny. The total mass of the atmosphere is only 0.004 that of the ocean, and its specific heat is lower as well.
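
    To put rough numbers on that, using textbook round values (not anything computed from the records under discussion):

    m_atm = 5.1e18            # kg, mass of the atmosphere (standard round number)
    m_ocean = 1.4e21          # kg, mass of the oceans (standard round number)
    cp_air = 1.0e3            # J/(kg K), specific heat of air
    cp_sea = 4.0e3            # J/(kg K), specific heat of seawater

    C_atm = m_atm * cp_air
    C_ocean = m_ocean * cp_sea
    print(m_atm / m_ocean)                # ~0.004, the mass ratio above
    print(C_atm / (C_atm + C_ocean))      # ~0.001: air holds ~0.1% of the combined heat capacity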

    For example, in 1998, air temperature anomalies all over the world went very high. The usual explanation of this is “El Niño”, but the El Niño is not a source of energy. The only thing that happened is that energy was transferred from the ocean to the atmosphere. No new energy was added to the system by the El Niño; it was only redistributed.

    Which is a clear response to Neal’s question, viz:

    If the anomaly were increasing everywhere, would that indicate something to you?

    Yes, it would indicate the anomaly is increasing. But, as the everywhere-increasing anomaly of 1998 shows, this does not necessarily mean that the energy of the planet is increasing. All that may be happening is that some of the other 99.6% of the energy, that energy which is contained in the ocean, has moved to the atmosphere.

    In other words, the fact that the anomaly is increasing may indicate … absolutely nothing.

    w.

  667. jae
    Posted Mar 4, 2008 at 8:48 PM | Permalink

    667: Willis: Brilliant post, IMHO. It’s all about energy storage and release, not temperature. If we ever figure out what drives that mechanism (or those mechanisms), we will finally understand climate.

  668. Raven
    Posted Mar 4, 2008 at 9:21 PM | Permalink

    Everyone seems to accept that CO2 out-gassing from the oceans can take up to 1000 years after the warming starts.

    Everyone seems to accept that ocean currents take 1000 years to move heat around the world.

    Then why do so many people insist that the warming today must be 100% explained by CO2 and nothing can be attributed to the release of heat sequestered into the oceans 1000+ years ago?

  669. Pat Keating
    Posted Mar 4, 2008 at 9:47 PM | Permalink

    669 Raven
    ….during the Medieval Warming Period.

  670. Gerald Browning
    Posted Mar 4, 2008 at 10:07 PM | Permalink

    Hi Willis (#667),

    My comments were only to question if the mean temperature could even be determined from the sparse observational network, not if temperature was the right variable to be used to determine global cooling or warming.

    There are so many issues in this area that have not been satisfactorily answered that it is no wonder there are many of us who are a bit skeptical. 🙂

    Jerry

  671. Willis Eschenbach
    Posted Mar 4, 2008 at 10:41 PM | Permalink

    Jerry, many thanks. I was trying to get a handle on the underlying question, and as you point out, there’s definitely more than one.

    All the best,

    w.

  672. Neal J. King
    Posted Mar 5, 2008 at 3:17 AM | Permalink

    #664, me:
    “the opposite of a temperature inversion” => “the opposite of a population inversion”

  673. Scott-in-WA
    Posted Mar 5, 2008 at 4:15 AM | Permalink

    (1) What climate-affecting mechanisms cause the oceans to gain and lose energy in the form of heat and/or mass motion?

    (2) What mechanisms can cause a conversion of energy within the oceans from mass in motion to heat and back again?

    (3) Are questions 1 and 2 in any way useful and relevant in demonstrating that 2xCO2 yields 2.5C global average atmospheric warming?

  674. Tom Vonk
    Posted Mar 5, 2008 at 7:49 AM | Permalink

    Dan Hughes # 662

    You’re still dodging, Neal.

    It is either that or he didn’t get the point .
    He writes dU = Cv.dT and thinks that we have a problem with trivial Taylor expansions while what you meant was that
    dU = DU/DS .dS + DU/DV . dV with D being the partial derivative .
    Of course it is also possible to write dU = DU/DT .dT + DU/DV . dV .
    Of course it is also possible to say that DU/DV = 0 and DU/DT constant .
    The problem is that he dodges the justification of the first assumption and says D2U/DT2 = 0 (or epsilon) for the second .

    To all I already said

    What you actually say is that if we have some function f(T) = A.T with A constant and T some other C1 function of x,y,z over some volume V then the average of f over V is A . average of T over V . This is trivial for every V where f is linear and wrong for every V where f is not linear .

    what he of course dodged too .

    Neal # 652

    – Heat Capacity at constant pressure = (dU/dT) at constant P

    The equation you ask about follows from freshman calculus.

    No Neal . Heat capacity at constant pressure = DH/DT . Different function even for freshmen .

    if all the dynamical degrees of freedom are not contributing to the full extent allowed by quantum mechanics, then these molecules are not in a well-defined state of thermal equilibrium and do not have a well-defined temperature:

    No Neal . I am not sure that you understood yourself what you were writing . The right formulation is “The equipartition theorem doesn’t hold at every temperature for gases . Follows that vibrational energy must be treated separately for temperatures of interest here . This has nothing to do with equilibriums or quantum states populations and temperatures are perfectly defined .”
    You still didn’t understand what I was saying . I never said that vibrationnal energy doesn’t contribute to U . Of course it does as well as to its derivatives so to Cv but it doesn’t contribute linearly .

    Now my and I believe Dan Hughes’ purpose is not to teach you things about PDEs , which would be boring for everybody anyway .
    By building strawmen and dodging arguments you already forgot that you made a claim according to which the average surface temperature of the Earth was linearly giving the internal energy of the Earth .
    And that is obviously an extraordinary claim that is interesting for everybody .
    Willis is of course very right by saying that the surface temperature is irrelevant to the energy state of the Earth .
    But even by restricting the claim only to the atmosphere it still stays wrong .
    You may write dU = DU/DT .dT + DU/DV . dV for an infinitely small volume and make all Taylor expansions you want , once you begin to integrate your local linear approximations don’t hold anymore and you will get no linear relation between averages .

  675. Posted Mar 5, 2008 at 9:55 AM | Permalink

    Willis,

    Jerry and Neal and Sam and UC, please correct me if I am wrong, but isn’t the question you are discussing whether or not the air temperature is a good metric for the thermal energy of the entire planet?

    I’m mostly chatting about accuracy of 60 station average wrt ideal average. (60 grid-cells are another topic, BTW)

    If that is in fact the question, I would answer with a resounding “no”, because the thermal mass of the air is so tiny. The total mass of the atmosphere is only 0.004 that of the ocean, and the heat capacity is less as well.

    This is a good point to remember when bucket adjustments are discussed. Energy-wise, they are the big thing.

    For example, in 1998, air temperature anomalies all over the world went very high. The usual explanation of the is “El Niño”, but the El Niño is not a source of energy. The only thing that happened is that energy was transferred from the ocean to the atmosphere. No new energy was added to the system by the El Niño, it was only redistributed.

    Good point. But if the system consists of the surface of the Earth only, then heat transfer from the deep sea is an external forcing, an exogenous change? Or, if one says that the seas are included in the surface sample, then we have an extremely sparse sample of the whole system, and redistribution of energy will look like a change in energy.

  676. Stan Palmer
    Posted Mar 5, 2008 at 10:19 AM | Permalink

    re 669

    Then why do so many people insist that the warming today must be 100% explained by CO2 and nothing can be attributed to the release of heat sequestered into the oceans 1000+ years ago?

    Because there was a major cooling event in between??

  677. Sam Urbinto
    Posted Mar 5, 2008 at 10:20 AM | Permalink

    Neal, Tom, Willis, Jerry, UC:

    Yes, more than one issue here. Certainly, what the mean of the 2×2 (or 1×1, or 5×5) means of the surface temperature of the water tells us is one; what the mean of the 5×5 means of various point samples tells us is another. Then another is what they tell us versus each other. But we don’t need to discuss them all at once! 🙂 Or even at all; so let’s just dig down into the ground surface temperature issue.

    I believe the hypothesis is that at 5 feet you get what the ground is doing, but without the reading being directly influenced by the ground immediately under the sensor. (What is that, the “well-mixed boundary layer”?) So you supposedly get land surface temperatures. Fine. But then what? That point is supposed to tell us something about the area up to hundreds of kilometers around it? I don’t buy that.

    How many meters in radius does an air sample represent? 100? 1000? 50? 2? (I find it interesting that some of the arguments about the value of checking the sitings dismiss the effects of air conditioners or sprinklers or buildings 1 or 2 or 10 feet away from the sensor as not affecting the readings, and yet the sensor is supposed to be a proxy covering the land for up to hundreds of kilometers.)

    But forget even that. I don’t care macro, I care micro. Two scenarios/issues. Start with the monthly anomaly on a 0.1 km x 0.1 km basis.

    I take a 100 x 100 meter area and put a calibrated, accurate, bias-free sensor (ahem) in at one corner, in a nice little climate-friendly spot. The monthly mean, from t_mean for the days, is 60 F. The base period was 59.5 F, so my anomaly is +.5. Or is it? Consider the opposite corner. What if we had put the sensor there instead and gotten a monthly mean of 59 F and an anomaly of -.5? How about if we had both, for an anomaly of 0?

    So even in a small area, the monthly anomaly alone is possibly/probably meaningless. But dig even deeper, to the days.

    I have a mean for the day of 59.75 F. So what? Were the min and max 38.3/81.2, or were they 40.5/79, or something else? Which one is more energy, the one with the higher max or the one with the smaller range? Then, what if for the first one the bulk of the day was around 50 and the second one was mostly over 65 (or vice versa)? Same mean, very different days. We don’t know any of that.
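
    Here is a made-up pair of hourly records showing exactly this: same min, same max, hence the same 59.75 F “daily mean”, but very different time-weighted means:

    import numpy as np

    hours = np.arange(24)
    day1 = np.where(hours == 14, 81.2, 50.0)    # mostly ~50 F, brief afternoon spike
    day1[4] = 38.3                              # overnight minimum
    day2 = np.where(hours == 4, 38.3, 67.0)     # mostly ~67 F, brief overnight dip
    day2[14] = 81.2                             # afternoon maximum

    for d in (day1, day2):
        print((d.min() + d.max()) / 2, d.mean())    # 59.75 both times; true means ~50.8 vs ~66.4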

    So sparse network or not, thermal mass of air compared to ocean or not, and surface temperature compared to energy state or not, the daily means themselves are immaterial. We don’t even need to get to the monthly anomaly and its issues, much less up any higher.

    We simply have more questions and more issues than we have answers.

    So it’s not the temperature, not energy levels, not even anything physical at all. It’s the anomaly. Remember, the anomaly (“global mean temperature anomaly”) is weighted evenly between land and sea; if the land is meaningless, so becomes the sea.

    As far as the SSTs themselves, I’m not so convinced they tell us anything either.

    Nobody knows.

    ————————

    ERSST.v3 is an improved extended reconstruction over version 2. Most of the improvements are justified by testing with simulated data. The major differences are caused by the improved low-frequency (LF) tuning of ERSST.v3 which reduces the SST anomaly damping before 1930 using the optimized parameters. Beginning in 1985, ERSST.v3 is also improved by explicitly including bias-adjusted satellite infrared data from AVHRR.

    ICOADS Data
    The total period of record is currently 1784-May 2007 (Release 2.4), such that the observations and products are drawn from two separate archives (Project Status). ICOADS is supplemented by NCEP Real-time data (1991-date; limited products, not fully consistent with ICOADS).
    Observations
    The observational records (surface marine reports from ships, buoys, and other platform types) are all available online. This set of basic data may be used to develop products to fit specific research needs.

    As the result of a US project starting in 1981, available global surface marine data from the late 18th century to date have been assembled, quality controlled, and made widely available to the international research community in products of the Comprehensive Ocean-Atmosphere Data Set (COADS). A new name, International COADS (ICOADS), was agreed in 2002 to recognize the multinational input to the blended observational database and other benefits gained from extensive international collaboration, while maintaining continuity of identity with COADS, which has been widely used and referenced.

    The original COADS project, and the continuing US contribution toward the new international database, ICOADS, is the result of a cooperative project between the National Oceanic and Atmospheric Administration (NOAA)–specifically its Earth System Research Laboratory (ESRL), its National Climatic Data Center (NCDC), and the Cooperative Institute for Research in Environmental Sciences (CIRES, conducted jointly with the University of Colorado)–and the National Science Foundation’s National Center for Atmospheric Research (NCAR). The NOAA portion of ICOADS is partially funded by the NOAA Climate Program Office (CPO).

    ICOADS data are made available in two primary forms:

    Observations: Surface marine reports from ships, buoys, and other platform types. Each report contains individual observations of meteorological and oceanographic variables, such as sea surface and air temperatures, wind, pressure, humidity, and cloudiness.
    Monthly summary statistics: Ten statistics (such as the mean and median) are calculated for each of 22 observed and derived variables, using 2° latitude x 2° longitude boxes back to 1800 (and 1°x1° boxes since 1960).
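
    For what it’s worth, the 2°x2° summaries amount to binning of roughly this kind (a minimal sketch with invented reports; the actual ICOADS processing adds quality control and computes ten statistics, not just the mean):

    import numpy as np

    reports = [                        # (latitude, longitude, SST) - invented values
        (51.2, -10.4, 12.1),
        (51.8, -10.9, 12.4),
        (53.1, -11.2, 11.8),
    ]

    boxes = {}
    for lat, lon, sst in reports:      # assign each report to its 2x2 degree box
        key = (int(np.floor(lat / 2)) * 2, int(np.floor(lon / 2)) * 2)
        boxes.setdefault(key, []).append(sst)

    for key, vals in sorted(boxes.items()):
        print(key, np.mean(vals), len(vals))    # box corner, box mean, report count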

  678. Richard Sharpe
    Posted Mar 5, 2008 at 11:54 AM | Permalink

    re 669

    Then why do so many people insist that the warming today must be 100% explained by CO2 and nothing can be attributed to the release of heat sequestered into the oceans 1000+ years ago?

    Because there was a major cooling event in between??

    Clearly, the warming was in the pipeline. Someone just forgot to open the valve.

  679. Tom C
    Posted Mar 5, 2008 at 12:57 PM | Permalink

    #667 Willis

    This is the point that William Gray keeps making, and I think, in the end, he will be shown to be correct. The staggering differential in thermal mass between the oceans and the atmosphere has to dictate that it is the sloshing around of the oceans that controls the temperature trends of the atmosphere.

  680. kim
    Posted Mar 5, 2008 at 1:01 PM | Permalink

    Someone bright here said that the climate is the continuation of the oceans by other means.
    =====================================================

  681. Raven
    Posted Mar 5, 2008 at 1:14 PM | Permalink

    Richard Sharpe says:

    Because there was a major cooling event in between??
    Clearly, the warming was in the pipeline. Someone just forgot to open the valve.

    Who says warming in the pipe needs to show up immediately? Energy stored in deep water could take time to surface again – in the meantime short term effects of surface temperature dominate.

    Alarmists have used this argument to make their predictions even more scary. But what happens if cold water sequestered during the LIA starts to surface first?

    The 1000 year oscillations in the recent past climate are not well understood.

  682. Richard Sharpe
    Posted Mar 5, 2008 at 6:29 PM | Permalink

    Raven says

    Richard Sharpe says:

    Because there was a major cooling event in between??
    Clearly, the warming was in the pipeline. Someone just forgot to open the valve.

    Who says warming in the pipe needs to show up immediately? Energy stored in deep water could take time to surface again – in the meantime short term effects of surface temperature dominate.

    Alarmists have used this argument to make their predictions even more scary. But what happens if cold water sequestered during the LIA starts to surface first?

    The 1000 year oscillations in the recent past climate are not well understood.

    I forgot to include the tongue-in-cheek smiley :-)

    Of course it would be useful to come up with evidence for millennium-long residence times of large energy anomalies in the ocean deeps.

  683. Sam Urbinto
    Posted Mar 6, 2008 at 2:28 PM | Permalink

    Here’s what somebody said:

    There is no single thermometer measuring the global temperature. Instead, individual thermometer measurements taken every day at several thousand stations over the land areas of the world are combined with thousands more measurements of sea surface temperature taken from ships moving over the oceans to produce an estimate of global average temperature {for the month} every month. To obtain consistent changes over time, the main analysis is actually of anomalies (departures from the climatological mean at each site) as these are more robust to changes in data availability. It is now possible to use these measurements from 1850 to the present, although coverage is much less than global in the second half of the 19th century, is much better after 1957 when measurements began in Antarctica, and best after about 1980, when satellite measurements began.

    FYI: Best after about 1980; roughly when all the monthly anomalies jumped to be consistently positive every month. Note also the reason for the anomaly according to them: data availability.

    Expressed as a global average, surface temperatures have increased by about 0.74°C over the past hundred years (between 1906 and 2005; see Figure 1). However, the “warming” has been neither steady nor the same in different seasons or in different locations. There was not much overall change from 1850 to about 1915, aside from ups and downs associated with natural variability but which may have also partly arisen from poor sampling. An increase (0.35°C) occurred in the global average temperature from the 1910s to the 1940s, followed by a slight cooling (0.1°C), and then a rapid warming (0.55°C) up to the end of 2006 (Figure 1). The warmest years of the series are 1998 and 2005 (which are statistically indistinguishable), and 11 of the 12 warmest years have occurred in the last 12 years (1995 to 2006). Warming, particularly since the 1970s, has generally been greater over land than over the oceans. Seasonally, warming has been slightly greater in the winter hemisphere. Additional warming occurs in cities and urban areas (often referred to as the urban heat island effect), but is confined in spatial extent, and its effects are allowed for both by excluding as many of the affected sites as possible from the global temperature data and by increasing the error range (the blue band in the figure).

    Two issues. First, there’s that around-1980 change again. Second, how can something subject to wind and rain and temperature gradients be confined in spatial extent? It can’t.

    Actually a third: it assumes that the “poor sampling” from 1850-1915 is now “good sampling” (which I take to mean insufficient sampling locations; that they are sufficient now is also an assumption, whatever is meant). And a fourth: whether the error range has been increased enough.

  684. Posted Mar 6, 2008 at 2:41 PM | Permalink

    lucia

    Can you suggest a specific statistics text that discusses how to correct for the autocorrelation? With equations I can read? (I keep finding all sorts of things that tell me how to use a statistical package, but no actual equations or math! I like to see the math.)

    I think this one can be shared with others as well (hope you got my mail):

    A simple message for autocorrelation correctors: Don’t
    by G. E. Mizon, Journal of Econometrics, Volume 69, Number 1, September 1995, pp. 267-288

    That’s more or less my point of view as well, but I need to read your work carefully before further comments.
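
    For readers who want the actual equations: one common recipe (the kind of mechanical correction Mizon’s paper warns against, so take it as one view rather than the last word) is to fit the trend by OLS and inflate the slope’s standard error using an AR(1) effective sample size, n_eff = n(1-r1)/(1+r1), with r1 the lag-1 autocorrelation of the residuals:

    import numpy as np

    def trend_with_ar1_adjustment(y):
        # OLS trend with an AR(1) effective-sample-size adjustment
        n = len(y)
        t = np.arange(n, dtype=float)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
        s2 = np.sum(resid**2) / (n_eff - 2)             # residual variance, adjusted dof
        se = np.sqrt(s2 / np.sum((t - t.mean())**2))    # standard error of the slope
        return slope, se

    y = 0.01 * np.arange(120) + 0.05 * np.cumsum(np.random.randn(120))  # toy series
    print(trend_with_ar1_adjustment(y))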

  685. Neal J. King
    Posted Mar 6, 2008 at 6:30 PM | Permalink

    What about heat hiding in the ocean?

    Raven: #669, #682
    Stan Palmer: #677
    Richard Sharpe: #679, #683

    Out-gassing from the oceans depends on how long it takes for the temperature of the oceans to rise. If it’s a slow rise, the out-gassing will proceed slowly. In general, one would expect the maximum amount of dissolved CO2 to be determined by the temperature of the water.

    But if heat that had been “stored” in the ocean were being released into the atmosphere, the oceans would have to be warmer than the air to which the heat was being released: heat flows from warmer to cooler. In general, that doesn’t seem to be the case: the last time I saw anything about temperature profiles in the oceans, they were all consistent with the idea that the oceans are being warmed from the surface by the air. In particular, progressing vertically downward, the temperature generally gets cooler. That means the temperature gradient is upward, so the heat flux is downward.
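
    The sign argument is just Fourier’s law, q = -k dT/dz (a sketch with an assumed molecular conductivity; real vertical heat transport in the ocean is dominated by mixing, which this ignores):

    k = 0.6          # W/(m K), molecular conductivity of water (rough value)
    dT_dz = -0.05    # K/m, temperature falling with depth z (z positive downward)
    q = -k * dT_dz   # Fourier's law along z
    print(q)         # positive: the conductive heat flux is directed downward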

  686. Neal J. King
    Posted Mar 6, 2008 at 6:32 PM | Permalink

    Isn’t the anomaly meaningless?

    Sam Urbinto: #666, #678
    Willis Eschenbach: #667
    UC: #676

    Your arguments show that the delta-[T] is not a good quantitative measure for an increase in energy. I agree, it’s not: I’ve pointed out that it will be affected by weighting of various types, that it leaves out latent heat, etc. But what I’ve been saying is that it’s an indicator: something that points in the right direction, to give some guidance.

    Analogously, we get a crude idea of someone’s state of health by sticking a thermometer under his tongue. What % of the body are we sampling? How representative is it of the whole person? I think it’s also a bit crude.

    But I think the most important point is that this indicator is made up of available data. When people started to take temperature measurements a hundred or more years ago, they weren’t thinking about GCMs or climate prediction in any way. They just had an interest, probably for very practical reasons, in having a record of the temperatures. So much later, when people have now developed an interest in climate prediction and GCMs, they can go back to the data available and try to get a handle on it. Reading some of the responses here at ClimateAudit, I sometimes get the feeling that folks here seem to think that the entire goal of the climate-science community, from time beyond memory, has been to deliberately concoct a fake record of temperature trends. I really think that is implausible. The data that are available are the data that are available, and no one can go back into the past and collect missing data. So they have to use whatever tools and techniques are available to get a handle on it. Or give up.

  687. Neal J. King
    Posted Mar 6, 2008 at 6:36 PM | Permalink

    Ocean/atmospheric mechanisms?

    Scott-in-WA: #674

    Your questions seem to boil down to, How does thermal energy give rise to ocean currents, and vice versa? I haven’t studied oceanography, but from my general reading I would believe that heating (from absorbed radiation or conducted from air and land) causes temperature changes in the water, and temperature changes cause density changes & expansion, leading to horizontal pressure differences and motion. Changes in salinity (due to inflowing water and melting ice) will also lead to density changes and thus also to ocean currents. How much of this is taken into account in GCMs? I suspect that it’s not done from the partial-differential equations, since modeling hydrodynamics is pretty darn hard. I would guess that you could use some more macroscopic methods to estimate how much heat convection would be driven by these flows.

    This reference might be interesting: http://www.pik-potsdam.de/~stefan/Lectures/ocean_currents.html

  688. Neal J. King
    Posted Mar 6, 2008 at 7:20 PM | Permalink

    #675, Tom Vonk:

    Once and for all, I am going to review our previous discussion:

    a) In #675, you write:

    He writes dU = Cv.dt and thinks that we have a problem with trivial Taylor expansions while what you meant was that
    dU = DU/DS .dS + DU/DV . dV with D being the partial derivative .
    Of course it is also possible to write dU = DU/DT .dT + DU/DV . dV .
    Of course it is also possible to say that DU/DV = 0 and DU/DT constant .
    The problem is that he dodges the justification of the first assumption and says D2U / D T 2 = 0 (or epsilon) to the second .

    My simple point has been, and remains, the fact that one can do a first-order analysis with the leading term in the Taylor’s series, and physicists do it all the time. In an example that is actually of formal relevance to the current application, this is done in the context of the calculus of variations. Reference to: http://mathworld.wolfram.com/FunctionalDerivative.html will lead you to the conclusion that the functional derivative of the integrated internal energy density with respect to the temperature anomaly is the heat-capacity field. That’s not an especially significant insight, but it gives an indication that linear approximations are used in physics all the time, on the general principle that some information is better than no information.

    b)

    – Heat Capacity at constant pressure = (dU/dT) at constant P
    The equation you ask about follows from freshman calculus.
    No Neal . Heat capacity at constant pressure = DH /DT . Different function even for freshmen .

    Yes, I made a mistake in #652: I should have changed it from dE to dQ = dU + pdV (where dV = 0 when V is fixed). I noticed this oversight when documenting #664, but by that point it seemed that Dan Hughes’ concern was the legitimacy of using Taylor’s series rather than Cv vs. Cp, so I decided not to further divert the issue with even more corrections.

    I don’t feel too bad about it: Richard Feynman told me he could never keep the thermodynamic potentials straight.

    But actually this just underscores the point I was trying to make: the reason you have to use
    H = U + pV instead of U is that
    dH = dU + d(pV) = dU + pdV + Vdp ;
    so when dp = 0 (the situation when you’re trying to calculate Cp),
    dH = dU + pdV = dU + d(work) = dQ
    The point: The heat capacity includes all the heat that goes into that clump of matter to increase the temperature during the process of interest; you don’t subtract out the part that has to do with vibrational energy of the molecules. ALL of it is included.
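
    A quick symbolic check of the relation above for the simplest case (a monatomic ideal gas; a sanity check only, not a claim about real air):

    import sympy as sp

    n, R, T, p = sp.symbols('n R T p', positive=True)
    U = sp.Rational(3, 2) * n * R * T      # internal energy of a monatomic ideal gas
    V = n * R * T / p                      # ideal gas law solved for V
    H = U + p * V                          # enthalpy
    Cv = sp.diff(U, T)                     # (dU/dT)_V  ->  3nR/2
    Cp = sp.diff(H, T)                     # (dH/dT)_p  ->  5nR/2
    print(Cv, Cp, sp.simplify(Cp - Cv))    # difference is nR, as it should be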

    Thus I completely disagree with your statement that the vibrational energy has to be added in separately from the heat capacity. History:

    – In #640, you said:

    There is also of course dE = d[N.(Th/2 + Th/(Exp(Th/T) -1))] which is not proportionnal to dT .

    My first reaction is, If the expression [N.(Th/2 + Th/(Exp(Th/T) -1))] does not have derivative 0 with respect to T, then d[expression] can be approximated to first-order as being proportional to dT. Half of physics depends on making linear approximations of this type.

    My second reaction, in #641, was to ask:

    I don’t quite see why you are bringing in a Planck-spectrum-like factor: I’m not trying to include the heat capacity of radiation. It would be clearer if you would define your notation.

    – In #642, your response was

    The second example of my previous post was the contribution of the vibrationnal energy to the average energy of molecules because temperature is only measuring their average kinetic energy

    Because of dealing with the main issue of linear approximation, I didn’t even notice this until #664 (about which more later). However, now in #675 you claim:

    You still didn’t understand what I was saying . I never said that vibrationnal energy doesn’t contribute to U . Of course it does as well as to its derivatives so to Cv but it doesn’t contribute linearly .

    This makes me stare at the page for quite a while. I do not know how you can claim that:

    “temperature is only measuring their average kinetic energy”

    is equivalent to

    “vibrationnal energy…contribute[s] to U . Of course it does as well as to its derivatives so to Cv but it doesn’t contribute linearly .”

    These two statements seem to me to be completely opposed and irreconcilable.

    c)

    By building strawmen and dodging arguments you already forgot that you made a claim according to which the average surface temperature of the Earth was linearly giving the internal energy of the Earth .
    And that is obviously an extraordinary claim that is interesting for everybody .

    – My first mention of this question was in #595, where I said:

    An average over temperature values does have some validity as an index, because:
    – To the extent that you can neglect phase transitions (and thus latent heat), a temperature increase is linearly related to an energy increase;
    – Energy increases throughout a system add up linearly;
    – So an increase in the temperature averaged over this system is an indication of an increase in internal energy; actually an underestimate.
    It’s not perfect; but few indices of complex systems are perfect.
    So if you look at a big terrarium, with little ponds, rocks, plants, bits of wood, and an overhead light, the temperature will also vary from point to point; an average temperature can be calculated by measuring temperature in a 3D grid, or over the ground, or some reasonable subset. Now, over a week, if that average were to increase a couple of degrees, I would argue that it would be in the best interests of the critters that have to live there to investigate why that is happening, and to stop this trend; else you will start to have sick critters. The trend itself is more meaningful than the exact definition of the index, because there is an important message there for the owner of the terrarium, no matter what the precise meaning of the index.

    After that, I further qualified this evaluation, and without any input from you:

    #632:

    Instead, what I said was equivalent to:
    average energy change
    roughly proportional to
    average temperature difference
    equal to
    difference in average temperature

    And #634:

    Actually, I do see a hole: If the heat capacity is not roughly constant throughout, there will be a problem with the proportionality between average energy change and difference in average temperature:
    1) Given three lumps of matter, with heat capacities, temperatures and energies (C1,T1,E1), (C2,T2,E2), (C3,T3,E3)
    2) Under temperature changes dT1, dT2, dT3, the corresponding energy changes are dE1, dE2, dE3, with total energy change
    dE = dE1 + dE2 +dE3
    = C1*dT1 + C2*dT2 + C3*dT3
    3) The average temperature is [T] = (T1 + T2 + T3)/3, so the change in average temperature is d[T] = (dT1 + dT2 + dT3)/3
    4) If C1 = C2 = C3 = Ca, then dE = Ca*(dT1 + dT2 + dT3) = 3*Ca*d[T];
    so dE = 3*Ca*d[T]
    5) Or if dT1 = dT2 = dT3 = dTa, then dE = (C1 + C2 + C3)*dTa;
    so dE = (C1 + C2 + C3)*d[T]
    6) However, if the C’s are different and the dT’s are different, there will not be a general proportionality between dE and d[T].
    So that does reduce the significance of d[T].
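
    A short numerical confirmation of point 6), with arbitrary values: two scenarios with the same d[T] but different dE.

    C = [1.0, 2.0, 3.0]                              # three unequal heat capacities
    for dT in ([0.1, 0.1, 0.1], [0.3, 0.0, 0.0]):    # both cases give d[T] = 0.1
        dE = sum(c * dt for c, dt in zip(C, dT))
        print(sum(dT) / 3, dE)                       # prints 0.1, 0.6 then 0.1, 0.3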

    So after all this, you summarize my point of view as:

    you made a claim according to which the average surface temperature of the Earth was linearly giving the internal energy of the Earth.

    I give up on you.

    d)
    In #664, I said:

    …if all the dynamical degrees of freedom are not contributing to the full extent allowed by quantum mechanics, then these molecules are not in a well-defined state of thermal equilibrium and do not have a well-defined temperature: the opposite of a temperature inversion.) This reinforces my original point that the heat capacity takes into account the entire energy change associated with a differential increase in temperature.

    (See also #673)

    To which you responded in #675:

    No Neal . I am not sure that you understood yourself what you were writing . The right formulation is “The equipartition theorem doesn’t hold at every temperature for gases . Follows that vibrational energy must be treated separately for temperatures of interest here . This has nothing to do with equilibriums or quantum states populations and temperatures are perfectly defined .”

    The point was:
    – As discussed already in point b), the heat capacity already includes all increases in the energy, including that due to increased occupation of vibrational energy levels. There is no need to add your Planck term: It’s already included, because I am not assuming that
    Cv = 3Nk/2 or 5Nk/2, which would be the expectation from the equipartition theorem: instead, Cv is whatever Cv is.
    – Another way of putting it: If you are adding it because it’s not contributing a “full share” according to the equipartition theorem, that is not correct: Even if the quantum energy increments are too large to allow passage to the classical limit and thus the equipartition theorem, the increase in energy is still included in Cv. For temperatures below the classical limit, with T < Th, the vibrational contribution to Cv is reduced, but it is not zero, and it is still part of Cv.

    e)
    In #675, you also wrote:

    You may write dU = DU/DT .dT + DU/DV . dV for an infinitely small volume and make all Taylor expansions you want , once you begin to integrate your local linear approximations don’t hold anymore and you will get no linear relation between averages .

    However:
    – I don’t use the (dU/dV)dV term, because I don’t assume that the volume is changing.
    – I integrate over (dU/dT) a(t,x) dV to cover the Earth: it doesn’t affect the question of the validity of the linear approximation, which depends on small values of a(t,x), the anomaly field.
    – The relationship has already been discussed in part c).

    My bottom line on all this is that discussing anything with you has been a complete waste of time. I especially consider the final issue in point b), where you claim

    “temperature is only measuring their average kinetic energy”

    is equivalent to

    “vibrationnal energy…contribute[s] to U . Of course it does as well as to its derivatives so to Cv but it doesn’t contribute linearly .”

    Either:
    – You are lying about what you meant; or
    – Your way of using the English language is completely incomprehensible to me.

    My resolution? It’s not necessary for me to decide between these two views:
    – If you lie about what you meant, there’s no need to pay any attention to what you say.
    – If I can’t understand what you are saying at a yes/no, black/white level, there is also no benefit to paying any attention to what you say.

    So henceforth, I am simply not going to respond to anything you write. Symmetrically, I will not care, and am likely not to notice, if you do not pay any attention to what I write.

    Life will probably be simpler that way.

  689. Sam Urbinto
    Posted Mar 6, 2008 at 7:38 PM | Permalink

    Neal: sticking a thermometer in a place we know accurately reflects what’s going on in a single person (with a “normal temperature” that covers a degree or two as it is) is hardly like getting the anomaly. The two are not comparable.

    On to other subjects: my take on the global warming debate!

    The eating (top) side of a spoon is made of iron with a black plastic covering with raised flower designs, while the non-eating (bottom) side is made of silver covered with gold, with a raised center. Then we have four people asked to describe it.

    Dr. Handletop is examining only the black raised-flower handle.
    Mrs. Handlebottom is examining only the gold raised-center handle.
    Ms. Bowltop is examining only the concave black bowl.
    Mr. Bowlbottom is examining only the convex gold bowl.

    Handletop says it’s flat, with a rough but generally smooth covering, and black.
    Handlebottom says it’s two-tiered and smooth, and shiny yellow.
    Bowltop is color-blind so says it’s concave and gray.
    Bowlbottom is blind but knows Bowltop is colorblind and doesn’t trust Handlebottom; besides, shiny yellow wouldn’t be gray to the colorblind (and he doesn’t know what concave means), so even though what he’s examining is smooth, not rough, it isn’t two-tiered, it’s flat (as far as he’s concerned). He says it’s flat and doesn’t know what it looks like, but it’s probably black (since Bowltop says it’s gray and Handletop says it’s black, and it doesn’t have two tiers).

    Those four are replaced now by their brother or sister (as appropriate) and asked what the item is made of and whether it would hold water or not.

    Handletop says it’s made of black plastic, and no it wouldn’t.
    Handlebottom says it’s made of gold and no it wouldn’t.
    Bowltop pokes an instrument into the plastic and says it’s made of metal but doesn’t know what kind, and yes it would.
    Bowlbottom’s sister is blind too, but doesn’t know anyone. She does know that Handletop’s sibling is a doctor also, but she feels the metal and knows it’s not plastic. But she also knows it won’t hold water. So she has to agree with Handlebottom’s brother. Bowlbottom says it’s made of gold and wouldn’t hold water.

    Those four are now replaced with the original four’s spouses or significant others. Handletop and Bowltop are blind, the other two aren’t. They are asked what the item is, what it’s made of and what the color is.

    Handletop says it’s a bumpy plastic stick and has no idea what colour it could be; the answer, she thinks, would have to be gauged by those who can see it.
    Handlebottom says it’s a smooth basically flat gold stick.
    Bowltop says it’s a small curved-in plastic bowl and doesn’t know the color either, but doubts it’s gold because it’s not a stick, and wonders why the other two are lying about it being a stick but is too civil to say anything.
    Bowlbottom is a liar and trickster. And she hates Mrs. Handlebottom and knows Dr. Handletop’s blind. She says it’s a small red curved-in plastic bowl.

    The four are now replaced by 3 groups of other people, random people.

    First group is asked to describe what it is.

    #1. It’s something or other made of black plastic, but it’s probably a cover over metal, given the feel of it.
    #2. Based upon these tests, it’s made of vermilion and is probably the handle for an eating utensil.
    #3. It’s the eating side of a spoon.
    #4. It curves out and is gold. Obviously made by evil corporations out to despoil the planet in the name of profit.

    Then another four are brought in and asked if it’s magnetic.

    A. It’s plastic. No.
    B. It’s gold. No.
    C. It’s a spoon with a plastic cover that may have another metal underneath, so perhaps.
    D. {Ignores everything else, grabs a magnet and lifts it} Yes.

    Then another four are brought in and asked if it’s worth money.

    A. Looks like a fork handle or something, so maybe.
    B. Might be gold, I so I’d say yes, but maybe not much.
    C. Given our current level of understanding, it’s likely to a high degree of certainty that this is the concave utility portion of a plastic spoon. It is also very likely it was created by humans. What do the models say?
    D. I don’t know. Looks like a gold spoon. What is it made out of? Elaborate please.

  690. Gerald Browning
    Posted Mar 6, 2008 at 7:54 PM | Permalink

    Neal J. King (#687).

    If the data are not sufficient to answer the scientific question, there is the honest alternative of clearly stating that that is the case, instead of cherry picking the data and fudging the methodology. Of course then the perpetrators couldn’t obtain grants or make headlines.

    Jerry

  691. Posted Mar 6, 2008 at 8:02 PM | Permalink

    Thanks UC.
    I’ve always avoided correction for autocorrelation by designing physical experiments so it can’t screw things up!

  692. Neal J. King
    Posted Mar 6, 2008 at 8:07 PM | Permalink

    #691, Gerald Browning:

    I’ve noticed your postings, but haven’t followed up on them.

    Question: Do you know the work of the climatologists from the inside, or just from the outside? Do you honestly believe that you have understood what they are thinking about with their tools, evidence and models, or have you just examined it as a “prosecutor”?

  693. Neal J. King
    Posted Mar 6, 2008 at 8:14 PM | Permalink

    #690, Sam Urbinto:

    – It’s only according to a simple expectation/model that this specific point can be expected to be in any way representative of a person’s health. I am sure that there are all sorts of conditions in which the temperature, so taken, looks normal, but the individual is in danger.

    – Similar question as raised with Gerald Browning: Would you be willing to do the work of these climatologists, given their real-world constraints of measurements and accuracies? How would you go about making sense of the global climate situation?

  694. Posted Mar 6, 2008 at 9:39 PM | Permalink

    Neal J King
    You seem to be saying the climatologists are doing the best they can given limited resources.
    Others are saying the temperature data set may not be adequate to provide the accuracy and precision we need to answer climate questions being asked.

    Both things could be true.

    This is not an either/or issue. Rather, the latter may be true because the resources are too limited to permit adequate measurement. That would mean the uncertainties in the data are not the scientists’ fault, but it doesn’t make them vanish.

  695. Neal J. King
    Posted Mar 7, 2008 at 3:20 AM | Permalink

    #695, lucia:

    What I’m saying is that quite a few of the posters here seem to be really into a “gotcha” attitude, as if they assume that climate scientists are just looking for ways to fool everybody.

    What I see seems to indicate a stronger interest in proving that they are wrong than in finding out what they are actually doing.

    I’m not saying that “to understand all is to forgive all”. But to start out with condemnation is going to have a limiting effect on one’s own understanding.

    This is not to say that difficulties should be swept under the rug. But at the end of the day, the field of study remains, the questions in the field remain. “Gotcha” specialists can walk off when they’re done, but the folks working in the field have to work with the data and techniques that are actually available. If critics can help develop approaches and techniques that do more with the data, that’s very positive, and more impressive than just throwing stones.

  696. Neal J. King
    Posted Mar 7, 2008 at 3:43 AM | Permalink

    #696, cont’d:

    In a word, there is a distinction between honest critique and outright hostility.

    Sometimes the postings here cross that line.

    Steve:
    I repeatedly ask people to avoid angriness in their postings here and frequently delete or snip posts merely because of angriness. I try hard myself to avoid that tone. There are so many posts and comments here that it’s hard for me to catch everything. If there are posts or comments that you (or others, for that matter) feel cross the line, please email me if you notice them and let me deal with them, as opposed to getting into a foodfight.

  697. Peter Thompson
    Posted Mar 7, 2008 at 5:10 AM | Permalink

    Neal #696:

    “In a word, there is a distinction between honest critique and outright hostility.

    Sometimes the postings here cross that line.”

    You are quite correct, but such is the legacy of Mann, Thompson, Hansen and a few other “pioneers”. Their work was used, with their acquiescence, to advance a political agenda with a radical re-ordering of the world economy as its goal. In the case of Mann, the evidence available on this blog points to a lot more than a few innocent errors. Thompson allowed his name, and “some” of his work, to be used in Gore’s alarmist propaganda in an absolutely unscientific way. Hansen is responsible for a global temperature record which has much different (warmer and warmer lately) results than its competitors, all the while running a political-style campaign of fear of catastrophic warming. You are then surprised by hostility from those whose lives are to be so re-ordered, without their consent? Really?

    At the same time, evidence has begun to accumulate that the science is generally not done with the rigour expected of other fields, and is not receiving the level of scrutiny required to improve. Last, but by no means least, more and more people are coming to grips with the fact that the “science” is often comprised of model runs, and output from models is often used as “data” for other “experiments”. Pardon us for our skepticism.

    If this were being treated as an academic pursuit of knowledge, I am sure all parties would behave nicely, as with so many other areas of research. I somehow can’t imagine any other field having “the science is settled” as one parties’ mantra. Once this became political, it infected the scientific process and turned it tribal. And ugly.

    Let me ask you one hypothetical: Do you suppose Lonnie Thompson would have stayed quiet if someone cut a chunk out of his ice core, welded some piece of an unrelated series on the end, called it Dr. Thompson’s thermometer and used it in The Great Global Warming Swindle?

  698. Neal J. King
    Posted Mar 7, 2008 at 5:34 AM | Permalink

    #698, Peter Thompson:

    The best disinfectant is light; but the more heat, the less light.

  699. Posted Mar 7, 2008 at 7:56 AM | Permalink

    Neal @696.

    What I’m saying is that quite a few of the posters here seem to be really into a “gotcha” attitude, …

    Yes. There is some “gotcha” here. But it’s not most people. Also, the gotcha game is happening on both sides in these climate blog wars.

    Tamino’s thread on Watts seems pure “gotcha”. Yeah… so Watts screwed up a few graphs by not recognizing how anomalies are defined and not setting them to the same baseline. Gavin didn’t reset baselines when writing his blog entry on validating Hansen A,B,C. He compared anomalies with temperature zeros in “perpetual computed 1958” to GISS baselined anomalies that… when are they? 55-85? 60-90? 65-95? I can’t ever even remember.

    But basically Gavin compared temperature anomalies with different baselines. But did that unhinge Tamino? No. Has Tamino called Gavin an idiot for comparing temperature anomalies with different baselines? Questioned Gavin’s understanding of the whole concept of baselines? Has Tamino decreed GCMs are wrong because Gavin failed to set temperatures to identical baselines in a blog post? No.

    No one would say those things about Gavin for failing to rebaseline in a blog post, and no one should be pillorying Watts for failing to rebaseline in a blog post. Either can correct their error if they wish. Anthony Watts has. The whole “gotcha” thread over at Tamino’s should now end, right? But I saw it continuing in full force last night. (Never mind that the gotcha isn’t going to get a single person who admires Anthony to decide that forgetting to rebaseline means it’s ok to have thermometers mounted next to walls that can radiate heat.)
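
    (Rebaselining, for anyone unsure what is involved, is one line per series: subtract each series’ own mean over a common base period. A toy sketch with invented numbers:)

    import numpy as np

    years = np.arange(1950, 2000)
    a = 0.01 * (years - 1950) + 0.1 * np.random.randn(50)   # anomalies on one baseline
    b = a + 0.15                                            # same data, offset baseline

    base = (years >= 1961) & (years <= 1990)                # common base period
    a_rebased = a - a[base].mean()
    b_rebased = b - b[base].mean()
    print(np.max(np.abs(a_rebased - b_rebased)))            # ~0: the offset is gone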

    Frankly, the best way to minimize the amount of gotcha behavior is to ignore the ones who are into nothing more than “gotcha”. It’s out of your power to prevent “gotcha” behavior on blogs.

    But at the end of the day, the field of study remains, the questions in the field remain.

    Yes the questions remain.

    But don’t forget: “questions remain” is the skeptic position! Whether or not questions remain is the argument between skeptics and those who believe AGW is proven.

    So, of course, skeptics are going to show the problems in the field, because these problems are one of the big reasons questions remain. Showing these problems isn’t “gotcha”; it’s providing the basis for skepticism about our ability to distinguish AGW signals based on existing data.

    Would you prefer that the skeptic argument were just “I just don’t believe the data. But it would be impolite to show the problems with the system. That might hurt someone’s feelings.”?

    “Gotcha” specialists can walk off when they’re done, but the folks working in the field have to work with the data and techniques that are actually available. If critics can help develop approaches and techniques that do more with the data, that’s very positive, and more impressive than just throwing stones.

    Sure, it might be better if critics got up, went out and came up with better data. But quite frankly, what are you suggesting they do?

    No one here can go out, pull a temperature station off its mount over a bed of black lava rocks and next to air conditioner exhaust, and put it in a new and better spot. No one here can force GISS or Hadley to adopt new data analysis techniques. For that matter, it’s not even easy to get all the data.

    Moreover, the fact is, even if we could get the data, it simply may not be possible to “correct” poor data and bring it up to par. With physical experiments, it is the rule, and not the exception, that poor data is poor data. It’s often impossible to “fix” using even the most impressive amounts of brainpower, because the information is no longer there.

    So, suggesting people here should jump in and fix that data fails to recognize one point a few people are making:

    * The data may simply be unfixable. This may be unfortunate, and no one’s fault, but it results in uncertainty in testing certain hypotheses.

    If people are correct about this, it is unfortunate. But the observation is important in the debate over whether or not AGW is proven. It’s even more important if we believe AGW but are trying to discriminate between 1C/century warming and 6C/century warming.

    This means the problems with data and stations need to be discussed openly and the warts shown and understood. And, if one is to make the positive contribution you suggest, one must first understand what is wrong with the data.

    So, in this vein, what SteveM and Anthony Watts are doing is not counterproductive; it is essential. No one can fix what is wrong with the data if they don’t understand what is happening at individual stations.

    So, it would probably be best to stop reacting as if each and every criticism of the data is gotcha behavior.

    But, FWIW, I dispute your suggestion that one must jump in and fix the data rather than criticize the data. Sometimes, people need to just evaluate other people’s claims. When the claims are supported by data, one needs to evaluate the data. Finding flaws in that data, or in the methods of obtaining the data, isn’t just “gotcha”.

    It’s figuring out whether or not claims about AGW are supported by the data. That’s a perfectly valid thing to do, and suggesting that it’s less worthy than fixing the data itself is both unfair and unscientific. Both evaluating what we currently know, which involves criticism and recognizing what we don’t know, and trying to extend our knowledge are essential in science. They are also important in the political debate.

    But let me close by repeating myself: Yes. Some commenters do nothing but play “gotcha”. If this worries you, the best way to minimize the amount of gotcha behavior is to ignore the ones who you think are doing nothing more than playing “gotcha”. It’s out of your power to prevent “gotcha” behavior on blogs. If you think commenter “X” is playing “gotcha”, ignore their comment.

    Neal @696.

    What I’m saying is that quite a few of the posters here seem to be really into a “gotcha” attitude, …

    Yes. There is some “gotcha” here. But it’s not most people. Also, the gotcha game is hapening on both sides in these climate blog wars.

    Tamino’s thread on Watts seems pure “gotcha”. Yeah…. so Watts screwed up a few graphs by not recoginzing how anomalies are defined and not setting to the same baseline. Gavin didn’t reset baselines when writing his blog entry on validating Hansen A,B,C. He compared anomalies with temperature zeros in “perpetual computed 1958” to GISS baselines anomalies that … when are the? 55-85? 60-90? 65-95? I can’t ever even remember.

    But basically Gavin compared temperature anomlies with different baselines. But did that unhinge Tamino? No. Has Tamino called Gavin an idiot for comparing temperature anomalies with different baselines? Questioned Gavin’s understanding of the whole concept of baselines? Has Tamino decreed GCM’s are wrong because Gavin failed to set temperatures to identical baselines in a blog post? No.

    No one would say those things about Gavin for failing to rebaseline in a blog post and no one should be pillorying Watts for failing to rebaseline in a blog post. Either can correct their error if they wish. Anthony Watts has. The whole “gotcha” thread over at Tamino’s should now end, right? But I was it continuuing in full force last night. (Never mind that the gotcah isn’t to get a single person who admires Anthony to decides the forgetting to rebaseline means that it’s ok to have thermometers mounted next walls that can radiate heat.)

    Frankly, the best way to minimize the amount of gotcha behavior is to ignore the ones who are into nothing more than “gotcha”. It’s out of your power to prevent “gotcha” behavior on blogs.

    But at the end of the day, the field of study remains, the questions in the field remain.

    Yes the questions remain.

    But don’t forget “questions remain” is the skeptic position! Whether or not questions remain is the argument between skeptics and those who believe AGW is proven.

    So, of course, skeptics are going to show the problems in the field, because these problems are one of the big reasons questions remain. Showing these problems isn’t “gotcha”; it’s providing the basis for skepticism about our ability to distinguish AGW signals based on existing data.

    Would you prefer that the skeptic argument were just “I just don’t believe the data. But it would be impolite to show the problems with the system. That might hurt someone’s feelings.”?

    “Gotcha” specialists can walk off when they’re done, but the folks working in the field have to work with the data and techniques that are actually available. If critics can help develop approaches and techniques that do more with the data, that’s very positive, and more impressive than just throwing stones.

    Sure, it might be better if critics got up, went out and came up with better data. But quite frankly, what are you suggesting they do?

    No one here can go out, pull a temperature station off its mount over a bed of black lava rocks and next to air conditioner exhaust, and put it in a new and better spot. No one here can force GISS or Hadley to adopt new data analysis techniques. For that matter, it’s not even easy to get all the data.

    Moreover, the fact is, even if we could get the data, it simply may not be possible to “correct” poor data and bring it up to par. With physical experiments, it is the rule, and not the exception, that poor data is poor data. It’s often impossible to “fix” using even the most impressive amounts of brainpower, because the information is no longer there.

    So, suggesting people here should jump in and fix that data fails to recognize one point a few people are making:

    * The data may simply be unfixable. This may be unfortunate, and no one’s fault, but it results in uncertainty in testing certain hypotheses.

    If people are correct about this, it is unfortunate. But the observation is important in the debate over whether or not AGW is proven. It’s even more important if we believe AGW but are trying to discriminate between 1C/century warming or 6C/century warming.

    This means the problems with data and stations need to be discussed openly and the warts shown and understood. And, if one is to make the positive contribution you suggest, one must first understand what is wrong with the data.

    So, in this vein, what SteveM and Anthony Watts are doing is not counterproductive; it is essential. No one can fix what is wrong with the data if they don’t understand what is happening at individual stations.

    So, it would probably be best to stop reacting as if each and every criticism of the data is gotcha behavior.

    But, FWIW, I dispute your suggestion that one must jump in and fix the data rather than criticize the data. Sometimes, people need to just evaluate other people’s claims. When the claims are supported by data, one needs to evaluate data. Finding flaws in that data, or the methods of obtaining data, isn’t just “gotcha”.

    It’s figuring out whether or not claims about AGW are supported by the data. That’s a perfectly valid thing to do, and suggesting that it’s less worthy than fixing the data itself is both unfair and unscientific. Both evaluating what we currently know, which involves criticism and recognizing what we don’t know, and trying to extend our knowledge are essential in science. They are also important in the political debate.

    But let me close by repeating myself: Yes, some commenters do nothing but play “gotcha”. If this worries you, the best way to minimize the amount of gotcha behavior is to ignore the ones who you think are doing nothing more than playing “gotcha”. It’s out of your power to prevent “gotcha” behavior on blogs. If you think commenter “X” is playing “gotcha”, ignore their comment.

    If you apply this to your own behavior, and hope others do too, there is some chance we can all benefit from more light rather than heat.

  700. Tom Vonk
    Posted Mar 7, 2008 at 8:51 AM | Permalink

    Neal #689

    Normally I pay attention to what Jerry, UC, Dan Hughes, Spence_UK, Lucia and some other posters say, because they know what they are talking about, and I would have skipped your posts because, since the irrelevant bathtub analogy, you didn’t seem to have much interesting to say anyway.
    Yet I found one thing that could be interesting for many readers despite being wrong, and it was the claim that the variation of the average of the internal energy is proportional to the variation of the average temperature, expressed by:

    average energy change roughly proportional to average temperature difference, equal to difference in average temperature

    and by

    So an increase in the temperature averaged over this system is an indication of an increase in internal energy;

    Then it was interesting to show why this statement was wrong, because it gives an insight into the kind of illegitimate assumptions that lead to such statements.
    Indeed it translates to saying that Int[delta U . dV] = A . Int[delta T . dV], where A is a constant and the integral is over some very large volume V.
    Here delta T is T(x,y,z,t + dt) – T(x,y,z,t) and delta U is U(x,y,z,t + dt) – U(x,y,z,t).

    You then proceed to assume that dU = K dT with K constant, by making a Taylor expansion around T(x,y,z,t). No problem with the Taylor expansion in itself, but a problem with the assumption.
    Indeed, as Dan Hughes has pointed out, dU = (∂U/∂S) dS + (∂U/∂V) dV, so how does one get from there to U(T), where U depends only on T?
    Then comes another argument

    I don’t use the (dU/dV)dV term, because I don’t assume that the volume is changing.

    So dU = (∂U/∂S) dS, with all functions depending only on T, which means that gases don’t expand. It also means that density is constant. Well, why not.

    What’s left now is to manage the jump from dU = K dT to Int[delta U . dV] = A . Int[delta T . dV].
    To have Int[K dT . dV] = A . Int[delta T . dV] there is only one way: K = constant = A over the whole integration domain.
    As K is Cv, that means that Cv is constant over the whole integration domain. It doesn’t depend on density, temperature, or pressure.
    So we spent this whole discussion to say that if U = A.T, with A a constant everywhere in space and time, then the average U is A times the average T, which is trivial, as I said from the beginning.
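
    Tom Vonk’s conclusion is easy to check numerically. A minimal sketch (my toy grid, not his; all the numbers are invented) in which the heat capacity varies from cell to cell and correlates with the local temperature change:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy grid: each cell has its own heat capacity Cv, and the local
    # temperature change is allowed to correlate with it (as it might if,
    # say, moist and dry regions warm differently).
    cv = rng.uniform(0.7, 1.3, n)
    dT = rng.normal(0.0, 0.3, n) + (cv - cv.mean())

    dU = cv * dT                        # local energy change, dU = Cv dT

    print("volume mean of dU   :", dU.mean())
    print("Cv_mean * mean of dT:", cv.mean() * dT.mean())
    # The two differ by cov(Cv, dT); they coincide only when Cv is the
    # same constant A everywhere, in which case mean(dU) = A * mean(dT).
    ```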

    After having dodged for a long time, you finally admit it yourself.

    In an example that is actually of formal relevance to the current application, this is done in the context of the calculus of variations. Reference to http://mathworld.wolfram.com/FunctionalDerivative.html will lead you to the conclusion that the functional derivative of the integrated internal energy density with respect to the temperature anomaly is the heat-capacity field. That’s not an especially significant insight,

    For those not familiar with functional derivatives, all this translates into a couple of words that everybody understands: if U = A.T and A is constant, then dU/dT = A. It is only called “functional” because T is a function of x, y, z, t.
    Not an especially significant insight indeed.

    Your final rant where you talk about “lies” is not really important, so only a few words.

    “temperature is only measuring their average kinetic energy”

    Yes, what else? For you, does it measure the potential or the chemical energy?
    I gladly admit that this statement was a tautology, and probably you got confused because I used two different terms (vibrational energy and kinetic energy) and you didn’t see that they were synonymous.

    “vibrational energy…contribute[s] to U. Of course it does, as well as to its derivatives, so to Cv, but it doesn’t contribute linearly.”

    Of course it does. Every kind of energy contributes by definition to U. The two statements are not equivalent, because U and T are not equivalent, but they are both true.
    The point was that if one wanted to REALLY calculate dU/dT, then it was necessary to calculate the different contributions (e.g. translational, potential, vibrational, chemical, etc.), because their dependence on T and other variables was very different, and the equipartition theorem didn’t hold, at least for the vibrational kinetic energy.

    I leave it to other readers to judge the relevance of your assumptions to the real world, and whether the statement

    So an increase in the temperature averaged over this system is an indication of an increase in internal energy;

    holds when one considers the real world.

  701. Neal J. King
    Posted Mar 7, 2008 at 10:49 AM | Permalink

    #700, lucia:

    If people can discover inconsistencies in the data, that’s fine. I would not be expecting people to correct the data: How would they have access to or insight into the details of someone else’s experimental work? Would you expect someone else to trouble-shoot your code based on the graphical output?

    What I find objectionable is the attitude some display that every methodology that is not perfect must be outright fraudulent. When I say, “The questions remain,” what I mean is that the scientific issues that climate scientists are addressing are still there, and they have only the data that they have. Trying to draw plausible conclusions out of imperfectly captured data is an important part of science.

    Steve:
    Neal, who says that “every methodology that is not perfect must be outright fraudulent”? I don’t say that. I’m unaware of anyone else saying that and certainly discourage people from angry posts. If you have some examples that bother you, please itemize some for me and I’ll review them to see whether they comply with blog rules. If authors are working with poor data, I believe that it is important that the limitations of the data be properly disclosed and the adjustments fully described and justified, and that authors should be criticized where they fail to do so.

  702. Spence_UK
    Posted Mar 7, 2008 at 10:57 AM | Permalink

    Re #700

    Is there an echo in here? 😉

  703. MarkW
    Posted Mar 7, 2008 at 11:11 AM | Permalink

    Nobody has any problems with scientists trying to draw plausible conclusions.

    It’s those scientists who insist on playing politician who are insisting that we must start implementing policy based on at best “plausible conclusions” that are creating the problem.

    Those scientists who stick to doing science are not getting any flak.

  704. kim
    Posted Mar 7, 2008 at 11:11 AM | Permalink

    Once more, lucia; I’ve almost got it.
    =====================

  705. Dave Dardinger
    Posted Mar 7, 2008 at 11:21 AM | Permalink

    re: #702, Neal

    Trying to draw plausible conclusions out of imperfectly captured data is an important part of science.

    It’s an important part of science, sure. But it’s not an important part of policy making, which is the problem that people who are skeptical of the science are making an issue of. Policy makers need to have realistic values for the actual degree of confidence which can be given to particular conclusions, and that’s where the present results of climate science fall down big time. If you’ll go back and look at this blog from the beginning till now, this has always been the big problem found. Steve first tried to verify the accuracy of Mann’s findings and found that they often didn’t exist and, when they did, were based on incorrect statistical analysis. The same was found to be true in essentially all of the other multiproxy studies. Moving on to the surface temperatures, it has been found by Steve, Anthony and others that the degree of reliability of the results is again either determined incorrectly, statistically speaking, or based on questionable data.

    Finally, add the usual caveat that this doesn’t necessarily mean the results are wrong, just that they’re unproven given the methods used. And mind you, if we were arguing about the value of the second digit of the results, that wouldn’t have much importance, but we’re talking the first digit and sometimes even the sign or order of magnitude.

  706. Sam Urbinto
    Posted Mar 7, 2008 at 12:20 PM | Permalink

    Q: Would you be willing to do the work of these climatologists, given their real-world constraints of measurements and accuracies?

    A: Yes. And I would start by trying to do experiments that disprove my ideas, releasing the results whatever they are, and getting everything I’ve done into the hands of others so they could replicate the experiments in case I got something wrong.

    Q: How would you go about making sense of the global climate situation?

    A: By trying to find out if we know enough to do anything other than the obviously beneficial. (Which would be researching and implementing renewable alternative less polluting fuel sources, coupled with conservation and improved efficiency.) By investigating the information we do have to see how correct it is and how reliable it is.

    What I wouldn’t be doing is trying to prove some idea that’s not really provable, or implementing solutions to fix ‘something or another’ that might be wrong.

  707. Raven
    Posted Mar 7, 2008 at 12:28 PM | Permalink

    Neal J. King says:

    What I find objectionable is the attitude some display that every methodology that is not perfect must be outright fraudulent.

    It comes down to a question of trust, and the sad truth is that many scientists who support the AGW hypothesis have demonstrated that they are not deserving of trust by engaging in ridiculous ad hominem attacks instead of looking at the science. In my opinion, a scientist loses all credibility the moment he/she starts calling skeptics deniers or tries to smear them by claiming they are shills for ExxonMobil/Tobacco companies. I realize that you may think that reaction to be unreasonable; however, I would turn the question around: why should I trust any scientist who engages in such tactics?

  708. Willis Eschenbach
    Posted Mar 7, 2008 at 1:12 PM | Permalink

    Neal King, thanks for your reply. You say:

    Isn’t the anomaly meaningless?

    Sam Urbinto: #666, #678
    Willis Eschenbach: #667
    UC: #676

    Your arguments show that the delta-[T] is not a good quantitative measure for an increase in energy. I agree, it’s not: I’ve pointed out that it will be affected by weighting of various types, that it leaves out latent heat, etc. But what I’ve been saying is that it’s an indicator: something that points in the right direction, to give some guidance.

    Analogously, we get a crude idea of someone’s state of health by sticking a thermometer under his tongue. What % of the body are we sampling? How representative is it of the whole person? I think it’s also a bit crude.

    But I think the most important point is that this indicator is made up of available data.

    I gave an example (El Nino 1998) where the anomaly was totally meaningless as an indicator of the overall thermal energy of the planet. The anomaly jumped wildly in 1998, but everyone agrees that it was merely an energy redistribution, that it was NOT measuring a change in the overall thermal energy of the planet.

    So if it jumps wildly without any change in underlying thermal energy, what evidence do you have that it is “something that points in the right direction”? I just showed that it DOESN’T necessarily point in the right direction, that it can give a TOTALLY FALSE INDICATION. Your claim (without evidence) that it is otherwise doesn’t carry much weight.

    Suppose that the human temperature did the same thing, that as energy shifted from the liver to the mouth the temperature would go up and down for no apparent reason … do you think doctors would use body temperature to diagnose fevers? No way – if a person has a fever, that is clear evidence that something is wrong. There is no corresponding “El Nino” effect in the body, which is why (unlike the earth) a fever in the body actually means something.

    In addition, you seem to misunderstand the difference between human bodies and the planet when you say “Analogously, we get a crude idea of someone’s state of health by sticking a thermometer under his tongue. What % of the body are we sampling? How representative is it of the whole person? I think it’s also a bit crude.”

    Since the blood in the human body circulates completely every few minutes, there will be little lasting difference between the core temperature and the temperature under your tongue. Unlike the human body, on the planet there is little circulation between say the north and south poles, so there can be a lasting difference between the two. So to answer your questions, a) we are sampling a large percent of the human body when we stick a thermometer under our tongues, and b) it is very representative of the whole person … which is why it is used for diagnosis. If it were not representative, we wouldn’t use it, so your example clearly supports my contention, not yours.

    Finally, I don’t understand your claim that the most important point is that delta T is made up of available data … please point me to some indicator that we use that isn’t made up of available data.

    Do you truly not understand this, or are you just making an argument?

    w.

  709. Gerald Browning
    Posted Mar 7, 2008 at 1:28 PM | Permalink

    Neal J King (#693),

    My credentials speak for themselves. I worked at NCAR for 23 years, and part of that time was spent coding up an atmospheric component as used in climate models. My advanced degree is in Applied Math (Partial Differential Equations) and Numerical Analysis. What are your credentials in the area of climate modeling, continuum partial differential equations, and numerical analysis, i.e. the areas that are so vital to the discussions about climate?

    Jerry

  710. Spence_UK
    Posted Mar 7, 2008 at 1:35 PM | Permalink

    Crikey, looks like I can post again (most of my recent posts have been trapped by the filters)

    #564, Neal J King, ages ago (when I could last post!) you said:

    With regard to clouds:
    – Clouds most certainly affect IR transmission; however
    – The most significant impact of C-O2 on the enhanced greenhouse effect is precisely in that region of the IR band which is not absorbed by water. In other regions, the impact of even a 2X in C-O2 should be minimal, given that there is so much more water than C-O2.

    Perhaps I’m misunderstanding your post, but you seem to be referring to the “band not absorbed by water” with respect to clouds. The absorption of IR by clouds is quite different: clouds contain water droplets, not gaseous water. The droplets are of sufficient size that they are generally beyond the Rayleigh region and into the Mie scattering region even for LWIR. The IR relationships are governed by a different set of equations for water droplets compared to water vapour.
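
    As a rough check on the Rayleigh-versus-Mie claim, one can compute the standard size parameter x = 2·pi·r/lambda (a sketch; the 5 micron droplet radius, matching Sam’s 0.01 mm diameter figure, is just a typical textbook value):

    ```python
    import math

    def size_parameter(radius_m, wavelength_m):
        """Mie size parameter x = 2*pi*r / lambda.
        x << 1 -> Rayleigh regime; x of order 1-10 -> Mie regime."""
        return 2.0 * math.pi * radius_m / wavelength_m

    droplet_radius = 5e-6                             # 0.01 mm diameter droplet
    for wavelength in (0.5e-6, 4e-6, 10e-6, 15e-6):   # visible through LWIR
        x = size_parameter(droplet_radius, wavelength)
        print(f"lambda = {wavelength * 1e6:4.1f} um -> x = {x:5.1f}")
    ```

    Even at 15 um the size parameter is about 2, so cloud droplets are never in the Rayleigh regime for thermal IR, which is the point being made.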

    To give an analogy, the behaviour is similar to the difference between a humid and a foggy day. On a very humid day, you can see a long way, right? That’s because gaseous water (i.e., water vapour) does not absorb visible light. On a foggy day, you might not even be able to see 50 metres. And if you shine a spotlight into the fog, the light scatters everywhere. Even though water vapour does not absorb visible light, liquid water in the form of droplets does scatter light.

    Clouds substantially disrupt the behaviour of all IR bands. A considerable amount of incoming solar radiation is either reflected, or absorbed and re-radiated as LWIR. Clouds radiate strongly in the LWIR, approximately as black bodies, dependent on their temperature (their temperature is usually close to the air temperature at that altitude). The disruption of IR transmissions is so substantial as to completely change the radiative transfer mechanisms for the duration of the cloud cover. The spatial and temporal spectrum associated with cloud cover will have substantial impacts on global (and regional) temperature metrics at a wide range of scales.

    – Conceivably, C-O2 could be interesting even in IR regions overlapping the water absorption band, if there were so much of it that the optical depth due to C-O2 were = 1 (as measured from outer space radially inward). But I don’t know the numbers. Spence_UK, do you know if there are databases of the optical depth of C-O2 (and other GHGs) as a function of frequency & altitude?

    Optical depth is trivially determined from the extinction coefficients that are generated from the radiative models that you are running. You will (should) never get total extinction for any concentration and reasonable bandwidth, other than due to the limits of accuracy of the models or the numerics of the computer. It is the frequencies that are at or around 50% absorption for a given concentration that are of the most interest at any one time (important: not in a band-averaged sense).

    As for linear systems, I recommend you learn the lesson from Friedrich Paschen. His name is given to the relationship between the size of the air gap between two plates, the pressure of the air, and the DC voltage required to break down the air and cause a spark. Many scientists produced hypotheses on this topic before him, and all failed to derive a model. What did they do differently from him? They all attempted to linearise the problem and separate out the two variables (gap size and air pressure). Paschen did not; he treated the two as coupled, and formulated the correct answer. All scientists try to simplify things to figure out how things work. The difference between a scientist and a great scientist is knowing when and where it is valid to simplify things.

  711. Neal J. King
    Posted Mar 7, 2008 at 1:43 PM | Permalink

    #710, Willis Eschenbach:

    My point is: There are a whole lot of data out there, but not necessarily gathered the way that would be ideal to test a model. Still, there it is, and that is all that there is.

    So, do you do what you can to find out what you can from this information; or do you turn up your nose at this information, saying, “These data don’t meet my standards for proper data collection and sampling.”?

    And if you decide not to turn up your nose, you may have to tolerate some pretty “rough & ready” approaches to boiling down the data.

  712. Posted Mar 7, 2008 at 1:48 PM | Permalink

    Neal @702
    I’m not reading frequent accusations of fraud here. I have no doubt that if you search high and low you’ll find them. But then, I could spend my time mining several pro-AGW blogs finding all sorts of malevolent personal attacks. I don’t think it’s even worth the effort to comment on those.

    However, should you notice a specific accusation of fraud, I’d hardly fault you for pointing it out. Presumably, you’d do that specifically, linking to the comment you found objectionable. I haven’t noticed any accusations of fraud on this thread, but maybe they went by me.

    As it happens, your words seem to imply that people should stop expressing their opinion that the poor condition of the stations leads to uncertainty in the data, and/or that the statistical methods that have been developed to address UHI are either poorly designed or badly implemented.

    If you aren’t just telling people to stop expressing their opinion, and don’t intend your words to be read that way, maybe you should consider editing for clarity.

    Trying to draw plausible conclusions out of imperfectly captured data is an important part of science.

    Admitting the data are imperfect, and that this may cast doubt on your conclusions is an equally important part of science.

    It is unwise to attempt to suppress the questions, or the caution, that are always required in science.

    BTW Neal. It was particularly ironic that the person you questioned about credentials as a working climatologist was Jerry B. Had you questioned me, I’d have happily admitted I have no credentials as a climatologist whatsoever.

    None are required to know that, in science, one does evaluate the suitability of data used to test predictions and models. I think this idea of testing theories against data was introduced by Francis Bacon, and predates even Svante Arrhenius.

  713. Sam Urbinto
    Posted Mar 7, 2008 at 2:21 PM | Permalink

    So, would anyone care to demonstrate their empirical experiments to show what IR at 4 um does exactly at 200, 400 and 800 ppmv of carbon dioxide? I’d also like the ones that show what water vapor is doing exactly from 20–70 um. Thanks in advance.

    But don’t forget; the pH of the ocean has become less alkaline by .11 since the 1750s. We must take action now! ;D

    Spence_UK: Not just water droplets in liquid form, but solid form involving dust and fossil-fuel derived particles. And changing back and forth. 🙂

    Clouds!

    A cloud is a visible mass of condensed droplets or frozen crystals floating in the atmosphere above the surface of the Earth or another planetary body. On Earth the condensing substance is typically water vapor, which forms small droplets or ice crystals, typically 0.01 mm in diameter. When surrounded by billions of other droplets or crystals they become visible as clouds. Dense deep clouds exhibit a high reflectance (70% to 95%) throughout the visible range of wavelengths: they thus appear white, at least from the top. Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud, hence the gray or even sometimes dark appearance of the clouds at their base. Thin clouds may appear to have acquired the color of their environment or background, and clouds illuminated by non-white light, such as during sunrise or sunset, may be colored accordingly. In the near-infrared range, clouds would appear darker because the water that constitutes the cloud droplets strongly absorbs solar radiation at those wavelengths.

    http://en.wikipedia.org/wiki/Cloud_forcing
    http://en.wikipedia.org/wiki/Cloud_feedback

    Clouds remain one of the largest uncertainties in future projections of climate change by global climate models, owing to the physical complexity of cloud processes and the small scale of individual clouds relative to the size of the model computational grid.

  714. Willis Eschenbach
    Posted Mar 7, 2008 at 6:24 PM | Permalink

    Neal, I appreciate you taking time to reply. You say:

    #710, Willis Eschenbach:

    My point is: There are a whole lot of data out there, but not necessarily gathered the way that would be ideal to test a model. Still, there it is, and that is all that there is.

    So, do you do what you can to find out what you can from this information; or do you turn up your nose at this information, saying, “These data don’t meet my standards for proper data collection and sampling.”?

    And if you decide not to turn up your nose, you may have to tolerate some pretty “rough & ready” approaches to boiling down the data.

    You posit this as an “either/or” situation, with your “either” being throw it all out, and the “or” being “rough and ready” approaches to boiling down the data.

    I submit that there is another possibility besides throwing it out or being rough and ready. This is:

    a) First, examine the provenance of the data, piece by piece. Some of it is coming from excellent collection stations, while other parts come from a thermometer in a parking lot by an air conditioner. The fact that Anthony Watts has to be the one to do this should be your first clue about whether we can trust what Hansen and Jones are telling us. Sorry, but their plan is a bit too rough and not really ready.

    b) Next, examine the data itself piece by piece, looking for bad data (jumps in the data, single suspicious points, etc). Remove or fix these.

    c) Use the best of the data to test the rest of the data. In this way, you can start to see what kinds of station anomalies (pavement, trees, buildings, aircons, etc) are real problems, and which can be tolerated.

    d) Remove known discrepancies in the data by some method which does not depend on the other stations. It is useless to “adjust” for UHI by setting a city’s trend to equal the rural trends. Either throw it out or fix it by some scientifically based method. McKitrick’s method is a good start.

    e) Figure out a reasonable way to average the data. Much as I’d like to think that we can use stations 1200 miles away to fill in the empty spots, we can’t — it is an illusion to think that our answers will be more accurate if we do this, simply because we are not adding any new information to the mix.

    f) Put error bars on the results. HadCRUT has done this, but GISS has not.

    g) Publish your data and your codes in a useable form, so that others can easily check your work.

    Near as I can tell, GISS has not done any of these, and HadCRUT has only done a couple.

    In other words, I disagree entirely that if the data is poor, we need to use “rough & ready” methods. In fact, if the data is poor, we need to be even more careful with our methods, to avoid drawing entirely incorrect conclusions.
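
    Step (b) of the list above is the easiest to sketch in code. A toy screen for single-point spikes in a station series (thresholds and the injected error are illustrative only, not anyone’s operational QC):

    ```python
    import numpy as np

    def flag_spikes(series, nsigma=4.0):
        """Flag single-point spikes: a value that jumps in by a large amount
        and jumps back out by a large amount of opposite sign."""
        diffs = np.diff(series)
        scale = 1.4826 * np.median(np.abs(diffs))   # robust day-to-day scale
        big = np.abs(diffs) > nsigma * scale
        spikes = np.where(big[:-1] & big[1:] &
                          (np.sign(diffs[:-1]) != np.sign(diffs[1:])))[0] + 1
        return spikes

    rng = np.random.default_rng(2)
    temps = np.sin(np.linspace(0.0, 20.0, 2000)) + rng.normal(0.0, 0.1, 2000)
    temps[777] += 5.0                  # inject one bad reading
    print(flag_spikes(temps))          # -> [777]
    ```

    Step jumps from station moves or instrument changes would need a different test, e.g. comparing means before and after a candidate breakpoint; the point is only that such screening is mechanical once the raw series are in hand.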

    Best regards to everyone,

    w.

  715. John A
    Posted Mar 7, 2008 at 7:18 PM | Permalink

    Actually Sam, Wikipedia turns out to be wrong on the subject of nucleation of clouds: http://www.wired.com/science/planetearth/news/2008/02/bacteria_clouds

  716. Sam Urbinto
    Posted Mar 7, 2008 at 7:19 PM | Permalink

    Willis, you rock.

  717. Willis Eschenbach
    Posted Mar 7, 2008 at 8:14 PM | Permalink

    Sam, thanks for the compliment. In fact, I’m on my holiday away from the Solomon Islands, so I’m in Fiji with a fast internet connection, a lot of time, and a half-fast idea what to do with it. In other words, I’m sitting on the beach with a cold beer and a wireless link … it’s easy to be genteel and generous in my situation. Next week I’ll be back at work, I may not be so pleasant …

    w.

  718. Armagh Geddon
    Posted Mar 8, 2008 at 3:26 AM | Permalink

    Many wonderful posts in this thread, but do you mind me saying that we have drifted OT, just a bit.

    What intrigues me is that we have had informative and dispassionate comment from our host, but almost no comment from the GATech team regarding their perspectives on the encounter. A few barbed comments from JEG on his blog, and a comment or two from Judith Curry, but so far as I can see, no comments from the younger folk who were present. I am really interested to learn about these different perspectives.

  719. Steve McIntyre
    Posted Mar 8, 2008 at 6:20 AM | Permalink

    #719. Jud Partin of Georgia Tech comments on my trip at http://www.climateaudit.org/?p=2708#comment-213839

  720. Geoff Sherrington
    Posted Mar 8, 2008 at 6:39 AM | Permalink

    Re # 719 Armagh Geddon

    Lovely nom-de-plume, silly Irish question.

    Why does it matter what the youngsters at Ga Tech thought, unless it was about the use of stats and data in general, following generous teaching by Steve?

    It’s a waste of time to ask what they thought about global warming because the very top experts cannot come to a credible, non-political solution.

    So many of us are in a state of mind like post Club of Rome that the inevitability of the fox crying wolf is almost established. We should perhaps consider asking not “What can we salvage from poor data?”, but “Is it worth using suspect data at all?”

    My personal litmus test is that in over 60 years of exposure to Life, past climate change has affected me not one memorable, measurable bit. (Except as an intellectual hypothetical exercise). Yet the graphs are said to have changed.

    Can any bloggers honestly raise an arm and say “I benefited” or “I suffered in this (described) way”?

  721. Judith Curry
    Posted Mar 8, 2008 at 7:02 AM | Permalink

    An interesting session at the American Physical Society meeting next week on The Physics of Climate and Climate Change http://meetings.aps.org/Meeting/MAR08/SessionIndex2/?SessionEventID=77740

    I refer you specifically to the paper of my colleague Annalisa Bracco at Georgia Tech, that is at the heart of at least part of this discussion thread, related to dynamical cores
    Geostrophic Turbulence and the Stability of Ocean Models
    http://meetings.aps.org/Meeting/MAR08/SessionIndex2/?SessionEventID=77740

    Abstract:
    Despite multiple efforts, predictions of climate change remain uncertain. Where precision is an issue (e.g., in a climate forecast), only ensembles of simulations made across model families which differ in parameterizations, discrete algorithms and parameter choices allow an estimate of the level of imprecision. Is this the best we can do? Or is it at least conceptually possible to reduce these uncertainties? Focusing on ocean models in idealized domains we describe chaotic space-time patterns and equilibrium distributions that mimic nature. Using the Navier-Stokes equations for barotropic flows as a zero-order approximation of analogous flow patterns, we then investigate whether it is possible, in this overly-simplified set-up, for which smooth solutions exist, to bound the uncertainty associated with the numerical domain discretization (i.e. with the limitation imposed by the Reynolds number range we can explore). To do so we analyze a series of stationary barotropic turbulence simulations spanning a range of Reynolds numbers of 10^4.

    by the way, the simulations on this one took over 200,000 CPU hours (3 years). Annalisa will be part of a press conference from the meeting on Tues. I will post more info as it becomes available.

  722. MrPete
    Posted Mar 8, 2008 at 7:20 AM | Permalink

    Can any bloggers honestly raise an arm and say “I benefited” or “I suffered in this (described) way”?

    Good question, Geoff. Without getting into detail that would drag this wayyy off topic, I can’t point at clear AGW benefit/harm, but can easily point at threat-response harm. (Case in point: food being revalued as fuel.)

  723. kim
    Posted Mar 8, 2008 at 8:12 AM | Permalink

    Obviously, if it’s a chimera, any effort directed at its capture and control is utterly wasted, and there has been a lot of effort.
    ========================================

  724. Craig Loehle
    Posted Mar 8, 2008 at 9:09 AM | Permalink

    Neal discusses the imperfect nature of the data and implies we should “get over it”. In normal science, we are always faced with imperfect data. The struggle to make something of it, to clean it up, to avoid using it improperly (pseudo-correlations), etc. is a big part of what science is about. In other fields, this imperfect data often means that it is impossible to come to definitive conclusions. Experimenter X does a study and says “this supports my theory” but Y says “not so fast…” and points out a complication. This can go on for decades. Generally no one says they have proved anything, nor do they claim “the science is settled”. Competing theories may coexist for decades. Here we have a claim for high confidence in results from a field that can only be called an immature discipline.

  725. Neal J. King
    Posted Mar 8, 2008 at 12:29 PM | Permalink

    #710, Gerald Browning:

    I was not challenging your technical competence on the matter; I was asking about your point of view, because your expressed perspective within this thread has been so “prosecutorial” – an issue that I have been pursuing in this discussion. So you have worked on the “constructive” end of this debate. Good.

    My knowledge of climate modeling is informal, based on self-study. My experience with numerical solutions of PDEs is from graduate school. Notice, however, that I don’t post on computational or stability issues of GCMs: I don’t feel it would be helpful.

  726. Neal J. King
    Posted Mar 8, 2008 at 12:36 PM | Permalink

    #711, Spence_UK:

    You have a good point, that the index of refraction can differ from 1 even if there is no absorption. This will result in scattering. Thinking it over, I conclude that the clouds will make a difference to the “2X in C-O2” calculation if the clouds are above the point where optical depth = 1 for C-O2 at the interesting frequencies (about 15 microns); and not if they are below.

    However, within the context of trying to calculate the 3.7 W/m^2 radiative forcing, that would not inspire me to try to incorporate the hydrodynamics of a GCM to do the radiative transfer calculation. I would treat it in the way I would treat the albedo issue:
    – The albedo reduces the energy input from the sun by a multiplicative factor. For purposes of calculating the radiative forcing, I can take that as a fixed parameter, within some range, even though it will in fact be dependent on cloud-cover, as well as other factors.
    – In the same way, I would take as a parameter the % of area covered by clouds that lie above the OD = 1 point. That area is not going to have its radiative loss affected by a 2X in C-O2, because the photosphere for that frequency is already at the level of the clouds.

    By the way, by the term “optical depth”, I mean the integral of the absorption coefficient over distance, starting at radius = infinity and moving inward. It is the negative of the logarithm of the ratio of the attenuated intensity to the original intensity (neglecting the in-scattered and thermally-emitted radiation of the same frequency) for an incoming beam, although the interest is in the ratio of the intensities at infinite distance and at the photosphere. Therefore, my interest is in the point at which the absorption (coming downward from space) is 1/e: this defines the photosphere for that frequency.
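
    Neal’s definition can be written out in a few lines. A sketch with an invented absorption profile (a real calculation would pull the coefficients from a spectroscopic database of the kind Spence_UK mentions):

    ```python
    import numpy as np

    z = np.linspace(0.0, 50e3, 501)         # altitude grid, 0-50 km
    k = 5e-4 * np.exp(-z / 8000.0)          # toy absorption coefficient (1/m)

    dz = z[1] - z[0]
    tau_from_top = np.cumsum(k[::-1]) * dz  # optical depth integrated downward
    assert tau_from_top[-1] >= 1.0, "profile never reaches tau = 1"

    idx = np.argmax(tau_from_top >= 1.0)    # first level (from the top) with tau >= 1
    print(f"photosphere for this frequency at about {z[::-1][idx] / 1e3:.1f} km")
    ```

    For this made-up profile the tau = 1 level sits near 11 km; on Neal’s argument, a cloud deck above that altitude would already cap the radiative loss at that frequency.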

  727. Neal J. King
    Posted Mar 8, 2008 at 12:41 PM | Permalink

    #713, lucia:

    – “assumptions of fraud”: Perhaps the best way to identify what I am talking about would be to go to a ClimateAudit webpage and do a search on the term “the Team”. You are very likely to find quite a bit of the snarkiness that Steve McIntyre, earlier in this thread, has now vowed to personally stay away from. I welcome that intention, as I think that, like a dank basement, acceptance of snarkiness provides an unhealthy environment that encourages polarization and hostility.
    – I really take exception to your interpretation of my comments as “shut up and stop asking questions”. If this is the common interpretation of what I am saying, then I really find it counter-productive to spend any further time at this site.
    – My response to GB is posted in #726, above. But in general, I do not believe reliance upon credentials is practical on the internet: they cannot be verified by both sides. Weight should be given to clear logic and to support from linked evidence, at sites that both sides can trust.

  728. Neal J. King
    Posted Mar 8, 2008 at 12:43 PM | Permalink

    #715, Willis Eschenbach:

    You have suggested some ways to improve the data, or its presentation. I think these are mostly valid, although as a point of principle I think we are talking about improving analysis of the data, not the data themselves.

    But the main point that I am making is that there is enough meaning behind these methods that there is something that is worth fixing. If there were no meaning to this approach, there would be no reason to fix it.

    For example, if I predict the direction of change of tomorrow’s temperature by flipping a coin, I would argue that there is absolutely no rationale behind that, and essentially nothing you can do will improve on that approach, nor should anything be done. Conversely, if I assume that the direction of change will be the same as the day before, I will certainly not be right all the time, but I think I will be right more than half the time; and if I then start to incorporate other known or knowable factors, I should be able to improve on this crude predictor.
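
    Neal’s two predictors are easy to compare on synthetic data. A minimal sketch (a toy series in which day-to-day changes come in warming and cooling spells; nothing to do with any real station record):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000

    # Let the daily *change* be an AR(1) process, so spells of warming or
    # cooling persist and today's change says something about tomorrow's.
    delta = np.empty(n)
    delta[0] = rng.normal()
    for t in range(1, n):
        delta[t] = 0.5 * delta[t - 1] + rng.normal()

    direction = np.sign(delta)

    coin = rng.choice([-1.0, 1.0], size=n - 1)   # the coin-flip predictor
    persistence = direction[:-1]                 # same direction as yesterday

    print("coin flip  :", (coin == direction[1:]).mean())         # about 0.50
    print("persistence:", (persistence == direction[1:]).mean())  # about 0.67
    ```

    On this toy series the coin scores about 50% and persistence about 67%, illustrating Neal’s point that even a crude indicator can carry information worth refining.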

    The fact that you are able to suggest improvements implies to me that there is a certain validity behind the approach. Every field of study starts somewhere.

  729. Posted Mar 8, 2008 at 1:05 PM | Permalink

    Neal

    Assuming a tone of snarkiness is not the same as an accusation of fraud. It is not even tantamount to an accusation of fraud.

    I admit snarkiness is regrettable, when it occurs. On this thread, I see little snarkiness. So, at this point, I have no idea what point you are trying to make. Obviously, the discussions by UC, JerryB, Bender, Willis and Anthony Watts about the quality of the data set, which are normal in science, are going to continue.

    If all your recent comments were intended to make a point about past snarkiness – which seemed to sprout up about two months ago, and then ended – how were we supposed to figure that out?

    You seemed to be discussing the difficulties in the temperature record, and suggesting that UC, JerryB, Bender, Willis, Anthony Watts and I should accept the data, maybe not complain, or something. We were responding.

    I don’t know why you think your comments don’t sound like “shut up and stop asking questions”. That’s how they sound.
    It is only when pressed that you tell us your concern is accusations of fraud, which you see mysteriously encoded in the term “The Team”. (“The Team” is, BTW, the Team’s self-assigned name. You will see many articles authored by “The Team” at RC.)

    I agree reliance on credentials on the internet is not useful. However, rather than responding to Jerry’s argument, you asked for his credentials. Jerry B is, indeed, who he says he is. It’s also clear based on his comments that he has a good understanding of modeling and sampling issues. If you want to focus on logic and evidence, it might be best if you avoid responding to logical, evidence-based arguments by demanding people’s credentials!

  730. Steve McIntyre
    Posted Mar 8, 2008 at 2:10 PM | Permalink

    Neal, I agree with Lucia on this. There’s a big difference between occasional snarkiness and accusations of fraud. You seem unable to back up your previous allegation. For what it’s worth, it’s been specifically noted in my favor by one third party that I’ve never imputed any such motives to Mann and I challenge you to find any such imputation.

    As to the term “Hockey Team”, I discussed the origin of the term here showing that it originated at realclimate, although I readily admit that I’ve had fun with the term.

    I followed your advice and looked at a couple of posts using the term Hockey Team. Here is the first post that I looked at. I see no possible support in this post for your absurd allegation. Among other things, the post stated:

    The Feb. 10, 1978 issue of the Wall Street Journal, in an article entitled “Donald Duck Faces A Morals Charge”, covered the efforts of “Hans von Storch, a 28-year-old mathematician and founder of the 100-member Donald Duck Club” to overturn a morals decision by the Helsinki council against Donald Duck. The Wall Street Journal continued:

    Mr. von Storch says that studying Donald Duck isn’t as funny as it sounds. He argues that studying Duckburg is like studying mathematics or physics in that you have an “artificial system” in which an infinite number of questions can be asked. Every question can be answered, he says, by examining the evidence in the comic books themselves. In fact, some questions can be logically answered in more than one way, Mr. von Storch maintains, prompting heated debates among club members.

    Sort of like, umm, the Hockey Team.

  731. Bernie
    Posted Mar 8, 2008 at 3:45 PM | Permalink

    Neal (#702 & #728):
    Rule #1 – Don’t bring a penknife to a gunfight.

  732. Willis Eschenbach
    Posted Mar 8, 2008 at 4:11 PM | Permalink

    Neal, you say:

    Willis

    You have suggested some ways to improve the data, or its presentation. I think these are mostly valid, although as a point of principle I think we are talking about improving analysis of the data, not the data themselves.

    Absolutely not. I said nothing about the “presentation” of the data. You claimed that we had two choices in analyzing the data – throw out the data, or use “rough & ready” methods. I was attempting (without success, it seems) to show you that there is another way to analyze the data. In fact, when the data is poor, “rough & ready” methods are the last thing we want to use. We need to use tried and true, scientifically based, carefully and cautiously applied methods. Only in that way can we extract useful information from scanty, faulty data.

    Now, however, you are backing away from that “rough & ready” claim as fast as you can, and saying that you only meant that “there is enough meaning behind these methods that there is something that is worth fixing.” Well, in a word … no, that’s not what you said. You said we had two choices for analyzing the data, “rough and ready”, and throw it out. Which is nonsense, although I was too polite to say that last time. It’s the kind of absurd claim that people often make about climate science, right up there with “the science is settled”. In this case the claim is “the data is so poor, we’re forced to torture it and extend it 1200 kms away”.

    No. You claimed that poor data is a justification for poor methods. This is neither scientific, nor reasonable, nor true. You are only compounding the foolishness by claiming that was not what you meant.

    Finally, you say:

    The fact that you are able to suggest improvements implies to me that there is a certain validity behind the approach. Every field of study starts somewhere.

    Again, no. The fact that the improvements I have suggested are so basic, and that I as an outsider to the process can see them so clearly, implies that the people currently doing the job are making an incredible hash of it. It does not mean that their approach has a “certain validity”. It means that their approach is so faulty that people from elsewhere, such as Anthony Watts, have to step in and do their job for them. That shows their method, far from having a “certain validity”, lacks validity entirely.

    You still don’t seem to get it – the people in charge of creating a global average temperature record are not doing some of the most basic things necessary to get it right. Every time Steve M. trawls through the data, he finds some other error. Do these errors make a difference? We don’t know yet … but the fact that the people maintaining the records didn’t find those errors is clear proof that they have not taken their task seriously.

    Which, knowing the main players (Phil Jones and James Hansen), is perfectly understandable. Both of them are agenda driven, they have something to prove, and so they are not looking hard at the data. I can understand that.

    The part that is a puzzle to me is … why are you defending them?

    w.

  733. Neal J. King
    Posted Mar 8, 2008 at 4:59 PM | Permalink

    #733, Willis Eschenbach:

    – Most of what you are talking about relates to the quality of the data. Certainly, no one can complain about an interest in improving the quality of data.

    – What I have been arguing about is whether the nature of the data (sampling density, geographic density, altitude) rules out the possibility of any insight to be gained from an indicator derived from them. I have argued that even an imperfect selection of data can provide some information, if approached with an intent to find out what’s going on.

    – The fact that you & others see it as a matter of my defending them (Jones and Hansen), instead of seeing it as my trying to clarify what I mean, is, to me, an indication of the problem: extreme polarization and lack of any goodwill.

    Steve: Neal, please do not impute motives to other posters. This leads to food fights. Same with others in respect to you.

  734. Neal J. King
    Posted Mar 8, 2008 at 5:15 PM | Permalink

    #731, Steve McIntyre:

    You have already announced that you intend to cut out / clamp down on the snarkiness. As already stated, I applaud this intent.

    What I am pointing to is, what is the message behind the snarkiness? Isn’t it the concept that “The Team is trying to pull another fast one over us.”?

    The result of designating mainstream climate researchers as part of “The Team” is inherently polarizing, regardless of the origin of the term.

    Regardless of what happens on Tamino’s site or on RealClimate, I urge you to drop this pejorative term in the interest of promoting open discussion, unjaundiced by unnecessary coloration.

  735. Neal J. King
    Posted Mar 8, 2008 at 5:32 PM | Permalink

    #730, lucia:

    – In #736, I’ve explained my point on snarkiness. I should have thought it would be clear, when I refer to other webpages, that I am not confining myself to this particular thread, which has indeed been freer of this unpleasantness than most. But you can look at #736 for a fuller statement.

    – I did not ask GB for his credentials; I asked him to check his point of view. Please look at #726.

    – Since I am by now sure that you will not believe my view of what I meant, just as you don’t believe my view of what I was saying about rough indicators, this takes me back to the issue I mentioned last time. After having made several rounds of attempts to get my point across, I have only seen it misinterpreted and distorted. I don’t see it as being of any possible benefit to myself or others to bother with further postings here. It just seems to be a waste of time.

  736. Willis Eschenbach
    Posted Mar 8, 2008 at 6:25 PM | Permalink

    Neal, you originally said:

    So, do you do what you can to find out what you can from this information; or do you turn up your nose at this information, saying, “These data don’t meet my standards for proper data collection and sampling.”?

    And if you decide not to turn up your nose, you may have to tolerate some pretty “rough & ready” approaches to boiling down the data.

    Now, you are trying to say that you meant:

    What I have been arguing about is whether the nature of the data (sampling density, geographic density, altitude) rules out the possibility of any ínsight to be gained from an indicator derived from them. I have argued that even an imperfect selection of data can provide some information, if approached with an intent to find out what’s going on.

    and that you have been misunderstood.

    Look, Neal, I take you at face value. If you say we have two choices, which are either throw out the data or use “rough & ready” methods to get information, foolish me, that’s what I assume you mean.

    SO …

    Next time you want to say that even imperfect data can provide some information …

    … please say that, and don’t give us a false “either-or” scenario that contains a justification of the type of methods used by Hansen and Jones. You claim that you weren’t referring to them, saying:

    The fact that you & others see it as a matter of my defending them (Jones and Hansen), instead of seeing it as my trying to clarify what I mean, is, to me, an indication of the problem: extreme polarization and lack of any goodwill.

    Well, if you’re not referring to Hansen’s and Jones’ extremely rough and far from ready methods of getting climate averages from sparse data, exactly whose “rough & ready” methods were you saying that we “have to tolerate”? Yours?

    w.

    PS – After the abuse that many on this list have taken from the defenders of the AGW hypothesis, and after such comments as Jones saying “Why should I reveal my data to someone who will just try to find fault with it”, and after my unsuccessful attempt to get Jones to reveal his data through a Freedom of Information application (which he stonewalled), and after Tamino abusing Steve in the most vicious terms about every third post, and after Michael Mann saying that asking a scientist for his data is “intimidation” … yes, there is polarization and a lack of good will. I’ve been called more nasty names than I care to recall, David Suzuki publicly states that folks like me should be thrown in jail for putting forward my beliefs, I’m equated with people who deny the Holocaust … are you following this story at all? Because if so, you have not been paying attention to the source of the ill will and the polarization. How many of us have called for Hansen to be put in jail? None, as far as I can tell. How many of us have called Tamino every nasty name in the book in every third post? None, as far as I can tell.

    Are we a bit testy about this mistreatment? Sure, but that hasn’t led Steve to respond to Tamino in kind. Yes, we tweak their noses occasionally, but the regulars here have been remarkably restrained in their response to an all-out, unpleasant, no-holds-barred, unscientific, incredibly vicious attack that has included loss of jobs and even death threats … I’m not surprised by any imagined lack of goodwill here on this blog, quite the opposite. I have been amazed at the amount of goodwill shown towards the AGW attack squad in the face of extreme provocation.

  737. Bernie
    Posted Mar 8, 2008 at 8:46 PM | Permalink

    Neal #735
    My point was that if you make inaccurate statements about what Steve said, you can be sure that he will bury you in the facts of what he did say. If you want to score points off Steve, you are really going to have to be very accurate and precise. Steve has an amazing recall and a mastery of what has been written by him here.
    Moreover, one problem I have with many of your observations is that they are the equivalent of “when did you stop beating your wife?” Hence Willis, Lucia and others will quickly and, for the most part politely, point out how unfounded your allusions are.
    The unfortunate reality is that the debate has become polarized and personalized. Much of the personalization is directed at SM, who least deserves it. The poking fun here is a far cry from the genuine nastiness that appears at Tamino’s site. Check out “Open Mind”. Try making the same points there as you have here and see what happens. Please let us know how you make out.
    In the meantime, I look forward to your substantive rather than rhetorical contributions.

  738. Gerald Browning
    Posted Mar 8, 2008 at 10:45 PM | Permalink

    Neal J King.

    When you asked for my point of view, I indicated that I have extensive
    experience with observational data, atmospheric and oceanic numerical models and am trained in the analysis of partial differential equations and numerical methods.
    My point of view based on all of these credentials (easily verifiable
    because I use my real name) is that

    1) The atmospheric and oceanic data is too sparse to determine day to day weather over the entire globe, let alone the climate over extended periods of time.

    2) The atmospheric portion of climate models is based on an ill-posed system of equations (the hydrostatic equations), and a switch to the nonhydrostatic system will not help because of fast exponential growth that will destroy the numerical accuracy of any numerical approximation.

    3) The rapid cascade of the vorticity to scales of motion not resolved by a numerical model necessitates the use of an unphysically large dissipation, leading to an incorrect cascade. To overcome the unphysical spatial spectrum (the incorrect cascade of vorticity), unphysical forcing must be used to pump energy into the model so that the spatial spectrum appears realistic, when both the dynamics and the forcing are not accurate.

    I have backed up these points of view with mathematical examples and/or manuscript citations that contain mathematics and illustrative examples. I suggest you read the examples and manuscripts and then show why the mathematics in the examples and manuscripts is wrong. If you cannot do so, then all of your rhetoric is just that.

    Jerry

  739. Gerald Browning
    Posted Mar 8, 2008 at 11:10 PM | Permalink

    Judith Curry (#722),

    You continue to jump in with some reference that does not answer the basic set of specific mathematical questions I asked you on the Jablonowski thread.

    As lucia has pointed out, there have been DNS computations of the incompressible Navier Stokes equations for Reynolds numbers approaching those desired for accurate computations of real flows in 2D
    and there the mathematics is well known. Here you might look at the minimal scale estimates by Henshaw, Kreiss, and Reyna and subsequent computations by Henshaw et al. Nowhere in their computations is it shown that the use of the incorrect dissipation size or type (or any ad hoc closure scheme) leads to an accurate solution of the fluid with the correct size and type of dissipation. The waste of computer resources repeating
    similar computations should be embarrassing, to say the least.

    Jerry

  740. Spence_UK
    Posted Mar 11, 2008 at 10:59 AM | Permalink

    Neal, I’m not sure if you’re still here, but I disagree with some of your analysis in #727. My post is now getting into opinion territory: I do not have figures to justify the magnitude of effect of all claims in this post, but the mechanisms seem sound and worthy of consideration.

    You have a good point, that the index of refraction can differ from 1 even if there is no absorption.

    The index of refraction is relevant for NIR and visible as they are in the optical region with respect to cloud droplets, but not for the other IR bands. As noted in my original post, SWIR/MWIR/LWIR is in the Mie region, dominated by Mie scattering. This should also be obvious from Sam’s excellent contribution, including the typical droplet size (0.01 mm), which should ring immediate bells with IR wavelengths. For Mie scattering, the metric of interest is the electrical size of the droplet, i.e. the ratio of its circumference to the wavelength of the incoming radiation.

    Mie scattering is still “spiky”, but because there is a continuous distribution of droplet sizes, the spikes should be smoothed out over a wide bandwidth.
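
    For a rough feel of the numbers (a sketch only: the 0.01 mm droplet figure is Sam’s from above, the band wavelengths are typical values, and the regime labels are indicative), the size parameter x = 2πr/λ works out as follows:

        # Size parameter x = 2*pi*r / lambda for a 10-micron-diameter droplet.
        # x >> 1 suggests the geometric-optics regime; x ~ 1 the Mie regime.
        import math

        radius = 5e-6  # m, radius of a 0.01 mm (10 micron) diameter droplet
        for band, lam in [("visible", 0.5e-6), ("NIR", 1.5e-6),
                          ("MWIR", 4.0e-6), ("LWIR (CO2 band)", 15e-6)]:
            x = 2 * math.pi * radius / lam
            print(f"{band:16s} lambda = {lam * 1e6:4.1f} um   x = {x:5.1f}")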

    Please note that there will be some absorption as well. Because we are talking about Maxwell’s equations, we have broadband (grey-body) behaviour, not line spectra, for both absorption and emission. Liquid water absorbs IR; we can tell this because nobody uses IR cameras to detect submarines. I suspect the scattering is a greater effect than the absorption, but it shouldn’t be ignored. Absorbed SWIR/MWIR will be re-radiated as LWIR.

    This will result in scattering. Thinking it over, I conclude that the clouds will make a difference to the “2X in C-O2” calculation if the clouds are above the point where optical depth = 1 for C-O2 at the interesting frequencies (about 15 microns); and not if they are below.

    I don’t understand how you can conclude this, although I may be misunderstanding your point (below in optical depth? altitude? frequency?)

    As noted, clouds remove a considerable amount of SWIR/MWIR from the incoming solar. They also re-radiate LWIR. The changes are of the order of hundreds of watts per metre squared, making the 3.7 W/m2 rather irrelevant. Furthermore, I would urge you to consider the following case. At the extreme end, heavy, low cloud could cut SWIR/MWIR by a large amount, replacing it with LWIR at the cloud temperature. The cloud temperature will be lower than the surface temperature. Therefore the upwelling radiation could (in the extreme case) contain higher frequencies than the downwelling radiation; the CO2 will then (by a tiny amount) actually cool the surface and warm the clouds. By extension, above the cloud we would see reflected SWIR/MWIR, resulting in upwelling SWIR and downwelling SWIR, likewise substantially reducing the CO2 effect.

    This is an extreme case admittedly, and I doubt the cloud can sufficiently reduce the SWIR/MWIR to reverse the greenhouse effect, but the point is that the cloud will, to some degree, reduce the magnitude of the greenhouse gas effect, irrespective of height. This effect will not be trivial.
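
    A back-of-envelope scale check supports the magnitudes involved (a sketch only: the 288 K surface and 270 K cloud-top temperatures are assumed, illustrative values):

        # Blackbody flux sigma*T^4 at an assumed surface and cloud-top temperature.
        SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

        t_surface, t_cloud = 288.0, 270.0  # K, both values illustrative
        f_surface = SIGMA * t_surface**4   # ~390 W/m^2
        f_cloud = SIGMA * t_cloud**4       # ~301 W/m^2
        print(f"surface {f_surface:.0f} W/m^2, cloud top {f_cloud:.0f} W/m^2, "
              f"difference {f_surface - f_cloud:.0f} W/m^2 (cf. the 3.7 W/m2 forcing)")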

    Also, clouds exhibit high variability (standard deviation close to the mean) and unipolar behaviour (i.e., you cannot have negative cloud cover), resulting in power-law scaling from the principle of maximum entropy (Koutsoyiannis). This means you cannot replace the cloud variability with an average and expect a climatically representative distribution from it. Furthermore, I suspect it is likely that such a distribution will have a greater effect than CO2 and solar combined, on both a regional and global scale.
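
    A toy sketch of why replacing the variability with an average fails (the response function here is arbitrary and purely illustrative; only the skewed, non-negative distribution matters): pushing the mean cloud amount through a nonlinear response is not the same as averaging the responses (Jensen’s inequality).

        # Jensen's inequality with a skewed, non-negative "cloud" variable:
        # response(mean) != mean(response) for a nonlinear response function.
        import numpy as np

        rng = np.random.default_rng(0)
        cloud = np.clip(rng.lognormal(-1.5, 1.0, size=100_000), 0.0, 1.0)

        def response(c):
            return (1.0 - c) ** 4  # arbitrary nonlinear response, illustrative only

        print(f"response(mean cloud) = {response(cloud.mean()):.3f}")
        print(f"mean of responses    = {response(cloud).mean():.3f}")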

    However, within the context of trying to calculate the 3.7 W/m^2 radiative forcing, that would not inspire me to try to incorporate the hydrodynamics of a GCM to do the radiative transfer calculation. I would treat it in the way I would treat the albedo issue:
    – The albedo reduces the energy input from the sun by a multiplicative factor. For purposes of calculating the radiative forcing, I can take that as a fixed parameter, within some range, even though it will in fact be dependent on cloud-cover, as well as other factors.
    – In the same way, I would take as a parameter the % of area covered by clouds that lie above the OD = 1 point. That area is not going to have its radiative loss affected by a 2X in C-O2, because the photosphere for that frequency is already at the level of the clouds.

    The detailed radiative effect of clouds is greater than a simple albedo effect; however, I will leave it to you to decide how complex you wish to make your model.

  741. Sam Urbinto
    Posted Mar 11, 2008 at 1:19 PM | Permalink

    John A: Wikipedia gets things wrong or incomplete sometimes (or I should say various editors do!). I don’t remember reading anything that said bacteria cannot be involved, but there’s one thing for sure: “Clouds remain one of the largest uncertainties….”

    🙂

    http://en.wikipedia.org/wiki/Nucleation
    http://en.wikipedia.org/wiki/Cloud_seeding

    Willis: I meant great explanation! Since I don’t get on your bad side (AFAIK!) you can be as crabby as you’d like, heh.

    Well, I wrote that before reading the rest of the thread… 😀

    Craig: “Here we have a claim for high confidence in results from a field that can only be called an immature discipline.”

    A false claim for sure.

    —————————-

    Other stuff. Gee. Okay.

    I see more than two choices: “This data doesn’t meet my standards” (implying it won’t be used at all, one would imagine, but perhaps not) and “Let’s see if we can boil this down to something useful”. The first is a choice (if the way it’s inferred is correct), the second a range. That doesn’t mean every rough-and-ready method will be used, or every such processed bit of data will be accepted. So I can see what it might have meant. Most of the disagreements here happen because of implications that weren’t meant to be made, or faulty inferences about implications that were meant to be made. Or both; I say A (which would lead to B) but it comes out looking like C, the listener takes it as D, which should lead to E, but that person thinks it’s F.

    Then we spend a thread where A is meant but F is received arguing about Z. 🙂

    That said, I totally agree with Willis about who it is that has the ill will (the evidence is there and plain) and that it’s a sorry state indeed that these obvious improvements and obvious problems are there in the first place, much less having the people involved in them seemingly unaware that they’re even there. And I agree with Jerry that the data is too sparse in the first place (one of the reasons I call it the anomaly and tend to discount it except on a very superficial basis at best) and the points in 2 and 3 about models and their worth.

    So it really boils down to this: the data is in large part either incomplete, wrong, or just plain junk; some might be salvageable, but then what do you do with it to get anything robust out of it? That’s just the models. Why do we have to put up with faulty centering causing 14 chronologies to get shoved to PC1 and account for 93% of the variance? Why do we have to put up with surface stations that don’t meet siting standards?

    Thinking we can know the physical realities of the Earth by sampling the air in a few spots and gridding all the water surfaces strikes me as naive. Sorry.

  742. steven mosher
    Posted Mar 11, 2008 at 1:59 PM | Permalink

    re 742. Wikified knowledge. On one hand the network effect is utilized to generate a document with speed and agility. Not your mom’s encyclopedia. On the other hand, that same network effect can propagate a falsehood faster than a flock of high school girls in a crowded cafeteria.

    There is, of course, the error-correction function. Sometimes it converges; other times not.

  743. Scott-in-WA
    Posted Mar 11, 2008 at 6:36 PM | Permalink

    Just a quick and very simple question for Spence, Geoff, or Jerry, whoever thinks they have the most credible answer: What total mass of CO2 now resides in the earth’s atmosphere, and what total mass of CO2 now resides in the world’s oceans?

    (In my mind, as a former mining engineer, this is sort of the same thing as asking the seemingly simple question, “Where did all those gazillion tons of limestone come from?”)

  744. Gerald Browning
    Posted Mar 11, 2008 at 9:17 PM | Permalink

    Sam Urbinto (#742),

    We are in agreement. But obtaining quality in situ observational data is not cheap. However, that cost should be compared with the funds that have been spent on supercomputers and manpower for climate simulations that IMHO will not lead anywhere.

    Jerry

  745. Gerald Browning
    Posted Mar 11, 2008 at 9:27 PM | Permalink

    Scott-in-WA (#744),

    I cannot answer this question and will be curious to see the answer (along with a legitimate scientific justification for it). However, if the AGW hypothesis cannot be validated in a rigorous scientific manner (as seems to be more and more the case), the answer might be irrelevant.

    Jerry

  746. Jud Partin
    Posted Mar 11, 2008 at 10:32 PM | Permalink

    #744 – Scott

    mass of ATM × mass fraction of CO2 = mass of CO2 in ATM
    5.14×10^18 kg × 585 ppmm (= 385 ppmv) = 3.0×10^15 kg

    see http://www.agu.org/pubs/crossref/1988/87JD00743.shtml for mass atm

    The ocean holds ~50 times more CO2. Here’s a good source to learn about the carbon cycle as well as other biogeochemical cycles: http://www.amazon.com/Biogeochemistry-Analysis-Global-W-H-Schlesinger/dp/012625155X/ref=sr_1_6?ie=UTF8&s=books&qid=1205296483&sr=1-6
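
    For anyone who wants the ppmv-to-mass conversion spelled out, a minimal sketch (the atmosphere mass and 385 ppmv are the figures above; the molar masses are standard values):

        # Mass of atmospheric CO2 from its volume fraction (figures quoted above).
        M_AIR = 28.97       # g/mol, mean molar mass of dry air
        M_CO2 = 44.01       # g/mol, molar mass of CO2
        MASS_ATM = 5.14e18  # kg, total mass of the atmosphere

        ppmv = 385e-6                 # volume (= mole) fraction of CO2
        ppmm = ppmv * M_CO2 / M_AIR   # mass fraction, ~585e-6 (585 ppmm)
        mass_co2 = MASS_ATM * ppmm    # ~3.0e15 kg, i.e. ~3000 Gt of CO2
        print(f"{ppmm * 1e6:.0f} ppmm -> {mass_co2:.2e} kg of CO2")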

  747. Geoff Sherrington
    Posted Mar 11, 2008 at 10:42 PM | Permalink

    OT, came from USA, being returned:

    Apparently, a self-important college freshman attending a recent football game took it upon himself to explain to a senior citizen sitting next to him why it was impossible for the older generation to understand his generation.

    ‘You grew up in a different world, actually an almost primitive one’ the student said, loud enough for many of those nearby to hear’. The young people of today grew up with television, jet planes, space travel, man walking on the moon. Our space probes have visited Mars. We have nuclear energy, ships and electric and hydrogen cars, cell phones. Computers with light-speed processing… and more.’

    After a brief silence the senior citizen responded as follows:

    ‘You’re right, son.

    ‘We didn’t have those things when we were young … so we invented them. Now, you arrogant little prick, what are you doing for the next generation? Global warming?’

  748. D. Patterson
    Posted Mar 12, 2008 at 2:04 AM | Permalink

    744 Scott-in-WA says:

    March 11th, 2008 at 6:36 pm
    Just a quick and very simple question for Spence, Geoff, or Jerry, whoever thinks they have the most credible answer: What total mass of CO2 now resides in the earth’s atmosphere, and what total mass of CO2 now resides in the world’s oceans?

    (In my mind, as a former mining engineer, this is sort of the same thing as asking the seemingly simple question, “Where did all those gazillion tons of limestone come from?”)

    Yes, and there are some more companion questions.

    How many tons of carbon dioxide are contained in the current fossil fuel reserves?

    How many tons of carbon dioxide emissions would result from the combustion of the entire current fossil fuel reserves?

    How many additional ppm of carbon dioxide emissions into the atmosphere would result from the combustion of the entire fossil fuel reserves, and is the additional ppm of carbon dioxide a significant proportion of the carbon dioxide already resident in the atmosphere?

    How long would the carbon dioxide resulting from the combustion of the entire fossil fuel reserves remain resident in the atmosphere?

    If increased amounts of carbon dioxide in the atmosphere are supposed to result in warming of the atmosphere, how much does the carbon dioxide absorbed by the hydrosphere result in additional warming of the hydrosphere?

    How long would the former fossil fuel carbon dioxide absorbed from the atmosphere and into the hydrosphere remain resident in the hydrosphere before being absorbed by organic and inorganic sinks into the lithosphere?
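
    The arithmetic behind the first three questions is straightforward once a reserve figure is assumed. In the sketch below, the 1000 GtC reserve and the 0.5 airborne fraction are hypothetical placeholders (published estimates vary widely); the ~2.13 GtC-per-ppmv and 44/12 mass conversions are standard.

        # Hypothetical-input sketch for the first three questions above.
        GTC_PER_PPMV = 2.13          # ~2.13 GtC raises atmospheric CO2 by 1 ppmv
        CO2_PER_C = 44.01 / 12.011   # mass ratio of CO2 to carbon, ~3.66

        reserves_gtc = 1000.0        # GtC, hypothetical fossil fuel reserves
        airborne_fraction = 0.5      # assumed fraction remaining in the atmosphere

        co2_emitted = reserves_gtc * CO2_PER_C                        # Gt of CO2
        delta_ppmv = reserves_gtc * airborne_fraction / GTC_PER_PPMV  # ppmv added
        print(f"~{co2_emitted:.0f} Gt CO2 emitted, ~{delta_ppmv:.0f} ppmv added")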

  749. Geoff Sherrington
    Posted Mar 12, 2008 at 4:39 AM | Permalink

    Re #744 Scott-in-WA

    Hydrogen
    Helium
    Li ttle
    Be ryl
    B ates
    C ries (Carbon)
    N ightly
    O ver
    F reddy etc.

    The periodic table starts. What is the crustal abundance of any element? Variable for some; bounds need defining for others. Helium forms from alpha decay and can reside in natural gas traps, but escapes the atmosphere owing to its low density.

    Bounds are needed for a carbon estimate, mostly how deep to look and which minerals to include. Try 150 g of calcium carbonate per kg of rock in the crust to 10 km depth. Enough to make a huge volume of atmospheric CO2, where a lot of it probably once resided, in past eras before life became abundant.

  750. Scott-in-WA
    Posted Mar 12, 2008 at 5:24 AM | Permalink

    746 through 750

    Thanks everybody. I have to buy a copy of that book.

    It has long been thought that biogeochemical activity is responsible for the massive deposits of limestone (CaCO3) which are so useful to civilization in this day and age.

    In the AGW literature, we see discussion of the carbon cycle in terms of the atmospheric content of CO2, and of the carbon cycle as it affects surface biota on land and in the oceans, but we don’t see much discussion of the physics and thermodynamics of biogeochemical processes, especially the kinds of natural processes which produced limestone. I will speculate that limestone might itself be a potential source of carbon for other natural processes, processes which might release CO2 in response to a variety of biogeochemical activities that could be temperature sensitive.

    After all, for many millions of years, the earth’s atmosphere held a concentration of CO2 significantly higher than today’s levels, and atmospheric temperatures were also significantly higher. If biogeochemical processes and biogeophysical processes — especially those which occurred in the oceans — played some role in that circumstance, what were the mechanisms, what was cause, what was effect?

  751. Sam Urbinto
    Posted Mar 12, 2008 at 11:27 AM | Permalink

    Not one single shred of actual evidence exists now that can or does show us what will happen to levels of AGHGs in the atmosphere over the next 2, 5, 10 or 20 years.

    Much less what going to any one of 600 ppmv of carbon dioxide, 3000 ppbv of methane, 400 ppbv of nitrous oxide, 400 pptv of CFC-11, or 700 pptv of CFC-12 would cause.

    Q: What does the anomaly do if all six of the greenhouse gases listed in Annex A of the Kyoto Protocol doubled, from an equal rise in all of the listed sectors/source categories?
    A: We don’t know.

    http://unfccc.int/resource/docs/convkp/kpeng.html
    http://en.wikipedia.org/wiki/Kyoto_Protocol#Cost-benefit_analysis

  752. D. Patterson
    Posted Mar 12, 2008 at 12:45 PM | Permalink

    Approximately how many Gt of carbon dioxide are currently in the atmosphere?

  753. Sam Urbinto
    Posted Mar 12, 2008 at 3:44 PM | Permalink

    Ecoworld guesses 3000 gigatons

    http://www.radix.net/~bobg/faqs/scq.CO2rise.html

    The air weighs an average of 5 quadrillion metric tons FWIW. Dunno if that includes WV or just dry air.
    First of all, a gigaton is one billion metric tons. One metric ton (2,200 lbs.) is what a cubic meter of water weighs, so one gigaton is what one cubic kilometer (one billion cubic meters) of water weighs.

    Next, remember atmospheric CO2 includes two oxygen atoms, and weighs 3.7x the carbon feedstock. So if there are 70 gigatons of carbon in the Amazon, for example, burning the remaining Amazonian carbon will release 3.7x that many gigatons of CO2 into the atmosphere (ref. Amazon Ecology Project). So far, tropical deforestation alone has resulted in the release of about 475 gigatons of CO2 into our atmosphere.

    So how many gigatons of CO2 are we contending with, anyway, in our atmosphere? Referencing and extrapolating from J. Schlorrer’s 1994 study, “Why Does Atmospheric CO2 Rise?”, there are probably about 3,000 gigatons of CO2 in the earth’s atmosphere right now.

    Given everything I’m sure it’s modeled. Or whatever.

    Molar mass of CO2 = 44.0095 g/mol
    Molar mass of O = 15.9994 g/mol
    Molar mass of N = 14.0067 g/mol
    Molar mass of Ar = 39.948 g/mol
    Molar mass of N2O = 44.0128 g/mol
    Molar mass of CH4 = 16.04246 g/mol
    Molar mass of H2O = 18.01528 g/mol

    You probably need the specific gravity at various altitudes to calculate it.

    http://wahiduddin.net/calc/density_altitude.htm
    http://www.sengpielaudio.com/ConvDensi.htm
    http://www.sengpielaudio.com/calculator-airpressure.htm

    Wiki has the other information that might be helpful.

    Composition of dry atmosphere, by volume
    Gas Volume
    Nitrogen (N2) 780,840 ppmv (78.084%)
    Oxygen (O2) 209,460 ppmv (20.946%)
    Argon (Ar) 9,340 ppmv (0.9340%)
    Carbon dioxide (CO2) 383 ppmv (0.0383%)
    Neon (Ne) 18.18 ppmv (0.001818%)
    Helium (He) 5.24 ppmv (0.000524%)
    Methane (CH4) 1.745 ppmv (0.0001745%)
    Krypton (Kr) 1.14 ppmv (0.000114%)
    Hydrogen (H2) 0.55 ppmv (0.000055%)

    Not included in above dry atmosphere:
    Water vapor (H2O) ~0.25% over full atmosphere, typically 1% to 4% near surface

    Minor components of air not listed above include
    Gas Volume
    nitrous oxide 0.3 ppmv (0.00003%)
    xenon 0.09 ppmv (9×10^-6%)
    ozone 0.0 to 0.07 ppmv (0% to 7×10^-6%)
    nitrogen dioxide 0.02 ppmv (2×10^-6%)
    iodine 0.01 ppmv (1×10^-6%)
    carbon monoxide trace
    ammonia trace

    The mean molar mass of air is 28.97 g/mol. Note that the composition figures above are by volume fraction (V%), which for ideal gases is equal to mole fraction (that is, fraction of total molecules). By contrast, mass-fraction abundances, particularly for gases with a molar mass significantly different from that of air, will differ from the volume fractions. For example, in air, helium is 5.2 ppm by volume fraction and mole fraction, but only about (4/29) × 5.2 ppm = 0.72 ppm by mass fraction.
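
    That volume-to-mass conversion generalizes directly through the molar masses (CO2 and CH4 values as listed above; helium’s 4.0026 g/mol is a standard value):

        # Convert volume (mole) fractions to mass fractions via molar masses.
        M_AIR = 28.97  # g/mol, mean molar mass of air

        gases = {"He": (4.0026, 5.24), "CH4": (16.04246, 1.745), "CO2": (44.0095, 383.0)}
        for name, (molar_mass, ppmv) in gases.items():
            ppm_mass = ppmv * molar_mass / M_AIR
            print(f"{name:4s} {ppmv:8.3f} ppmv  ->  {ppm_mass:8.3f} ppm by mass")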

  754. Ron
    Posted Mar 13, 2008 at 12:08 AM | Permalink

    Re Sherrington at 748.

    Great story Geoff. Also OT, it closely relates to my experiences of over 30 years in front of a college classroom full of some of the best and brightest. Far too many of them knew everything that happened in the previous 24 hours that had absolutely no connection to their real lives, but knew absolutely nothing about what went on during the several thousands of years that produced the world they live in. (I think Andy Rooney, or maybe Chesterton, offered this observation; if not, apologies to the person who did.)
    Ron

  755. Geoff Sherrington
    Posted Mar 16, 2008 at 1:43 AM | Permalink

    Re # 772 Judith Curry

    You cite-

    Despite multiple efforts, predictions of climate change remain uncertain. Where precision is an issue (e.g., in a climate forecast), only ensembles of simulations made across model families which differ for parameterizations, discrete algorithms and parameter choices allow an estimate of the level of imprecision. Is this the best we can do? Or is it at least conceptually possible to reduce these uncertainties? Focusing on ocean models in idealized domains we describe chaotic space-time patterns and equilibrium distributions that mimic nature. Using the Navier-Stokes equations for barotropic flows as a zero-order approximation of analogous flow pattern, we then investigate if it is possible, in this overly simplified set-up, for which smooth solutions exist, to bound the uncertainty associated with the numerical domain discretization (i.e. with the limitation imposed by the Reynolds number range we can explore). To do so we analyze a series of stationary barotropic turbulence simulations spanning a range of Reynolds number of 10^4.

    A systematic superabundance of surplus suspected syllogistic linguistics impresses not.

    1. What was the question?

    2. Was there a result?

  756. MrPete
    Posted Mar 16, 2008 at 4:48 AM | Permalink

    Geoff, Jerry, Judy, et al,
    This interaction reminds me of various situations I’ve observed in my lifetime. The key elements:

    * Hard problem to be solved or question to be answered
    * Bright professional(s) going after the solution
    * (Typically older, but y’never know) voice of experience

    So often, it comes down to the following interchange, with a couple of story lines that ensue, both good:

    BP: “We’ve attacked this issue but have not cracked it. We’ve got an idea for a creative approach that we hope will shed light on the situation.”

    VE: “Sounds like you haven’t stepped back to look at the forest instead of the trees. In my experience, you’re missing something important. [VE may be able to express: you’re missing X, Y, Z]”

    Story Line A: BP stays the course, uses creative approach A… then B… then C. Possibly never solves the original problem but produces surprisingly valuable results…postits, potato chips, superglue…

    Story Line B: BP asks more questions of VE, steps back, and uses a more experience-based approach to the challenge. Still may never solve the original problem which might be intractable, but builds on others’ knowledge to make progress.

    Overplaying this (a lot)… the hard part here, as has already been demonstrated: many lines of experimentation come at huge cost. Is it really worth the cost, when VE knows (for the generally stated issue) the answer will not be found? Serendipity says let’s explore anyway. Reason says let’s work smarter, not harder.

    I have a friend working on a twelfth form of combustion. The goal: “to serve God by designing more efficient combustion systems with exhaust so clean you can breathe it.” The point of mentioning this: on the one hand, while he may sound crazy to many, it’s actually real… in fact already EPA approved, just not yet manufacturable. On the other hand, how do you convince investors it’s worth the $10-20m cost for each next-generation test rig?

    My gut tells me Judith needs to talk with Jerry more than she knows. Even if she does end up continuing down her current path.

  757. Judith Curry
    Posted Mar 16, 2008 at 7:40 AM | Permalink

    Stepping back to look at the forest: while the dynamic cores of weather and climate models are robust (as per their theoretical and numerical foundations and decades of validation of the atmospheric cores using surface based and satellite data), there is much fundamental research being conducted in geophysical fluid dynamics, some of which may help improve weather/climate models in one of the following ways: improving the treatment of unresolved degrees of freedom in the model; increasing the numerical accuracy of the solution; or increasing computational efficiency. I would anticipate incremental improvements in some if not all of these aspects in the future.

    This is my assessment. However, please understand that I am on the periphery of the development of the dynamical cores of weather/climate models: my contributions are in the areas of parameterization of physical processes. Also, I am part of a team that will be reviewing NOAA’s climate modelling at GFDL in a few weeks (this will include dynamical cores). I am but one voice in this community, and I am not working in the trenches to develop/improve dynamical cores (if the NOAA presentations are posted publicly, I will certainly point you to the web site). But I take on the responsibility to understand the models that I use in my research.

    Note, the community working on the broad issues associated with dynamical cores is very broad, with expertise in nonlinear science, computational science, mechanical and aerospace engineering, and mathematics, as well as atmospheric and ocean dynamics. There is nothing that has been discussed here that represents new knowledge, new ideas, whatever. I have found the discussion interesting since I haven’t recently had any reason to dig into these issues in detail. It is good for the broader readership of CA to be cognizant of and ponder these issues. But I have seen nothing here that is news or discredits the models beyond the uncertainties described in model documentation.

    As I stated on the Jablonowski thread, there are many improvements needed to climate models, but the atmospheric dynamical core is quite robust, and is not a major problem.

  758. See - owe to Rich
    Posted Mar 16, 2008 at 8:14 AM | Permalink

    Judith,

    I have waited a long time to ask this question. I was thinking of asking it elsewhere, but your statements above seem to invite it, and you are probably the expert I seek.

    How many free parameters inside the climate models have to be estimated? And, is this done by least squares fitting? And, are you able to give a good narrative description of how these models operate, in terms of inputs, internal state adjustments, and outputs?

    If you are busy, just answering the first question (or even a good reference) will do 🙂

    Thanks,
    Rich.

  759. Judith Curry
    Posted Mar 16, 2008 at 10:09 AM | Permalink

    Rich, you raise an important question that doesn’t have a simple answer. An individual free parameter is set by trying to simultaneously meet the following tests: being defensible in terms of our physical understanding and observations, providing a credible simulation when used as an approximation in a detailed process model, and giving a reasonable result (in terms of climatology and sensitivity) when used in a climate model.

    I’ll reproduce something I wrote on the Jablonowski thread, which gives you some idea of the complexity here.

    First, re “tuning” of the models. There are certain fundamental physical constants; these are of course not tuned. There is external forcing (e.g. solar input, volcanoes) which is specified in historical simulations to the best of our knowledge based upon observations. The only real wiggle room in the external forcing is aerosols. This is handled differently by different modelling groups: some specify aerosol properties for purposes only of the radiative transfer calculations. Other models have aerosols that also interact with cloud processes, and a few have more interactive aerosols (NASA GISS is a leader in the sophistication of its aerosol module). We have had good satellite observations of aerosols for the past decade or so, but it is a challenge to deal with historical aerosol loading. The IPCC 4th Assessment Report, Physical Basis, section 2.4, summarizes these issues. There is another category of “tuning” that occurs in the context of subgridscale parameterizations to deal with unresolved degrees of freedom in the model; this includes clouds, for example.

    I am going to provide a specific example of parameterization tuning in the context of sea ice albedo (surface reflectivity), an example that I have been directly involved in (for a scientific reference, see my paper http://curry.eas.gatech.edu/currydoc/Curry_JGR106b.pdf; lucia, a paper from SHEBA). During the summertime sea ice melt, after the surface snow has melted off, the albedo of melting ice is complicated by the presence of melt ponds and depends on the areal coverage and depth distribution of the melt ponds. Current sea ice modules don’t explicitly model melt ponds, so they parameterize the melting ice albedo in some way, the simplest parameterization being to set the melting ice albedo to a constant. Now the constant is broadly constrained by observations, but could range from 0.3 to 0.56. How does a modeler select which value to use? Well, there are a few other tunable parameters in sea ice models as well, so sensitivity tests are done across the plausible range of values, compared with observations, and then the parameters are selected. By the way, I have been arguing for an explicit melt pond parameterization in sea ice models, but a few years ago the NCAR climate modelers told me they didn’t want to use my parameterization since it would make the sea ice melt too quickly (maybe they would have predicted the meltoff in 2007 with my parameterization!). The next planned incarnation of NCAR’s sea ice model includes a number of upgrades, including melt ponds! The upshot of this is that the current albedo in NCAR’s model makes it too insensitive to melting, making it melt too slowly. The challenge with such tuning within parameterizations is that they fit best the current climate, and not necessarily a future climate. Hence there are ongoing efforts to increase the sophistication of the parameterizations so fewer “tunings” are needed.

    The bottom line is that in a model with order 10^9 degrees of freedom, there are really very few tuning knobs, and while aspects of the model can be sensitive to an individual tuning, there is no way you can tune these models to give the observed space/time variability of so many different variables.
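
    To make the sensitivity-test-then-select procedure concrete, here is a deliberately crude sketch (not any real sea ice module: the solar flux and the “observed” melt rate are invented placeholders; only the 0.3 to 0.56 albedo range comes from the discussion above). It sweeps the tunable albedo across its plausible range and scores each value against the observation.

        # Crude tuning sketch: sweep a melting-ice albedo over its plausible
        # range and compare a toy melt rate against a hypothetical observation.
        import numpy as np

        SOLAR = 150.0      # W/m^2, assumed mean summertime shortwave (illustrative)
        L_FUSION = 3.34e5  # J/kg, latent heat of fusion of ice
        RHO_ICE = 917.0    # kg/m^3, density of ice
        OBS_MELT = 0.02    # m/day, hypothetical observed surface melt rate

        def melt_rate(albedo):
            """Melt rate (m/day) if all absorbed shortwave went into melting."""
            return (1.0 - albedo) * SOLAR * 86400 / (L_FUSION * RHO_ICE)

        for albedo in np.arange(0.30, 0.57, 0.04):
            r = melt_rate(albedo)
            print(f"albedo {albedo:.2f}: melt {r:.4f} m/day, misfit {abs(r - OBS_MELT):.4f}")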

  760. Posted Mar 16, 2008 at 10:43 AM | Permalink

    760 (Judith):

    There is external forcing (e.g. solar input, volcanoes) which is specified in historical simulations to the best of our knowledge based upon observations. The only real wiggle room in the external forcing is aerosols.

    I think you take the solar forcing too much for granted. An example is TSI. There has been an evolution in our assessment of reconstructions of TSI before ~1978 [when measurements began]. The following graph illustrates the progressive ‘flattening’ of the TSI curve over the past 300 years, from the [grey] Hoyt&Schatten curve, to Lean’s [brown], and ending with the red and pink curves. If the solar forcing ‘parameters’ were derived from fitting to observed and reconstructed TSI, those might need adjustment. I presume that the solar forcing is not calculated from first principles [although I don’t know – and it could be so]. In any event, solar forcing based on TSI would seem to change if TSI has been re-assessed.

  761. Judith Curry
    Posted Mar 16, 2008 at 1:41 PM | Permalink

    Leif, the main IPCC simulations are since 1900, where we have pretty high confidence in the solar forcing, as your diagram indicates. Prior to 1900, the uncertainty increases.

  762. Raven
    Posted Mar 16, 2008 at 1:49 PM | Permalink

    Leif, Judith
    Here is an example of model output that is using outdated TSI information.

    As you can see, the model would significantly overestimate early 20th century warming once it is corrected with the new TSI data.
    It may be possible to ‘fix the model’ after such a change by adjusting aerosols or appealing to error bands.
    However, doing so would demonstrate that the available knobs give modellers a lot of flexibility despite their claims otherwise.
    Here is more information on the image: http://en.wikipedia.org/wiki/Image:Climate_Change_Attribution.png

  763. Kenneth Fritsch
    Posted Mar 16, 2008 at 2:10 PM | Permalink

    Re: #758

    It is good for the broader readership of CA to be cognizant of and ponder these issues. But I have seen nothing here that is news or discredits the models beyond the uncertainties described in model documentation.

    Judith, I sometimes think you get it backwards from what I see in the value of CA. Almost no one here at CA, except for occasional visiting experts, is doing any original climate science. Most of the serious observations here have to do mainly with analyzing climate science papers and methodologies while allowing some insights into the more personal and political worlds of climate scientists. I think that many readers and posters here get good satisfaction from simply being able to participate in the analyses and discussions to one degree or another and have tendencies, like the blog owner, to be puzzle solvers. We are outside observers and that means that most of us are not under the scrutiny that we place on climate scientists, in general, and particularly those who come here to visit.

    Climate scientists will do the changing in climate science, but meanwhile it is fun to be an observer and analyzer of the process.

  764. Posted Mar 16, 2008 at 2:23 PM | Permalink

    762 (Judith):

    The main IPCC simulations are since 1900, where we have pretty high confidence in the solar forcing, as your diagram indicates. prior to 1900, the uncertainty increases.

    The main problem with TSI is from 1900 to 1977; first, because that is what we model; second, because reconstructions prior to 1900 depend on the trend assumed for 1900 to 1977.

  765. Judith Curry
    Posted Mar 16, 2008 at 3:08 PM | Permalink

    Kenneth, I agree totally. That specific comment was intended for one specific person 🙂

  766. Posted Mar 16, 2008 at 4:07 PM | Permalink

    767 (Carrick): You have ‘solar’ since 1980 being higher than at any time since 1900. There is no evidence for that. On the contrary, solar effects were highest around 1950 and have been decreasing since. Solar activity right now is what it was 107 years ago. Not just at a ‘single point’, but the whole of cycle 23 is very close to the level of cycle 13. This is not reflected in your graph at all.

  767. Greg Meurer
    Posted Mar 16, 2008 at 5:08 PM | Permalink

    Professor Svalgaard,

    GISS Model E uses Lean 2000 for solar forcing up to 2000, which appears to be the same as you have plotted in #761. This is used in their hindcast back to 1880 to validate the model.

    See Hansen, et al. 2007, which is available at http://pubs.giss.nasa.gov/abstracts/2007/Hansen_etal_3.html. The issue is discussed on the 10th page of the article.

  768. Posted Mar 16, 2008 at 6:30 PM | Permalink

    Leif, I accept your criticisms of the solar activity curve. The curves are originally from Meehl et al. (2004), so if you have a dispute, you should probably address it with him. 😉

    In any case, my real point was with respect to the anthropogenic forcing terms, not the solar contribution, and it was simply to illustrate that anthropogenic climate change, at least that which is associated with emissions, is a relatively new phenomenon (circa 1970).

    Thanks for your comment though.

  769. Posted Mar 16, 2008 at 6:45 PM | Permalink

    769 (Greg): Lean is a coauthor of the Wang 2005 reconstruction [blue curve], which varies a lot less than Lean 2000, so it is not clear how ‘valid’ the model is when done with obsolete data. My point [which I have belabored in ~2000 posts and comments in the ‘Svalgaard’ thread] is that there are good indications that solar variability is less than was assumed even a few years ago and that this will have to be taken into consideration when injecting solar forcing into the models.

  770. Greg Meurer
    Posted Mar 16, 2008 at 7:54 PM | Permalink

    Professor Svalgaard re #770

    Yes, I have been following your thread and see your point. I am not disagreeing with you; perhaps I should have directed my comment to Professor Curry.

    The Hansen, et al. 2007 paper is a detailed defense of the ability of Model E to replicate climate back to approximately 1900 in the case of temperature, and various other periods in the case of other metrics. I was originally interested in the paper because of its exposition of how well Model E could replicate the mid-troposphere record in contradiction of other recent papers. It was clear that a choice had to be made concerning whether to use the UAH data (cooler) or the RSS/NASA data (warmer) as the measure against which to compare the model simulation. The paper includes a discussion of why the GISS team used the RSS data. There are papers on both sides of that issue.

    What you have alerted me to is perhaps a more curious choice of input data. The Hansen, et al. 2007 paper is based upon computer runs made apparently in 2005. Whether the Wang, et al. 2005 paper was available when the computer runs were made I don’t know, but this paper is not discussed in Hansen, et al. 2007, though one could reasonably assume it was available sometime during the publication process. Dr. Hansen does discuss Lean 2002 and pretty much dismisses it based upon earlier works of other authors.

    While I am not a scientist (lawyer/businessman actually), I do have a sense for special pleading. Here the solar variability represented by Lean 2000 and earlier work is (apparently) necessary to explain the temperature increases in the first half of the 20th century and earlier variations. The defense of using the obsolete data smells like special pleading.

    It will be interesting to see whether some reanalysis of the solar data is published to keep the “essential correctness” of Lean 2000 alive so the variability can continue to be used.

    Thanks for so patiently helping to educate us here.

  771. Posted Mar 16, 2008 at 8:21 PM | Permalink

    772 (Greg): Please call me ‘Leif’. Now about this:

    It will be interesting to see whether some reanalysis of the solar data is published to keep the “essential correctness” of Lean 2000 alive so the variability can continue to be used.

    This ‘reanalysis’ has actually been done, even involving Lean herself. She was a coauthor of Wang et al. 2005; Krivova et al. 2007 come to pretty much the same result, while Dora Preminger (2006) and myself (2007) get an even smaller variation. There is little doubt among solar physicists that the recent TSI series are closer to the mark than the old ones from before, say, 2003. So, if the models only work with a large TSI variation, there is something wrong.

    There are other lines of evidence pointing to a less-varying Sun: in the very first post in the Svalgaard #1 thread I summarize the evidence.

  772. Gerald Browning
    Posted Mar 16, 2008 at 11:53 PM | Permalink

    Judith Curry,

    Stepping back to look at the forest: while the dynamic cores of weather and climate models are robust (as per their theoretical and numerical foundations and decades of validation of the atmospheric cores using surface based and satellite data), there is much fundamental research being conducted in geophysical fluid dynamics, some of which may help improve weather/climate models in one of the following ways: improving the treatment of unresolved degrees of freedom in the model; increasing the numerical accuracy of the solution; or increasing computational efficiency. I would anticipate incremental improvements in some if not all of these aspects in the future.
    This is my assessment. However, please understand that I am on the periphery of the development of the dynamical cores of weather/climate models: my contributions are in the areas of parameterization of physical processes. Also, I am part of a team that will be reviewing NOAA’s climate modelling at GFDL in a few weeks (this will include dynamical cores). I am but one voice in this community, and I am not working in the trenches to develop/improve dynamical cores (if the NOAA presentations are posted publicly, I will certainly point you to the web site). But I take on the responsibility to understand the models that I use in my research.
    Note, the community working on the broad issues associated with dynamical cores is very broad, with expertise in nonlinear science, computational science, mechanical and aerospace engineering, and mathematics, as well as atmospheric and ocean dynamics. There is nothing that has been discussed here that represents new knowledge, new ideas, whatever. I have found the discussion interesting since I haven’t recently had any reason to dig into these issues in detail. It is good for the broader readership of CA to be cognizant of and ponder these issues. But I have seen nothing here that is news or discredits the models beyond the uncertainties described in model documentation.
    As I stated on the Jablonowski thread, there are many improvements needed to climate models, but the atmospheric dynamical core is quite robust, and is not a major problem.

    This is the biggest piece of nonsensical rhetoric I have seen. How can you validate a dynamical core against observations when it contains no physical forcings? And Dave Williamson et al. (also a coauthor of the Jablonowski manuscript) have shown that when physical forcings are added to the atmospheric component (CAM3) of the NCAR climate model, the result is unphysical in a matter of days, exactly as expected when the vorticity cascade is incorrect. The Jablonowski manuscript shows just how serious the incorrect cascade of vorticity is for a model (and that is even before any of the ill-posedness problems start to appear). Please answer the mathematical questions at the end of the Jablonowski thread to prove or disprove whether the dynamical cores are robust, and stop confusing dynamical cores with models that contain approximations to both the dynamics and physics. They are not the same thing.

    Jerry

  773. Gerald Browning
    Posted Mar 17, 2008 at 12:04 AM | Permalink

    Does anyone else here find it curious that Judith will be reviewing GFDL models when clearly she does not understand the difference between a dynamical core (no physical parameterizations) and a full atmospheric model (approximations to both dynamics and physics)? Might I suggest that the panel recruit Heinz Kreiss as a reviewer, or would they rather not have an expert on PDEs and numerics questioning their games?

    Jerry

  774. MrPete
    Posted Mar 17, 2008 at 5:09 AM | Permalink

    What I just heard:
    Judith:

    …the dynamic cores of weather and climate models are robust (as per their theoretical and numerical foundations and decades of validation of the atmospheric cores using surface based and satellite data)
    …there is much fundamental research being conducted…which may help improve weather/climate models
    …I am on the periphery of the development of the dynamical cores
    …my contributions are in the areas of parameterization of physical processes
    …I take on the responsibility to understand the models that I use in my research.
    …I have seen nothing here that is news or discredits the models beyond the uncertainties described in model documentation.
    …the atmospheric dynamical core is quite robust, and is not a major problem.

    Jerry:

    …[you can’t] validate a dynamical core against observations when it contains no physical forcings
    …when physical forcings are added to the atmospheric component (CAM3) of the NCAR climate model the result is unphysical
    …the vorticity cascade is incorrect.
    …the incorrect cascade of vorticity is [serious] for a model
    …ill-posedness problems [are also significant]
    …[don’t] confuse dynamical cores with models that contain approximations to both the dynamics and physics.
    …a dynamical core [has] no physical parameterizations
    …an atmospheric model [has] approximations to both dynamics and physics

    MrPete’s uninformed kindergarten interpretation
    a) Both Judith and Jerry are in complete agreement on one thing: there’s nothing new or news here. Whether or not that is actually true remains to be seen, but both of them believe they have a good understanding of the situation.
    I’m glad about that, because I sure don’t see this clearly yet 🙂
    b) I sense a reasonable chance Judith and Jerry are working from different definitions of “dynamical core.” This is understandable: Jerry’s standing on his solid ground, the core of the physics/math; Judith is standing on her solid ground, the parameterization of physical models.
    Question for Jerry: A point of peanut gallery confusion: Jerry, you made two statements that appear potentially at odds, but that’s probably because I didn’t completely understand them. You said:
    1) a dynamical core [has] no physical parameterizations. This makes sense to me– I hear you saying the true dynamical core can and should be based on the physics (chemistry, whatever else) without tuning ‘knobs’.
    2) how can you validate a dynamical core against observations when it contains no physical forcings?
    What is the “it” in #2?
    If the dynamical core, how does one incorporate physical forcing into the core without incorporating physical parameterization into the core?
    If the model, then are you saying the vorticity cascade of the dynamical cores is incorrect, which is why the models quickly go ‘unphysical’ once the forcings are added to the model?
    I realize these are incredibly basic questions; just trying to scratch this out on a napkin so to speak, to keep it straight.
    Back to MrPete’s uninformed kindergarten interpretation
    If I’ve got it right so far, could it possibly be true that we’re dealing with a difference of opinion at the level of what constitutes a dynamical core…or whether the models even contain an identifiable dynamical core? Is there a separation of dynamical core from model built on that core?
    CA has some code-reading junkies who might be able to answer that question even more than Jerry or Judith, neither of whom have admitted to being computer code freaks 🙂
    Bottom line tentative observation:
    Jerry sees the models’ dynamical cores as being incomplete or at least incorrect, with an incorrect vorticity cascade and other ill-posedness issues.
    Judith sees the models’ dynamical cores as being stable and reliable, proven over decades.
    Seems to me this particular question can be resolved. Isolate the dynamical cores, test which perspective is correct, and/or resolve the differences in definition.

  775. Judith Curry
    Posted Mar 17, 2008 at 6:05 AM | Permalink

    Gerald, you continue to confuse a dynamical core with a climate model. A dynamical core is a portion of the code of a climate model. To put the importance of issues with the dynamical core into perspective in the context of overall issues with climate models: discussion of the dynamical core takes up about 1.5 hours in a 2.5-day meeting for the GFDL review. Also, you seem not to understand “physical forcings”. A weather model is forced by satellite-derived sea surface temperatures and solar radiation at the top of the atmosphere.

  776. Judith Curry
    Posted Mar 17, 2008 at 6:15 AM | Permalink

    There is no confusion re what the dynamical core is; it is the numerical solution of the Navier Stokes equations. The issue of interest is weather and climate models, which have a dynamical core plus treatments of the unresolved degrees of freedom (parameterizations). The bigger issue in climate/weather modelling is treatment of the unresolved degrees of freedom, not the dynamical core. Suitable treatments avoid inappropriate vorticity cascades, which is the issue of concern to Gerald. If this were a huge problem in weather models, we would not see the accurate prediction of midlatitude weather systems that the best of these models produce; rather we would see unrealistically strong storms. The fact that weather models lose predictability of individual storms on a time scale of say greater than 5-7 days has little to do with any potential problems with the vorticity cascade, but rather to the chaotic nature of weather. Hence ensemble simulations are made using weather models to cluster the simulations around weather regimes, and extended range weather forecasts are thus enabled.

  777. welikerocks
    Posted Mar 17, 2008 at 7:02 AM | Permalink

    Wikipedia:

    A simple general circulation model (SGCM), a minimal GCM, consists of a dynamical core that relates material properties such as temperature to dynamical properties such as pressure and velocity. Examples are codes that solve the primitive equations, given energy input into the model, and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are the ones most strongly attenuated. Such models may be used to study atmospheric processes within a simplified framework but are not suitable for future climate projections.

    Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) and impose sea surface temperatures (SSTs). A large amount of information including model documentation is available from AMIP [4]. They may include atmospheric chemistry.

    * AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:
    o surface pressure
    o horizontal components of velocity in layers
    o temperature and water vapor in layers
    * There is generally a radiation code, split into solar/short wave and terrestrial/infra-red/long wave
    * Parametrizations are used to include the effects of various processes. All modern AGCMs include parameterizations for:
    o convection
    o land surface processes, albedo and hydrology
    o cloud cover

    And adding Atmospheric Turbulence: http://mysite.du.edu/~etuttle/weather/turbul.htm
    Turbulence remains an important unsolved problem of classical physics. It is important in pipe flow, ship design, aeronautics and, of course, in meteorology. Many things around us show the results of turbulence, and we rely on it to perform many useful duties, such as dissipating pollution and evaporating water. Turbulence is invisible, so it is often overlooked, even when most active.

    We can describe the state of a fluid flowing slowly and placidly in a capillary by its flow velocity, which is a function of the radial distance from the center of a circular capillary. This is laminar flow, which is easily handled by the equations of hydrodynamics. Flows like this can be described by a small number of degrees of freedom. The property of a fluid, a liquid or a gas, that supports laminar flow is viscosity, the ability to support a shear stress by a rate of change of velocity. The velocity is zero at the surface of the capillary, and increases inward, supporting a shear stress on infinitesimal cylinders that retards the flow, and makes the rate of discharge of fluid proportional to the pressure difference.

    this link also has information on Viscosity, Molecular Transport, Surface Turbulence…and much more.
    Trying to wrap my mind around that word “robust”

  778. Kenneth Fritsch
    Posted Mar 17, 2008 at 9:04 AM | Permalink

    Re: #778

    There is no confusion re what the dynamical core is; it is the numerical solution of the Navier Stokes equations. The issue of interest is weather and climate models, which have a dynamical core plus treatments of the unresolved degrees of freedom (parameterizations).

    Judith, I think as an observer that the crux of the confusion comes from the climate and weather models use of a dynamical core plus treatments with parameterizations. How dependent are the final results on the use (solutions?) of the Navier Stokes equations versus the parameterizations and/or the use of the equations to select parameterizations? In other words one could give lip service to Navier Stokes equations while in fact simply using the equations to point to (or legitimize) the selected parameterizations. Under these conditions the parameterizations surely become the issue of debate, but at the same time giving less credibility to the degree to which the model is based on first principles.

  779. Gerald Browning
    Posted Mar 17, 2008 at 12:59 PM | Permalink

    welikerocks (#779) and Kenneth Fritsch (#780),

    Doesn’t it seem a bit odd that both of you are able to understand the difference between a dynamical core (numerical approximation of the inviscid, unforced primitive equations) and full weather and climate models that contain a core plus physical parameterizations of things such as latent heating, while Judith continues to confuse the two? Could it be that Judith will not make the distinction because then the real issues are painfully clear?

    Note that she has also stated that there is nothing new on this site. I beg to differ and suggest that she look at comments #166 and #167 on the Exponential Growth in Physical Systems #1 thread. The new manuscript by the Hungarian (formerly of NASA) shows that there was a substantial error in the derivation of the greenhouse gas equations. When corrected, the results are just the opposite of what has been quoted.

    One might ask Judith how she develops physical parameterizations when the basic vorticity cascade (the dynamics) is not correct. Tuning, anyone?

    Jerry

  780. Gerald Browning
    Posted Mar 17, 2008 at 1:45 PM | Permalink

    Mr Pete (#776),

    I have explicitly written down the inviscid, unforced dynamical equations on the Jablonowski thread comment #209. That system contains no physical parameterizations (approximations of physical processes like latent heating). Numerical approximations of system #209 (or the corresponding hydrostatic system that neglects the total derivative of the vertical velocity w) are called dynamical cores. However, as shown in the Jablonowski manuscript, a numerical approximation of the hydrostatic dynamical system (also the case for the nonhydrostatic system) will blow up unless an unphysically large dissipation operator is added to the numerical model. In the case of the Jablonowski manuscript, a very small perturbation (1 m/s compared to the steady jet that is 40 m/s) is sufficient to create a very rapid cascade of vorticity to scales not resolved by any of the model examples. Judith has claimed that more resolution (finer mesh) will help this problem, but the mathematics shows that this is not the case (see illustrative examples on Exponential Growth in Physical Systems #1, comments #166 and #167). In other words, the dynamical cores cannot and will not converge to the correct physical solution as the mesh is refined.
    Also I have cited a manuscript by Lu et al. that uses NCAR’s own models that shows the same problem as in the Exponential Growth thread.
    Thus there are serious problems with the numerical approximations of the basic dynamical systems due to properties of the continuous partial differential equations.

    I hope that this explanation makes things clear.

    Jerry

  781. Gerald Browning
    Posted Mar 17, 2008 at 2:06 PM | Permalink

    Judith Curry (#778),

    It is very clear that you have not read Sylvie Gravel’s manuscript on this site. The Canadian global weather model deviates from reality in 12-24 hours, and the main physical parameterization that helps in this time frame is the boundary-layer dissipation (drag) operator.
    But that parameterization is not physically accurate (tuning), and one can watch the error propagate upward and destroy the accuracy of the model compared to obs in 24-36 hours. It is not the physical parameterizations that help; it is the insertion of new obs of the winds into the model every 6-12 hours (data assimilation) that keeps the models on track. I have clearly shown that one can add forcing terms (physical parameterizations) to obtain any solution one desires. That does not mean that they are physically accurate.

    Jerry

  782. Gerald Browning
    Posted Mar 17, 2008 at 3:33 PM | Permalink

    Judith Curry,

    It is also clear that although you claimed to have reviewed the Jablonowski manuscript, you did not comprehend it.
    The zonal steady-state analytic solution that is used in test 1 is a solution of the hydrostatic version of the system given explicitly in comment #209 on the Jablonowski thread. That system completely defines the meaning of a dynamical core, i.e. a dynamical core is a numerical approximation of the system in #209 (or the corresponding hydrostatic version). I wrote down the system of equations so that there would be no confusion.

    In the abstract the authors state, “A deterministic initial-value test case for dry dynamical cores of atmospheric general circulation models is presented …”, and clearly that is a solution of the hydrostatic version of the system presented in #209. The definition of dynamical core doesn’t get any clearer than that.

    And if the dynamical core has problems, then a weather or climate model that adds unphysical forcings can produce some output, but that output has no physical accuracy, especially over longer periods of time.

    Jerry

  783. Erikk Hammerstad
    Posted Mar 17, 2008 at 4:02 PM | Permalink

    Re TSI and GCM

    Just for the record, the posted forcing graph from Wikipedia takes its data from Meehl et al. (2004), which used the Hoyt & Schatten (1993) TSI curve. The Hansen et al. (2007) GISS modelE simulations used the Lean (2000) TSI curve, but also simulated the effect of a TSI curve with a floor very similar to the TSI curve proposed by Leif Svalgaard. All the simulations gave results close to the observed 20th century global mean surface temperature (except for the warm peak around 1940), including the rise at the beginning of the century. Presumably the aerosol differences used compensated for the differences in solar forcing (as Judith Curry suggested), but it would strongly suggest that solar variations may not be needed to explain the temperature rise of the last century. (The dramatic difference between the solar and GHG forcings used in the GISS modelE simulations, on a similar scale to the Wikipedia graph, can be seen in slide 4 of this presentation: http://lasp.colorado.edu/sorce/news/2008ScienceMeeting/doc/Session4/S4_03_Crowley.pdf )

  784. Carrick
    Posted Mar 17, 2008 at 4:57 PM | Permalink

    Erikk, your figure four goes back to the point I made above. You are neglecting the influence of human-generated sulfate emissions with this curve. The net anthropogenic forcing from CO2 + sulfates is much smaller than this, at least until after 1980.

  785. Raven
    Posted Mar 17, 2008 at 5:07 PM | Permalink

    Erikk Hammerstad says:

    Presumably the aerosol differences used compensated for the differences in solar forcing (as Judith Curry suggested), but it would strongly suggest that solar variations may not be needed to explain the temperature rise of the last century.

    What the exercise illustrates is that aerosols are the ultimate fudge factor that allow modellers to come up with almost any result that they want. This means that any studies that use models to attribute the warming to CO2 are worthless because the models were tuned based on the assumption that CO2 is the primary forcing.

  786. welikerocks
    Posted Mar 17, 2008 at 5:36 PM | Permalink

    To be clear, we are talking about a “result” that amounts to a number, which represents a fraction of 1 degree of temperature? Right? That’s what this dynamic thing is?

  787. jae
    Posted Mar 17, 2008 at 5:55 PM | Permalink

    799, Rocks:

    Parametrizations are used to include the effects of various processes. All modern AGCMs include parameterizations for:
    o convection
    o land surface processes, albedo and hydrology
    o cloud cover

    That’s a lot of guessing, isn’t it? 🙂

  788. Gerald Browning
    Posted Mar 17, 2008 at 6:30 PM | Permalink

    Raven (#787).

    Bang on. And everyone assumes that the models accurately describe the climate, which is utter nonsense. They describe the energy dissipation and creation of the model, which is quite different from reality.

    Jerry

  789. Greg Meurer
    Posted Mar 17, 2008 at 7:24 PM | Permalink

    Erikk #765:

    Regarding solar forcing used in the Hansen, et al. 2007 paper comparing Model E runs with observations.

    The principal body of the paper is based upon the TSI from Lean 2000. Thus the comparisons of Model E to various observational metrics of climate in the paper are comparisons based upon use of solar forcing described in Lean 2000.

    The paper uses the somewhat less obsolete TSI forcing data (described as using only the Schwabe approx. 11-year cycle) for only one alternative run, which is compared with the main run using only the surface temperature metric. Since the trends in the two TSI data sets tend to converge around 1950, it is not surprising that in the limited runs involved there is only a small difference in the main vs. alternative model runs’ temperature results. While the global mean results differ only by a small amount (0.09°C) over the two periods reported, there are interesting differences in the results in the polar and other latitudes, and in different periods, reported by latitude bands.

    Also the alternative data still has greater variability than Leif shows.

  790. Tom Vonk
    Posted Mar 18, 2008 at 9:29 AM | Permalink

    Kenneth Fritsch # 780 and others

    How dependent are the final results on the use (solutions?) of the Navier Stokes equations versus the parameterizations and/or the use of the equations to select parameterizations? In other words one could give lip service to Navier Stokes equations while in fact simply using the equations to point to (or legitimize) the selected parameterizations.

    Those are correct and legitimate questions.
    Do not, ever, let yourself be fooled by people saying that “dynamical cores” or, even worse, “climate models” solve the Navier-Stokes equations.
    They do NOT, and they do not even attempt to.
    People saying things like that are either ignorant of Navier-Stokes or trying to manipulate you.

    To understand in simple terms this issue, which has long been both Jerry’s and my concern, one has to explain what “solving the Navier-Stokes equations” means.

    The first possible meaning is to find an analytical solution for arbitrary initial and boundary conditions, e.g. finding explicit functions V(x,y,z,t) (the vector velocity field) and P(x,y,z,t) (the scalar pressure field).
    To make a long story short, this is impossible, and it has not even been demonstrated that such solutions exist.

    The second possible meaning is to find a numerical “solution”.
    In practice that means you replace the space-time continuum by a finite set of points and try to determine the values of the unknown functions V, P only at those points.
    You give yourself a space grid and a time step.
    You can focus on the space step alone, because the time step is tied to it by the speed of propagation of perturbations.
    For air the relevant speed is the speed of sound (pressure waves), but if you make the approximation that air is incompressible, you may take e.g. 100 m/s (jet speeds). Therefore if you take a space step of 100 m, you must take a time step of 100/100 = 1 s.
    That gives you a finite number of algebraic equations (no more PDEs or ODEs) that can be solved for given initial and boundary conditions.
    If you stopped here, however, you would only get a finite set of values that solve the system of algebraic equations, and these may have no relevant relationship to the true unknown solutions of the continuous equations, assuming those exist.
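
    In code the step arithmetic is simply this (an illustration of the constraint, using the speeds and grids mentioned above):

        def cfl_dt(dx_m, speed_ms):
            # Explicit schemes need dt <= dx / (fastest relevant signal speed).
            return dx_m / speed_ms

        print(cfl_dt(100.0, 100.0))      # 1 s for a 100 m step at jet speeds
        print(cfl_dt(100.0, 340.0))      # ~0.3 s if the sound speed is the limiter
        print(cfl_dt(100_000.0, 100.0))  # 1000 s for a 100 km GCM-style grid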

    Therefore it is of paramount importance to make sure that the values you calculate at the finite set of points stay, at all times, near the values that the exact continuous solution would take at those points.
    Obviously, as your space step tends to zero, the number of points tends to infinity and the numerical values must tend to the values of the continuous functions.
    As this is strictly speaking impossible to verify numerically, because an infinite computation time would be necessary, you need a theory (a mathematical demonstration) that there exists some finite space step that guarantees convergence.
    In other words, if decreasing the space step to this limit value produces convergence, then you can stop the computations and say that your numbers are proved to converge uniformly to the unknown continuous solutions.

    This theory exists: the Kolmogorov theory.
    The Kolmogorov theory is an asymptotic, statistical theory valid at very high Reynolds numbers.
    Basically what it says is that microscopic turbulence is a universal feature of turbulence and that its statistics are valid for any turbulent flow.
    It derives the minimal scale at which these assumptions are valid; it is typically 10-100 µm.
    Obviously it is not possible to have a numerical resolution of 10 to 100 µm even for structures dozens of meters in size.
    Therefore even DNS methods have to restrict themselves to much bigger steps and to much smaller Reynolds numbers, where the Kolmogorov theory is not valid anyway.
    The LES methods take advantage of the postulated universality of microscopic turbulence, which allows them to use a bigger step (we are still talking mm or cm though!) and a subgrid parametrization that provides a universal model for the unresolved scales.
    That works at very high Reynolds numbers and is consistent with Kolmogorov’s theory. It fails at lower Reynolds numbers.
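
    The scale gap can be made concrete with a back-of-envelope computation (the viscosity and the range of dissipation rates below are representative values assumed for illustration, not numbers from a specific paper):

        # Kolmogorov microscale: eta = (nu^3 / epsilon)^(1/4)
        nu = 1.5e-5                    # kinematic viscosity of air, m^2/s (assumed)
        for eps in (1e-4, 1e-2, 1.0):  # dissipation rate, W/kg (assumed range)
            eta = (nu ** 3 / eps) ** 0.25
            print(f"epsilon = {eps:g} W/kg  ->  eta ~ {eta * 1e3:.2f} mm")
        # A 100 km grid cell is ~10**8 Kolmogorov lengths across at eps = 1e-2 W/kg:
        print(100e3 / (nu ** 3 / 1e-2) ** 0.25)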

    Now GCMs work with resolutions like … 100 km!
    So you see that we are so staggeringly far from Kolmogorov scales, or even from typical LES scales, that there can be no talk at all of “solving the Navier-Stokes equations”.
    A GCM does not even remotely attempt to solve them.
    And now you will also understand Jerry’s exasperation when he reads people talking about “robust N-S solutions” on a 100 km grid (!) when he is an N-S expert and knows very well that such numerical constructs cannot BY DEFINITION stay stable, nor be proven to converge uniformly to whatever the right N-S solutions would be.

    One last comment: and what if the weather AND the climate are chaotic?
    Here it is important to understand that although 99% of people equate chaos with randomness, nothing could be more wrong.
    Chaotic systems are not random; they are deterministic yet unpredictable.
    They have topological invariants (attractors) and are also not necessarily unstable.
    Well, if the climate is chaotic, which is what common sense suggests (since a property of chaos is self-similarity, meaning that weather and the climate it generates are the same chaos at different time scales, or technically at different frequency resolutions), then it is unpredictable.

    In that case no DNS or LES helps, and numerical simulations only produce numerical artefacts that lose any meaning extremely fast in time.
    What would be necessary then is to attack the question of stability and of the attractors.
    The techniques for that exist (the Kolmogorov-Arnold-Moser theorem), but you will not be surprised that while thousands of people run probably irrelevant computer programs and talk on TV, the people who are seriously looking at the climate stability questions are not treated like showbiz stars.
    To see how deterministic chaos plays havoc with any attempt to numerically solve the equations leading to chaos, and how there can never be uniform convergence, read the following very accessible paper: http://www.maths.uwa.edu.au/~kevin/Papers/JAS07Teixeira.pdf
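
    The effect is easy to reproduce in a few lines (a toy version of the point, not the paper’s actual experiments: forward-Euler Lorenz-63 run twice with nearly identical time steps; all values arbitrary):

        import numpy as np

        def integrate(dt, t_end=25.0):
            # Deliberately crude forward-Euler integration of Lorenz-63.
            s = np.array([1.0, 1.0, 1.0])
            for _ in range(int(t_end / dt)):
                x, y, z = s
                s = s + dt * np.array([10.0 * (y - x),
                                       x * (28.0 - z) - y,
                                       x * y - (8.0 / 3.0) * z])
            return s

        print(integrate(0.0050))   # two time steps differing by 2 percent ...
        print(integrate(0.0049))   # ... end in completely different states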

  791. Craig Loehle
    Posted Mar 18, 2008 at 10:02 AM | Permalink

    This discussion reminds me of the cartoon where the scientist is writing his theorem on the blackboard and in the middle it says “and then a miracle occurs” and the other scientist is saying “I think you need to be a little more explicit here”. AGW proponents aren’t bothered by that missing step, whereas many on this site would really like to see a little more detail at that “miracle occurs” step (and when it is uncovered it often doesn’t look so miraculous).

  792. Posted Mar 18, 2008 at 10:38 AM | Permalink

    As an addendum to Tom Vonk’s excellent summary, I would add a note as to why (IMO) Jerry keeps exasperatedly mentioning Sylvie Gravel’s manuscript.

    What Sylvie’s manuscript indicated was that the models at issue here don’t, somehow, miraculously, ‘get the right answer anyway’. When started from a typical set of initial conditions, her weather model (apparently a typical one and better than most) diverged pretty rapidly from reality. The only thing that was able to keep it on track was a continuous procedure of pumping new measurements into the system — a procedure that is obviously not possible for long-term climate models.

    Jerry has noted the problems, shown how they arise, and pointed to examples in the literature. Repeatedly. Reading these threads I continue to be confounded by the apparent assertion on the part of climate scientists that ensembles of climate models are able to skirt these difficulties — that everything comes out in the wash. As if averages and moments from a bunch of wrong answers taken from a completely unknown distribution could tell you anything about the right answer?

    Any enlightenment and all corrections appreciated.

  793. Sam Urbinto
    Posted Mar 18, 2008 at 11:09 AM | Permalink

    ‘Robust’, or ‘as robust as they can be’? Perhaps this is all simply a matter of the use of a word that means different things depending on the viewpoint.

  794. Gerald Browning
    Posted Mar 18, 2008 at 11:29 AM | Permalink

    Neil Haven (#794),

    Thank you for an excellent summary. Now if Judith would just either read the cited manuscripts or produce answers to the very specific mathematical questions I have asked her instead of extended verbiage that only serves to confuse the main issues …

    Jerry

  795. Gerald Browning
    Posted Mar 18, 2008 at 11:40 AM | Permalink

    Tom Vonk (#792),

    I truly believe that people like Judith either do not comprehend
    the mathematical arguments or do not want to admit that they are true because then they would not have anything to do.

    Jerry

  796. See - owe to Rich
    Posted Mar 18, 2008 at 4:08 PM | Permalink

    Re #760, Judith:

    Thank you. You’re getting more stick here than I think Steve M would like, so I’ll try to be more civil with my own supplementary questions.

    First, could you confirm my understanding that you have (at least) 6 free parameters:

    a. an intercept (base level for temperature)
    b. a W/m^2 to temperature conversion factor (sensitivity)
    c. an aerosol (=sulphate in Raven’s diagram) to W/m^2 conversion factor
    d. a CO2 to W/m^2 conversion factor (you forgot to mention this one!)
    e. a conversion factor for albedo changes from ice variation
    f. sub-gridscale parametrizations; these are rather worrying – are you saying that local cloud amounts are adjusted to get best temperature fit?

    Second, regarding melt ponds, I can see you find them fascinating to study, but are they not a second order effect (compared with albedo from overall area of ice)? And presumably the melting feedback follows a diminishing curve as snow-line latitude increases. Incidentally, though your model might well have helped predict the 2007 meltoff, are melt ponds really the reason for that? I have seen statements that the two principal reasons for the melting were excessive sunshine and excessive wind in directions blowing the icebergs out of the Arctic. If that is the case, we probably won’t see the same in 2008, unless Arctic summer conditions tend to recur. Thus off the top of my head I would predict a -2m.sq.km Arctic anomaly this summer rather than last year’s -3m.sq.km.

    Third, I don’t think you can properly claim 10**9 [ages since I saw FORTRAN!] degrees of freedom in the models. How can you measure degrees of freedom when there is so much spatial and temporal correlation? And for me, there is only one output variable I really care about, and that is global anomaly, though for extra marks an explanation of any transfer of heat/temperature between the hemispheres would be fine. So the number of internal and highly correlated states in the model does not really concern me except insofar as it may encourage over-fitting.

    Fourth, here is an invitation, to you or your research students. Can you try to find a way to remove all the sub-gridscale parameters and concentrate on a. to e. above? Then see if you can model 13 solar cycles’ worth of mean HadCRUT3 data better than I can in this article. It would be necessary for the aerosol data to use either satellite-verified data (how do they do that, is it spectroscopy?) or data inferred from known levels of, say, CO2, because of the limited number (13) of degrees of freedom.

    If we are going to step back to look at the forest, I would argue that a climate model that can explain temperatures on a decadal time scale (e.g. solar cycles 10-22 or 23), with a manageable set of parameters, is just what is needed. But perhaps it’s just not possible for a GCM?

    Rich.

  797. Scott-in-WA
    Posted Mar 18, 2008 at 6:30 PM | Permalink

    Neil Haven 794: “… Reading these threads I continue to be confounded by the apparent assertion on the part of climate scientists that ensembles of climate models are able to skirt these difficulties — that everything comes out in the wash.”

    In addition, there is the stubbornly-held belief among many climate scientists that performing successive runs of a simulation model is fully equivalent to collecting a physical set of field observations of a naturally occurring process.

  798. Gerald Browning
    Posted Mar 18, 2008 at 9:28 PM | Permalink

    Judith Curry (#760),

    It is very naive to think that there are only a few tunable knobs.
    And some of the knobs are more crucial than others, e.g. the release of latent heat is extremely crucial. And this knob is controlled by others that control the clouds and radiation and so on, ad nauseam. You might want to read my exchange with Jimy Dudhia on the Exponential Growth thread, where he admitted that there are hundreds of knobs that are tuned to produce graphical output. (Notice that I did not say to produce accurate approximations of reality.) And even in your simple example, there is a knob that must be randomly chosen. Please address the mathematical questions I asked on the Jablonowski thread. A few mathematical equations are all that is necessary to prove or disprove the “robustness” of a dynamical core. And if a core has problems, forget the rest of the game.

    Jerry

  799. Gerald Browning
    Posted Mar 18, 2008 at 9:32 PM | Permalink

    Scott-in-WA (#799)

    Yup.

    Jerry

  800. Jonathan Schafer
    Posted Mar 19, 2008 at 12:04 AM | Permalink

    #800,

    Dr. Browning et al,

    The GCMs seem to suffer from an enormous number of issues. I have tried to read and follow the posts on the exponential growth threads, the Jablonowski thread, and now this thread. I have a pretty good feel for the arguments against the GCMs.

    My questions, then, would be: what alternatives exist, if any, to the GCMs that can assist in estimating future potential climate conditions? How do we create said alternatives? Or would our limited resources be better utilized in other areas of study?

  801. Willis Eschenbach
    Posted Mar 19, 2008 at 2:43 AM | Permalink

    First, let me offer my thanks to all of the participants. Judith, wonderful to see you in the dialogue. My apologies for any out of bounds behavior on the part of the polloi. Jerry and Tom, thank you both for an excellent exposition on the infamously intractable Navier-Stokes equation. Any of you, please correct me if I misrepresent your position. Here’s my understanding of the situation.

    Jerry and Tom say, and in my opinion substantiate, that the N-S equations cannot be solved at high Reynolds numbers on large grid sizes. What we are left with, then, are approximations of various degrees of validity. None of them can be shown to converge to the continuous solution.

    This reliance on semi-heuristic approximations is unfortunately a common occurrence where systems contain turbulence, like the climate. To make these approximations, one has to do something like artificially adjust the viscosity to keep the approximation stable.

    Once this is done, however, such a system can be quite valuable in predicting upcoming weather. It gets it approximately right. Which is reasonable, since it is an approximation. Now: does such a system have a “dynamic core”?

    Both sides are right on this one. It has a dynamic core, that is to say, a central program whose job it is to handle the physics of the situation.

    But it does not have a dynamic core which can actually solve the relevant equations. Instead, it is using simplified approximations, so Jerry and Tom are right.

    In truth, however, it doesn’t matter. Either side could be right and the models are still junk. Someone above gave the reason:

    Parametrizations are used to include the effects of various processes. All modern AGCMs include parameterizations for:

    o cloud cover

    It is useful to consider the global climate as a heat engine, the output of which forces the motion of the atmosphere and ocean. Every heat engine has a throttle, which regulates the amount of energy entering the system. In an automobile, for example, the throttle is the gas pedal. In the climate system, the albedo is the throttle. And the main part of the albedo is clouds. A tiny change in the albedo changes the forcing more than a doubling of CO2.
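
    The arithmetic behind that last sentence is short (round numbers assumed here: a solar constant of about 1365 W/m^2 and roughly 3.7 W/m^2 for doubled CO2):

        S0 = 1365.0               # solar constant, W/m^2 (round number, assumed)
        d_albedo = 0.01           # a "tiny" one-point change in albedo
        print(S0 / 4 * d_albedo)  # ~3.4 W/m^2, comparable to doubled CO2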

    Try this claim on for size:

    “I’ve just built a computer model of the automobile. It models the whole combustion process, the entire engine, wheels, and everything. I’m going to use it to predict how fast the car will go under certain conditions … oh, and by the way, we have to parameterize the position of the gas pedal …”

    I mean, how nuts is that? If you parameterize the gas pedal, the car (ceteris paribus) will go at the speed you set the pedal for. Sure, you could tune the carburetor a little and increase the speed, but don’t let that fool you into thinking the carburetor controls the speed. As they say in politics … it’s the throttle, stupid.

    So the idea of a climate model with a parameterized throttle is … well … to me it’s as ludicrous as a car model with a parameterized gas pedal, although I suppose your mileage may vary.

    And that’s why the discussion of the dynamic core, while important in one sense, doesn’t matter a bit in another sense. Because if the clouds are parameterized, all bets are off.

    w.

  802. MrPete
    Posted Mar 19, 2008 at 3:04 AM | Permalink

    Interesting analogy, Willis. So, perhaps relevant questions include: does the climate have an automatic speed control, and how accurate is it, and how does it respond when going over hills and through valleys?

  803. Geoff Sherrington
    Posted Mar 19, 2008 at 3:38 AM | Permalink

    Re Gerald Browning 774
    Tom Vonk 792
    Neil Haven 794

    There might be small differences between your lessons but the thrust is similar to my earlier assertion that some specialists invoke N-S more as a bully-boy tactic than as a teleconnection with reality.

    I ask again of the modellers-
    1. What is the question?
    2. What is the answer that would satisfy the question?

    now, with an addition
    3. What criteria cause admission of defeat?

    There are many aspects of Life where knowing when to pull out is more important than not knowing. Like in illicit seduction.

  804. Jaye Bass
    Posted Mar 19, 2008 at 3:50 AM | Permalink

    because the property of chaos is self similarity what means that weather and the climate that it generates are both the same chaos with only different time scales (or technically at different frequency resolutions)

    Is that always the case? The Lorenz attractor is not self-similar. I suppose that if the attractor for a chaotic system were a fractal, then one could make the claim of self-similarity over a range of scales?

  805. Geoff Sherrington
    Posted Mar 19, 2008 at 3:51 AM | Permalink

    # 805 addendum

    And would you be able to predict the motion of the smoke coming from the traditional cigarette afterwards? For more than a few milliseconds? Why no, you would not even know the point of the compass to which it would head, without imposing some boundary conditions about which the cigarette might disagree.

  806. Willis Eschenbach
    Posted Mar 19, 2008 at 4:12 AM | Permalink

    MrPete, you say:

    Interesting analogy, Willis. So, perhaps relevant questions include: does the climate have an automatic speed control, and how accurate is it, and how does it respond when going over hills and through valleys?

    Yes, it does have an “automatic speed control”, which in heat engine terms is called the governor. It works like this:

    More temperature ==> more evaporation ==> more clouds ==> less incoming sunlight ==> less temperature

    Neat system, huh?
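
    In toy form (every coefficient below is invented; the only point is the sign of the feedback loop):

        T = 20.0                                  # start warm of the set point, deg C
        for _ in range(40):
            clouds = 0.4 + 0.01 * (T - 15.0)      # warmer -> more clouds (assumed)
            net = 240.0 * (1.0 - clouds) - 144.0  # more clouds -> less sun absorbed
            T += 0.05 * net                       # temperature follows the imbalance
        print(T)                                  # settles back near 15, no runaway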

    w.

  807. Tom Vonk
    Posted Mar 19, 2008 at 5:08 AM | Permalink

    Jaye Bass # 806

    Is that always the case? The Lorenz attractor is not self-similar. I suppose that if the attractor for a chaotic system were a fractal, then one could make the claim of self-similarity over a range of scales?

    No, it is not. Also don’t forget that attractors live in phase space, not necessarily in space-time.
    The issue is both important and relevant, but I am afraid we would go wildly off topic in this thread.
    Perhaps there will one day be a specific thread about chaos in climate.

  808. Tom Vonk
    Posted Mar 19, 2008 at 6:07 AM | Permalink

    Willis Eschenbach # 803

    Once this is done, however, such a system can be quite valuable in predicting upcoming weather. It gets it approximately right. Which is reasonable, since it is an approximation. Now: does such a system have a “dynamic core”?

    Both sides are right on this one. It has a dynamic core, that is to say, a central program whose job it is to handle the physics of the situation.

    But it does not have a dynamic core which can actually solve the relevant equations. Instead, it is using simplified approximations, so Jerry and Tom are right.

    In truth, however, it doesn’t matter. Either side could be right and the models are still junk. Someone above gave the reason:

    In principle you are right.
    Now imagine the climate problem as the sum of a very well defined problem (let’s call it the dynamical core) and a very badly defined, fuzzy problem (let’s call it the parametrization).
    This sum is a GCM.
    You will observe that, for obvious reasons, a well defined problem can be tackled much better mathematically and tolerates rather little hand waving.
    That explains why many people who have worked with this part of the problem (here Navier-Stokes) far prefer to analyse the well defined part rather than waste their time in fuzzy parametrization discussions where everything depends on everything and the impact of a particular parametrization on the whole system can’t be understood by anybody.
    It can only be computer simulated, so you are delivered hands and feet to whatever the millions of lines of code say.
    That’s the attitude: “If it is not outright crazy, we’ll take it for true.”

    I will illustrate with an example.
    Many years ago I was doing work that involved understanding how ice melts.
    So I looked up many papers (you’d be surprised how many people write papers about melting ice) and discovered that it is, in detail, a horribly complicated thing governed by very simple physics.
    For instance, particle deposits, their size, their properties, their distribution within the ice volume significantly alter the melt speeds.
    Then, on big pieces of ice, the melting is never homogeneous: first small pools form, then they grow bigger.
    As water absorbs radiation better than ice, these pools “bore holes” in the ice. And of course the pools are not nice spheres or cylinders; they have all kinds of geometries.
    If there is more wind than usual and the humidity varies, that also significantly alters the picture, and the night is not the same as the day.
    Etc., etc.
    The point being that when one makes a calculation taking into account all or most of the above, the difference in melt speed compared to simple “common sense” models is huge.
    Now obviously this cannot be solved within a GCM, so if you want to take it into account, you do a “parametrization”.
    A simplified one, of course, because you can’t solve everything.
    You’ll take an average here, assume homogeneity there, add a bit of isotropy and Gaussian noise. Routine stuff; there doesn’t seem to be anything crazy in the assumptions.

    Then you PLUG this thing into a GCM and the GCM gives a run different from the previous run where you had a primitive model.
    The differences may, but need not, be due to the parametrization; and how do you want to discuss that?
    How could you, or anybody for that matter, know what the millions of lines actually did?
    You will actually be staring at printouts, drawing charts, computing statistics, all of it with the fundamental assumption that what changed was due to the change in parametrization.
    How do you want to discuss or criticize that?
    It is impervious to any discussion; it is “believe the computer, it has physics inside.”
    It is not that a particular parametrization is junk in itself; you could surely analyze every particular piece and conclude that it looks reasonable. Not very accurate, but reasonable.
    The problems begin only when you PLUG it into a huge, complex, nonlinear system that may react in very unexpected ways that do not easily allow you to distinguish causes from effects.

    That’s why I prefer to stay on the N-S core only, and it already has enough problems by itself.

  809. MrPete
    Posted Mar 19, 2008 at 6:42 AM | Permalink

    Willis — yes, exactly. And so, I’m curious how well the GCM’s model this, as well as the other attributes. Everything I’ve been hearing suggests a model that does not incorporate sufficient “governor” feedback.

    I know I’m not being rigorously mathematical in my pictures. There are plenty of folk here who can do that 🙂 — some of us relish tying the math to pictures that can be visualized.

    Tom Vonk — appreciate your ice-melt illustration. My dad’s PhD long ago was working out the detailed math of dendritic (crystal) growth. Took 200 pages… and so you bring that memory full circle, to the vagaries of melting 😉

  810. welikerocks
    Posted Mar 19, 2008 at 6:58 AM | Permalink

    This is a picture of Los Angeles from space.
    http://gocalifornia.about.com/cs/photos1/l/bl_lastp_lalb.htm
    I don’t understand the math of these models at all, but I get the gist pretty much. And it’s not just the chaos and the turbulence and the clouds etc. etc. that I can’t picture being modeled “as robust” inside of a computer; it’s the lack of awe and wonder, or understanding of how big and old this planet is, that is missing. How big and old our atmosphere is, how vast the land is, the mountains, the oceans, and how “we” are really just tiny specks on the landscape who’ve barely even lived on this planet compared to the vast stretches of time in which all sorts of different things happened. The lack of understanding of geologic time … all this is lacking for me in the GCMs and “climate science”, if that makes any sense at all.

  811. Pat Keating
    Posted Mar 19, 2008 at 7:38 AM | Permalink

    812 rocks

    how “we” are really just tiny specks on the landscape

    The first time I took my younger daughter up in a small plane, she was astonished at how many trees there were, compared to houses (and this was in the area between Baltimore and DC!)

  812. Posted Mar 19, 2008 at 8:05 AM | Permalink

    Tom,
    Your quotes “if it is not outright crazy, we’ll take it as true” and “believe the computer, it has physics inside” are sadly hilarious. Thanks.
    It has been a lesson I’ve learned through pain and many late nights that gobs of mathematics, thousands upon thousands of lines of computer code, careful physics, apparently reasonable output, and all the wishing in the world do not insulate my models from being catastrophically wrong.

    Geoff,
    I believe your questions in #805 are the critical ones: We do not know what questions the climate modellers are trying to answer with any specificity; We do not know the criteria for correct answers; We do not know the criteria for wrong answers. The arguments all seem to be slapped together post hoc.

  813. welikerocks
    Posted Mar 19, 2008 at 8:48 AM | Permalink

    #813
    Pat, exactly. I am glad you understood what I mean. Driving for hours from Southern California to Northern California gives the same feeling about trees, mountains, large stretches of land, rock and water where not a soul or building is around at all. Many people I’ve talked to who accept the AGW urgency from the media and the consensus groups without question have not been given that experience or perspective; basically they have not traveled very far from home, which is hard to believe in this day and age, but it is true.

  814. MarkW
    Posted Mar 19, 2008 at 9:06 AM | Permalink

    Two years ago, my wife and I drove from Las Vegas to Iowa. A two-day drive, and with a few exceptions (Salt Lake City, Denver, Omaha), the land was almost totally vacant of human occupation the whole way.

    There were places where one could go for several hours without seeing a single man made structure. (Other than cars and the road of course.)

  815. welikerocks
    Posted Mar 19, 2008 at 10:31 AM | Permalink

    813 & 816

    And besides all that, there are the oceans: vast massive places, miles deep, that are unexplored territory. Like deep space, we can only send little probes down to collect information, and as in space we discover things we didn’t know existed that blow our minds or tell us something completely opposite of what we assumed was going on. There are thousands of vents and trenches, and cracks and faults down there.
    Bernie posted this link in Unthreaded too, and I’ll give it as an example here. I know the GCMs model the air, but our air isn’t a separate thing from the oceans (or any body of water, for that matter) in real earth climate, and we don’t know everything about our deep oceans; I have no idea how THAT is handled in a climate model either.

  816. welikerocks
    Posted Mar 19, 2008 at 10:49 AM | Permalink

    And I really apologize for sounding so naive or simple here. Keep in mind, I am still wrapping my head around that word “robust” and that “dynamic” rise of a fraction of a degree of temperature representing the whole globe inside a computer model. (And that article says something about a sea level rise of 4 inches? I don’t believe that is a possible thing to know like it is worded there) Sorry for being somewhat OT too.

  817. welikerocks
    Posted Mar 19, 2008 at 10:57 AM | Permalink

    Gads, my bad, it’s half an inch in four years, not what I said! Which makes it even worse; I don’t think anybody can know that like it’s worded there.

  818. Jonathan Schafer
    Posted Mar 19, 2008 at 12:33 PM | Permalink

    #818,

    And don’t forget, this isn’t the consistent result of running a computer program once and generating the results; it comes instead from running a number of simulations and then averaging the results, after throwing out solutions which seem improbable.

    In some respects, the GCMs do exactly what they are purported to do. But I believe you could come up with a similar result by taking all historical accounts of climate conditions we have on record, assigning probabilities to events occurring (e.g. ENSO, volcanoes, AMO/PDO flips, etc.), generating random numbers to determine whether each event occurs and its severity, running it X times, tossing out invalid-looking results, and averaging the remaining. You wouldn’t be any different from the GCMs, and you don’t even need to “model” the physics or try to solve the N-S equations on any phase space.
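
    In sketch form (all probabilities and effect sizes below are invented for illustration):

        import numpy as np
        rng = np.random.default_rng(1)

        def one_run(years=100):
            # Random-walk "climate" driven by made-up event odds and sizes.
            t = np.zeros(years)
            for y in range(1, years):
                enso = rng.normal(0.0, 0.2) if rng.random() < 0.3 else 0.0
                volcano = -0.3 if rng.random() < 0.05 else 0.0
                t[y] = t[y - 1] + 0.01 + enso + volcano  # drift plus events
            return t

        runs = np.array([one_run() for _ in range(100)])
        keep = runs[np.abs(runs[:, -1]) < 2.0]  # toss "invalid looking" results
        print(keep.mean(axis=0)[-1])            # ensemble average of the survivors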

  819. Gerald Browning
    Posted Mar 19, 2008 at 6:37 PM | Permalink

    Jonathan Schafer (#820),

    If you add a number of model results that are wrong and take an average, what do you get?

    You might read Pat Frank’s article that will appear this month in Skeptic (peer reviewed). It shows what a waste of computer resources the simulations have been. He obtains a better-verified statistical result than all of the simulations with a simple linear formula developed from some basic quality science. And the subsequent error bars for the model simulations are devastating.

    Jerry

  820. Willis Eschenbach
    Posted Mar 19, 2008 at 9:30 PM | Permalink

    MrPete, thanks for the question. You say:

    Willis — yes, exactly. And so, I’m curious how well the GCM’s model this [the cloud “governor”], as well as the other attributes. Everything I’ve been hearing suggests a model that does not incorporate sufficient “governor” feedback.

    Well, I don’t know the answer. However, in James Hansen’s GISS GCM (and most other models), cloud cover is adjusted until the albedo is correct. While this procedure gives the correct albedo, it gives the wrong cloud cover (59% in the model, vs. 69% in the real world). Because of this huge (~15%) error in cloud cover, and because the cloud cover is parameterized rather than calculated, I suspect that whatever cloud cover variations happen on Planet GISS, they don’t have much to do with what happens on Planet Earth. It’s one of the “tuning knobs” that Judith didn’t mention.

    Judith, you say:

    The bottom line is that in a model with order 10**9 degrees of freedom, there are really very few tuning knobs, and while aspects of the model can be sensitive to an individual tuning, there is no way you can tune these models to give the observed space/time variability of so many different variables.

    However, you neglect to specify how many “tuning knobs” there are … in their analysis of their own model (PDF), the GISS folks list the following parameterizations:

    • Gravity waves
    • Hygroscopic aerosol radiative properties
    • Outgoing TOA flux
    • Cloud scattering asymmetry
    • 3D cloud heterogeneity
    • Prognostic cloud optical properties
    • Melt pond extent
    • Mean cloud particle density distribution
    • Cloud overlap
    • Effects of foam and hydrosols on ocean albedo
    • Cumulus updraft mass flux
    • Cumulus downdraft mass flux (calculated, curiously, as 1/3 of updraft mass flux, whoa, that sounds scientific to me)
    • Cumulus downdraft entrainment rate
    • Maximum cumulus mass flux entrainment rate
    • Convective cloud cover
    • Stratiform cloud volume fraction
    • Cloud areal fraction
    • Cloud-top entrainment
    • Cloud droplet effective radius
    • Cloud droplet maximum effective radius
    • Cumulus updraft speed profiles
    • Cumulus updraft entraining plume speed profiles
    • Proportion of freezing rain vs. snow
    • Stratiform cloud formation location
    • Amount of water in liquid phase stratiform clouds
    • Evaporation (sublimation) of stratiform precipitating water droplets (ice crystals)
    • Precipitation attenuation rate
    • Threshold relative humidity for the initiation of ice and water clouds
    • Overall distribution of temperature, moisture, and scalar fluxes
    • Pressure/velocity and pressure/temperature calculations above the PBL
    • Radiation absorption in clouds
    • Heat and mass flux at the base of lake ice
    • Heat and mass flux at the base of sea ice (different parameterization)
    • Ice albedo
    • Nonlocal turbulent atmospheric mixing
    • ALL SUB-GRID PHENOMENA, such as thunderstorms (the main mechanism for moving warm tropical air aloft and one of the most important parts of the climate system)

    Note that these are by no means all of the parameterizations, just some of the ones they happen to discuss. They also do not include “external tunings” such as the selection of a particular time evolution for historical aerosols over some other historical data. As to whether these parameterizations constitute “tuning knobs”, they comment that:

    Further work is being done to improve the higher resolution simulation, which includes tuning of various parameterizations …

    So I would take great exception to Judith’s statement that there are “very few” tuning knobs, I have listed 35 without even trying, and on my planet that’s not “very few”.

    Judith says that “there is no way you can tune these models to give the observed space/time variability of so many different variables,” which is true. But given that that’s the case, why are they tuning the parameters at all?

    The truth is, they don’t care about most of the variables. They don’t care if the cloud cover is way below that of the real world. They are tuning for only a few of the 10^9 variables, the rest they ignore. Take precipitation. You’d think that would be important … but the GISS model overstates it by 15%. If they tuned for that, they could get it right … but then something else would be thrown off.

    Even Top of Atmosphere (TOA) cloud forcing is way out of scale. You’d think that they’d tune to get this right, since the TOA forcing is the most important quantity for the determination of the effect of 3.7 W/m2 of TOA forcing from CO2 doubling. But the GISS model is far off the rails on this one, with the model being only 60% of the reality. You are right, Judith, you can’t tune for all of the variables … but that doesn’t stop the tuning.

    Finally, let’s look at whether there is a scientific basis for the parameterizations. The modelers keep saying that the parameters can’t just be adjusted at will, they have to match the physical reality. Judith, you describe the process of parameterization of melt ponds on sea ice as follows:

    Current sea ice modules don’t explicitly model melt ponds, so they parameterize in some way the melting ice albedo, the simplest parameterization being setting the melting ice albedo to be constant. Now the constant is broadly constrained by observations, but could range from 0.3 to 0.56. How does a modeler select which value to use? Well there are a few other tunable parameters in sea ice models as well, so sensitivity tests are done across the plausible range of values, compared with observations, and then the parameters are selected. By the way, I have been arguing for an explicit melt pond parameterization in sea ice models, but a few years ago the NCAR climate modelers told me they didn’t want to use my parameterization since it would make the sea ice melt too quickly (maybe they would have predicted the meltoff in 2007 with my parameterization!)

    The GISS model uses an explicit sea ice melt pond parameterization such as the one Judith argues for. Is it better? Well, among other things, it specifies that melt ponds can only form during six months of the year in the Arctic, and the other six months of the year in the Antarctic. Now, we all know that there is no such rule in nature, nor anything even remotely like that. It just happens to work, although there is no way to tell how well it does in extreme years.

    Judith, you seem to find it scientific to pick the parameters based on “sensitivity tests” … but unfortunately, when someone says that the parameters are based on physical reality, that’s not what is usually meant. Usually, it means that the parameters have been set as near as possible to the unknown true physical value. It does not mean that the parameters are set to produce the desired results. Setting parameters based on results of a “sensitivity analysis” is just tuning, it has nothing to do with whether the parameter is set anywhere near its true value. And in fact, some parameters (like the six month ice melt window) have no physical counterpart at all, they’re just included because (in some very limited sense) they work.

    Kind of a roundabout answer to your question, MrPete, but it’s the best I can do. The true answer is until we can model the clouds, we can’t model the climate, because the clouds are the throttle of the global heat engine.

    w.

  821. Gerald Browning
    Posted Mar 19, 2008 at 11:35 PM | Permalink

    Willis,

    Nicely stated.

    Jerry

  822. Scott-in-WA
    Posted Mar 20, 2008 at 4:07 AM | Permalink

    Maybe I shouldn’t cloud this issue with another discussion parameter, but here goes: One may assume values for parameters prior to the model run. Is it then useful or necessary to follow—in some detail as the model is actually running—the effects of these choices on the assumed physical interactions of the assumed physical climate driving processes? After doing so—i.e. after following the course of these many interactions—is it safe to assume that we’ve then been “successful” in modeling some complex physical climate reality, even if it isn’t 100% perfect?

  823. Posted Mar 20, 2008 at 6:55 AM | Permalink

    Regarding several above

    Counting the degrees of freedom as order 10**9 is not the proper metric, IMO.

    That number refers primarily to the number of unknowns that are to be solved for by numerical solution of the discretized form of the continuous equations, i.e. the solutions of the FDEs. The vast majority of those values are not subject to being changed directly by tuning. To tune these directly would mean that a number determined by the tuning changes a number that should be determined by the underlying continuous equations. Many incorrect results can be obtained if these fundamental quantities are overridden by tuning. In fact I say it is not possible to tune these quantities. You can’t mess with the fundamental equations.

    There are at least seven quantities determined by the continuous equations for fluid flows: the density, three velocity components, the pressure, the energy content (internal, enthalpy, kinetic, potential, and/or sums of these), and the temperature. This number can easily be greater than seven, maybe on the order of ten or so, and maybe even larger for specific GCMs: tracking of scalar quantities, radiative energy transport, heat conduction, etc. All of these quantities must be determined for all cells/nodes/links/volumes/junctions that are used to represent the discrete equations over the complete climate system. The sum is an enormous number given the spatial extent of interest in GCM models. By far the majority of the order 10**9 degrees of freedom are determined by these quantities, and they cannot be tuned.
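
    For what it is worth, the order-of-magnitude arithmetic behind 10**9 can be sketched as follows (the grid sizes and variable counts are merely GCM-like guesses, not taken from any particular model):

        # Unknowns ~ horizontal cells x vertical levels x prognostic variables
        for nlon, nlat, nlev, nvars in [(360, 180, 26, 10), (1440, 720, 60, 15)]:
            print(nlon * nlat * nlev * nvars)  # ~1.7e7 at ~1 deg, ~9.3e8 at ~0.25 deg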

    A better metric, IMO, is to count the number of phenomena and processes that are tunable relative to either the total number of phenomena and processes modeled, or the number of quantities determined by the continuous PDEs and ODEs. This assumes that the simplified/appropriate continuous PDEs and ODEs do in fact provide a more nearly accurate accounting of physical phenomena and processes.

    If we look at the categories, or classifications, of equations and models that make up a typical approach to modeling of complex physical systems we might use the following.

    1. Fundamental basic model equations from continuum mechanics such as the Navier-Stokes for mass, momentum and energy conservation, heat conduction, radiative energy transport, chemical-reaction laws, Boltzmann, and many others. These include the constitutive equations for the behavior and properties of the associated materials; equation of state, thermo-physical and transport properties and basic material properties.

    2. Engineering models and empirical correlations of experimental data needed to close the basic model equations; turbulent fluid flow, heat transfer and friction factor correlations, mass exchange coefficients, for examples.

    3. Special purpose models for phenomena and processes that are too complex or insufficiently understood to model from basic principles.

    4. Models for phenomena and processes occurring in complex engineering equipment, if a physical system of interest includes hardware.

    While some of the parameterizations in GCMs might be counted as closure models (item 2 above), the primary need for parameterizations falls into item 3. Note that, generally, the model equations in Item 1 are those that cannot be tuned.

    If we look at the fluid motion models/equations as primarily a mechanism that distributes the energy content around the climate system, deep fidelity of these models is not a major requirement. Especially if the metric of focus is a solution meta-functional such as The Great Global Average Temperature Near the Surface (the Totem with The Most Mojo ever in the Entire History of The Planet), the distribution of the energy is not important; only an accurate accounting of gains and losses is needed. These latter are determined by the forcings and feedbacks that are generally represented by the parameterizations.

    With this view, the parameterizations are in fact the main glue for attempting to get the model to stick close to physical reality. That is, they are the dominant physical phenomena and processes of interest in Climate System Modeling. And as such the parameterizations are more important than the models based on the fundamental continuous equations for fluid motions. This, in my opinion, is not a good position in which to find oneself.

    So, if the above view is a close approximation to the situation relative to GCMs, the parameterizations that can be tuned in fact make up the majority of the model and are most certainly the most important parts of GCM models.

    That is, tuning can be, and is, carried out for almost all of the degrees of freedom in the model equations for physical phenomena and processes. And certainly for the more important parts of the modeling.

    Let me know if there are incorrectos in the above.

  824. MrPete
    Posted Mar 20, 2008 at 9:57 AM | Permalink

    I think I just heard Willis say:

    * Here are 35+ “parameters” that in theory could be drawn from physical reality and/or known physical properties
    * Unfortunately, that is not the case.
    * They are “tuned” so the model will produce a final outcome most closely matching the test data.
    * Nobody wants to admit this, so people keep quiet about the fact that parameters can diverge from reality by 100% or more

    The obvious response: this is enough for the models to form-fit an emperor’s body.

    My question: is there any way that known parameter-divergence can inform the final model CI?

  825. Tom Vonk
    Posted Mar 20, 2008 at 10:50 AM | Permalink

    Dan # 825

    If we look at the fluid motion models/equations as primarily a mechanism that distributes the energy content around the climate system, deep fidelity of these models is not a major requirement. Especially if the metric of focus is a solution meta-functional such as The Great Global Average Temperature Near the Surface (the Totem with The Most Mojo ever in the Entire History of The Planet), the distribution of the energy is not important; only an accurate accounting of gains and losses is needed. These latter are determined by the forcings and feedbacks that are generally represented by the parameterizations.

    Perfectly right, Dan.
    Actually that is the reason why just about anybody could build a global climate model with only 20 parameters that would run on a PC, and all of its predictions (obviously only 20 degrees of freedom, so not many predictions) would stay physical over large regions of variation of the parameters.
    I am pretty confident that by tuning only 3 degrees of freedom the model would predict extremely reasonable Totems and perhaps even a global warming.
    One of the degrees of freedom I’d keep free would please Jerry, because it would indeed be the dissipated power per volume, as this is something that is not explicitly tractable yet is absolutely paramount for the model to stay on track.
    I could of course write a long paper explaining why I use this particular parametrization, using the Kolmogorov theory, which contains some adjustable values.
    After all, the constructal theory achieved, with only 4 parameters if memory serves, the prediction of all major features of the atmospheric circulation, going even as far as correctly predicting the latitudes limiting the Hadley and the polar cells!
    This should not be interpreted as saying that the constructal theory is wrong, but only as showing that the climate system is very forgiving and allows even extremely simple models to say reasonable and physical things most of the time.
    It is not forgiving at all as far as correct predictions are concerned.

    So, yes, the conservation laws and the low variability of some very important inputs (solar, for example) constrain the system enough that it is not necessary to go “WOW! Despite 5 million lines of code the thing is able to provide figures over 50 years without going nuts and unphysical.”
    To avoid misunderstandings: the complex GCMs do of course much more than simple models, and the question is not the validity of every part of them but the relevance of the whole method for long term predictions.
    By using the oldest scientific method (extrapolation) I can make a statement that will be true for the next 100 years with an estimate of “extremely likely” (IPCC terminology):

    “The global yearly temperature average for every year between 2008 and 2100 will be 15°C +/- 15%.”
    I need only explain 2 numbers to justify the model.
    And it is superior to the IPCC because it also explains a global cooling, should one by any chance happen 🙂

  826. jae
    Posted Mar 20, 2008 at 10:58 AM | Permalink

    825, Dan: If I understand correctly what you are saying, then what is the purpose of all the swirling calculations? Do they just add mystery and pizzazz to the models? It seems you could devise a much simpler model that just uses the parameterizations?

  827. Jonathan Schafer
    Posted Mar 20, 2008 at 12:42 PM | Permalink

    #821

    Dr. Browning

    If you add a number of model results that are wrong and take an average, what do you get?

    I can think of a couple answers to your question.

    1. From a statistical point of view, you get a result to which no significance can be attached. The fact that it may be similar to real-world observations would be no better than random chance if it weren’t for the tunable parameters.
    2. From a non-statistical point of view, you get output which may resemble real-world observations in the short term.

    Otherwise, I see little purpose in running them.

  828. Gerald Browning
    Posted Mar 20, 2008 at 3:46 PM | Permalink

    Mr Pete (#826),

    I think your summary of Willis’ comment shows that you understood it exactly as he intended. 🙂

    Jerry

  829. Gerald Browning
    Posted Mar 20, 2008 at 4:15 PM | Permalink

    Jonathan Schafer (#829),

    We know that, compared to observations, numerical weather prediction models and the CAM3 atmospheric component of the NCAR climate model go astray very quickly.
    In the former case the new infusion of observational data every 6-12 hours keeps the inaccurate parameterizations from destroying the accuracy relative to observations over time. Now the atmospheric component of a climate model uses coarser resolution than, say, the ECMWF model, and the parameterizations are more crude. And the ocean component of a climate model does not resolve essential currents in the ocean such as the Gulf Stream. So all of the basic rules of numerical analysis are violated, i.e. the dynamical core is not close to the correct solution and the parameterizations are not close to the real physics (or else there wouldn’t be so many different versions). Without these basic rules being satisfied, there is no scientifically valid argument that states that a climate model is close to reality. End of case. 🙂

    Jerry

  830. Gerald Browning
    Posted Mar 20, 2008 at 4:20 PM | Permalink

    jae (#828),

    That is exactly the point of Pat Frank’s new manuscript in Skeptic.
    If you read it I think you will be quite amused.

    Jerry

  831. Gerald Browning
    Posted Mar 20, 2008 at 4:29 PM | Permalink

    Dan Hughes (#825),

    Counting the degrees of freedom as order 10**9 is not the proper metric, IMO.

    Exactly. Is somebody trying to pull the wool over our eyes? 🙂

    Jerry

  832. Sam Urbinto
    Posted Mar 20, 2008 at 5:02 PM | Permalink

    Maybe it’s 2^50

    By the way, they want some help on the PCA article over on Wikipedia; it says it appears to contradict itself.

  833. Stan Palmer
    Posted Mar 21, 2008 at 8:32 AM | Permalink

    re 825

    If we look at the fluid motion models/equations as primarily a mechanism that distributes the energy content around the climate system, deep fidelity of these models is not a major requirement. Especially if the metric of focus is a solution meta-functional such as The Great Global Average Temperature Near the Surface (the Totem with The Most Mojo ever in the Entire History of The Planet), the distribution of the energy is not important; only an accurate accounting of gains and losses is needed. These latter are determined by the forcings and feedbacks that are generally represented by the parameterizations.

    Would you please clarify this for us laymen? Newspaper reports indicate that GCM modelers now have confidence in their estimates for gross warming and are now moving on to modeling specific regional effects. There are constant references to droughts caused by global warming, etc. Wouldn’t deep fidelity be needed for these sorts of predictions?

  834. Posted Mar 21, 2008 at 9:10 AM | Permalink

    re: #828 and 835

    I did not mean to imply that just any motions will be close enough; rather, I mean that rough approximations will be OK. Just look at how ‘theoretical’ understanding of climate change is based to a large extent on radiative-equilibrium balance constructs. Generally, fluid motions, relative to the radiative-equilibrium balance, must basically transport the energy from the regions at higher temperatures to lower-temperature regions. A pure radiative-equilibrium approach, of course, doesn’t account for any fluid motions. These planet-wide-scale fluid motions are fundamentally determined by the inherent nature of fluid motions on a rotating sphere, along with the constraints set by the locations of the large land masses on the surface.

    As for ‘predictions’ of regional effects, I have not yet seen any noteworthy accomplishments along these lines. Additionally, droughts, for example, generally have limited temporal extent. I have seen practically nothing in the way of ‘predictions’ of both the location and time-scale of any regional effects. Several regional-level extreme climate events have occurred since the IPCC started working on the problem. I am not aware that any of these have been projected/predicted relative to either spatial location or time-scale span. There is an active discussion, and a lack of agreement, on exactly the time-span over which the IPCC projections are valid. If 20-30 year averages are necessary, as some argue, many regional effects can and will be easily missed.

    I suggest that you look into Professor Pielke Sr.’s investigations into the validity of regional projections/predictions by use of GCMs.

    I would like to repeat the main focus of my previous comment. Tuning is applied to the majority of the models of the important/dominant physical phenomena and processes in GCMs. And order 10**9 has nothing to do with the issue.

    All comments and corrections are appreciated.

  835. Gerald Browning
    Posted Mar 21, 2008 at 11:28 AM | Permalink

    Stan Palmer,

    Have you heard the phrase, “Don’t believe everything you read in the newspapers”? 🙂

    Jerry

  836. Mat S Martinez
    Posted Mar 21, 2008 at 12:07 PM | Permalink

    I see that many people here have earned their PhD (Player hater Degree). So many people here are “half-empty types”… it gets kinda depressing to read often. Where are bender and mosher with some quippy remarks???

    If the models are all so worthless and such a waste of time, then isn’t it ironic that so many people here are taking so much time to talk about them?

    Given the number of parameters that are in a model, there are many possible solutions that will keep it “well-behaved” or appearing physical, i.e., not blowing up. If one takes into account all the different backgrounds, person-types, etc. of the people who are developing models, then many different models result, depending upon how a team chooses to parameterize different physical processes. There are the American, European, Japanese, etc. groups – which are again sub-divided. So there are many different flavors of models that do not always agree on precip patterns, ENSO, etc. However, they do all agree that an increase in greenhouse gases will increase temperatures across the planet. I find that conclusion hard to dispute unless they are all crap (which many people here seem to say).

    And since there are so many smart people here, I suggest that they come up with a better solution for prediction as opposed to just throwing tomatoes. Much more constructive and more fun to read.

  837. Pat Keating
    Posted Mar 21, 2008 at 12:58 PM | Permalink

    838 Mat
    If the models weren’t being used by politicians and other folks with an agenda to disrupt people’s lives and countries’ economies, there would be no tomatoes thrown. Since they are, it becomes very important to look at them critically.

    The modelers form a much less diverse group than you seem to believe.

  838. Not sure
    Posted Mar 21, 2008 at 1:11 PM | Permalink

    They “all agree that an increase in greenhouse gases will increase temperatures across the planet”, because they’ve been tuned to agree in that way. That’s not exactly hard to see. Unless you’re deliberately looking the other way as hard as you can.

  839. Gerald Machnee
    Posted Mar 21, 2008 at 1:11 PM | Permalink

    Re #838 : **However, they do all agree that an increase in greenhouse gases will increase temperatures across the planet.**
    They all agree because they were designed to agree on that part. If General Motors wants us to buy the GMC, they had better design it so we like it (agree with it).

  840. Mat S Martinez
    Posted Mar 21, 2008 at 1:30 PM | Permalink

    #840 and 841

    Are you 100% sure and have you seen the proof that the models are ALL “Tuned” to get a warming with an increase in greenhouse gases?

    #839

    The way in which the models handle physical processes is indeed diverse. The people all have similar qualities in that they are climate modelers – which means that they are not diverse in a certain sense. But I agree, there probably aren’t too many modelers that would do well on American Idol.

  841. MrPete
    Posted Mar 21, 2008 at 3:17 PM | Permalink

    Mat, yes they are all tuned that way. It’s pretty simple to see why.

    AFAIK (someone correct me if I’ve misunderstood), the models share certain attributes:
    * CO2 trend is one of many tunable parameters
    * Proper characterization of other real-world attributes such as the sun, clouds, etc. is lacking: they are NOT properly modeled or are even left out of the model
    * Models are calibrated against current best-guess temperature history

    Put that together and the outcome is a given: the model will be fit to the current warming, and will adjust based on CO2 change and other parameters. With so many tunable parameters,

    * you are essentially guaranteed a “fit”, independent of whether physical reality is connected to CO2 or not. And,

    * you are essentially guaranteed that sun, clouds, etc “do not matter” because they are not really part of the model

    The problem is: the models do not stay accurate as time marches on. And it’s pretty obvious why: they cannot do so because they are not actually based on physical reality. They’re only calibrated to the temperature for a period of time.

  842. Gerald Machnee
    Posted Mar 21, 2008 at 4:21 PM | Permalink

    Re #842: Here is a challenge – Find one that is not tuned to show a temperature increase.

  843. Mike Davis
    Posted Mar 21, 2008 at 4:47 PM | Permalink

    842:
    If you cannot predict the solar cycles, and if you cannot predict the ocean oscillations or the atmospheric oscillations, you will not be able to predict the weather, and therefore you cannot predict climate. Anyone who says they can predict the climate, or even the temperature, next year or a hundred years from now is either taking drugs or delusional.

  844. Mike Davis
    Posted Mar 21, 2008 at 4:59 PM | Permalink

    842:
    One other item of interest: if, per NOAA, the current accuracy of the weather stations in use for local temperature is +/- 2°C, how can anyone advise us with any accuracy that the temperature has gone up 0.7°C when that change is inside the error bounds? NOAA also states that the temperature reading is for an area 5 ft by 5 ft; 10 miles away it could be +/- 5°C.

  845. Mat S Martinez
    Posted Mar 22, 2008 at 10:05 AM | Permalink

    #843-846

    Given the numerous parameters listed by Willis in post #822, people here are telling me that in every model the modelers tweak ALL of these parameters such that they ALL get an increase in temperature given an increase in CO2? Having been an engineer for many years, I find that hard to believe. The simplest answer would be that the finding of a temperature increase is robust. But it seems like people think that an X-Files-like conspiracy is going on and that the modelers all have some hidden agenda. That seems pretty unlikely. Sorry for being so blunt.

  846. Mat S Martinez
    Posted Mar 22, 2008 at 10:08 AM | Permalink

    #843

    I’m sorry, but I have to ask. You say

    * you are essentially guaranteed a “fit”, independent of whether physical reality is connected to CO2 or not. And,
    * you are essentially guaranteed that sun, clouds, etc “do not matter” because they are not really part of the model

    Who guarantees that? Best Buy? Or wherever you bought the model???

  847. Craig Loehle
    Posted Mar 22, 2008 at 1:55 PM | Permalink

    Mat Martinez: you think people here are just throwing tomatoes for no reason. I think everyone would agree that the models are pretty impressive. That does NOT mean that they are capable of highly precise predictions. They all share a common temperature history (the 20th century) against which they have been tuned. Any model that fails to fit this common history (say, gets cooler instead of warmer over the past 30 yrs) will be called “unphysical” and retuned until it looks better. This is not a conspiracy. It is a lack of calibration data. The problem is the high certainty claimed for the model outputs, when the way they are built makes this unlikely. To say they all predict warming when there are known issues with the models does not inspire confidence. Furthermore, the key question is not qualitative but quantitative: HOW MUCH WARMING? If it is only 1 deg, maybe we don’t need to panic.

  848. Carrick
    Posted Mar 22, 2008 at 2:07 PM | Permalink

    Mat, as I understand it, most of the knob turning has to do with matching the temperature variation in the first half of the 20th century.
    However, there is one important knob left over: the sensitivity of temperature to CO2. That isn’t fixed a priori, because doing so would involve physics that aren’t included in the current global climate models.

    Prior to 1970, CO2 increase was in approximate balance with sulfate emissions in terms of its effect on global mean temperature. (In particular, according to the climate models, CO2 is not responsible for the rapid temperature increase seen from 1910-1940.) So getting the “right” temperature variation 1970-2000 involves tweaking one knob, and any variable that is approximately linearly increasing in time, with that one adjustment, can explain all of the temperature variation seen in that period.

    If you look carefully, you’ll note that the temperature series “topped out” around 2003. That isn’t explained by a model in which all of the temperature increase from 1970-2000 was due to CO2, but it could be explained if the increase in temperature were partially (e.g., 50%) due to natural factors that enhanced the apparently anthropogenic CO2 driving, with the knob for CO2 sensitivity correspondingly reduced.

    That latter adjustment is important, because lowering the CO2 sensitivity translates into a commensurate reduction in the impact of, e.g., a doubling of CO2 on the climate.
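    To make the one-knob point concrete, here is a minimal sketch (entirely synthetic numbers of my own, not output from any GCM): over a window where temperature rises roughly linearly, any roughly linear “forcing” series can be scaled by a single least-squares knob to “explain” it about equally well.

```python
# Synthetic illustration: one scale knob fits ANY roughly linear "forcing"
# to a roughly linear temperature trend about equally well.
import numpy as np

t = np.arange(31)                                        # years 1970..2000
rng = np.random.default_rng(0)
temp = 0.017 * t + 0.05 * rng.standard_normal(t.size)    # synthetic trend, deg C

candidates = {
    "CO2-like log ramp": np.log(340.0 + 1.6 * t),        # hypothetical CO2 curve
    "arbitrary ramp":    0.3 * t + 2.0,                  # any other linear-ish series
}
for name, f in candidates.items():
    f0 = f - f.mean()
    y0 = temp - temp.mean()
    knob = (f0 @ y0) / (f0 @ f0)                         # least-squares scale factor
    print(f"{name:18s} residual std = {np.std(y0 - knob * f0):.3f} C")
# Both residuals come out nearly identical: the fit alone cannot tell
# the two "explanations" apart.
```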

  849. Sam Urbinto
    Posted Mar 22, 2008 at 2:31 PM | Permalink

    I think everyone reading here sometimes forgets something: the main point. We aren’t lying when we say we want the truth. If it turns out the truth is warmer than predicted, we want to know it. Sure. Hand-waving and obfuscating and changing subjects and insults about being jesters and fools and deniers don’t always give the listener the feeling they are getting the real deal. So who would be surprised if some of us ask, “But how do you know that? What’s your rational explanation of your certainty on this issue?”

    Would I love to know if red is going to come up next spin of the wheel? Of course. But it’s a simple proposition:

    “You want me to put a billion dollars on 14. How do I know your system of guessing which of the 38 numbers is going to come up will work?”

    So if you don’t answer, maybe I’ll test it out with a million on 14 15 17 18 with another million on black. Or maybe I won’t listen to you at all.

    If only I could see the next spin ahead of time.

    “But the models all say 14!”

    Yes, but over time, every dollar you bet loses you 5.3 cents in the long term.

    Spin. Ploink.

    00
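    For anyone who wants to check Sam’s 5.3 cents, the arithmetic for an American wheel (38 pockets, a $1 straight-up bet paying 35:1) is a one-line expected-value calculation; the snippet below is just that check, nothing more.

```python
# Expected value of a $1 straight-up bet on an American roulette wheel.
p_win = 1 / 38                         # 38 pockets: 1-36, 0, 00
ev = p_win * 35 + (1 - p_win) * (-1)   # a win pays 35:1, a loss forfeits the dollar
print(f"EV per $1: {ev:.4f}")          # -0.0526, i.e. about 5.3 cents lost per dollar
```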

  850. MrPete
    Posted Mar 22, 2008 at 4:20 PM | Permalink

    Mat, Craig explained it pretty well. I hope you understand. If you have any understanding of “fitting” then this should make sense.

    Put even more simply (horrible generalization coming):

    * The right way: build models from a physical understanding of the world. An independent (“blind”) test can check how well the model fits reality.

    * The wrong way: build models with many parameters, and tune them (with eyes open) to best fit a set of data.

    The problem with the wrong way: given a few tunable parameters, the model can be made to fit any data at all. And there’s no reason to believe the model has any physical meaning.

    This is not a slam against GCM’s. It’s a dig at bad models, period.
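    A toy demonstration of the difference (my own construction; the “model” is just a polynomial, standing in for any many-knob fit): tune enough parameters against a calibration window and the in-sample fit looks great, while held-out data exposes it.

```python
# Toy "many knobs" fit: great in-sample, poor on held-out data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

train, test = slice(0, 14), slice(14, 20)           # calibrate on first 14 points
for degree in (2, 9):                               # few knobs vs. many knobs
    coef = np.polyfit(x[train], y[train], degree)
    err_in = np.std(y[train] - np.polyval(coef, x[train]))
    err_out = np.std(y[test] - np.polyval(coef, x[test]))
    print(f"degree {degree}: in-sample err {err_in:.2f}, held-out err {err_out:.2f}")
# The 9-parameter fit wins in-sample and loses badly out-of-sample.
```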

  851. Sam Urbinto
    Posted Mar 22, 2008 at 4:26 PM | Permalink

    I built a model the other day of the point at which water freezes, and once I finished with it, don’t you know, it turned out that water freezes at exactly 0.000000000000000 centigrade, regardless of impurity levels in the water, surrounding temperature, or altitude.

    Computer programs are a wonderful thing. They always get everything exactly correct no matter what.

  852. Judith Curry
    Posted Mar 22, 2008 at 5:35 PM | Permalink

    Willis et al., the parameterizations are physically based, either by their approximation to known physical relationships (equations) or through observational constraints. To continue with the sea ice albedo example, you state:

    The GISS model uses an explicit sea ice melt pond parameterization such as the one Judith argues for. Is it better? Well, among other things, it specifies that melt ponds can only form during six months of the year in the Arctic, and the other six months of the year in the Antarctic. Now, we all know that there is no such rule in nature, nor anything even remotely like that. It just happens to work, although there is no way to tell how well it does in extreme years.

    In the Arctic, melt ponds are observed to form at the end of May/early June, and freeze up in Aug/Sept, typically a 2-3 month period (well within the 6 month window). Now of course we don’t want to “tune” the parameterizations to current conditions, and not allow for really different things to happen in an altered climate regime. However, having melt ponds form on arctic sea ice during the winter half of the year is essentially impossible under any scenario. You need several ingredients for melt ponds to form on sea ice: surface temperature at the melting point, and a net positive surface heat flux. The net positive surface heat flux simply isn’t going to happen during the polar night (when there is little to no solar radiation). If the surface temperature were near the melting point in winter, it is next to impossible to imagine how this situation could support sea ice: if sea ice is forming, then the surface will rapidly cool; if there is no sea ice (the more likely scenario in winter if the temp is near the melting/freezing point), then melt ponds are a moot issue and irrelevant.

    Each one of the parameterizations Willis listed for the GISS model has a narrow range over which the parameterizations can defensibly be varied, in terms of observational constraints and the relevant physics. No time to go through each, but different models have different treatments of the parameterizations/parameters. Take for example cloud particle effective radius. A climate model has two choices: 1) calculate the effective radius using prognostic equation for the cloud water mixing ratio and cloud particle concentration (determined through interaction with aerosol), along with an analytically derived cloud particle size spectrum shape; or 2) specify the effective radius as a function of something like cloud temperature, cloud height, and location over land or ocean. Now #1 is more physically based and allows for additional feedbacks, whereas #2 is more “tunable”, although it does have observational constraints. Models are working towards parameterizations like #1 which are physically based rather than empirically based parameterizations like #2. #1 is still a parameterization, since we don’t physically treat each cloud drop in a climate model, but rather a distribution of cloud particles over cloud elements that vary vertically and horizontally within a model grid element.

    So the point is that these parameterizations can be quite complex, the best models are using physically based and observationally constrained parameterizations and relationships such as #1. This is also why multiple different climate models are used, to include a range of different parameterization types.
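    To make the two options concrete, here is a rough sketch (the functional forms and constants are illustrative stand-ins of mine, not any model’s actual scheme):

```python
# Illustrative contrast of the two effective-radius options described above.
import numpy as np

RHO_W = 1000.0  # density of liquid water, kg/m^3

def r_eff_prognostic(q_c, n_c, rho_air, k=1.1):
    """Option 1 (sketch): derive r_eff from the prognosed cloud water mixing
    ratio q_c (kg/kg) and droplet number concentration n_c (1/m^3) via the
    volume-mean radius, scaled by an assumed spectral shape factor k."""
    r_vol = (3.0 * rho_air * q_c / (4.0 * np.pi * RHO_W * n_c)) ** (1.0 / 3.0)
    return k * r_vol

def r_eff_diagnostic(temp_k, over_land):
    """Option 2 (sketch): specify r_eff empirically; these numbers are
    placeholders standing in for an observationally tuned relationship."""
    base = 6e-6 if over_land else 10e-6          # continental vs maritime, metres
    return base + 2e-8 * (temp_k - 273.15)       # mild temperature dependence

print(r_eff_prognostic(q_c=3e-4, n_c=1e8, rho_air=1.0))   # ~9 microns
print(r_eff_diagnostic(temp_k=268.0, over_land=False))    # ~10 microns
```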

    The relevance of the 10**9 degrees of freedom is that there is no way you can tweak 10**2 to 10**3 parameters and get a sensible 4-dimensional simulation (in time, lat, lon, vertical). The 20th century simulations have many observational constraints that they need to meet: not just surface temperature, but in the satellite era there is a plethora of observations of cloud properties, temperature and humidity profiles, ocean surface winds, precipitation, top of atmosphere radiative fluxes, etc. against which the models are evaluated. Here is an example of community evaluation of the atmospheric component of the climate models, from the IPCC report:

    Click to access ar4-wg1-chapter8.pdf
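    As a rough, purely illustrative counting of how an order-10**9 figure can arise (the grid dimensions and field counts below are my own nominal choices, not Judith’s):

$$
\underbrace{144 \times 90}_{2.5^{\circ}\times 2^{\circ}\ \text{grid}} \times \underbrace{26}_{\text{levels}} \times \underbrace{6}_{\text{prognostic fields}} \approx 2\times 10^{6}\ \text{values per snapshot}, \qquad 2\times 10^{6} \times \underbrace{5\times 10^{2}}_{\text{stored time samples}} \approx 10^{9}.
$$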

    And finally, in Willis’ list, the TOA outgoing radiation flux is NOT a parameterization or a tuning knob. This is calculated using the model’s radiative transfer code with the simulated temperature, humidity, and cloud properties, plus the specified CO2 and other gaseous concentrations. The models all try to match the TOA radiative fluxes observed from satellite, and some do some tuning of cirrus cloud effective radius to try to match the space-time variation of the satellite observations (this is eliminated as a tuning option if you do a parameterization of type #1). But in any event, the TOA radiation flux itself is not explicitly tuned.

    FYI there is an interesting draft report available from the US Climate Change Science Program of the Synthesis and Assessment Product #3.1 Climate Models: Assessment of Strengths and Limitations
    http://www.climatescience.gov/Library/sap/sap3-1/public-review-draft/default.htm

  853. Judith Curry
    Posted Mar 22, 2008 at 5:57 PM | Permalink

    I just spotted a paper that provides a good overview of how an individual model is tested and evaluated in the context of the model development. http://www.dvgu.ru/meteo/library/00160123.pdf
    You can see that this is not just a mindless tuning exercise, designed to give a warmer climate.

  854. steven mosher
    Posted Mar 22, 2008 at 6:41 PM | Permalink

    re 854.

    Thanks Judith. That detail was fascinating. Lucia, you remember her, has been trying to get to the bottom of some IPCC “forecasts”. Can you point to the global temp estimate (time series) for the NCAR IPCC submissions? A simple chart of anomaly versus time would be nice.

    Surely you must have this at your fingertips.

  855. Geoff Sherrington
    Posted Mar 22, 2008 at 7:14 PM | Permalink

    Re # 855 Judith Curry

    I spotted a spaghetti graph of model result comparisons. It was hopeless. Can it get better?

  856. MrPete
    Posted Mar 22, 2008 at 7:19 PM | Permalink

    Judith – no suggestion that tuning is “designed to give a warmer climate”. To me, the key question relates to what the “tuning fork” is, so to speak:

    a) Each parameter is tuned to best match its own physical basis, without regard to the model’s final outcome

    b) Parameters are tuned within a range, to optimize the final output’s match to measurements (temp or other attribute)

    c) Something entirely different (if so, what is it)

    There’s a huge difference between a) and b).

    Will read up on your papers as time becomes available…

  857. jae
    Posted Mar 22, 2008 at 7:20 PM | Permalink

    Judith: I know very little about GCMs, but this beginning statement makes me think that nobody else does, either:

    Many centres have developed GCMs for studying
    climate. Isolating the causes of differences between
    simulations with different models is difficult because they
    use a wide range of formulations. For example many use
    spectral discretization, others use grid points, some use
    Eulerian and others semi-Lagrangian dynamics. They
    also use different physical parametrization schemes and
    often use different experimental design.

  858. steven mosher
    Posted Mar 22, 2008 at 8:41 PM | Permalink

    RE 855. I’ve spent an hour searching on NCAR for a simple thing.

    We have all seen the land-sea global HadCRU anomaly: the global mean temp from 1850 to today.
    GISS has one from 1880 to today.

    First question: point me to the data where the NCAR model hindcasts the GMST from 1850 to today.

    A vector of annual global mean surface temps. 1850 to present.

    So like 150 or so numbers.

    Simple test.

  859. Posted Mar 22, 2008 at 10:05 PM | Permalink

    Re #855 Hi Judith and thanks for the reference to the paper. GCMs are not an interest of mine but a good overview paper is always worth a read.

    You offered it as an illustration of methodology and not for discussion of the particulars of the model but I’d like to mention two (of many) model aspects which I found to be interesting:

    If I read the paper correctly then the model’s performance was evaluated against the details of the 1979-1988 atmosphere (AMIP), which I take it is a standard evaluation period for models. That, I presume, allows for testing against the 1989-2007 atmosphere. I hope they publish those results.

    Secondly, I am intrigued by this statement in the Conclusions section:

    Overall, HadAM3 produces a good simulation of
    current climate when forced with observed sea surface
    temperatures. In coupled mode (Gordon et al. 1999) it is
    able to run without flux adjustments, a significant
    advance on the previous generation of coupled models.

    That sounds like faint praise for something that’s designed to look 100 years into the future. I presume that its performance has continued to advance since 2000, and that today it runs as a no-flux-adjustment model and performs well without prescribed SSTs.

  860. gb
    Posted Mar 23, 2008 at 2:11 AM | Permalink

    Hi,

    In addition to Judith Curry’s references, this one is also interesting:

    http://www.agu.org/pubs/crossref/2008/2007JD008972.shtml

    ‘Performance metrics for climate models’

  861. MrPete
    Posted Mar 23, 2008 at 4:32 AM | Permalink

    Judith,

    Let’s distinguish “mindless tuning” and “sophisticated tuning.” Certainly, brilliant people are involved in these studies. I think you’d agree, however, that intelligence doesn’t automagically translate into brilliant outcomes. We’re all capable of being fooled by the complexity of our efforts. That’s why I appreciate Kernighan’s aphorism, which applies to many situations: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” I have quite a bit of experience in diagnosing serious issues in complex systems that I’ve never seen before.***

    Yes, that’s an interesting overview paper. It reads a bit like a (complex) “Latest Changes” software update in my field… which has the strength of providing detail on what has improved lately, and the weakness of not summarizing the important overall model characteristics.

    David Smith’s question highlights one interesting aspect, also brought to light in an earlier statement in the paper:

    Globally, both HadAM2b and HadAM3 are too cold near the surface. This is illustrated in maps of 1.5 m temperatures for djf in Fig. 14, and in a bar chart of mean and rms. errors over land for different regions and seasons in Fig. 15. Note that we focus on land only, since the temperatures will be strongly constrained by the prescribed SST over the sea.

    A cursory glance at Fig 14 shows that their measure of mean/rms error is in minimum two-degree buckets — with “error” of six or more degrees quite common.

    Perhaps the following is simply such common knowledge that “every” climate-model scientist knows the answers, but it seems helpful, if only from a documentation perspective, to maintain a table showing the “innies” and “outies” of each model, where rows are parameters, columns are models, and cells contain categories along the lines displayed below. To have such a table for the initial conditions of the models, as well as their final values, would be very informative from a variety of perspectives.

    Information such as this has been invaluable to teams I’ve worked with in a variety of fields, to better understand appropriate test methods for complex systems.

    I0.0 – Input, from first principles/physics/chem/etc. Any physical constants used are well known/unchanged.
    I0.1 – Input, from physics/chem/etc, constants have been refined/updated in last decade
    I0.2 – Input, from physics/chem/etc, constants have been refined/updated in last year
    I1.0 – Input, geospatial data, time invariant, past data is well known/unchanged (e.g. general land boundaries)
    I1.1 – Input, geospatial data, time invariant, past data has been modified/refined/updated in last decade
    I1.2 – Input, geospatial data, time invariant, past data has been modified/refined/updated in last year
    I2.0 – Input, geospatialtemporal data, past data is well known/unchanged (e.g. river delta geography)
    I2.1 – Input, geospatialtemporal data, past data modified/refined/updated in last decade
    I2.2 – Input, geospatialtemporal data, past data modified/refined/updated in last year
    I3.0 – Input, temporal data, geospatially invariant, unchanged
    I3.1 – Input, temporal data, geospatially invariant, modified/refined/updated last decade
    I3.2 – Input, temporal data, geospatially invariant, modified/refined/updated last year
    I4.0 – Input, estimated global constant, unchanged
    I4.1 – Input, estimated global constant, modified/refined/updated last decade
    I4.2 – Input, estimated global constant, modified/refined/updated last year
    O1 – Output, geospatialtemporal data
    O2 – Output, geospatial data, time invariant
    O3 – Output, temporal data, geospatially invariant
    O4 – Output, global constant
    NA – Not a result / Does not apply

    Obviously, a beginning would be to identify the “I” and “O” parameters, then refine to “Ix” and “Ox” level, and finally to introduce the “.x” level of detail.

    For example, interpolating from the referenced paper it would appear that in 1999/2000,

    Initial Conditions
    Land Temp – O1
    SST – I1

    Final Output
    Land Temp – O1
    SST – NA

    I realize that the details of each parameterization are of more interest to scientists such as yourself. However, I’ve found that maintaining a systemwide overview has its own value in ensuring big picture questions such as Jerry’s, Willis’ and David’s are easily answered.

    (***OT, perhaps interesting: My most recent ‘real world’ project demonstrated obvious problems in a popular, hugely expensive system. Apparently, we were the first to go beyond examining “how well does it handle transaction X, Y or Z” to ask questions like “how well does it handle real people dealing with normal daily work?” They honestly had never thought about that! It did not take long to bring quite a few killer issues to the surface.)

  862. Philip Mulholland
    Posted Mar 23, 2008 at 5:36 AM | Permalink

    Forward and Inverse Modelling: What are they and how do we use these techniques in Geoscience?

    Geophysical forward modelling: For a given geological rock-type, for example a carbonate grainstone reservoir, verify the existence of a correlatable relationship between a known petrophysical property, such as acoustic impedance, and a desired rock property, such as reservoir porosity.
    Using stratal sequence information derived from borehole well or outcrop exposure, establish the expected range and interrelation of bed thicknesses for your study area and create a geometric template of this stratigraphy that can then be used to model the subsurface lithological variation.
    Populate this geometric model template with a range of rock properties that lie within the feasible limits of the petrophysical parameters for the likely lithological sequence at the measured depth of burial.
    Convolve this geological model with a seismic wavelet that is either derived from statistical analysis of the reflection seismic data for the area of interest, or created using a mathematical process to produce a wavelet with the frequency and phase characteristics that match the transmission properties of your strata. Create a set of model seismic traces and cross-correlate these with the observed seismic reflection data collected in the field to establish a best fit choice at each reservoir sequence location, thereby deducing the required rock property of the reservoir at this point.

    For forward modelling to work you have to have a reliable template, an established physics and a reliable data set against which you can match the model output. With this relationship established you are able to predict the properties of the system away from your control point, either spatially, as in the case of geology, or temporally, as in the case of meteorology.
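    A minimal sketch of the convolution step just described (all layer values and the wavelet frequency are illustrative, not from any survey):

```python
# Synthetic seismic trace: reflectivity from an assumed impedance layering,
# convolved with a Ricker wavelet.
import numpy as np

def ricker(f_peak, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                                        # 2 ms sample interval
impedance = np.repeat([5e6, 7e6, 6e6, 9e6], 50)   # four layers of acoustic impedance
refl = np.diff(impedance) / (impedance[1:] + impedance[:-1])  # reflection coefficients
trace = np.convolve(refl, ricker(30.0, dt), mode="same")      # model seismic trace
# The real workflow then cross-correlates traces like this against the field
# data to pick a best-fit impedance model at each location.
```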

    Geophysical inverse modelling: For a given geophysical dataset, for example a 3D seismic reflection survey, deduce the acoustic impedance properties of the likely stratal sequence that can generate the observed seismic dataset. For this process of data integration to work, you need to establish the bed boundary reflection coefficients for the type of seismic wave that is carrying the seismic energy through the subsurface.
    The reflection coefficient of each bed interface is determined by application of Snell’s law and relates directly to the acoustic impedance contrast between the adjacent successive rock layers. The amplitude strength of each reflection is determined by the nature of the boundary interface, in particular the Poisson’s ratio contrast, which governs the partition of energy between pressure P-wave and shear S-wave modes of energy transmission. This partition of energy is established using the Zoeppritz equations for mode conversion at each successive reflective rock boundary.

    For inverse modelling to be successfully applied a clean seismic dataset is required, one that is not contaminated by internal multiple reflections that generate the appearance of reflectors at spurious locations within the data volume. As with seismic forward modelling, information about the frequency properties of the seismic wavelet can be established by statistical analysis of the reflection data volume. Acoustic impedance modelling can be either unconstrained, in which case a relative acoustic impedance dataset is obtained, or constrained, by means of calibration with a known borehole impedance log, to produce an absolute acoustic impedance dataset.

    Climate modelling is the application of forward modelling techniques with the intention of predicting the future temporal and spatial variation in the climate. Climate modelling relies on, and uses in particular, meteorological and oceanographic information as constraints to establish the scaling factors and trends of data variability. Because the intention is to predict future climate trends, the technique relies on previous history matching of the model output to prior data as the primary process of algorithm validation.

    In meteorology, temporal and spatial prediction of weather is the goal of forward modelling. Meteorology, however, also possesses a considerable wealth of historic data, and it is instructive to ask whether these data can be used as a feedstock for a process of meteorological inverse modelling. The intention here is to establish the minimum set of meteorological data types from which the climate of a given locale can be unequivocally established. Knowledge of these inverse-modelling minimum data criteria can then be used to appropriately design predictive forward models.

    Let us start this investigation of meteorological inverse modelling by borrowing some ideas from Isaac Asimov’s Foundation stories. Suppose we are sitting in the galactic data library on Trantor and have a set of historic meteorological data believed to be gathered from planet Earth. How would we verify that the supposed origin of this data is indeed Earth? What signal in these data would we need to identify to establish the correct planetary source of the information?

    Let’s suppose that the emperor’s planetary meteorological bureau did not skimp on data collection and that we have a continuous time series of temperature data, sampled once every second for some location on our unidentified planet. With a reasonably long series of temperature data we should be able to establish a diurnal signal, a seasonal signal and an annual signal, giving some confidence that we have data from a habitable terrestrial planet in orbit around a single star. (We would be very unlucky indeed if the data was from the South Pole meteorological station). With a really long temperature data set we might have a high confidence that the star is class G, the planetary tilt is stable, a lunar signal can be speculated and that this is possibly Earth data.
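    A sketch of how the diurnal and annual signals would fall out of such a series (using hourly rather than one-second sampling, and synthetic data, for brevity):

```python
# Recover dominant periods from a synthetic temperature series via the FFT.
import numpy as np

hours = np.arange(3 * 365 * 24)                            # three years, hourly
rng = np.random.default_rng(2)
temp = (10.0 * np.sin(2 * np.pi * hours / (365.25 * 24))   # annual cycle
        + 4.0 * np.sin(2 * np.pi * hours / 24.0)           # diurnal cycle
        + rng.standard_normal(hours.size))                 # weather noise

power = np.abs(np.fft.rfft(temp - temp.mean())) ** 2
freq = np.fft.rfftfreq(hours.size, d=1.0)                  # cycles per hour
peaks = freq[np.argsort(power)[-2:]]                       # two strongest lines
print(np.sort(1.0 / peaks))                                # ~24 and ~8766 hours
```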

    The next meteorological data set to add might be pressure data that is time-locked to the temperature sequence. Earth has a considerable topographic range and we do not know the elevation of our data source; however, we now have a range estimate for the probable atmospheric mass. The subsequent data set to add might be humidity data; this allows us to determine the probable existence of a planetary ocean (we could be unlucky with rainfall data, as records for the Atacama Desert might present us with a null precipitation signal). Now add the precipitation data and the wind vector data (strength, direction and duration), and let’s move to a working hypothesis that this is indeed Earth meteorological data.

    The next question to establish is the latitude and longitude of our meteorological station. Wind veer data, time-locked to precipitation events, should locate us within the correct hemisphere, while temperature and pressure range should help with elevation and continental location, in particular the distance to the nearest ocean up-wind that can supply the moisture signal. Next let’s add some insolation data; these data should include direct light measurements, cloud cover measurements and visibility data. This moves us on in a big way towards our goal of establishing the locale’s climate, but I have left adding the light data until now to demonstrate that the insolation signal is also present in other data.

    At this point we might stop and ask, “Have we enough information to unequivocally determine the climate at this location?” Well, evaporation data is important, but we could of course simply “ask the biosphere”, specifically the botany, as our reality check for establishing climate. But notice that I have not yet measured any gaseous properties of the atmosphere, other than moisture, in my climate-determination inverse modelling process.

    Now isn’t that curious? 😉

    What’s that? It’s climate change you want to study and none of the above hack it? Sorry, my mistake.

  863. Ron
    Posted Mar 23, 2008 at 4:06 PM | Permalink

    Phillip;

    Three quick points:

    1-But wouldn’t the current warmers say something like, for example “Ray Ladbury Says:
    21 March 2008 at 8:33 AM (Real Climate)
    Eric, This is horse puckey. Think about it. You say, “The climate has had MASSIVE climate change in the past, long before human existence, far more dynamic than what we are experiencing now.”
    That, sir, is precisely the point. Human civilization, indeed human society, has never experienced anything like the changes we are starting to unleash. ALL of the infrastructure of our civilization was developed during the past 10000 years of exceptional climatic stability. And now, as the human population soars to 9-12 billion, we are changing the environment to the point where much of our agricultural, transport, health, water and social infrastructure may simply cease to function. What is more, the climate has many positive feedbacks, and if these kick in in earnest, we will have no control whatsoever.
    Finally, your assertion that we don’t understand the mechanism of climate change is just flat wrong. The theory of climate is pretty mature. Yes, there are aspects we don’t understand, but they are not significant enough to invalidate what we understand well–greenhouse forcing. Increase greenhouse gasses and you’ll warm the climate. Warm the climate and you’ll decrease predictability. Now, I ask you, in a world of 12 billion people, how can less predictability be a good thing?”
    (Correct me if I’m wrong here, but isn’t Ray really saying that even if we knew the past climate perfectly it matters not a hoot, because what’s going to happen is, well, you know, kind’a, like, “unprecedented” – but of course, predictably less predictable?)

    2-We know what a common-sensical and reasonably well-schooled-in-science Michael Crichton had to say about this whole AGW thing, and how he was savaged by RC; what do you think Asimov would say about it?

    3-Thanks for your first class post.
    Ron

  864. Willis Eschenbach
    Posted Mar 23, 2008 at 6:47 PM | Permalink

    Judith, thanks as always for your contribution. Inter alia, you say:

    Willis et al., the parameterizations are physically based, either by their approximation to known physical relationships (equations) or through observational constraints. To continue with the sea ice albedo example, you state:

    “The GISS model uses an explicit sea ice melt pond parameterization such as the one Judith argues for. Is it better? Well, among other things, it specifies that melt ponds can only form during six months of the year in the Arctic, and the other six months of the year in the Antarctic. Now, we all know that there is no such rule in nature, nor anything even remotely like that. It just happens to work, although there is no way to tell how well it does in extreme years.”

    In the Arctic, melt ponds are observed to form at the end of May/early June, and freeze up in Aug/Sept, typically a 2-3 month period (well within the 6 month window). Now of course we don’t want to “tune” the parameterizations to current conditions, and not allow for really different things to happen in an altered climate regime. However, having melt ponds form on arctic sea ice during the winter half of the year is essentially impossible under any scenario. You need several ingredients for melt ponds to form on sea ice: surface temperature at the melting point, and a net positive surface heat flux. The net positive surface heat flux simply isn’t going to happen during the polar night (when there is little to no solar radiation). If the surface temperature were near the melting point in winter, it is next to impossible to imagine how this situation could support sea ice: if sea ice is forming, then the surface will rapidly cool; if there is no sea ice (the more likely scenario in winter if the temp is near the melting/freezing point), then melt ponds are a moot issue and irrelevant.

    I’m sorry, but that doesn’t make any sense. If “having melt ponds form on arctic sea ice during the winter half of the year is essentially impossible under any scenario”, then why on earth would you have to put lines of code into your model to prevent it from happening during winter months? The only possible reason for writing code to prevent something from happening in their model is because it is happening in their model, something which you claim is “essentially impossible”.

    And if you merely put lines of code in to prevent it from happening in chosen months, RATHER THAN FIX THE UNDERLYING CODE that allows the impossible to happen, how can you possibly claim that the 6-month window parameterization is “physically based”? It has no physical basis at all. There is no such rule in any such form existing in nature. It is what is called in computer parlance a “kludge”, a way to fix the output without actually fixing the program.

    Next, you say:

    And finally, in Willis’ list, the TOA outgoing radiation flux is NOT a parameterization or a tuning knob. This is calculated using the model’s radiative transfer code with the simulated temperature, humidity, and cloud properties, plus the specified CO2 and other gaseous concentrations. The models all try to match the TOA radiative fluxes observed from satellite, and some do some tuning of cirrus cloud effective radius to try to match the space-time variation of the satellite observations (this is eliminated as a tuning option if you do a parameterization of type #1). But in any event, the TOA radiation flux itself is not explicitly tuned.

    Well, you may be right … but Gavin Schmidt and the folks at GISS might differ. They say (emphasis mine):

    Thermal fluxes are calculated using a no-scattering format with parameterized correction factors applied to the outgoing TOA flux to account for multiple scattering effects using tabulated data from offline calculations. Longwave multiple scattering increases the cloud thermal greenhouse contribution by reducing the global outgoing TOA flux by about 1.5 W m-2. Multiple scattering by clouds also increases the global mean downwelling flux at the surface by about 0.4 W m-2 compared to the no-scattering approximation.

    While the magnitude of the cloud multiple scattering effect has been reported in the literature to be as large as 20 W m-2 (Chou et al. 1999; Edwards and Slingo 1996; Ritter and Geleyn 1992; Stephens et al. 2001), our calculations show this to be an overestimate because these earlier studies defined their no-scattering reference by setting the single scattering albedo to zero. A better no-scattering approximation is achieved by setting the asymmetry parameter to unity so that the cloud particle absorption cross section (rather than the extinction cross section) is used in subsequent radiative transfer calculations (Paltridge and Platt 1976). The LW forcing due to well-mixed greenhouse gases compares very well to line-by-line calculations, differing by less than 1% at the TOA and only slightly more at the surface.

    They say the TOA outgoing radiation flux is parameterized, noting two separate parameters (the “asymmetry parameter” applied to the particle sizes, and the “parameterized correction factors” applied directly to the outgoing TOA flux) … you say it isn’t parameterized at all. Who’s a boy to believe?

    In addition, note that they do not say this is a better parameterization because it is closer to physical reality, as you do. They say it is better because the final output of the GCM is a better match to reality. I hate to keep harping on this point, but these are two very, very, very different claims. The first is what you are discussing, that the parameters are selected on some physical basis. The second is tuning, pure and simple, based on some selected output of the GCM.

    I also note that they slide by the question of accuracy by saying:

    The LW forcing due to well-mixed greenhouse gases compares very well to line-by-line calculations, differing by less than 1% at the TOA and only slightly more at the surface.

    I get nervous when someone says things like “only slightly more at the surface” … what is the value at the surface, and why not just state it? Let’s take it as being 2% … which works out to about twice the change from a doubling of CO2. When we’re looking for the effects of a change in forcing of a couple of W/m2, having an error in surface DLR of around 7 W/m2 doesn’t “compare very well”; it’s still way too large. (Although to be fair, perhaps they’re comparing their 7 W/m2 surface DLR error to the other, larger errors I have listed above, or to the 40-50 W/m2 regional errors in things like TOA radiation. Compared to those huge errors they are right, they do “compare very well”.)

    Finally, you say (emphasis mine):

    The relevance of the 10**9 degrees of freedom is that there is no way you can tweak 10**2 to 10**3 parameters and get a sensible 4-dimensional simulation (in time, lat, lon, vertical). The 20th century simulations have many observational constraints that they need to meet: not just surface temperature, but in the satellite era there is a plethora of observations of cloud properties, temperature and humidity profiles, ocean surface winds, precipitation, top of atmosphere radiative fluxes, etc. against which the models are evaluated.

    Well, if they “need to meet” “many observational constraints”, they are not succeeding. As you point out, it’s not possible to tune for them all. And as I point out, they don’t even try.

    Judith, modeled cloud cover in the GISS model is 15% too small compared with reality. Modeled total LW cloud forcing is only 60% of what it should be. Modeled surface net LW is 20% too large. Modeled global precipitation is 13% too high. I don’t see anyone denying your statement that we can’t tune for all of these … what I see is you denying the reality of what that inability to simultaneously tune for all of these central values actually means.

    For example, take the modeled LW cloud forcing. It is off by about 10 W/m2. This is more than twice the effect of doubling CO2 … but on Planet GISS, this huge imbalance does not affect the ability of the model to track historical temperatures. The cloud cover is off by 15%, which should change the albedo by about 10%, which is about 30 W/m2 of additional solar radiation … but on Planet GISS, this also doesn’t affect the model’s superb hindcasting abilities. Heck, even the fact that the surface of Planet GISS radiates about 10 W/m2 less than Planet Earth makes no difference, it still is able to model the temperature trend of Planet Earth so well that it can distinguish between natural and anthropogenic warming … I await your explanation of how a model with so many huge differences from reality can give accurate historical temperature trend results other than by tuning.
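    For what it’s worth, the 30 W/m2 figure checks out on a back-of-envelope basis if the “10%” is read as roughly ten points of planetary albedo (the insolation constant below is a nominal value I have supplied):

$$
\Delta F \;\approx\; \Delta\alpha \cdot \frac{S_0}{4} \;\approx\; 0.10 \times \frac{1365\ \mathrm{W\,m^{-2}}}{4} \;\approx\; 34\ \mathrm{W\,m^{-2}}.
$$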

    Judith, you are a scientist … the GISS model report clearly shows that your claim that the models “have many observational constraints that they need to meet” is clearly not fulfilled by the GISS model. Please provide a citation that shows that some other model is doing better than the GISS model, because the GISS model is failing miserably in meeting the “many observational constraints” you refer to. In particular, you list such things as precipitation and TOA radiative fluxes, values which are very poorly matched by the GISS model. I await your citation, I’d like to see that other model.

    However, I doubt very much that you will be able to cite a single model for us that can do what you specify, a model that gets those central values that you listed correct. In particular, you list “cloud properties” … but those properties are amongst the parts of the GCMs that are most thoroughly parameterized. Of the 35 parameterizations I listed above, some 22 of them have to do with clouds … but even with all of those parameters, the model still gets the clouds way, way wrong. The GISS folks describe how they adjust the clouds:

    The model is tuned (using the threshold relative humidity Uoo for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within 0.5 W m−2 of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.

    Now, here we need an English-to-English translation. What they are actually saying is:

    “Our cloud model is setting the albedo far too high, and also the TOA radiation is not in balance. So we’ll adjust the individual rates at which ice and liquid clouds form until we get the albedo and TOA balance right, and we won’t worry that TOA cloud DLR and total cloud area are way wrong.” (And in passing, Judith, I note that their explanation totally strikes down your claim that TOA radiation is not parameterized.)

    Call me crazy, but that doesn’t strike me as being even distantly related to your claim that “the parameterizations are physically based”. As you point out, you can’t tune for everything. So, they tune for albedo and net TOA radiative balance … but at the expense of accuracy in TOA cloud DLR, TOA ULR, and overall cloud cover, each of which is a very poor match to reality. Note that they have two tuning knobs (Uoo for both ice and water clouds) so they can simultaneously tune for albedo and radiative balance … note also that they do not make any claims about how these parameters are being set based on underlying physics, or based on known values for Uoo. Instead they are unabashedly clear about what they are doing, which is tuning for results, and not tuning to match the physics as you claim.

    My best to you,

    w.

    PS – I note that before, you were saying that there were “very few” tunable parameters. Now that I have listed some 35 of them without breaking a sweat, are you willing to retract that claim? If not, let me add a few more parameters used in the GISS model:

    • Vertical mixing in the ocean

    • Eddy mixing in the ocean

    • Indirect aerosol effect !!! (very important)

    • Snow albedo change from soot !!! (again very important)

    PPS – It strikes me that if in fact these tunable parameters all are tuned based on some kind of a physical meaning as you claim, there would be no need for each modeling group to use different values. They could all use the same value for, say, the threshold relative humidity for cloud formation. Think of all the time that would save, nobody would have to do all of those complex calculations, the modelers could just agree on the correct value for Uoo for ice clouds and be done with it once and for all …

    (I’m sure you can see the problem with that … and by inference, the problems with your claim that the settings for the various parameters are physically based.)

    PPPS – Yes, you are right, you can’t tune for everything …

    2.4 Principal Model Deficiencies
    Model shortcomings include ~25% regional deficiency of summer stratus cloud cover off the west coast of the continents with resulting excessive absorption of solar radiation by as much as 50 W/m2, deficiency in absorbed solar radiation and net radiation over other tropical regions by typically 20 W/m2, sea level pressure too high by 4-8 hPa in the winter in the Arctic and 2-4 hPa too low in all seasons in the tropics, ~20% deficiency of rainfall over the Amazon basin, ~25% deficiency in summer cloud cover in the western United States and central Asia with a corresponding ~5°C excessive summer warmth in these regions. In addition to the inaccuracies in the simulated climatology, another shortcoming of the atmospheric model for climate change studies is the absence of a gravity wave representation, as noted above, which may affect the nature of interactions between the troposphere and stratosphere.

    The stratospheric variability is less than observed, as shown by analysis of the present 20-layer 4°×5° atmospheric model by J. Perlwitz (personal communication). In a 50-year control run Perlwitz finds that the interannual variability of seasonal mean temperature in the stratosphere maximizes in the region of the subpolar jet streams at realistic values, but the model produces only six sudden stratospheric warmings (SSWs) in 50 years, compared with about one every two years in the real world.

    SOURCE

  865. MrPete
    Posted Mar 23, 2008 at 7:51 PM | Permalink

    Upon further reflection, a “multidisciplinary” thought that might someday be helpful:

    In sophisticated computer systems, maintaining documentation is one of the least enjoyable tasks. So, smart computer folk have invented a variety of methods for simplifying the task.

    If the GCM teams found value in having accurate, up-to-date tables documenting the various inputs and outputs of their models, such as noted in my prior posting, this would not be difficult to accomplish with a bit of effective structuring of the model software modules.

    Simply:
    * Define standard “wrapper” functions around primary input and output “pipes”, with standardized names, function parameters, etc.
    * Use one of the many available self-documenting code systems to properly define the function parameters in line with the function code itself
    * The same self-documenting code system can parse the model code, gather the model inputs and outputs, and summarize appropriately.

    If the GCM teams do not have such software expertise available in-house, I’m sure CA’s multidisciplinary peanut gallery has plenty of folks who could help out. This kind of thing is not rocket science.

    It would be much nicer to be able to refer to an auto-updated web page describing the capabilities of each model, rather than having Judith and Willis bantering over how the models work. (Not that I want to spoil the fun 🙂 )
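    A minimal sketch of what such a wrapper could look like (the names and structure are my own invention, not any modelling centre’s API):

```python
# Register each model input/output "pipe" once; generate the doc table for free.
REGISTRY = []

def pipe(name, direction, category, units):
    """Declare a model input or output pipe; returns the value unchanged
    while recording its metadata for auto-generated documentation."""
    def register(value):
        REGISTRY.append((name, direction, category, units))
        return value
    return register

# Hypothetical usage inside model code (the values are stand-ins):
sst_field = pipe("SST", "input", "I1.1", "K")([[271.3, 274.8], [280.1, 283.6]])
land_temp = pipe("LandTemp", "output", "O1", "K")([[265.2, 268.9]])

def summary_table():
    """Emit the kind of innies-and-outies table proposed earlier."""
    header = f"{'name':10s} {'dir':7s} {'cat':6s} units"
    rows = (f"{n:10s} {d:7s} {c:6s} {u}" for n, d, c, u in REGISTRY)
    return "\n".join([header, *rows])

print(summary_table())
```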

    For better science,
    MrPete

  866. Craig Loehle
    Posted Mar 23, 2008 at 7:58 PM | Permalink

    In addition to Willis’ excellent post, I would note that the grid-scale approximations introduce horrible kluges. For example, convection happens at all scales in the atmosphere, but the models have huge grid boxes. The representation of convection within a box is totally nonphysical and not even empirical. Same problem with turbulent exchanges between grid boxes. Finally, note that every spot on the ground can have a different albedo, and hence temperature, and hence outgoing SW and LW radiation (by the 4th power of temperature), but the model uses some sort of average or approximation to this surface heterogeneity. To say these fudge-factor approximations are constrained by empirical data is to be rather loose with the term “constrained”.
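    Craig’s 4th-power point is easy to demonstrate numerically (the sub-grid temperatures below are arbitrary illustrative values):

```python
# Averaging T and then computing sigma*T^4 is not the same as averaging the
# actual emission over a heterogeneous grid box (Jensen's inequality).
import numpy as np

SIGMA = 5.670e-8                                  # Stefan-Boltzmann, W m^-2 K^-4
temps = np.array([260.0, 280.0, 300.0, 320.0])    # sub-grid surface temps, K

flux_true = np.mean(SIGMA * temps ** 4)           # mean of the real emissions
flux_box = SIGMA * np.mean(temps) ** 4            # emission of the "average" box
print(flux_true - flux_box)                       # ~14 W/m2 understated here
```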

  867. Gerald Browning
    Posted Mar 23, 2008 at 8:34 PM | Permalink

    Craig Loehle (#868),

    Exactly. And when the grid boxes are effectively 100 km on a side, the physical parameterization of the water cycle processes is complete nonsense. The fraction of heating or cooling is chosen to make the simulation’s input of energy into the system balance the wrong cascade of vorticity due to the unphysically large dissipation, so the model does not blow up. That does not mean that the model is anywhere near reality.

    Jerry

  868. Jud Partin
    Posted Mar 23, 2008 at 8:43 PM | Permalink

    @ steven mosher

    Try either

    http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php

    or

    https://esg.llnl.gov:8443/

    These were the first couple of hits after googling “CMIP3”. This may put you closer to the 150 numbers you are looking for. You will have to register to download any data.

  869. Willis Eschenbach
    Posted Mar 23, 2008 at 10:23 PM | Permalink

    Craig, you are quite correct to say that “the grid-scale approximations introduce horrible kluges”. Tropical thunderstorms are a good example. They are the reflective heat pipes that transport warm air up high in the troposphere and transport cold hydrometeors downwards.

    But the thing about thunderstorms is that they are an emergent phenomenon, self-organized out of sunshine, air, and water. They emerge in response to temperature, and the hotter it gets, the more of them there are. And the more of them there are, the more wind there is. And the more wind there is, the more evaporation there is.

    And the more thunderstorms, the more sunlight reflected, the more of the planet shaded, and the cooler it gets … so the eventual temperature is some kind of balance between the heat of the sun and the multiple cooling effects of the thunderstorms. Living here in the deep tropical Pacific, I witness this battle between heat and cooling every day.

    I wrote my first computer program in 1963, and I am by no means a novice at modeling various aspects of the real world on computers. But to model a single square mile of the tropical ocean down the hill from my house, with all of the interactions, energy transformations, and feedbacks … we’re not there yet.

    w.

  870. Gerald Browning
    Posted Mar 23, 2008 at 10:54 PM | Permalink

    All,

    Judith Curry has clearly revealed her vested interests in climate models and parameterizations.
    Now let us ask Judith Curry a simple question, since she is unable or unwilling to respond to mathematical questions. What are the most important parameters in a coarse-mesh atmospheric numerical model that does not use the dissipation operator specified by the Navier-Stokes equations (type and size of coefficient for the atmosphere)? We can compare her answer with those from Sylvie Gravel’s and Dave Williamson’s manuscripts on this subject.

    Jerry

  871. Gerald Browning
    Posted Mar 23, 2008 at 11:06 PM | Permalink

    Willis,

    The hot towers that one sees in the tropics develop because of the balance between the vertical velocity w and the total heating (reference available on request). Because neither the total heating nor the vertical velocity is measured in the tropics, and the initial vorticity is not known, tropical forecasts are the worst (as you may have noticed).

    This lack of knowledge in a crucial area of the driving force behind climate is just one reason the climate models are nonsense. 🙂

    Jerry

  872. MrPete
    Posted Mar 24, 2008 at 5:21 AM | Permalink

    Willis, I love the way you described thunderstorm formation, “self-organized out of sunshine, air, and water.”

    Camping at Turquoise Lake on the continental divide (near Leadville), I was gazing out across the lake, enjoying the blue sky and sunshine, when Leslie said “uh oh, we better batten down the hatches — looks like rain!” With not a cloud on the horizon, I thought she was imagining things until she said “look straight up!”

    Directly overhead were the beginning stages of a massive thunderstorm cloud forming out of “nothing.” It’s an awe-inspiring sight! I had never considered applying that experience to AGW/GCMs.

    Now some might call that “weather.” However, isn’t it important for models to reflect the feedback systems inherent in how climate functions?

  873. David Smith
    Posted Mar 24, 2008 at 8:14 AM | Permalink

    Re #871, Willis, thanks for the crisp, well-written description of tropical thunderstorms and their roles in Earth’s heat regulation. Also, thanks for your ongoing gentle behavior in the blogosphere, which serves as a good example for the rest of us.

    I agree with the description in #871 as it applies to the effect of increased insolation (when the surface gets hotter due to the sun moving overhead) but I wonder if that’s the same as the increased-greenhouse gas case.

    Broadly speaking, tropical thunderstorms depend on temperature contrast (warm surface, cool upper troposphere) to function. If the surface warms while the upper air remains cool, then the events you describe look quite reasonable to me. But if increased CO2 makes it more difficult for the upper troposphere to radiate away its heat, then that upper air ends up warmer than it would otherwise be. Perhaps in a world of greater CO2 the temperature contrast between surface and upper air does not increase, so that thunderstorms do not increase, at least not as much as might otherwise be expected.

    There are other factors (increases in vapor pressure as water warms, perhaps changes in precipitation efficiency and cloud, and perhaps changes in that top-of-tropical-troposphere “dead zone”) which lead me to suspect that the feedback is indeed negative, but I do wonder whether the straightforward increasing-thunderstorms model applies to the greenhouse issue. If you can help me clarify my thoughts, that would be greatly appreciated.

  874. Sam Urbinto
    Posted Mar 24, 2008 at 8:51 AM | Permalink

    Just because carbon dioxide, methane and nitrous oxide absorb and scatter IR doesn’t mean that extra amounts add energy to the system that isn’t offset by something else. Like a thunderstorm. 🙂

    MrPete:

    ” Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it? “

    Or, on the other hand, if you write it as simply as possible, in a clearly defined, logical, modular manner, with plenty of comments, making sure it all works as you go along, the next person to work on it just might appreciate your being practical rather than cursing you for being clever.

  875. jae
    Posted Mar 24, 2008 at 9:21 AM | Permalink

    When I read Willis’ post 866, I kept visualizing the Wizard of Oz pushing buttons and moving levers. I remain gobsmacked that any credible scientist could suggest that GCMs be used to support public policy, when it is so obvious that they don’t model reality to any reasonable degree.

  876. MrPete
    Posted Mar 24, 2008 at 10:47 AM | Permalink

    Sam, exactly.

    And with systems this complex, it is clear that even scientists “in the know”, trying to keep the big picture in view, struggle to maintain an informed perspective on the strengths and weaknesses of the GCMs. Imagine a nuclear power plant or NASA launch platform whose controls were this poorly visible outside the cubicle where the sensor/management systems were coded.

    This really can be done better. Even in an experimental, always-being-revised system. Especially in such a system.

    There should be no question about the basis for the models, nor about what aspects are derived, parameterized, tuned, etc. Nor how large the boxes are. Nor the CIs/error bounds on all this. There’s no reason a non-specialist should have to view this as Oz. (Except for Geoff of course, since he lives there 🙂 )

  877. Mat S Martinez
    Posted Mar 24, 2008 at 10:57 AM | Permalink

    You know what would give the above arguments more credibility? If people went and found citations and made statements more concrete than “I saw some clouds. They are pretty. Incredible, actually. Impossible to model by man.” That sounds like dogma.

    Also, why do so many people ask Judith Curry to provide them with citations? Does your ISP restrict your access to Google? I wonder if this kind of armchair/sideline harassing keeps more climate scientists from posting. For example, I know it really bothers me in office meetings when people (who are not my boss) pester me for data that they have just as much access to as I do…

  878. MarkW
    Posted Mar 24, 2008 at 11:00 AM | Permalink

    David Smith,

    According to the satellites, the troposphere is not warming faster than the surface.

  879. jae
    Posted Mar 24, 2008 at 11:12 AM | Permalink

    Also, why do so many people ask Judith Curry to provide them with citations? Does your ISP restrict your access to Google?

    Well, it is part laziness, I suppose. But you can save a lot of time and make sure you get the “latest” by asking the experts. If she’s not complaining, why are you?

  880. Posted Mar 24, 2008 at 11:21 AM | Permalink

    GCMs and V&V and SQA

    Some of the apologists for the present state of most GCMs as software continue to cite rough, qualitative, solution-functional level assessments and have yet to present quantitative support for their estimates.

    Several readers here, based on their comments, are certain to have hands-on production-grade software experience and expertise. It seems that some employ V&V and SQA in their day jobs. Many of these can cite chapter and verse on the failings of GCM codes relative to formal independent V&V and SQA. There has yet to be any falsification of the cited issues.

    The state of GCM codes relative to formal Independent Verification and Validation and Software Quality Assurance cannot be disputed. There is very little, if any at all. Many here have searched, independently, and arrived at the same conclusion.

    Until these issues are addressed, all calculations by the software will remain suspect. Pseudo-validations, using methods, software, and calculations that have not been independently Verified do not carry any weight at all. They are known to be quasi-scientific exercises generally employed so as to attain publications.

    Elevators and bridges, airline flight control systems, and thousands of other applications are never designed and operated with research-grade methods, software, and calculations. Research-grade software should never be a tool in the development of policies that affect the health and safety of the public.

  881. Posted Mar 24, 2008 at 11:38 AM | Permalink

    re: #854

    The relevance of the 10**9 degrees of freedom is that there is no way you can tweak 10**2 to 10**3 parameters and get sensible 4-dimensional simulations (in time, lat, lon, and the vertical).

    Can there really not be on the order of hundreds to thousands of tuned parameters?

    But again, I say the number of tuned parameters should be assessed relative to the continuous equations that make up the model, not the discrete approximations.

  882. Stan Palmer
    Posted Mar 24, 2008 at 11:41 AM | Permalink

    re 882

    GCMs and V&V and SQA

    What would constitute validation of these codes? Proponents seem to indicate that their ability to “hindcast” is impressive. Others indicate that the hindcasting is not as impressive as proponents claim and is based on extensive tuning in any case. It would seem to me that the verification part of V&V in this case is rather moot. Rather, the validation aspect of these codes is taken by many here to be highly suspect. Are these codes doing anything worthwhile at all? Until that is resolved, verification and SQA would seem to be secondary issues.

  883. Craig Loehle
    Posted Mar 24, 2008 at 1:30 PM | Permalink

    Whether the GCM codes are “impressive” is an interesting question. I’m impressed that they can get them to run at all. The actual temperature of the earth is not accurately simulated, though; only the change over time is reported (see Trenberth’s essay linked to last year). Various regions are way off (Arctic too cold, too much wind, too high pressure, winds cycling the wrong way, ice piling up in the wrong places or modeled as a single sheet, jet stream wrong or missing). But then we are told (e.g., by Gavin) that the models are not meant to simulate regions, just the overall globe (don’t tell that to people doing impact studies!). Sometimes they get a “cold tropical ocean” but discard that result. When the IPCC started, the models didn’t even do precipitation, but that didn’t stop people who were using model output from evaluating ecological and agricultural impacts—they just held precip constant! There is a long list of things the models get wrong, like total cloudiness.
    So we are asked to accept the models when all that is asked of them as a test is a global temperature change of a few tenths over the past 100 years, yet that is also the calibration (tuning) period. When compared to paleoclimates, the results range from fair to awful (though of course the glacial or 6000 BP climate test data are very poorly known as well). I would say that if you think the models are wonderful, you are taking someone’s word for it.

    In Ecology (my field) there are thousands of models. In most cases, there are multiple data types and sets that you can compare the output to. Forest succession and dynamics can be simulated pretty well. What does that mean? It means plus or minus 20% of the timber volume (in the absence of fire or windstorm, etc)—close enough to decide on management options. But these models are so simple compared to the climate and have tons of test data for evaluation and calibration. Plus, often parameters are directly measurable.

  884. Willis Eschenbach
    Posted Mar 24, 2008 at 1:45 PM | Permalink

    Matt, you say:

    You know what would give the above arguments more credibility? If people went and found citations and made statements more concrete than “I saw some clouds. They are pretty. Incredible, actually. Impossible to model by man.” That sounds like dogma.

    Matt, it is not possible to prove that clouds are very difficult to model, but it is very obvious. For example, prior to posting my statement about the difficulty in modeling clouds, I presented a list of 35 parameters used in the GISS model, of which 22 are cloud parameters. Surely you are not suggesting that they would use that number of parameters to model the clouds if they could do so from physical principles, are you?

    But of course, if you think my statement is wrong, you are certainly free to point us to the GCM that actually has an operating cloud model based on physical first principles. All you need is just one …

    Also, why do so many people ask Judith Curry to provide them with citations? Does your ISP restrict your access to Google? I wonder if this kind of armchair/sideline harassing keeps more climate scientists from posting. For example, I know it really bothers me in office meetings when people (who are not my boss) pester me for data that they have just as much access to as I do…

    As to why I asked Judith for a citation, it’s because I have looked for evidence to back up what Judith is claiming, and have not found it. So I asked as follows:

    Judith, you are a scientist … the GISS model report clearly shows that your claim that the models “have many observational constraints that they need to meet” is not fulfilled by the GISS model. Please provide a citation that shows that some other model is doing better than the GISS model, because the GISS model is failing miserably in meeting the “many observational constraints” you refer to. In particular, you list such things as precipitation and TOA radiative fluxes, values which are very poorly matched by the GISS model. I await your citation; I’d like to see that other model.

    But since you seem to think that superior Google-fu will provide the answers, I’m sure that you can spend a bit of time on Google and give us the names of some GCMs that fulfill the “many observational constraints” that Judith listed, and save Judith the trouble of providing the data.

    I await the results of your Google search, and I do think that you misunderstand the process here a bit. Normally, what happens is someone makes a claim, and substantiates it with scientific citations. If they don’t, and their claim seems suspect, they may be asked to provide a citation for their claim.

    This is not viewed as onerous. If the answer is so obvious that a bit of Googling will find it, the person being queried may just say so. Usually, however, the person making the claim has certain authorities in mind, certain works that they believe support their case. So they provide a citation to those works, and science goes on.

    I’m not sure why you seem to think that this is somehow different from the normal progress of science. Judith claims that the models “have many observational constraints that they need to meet”. The GISS model doesn’t meet those constraints, and I cannot find a model that does. So I asked Judith to back up her claim. However, as I mentioned, I’m just as happy if you can back it up, so I’ll just wait for your answer.

    Please let us know if you can’t find such a model, however, because then it will once again be up to Judith to support her own work.

    All the best,

    w.

  885. Posted Mar 24, 2008 at 1:54 PM | Permalink

    Peer-reviewed modeling papers from the 1960s (at least, and maybe earlier) had already identified more accurate modeling of clouds as a critical requirement for calculating the state of the climate system with respect to temperature and its distribution. Many of these papers can be found in the AMS online archives, and all the older papers are free of charge.

  886. MrPete
    Posted Mar 24, 2008 at 2:26 PM | Permalink

    Stan, without going into detail: when designing complex computing systems, the V&V/SQA (Verification & Validation / Software Quality Assurance) is typically performed in what might be thought of as concentric circles.

    Assume for the moment that scientists are writing software for their own use — no “release” is anticipated (even if published, the code will not be supported as a product.) In my experience, many people presume that somehow the lack of public release gives a “bye” on software test.

    Nothing could be further from the truth.

    Here are some “layers” of V&V/SQA testing that are commonly used for any quality software, even software used only in-house. For each “layer”, it is important to determine whether the correct design was used (many ignore this part, to their detriment!) and whether the design was correctly implemented:

    * Unit: the smallest “module” of software (function, class, method, etc)
    * Integration: the connection between units (innies and outies 🙂 )
    * Functional: Getting Things Done
    * System: The whole ball of wax.
    * System integration: if connected to other systems…
    * Performance: Is it fast/small/scalable/patient/resilient enough?
    * Acceptance: the Real Test — does it make the “customer” happy (might be the scientist who built it, but y’never know)

    See here for a pretty good overview.
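
    To make the innermost “unit” layer concrete, here is what a couple of unit tests for an in-house scientific routine might look like (the routine and tolerances are hypothetical; pytest-style conventions assumed):

        # Hypothetical unit-level tests for an in-house scientific routine
        # (pytest discovers and runs functions named test_*).
        import math

        def saturation_vapor_pressure(t_celsius):
            """Magnus-type approximation, in hPa (the routine under test)."""
            return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

        def test_known_value():
            # At 0 C the result should be ~6.11 hPa.
            assert abs(saturation_vapor_pressure(0.0) - 6.112) < 0.01

        def test_monotonic_in_temperature():
            # Physical sanity: saturation pressure rises with temperature.
            temps = [-40.0, -10.0, 0.0, 10.0, 25.0, 40.0]
            svp = [saturation_vapor_pressure(t) for t in temps]
            assert all(a < b for a, b in zip(svp, svp[1:]))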

  887. Gerald Browning
    Posted Mar 24, 2008 at 2:30 PM | Permalink

    Mat S Martinez (#879)

    I have cited manuscripts with rigorous mathematical proofs and illustrative examples, and have provided simple mathematical examples on this site. If you or Judith do not read those manuscripts, or do not understand their implications (or cannot prove that they are wrong but do not want to admit that they are correct), then there is nothing more that can be done about such obstinacy.

    Jerry

  888. Mat S Martinez
    Posted Mar 24, 2008 at 3:21 PM | Permalink

    #886 Willis E. I wasn’t trying to pick a fight with you. I was pointing out that, after lurking here for a while, I have noticed that so many people are hostile. I will note, however, that you are quite cordial.

    By the way, I did a crane kick with Google-fu and entered “gcm cloud microphysics”. There were a mere 10,200 hits. It looks like some people share your concerns and are addressing the problems you raise. The second hit put a cloud microphysical model into a GCM.

    http://www.cosis.net/abstracts/EGU04/03486/EGU04-A-03486.pdf

    It looks like people really do want to incorporate microphysical models (in lieu of parameterization); it is probably just really expensive computationally. Maybe quantum computing will make that possible.

    http://adsabs.harvard.edu/abs/2006AGUFM.A31E..02S (this was hit #7)

    Also, when you say “from first principles”, do you mean ab initio? As in you put in protons, neutrons, and electrons and get out climate? People have a hard enough time doing that for large molecules, much less the earth. That’s a lot of molecules floating around.

    #879. Gerald B. You make demands of others rather than engaging in constructive conversation. Even your reply to me was not nice. I should quote Michael Jackson’s “Man in the Mirror” to you…

    I have to get back to work.

  889. kim
    Posted Mar 24, 2008 at 4:26 PM | Permalink

    Lorne Gunter, in the National Post: ‘Are the Climate Change Models Wrong?’
    ============================================

  890. Gerald Browning
    Posted Mar 24, 2008 at 5:01 PM | Permalink

    Mat S Martinez (#890).

    That is because I am tired of people saying that there haven’t been references cited on this site that mathematically prove what is going on in these models, when a considerable number have been. It doesn’t help to cite references when people don’t read them. If you and Judith spent some time reading those references, the issues would be clear.

    Jerry

  891. Dekarma Spence
    Posted Mar 24, 2008 at 5:21 PM | Permalink

    Re #890 Mat S Martinez

    I’ve only skimmed the abstracts, but as far as I can see both of those papers endorse the sort of views expressed here: that clouds are a major source of GCM error.

    The second one had more technical detail, but was pretty damning. I agree with the first line: if you want any idea of how clouds may behave in a climate different from today’s, you need to understand the microphysics. Taking observations today and using statistical parameterisations may capture some of the statistical behaviour of clouds (although not all of it: there are not enough data, and there are problems with the fractal nature of clouds), but it tells you nothing about the absolutely critical question of how clouds may change in the future. If you don’t get that right, future predictions amount to pure guesswork.

    The second paper is interesting. They note that including a microphysics model improves GCM performance. But it also makes it quite clear that current versions do not include this. And as you note, processor cost is a problem; it isn’t just clouds that would benefit from more detailed models, but you just can’t have everything. But look at the detail here: the paper is not discussing the formation or distribution of clouds, just their radiative properties. That is one tiny aspect of cloud behaviour (an essential one, but not the only essential one). And they basically say that it isn’t done right now.

    But look deeper – and this is just from the abstract – at how crude even their microphysical model is:

    In the simulations described above, constant values of Nd were specified for land (Nd=200 cm-3) and ocean (Nd=75 cm-3) which is a gross simplification and could lead to significant errors given the strong sensitivity of results to Nd.

    Nd is their cloud droplet number density – yes, they’ve characterised clouds with just two constant densities. I would have a hundred and one questions about this: is it valid to use a constant? What is the natural variability of these “constants” on all scales? How do we know they would be valid in a different climate? Also, given that microphysical models aren’t that great at modelling clouds in the first place, are these constants based on physical properties of clouds, or are they values derived because they “get” the right answers out of the radiative models? If so, this is a gross degree of tuning and just (yet another) unphysical parameter. (Actually, as a constant it is unphysical anyway, but if matched via any such process it is two degrees removed from physical reality.)

    Unfortunately, a quick search failed to yield the full paper, so I can’t work out how they derived their values of Nd. The abstract highlights a weakness in current models, but even the enhanced version seems to be highly sensitive to parameters which are modelled very crudely.
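
    To see how much leverage that one constant has, here is a back-of-envelope sketch using the textbook relations (effective radius from spreading the liquid water over Nd droplets, and the standard optical-depth approximation). Every input value below is my own assumption, not a number from the paper:

        # Back-of-envelope: cloud optical depth vs. the assumed droplet number Nd,
        # at fixed liquid water path (the classic Twomey-type sensitivity).
        # All values below are assumptions for illustration.
        import math

        RHO_W = 1000.0   # kg/m^3, density of liquid water
        LWP = 0.1        # kg/m^2, liquid water path (assumed)
        DEPTH = 500.0    # m, cloud depth (assumed)

        def optical_depth(nd_per_cm3):
            nd = nd_per_cm3 * 1.0e6                 # convert to m^-3
            lwc = LWP / DEPTH                       # mean liquid water content, kg/m^3
            # Mean droplet radius from spreading the water over nd droplets:
            r_e = (3.0 * lwc / (4.0 * math.pi * RHO_W * nd)) ** (1.0 / 3.0)
            return 3.0 * LWP / (2.0 * RHO_W * r_e)  # tau ~ 3*LWP / (2*rho_w*r_e)

        for nd in (75, 200):   # the paper's ocean and land constants
            print(nd, round(optical_depth(nd), 1))

    Since tau scales as Nd to the one-third power, the land constant makes an otherwise identical cloud roughly 39% optically thicker than the ocean constant. Choose the constants to make the radiation budget come out right and you have tuned, not modelled.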

  892. Kenneth Fritsch
    Posted Mar 24, 2008 at 5:29 PM | Permalink

    Re: #879

    Also, why do so many people ask Judith Curry to provide them with citations? Does your ISP restrict your access to Google? I wonder if this kind of armchair/sideline harassing keeps more climate scientists from posting. For example, I know it really bothers me in office meetings when people (who are not my boss) pester me for data that they have just as much access to as I do…

    Mat, perhaps you are not aware of our working relationship with Dr. Curry, but I think we expect her to teach, not preach. Under those conditions, you should understand that those who post here with the credentials for reading and discussing climate science are going to request references. I do not think true scientists with a great personal interest in a subject are going to be put off by requests for references. You make it appear that posters are merely testing the teacher.

  893. Geoff Sherrington
    Posted Mar 24, 2008 at 7:59 PM | Permalink

    Re # 878 MrPete

    There should be no question about the basis for the models, nor about what aspects are derived, parameterized, tuned, etc. Nor how large the boxes are. Nor the CIs/error bounds on all this. There’s no reason a non-specialist should have to view this as Oz. (Except for Geoff of course, since he lives there)

    Living in Oz does not prevent one from asking in one line what others elsewhere have taken pages to say. Ref my # 857

    Judith, I spotted a spaghetti graph of model result comparisons. It was hopeless. Can it get better?

    It is universal that projects are stopped when barriers to further progress are reached. I think GCMs are there. There comes a point of diminishing returns, where the money and effort are no longer justified by the results so far or by the projected likely outcomes. Chop the funding now and divert the money to something practical, like Gen IV nuclear reactor development.

    You guys in Nth America are merely jealous at being whipped by Oz in so many sporting events. We are often intellectually superior as well.

  894. Willis Eschenbach
    Posted Mar 24, 2008 at 8:40 PM | Permalink

    Mat (sorry for spelling your name wrong last time), thanks for your quick reply.

    Yes, there are a host of people working on cloud physics, and on getting cloud representations into GCMs.

    However, there are also a host of people working on nuclear fusion. Google finds a thousand times more hits for “nuclear fusion” than you found for “GCM cloud microphysics” … but not counting the sun and stars, how many working nuclear fusion reactors have you heard of?

    I suspect it’s about as many as GCMs that correctly represent cloud processes. The fact that people are working on something doesn’t impress me much, I’m interested in results.

    And finally, when I say “from first principles”, I mean based on physics rather than on parameters.

    Regards,

    w.

  895. jae
    Posted Mar 24, 2008 at 8:56 PM | Permalink

    It is probably clear to folks here that I know very little about the guts of climate models (but I’m slowly learning). But, FWIW, if even the modelers agree that they don’t have the effects of even ONE of the critical parameters (e.g., clouds) fully accounted for, then they would have to agree (IMHO) that the models are not reliable and should not be used to support important policy decisions. I mean, come back to reality, oh ye modelphiles. Besides crazy Jim, do we know of any real modelers who say otherwise?

  896. Tom Vonk
    Posted Mar 25, 2008 at 4:03 AM | Permalink

    Mat S Martinez #838

    And since there are so many smart people here, I suggest that they come up with a better solution for prediction as opposed to just throwing tomatoes. Much more constructive and more fun to read.

    I do not know of much tomato throwing.
    On the contrary, on this thread there is a wealth of mathematical and physical arguments, supported by references and evidence, that prove that climate modelling is NOT:
    a) solving equations representing the physical laws
    b) predicting any regional parameters (temperature, pressure, humidity)
    c) using consistent physical methods/parametrizations and giving validated confidence intervals
    d) deriving its calculations from physical principles (see a), but using a data-fitting (or hindcasting) strategy instead
    e) verified and submitted to any kind of quality control as far as the millions of lines of computer code are concerned

    You seem to still believe the Cartesian illusion that everything can and must be accurately predicted.
    This illusion was abandoned a century ago with the advent of quantum mechanics.
    So here is a constructive approach, even if I am not sure that it is “fun to read”:
    http://www.ingentaconnect.com/content/els/0012821x/2001/00000184/00000003/art00329

    L. Gimeno has published much more on the subject, and there is quite a lot of research actually suggesting that global temperatures can NOT be accurately predicted.
    I quote from the paper above:
    “The obtained predictability range of 2.5–7 years for the detrended anomaly series should be considered with great caution but suggests that typical values for predictability should be in the interannual scale, close to the El Nino periodicity band. There are abundant references that provide evidence that El Nino is indeed chaotic and possibly a subsystem of a grand complex system. The way in which this subsystem is connected to the grand climate system could explain the predictability (limits) of global surface temperature anomalies.”

  897. Posted Mar 25, 2008 at 8:04 AM | Permalink

    Mat:

    Also, why do so many people ask Judith Curry to provide them with citations

    If she is making specific claims, it is the norm in science to ask her to substantiate those claims. Bluntly, it is not our responsibility to read her mind.

    While, like you, I wish some of the rhetoric could be toned down slightly, this is mostly because this is a public forum. Scientists are often very “forthright” with each other; substance is valued more highly than decorum. By forthright, I mean extremely direct at times. I can think of a few back-and-forths I’ve had when giving public seminars that make anything said here look like a weekend excursion to Disneyland. I’ve no doubt that Judith Curry has had similar experiences!

    Passion is a big part of science, and sometimes it spills over into our conversational style when we talk with each other.

  898. Sam Urbinto
    Posted Mar 25, 2008 at 10:09 AM | Permalink

    It’s all water.

  899. Gerald Browning
    Posted Mar 25, 2008 at 1:27 PM | Permalink

    Geoff Sherrington (#895),

    It is universal that projects are stopped when barriers to further progress are reached. I think GCMs are there. There comes a point of diminishing returns, where the money and effort are no longer justified by the results so far or by the projected likely outcomes. Chop the funding now and divert the money to something practical, like Gen IV nuclear reactor development.

    Well stated. Money would be better spent in other areas (although I might not agree that it be for nuclear reactors 🙂 ).

    Jerry

  900. Gerald Browning
    Posted Mar 25, 2008 at 1:41 PM | Permalink

    All,

    On another thread I cited a manuscript by Lu et al. that provides a detailed scale analysis and simple tests of the standard microphysical packages in use today. The schemes involve hundreds of tunable parameters, but it turns out that only a few terms in those packages are crucial. (As you might surmise, they are the evaporative cooling and latent heating terms.) The packages are no panacea for what ails climate models.

    Jerry

  901. See - owe to Rich
    Posted Mar 25, 2008 at 2:48 PM | Permalink

    In this posting I am going to ask, “How did we get into this mess?” By this I mean climate models which are not fit for purpose. First, I just want to grab some quatloos by noting that the GCM sub-thread here arose out of my question #759 to Dr. Curry’s #758 (though perhaps I just happened to beat others to it), and Willis has done a better job than I could of teasing out all the hidden parameters.

    Anyway, my answer to this question, which I will put forward tentatively as this is not my own field, is as follows.

    Global Circulation Models, with their Navier-Stokes approximations etc., are, apart from some occasional lapses, good at predicting short-term weather. They therefore have great value and rightly attract a good deal of investment. So they are important. That they cannot predict accurately beyond a week is probably more a function of the chaotic nature of weather than of parameters within them. (I am not totally sure about that, as it might be disputed by competing groups of forecasters trying, from different computer programs, to predict hurricane tracks 3 days in advance.)

    And then some bright spark suggested using the GCMs for modelling long-term climate. A Bad Idea, which may well have met resistance, but not enough. If everything were deterministic and computable then this idea might have worked, but it isn’t and it didn’t. So the use of a creaky model with zillions of parameters to compute one number – the predicted global mean temperature in AD 2100 – is bizarre.
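
    As an aside, the chaos point is easy to demonstrate on any computer. In the toy Lorenz (1963) system, two runs whose starting points differ by one part in 10^8 end up in completely different states within a modest number of steps. This is only an illustration of sensitive dependence, not a claim about any actual GCM:

        # Toy illustration of sensitive dependence (Lorenz 1963, crude Euler steps).
        def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return (x + dt * sigma * (y - x),
                    y + dt * (x * (rho - z) - y),
                    z + dt * (x * y - beta * z))

        a = (1.0, 1.0, 1.0)
        b = (1.0 + 1e-8, 1.0, 1.0)   # differs in the 8th decimal place
        for step in range(1, 3001):
            a, b = lorenz_step(a), lorenz_step(b)
            if step % 500 == 0:
                sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
                print(step, sep)   # separation grows roughly exponentially, then saturates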

    So what might we reasonably require of a climate model, to be fit for purpose? For me, it would:

    a. have as few estimated parameters as reasonably possible,
    b. be derived from basic input data which are uncontroversially obtained,
    c. be verifiable and reproducible by scientists from other fields,
    d. be susceptible to statistical analysis for “goodness of fit”,
    e. be capable of producing error bars as well as an estimate,
    f. have proven skill in predicting upturns and downturns in temperature over long periods (at least 100 years).

    For an example of b., if a model depends on aerosol forcings to explain relative cooling at a certain epoch, what are the uncontroversial data on aerosol densities to be input to the model (perhaps I’ll be referred to some IPCC tome for these)?
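
    For d. and e., even something this simple would be a start: a fit statistic for model against observations, with a resampled error bar. The series below are fabricated purely to show the mechanics:

        # Sketch for criteria d and e: goodness of fit plus a bootstrap error bar.
        # The "observations" and "model" series are made up for illustration.
        import random
        random.seed(0)

        obs = [0.02 * t + random.gauss(0.0, 0.1) for t in range(100)]
        model = [0.025 * t for t in range(100)]

        residuals = [o - m for o, m in zip(obs, model)]
        rmse = (sum(r * r for r in residuals) / len(residuals)) ** 0.5

        boots = []                      # bootstrap a 95% interval on the RMSE
        for _ in range(2000):
            sample = [random.choice(residuals) for _ in residuals]
            boots.append((sum(r * r for r in sample) / len(sample)) ** 0.5)
        boots.sort()
        print(rmse, boots[50], boots[1949])   # estimate, ~2.5% and ~97.5% bounds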

    I invite others to challenge my list and to add to it (taking care to avoid introducing paradoxes).

    Rich.

  902. Sam Urbinto
    Posted Mar 25, 2008 at 4:17 PM | Permalink

    Rich, I think you’re asking for something that’s currently impossible to provide.

  903. welikerocks
    Posted Mar 26, 2008 at 7:07 AM | Permalink

    re: 903
    “just want to grab some quatloos by noting that the GCM sub-thread here arose out of my question #759 to Dr. Curry’s #758.”

    Hey, not so fast Rich, see 778-781! I think you’ll have to share them quatloos. 😉

    I wonder at what point some of these scientists decided that GCMs were more than just a simulation of the atmosphere and more than just a tool to help -understand- it? My husband was warned by his environmental geology professors in his masters program not to think this way about ANY computer model in earth science (and he was still in school when the first IPCC report came out). Even now, when his company uses a model to look at the flow of a spill or the spread of contamination over relatively small plots of land or water, the models fail to show what is really happening, all the time. I think they even just laid off the person who ran the models; not because this person did the modeling all wrong, but because in the long run the models didn’t help a project or its budget all that much. Plain old human observation and mind power figured things out just as accurately, if not more so, and a decision was made not to keep this person on permanent staff.

  904. Sam Urbinto
    Posted Mar 26, 2008 at 8:17 AM | Permalink

    I find it interesting that the same folks who are often distrustful of technology would put more faith in a computer program than in the human brain.

  905. See - owe to Rich
    Posted Mar 26, 2008 at 3:52 PM | Permalink

    Re 905, welikerocks, I very much like your postings, but regarding quatloo claims, would you say, in your expert opinion, that 778-781 is bigger than (later than) or smaller than (earlier than) 759? 😉

    Rich.

  906. welikerocks
    Posted Mar 26, 2008 at 4:58 PM | Permalink

    Rich, oh lol! You are right; I was thinking of it differently (like helping to keep the subject going, because honestly, I didn’t see your post up there, just JC’s before mine).

    Bigger or firstest = better. I got you! (I wouldn’t touch some of this dialog with a ten-foot pole, ha! It wouldn’t be “model” behavior for a lady! lol)

  907. John Baltutis
    Posted Mar 27, 2008 at 1:20 AM | Permalink

    This seems timely WRT the GCM dynamical cores discussion: A Proposed Test Suite for Atmospheric Model Dynamical Cores

  908. Mat S Martinez
    Posted Mar 27, 2008 at 10:43 AM | Permalink

    #909. Timely??? The papers and presentations about the testing of GCMs on the Michigan site that is referenced are dated from 2006… it’s good that people here and at the other site are talking about it, but it seems like old news. Not outdated, just new to people here (including me!).

  909. Gerald Browning
    Posted Mar 27, 2008 at 11:21 PM | Permalink

    Hi John (#909)

    Numerical analysts have known (and proved) for decades the requirements to resolve features of a time-dependent PDE with a particular wavelength (spatial length scale) and time scale. And now Roger Pielke Sr. quotes his 2002 manuscript as if it were some new revelation. Give me a break. 🙂 Also, on March 26 he finally realized there was a Jablonowski manuscript because it had been discussed on CA (didn’t I ask his son to mention it to him a long time ago?).

    Jerry

  910. Gerald Browning
    Posted Mar 27, 2008 at 11:44 PM | Permalink

    John (#909),

    Note that numerical analysts have known (and proved) the resolution requirements needed for a numerical method to accurately approximate a continuum feature of a time-dependent PDE with given spatial and temporal scales (an Oliger et al. reference from over 40 years ago is available on request). Roger Pielke Sr. on his blog cited his 2002 manuscript and his 1991 book as if this were some new result, and that obviously is not the case. Decide for yourself about the source of the result after reading the Oliger manuscript.

    Also note that Roger Pielke Sr. evidently just found out about the Jablonowski manuscript, although his son knew about it when I asked him to review it.

    Jerry

  911. Gerald Browning
    Posted Mar 27, 2008 at 11:47 PM | Permalink

    Sorry about that. I thought the first message had been removed, so I softened it a bit and resent. 🙂

    Jerry

  912. Gerald Browning
    Posted Mar 28, 2008 at 3:36 PM | Permalink

    All,

    The difference between Roger Pielke Sr.’s site and this one is that Roger edits content that he does not want to appear (just like RC). In the link John gave above, Roger has shut off comments on his discussion of the Jablonowski manuscript (actually he is discussing the proposal, not the manuscript) and on his questionable claims of originality about resolution requirements.

    Jerry

  913. Posted Mar 28, 2008 at 4:01 PM | Permalink

    Mat 847 and later.

    I doubt Judy thinks being asked to provide a citation to back up a scientific claim that models are tested against data is pestering or hostile. Being asked to cite papers that back up your point is rather normal.

    Anyway, Judy’s all grown up, and I strongly suspect she knows how to handle hostile pestering. One of the ways is to ignore people. 🙂

  914. SteveSadlov
    Posted Mar 28, 2008 at 4:15 PM | Permalink

    Pielke Sr. no longer allows any comments at his site. He tired of Steve Bloom and other ideologues posting.

  915. Kenneth Fritsch
    Posted Mar 28, 2008 at 4:51 PM | Permalink

    Re: #915

    I doubt Judy thinks being asked to provide a citation to back up a scientific claim that models are tested against data is pestering or hostile. Being asked to cite papers that back up your point is rather normal.

    Anyway, Judy’s all grown up, and I strongly suspect she knows how to handle hostile pestering. One of the ways is to ignore people.

    Lucia, you bring up an interesting point that sometimes eludes me when an academician/expert comes to a blog to comment but then fails to follow up on a reply or question. Did they feel the question was hostile? Was it a question they did not have an immediate answer or reference for, and subsequently lost track of? Was it something they, as an acknowledged expert, did not want to risk answering because it might affect their reputation or their relations with colleagues? Or did they simply lose interest in the conversation? I usually judge a lack of an answer (using the circumstances and tone under which the discussion was conducted) as fully as I would an answer.

  916. Gerald Browning
    Posted Mar 28, 2008 at 10:25 PM | Permalink

    Kenneth Fritsch (#917),

    Well said. If Judith could provide an answer to any of the specific mathematical questions that I have asked her, proving that they are wrong, why would she not do so? The silence and misdirection provide the answer.

    Jerry

  917. Gerald Browning
    Posted Mar 28, 2008 at 10:59 PM | Permalink

    All,

    Here I will cite the manuscript by a cast of thousands that clearly shows that the atmospheric portion of the NCAR climate model does not accurately reproduce day-to-day weather compared to obs.

    Evaluating Parameterizations In General Circulation Models
    Thomas J. Phillips et al.
    BAMS December 2004
    1903-1915

    This manuscript can be read online at the American Met Society website.
    Also note the obvious similarity to Sylvie Gravel’s manuscript on this site, which is not referenced by this manuscript.

    Jerry

  918. Gerald Browning
    Posted Mar 28, 2008 at 11:02 PM | Permalink

    Steve Sadlov (#916),

    So is Roger having a conversation with himself? 🙂

    Jerry

  919. John Baltutis
    Posted Mar 29, 2008 at 12:51 AM | Permalink

    Re #912 and others.

    Sorry. I didn’t pursue the details and thought the mention of a proposed test procedure for GCM dynamical cores fit in the thread, especially in light of Dr. Curry’s comment that

    …the dynamic cores of weather and climate models are robust (as per their theoretical and numerical foundations and decades of validation of the atmospheric cores using surface based and satellite data)….

    If no one’s seriously tested those cores, then I conclude she’s blowin’ smoke.

One Trackback

  1. By Die Klimakrise » Wer hats gesagt? on Jan 5, 2010 at 5:29 AM

    […] This saying comes from none other than ClimateAudit.org creator Stephen McIntyre. Astonishing how broadly the willingness to act responsibly can be spread, and how rarely […]