The "First" Assessed Likely Range for Climate Sensitivity

One of the remarkable claims in the AR4 Summary for Policy-Makers was that it provided the first “assessed likely range to be given for climate sensitivity”, which it reported as follows:

the global average surface warming following a doubling of carbon dioxide concentrations … is likely to be in the range 2 to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty.

It seemed to me that I’d seen similar figures in the past, so I looked back through prior assessments as far as the 1979 NAS Report (Charney) and found some interesting results.

IPCC Aviation Report
The IPCC Aviation Report (1999) gave a range of 1.5 to 4.5 K (AR4: 2-4.5 K):

For doubled CO2 relative to pre-industrial conditions (+4 W m-2), surface temperature warming ranges from 1.5 to 4.5 K, depending on the modeling of feedback processes included in the CGCM.

Third Assessment Report
The Third Assessment Report also presented a range of 1.5-4.5 K, based on a radiative forcing of 4 W m-2 for doubled CO2, identifying clouds as a main source of uncertainty:

If the amount of carbon dioxide were doubled instantaneously, with everything else remaining the same, the outgoing infrared radiation would be reduced by about 4 Wm-2. In other words, the radiative forcing corresponding to a doubling of the CO2 concentration would be 4 Wm-2. To counteract this imbalance, the temperature of the surface-troposphere system would have to increase by 1.2°C (with an accuracy of ±10%), in the absence of other changes. In reality, due to feedbacks, the response of the climate system is much more complex. It is believed that the overall effect of the feedbacks amplifies the temperature increase to 1.5 to 4.5°C. A significant part of this uncertainty range arises from our limited knowledge of clouds and their interactions with radiation.
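As a quick check on these numbers (a sketch only; the 255 K effective emission temperature is a standard textbook value, not stated in the quote above):

```python
# No-feedback ("Planck") response: lambda0 = dQ/dT = 4*sigma*T^3,
# evaluated at an assumed effective emission temperature of ~255 K.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4
T_E = 255.0       # effective emission temperature, K (assumed)
DQ = 4.0          # radiative forcing for doubled CO2, W m-2 (per TAR)

lambda0 = 4 * SIGMA * T_E**3   # ~3.8 W m-2 K-1
dT = DQ / lambda0              # ~1.1 K
print(f"lambda0 = {lambda0:.2f} W m-2 K-1, dT = {dT:.2f} K")
```

This bare Stefan-Boltzmann estimate gives about 1.1 K, a little below TAR’s 1.2°C; it is the same black-body step that the Charney Report works through below.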

In passing, one annoying aspect of TAR’s use of 4 W m-2 is that no citation is given. For someone approaching the IPCC corpus for the first time, it is by no means easy to trace this figure back to a source. (I speak from experience: I tried to locate the source of the 4 W m-2 figure starting from IPCC TAR references and was unable to do so. I eventually located documentation by browsing CDIAC surveys from the early 1980s.)

1979 NAS Report
The Second Assessment Report also used a range of 1.5-4.5 K. Let’s jump back to the 1979 NAS Panel report. In 1979, the U.S. National Academy of Sciences convened an eminent panel of scientists under the chairmanship of Jule Charney to provide what was perhaps the first international assessment (Bert Bolin was a member of the panel) of the impact of increased CO2. The Charney Report projected that the most probable warming from doubled CO2 would be near 3 deg C, with a probable error of ±1.5 deg C, and noted that the modeling of clouds was one of the “weakest links in GCM efforts”. From the Charney Report:

We estimate the most probable global warming for a doubling of CO2 to be near 3 deg C with a probable error of ±1.5 deg C. Our estimate is based primarily on our review of a series of calculations with three-dimensional models of the global atmospheric circulation…

The structure of their argument is virtually identical to that of later assessments. They started by estimating the radiative forcing for doubled CO2, reporting a value of 4 W m-2 and citing Ramanathan et al 1979. Here’s their argument:

For a doubling of atmospheric CO2, the resulting change in net heating of the troposphere, oceans and land (which is equivalent to a change in the net radiative flux at the tropopause) would amount to a global average of about ΔQ = 4 W m-2 if all other properties of the atmosphere remained unchanged [Ramanathan et al JGR 1979] … For the simplest case in which only the temperature change is considered and the earth is assumed to be effectively a black body, the value of ΔQ/ΔT = 4σT^3 is readily computed to be about 4 W m-2 K-1. For such a case, doubled CO2 produces a temperature increase of 1 deg C.

They then estimated the feedback from water vapour as follows:

The associated increase of absolute humidity increases the infrared absorptivity of the atmosphere over that of CO2 alone and provides a positive feedback …the consequence is that ΔQ/ΔT is decreased and ΔT increased by about a factor of 2. For doubled CO2, the temperature increase would be 2 deg C. One-dimensional radiative-convective models that assume fixed relative humidity, a fixed tropospheric lapse rate of 6.5 K km-1 and fixed cloud cover and height give ΔQ/ΔT = 2.0 W m-2 K-1 (Ramanathan and Coakley, 1978). This value is uncertain by at least ±0.5 W m-2 K-1 because of uncertainties in the possible changes of relative humidity, temperature lapse rate and cloud cover and cloud height.

The other major feedback incorporated into their estimate was snow and ice albedo; after incorporating it, they obtained a sensitivity of 1.6 to 4.5 K for doubled CO2:

Snow and ice albedo provide another widely discussed positive feedback mechanism (Lian and Cess, 1977 and references). As the surface temperature increases, the area covered by snow or ice decreases; this lowers the mean global albedo and increases the fraction of solar radiation absorbed. Estimates of this effect lead to a further decrease of ΔQ/ΔT by between 0.1 and 0.9 W m-2 K-1 with 0.3 a likely value. Some uncertainty in albedo feedback also arises from cloud effects. Taking into consideration all the above direct effects and feedbacks, we estimate ΔQ/ΔT to be 1.7 ± 0.8 W m-2 K-1 and hence ΔT for doubled CO2 to lie in the range of 1.6 to 4.5 K, with 2.4 K a likely value. …
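The closing arithmetic is easy to reproduce (a sketch, using only the ΔQ/ΔT figures quoted above):

```python
DQ = 4.0            # doubled-CO2 forcing, W m-2 (Ramanathan et al 1979)
LAM, ERR = 1.7, 0.8  # Charney's ΔQ/ΔT estimate and uncertainty, W m-2 K-1

dT_likely = DQ / LAM         # ~2.4 K
dT_low = DQ / (LAM + ERR)    # ~1.6 K (strongest damping)
dT_high = DQ / (LAM - ERR)   # ~4.4 K (weakest damping); the report rounds to 4.5
print(f"{dT_low:.1f} to {dT_high:.1f} K, likely value {dT_likely:.1f} K")
```

Note that because ΔT = ΔQ/(ΔQ/ΔT), the symmetric ±0.8 uncertainty in the feedback parameter maps into the asymmetric 1.6 to 4.5 K temperature range, with the fat tail on the high side.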

They observed problems with cloud feedbacks on several occasions, pointing out:

Existing parameterization of cloud amounts in GCMs are physically very crude. When empirical adjustments of parameters are made to achieve verisimilitude, the model may appear to be validated against the present climate. But such tuning by itself does not guarantee that the response of clouds to a change in CO2 concentration is also tuned. It must thus be emphasized that the modeling of clouds is one of the weakest links in the GCM efforts.

I’ll try to compare these numbers on a line-by-line basis to Soden and Held 2006 on a future occasion.

AR4
So when the AR4 Summary for Policy-Makers (SPM) states:

the global average surface warming following a doubling of carbon dioxide concentrations … is likely to be in the range 2 to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty.

it is re-stating a range virtually identical to ones given since 1979; the main change is that the lower bound has been nudged from 1.5°C to 2°C. AR4 WG1’s comments on feedbacks were strongly dependent on Soden and Held 2006. Here’s an interesting comment at realclimate by Isaac Held on cloud feedbacks:

Another way of stating the results from this paper is that the feedbacks that we are moderately confident about (water vapor, lapse rate, and snow/sea ice albedo) seem to generate a sensitivity near the low end of the canonical range, with the more uncertain cloud feedbacks then providing the positive push, in these models, to generate all of the higher sensitivities. I think the picture that many of us had, speaking for myself at least, was that the first set of feedbacks brought us with moderate confidence to the middle of the canonical range, with cloud feedbacks, both positive and negative, then providing the spread about this midpoint. One evidently has to argue for a significantly positive cloud feedback to get to the 3K sensitivity that various empirical studies seem to be pointing towards.

If, in fact, this is an accurate diagnosis of what the models are doing, why is it that they all have positive cloud feedbacks? This is in itself a bit surprising given the diverse schemes used to predict clouds in these models.

This last question reminded me of a comment in Ellingson et al 1991, a survey of infrared codes in GCMs. As with cloud schemes today, Ellingson et al observed wide variation among radiation codes, some of which were described as simply wrong. Yet all the models, including those using incorrect infrared calculations, yielded roughly similar results under doubled CO2. Ellingson et al observed:

The 30-80 W m-2 range of variation in longwave radiative flux computations discovered during this study are a significant fraction of normally observed latent and sensible energy fluxes. In the end it is these energy fluxes that control the climate. The reason that such large discrepancies in radiative fluxes have not seriously distorted model predictions of current climate is simply that most climate models are heavily tuned to give the “right answer” for current climate conditions.

The situation with clouds appears quite similar to the situation with infrared codes in the early 1990s. Given the diversity of cloud schemes, not all of them can be correct; it’s just that no one knows right now which ones are wrong. Presumably the reason why even incorrect schemes give positive cloud feedback is the same as the reason why correct and incorrect infrared codes alike gave the same answer – the models are “heavily tuned” to present conditions. IPCC tries to cope with the use of both correct and incorrect schemes by hypothesizing that the schemes will err in random ways and that the methodological errors will balance out in an “ensemble”. This would obviously not apply in the presence of systematic errors – and one would have thought that the identification of potential areas of systematic error would be one of the priorities in a scientific assessment report.
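The distinction is easy to illustrate with a toy simulation (a sketch only; the numbers are arbitrary and stand in for no particular model quantity):

```python
import random

random.seed(0)
TRUTH = 0.5   # the "true" value in this toy example (arbitrary)
N = 20        # number of toy "models" in the ensemble

# Case 1: each scheme errs independently; averaging beats the noise down.
independent = [TRUTH + random.gauss(0, 0.3) for _ in range(N)]

# Case 2: the same schemes share a common bias on top of their individual
# errors (e.g. from tuning to the same present-day climate); averaging
# does nothing to the bias.
BIAS = 0.4
shared_bias = [x + BIAS for x in independent]

mean = lambda xs: sum(xs) / len(xs)
print(f"truth: {TRUTH:.2f}")
print(f"ensemble mean, independent errors: {mean(independent):.2f}")
print(f"ensemble mean, shared bias:        {mean(shared_bias):.2f}")
```

Averaging helps only against the first kind of error; a bias shared by all the schemes passes straight through to the ensemble mean.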

However, IPCC reports seem to me to minimize or evade such issues rather than highlighting them. There was a bizarre example of this in TAR. HITRAN-1996 (or a variation) was used in all TAR GCMs. Prior to TAR, a transcription error was discovered in the HITRAN-1996 water vapor near-infrared absorption database (discussed here last year): some incorrect values were embedded in the database because a transcriber failed to convert units from the original study to the units used in HITRAN-1996. All GCMs in TAR used the incorrect values. The effect of the transcription error was about 4 W m-2, the same as doubled CO2. Although this error was known at the time (though not in time to re-tune the GCMs), it was not mentioned in TAR as a potential problem for the GCMs (and hasn’t been mentioned in AR4 either). Everything was later re-tuned. It would be interesting to see a re-run of a TAR GCM with corrected near-infrared values.

Finally, it’s remarkable, really, that so little progress has been made in narrowing the range of estimates, especially given the very strong institutional incentives for convergence. You would think that climate scientists would have a full-court press on water vapour matters; it’s an active area, but it appears to me to fall short of that. For example, WG1 says:

Similarly, the underprediction of low-level and mid-level clouds presumably affects the magnitude of the radiative response to climate warming in the widespread regions of subsidence. Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. However the evaluation of these assumptions is just beginning (Doutriaux-Boucher and Quaas, 2004; Naud et al., 2006).

It’s nice that evaluation of these “critical” modeling assumptions affecting climate sensitivity is beginning now, but shouldn’t it have begun some time ago? It’s not as though clouds haven’t been on the table – they’ve been identified as a GCM problem for nearly 30 years.

If this were space station engineering, would such a dilatory approach to physical modeling have been acquiesced in by project managers? I hardly think so. The failure to make any progress can hardly be attributed to a lack of funding – last year, U.S. federal funding (by itself) for climate research was about $4.5 billion. Maybe the failure to make progress on clouds points to issues with the GCM approach to the problem; maybe it points to the intractability of the Navier-Stokes equations; maybe the answer lies elsewhere.

Whatever the answer, the range of projections in AR4 is virtually identical to those of the 1979 NAS Report. If you were an astronaut relying on the quality of these analyses, you might find this a little, shall we say, “unsettling”.

80 Comments

  1. Ian S
    Posted Feb 14, 2007 at 1:17 PM | Permalink

    jae and I were just discussing this on unthreaded. A couple of points. In a very simple example they talk about the case of a black body, for which you can show:

    dS/S = 4 dT/T

    They try to imply that this is only applicable to the very ‘naive’ case of a black body. In my opinion this is totally wrong and misleading. It doesn’t matter what absorptivity (or emissivity) or other complicating factors one uses (these are all inherently captured in the base value of T): because the outgoing flux is proportional to the fourth power of temperature, a 1% change in temperature requires a 4% change in incoming energy, everything else remaining constant.

    In the case of a ‘forcing’ of 4 W/m^2 and an average solar flux of 342 W/m^2 I get:

    dT = (dS/S)*T/4 = (4/342)*(287/4) = 0.84 C

    Notice how they rounded this up to 1°C …. hmmmm.

    Imagine for argument’s sake that through positive feedback you could get a temperature rise of 4°C. This is equivalent to an increase in flux of 19 W/m^2. This means that positive feedback must have an amplifying factor of around 5: you put 1 W/m^2 in, positive feedback gives you 5 W/m^2 out, and of course this works both ways. I have to say they sure are relying a lot on very strong positive feedback. Do they have any proof at all for a feedback of five times?! Given the large range that they present I would have to guess no, so essentially this is pure speculation.
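    To make the arithmetic explicit (a sketch, using the same assumed values, T = 287 K and S = 342 W/m^2):

    ```python
    FOUR = 4             # from S = c*T^4: dS/S = 4*dT/T
    T, S = 287.0, 342.0  # assumed mean temperature (K) and solar flux (W/m^2)
    FORCING = 4.0        # doubled-CO2 'forcing', W/m^2

    dT_direct = (FORCING / S) * T / FOUR   # ~0.84 C with no feedback
    dS_for_4C = FOUR * S * 4.0 / T         # flux change implied by a 4 C rise
    print(f"direct dT = {dT_direct:.2f} C, a 4 C rise implies "
          f"dS = {dS_for_4C:.1f} W/m^2, "
          f"amplification ~ {dS_for_4C / FORCING:.1f}x")
    ```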

    Imagine if feedback was negligible, or heaven forbid, negative. The net effect would be, well …. decidedly un-alarmist.

    cheers,

    Ian

  2. Steve McIntyre
    Posted Feb 14, 2007 at 2:41 PM | Permalink

    In fairness, there must be some reason for scientists to believe in strong positive feedbacks or the models wouldn’t have survived this long. On the other hand, it seems unlikely that they’ve completely nailed the matter either, or else the uncertainties would be much reduced and modelers would be able to point to a theory of clouds. I’m not suggesting that the people in question are likely to have made trivial errors, but the progress is surprisingly slow and they sure sell hard on what they’ve got.

  3. Hans Erren
    Posted Feb 14, 2007 at 2:42 PM | Permalink

    see also Nir Shaviv’s comments on climate sensitivity:

    Climate Sensitivity – an interesting IPCC bias
    http://www.sciencebits.com/IPCCbias

    On Climate Sensitivity and why it is probably small
    http://www.sciencebits.com/OnClimateSensitivity

  4. jae
    Posted Feb 14, 2007 at 2:43 PM | Permalink

    If, in fact, this is an accurate diagnosis of what the models are doing, why is it that they all have positive cloud feedbacks? This is in itself a bit surprising given the diverse schemes used to predict clouds in these models.

    Hmmm, maybe an indication that some climate modelers are seriously evaluating their product.

  5. Steve McIntyre
    Posted Feb 14, 2007 at 2:44 PM | Permalink

    #1. This is a pretty standard argument. Kerry Emanuel uses 0.8 deg C in his essay. If you look at Soden and Held 2006, I get the impression that the dT has crept up to over 1.2 deg C in the simple case (sensitivities around 3.2 W m-2 K-1) in the AR4 models, but I haven’t assimilated the exact definitions yet.

  6. Posted Feb 14, 2007 at 2:47 PM | Permalink

    Another introductory article on sensitivity. The link below is not necessarily endorsed by ClimateAudit. 😉

    http://motls.blogspot.com/2006/05/climate-sensitivity-and-editorial.html

  7. Steve McIntyre
    Posted Feb 14, 2007 at 2:48 PM | Permalink

    #3. Those are interesting links. The grey-body analysis is probably involved in the dT in the recent AR4 models – in Soden and Held 2006’s Table 1 summary.

  8. Joe B
    Posted Feb 14, 2007 at 2:50 PM | Permalink

    Richard Lindzen said this in the Telegraph:

    “As the primary “consensus” document, the Scientific Assessment of the UN’s Intergovernmental Panel on Climate Change notes, modellers at the United Kingdom’s Hadley Centre had to cancel two-thirds of the model warming in order to simulate the observed warming.

    So the warming alarm is based on models that overestimate the observed warming by a factor of three or more, and have to cancel most of the warming in order to match observations.

    Rather than entertaining the rather obvious possibility that the models are over-reacting to increasing greenhouse gases, advocates are assuming that the cancellation will disappear in the future. Why might models be over-reacting?

    The answer is actually fairly simple. Carbon dioxide and methane are minor greenhouse gases (and methane has, for unknown reasons stopped increasing, during the last five years). Doubling carbon dioxide in the atmosphere would, all else held constant, only lead to about 1C of warming; quadrupling carbon dioxide would only add another 1C (there is a diminishing return in warming per unit carbon dioxide).

    The greater response arises because in current models, the most important greenhouse substances, water vapour and clouds, act so as to amplify the impact of increasing carbon dioxide. But, as the previously cited IPCC document notes, water vapour and especially clouds are major sources of uncertainty in models.”

  9. Douglas Hoyt
    Posted Feb 14, 2007 at 2:57 PM | Permalink

    Another discussion of climate sensitivity can be found at http://www.warwickhughes.com/blog/?p=87

  10. Posted Feb 14, 2007 at 3:01 PM | Permalink

    Tom Nelson sent me an interesting comment about very-long-timescale estimates of CO2 concentrations, which could imply that the sensitivity to CO2 simply can’t be too large. 450 million years ago, the concentrations were arguably 3000 ppm

    http://en.wikipedia.org/wiki/Carbon_dioxide#Variation_in_the_past

    from the ratio of boron vs. carbon isotope concentrations and from counting stomata on fossil plant leaves. If that’s true, multiplying the CO2 concentration by ten isn’t enough to create any anti-life catastrophe and we should be safe for at least 1000 more years of burning fossil fuels. 😉

    Tom Nelson quoted

    http://www.canadafreepress.com/2006/harris061206.htm

    “Appearing before the Commons Committee on Environment and Sustainable Development last year, Carleton University paleoclimatologist Professor Tim Patterson testified, “There is no meaningful correlation between CO2 levels and Earth’s temperature over this [geologic] time frame. In fact, when CO2 levels were over ten times higher than they are now, about 450 million years ago, the planet was in the depths of the absolute coldest period in the last half billion years.” Patterson asked the committee, “On the basis of this evidence, how could anyone still believe that the recent relatively small increase in CO2 levels would be the major cause of the past century’s modest warming?””

  11. Steve Sadlov
    Posted Feb 14, 2007 at 3:08 PM | Permalink

    The problem is they assume water vapor only exhibits straight-up GHG behavior (e.g. it’s assumed to be mostly non-condensed, or at most mid- and high-level stratiform and cirrus). In fact, water vapor has multiple personalities. Yes, it can be a GHG, but it can also be a tremendous dissipator of internal kinetic and electrical energy when it forms up into a mesocyclone or even less formidable structures.

    The other problem is, as noted above, that the basics of black and near-black body radiation are not dealt with correctly. In general, the sum total outward flux into space within the IR and near-IR bands has, in my opinion, never been faithfully represented by the GCMs; I beg to be proven wrong regarding this impression. Just to pick on one particular nit, I am fairly certain they fail to account for IR radiating into the stratosphere (and hence space) from the tallest CuNim buildups.

  12. jae
    Posted Feb 14, 2007 at 3:29 PM | Permalink

    I’m intrigued by this 4 Watt increase for 2xCO2 and this “feedback” hypothesis. The CO2 is part of the atmosphere and absorbs and emits IR, just like water and other greenhouse gases. But according to the famous radiation balance produced by Kiehl and Trenberth, greenhouse gases radiate a total of 324 W/m^2 back to Earth. So the contribution of 2xCO2 is 4/324 = 1.2%. The total greenhouse effect is stated to be 32 or 33 C. Thus, the effect of a 1.2% increase should be 0.012*32 = 0.38 degrees C, or a sensitivity of about 0.1 degree/Watt. This agrees with many other estimates of temperature sensitivity, and I can’t see where it is in error.

    Now, if one assumes that each Watt produced by CO2 has X Watts of positive feedback, doesn’t it follow that the other 324 Watts produced by greenhouse gases have 324X Watts of feedback? Sounds like runaway Planet Earth to me.

  13. Steve McIntyre
    Posted Feb 14, 2007 at 3:29 PM | Permalink

    #11. Steve S, I don’t think that it’s true that they simply model straight GHG behavior. Whatever the defects are, I doubt that they are trivial. Also, I would not be inclined to opine that the basics of radiation have not been dealt with correctly without some basis for that particular diagnosis, and I don’t think that the posts above provide one. To be “fairly certain” about cumulonimbus would require a detailed look at model assumptions. The models may or may not have this defect, but what basis do you have for being “fairly certain” about it?

  14. Darwin
    Posted Feb 14, 2007 at 3:46 PM | Permalink

    I think Dan Rothman at MIT did a paper in 2002 on Atmospheric Carbon Dioxide Over the Last 500 Million Years. It found no correlation between CO2 levels as measured and temperature over that period. Its tail at the present day shows us in a CO2-deprived era. I don’t know if or how it may have been discredited, though.

  15. Steve Sadlov
    Posted Feb 14, 2007 at 3:50 PM | Permalink

    RE: #13 – I based what I wrote on ~1 year of arguments with Gavin and Ray over at RC. As for sources, I’ve seen various things, can’t recall them offhand, most of them were over at Climate Science (Pielke Sr.).

  16. Ian
    Posted Feb 14, 2007 at 4:06 PM | Permalink

    #2,#3,#5

    I read through links in #3. I have to say that the arguments feel awfully schizophrenic.

    In a gray body case our simple equation

    dS/S = 4 dT/T

    still applies; however, in those links S is adjusted downward on the argument that only 1/2 of S is absorbed. This is incorrect.

    (1-a)Sfull=sigma*epsilon*T^4 — Sfull is the FULL flux

    We do the derivative to get

    dS/Sfull = 4*dT/T — same as before

    You see, it doesn’t matter how much light is absorbed or what the emissivity is, or any other constant factor you want to throw in there. The derivative gets rid of these factors (essentially they are built into the base temperature T). It is incorrect to reduce S before plugging it into our difference equation; the absorption is already accounted for in the first equation, where (1-a) adjusts the value of T. There’s no trick here, it is just simple math.

    In addition, in order for one to use only albedo and emissivity of the surface we have to ignore the atmosphere. It doesn’t exist, it doesn’t absorb anything. Hmmm ok.

    In order to get our current temperature T, of 287K we need an ‘effective’ absorptivity of near unity. Put whatever factors you want on the left and the right of the equation and merge them into a single constant. When you plug in the current S and T you get the value of this constant near unity. I refer to this as the effective absorptivity. You can’t talk about the greenhouse effect raising the temperature of the Earth on the one hand (effective absorptivity of unity) and then promptly go and throw it away on the other hand when you calculate sensitivity. This is what I mean by the schizophrenic nature.

    Forget about albedo, absorptivity, emissivity. At the end of the day outgoing flux is proportional to the fourth power of T.

    S = c T^4

    dS/S = 4 dT/T

    You cannot ‘will’ away the ratio of 4, it is physics. You also cannot ‘will’ away the values of S and T or arbitrarily reduce them twice. They are what they are.

    At least, that is my primitive understanding, and I look forward to the explanation correcting this (although please phrase it as simply as possible so that I will most likely be able to understand it)

    cheers,

    Ian

  17. Posted Feb 14, 2007 at 4:07 PM | Permalink

    It seems like building a working fusion reactor will end up being an easier task than answering the question of cloud feedback….

    Anyway, I find it strange that between the TAR and AR4, the IPCC could change the likely sensitivity from 1.5°-4.5° to 2°-4.5°, even though the GCMs are pretty much the same models giving the same large range of sensitivities. I wonder what their excuse is.

  18. Steve McIntyre
    Posted Feb 14, 2007 at 4:11 PM | Permalink

    #16. Ian, I’d dial back the rhetoric a little here. I don’t have time to try to reconcile the differences between the two positions, but I think that this is a dead end issue.

  19. Ian S
    Posted Feb 14, 2007 at 4:20 PM | Permalink

    #18

    I’m not sure what you mean, but ok …

  20. Joel McDade
    Posted Feb 14, 2007 at 4:25 PM | Permalink

    Me neither – I was anxious to see any responses – but ok.

  21. David Smith
    Posted Feb 14, 2007 at 5:35 PM | Permalink

    My focus while reading FAR (so far) has been water vapour (specific humidity). What I’ve been searching for is evidence that the atmosphere, in recent decades, has been behaving as expected by the models. I’m still searching.

    There is good evidence that the bottom of the atmosphere (“mixed layer”, where convection helps stir the air) has seen increased water vapor, pretty much in accordance with the hypothesis and standard meteorology.

    However, the evidence for water vapor increase above the mixed layer is spotty. The highest parts of the troposphere (200 to 100mb) seem to be seeing an increase, though I’m not at all sure that is true for the critical 300-400 mb tropical region.

    It’s hard (for me at least) to find evidence of increased specific humidity in the middle troposphere.

    (The upper regions are the most important for heat retention, even though their water vapor content is quite low compared to the near-ground atmosphere.)

    The GCMs receive a lot of focus on their temperature projections. I’m just as interested in how their water vapor projections are doing.

  22. jae
    Posted Feb 14, 2007 at 5:58 PM | Permalink

    #2, Steve M:

    In fairness, there must be some reason for scientists to believe in strong positive positive feedbacks or the models wouldn’t have survived this long. On the other hand, it seems unlikely that they’ve completely nailed the matter either or else the uncertainties would be much reduced and modelers would be able to point to a theory of clouds. I’m not suggesting that the people in question are likely to have made trivial errors, but the progress is surprisingly slow and they sure sell hard on what they’ve got.

    I wish I could find where they explain the basis for their belief in positive feedbacks. There are more than a dozen theoretical and empirical demonstrations showing that the IPCC doesn’t have the basic physics nailed down.

  23. Pat Frank
    Posted Feb 14, 2007 at 6:40 PM | Permalink

    #13 — “Steve S, I don’t think that it’s true that they simply model straight GHG behavior.”

    If you plot GCM model outputs against a straight GHG warming projection, you get curves with very similar slopes and intensities. The GCM outputs look like straight GHG but with added wiggles that mark small annual excursions. Somehow, all the other forcings seem to almost cancel out to zero.

    I expect the reason for that is the tuning that Willis has mentioned several times and that’s mentioned in your quote from the 1979 NAS report.

    If your quotes are representative, it looks to me like they were more forthright about the model adjustments in 1979 than the IPCC is now or ever has been.

  24. mz
    Posted Feb 14, 2007 at 8:36 PM | Permalink

    Here goes my high-school physics:

    Assume we have influx of energy in visible light, W.
    Then we have a black body that absorbs it, heats to temperature T and radiates it in infrared, with the same power W. The body is then in the balance state W=kT^4.

    If we add a layer surrounding the blackbody that lets in the visible light but reflects 1/10 of the infrared back in, what is the situation then?

    You can’t calculate the energy balance with the simple W=kT^4 formula anymore, since part of the energy radiated by the black body is reflected back in. If we call the energy radiated back R, then in the new balance state the surface receives a flux W+R, the same flux must leave it, and the 90% that escapes must equal the incoming light flux, so 0.9(W+R)=W => W+R=W/0.9
    So the new temperature grew enough that the blackbody radiates 1/0.9 times the original power, i.e. the temperature grew to (1/0.9)^(1/4)~=1.027 times the original. The temperature grew 2.7 percent when we added a 10% greenhouse effect.

    I hope this shows how even a perfectly black body (no reflectance) got hotter with a greenhouse. 🙂

    I hope I calculated right, I just kinda winged it here.
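    A quick check of the arithmetic (a sketch of the same toy balance):

    ```python
    F = 0.1  # fraction of outgoing IR the added layer returns (assumed)

    # In balance, only the transmitted fraction escapes: (1 - F)*(W + R) = W,
    # so the surface must radiate W / (1 - F), and T scales as the fourth root.
    power_ratio = 1.0 / (1.0 - F)
    temp_ratio = power_ratio ** 0.25
    print(f"power up {power_ratio:.3f}x, temperature up {temp_ratio:.3f}x "
          f"(~{(temp_ratio - 1) * 100:.1f}%)")
    ```

    which confirms the ~2.7% figure.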

  25. Greg F
    Posted Feb 14, 2007 at 9:13 PM | Permalink

    Steve M wrote:
    In fairness, there must be some reason for scientists to believe in strong positive feedbacks or the models wouldn’t have survived this long.

    If there is it would have to have been discovered after the last IPCC report.

    Probably the greatest uncertainty in future projections of climate arises from clouds and their interactions with radiation. Cloud feedbacks depend upon changes in cloud height, amount, and radiative properties, including short-wave absorption. The radiative properties depend upon cloud thickness, particle size, shape, and distribution and on aerosol effects. The evolution of clouds depends upon a host of processes, mainly those governing the distribution of water vapour. The physical basis of the cloud parametrizations included into the models has also been greatly improved. However, this increased physical veracity has not reduced the uncertainty attached to cloud feedbacks: even the sign of this feedback remains unknown.

  26. David Smith
    Posted Feb 14, 2007 at 9:23 PM | Permalink

    Powerpoint on water vapor feedback is here.

    From slide 8:

    CO2 doubling by itself = circa 1C rise

    CO2 + water vapor = 1.7C rise

    From slide 14, note the importance of specific humidity in the tropical middle atmosphere (300 to 500mb)

  27. Greg F
    Posted Feb 14, 2007 at 9:26 PM | Permalink

    Steve M.
    Looks like my last comment just got eaten by the spam filter. It showed up in the “Recent Comments” and then disappeared.

  28. tom
    Posted Feb 14, 2007 at 10:36 PM | Permalink

    How can the meteorological station data show a steady fall in mean temperature between 1945 and 1980 while CO2 was steadily increasing?

  29. Steve McIntyre
    Posted Feb 14, 2007 at 10:36 PM | Permalink

    #22, 25. Look, I expressed myself carefully. There are arguments against strong positive feedback. Fine. But that doesn’t prove that there are NO arguments for positive feedback. Don’t get me wrong: I’m not endorsing the concept of big positive feedbacks. I’m as aware of the counter-arguments as you are and wouldn’t dip my toe into these waters if I weren’t interested in them. But there are two sides to the argument, and to deal with the matter one has to fully understand both. I’d compare this to the HS arguments – lots of people complained that Mann didn’t recognize the MWP, but to come to grips with the problem, someone had to wade through principal components and bristlecones. It’s a big job wading through the feedback issues.

  30. buck smith
    Posted Feb 14, 2007 at 10:37 PM | Permalink

    Didn’t “eminent” NASA scientist James Hansen recently reduce his estimate of the effect of a doubling of CO2?

  31. Steve McIntyre
    Posted Feb 14, 2007 at 10:38 PM | Permalink

    #27. We’ve installed WP-Cache to improve speed. One of the disadvantages is that you sometimes get a cached refresh even though a comment has been recorded. Don’t assume that the comment was lost.

  32. Greg F
    Posted Feb 14, 2007 at 11:43 PM | Permalink

    There are arguments against strong positive feedback. Fine. But that doesn’t prove that there are NO arguments for positive feedback.

    If there are I have never seen them. You might find this post by modeler James Annan of some interest. He is clearly pro AGW.

    One crucial take-home message is that there is no way to generate a truly objective estimate of climate sensitivity, and never will be. The strength of various strands of evidence needs to be assessed and weighed up by experts, and even an objective method necessarily relies on subjective decisions regarding the inputs.

  33. Ian
    Posted Feb 15, 2007 at 12:04 AM | Permalink

    Steve,

    I’m not sure why you seem to be taking these arguments so personally (perhaps I’m reading too much into the tone of your last post). I don’t believe they are directed at you (I know mine certainly aren’t). They are directed at anyone who feels they understand the other side of the argument and is therefore welcome to explain it to us. Maybe some statements are naive (mine specifically), maybe there is some heat in the comments, but let’s please not pull out the “nothing to see here, we’ve moved on” argument for climate sensitivity. These are just people’s opinions and input and are no reflection on yourself or this site. There is definitely something of great interest to see here, although understandably it is not yet well understood. So let’s understand it. Did you not open the topic up in the first place?

  34. MarkW
    Posted Feb 15, 2007 at 6:06 AM | Permalink

    The strongest argument for or against feedbacks is to look at what has happened in recent history. Take the increase in CO2 over the last 100 years and calculate how much that would increase the earth’s temperature, assuming no feedbacks of any kind.

    My understanding of this calculation is that even if we assume that 100% of the recent warming was due to CO2, the calculated increase is greater than the observed increase. Which strongly implies that we are in a zone dominated by negative feedbacks. If you assume that some portion of the increase was due to things like increased solar output, decreased cosmic rays, and UHI, then the argument that negative feedbacks dominate gets even stronger.

    So unless one can come up with a strong theory for why we are going to hit some kind of threshold in the next couple of degrees that will switch us from a negative feedback zone to a positive feedback zone, it makes absolutely no sense whatsoever to assume that positive feedbacks are going to dominate. Yet this is, as I understand it, what all of the general circulation models do.

    PS: Any theory that postulates we are about to hit a threshold which would cause positive feedbacks to dominate would also have to explain why the normal temperature changes between winter and summer, even record-temperature summers, never manage to hit this threshold.

    PPS: To get back to Steve’s question: given the solid and substantial evidence for the existence of strong negative feedbacks, and given the weak to non-existent real-world evidence for the existence of positive feedbacks, much less strong ones, it beggars the imagination to conclude that the modelers are inserting strong positive feedbacks into their models because they actually believe that this is how the atmosphere works.

  35. Hans Erren
    Posted Feb 15, 2007 at 6:30 AM | Permalink

    A high climate sensitivity goes hand in hand with strong aerosol cooling. It’s the Rasool & Schneider legacy from the previous climate hype.

  36. Steve McIntyre
    Posted Feb 15, 2007 at 8:23 AM | Permalink

    The main argument for positive feedback is that forcing from increased CO2 will lead to more water vapor; and since water vapor is itself absorbing in the infrared, this amplifies the warming. Calculations based on this effect alone lead to a strong positive feedback overall. You don’t have to look far to find descriptions of this effect.

    Hans has identified the main supposed rebuttal by climate modelers to the arguments of jae and others about why temperature hasn’t gone up as much as strong positive feedbacks would suggest – aerosols. While this may seem like special pleading to a third party – and to raise as many problems as it solves (e.g. if aerosols cool, then why is the aerosol-heavy NH warming more than the SH?) – it’s currently the base case. The Charney Report also mentioned that ocean gyres could absorb considerable amounts of heat and lead to substantial delays between forcing and temperature increase (which could perhaps account for the failure of temperatures to increase in accordance with the models).

    I wish that IPCC AR4 had spent some time discussing the Doug Hoyt issues, none of which seems trivial to me and where I would like to understand the other side of the argument, and a little less time congratulating one another on the wonderful progress of climate models (not to speak of the self-indulgent chapter on the history of climate science in an assessment report).

  37. Francois Ouellette
    Posted Feb 15, 2007 at 10:00 AM | Permalink

    Steve,

    Your analysis just goes to show that the point I recently made on another thread is valid: focusing on GCMs with ever-increasing resolution is not the way to go! Over 20 years, refinements in GCMs have brought no improvement whatsoever in the predicted effect of doubling GHGs.

    The water vapor amplifying effect does make a lot of sense. It is most certainly a real effect. But one must remember that this really holds under clear-sky or “fixed cloud cover” conditions. The problem, of course, is that the Earth is never under full “clear sky” or even “fixed cloud cover” conditions. And clouds change everything: they change the albedo, they carry heat all over the place, horizontally and vertically. They come and go in a fashion that we don’t yet understand. When accurate measurements of the Earth’s radiative budget began to be made, it was found to vary much more than the models predicted. That’s because a small change in cloud cover has a similar effect to doubling CO2, if not a larger one.

    So I agree that the money and research effort would have been better spent understanding the physics, instead of writing more lines of code. And that, of course, includes yet little known effects like cosmic rays.

    Finally, I can’t be convinced that we would be so near a threshold for a “runaway” climate. Remember again that the water vapor amplifying factor occurs whatever the original source of heat is. More sun or less clouds also means more heat, and more water vapor, and more heat. If that system were unstable, we would be living on Venus already. We never seem to get much warmer than now, but we do get much colder. So there must be something that stops the climate from going astray. Maybe the climate researchers should look into that, instead of trying to find new, hypothetical, positive feedbacks.

  38. Ian
    Posted Feb 15, 2007 at 10:09 AM | Permalink

    #36

    Yes, from what I have read, I believe that aerosols are the prime reason given. Which suggests to me a very simple way to ‘stop global warming’: stop removing aerosols (or, even more proactively, add sulfur to jet fuel). Yes, we want to reduce aerosols, but I believe this is the lesser of two evils. If people really believe in a coming catastrophe of unmatched proportions, then a little pollution high in the atmosphere is a small price to pay to prevent it, I would think. The fact that such a concept would appear ‘unacceptable’ to most in the warming camp shows that they are not really interested in mitigating the problem (is there anyone here who actually thinks carbon trading will do anything?).

  39. jae
    Posted Feb 15, 2007 at 11:08 AM | Permalink

    Here’s Fred Singer’s view of the effects of CO2 and other GHGs.

  40. Gerald Machnee
    Posted Feb 15, 2007 at 11:14 AM | Permalink

    The following is from an exchange with the Suzuki website: **Regarding water vapour, water vapour is not a LONG LIVED gas, because it condenses out all the time. Carbon dioxide, on the other hand, persists for decades or longer in the earth’s atmosphere. The main thing with water vapour is that warming (caused by carbon dioxide) increases the amount of water vapour in the atmosphere — a huge positive feedback for warming!**
    I take it that this is the main argument of the “warmers” and that it does not consider all the possibilities.

  41. Ian S
    Posted Feb 15, 2007 at 11:29 AM | Permalink

    #35

    Yes, from what I have read, I believe that aerosols are the prime reason given. Which suggests to me a very simple way to stop ‘global warming’: stop removing aerosols (or, even more proactively, add sulfur to jet fuel). Yes, we want to reduce aerosols, but I believe this is the lesser of two evils. If people really believe in a coming catastrophe of unmatched proportions, then a little pollution high in the atmosphere (putting the sulfur back into jet fuels) is a small price to pay to prevent it, I would think. The fact that such a concept would appear to be ‘unacceptable’ to most in the warming camp (without any thought or discussion) shows that they are not really interested in mitigating the supposed problem (is there anyone here who actually thinks carbon trading will do anything other than transfer wealth?).

    On a positive note, China is currently creating something like 50,000 new coal-fired power plants. It seems that aerosols are unlikely to be in short supply anytime soon…

  42. Tom Vonk
    Posted Feb 15, 2007 at 12:01 PM | Permalink

    I have been dabbling for some time in these sensitivity questions.
    Even if I do not know exactly what is in the GCMs, I do know that the positive feedback is mostly water vapour.
    Now we should be able to see whether it is there, because the energy increase
    between summer and winter should act exactly like a trigger for
    the positive feedback (in the summer hemisphere), and on a much bigger scale than the modest CO2.
    In other words, we should get a much higher temperature than what we’d expect from the increase in incoming energy alone, as compared to the winter situation.
    Of course there are energy transfers all over the sphere that avoid (big) discontinuities, but it remains that the feedback in the summer hemisphere should be much bigger than in the winter hemisphere, lack of symmetry notwithstanding.
    Does that make sense, or is there something I don’t see?
    But if it makes sense, then increasing the temperature by some 10-odd degrees obviously doesn’t make the climate go wild.
    And if it doesn’t happen on a time scale of several months, why should it happen on a scale of 50 or 100 years?

  43. Ian S
    Posted Feb 15, 2007 at 12:12 PM | Permalink

    Sorry, my China coal plant numbers are obviously way off. I think it’s something more like 500.

  44. Steve Sadlov
    Posted Feb 15, 2007 at 12:24 PM | Permalink

    RE: #37 – The keys, I believe, are the convective and cyclonic features. The former move massive amounts of thermal energy up toward, and in rare cases to and even through, the tropopause. A thunderstorm is the proverbial wet blanket if couched in terms of greenhouse theory; it does not do a very good job as a glass or plastic sheet. In addition to this “tower heat sink” upward sucking of heat, and the massive blocking effect it has vis-a-vis insolation, there is all the kinetic and electrical energy being dissipated within (and, in the case of electrical, nearby) the structure. That is also a thief of thermal energy in the equation. The cyclonic extremes of such structures add additional kinetic elements.

  45. Steve Sadlov
    Posted Feb 15, 2007 at 12:28 PM | Permalink

    RE: #40 – if I fear anything about climate change, I fear a catastrophic, and for the masses, totally unexpected rapid cooling event or events, wrought by aerosols and other negative feedbacks. The icing on the cake will be if the fanatics succeed in implementing active “cooling” via purposeful emission of additional aerosols, or via particulates or shields in orbit.

  46. Greg F
    Posted Feb 15, 2007 at 2:15 PM | Permalink

    What is the clear-sky albedo of the northern vs. southern hemisphere? I would expect the clear-sky albedo of the southern hemisphere to be lower due to a larger percentage being ocean. All things being equal, that would mean that more of the solar radiation would be absorbed in the southern hemisphere than in the northern hemisphere. I would also expect it to reduce the time constant on the ocean’s ability to “absorb considerable amounts of heat”. But all things are not equal:

    The global annual averaged albedo is approximately 0.30. The annual average albedo of the northern and southern hemispheres is nearly the same, demonstrating the important influence of clouds.

    Assuming my assumptions about the albedo in the different hemispheres are correct, what does this say about cloud feedbacks?

    Looking at the monthly mean albedo from the same web page, I noticed some interesting things. Going from January to June, the albedo decreases over most of the land in the northern hemisphere, as would be expected (snow from the winter melts and decreases the albedo). The interesting thing is that if you look at central South America and southern Africa, the albedo also decreases going from January to June (summer to winter). In the northern hemisphere it seems safe to say that over land the albedo decreases going from winter to summer. Just eyeballing it, over land the southern hemisphere appears to do the opposite, or at the very least remain neutral.

  47. fFreddy
    Posted Feb 15, 2007 at 3:55 PM | Permalink

    Re #44, Steve Sadlov

    The former is moving massive amounts of thermal energy up toward, and in rare case, to and even through the tropopause.

    Don’t forget the sprites.

  48. Steve Sadlov
    Posted Feb 15, 2007 at 4:35 PM | Permalink

    RE: #47 – Good point. Although possibly rare (or possibly not as rare as we may think), these types of dissipation events are of a high magnitude in terms of energy flux.

  49. DeWitt Payne
    Posted Feb 15, 2007 at 5:45 PM | Permalink

    #12 jae

    The 324 W/m^2 = 33 C warming thing, and thus a sensitivity of 0.1 C/W, is a violation of fundamental physics (the Stefan-Boltzmann law). Solar constant = 1368 W/m^2. Divide by four because the earth is a sphere = 342 W/m^2. Multiply by 0.7 to correct for albedo = 239 W/m^2. That’s incoming direct insolation. Assuming radiative balance, not an unreasonable approximation, at the tropopause (10 km, or wherever the atmosphere becomes transparent in the 5 to 25 micrometer wavelength range) gives a gray body temperature of 255 K, not correcting for incoming radiation from space at 3 K. Actual average near-surface atmospheric temperature is about 288 K, or 390 W/m^2 emitted/absorbed (assuming again approximate radiative balance). So the net greenhouse forcing is 151 W/m^2. Add 4 watts and you get a temperature of 288.72 K. Thus, to a first approximation, the temperature sensitivity is 0.18 degrees/watt. Ian S (#1) uses surface T (288 K) and top-of-atmosphere S (342 W/m^2), so he gets a different answer. That’s a very simplistic calculation, though, because we’re averaging T and emission is proportional to T^4. However, T is in K, not C, so I don’t think the error is all that large.

    My understanding of the greenhouse gas effect is that the lower atmosphere is nearly completely opaque (which leads to an emissivity of 1, btw) in the 5 to 25 micrometer 288 K black body emission range, primarily because of water vapor with a minor contribution from CO2. There is a window around 10 micrometers in the CO2/H2O absorption, which increases the greenhouse effect of gases like methane, nitrous oxide and ozone because they have strong absorption bands in that region. The black body emission spectrum also peaks at about 9.7 micrometers. So transmission of heat to the radiatively transparent upper atmosphere is by a combination of many processes including convection, water evaporation/condensation, radiative absorption/re-emission etc., which are all slower than direct radiation to space. Slower transmission = insulation. CO2 contributes more at higher altitudes because there’s less water vapor (it’s colder) and the absorption lines of water vapor and CO2 are no longer collision broadened (lower pressure) and don’t overlap as much.

    A positive feedback would cause a larger forcing resulting in a higher temperature. It still seems unreasonable to me, though, that a 4 watt increase in forcing from doubling CO2 leads to a total increase in forcing of nearly 17 watts (an 11% increase in net forcing), which is what would be required for a 3 degree increase in average near-surface atmospheric temperature.
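    The steps above are easy to verify (a sketch following the same assumptions):

    ```python
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4
    SOLAR = 1368.0   # solar constant, W/m^2
    ALBEDO = 0.30

    absorbed = SOLAR / 4 * (1 - ALBEDO)   # ~239 W/m^2
    T_gray = (absorbed / SIGMA) ** 0.25   # ~255 K gray-body temperature

    T_SURF = 288.0
    emitted = SIGMA * T_SURF**4                        # ~390 W/m^2
    dT_4W = ((emitted + 4) / SIGMA) ** 0.25 - T_SURF   # ~0.73 K for +4 W/m^2

    extra_3C = SIGMA * (T_SURF + 3)**4 - emitted       # ~17 W/m^2 for a 3 C rise
    print(f"T_gray = {T_gray:.0f} K, emitted = {emitted:.0f} W/m^2, "
          f"sensitivity = {dT_4W / 4:.2f} C per W/m^2, "
          f"3 C needs {extra_3C:.0f} W/m^2")
    ```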

  50. DeWitt Payne
    Posted Feb 15, 2007 at 6:26 PM | Permalink

    Anybody know of any evidence supporting the assumption that the Environmental Lapse Rate, now taken to be -6.5 C/km, will remain constant with a doubling of CO2? It seems to be an even more fundamental assumption than constant relative humidity. Even a small change in the ELR would make a big change in greenhouse forcing.

    #26

    I love the use of premature rounding in the slide. Let’s round the 4 watt surface temperature change of 0.7 C up to 1 C. Then we calculate that water vapor feedback is a factor of 1.7: 1.7 x 1 = 1.7 C, which then rounds to 2, instead of 1.7 x 0.7 C = 1.2 C, which would round down to 1 C.

  51. Hans Erren
    Posted Feb 16, 2007 at 2:28 AM | Permalink

    re 49:

    My understanding of the greenhouse gas effect is that the lower atmosphere is nearly completely opaque (which leads to an emissivity of 1, btw) in the 5 to 25 micrometer 288 K black body emission range, primarily because of water vapor with a minor contribution from CO2.

    That is not correct; if it were, the Meteosat thermal infrared window could not map surface features.
    The water vapour band is indeed saturated.

    http://www.eumetsat.int/Home/Main/Publications/Technical_and_Scientific_Documentation/repository/pdf_td05_meteosat-sys

    0.45 to 1.0 μm the visible band (VIS),
    used for imaging during daylight,

    5.7 to 7.1 μm the water vapour absorption band (WV),
    used for determining the amount of water vapour in the middle atmosphere,

    10.5 to 12.5 μm the thermal infrared (window) band (IR),
    used for imaging by day and by night and also for determining the temperature of cloud tops and of the ocean’s surface.

  52. Posted Feb 16, 2007 at 1:28 PM | Permalink

    Re Aerosols

    If anthropogenic aerosols were strong enough to mask the GHG warming for decades and actually cool the world in the mid 20th century, then I would expect to see a cooling trend in the temperature record over China and India (and downwind) in the past decade or so. I can’t see any such thing in the GISS charts.

    #36

    why is the aerosol-heavy NH warming more than the SH?

    Climate modellers would explain this away with the much larger proportion of ocean surface over the SH, which would be absorbing most of the heat (although I don’t know if this explains why SH land is also warming less than NH land).

    The real problem, in my view, is that the SH also cooled in the mid century, parallel to the NH, while GHGs were building up in the atmosphere and no significant aerosol forcing was in play there. In fact, it cooled even more than the NH (up to the late 60s).
    If anyone has any idea of how modellers deal with this and manage to get a “good hindcast” of historical temperature records, I’d greatly appreciate their input.

  53. jae
    Posted Feb 16, 2007 at 2:39 PM | Permalink

    50:

    I love the use of premature rounding in the slide. Let’s round the 4 watt surface temperature change of 0.7 C up to 1 C. Then we calculate that water vapor feedback is a factor of 1.7: 1.7 x 1 = 1.7 C, which then rounds to 2, instead of 1.7 x 0.7 C = 1.2 C, which would round down to 1 C.

    Great observation!

  54. Posted Feb 16, 2007 at 3:47 PM | Permalink

    #36

    why is the aerosol-heavy NH warming more than the SH?

    UHI? 🙂 ‘We can explain everything’ leads to overfitting, which leads to problems.

  55. Hans Erren
    Posted Feb 16, 2007 at 5:51 PM | Permalink

    An increase of water vapour reduces annual temperature amplitude

  56. DeWitt Payne
    Posted Feb 16, 2007 at 7:01 PM | Permalink

    #51

    10.5 to 12.5 μm the thermal infrared (window) band (IR),
    used for imaging by day and by night and also for determining the temperature of cloud tops and of the ocean’s surface.

    Yes, I know, but what fraction of the total surface emission actually escapes directly to space through that window? I don’t have that data, but I’m guessing it’s fairly small while the emissivity of the surface at those wavelengths is at least 0.7. The black body emission spectrum at 288 K is pretty broad and flat with essentially all of the energy emitted between 5 and 25 micrometers. I think the end result is still a total emissivity of the near surface atmosphere very close to 1.

    Speaking of albedo/emissivity, anybody have a link to the variation of albedo with wavelength in the visible and UV?

  57. Paul Linsay
    Posted Feb 16, 2007 at 8:16 PM | Permalink

    #56, about 30 to 40% of the 288 K blackbody spectrum is in the 8-15um window.
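    Numerically integrating the Planck function supports this (a sketch; the result lands at the top of that range):

    ```python
    import math

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

    def planck(lam, T):
        """Black-body spectral radiance at wavelength lam (m), temperature T (K)."""
        return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

    def fraction_in_band(lo, hi, T, n=100_000):
        """Fraction of total emission between lo and hi (m), midpoint rule."""
        lam_min, lam_max = 1e-7, 2e-4  # 0.1 um to 200 um covers nearly all emission
        step = (lam_max - lam_min) / n
        total = band = 0.0
        for i in range(n):
            lam = lam_min + (i + 0.5) * step
            b = planck(lam, T) * step
            total += b
            if lo <= lam <= hi:
                band += b
        return band / total

    print(f"8-15 um fraction at 288 K: {fraction_in_band(8e-6, 15e-6, 288):.2f}")
    # -> ~0.41, i.e. at the top of the 30-40% range quoted above
    ```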

  58. Hans Erren
    Posted Feb 17, 2007 at 2:52 AM | Permalink

    Re 56:
    Define “near surface”, because earlier you said that “the lower atmosphere is nearly completely opaque”. Here is an observation:
    here is an observation:

  59. Hans Erren
    Posted Feb 17, 2007 at 2:59 AM | Permalink

    A clearer picture emerges when radiance temperature is plotted:

    Using the temperature profile, the emission height (optical depth) can be calculated:

  60. jae
    Posted Feb 17, 2007 at 10:01 AM | Permalink

    56: According to Kiehl and Trenberth, about 40 Watts are lost through the “window.” http://www.cgd.ucar.edu/cas/abstracts/files/kevin1997_1.html

  61. DeWitt Payne
    Posted Feb 20, 2007 at 4:34 AM | Permalink

    re #58, 59 and 60

    Thanks, that’s the sort of data I was looking for and couldn’t find. By near surface I meant the 2 m height of the temperature measuring stations, because we don’t actually measure the land surface temperature; we measure the temperature of the air just above it. We do measure sea surface temperatures, but coverage was rather spotty until very recent times. I see that the Modtran 3 effective height never goes below 1 km, which is approximately the height of the boundary layer.

    According to the figure in the Trenberth abstract cited in #60, 40 W of the 390 W of long-wave radiation emitted from the surface escapes directly to space. That’s only 10%, which I think qualifies as pretty small. Also from that figure, the emissivity of the surface in the visible is 0.85 (albedo 0.15), so I’m still betting the emissivity in the IR is even closer to 1.

    If weather gets more severe and specific humidity increases with temperature, don’t convective and evapo-transpirative surface energy losses increase? They now total 102 W/m2, or about 20% of the total surface energy loss, in the Trenberth diagram. That would be a negative feedback, I think. I’d be curious to see where that shows up in the models.
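
    A back-of-envelope version of that feedback (my own sketch; the 102 W/m2 is the Kiehl/Trenberth figure above, while the ~7%/K Clausius-Clapeyron scaling of saturation humidity is my assumption, applied crudely to the whole turbulent flux):

        # If surface turbulent losses scaled with saturation specific humidity
        # (Clausius-Clapeyron, roughly 7% per K), extra surface cooling per K:
        latent_sensible = 102.0   # W/m2, Kiehl/Trenberth latent + sensible fluxes
        cc_per_K = 0.07           # assumed fractional increase per K of warming

        print(latent_sensible * cc_per_K)  # ~7 W/m2 of additional surface loss per K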

  62. MarkW
    Posted Feb 20, 2007 at 6:01 AM | Permalink

    DeWitt,

    From what I have read, the modelers predict that the upper layers of the atmosphere should warm faster than the lower layers. This puts a cap on any increase in convection.
    Of course, real-world data are not finding this trend. The upper layers are warming at the same rate, if not a little more slowly.
    Unfortunately, the modelers seem to take the view that when the data do not match the model, it’s the data that must be “fixed”.

  63. DeWitt Payne
    Posted Feb 21, 2007 at 10:15 AM | Permalink

    MarkW,

    This is the lapse rate problem. The satellite (UAH MSU) data show a linear trend of 0.09 C/decade in the temperature anomaly for the tropics in the lower troposphere (LT) and 0.06 C/decade in the middle troposphere (MT). IIRC, the MT should be warming faster than the LT, and both should warm faster than the surface. IIRC, the temperature anomaly from the instrumental record for the tropics gives a trend about the same as the satellite LT. I have read that basic undergraduate thermodynamics requires that the atmosphere warm faster than the surface (link, anyone?). That would mean that either the satellite trend is too low, the surface trend is too high, or some combination of both. The August 2003 NOAA workshop on reconciling vertical temperature trends had a lot of interesting presentations, but the link to the presentation slides was broken when I checked today. The CCSP has since issued a report with recommendations for further study, but Roger Pielke, Sr. was less than complimentary.
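
    The quoted trends make the mismatch explicit (a sketch using only the numbers in this comment):

        lt_trend = 0.09   # C/decade, UAH MSU lower troposphere, tropics
        mt_trend = 0.06   # C/decade, UAH MSU middle troposphere, tropics

        print(mt_trend / lt_trend)  # 0.67: the MT is warming slower, not faster, than the LT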

  64. jae
    Posted Feb 21, 2007 at 10:51 AM | Permalink

    63. Your last link is extremely interesting. Looks like another fine example of political considerations trumping science. If only the decision makers understood what is going on…

  65. Posted Feb 21, 2007 at 11:42 AM | Permalink

    I have read that basic undergraduate thermodynamics requires that the atmosphere warms faster than the surface (link anyone?).

    #63: I think that this is the conventional wisdom if increased CO2 IR absorption were responsible for the increase in temperature. I have wondered about the predicted vertical temperature profiles for an increase in temperature due to increased solar activity and/or a decrease in low-level clouds. Surface air temperatures would increase due to convection at the ground. Would this be a “fingerprint” of non-CO2 forcing? Most of the debate that I’ve seen is “which one is wrong?”. What would it mean if both are correct? I seem to remember reading a quote from Lindzen about the models not adequately accounting for convection.

  66. McCall
    Posted Feb 21, 2007 at 12:30 PM | Permalink

    Re: 63’s “less than complimentary” link

    This was later followed by a commentary naming and extending the list of offending parties:
    conflict-of-interest

  67. jae
    Posted Feb 21, 2007 at 12:46 PM | Permalink

    66. Hmmm, that link reads much like the Wegman Report on the Team.

  68. Steve Sadlov
    Posted Feb 21, 2007 at 2:48 PM | Permalink

    Pielke Sr.’s site must be getting lots of traffic; I can’t even get in. Good sign!

  69. Francois Ouellette
    Posted Feb 21, 2007 at 8:43 PM | Permalink

    #68 Steve,

    That’s because he just quoted Mike Mann…

  70. DeWitt Payne
    Posted Feb 22, 2007 at 2:00 PM | Permalink

    Re: #65

  71. DeWitt Payne
    Posted Feb 22, 2007 at 3:33 PM | Permalink

    Re #65

    How does the idea that the troposphere warms faster than the surface reconcile with a chart like this:
    (slide #4 from here)

    This is a simplified explanation of greenhouse warming which uses a constant lapse rate (-6.5 K/km). The assumption of a constant lapse rate would seem to require that the lower atmosphere warms at the same rate as the surface. If the absolute value of the lapse rate increases with warming, then the surface temperature doesn’t increase as much from a given increase in the height where T = Te.
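
    A minimal sketch of that constant-lapse-rate picture (my own illustrative numbers: Te = 255 K and an emission height near 5 km):

        # Surface temperature when the atmosphere radiates at Te from height h:
        # Ts = Te + gamma * h, with gamma the (assumed constant) lapse rate.
        def surface_temp(h_km, gamma=6.5, t_e=255.0):
            return t_e + gamma * h_km  # K

        print(surface_temp(5.1))                      # ~288 K, roughly today's surface
        print(surface_temp(5.6) - surface_temp(5.1))  # +3.25 K if the height rises 0.5 km
        # A smaller |gamma| (the weakening lapse rate noted in the correction
        # below) gives less surface warming for the same rise in emission height.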

  72. DeWitt Payne
    Posted Feb 25, 2007 at 9:22 PM | Permalink

    Sorry, that should have been: the absolute value of the lapse rate decreases with warming.

  73. KChua
    Posted Apr 3, 2007 at 10:41 AM | Permalink

    The link below is to a CNN report dated 1 April – I assume it is not some kind of April Fools’ joke.

    http://www.cnn.com/2007/TECH/science/04/01/climate.report.ap/index.html

    Apparently there is a report coming out of Belgium that “maps out the effects of global warming, most of them bad, with every degree of temperature rise”. The projection has been described as a “highway to extinction”. I have a finance/business background, and I know from experience that projections based on models (very simple stuff compared with climate science) need to be treated with great caution.

    I have been following this blog but have not posted anything before.

  74. Steve Sadlov
    Posted Apr 3, 2007 at 11:19 AM | Permalink

    RE: #73 – More realistically, I can imagine a number of highways to mass death (but certainly not extinction):
    1) Great war
    2) Global cooling
    3) Mass-hysteria-induced collapse of various institutions due to eco-ideology
    4) Genocide
    5) A reversal of advancement in developing countries, forced by artificially imposed global cost increases and scarcity of energy

  75. jae
    Posted Apr 3, 2007 at 4:25 PM | Permalink

    74: Better add meteorites, bird flu, some other new virus, the Yellowstone Caldera, and WW III.

  76. MarkR
    Posted Apr 3, 2007 at 11:01 PM | Permalink

    Link

  77. Willis Eschenbach
    Posted Apr 4, 2007 at 12:29 AM | Permalink

    Jason L, your post is appreciated. You say:

    #63: I think that this is the conventional wisdom if increased CO2 IR absorption were responsible for the increase in temperature. I have wondered about the predicted vertical temperature profiles for an increase in temperature due to increased solar activity and/or a decrease in low-level clouds. Surface air temperatures would increase due to convection at the ground. Would this be a “fingerprint” of non-CO2 forcing? Most of the debate that I’ve seen is “which one is wrong?”. What would it mean if both are correct? I seem to remember reading a quote from Lindzen about the models not adequately accounting for convection.

    One thing that seems to be overlooked is that when the same additional forcing is applied to the troposphere and to the surface, the resulting temperature changes are not the same. To take a simplified example, let the average radiative layer of the atmosphere be at 0°C and the surface at 15°C. Assuming black-body radiation, the surface emits 390.9 W/m2 and the radiative layer 315.6 W/m2.

    Next, let’s assume that the forcing increases by 3.7 W/m2 (the UN IPCC figure for a doubling of CO2, though the mechanism is not relevant; it could be anything). Neglecting all feedbacks, the tropospheric emission becomes 315.6 + 3.7 = 319.3 W/m2, and the surface emission becomes 390.9 + 3.7 = 394.6 W/m2.

    Now, let’s look at the temperature change. The tropospheric temperature has increased by 0.8°C, but the surface temperature has gone up by only 0.7°C. This is because of the non-linear relationship between black-body radiation and temperature.
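
    The arithmetic can be verified by inverting the Stefan-Boltzmann law (a sketch; only the 0°C and 15°C layers and the 3.7 W/m2 forcing come from this comment):

        SIGMA = 5.67e-8  # W m-2 K-4, Stefan-Boltzmann constant

        def flux(t_k):                 # black-body emission at temperature t_k
            return SIGMA * t_k**4

        def temp(f):                   # temperature that emits flux f
            return (f / SIGMA) ** 0.25

        f_trop, f_surf = flux(273.15), flux(288.15)   # 315.6 and 390.9 W/m2
        print(temp(f_trop + 3.7) - 273.15)            # ~0.80 K at the cooler layer
        print(temp(f_surf + 3.7) - 288.15)            # ~0.68 K at the warmer surface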

    w.

  78. Willis Eschenbach
    Posted Apr 4, 2007 at 2:23 AM | Permalink

    I simply can’t make sense of the arguments for the high value of climate sensitivity. The UN IPCC, as Steve M. points out, says 1.5 to 4.5°C for a doubling of CO2. This implies an effective sensitivity on the order of 0.4 to 1.2°C per W/m2.

    Almost everyone agrees that the Earth is warmed about 33°C by the “greenhouse effect”, and that the total downwelling longwave radiation (DLR) from the atmosphere is about 325 W/m2. This implies that the average climate sensitivity is about 0.1°C per W/m2. However, the situation is a bit more complex than that, because the atmosphere absorbs some of the incoming solar radiation, and only about 168 W/m2 hits the surface. That 168 W/m2 on its own would give a temperature of about -40°C, with the warming to the current temperature of ~15°C coming from the 325 W/m2 of DLR. This gives a total warming of 55°C from 325 W/m2 of DLR, for a sensitivity of 0.17°C per W/m2.

    We must bear in mind, though, that this is the average warming over the entire temperature range. But since the parasitic losses (sensible heat, latent heat, vertical transport to regions of reduced GHGs, and hydrometeors) increase with increasing temperature, the current sensitivity must be less than the sensitivity averaged over the whole range. Thus, 0.17°C per W/m2 is the maximum possible sensitivity, with the actual sensitivity likely to be much lower than that.

    Can we estimate how much lower? Kiehl/Trenberth estimate the total sensible and latent heat loss at about 100 W/m2, and the radiant heat loss at 390 W/m2. This means that about a fifth of the loss is parasitic. This parasitic loss is already included in the 0.17 sensitivity figure.

    Again, though, this 0.17°C per W/m2 is a maximum estimate, because parasitic losses increase with ΔT, and we need to use the modern value rather than an average value. Can we get a better estimate? In an earlier thread, the results of Collins et al. (JGR, 2006) are discussed. They say that the climate models use a figure of about 75% of the change in forcing being taken up in sensible and latent heat, meaning that only 25% of the change in forcing actually raises the temperature. This would lower the sensitivity by 75%, and would imply a current sensitivity on the order of 25% of 0.17 = 0.04°C per W/m2.

    It is worth noting that the maximum estimate of 0.17 includes all feedbacks, positive and negative. Reducing it to 0.04, on the other hand, requires assuming that there is no net feedback. If there is net positive feedback, the answer lies between the minimum of 0.04 and the maximum of 0.17°C per W/m2; since the 0.17°C per W/m2 includes all feedbacks, the sensitivity cannot be larger than that. This implies a change of 0.15° to 0.6°C for a doubling of CO2.
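
    The chain of numbers, collected in one place (a sketch that merely reproduces this comment’s arithmetic; it is not an endorsement of the argument):

        DLR = 325.0      # W/m2 downwelling longwave, as cited above
        WARMING = 55.0   # K, from about -40 C (168 W/m2 alone) to +15 C

        s_max = WARMING / DLR    # ~0.17 K per W/m2, the claimed all-feedback maximum
        s_min = 0.25 * s_max     # ~0.04, if 75% of added forcing goes to parasitic losses
        F_2XCO2 = 3.7            # W/m2 forcing for doubled CO2

        print(s_min * F_2XCO2, s_max * F_2XCO2)  # ~0.16 to ~0.63 K per doubling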

    Now, perhaps there’s some kind of hole in this argument, but I don’t see what it is. All criticism and hole-poking gratefully accepted.

    w.

  79. Gaudenz Mischol
    Posted Apr 4, 2007 at 3:08 AM | Permalink

    Dear Willis

    I’m far from poking any holes in your calculations (too much of a layman). The question Jason L. raised in his post (#65) is exactly what I have been thinking about for some time. Are there different vertical temperature profiles for IR absorption due to increased CO2 versus increased heating due to solar/cloud changes?

    Does anybody have any idea?

  80. jae
    Posted Apr 4, 2007 at 10:46 AM | Permalink

    78, Willis: Indeed, my data show an average sensitivity of 0.11°C per W/m2, but I find some locations that are at 0.2, and I believe even higher sensitivities are possible at higher altitudes. Parasitic losses are exceptionally high in areas dominated by the Pacific Ocean (sensitivities there are about half those in other areas).