More on Functional Forms: Wigley 1987

Over the last week or so, I’ve reported on my efforts to locate the provenance of the functional forms for the relationship between levels of CO2 and other greenhouse gases and temperature. Luboš has also chipped in on the topic from a different perspective, proposing a derivation of a log formula from first principles.

We’ve noted that AR4 endorsed these particular TAR results (here), that Myhre et al 1998 was the primary source for these TAR results (here), and that Myhre et al 1998 specifically applied the IPCC 1990 forms (see here); we noted that IPCC 1990 attributed the forms to Wigley 1987 and Hansen et al 1988 (see here for the IPCC 1990 discussion), and that Hansen et al 1988 Appendix B simply stated results, attributed there to the Lacis et al 1981 radiative-convective model.

The other leg of their argument was Wigley 1987, published in Climate Monitor, a house organ of CRU, where Wigley was then employed. I doubt whether this was rigorously “peer reviewed”. However, the CRU authors are leaders in their field and I see no reason to disrespect Wigley 1987 merely because it appeared in a house organ. It has not been easy to locate, though: the University of Toronto did not have a copy, and Wigley himself said that he did not have one. A CA reader has now located a copy and kindly emailed me a scanned version, enabling this source to be tracked down a bit further.

Once more there’s rather a dead end. Wigley 1987 simply stated his results, rather than deriving them, as shown below. Wigley also had some interesting comments about GCM performance in this article, which I’ve also excerpted at length below.

The Logarithmic Formula

Wigley simply states the results without deriving them:

On theoretical grounds it can be shown that the relationship between radiative forcing change at the top of the troposphere and concentration change is linear at low concentrations, square root at intermediate values and logarithmic at higher concentrations. Because of this, the results of detailed radiative transfer calculations for the various trace gases give a linear concentration dependence for CFCs, square root for CH4 and N2O and logarithmic for CO2.

For CO2 and CH4, I have used results from the Kiehl and Dickinson 1987 model, supplied by Jeff Kiehl.

For CO2 over the range 250 ppmv to 600 ppmv, the Kiehl-Dickinson model gives a change in radiative forcing ΔQ, resulting from a concentration change from C_0 to C which can be described by:

CO2: ΔQ = 6.333 ln(C/C_0) (14)

to within 0.01 wm-2. Note that the precision of this fit should not be confused with the accuracy of the implied ΔQ values. The equation is probably accurate to about +-10% with similar accuracy for the results for other trace gases given below. Equation (14) implies a ΔQ value of 4.39 wm-2 for a doubling of CO2 concentration.
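
As a quick check of the arithmetic, here is a minimal sketch in Python (the base concentration C_0 = 300 ppmv below is an arbitrary illustrative value; the doubling figure does not depend on it):

    import math

    # Wigley's equation (14), fitted over 250-600 ppmv
    def delta_q(c, c0=300.0):
        return 6.333 * math.log(c / c0)

    print(round(delta_q(600.0, 300.0), 2))   # doubling: 4.39 wm-2, as stated
    print(round(delta_q(600.0, 250.0), 2))   # full fitted range: about 5.54 wm-2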

Luboš also believes that the relationship is logarithmic, and this idea is plausible. That may very well be, but I would be surprised if Wigley had precisely the same proof in mind. Hans Erren has written in saying that a logarithmic form was stated by Arrhenius, but Arrhenius’ results were not derived “on theoretical grounds” within the terms of Wigley’s assertion above. I suspect that there is some folk history to the linear-square root-logarithm rule within the climate community of the 1980s – Ramanathan probably has something on it, but this is a dead end in terms of tracking IPCC references. There is more to be said on the methods of Lacis et al 1981 and Myhre et al 1998, which I will get to.

Wigley on GCMS
Wigley 1987 contained an interesting discussion on a topic that continues to this day: the divergence between the warming predicted by GCMs and the historical record. Wigley:

The accepted range of equilibrium warming due to a doubling of CO2 concentration (or its radiative equivalent) is 1.5-4.5 deg C. Most recent GCM studies have given values of 4 deg C or more. A 4 deg C warming for doubled CO2 corresponds to a climate sensitivity of about 1 deg C/wm-2, i.e. an equilibrium warming of about 1.7 deg C for the 1880-1985 radiative forcing of 1.7 wm-2. This is very much larger than the observed global mean surface air temperature change. This discrepancy, which is partly accounted for by oceanic lag effects, has been noted earlier by other authors e.g. Gilliland and Schneider 1984; Wigley and Schlesinger 1985. It has a number of possible explanations: the magnitude of global warming may have been considerably underestimated; the damping or lag effect of the oceans may be much greater than is currently believed; large additional forcings may be operating on the century time scale; and/or the most recent GCM studies may have overestimated the climate sensitivity.
Uncertainties in the observational temperature record are discussed by Wigley et al 1985 and Jones et al 1986. Current opinion is that, if anything, the amount of warming has been overestimated, an opinion not shared by myself and my colleagues. I will not consider this option further, but, instead concentrate on the other three possibilities.
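
As a check of the arithmetic in this passage, a minimal sketch (using the 4.39 wm-2 doubling forcing implied by equation (14); Wigley’s exact working is not stated):

    lam = 4.0 / 4.39            # sensitivity implied by 4 deg C per doubling: about 0.9 deg C/wm-2
    print(round(lam, 2))
    print(round(lam * 1.7, 2))  # equilibrium warming for the 1880-1985 forcing of 1.7 wm-2: about 1.5 deg C
    print(round(1.0 * 1.7, 2))  # about 1.7 deg C with the rounded sensitivity of 1 deg C/wm-2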

Wigley then goes on to consider oceanic lag as an explanation for the non-response, concluding against this on the basis that the “only way that one could obtain the observed warming would be for vertical ocean mixing to be much greater than could be obtained with a pure diffusion or upwelling-diffusion ocean model”.

He then discusses the possibility of an overlooked forcing, noting that, given GCM sensitivity, one would have to reduce the 1880-1985 radiative forcing by 0.7 wm-2 or more. He canvassed the possibility of a decline in solar output over the 20th century as an explanation (something that all parties would now seem to agree is exactly the opposite of what was going on):

This would occur if some other external forcing agent were operating on the century time scale. The obvious possibilities are solar irradiance changes and/or long time scale changes in the volcanogenic aerosol changes of the stratosphere. …Solar variations of this magnitude cannot be ruled out. A decline of 0.7 wm-2 would correspond to a 0.3% reduction in solar output, well within the uncertainty in historical (pre-satellite) measurements of the solar constant. Recent satellite data show a decline of about 0.1% in irradiance over the 1979-85 period (Kyle et al; Willson et al 1986) attesting to the feasibility of a 0.3% decline over the past century or so.
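
A rough check of the solar figures, assuming a solar constant of about 1368 wm-2 and a planetary albedo of 0.30 (assumed values; Wigley does not state his):

    S0, albedo = 1368.0, 0.30
    dF = 0.003 * (S0 / 4.0) * (1.0 - albedo)   # 0.3% decline, averaged over the sphere, less the reflected part
    print(round(dF, 2))                        # about 0.72 wm-2, consistent with the 0.7 wm-2 figure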

He dismissed the potential forcing from volcanic aerosols as being anything other than transient. His other suggestion was planetary albedo:

Another possibility is a long-term increase in planetary albedo. Since the incoming radiation is about 240 wm-2, an increase of only 0.002 in the planetary albedo i.e. about 0.7% would be sufficient to reduce the net radiation balance by 0.7 wm-2.
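
Again the arithmetic is easy to check, assuming a top-of-atmosphere insolation of about 342 wm-2 and a present albedo of about 0.30 (assumed values, not stated in the article):

    print(round(0.002 * 342.0, 2))        # about 0.68 wm-2 less absorbed solar radiation
    print(round(0.002 / 0.30 * 100, 1))   # a relative change in albedo of about 0.7%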

Notably and surprisingly missing from this inventory were manmade aerosols. I think that Charlson (Hansen) et al 1990 was seminal in putting these into the mix as an explanation for the divergence. As I noted in my comments on Ramanathan’s AGU presentation, while the “discovery” of manmade aerosols seems to be somewhat opportunistic, the aerosols themselves are real enough and the effect has to be considered in a historical context. (Of course, opportunism can creep in, as one notes from the inverse relationship between GCM sensitivity and aerosol history, so there’s a lot of softness in this topic.)

Wigley’s third alternative is that GCMs are too sensitive:

A third possibility is that the climate sensitivity of about 1 deg C/wm-2 implied by recent GCMs is too high. If one accounts for the ocean damping effect using either a PD or UD model, and, if one assumes that greenhouse gas forcing is dominant on the century time scale, then the climate sensitivity required to match model predictions is only about 0.4 deg C/wm-2. This corresponds to a temperature change of less than 2 deg C for a CO2 doubling. Is it possible that GCMs are this far out? The answer to this question must be yes. Feedbacks involving sea-ice and cloud variations are still relatively poorly handled by all climate models and the feedback due to changes in cloud optical properties (e.g. Somerville and Remer 1984) has not been included in any GCM studies. This latter factor alone could possibly reduce the climate sensitivity by a factor of two.

The model uncertainties described in (1) [oceanic lag] and (3) [sensitivity] are of course well known. Their existence is the reason that, in spite of recent GCM results, the equilibrium temperature change due to a CO2 doubling is still generally given as lying in the range 1.5-4.5 deg C. The lower limit is entirely compatible with observations.
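
The 0.4 deg C/wm-2 figure in the passage above can be checked against equation (14) in a couple of lines:

    import math

    dq_2x = 6.333 * math.log(2.0)   # about 4.39 wm-2 per doubling, from equation (14)
    print(round(0.4 * dq_2x, 2))    # about 1.76 deg C, i.e. "less than 2 deg C for a CO2 doubling"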

It’s interesting to see once again the references to cloud feedback as a major uncertainty and the possibility that a particular cloud feedback could reduce climate sensitivity. I wonder how IPCC represented the uncertainties, as stated here by Wigley, in their contemporary reports. I’ll look at that some time.

18 Comments

  1. Peter D. Tillman
    Posted Jan 11, 2008 at 11:49 AM | Permalink

    Steve, re

    Ramanathan probably has something on it, but this is a dead end in terms of tracking IPCC references.

    But not in terms of Ram himself, who is alive & well in (ims) California (I posted his web page TOD). Why not email him, & ask for his account?

    Incidentally, all the early accounts of sensitivity modelling I posted a few days ago (mostly from Weart’s, http://www.aip.org/history/climate/) also mention clouds as the “great unknown” factor. See http://www.climateaudit.org/?p=2572 , post #7.

    Steve, for once the internal Google search found that post, but this is my first success in my last 3 or 4 tries. To my mind, this is the most urgent software issue facing this blog. Without a reliable search function, what use is an archive?

    Cheers — Pete Tillman

    Steve:
    I have a reliable search function available to me in editor mode. If you can give me instructions on how to make it available to others in a reader mode, I’m happy to do it.

  2. DeWitt Payne
    Posted Jan 11, 2008 at 12:43 PM | Permalink

    Grant Petty in A First Course in Atmospheric Radiation (Second Edition) explains why there is linear, square root and logarithmic (exponential) behavior for different gases at different concentrations. It has to do with line shape, number of lines and overlap. For any line shape at very low optical density the behavior is linear. For an isolated Lorentzian line at high optical density, the limiting behavior is square root. If the line width and number of lines are so high that they severely overlap over a broad wavelength range, Beer’s Law applies and transmission decays exponentially with concentration or stated another way, absorption increases logarithmically.

    You can find this stuff in textbooks, but it’s so basic you are unlikely to find it in the primary literature unless you dig down a very long way. If there ever is an explication of the 2.5 C/doubling CO2 climate sensitivity, it will probably first appear in a textbook. But that won’t happen until the level of understanding is much higher than it is now.
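
    A minimal numerical sketch of the isolated-line case, in arbitrary units with an assumed unit half-width (illustrative only, not taken from Petty), shows the transition from linear to square-root growth of the line’s equivalent width:

        import numpy as np

        # Equivalent width of an isolated Lorentzian line as the absorber amount grows.
        # tau0 is the optical depth at line centre; gamma is the (assumed) half-width.
        gamma = 1.0
        nu = np.linspace(-3000.0, 3000.0, 600001)    # frequency grid, wide enough for the line wings
        dnu = nu[1] - nu[0]
        profile = gamma**2 / (nu**2 + gamma**2)      # Lorentzian shape, equal to 1 at line centre

        for tau0 in (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0):
            W = np.sum(1.0 - np.exp(-tau0 * profile)) * dnu
            print(tau0, round(W, 3))
        # For tau0 << 1, W grows linearly (about pi*gamma*tau0); for tau0 >> 1 it grows
        # roughly as 2*gamma*sqrt(pi*tau0).  With many strongly overlapping lines the band
        # acts as a grey absorber and Beer's law (exponential decay of transmission) takes over.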

  3. Kenneth Fritsch
    Posted Jan 11, 2008 at 4:01 PM | Permalink

    Re: #2

    Perhaps we are finally getting down to a definitive explanation. As a practicing chemist many years ago I was familiar with the practical applications of Beer’s Law in determining concentrations of absorbing species in solutions, but even that straight line relationship could “bend down” at higher concentrations and as I recall become almost flat. I had never given much thought to what the relationships would be with gases at various concentrations, but after a lapse in adding to my knowledge base perhaps it is resuming.

    Are the ranges at which linear, square root and exponential relationships apply to optical transmission of an absorbing gas all that distinct?

  4. DeWitt Payne
    Posted Jan 11, 2008 at 4:15 PM | Permalink

    Re: #3

    Are the ranges at which linear, square root and exponential relationships apply to optical transmission of an absorbing gas all that distinct?

    I don’t think so.

    The deviation from Beer’s Law at high absorbance in a real spectrophotometer, at least in the UV/Vis range, is generally considered to be due to stray light intensity becoming significant compared to light transmitted through the cell rather than some fundamental problem with the theory. Unless you have a really good spectrophotometer, absorbances over about 2 will likely not be in agreement with Beer’s Law. You might be able to go to 3 (99.9% absorption) under the best conditions.

  5. SteveSadlov
    Posted Jan 11, 2008 at 4:45 PM | Permalink

    RE: Anthropogenic aerosols. The conventional wisdom has typically been very Euro- and Amero-centric, namely, that since pollution controls ramped up hard during the 1970s and 1980s, anthro aerosols are a diminishing issue. Of course, by now, the Asian Brown cloud is common knowledge. One hedging tactic seen is to mention the short residence time of the subject particles. But that is a red herring. The emission rate is such that there is a continual plume covering a good portion of Asia, the North Pacific and parts of the Arctic. Yes, individual particles’ residence times are much shorter than those of GHGs, but the plume made up of the train of such particles is persistent and growing.

  6. MJW
    Posted Jan 11, 2008 at 6:09 PM | Permalink

    DeWitt Payne:

    If the line width and number of lines are so high that they severely overlap over a broad wavelength range, Beer’s Law applies and transmission decays exponentially with concentration or stated another way, absorption increases logarithmically.

    If transmission decays exponentially (E_total*exp(-a*c)), absorption increases as E_total*(1-exp(-a*c)). They aren’t inverse functions. The total energy is equal to the sum of the energy transmitted and the energy absorbed.

  7. DeWitt Payne
    Posted Jan 11, 2008 at 6:23 PM | Permalink

    MJW, you are correct. Put a period after “decays exponentially with concentration” and strike the last part of that sentence. Wasn’t thinking.

  8. John Creighton
    Posted Jan 11, 2008 at 6:23 PM | Permalink

    #6 It depends on how the absorptivity “a” is distributed. See:
    http://www.climateaudit.org/?p=2570#comment-193595

  9. Geoff Sherrington
    Posted Jan 11, 2008 at 10:39 PM | Permalink

    The grass roots derivation of Beer’s Law is purely logarithmic. In lab use it is confined to low optical densities, usually well below 1, because noise becomes too large in the signal. Great care has been taken with spectrometers to get the relationship between reference and test specimens correct. These conditions simply do not apply in the atmosphere. There are complications, for example, from particulate scattering (several variations), from high optical densities and from temperature and pressure and co-compositional changes in the optical path, which might not be parallel with the radiative path in any case.

    It should not be assumed that a certain type of relationship, just because it fits well in a certain concentration range, can be extrapolated and turned into a general rule.

    I have not read the whole of the article cited below, but the abstract gives an indication of some of the complexity. Given that some of the variables are as yet unable to be modelled or properly measured, great care should be taken in arriving at conclusions. Reference follows – I’d go broke if I had to pay the access fee for every article of interest.

    L Wind and W W Szymanski
    Institute of Experimental Physics, Aerosol Laboratory, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria
    Abstract. We present a modelled approach of scattering contribution to the radiation transmission through a scattering medium, such as an aerosol, yielding a correction term to the Lambert-Beer law. The correction is essential because a certain amount of the forward scattered light flux is always overlaid on the transmitted radiation. Hence it enters together with the attenuated beam into the finite aperture of any detector system and therefore constitutes a potential problem in the inversion of measured data. This correction depends not only on the geometry of the measuring system but also substantially on the optical depth of the medium. We discuss the numerical analysis of the magnitude and functional behaviour of the scattering correction for a number of important measuring parameters and we give a simple approximation for the determination of the range of applicability of the scattering correction for single scattering conditions. Finally, we show that the derived expressions yield useful values of optical depths at which non-negligible multiple scattering effects occur.

    Keywords: extinction, aerosols, scattering corrections, single scattering, multiple scattering

    An erratum for this article has been published in 2002 Meas. Sci. Technol. 13 951

    Print publication: Issue 3 (March 2002)
    Received 29 June 2001, in final form 13 November 2001, accepted for publication 16 November 2001
    Published 8 February 2002

  10. MJW
    Posted Jan 12, 2008 at 2:43 AM | Permalink

    Geoff Sherrington:

    The grass roots derivation of Beer’s Law is purely logarithmic.

    What’s logarithmic is the concentration as a function of the fraction of transmitted light. Which makes the fraction of transmitted light an exponential function of the concentration.

  11. RW
    Posted Jan 12, 2008 at 9:53 AM | Permalink

    I may be wrong but I think the derivation of the logarithmic relation for CO2 is similar to the ‘curve of growth’ concept in astronomy – an example derivation can be found here. As you increase the column depth of an absorber, the increase in equivalent width of the absorption line is directly proportional at small column depths, proportional to the logarithm at intermediate column depths, and proportional to the square root at high column depths.

  12. Posted Jan 12, 2008 at 11:03 AM | Permalink

    In stellar atmospheres, the relation between line strength change and concentration (abundance) is linear for small concentrations, almost constant for intermediate concentrations and square-root for high concentrations; see for example http://www.physics.uq.edu.au/people/ross/phys2080/spec/photo.htm
    Is not line strength proportional to radiative forcing? Or do perhaps other physical laws apply for the atmosphere of the Earth and the atmospheres of other objects in the universe?

  13. Steve McIntyre
    Posted Jan 12, 2008 at 11:22 AM | Permalink

    Many posters are missing how one goes from absorbance to radiative forcing in an atmosphere with a tropopause. Not as easy as you think.

  14. maksimovich
    Posted Jan 13, 2008 at 11:56 PM | Permalink

    Ramanathan probably has something on it, but this is a dead end in terms of tracking IPCC

    Ramanathan radiative-convective equilibrium.

  15. George M
    Posted Jan 14, 2008 at 10:02 PM | Permalink

    Further on SteveSadlov’s contribution: Are the Sahara dust aerosols which drift across the Atlantic and partly end up in the US considered anthropogenic or natural? A case could be made for either, depending on the time scale. Apparently there is some evidence that they have existed since long before satellite photography, but the loss of ground cover in the Sahara has some anthropogenic roots in the long view. Ah, those pesky aerosols.

  16. DeWitt Payne
    Posted Jan 15, 2008 at 6:41 PM | Permalink

    Re: #9

    Geoff,

    The correction involved appears to be because the measurement system is imperfect (finite angle of acceptance), not a problem with the theory itself. It’s similar to how stray light in a spectrophotometer limits the range of linear behavior of absorbance with concentration. Exponential extinction of transmitted radiation should continue for many more orders of magnitude than it is possible to measure in the lab.

  17. Ivan
    Posted Jul 26, 2008 at 4:08 AM | Permalink

    Steve, as for cloud feedback, maybe it is worth discussing the new major research breakthrough by Roy Spencer’s team, in their published work, or work in press or under review (http://en.wikipedia.org/wiki/Roy_Spencer_%28scientist%29#cite_note-weatherquestions-6, or http://blog.acton.org/uploads/Spencer_07GRL.pdf, or http://www.weatherquestions.com/Climate-Sensitivity-Holy-Grail.htm ).

    Spencer seems to have found the source of an artificial positive bias in diagnosing cloud feedback, which, if corrected, can reduce climate sensitivity to a mere 0.8 deg K. If Spencer is right (no one has seriously disputed his conclusions thus far; he even presented his results at Colorado University with 40 climatologists attending, and none objected), then all of the IPCC science is completely wrong, and your demand that AGW supporters provide an engineering-quality study of 2xCO2 leading to 3 deg of warming is unnecessary. They will be shown to be wrong, not merely to be stating uncertain propositions without engineering-quality support.

  18. Richard Patton
    Posted Jul 26, 2008 at 10:02 AM | Permalink

    One concern I have with Spencer’s recent study is that when he uses a 7-day moving average there are no linear striations apparent in the data. When he uses a 91-day moving average, the striations appear. But what would happen if he used a 60-day moving average? Or a 120-day moving average?

    Could these striations somehow be an artifact of the data being auto-correlated and the time scale of the moving average?
    I have no idea if this is or could be the case but it would be nice to rule this out.

    Does anyone know how to get the underlying data he used?