AR4: "ad hoc tuning of radiative parameters"

Chapter 1 of AR4 contains some surprisingly interesting comments about models, points that, to the extent they are disclosed in the body chapters at all, are disclosed so opaquely that they would be undecipherable to anyone other than a few specialists. Here are some interesting comments about flux adjustment, an issue that must surely raise civilian eyebrows. A “flux adjustment” in a GCM is defined below as an “empirical correction that could not be justified on physical principles”, i.e., a fudge factor, and one of the claimed accomplishments of recent GCMs has been to get past that. AR4:

The strong emphasis placed on the realism of the simulated base state provided a rationale for introducing ‘flux adjustments’ or ‘flux corrections’ (Manabe and Stouffer, 1988; Sausen et al., 1988) in early simulations. These were essentially empirical corrections that could not be justified on physical principles, and that consisted of arbitrary additions of surface fluxes of heat and salinity in order to prevent the drift of the simulated climate away from a realistic state. The National Center for Atmospheric Research model may have been the first to realise non-flux-corrected coupled simulations systematically, and it was able to achieve simulations of climate change into the 21st century, in spite of a persistent drift that still affected many of its early simulations. Both the FAR and the SAR pointed out the apparent need for flux adjustments as a problematic feature of climate modelling (Cubasch et al., 1990; Gates et al., 1996).

By the time of the TAR, however, the situation had evolved, and about half the coupled GCMs assessed in the TAR did not employ flux adjustments. That report noted that ‘some non-flux adjusted models are now able to maintain stable climatologies of comparable quality to flux-adjusted models’ (McAvaney et al., 2001). Since that time, evolution away from flux correction (or flux adjustment) has continued at some modelling centres, although a number of state-of-the-art models continue to rely on it.

This raises an obvious question: which “state-of-the-art models” continue to rely on flux adjustments? One of the annoying aspects of IPCC WG1 reports is their refusal to make such identifications, which might put one of the group in hot water with his funders, I suppose. I’d like to know which models make flux adjustments so that I can keep an eye out when the “ensemble” results are reported.
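
To make the term concrete, here is a minimal toy sketch of what an additive flux adjustment does numerically. It is my own illustration, not code from any modelling centre; the heat capacity, time step and flux numbers are all invented for the example:

```python
# Toy illustration of an additive "flux adjustment" (illustrative only).
# A fixed correction, derived from a control run, is added to the surface
# heat budget every step to suppress drift of the simulated climate.

HEAT_CAPACITY = 4.2e8   # J/(m^2 K), roughly a 100 m ocean mixed layer
DT = 86400.0            # one-day time step, in seconds

def step_sst(sst, net_surface_flux, flux_adjustment=0.0):
    """Advance sea surface temperature (K) given a net surface heat
    flux (W/m^2). `flux_adjustment` is the empirical correction AR4
    describes: it is added regardless of the model's physical state."""
    total_flux = net_surface_flux + flux_adjustment
    return sst + total_flux * DT / HEAT_CAPACITY

# Suppose the coupled model's physics leave a spurious -3 W/m^2 imbalance,
# so the free-running model drifts cold.
sst_free = sst_adj = 288.0
for day in range(3650):  # ten model years
    sst_free = step_sst(sst_free, -3.0)                      # drifts
    sst_adj = step_sst(sst_adj, -3.0, flux_adjustment=3.0)   # pinned

print(f"after 10 years: free-running {sst_free:.2f} K, "
      f"flux-adjusted {sst_adj:.2f} K")
```

The adjusted run stays at a realistic 288 K not because the physics is right but because the correction cancels the bias, which is exactly the objection quoted above.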

They go on to make the following interesting comment, which I’ve not seen in print elsewhere:

(1.5.3) The design of the coupled model simulations is also strongly linked with the methods chosen for model initialisation. In flux adjusted models, the initial ocean state is necessarily the result of preliminary and typically thousand-year-long simulations to bring the ocean model into equilibrium. Non-flux-adjusted models often employ a simpler procedure based on ocean observations, such as those compiled by Levitus et al. (1994), although some spin-up phase is even then necessary. One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).

No reference is given for this powerful statement. This is exactly what Gavin Schmidt denies and yet here’s IPCC WG1 worrying about “ad hoc tuning”. Does anyone know anything more about this?
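
As a gloss on what an “implicit flux adjustment” of radiative parameters would amount to: in a zero-dimensional energy balance model, tuning a radiative parameter until the top-of-atmosphere budget closes at the observed temperature does the same job as adding an explicit correction flux. A minimal sketch of my own, assuming only the textbook energy balance (the emissivity knob and all numbers are mine, not anything from AR4 or a GCM):

```python
# Toy energy-balance illustration of an "implicit flux adjustment":
# instead of adding a correction flux, tune a radiative parameter
# (effective emissivity) until the model sits at the observed state.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1365.0       # solar constant, W/m^2
ALBEDO = 0.30
ABSORBED = S0 * (1 - ALBEDO) / 4.0   # ~239 W/m^2 absorbed shortwave

def toa_imbalance(temp, emissivity):
    """Net top-of-atmosphere flux (W/m^2) at surface temperature temp (K)."""
    return ABSORBED - emissivity * SIGMA * temp**4

T_OBS = 288.0
# "Tune" the emissivity so the budget closes exactly at the observed state:
eps_tuned = ABSORBED / (SIGMA * T_OBS**4)
print(f"tuned emissivity: {eps_tuned:.4f}")
print(f"imbalance at 288 K: {toa_imbalance(T_OBS, eps_tuned):+.2e} W/m^2")

# With an untuned emissivity (say 0.60), holding the model at 288 K would
# instead require an explicit correction flux of this size:
print(f"explicit correction needed at eps = 0.60: "
      f"{-toa_imbalance(T_OBS, 0.60):+.1f} W/m^2")
```

Either way, a number is chosen so that the base state looks right; the only difference is where the knob sits.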

41 Comments

  1. Larry
    Posted Jan 6, 2008 at 4:24 PM | Permalink

    I think the “flux adjustment” is kind of like the “flux capacitor”:

    http://en.wikipedia.org/wiki/Flux_capacitor

  2. H
    Posted Jan 6, 2008 at 5:18 PM | Permalink

You can see which models use flux adjustment in Figs. 14-16:

Covey et al. (2003), Global and Planetary Change (PDF): Coveyetal-GlobPlanChng03.pdf

  3. kim
    Posted Jan 6, 2008 at 5:26 PM | Permalink

    Throughout nature tautologies create reality.
    ===========================

  4. Posted Jan 6, 2008 at 6:01 PM | Permalink

    Steve M, you misused the term “begs the question.” Begging the question does not mean the same as “invites the question.” Begging the question means that you are answering the question in a tautological manner.

    I know that most journalists and journalism professors cannot tell the difference, but a “climate auditor” needs to watch his language.
    😉

  5. Ron Cram
    Posted Jan 6, 2008 at 6:32 PM | Permalink

    H,

    Thank you for that link. Table 1 seems to provide the key info on which model runs have flux adjustments and what they are. Of the 18 GCMs listed, only six show no flux corrections.

    This paper was submitted in 2001, accepted in 2002 and published in 2003. Has this paper ever been updated?

  6. John Norris
    Posted Jan 6, 2008 at 6:34 PM | Permalink

    re #2 H
    Okay, I chased the link and garnered the following from the Fig 14 caption:

    Alternate observationally based data sets: ERA15, NCEP

    Flux Adjusted: BMRC, CCCMA, CCSR, CSIRO, ECHAM3+LSG, ECHAM4+OPYC3, GFDL, HadCM2, IAP/LASG, MRI

    Not Flux adjusted: CERFACS, DOE-PCM, GISS, HadCM3, LMD/IPSL, NCAR-CSM

Having read a little bit of GCM source code and some GCM readme files, it would not surprise me at all if most of these GCMs have common lineage, or at least share common code. As Steve has demonstrated with numerous proxy studies, there is a limited number of fresh starts. A lot of published proxy studies use rehashed proxies, perhaps adding a few new ones, or applying a new technique.

    I suspect it is the same with many GCMs. I would like to see a GCM family tree. Anyone know where I can find the ancestry?

  7. kim
    Posted Jan 6, 2008 at 7:16 PM | Permalink

    AF, the question is ‘Which state of the art models still beg the question?’ Shall I attribute the awkwardness to haste?
    ===========================

  8. Pat Keating
    Posted Jan 6, 2008 at 7:19 PM | Permalink

I’m sure that GISS and Columbia U. models are very close...

  9. Posted Jan 6, 2008 at 7:26 PM | Permalink

    Kim, I would say that virtually all state of the art models still beg the question. I suspect that Steve M. largely agrees, although his emphasis has been more on proxies than GCMs.

  10. Steve McIntyre
    Posted Jan 6, 2008 at 9:27 PM | Permalink

    #2. That’s a 2003 paper. While it doubtless serves as a guide, one would need to do the same table for the AR4 models.

  11. Erik
    Posted Jan 6, 2008 at 9:44 PM | Permalink

If you put into your model that for a 1 degree increase in temperature due to CO2 the feedback mechanisms add another 1.5 degrees C, that is an unstable system, and you will need to add fudge factors to make your model stable.
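
    A minimal sketch of that arithmetic, assuming the usual geometric feedback sum (the numbers are illustrative):

    ```python
    # Toy version of the point above: a feedback that adds `gain` degrees per
    # degree only converges when |gain| < 1; gain = 1.5 runs away.

    def total_warming(direct, gain, rounds=50):
        """Sum successive rounds of feedback on an initial `direct` warming."""
        total, increment = 0.0, direct
        for _ in range(rounds):
            total += increment
            increment *= gain  # each round feeds back on the previous one
        return total

    for gain in (0.5, 0.9, 1.5):
        note = f"(limit {1.0 / (1.0 - gain):.3g} C)" if gain < 1 else "(diverges)"
        print(f"gain {gain}: {total_warming(1.0, gain):.3g} C after 50 rounds {note}")
    ```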

  12. Ian McLeod
    Posted Jan 6, 2008 at 9:59 PM | Permalink

I ran across this web page the other day and thought it was applicable. The IPCC does have a policy for submitting model information. If you have developed a GCM and want to submit your results to the IPCC for perusal, your model must contain specific features before your outputs will be reviewed.

    http://www-pcmdi.llnl.gov/ipcc/standard_output.html

    There is a lot of info here but some of the more interesting stuff is way down at the bottom of the page. I have not seen this link come up thus far in the recent discussions and thought it might add something to the debate.

  13. Steve McIntyre
    Posted Jan 6, 2008 at 10:20 PM | Permalink

http://www-pcmdi.llnl.gov/ipcc/model_documentation/ipcc_model_documentation.php indicates that nearly all of the CMIP3 GCMs were non-flux-adjusted. Only 4 CMIP3 models acknowledged flux adjustments in a quick browse: BCC-CM1; ECHO-G; MRI-CGCM2.3.2; INM-CM3.0.

  14. Raven
    Posted Jan 6, 2008 at 10:40 PM | Permalink

I remember reading that there is no reliable data about the effects of aerosols, so the GCMs are required to estimate their effect. The usual technique for estimating it is to calculate the difference between reality and the theory and assume the difference is due to aerosols. I can’t find the link right now, but there are lots of papers on uncertainties in aerosol modelling. Given those uncertainties, it is unlikely that any GCM could have reproduced the past without tuning aerosols to produce the desired result.

    Here is one of the papers on the topic:
    http://climatesci.colorado.edu/2007/02/26/direct-and-indirect-effects-of-anthropogenic-aerosols-on-regional-precipitation-over-east-asia/
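
    A caricature of that residual procedure, entirely schematic and with made-up numbers (no real GCM is three lines of arithmetic):

    ```python
    # Schematic caricature of "tuning aerosols to the residual": whatever gap
    # remains between observed and modelled warming is assigned to aerosol
    # forcing, guaranteeing an after-the-fact match. All numbers invented.

    LAMBDA = 0.8  # assumed climate sensitivity parameter, K per (W/m^2)

    observed_warming = 0.7        # K over the hindcast period
    modelled_ghg_warming = 1.2    # K from greenhouse forcing alone

    residual = observed_warming - modelled_ghg_warming   # -0.5 K to explain
    inferred_aerosol_forcing = residual / LAMBDA         # W/m^2

    print(f"residual to explain: {residual:+.2f} K")
    print(f"aerosol forcing 'inferred' from it: {inferred_aerosol_forcing:+.2f} W/m^2")
    # The circularity: the same hindcast cannot then be offered as an
    # independent validation of the model's sensitivity.
    ```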

  15. Raven
    Posted Jan 6, 2008 at 11:00 PM | Permalink

    Here is the paper that points out the ‘GCM tuning’ for aerosols: http://www.ecd.bnl.gov/pubs/BNL-71341-2003-JA.pdf

  16. hswiseman
    Posted Jan 6, 2008 at 11:08 PM | Permalink

…and I am sure the non-flux-adjusted models don’t incorporate any datapoints or forcing values from their flux-adjusted cousins…

  17. Demesure
    Posted Jan 7, 2008 at 1:17 AM | Permalink

For the French non-flux-adjusted model LMD/IPSL (the team of Hervé Le Treut, lead author of Chapter 1 of WG1 AR4), here is the archive of internal correspondence between modellers.

It seems they have divergence problems of their own; for example, some quite “funny” and illustrative translations from this letter:
– “Olivier has mentioned that the problem of snow accumulation reaching several km must be resolved”
– “Flux comparisons between the top and bottom of the atmosphere show a discrepancy of about a dozen W/m2; it’s too much”
– “Zonal means show a big cold bias (5 to 15°C) at the tropopause”

  18. rafa
    Posted Jan 7, 2008 at 2:09 AM | Permalink

See page 596, section “8.1.3.1 Parameter Choices and ‘Tuning’”:

“The number of degrees of freedom in the tuneable parameters is less than the number of degrees of freedom in the observational constraints used in model evaluation. This is believed to be true”

They “believe” it is true. No one has explicitly audited whether it actually is.

“No studies are available that formally address the question”, it also says there.

    What happens when there are more tuning knobs than degrees of freedom?
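
    A toy demonstration of the answer, using a polynomial as a stand-in for the tuning knobs (my example, nothing to do with any actual GCM):

    ```python
    # With as many knobs as constraints, a perfect fit is guaranteed and
    # tells you nothing about whether the physics is right.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 6)   # six "observational constraints"
    y = rng.normal(size=6)         # pure noise, no physics at all

    # A degree-5 polynomial has six coefficients, i.e. six tuning knobs:
    coeffs = np.polyfit(x, y, deg=5)
    residual = y - np.polyval(coeffs, x)

    print(f"max residual, 6 knobs on 6 points: {abs(residual).max():.1e}")
    # Essentially zero: the "model" reproduces the noise exactly, so a
    # good fit to the training data is no evidence of skill.
    ```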

    best

  19. Geoff Sherrington
    Posted Jan 7, 2008 at 3:52 AM | Permalink

    Re # 2 “H”

Thank you for the reference to figs 13 & 14. They are educational. They also alarm this scientist. I think we used a different terminological description (rather shorter), but we did not use the method.

Fairly amazing how much modelling eventually goes back to Phil Jones. We know that there are still questions to be answered about the reliability of these data, yet here they spring up in yet another paper as if they were gospel.

It would be neat if the science followed the usual processes of postulation, experimental design, data gathering, evaluation, reporting etc. I believe that this process is incomplete for global temp data such as Jones’s, so the cart has been before the horse for a long time.

  20. Paul
    Posted Jan 7, 2008 at 4:31 AM | Permalink

As always, econometric terminology is probably more revealing about the use of models.

Macroeconometric models struggle with the same problems, but don’t use “flux adjustments”. Instead, they simply have “errors”, i.e., Actual − Predicted = error (or residual). Then, if you want to forecast, clearly you need to “forecast the errors”. How you do that in any credible way is one of the most endearing features of macroeconometric-based forecasting.
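
    In code, the whole trick is one subtraction (purely illustrative numbers):

    ```python
    # Paul's point in miniature: the in-sample correction is just the
    # residual, and forecasting then requires forecasting that residual.
    import numpy as np

    actual = np.array([1.0, 1.3, 1.1, 1.6, 1.8])      # observed series (made up)
    predicted = np.array([0.9, 1.1, 1.2, 1.4, 1.5])   # model output (made up)

    residuals = actual - predicted
    print("residuals:", residuals)

    # To forecast, one must now assume something about the next residual:
    # persistence, decay, or zero. Each assumption is an untestable add-on.
    persistence_forecast = residuals[-1]
    print(f"persistence forecast of the next error: {persistence_forecast:+.2f}")
    ```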

  21. Posted Jan 7, 2008 at 6:04 AM | Permalink

    John at #6

    Here’s a GCM family tree http://www.aip.org/history/sloan/gcm/famtree.html

  22. steven mosher
    Posted Jan 7, 2008 at 8:33 AM | Permalink

An explanation of flux adjustments I posted a while back:

    http://www.climateprediction.net/board/viewtopic.php?p=49085#49085

    Nick’s presentation video
    http://www.climateprediction.net/science/pubs/OpenDay2006/nick_f.wmv

Slides (PDF): NF_OpenDay2006.pdf

  23. Posted Jan 7, 2008 at 8:45 AM | Permalink

My recollection from research done some time ago is that some of the regional GCMs (i.e., GCMs used for studies local to a particular part of the world, e.g., Europe and/or the UK) still use flux adjustments. I’ll double check and get back.

    KevinUK

  24. Kenneth Fritsch
    Posted Jan 7, 2008 at 9:19 AM | Permalink

    One argument brought forward is that non-adjusted models made use of ad hoc tuning of radiative parameters (i.e., an implicit flux adjustment).

As I recall, when Dr. Isaac Held first visited CA and was queried about the use of fluxes in climate models, his reply indicated that flux adjustments could be, and were, made implicitly by being included in other parameterizations. In fact, my take on his reply was that the boundaries between flux use and parameterization were fuzzy. I do not have the exact post or comment, but I think I could find it if anyone feels it is important.

On the other hand, part of Held’s argument about parameterizations was that tuning was difficult to do with parameters, which he indicated were not “flexible” in changing the end results; apparently, tuning the model and using parameters are different concepts to him.

  25. Kenneth Fritsch
    Posted Jan 7, 2008 at 11:04 AM | Permalink

    Re: #24

    I found the Isaac Held comment I was referencing in my above post at Post #64 on the thread “Truth Machines”. As I reread these comments I think my interpretation is one of many that one could take away from Held’s comments. Unfortunately, Dr. Held never returned to comment on the tuning question and provide details and insights into his comments at Post #64.

    I would argue, as do some of the modelers interviewed, that there is no sharp distinction between flux adjustments and “tuning” of other sorts. Both are attempts by “pragmatists” to get a model that has potential relevance, one hopes, to the problem at hand. My personal preference for tuning is that the latter results in a model that is a testable hypothesis for how the climate system behaves, while a flux adjusted model is not. Anyway, I don’t think there is any point in focusing on the flux adjustment issue in isolation.

I do want to comment on the tuning question. I haven’t the time right now; I will get back to this in a day or two.

    http://www.climateaudit.org/?p=845

On rereading the thread, I noted that Dr. Held used the term “stiff” to describe why parameterizations are difficult to use in “tuning” a climate model.

In this thread Steve M had questioned Dr. Held with regard to climate model results for the tropics, where measured differences between troposphere and surface warming seem to go against the model results. Dr. Held implied that the “stiffness” of the physical parameters used in the climate models in this case would indicate that the measurements are wrong. When Steve M pointed out that Emanuel used the measured differences, and not the climate model results, in his exposition of a potential intensity model for explaining hurricane intensities, Held more or less laughed it off with a quip.

  26. SteveSadlov
    Posted Jan 7, 2008 at 1:44 PM | Permalink

    RE: #1 – Right up there with dilithium crystals! (She cannae take it! She’s gonna blow cap’in!)

  27. AJ Abrams
    Posted Jan 7, 2008 at 2:00 PM | Permalink

So this confirms my postulate from the other day: after a sigma event occurs (in this case it was said to be ten years), they will just go back and tweak the forcings (ad hoc) to make darn sure that the sigma disappears. Yet adding that kind of negative value should be readily apparent, in that it would show temperatures decreasing without man’s help. Have I got this correct?

  28. bender
    Posted Jan 7, 2008 at 2:05 PM | Permalink

    AJ Abrams: Did you look up the post by Daniel Klein, as I suggested a few days ago? The reply from Gavin Schmidt squarely addresses your question.

  29. AJ Abrams
    Posted Jan 7, 2008 at 2:11 PM | Permalink

Bender, I tried, but after almost an hour gave up the ghost. I would think the last two postings by SM squarely answer my question; I was just asking if I’m reading that information correctly. I’m still a bit new to the debate and haven’t read everything out there, but I am trying (and trying out the math, which is giving me a headache).

  30. bender
    Posted Jan 7, 2008 at 2:15 PM | Permalink

#29 AJ Abrams
    See here:
    http://www.climateaudit.org/?p=2517#comment-184541
    and comments below and above

  31. AJ Abrams
    Posted Jan 7, 2008 at 2:27 PM | Permalink

    You are awesome. It’s so much easier to find things when you know exactly what you are looking for. I shall read immediately. Thanks.

    AJ

  32. Michael Jankowski
    Posted Jan 7, 2008 at 2:31 PM | Permalink

“Olivier has mentioned that the problem of snow accumulation reaching several km must be resolved”

    If a model based on consensus climate science suggests kilometer-thick snow is going to happen as a result of global warming, who are we to argue?

    This is in contrast to the typical tuned GCM, which predicts less snow and hurts places dependent on (1) snowmelt for drinking water and (2) snow for their resort economies.

    I am sure that once they got the memo, their “untuned” GCM matches the consensus a little better.

  33. Francois Ouellette
    Posted Jan 7, 2008 at 3:16 PM | Permalink

This rather oldish paper (1999) has a good discussion of flux adjustments and how modelers dealt with them.

  34. AJ Abrams
    Posted Jan 7, 2008 at 3:16 PM | Permalink

    What an interesting read Bender. A few things to note and a question.

    1. Gavin comes across as obtuse in his response.
    2. His answer was circular logic and thus completely nonsensical.

    Let me see if I can articulate why:

1. He states that you cannot cherry-pick a specific year to start a trend analysis, but that is illogical. You have to start somewhere, and by his own criteria you’d actually start 5 years ago, not ten. Regardless of your starting point for the trend, 1998 or 1948, your slope is significantly altered if the last ten years have remained static, compared to what it would have been had temperatures actually gone up for the last ten years. Period.

2. You can’t arbitrarily call any data insignificant without defining the parameters, as he did in his response, and then use a single year (2005) to prove that there was an upward trend in the same paragraph. That’s circular logic, and not just evasive but a blatant end run.

3. You can’t say that any one year’s negative temperature trend can be explained away by looking for a significant negative forcing while not doing the same for positive trends, which is what he does in the response.

In short, he gives an answer that is no answer, and suggests that no matter what date you use or what the trend is over a given time frame, he reserves the right to change the parameters. The only way to circumvent his illogical argument, it seems, would be for temperatures to drop to pre-satellite-era levels, which, while possible, isn’t likely.
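
    A quick numerical check of point 1 with a synthetic series (made-up numbers, not real temperature data):

    ```python
    # Synthetic check: a flat final decade lowers the fitted trend no matter
    # which start year you pick. Entirely made-up series.
    import numpy as np

    years = np.arange(1948, 2008)
    rising = 0.01 * (years - 1948)                    # steady 0.1 C/decade rise
    flat_end = rising.copy()
    flat_end[years >= 1998] = rising[years == 1998]   # hold at the 1998 level

    for start in (1948, 1979, 1998):
        m = years >= start
        t_rise = np.polyfit(years[m], rising[m], 1)[0] * 10
        t_flat = np.polyfit(years[m], flat_end[m], 1)[0] * 10
        print(f"start {start}: {t_rise:+.3f} vs {t_flat:+.3f} C/decade "
              f"with a flat last decade")
    ```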

  35. AJ Abrams
    Posted Jan 7, 2008 at 3:19 PM | Permalink

Forgot the question part: so the answer to my question is that they (in this case Gavin) have no real response, but are stalling for time?

  36. Sam Urbinto
    Posted Jan 7, 2008 at 5:22 PM | Permalink

Check the RSS satellite troposphere data for 2007.

  37. SteveSadlov
    Posted Jan 7, 2008 at 7:07 PM | Permalink

    RE: #32 – That sort of alludes to a global superstormish output. While Bell and Strieber are masters of junk science, I would not completely discount what I refer to as the “overshoot syndrome” – namely, a perturbation of any type, anthropogenic or otherwise, is eventually bound to cause the system to snap back down into ice mode. Someone on another blog put it as “the climate system is currently biased toward lossiness and cooling.”

  38. John Norris
    Posted Jan 7, 2008 at 9:52 PM | Permalink

    #21 Oliver Morton

    Nice find. Certainly plenty of shared ancestry with that set of GCMs!

  39. Andrew
    Posted Jan 8, 2008 at 7:37 AM | Permalink

AJ Abrams, indeed, a sudden drop-off is unlikely. It’s nearly statistically impossible that the Hadley Centre’s prediction that 2008 will be at least the 10th warmest year ever will be wrong. It’s actually not going to happen short of a volcanic eruption or the Sun disappearing. Or some kind of super La Niña, but that only happens after a super El Niño. Next year will be warm, but what about the next decade?

  40. steve
    Posted Jan 9, 2008 at 12:54 PM | Permalink

Does anyone know why the “Clear Sky Anomaly” can be so easily dismissed by the modeling community? They blame the 20% difference between satellite data and radiative transport codes on aerosols. To me this seems to be a huge problem for people who are constantly saying that the “physics” underlying the models is correct.

Lindzen never brings it up, and the aerosol explanation is plausible, but 20%!

  41. Posted Jan 18, 2008 at 7:35 PM | Permalink

    Re: #8 I don’t see any GCM in the Covey et al. (2003) paper that is Columbia U. – what are you referring to when you are comparing to GISS?