USHCN #3


  1. steven mosher
    Posted Oct 14, 2007 at 5:05 PM | Permalink

    Now, obviously ASOS gear doesn’t go back to 1884.

    Odd thing. I didn’t expect the instrument variable to exhibit such a wide range in absolute
    temps. I expected lat/lon/elevation to be the big explainers of absolute C.

    A puzzlement. Is anyone comfortable adjusting all the data per station down to sea level
    via a lapse rate adjustment?

  2. steven mosher
    Posted Oct 14, 2007 at 5:22 PM | Permalink

    A clarification. ALL_SITES is the data created by OpenTemp when it is run on all
    1221 USHCN sites. It differs slightly from GISS in two regards. In the early years it is
    slightly cooler than GISS: Hansen 2001 notes that GISS removes the early period of a handful of cold
    NorCal sites
    (4-5); OpenTemp doesn’t remove these. I consider this a NON-ISSUE, except for the lack
    of documentation by NOAA. JohnV or I or anyone who runs OpenTemp can adjust this. (Maybe I’ll do that
    just to show how well OpenTemp_all matches GISS.)

    The post-1979 mismatch between GISS and OpenTemp ALL might require a bit more examination, but
    the agreement looks promising. I think it’s just outside the error band that GISS claims.

  3. Anthony Watts
    Posted Oct 14, 2007 at 6:22 PM | Permalink

    Mosh, a suggestion for you. Maybe as a test exercise, run a data set through it with 39 stations and nothing but readings of 11.3 °C (I think that was the median used) for max and min, or maybe something like +5 from mean as a max and -5 from mean as a min. Run for both GISS and CRN123mm. If the flat line is offset from GISS, then maybe it is a coding artifact of some kind.

    Just an idea. One of the things I do to find problems in code is to run null data sets through it to see what pops out. If the output isn’t null or zeros, then problems lie within.

    You should probably first make sure this isn’t something other than real offsets.

    I’m not comfortable with a lapse rate being applied to the data – yet.

  4. steven mosher
    Posted Oct 14, 2007 at 8:02 PM | Permalink

    I use the same procedure for everything. I feed OpenTemp a set of stations. I get yearly figures.
    I’ll post a graphic of the actual temps. I should also re-download the software and just do a fresh
    install of the whole thing. It’s a lot of work to redo, but I should be able to get around to it in the next
    couple of days. Not sleeping much. Anywho, I’ll post in a bit. Have to eat.

  5. steven mosher
    Posted Oct 15, 2007 at 6:46 AM | Permalink

    Anthony,

    Here you go: all = all 1221 stations; mnmax = all minmax; crn123mm = CRN123 with minmax.
    There are 38 of these sites.

    Temps as calculated by OpenTemp.

  6. Jeremy Friesen
    Posted Oct 15, 2007 at 8:37 AM | Permalink

    So, what, if any, conclusions are being drawn from this?

  7. steven mosher
    Posted Oct 15, 2007 at 8:59 AM | Permalink

    RE 6.

    There are several interrelated issues here that we are not addressing in a systematic fashion.

    I’ll try to write up something to structure my understanding and you guys can stone me.

  8. Kenneth Fritsch
    Posted Oct 15, 2007 at 9:14 AM | Permalink

    Re: #5

    Steven, I am most perplexed by the fact that the plots I see in your graphs follow nearly exactly the same pattern, indicating a near-constant bias or a near-constant correction or a near-constant error. I am going to do some reality checks with plots of my own, as my feel for these data might simply be wrong. I do know that the differences between the USHCN MMTS and Urban data sets show some constant biases — on average.

    I am also of a mind to think, after doing some research on the matter, that the TOBS and MMTS data sets, while corrected for TOBS and MMTS, are not corrected/adjusted for some rather large errors that stem from other sources. I also want to look at what the averaging effect is when the FILNET/Urban data sets are adjusted for missing and bad data points using “well correlated” and nearby stations.

  9. Posted Oct 15, 2007 at 9:21 AM | Permalink

    I’ve got lots to catch up on. Steven Mosher first:

    Second Look 2, Post #287:

    yes, but you know it’s misleading. When the sample has a lower mean than the population
    during the period used to calc the anomaly, the graphs are collapsed in Y. When the sample
    is warmer than the population they are widened in Y.

    The population is never collapsed or widened. It is simply shifted so that the two trends line up for a certain date range. That makes it easier to see where they diverge.

    To calculate anomalies for CRN12.. or CRN5 or CRN whatever.. I’ll just subtract 11.2.

    By subtracting a single constant from all series, you’re actually making a version of absolute temperatures. From your recent plots, I see that your goal is to compare the absolute temperatures of different subsets, and that’s fine. My goal is different — I’m looking at the trends and where they diverge. From your plots of MMTS vs others I can see that MMTS is cooler (more on that later), but I can’t see if it’s cooling or warming.

    Second Look 2: Post #290
    I really think you should be using TOBS instead of raw, and rural instead of all sites. The relative warming of GISS wrt CRN12 is explained by TOBS, and urban sites add additional complications. Why reduce micro-site issues by choosing CRN12 but ignore UHI by including urban sites?

    Second Look 2: Post #293
    “When I run all 1221 I still have stations outside the grid ( coastal) can we get that nit fixed?”
    The stations are outside the perimeter of the USA lower 48 that I drew, but they are included in all calculations.

    #1 above:
    I think you are saying that MMTS reads lower than the general population. That has already been established. USHCN applies a net correction of ~0.04F (~0.02C) starting around 1984 (see the USHCN graph here).

    Since MMTS was not used until the early 1980s, shouldn’t the difference between MMTS and the general population of CRN123 stations disappear before 1980? Since it doesn’t disappear, the only conclusion I can draw from your graphs is that the MMTS stations are in cooler areas (higher latitude and/or altitude).

    If you difference CRN123MM from CRN123, you may see a step change in the early 1980s when MMTS was introduced. That would mean something.
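
    Something like the following is all I mean by differencing (a rough Python sketch; the file names and the two-column year/temp layout are hypothetical, not OpenTemp’s actual output format):

    ```python
    # Sketch: difference two yearly series and compare the mean difference
    # before and after the MMTS rollout (~1983) to look for a step change.
    import csv

    def load_yearly(path):
        """Read a two-column CSV of (year, temp) into a dict."""
        series = {}
        with open(path) as f:
            for year, temp in csv.reader(f):
                series[int(year)] = float(temp)
        return series

    crn123 = load_yearly("crn123_yearly.csv")      # hypothetical file names
    crn123mm = load_yearly("crn123mm_yearly.csv")

    common = sorted(set(crn123) & set(crn123mm))
    diff = {yr: crn123mm[yr] - crn123[yr] for yr in common}

    before = [diff[yr] for yr in common if yr < 1983]
    after = [diff[yr] for yr in common if yr >= 1983]
    print("mean diff pre-1983:  %.3f" % (sum(before) / len(before)))
    print("mean diff post-1983: %.3f" % (sum(after) / len(after)))
    ```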

    I haven’t looked into it, but I suspect a similar argument could be made for the ASOS stations.

  10. Anthony Watts
    Posted Oct 15, 2007 at 9:35 AM | Permalink

    RE5 Mosh, fascinating. As an electronic engineer would surmise: “DC offset”. Causes?

    Stevenson Screens are cooler than MMTS units – we know that, but only an offset of .05 °F is applied for that in USHCN adjustments. Maybe that is way too low. Maybe that is a laboratory-derived number, and not a field-derived number. MMTS accounts for 71% of the network now, so MMTS has the ability to bias significantly, but that does not explain the DC offset 50-100 years ago. Divergence would start circa 1985, when MMTS was put into service, if that is the case.

    Locations? Yes, could be elevation. Could also be rural -vs- urban placements, or latitude. We must look at the average elevation of those 38 stations. Shoot me the list of 38 and I’ll look.

    Instruments? Hard to imagine such a large offset between sensors. Again, this does not explain the offset 50-100 years ago when MMTS did not exist.

    Program? We need to know what happens if you run 38 stations with flat or nulled data against 1221 to make sure the DC offset isn’t part of the program’s response to 38 versus 1221 datapoints. This is my most likely candidate. I’m not knocking John V, just applying my own troubleshooting maxim that I use regularly with great success: “human error first, connections second, hardware last.”

    Except for the DC offset, the signal looks nearly identical, which lends credence to Gavin’s “60 stations are all we need” theory. Trends? Eyeballing tells me CRN123mm may trend a little lower. We must figure out the cause of this offset.

  11. Posted Oct 15, 2007 at 9:49 AM | Permalink

    Clayton B:

    Second Look 2: Post #257
    I wrote a long and detailed response re your coastal vs interior analysis, but I think it was lost when the server went down last week. That’s fine, because I have a better answer now anyway. I updated OpenTemp to accept a new input parameter to specify the averaging radius (e.g. /r=500 for a 500 km radius). The new version is available here:

    http://www.opentemp.org/_release/OpenTempV1RC2.zip

    It also supports Monte Carlo analysis with the /stnpick=NN option. This option will pick NN random stations from the stations listed in the stations file.

    Second Look 2: Comment #292
    The 1000 km radius that I used originally was chosen to cover the USA lower 48 using only the CRN12 stations. It was and is arbitrary. OpenTemp still uses 1000 km as the default, but you can now specify a new radius without modifying the code (see above).

    Don’t forget that the averaging algorithm is inverse-distance-weighted (IDW). The stations near the perimeter have very little effect.

    Your analysis of results using different radii got my attention. I’m not sure which result is the most accurate because there’s a compromise involved in choosing the averaging radius. If it’s too small you lose any geographic weighting (which is fine if your sample is uniformly distributed). If it’s too big, then interior stations influence a larger area than coastal stations.
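
    For anyone following along, the weighting is conceptually something like this (a sketch of the idea only; the distance power and other details in OpenTemp’s actual code may differ):

    ```python
    # Sketch: inverse-distance-weighted estimate for one grid cell,
    # ignoring stations beyond the averaging radius.
    import math

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine distance in kilometres."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371.0 * 2 * math.asin(math.sqrt(a))

    def idw_estimate(cell_lat, cell_lon, stations, radius_km=1000.0):
        """stations: list of (lat, lon, temp). None if nothing in range."""
        num = den = 0.0
        for lat, lon, temp in stations:
            d = great_circle_km(cell_lat, cell_lon, lat, lon)
            if d > radius_km:
                continue                      # beyond the averaging radius
            w = 1.0 / max(d, 1.0)             # clamp to avoid divide-by-zero
            num += w * temp
            den += w
        return num / den if den else None
    ```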

    Would you mind adding trendlines from 1900 to your plots? I’m guessing the biggest difference is about 0.04C/century.

    Second Look 2: Comment #299
    “If stations are outside of the boundary do they still contribute to cells that are within 1000km or are they ignored?”
    See above — the stations are included in the calculations even though they are outside the perimeter.

  12. Posted Oct 15, 2007 at 9:55 AM | Permalink

    Kenneth Fritsch: Second Look 2: Comment #262

    I have thought about the use of monthly data in attempts to overcome the missing data problem, but I have not been able to convince myself that it truly solves the entire problem. Could you convince me with some details on how you did it?

    I’ll try.
    OpenTemp looks at each month individually. For each month it builds a list of station locations and temperatures. If a station has a missing value for a month it is excluded.

    The USA lower 48 is divided into 3308 cells. The temperature of each cell for a given month is calculated from the list of station locations and temperatures available for that month. The average temperature for the month is then calculated by averaging all of the cells. If a cell is too far from any station (more than 1000 km by default) it is excluded from the average. The cell areas are considered when calculating the average.
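
    In rough pseudo-Python, my understanding of the monthly loop looks like this (a sketch, not the actual OpenTemp source; idw_estimate stands in for the inverse-distance step described earlier):

    ```python
    # Sketch of the monthly gridding: estimate each cell from the stations
    # reporting that month, then area-weight the cells into one number.
    def monthly_mean(cells, readings, idw_estimate, radius_km=1000.0):
        """cells:    list of (lat, lon, area_km2) -- 3308 for the lower 48
        readings: list of (lat, lon, temp) for stations reporting this
                  month; stations with a missing value are simply absent"""
        total = weight = 0.0
        for lat, lon, area in cells:
            t = idw_estimate(lat, lon, readings, radius_km)
            if t is None:
                continue              # cell too far from any station
            total += t * area         # cell areas weight the average
            weight += area
        return total / weight if weight else None
    ```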

    I hope that helps.

  13. Posted Oct 15, 2007 at 10:01 AM | Permalink

    Anthony Watts:

    Program? We need to know what happens if you run 38 stations with flat or nulled data against 1221 to make sure DC offset isn’t part of the programs response to 38 versus 1221 datapoints. This is my most likely candidate. I’m not knocking John V, just applying my own troubleshooting maxim that I use regularly with great success. “human error first, connections second, hardware last”

    That’s a good idea (and don’t worry, I didn’t take it as a knock against me).
    We have some anecdotal evidence that OpenTemp does not introduce a DC offset (CRN12R vs CRN123R, CRN12R vs USHCN1221, etc) but anecdotal evidence is no substitute for rigorous testing.
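
    If anyone wants to take a crack at Anthony’s null test, generating the input could look roughly like this (the file layout here is hypothetical and would need to be adapted to OpenTemp’s actual station format):

    ```python
    # Sketch: build 38 stations of constant 11.3 readings. Any correct
    # run on this input should come back flat at 11.3 with no offset.
    import csv

    NULL_TEMP = 11.3
    YEARS = range(1900, 2006)

    def write_null_station(path, station_id):
        """Write one station with every month set to NULL_TEMP."""
        with open(path, "w", newline="") as f:
            w = csv.writer(f)
            for year in YEARS:
                w.writerow([station_id, year] + [NULL_TEMP] * 12)

    for i in range(38):
        write_null_station("null_%02d.csv" % i, "NULL%02d" % i)
    ```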

    I have a couple of busy weeks coming up and would appreciate any help with generating and running OpenTemp tests.

    Re an earlier comment about testing OpenTemp versions, I have been running regression tests against old results with each new version.

  14. Kenneth Fritsch
    Posted Oct 15, 2007 at 10:17 AM | Permalink

    Re: #8

    Reality check completed, and reality is as Steven Mosher defined it. I compared the USHCN Urban data set for CRN123 versus all stations for absolute temperature in degrees F and essentially duplicated what Steve Mosher saw in his plots of CRN123 MMTS versus GISS all. The nearly constant bias is apparent over the entire 1920-2005 time period (the amount of missing data points inhibits me from looking further back in time). The plot below shows CRN123 in pink and all USHCN in dark blue.

    The next step (unless I missed it being done here) is to compare CRN123 for latitude, longitude and elevation against all the USHCN stations. When I did my 11 pairs of stations, CRN12 versus CRN5, the CRN12 and CRN5 stations were centered at nearly the same latitude and longitude, but the CRN12 stations were at significantly higher elevations.

  15. steven mosher
    Posted Oct 15, 2007 at 10:31 AM | Permalink

    RE #8 kenneth.

    “Steven, I am most perplexed by the fact that the plots I see in your graphs follow nearly exactly
    the same pattern, indicating a near-constant bias or a near-constant correction or a near-constant error.”

    I am perplexed as well. Here is what I did. I created a station list for all 1221 stations.
    I fed that to OpenTemp. I selected the yearly.csv file. WHY did I do this? I did this because
    I want to compare APPLES to APPLES. OpenTemp to OpenTemp. If you want to run all the stations,
    just holler. The near-constant bias appears no matter what cut I do. The issue, I think,
    is how near is near. Let me put it this way: if GISS showed a .6C trend, I would not expect
    that quality control (removing bad sites) would lower that trend below .5C. I might hope
    that the noise level would diminish.

    THEN you wrote:

    “I am going to do some reality checks with plots of my own, as my feel for these data might simply be wrong.
    I do know that the differences between the USHCN MMTS and Urban data sets show some constant biases — on average.”

    MMTS is NOT MinMax. MinMax is LIG (liquid in glass), in a CRS (Cotton Region Shelter).
    I looked at INSTRUMENTS. Not urban. Not rural. (I think these distinctions are
    suspect.) Also, I didn’t look at MMTS. I looked at the old-school LIG MinMax instrument.
    Here is what perplexes me: if you cut the data URBAN/RURAL (based on 1980 population) you see
    NADA. If you cut the data BRIGHTLIGHTS/DIMLIGHTS (based on 1995 data) you get BUPKISS.

    IF, however, you look at INSTRUMENTS (ASOS, MMTS, MINMAX) you see stuff. What does it mean?

    If you want to test the results I suggest the following:

    1. Get Anthony’s list. Sort on MINMAX instruments. You should find more than 200.
    Then sort on CRN. You should find 38 stations that are CRN123 AND minmax.
    If you don’t find 38, then we should cross-check.
    Then do your thing with those 38.

    2. You could ask me for my list, which I will give, but it’s better if you duplicate
    from scratch in case I effed up step 1. OK?

    “I am also of a mind to think, after doing some research on the matter,
    that the TOBS and MMTS data sets, while corrected for TOBS and MMTS,
    are not corrected/adjusted for some rather large errors that stem from other sources.”

    I picked MINMAX instruments so that I could avoid MMTS adjustments!
    The introduction of the MMTS (the beehive-looking thing) did two things:

    1. Introduced a sensor with a small cold bias.
    2. Moved sensors closer to buildings.

    The MMTS studies only quantified #1.

    My thought was this: pick the sites that haven’t had instrument changes and are well sited.
    It was just a hunch.

    “I also want to look at what the averaging effect is when the
    FILNET/Urban data sets are adjusted for missing and bad data
    points using “well correlated” and nearby stations.”

    Urban is a spurious variable. I have gotten a bunch of mail from folks asking me
    to look at URBAN/RURAL. Urban/Rural is a designation given by NOAA or NASA based on
    1980 population. I might as well regress on my waistline in 1975. Same with Nightlights.

    Note: I have not looked at trends.

  16. steven mosher
    Posted Oct 15, 2007 at 10:32 AM | Permalink

    RE 9. John, I have done NO PLOTS of MMTS. I am doing minmax.

  17. steven mosher
    Posted Oct 15, 2007 at 10:47 AM | Permalink

    RE 10.

    The average elevation of the 38 MinMax sites is 1900 feet or so. That could explain some
    of the cooler temps.

    There are a couple of issues we need to keep clear. As JohnV and others point out, one major issue is this:

    DOES siting bias INFLUENCE the trend?

    That’s the big question. I’m looking at ways to quantify that. When we first looked at CRN12 and found
    nothing, everybody said “look at RURAL CRN12”, but rural is based on 1980 population, and we have a handful
    of sites. JohnV is now looking at CRN123 (I think this is OK).

    So I thought this:

    Why not look at instruments? We’ve all seen a rural CRS site with a minmax become an MMTS site located
    next to a building.

    To BE CLEAR: I am looking at CRN123 with the old-school LIG MinMax. NOT MMTS. So MMTS adjustments
    don’t hit my data. I couldn’t care less about it. TOBS is another issue.

  18. Posted Oct 15, 2007 at 10:48 AM | Permalink

    #16:
    My mistake. I believe Anthony Watts made a similar mistake in comment #10.
    I looked back at your posts and saw this in USHCN Second Look #2 (comment #302):

    “Notably the MMTS which moved the sensor closer to buildings. So, I started to cut the data a bit differently.”

    That’s where you started using the CRN123mm notation, and that’s where I got confused.

  19. SteveSadlov
    Posted Oct 15, 2007 at 10:51 AM | Permalink

    Truism – “the late 1800s were cooler than much of the 20th century. The apparent rise from 1880 to the mid-1920s can be explained via a combination of emergence from the LIA and the depressive impacts of Krakatoa upon the late 1880s.”

    Challenge – how do we know that the late 1800s, and the 1900s prior to 1925, were not as warm as, or even warmer than, the mid-to-late 1930s?

  20. steven mosher
    Posted Oct 15, 2007 at 10:59 AM | Permalink

    RE 13. DC offset.

    I will take GHCNv2 and turn it into a null set and do the null-set test.

    However, OpenTemp ALL matches GISS with very good precision. The only issues I see are these:

    1. GISS excludes some stations, and periods of stations, that OpenTemp includes.
    2. OpenTemp and GISS diverge slightly in the modern period. Maybe this will change
    when I switch to USHCN. I promise to do this, but I have my process down and
    am just being lazy about regenerating all my stuff.

    Now, my run with 38 stations did result in some cells having fewer than 3 stations. I don’t know
    what this means.

    Bottom line: OpenTemp hasn’t failed me yet (as in crash) and the results have always been within
    my expectations. Nothing that looked bonehead wrong. I’ll do some dummy data sets later this week.
    Walking through the code I haven’t seen anything that would indicate any issue. Plus OpenTemp
    matches SteveMc’s R code.

    The issue here is the cut I made through the data. Is it geographically perverse? By accident?
    Does it matter? The trend question.

  21. steven mosher
    Posted Oct 15, 2007 at 11:04 AM | Permalink

    RE 14.

    I think the average elevation of the CRN123 MinMax (not MMTS) sites was around 1900 feet.
    I think that puts them on the cold end of things.

    Another thought I had the other day was to CHERRY-PICK regimes on purpose to check trend.

    For example: between the late 50s and the early seventies you have a near-linear slide in
    TEMPS. Fine.

    Which sites COOL faster: good sites or bad sites?

    Then pick a linear range of temp increase. Which sites warm faster?

    In short, pick the linear portions of the regime to test the site-bias hypothesis.

  22. steven mosher
    Posted Oct 15, 2007 at 11:10 AM | Permalink

    38 Stations:

    CRN1 or CRN2 or CRN3.

    INSTRUMENT = MINMAX (not MMTS)

    42572383001,34.70,-118.43
    42572465001,38.82,-102.35
    42572694004,44.63,-123.20
    42574506003,37.87,-122.27
    42574501001,38.33,-120.67
    42572591004,39.75,-122.20
    42572504001,41.13,-73.55
    42572786006,48.35,-116.83
    42572537004,42.30,-83.72
    42572743002,46.33,-86.92
    42574341005,45.13,-95.93
    42574341007,45.58,-95.88
    42572772006,45.67,-111.05
    42572677004,45.92,-108.25
    42572488003,39.45,-118.78
    42572515000,42.22,-75.98
    42572698007,45.68,-122.65
    42574500001,39.10,-120.95
    42572483001,38.53,-121.77
    42572591002,39.52,-122.30
    42572469007,40.00,-105.27
    42572464001,37.17,-104.48
    42572785005,48.28,-116.57
    42572432009,38.73,-87.50
    42574492000,42.22,-71.12
    42574490007,42.70,-71.17
    42572536008,41.92,-84.02
    42574365002,45.65,-84.47
    42572772003,45.30,-111.95
    42574484001,42.95,-72.32
    42572523002,42.25,-77.78
    42572521006,40.78,-81.92
    42572683002,42.70,-120.53
    42572572014,41.82,-111.32
    42572371003,37.22,-112.98
    42574484002,43.38,-72.60
    42572614003,44.42,-72.02
    42572788004,46.10,-118.28

  23. Anthony Watts
    Posted Oct 15, 2007 at 11:42 AM | Permalink

    Average elevation of the 38 CRN123mm stations is 1900 feet.

    As an average: the International Civil Aviation Organization (ICAO) defines an international standard atmosphere with a temperature lapse rate of 6.4 °C/1000 m (3.5 °F/1000 ft) from sea level to 12 km.

    Therefore at 1900 feet we get 3.5 × 1900/1000 = 6.65 °F, and Δ6.65 °F = Δ3.69 °C.
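
    The same arithmetic as a little helper, assuming the 3.5 °F/1000 ft standard rate:

    ```python
    # Expected cooling at a given elevation relative to sea level,
    # using the standard-atmosphere lapse rate quoted above.
    STD_LAPSE_F_PER_1000FT = 3.5

    def lapse_offset(elevation_ft, rate=STD_LAPSE_F_PER_1000FT):
        """Return the offset as (delta F, delta C)."""
        delta_f = rate * elevation_ft / 1000.0
        delta_c = delta_f / 1.8   # a temperature *difference*: F -> C
        return delta_f, delta_c

    print(lapse_offset(1900))     # -> (6.65, ~3.69)
    ```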

    It certainly seems plausible that the elevation provides the “DC offset”, and elevation and general lat/lon would seem to be the only two variables that are constant over 100 years for these stations. From the plot in #1 it looks like you have an offset of about 1.75 °C to ASOS, and about 1 degree to “all”. Unlike barometric pressure, temperatures aren’t corrected to sea level, and doing so is problematic due to the daily variances in lapse rate from wet to dry adiabatic rates. Plus there is the issue of terrain and mixing, which affects the local lapse rate’s general “average” character.

    Unless the null-datapoints test on OpenTemp reveals something, it looks like elevation is the source of the offset. An easy way to test would be to remove 1 or 2 of the highest-elevation stations and run it again. If the offset changes, but the signal remains relatively unchanged, that would point in the direction of the offset being elevation-based.

    I think what we have here is the “boondocks effect”, i.e. Stevenson Screens tend to persist at sites that are far away from the NWS office. It is somewhat like the “Starbucks Hypothesis”. In the “boondocks effect” the question is: “can the local NWS COOP manager go out and replace the Stevenson Screen with an MMTS in one day and still be home for dinner?” Thus I think some of the Stevenson Screens with Max-Min that are furthest away from the NWS office are the least likely candidates for replacement. From experience, I know that mountain/higher-elevation travel is usually slower, which is why I could not complete the Cheesman Reservoir survey in Colorado’s Rocky Mountains before I ran out of daylight.

    Now trends for CRN123mm -vs- All are what we really need to look at.

  24. Anthony Watts
    Posted Oct 15, 2007 at 11:55 AM | Permalink

    RE18, John V. I’m not sure what you are getting at. I clarified the MMTS versus Max-Min LIG issue back at the end of USHCN thread #2. That’s the first question I asked Mosher. Or are you referring to something else?

  25. Kenneth Fritsch
    Posted Oct 15, 2007 at 12:28 PM | Permalink

    On reading the adjustments of the USHCN data sets, I get the idea that while large deviations (4/5 sigma) in temperatures for a station are flagged in the areal data set, they are not removed. I get the impression that these non-homogeneities and others are not removed until the progressive adjustments reach the SHAP/FILNET stage, and therefore using the Raw or TOBS or MMTS data sets means one is including some large reporting and other errors in the data. I also judge that using data with 10% missing points can theoretically change the results of trends. Using monthly averages will reduce this effect but not eliminate it. For example, if I am measuring a trend using a station whose average temperature is at the very low or high end of the stations’ temperature range, and then that station’s data points go missing, I am now trending with an artificially increased or decreased average temperature over all stations. This problem can become more pronounced when a smaller number of stations is included in the samples used for analysis.

    One way around that problem is to eliminate any station with missing data from the analysis. This would, however, lead to a significantly reduced sample size. The other way is to fill in the missing data using some averaging algorithm. I think the latter artificially affects the analysis results less than the missing data does.

    I think the adjustments that USHCN uses can be debated as a separate issue, but until one can show their shortcomings and any misleading effects they may have on the analysis, I judge that using them gives more realistic results than using unadjusted and presumably bad data.

  26. Clayton B.
    Posted Oct 15, 2007 at 12:51 PM | Permalink

    RE 11 JohnV,

    I added the trend values (not lines). You were close!

    More interesting is the difference in absolute temperature.

  27. steven mosher
    Posted Oct 15, 2007 at 12:56 PM | Permalink

    RE 23. Altitude is NOT the whole answer.

    Average altitude of ALL 1221: 1679 ft.
    Average altitude of all LIG_MIN_MAX: 1932 ft.
    Average altitude of all CRN123 LIG_MIN_MAX: 1943 ft.
    Average altitude of ASOS: 1291 ft.

    So we have a difference of roughly 270 feet between ALL sites
    and the CRN123mm (LIG minmax); call that 80 meters. At a lapse rate of 6.5C per km, call that .55C.

    Also, note the difference between all minmax and the CRN123mm: they are similar in altitude
    but divergent in temp. ON ITS FACE this makes no sense. Maybe it’s random? Maybe other geometry.

    Altitude explains some of this.

    For example, ASOS sites are LOWER than average by 380 feet or so. Call that 115 meters.
    At a lapse rate of 6.5C per km you get about .75C.

    ASOS sites at 1290 ft are .7C warmer than the AVERAGE SITE, which is at 1680 feet.
    That’s good agreement.

    CRN123MM sites are at 1943 feet. That’s about 270 ft higher than average. So they should be
    cooler than average by about .55C. In actuality they are cooler by 1.5C!

    LET me bottom-line this.

    1. We cut the data by CRN, BUT we know CRN changes with time. We need insight here.
    2. I cut the data by instrument (like ASOS), but we know that changes with time. Except
    for LIG min_max. We need insight here.
    3. We cut the data by urban/rural/small town, nightlights, etc. I think we need to look at this.

    Weird. Maybe they are distributed in a lat/lon cold zone. STILL the question of trend remains.

  29. Posted Oct 15, 2007 at 1:05 PM | Permalink

    #24 Anthony Watts:
    Sorry, my confusion again.
    I was catching up on the posts from the weekend and made the MMTS vs MinMax mistake.

  30. steven mosher
    Posted Oct 15, 2007 at 1:11 PM | Permalink

    RE 23. Looking at trends: I agree. But I am not at ALL confident in any trend measurement.

    Selecting a regime (a period) is fraught with subjectivity. It’s easy to cherry-pick
    the end points and fit the junk in between. So I guess I respectfully decline from doing
    this. Lots of folks have suggested eras to look at, and after toying with the trend data
    I sensed that I was data mining.

    That has led me to a different focus. JohnV’s approach removes lat/lon dependency. If we remove
    elevation dependency (via a lapse rate adjustment) then we have a relatively homogeneous
    set of data. Then we can look at siting variables:

    1. Population density.
    2. Near-field impacts (CRN stuff).
    3. Instrument bias.

    The excess is CLIMATE CHANGE.

    That is: remove the lat/lon/elevation bias.
    Remove the urban bias.
    Remove the near-field bias.
    Remove the instrument bias.

    Then you’ve got climate change and some weather noise.

  31. Posted Oct 15, 2007 at 1:12 PM | Permalink

    #25 Kenneth Fritsch:

    For example, if I am measuring a trend using a station whose average temperature is at the very low or high end of the stations’ temperature range, and then that station’s data points go missing, I am now trending with an artificially increased or decreased average temperature over all stations. This problem can become more pronounced when a smaller number of stations is included in the samples used for analysis.

    OpenTemp attempts to get around this problem using the /offset option (which I always use). The region (USA lower 48) temperature is calculated in two passes:

    The first pass uses the station readings directly. For each station series, the average offset from the series temp to the average temperature for the whole region (USA lower 48 so far) is calculated. I call this the series bias.

    For the second pass, the series bias is added to every series and all calculations are repeated. By shifting all series to the regional average temperature, series can be added or removed from the calculation with little effect on the average.
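
    In sketch form, the two passes look something like this (my paraphrase in Python, not the actual OpenTemp code; regional_mean is stubbed as a plain mean where the real version grids and weights):

    ```python
    # Sketch of the /offset logic: measure each series' bias against the
    # regional average, shift every series by its bias, then recompute.
    def regional_mean(series_list):
        """Stub: flat mean of all readings (the real version grids/weights)."""
        vals = [v for s in series_list for v in s.values()]
        return sum(vals) / len(vals)

    def two_pass_offset(series_list):
        """series_list: list of dicts mapping month-key -> temperature."""
        # Pass 1: each series' bias vs. the regional average.
        regional = regional_mean(series_list)
        biases = [regional - sum(s.values()) / len(s) for s in series_list]
        # Pass 2: shift each series by its bias and recompute. Stations
        # can now drop in and out with little effect on the average.
        shifted = [{k: v + b for k, v in s.items()}
                   for s, b in zip(series_list, biases)]
        return regional_mean(shifted)
    ```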

    My original description of this procedure is available here.

  32. Posted Oct 15, 2007 at 1:16 PM | Permalink

    #30 steven mosher:

    Selecting a regime (a period) is fraught with subjectivity. It’s easy to cherry-pick the end points and fit the junk in between.

    I’ve been using the 22-year trailing trend for every year from 1921 to 2005. Although the trend length (22 years) is subjective it at least has some merit based on sun cycles. By plotting the trend for all years instead of just key periods you can see where the trends match and where they differ. What do you think of this approach to trends?
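
    For concreteness, the trailing-trend calculation I mean is roughly this (a sketch; temps is a year-to-temperature dict):

    ```python
    # Sketch: 22-year trailing least-squares trend for each year.
    def ols_slope(xs, ys):
        """Ordinary least-squares slope of ys against xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    def trailing_trends(temps, length=22, start=1921, end=2005):
        trends = {}
        for year in range(start, end + 1):
            window = [(y, temps[y])
                      for y in range(year - length + 1, year + 1)
                      if y in temps]
            if len(window) == length:      # require a complete window
                xs, ys = zip(*window)
                trends[year] = ols_slope(xs, ys)
        return trends
    ```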

  33. Kenneth Fritsch
    Posted Oct 15, 2007 at 1:26 PM | Permalink

    I used Steven Mosher’s IDs for CRN123MM (not MMTS) and plotted all USHCN, CRN123 and CRN123MM using the USHCN Urban data set to produce the graph presented below. CRN123MM appears to be definitely different and may well be what is producing most of the difference between CRN123 and ALL.

    We now need to look at the latitude, longitude and elevation differences.

  34. Posted Oct 15, 2007 at 1:41 PM | Permalink

    RE26 Clayton, the image does not show in the post. I think your http://img91.imageshack.us/img91/1410/aawtm7.gif URL will not allow public access. I’m betting you can see it because you are “logged in”. Try another image host service… would love to see your graph.

  35. Clayton B.
    Posted Oct 15, 2007 at 2:02 PM | Permalink

    I just wrote a mile-long post that got ate up. I’ve had no problems with ImageShack in the past…

  36. steven mosher
    Posted Oct 15, 2007 at 2:37 PM | Permalink

    RE 32: One point. A while back I ran a whole host of smooths on the data, all of which seemed
    “justified”. So 22 years? Starting when? Why start at that year?

    If we had a theory of the source we could tune a filter to create some processing gain, right?
    But absent that theory, filtering the signal is just making prettier pictures.

    Here is how I see the hypotheses in front of us:

    1. SITE BIAS is real, AND it changes long-term trends in statistically significant ways.

    2. SITE BIAS is real, BUT it does not change long-term trends in statistically significant ways.

    3. SITE BIAS is not real and does not change long-term trends.

    So, lots of folks want to examine the trend first. That’s fine. That’s the AGW issue.

    I’d like to get clear on the BIAS issue. All of the microclimate science says the bias is
    real. That is, the environmental science of the climate near the ground has decades of work
    showing that the issues Surfacestations documented are important. So I start there. Are these
    sites BIASED? If so, how does the bias manifest itself? Gradually? Sporadically? Does it look like
    AN OUTLIER?

    So, I can’t answer the 22-yr question.

    Another thing to consider is this:

    By dropping CRN45 we may not change the anomaly, but do we change the variance?
    I had this debate with folks on RC. They were against dropping ANY SITES on the assumptions
    that 1) it would make no difference, and 2) the CI would widen.

    That seemed a good thing to test.

    What made me think: look how well OpenTemp matches with a small number of sites!

  37. Clayton B.
    Posted Oct 15, 2007 at 2:40 PM | Permalink

    One more try on the different averaging distances….

  38. Clayton B.
    Posted Oct 15, 2007 at 2:47 PM | Permalink

    OK, I had a write-up on this that got ate up so I’m just posting figures…

  39. Posted Oct 15, 2007 at 3:11 PM | Permalink

    Clayton B:
    Your write-up was posted to Unthreaded #22.

  40. Posted Oct 15, 2007 at 3:13 PM | Permalink

    RE36, regarding microsite BIAS, I have an English translation of Michel Leroy’s paper from Météo-France in the works. His paper is the basis for the entire CRN rating. There have been excerpts from this paper published in English, and that was the basis for the US Climate Reference Network adopting the site rating scheme, as well as my use of it, but the whole paper has never been translated. He sent me his original in French last week, and Hu McCulloch of OSU and I have been working on it. It’s back in Leroy’s hands now for approval. Once approved by him, I’ll get it posted.

  41. Kenneth Fritsch
    Posted Oct 15, 2007 at 3:17 PM | Permalink

    Re: #31

    John V, I think your method does overcome the missing data problem, but I need to ask some questions about the details.

    1. How do you think your averaging algorithm changes the analysis results relative to the method USHCN uses for filling in missing data points? I sometimes think we may be debating issues that make only marginal differences, and that we do adjustments that really do not change results from USHCN “standard” methods.

    2. Is your offset based on monthly data or yearly? Data points that go missing for a warm or cold part of the year in a state with a climate like IL would make a difference with yearly data points and offsets. Although the average 48-states temperature does not change by large amounts over the years, would not that offset have to be calculated for the period of time that is being covered in the particular analysis?

    Finally, even with a reasonable fix for the effects of missing data, does not the problem of non-homogeneities in the Raw and TOBS data sets remain?

  42. Posted Oct 15, 2007 at 3:40 PM | Permalink

    RE36, Mosh, another thought on the microsite bias issue. This one has been brewing in me a long time.

    I tend to view the USHCN network much like an electronic circuit: a multi-node A-to-D converter with a big low-pass filter attached to it in the form of the yearly-average-temperature math procedure.

    Such a circuit will filter out all the short-period noise. But long-period bias will get through it and be darned hard to detect or to separate from background.

    One clear instance of potential long-period bias is MMTS conversion. With the majority of the network (71%) now MMTS, and with that conversion being done over a period of 20 years, there should be a gradual bias added. The whole process of converting CRSs that “likely” had better exposure to MMTSs, which due to cable issues have been placed closer to buildings, should slowly introduce a long-term bias signal. It’s a slow site-by-site change, with a randomized distribution in the USHCN network from both temporal and spatial standpoints. If the CRS-to-MMTS conversion had been done all at once, like over a year, it would likely show up as a step function. But given the long period, how do we detect it? When the MMTS was first installed, it may have been a CRN3, but then over time, since human activity is very close, it gets turned into a 4 or 5 as things get built up around it. Marysville is a perfect example of that. Used to be grass back there; then the CRS was converted to MMTS, still over a small patch of grass; then it got moved; then cell tower buildings were added, and nearby parking. Each thing adds a little bit, or subtracts a little bit if shading is involved. Each small step will change the trend slightly. The process is cumulative but hard to spot.

    How do we test for this? My thinking is that OpenTemp and GISS aren’t wired for the job. I’m not faulting them, but they are designed with a different process in mind.

  43. Kenneth Fritsch
    Posted Oct 15, 2007 at 4:31 PM | Permalink

    I have calculated the average latitude (Lat), longitude (Long) and elevation (Elev) for the following groups of USHCN stations:

    All USHCN:
    Lat = 39.6; Long = -95.7; Elev = 1680.

    CRN123:
    Lat = 40.1; Long = -94.5; Elev = 1542.

    CRN123MM:
    Lat = 42.2; Long = -101.1; Elev = 1944.

    CRN45:
    Lat = 40.2; Long = -98.8; Elev = 1521.

    The differences in absolute temperatures between CRN123MM and the other groups agree qualitatively with the more northerly and higher location of the CRN123MM stations. The difference between CRN123 and all USHCN stations is not explained by elevation differences, although the CRN123 stations on average are 0.5 degrees further north.

    To complete this analysis we need to look at the CRN45 absolute temperatures.

  44. steven mosher
    Posted Oct 15, 2007 at 4:48 PM | Permalink

    RE 42: EXACTLY.

    “RE36, Mosh, another thought on the microsite bias issue. This one has been brewing in me a long time.

    I tend to view the USHCN network much like an electronic circuit:
    a multi-node A-to-D converter with a big low-pass filter attached to
    it in the form of the yearly-average-temperature math procedure.”

    There are ALSO bandpass filters. The USHCN process filters spikes: if the microsite bias hits like shot
    noise, it looks like a spike and USHCN clips it to a window.

    “Such a circuit will filter out all the short-period noise.
    But long-period bias will get through it and be darned hard to detect or to separate from background.”

    Yes. One thing I did early on (a smart fellow at RC asked me a fair question) was to ESTIMATE
    the size of the effect. This estimation is a GUESS. So I guessed at several things; JohnV
    has done the same thing independently. Looking at the size of the effect and the magnitude of
    the noise, my SOUTH END tells me we have no power in the test. More precisely, the power of the
    test is not great enough to identify a low-powered signal with reliability. Ahhh, grossly:

    If the noise band around a yearly figure is .1C and we are trying to pull out a .05C bias with 95%
    confidence…
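
    Back of the envelope (the standard normal-approximation sample-size formula; it also treats years as independent, which they aren’t quite):

    ```python
    # How many yearly values to detect a 0.05C bias in 0.1C noise,
    # two-sided test at 95% confidence with 80% power?
    import math

    def n_required(sigma, delta, z_alpha=1.96, z_beta=0.84):
        return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

    print(n_required(sigma=0.1, delta=0.05))   # ~32 years of data
    ```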

    “One clear instance of potential long-period bias is MMTS conversion.
    With the majority of the network (71%) now MMTS, and with that conversion
    being done over a period of 20 years, there should be a gradual bias added.”

    I am not sure the BIAS is gradual. MMTS resulted in a move toward human structures.

    1. This means wind shelter, BUT ONLY when the wind is blowing. Wind is a rare event
    (looking at the Parker data, and some Fresno data, wind looks like a Poisson process).

    2. This means anthro heating: on summer days when AC is used, on winter days when heating is used.

    So the bias isn’t a RAMP. It appears on certain days, not on others. Certain months it’s stronger.

    A ramp bias in the signal would be easier to detect. This kind of bias raises the noise floor,
    widens the variance, shifts the DC up a bit.. get the picture?

    “The whole process of converting CRSs that “likely” had better exposure to MMTSs, which due to
    cable issues have been placed closer to buildings, should slowly introduce a long-term bias signal.
    It’s a slow site-by-site change, with a randomized distribution in the USHCN network from both temporal
    and spatial standpoints. If the CRS-to-MMTS conversion had been done all at once, like over a year,
    it would likely show up as a step function. But given the long period, how do we detect it?”

    OK, I see what you mean. The introduction is gradual… adjust my previous comments.
    The detection has to be on a site-by-site basis. We need the conversion date.

    “When the MMTS was first installed, it may have been a CRN3,
    but then over time, since human activity is very close, it gets turned
    into a 4 or 5 as things get built up around it. Marysville is a perfect example of that.
    Used to be grass back there; then the CRS was converted to MMTS, still over a small patch of grass;
    then it got moved; then cell tower buildings were added, and nearby parking.
    Each thing adds a little bit, or subtracts a little bit if shading is involved.
    Each small step will change the trend slightly. The process is cumulative but hard to spot.”

    The Marysville site got me started. The only way I saw the change was by comparing to sites within 100 km.
    When you did that, you could see the bias in the site. Small, subtle, gradual. When you dump the offal
    in the grinder you get sausage. Then it’s hard as hell to find the pig’s lips.

    “How do we test for this? My thinking is that OpenTemp and GISS aren’t wired for the job.
    I’m not faulting them, but they are designed with a different process in mind.”

    At some point I will get back to the Titusville study and show you all an approach. I’ll do the same
    thing for Orland (but Quincy is spotty on data; I’d rather use Mineral or Manzanita Lake and Chester).
    OpenTemp works fine for this process. Some minor mods might help. But let me do a pilot study and then
    let others throw rocks at it. SteveMc and others might see a different approach. So I’ll do my thing
    here in a few days. Others can criticize or innovate as they see fit.

    LASTLY, I think it is critical to identify and document sites that the CRN can calibrate to.
    One thing we know: we don’t need the CRN45s to represent the climate.

  45. JZ Smith
    Posted Oct 15, 2007 at 5:13 PM | Permalink

    Great site, lots of great info (I suspect), but as a layman, almost impossible to understand. One or more of you smart guys needs to either contribute a summary write-up to posts here or a companion site that boils these posts down to how they relate to AGW. Does this info support or not support AGW?

    I understand Mr. McIntyre’s mission for this site as “auditing” the data and methods of the mainstream AGW arguments, but there are many of us “science-challenged” skeptics who would really like to see an unbiased, objective review of the science on both sides of this issue dumbed down to our level.

    Maybe the “anti-RealClimate” site? They do a nice job of “selling” their views to the average Joe like me.

    Just a thought…

  46. JZ Smith
    Posted Oct 15, 2007 at 5:21 PM | Permalink

    A clarification to my post above:

    One or more of you smart guys needs to either contribute a summary write-up to posts here or a companion site that boils these posts down to how they relate to AGW. Does this info support or not support AGW?

    I meant to write “…to how they relate to AGW for the layman.”

  47. steven mosher
    Posted Oct 15, 2007 at 5:49 PM | Permalink

    RE 40. Great. I trust he appreciates the hard work of the team.

    I’ll hold my tongue till I read it.

  48. steven mosher
    Posted Oct 15, 2007 at 5:53 PM | Permalink

    RE45: JZ, good questions. If you ask these over on “Unthreaded” you’ll get an earful of stuff.

  49. Anthony Watts
    Posted Oct 15, 2007 at 5:56 PM | Permalink

    RE45, JZ, try this blog, http://www.wattsupwiththat.com, for a layman-level read on many of these things. I spent 20+ years in front of a TV camera explaining the complexities of weather to a layman audience, so I know the frustration of which you speak, and I try to write my posts catering to the layman. Often many of the things discussed here in detail are written up in “Cliff’s Notes version” there too.

    I’d also point out that Climate Audit isn’t about “does this info support or not support AGW”; it is about what the numerical truth is behind the data and the procedures that use that data. For example, I doubt you’ll find anyone here who disputes a temperature rise in the surface, sea, and satellite temperature records over the last century. But you will find probing questions into how certain conclusions were arrived at, with detailed looks into data gathering, data processing, and application of results.

  50. David
    Posted Oct 15, 2007 at 6:01 PM | Permalink

    #45: I often wish the journal Nature would do the same, but until then I am stuck with Popular Science. 😉

    RealClimate has a political agenda, which is why they make sure the layman can read it. Climate Audit should at least have a summary page of the conclusions it has reached and how it got there (a trail of factual evidence) in layman’s terms.

  51. JZ Smith
    Posted Oct 15, 2007 at 6:12 PM | Permalink

    Re 48: I thought I WAS on “unthreaded”. Sorry!

  52. JZ Smith
    Posted Oct 15, 2007 at 6:15 PM | Permalink

    Re 49, I’ll give the website a look, thanks.

    Re 50, the political agenda of RC is obvious, but you cannot overestimate the power of easy-to-understand science for the layman. Both sides need equal representation, or one side wins regardless of the science supporting the other’s position.

  53. Clayton B.
    Posted Oct 15, 2007 at 7:20 PM | Permalink

    51,

    Re 48: I thought I WAS on “unthreaded”. Sorry!

    I think you and me got our threads crossed! (see post 39 above)

  54. Paul29
    Posted Oct 15, 2007 at 7:28 PM | Permalink

    As far as the missing temperature data points go, has anyone taken a site with a complete monthly record and randomly removed data points to see what happens to the fidelity of the monthly values? A simple Monte Carlo analysis could determine the importance of the missing values.
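
    Something along these lines, with synthetic numbers just to illustrate the idea (swap in a real station record to do it properly):

    ```python
    # Sketch: knock random days out of a complete month and see how far
    # the monthly mean drifts, over many Monte Carlo trials.
    import random
    import statistics

    random.seed(42)
    days = [15.0 + random.gauss(0, 3) for _ in range(30)]   # synthetic month
    true_mean = statistics.mean(days)

    for frac_missing in (0.1, 0.2, 0.3):
        errors = []
        for _ in range(10000):
            kept = [t for t in days if random.random() > frac_missing]
            if kept:
                errors.append(statistics.mean(kept) - true_mean)
        print("%.0f%% missing: mean abs error %.3f"
              % (frac_missing * 100, statistics.mean(map(abs, errors))))
    ```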

  55. steven mosher
    Posted Oct 15, 2007 at 7:52 PM | Permalink

    RE 54. That’s on our list of stuff to do. Thanks for the suggestion. Hey, you got spare
    cycles? Pick up a subroutine and execute.

    My plan was to remove random days and then try to reconstruct with a curve-fitting
    thingy… sorry, I should be less technical.

    Still, JohnV’s OpenTemp works just fine without filling in temps. So I see
    no benefit.

    I do not see what filling in a temp actually accomplishes. WHAT?

    Generally: filling in data can’t change a mean. It can only underestimate the variance.

    (ducks and runs for cover)

  56. cce
    Posted Oct 15, 2007 at 8:38 PM | Permalink

    Here’s my stab at an algorithm that comes at the problem from a different angle. This might be cartoonishly stupid, but maybe it will inspire other ideas along these lines.

    For a given month, find all of the stations with continuous data and no moves for a period of time X (say, 5 years). Exclude the rest. For example, include all of the sites with complete data for October for 5 continuous years.

    Look at each of the remaining stations individually. Find the closest Y stations (for example, the 10 closest stations). Calculate the X-year trend for each of these nearby stations in addition to the station we’re focused on. Sort them from smallest trend to largest trend. This will give you an S curve, with the stations with a cooling bias at one end of the S and the stations with a warming bias at the hot end of the S (or maybe you exclude stations that really have unusual warming or cooling compared to the others, but they will get multiple chances to be included in future steps). Find the point of inflection of the S curve. Mark this as an “ideal” station (or you could mark several stations near the inflection point).

    Move on to the next station. Now do the exact same thing for its nearest Y neighbors. In other words, if there are 2000 stations, this will be done 2000 times. Some stations may be marked repeatedly; others will never be marked. If a station is never marked, exclude it.

    The idea is that we look at the area surrounding each station, and the size of this area is determined by the proximity of its closest neighbors. That is, the algorithm dynamically adjusts for sparse data but always includes enough sites to construct the S curve. If sites have unusual warming or cooling from the perspective of every one of their neighbors, then they are excluded and only the “best” sites are left.

    Use the trends at each of our “ideal” sites to create an area-weighted trend for the entire area in question (US or the world, it wouldn’t matter) for that month, covering that time period (i.e. October, 5 years).

    Move on to the next month. You would build the chart backwards using the instantaneous trend. This would automatically create a “smoothed” graph, with the amount of smoothing determined by our time period X. The noise of each individual year would automatically be filtered out.

    You could specify a maximum bound of any distance you like. Once there aren’t enough sites within the maximum bound to satisfy Y, you’ve examined the longest time period you are going to get.
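
    Roughly, the per-station step I have in mind (a sketch; crude flat lat/lon distance, and I am treating the middle of the sorted trend list as the inflection point):

    ```python
    # Sketch: for each station, rank its Y nearest neighbours by X-year
    # trend and mark the station in the middle of the S curve as "ideal".
    from collections import Counter

    def mark_ideal_stations(stations, trends, neighbours=10):
        """stations: dict id -> (lat, lon); trends: dict id -> X-year trend.
        Returns ids that were marked at least once."""
        marks = Counter()
        for sid, (lat, lon) in stations.items():
            near = sorted(stations,
                          key=lambda s: (stations[s][0] - lat) ** 2
                                      + (stations[s][1] - lon) ** 2)
            near = near[:neighbours + 1]          # self plus Y neighbours
            ranked = sorted(near, key=lambda s: trends[s])
            marks[ranked[len(ranked) // 2]] += 1  # middle of the S curve
        return {sid for sid, n in marks.items() if n > 0}
    ```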

    Workable? Unworkable?

  57. Gary
    Posted Oct 15, 2007 at 8:46 PM | Permalink

    I’m just flying over these posts at 30,000 ft, but basically following the discussion. The question about the timing of the CRS-to-MMTS conversion bias makes a lot of sense to investigate. Has any thought been given to checking for any other non-random temporal patterns in station relocations prior to the 1980s? Is it even possible?

  58. Geoff Sherrington
    Posted Oct 16, 2007 at 4:14 AM | Permalink

    re #56 cce

    Why not read a text on geostatistics instead? Friendly suggestion. Geoff.

  59. Kenneth Fritsch
    Posted Oct 16, 2007 at 9:51 AM | Permalink

    Below I am presenting two graphs showing the differences in absolute temperature, in degrees F, between CRN45 and CRN123 using the USHCN data set. Since these two groups of stations were shown to have very nearly the same elevations and latitudes/longitudes in my previous post, a comparison of absolute temperatures is, at least from a qualitative standpoint, more reasonable.

    I see an early time period where the differences are constant; they then change in the middle of the period and level off again in the latter part. This absolute-temperature observation is in essential agreement with the significant trend differences that I noted between these two groups in the period 1945-1985, and the absence of differences in 1920-1945 and 1985-2005.

  60. Clayton B.
    Posted Oct 16, 2007 at 10:24 AM | Permalink

    Kenneth,

    I missed a lot. How are your temperatures calculated/averaged?

  61. SteveSadlov
    Posted Oct 16, 2007 at 11:44 AM | Permalink

    The fault scarp as witnessed in Greenland’s records.

  62. Mike B
    Posted Oct 16, 2007 at 12:46 PM | Permalink

    Could someone help me out a bit with these MMTS devices?

    Is its average daily temperature based on a min/max average?

    The reason I ask is that, based on the data I’ve worked with so far, the TOBS adjustment required for a min/max average is different from the TOBS adjustment required for a continuous average.

    Thanks

  63. cce
    Posted Oct 16, 2007 at 1:19 PM | Permalink

    Geoff,

    Thanks for the suggestion. I was just wondering if it could possibly work. When it comes time to analyze the rest of the world, some dynamic method of excluding and including sites will be necessary.

  64. Posted Oct 16, 2007 at 2:25 PM | Permalink

    I just realized that my method of converting to temperature anomalies in OpenTemp is incorrect. I should be looking at each month’s anomaly from the monthly average over a reference period, and then averaging the monthly anomalies to get the yearly averages. I’ve been using absolute monthly temperatures and calculating the yearly anomalies from the yearly average in the reference period.

    I don’t expect this will make much difference, but it should be fixed.

    I will add an option to OpenTemp to calculate and write proper monthly and yearly anomalies for a given reference period.
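
    For the record, the corrected method I mean is roughly this (a sketch; the 1951-1980 baseline is just an example):

    ```python
    # Sketch: anomaly per calendar month vs. a reference climatology,
    # then average the monthly anomalies into yearly anomalies.
    def yearly_anomalies(monthly, base=(1951, 1980)):
        """monthly: dict mapping (year, month) -> temperature."""
        ref = {}
        for m in range(1, 13):
            vals = [t for (y, mm), t in monthly.items()
                    if mm == m and base[0] <= y <= base[1]]
            ref[m] = sum(vals) / len(vals)
        out = {}
        for y in sorted({y for (y, _) in monthly}):
            anoms = [monthly[(y, m)] - ref[m]
                     for m in range(1, 13) if (y, m) in monthly]
            if anoms:
                out[y] = sum(anoms) / len(anoms)
        return out
    ```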

    Steve Mosher: My mistaken understanding of anomalies may explain some of our disagreements on their usage.

  65. steven mosher
    Posted Oct 16, 2007 at 3:42 PM | Permalink

    RE 64. JohnV, I have been puzzling over the anomaly thing for some time. I asked SteveMc how he did
    his, then looked at his code. I think the approach he was using was mistaken… at least for
    DIFFERENCING anomalies.

    As long as OpenTemp outputs absolute temps for the dataset selected, I have no problem. I understand
    why Dr. Hansen only publishes anomalies, but I wish they would give both the absolute and the anomaly,
    or at least the average for the period of interest when they reference a period. It just makes checking
    the data easier.

    On a final note: I tried (not very forcefully) to defend you on the getting-warmer thing (1998 to present). I obviously
    can’t explain or condone everything folks say (can’t even explain the mosh pit sometimes) but
    I do enjoy your contribution and think it makes a difference. FWIW.

    I don’t like voices being silenced. It’s my biggest beef against
    the AGW crowd. So I should prolly be more rigorous in my defense of alternative voices here.

  66. Kenneth Fritsch
    Posted Oct 16, 2007 at 4:44 PM | Permalink

    Re: #60

    Kenneth,

    I missed alot. How are your temperatures calculated/averaged?

    The temperatures are taken from the USHCN Urban data set (the most adjusted of the USHCN’s progressively adjusted data sets). I use the annual values and average all the stations in the specified group, by year, over the time period of interest.

  67. Clayton B.
    Posted Oct 16, 2007 at 6:45 PM | Permalink

    66,

    So no geographic weighting?

  68. Posted Oct 16, 2007 at 8:39 PM | Permalink

    Anomalies:
    I had myself all confused this afternoon.
    My method of calculating yearly anomalies gives the same result as the GISTEMP and HadCRU methods. I wrote a long post with all of the algebra but then my browser locked up. It’s actually pretty obvious that the results would be the same.

    I hate it when my brain runs me in circles like that.

  69. steven mosher
    Posted Oct 17, 2007 at 8:45 AM | Permalink

    RE 62.

    MikeB, not sure I get your question. Regardless of the instrument, the observer records three
    measures and forwards them to NOAA:

    1. Temp at the time of observation (rounded to the nearest integer F).
    2. Minimum temp since the last observation (rounded to integer F).
    3. Maximum temp since the last observation (rounded to integer F).

    Then NOAA and users of NOAA data (NASA) calculate a “mean temp” for the day:
    Mean = ROUND((minimum + maximum)/2)

    In all cases .5 is rounded up.

    JerryB will correct me if I got this wrong.
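
    For what it’s worth, a tiny sketch of that convention (assuming integer Fahrenheit inputs; note that Python’s built-in round() rounds halves to even, so the round-half-up rule has to be explicit):

    ```python
    import math

    def daily_mean_f(tmin, tmax):
        """NOAA-style daily mean: (min + max) / 2, with .5 rounded up."""
        return math.floor((tmin + tmax) / 2.0 + 0.5)
    ```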

  70. Kenneth Fritsch
    Posted Oct 17, 2007 at 9:10 AM | Permalink

    Re: #67

    66,

    So no geographic weighting?

    I have done state by state geographic weightings for my previous trend analyses. The adjustments have never changed the analysis conclusions. For the absolute temperature comparisons I have done recently, I did not include an adjustment for geography primarily because I also listed the average longitude, latitude and elevation for each of the groups. I am not sure that the trend adjustment I used for geographical differences can be applied to absolute temperatures.

  71. Stephen Richards
    Posted Oct 17, 2007 at 9:54 AM | Permalink

    Yes Mr Mosher 🙂 you certainly gave me a roasting on this recently. John’s contribution is absolutely vital if this team is to achieve its best.
    But hey, you guys work hard at this “hobby” and deserve to be congratulated.

    “On a final note: I tried (not very forcefully) to defend you on the getting-warmer thing (1998 to present). I obviously
    can’t explain or condone everything folks say (can’t even explain the mosh pit sometimes), but
    I do enjoy your contribution and think it makes a difference. FWIW.

    I don’t like voices being silenced. It’s my biggest beef against
    the AGW crowd. So I should prolly be more rigorous in my defense of alternative voices here.”

  72. Posted Oct 17, 2007 at 10:01 AM | Permalink

    Thanks for giving me some cover Steven and Stephen.
    I realize what I’m getting into every time I state an opinion or correct an error. I don’t mind the grilling by others — it’s good practice for choosing my words carefully, putting together a logical argument, and developing a thick skin.

  73. Mike B
    Posted Oct 17, 2007 at 10:05 AM | Permalink

    #69

    MikeB, not sure I get your question. Regardless of the instrument, the Observer records three
    measures and forwards them to NOAA.

    Thanks Mosh. I think that answers my question. Based on some of the earlier discussion, I thought perhaps that the MMTS devices were recording a 24-hour moving average of hourly (or shorter interval) temperatures rather than a 24-hour min/max average.

  74. steven mosher
    Posted Oct 17, 2007 at 8:35 PM | Permalink

    RE 71. You ain’t been in the mosh pit yet, hehe. Funny thing is I don’t recall roasting
    you. Not that I didn’t. I got a pretty thick skin (being wrong a lot helps), so feel
    free to fire back. I don’t take stuff personal (OK, I took Josh Halpern pickin’ on Kristin
    personal, so I’m kinda old school there). Just ask JohnV or Gunnar: we disagree about all sorts
    of stuff, but manage to get along after the dust settles. Kinda like mud wrastlin’,
    and in between we learn a bit from each other.

  75. steven mosher
    Posted Oct 17, 2007 at 8:40 PM | Permalink

    RE 73. The ASOS gear does something like 5-minute or 15-minute samples, and I think they
    have hourly data (it’s airports, so they have to have near-continuous reports of things like precip
    and wind and visibility). JerryB and Anthony know this stuff better than I do. I had the links to
    all the specs at one point… but my dog ate them.

    There have been studies comparing the difference between an “integrated average” and (max+min)/2.
    I read one a while back. I think the conclusion was that (max+min)/2 was an unbiased
    estimator of the “average temp” (collecting temps every 5 minutes and averaging at the end of the period).
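
    A quick simulation along those lines, assuming an idealized symmetric diurnal cycle (real diurnal cycles are asymmetric, which is where the two estimators can diverge; the station parameters here are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    t = np.arange(0, 24, 5 / 60)   # one day of 5-minute samples (hours)
    biases = []
    for _ in range(1000):
        # idealized diurnal cycle: 15 C mean, 8 C half-amplitude, plus sensor noise
        temps = 15 + 8 * np.sin(2 * np.pi * (t - 9) / 24) + rng.normal(0, 0.3, t.size)
        integrated = temps.mean()                     # "continuous" average
        minmax = (temps.min() + temps.max()) / 2.0    # traditional estimator
        biases.append(minmax - integrated)
    print(f"mean bias of (max+min)/2: {np.mean(biases):+.3f} C")
    ```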

  76. Posted Oct 19, 2007 at 11:05 AM | Permalink

    I did a comparison of USA48 regional temperature variation against the USA48-vs-global temperature variation over at Unthreaded #22:

    http://www.climateaudit.org/?p=2200#comment-150237
    http://www.climateaudit.org/?p=2200#comment-150434

    Archive:
    http://www.opentemp.org/_results/20071018_CRN123R_TRI.zip

    It got lost in the noise over the greenhouse effect, solar cycles, AGW, etc, but it may be worth discussing here.

    Does my method seem valid? Do you agree with my conclusion that the differences between USA48 and the globe are not unusual given the differences between regions in the USA48?

  77. Steve Sadlov
    Posted Oct 19, 2007 at 11:17 AM | Permalink

    John V – how much do you trust ROW data?

  78. Sam Urbinto
    Posted Oct 19, 2007 at 12:01 PM | Permalink

    I wrote this over there before it got closed, and I think it’s appropriate to answer your question above about the lower 48 graphs with it, John V.

    (Bruce never answered whether the numbers he gave me over there http://www.climateaudit.org/?p=2200#comment-150113 were the global mean trend for the period listed (with 1960-1990 or 1970-2000 as the base period), or whether the period listed was the base period and the numbers the 1880-2005 trend…)

    Anyway, the thought experiment is basically: for whatever reason, how could you explain warming if not CO2? I’ll stay away from the sun; I got dizzy enough as it was trying to follow and decipher the back and forth on that one.

    First idea. The thermometers are better, or at least different. We are therefore not measuring the same thing as we were before. The “warming” is not happening.

    Second idea. The farms, roads and cities we build, the pollution we create, and the use of fossil fuels and other things we use to create energy/heat have changed the climate dynamic, which happens to be currently positive.

    Third idea. The magnetic field is leaking. Lava is changing the ocean’s equilibrium and dynamic, which is causing it to release more of its stored heat (or to release it in a different way and/or at a different rate).

    Fourth idea. The magnetic field is leaking. A slightly weaker field is letting in more cosmic rays, which are heating things slightly.

    Fifth idea. We’re in a cycle that’s happened many times before, but we can’t see it because the proxies and/or models are not delicate enough to show us 150 year periods in the past.

    Sixth idea. We just happen to be in a trend line that’s going up rather than a flat one or one that’s going down, which has happened in the global mean anomaly before. (This one is kinda like five)

    Seventh idea. The number given in the global mean anomaly is meaningless and/or within the margin of error; we think we’re looking at signal but are actually just looking at noise and making something out of it, and it’s not warming.

    Eighth idea. The methods used to adjust the temp data are faulty and have steadily been introducing an increasing trend bias. (Sort of like one and seven together)

    Now, all that said, I’d like everyone to think about something. We quite clearly see that the lower 48 is not like the Rest Of the World. Then we look at the US by region and see they don’t match either.

    I think that tends to prove my seventh idea, I suppose. If the 48 are not like the ROW, and the 3-section 48 are not like each other, doesn’t that rather undermine the claim that this is global warming?

  79. steven mosher
    Posted Oct 19, 2007 at 12:03 PM | Permalink

    RE 76. JohnV

    I thought it an interesting approach. Essentially you argue that the continental variability
    of the US exceeds the variability between the Lower48 and the ROW.

    Very creative approach.

    What you show is that the US difference from the ROW does not REQUIRE any explanation
    beyond regional variability. It may admit other explanations, but does not require them.

    Is that your essential position?

  80. Clayton B.
    Posted Oct 19, 2007 at 12:20 PM | Permalink

    JohnV,

    First, you are presuming GISTEMP for US48 is valid based on OpenTemp results for CRN1-3R. That’s fair enough, but I think there’s more work to do on that (station histories).

    Second, US48 has many, many more stations per unit area than the ROW, which may allow GISTEMP to better “correct” or “dampen” biases.

  81. Steve Sadlov
    Posted Oct 19, 2007 at 12:54 PM | Permalink

    I seriously doubt that the warm 1930s were limited to the US48.

  82. steven mosher
    Posted Oct 19, 2007 at 1:03 PM | Permalink

    Ask Steinbeck.

    Hey, Tamino is pissed because I dared to mention Pascal’s wager and compare it to
    the precautionary principle. The mosh pit was very respectful and uncharacteristically
    restrained.

  83. steven mosher
    Posted Oct 19, 2007 at 1:06 PM | Permalink

    RE 82. My reference to Steinbeck was for Sadlov’s amusement.

  84. Larry
    Posted Oct 19, 2007 at 1:08 PM | Permalink

    82, that thought crossed my mind a while back, too. But to elaborate would be a definite snipper.

  85. Posted Oct 19, 2007 at 1:13 PM | Permalink

    Steve Sadlov:

    John V – how much do you trust ROW data?

    I don’t know yet. I was only looking at the argument that the difference in USA48 vs global trends makes the global trends suspect. I don’t think that can be said with confidence. As I said, there may be other reasons why the ROW data is bad.

    Sam Urbinto:

    I think that tends to prove my seventh idea, I suppose. If the 48 are not like the ROW, and the 3-section 48 are not like each other, doesn’t that rather undermine the claim that this is global warming?

    There are differences between regions. That’s expected in a system as complex as the earth’s climate. I do not think that invalidates the average. When I make pancakes on my camping frying pan some parts burn but some parts are under-cooked — the pancakes on average are still warmer.

    I’m staying away from your other points because I don’t want to re-ignite that fire. They were well stated though.

    steven mosher:

    Essentially you argue that the continental variability
    of the US exceeds the variability between the Lower48 and the ROW.

    I wouldn’t use the word “exceeds”. I would say “the continental variability of the US is comparable to the variability between the Lower48 and the ROW”.

    What you show is that the US difference from the ROW does not REQUIRE any explanation
    beyond regional variability. It may admit other explanations, but does not require them.

    Is that your essential position?

    That is my position. Thank you for stating it more succinctly and precisely than I could.

    Clayton B:

    Second, US48 has many, many more stations per unit area than the ROW, which may allow GISTEMP to better “correct” or “dampen” biases.

    That is a valid reason why the uncertainty on the ROW trend is larger than the uncertainty on the USA48 trend.

    SteveSadlov:

    I seriously doubt that the warm 1930s were limited to the US48.

    I believe Bruce said the warm 1930s also appeared in Canada and Greenland. I have not confirmed or denied that myself. I’m not using the above analysis to state that the ROW trend is correct — only that the difference between global and USA48 trends can be explained by regional variability.

  86. Posted Oct 19, 2007 at 1:27 PM | Permalink

    Over lunch I thought of another way to compare the regional variations: the average of absolute deviations from the mean and the sum of squares of deviations from the mean (Excel’s AVEDEV and DEVSQ functions respectively). Here are the results for the four comparisons (5yr averages, 1904 to 2006):

    USA48 West minus All:
    AVEDEV = 0.19, DEVSQ = 5.02

    USA48 Central minus All:
    AVEDEV = 0.10, DEVSQ = 1.63

    USA48 East minus All:
    AVEDEV = 0.13, DEVSQ = 2.60

    Globe minus USA48:
    AVEDEV = 0.13, DEVSQ = 3.00

    Again the globe minus USA48 variation is not unusual compared to regional variations in the USA48. I’m open to suggestions for more ways to analyze the results.

  87. steven mosher
    Posted Oct 19, 2007 at 1:52 PM | Permalink

    RE 82.

    OK. I posted this at Tamino’s (I have stopped outing his real name) and I told him I would cross-post
    it. I try to keep promises. For the most part I like Tamino because he does math. I can live with the
    crappy nonsensical commentary as long as he sticks to the math. If I want nonsense I go read Eli.

    Anyway, I put up a two-sentence comment that led to a total china 3 meltdown. Kinda weird. Here I can
    fight with the stubborn Gunnar for three days before our plug gets pulled. Anyway…

    I cross post here.

    “Tamino,

    You construe it as a smarmy insult. And now ask the mob to decide.

    First, there is nothing ingratiating about my “insult”.

    There is nothing sleazy about my “insult”.

    There is nothing insulting about my “insult”.

    I award you no points. (I should cite Billy Madison here.)

    Now, about my purported insult.

    What exactly is the insult? My comment cites Pascal’s wager. A wager that takes the exact form of the wager analyzed by the science teacher. The same logic of argument is used. Decision theory is used to assert that a certain path should be taken for consequentialist reasons.

    So, where is the insult? I can imagine these:

    1. You think I am insinuating that you are not religious and don’t go to church.

    2. You think I am insinuating that you are not logical by endorsing an argument that is used by theists to coerce belief.

    3. You think I am insinuating that you never heard of Pascal’s wager or, if you had, had never thought about how it resembles this argument.

    Which insult are you complaining about?

    1. Atheism
    2. Hypocrisy
    3. Ignorance

    The insult you complain about is the most telling.”

    Cross-posted. I do not approve of people whining about banned posts after the fact.

  88. Posted Oct 19, 2007 at 2:01 PM | Permalink

    #87 steven mosher:
    This sounds like a discussion for Unthreaded.
    If SteveMc starts a new one, I promise to stay out of any arguments with MarkW, MarkR, Bruce, et al.

  89. Posted Oct 19, 2007 at 2:14 PM | Permalink

    Re #86:
    I just realized that I should not have used AVEDEV and DEVSQ, since they look at deviations from each sample’s mean. The comparisons are already deviations from their enclosing region (USA48 or Globe) means. The comparison should be with zero, not the sample mean.

    Below I look at two metrics for comparing the variation. First, the mean of the absolute value of the difference. Second, the square root of the mean of the square of each difference (the usual root-mean-square metric):

    USA48 West minus All:
    MEAN(ABS) = 0.23C, RMS = 0.29C

    USA48 Central minus All:
    MEAN(ABS) = 0.13C, RMS = 0.16C

    USA48 East minus All:
    MEAN(ABS) = 0.13C, RMS = 0.16C

    USA48 minus Globe:
    MEAN(ABS) = 0.14C, RMS = 0.17C

    Qualitatively these results are similar to the results above. I believe they are more correct, however.
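
    For anyone who wants to reproduce these numbers, a minimal sketch of the two metrics (hypothetical function name; the input is the already-differenced regional series):

    ```python
    import numpy as np

    def deviation_metrics(diff):
        """diff: series of (region minus enclosing-average) values.
        Deviations are measured from zero, not from the sample mean,
        since the input is already a difference series."""
        d = np.asarray(diff, dtype=float)
        return np.mean(np.abs(d)), np.sqrt(np.mean(d ** 2))   # MEAN(ABS), RMS
    ```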

  90. steven mosher
    Posted Oct 19, 2007 at 2:31 PM | Permalink

    RE 86. Interesting. I’m not unconvinced by your argument.

    Here is where I stand.

    The CRN5 (15% of the sites) appear to have a different trend from the balance of sites.

    a. Removing them may depress the overall trend in the US48, but only slightly, given their
    small number (15%) and relatively modest bias.
    b. Good practice would suggest removing them from the list of high-quality sites.
    c. Error bands may be improved (test this).

    The other thought I’m having is: can we find the stations that reflect the trend with the best fidelity?
    And those that don’t?

  91. steven mosher
    Posted Oct 19, 2007 at 2:44 PM | Permalink

    RE 88.

    You had it coming from all sides, man. The whole CO2, attribution, paleo, solar… constellation
    of arguments unravels into a twisted thing of nastiness. One reason I like CA is that for the most part
    SteveMc will keep threads focused on a particular issue (although tangents are fun). Nobody
    has come up with a foundational text on CO2, for example. For UHI it was Parker’s paper and Peterson’s paper,
    so the discussion stays pretty focused. But without a central text the discussions just whirl
    out of control, eventually. Sometimes because the topic is multi-faceted, other times because
    protagonists shift the debate, pull the rug out from beneath you. I think it leads to a bistable
    “conversation”: people shouting or people talking past one another.

  92. Posted Oct 19, 2007 at 3:06 PM | Permalink

    #90 steven mosher:
    Regarding CRN5 issues, I did a paired rural CRN123 vs rural CRN5 analysis and got some interesting results. There is a definite warming trend in CRN5 (as you and others have shown), but CRN4 seems pretty good.

    I’ve also done a similar analysis for UHI, and sent Steve McIntyre some of the results. (Paired CRN123 rural and small/urban stations). UHI was very real for those stations. I’d like to take it a step further and do a paired rural vs urban (no small towns) before I write it up.

    I’ve offered both sets of analysis and results to Steve McIntyre if he wants to write them up as an article (or have me do the same). Haven’t heard back yet.

  93. steven mosher
    Posted Oct 19, 2007 at 3:22 PM | Permalink

    RE 92. Ya, I did a similar write-up (OpenTempALL versus CRN1234) a while back and sent it in.
    He’s got a lot on his plate.

    One thing I’d like to check is how well OpenTempALL checks against GISS when the same stations and
    same periods are used. GISS drop some stations and drop some station periods. I can get you
    the list.

    Also, CRU have finally published a station list, so you could calculate a HadCRU OpenTemp.

    CRU and GISTEMP differ (for a few reasons) and it might be neat to look at that.

  94. steven mosher
    Posted Oct 19, 2007 at 3:29 PM | Permalink

    RE 92. I was thinking the other day about plotting a trend-of-trends chart (see the sketch below):

    2005-2006 trend
    2004-2006
    2003-2006
    2002-2006
    2001-2006
    2000-2006

    And so forth and so on until you get to
    1880-2006.

    So, the X axis is years past, and Y is slope.

    That way the whole cherry tree is there.
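
    Something like this sketch would produce that chart (a hypothetical helper; assumes yearly anomalies indexed by year):

    ```python
    import numpy as np

    def trend_of_trends(years, anoms, end_year=2006):
        """Slope of every window ending at end_year: (2005, 2006), (2004, 2006),
        ... down to (first year, 2006). Returns (start_year, degC/decade) pairs."""
        years = np.asarray(years)
        anoms = np.asarray(anoms, dtype=float)
        out = []
        for start in range(end_year - 1, int(years.min()) - 1, -1):
            m = (years >= start) & (years <= end_year)
            slope = np.polyfit(years[m], anoms[m], 1)[0]   # degC per year
            out.append((start, 10.0 * slope))              # degC per decade
        return out   # x axis: start year; y axis: trend
    ```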

  95. Posted Oct 19, 2007 at 4:07 PM | Permalink

    #94 steven mosher:
    To satisfy a regular who will remain unnamed, you’d also have to look at every end year. Maybe a surface plot with start year and end year as the horizontal axes and linear trend as the vertical. For both hemispheres. And then for every country. And probably for every state/province. And maybe even for every station.

    Monthly breakdowns would be useful too in case the yearly averages miss something important to *climate* trends.

    🙂

  96. Posted Oct 19, 2007 at 4:15 PM | Permalink

    Sam Urbinto:
    I agree with you that defining a global average temperature is futile, but I’m saying that the global average trend can be defined. That’s why I always focus on trends.

  97. Steve Sadlov
    Posted Oct 19, 2007 at 4:21 PM | Permalink

    RE: #94 and 95 – Do the 2006 – n years series, then do a 2005 – n and maybe a 2004 – n. Slide around the end point as well at max n. Then compare series.

  98. Steve Sadlov
    Posted Oct 19, 2007 at 4:23 PM | Permalink

    RE: #97 – I would also imagine Steve M has some good experience and expertise with end point concerns and would have some good recommendations.

  99. Posted Oct 19, 2007 at 5:55 PM | Permalink

    John V: No problem, Steve can snip it. Hopefully I will have more if he starts a solar thread.

  100. steven mosher
    Posted Oct 19, 2007 at 6:25 PM | Permalink

    RE 106 and 107. I’m just funning you guys. I wish SteveMc would start a solar thread. A while back he posted a
    solar/hydrology paper, but it wasn’t really foundational science. I think things go better when we work
    from the AGW canon and spiral out.

    Needed Threads:

    Solar.
    C02.

    SteveMc thinks he needs a paper to start the threads. I’m puzzling on that. Absent a paper
    on either, having dedicated sandboxes does lend a certain order to the discussions.

    Otherwise unthreaded becomes a sweaty mosh pit. And disagreements spill over into well-behaved
    enclaves. I know. I do this. Good discussion or good fight. If I can’t have the former, the latter
    is fun.

  101. steven mosher
    Posted Oct 19, 2007 at 6:58 PM | Permalink

    RE 109. I think if it hits common themes here it could work.

    Let me give you an example from GISS. The best CA topics are those that
    have a strong methodological component.

    There are two red flags that the bull charges:

    1. Invention of novel untested statistical approaches. When you read Hansen your eyes
    glaze over. JohnV, in contrast, explained his in a couple of sentences. And SteveMc
    used a standard stats package. You see the same theme in the Hockey Stick stuff: invention
    of novel approaches where standard approaches seem applicable. RED FLAG.

    2. Failure to preserve an adequate chain of custody WRT evidence. Not that this is a court
    case, but when a fundamental document obscures its sources and methods, then SteveMc’s
    antennae tingle. You see the same theme in the Hockey Stick stuff. RED FLAG.

    In fact, you can (within reason) distill CA concerns down to these two very specific items. Put another way,
    when these two STREAMS cross you have a powerful topic (apologies to Ghostbusters). Now, I am simplifying
    CA concerns to COMPREHEND its appeal. Topics that combine these two themes have long legs.

    SteveMc, is that fair?

  102. Posted Oct 19, 2007 at 7:10 PM | Permalink

    112: The Hurrell link is in the post; the Tsonis, well, Google it, since that was before the Scotch. I couldn’t get the whole paper, being non-affiliated with a reputable institution. I had to piece it together from several cites.

  103. Geoff Sherrington
    Posted Oct 20, 2007 at 7:53 AM | Permalink

    Taken from #85: quote

    “SteveSadlov:

    I seriously doubt that the warm 1930s were limited to the US48.

    I believe Bruce said the warm 1930s also appeared in Canada and Greenland. I have not confirmed or denied that myself. I’m not using the above analysis to state that the ROW trend is correct — only that the difference between global and USA48 trends can be explained by regional variability.” End quote.

    Here is annualised average temp data taken from daily records for Melbourne, Australia, for the years 1910 to 1950. One swallow does not a summer make, but there is not a warm 1930s here. (Pls excuse the many digits.) The temp started to rise, mainly the minimum temp, from about 1945, possibly when the site UHI effects started to cut in.

    Year Average temp deg C.
    1910 15.11808219
    1911 14.69493151
    1912 14.70819672
    1913 14.67931507
    1914 15.45082192
    1915 14.80808219
    1916 14.54808743
    1917 14.61876712
    1918 14.90630137
    1919 15.18931507
    1920 14.73265027
    1921 15.31287671
    1922 14.75328767
    1923 14.71410959
    1924 14.08415301
    1925 14.52424658
    1926 15.28547945
    1927 14.94630137
    1928 15.32513661
    1929 14.48712329
    1930 15.20753425
    1931 14.21547945
    1932 14.44002732
    1933 14.62520548
    1934 14.93972603
    1935 14.55315068
    1936 14.85669399
    1937 14.85
    1938 15.2309589
    1939 14.92424658
    1940 15.00915301
    1941 14.71136986
    1942 15.11630137
    1943 14.08794521
    1944 14.55601093
    1945 14.38671233
    1946 14.29739726
    1947 15.09410959
    1948 14.38456284
    1949 13.85136986
    1950 14.80534247

  104. Bruce
    Posted Oct 20, 2007 at 9:46 AM | Permalink

    Greenland

    The warmest year in the extended Greenland temperature record is 1941, while the 1930s and 1940s are the
    warmest decades.

    http://www.cru.uea.ac.uk/cru/data/greenland/

    Antarctica

    the entire record suggests the existence of a multi-decadal or centennial-scale cycling of climate, where Antarctic temperatures in the early 1800s were equally as warm as they were in the late-1930s/early-1940s, as well as in the late-1980s/early-1990s.

    http://www.co2science.org/scripts/CO2ScienceB2C/articles/V9/N40/EDIT.jsp

  105. steven mosher
    Posted Oct 20, 2007 at 1:28 PM | Permalink

    For interested parties.

    http://www.metoffice.gov.uk/research/hadleycentre/CR_data/Daily/HadCET_act.txt

    Looks like England temps to way back when.

    I looked at the correlation between ALL USHCN (OpenTemp) and this record. I looked; I didn’t
    analyze. I was wondering: if a few trees in Colorado can teleconnect to the whole planet,
    do you think a whole continent (the US) from 1880 to 2007 can teleconnect with a junky
    little island off Europe called England?

    Who warms more and why?

    Might be neat to regress on continental “population”

  106. Posted Oct 20, 2007 at 7:32 PM | Permalink

    steven mosher:
    I had a quick look at the England temps. Here are a couple of graphs with yearly, trailing 5yr, and trailing 11yr trends (all relative to 1951-1980 average for easy comparison against GISTEMP trends):

    The trend is similar in shape to the GISTEMP world trend (http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif), but different in details.

    This is a very small piece of land and probably not worth investigating further. Waldo does seem to have a home in England though.

  107. Posted Oct 20, 2007 at 7:34 PM | Permalink

    Steve McIntyre:
    When you have some time, I’m interested in your opinion of my regional variability analysis (see links below and some discussion after #76 above):
    http://www.climateaudit.org/?p=2200#comment-150237
    http://www.climateaudit.org/?p=2200#comment-150434

  108. MarkR
    Posted Oct 20, 2007 at 10:19 PM | Permalink

    Charts of English temperatures 1659-1973 by Gordon Manley, 1974. Note dissimilarity to current version #106.

  109. MarkR
    Posted Oct 20, 2007 at 10:22 PM | Permalink

    http://www.esnips.com/doc/53facea4-2c96-463f-9e17-cf565401c677/Gordon-Manley-English-Temperatures-1659-1973

  110. MarkR
    Posted Oct 20, 2007 at 10:32 PM | Permalink

    #108 Oops. Should say similarity!!!!

  111. Bruce
    Posted Oct 21, 2007 at 12:27 AM | Permalink

    JohnV,

    Why not just make the actual temps white on a white background and hide them completely – instead of the really light blue?

  112. Posted Oct 21, 2007 at 7:13 AM | Permalink

    Bruce,
    I originally plotted the yearly temps with a solid line. The yearly variation is so large that it makes it hard to see the pattern. The pattern is what matters — right?

    steven mosher provided a link to some data. I spent some time plotting so we could all visualize it. I believe you meant to say thanks, instead of complaining about colours.

  113. Bruce
    Posted Oct 21, 2007 at 9:09 AM | Permalink

    JohnV

    The pattern I see is that the peaks over the last 250 years are the same height, and after a century of record solar energy hitting the earth, there are a few more similar peaks in the 90s than usual.

    That pattern is obscured by the trendlines in the 1990s.

  114. M. Jeff
    Posted Oct 21, 2007 at 9:20 AM | Permalink

    Yearly trend is adequately visible on my monitor.

  115. Posted Oct 21, 2007 at 9:57 AM | Permalink

    #113 Bruce:
    The standard deviation of yearly temperature changes is ~0.55C. The 2sd confidence limit is therefore plus or minus ~1.1C. It’s not surprising to see isolated warm years throughout history considering that variability.

    However, we are talking about climate, not extreme weather years. The average atmospheric temperature over many years is what causes ice caps to melt, sea levels to rise, etc.

  116. Posted Oct 21, 2007 at 10:01 AM | Permalink

    Bruce,
    I updated the graphs to include markers on the yearly data. The trends are not obscured and you can see the yearly peaks better. Everybody wins.

  117. Posted Oct 21, 2007 at 10:15 AM | Permalink

    Bruce, one last point:
    If you insist on looking at unusually warm years, you should also be looking at unusually cold years.

    Actually, it’s interesting that the warm peaks are almost the same (in temp, not frequency) but the cold valleys are rising quickly. There might be something to that…

  118. Bruce
    Posted Oct 21, 2007 at 11:56 AM | Permalink

    JohnV

    How does the theory of AGW based on CO2 deal with maximum temperatures? Shouldn’t they be going up too?

    The UK’s 1949 is equivalent to the USA’s 1934, except 2006 beats 1949 because of one month: July.

    July 2006 Central England Temperature (CET) at 19.7 °C, is the warmest monthly figure in this series which goes back to 1659
    England experienced its hottest month on record with a mean temperature of 19.3 °C
    England experienced its sunniest month on record with 301.5 hours, beating the previous record of 284 hours in June 1957

    http://www.metoffice.gov.uk/corporate/pressoffice/2006/pr20060801.html

    Here we all are concentrating on temperature, and England experiences 6% more sunshine than it ever has experienced in its whole history.

    Is more sunshine part of the CO2 theory? Or does it fit in better with the idea that more solar energy is reaching the earth?

  119. Posted Oct 21, 2007 at 12:36 PM | Permalink

    SteveMc:
    Do you think we could have a thread on solar measurement and proxies? Just the measurements and proxies, no talk of correlations with anything (that can come later). My references show relatively flat solar activity since the early 1970s. Others have different references with different trends. It would be useful to gather the references in one place.

    Steve: Will do.

  120. Bruce
    Posted Oct 21, 2007 at 1:07 PM | Permalink

    Sunshine in the UK

    2003: http://www.metoffice.gov.uk/climate/uk/2003/sunshine.html
    2004: http://www.metoffice.gov.uk/climate/uk/2004/sunshine.html
    2005: http://www.metoffice.gov.uk/climate/uk/2005/sunshine.html
    2006: http://www.metoffice.gov.uk/climate/uk/2006/sunshine.html

    Looks well above the 1961-1990 average to me.

  121. Stephen Richards
    Posted Oct 21, 2007 at 1:12 PM | Permalink

    People

    The Armagh observatory has data going way back, has suffered no UHI effect in 150 years, and has good thermometer info.
    http://climate.arm.ac.uk/archives.html

  122. Bruce
    Posted Oct 21, 2007 at 2:37 PM | Permalink

    http://www.sciencemag.org/cgi/content/abstract/308/5723/847

    A decline in solar radiation at land surfaces has become apparent in many observational records up to 1990, a phenomenon known as global dimming. Newly available surface observations from 1990 to the present, primarily from the Northern Hemisphere, show that the dimming did not persist into the 1990s. Instead, a widespread brightening has been observed since the late 1980s. This reversal is reconcilable with changes in cloudiness and atmospheric transmission and may substantially affect surface climate, the hydrological cycle, glaciers, and ecosystems.

  123. BarryW
    Posted Oct 21, 2007 at 4:41 PM | Permalink

    117

    Your comment on the cold years is interesting. I was playing around with linear trend lines and saw that if I took the 19th century data I got a flat trend line (which extends back to about 1740), while the 20th showed a strong positive trend. There is a similar spurt from about 1700 to 1740. Was the 18th century a step function, and will the 21st show a similar step? Of course this depends somewhat on the arbitrary choice of end points, but the difference between the 19th and 20th centuries is striking. Since the USHCN records don’t extend back that far, I don’t think the change in trend is as obvious. Even the early 20th century is not out of line with the 19th (based on your 11yr trend); only since about 1990 has there been a significant deviation from the mean. It also appears that most of that is due to a reduction of below-average cold years. It would be interesting to overlay the max and min for the years to see what their trends look like. Are rising minimum temps consistent with CO2 and the models?

  124. Clayton B.
    Posted Oct 21, 2007 at 7:25 PM | Permalink

    111 & 112,

    Bruce,
    Somebody (SteveMc, I believe) has requested to see annual data in the background in case someone was interested in the single data points.

    JohnV,
    That was a bit testy, no?

  125. Posted Oct 21, 2007 at 8:48 PM | Permalink

    #123 BarryW:
    I’ll try to find some time tomorrow to look at the max and min temperature trends. I’m not sure what AGW (or other forcing theories) says about rising yearly mins vs yearly maxes.

  126. Posted Oct 21, 2007 at 8:51 PM | Permalink

    #124 Clayton B:

    That was a bit testy, no?

    Yes, it was. I’ll try not to let it happen again.

  127. nevket240
    Posted Oct 22, 2007 at 3:58 AM | Permalink

    #115
    JohnV
    I would have thought that a warming from -55 to -53C would not develop into melting of the ice caps as you suggest. The real cause is the ocean currents.

    regards.

  128. Posted Oct 22, 2007 at 9:11 AM | Permalink

    #123 BarryW:
    I had a look at the min and max temperatures this morning. I chose an 11yr trailing period for determining min and max. Here are the results for max-min:

    For central England:
    1659 to 2006: -0.16C / century, R^2 = 0.089
    1900 to 2006: +0.00C / century, R^2 = 0.000
    1976 to 2006: +1.00C / century, R^2 = 0.139

    For GISTEMP global:
    1900 to 2006: +0.11C / century, R^2 = 0.298
    1976 to 2006: +0.13C / century, R^2 = 0.028

    It doesn’t look to me like there’s anything statistically significant there. It’s too bad — I thought maybe we stumbled into something important.

  129. Posted Oct 22, 2007 at 10:17 AM | Permalink

    JohnV says in #125,

    I’ll try to find some time tomorrow to look at the max and min temperature trends. I’m not sure what AGW (or other forcing theories) says about rising yearly mins vs yearly maxes.

    I heard Lonnie Thompson speak about this here at OSU this spring. He said the declining max/min spread is evidence for GHG rather than urbanization, because more asphalt etc would make the days hotter relative to the nights. GHG, on the other hand, would hold heat in at night, ergo GHG cause AGW.

    I don’t buy this, though, because my limited experience is that trees and cornfields cool things off quickly in the evening — perhaps because their chlorophyll radiates well in the long-wave IR spectrum, enabling them to attract moisture as dew — while buildings hold heat in at night, even in the summer when the heating is not on. The dewpoint puts a lower bound on how far down plants can pull the temperature, but still they would pull it down below parking lot temperatures. Am I muddled?

    Incidentally, the MMTS tends to give much lower MAXes and much higher MINs than the CRS, for a greatly reduced spread. The MEAN effect is only slightly negative, however. This means that pre-MMTS adjustment and post-MMTS adjustment data should show very different spreads for stations with a switch.

  130. Clayton B.
    Posted Oct 22, 2007 at 10:27 AM | Permalink

    I thought less extreme minimums WOULD be an indication of UHI. Concrete/asphalt stores the energy more so than soil and vegetation.

  131. Posted Oct 22, 2007 at 10:38 AM | Permalink

    Hu and Clayton B:
    The min/max comparison I did above is different from what you’re talking about. You’re looking at the difference between daily min and max temperatures. My results above are comparing warm years and cold years calculated from daily averages. It looked like there was a trend in the central England temperatures (fewer cold years) but that trend does not seem to hold up globally.

  132. Bruce
    Posted Oct 22, 2007 at 1:12 PM | Permalink

    If you have 3 more sunny days in a month (as an example) it would result in a bigger change in monthly sunshine hours (percentage-wise) in the winter than in the summer.

    There has been more sunshine in the Northern Hemisphere.

  133. UK John
    Posted Oct 22, 2007 at 1:24 PM | Permalink

    #132

    Yes there are more sunshine hours, the surface temperature we observe has got a bit warmer, nothing surprising at all.

    Surely the question is “why more sunshine hours?”

  134. Bruce
    Posted Oct 22, 2007 at 1:34 PM | Permalink

    Why more Sunshine?

    Aerosols is one explanation:

    http://earthobservatory.nasa.gov/Newsroom/NasaNews/2007/2007031524529.html

    Scientists at GISS created the Global Aerosol Climatology Project by extracting a clear aerosol signal from satellite measurements originally designed to observe clouds and weather systems that date back to 1978. The resulting data show large, short-lived spikes in global aerosols caused by major volcanic eruptions in 1982 and 1991, but a gradual decline since about 1990. By 2005, global aerosols had dropped as much as 20 percent from the relatively stable level between 1986 and 1991.

    The NASA study also sheds light on the puzzling observations by other scientists that the amount of sunlight reaching Earth’s surface, which had been steadily declining in recent decades, suddenly started to rebound around 1990. This switch from a “global dimming” trend to a “brightening” trend happened just as global aerosol levels started to decline, Mishchenko said.

  135. Posted Oct 22, 2007 at 6:53 PM | Permalink

    Why are we talking about sunshine hours in the USHCN #3 thread?

  136. Bruce
    Posted Oct 22, 2007 at 7:06 PM | Permalink

    Doesn’t the USHCN track sunshine? The Met does in the UK.

    How can you track climate without recording sunshine?

  137. Posted Oct 22, 2007 at 7:23 PM | Permalink

    Sunshine is not related to the surface stations and their quality ratings. That’s what this thread is about.
    SteveMc, if possible can we get that solar measurement and proxies thread soon? Thanks.

    (In fairness, I also should not have analyzed the central England temps on this thread).

  138. BarryW
    Posted Oct 22, 2007 at 7:40 PM | Permalink

    #128
    Thanks, John. Goes to show that patterns you think you see might not really be there. The spread between avg min and avg max temps by year might be interesting to look at, but maybe this is the wrong thread.

  139. Clayton B.
    Posted Oct 23, 2007 at 6:27 PM | Permalink

    I looked at the trends (since 1900) for all stations as a quick determination of some of the stations that are way out of whack. Since I know little to nothing of statistics I’ll stop there and post a chart:

    ps – I added a table with the trends in the DB, JohnV. I’ll also add some columns for different trend periods (e.g. 1980-present). I think OpenTemp – or any temperature averaging program for that matter – should exclude stations on the extremes.

  140. Paul Linsay
    Posted Oct 23, 2007 at 7:33 PM | Permalink

    #139, nice plot. I’d interpret it to mean that the average trend is zero with a bunch of outliers on the warm end that would skew the average. Did you try to fit the peak to a Gaussian? Did you look at the stations that are in the positive tail to see what they have in common?

  141. Clayton B.
    Posted Oct 23, 2007 at 8:44 PM | Permalink

    140,

    The peak is right of zero (around 0.02 degC/dec). The average is 0.036 degC/dec.

    I believe most geographic averages show trends between 0.03 and 0.04. JohnV, correct me if I’m wrong.

  142. Posted Oct 23, 2007 at 9:26 PM | Permalink

    Clayton B:
    Interesting chart. I’m surprised by the range. The distribution looks almost normal and I’d guess the standard deviation is ~0.1C / decade. That’s huge. Does the chart only include series with data from 1900 to the present day, or are shorter series included?

    The linear trend on my CRN123R results from 1900 to 2006 is 0.040 C/decade.

    This is definitely worth looking into. I suggest using 1915-1935, 1935-1975, and 1975-2006 for shorter-term trends. I’ve started using degC/century because it gives values closer to 1 (easier for my brain to work with).

  143. Steve McIntyre
    Posted Oct 23, 2007 at 9:37 PM | Permalink

    John V, I wish you’d take a little more time to review what’s already been done here on these topics. The trends for all 1221 USHCN stations were calculated here, showing the map below for the TOBS data. This is what prompted the interest in Tucson, as it was identified as the station with the highest trend – and thus the search for any potential microsite problems there that might account for such outlier status.

  144. Posted Oct 24, 2007 at 2:59 AM | Permalink

    …Lonnie Thompson speak about this here at OSU this spring. He said the declining max/min spread is evidence for GHG rather than urbanization, because more asphalt etc would make the days hotter relative to the nights. GHG, on the other hand, would hold heat in at night, ergo GHG cause AGW.

    It is evident that L. Thompson is not a meteorologist and doesn’t know how the lower troposphere reacts to sunshine.
    The daytime boundary layer is a turbulent, well-mixed, thick layer of air just above the ground, and remarkable differences between rural and urban environments are quickly removed through vertical mixing and horizontal wind.
    Of course, if you are close to a south-facing wall of a building at noon (in the northern hemisphere), local constraints prevail.

    The nighttime boundary layer, instead, is quite a stable environment in which local characteristics often (if not always) play the main role.

  145. Clayton B.
    Posted Oct 24, 2007 at 5:50 AM | Permalink

    SteveMc,

    Has anyone taken a look at excluding stations at the high and low extremes from the US average?

  146. Clayton B.
    Posted Oct 24, 2007 at 6:33 AM | Permalink

    142 JohnV,

    Includes shorter series.

  147. Clayton B.
    Posted Oct 24, 2007 at 7:23 AM | Permalink

    142 JohnV – it includes all series

    The three extremes on the right side (greater than 0.38) are Pokegama Dam, Sandy Lake Dam, and McKenzie Bridge, which – since 1990 – began reporting only summer months.

    The extreme on the left side is Langdon Experiment, which – in the early 1900s – only reported during summer. Perhaps I shouldn’t have used monthly data, or maybe I should throw out incomplete years (that would be a lot of data).

  148. SteveSadlov
    Posted Oct 24, 2007 at 9:35 AM | Permalink

    Anyone have any ideas about what caused the Southern Sierra Nevada-like fault scarp between the mid/late teens and early 20s?

  149. Posted Oct 24, 2007 at 9:59 AM | Permalink

    Steve McIntyre:

    John V, I wish you’d take a little more time to review what’s already been done here on these topics.

    Since Clayton B made the plot, was that jab intended for him? I don’t understand what the problem is. Are you taking offense at other people independently reproducing and extending your results?

    I also did not see a histogram or any sort of statistical analysis in your results, besides “the oddest pattern is surely the degree to which red and blue states on these maps match their political counterparts”.

    [snip]

  150. Jeremy Friesen
    Posted Oct 24, 2007 at 10:10 AM | Permalink

    I sense bad blood rising again… do try not to take the bait, and focus on the question or request, John V. That said, I’d have to defend you in that this blog is not exactly set up to do detailed searches on topics to see if they have already been discussed at length, and Steve Mc is likely one of very few who would simply “know” that there is already info on this.

  151. Steve McIntyre
    Posted Oct 24, 2007 at 10:18 AM | Permalink

    While the blog is not laid out to access past info, the Categories give some help. http://www.climateaudit.org/?cat=54 gives access to the topic in question. John V is being very active on this blog and, given that it’s my blog, if he thinks it worthwhile to discuss USHCN trends on this blog, it would be polite to note that I had previously discussed the matter.

  152. Posted Oct 24, 2007 at 10:36 AM | Permalink

    #151 Steve McIntyre:

    would be polite to note that I had previously discussed the matter

    Fair enough.
    I am still confused as to why you request that I note your previous work when it was Clayton B who did the trend analysis. I was responding to his analysis. If it was a simple mistake, I understand that can happen.

    Clayton B, I’m not trying to single you out. I’m just confused why the jab was directed at me.

  153. Posted Oct 24, 2007 at 11:10 AM | Permalink

    Clayton B (#139) —
    Great histogram! Can you do similar ones for CRN123 (or CRN123R, if more appropriate) vs CRN45? Of course, we still only have 35% or so of stations surveyed, so we’d be looking at a small, perhaps still unrepresentative subset of the 1221 total stations.

    What level of adjustment does your data have? AREAL only? TOB? MMTS? SHAP? FILNET? “FINAL” (URBAN)? These can all have different trends. In particular, the “homogenized” data is adjusted to conform to nearby stations, perhaps contaminating 123 sites with 45 trends.

  154. Posted Oct 24, 2007 at 12:02 PM | Permalink

    Clayton B:
    If you decide to break down the histogram by CRN rating (as suggested by Hu), you should consider separating the CRN5s from the CRN4s. Some of my analysis shows that the 5s are much worse, but their problems tend to get lost due to the large number of 4s.

  155. Kenneth Fritsch
    Posted Oct 24, 2007 at 3:54 PM | Permalink

    This looking at variations of warming/cooling trends in the US is good stuff. I had the same question, Clayton, that Hu asked — what level of USHCN adjustment did you use? I would like to compare your distribution to the Final (Urban) adjustment — if you used some other level.

    I do remember Steve M’s previous graphs and wondering at the time whether we are looking at local temperature uncertainties/errors or a real variation. Real variations of this magnitude, in my mind, make talking about a US (and by inference, a global) average trend less meaningful.

  156. Sam Urbinto
    Posted Oct 24, 2007 at 5:17 PM | Permalink

    The worthiness or use of the data is a separate although related subject to this, but as far as I’m concerned, to get data that is known to be good (the first step):

    Temperature as often as possible to get a good signal (every minute? 10? 60?)
    Humidity as often as possible to get a good signal (every minute? 10? 60?)
    Amount of sunlight as often as possible to get a good signal (every minute? 10? 60?)
    Strength of sunlight as often as possible to get a good signal (every minute? 10? 60?)
    Wind speed and direction as often as possible to get a good signal (every minute? 10? 60?)
    Measuring devices that can measure down to .1
    Measuring devices that are known to be calibrated (kept calibrated)
    Measuring devices that are shielded from direct wind
    Stations in strategically placed locations and at regularly spaced intervals
    No external sources that bias the readings (or, a distant second best, sources that are reliably known so they can be accounted for)
    The data collected electronically and available to others in its raw form
    Open and available methods used to do any processing that the data is subjected to.

    Also, I want a pony for Christmas.

  157. Kenneth Fritsch
    Posted Oct 24, 2007 at 5:49 PM | Permalink

    I do have a bit of a problem when we talk about trends since 1900 and then look at missing data points from the most progressively adjusted data set that USHCN publishes. Most of these missing data points result either from USHCN not being able to compute an average or from USHCN removing “bad” data points and not re-entering a value.

    The percentages of missing data points for the USHCN Urban data set are as follows for the listed 25-year increments:

    1900-1924 = 17%

    1925-1949 = 4%

    1950-1974 = 1%

    1975-1999 = 0.1%

  158. SteveSadlov
    Posted Oct 24, 2007 at 6:33 PM | Permalink

    That’s really fascinating. Before the great fault scarp in US48, 17% missing data points. After the scarp, 4% missing. Coinkydink?

  159. SteveSadlov
    Posted Oct 24, 2007 at 6:35 PM | Permalink

    Of course another big deal sort of thing, which coincided with the great fault scarp of 1918 – 1921, was the US involvement in WW1 – we were late to the game. (Granted, the war de facto ended in 1919, but the complete stand down and final conclusion of treaties, etc, was not until 1921.)

  160. Clayton B.
    Posted Oct 24, 2007 at 7:31 PM | Permalink

    There are faults with the above histogram. I used monthly data since it was all I had access to at the time. Therefore, years with incomplete months were still used for the trends. The data used was hcn_doe_mean (this was in the original chart but did not make it to the figure I posted).

    The histogram figure needs to be reproduced with proper values.

  161. steven mosher
    Posted Oct 25, 2007 at 2:26 AM | Permalink

    RE 151.

    Huh? I keep up with this site and was here when the red state/blue state thing was done.
    I certainly didn’t make a connection between Clayton’s work and the contour plot you drew.
    John V wasn’t even here then. Is he supposed to review all the past stuff before commenting?
    You’re off base on this one.

  162. Posted Oct 25, 2007 at 9:56 AM | Permalink

    #161 steven mosher:
    I had actually read the article with the contour plot. The thing that confuses me (and that SteveMc hasn’t acknowledged) is that the histogram wasn’t even mine. I should also add that if SteveMc had made his results available, it would not have been necessary to re-create the trends. Somebody could have grabbed the data, attributed it to SteveMc, and plotted it.

    Steve: “if SteveMc had made his results available”, puh-leeze. Are you trying to liken me to someone like Thompson, who’s blocked access to his data for 20 years? C’mon. In the post in question, I provided a script which generated the results. With not much effort, any internal directory references could be externalized. I try to encourage people to be hands-on with the calculations. And this is for blog postings; Thompson and the Team refuse information on articles published in academic journals. See what happens if you try to get data from realclimate. (And yes, you can find trends in the climateaudit.org/data/ushcn/details.dat file – now updated to include the trends which were in my directory file – among other information.)

  163. Kenneth Fritsch
    Posted Oct 25, 2007 at 11:01 AM | Permalink

    Below I have presented two histograms showing the distribution of temperature trends (in degrees F per year) from 1900-2005 using the USHCN Urban data set. The first histogram uses only those stations that had a complete set of yearly average temperatures for the time period 1900-2005, while the second histogram shows all stations regardless of the completeness of their histories.

    The histograms have a reasonable degree of agreement and both show a rather close fit (eyeball) to a normal distribution — although the all-stations histogram shows a long tail in the warming direction. It would, however, be incorrect to claim that the all-stations histogram shows trends that all go back to 1900 (because of the incompleteness of the data that was available for constructing the histogram).

  164. Posted Oct 25, 2007 at 11:49 AM | Permalink

    SteveMc:

    Are you trying to liken me to someone like Thompson who’s blocked access to his data for 20 years.

    Of course not.

    Let’s review: Clayton B posted a histogram of station trends. You got upset with me for not reviewing and citing your previous work. (In Clayton’s post? You still have not acknowledged or explained this). I suggested that if your data was available, then we could more easily cite it and build on it. That’s all.

  165. Sam Urbinto
    Posted Oct 25, 2007 at 5:32 PM | Permalink

    I don’t understand, John. It seemed to me Steve was making a wry observation. Or he just thought you saying he wasn’t making the results available was ridiculous. Although I also kinda don’t understand why something Clayton said started this whole thing in the first place. *shrug*

    What is it result-wise or data-wise that you’re looking for that’s not available? I don’t understand what you’re looking for, so I can’t go look for it, or I would.

  166. Posted Oct 25, 2007 at 6:40 PM | Permalink

    #165 Sam Urbinto:
    I’m not actually looking for anything. Clayton and Kenneth have posted the histograms of temperature trends that they calculated themselves. I was just saying that if SteveMc’s previous calculations were available it could have saved them the trouble. It’s not a big deal.

    I think every article that includes analysis should have a link on the bottom to an archive that includes code, data, etc. SteveMc could set a great example by doing that.

  167. SteveSadlov
    Posted Oct 26, 2007 at 1:40 PM | Permalink

    My early 20th Century fault scarp is going to end up like jae’s negative H2O feedback discussed on other threads. No one seems to want to go there. A big scary rock to overturn. Lots of creepy crawlers will come wiggling out.

  168. Posted Oct 26, 2007 at 2:19 PM | Permalink

    Kenneth:
    Thanks for the trend distribution plots for stations with complete history. The distribution looks normal to me as well. Have you calculated the kurtosis and skew parameters? I’m hoping to generate equivalent plots using TOBS data partitioned by urbanization, CRN rating, and key trend periods this weekend.

  169. Posted Oct 26, 2007 at 2:25 PM | Permalink

    #167 SteveSadlov:
    If you decide to investigate the fault scarp further, it does not appear in the USA48 West — only in the Central and East regions (see link below). I have no idea what that means but it may be a clue.

    http://www.climateaudit.org/?p=2200#comment-150258

  170. steven mosher
    Posted Oct 26, 2007 at 3:41 PM | Permalink

    RE 168.

    Just test it for normality; there are a handful of tests. For example:
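
    A sketch of a few of them, assuming scipy is available and `trends` is the vector of station trends:

    ```python
    from scipy import stats

    def normality_report(trends):
        """Run a handful of standard normality tests on a vector of trends."""
        w, p = stats.shapiro(trends)
        print(f"Shapiro-Wilk:     W = {w:.3f}, p = {p:.3f}")
        k2, p = stats.normaltest(trends)   # D'Agostino-Pearson K^2
        print(f"D'Agostino K^2:   stat = {k2:.3f}, p = {p:.3f}")
        ad = stats.anderson(trends, dist="norm")
        print(f"Anderson-Darling: stat = {ad.statistic:.3f} "
              f"(5% critical value = {ad.critical_values[2]:.3f})")
    ```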

  171. steven mosher
    Posted Oct 26, 2007 at 3:43 PM | Permalink

    RE 167. I’ll post up my fault scarp stuff when I get a chance. I’m a bit behind now.

  172. steven mosher
    Posted Oct 26, 2007 at 4:16 PM | Permalink

    RE 169. To be precise, it’s not in the western CRN123R. How many sites are in that bin?

  173. Posted Oct 26, 2007 at 4:38 PM | Permalink

    #172 steven mosher:
    There are about 18 sites with pretty good distribution. There’s a map available here:

    http://www.climateaudit.org/?p=2200#comment-150237

  174. Kenneth Fritsch
    Posted Oct 26, 2007 at 4:56 PM | Permalink

    I have calculated the mean, standard deviation, kurtosis, skew and chi-square goodness of fit for the histogram using only the 661 USHCN (Urban data set) stations with complete yearly averages throughout the time period from 1900-2005, as follows:

    The mean of the distribution = 0.0089 degrees F per year.

    The standard deviation = 0.014.

    Kurtosis = -0.37, indicating a somewhat flatter distribution with lighter tails than normal. Kurtosis = 0 for a perfectly normal distribution.

    Skew = 0.04, indicating a very slight skew towards more positive trends. Skew = 0 for a perfectly normal distribution.

    Chi-square goodness of fit had a p = 0.78, indicating a very good fit of the histogram to a normal distribution. I used the standard requirement that all bins have a minimum of 5 values per bin.

    While I hesitate to do the same with the “all stations” histogram due to the amount of missing data, I will do it as a general exercise and post it under separate cover.

    With all the indicated regional differences in temperature trends, I am surprised to see this good a fit to a normal distribution. I do not see any bi- or multi-modal character in the distribution.
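
    For reference, a sketch of the chi-square fit described here; the pooling of sparse end bins into their neighbours is my assumption about how the minimum-of-5 rule was enforced:

    ```python
    import numpy as np
    from scipy import stats

    def chi2_normal_fit(x, nbins=20):
        """Chi-square goodness of fit of x against a fitted normal,
        pooling end bins until every expected count is at least 5."""
        x = np.asarray(x, dtype=float)
        mu, sd = x.mean(), x.std(ddof=1)
        observed, edges = np.histogram(x, bins=nbins)
        observed = observed.astype(float)
        expected = np.diff(stats.norm.cdf(edges, mu, sd)) * x.size
        while expected.size > 4 and expected[0] < 5:    # pool the low end
            expected[1] += expected[0]; observed[1] += observed[0]
            expected, observed = expected[1:], observed[1:]
        while expected.size > 4 and expected[-1] < 5:   # pool the high end
            expected[-2] += expected[-1]; observed[-2] += observed[-1]
            expected, observed = expected[:-1], observed[:-1]
        chi2 = np.sum((observed - expected) ** 2 / expected)
        dof = observed.size - 1 - 2   # minus 2 for the fitted mean and sd
        return chi2, stats.chi2.sf(chi2, dof)
    ```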

  175. steven mosher
    Posted Oct 26, 2007 at 5:57 PM | Permalink

    RE 173. I checked the fault scarps (year minus previous year) for all stations and for CRN5. While you do see wild
    swings from year to year, there appears to be nothing uncharacteristic about the early period (except
    that the biggest swing happens to happen then). The period from 57-73 is benign, that is, small swings,
    which seems the MOST uncharacteristic. From 73 on the large swings return. So I suppose we are looking
    at volatility here.

    If you recall, I did a plot comparing the sea anomaly with the land anomaly, differencing the two.
    In the period leading up to the mid 1940s the land anomaly was running up faster than the ocean anomaly;
    from 45 to the mid 80s the two were in balance; and then the land anomaly starts to change at a faster
    rate. FWIW.

  176. Kenneth Fritsch
    Posted Oct 26, 2007 at 6:40 PM | Permalink

    Re: #174

    For comparison purposes I have calculated the mean, standard deviation, kurtosis, skew and chi square goodness of fit for the histogram using all 1221 USCHN (Urban data set) stations regardless of the completeness of yearly averages throughout the time period from 1900-2005 and presented them as follows:

    The mean of the distribution = 0.0108 degrees F per year.

    The standard deviation = 0.015.

    Kurtosis = 1.66, indicating a more peaked distribution with heavier tails than normal, consistent with the few outliers on the positive side. Kurtosis = 0 for a perfectly normal distribution.

    Skew = 0.43, indicating a skew towards more positive trends. Skew = 0 for a perfectly normal distribution.

    Chi square goodness of fit had a p = 0.62, indicating a good fit of the histogram to a normal distribution, but not as good as that for the stations with complete yearly averages. I used the standard requirement that bins have a minimum of 5 values per bin; this can improve the fit when a few outliers are in the distribution, as was the case here.

  177. steven mosher
    Posted Oct 26, 2007 at 7:03 PM | Permalink

    RE 173. Thanks John. I should have read the thread (grin)

  178. Joe Bowles
    Posted Oct 28, 2007 at 8:18 PM | Permalink

    I have been tied up with client work, so I am just catching up. One question I have for JohnV is why you seem to be making judgments concerning significance based on the R^2 instead of the actual probability? Assuming that the dominant trend is obscured by noise, it seems likely that the R^2 may be low but still be statistically significant. I also wonder if the trends should be calculated based on linear regression. It is possible that a transform might result in a better fit.

    I’m glad that work is proceeding.

    Joe

  179. Clayton B.
    Posted Oct 29, 2007 at 5:06 PM | Permalink

    RE: 162 Steve Mc,

    When you first responded to the trends post I tried to locate a list of the trends but was only able to find the trends of the “top ten” with positive slopes.

    I am glad you have posted them in your details file. Thanks.

  180. Posted Oct 29, 2007 at 7:51 PM | Permalink

    #178 Joe Bowles:

    why you seem to be making judgment concerning significance based on the R^2 instead of the actual probability?

    No reason other than my very rudimentary knowledge of statistics. I welcome any feedback on how to better quantify the agreement between result sets and/or the significance of trend lines.

  181. steven mosher
    Posted Oct 29, 2007 at 9:39 PM | Permalink

    RE 180. With climate data, if R^2 gets too big I figure something must be wrong.

  182. Kenneth Fritsch
    Posted Oct 30, 2007 at 9:14 AM | Permalink

    I think it is helpful in threads such as this one to occasionally summarize what has been discovered (or thought to be discovered, as this is a blog and not a science publication). I would guess that it would be difficult to get a “consensus” in such an endeavor and thus it might be better to have those with a strongly held view of any conclusions or tentative conclusions to present them. To that end, I am presenting my preliminary and not scientifically proven conclusions below.

    1. One can use a partition of the CRN station quality into CRN123 and CRN45 in order to obtain reasonably large sample sizes to do statistical testing of the temperature and temperature trend differences.
    2. The only differences that I could determine as having statistical significance for temperature trends between CRN123 and CRN45 were for the approximate time period 1945-1985.
    3. The absolute differences in temperatures between CRN123 and CRN45 also change in this approximate time period.
    4. The two CRN123 and CRN45 groups of stations, while having very close average longitude, latitude and elevation locations and without significant geographic/regional trend differences (as determined by my admittedly crude state-by-state corrections), also show a long term bias of the CRN45 stations being warmer than the CRN123 stations. The basis for this long term difference would make sense from the approximate time period from 1945 to present, but not before this time.
    5. Since the station quality ratings are not currently known to apply to any particular time period, they can only be used at this time to infer some general trends back in time, without a clear-cut beginning or end for individual stations, and thus they describe a rather poorly defined average condition of these groups of stations over time.
    6. The uncertainty derived from these quality findings would appear to me to apply to any calculation of trends, even though it is unlikely to reverse the trend direction. The harder the trend differences by station quality are to find going back in time, the more likely it is, in my mind, that the quality of many stations has changed over time, from better to worse and from worse to better, and perhaps more than once. These changes in turn would lead to a currently unknown and unmeasurable uncertainty in trends, particularly when looking at local station differences, which in my mind become important in attempting to conceptualize the meaning of an average trend for the lower 48 states in the face of large differences in station-to-station trends.
    7. Finally, I remain unconvinced that using raw or TOBS USHCN data series adds anything to finding differences between stations, and it may well add some measure of error to the series by including obvious outliers and not accounting for missing data points.

  183. Joe Bowles
    Posted Oct 30, 2007 at 11:23 AM | Permalink

    Re 180:

    John V:

    The R^2 is an indication of the amount of variance accounted for by the regression equation. The probability that the equation is meaningful (i.e. statistically significant) is a different issue, calculated using an analysis of variance. This provides an F-test which generates a statistical probability that the equation is a true result. In the literature, the level of probability for acceptance is set based on the consequence of a false result. For most things, the probability of a false result is set at 5%, which means that there is a 5% chance that the result is spurious. In medical research, where a false result may be catastrophic, the level of significance is frequently more like 1 chance in 1,000 to 1 chance in 10,000.

    In a lot of cases, the R^2 is relatively low, but the finding is meaningful and useful. For example, the R^2 of the SAT is about .15, but that still provides guidance for success in college.

    My point was that the R^2 you are seeing may be low, but they may be meaningful in something like climate science.

    It isn’t clear to me that we should find a particularly high R^2 for the data. It isn’t intuitively obvious (to me, at least) that the data should be linear, especially when the underlying data (temperature records) aren’t linear. Since we are combining data across different regions, each of which has its own idiosyncratic weather systems, the likelihood of them behaving in the same way seems pretty low.

    As a result, it seems logical that we have a lot of noise in the data and that the proportion of the observations which relates purely to an underlying global warming signal would be pretty low. So, intuitively, I would expect to see only a small proportion of the variance being explained by the correlations and, by extension, the R^2.

    If you run a regression on Excel, it will generate the R^2 and the analysis of variance. It also provides the probability for the slope.
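
    The same numbers are easy to get outside of Excel. Here is a minimal sketch in Python/scipy with made-up data (the trend and noise levels are purely illustrative), showing that a modest R^2 can still come with a highly significant slope; for simple regression the slope’s t-test is equivalent to the ANOVA F-test:

        import numpy as np
        from scipy import stats

        years = np.arange(1920, 2006)
        rng = np.random.default_rng(0)
        # Small real trend (0.005 deg/yr) buried in noise (sd = 0.25 deg)
        anoms = 0.005 * (years - years.mean()) + rng.normal(0, 0.25, years.size)

        res = stats.linregress(years, anoms)
        print(f"trend   = {res.slope:.4f} deg/yr")
        print(f"R^2     = {res.rvalue**2:.3f}")   # modest, roughly 0.2 here
        print(f"p-value = {res.pvalue:.2g}")      # yet highly significant
        print(f"std err = {res.stderr:.5f}")      # standard error of the slope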

    Hope this helps. You are doing some great work. I just didn’t want you to toss out some of your results merely because the R^2 is low. We may be seeing something valuable…i.e. how much of the warming is actually predictable versus inter-regional variation.

    Joe

  184. Joe Bowles
    Posted Oct 30, 2007 at 11:39 AM | Permalink

    Re: 182

    Kenneth:

    I tend to agree with your summary. One thing I don’t think anyone has looked at (my apologies if I missed it) is the level of correlation between the stations in the CRN45 group. As far as I can tell, our understanding of them is limited to their degree of violation of the standards.

    It seems to me that there are several relevant questions about those stations:
    1) Do they tend to err in the same direction? I think we have intuitively assumed that the directionality of the bias is in the overstatement of warming, but I am not sure whether we have actually checked to verify this is the case other than in the aggregate.
    2) Do these stations tend to differ significantly in their bias? Since the Hansen calculations seem to include stations within 1200 km, the difference in some key stations may be contaminating the process disproportionately, especially given their apparent dispersion.
    3) The reason I bring up (2) is that the apparent effect of the group of CRN45 stations may affect things differently in the other configurations than they do combined as a single class. I don’t know if this is the case. It is just a thought.

    I offer my sincere thanks to everyone who is playing with the data. I am learning so much from all of you and want to express my thanks and admiration for the good work.

  185. steven mosher
    Posted Oct 30, 2007 at 12:43 PM | Permalink

    RE 183 and 184.

    My approach has been a bit different here. For me the question on the table is this:

    1. Does the inclusion of non-compliant sites (say CRN5) influence the trend and/or
    the variance? The simple way to assess this is to compare two series:
    A. All sites without CRN5
    B. CRN5

    Clearly 1221 stations is more than is used for other land masses of comparable size. So you cannot
    argue that all 1221 are needed without vitiating the analysis of other land masses.
    If one found that CRN5 differed substantially, it would make sense to exclude those sites from
    the network. A glance at variance with and without CRN5 would also be telling.

    CRN5 versus the sites that are not CRN5 (rated CRN1234 and unrated).

    So the unrated sites will contain some CRN5.

    Does CRN5 warm more? The same? Less?
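
    A minimal sketch of that A-versus-B comparison in Python/scipy (the function and variable names are mine, and the inputs are assumed to be aligned annual anomaly series):

        import numpy as np
        from scipy import stats

        def compare_networks(years, all_minus_crn5, crn5):
            """Trend of (CRN5 minus the rest), with the slope's p-value,
            plus the variance of each series about its own trend line."""
            a = np.asarray(all_minus_crn5, float)
            b = np.asarray(crn5, float)

            diff = b - a                     # positive slope => CRN5 warms faster
            res = stats.linregress(years, diff)

            # Detrend before comparing variances, so a shared long-term
            # trend doesn't inflate them
            resid_a = a - np.polyval(np.polyfit(years, a, 1), years)
            resid_b = b - np.polyval(np.polyfit(years, b, 1), years)
            return res.slope, res.pvalue, resid_a.var(ddof=1), resid_b.var(ddof=1)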

  186. steven mosher
    Posted Oct 30, 2007 at 12:51 PM | Permalink

    Between 1880 and 1985 LIG was the dominant sensor (called minmax in Anthony’s spreadsheet).

    So from 1880 to 1985 we should expect no difference in TREND between MMTS and LIG.

    WHY? Well, MMTS was introduced in 1985, and before that the stations were CRS shelters housing
    LIG sensors.

    To test this I selected all minmax (LIG) and all MMTS sites, differenced them, and compared them from
    1880 to 1985. I expect a FLAT TREND in this difference. Why? Because MMTS didn’t exist prior to
    1985. An “MMTS” site in 1900 is actually a LIG site that will transition to MMTS in 1985.

    So, let’s see:

    What you see here is that there is no trend between “MMTS” and LIG prior to 1985
    because “MMTS” is in fact LIG. That is, prior to 1985 MMTS-designated stations were
    equipped with LIG. In 1985 they switched over.

    The rest of the story is in the next slide.

  187. steven mosher
    Posted Oct 30, 2007 at 12:58 PM | Permalink

    In 1979 NOAA changed the protocol for painting CRS shelters (with LIG inside).
    They switched to latex paint, which passes IR more readily than the titanium-oxide-based
    whitewash used prior to 1979. In 1985 a number of CRS(LIG) were
    switched over to the beehive MMTS.

    If MMTS and LIG are homogeneous, we would expect that the flat trend we saw in
    1880-1985 should remain.

    However, if the painting changes warm the CRS(LIG) relative to MMTS, we would see the LIG warm
    relative to MMTS.

    This hypothesized relative warming would be modulated (diminished) by changes in the MMTS:
    A) sooting
    B) siting (locating sensors by buildings)

    So, 1985-2005 LIG-MMTS. What changed: LIG was painted with “transparent” paint.
    Some MMTS have moved closer to buildings.

    Next step, which I haven’t done: CRN5 MMTS.

  188. steven mosher
    Posted Oct 30, 2007 at 1:04 PM | Permalink

    Oops, wrong damn chart.

    LIG-MMTS 1985-2005

    LIG Warms relative to the new MMTS.

  189. Joe Bowles
    Posted Oct 30, 2007 at 2:00 PM | Permalink

    RE: 185-188

    Stephen:

    Great work! The upward bias from the changeover in equipment comes through loud and clear. The magnitude seems to be consistent with the articles on the MMTS problems and their systematic bias.

    I still think we need to start reporting the standard errors and probabilities. Without those we don’t know whether what we are seeing is real or not, particularly given the low R^2s.

    Any thoughts as to why the bias since 1985 isn’t in one direction? I would have thought that if the MMTS bias were systematic, it would be consistently positive. The actual data looks something like a systematic oscillation.

  190. Joe Bowles
    Posted Oct 30, 2007 at 2:03 PM | Permalink

    Re: 189
    My apologies for writing Stephen rather than Steven. My magic fingers sometimes slip out of contact with my brain.

  191. Kenneth Fritsch
    Posted Oct 30, 2007 at 2:12 PM | Permalink

    Re: #185

    Steven Mosher, I have a big problem with using USHCN temperature data prior to 1920 due to the increasing frequency of missing data and, I suspect, wrong data as one goes back in time. If I look at your chart of CRN5 minus all-the-rest after 1920 (I am not even confident of some of these data after 1920 into the 1940s) I see somewhat the same trends as I see when comparing CRN123 and CRN45.

    While regime changes, i.e. observing more than a single slope in these graphics, can easily be a figment of the imagination of an observer seeing what one wants to see, I think it might be instructive to look for these changes if one assumes that the differences in quality levels attributed in the recent Watts audit happened some period back in time, and that enough of the changes occurred in a relatively short time span as to be reasonably easy to detect. I see definite plateaus in the trend differences that lead me to suggest that not many of the changes occurred in the times associated with the plateaus.

    Re: #184

    Joe Bowles, in reply to:

    Do they tend to err in the same direction? I think we have intuitively assumed that the directionality of the bias is in the overstatement of warming, but I am not sure whether we have actually checked to verify this is the case other than in the aggregate.

    I believe that if we had warming and cooling errors in both directions for CRN45 relative to CRN123, we should be able to detect it with an F test of the variances of the two groups; at the very least, we should see a larger variance in CRN45 than in CRN123. I wanted to do this test previously and will now do it soon.
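
    For the record, the test I have in mind is along these lines in Python/scipy (a sketch with my own names; the inputs are the per-station trends for each group):

        import numpy as np
        from scipy import stats

        def f_test_variances(trends_crn123, trends_crn45):
            """Two-sided F-test for unequal variances of station trends.
            If CRN45 errors went both ways, its variance should be larger."""
            x = np.asarray(trends_crn123, float)
            y = np.asarray(trends_crn45, float)
            f = np.var(y, ddof=1) / np.var(x, ddof=1)   # CRN45 over CRN123
            dfn, dfd = y.size - 1, x.size - 1
            p = 2 * min(stats.f.sf(f, dfn, dfd), stats.f.cdf(f, dfn, dfd))
            return f, p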

    Do these stations tend to differ significantly in their bias? Since the Hansen calculations seem to include stations within 1200 km, the difference in some key stations may be contaminating the process disproportionately, especially given their apparent dispersion.

    I am not completely certain what your point is here, but if you are alluding to the outlier, homogenization and missing-data averaging processes that are used, it is the USHCN data and adjustments that we are using and not the GISS data and adjustments. Further, I have failed to see where the effect we are looking for here is sufficiently large to be adjusted as an outlier or, for that matter, to affect the averaging algorithm used to fill in missing data. The USHCN averaging process uses nearby stations that have a good correlation with the station in question. I also believe that the trends we have seen using the USHCN Urban data series (the one I have mostly used and the most adjusted of all the progressively adjusted USHCN temperature series) and those we have seen using TOBS adjusted data are, while not exactly the same, in general close to the same.

    I could calculate the relative frequency of outlier data thrown out by the USHCN adjustment processes for the CRN123 and CRN45 groups.

  192. SteveSadlov
    Posted Oct 30, 2007 at 2:31 PM | Permalink

    RE: #186 – imagine if you “undid the earthquakes” and slid everything prior to 1917 back up the fault scarp, thereby eliminating the scarp.

  193. steven mosher
    Posted Oct 30, 2007 at 2:44 PM | Permalink

    RE 189.

    Here was my thinking. I have a series of data from 1880-2005.
    If I am going to focus on a period I want a solid reason that is NOT simply
    “I see a trend.”

    So, for LIG and MMTS, 1985 is an objective cutoff. From 1985 SOME LIG transitioned to MMTS.

    Now, why is the bias not monotonically increasing?

    Speculation:

    Some MMTS changeovers involved two changes: an instrument switch (to MMTS) and a sensor relocation
    (closer to buildings).

    The next stage would be to break MMTS down into CRN categories and see what is there.
    Also, as MMTS soots up, one could hypothesize a warming as the result.

    When Anthony finishes his paint experiment we will see if the side-by-side comparison
    confirms the long-range record.

    I only calculate R^2 (Excel style) because we don’t have the data we need in OpenTemp.

    I think JohnV is adding this

  194. steven mosher
    Posted Oct 30, 2007 at 2:58 PM | Permalink

    RE 191. Kenneth.

    Here you go: 1920-2005. CRN5 minus the others (CRN1234 and unrated sites).

  195. Joe Bowles
    Posted Oct 30, 2007 at 3:50 PM | Permalink

    Kenneth, Re: 191

    I understood that the data isn’t GISS. I guess I didn’t express myself clearly. I was thinking that, due to Hansen’s method, the CRN5 sites taken individually into his methodology may be adding a bias to the GISS numbers that is different from the combined effect of CRN5 as a whole. I didn’t mean to imply that it was similarly affecting the USHCN data, since we are using the sites directly. I was just thinking that we might get somewhat closer to reconciling the two.

    Joe

  196. Joe Bowles
    Posted Oct 30, 2007 at 4:00 PM | Permalink

    Steven: Re: 193 and 194

    I was thinking along the same lines on why the bias isn’t monotonic, but the oscillation effect still fascinates me. One additional thought I had was that the bias may relate to the temperature range–so it isn’t constant but rather variable.

    I was amazed at the way the R^2 jumped when just CRN5 was subtracted from the entire series. This might suggest that the CRN4 sites aren’t all that bad.

    Thanks for your thoughts,

    Joe

  197. Joe Bowles
    Posted Oct 30, 2007 at 4:04 PM | Permalink

    Steven: Re:193

    Is it my imagination or is the volatility suddenly increasing about 1971?

  198. Joe Bowles
    Posted Oct 30, 2007 at 4:05 PM | Permalink

    Re 197

    Magic fingers got me again. I meant to reference the graph in 194.

  199. steven mosher
    Posted Oct 30, 2007 at 4:10 PM | Permalink

    RE 195.

    My position on GISS is somewhat radical. I compared GISS to OpenTemp ALL. The charts
    are closer than two dogs in heat. So, I use OpenTemp because it is OPEN. When some
    poor soul gets GISSTemp up and running I’ll cross-check. But for the US, OpenTemp is
    the tool of choice.

    We could do these same studies with GISS, once it’s running.

  200. steven mosher
    Posted Oct 30, 2007 at 4:15 PM | Permalink

    RE 196.

    Yes. One thing I’ve suggested is that bad sites (CRN5) may have a bias that exhibits itself
    like shot noise. Given that only 15% of the sites are class 5, the overall IMPACT on trend may
    be small, but the added noise may be large. Anecdotally, I get the biggest R^2 when I isolate
    CRN5. I think Kenneth has done some ANOVA. Kenneth, am I misremembering?

  201. steven mosher
    Posted Oct 30, 2007 at 4:22 PM | Permalink

    RE 198.

    Yes. Now, the thing to remember is that the CRN5 rating is the END STATE of the site. We don’t
    know when it became a CRN5.

    Without an objective reason to look at 1971-2005, I hesitate. I could cherry-pick the hell
    out of this stuff. If you look long enough you WILL find something worth talking about.

    For example, I could say, let’s look at the last 10 years, 1995 to 2005. But I’d want a REASON
    (outside of the change in trend) to isolate this regime.

    This means to me that some effort to encode the history of sites must be undertaken. Obviously
    a site that is asphalt-infected today was NOT asphalt-infected prior to, say, 1940. An air-conditioner-infected
    site had NO AC UNITS in 1934.

    This last point underscores the importance of surface stations. 50 years from now, people can look
    back and see what the site looked like in 2007.

  202. Kenneth Fritsch
    Posted Oct 31, 2007 at 9:10 AM | Permalink

    Re: #200

    I think Kenneth has done some ANOVA. Kenneth, am I misremembering?

    I made an off-the-top-of-my-head suggestion to that effect a while back but have not attempted to do any ANOVA.

  203. Clayton B.
    Posted Nov 1, 2007 at 8:28 PM | Permalink

    I looked at this about a month ago (with the first OpenTemp) when Steven Mosher was discussing “Shot Noise”. The following chart was created with OpenTempR1.

    There’s something about CRN4 that I can’t put my finger on. It appears that CRN4 stations cause volatility. It may be just because CRN4 has over half of the surveyed stations.

  204. steven mosher
    Posted Nov 1, 2007 at 9:26 PM | Permalink

    RE 203. Nice work, Clayton.

    Have any of you moved up to Anthony’s latest Excel sheet?

  205. Clayton B.
    Posted Nov 2, 2007 at 4:38 AM | Permalink

    #204 StevenMosher,

    Chart in #203 has 414 stations that are CRN12345. This matches the current surfacestations spreadsheet. The MySQL DB is also updated with this information.

  206. Kenneth Fritsch
    Posted Nov 2, 2007 at 1:17 PM | Permalink

    I have summarized my data analysis of the temperature trends for the CRN-rated stations using the USHCN Urban data set, reporting trends in degrees C per century. I limited my analysis to comparing CRN123 against CRN45 (in order to provide reasonable sample sizes) and to the time period from 1920-2005 (due to the amount of increasing missing data as one goes back in time).

    In the first part of the analysis I looked at the entire period 1920-2005 and then the periods 1920-1949, 1950-1980 and 1981-2005. The last three periods were mined from the data by looking for periods of little or no trend in CRN45-CRN123 and a period of a significantly large trend. Without corroborating evidence, my choice of these three partitioned time periods can only be justified by a preliminary judgment that the recently established quality ratings had to have arisen from changes substantially back in time for the most part, but not so far back in time that the features to which poor quality was attributed would not have been expected to commonly exist. Without corroborating evidence for such prior judgments one would be limited to looking for trends over the entire time period.

    The following trends are for the yearly temperature anomaly differences of CRN45-CRN123:

    1920-2005: Trend = 0.31; R^2 = 0.38; Trend is statistically very significant.

    1920-1949: Trend = -0.12; R^2 = 0.01; Trend is statistically not significant.

    1950-1980: Trend = 0.97; R^2 = 0.48; Trend is statistically very significant.

    1981-2005: Trend = -0.04; R^2 = 0.00; Trend is statistically not significant.

    For the second part of the analysis I looked at the distributions of trends for the CRN123 and CRN45 stations over the same time periods as used above. One reason for doing this analysis was to determine whether one could detect a variance widening for the CRN45 stations that might indicate that the quality problems could cause cooling as well as warming errors. A small percentage of stations were not used in this analysis due to missing data.

    For 1920-2005:

    CRN123: Ave of Trends = 0.38; St Dev of Trends = 0.76; Skew of Distribution = 0.00; Kurtosis of Distribution = 0.16; Chi Sq Goodness of Fit for Normal Distribution — p Value = 0.26 (cannot reject the hypothesis that the distribution is normal, but the p value indicates a poorer fit).

    CRN45: Ave of Trends = 0.66; St Dev of Trends = 0.94; Skew of Distribution = -0.01; Kurtosis of Distribution = -0.54; Chi Sq Goodness of Fit for Normal Distribution — p Value = 0.75 (p value indicates a very good fit).

    The F test for determining a statistical difference between the variances for CRN123 and CRN45 gave a p value = 0.01 indicating that the difference was statistically significant.

    For 1920-1949:

    CRN123: Ave of Trends = 1.00; St Dev of Trends = 1.66.

    CRN45: Ave of Trends = 1.10; St Dev of Trends = 1.89.

    The F test for determining a statistical difference between the variances for CRN123 and CRN45 gave a p value = 0.12 indicating that the difference was statistically not significant.

    For 1950-1980:

    CRN123: Ave of Trends = -1.26; St Dev of Trends = 1.70; Skew of Distribution = 0.60; Kurtosis of Distribution = -0.10; Chi Sq Goodness of Fit for Normal Distribution — p Value = 0.15 (cannot reject the hypothesis that the distribution is normal, but the p value indicates a poor fit).

    CRN45: Ave of Trends = -0.68; St Dev of Trends = 1.90; Skew of Distribution = 0.25; Kurtosis of Distribution = 0.47; Chi Sq Goodness of Fit for Normal Distribution — p Value = 0.89 (p value indicates an excellent fit).

    The F test for determining a statistical difference between the variances for CRN123 and CRN45 gave a p value = 0.17 indicating that the difference was statistically not significant.

    For 1981-2005:

    CRN123: Ave of Trends = 2.63; St Dev of Trends = 2.09.

    CRN45: Ave of Trends = 2.64; St Dev of Trends = 1.98.

    The F test for determining a statistical difference between the variances for CRN123 and CRN45 gave a p value = 0.46 indicating that the difference was statistically not significant.

    Overall, in summary, I can pick 3 regimes, with 2 showing no trend difference between CRN123 and CRN45 and 1 showing a very significant trend difference. Whether these regimes can be related to actual station changes or are a figment of a snooper’s imagination would seem to remain unanswered without more detailed historical evaluations of the stations’ quality changes. On the other hand, one can also see a statistically significant trend difference over the entire time period between CRN123 and CRN45 stations.

    I am not at all sure what the analysis of the station trends for the 2 groups, CRN123 and CRN45, is telling us. Some of what we see may be the result of smaller sample sizes (CRN123 being less than half the size of CRN45, and the station trends being over shorter periods of time), but the larger variance for CRN45 over that for CRN123 shows statistical significance for the entire time period of 1920-2005. The partitioned periods do not show statistical significance. The larger variances for the CRN45 group may indicate that the poorer quality of these stations causes both warming and cooling problems, although I would suspect there could be other explanations for the differences.

    I was somewhat surprised by how much better the CRN45 group distribution fit a normal distribution than did the CRN123 group. My thinking about it at this point does not readily come up with a plausible explanation. If we assume that the CRN123 stations are producing fewer temperature errors, then we would be led to the conclusion that the “real” temperatures have a less normal distribution and that the errors that the poorer quality CRN45 stations introduce have a randomizing effect that produces a more normal distribution. I am not sure that explanation makes a lot of sense.

  207. Clayton B.
    Posted Nov 2, 2007 at 6:43 PM | Permalink

    Richard,

    Have you looked at CRN4 separately? I did – see #203. CRN4 causes a lot of volatility. Probably just because of the larger sample size. We should perhaps select equal sample sizes at random for analyses.

  208. Posted Nov 3, 2007 at 7:11 AM | Permalink

    Re #206 Kenneth, with regard to your final paragraph

    I was somewhat surprised by how much better the CRN45 group distribution fit a normal distribution than did the CRN123 group.

    what are the groups you’re examining (max and min temps, averages, etc.)? In my view the factors which make a site low-quality (concrete, vegetation, buildings, etc.) tend to dampen the range of maxes and the range of mins, which may make the distribution more normal versus a proper site.

  209. Kenneth Fritsch
    Posted Nov 3, 2007 at 8:47 AM | Permalink

    Re: #208

    David, I used yearly means as defined by the USHCN Urban data set. I suppose if the minimum and/or maximum temperatures had less variation it would reduce the variation of the mean temperatures. CRN45 stations had more variation than the CRN123 stations in yearly mean temperatures over the period 1920-2005. The CRN45 station distribution (with more than double the data points, i.e. stations) was very obviously more normal (under a chi square test for goodness of fit) than that of the CRN123 stations for the time periods 1920-2005 and 1950-1980.

    If the CRN123 distribution were representing more closely the natural variations in temperatures, and that distribution did not have a particularly good fit to a normal one, and furthermore if the CRN45 stations were introducing a more or less random error in temperatures that was significant in size compared to the natural variation in temperature, I could then see this situation potentially producing the observed distribution results. The errors in CRN45 relative to CRN123 do not appear, however, to be randomly distributed around a mean but rather biased towards warming.

    I think in order to understand better the resulting distributions I would have to do some simulations — simulations that I could readily do if I knew R better.

  210. Clayton B.
    Posted Nov 4, 2007 at 3:15 PM | Permalink

    I redid #203 using a random selection of the same number of stations for each series. For example:

    SELECT i.NCDCID, s.Lat, s.Lon FROM surfacestations s JOIN StationID i ON s.GHCNID=i.GHCNID WHERE s.CRNRating > 0 AND s.CRNRating

    In the previous chart (post #203) the lines were much smoother when CRN4 was subtracted out. This no longer seems to be the case:

  211. steven mosher
    Posted Nov 4, 2007 at 3:19 PM | Permalink

    RE 210.

    I dub thee king of spaghetti graphs.

  212. Kenneth Fritsch
    Posted Nov 5, 2007 at 4:07 PM | Permalink

    I started looking at USHCN Urban maximum and minimum temperatures to determine whether the CRN quality effects were biased toward one or the other. While preliminary, the effect seems to be nearly all realized in differences in minimum temperatures. However, when I used my own calculated mean temperatures (from the maximum and minimum averages) I found that the calculated temperatures and the USHCN Urban Mean temperatures do not agree.

    I went back and compared my own calculated mean temperature from the USHCN maximum and minimum with the posted calculated mean at the USHCN site. While the differences in temperatures were not zero, they were in reasonably good agreement. Although the differences between the USHCN Urban mean temperatures and the USHCN Urban calculated mean temperatures do not appear to change any trends, the differences are sufficiently large to make me ask the question here as to why there should be a difference.

    At the USHCN data site, a readme file notes that the USHCN data base contains urban adjusted monthly maximum (urban_max_fahr.Z), minimum (urban_min_fahr.Z), and mean (urban_mean_fahr.Z) temperatures for the 1221 USHCN stations. It then notes, without explaining any differences, that there is also an urban data set (urban_calc_mean_fahr.Z) that is the mean monthly temperature calculated from the adjusted urban_max_fahr.Z and the adjusted urban_min_fahr.Z data sets.

    Why should there be a difference and, for that matter, two different mean temperature data sets? In a comparison study, when analyzing minimum and maximum temperatures, should the calc_mean or the plain mean data set be used? Can anyone here help me out with this issue before I proceed further with my analysis?

  213. Sam Urbinto
    Posted Nov 5, 2007 at 6:19 PM | Permalink

    You’re going about it wrong, Ken. Here’s the steps:

    1. Come to your conclusion.
    2. Make stuff up that supports it.
    3. Get your buddies to review it and rubber stamp it.
    4. Publish.
    5. Refuse to provide your data, software or methods.

  214. D. Patterson
    Posted Nov 5, 2007 at 10:05 PM | Permalink

    Re: #213

    Don’t forget:

    Step 6. Declare a false victory by unilaterally declaring a favorable consensus of opinion.

  215. Posted Nov 6, 2007 at 3:19 PM | Permalink

    #26 Jean S:

    You do not simply know, you’ve been only considering (with simplistic methods) the best data set. You haven’t done any real statistical testing even for US48, so it is even premature to make any conclusions concerning the GISSTEMP US48 results

    I have asked for anybody with better knowledge of statistics to look into the correlation between GISTEMP and my (and others) results using the best stations. There have not been any takers, but I would appreciate your input.

    My position is this:
    The best temperature record comes from the best stations. The best stations are those with CRN site quality ratings of 1 and 2 in rural, non-airport locations. There are only 17 such stations in the USA48, but GISTEMP agrees with these stations very well. If stations with CRN site quality ratings of 3 are included, there is much better geographical coverage and GISTEMP is still in close agreement.

    Please, check these links then head over to the USHCN #3 thread and help out with the statistics:
    http://www.climateaudit.org/?p=2124#comment-147568
    http://www.climateaudit.org/?p=2124#comment-147569

  216. Posted Nov 6, 2007 at 3:57 PM | Permalink

    #213 Sam Urbinto:

    1. Come to your conclusion.
    2. Make stuff up that supports it.

    That sounds like an accusation. What’s that word we’re not supposed to use here? Five letters, starts with “fr”, ends with “ud”. Nah, it couldn’t be that.

  217. Sam Urbinto
    Posted Nov 6, 2007 at 4:18 PM | Permalink

    Let me rephrase that. (That was more so satire)

    1. Come to a conclusion in line with your worldview. (bristlecone trees tell us past climate, open sea ice is due to warming which is due to CO2, whatever)
    2. Tune a model so it comes to that conclusion; it probably isn’t repeatable (either by design, or by the methods you’re using, or due to inertia), and perhaps things are obfuscated to the point where the results are incomprehensible and the method is unknown. This is perhaps not done on purpose or pre-planned, but is what ends up happening.
    I think this is more so out of the fact that up until now, nothing needed to be verified, and it’s just being done the wrong way. I dunno.
    3. Write up the results and have your peers (who probably don’t know any more than you do) review it rather than using outside auditors who are experts in the various fields used to create the results.
    4. Publish.
    5. Refuse to provide your data, software or methods.
    6. Declare victory by unilaterally declaring a favorable consensus of opinion even when independent experts find issues with your work.
    7. Keep doing the same thing over and over.

    I’m not accusing anyone of anything. I’ve said it before: if you don’t want people to think you may be hiding something, don’t act like you could be. Or put another way, if strange things are happening and the motives for them are unknown, some will question your motives. Steps 5, 6 and 7 make people question your motives. It could be as simple as “that’s just how we’ve done it” or “you are not qualified to question how I do things, peon”, or negligence, or ignorance: not knowing what you don’t know. Or it could be more. I don’t know, and I don’t care.

  218. Jean S
    Posted Nov 6, 2007 at 5:33 PM | Permalink

    #215: OK, I’ll bite.

    I suppose the goal of the analysis is to determine how reliable the GISSTEMP analysis is. IMO, the key problem there is how GISS handles the “urbanization” issue. For checking this, I think USHCN is not the best data set as it is not completely independent of GISS. I would rather use some independently homogenized data (e.g. NordKlim) and compare the results using that data set with the ones obtained from the GISS data set (using the GISS algorithm). But I suppose you do not want to do that, so let’s see what could be done (IMO better) with the USHCN (US48) data. As a general reference, I suggest that you first read Tuomenvirta’s thesis (IMO a decent exposition of the topic). Here are some specific suggestions:

    1) Try to avoid simply averaging absolute temperatures, and try to use some robust methods. I would use some type of robust regression of first differences, i.e., Peterson’s method with a robust extension. In the simplest form, you would just take the median of the (monthly) differences over the selected stations (see the sketch after this list).

    2) Define your metric carefully when comparing the series (only climate scientists’ eyes seem to be better than an objective measure). This is important as the original series are already highly correlated. In general, closeness is usually measured from the residuals. A good match would have white (possibly Gaussian) residuals. So remember to check for correlation.

    3) A wild idea: try “inverting” the problem. Calculate (let’s say 5) series that best reproduce the GISS series (or a part of it). Investigate the quality of those stations.

    4) Whatever you do, do it also separately for min/max temperatures and for different months/seasons (e.g., do the analysis first for min and max separately and then combine for the mean temperature). Compare.
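
    Regarding 1), a minimal sketch of the simplest form (median of first differences, then a cumulative sum), assuming a stations-by-years array with NaN for missing values:

        import numpy as np

        def first_difference_series(temps):
            """Robust regional series, Peterson-style: temps is a 2-D array
            (stations x years) of absolute temperatures with NaN for gaps."""
            diffs = np.diff(temps, axis=1)       # per-station year-to-year change
            med = np.nanmedian(diffs, axis=0)    # median across stations is robust
            # Anomaly relative to the first year; a year with no data at all
            # yields NaN, which then propagates (deliberately) through cumsum
            return np.concatenate([[0.0], np.cumsum(med)])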

  219. steven mosher
    Posted Nov 6, 2007 at 5:59 PM | Permalink

    RE 215. Jean S, JohnV and I have both been imploring people with better stats backgrounds
    than us to lend a hand. Stuffing numbers into a stats program is something I can do, but
    I refuse to do it here. I refuse because I have looked at too much of this stuff and mined
    it and cherry-picked it, and you and I both know the end of that tail… er, tale.

  220. Earle Williams
    Posted Nov 6, 2007 at 6:51 PM | Permalink

    Re #216

    John V,

    Please, no discussions of cigars here!

  221. steven mosher
    Posted Nov 6, 2007 at 7:48 PM | Permalink

    RE 220.

    That allusion will miss all but the most perceptive, or those who know that FR is only followed by vowels.

  222. D. Patterson
    Posted Nov 6, 2007 at 8:36 PM | Permalink

    Re: #186, #187

    steven mosher says:

    October 30th, 2007 at 12:51 pm
    Between 1880 and 1985 LIG was the dominant sensor (called minmax in Anthony’s spreadsheet).

    So from 1880 to 1985 we should expect no difference in TREND between MMTS and LIG.

    WHY? Well, MMTS was introduced in 1985, and before that the stations were CRS shelters housing
    LIG sensors.

    Also

    October 30th, 2007 at 12:58 pm

    So, 1985-2005 LIG-MMTS. What changed: LIG was painted with “transparent” paint.
    Some MMTS have moved closer to buildings

    Steven,

    Please note that the statement, “MMTS was introduced in 1985, and before that the stations were CRS shelters housing LIG sensors”, is false and is misinformation, because Liquid-In-Glass (LIG) sensors were not the only sensors in widespread use by the various observational networks.

    In fact, thermographs were another type of temperature sensor in use by the various observational networks for decades prior to the introduction of the MMTS and ASOS instrumentation. While I have not undertaken a census of the populations of the specific types of temperature sensors in use by the various observation networks in different periods of time, and I would not attempt to characterize which sensor was “dominant,” I do know from personal experience that thermographs were the preferred instrument in some organizations and were used by a large number of major and minor weather stations across a period of decades. Thermographs were in use by the National Weather Bureau, the National Weather Service, the Cooperative Observers network, the U.S. Army Signal Corps, the Air Weather Service of the U.S. Army and the U.S. Air Force, the U.S. Navy, the Department of Commerce, the U.S. Forestry Service, the U.S. Coast Guard, and others. If anyone were to make the assumption that LIG sensors were the only sensor, or virtually the only sensor, contributing temperature data to the meteorological datasets between 1880 and the advent of the ASOS and MMTS instrumentation, they would be in gross error.

    Many types of thermographs have been in use over the decades. The bi-metallic sensor may be the principal type of thermograph used, but anyone interested in the subject may want to avoid assumptions and determine the extent to which the other types may have been used. Other types of thermographs to investigate would include the Mercury-In-Steel and/or Bourdon thermograph/thermometer.

    There are a number of potential sources of errors when using thermographs to make observations of temperatures. They range from simple observer error in reading the graphs and mounting the graph sheets onto the instrument to zeroing adjustment errors, maintenance problems, and telemetry errors. As in the later ASOS and MMTS instruments in which calibration, sensor degradation, and maintenance problems resulted in undetected errors in the data records, small errors may have gone undetected when electrical problems in the telemetry cables introduced errors into the recorded temperatures.

    In any case, evaluations of the types of temperature sensors used to produce the temperature datasets must include an undetermined population of thermographs and their associated potential sources and ranges of error.

    Sorry if this information rains on any parades, but I do hope it helps everyone find whatever the truth may be regarding the accuracy of the datasets.

  223. Kenneth Fritsch
    Posted Nov 7, 2007 at 2:07 PM | Permalink

    I extended my analysis of CRN123 and CRN45 stations to Calculated Mean, Maximum and Minimum average yearly temperatures from the USHCN Urban data sets. Previously I had used the Mean average yearly temperatures from the USHCN Urban data set and, as noted, on comparing data sets I found that the Calculated Mean (calculated by averaging the Maximum and Minimum temperatures) had differences with the Mean. To be consistent in this analysis I used the Calculated Mean data set in the comparisons with the Maximum and Minimum temperature data sets.

    I have not as yet found an explanation for having Calculated Mean and Mean temperature data sets and why they would not yield the same temperatures. Using either produces the same conclusions for the purposes of my CRN123 and CRN45 analysis, but I find it perplexing that the USHCN web site does not explain the differences.

    For this analysis I did a comprehensive statistical analysis of the USHCN station trends for the time periods 1920-2005, 1950-1980, 1920-1949 and 1981-2005. I looked at the shape of the distributions and attempted to determine how well the distributions fit a normal distribution using a chi square goodness of fit test. I compared the trend means for CRN123 versus CRN45 for significant differences using t-tests and compared trend standard deviations using F-tests. The results are summarized in the 4 tables below. Trends are all listed in degrees F per year. A few percent of the CRN stations were excluded from each analysis due to incomplete data points.

    As noted in previous posts using the Urban Mean data set, there are significant differences in the trends between CRN123 and CRN45 stations over the time period 1920-2005 with most of the difference confined to the period from 1950 to 1980. The results also show that the minimum temperature difference contributes more than that from the maximum temperature.

    The analysis identified significant departures in some distributions from a normal distribution. This could create some uncertainty in determining significant differences between the CRN123 and CRN45 groups. The differences in distribution shapes could perhaps, under more detailed analysis, yield some useful information about CRN123 and CRN45 differences. As noted before, the partitioning of the time periods was done to show where most of the trend difference for 1920-2005 occurred and cannot without independent evidence lead to conclusions on temporal CRN quality changes.

  224. Earle Williams
    Posted Nov 7, 2007 at 3:49 PM | Permalink

    Kenneth Fritsch and others,

    The amount of work you all have put into generating and analysing this USHCN data is impressive. It seems, however, to be focusing way too much on interpreting what is an emasculated metric with regard to the variability and uncertainty in the record.

    Ever since John V. asked about spatial interpolation algorithms I’ve been contemplating how to make use of all the information contained in the historical temperature record. By averaging monthly averages, the variability and gappiness of the data are getting tossed out. I’m thinking that kriging (or some other method) can be used to create a smooth surface for the entire lower 48 given the available temperature information for a given day. Missing readings just get ignored in the final calculation of the temperature distribution, but the resulting uncertainty estimate likely goes up for the area where the data is missing.

    I’m no further than the idea stage of how to process all the data and generate a measure of the uncertainty. One definite problem with going down this path is that it generates a lot of data and requires more numerical processing. I’d welcome any thoughts you gentlemen have on how to carry this uncertainty information through into an aggregate metric that represents the Lower 48 on an annual basis. The methodology and code are very mature, so it becomes more of a data and process management issue than a software development issue.
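
    To make the idea concrete, here is a minimal ordinary-kriging sketch in Python/numpy for a single target point. The exponential covariance and its sill/range parameters are placeholders that would have to be fitted to a variogram of the real data; the point is that the kriging variance is exactly the uncertainty measure I am after, and it grows wherever nearby stations are missing:

        import numpy as np

        def ordinary_kriging(xy, z, xy0, sill=1.0, rng=500.0):
            """Predict at xy0 from observations (xy, z) using ordinary
            kriging with covariance C(h) = sill * exp(-h / rng).
            Returns the prediction and the kriging variance."""
            xy = np.asarray(xy, float)
            z = np.asarray(z, float)
            n = len(z)

            def cov(h):
                return sill * np.exp(-h / rng)

            # Kriging system: station-to-station covariances, plus the
            # unbiasedness constraint (weights sum to 1)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            K = np.empty((n + 1, n + 1))
            K[:n, :n] = cov(d)
            K[n, :n] = K[:n, n] = 1.0
            K[n, n] = 0.0

            k = np.empty(n + 1)
            k[:n] = cov(np.linalg.norm(xy - np.asarray(xy0, float), axis=1))
            k[n] = 1.0

            w = np.linalg.solve(K, k)            # weights plus Lagrange multiplier
            return w[:n] @ z, cov(0.0) - w @ k   # prediction, kriging variance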

  225. D. Patterson
    Posted Nov 7, 2007 at 4:45 PM | Permalink

    Re: 223

    I have not as yet found an explanation for having Calculated Mean and Mean temperature data sets and why they would not yield the same temperatures.

    Have you checked to see if there is a DQF flag set to indicate a calculated mean value interpolated from three surrounding stations?

  226. Kenneth Fritsch
    Posted Nov 7, 2007 at 4:57 PM | Permalink

    Re: #224

    The USHCN data sets with missing data and outlier data do get filled in partially with an algorithm that uses close-by stations with temperatures that correlate well with the station in question — or so says the USHCN web site. Even with these processes there remain data points that do not get filled in, and these become increasingly frequent as one goes back in time. That is why I confined my analyses to 1920 and later and have some misgivings about the amount of missing data points from the 1920s to the 1940s.

    What I think we are missing, in general, in our analyses is a more detailed understanding of how the USHCN (or GISS) processes are actually performed. I prefer to use the most corrected data sets until someone can demonstrate to me the errors in the adjustments used. We evidently do not collectively possess sufficient knowledge here to answer my query about why USHCN has both a Mean temperature data set and a Calculated Mean temperature data set and why these sets differ.

  227. D. Patterson
    Posted Nov 7, 2007 at 6:02 PM | Permalink

    Re: #226

    Kenneth, I’m trying to understand the exact dataset/s you are using to represent what you are describing as the USHCN urban dataset, so that I may be able to attempt an answer to your questions about the meaning of the calculated mean and mean temperatures. Looking up the thread, it is not immediately obvious to me whether you are taking your data from the DS-6900 USHCN Version 1 urban subset, USHCN Version 2, a combo, and/or the other 3XXX datasets. Can you please enlighten us with a scorecard (smile) which can briefly clarify exactly which datasets and subsets are in play here?

  228. Bob Koss
    Posted Nov 7, 2007 at 6:10 PM | Permalink

    Kenneth Fritsch,

    The calculated mean is likely the daily temperature derived from averaging all observations for the day. I have GSOD temperature files that are explicitly described as done that way. They also include Tmin and Tmax and the mean is usually less than (Tmin+Tmax)/2.

  229. D. Patterson
    Posted Nov 7, 2007 at 6:23 PM | Permalink

    Re: #228

    Yes, that is one of the methods used for some datasets.

    Another method in use for some datasets is to sum the daily (Max+Min)/2 values over the month and then divide by the number of days in the month.

    DS-6900 USHCN Version 2 sums values obtained from three other datasets to report a calculated mean value.

    So, the method used is dependent upon the dataset, version, and data subset/s that are being used.

  230. BarryW
    Posted Nov 7, 2007 at 8:19 PM | Permalink

    I’ve had a nagging feeling that what John V and others have done hasn’t shown an error because everyone is looking in the wrong place. Generally, what I have read is that most of the more knowledgeable people on this site accept the TOBS adjustments that are being performed. After reading an entry on Gust of Hot Air, I’m not so sure; I think that correction is where the body might be buried. According to what I read, minimum temperatures don’t occur in the middle of the night, but just after local dawn. If there is a change in the amount of daylight then this is going to affect the minimum temperature. If the correction assumes a constant offset based on time of observation, and there is a change in the amount of light (changes in clouds, aerosols, haze) over time, then the correction would be biased based on the time the original analysis was done. Since the TOBS adjustment seems to depress older measurements (if I remember correctly), could this account for some of the trend?

  231. D. Patterson
    Posted Nov 7, 2007 at 8:30 PM | Permalink

    Re: #230

    You may find this publication on the NOAA Website to be of interest:

    II.7-CALB-MAT CALIBRATION SYSTEM MEAN AREAL TEMPERATURE (MAT)
    COMPUTATIONAL PROCEDURE

  232. steven mosher
    Posted Nov 7, 2007 at 9:01 PM | Permalink

    RE 228.

    Another approach is to record the Tmax (rounding to the nearest degree), record the Tmin,
    rounding as well, and THEN USHCN will sum these two, divide by 2, and round up.

    It’s a mixed bag. JerryB is a good source as is D. Patterson.
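
    A made-up illustration of how that round-then-average convention can drift from the true mean (the readings are invented, and “round up” is taken literally as round-half-up):

        import math

        tmax, tmin = 71.4, 48.3                  # deg F, invented observations
        true_mean = (tmax + tmin) / 2            # 59.85
        # Round each reading to a whole degree, sum, halve, round half up
        recorded = math.floor((round(tmax) + round(tmin)) / 2 + 0.5)   # 60
        print(true_mean, recorded)               # the convention adds 0.15 here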

  233. Kenneth Fritsch
    Posted Nov 8, 2007 at 2:03 PM | Permalink

    Re: #227

    I have previously documented my sources and will now do so again. The mean, maximum and minimum Urban temperature data sets that I have previously referenced here are from the link below to ncdc/noaa/ushcn and are in the files urban_mean_fahr.Z, urban_max_fahr.Z and urban_min_fahr.Z, respectively.

    A readme (Readme.Text) file from the same link comments on the four data sets as excerpted below:

    The USHCN data base contains urban adjusted monthly maximum (urban_max_fahr.Z), minimum (urban_min_fahr.Z), and mean (urban_mean_fahr.Z) temperature data (in hundredths of degrees fahrenheit) for the 1221 USHCN stations. There is also an urban mean data set (urban_calc_mean_fahr.Z) that is the mean monthly temperature calculated from the urban adjusted maximum temperature (urban_max_fahr.Z) and the urban adjusted minimum (urban_min_fahr.Z) data sets.

    ftp://ftp.ncdc.noaa.gov/pub/data/ushcn

    The urban_calc_mean_fahr.Z data set is found at this link:

    http://cdiac.ornl.gov/epubs/ndp/ushcn/ndp019.html#tempdata

    My calculations show that, while the urban_calc_mean_fahr.Z temperature agrees well with that obtained from the urban_max_fahr.Z and urban_min_fahr.Z (by simply averaging the max and min results), the urban_calc_mean_fahr.Z and the urban_mean_fahr.Z data sets yield significant discrepancies.

  234. Kenneth Fritsch
    Posted Nov 9, 2007 at 12:38 PM | Permalink

    I will attempt here to show a measure of the differences between the USHCN Urban mean (urban_mean_fahr.Z) and the USHCN Urban calculated mean (urban_calc_mean_fahr.Z). I took differences in listed temperatures for each year and for all 1221 stations (calculated mean minus mean) in degrees F and then averaged those yearly differences for each station over various time periods covering 1920-2005. While this procedure does not show the extremes of the differences that occur by year, it gives a general feel for the differences and how they have changed over time. What is reported below is the average over all stations of each station’s average difference for the time period noted, with the corresponding standard deviation, maximum and minimum.

    1920-2005: Ave = -0.0029; Stdev = 0.187; Max = 0.866; Min = -0.709.

    1920-1949: Ave = -0.0163; Stdev = 0.330; Max = 1.335; Min = -1.305.

    1950-1980: Ave = -0.0037; Stdev = 0.215; Max = 1.089; Min = -0.949.

    1981-2005: Ave = -0.0138; Stdev = 0.0653; Max = 0.583; Min = -0.429.

    One can see that, while the average yearly station differences are close to zero over all the time periods, the variation in the differences starts relatively large and decreases going forward in time.
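
    For anyone who wants to repeat the bookkeeping, a sketch in Python/pandas (the data frame layout and column names are my own assumption):

        import pandas as pd

        def station_difference_summary(df, start, end):
            """df columns: station, year, calc_mean, mean (deg F).
            Average the yearly (calc_mean - mean) per station over
            [start, end], then summarize across stations as above."""
            sel = df[(df.year >= start) & (df.year <= end)].copy()
            sel["diff"] = sel["calc_mean"] - sel["mean"]
            per_station = sel.groupby("station")["diff"].mean()
            return per_station.agg(["mean", "std", "max", "min"])

        # e.g. station_difference_summary(ushcn, 1920, 2005)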

    By the way, I want to clearly state that my previous analyses presented on this thread show that, when the CRN quality ratings are divided between CRN123 and CRN45, significant differences appear (approximately a quarter of a degree C per century) for the time period 1920-2005, and that this larger-sample comparison differs from my view of what John V concluded from his smaller-sample analysis.

  235. steven mosher
    Posted Nov 9, 2007 at 1:14 PM | Permalink

    Change point analysis

  236. Kenneth Fritsch
    Posted Nov 9, 2007 at 3:43 PM | Permalink

    Re: #235

    Steven, I agree that change-point analysis would be applicable here. I might be capable of a simple model to do this, but a more comprehensive one might well be beyond my capabilities. If someone would simply step up to the plate and demonstrate that the CRN quality changes that are now documented for CRN123 and CRN45 stations took place mainly from 1950-1980, we would not need to bother. Meanwhile, it would help if anyone here could answer my question on the differences between the USHCN mean and calculated mean data sets, or for that matter explain why the USHCN averaging algorithm cannot fill in, or refill in (for outliers), increasingly many points as one goes back in time. I have come to very strongly suspect the early data in these time series.
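
    The “simple model” I have in mind would be something like this least-squares single change point in Python (my own minimal version; note it does no pre-whitening, so red noise alone can produce an apparently convincing change point):

        import numpy as np

        def single_change_point(x):
            """Try every split of series x, fit a constant mean on each
            side, and keep the split minimizing the total squared error."""
            x = np.asarray(x, float)
            best_t, best_sse = None, np.inf
            for t in range(2, x.size - 2):       # keep 2+ points on each side
                sse = ((x[:t] - x[:t].mean()) ** 2).sum() + \
                      ((x[t:] - x[t:].mean()) ** 2).sum()
                if sse < best_sse:
                    best_t, best_sse = t, sse
            return best_t, best_sse

        # e.g. applied to the yearly CRN45-minus-CRN123 difference series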

  237. Clayton B.
    Posted Nov 9, 2007 at 5:37 PM | Permalink

    236,

    I have a feeling that an update to surfacestations AND opentemp will be available soon…

  238. steven mosher
    Posted Nov 9, 2007 at 6:33 PM | Permalink

    RE 236. Agreed. I started looking at it and I think with JohnV’s help we could do something.

    On Anthony’s side we need TIME-SENSITIVE station history. Essentially we are talking about
    checking the SHAP adjustment.

    Now, NOAA have announced a whole new set of analytic approaches, including some new change point
    analysis work. I can’t get those papers without access… more to follow.

  239. Steve Sadlov
    Posted Nov 13, 2007 at 11:15 AM | Permalink

    Evidence of warm 1930s in the Baltic. Since it is a very shallow and brackish body of water, I’d expect it to largely reflect the lower tropospheric temperature:

    http://www.worldclimatereport.com/index.php/2007/11/08/snow-and-ice-surprises/#more-282

  240. Steve Sadlov
    Posted Nov 13, 2007 at 11:19 AM | Permalink

    RE: #230 – My concern as well. I am especially intrigued by all measurements prior to the end of the First World War in the US. My eye for fudging keeps coming back to the fault scarp (from low to high) in the record, which just happens to correspond with before and after the USA’s participation in (the latter portion of) the war.

  241. Steve Sadlov
    Posted Nov 13, 2007 at 1:33 PM | Permalink

    Bump. No commentary on NE European warm 1930s? I’m shocked! Shocked I say!

  242. Steve Sadlov
    Posted Nov 13, 2007 at 3:51 PM | Permalink

    Bump …. warm 1930s in NE Europe … HEL-LO!!! 😉

  243. Steve Sadlov
    Posted Nov 14, 2007 at 2:21 PM | Permalink

    Ping to #239, this is huge.

  244. Posted Nov 14, 2007 at 2:37 PM | Permalink

    Ok, Steve Sadlov, I’ll bite.
    Why is a warm 1930s in NE Europe so important?
    Does it differ from the instrumental record for NE Europe?
    What’s up?

  245. Steve Sadlov
    Posted Nov 14, 2007 at 4:01 PM | Permalink

    Or, can we trust Euro instrument records? Can we trust any surface instrument record that has not been thoroughly scrubbed?

  246. Posted Nov 14, 2007 at 4:12 PM | Permalink

    #245 Steve Sadlov:
    You need to elaborate about why you’re so excited about warm 1930s in NE Europe.

  247. Steve Sadlov
    Posted Nov 14, 2007 at 4:41 PM | Permalink

    There is a belief that the warm 1930s were a NoAm phenomenon and not global. That construct may not be correct. If the warm 1930s were global, it would imply a sort of “constructive interference” during the 1930s of various oceanic oscillatory mechanisms, among other things. I think that would be pretty exciting. Nature is about cycles of varying periods and harmonics, superimposed on longer-term macro phenomena such as the evolution of the universe and entropy. In order to model it, understanding the interactions, and the destructive and constructive interference, of the oscillatory mechanisms is key.

  248. Clayton B.
    Posted Nov 14, 2007 at 7:47 PM | Permalink

    239+,
    this seems a bit off-topic.

    Is there a thread where this has been a focus (to provide more background, if nothing else)?

  249. Posted Nov 15, 2007 at 7:01 AM | Permalink

    #247 SteveSadlov:
    The evidence is that the 1930s globally were not as warm as other periods. There are regional differences. You can make maps from the GISTEMP site:
    http://data.giss.nasa.gov/gistemp/maps/

    This first map shows the anomalies of the 1930s (1930 to 1940) versus the 1951-1980 reference period. You can see that it was warm in the central and SE USA and across much of the Arctic and sub-Arctic particularly around the former USSR (among other areas):

    The second map shows the anomalies of 1995 to 2005 versus 1951-1980 reference period. It’s warmer almost everywhere. The warmest areas are now the Arctic over Canada and continental Russia.

    On average the world is warmer, but there are always regional differences.

  250. Posted Nov 15, 2007 at 7:53 AM | Permalink

    # 249 John V, Steve Sadlov: Use 1930 to 1960 as the calibration range and 1900 to 2007 as the anomaly range. I used land only, and it was 0.09 C. Not much warmer for a century-plus. On the land map, the warmest spots were at the edge of the Antarctic ice shelf. It would be interesting to see the land, ocean, and combined comparisons. When I did that for land, I was impressed with how little warming it showed. Steve S, it might not be just the Baltic area alone. My map showed only the usual (IPCC) warming in Antarctica, and some warming in the usual suspect areas such as around the Sahara and China-Mongolia.

  251. Posted Nov 15, 2007 at 8:15 AM | Permalink

    #250 John Pittman:
    What you plotted was the 1900-2007 average temperature vs the 1930-1960 average. Using 1900 to 2007 as the anomaly range does *not* mean “2007 minus 1900”, if that’s what you thought.
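
    In other words, each map value is the mean over the anomaly-range years minus the mean over the base-period years, computed per grid cell. A toy Python sketch with made-up numbers (purely illustrative; the real maps do this cell by cell, with area weighting):

    # Hypothetical annual temperatures for one grid cell, 1900-2007
    temps = {year: 10.0 + 0.005 * (year - 1900) for year in range(1900, 2008)}

    base = [temps[y] for y in range(1930, 1961)]  # 1930-1960 calibration period
    anom = [temps[y] for y in range(1900, 2008)]  # 1900-2007 anomaly range

    # The plotted value: average of the whole anomaly range relative to the base period.
    # It is NOT temps[2007] - temps[1900].
    value = sum(anom) / len(anom) - sum(base) / len(base)
    print(value)  # small positive number: the 1900-2007 average sits slightly above the 1945-centered baseline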

  252. Kenneth Fritsch
    Posted Nov 15, 2007 at 9:55 AM | Permalink

    Anybody want to discuss change point analysis and the need for pre-whitening in looking for regime changes in the differences between the temperature trends for the CRN123 and CRN45 quality classifications? I found this link to be a good starting point:

    Click to access Red_noise_paper_v3_with_figures.pdf

  253. Posted Nov 15, 2007 at 9:59 AM | Permalink

    Thanks John V.

  254. Steve Sadlov
    Posted Nov 15, 2007 at 10:41 AM | Permalink

    1995 – 2005 is complete crap.

  255. Posted Nov 15, 2007 at 11:09 AM | Permalink

    #254 Steve Sadlov:
    Thanks for your concise opinion. Please elaborate.

    #252 Kenneth Fritsch:
    Sounds like an interesting discussion, and something I need to learn about. I will try to read the paper you linked.

  256. Steve Sadlov
    Posted Nov 15, 2007 at 11:42 AM | Permalink

    RE: #254 – The change to short-cabled automated measurement systems (close to buildings), UHI, land clearing, and other land use changes. These impacts have only accelerated since the end of WW2, and particularly since the 1970s. This acceleration derives from the technological explosion and the full fruition of the so-called newly industrializing countries.

  257. steven mosher
    Posted Nov 15, 2007 at 12:14 PM | Permalink

    RE 252. I’ve asked people to look at that paper for some time. But no takers.

    I think we may also want to consider the new papers cited by NOAA in USHCN 2

    Hey JohnV can I get a pointer to the latest version? I have some more time and some ideas.

  258. Posted Nov 16, 2007 at 10:10 AM | Permalink

    steven mosher:
    I have not worked on OpenTemp for about four weeks. The latest version is available here:

    http://www.opentemp.org/_release/OpenTempV1RC2.zip

    Run it without any command line parameters to see what it can do. The best new feature is the ability to pick a random subset of stations using “/stnpick=N”.

    Clayton B found a small problem where it reports one too many stations in its text output (i.e., it writes “M stations parsed” instead of “N stations parsed”, where M=N+1). It has no effect on the calculations.

    I hope to find some time before the end of the month to work on it and generate some new results.

  259. Clayton B.
    Posted Nov 27, 2007 at 11:19 AM | Permalink

    So I was gonna look at the California average temperature for CRN12 stations vs. CRN45 stations using OpenTemp. But there aren't enough CRN123 stations!

  260. Clayton B.
    Posted Nov 29, 2007 at 7:45 PM | Permalink

    It’s been quiet around here. Here’s more talk about trends – sorry if this has already been discussed, SteveMc ;).

    More to come, as I’m stuck watching the Cowboys for the rest of the evening…

  261. Geoff Sherrington
    Posted Nov 29, 2007 at 8:16 PM | Permalink

    Re # 228 Bob Koss

    I have tried for a year to get some good data comparing the temperature averaged from many readings in a day to the mean of Tmax and Tmin. Is it possible for you to post a graph with a short explanation, or a short discussion interpreting what you found (like r^2)? sherro1@optusnet.com.au I have always been more interested in integrated heat than in average daily temperature. Thanks, Geoff.

  262. Bob Koss
    Posted Nov 30, 2007 at 4:43 AM | Permalink

    Geoff,

    Here are two stations with 24 readings per day to demonstrate the difference between methods.

    The yearly difference is 0.7 F for Miami and 0.9 F for Boston. On a daily basis the difference can be as much as +/- 6 F. I suppose they use (Tmax+Tmin)/2 due to the historically large number of stations taking few readings per day. Unfortunately that method seems to be very sensitive to the number of hours of clear sky. How they can say with a straight face that year 1 is warmer than year 2 by 0.1 C is beyond me.

    I had about 12 GB of daily data files. I reorganised them all by station history for ease of use, since they’re individually filed by year. Then about 2 months ago I lost it all due to a bone-headed mistake. I don’t even like thinking about it.

    If you’re interested, the files are located here: ftp://ftp.ncdc.noaa.gov/pub/data/gsod/
    The data is filed by station number and year. You’d need the history file to identify a particular station and the readme to identify all the fields. My spreadsheet doesn’t recognize their .op extension, so I had to append a .csv extension for it to load the file after first decompressing it. Standard field separators didn’t work. Not all columns have labels, so specifying each field width was necessary.

    They are text files containing weather data for each day. Individual hourly readings aren’t available, but they do have Tmax, Tmin, and the Tmean of all readings. The reading count is also in the file. If the day isn’t missing entirely, the number of readings varies mostly from 4 to 24, depending on the station.
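
    For anyone who would rather script this than fight a spreadsheet, here is a minimal Python sketch of the comparison Bob describes. It assumes the whitespace-separated field order given in the GSOD readme (station, WBAN, date, TEMP, reading count, ..., MAX, MIN, ...); check the readme before trusting the token indices, and note that 9999.9 marks missing values:

    import gzip

    MISSING = 9999.9

    def parse_gsod_line(line):
        # Field order assumed from the GSOD readme; verify there before use.
        t = line.split()
        temp = float(t[3])               # TEMP: mean of all readings that day
        n_readings = int(t[4])           # number of readings behind TEMP
        tmax = float(t[17].rstrip('*'))  # MAX ('*' flags a derived value)
        tmin = float(t[18].rstrip('*'))
        return temp, n_readings, tmax, tmin

    def yearly_method_difference(path):
        # Average of (Tmax+Tmin)/2 minus mean-of-all-readings over one
        # station-year file, in deg F.
        diffs = []
        with gzip.open(path, 'rt') as f:
            next(f)  # skip the header line
            for line in f:
                temp, n, tmax, tmin = parse_gsod_line(line)
                if MISSING in (temp, tmax, tmin) or n < 4:
                    continue  # skip missing or sparsely sampled days
                diffs.append((tmax + tmin) / 2.0 - temp)
        return sum(diffs) / len(diffs) if diffs else None

    # Hypothetical usage: print(yearly_method_difference('725090-14739-2006.op.gz'))

    If the field layout in a given file vintage differs, only parse_gsod_line needs changing.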

  263. steve mosher
    Posted Nov 30, 2007 at 7:47 AM | Permalink

    re 259.. there is a study of the difference between the average and (tmax+tmin)/2; can't find it now.

    Google TOBS and find JerryB's stuff; he has years of hourly data from ASOS..

    or see the TOBS thread for links

  264. steve mosher
    Posted Nov 30, 2007 at 7:50 AM | Permalink

    re 261: oops, see my post above; I referenced #259 by mistake.

  265. SteveSadlov
    Posted Dec 4, 2007 at 3:52 PM | Permalink

    So how about that late 10s, early 20s upward fault scarp? Is it real, or is it due to “adjustments”?

  266. steve mosher
    Posted Dec 4, 2007 at 6:58 PM | Permalink

    265. I did a fault scarp chart, just to see if it was way out of whack. What I found was this
    (subjectively): during periods when the SST is catching up to the land temp, you get volatility
    (jolts) in the land temp. Basically, look at SST and land temps up to 1940 or so: you see the air
    warming while the SST is catching up. During this period air temp is volatile. Then
    SST (in anomaly form) catches the air temp anomaly and you get a quiescent period, say 1940-75.

    In this regime you have small year-to-year changes (the SST anomaly is not changing much). Then it takes
    off again, led by air temp with SST lagging (inertia), and the volatility of air temp returns.

    FWIW, I should dig those charts out again. When it comes to these kinds of processes I think the jolts
    and scarps (ok, you inspired me) are most instructive.

  267. SteveSadlov
    Posted Dec 4, 2007 at 7:43 PM | Permalink

    The late teens “scarp” is unique in the entire 100-plus year apparent US record. There are other scarp-like features, but nothing like this one. I looked at a lot of charts when the whole discussion was still in full swing, and it seemed that the scarp lined up with one of the TOBS “adjustments” – but the magnitude of the adjustment alone could not account for the magnitude of the “scarp.” It would be interesting to make a compendium of adjustments, and other issues known or suspected to occur, between 1915 and 1925. Another thing to ponder: the short US participation in WW1. Prior to it, the US was extremely primitive in terms of airfields and air transportation (we were way behind the level of development seen in Europe); after it, the air age began in earnest and we had essential parity with Europe.

  268. steve mosher
    Posted Dec 4, 2007 at 8:40 PM | Permalink

    RE 267… OK, later on I'll get some time and do up my fault scarp chart (say that three times quickly).

    I think it was the greatest year-over-year change. Some year has to have it. But these shocks are always
    interesting places to look… The year-over-year change was something like 1.5 C. Imagine if we got that
    next year?…

  269. Clayton B.
    Posted Dec 4, 2007 at 9:27 PM | Permalink
  270. Clayton B.
    Posted Dec 4, 2007 at 10:45 PM | Permalink

    More fun…

  271. Kenneth Fritsch
    Posted Mar 18, 2008 at 12:07 PM | Permalink

    I took the latest available CRN station ratings from the A. Watts web site and updated my CRN123 versus CRN45 comparisons. Recall that I decided to divide the ratings into these groups so that the statistics would have sufficient data to look for significant differences in temperature anomaly trends related to the Watts team quality ratings. Although the number of stations should eventually increase enough to allow significant comparisons between individual group ratings, I chose to continue here with CRN123 versus CRN45.

    As in previous analyses, I used the USHCN Urban data set to make the comparisons, over the time period 1920-2005. There is a missing data problem with these data sets, so in one comparison I used only those stations for which I had complete data, and in a second comparison I filled in the missing years' data points with the station average over 1920-2005. Obviously the fill-in technique is a lazy person's approach, but it is all I had for the moment.

    I found a total of 143 CRN123 and 310 CRN45 stations at the Watts website and used that many with my fill in technique. The numbers for those stations with complete USHCN data were 130 CRN123 stations and 282 CRN45 stations.

    My main thrust in this analysis was to compare the anomaly differences between the CRN45 and CRN123 stations over the time period 1920-2005. The difference calculated in a regression analysis for the comparison using stations with no missing data was a trend of 0.23 degrees C per century, with p less than 0.00001. The 95% CI for the trend was 0.16 to 0.30 degrees C per century. When the CRN45-CRN123 trend difference is compared to the combined trend of all the CRN123 and CRN45 stations used in this analysis for the 1920-2005 time period, 0.57 degrees C per century, the CRN45-CRN123 difference appears to me to loom large.

    I also compared the CRN123 and CRN45 groups with regards to location coordinates of latitude, longitude and altitude. That comparison is given in the table below. I see nothing obvious in that comparison to indicate a confounding of the CRN45-CRN123 anomaly temperature difference.

    I also present some graphs below of the regression analysis I did for the CRN123 and CRN45 anomaly differences with a regression plot, a plot of the regression residuals and a normality plot. Nothing in these plots stands out, for me anyway, as an obvious warning of something not accounted for.

    Again I make no claims for the statistical completeness of this analysis, but only that I think there are differences (assuming no missed confounding properties) in temperature trends related to the Watts team’s CRN ratings.
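
    For reference, here is a minimal Python sketch of the kind of trend-difference regression described above, assuming the annual CRN45-CRN123 anomaly differences are already in hand. It uses plain OLS with a t-based 95% CI and ignores autocorrelation, which would widen the interval somewhat:

    import numpy as np
    from scipy import stats

    def trend_per_century(years, diffs):
        # OLS trend of annual anomaly differences, in deg C per century,
        # with a conventional 95% CI (no autocorrelation correction).
        res = stats.linregress(years, diffs)
        half = stats.t.ppf(0.975, len(years) - 2) * res.stderr
        return 100 * res.slope, 100 * (res.slope - half), 100 * (res.slope + half), res.pvalue

    # Hypothetical usage, with `diffs` holding the 1920-2005 annual differences:
    # slope, lo, hi, p = trend_per_century(np.arange(1920, 2006), diffs)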

  272. steven mosher
    Posted Mar 18, 2008 at 1:35 PM | Permalink

    Good work, Kenneth. One thing I noticed a while back: the trend in the UNSURVEYED stations
    was bigger than the trend in the surveyed ones. All of which leads me to believe that we may have
    some more surprises when the survey is complete.

  273. Raven
    Posted Mar 18, 2008 at 2:10 PM | Permalink

    A question about proper errors on adjusted temperatures.

    Assume that a temperature measurement is 10 degC +/- 0.5 degC
    What are the error bars if this temperature measurement is later adjusted by some algorithm to 9.7 degC?

    They must be bigger than +/- 0.5, because the adjustment has uncertainty associated with it too.
    It seems to me that the original measurement should be within the error bounds of the adjusted measurement.

    Is there any rule to follow or is this seat of the pants stuff?
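
    If the adjustment error were independent and random, the standard rule would be to add the uncertainties in quadrature. A minimal Python sketch, with a purely hypothetical adjustment uncertainty (and note RomanM's point below: adjustments can introduce bias, which this rule does not capture):

    import math

    def adjusted_uncertainty(sigma_meas, sigma_adj):
        # Combined 1-sigma uncertainty, assuming the measurement error and the
        # adjustment error are independent and random (a big "if").
        return math.sqrt(sigma_meas**2 + sigma_adj**2)

    # Raven's example: 10.0 +/- 0.5 C adjusted to 9.7 C. If the -0.3 C adjustment
    # itself carried, say, +/- 0.4 C uncertainty (hypothetical), then:
    print(adjusted_uncertainty(0.5, 0.4))  # ~0.64, so 9.7 +/- 0.64 C, which includes the original 10.0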

  274. Kenneth Fritsch
    Posted Mar 18, 2008 at 2:41 PM | Permalink

    Re: #273

    I think you may be getting into the domain of mixing those errors you know and those errors you don’t know. Kind of like the known unknowns versus the unknown unknowns.

  275. steven mosher
    Posted Mar 18, 2008 at 6:18 PM | Permalink

    re 273. I've been asking that question for about a year. It's either stupid or intractable.

  276. SteveSadlov
    Posted Mar 18, 2008 at 7:56 PM | Permalink

    The fault scarp may be due to a large number of new stations being introduced. Prior to WW1, the US had almost zero aerodromes. During the war a number of new ones opened: many new stations, in far more controlled settings, albeit settings more developed than in the past. The pre-war data are probably cold-biased and slightly affected by TOBS.

  277. Raven
    Posted Mar 18, 2008 at 7:59 PM | Permalink

    I don't think it is stupid. Adjusting data always sounds like a scam, because even if your intentions are good, one can never know whether the algorithm choices were unconsciously designed to produce the result you want to see.

    Many of these data problems have well-defined uncertainty intervals (TOBS, UHI, station moves, etc.). Instead of adjusting the data, the error bars should be widened. In some cases, like station moves, the uncertainty would be huge; however, the data can still be used provided the uncertainty is carried through to the end. When developing trends, these uncertainty intervals would likely shrink to something useful even if they did not disappear, as the sketch below illustrates.
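
    A quick Python sketch of why trend uncertainty shrinks with record length, assuming (optimistically) independent errors of sigma on each annual value; real series are autocorrelated, so the true shrinkage is slower:

    import math

    def trend_se_per_century(sigma, n_years):
        # 1-sigma uncertainty of an OLS trend, in deg C per century, fitted to
        # n_years of annual values each carrying independent error sigma (deg C).
        # For unit-spaced years, sum((x - xbar)^2) = n(n^2 - 1)/12.
        sxx = n_years * (n_years**2 - 1) / 12.0
        return 100.0 * sigma / math.sqrt(sxx)

    # With sigma = 0.5 C on each annual value:
    print(trend_se_per_century(0.5, 30))  # ~1.05 C/century
    print(trend_se_per_century(0.5, 86))  # ~0.22 C/century for an 86-year record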

  278. steven mosher
    Posted Mar 19, 2008 at 7:00 AM | Permalink

    we agree raven, kinda. I didnt mean it was stupid. I’m just saying Ive never gotten an anwser to it.

  279. RomanM
    Posted Mar 19, 2008 at 10:05 AM | Permalink

    #274 Kenneth:

    I think you may be getting into the domain of mixing those errors you know and those errors you don’t know. Kind of like the known unknowns versus the unknown unknowns.

    You have hit the nail on the head. The problem with adjustments is that they introduce non-random bias error, as opposed to the random errors due to natural fluctuation and measurement error. You can statistically estimate the effect of random error and indicate the uncertainty in a probabilistic fashion with error bars. However, the amount of bias (which can sometimes dramatically reduce the confidence level of the error bounds) cannot be determined by simply looking at the size of the adjustment, because part of the adjustment may or may not be warranted. In that case, if you merely widen the error bars, you won't have a clue what the actual confidence level of those bounds is.

    The worst part of it is that most of the temperature adjustments seem to be done automatically by the fine software used by the keepers of the flame rather than by scientific justification based on actual knowledge of geography and other conditions. By the way, one approach to get a handle on the adjustment bias might be to look at relationships between other climatic factors such as rain, sun or cloud, etc. and adjusted vs. non-adjusted temperatures.

  280. Kenneth Fritsch
    Posted Mar 28, 2008 at 2:39 PM | Permalink

    From Bering Climate at NOAA, I downloaded an Excel add-in for detecting statistically significant regime changes. The procedure linked below accounts for autocorrelation and outliers. The link notes that: “The program can detect shifts in both the mean level of fluctuations and the variance. The algorithm for the variance is similar to that for the mean, but based on a sequential F-test (Rodionov 2005b)”.

    The algorithm for regime change in means gives the operator choices of significance level, cut-off length for determining a regime change, Huber's weight parameter for handling outliers, red noise estimation method with subsample size, and the use of prewhitening of the data.

    http://www.beringclimate.noaa.gov/regimes

    I used the NOAA Excel add-in to look for regime changes in my previously reported time series of the temperature anomaly differences CRN45-CRN123 over the period 1920-2005, using the USHCN Urban data set and the latest available Watts CRN station ratings. I used only those stations that have complete USHCN data for the period 1920-2005.

    I varied the tunable parameters in order to get an idea of the sensitivity of the resulting regime change(s) to the parameter selections. What I found is what the authors of the algorithm claimed, i.e. the parameters having an effect are the probability and the cut-off length. A cut-off length of 3 years found no significant regime changes (indicating to me that any changes did not occur fast enough for a 3-year cut-off to detect them), while 5- and 10-year cut-offs always found a regime change at 1958, and only 1958, regardless of the other parameters used, with the exception of probabilities larger than 0.10. A 20-year cut-off found a regime change at 1958 under the same conditions as the 5- and 10-year cut-offs, plus an indication of one for the year 2005 (which would take more years to truly confirm as a trend, since 2005 was a single year at the end of the series). Pushing the probabilities beyond 0.10 showed change points at 1934, 1945, 1976 and 2005. These change points appeared to me to be due more to the noisy character of the station differences than to anything statistically significant.
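
    For anyone who wants to try this outside Excel, here is a stripped-down Python sketch of the sequential t-test idea behind the add-in (Rodionov's regime shift test). It omits the Huber weighting, red-noise estimation and prewhitening options described above, so treat it as illustrative only, not as a reimplementation of the add-in:

    import numpy as np
    from scipy import stats

    def simple_stars(x, cutoff=10, p=0.05):
        # Bare-bones regime shift detection on a 1-D series.
        x = np.asarray(x, float)
        # Average variance of cutoff-length windows sets the shift threshold.
        var_l = np.mean([np.var(x[i:i + cutoff], ddof=1)
                         for i in range(len(x) - cutoff + 1)])
        diff = stats.t.ppf(1 - p / 2, 2 * cutoff - 2) * np.sqrt(2.0 * var_l / cutoff)
        shifts, regime_mean = [], np.mean(x[:cutoff])
        for i in range(cutoff, len(x)):
            dev = x[i] - regime_mean
            if abs(dev) > diff:
                # Candidate shift: confirm with a regime shift index over the
                # next `cutoff` points; any sign reversal rejects the candidate.
                level = regime_mean + np.sign(dev) * diff
                tail = x[i:i + cutoff]
                rsi = np.cumsum(np.sign(dev) * (tail - level)) / (cutoff * np.sqrt(var_l))
                if np.all(rsi > 0):
                    shifts.append(i)
                    regime_mean = np.mean(tail)
            else:
                regime_mean += dev / cutoff  # fold the point into the running mean
        return shifts  # indices where a new regime starts

    With an annual series starting in 1920, a returned index i corresponds to year 1920 + i.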

    Going back to the CRN45-CRN123 anomaly differences over the 1920-2005 time periods (see Post # 271 in this thread), one could make a case for a flat trend difference 1920-1950, followed by a relatively steep trend 1951-1969 and then followed by another flat trend difference from 1970-2005. That would put the 1958 regime change point near the middle of the steep trend in the CRN45-CRN123 anomaly differences and indicate that the CRN45 and CRN123 differences could be concentrated in 1958 to 1969 time period.

  281. Kenneth Fritsch
    Posted Mar 28, 2008 at 3:06 PM | Permalink

    The last sentence in post #280 above should have read:

    That would put the 1958 regime change point near the middle of the steep trend in the CRN45-CRN123 anomaly differences and indicate that the CRN45 and CRN123 differences could be concentrated in 1951 to 1969 time period.

    That is 1951 to 1969, not 1958 to 1969.

  282. steven mosher
    Posted Mar 28, 2008 at 5:15 PM | Permalink

    Kenneth, I've been trying to get folks interested in that plug-in for a while. I think UC, or maybe it was RomanM, thought it was interesting. Mac thought it was less impressive. Shrugs.

    Have you had a look at Menne's paper? It "argues", kinda sorta, for a 1964 break point.

  283. Kenneth Fritsch
    Posted Mar 28, 2008 at 9:01 PM | Permalink

    Re: #282

    I read the Menne paper, and as I recall the change point(s) determined in the paper were claimed for climate change regimes; they would not correspond to the regime change I determined for the CRN45-CRN123 temperature anomalies, which would better fit a period of changing quality between the CRN123 and CRN45 stations of the kind the Watts team is picking up in its current audit.

    The regime change point algorithm that I used was developed to find regime changes in real time as well as after the fact. It was suggested that it might be helpful for Bering Sea fishermen detecting regime changes in fish populations. I used it because it was available, not because it might be the best instrument for my analysis. This algorithm treats a change point as a break occurring within a single data point's time period, whereas other algorithms I have seen (in the Menne paper, as I recall) can handle change points developing over several data points. I need to look further at these other algorithms.

    The algorithm that I used also finds change points in the standard deviation of a series, and when I applied it I found a change point for the CRN45-CRN123 time series at 1958, the same as for the mean.

  284. Kenneth Fritsch
    Posted Apr 1, 2008 at 4:22 PM | Permalink

    In previous posts (#280 and #281) I used a method, employing an Excel add-in from this link http://www.beringclimate.noaa.gov/regimes, to find a change point in the mean of a time series. The method, as described above, found a statistically significant change point in the mean for the series I generated using the CRN45-CRN123 differences in temperature anomalies over the time period 1920-2005. I used the USHCN Urban data set and only CRN-rated stations that had complete data for the period. I found a single change point, at 1958.

    As noted in the above post I wanted to use other change point methods with which to analyze the same data. I found an excellent link summarizing the various regime change/ change point methods here:

    http://www.beringclimate.noaa.gov/regimes/rodionov_overview.pdf .

    The method I used is described here:

    Click to access i1520-0442-15-17-2547.pdf

    and involves a simple two-phase linear regression scheme in which an F statistic is calculated for every possible division of the main time series into two parts, with a minimum of 9 points in any given part. The F statistics for all divisions are then compared to find the maximum F value, and that maximum is tested against the F value that would occur by mere chance. If the main series produces a statistically significant maximum F, the series is divided at that point and the two subseries are put through the same analysis to find whether any further statistically significant maximum F values exist. This is done exhaustively until no maximum F value exceeds that for the p limit used for statistical significance (in my case p = 0.05).

    The method described here is fortunately the one used by Menne in his paper here:

    Click to access 100694.pdf

    I was able to use the CRU data set in the Menne paper to verify that I was applying the linear regression scheme properly: I matched all three change points for the CRU series from 1860-2005.

    The regression scheme detected no statistically significant change points in my CRN45-CRN123 anomaly difference series. The F values did, however, peak very close to the 1958 change point year that I found with the regime mean method used earlier. From all this I conclude that, while the difference in the anomaly trends between the CRN45 and CRN123 stations may have been concentrated in the 1950s, an objective analysis indicates that the changes probably occurred over a longer period of time. In general, I also conclude that subjectively eyeballing a change point from a graph can be very misleading, and wrong.
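
    For completeness, here is a minimal Python sketch of the two-phase regression scheme described above: one straight line (2 parameters) versus two unconstrained lines (4 parameters) at every admissible split. Note that the maximum-F statistic does not follow an ordinary F distribution, so critical values must come from simulation or from the tables in the cited papers; this sketch only locates the best split:

    import numpy as np

    def sse_line(x, y):
        # Residual sum of squares from an OLS straight-line fit.
        slope, intercept = np.polyfit(x, y, 1)
        return float(np.sum((y - (slope * x + intercept)) ** 2))

    def max_f_changepoint(years, y, min_seg=9):
        # Scan all splits leaving at least min_seg points per segment and
        # return (first year of the second segment, maximum F).
        years, y = np.asarray(years, float), np.asarray(y, float)
        n = len(y)
        sse0 = sse_line(years, y)  # single-line (null) fit
        best_year, best_f = None, 0.0
        for c in range(min_seg, n - min_seg + 1):
            ssec = sse_line(years[:c], y[:c]) + sse_line(years[c:], y[c:])
            f = ((sse0 - ssec) / 2.0) / (ssec / (n - 4))
            if f > best_f:
                best_year, best_f = years[c], f
        return best_year, best_f

    # Hypothetical usage with the 1920-2005 difference series:
    # year, fmax = max_f_changepoint(np.arange(1920, 2006), diffs)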

  285. barry
    Posted Jan 4, 2009 at 10:49 AM | Permalink

    This is from right near the top of the first thread. I’ve read them through a few times and am surprised not to have noticed it.

    one has to consider the quality of satellite information from the 1930s, which, as I understand it, is less complete than satellite information from the 1990s

    I would think that the “satellite information from the 1930s” is non-existent rather than “less complete”.

    I posted a request for any updates or new threads on this topic many months ago. It doesn't seem to have got through. If further work has been done comparing the CRN12(3?) stations to GISTEMP, where can it be found?

    If this project has been abandoned, could you let us know? The work has been very impressive.