Broadcast Charges Leveled at NOAA, NASA Labs

‘Extraordinary Claims’ in KUSI Broadcast On NOAA, NASA … but ‘Extraordinary Evidence’?

A San Diego TV station’s mid-January one-hour broadcast reporting that two key federal climate research centers deliberately manipulated temperature data appears to have been based on a fundamental misunderstanding of the nature of the key climatology network used in calculating global temperatures.

Independent TV news station KUSI in San Diego aired a story challenging current scientific understanding of climate science and offering “breaking news” of government wrongdoing based on work of Joseph D’Aleo, a meteorologist, and E.M. Smith, a computer programmer.

The two maintained that the National Oceanic and Atmospheric Administration, NOAA, “is seriously complicit in data manipulation and fraud” by “creating a strong bias toward warmer temperatures through a system that dramatically trimmed the number and cherry-picked the locations of weather observation stations they use to produce the data set on which temperature record reports are based.”

The program’s host, KUSI meteorologist John Coleman, accused NOAA and NASA climate research laboratories of “lying” to the American public, a charge NOAA and NASA spokespersons have both rejected. These are extraordinary claims which, as Carl Sagan was fond of saying, should require “extraordinary evidence.”

The broadcast accusations appear to have resulted from a misunderstanding or misrepresentation of the nature of the Global Historical Climatology Network (GHCN) and methods used in calculating global temperatures.

D’Aleo and Smith charged that NOAA “systematically eliminated 75 percent of the world’s stations with a clear bias towards removing higher latitude, high altitude and rural locations, all of which had a tendency to be cooler.” They also said NOAA used a “slight of hand” to make 2005 appear to be the warmest year on record; that “the National Data Climate Center [NCDC] deleted actual temperatures at thousands of locations throughout the world as it evolved to a system of global grid boxes”; and that NOAA and NASA arbitrarily adjusted individual station temperature data in order to exaggerate warming.

At a glance, the chart below showing the number of temperature stations used over time does look odd: the number of stations in the GHCN network drops dramatically between the 1970s and the present. D’Aleo and Smith point to the purposeful elimination of those stations as the cause.

However, as Thomas Peterson and Russell Vose, the researchers who assembled much of GHCN, have explained:

The reasons why the number of stations in GHCN drop off in recent years are because some of GHCN’s source datasets are retroactive data compilations (e.g., World Weather Records) and other data sources were created or exchanged years ago. Only three data sources are available in near-real time.

It’s common to think of temperature stations as modern Internet-linked operations that instantly report temperature readings to readily accessible databases, but that is not particularly accurate for stations outside of the United States and Western Europe. For many of the world’s stations, observations are still taken and recorded by hand, and assembling and digitizing records from thousands of stations worldwide is burdensome.

During that spike in station counts in the 1970s, those stations were not actively reporting to some central repository. Rather, those records were collected years and decades later through painstaking work by researchers. It is quite likely that, a decade or two from now, the number of stations available for the 1990s and 2000s will exceed the 6,000-station peak reached in the 1970s.


Figure taken from Peterson and Vose (1997), showing the change in temperature stations over time for daily mean temperatures (solid line) and min/max temperatures (dotted line).

There actually is a fairly easy way to test whether the absence of more recent data from a number of stations has a significant effect on temperature records. If stations had been purposefully dropped in favor of those with greater warming trends, one would expect the stations whose records end before the present to show less warming than those stations with a continuous record up to the present.

The chart below shows this analysis for all stations with continuous records between 1960 and 1970. Of the 1,419 temperature stations containing data for this period, available at the National Center for Atmospheric Research, 1,017 continue up to at least the year 2000, and 402 stop providing data at some point between 1970 and 2000.

There is no significant difference between the temperatures from discontinuous and continuous stations, suggesting that there was no purposeful or selective “dropping” of stations to bias the data. If anything, discontinuous stations have a slightly higher trend over the century than continuous stations. This result strongly suggests that the discontinuity in station data results from inadequate resources to gather those records, rather than from some pernicious plot to exaggerate warming trends.


Based on an analysis of raw temperatures from all station records in the NCAR database with a duration of at least 10 years (the minimum required for inclusion in GHCN). Continuous stations are defined as those that at a minimum cover the period from 1960 to 2000. Discontinuous stations are those that cover the period from 1960 to 1970 but stop contributing data to GHCN between 1970 and 2000. Discontinuous station data are not plotted post-1997 because fewer than 25 stations remain in the group after that point. Raw data and source code for this analysis can be found here. Note that changes in the spatial distribution of stations in both groups will affect trends in ways unrelated to real global temperatures, particularly in the discontinuous group once the number of stations available becomes small.
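For readers who want to check this for themselves, the comparison boils down to a few lines of code. The sketch below (Python, with illustrative column names and an assumed 1960-1970 anomaly baseline, not the exact layout of the linked data and source code) computes mean annual anomalies for the two groups of stations:

```python
# Minimal sketch of the continuous-vs-discontinuous comparison.
# Assumes a DataFrame `stations` with columns ['station_id', 'year', 'temp']
# holding raw annual mean temperatures; names and baseline are illustrative.
import pandas as pd

def group_anomalies(stations: pd.DataFrame) -> pd.DataFrame:
    """Mean annual anomalies for continuous vs. discontinuous station groups."""
    first_year = stations.groupby("station_id")["year"].min()
    last_year = stations.groupby("station_id")["year"].max()
    starts_by_1960 = first_year <= 1960

    continuous = starts_by_1960 & (last_year >= 2000)        # report through 2000+
    discontinuous = starts_by_1960 & last_year.between(1970, 1999)  # drop out in between

    out = {}
    for label, mask in [("continuous", continuous), ("discontinuous", discontinuous)]:
        sub = stations[stations["station_id"].isin(mask[mask].index)].copy()
        # Anomalies relative to each station's own 1960-1970 mean, so absolute
        # temperatures drop out and only changes over time matter.
        base = (sub[sub["year"].between(1960, 1970)]
                .groupby("station_id")["temp"].mean())
        sub["anom"] = sub["temp"] - sub["station_id"].map(base)
        out[label] = sub.groupby("year")["anom"].mean()
    return pd.DataFrame(out)
```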

The map below shows all 1,200 temperature stations that provide updated temperature data on a monthly basis. While certain places have much better spatial coverage than others, there is a good distribution of stations across all major landmasses, with the possible exception of parts of Africa and the polar regions, particularly Antarctica (though Antarctic temperature records are supplemented by a number of temporary stations located in the interior of the continent).


Figure taken from Peterson and Vose (1997), showing all stations in GHCN v2 with regularly updating temperature records.

In addition, the accuracy of the surface temperature record can be independently validated against satellite records. Over the period from 1979 to the present, for which satellite lower-tropospheric temperature data are available, satellite and surface temperatures track quite well, as shown in the chart below. One analysis of the satellite data, by the Remote Sensing Systems (RSS) group, has a slope (0.15 C per decade) virtually identical to that of the GISS and NCDC (0.16 C per decade) temperature records, while another, from the University of Alabama in Huntsville (UAH), has a slightly lower slope (0.13 C per decade).

If stations had intentionally been dropped to maximize the warming trend, one would expect to see more divergence between surface and satellite records over time as the number of stations used by GHCN decreases. However, a close examination of the residuals when satellite records are subtracted from surface station records shows no significant divergence over time compared to either UAH or RSS.
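That residual check is straightforward to reproduce. The sketch below is one minimal way to do it, assuming monthly anomaly series already placed on a common baseline; the variable names are placeholders rather than the exact series used for the chart:

```python
# Minimal sketch of the surface-minus-satellite residual check.
# `surface` (e.g. GISS or NCDC) and `satellite` (UAH or RSS) are assumed to be
# monthly anomaly Series indexed by date on a common baseline.
import numpy as np
import pandas as pd

def residual_trend(surface: pd.Series, satellite: pd.Series) -> float:
    """Trend (deg C per decade) of surface-minus-satellite residuals.

    If stations had been dropped to inflate surface warming, this residual
    trend should grow over time; a value near zero means the two records
    track each other.
    """
    resid = (surface - satellite).dropna()
    months = np.arange(len(resid))
    slope_per_month = np.polyfit(months, resid.values, 1)[0]
    return slope_per_month * 120  # 120 months per decade
```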


Figure One: Based on monthly data from 1979 through December 2009 from GISS, UAH, RSS, and NCDC.

In addition to arguing that NCDC manipulated the stations used in the temperature record, D’Aleo and Smith said “the National Data Climate Center deleted actual temperatures at thousands of locations throughout the world” when it switched to a method that averages data from individual temperature stations into discrete grid boxes covering the globe, in order to better account for the spatial location of individual station measurements. But it is hard to reconcile that view with the vast amount of raw station data available in various places from NCDC and others.
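For readers unfamiliar with gridding, the sketch below shows the basic idea: station anomalies are averaged within latitude/longitude boxes, and boxes are weighted by area when combined. It illustrates the general technique only, not NCDC’s exact procedure; the 5-degree box size and column names are assumptions:

```python
# Minimal sketch of grid-box averaging of station anomalies for one month.
# `anoms` is assumed to have columns ['lat', 'lon', 'anom'], one row per station.
import numpy as np
import pandas as pd

def gridded_mean(anoms: pd.DataFrame, box: float = 5.0) -> float:
    """Area-weighted mean anomaly across grid boxes (illustrative box size)."""
    df = anoms.copy()
    df["lat_box"] = (df["lat"] // box) * box
    df["lon_box"] = (df["lon"] // box) * box
    # Average all stations that fall in the same box, so dense regions
    # do not dominate the global mean.
    boxes = df.groupby(["lat_box", "lon_box"])["anom"].mean().reset_index()
    # Weight each box by the cosine of its centre latitude, proportional to area.
    weights = np.cos(np.deg2rad(boxes["lat_box"] + box / 2))
    return float(np.average(boxes["anom"], weights=weights))
```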

D’Aleo and Smith also said that NCDC and NASA adjusted the raw temperatures from stations in a number of ways in the process of producing their temperature records. They identified a number of stations where these adjustments appear to turn a cooling trend into a warming trend.

However, many of those adjustments made to raw temperature data are completely justified. Temperature stations have undergone many changes over their long lifetimes, including moves to different locations, readings at differing times of day, changes in the type of screens used to house thermometers, changes in the environment surrounding the station, and other factors. NCDC and NASA correct for these changes in slightly different ways, but one of the primary methods involves comparing individual temperature station records to those of the nearest stations to identify significant discontinuities (e.g., large persistent upward or downward step changes in the data) and correct for them.
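The sketch below illustrates the general flavor of that neighbor comparison: subtract a composite of nearby stations from the target record and flag large, persistent steps in the difference series. Actual homogenization algorithms published by NCDC and NASA are considerably more sophisticated; the window and threshold here are purely illustrative:

```python
# Minimal sketch of neighbour-based step-change detection (illustrative only).
# `target` and the columns of `neighbours` are monthly anomalies on a common index.
import pandas as pd

def find_step_changes(target: pd.Series, neighbours: pd.DataFrame,
                      window: int = 60, threshold: float = 0.5) -> list:
    """Return index labels where the target-minus-neighbour series jumps.

    `threshold` is in degrees C; a real algorithm would also merge adjacent
    hits around the same break point and test their statistical significance.
    """
    diff = (target - neighbours.mean(axis=1)).dropna()
    breaks = []
    for i in range(window, len(diff) - window):
        before = diff.iloc[i - window:i].mean()
        after = diff.iloc[i:i + window].mean()
        if abs(after - before) > threshold:
            breaks.append(diff.index[i])
    return breaks
```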

Adjustments will be negligible for most stations, but a few stations will experience large positive or negative adjustments that can have a significant effect on the long-term temperature trends. To determine the net effect of these adjustments, one would have to examine the adjustments across all stations, rather than highlighting a few outliers as D’Aleo and Smith did.

The image below, from an analysis by an Italian molecular biologist, shows a histogram of the effect that all adjustments made in GHCN have on the trend over each temperature station's record. As expected, most adjustments are quite small, but there are outliers in both directions. This allows parties interested in criticizing adjustments to pick individual stations that show either a large increase in warming or cooling trends due to adjustments.


Analysis of the trend effects of 6,737 adjustments in GHCN over the full record of each station. The median adjustment is 0 and the mean adjustment is 0.017 degrees C per decade. Note that these trend effects may be larger or smaller if a shorter timeframe is examined. From Giorgio Gilestro. Note that RealClimate has a similar analysis.
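The histogram itself is easy to reproduce in principle: fit a trend to the raw and to the adjusted record for each station and plot the distribution of the differences. A minimal sketch, assuming parallel tables of raw and adjusted data with illustrative column names, follows:

```python
# Minimal sketch of the per-station adjustment-effect histogram.
# `raw` and `adjusted` are assumed to have columns ['station_id', 'year', 'temp'].
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def adjustment_trend_effects(raw: pd.DataFrame, adjusted: pd.DataFrame) -> pd.Series:
    """Per-station change in trend (deg C per decade) caused by adjustments."""
    def trends(df):
        out = {}
        for sid, grp in df.groupby("station_id"):
            if len(grp) > 1:
                # Least-squares slope in deg C/year, converted to per decade.
                out[sid] = np.polyfit(grp["year"], grp["temp"], 1)[0] * 10
        return pd.Series(out)

    effect = (trends(adjusted) - trends(raw)).dropna()
    effect.hist(bins=100)
    plt.xlabel("Adjustment effect on trend (deg C per decade)")
    plt.show()
    return effect
```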

After examining the evidence, there seems little indication that either the discontinuities in recent records from many GHCN stations or the adjustments made to the raw data have any substantive effects on global temperature trends. The accusations by D’Aleo and Smith aired as part of the KUSI “The Other Side” broadcast seem to be mostly unfounded, and certainly do not justify the seriousness of their allegations.

Creating global temperature records is no simple task, and the process might not always be pretty. But there is no evidence of major methodological problems that would compromise the validity of the records, and certainly no evidence of deliberate manipulation.

Zeke Hausfather

Zeke Hausfather, a data scientist with extensive experience with clean technology interests in Silicon Valley, is currently a Senior Researcher with Berkeley Earth. He is a regular contributor to Yale Climate Connections (E-mail: zeke@yaleclimateconnections.org, Twitter: @hausfath).

32 Responses to ‘Extraordinary Claims’ in KUSI Broadcast On NOAA, NASA … but ‘Extraordinary Evidence’?

  1. Curtis Moore says:

    A show like this costs money. Who paid to prepare it and air it?

  2. carrot eater says:

    A clear and level-headed analysis. I’d bring to your attention another way of assessing the impact of adjustments.

    http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html

    See the Figures under Q4. If I understand it correctly, one is the global temp anomaly history using the raw GHCN data, and the other shows the adjusted data. As suggested by the biologist’s analysis, the net effect is minor; there is quite clearly no manipulation or fraud.

    We can always pick out individual stations and indeed regions where adjustments are significant, but one must always return to the big picture.

  3. Deep Climate says:

    Zeke,
    Good job here – and RC agreed.

    In Canada, this “story” got picked up by the National Post (no surprise) and the rest of the CanWest chain (worrying trend).

    http://www.nationalpost.com/news/story.html?id=2465231

    Washington Post blogger Andrew Freedman covered it, but found the analysis dubious.

    http://voices.washingtonpost.com/capitalweathergang/2010/01/a_new_nasa_temperature_analysi.html

  4. Baja says:

    There is quite clearly manipulation and fraud in this article. For example, here is the map in the article showing the purported weather stations:

    http://www.yaleclimatemediaforum.org/pics/0110_Figure-32.jpg

    But it’s from 1997 – 13 years ago. Let’s look at a current map, including the changes:

    http://i44.tinypic.com/23vjjug.jpg
    [gif - takes time to load]

    Why the deliberate misrepresentation?

    The answer is right on the Yale home page: the heavily pro-AGW Grantham Foundation funds this site. Since it is bought and paid for, its editors bend over backward to propagandize for Grantham. The proof is right in this post.

    Who do you people think you’re kidding?

    Tools.

  5. Baja,

    Your chart appears to be somewhat incorrect. And it would behoove you to assume good faith in others, and avoid claims of deliberate misrepresentation without strong evidence.

    GHCN v2 is still the version in use, and is described by Peterson in his 1997 paper. An update of the state of GHCN is available here: http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html

    As I describe in the article, every month GHCN receives data from 1200 stations in the Global Climate Observing System (GCOS). You can find the latest map of their coverage here: http://www.wmo.ch/pages/prog/gcos/documents/GSN_Station_Map.png. GHCN is later supplemented with data from additional stations as it is collected, though as the analysis in this article demonstrates, the addition of more stations post-facto has little impact on the temperature record given the representativeness of the base set.

    As far as the alumni or foundations that provide financial support for this website, none of them have any input on or editorial control over the content that appears.

  6. TellingItLikeItIs says:

    Why are the skeptics here ignoring the SATELLITE data that has been available since the 1970s??

    RSS satellite temperature readings show that global temperatures have been warming 0.153 C/decade when averaged over the last 30 years. (Note: C is the same as K when incremental measurements are taken.)

    http://www.ssmi.com/msu/msu_data_description.html

    These temperature increases are in line with both the land weather stations and weather balloon (radiosonde) temperature measurements. This rate is also in the higher range of the IPCC projections.

    (Note RSS climatologists are also the guys who found Spencer/Christy’s clerical arithmetic errors in UAH, forcing them to admit to warming in their satellite data –although shhh they don’t like to admit their satellite data now shows warming too, especially if one ONLY counts after 1998 after a large El Nino caused a big spike… )

  7. Tilo Reber says:

    “The reasons why the number of stations in GHCN drop off in recent years are because some of GHCN’s source datasets are retroactive data compilations (e.g., World Weather Records) and other data sources were created or exchanged years ago. Only three data sources are available in near-real time.”

    What? I’m not getting this. If you are doing retroactive data compilations for a station, why can’t you continue to get information from that station?

    “There is no significant difference between the temperature from discontinuous and continuous stations, suggesting that there was no purposeful or selective “dropping” of stations to bias the data.”

    You are kidding, right? There is definitely a difference in the trend post-1980, which is when the dropping was being done. And since the so-called AGW signal is mostly derived from what happened between 1978 and 1998, this is really important.

    “there is a good distribution of stations across all major landmasses,”

    Not true. Look at the sparsity of the Arctic stations. This means that most of the Arctic gets extrapolated from a very few coastal stations for GISS. And those few coastal stations are subject to huge temperature swings dependent on nearby sea ice coverage. You can find some of those stations with swings of 4 C or more in 6 or 7 years. The reason that GISS has 2005 as the hottest year instead of 1998 is all due to those radical swings in Arctic stations and the extrapolation that results from that. If you want to see what difference station usage and extrapolation make, look at this chart from Hansen:

    http://www.realclimate.org/images/Hansen09_fig3.jpg

    Look at the HadCRUT 2005 chart and the GISS 2005 chart. Now look at the 6 cool cells directly above Svalbard in the HadCRUT 2005 chart. Then look at the same cells in the GISS 2005 chart. As you can see, negative anomaly cells from HadCRUT are represented as maximum positive anomaly cells in GISS. The HadCRUT cells are filled with SST data. The GISS cells are extrapolated station data because GISS won’t use SST data if the area is frozen over for any part of the year.

    A small amount of warming, or currents that cause the ice to retreat from those coastal stations, can bring about huge temperature differences at those stations. That difference is then projected over the entire Arctic Ocean, even though SST temperatures for the Arctic Ocean may not have changed all that much.

  8. Tilo Reber says:

    “As far as the alumni or foundations that provide financial support for this website, none of them have any input on or editorial control over the content that appears.”

    That doesn’t mean that their continued funding isn’t contingent upon liking what they see.

  9. carrot eater says:

    Tilo,
    It’s up to the national weather services of each country to electronically send in monthly updates in the correct format.

    That format is described here.
    http://gosic.org/gcos/GSN/CLIMAT-code_practical-help_081223.pdf

    Doing this is apparently something of a burden, so many countries don’t send in the data for all their stations. But they continue collecting it, so every so often somebody can go and chase it all down, and digitise it in some usable format.

    As Gavin at RC keeps saying, if somebody out there wants to do something that’d actually be useful, they could find some way to collect all the other station data that is published online, but doesn’t get turned into CLIMAT reports.

    As for Zeke kidding, why would he be? He did his analysis above. You can see that he does not detect any spurious trend due to discontinuity of station records. My only complaint is that he could go the extra step and take a proper spatial average.

  10. Carrot eater hit the nail on the head with the need for spatial weighting. The problem with my current approach is that the discontinuous station temps start to become biased due to geographic limitations as the sample size drops from over 400 down to 50 or so. You can see the change in stations over time in this graph:

    http://i81.photobucket.com/albums/j237/hausfath/Picture56-2.png

    Now, if some wascally wabbit or ambitious auditor wanted to do a proper spatial analysis, you could divide the world up into some fairly broad grid cells, assign each station to a grid cell based on its lat/lon, and make a graph comparing anomalies in all grid cells that include at least one discontinuous and continuous station for each year. Unfortunately doing it reasonably quickly is a tad beyond my programming ken. That said, I haven’t seen any papers yet that examine this particular issue, so you might be able to turn it into something publishable.
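    Roughly, the logic would look something like the sketch below (a sketch only; the column names and the 10-degree cell size are placeholders, not a worked-out analysis):

```python
# Rough sketch of the grid-cell comparison suggested above.
# Assumes a DataFrame `anoms` with columns
# ['station_id', 'year', 'anom', 'lat', 'lon', 'group'],
# where 'group' is "continuous" or "discontinuous".
import pandas as pd

def paired_cell_means(anoms: pd.DataFrame, box: float = 10.0) -> pd.DataFrame:
    """Yearly mean anomaly by group, using only grid cells that contain
    at least one station from each group in that year."""
    df = anoms.copy()
    df["cell"] = list(zip((df["lat"] // box).astype(int),
                          (df["lon"] // box).astype(int)))
    cell_means = (df.groupby(["year", "cell", "group"])["anom"]
                    .mean().unstack("group"))
    paired = cell_means.dropna()  # keep only cells where both groups are present
    return paired.groupby(level="year").mean()
```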

  11. carrot eater says:

    For those wanting to keep score at home, I think Hansen and Lebedeff (1987) http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html

    provide some guidance in how to go about taking a spatial average. I think their method has been updated since then, but one can start there for inspiration. Of course, one can alter the method however they see fit, so long as the choices are defensible.

  12. Tilo Reber says:

    carrot eater:
    “Doing this is apparently something of a burden, so many countries don’t send in the data for all their stations. But they continue collecting it, so every so often somebody can go and chase it all down, and digitise it in some usable format.”

    I don’t see that as an excuse. Obviously we were collecting the data in the past – why can’t we continue to do so.

    Zeke:
    “You can see the change in stations over time in this graph:”

    Do you have the data set used to create that graph on line somewhere?

  13. Marvin says:

    Zeke,

    Tilo actually was the one who hit the nail on the head; you just didn’t want to reply to him and give him credit because carrot sympathetically replied to your article.

    Tellingitlikeitis,

    There are bias problems with the satellites as well. The reason the science-focused skeptics are so critical of the methods used is that each method is apparently supporting the other with its results. However, when looking at how the temperatures are ‘adjusted’ for land stations (we don’t know how; we just know they do it, and they don’t release this information), it would appear possible that an inherent bias could occur. Also, the satellites used are really bad at detecting temperatures in icy areas because of their reflectivity of radiation.

    Sorry for the lack of links.

    Summarising.. it’s like me saying, okay I have an astrologist who can determine your future but just to be safe we’ll double confirm it with my palm reading.

  14. Something to keep in mind:

    It doesn’t matter that much what the absolute temperature of any given station is, since the global anomaly is calculated with respect to local anomalies, not absolute temps. So if stations with discontinuous records in GHCN tend to have a colder absolute temperature than stations with continuous records, it will have no real effect on the global anomaly as long as the change in temps over time is unrelated to the baseline temp. A series with fewer high latitude stations vis-a-vis low latitude stations might actually show less warming than a series with more high latitude stations since colder places seem to be warming faster than warm places. The folks at NCDC told me something similar:

    “By the way – the absence of any high elevation or high latitude stations would likely only serve to create a cold bias in the global temperature average because we calculate the gridded and global averages using anomalies – not absolute station temperatures – as I explained in the information in my earlier e-mail to you. Anomalies for stations in areas of high latitudes and high elevations are typically some of the largest anomalies in the world because temperatures are warming at the greatest rates in those areas. So the suggestion that the absence of station data in these areas creates an artificial warm bias is completely false.”

    However, not wanting to rely on their word alone, I figured I’d do the analysis myself, looking at the mean annual anomaly across the raw data from all stations at > 60 degrees latitude (both north and south) and <= 60 degrees latitude. You can find the source code here: http://drop.io/2pqk4vg (see the lat lon version of the do file).

    The results? http://i81.photobucket.com/albums/j237/hausfath/Picture67.png

    Looks like higher latitude stations do show a significantly larger warming trend (0.28 C per decade vs 0.18 C per decade since 1960).

    We can also look at high latitude stations in the discontinuous vs. continuous group: http://i81.photobucket.com/albums/j237/hausfath/Picture69-1.png (note that this ends at 1993 due to there only being 2 stations in the discontinuous group after that).
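    For what it’s worth, the latitude-band comparison itself is simple enough to sketch in code (placeholder column names, not the exact do file linked above):

```python
# Minimal sketch of the high- vs low-latitude trend comparison.
# `anoms` is assumed to have columns ['year', 'lat', 'anom'].
import numpy as np
import pandas as pd

def band_trends(anoms: pd.DataFrame, start_year: int = 1960) -> dict:
    """Trend (deg C per decade) since `start_year` for stations poleward of
    60 degrees versus everywhere else."""
    df = anoms[anoms["year"] >= start_year]
    bands = {
        "high_lat": df[df["lat"].abs() > 60],
        "low_lat": df[df["lat"].abs() <= 60],
    }
    out = {}
    for name, sub in bands.items():
        yearly = sub.groupby("year")["anom"].mean()
        out[name] = np.polyfit(yearly.index, yearly.values, 1)[0] * 10
    return out
```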

  15. Tilo,

    See the caption below the chart in the original article. There is a link to the source code and raw data used: http://drop.io/2pqk4vg

    “Obviously we were collecting the data in the past – why can’t we continue to do so.”

    I think the point is that GHCN is continuing to do so. However, barring a massive increase in their budget and manpower, I suspect recent years in GHCN will always be dominated by the 1000 odd WMO GSN stations with automated monthly updates. Hence my argument that:

    “During that spike in station counts in the 1970s, those stations were not actively reporting to some central repository. Rather, those records were collected years and decades later through painstaking work by researchers. It is quite likely that, a decade or two from now, the number of stations available for the 1990s and 2000s will exceed the 6,000-station peak reached in the 1970s.”

  16. carrot eater says:

    Tilo, “Obviously we were collecting the data in the past – why can’t we continue to do so.”

    Tilo, GHCN didn’t even start until 1992. You seem to have this idea that the GHCN has existed for decades, but it has not. When they compiled the GHCN, they dug up every historical source and archive they could find, but not all of those sources then went on to report monthly results in real time, in the required format. So no, there was no real-time reporting to continue.

    This is all clearly described by Peterson/Vose in BAMS in 1997.

    The NOAA is at the mercy of its sources. There is no selection there by the NOAA; they simply take what the different countries send.

    Now, many of those extra stations probably do continue to exist, even though they don’t send CLIMAT reports. Gavin Schmidt has been suggesting to readers that they could provide a useful service by collating reports from the ‘missing’ stations. In particular, some station data are reported as SYNOP reports instead of CLIMAT, and somebody could come up with a way to collate them.

  17. carrot eater says:

    Marvin, you say that “we don’t know how we just know they do it but they don’t release this information”

    The methods for homogenisation, detecting urban effects, and correcting for changes in observation times have all been clearly published in a long line of papers. Look up papers by Karl, Peterson, Vose, Menne for the NCDC, and Hansen for NASA. You can say you don’t like the methods (if you then give quantitative demonstrations why), but you can’t say the methods aren’t published.

  18. Andy S says:

    Zeke

    Can you or anyone else explain why, on the map you provided, there are apparently so few monthly reporting stations in Canada compared to say Nigeria, Uruguay or Mongolia where there is much denser coverage?

  19. Andy,

    Blame Mercator projections. This map might give a better picture (and a more up-to-date one, given that it’s from 2009 instead of 1997):

    http://www.wmo.ch/pages/prog/gcos/documents/GSN_Station_Map.png

  20. carrot eater says:

    Andy S:

    I think that question is best directed towards Environment Canada. It’s a matter of how many stations they report to the right place in the right format.

    Perhaps they would retort that the NOAA should stop being so picky, and accept other report formats. I don’t know. Somebody could well ask.

  21. turnip beater says:

    There is no need for any surface stations. We can infer the temperatures from our models without any need for data. What data we have can simply be discarded as unnecessary.

    We know we are right. We know the world is warming. We have the latest model technology. The rest is number crunching.

  22. Andy S says:

    Thanks for the replies. I don’t doubt the overall results of the temperature measurements and D’Aleo’s cherry picking/deliberate fraud accusation seemed absurd to me from the start.

    However, the charge that high latitudes are undersampled seemed to be confirmed by some of the maps and the retort that this is because some of the reporting countries don’t hand over the data in the right format didn’t seem likely, at least for Canadian stations. But, as a Canadian taxpayer, I’d be annoyed if that was indeed the case.

    It’s easy for laymen, like me to get lost in the alphabet soup of all the institutional acronyms: NOAA/NASA/GHCN/NCDC/GISS/GSN/WMO, which makes it hard to know where to look to get the correct information.

  23. carrot eater says:

    Zeke:

    I took a look into Canada, and got a bit bogged down. The map you list above is based on the following (I think) station list:

    http://www.wmo.int/pages/prog/gcos/documents/GSN_Stations_by_Region.pdf

    To some extent, this list matches against Environment Canada’s station list (look for stations with CLIMAT in the remarks column)

    http://climate.weatheroffice.gc.ca/prods_servs/wmo_volumea_e.cfm

    But it gets a bit harder to see how many of those stations make it into the GHCN (data in v2.mean with station names in v2.temperature.inv), because the station IDs seem to sometimes be different between the sources. But it would be good to cross-check the sources, as people seem interested in the data flow.

    Canada’s country code adds a prefix of 403.

  24. carrot eater says:

    The JMA webpage has tons of data from Bolivia. I don’t know for sure, but I think the difference is that they’re using SYNOP in addition to CLIMAT reports.

    http://ds.data.jma.go.jp/gmd/tcc/climatview/index.html

    You may need to install some Adobe software. Their baseline is 1971-2000, so pointing the GISS mapper to that baseline should lead to directly comparable anomaly maps.

    Doing some visual spotchecks, GISS and the JMA seem to be roughly consistent on the rest of South America.

    Looking at the patterns of anomalies within the region in the JMA maps, there is pretty decent spatial correlation but there are some local variations you’d miss if you didn’t have local stations.

  25. carrot eater says:

    My last comment might be a bit off. Looks like Bolivia is recently issuing CLIMAT reports again (and I think you mentioned that), just very sporadically. The JMA map also has holes over Bolivia at various times.

  26. Andy S:

    It actually is due to the Canadian government not reporting data from those stations to the WMO. The raw Global Surface Network data reported by the countries (before NCDC touches it) is available here: http://gosic.org/gcos/GSN-data-access.htm

    If you dig deep into it, you will find that most Northern Canadian stations (other than Eureka) did not report data for the majority of the months between 2000 and 2010.

  27. carrot eater says:

    It strikes me just how far behind EM Smith is.

    Take a look in Peterson et al, 1997, “Initial Selection of a GCOS Surface Network”, BAMS 78:2145-2152.

    Figure 1 shows which stations (as of 1997) are giving monthly CLIMAT reports. Nothing from Peru or Bolivia, nor Angola, Namibia or Afghanistan. The paper notes, “As shown in Fig. 1, there are large areas of the world, such as South America from the equator to 20°S, with very sparse CLIMAT reports”

    It can’t be a conspiracy to remove stations, if in the published literature of over ten years ago, researchers discuss the issue and design a network of stations meant to better cover the world.

  28. E.M.Smith says:

    For those who have explained thermometer deletions as an accident of creation date or lack of electronic reporting, NOAA has deleted just shy of 1/3 of the stations at the start of 2010. Including Dallas Fort Worth Airport. I think your explanation “needs work”…

    http://wattsupwiththat.com/2010/02/12/noaa-drops-another-13-of-stations-from-ghcn-database/

  29. carrot eater says:

    Mr. Smith, you might consider that not every station sends out the CLIMAT reports on time. Wait a month or two.

    As for DFW, as one of your readers points out, it’s right there in the QC file.

    You are overly hasty in making assertions.

  30. E.M. Smith:

    Neither GISS nor HadCRU has released any temperature data for 2010. The GSN doesn’t have any data up yet for 2010:

    http://www.dwd.de/bvbw/appmanager/bvbw/dwdwwwDesktop/?_nfpb=true&_windowLabel=T15806838371147176099165&_state=maximized&_pageLabel=_dwdwww_klima_umwelt_datenzentren_gsnmc

    While I’d suspect any anomaly at this point would simply be due to partial reporting by GSN stations to GHCN, I’ll have to wait for the GSN data to be available to see what’s been going on. If someone uses GHCN data with a third fewer stations than Dec. 2009 to produce any sort of temperature record, it would indeed be somewhat odd, though I have a higher standard of evidence that needs to be met before I suspect any malfeasance.

  31. 1) If they could handle 6,000 stations in the 70s, why can’t they handle this load today with much greater computer capacity? To my thinking, accuracy is far more important than time considerations. Direct measurements are far more accurate than interpolations because you are left using warmer sites as reference points to perform interpolations. Look at the CRU-Hadley, NASA-GISS and NOAA-NCDC maps to see where most of the warming is occurring. The substantial warming, as indicated by red, is occurring at high latitude and elevation. This is exactly where interpolations are used to create a virtual temperature to fill empty grids created when they eliminated 75% of the stations.

    Example: in the 70s, Canada provided a total of 600 stations, 100 above the Arctic Circle. Today the world data set has been reduced by 75%, which means that Canada should be represented by 150 and 25. Instead, this second largest country in the world is represented by a total of 35 stations, only 1 above the Arctic Circle, which is arguably the warmest of the former Arctic sites. Making things far worse, if you extend down to 55 degrees latitude, which is at least half of the Canadian land mass, you still have only 3 stations; this means that there are 32 sites in the warmer half of Canada. Even worse, the 32 stations are near cities, airports, waste water treatment sites and on the coast. Thus, they are using sites affected by man-made heat pollution. In summary, they are using the warmest locations within the warmer half of Canada for their interpolations, which smears higher temperatures into the north; this explains the red color on their map for the Arctic region.

    2) The Moscow-based Institute of Economic Analysis (IEA) criticized the CRU for manipulating the data set from Russia. Note: NOAA-NCDC supplies the GHCN global data base to GISS and CRU. For those who might dispute their ability to analyze temperature data, both economics and climate use the same mathematical procedures. They state that if you use the original set of temperature stations, which still exist, you get a value 0.64 degrees C less than the modern interpolated value for Russia. This value is approximately the same as what has been claimed for the rise in temperature of the world as a whole since 1975.

    There is no justification for their use of interpolated data. Their intentions seem clear to me: since they use all 6,000 stations to calculate the base temperature values of each grid, one would expect that they should have used the same number of stations to calculate the grid base value as they use for their temperature calculations today. By using 6,000 stations, they make sure that everything following will be warmer.

    PB

  32. carrot eater says:

    Zeke: I think this is exactly why GISS doesn’t publish its monthly analysis until a couple weeks into the next month, at least. It takes a while for all the stations to come in.

    If nothing else, I now know how to read a raw CLIMAT report. There’s quite a bit of information in just 3-4 lines of code.

    Peter Bartner: Review the information here. It isn’t the fault of “they”. The NOAA can only work with the data that the different countries send in. If the countries sent more data, then NOAA would have more data to work with.

    It’s only GISS that uses interpolation; this is why GISS has been running slightly warmer than CRU over the last few years – due to the Arctic. High elevation has nothing to do with anything. It’s simply that there has been rapid warming observed at stations in and around the Arctic, and GISS uses those stations to in-fill the rest of the Arctic. CRU does not do this, and that actually gives a wrong idea, since it assumes that the Arctic is warming at the same rate as the average of the rest of the world.

    The Russia/IEA thing was much ado about nothing. Even in their own report, you can see that, if you go to the figures themselves. Or, see here:
    http://scienceblogs.com/deltoid/2009/12/russian_analysis_confirms_20th.php

    There is good justification for interpolating anomalies. It’s in Hansen/Lebedeff (1987). That paper shows how well anomalies correlate over distance, and how close the final “answer” would be to the real answer. This is how science is supposed to work – before you interpolate, you see how justifiable it is, and how far you can do it. This is exactly what Hansen did.

    As for your last statement, just look at Zeke’s plot above. The stations that dropped off aren’t any cooler than the stations that remained. To make the case air-tight, we’d just need a spatial average of those series. It’s one thing to list a bunch of assertions; but what you really have to do is a bit of math to see if they make any sense.