Beam asked in Environment › Global Warming · 1 decade ago

Who is responsible for QA (Quality Assurance), i.e. testing, of the GCMs (Global Climate Models), and who SHOULD be?

"The results of my direct correspondence about these matters with several of the US agencies and personnel involved in GCM development and applications, and analyses of CO2 and Climate Science, with a strict focus on software Verification and SQA, have shown that Verification and SQA are not considered necessary aspects of any analyses of these important issues. My correspondence has been with appropriate personnel at NASA and GFDL, among others. Correspondence with editors of several of the high-impact Journals in Climate Science, Journals such as Nature, Science, and those published by the AGU, AMS, AAAS, for examples, indicate the same results. These Journals do not have Verification and SQA requirements in place for the papers submitted for publication."-- from a letter by Dan Hughes to Rona Birnbaum

"Third, they have undergone an extensive peer-review process and been validated by numerous scientific bodies."--Rona Birnbaum

Chief, Climate Science & Impacts Branch

http://danhughes.auditblogs.com/2009/06/10/epa-hq-...

As my own training is in computer science, I am appalled that someone so high up in the food chain thinks that an "extensive peer-review" process is a valid substitute for actual code testing.

Does this dimwit have any idea what the process of regression testing encompasses? How is it that a peer reviewer, caught up with the time requirements of their own projects, has the time to write a "valid" test plan and carry it out to the extent required to validate a GCM? She is obviously out of her mind and her league. She would be best advised to sit this one out and leave it to the experts in this area.

How many of you out there even realize that today's modern software systems are NEVER bug-free? When you have a million lines of code, it is impossible to keep every error out.

Considering that fact, now think about how many times the AGW supporters have tried to convince you that you should believe based on the output of these models. Absolutely baffling: if they are so accurate, then why have they never been tested?

So what do you think: who SHOULD be responsible for ensuring QA in these systems, on which we are basing billion-dollar decisions?

Update:

pegminer:

Show me one test plan for the dozen or so GCMs that exist.

I have personally taken part in software testing. Do you know the slightest thing about testing, or what cases you need to check for when designing a test? Or do you think that because you have written a few VB applications for your various scientific projects, you are now qualified to decide when code has been tested?

Just because you have studied science does not mean you know a da** thing about software design or validation.

Update 2:

The GCM code is publicly available. Where are the test plans? As is illustrated in the excerpt from Dan Hughes' letter, nobody feels that validation is "necessary." Well, why the h*** not? We aren't talking about spending $300 on an operating system that might flash you the blue screen of death every 5 minutes; we are talking about a multi-billion dollar investment to fight global warming, based heavily on the output of this software.

Update 3:

pegminer:

I am beginning to believe that you are naive enough to think that software testing can actually consist almost entirely of running a program to "see what happens."

This is not how software testing is done. The sooner those of you in the Climate Science community, and those following you blindly, realize this, the sooner we can get some actual work done.

Update 4:

Hi Dawei:

There are any number of errors that can occur in software, from failing to check array bounds, to initializing variables incorrectly, to using multiple pointers to access the same variable so that its value changes intermittently. Then the next time you use the variable, it has changed for no discernible reason. If you're lucky, the error is catastrophic and you notice it. Then there is the whole problem that floating-point values are not exact. What happens when you cast a float (decimal value) to an integer and back? Let's say I have the number 3.247237 and I actually need it to 4 decimal places, but somewhere somebody has cast the value to an integer. Now the value is 3, with the fractional part silently truncated. Sometimes casting happens automatically, and when the value is converted back it carries representation error on top of the lost digits, because most decimal fractions have no exact binary representation.

Just imagine what could happen when enough numbers go through this process. Did the writer of the function even document that it truncates values?

Update 6:

pegminer:

You are a great one for telling others to stick to their areas of expertise. I suggest you stick to yours. My education is in software engineering. I don't care what you think will "pass" as testing; there is a right way to do it and a wrong way.

Software testing has been well studied and documented. So, like I said: because you have written a few lines of code in your lifetime, you feel that qualifies you in the area of software design and test?

If there is no test plan how can you know it was tested?

For example, let's say I have a procedure that receives a pointer to an array of functions. The first time I test it, it works. But in a subsequent iteration, someone deletes the function my procedure needs. When I tested it, it worked; does that mean my code is valid, even though I failed to perform the regression tests that would have caught the error? Oh, but that's right, you have reinvented the wheel. It's OK to test any old which way.

Update 7:

Pegminer:

You are an arrogant, self-absorbed hypocrite.

From one of your previous posts: “I think it's time for the deniers to leave science to the scientists.” http://answers.yahoo.com/question/index;_ylt=Ahgn....

I think it is time for you to leave computer science to computer scientists, such as Dan Hughes and those of us actually trained in the field.

Update 8:

Pegminer:

Although you are not a trained computer scientist, you are willing to use your limited knowledge of computing, from the few basic programs you have written within the framework of your research, as testimony to how much you know about software testing. If there are “other ways” of testing, then tell me: what is a “valid” test?

You’re a climate scientist, right? Exactly how much time do you spend in a week writing test cases? How many test plans have you written to test the code? If you have “peer-reviewed” software, when do you go back and test it again to be sure it hasn’t broken since your last test? You are a peer reviewer, right? How much time do you have in a week to allot to quality assurance of other people’s work? How much of your precious time are you willing to spend on testing in a week?

Update 9:

An excerpt from the university at which I received my degree: “Less than 10 percent of computer science programs in the U.S. are accredited by the Computer Science Accreditation Commission.” My university is one of those less than 10 percent. I know for a fact that it is very easy to write unmaintainable spaghetti code. I have worked with individuals who create such atrocities.

As I have verifiably shown you in the excerpts from Dan Hughes' letter, the science “has not been done.” The most important ingredient, testing, has been left out. Would you fly in an airliner with untested software systems?

Update 10:

I wanted to point out one last thing for people to consider if you come across this question.

Pegminer asserts that the testing of the GCMs is far greater than anything he has seen at a simulation company.

This assertion from a climate scientist is in direct conflict with the evidence, which shows that SQA is not "deemed necessary."

If the agencies responsible for generating the GCMs respond to a letter inquiring about SQA by saying it is "not deemed necessary," then how does pegminer expect us to take him at his word?

In all of pegminer's ramblings, the issue of the agencies not deeming testing necessary, which was the root of the question, was never once addressed.

Update 11:

In all of the answers, only part of the question got answered. I feel that the problem of testing the GCMs and the data is an important one. We must not let a few ego-filled scientists who want the limelight steal the show prematurely. Let the science and the methods mature.

So here is my suggestion:

Let's establish an independent organization that is solely responsible for verification of GCMs. Let this organization be responsible for developing tests and performing them on any code used in GCM calculations.

Let's also establish an independent Climate Data Standards Committee, with whose standards all experiments used to dictate public policy must comply.

We need to make the science better before it can be used to form the basis of billion dollar decisions.

8 Answers

  • 1 decade ago
    Favorite Answer

    Yes, I couldn’t agree more.

    I wonder what the result would be if these GCMs were put through the same double-blind tests that drugs companies have to go through.

    One team supplies the GCM. Another team supplies the parameters to go into it. And a third team runs the model to produce the result. None of the teams have any knowledge of who they are working with.

    I can’t help but think that the whole Global Warming panic would disappear almost overnight if this were to happen – which is why the Global Warming Liars are making sure it never does, of course. Hardly very scientific.

    Would you happily take a drug on the strength of the claims and promises of the manufacturer?

    Of course you wouldn’t. You’d insist that the drug is properly tested first. That’s science.

    With Global Warming, you are expected to trust the word of the people involved. That’s religion.

    As ever with Global Warming - don’t believe the hype.

    :::EDIT:::

    Response to Dawei.

    You raise a very interesting point in your middle paragraph that applies to many aspects of Global Warming, not just computer models. What you’re saying is that, all things being equal, around 50% of the “problems” with any data will show too much warming, while around 50% will show too little warming, or even cooling.

    Is that what you’re saying?

    The problem with the issues you mention is that, in Global Warming, the “bugs”, “mistakes”, “errors” etc, are only ever found when they throw up anomalous data that shows too little warming. If the result of the data is the warming that was expected, then the attitude is just “That’s what we expected, so that’s fine.” And no one checks the data.

    Think about it; how many times have you heard of data in the area of Global Warming being flagged as anomalously too high? It almost never happens – and when it does, it’s the sceptics that find it, not the alarmists.

    But, as you point out, it should be a 50/50 split, shouldn’t it?

    Take the ARGO data as a recent example. It initially showed rapid ocean cooling, so the data was checked and checked until errors were found. After correction the result was still slight cooling, so the checks continued. I’m not actually sure what the current situation is, but realclimate.org is saying that the ARGO data now shows warming – though I’ve seen a recent paper that says it’s showing cooling still. Regardless, the point is; had the data showed warming as expected, or even higher than expected, do you honestly think that the problems would have been found?

    Just look at the GISS “corrections” to the global temperature data. All things being equal, the positive corrections should be matched by similar negative corrections, but that’s not the case, is it? The “corrections” amount to a shocking +0.5°F!

    So, in summary, what you’re saying is absolutely correct, but, sadly, Global Warming “science” (I use the word very loosely) is so corrupt that the errors are only ever spotted (or even looked for) if they result in data that contradicts the Global Warming hypothesis.

    Are you comfortable with that situation Dawei?

    As a final comment, I note with astonishment that (at the time of editing) I have 2 thumbs down for the first part of my answer. So at least two people think it would be a bad idea to have double-blind tests done on GCMs to ensure that the predictions they are making are accurate.

    I rest my case!

  • 1 decade ago

    I really don't give a crap what models say...

    I live in Jeddah, Saudi Arabia, on the coast of the Red Sea, somewhere around latitude 21.35', longitude 39.10'.

    This isn't my first year here; I have lived here for the past 30 years.

    It's summer, August, and at the start of August temperatures are supposed to be around 45 max and 30 min (that's in Deg C).

    I'm trying to lose 3 kg, so I go for a walk every night. This isn't something new for me; I would go for a walk every night even if it's 35 Deg C and 80% humidity. The most amazing thing this summer is that we have temperatures of

    40 Deg C max

    26 Deg C min

    and humidity of at most 65% during the night and 45% during the day.

    This might be quite hot for some, but I remember the days, every July, August, and start to mid-September, when it was impossible to breathe due to the humidity; just walking from the car to home or the office was a killer.

    I'm not really missing that, but something else is missing: global warming.

    Frankly, I would say enough of the lies. Even if you show high temperatures around the world, just because you control the media and the scientists who are paid by you, you still can't tell me that every place in the world is showing signs of cooling other than on your satellites. This is the biggest lie I've ever come across in my life.

    I actually have an Excel sheet with me that I can use to predict temperatures around the world by tweaking simple parameters. I really can use it to show global cooling, heating, or whatever you want.

    Computer models remain computer models, they can never replicate the real thing.

    I was reading an interview with one of the designers of the F-22 Raptor. He said something surprising: to this day they use a wind tunnel to do the final testing of airflow over the aircraft. So someone asked him, why don't you use computers for that? He said there aren't computers powerful enough available today to replicate the wind effect.

    This is Lockheed Martin/Boeing, companies that do nothing but make aircraft, saying they cannot replicate the wind tunnel. And some people who can't even figure out tomorrow's weather perfectly will tell me that their computer model can predict world temperatures 4-5 years in advance, when it has failed to predict that it's freaking too cool for summer here?

    Bunch of lies!

  • David
    Lv 6
    1 decade ago

    Maybe I'm missing the point, and I freely admit I don't know much about software, but exactly how would the occasional error or bug make the model inaccurate?

    And if there is a serious bug in one model, wouldn't the chances be next to nil that the exact same bug would occur in another similar model? So wouldn't the use of multiple models easily eliminate the possibility that there is a bug that happens to make the predicted temperature anomalies much higher than they would be without this bug?

    Or maybe I'm misunderstanding you completely. What kind of errors are you talking about and what might happen as a result of them? And why wouldn't the use of multiple models completely account for the chances of a significant random bug in a single model?

    Edit: Beam, I still don't understand how errors of the type you describe will not be corrected by using multiple models. If one careless programmer decides to truncate at a certain place, what are the odds that the authors of another model will do the exact same thing? What about the authors of the next 10 models? I just don't see how all of these groups could have made the exact same programming bug.

    Source(s): http://upload.wikimedia.org/wikipedia/commons/a/aa...

    And Chuda, I was mostly referring to unintentional errors in software code, not the idea that politically pressured scientists are intentionally manipulating data.
  • Marion
    Lv 4
    4 years ago

    Well, all of these models account for as many things as they can. Some other factors include ocean currents, greenhouse gas concentrations, and aerosols. The answer below mine is quite wrong: while you cannot say a single event was caused by global warming, it is a fact that warmer oceans cause stronger, more powerful storms.

  • Anonymous
    1 decade ago

    I do not find this at all surprising, as I found a major set of calculation errors in a major SPC package specified by major auto manufacturers to their suppliers. The errors are in the X-bar calculation, and they end up requiring that one achieve a Cpk of 5+ in programs using AIAG formulas in order to get a Cpk of 1.33 in theirs. I worked out a calculation converter that allowed me to demonstrate the existence of the calculation error, but without access to their source code, correcting it was impossible, and they would not admit to its existence.

  • 1 decade ago

    The first answer given to this question:

    "It sounds like most of the AGW crowd believe that once a code is written there are no mistakes and the output is correct if it proves what they want."

    is written by someone who is either intentionally lying or completely out-of-touch with climate modelling, or both. (I pick the latter, actually).

    Researchers spend an ENORMOUS amount of time testing models, fixing them, et cetera. Anyone that would believe the quoted statement has probably never written a piece of code in his life, and has no business attempting to answer such a question.

    There is more than one type of peer review. The one that gets talked about all the time on here is peer review as practiced at refereed journals. At organizations of any size there is also internal peer review, which is where a lot of the code-checking will go on. It may be years between when a climate modelling code development starts and when results are published. During that time there will be constant error-checking, refining of the code, internal review, etc.

    EDIT: Beam, you're awfully testy about this. Did you get fired by NCAR or something? I don't know what sort of software you work on, but some software gets tested extensively, some does not. There are all sorts of different ways of testing software, with different goals in mind. People have spent many years using computers for atmospheric modelling. You're probably unaware of this, but that was pretty much the first application of large computers, back when "programming" was done with vacuum tubes rather than Visual Basic or whatever it is that you program in.

    I worked for six years at a simulation software company, so yes, I have participated in software testing. The testing on GCMs goes WAY beyond anything I saw there. If you think you're such hot stuff, why don't you download the code for the Community Climate System Model (CCSM) and get to work testing it? You can get it here:

    http://www.ccsm.ucar.edu/

    Play with it, test it, poke holes in it, add new physics to it, just stop making the assumption that people with Ph.D.s that spend their lives doing this are stupid. I think because other groups do things differently than what you have been trained to do in your software engineering courses, you think they're wrong.

    If you're advocating that more money be spent on programmers for climate research, I'm all for that. I don't do GCM development, but in my career I have done programming in Fortran, GW-Basic, VB, PV-Wave, IDL, Matlab and NCL, and I'd LOVE to have some programmer lackey doing that stuff for me. I'm just a hack at programming and if you're saying that I need to have someone writing and testing code for me, I'd love it. That way I could spend more time on the science.

    Another EDIT: No Beam, I believe you are naive for thinking that software testing is only done one way or with one goal. I see you only wanting to criticize software development that I don't think you really know anything about. I gave you the link to the CCSM, use that as a test case for your own GCM software test suite--show the climate scientists how it should be done! If you're only willing to bad-mouth other people's work while doing nothing concrete of your own, why should anyone respect what you are saying?

    Once more for Beam: Yes, I could tell your education was in software engineering. Bully for you. You don't think there are plenty of software engineers with all the training that you have had working on climate models? Testing is great, I'm all for testing. Just because YOU are not aware of it doesn't mean it doesn't go on. I'll even say that every climate model has bugs in the software, that's true with any large complex piece of code. Atmospheric models can be some of the largest and most complex pieces of code around, and need to be tested extensively and they are. You just don't believe it, and you don't really want to find out the details, which I'm sure you could by looking at various websites for the groups that put together the models and contacting them. You'd rather just say that you're smarter than they are and try to throw doubt on their results with nothing to back it up.

    Final EDIT: The only assumption I'm making is that the software engineers and climate scientists working on climate models know a lot more about them than people that don't. And again, I don't claim any special knowledge of computer science or software engineering, I only do what I have to (I don't generate climate models). YOU'RE the expert in software testing, go do it, stop complaining about what other people are doing. The models are out there, go for it!

  • andy
    Lv 7
    1 decade ago

    It sounds like most of the AGW crowd believe that once a code is written there are no mistakes and the output is correct if it proves what they want. If it gives them some other response, then they can change the code to get it to give the answer that they want, not necessarily the correct answer.

  • beren
    Lv 7
    1 decade ago

    It is funny how people who have no information about the simulations can actually know so much about them.
