The Brigati Verde–How To Win A Klimate Konversation.

Update: In the comments section below there is a long discussion of Cook’s 97% ‘paper’. In the comments I rely heavily on material scraped from Poptech, Jose Duarte’s blog and Andrew Montford’s critique at the GWPF. Because commenting is quick and messy, I didn’t put the sources in as religiously as I ought to have. Sorry!

Billy Connolly is a very funny comedian from the UK. He’s really good.

He has an almost-namesake in the climate world, William Connolley. He’s not funny at all. He runs a blog called Stoat (a weasel-like animal–I call him the Miserablist Mustelid in honor of his title). He’s one of three veterans of the Klimate Konsensus Team, joining Michael Tobis and Eli Rabett as hard-nosed enforcers of message purity and all-out war on opponents of their religion. Where the Brigate Rosse terrorized Italy for a decade, these three are key parts of the Brigati Verde, a green brigade of blog snipers, best at vitriol and at doing anything to evade the weaker parts of the very real climate consensus–a consensus they claim to support but undermine at every turn with their tactics.

Connolley yesterday had a post up and I was commenting there. He has the nasty habit of putting his comments in the middle of yours, and the nastier habit of eliminating comments he doesn’t like. Which he has done with me…(I don’t claim to be pure in this regard–I’ve banned two commenters and removed all their comments here, although I hope I had better justification than W.C., whose initials tend to express best my opinion of him.)

Connolley in one of his precious edits applauded another commenter’s claim that I never talk about the science. So I was surprised, to say the least, when another commenter (a certain Marco) asked me to show why I have such a low opinion of John Cook’s claim that 97% of climate scientists are on the side of the Klimate Konsensus–and Connolley vanished my comment down the memory hole:

..and Then There’s Physics

2015/02/28

Matt,
You might enjoy Tom’s recent “The perils of Great Causes” post which covers Peter Gleick and Al Gore and then ends with the classic

Oh for the days when we talked about science.

[I saw that. The mocking self-irony would be poetic, were it not unknowing -W]

 

  1. Marco

    2015/03/02

    Fuller, your “error” is the result of your willingness to accept any factoid that suits your beliefs and makes a good story. For example, I am certain you will never be able to provide any evidence that “Cook did cook the books”, but it fits what you already believe to be true. Since you generally cannot prove a negative, you can also maintain your belief that “Cook did cook the books”, because it is so much easier to believe *that* than to accept that the people you hang around with are wrong and contemptible human beings for making such false claims without evidence.

  2. Tom Fuller

    Taipei

    2015/03/02

    Marco (yawn)

    https://thelukewarmersway.wordpress.com/2015/01/14/cooking-the-consensus/

    “The ‘97% consensus’ article is poorly conceived, poorly designed and poorly executed. It obscures the complexities of the climate issue and it is a sign of the desperately poor level of public and policy debate in this country [UK] that the energy minister should cite it.”

    – Mike Hulme, Ph.D. Professor of Climate Change, University of East Anglia (UEA)

    And the comment that Connolley didn’t dare print?

    “Cook wrote on Skeptical Science, “We’re basically going with [a definition of] AGW = “humans are causing global warming” Eg [sic] – no specific quantification.” This is very different from what the IPCC says–that humans have caused 90% of global warming. This lower bar renders the conclusion almost meaningless.

    The Cook et al study database has seven categories of rated abstracts:
    1. 65     explicit endorse, >50% warming caused by man
    2. 934    explicit endorse
    3. 2,933  implicit endorse
    4. 8,261  no position
    5. 53     implicit reject
    6. 15     explicit reject
    7. 10     explicit reject, <50% warming caused by man

    The highest level of endorsement–“Endorsement level 1, Explicitly endorses and quantifies AGW as 50+%.(human actions causing 50% or more warming)” was assigned by the raters to a grand total of 65 out of the 12,000 papers evaluated. This certainly is a weak finding. Even combined with level 2’s 934 papers it amounts to less than 10%.

    The Cook et al 97% paper included a bunch of psychology studies, marketing papers, and surveys of the general public as scientific endorsement of anthropogenic climate change.”
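    The percentages in the quoted comment can be checked directly. A quick sketch, using the category counts exactly as quoted above (they differ slightly from the final figures published in Cook et al.):

```python
# Abstract counts as quoted in the comment above
counts = {
    "explicit endorse, >50%": 65,
    "explicit endorse": 934,
    "implicit endorse": 2933,
    "no position": 8261,
    "implicit reject": 53,
    "explicit reject": 15,
    "explicit reject, <50%": 10,
}

total = sum(counts.values())        # 12,271 rated abstracts
endorse = 65 + 934 + 2933           # 3,932
reject = 53 + 15 + 10               # 78

# Levels 1+2 as a share of all rated papers: under 10%, as the comment says
print(round((65 + 934) / total, 3))             # 0.081

# The headline figure drops the 'no position' papers from the denominator
print(round(endorse / (endorse + reject), 3))   # 0.981
```

    Both sides of the dispute below turn on which of these two denominators one considers legitimate.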

    So, Connolley, when you refuse to allow conversation about the data used in a peer-reviewed paper purporting to advance our understanding of climate science, you’re left–I surmise–with your toadies and a circle of jerks basically congratulating each other on the purity of your thoughts.

    Your blog’s motto is ‘Taking Science By The Throat.’ Perhaps I may speak on behalf of science and request that you quit squeezing.

121 responses to “The Brigati Verde–How To Win A Klimate Konversation.”

  1. Kevin’s comment begins here:

    Spin. ClimateBall. Denial. Take your pick for how you wish your position to be characterized.

    The 97% consensus is real. It’s been shown by multiple different methods. I know it’s a nutter meme, but it’s time to ditch it and move on.

    If you were interested in ‘truth’ you would acknowledge that the Cook et al ratings only looked at abstracts. Obviously there is a limited amount of space available and scientists just as obviously use the abstract to highlight their paper’s interesting/new results. Since human-induced climate change is neither a new nor interesting result why would we expect it to receive any particular attention in the abstracts?

    But in addition to rating the abstracts Cook et al also surveyed the authors. The authors wrote the papers. Their knowledge was not limited to the abstracts. The author survey yielded the same result as the reported abstract ratings.

    The 97% consensus exists. Get over it.

    • Kevin, now you’re just spouting religious doctrine. It’s the new Virgin Birth.

      They cooked the books and it’s there in print. There have been two reputable surveys that I know of–von Storch et al in 2008 and Verheggen et al last year. Both were done by staunch defenders of the consensus. Both came back with about 81% support from practicing climate scientists.

      Quit trying to blow smoke up everybody’s butt. It demeans you and amuses the rest of us unnecessarily. Cook was a fool and he showed it in that piece of junk.

      • Kevin ONeill

        Tom – Cook et al had a ‘response rate’ of 95.8%. Since their survey was of the literature itself, the 12,465 papers selected by the search criteria had no choice but to participate. You would think the response rate would have been 100%, but after eliminating papers that were non-peer-reviewed, not related to climate, and papers without abstracts, the resulting number of papers was 11,944. I don’t think either von Storch or Verheggen were anywhere in that ballpark.
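        Kevin’s 95.8% figure is simple arithmetic on the two counts he gives:

```python
selected = 12_465  # papers initially matched by the search criteria
rated = 11_944     # remaining after dropping non-peer-reviewed, off-topic,
                   # and abstract-less papers

print(round(rated / selected * 100, 1))  # 95.8
```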

      • Kevin, you’re better off sticking with thinly veiled insults than discussing research. Your comment shows ignorance of some pretty basic terms.

        Papers don’t respond. They are selected. Authors may or may not respond.

        von Storch and Verheggen were smarter than Oreskes, Anderegg, Prall et al and Cook. They surveyed the authors, rather than having a bunch of kids whiz through abstracts.

      • Kevin ONeill

        Tom – I really thought you understood this stuff a little better than you’ve shown.

        The headline result from Cook et al was a survey of the literature itself. Not people. They did this by rating 12k abstracts. Think of the abstract in this study as a proxy for the paper itself.

        The question then becomes, how well does the proxy (rated abstract) represent the ‘true’ value of the parameter of interest (the full paper).

        To validate the proxy, they asked authors to self-rate their papers. They could then compare the abstract rating to the ‘true’ value; using the author’s self-rating as the true value.

        If there had been a large discrepancy between the rated abstracts and the self-ratings, then we’d know the proxy was an invalid indicator for the true value.

        The real test was the self-rating of the ‘No Position’ papers. Since the authors wanted to be conservative in their ratings and err on the side of neutral, many papers that the authors themselves felt belonged in Endorse or Reject were abstract rated as ‘No Position’. But the self-rated ‘No Position’ papers also followed the same distribution as the Endorse/Reject abstract ratings – with Endorse outnumbering reject by 20:1.

        The author self-rating is only necessary to show the validity of the method – rating abstracts to obtain an estimate of the ‘true’ value of the paper as a whole as it pertains to the consensus on AGW.

        By doing this they avoid the whole self-selection and response rate problems that are inherent in other methods – i.e., von Storch and Verheggen. If anything, they were probably too conservative.

      • Hi Kevin

        I’m afraid you really don’t understand what you’re discussing. Having authors self-rate their papers is fine (even though it is not necessarily dispositive). But you have to have a sample of authors that is representative of the universe of papers you have.

        And you don’t get that by hoovering up publicly available email addresses and looking at the 25% who respond. You need a sample design and most probably a sample frame. You need to have a program of recontacting authors until you get the right percentage of respondents for each category of endorsement or lack thereof.

        Sorry.

      • Kevin ONeill

        Tom – did you see the quotes around ‘response rate’? You’re the one comparing apples to oranges. The Cook et al result is of the literature itself, yet you’re comparing it to self-selected surveys. So to use *your* comparison, the ‘response rate’ for the literature selected is the number of rated abstracts divided by the number initially selected. Perhaps you should read closely, think twice, then respond.

      • I’m sorry Kevin. You’re not making sense. The 25% response rate I cite refers to the recontact survey of the papers’ authors.

      • Kevin ONeill

        “bunch of kids” ?

        Just the ones I recall off the top of my head —

        John Cook – degree in Physics
        Dana Nuccitelli – BSc astrophysics, MSc Physics
        Sarah Green – chair of chemistry at Michigan Tech
        Andy Skuce – retired, BSc in geology and MSc in geophysics
        Ari Jokimäki – BA computer engineering
        Riccardo Reitano – professor astrophysics

        Most of the coauthors were professors in a hard science.
        Kids? Who are you referring to?

      • Who were the raters?

      • Kevin ONeill

        Tom – you made an allegation/assertion or just some random insult/ad hominem statement: “… rather than having a bunch of kids whiz through abstracts.”

        When pointed out that the raters I knew of offhand were, in fact, degreed in the hard sciences – many with advanced degrees – you ask who the raters were.

        This is typical of how you operate. You make asinine statements – then ask for the facts.

      • Kevin, I thought the people you named were co-authors, primarily because you referred to them as ‘co-authors.’ I was under the distinct impression that those rating the papers included others recruited from the SS website community.

  2. Tell me how many examples you want before you cry uncle, Kevin.

    Dr. Idso, your paper ‘Ultra-enhanced spring branch growth in CO2-enriched trees: can it alter the phase of the atmosphere’s seasonal CO2 cycle?’ is categorized by Cook et al. (2013) as: “Implicitly endorsing AGW without minimizing it”.

    Is this an accurate representation of your paper?
    Idso: “That is not an accurate representation of my paper.”

  3. Tom – You have neglected the author survey. No one expected or should expect the abstract ratings to be perfect. Is one example of what you believe to be an error supposed to prove something? Would 3? 5? 10? You need to establish a lot more than that to move the numbers. #freethetol300

    BTW, *I* would rate Idso’s abstract as implicitly accepting AGW without quantification. Or neutral, but I would tend toward the former. Here are the first three sentences:

    Since the early 1960s, the declining phase of the atmosphere’s seasonal CO2 cycle has advanced by approximately 7 days in northern temperate latitudes, possibly as a result of increasing temperatures that may be advancing the time of occurrence of what may be called ‘climatological spring.’ However, just as several different phenomena are thought to have been responsible for the concomitant increase in the amplitude of the atmosphere’s seasonal CO2 oscillation, so too may other factors have played a role in bringing about the increasingly earlier spring drawdown of CO2 that has resulted in the advancement of the declining phase of the air’s CO2 cycle. One of these factors may be the ongoing rise in the CO2 content of the air itself; for the aerial fertilization effect of this phenomenon may be significantly enhancing the growth of each new season’s initial flush of vegetation, which would tend to stimulate the early drawdown of atmospheric CO2 and thereby advance the time of occurrence of what could be called ‘biological spring.’

    Trying to parse this, relevant to the goals of the study, I come up with:

    The declining phase of the atmosphere’s seasonal CO2 cycle has advanced by 7 days, possibly as a result of increasing temperatures. Many factors may play a role in the spring drawdown of CO2; one of these factors may be the ongoing rise in the CO2 content of the air itself.

    Authors have full knowledge both of their papers and their own personal views. The abstract raters do not. But the author survey shows the same results as the abstract ratings. You fail to account for this.

    • Dr. Scafetta, your paper ‘Phenomenological solar contribution to the 1900–2000 global surface warming’ is categorized by Cook et al. (2013) as: “Explicitly endorses and quantifies AGW as 50+%”

      Is this an accurate representation of your paper?
      Scafetta: “Cook et al. (2013) is based on a strawman argument because it does not correctly define the IPCC AGW theory, which is NOT that human emissions have contributed 50%+ of the global warming since 1900 but that almost 90–100% of the observed global warming was induced by human emissions.

      What my papers say is that the IPCC view is erroneous, because about 40–70% of the global warming observed from 1900 to 2000 was induced by the sun. This implies that the true climate sensitivity to CO2 doubling is likely around 1.5 C or less, and that the 21st-century projections must be reduced by at least a factor of 2 or more. Of that warming, the sun contributed (more or less) as much as the anthropogenic forcings.”

      • Scafetta is probably the best at quantifying the natural component, and I think he puts it at over 60%. 1.6C for a doubling of CO2 is a good maximum.

    • How do you know Idso replied to the recontact survey?

  4. Tom – it should also be noted that there were actually *three* 97% results from Cook et al – and one 87% result.

    Among abstracts that assigned a quantitative contribution of human activity to the warming, 87% endorsed dominant human causation.

    Among abstracts that explicitly expressed a cause of global warming, 97.6% endorsed human causation.

    Among abstracts that explicitly or implicitly expressed a cause of global warming, 97.1% endorsed human causation.

    96.4% of the authors self-rated their papers as agreeing with the consensus on AGW.

    Ok, that’s two 97% and one 96% (and one 87%). So sue me. Note that Idso’s paper – even if accepted as an error – would not change 3 of these results.

    Ho hum.

    • Dr. Shaviv, your paper ‘On climate response to changes in the cosmic ray flux and radiative budget’ is categorized by Cook et al. (2013) as: “Explicitly endorses but does not quantify or minimise”

      Is this an accurate representation of your paper?
      Shaviv: “Nope… it is not an accurate representation.”

      • Kevin ONeill

        The authors’ self-ratings were 96.4%; you haven’t any answer to that, that I can see 🙂

        Hey – but keep up the ClimateBall and pick out 8 or 10 errors that won’t/don’t change the result one iota 🙂

        Meanwhile ignore the big picture – the result is robust.

      • It’s not at all robust if we don’t know who participated in the recontact survey and how it was administered.

      • How do you know Shaviv completed the recontact survey?

  5. Tom, when you appear to be getting attacked from “both sides,” it probably means that you are dealing with a false dichotomy and you are doing something right.

  6. Tom – Regarding the comment you characterize as one WC wouldn’t dare print: maybe because it’s an old tired meme that relies on nonsense to try to score ClimateBall points.

    Using the “no position” papers in the denominator is foolish. The percentages would then change simply based on what search criteria were used – a nonsensical result.

    The question is/was: What is the level of scientific consensus that human activity is very likely causing most of the current global warming. The answer could be anywhere from 0% to 100% – but for any given point in time there is one ‘true’ answer. The task then is to design a test/survey/poll to determine that answer.

    To show the insanity of using the ‘no position’ papers in the denominator, assume we had the time and resources to enlarge the search criteria to include *every* peer-reviewed paper written in the last twenty years. The denominator would be in the hundreds of thousands – if not millions. Yet almost all of these would be in disciplines unrelated to climate science. So using *your* maths, the answer would be infinitesimally small, approaching 0%.

    On the other hand, if the search criteria could be devised so that it *only* selected papers that had a position on global warming, then the denominator would be relatively small. And your maths would result in an answer close to 100%.

    The actual number of papers expressing an opinion would not change, the ‘true’ answer would not change, but the results of your math would yield an answer near 0% in one instance and near 100% in the second. I.e., your maths are worthless in trying to get close to the truth.
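    Kevin’s denominator argument can be made concrete with a small sketch. The counts below are hypothetical, chosen only to illustrate the mechanism he describes:

```python
# Hypothetical counts: the papers expressing a position stay fixed;
# only the size of the 'no position' pool changes with the search criteria.
endorse, reject = 3900, 80

for no_position in (8_000, 500_000):
    share_of_everything = endorse / (endorse + reject + no_position)
    share_of_expressed = endorse / (endorse + reject)
    print(no_position, round(share_of_everything, 3), round(share_of_expressed, 3))

# 8000 0.326 0.98
# 500000 0.008 0.98
```

    Widening the search floods the first denominator and drives that ratio toward zero, while the share among papers expressing a position is untouched.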

    That WC doesn’t want to waste anyone’s time any longer on this type of nonsensical analysis is hardly surprising. It’s not like it’s new or hasn’t been rebutted before. Just the same old tired crap trotted out yet again.

    • Dr. Morner, your paper ‘Estimating future sea level changes from past records’ is categorized by Cook et al. (2013) as having: “No Position on AGW”.

      Is this an accurate representation of your paper?
      Morner: “Certainly not correct and certainly misleading. The paper is strongly against AGW, and documents its absence in the sea level observational facts. Also, it invalidates the mode of sea level handling by the IPCC.”

      • Kevin ONeill

        And the proper question to ask is not – is this an accurate representation of your paper, but is this an accurate representation of the abstract. The author’s self ratings then tell us whether the abstract ratings are a proxy for the paper’s views. You really don’t understand this at all do you?

    • Monckton would have qualified as part of the consensus using Cook’s definition.

    • Yet Cook used many papers that had nothing to do with climate science: “Acker, R. H., & Kammen, D. M. (1996). The quiet (energy) revolution: analysing the dissemination of photovoltaic power systems in Kenya. Energy Policy, 24”

The abstract says nothing about whether global warming is taking place, or whether it’s caused by humans – per the study’s boundaries it was correctly classified as no position.

    The authors’ self-ratings indicated 96.4% agreed with the consensus 🙂

    • Dr. Soon, your paper ‘Polar Bear Population Forecasts: A Public-Policy Forecasting Audit’ is categorized by Cook et al. (2013) as having: “No Position on AGW”.

      Is this an accurate representation of your paper?
      Soon: “I am sure that this rating of no position on AGW by CO2 is nowhere accurate nor correct. Rating our serious auditing paper from just a reading of the abstract or words contained in the title of the paper is surely a bad mistake.”

  8. The authors’ self-ratings were 96.4%. No answer to that, Tom? Just keep quoting the 3.6% – hey, that’s a representative sample 🙂

    • Dr. Carlin, your paper ‘A Multidisciplinary, Science-Based Approach to the Economics of Climate Change’ is categorized by Cook et al. (2013) as: “Explicitly endorses AGW but does not quantify or minimize”.

      Is this an accurate representation of your paper?
      Carlin: “No, if Cook et al’s paper classifies my paper, ‘A Multidisciplinary, Science-Based Approach to the Economics of Climate Change’ as “explicitly endorses AGW but does not quantify or minimize,” nothing could be further from either my intent or the contents of my paper.”

  9. I was surprised to learn at Lucia’s that Connolley isn’t quite so doctrinaire. He has made bets against the consensus with regards to ice levels. Perhaps he is trying to enforce a consensus, so he can profit off the extreme positions he makes others take?

  10. Tom – do you understand statistics? Do you understand sampling error? Have you actually ever *read* Cook et al (2013)?

    Look at their Table 5. That tells us that you could list dozens more of these little denier factoids and it will not change the results – they are already accounted for in the paper. They are already accounted for in the results.

    Position        Abstract rating    Self-rating
    Endorse AGW     791 (36.9%)        1342 (62.7%)
    No position     1339 (62.5%)       761 (35.5%)
    Reject AGW      12 (0.6%)          39 (1.8%)

    Of the 2142 papers that were self-rated, 1339 were rated per the abstract as no position or undecided. The authors moved 551 of these to Endorse. The authors moved 27 of these to Reject. I believe that’s a 20:1 ratio.
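    The 20:1 figure follows from the counts just quoted:

```python
# Of the 1339 abstract-rated 'no position' papers that were self-rated,
# the comment above says authors moved 551 to Endorse and 27 to Reject.
moved_to_endorse = 551
moved_to_reject = 27

print(round(moved_to_endorse / moved_to_reject, 1))  # 20.4 – roughly 20:1
```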

    The authors’ self-rating was 96.4%. It is telling that you cannot/will not respond to that. For every ‘reject’ you cite there are dozens of ‘no position’ papers that *did* accept the consensus position. It cuts both ways, but you only cite the ones whose position you agree with. You are NOT interested in truth – merely playing ClimateBall.

    All this little charade shows is that you obviously come to this with a very deep prejudice.

    • Tol: “WoS lists 122 articles on climate change by me in that period. Only 10 made it into the survey.

      I would rate 7 of those as neutral, and 3 as strong endorsement with quantification. Of the 3, one was rated as a weak endorsement (even though it argues that the solar hypothesis is a load of bull). Of the 7, 3 were listed as an implicit endorsement and 1 as a weak endorsement.

      …from 112 omitted papers, one strongly endorses AGW and 111 are neutral”

      On Twitter Dr. Tol had a heated exchange with one of the “Skeptical Science” authors of Cook et al. (2013) – Dana Nuccitelli,
      Tol: “@dana1981 I think your data are a load of crap. Why is that a lie? I really think so.”

      Tol: “@dana1981 I think your sampling strategy is a load of nonsense. How is that a misrepresentation? Did I falsely describe your sample?”

      • Kevin ONeill

        “There is no doubt in my mind that the literature on climate change overwhelmingly supports the hypothesis that climate change is caused by humans. I have very little reason to doubt that the consensus is indeed correct.” — Richard Tol

      • There is no doubt in my mind either. The literature does support the consensus view. Hell, I support the (narrow, by which I mean scientific) consensus view.

        But making up a phoney paper using incredibly shoddy methodology to create the impression of a 97% consensus when it is closer to 80% does not serve anybody’s interests. It reduces the credibility of science and associates the hard work of James Hansen with the idiocy of John Cook and Dana Nuccitelli.

        The amazing thing is that it’s the skeptics who are angry. It is the consensus that got screwed by this piece of junk.

    • Kevin, you ask “Tom – do you understand statistics? Do you understand sampling error? Have you actually ever *read* Cook et al (2013)?”

      Yes.
      Yes.
      Yes.

      It’s a pity Cook couldn’t answer yes to the first two…

      • Kevin ONeill

        Yes, Tol. Gremlins ate his data. Nice source, Tom 🙂
        #freethetol300

        96.4% of the authors self rated their papers as endorsing the consensus. Still silent on that little bit of trivia? I wonder why …..

      • (From Jose Duarte’s blog): Now, let’s look at a tiny sample of papers they didn’t include:

        Lindzen, R. S. (2002). Do deep ocean temperature records verify models? Geophysical Research Letters, 29(8), 1254.

        Lindzen, R. S., Chou, M. D., & Hou, A. Y. (2001). Does the earth have an adaptive infrared iris? Bulletin of the American Meteorological Society, 82(3), 417–432.

        Lindzen, R. S., & Giannitsis, C. (2002). Reconciling observations of global temperature change. Geophysical Research Letters, 29(12), 1583.

        Spencer, R. W. (2007). How serious is the global warming threat? Society, 44(5), 45–50.

        There are many, many more excluded papers like these. They excluded every paper Richard Lindzen has published since 1997. How is this possible? He has over 50 publications in that span, most of them journal articles.

      • Kevin ONeill

        If you understand statistics – then why would you attempt to put all the ‘no position’ papers in the denominator? If you understand sampling error, then why would you cite errors as somehow proving something? Errors are expected. You show no evidence of understanding either.

        96.4% of author self-ratings agreed with the consensus position. Abstract ratings were 97.1%. And 98.2% of the papers abstract-rated ‘no position’ whose authors felt they did express a position agreed with the consensus.

        96.4, 97.1, 98.2 – take your pick.

        But let me repeat, 96.4% of author self ratings agreed with the consensus position. You can cite *all* of the 3.6% and it doesn’t change the numbers one bit.

      • Let’s talk about sampling, Kevin. How do you conduct a literature search on climate change and manage not to include any papers by Lindzen?

        You don’t–unless you are intentionally biasing your sample. Cook copied Oreskes in doing exactly the same thing.

        Sample bias has killed a lot of research studies. Including this one. And it’s not even the most important mistake Cook made.

        It’s a pathetic imitation of research from the Barbizon school of science: “Be a scientist–or just look like one!”

      • Kevin ONeill

        Tom, you claim to understand statistics and sampling – and now you’re complaining that not every possible paper was included? You are a hoot 🙂

        “The overall coverage percentage is estimated to be 8.7%, which means that to achieve complete coverage we would have to have looked at 140,000 papers! … Reading one of them each day would take 375 years.” — Ari Jokimäki, one of the group that assisted Cook et al.

        Statistics. Sampling. Are you *sure* you understand them?

        96.4% of self-ratings agreed with the consensus.

      • 96% of whom, Kevin? (See KCH’s comment below.) Who was invited to respond and why? Who chose to respond? What steps were taken to insure they were representative? How many times were they contacted? How was the invitation worded? Who was not invited and why?

        1,200 responses to 8,500 invitations is not a good response rate and would cause problems for analysis. Was a sampling frame designed for author contacts?

      • Kevin, there is no discussion of what steps they took to insure their sample was representative.

      • They used the global search term ‘global warming’ and got this paper: “Gampe, F. (2004). Space technologies for the building sector. Esa Bulletin, 118, 40–46.” but missed this one: “Spencer, R. W. (2007). How serious is the global warming threat? Society, 44(5), 45–50.”

        I do understand sampling. I also understand sample bias.

      • Reading one of them each day might take decades, but one rater bragged on the discussion board that he had rated 100 in one day. Must have been really, really careful in his evaluations.

      • Kevin, you don’t know if any of the authors I have quoted here took the recontact survey. You don’t even know if they were invited.

  11. A paper misclassified as a climate change paper explicitly endorsing AGW:

    Boykoff, M. T. (2008). Lost in translation? United States television news coverage of anthropogenic climate change, 1995–2004. Climatic Change, 86(1-2), 1–11.

  12. The authors’ self-ratings were 96.4%. Errors cut both ways, Tom. Why haven’t you cited errors in the opposite direction? You’re not interested in truth – just playing Climateball 🙂

    96.4% of the authors’ self-ratings agreed with the consensus 🙂

    • You do love that 96.4%, don’t you?

      From: http://scholarsandrogues.com/2013/05/15/cook-et-al-2013-climate-consensus/

      “…Cook et al 2013 contacted 8547 authors of the papers and asked them to self-rate their own papers. 1200 authors responded…”

      and

      “But something that isn’t discussed or mentioned in the Supplementary Information that I can find is a discussion of the representativeness of the paper authors who responded to requests to self-rate their own papers. Generally speaking people who respond to polls are the most energized by the questions being asked, so we could reasonably expect that the scientists who responded would be most likely to either endorse or reject the consensus.”

      I’d be a little more careful about pinning your beliefs on such a poorly substantiated number.

      • Kevin ONeill

        kch – I assume you’re another one that hasn’t actually read the paper. The results of the self ratings are part of the paper. Look at Table 5. I’ve already reposted it here once. I’ll do it again.

        Position        Abstract rating    Self-rating
        Endorse AGW     791 (36.9%)        1342 (62.7%)
        No position     1339 (62.5%)       761 (35.5%)
        Reject AGW      12 (0.6%)          39 (1.8%)

        All of the papers are listed on the web. If there was any real *there* there then it would be easy enough to duplicate the research. Either showing a sampling bias or a self-selection bias. For all the time that pseudoskeptics have spent crying about Cook et al they could have done the research several times over.

        It’s easy to claim that there *could* be major errors, bias, etc. It’s possible that *every* single paper they *didn’t* look at actually rejects AGW. Of course the odds of that are nil. Zero. Nada. What you have is handwaving. What Tom has is a poor understanding of stats and sampling.

        As Table 5 shows – for every ‘no position’ paper that the authors believe was a Reject, there are 20 ‘no position’ papers the authors believe should be an Endorse. This is *despite* the fact that the Reject ratio slightly increased – which is the opposite of what you hypothesized. Or whoever you quoted hypothesized.

      • Kevin, who was invited to participate in the recontact survey? How were they chosen? Who did respond and how were they different from those who declined to participate? Was a sample frame used? Why is the response rate so low?

        Why are you ducking my questions?

      • Kevin ONeill – You’d be wrong on that assumption. I also have read quite a number of discussions of Cook et al from both sides. I referenced, and quoted from, a *supportive* blog for a reason – if a *supporter* questions that one number, I’d think it might be best to reflect on his reasoning for doing so.

        For a more thorough slagging of the paper, though, you might wish to read (if you haven’t already) Jose Duarte’s piece:

        http://www.joseduarte.com/blog/cooking-stove-use-housing-associations-white-males-and-the-97

        Or possibly you’d prefer Richard Tol. Or Roman M at Climate Audit. Or Brandon Shollenberger at The Blackboard. There are others as well – I can get you links if you need them. All point out pretty shoddy work, in both the design and the execution of the project.

        On the point of duplication, I will add that it is never necessary to duplicate work to demonstrate it has no validity. It is enough to show errors in method. [And really, say Heartland were to duplicate the study and come to – surprise, surprise – opposite conclusions, would you accept that as definitive? I sure wouldn’t. Duplication is a pretty sorry red herring here.]

        To quote Duarte: “Meaningful results require valid methods.” The people above pretty convincingly showed – at least to me – the invalidity of the method, hence meaningless results. Your mileage obviously varies.

    • Who were the 96.4%? How were the authors chosen? How many times were they contacted? How was the recontact invitation worded? What steps were taken to ensure that those who responded were representative of either the entire database or the climate science community? Why is the response rate only 15%? Such studies usually achieve between 25% and 35%.

      • Kevin ONeill

        Geez Tom, idk – why don’t you read the paper, the supporting materials, and the many comments the authors have written about it? Otherwise it appears like you’re just whining – since most or all of your questions are answered in the materials I just cited 🙂

      • Ah, I begin to see. I have the supporting data in hand. Many questions are not answered, but their sample is opportunistic: if a scientist’s email address was publicly available, they were invited to complete the survey.

        Not good sampling.

        Response rate was poor: 2,142 replies from 8,536 invitations, a 25% rate.

        No indication of how many rounds of invitations were sent out. The text of the invitation is not available.

        No explanation of what the protocol was for respondents who had more than one paper evaluated.

        The SI is definitely supplemental. Information? Not enough.

  13. Paper misclassified as Climate Change paper endorsing: Tran, T. H. Y., Haije, W. G., Longo, V., Kessels, W. M. M., & Schoonman, J. (2011). Plasma-enhanced atomic layer deposition of titania on alumina for its potential use as a hydrogen-selective membrane. Journal of Membrane Science, 378(1), 438–443.

  14. From the paper: “Each abstract was categorized by two independent, anonymized raters.”

    From the discussion board where these ‘anonymous’ and ‘independent’ raters discussed the papers and the ratings they gave: “We have already gone down the path of trying to reach a consensus through the discussions of particular cases. From the start we would never be able to claim that ratings were done by independent, unbiased, or random people anyhow.”

    What a joke.

  15. The rater said, “I was mystified by the ambiguity of the abstract, with the author wanting his skeptical cake and eating it too. I thought, “that smells like Lindzen” and had to peek.”

  16. From the paper (again via Jose Duarte): “Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden. Each abstract was categorized by two independent, anonymized raters.”

    From the raters’ discussion board: “FYI, here are all papers in our database by the author Wayne Evans:”

    • Hey, let’s quote some more private communications out of context – that’s the honorable thing to do 🙂

      • Truth hurts, don’t it?

      • Kevin ONeill

        Truth? Selective and out of context quotations rarely bear any resemblance to ‘truth’ – but then truth isn’t your game, ClimateBall is.

        Last time I checked the raters were human – surprise. And at the beginning of the process they were all worried about bias, whether their ratings were technically in line with the guidelines, how to handle papers that seemed to fall into grey areas, etc.

        It’s clear from the discussion that they *wanted* to err towards neutral – to remove any bias. As pointed out – the ‘Hockey Stick’ paper would have received a neutral rating.

        It’s also clear that they realized there would be disparate ratings early in the process – but not to worry, they had 12k papers to rate *twice* and discrepancies between 1st and 2nd ratings would be settled then – so the first rating set of 12k basically served almost as a training run.

        And how did they do? Author self-ratings were 96.4%

        Go figure 🙂

      • Feel free to supply context. Just a link will do.

      • Kevin ONeill

        Ah, then you *are* quoting out of context and your source did as well. Nice to have my suspicions confirmed 🙂

  17. Here is a paper classified as a climate paper endorsing: “Douglas, J. (1995). Global climate research: Informing the decision process. EPRI Journal.” It is interesting in part because… it had no abstract…

  18. How to create the results before you even start. You tell your researchers what you’re looking for–what the right answer is: “In one exchange, Cook said ‘It’s essential that the public understands that there’s a scientific consensus on AGW. So Jim Powell, Dana [Nuccitelli] and I have been working on something over the last few months that we hope will have a game changing impact on the public perception of consensus. Basically, we hope to establish that not only is there a consensus, there is a strengthening consensus.’”

    • And yet the authors’ self-ratings were 96.4%. Odd, that. Funny how you can’t do anything but handwave and play ClimateBall 🙂

      • Actually the survey information is not available at IOP. They refer readers to the Skeptical Science website, home of… John Cook. Sadly, you have to register to view the description of the survey. Sadly, they won’t let me register.

        Klimate Science at its best.

      • As you won’t answer questions about the authors recontacted for a survey, your hand waving of a mythical 96% is about as valid as Cook’s 97%. Which is to say not at all.

      • Kevin ONeill

        Tom – whine 😦

        I’m not your research arm. If you can’t figure out how to access the data online, that’s on you. I thought you’d already read this material?

        In any case, the proper channel is to ask one of the authors to perhaps forward it to you – not some random commenter in the blogosphere.

        You’re not even on a fishing expedition, you’re just playing ClimateBall and I don’t have to indulge your every whim.

      • Your style of thinking is pretty evident here for all to see. I read the paper. The methods section is protected by the registration at John Cook’s SS. And I guess I’m not part of the climate elect, as I’m not permitted to register.

      • You keep not answering. Which authors?

      • Kevin ONeill

        And?? So you’ve never read or studied the methods before? Why are you blathering on about it then? You just read some pseudoskeptic blog posts about it?

        A regular here just recently said: “The only people who believe it get their talking points from websites; they don’t read original literature.”

        I didn’t realize he was talking about you.

      • As I believe I mentioned several times, I did read the paper. So, which authors? Opportunistic sampling offers no hope that the sample will represent the universe of papers examined. This is worsened by the poor response rate: the standard assumption is that a low response rate means the survey was not relevant to the bulk of those invited, and that only the most motivated responded, as kch pointed out previously.

    • English your 2nd language, Tom? “We hope to establish…” tells me the end result is in doubt. I.e., we *hope* the data bears out our hypothesis. Seems pretty standard stuff for scientific research to me.

      The people at CERN were running tests *hoping* to find a Higgs particle – that was the goal – and they did! Somebody call the scientific police – their result was predetermined. Jose Duarte where are you in our time of need?

      Jose Duarte? Anytime I want an expert in a hard science I always ring up the nearest social psychologist or economist – makes perfect sense to me 🙂

      • Kevin ONeill –

        “…Anytime I want an expert in a hard science I always ring up the nearest social psychologist or economist…”

        Come on, you can do better than this inane line of defense. It’s too easily answered:

        In what fashion was this paper hard science as opposed to, say, social psychology? It’s a badly flawed survey with a veneer of stats. I’ve seen science fair ant farms with more “hard science”. A social psychologist would seem to me to be well qualified to pass judgement on it, as that branch of science does a lot of this kind of interpretive work.

        And if it was hard science, wouldn’t it be damaged (by your own reasoning) by the presence in the author list of a cartoonist, a blogger, a psychologist and an oil industry consultant?

      • Well, at least they didn’t ask a railroad engineer.

      • 我會說一點 (“I can speak a little”). No? Well, I’ll struggle along in English, wherever it ranks on my list of languages. (Se io credesse che mia risposta fosse… – “If I believed that my answer were…”, Dante, Inferno XXVII.)

        Their hope was in vain, as was their research. But that’s what happens when you know the answer before you start asking the questions. You screw it up.

      • Kevin ONeill

        “We hope to establish that the Higgs particle exists”

        FRAUD

      • Kevin ONeill

        kch – Have you read the abstracts? Read them, try to rate them, and then come back and tell me this isn’t a hard science experiment.

        Try this out with a soc-psych degree:

        The regional climate change index (RCCI) is employed to investigate hot-spots under 21st century global warming over East Asia. The RCCI is calculated on a 1-degree resolution grid from the ensemble of CMIP3 simulations for the B1, A1B, and A2 IPCC emission scenarios. The RCCI over East Asia exhibits marked sub-regional variability. Five sub-regional hot-spots are identified over the area of investigation: three in the northern regions (Northeast China, Mongolia, and Northwest China), one in eastern China, and one over the Tibetan Plateau. Contributions from different factors to the RCCI are discussed for the sub-regions. Analysis of the temporal evolution of the hot-spots throughout the 21st century shows different speeds of response time to global warming for the different sub-regions. Hot-spots firstly emerge in Northwest China and Mongolia. The Northeast China hot-spot becomes evident by the mid of the 21st century and it is the most prominent by the end of the century. While hot-spots are generally evident in all the 5 sub-regions for the A1B and A2 scenarios, only the Tibetan Plateau and Northwest China hot-spots emerge in the B1 scenario, which has the lowest greenhouse gas (GHG) concentrations. Our analysis indicates that subregional hot-spots show a rather complex spatial and temporal dependency on the GHG concentration and on the different factors contributing to the RCCI.

        Implicit, explicit, impacts, methods, endorses, rejects ???? I’m sure your resident economist has an opinion too …. of course opinions are like ….

      • Fraud? Are you dipping into the cooking sherry a bit early?

      • “Try this out with a soc-psyche degree:” What degree did the rater possess who evaluated 765 of those abstracts in three days?

      • Kevin ONeill

        Tom – you were the one that claimed the books were cooked and quoted this in support: “Basically, we hope to establish that not only is there a consensus, there is a strengthening consensus.”

        I was the one that said ‘hope’ is not indicative of a predetermined outcome; and that most if not all scientific research ‘hopes’ to prove something or other – even CERN ‘hoped’ to prove the existence of the Higgs Boson.

        So if your (nonsensical) reasoning holds, then CERN too was guilty of cooking the books (FRAUD) and arriving at a predetermined result.

        But as I said, parenthetically, this is complete nonsense. Apparently the English language eludes your grasp. ‘Hope’ does not equal ‘predetermined’ – it’s just another failed ClimateBall move on your part.

      • Telling your research team what you hope the findings are is… well… not good practice. It’s enough to get the findings tossed in the circular file. Interviewer bias is almost as bad as sample bias. In tandem, it’s pretty prejudicial.

      • You believe they actually found a Higgs particle?

      • Kevin ONeill –

        “Have you read the abstracts? Read them, try to rate them, and then come back and tell me this isn’t a hard science experiment.”

        I give up. You obviously have no clue what Cook et al actually was: an attempt to quantify the positions taken in a large sample of published literature. The only “hard science” used in the paper itself, as opposed to – pay attention here – the abstracts it purported to rate, would be the science of conducting and analysing surveys.

        It is on this aspect – the science of the project itself – that Cook et al would seem to have failed. That the answer they got confirms their (and possibly your) preconceived notions does not magically make what they did correct.

      • Kevin ONeill

        Tom – “Telling your research team what you hope the findings are is… well… not good practice. It’s enough to get the findings tossed in the circular file. Interviewer bias is almost as bad as sample bias. In tandem, it’s pretty prejudicial.”

        Every grant proposal I’ve ever seen awarded has ‘objectives’ – this is what the researchers hope to accomplish. So, according to you, there should be no objectives and every grant-awarding organization in existence should just stop asking for objectives.

        BTW, you once again compare apples to oranges. How do you ‘interview’ an abstract of a scientific paper? Where have you demonstrated sample bias?

        The first question, of course, is rhetorical – you don’t interview an abstract, which just shows you’re confused. The second question is more to the heart of the matter, and neither you nor anyone else has shown a sample bias. All you’ve done is whine.

  19. From Andrew Montford’s analysis of the paper: “There was also apparently a problem with the number of papers processed by raters, with one participant getting through no fewer than 765 abstracts in a 72-hour period.” http://www.thegwpf.org/content/uploads/2014/09/Warming-consensus-and-it-critics1.pdf

  20. Tom,
    Why are you pig wrestling?

    • Ditto, Tom, most of us think you made your point. I never bought the 97%. It didn’t agree with my experience or the papers I was reading. You rarely get that type of consensus on anything. The only people who believe it get their talking points from websites; they don’t read original literature.

    • Hiya Hunter and Marty!

      Would either of you think less of me if I confessed I was enjoying this?

      It’s fascinating to watch clinical ‘denial’ in action, as opposed to the consensus objectification of it.

      • Tom,
        You know me well enough to have seen me do far worse.
        Kevin is a derivative troll, just sorry he didn’t live in an earlier age that permitted more direct enforcement of the “consensus”.
        I bet he is still angry at not being able to play dress up with Cook & gang.

      • Well, since his bombast isn’t winning any of the arguments here on the thread, O’Neill is apparently getting frustrated and resorting to personal attacks. I’ll give him a little slack, but…

        Kevin, you need to change the content of your comments. You need to quit insulting people participating in this conversation.

        If you can’t do that, return to one of your echo chambers where you can insult us all you like. Don’t do it here.

    • hunter – have you figured out yet that the adjustments to the global temperature record reduce the experienced warming? It’s pretty basic math. And heck I even gave it to you in graphic form where you can just look at the difference between two colored lines – though I probably should have asked if you were color-blind first.

      Our host is too genteel to tell you you’re just flat out wrong and in denial, but I’m not. It really is a mind-boggling error that you can’t read a simple chart.

      It kind of makes you wonder what all the fuss about adjustments is really about – doesn’t it? If the adjustments reduce the warming, then why do pseudoskeptics keep harping on about it? Well, my personal opinion is they’re really not too bright and can be led around by the nose pretty easily.

      • Changing the subject much?

      • Kevin,
        I leave the idiocratic thinking to you- you do it at a professional level.

      • Kevin,
        If anyone besides a self-declared internet expert like yourself believed that the adjustments were actually reducing the long-stalled warming trend, it would be blasted out by every alarmist media outlet.
        Since that dog is not barking, I can safely conclude you are simply talking out of your nether regions.

      • Kevin ONeill

        Hunter writes: “If anyone besides a self-declared internet expert like yourself believed that the adjustments were actually reducing the long-stalled warming trend, it would be blasted out by every alarmist media outlet.”

        Tom, care to enlighten hunter as to what the end effect of the adjustments is on the land/sea global temperature average? Or is it one step too far for you to admit the truth plainly, clearly, and right in the face of one of the denialnati?

      • The adjustments do bend the curve down. Hunter is IMO wrong on this.

        But Hunter, besides being an all-around decent human, is also correct on other points in the climate argument, even if we disagree on quite a few. I respect him.

        How many people are willing to say any of that about you, Kevin?

      • Kevin ONeill

        Tom, it is a very simple exercise to judge the effects of the adjustments.

        If hunter can’t get simple math correct, and builds fantasies around it, then he is – as I said before – not too bright.

        I play a game when I run into those who are clearly in error: ignorant, stupid, insane, or just plain evil? Ignorant? Hey, no crime there. We’re all ignorant on different subjects. But if you’re ignorant of basic facts, or can’t figure it out given the correct information – well, hate to say it, but that’s generally what we call stupid.

        If you *know* the basic facts – or everyone is telling you that you’ve got them wrong – and you persist in believing otherwise, that’s almost a clinical definition of insanity.

        Lastly, evil knows. When one knows the truth, but spreads misinformation, disinformation, lies, half-truths, anything to obscure or hide the truth – that’s (in just about any moral code) evil.

        Now, I don’t know if hunter is stupid or insane; he can’t claim ignorance because he’s been given the correct information. I don’t think he’s evil – he shows no hint of actually knowing he’s wrong. Probably a classic case of D-K.

        You, on the other hand, likely well know that most of the smoke you’re blowing is merely to obscure and hide the truth.

      • Kevin, there’s a book I would recommend to both you and Hunter. It’s called The Half-Life of Facts, by Samuel Arbesman.

        When I went to school there were nine planets and 105 elements in the periodic table. What I learned and carried about as part of my intellectual portfolio is no longer true.

        I think Hunter would benefit from bringing fresh vision to what we have learned since he studied the climate issue.

        I think you would too.

      • You write, “You, on the other hand, likely well know that most of the smoke you’re blowing is merely to obscure and hide the truth.”

        Welcome to Climateball. That’s pretty much how I feel about you.

  21. No, I’m just checking in on hunter – he has issues with data. Or data analysis. Or simply admitting the whole adjustment meme is nothing more than deniers doing what deniers do – which is deny the science.

  22. At the end of the day, the correct way to present the findings of Cook’s survey would have been,

    (Acknowledgment of flaws in methodology–disclosure of desired findings prior to research, communication between raters, lack of independence of raters, disclosure of authors of papers being rated, hurried work by raters–765 abstracts in three days?–and improper sampling protocol for the survey of authors.)

    We analyzed 11,944 abstracts of published papers with the phrase ‘global climate change’ or ‘global warming’ in the title or body of the abstract.

    65 of those papers, or 0.5% of the total, were considered to explicitly endorse the thesis that humans have caused more than 50% of the recent rise in global surface temperatures.

    A further 934, or 7.8% of the total, were considered to explicitly endorse a weaker proposition, that humans have contributed to global warming.

    2,933 abstracts, 24.5% of the total, were considered to provide implicit endorsement of the same weaker proposition, that humans have contributed to global warming.

    8,261 abstracts, 69% of the total, were deemed to have taken no position on the issue of anthropogenic contributions to global warming.

    75 abstracts, or 0.6% of the total, were judged to either implicitly or explicitly reject the proposition that human contributions of CO2 have warmed the atmosphere, with 53 of those rejections being implicit, and 22 being explicit.

    A follow-up survey with 2,142 authors of the papers the abstracts were drawn from found little in the way of disagreement with our ratings. However, as the survey respondents were drawn from an opportunistic sample and the response rate to our survey invitation was only 25%, we are unable to say that the authors who responded to the survey were representative of the larger group of authors whose work was analyzed.

    That’s what an honest executive summary would look like.
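    The shares quoted in that summary are simple divisions. A short Python sketch (using only the counts given above; the script itself is mine, and it does not reproduce anything from Cook et al beyond these quoted figures) shows how each percentage and the 25% response rate are derived:

```python
# Recompute the percentage shares in the summary above from the raw counts.
TOTAL_ABSTRACTS = 11944  # abstracts matching 'global climate change' or 'global warming'

categories = [
    ("Explicitly endorse >50% human causation", 65),
    ("Explicitly endorse weaker proposition", 934),
    ("Implicitly endorse weaker proposition", 2933),
    ("No position", 8261),
    ("Implicitly or explicitly reject", 75),
]

for label, count in categories:
    share = 100.0 * count / TOTAL_ABSTRACTS
    print(f"{label}: {count} ({share:.1f}%)")

# Response rate for the author self-rating survey: 2,142 replies to 8,536 invitations.
replies, invitations = 2142, 8536
print(f"Response rate: {100.0 * replies / invitations:.0f}%")
```

    Note that the headline 0.5% figure falls straight out of the first division: 65 of 11,944 abstracts explicitly endorse the strong (>50%) attribution claim.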

  23. I think a better paper might be a social science one that analyzes the effects of fatigue on ratings.

  24. Pingback: Sigh… Cook Again | The Lukewarmer's Way

  25. Tom, IMHO you make a tactical (if not ethical) mistake in responding substantively to anyone, like Marco, who begins his comment with the word “Fuller.”

    I don’t bother reading comments that begin with “Keyes.”

    The only correct response to such people is to politely ask them if they were raised by f*cking wolves and teach them that your name is Tom or Thomas, not Fuller. Then they are free to re-post using human conventions of dialogue.

    If someone like Marco doesn’t correct himself, then it can be proven mathematically that it’s a fait accompli that his comments constitute angry and truthless polemic and nothing besides, so why muddy the simplicity of this axiom by reacting to any “points” he makes?

    That’s my philosophy anyway. (And third parties have always taken my side when I put it into practice.)
