It is a day after Typhoon Soudelor passed overhead, and I am busy getting water out of our apartment. Casualties were few; the mess is large.
While my wife thought I was watching the torrential rain, I was actually thinking about the Verheggen et al. survey I have been going on and on about over the past week.
Briefly, Verheggen et al. conducted a survey of practicing climate scientists to determine how robust the climate consensus really is. In his report on the survey, he almost completely neglects to tell us that the consensus is 66%, focusing instead on the fact that scientists with more publications are stronger supporters of the consensus than scientists with fewer publications. More here. And more here. And here. And their paper is here.
So, I went over to Bart Verheggen’s excellent weblog and posted the following comment.
If analyzing subgroups based on their level of expertise had been part of the project design, perhaps the questions should have and would have been written differently.
Analyzing by level of expertise is not mentioned as one of the project goals in the Introduction to your paper, nor in any of the material I’ve seen written about it prior to fielding the survey. It appears to be something added after you looked at the results.
This is not unusual; data often surprises researchers and provides new avenues to explore. But never in 20 years of doing this have I seen it completely eclipse the principal objective of the research.
Bart, I’m specifically excluding you from what follows. I think one member of your team (John Cook) is an apologist for the worst of the climate activist community, and this is aimed primarily at that community. Please feel free to correct me: if Mr. Cook was completely neutral and acted like all our best visions of a scientist at work, let me know.
Because John Cook is lead author of a heavily publicized paper that trumpets a 97% consensus in the literature, I believe that the 66% consensus found in your survey (and repeated in Bray and von Storch 2010) was considered either unhelpful or anomalous. I note you cite other studies but not Bray and von Storch. I am ‘struggling to understand’ (apologies to ATTP) why you would fail to note that another survey, conducted in 2008, came up with exactly the same percentage of agreement with the consensus (although the definition was different).
It appears from what has been written regarding the survey that, because topline agreement with the consensus statement came in at 66%, a decision was reached to highlight the results of other questions.
That would explain why the topline percentage was reported by question number only, combined with the figure for another question, in the single sentence where it was mentioned in the report.
As there is a clear difference in responses between those with more publications and those with fewer, that became the story that was reported.
To repeat: it is not wrong or even unusual to note differences between subgroups. That is why you ask demographic or organizational questions in the first place.
But to bury the topline finding and focus on the subgroups is something I’ve never seen before. Ever.
I’ve designed, fielded, analyzed and reported on the results of over a thousand surveys. In addition, I have trained other researchers, coached them, corrected their mistakes (and learned from my own) and edited their reports. I am not a scientist, but I will claim the status of subject matter expert on the technical aspects of quantitative surveys, both consumer and professional.
You know I have the highest regard for you. For years you were the only consensus blogger I knew who ‘played fairly’, and I have learned much of what little I know about climate science here at your blog.
So I don’t say this lightly. Your survey is good. The reporting of the results is not.