When Scientists Commit Scienticide

So, I’m a language guy, and one of my favorite things about knowing Latin is adding the suffix “-cide” to the ends of things to create words that mean “the killing of ___”.

There are lots of fun outcomes, from the familiar (regicide) to the useful (ludicide, which is what high school actors do to good plays) to the sad (genocide, which was the goal of Margaret Sanger*).

Scienticide is, to be Latin-y, the killing of knowledge.  But I think it should also cover the deliberate obfuscation of knowledge, or the short-circuiting of the scientific method to present something as good science when it is, in fact, not.

I’m a big fan of science.  When I was a kid, I used to hide things in the refrigerator so I could pull them out weeks later and examine the mold growth.  When I told my middle school science teacher that I had become a language teacher, she was disappointed I hadn’t wound up surrounded by test tubes and beakers.  My wife is a capital-S Scientist, and I appreciate what she does.

But me, I’m a statistician, and statisticians look at knowledge or “knowing things” a little differently than other people.  I guess, in a way, it’s because statisticians write the code, so to speak, for how modern science knows things.

Like, okay, some science also uses mathematics, so let’s talk about the two for a second.

Mathematics is a set of philosophical tools, based on logic and reason.  Mathematics is aloof and very, very self-assured when it says that something is true, because it is.  The more mathematics a field uses (I’m looking at you, physics), the better ground it has for when it says that something is true or not.

Statistics is a lot more wishy-washy in what it can do.  Statistics can do a super great job at telling you when something is probably true (and even how probably true it is).  It can do a super great job telling you when something is probably not true (and how probably not true it is).  What the field of statistics cannot do (and don’t believe anyone who tells you differently) is tell you that something is absolutely true or absolutely not true.

And here’s the thing: pretty much all modern sciences use statistics, not mathematics, to make judgments about the truth of a claim.

This means that if, say, a psychologist, sociologist, or climatologist tells you that something is true or not true, you should immediately be skeptical of their claims and tell them that you want to see the confidence intervals.

Confidence intervals are the wiggle room that statistics produces as a byproduct of creating an estimate.  They are the fuzziness inherent in statistical sampling of data.  (When you see a poll and it says “margin of error xxx%” or “plus or minus xxx%,” that’s a confidence interval phrased differently.)
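As a tiny worked example (with made-up poll numbers), here’s what that conversion looks like:

```python
# Hypothetical poll: 52% support, "margin of error plus or minus 3%".
estimate = 0.52
margin_of_error = 0.03

# The confidence interval is just the estimate with the
# margin of error applied on both sides.
low, high = estimate - margin_of_error, estimate + margin_of_error
print(f"Confidence interval: {low:.0%} to {high:.0%}")  # 49% to 55%
```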

When a biologist has 10,000 flies and weighs 100 of them to get an average weight, she doesn’t just get the one right, true answer.  She gets a range of probably true right answers. (“Range of probably true right answers” is probably as good a definition as any for what a confidence interval is.)
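Here’s a minimal sketch of that in Python, with made-up fly weights; it assumes the textbook normal approximation (mean plus or minus 1.96 standard errors) for a 95% interval:

```python
import math
import random

random.seed(42)  # reproducible made-up data

# Hypothetical weights (in milligrams) for a sample of 100 flies
# drawn from the population of 10,000.
sample = [random.gauss(17.0, 3.0) for _ in range(100)]

n = len(sample)
mean = sum(sample) / n

# Sample standard deviation (n - 1 in the denominator) and standard error.
std_dev = math.sqrt(sum((w - mean) ** 2 for w in sample) / (n - 1))
std_error = std_dev / math.sqrt(n)

# 95% confidence interval via the normal approximation.
low, high = mean - 1.96 * std_error, mean + 1.96 * std_error
print(f"Average weight: {mean:.2f} mg")
print(f"Range of probably true right answers: {low:.2f} to {high:.2f} mg")
```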

So let’s say the biologist has two groups of 10,000 flies.  She feeds diet A to group A and diet B to group B.  At the end of the study, she weighs samples from group A and from group B.  She’ll get a RoPTRA (Range of Probably True Right Answers, or “confidence interval”) for group A and group B.  If there is no overlap between the two groups (say group A was between 14 and 20, and group B was between 21 and 27), then the biologist is allowed to say “there is a statistically significant difference between the two groups.”

However, if there is aaaaaannnnnnyyyyy overlap at all between the two groups (say 14-20 and 19-25), then she is not allowed to say that there is a difference; she is only allowed to say “there was not a statistically significant difference between the two groups.”

Note, either way, she’s not allowed to say anything about “truth.”  She’s only giving the most likely result based on the data.

And this is the basic rule for any two sets of confidence intervals.  Let’s say that instead of flies, she was measuring dudes at a college to see if they were getting chubbier over time.  If she took the confidence intervals from one year and the next and compared them, and they overlapped, then she should conclude that there was no statistically significant change between those two years.
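A minimal sketch of that overlap check in Python, using the hypothetical fly-diet intervals from above (the same check works for the year-to-year comparison):

```python
def intervals_overlap(low_a, high_a, low_b, high_b):
    """Return True if two confidence intervals share any common ground."""
    return low_a <= high_b and low_b <= high_a

# The post's hypothetical interval for diet A, checked against
# the non-overlapping and overlapping diet-B intervals.
group_a = (14, 20)

for group_b in [(21, 27), (19, 25)]:
    if intervals_overlap(*group_a, *group_b):
        # Overlap: no claim of a difference is allowed.
        print(group_b, "-> not a statistically significant difference")
    else:
        # No overlap: a statistically significant difference may be claimed.
        print(group_b, "-> statistically significant difference")
```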

So, here’s the data claiming that 2015 was the hottest year on record:

[graph.png: annual temperature estimates with confidence intervals, 2001-2015, with a blue trend line]

From this, climatologists claimed that 2015 was the hottest year on record.

Sigh.  See where there is overlap between 2015 and 2014?  That means that what the scientists who used this data said is the exact opposite of what they were allowed to say.

Because there is overlap between 2014 and 2015, statistics does not allow them to say that there was a statistically significant difference between those two years.

It’s easier without the deceptive blue line there:

[graph2: the same data without the blue trend line]

In fact, if you look at the data from 2001 to 2014, they all look pretty similar.  Even 2015 is within the range of 2014, 2010, and 2005.

Was 2015 the warmest on record?  Maybe, but the data doesn’t show this.

The worst part is that the same scientists reported 2014 as the warmest on record, when the confidence interval for that year clearly overlaps with almost the entirety of the rest of the data presented.

Climate scientists aren’t using mathematics in their work; they’re using statistics, which means they have to explain their data in terms of statistics, which these scientists clearly are not doing.

So why would a scientist explain that a graph means the opposite of what it actually means?  There are only two options:

  1. They’re not good at their jobs, i.e. they don’t know how to read a graph.
  2. They’re liars.

Now, Hanlon’s Razor notwithstanding, option 1 is probably not what’s going on here.

So what would cause someone who struggled all the way through graduate school (and, let’s face it, you don’t become a climatologist for the chicks and drugs; you do it because you care about the job) to risk their career fabricating results?

A few options spring to mind:

  1. They really, really think that global warming** is happening, even if the data doesn’t support it.  They feel that in order to head off a catastrophe, it’s okay for them to tell some white lies about science to save the planet.
  2. They need the money.  Scientists who publish “we didn’t find anything” papers don’t, uh, get published.  So they get fired.  This is a huge problem in academia and, I’d assume, at all research institutes.
  3. They want to be important.  They want to be the guy who broke the story, the guy everyone thinks is so great, the guy they all listen to now.

Here are my tips for combating such non-science from scientists:

  1. Ask to see the confidence intervals when people tell you stuff.  If they can’t show them to you (or won’t), they’re not actually doing science.
  2. Be skeptical of experts until they prove that they’re not trying to game you.
  3. Become an expert yourself.


[*I’ve always felt a certain kinship with African Americans and Jews in this regard.  Being of Sicilian heritage, we three groups were the poster children of the unfit that Sanger sought to eradicate from the earth.  Goddam WASPs, always coming for us.]

[**Even after “climate change” got popular because of all those years when there was no rise, the true believers never abandoned “global warming”]

[***Additional note:  I’m not even nitpicking on methodology for how these things are measured, which is a whole lot of other cans of very wormy worms.]
