When Scientists Commit Scienticide

So, I’m a language guy, and one of my favorite things about knowing Latin is adding the suffix “-cide” to the ends of things to create words that mean “the killing of ___”.

There are lots of fun outcomes, from the familiar (regicide) to the useful (ludicide, which is what high school actors do to good plays) to the sad (genocide, which was the goal of Margaret Sanger*).

Scienticide is, to be Latin-y, the killing of knowledge.  But I think it should also cover the deliberate obfuscation of knowledge, or the short-circuiting of the scientific method to present something as good science when it is, in fact, not good science.

I’m a big fan of science.  When I was a kid, I used to hide things in the refrigerator to pull out weeks later and examine mold growth on.  When I told my middle school science teacher that I had become a language teacher, she was disappointed I hadn’t wound up surrounded by test tubes and beakers.  My wife is a capital-S Scientist and I appreciate what she does.

But me, I’m a statistician, and statisticians look at knowledge or “knowing things” a little differently than other people.  I guess, in a way, it’s because statisticians write the code, so to speak, for how modern science knows things.

Like, okay, some science also uses mathematics, so let’s talk about the two for a second.

Mathematics is a set of philosophical tools, based on logic and reason.  Mathematics is aloof and very, very self-assured when it says that something is true, because it is.  The more mathematics a field uses (I’m looking at you, physics), the better ground it has for when it says that something is true or not.

Statistics is a lot more wishy-washy in what it can do.  Statistics can do a super great job at telling you when something is probably true (and even how probably true it is).  It can do a super great job telling you when something is probably not true (and how probably not true it is).  What the field of statistics cannot do (and don’t believe anyone who tells you differently) is tell you that something is absolutely true or absolutely not true.

And here’s the thing: pretty much all modern sciences use statistics, not mathematics, to make judgments about the truth of a claim.

This means that if, say, a psychologist, sociologist, or climatologist tells you that something is true or not true, you should immediately be skeptical of their claims and tell them that you want to see the confidence intervals.

Confidence intervals are the wiggle room that statistics produces as a byproduct of creating an estimate.  They are the fuzziness inherent in statistical sampling of data.  (When you see a poll and it says “margin of error xxx%” or “plus or minus xxx%,” that’s the confidence interval, just phrased differently.)
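
If you want to see where that “plus or minus” number comes from, here’s a minimal sketch in Python.  The 1,000-person sample and the 52% result are made-up numbers, not from any real poll, and 1.96 is just the usual multiplier for a 95% interval.

```python
import math

# Hypothetical poll: 520 of 1,000 respondents answer "yes" (made-up numbers).
n = 1000
p_hat = 520 / n

# Standard error of a sample proportion.
se = math.sqrt(p_hat * (1 - p_hat) / n)

# 95% margin of error: roughly 1.96 standard errors on either side.
margin = 1.96 * se

print(f"Estimate: {p_hat:.1%}, plus or minus {margin:.1%}")
print(f"95% confidence interval: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
```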

When a biologist has 10,000 flies and weighs 100 of them to get an average weight, the sampling doesn’t just produce the one right, true answer.  It produces a range of probably true right answers.  (“Range of probably true right answers” is probably as good a definition as any for what a confidence interval is.)
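
Here’s a minimal sketch of that sampling step in Python.  The 100 fly weights are randomly generated stand-ins (assumed to hover around 17 mg), not real measurements, and the 1.96 multiplier is the usual approximation for a 95% interval.

```python
import random
import statistics

random.seed(42)

# Stand-in data: 100 sampled fly weights in milligrams (made up, not real measurements).
sample = [random.gauss(17.0, 3.0) for _ in range(100)]

n = len(sample)
mean = statistics.mean(sample)

# Standard error of the mean = sample standard deviation / sqrt(n).
sem = statistics.stdev(sample) / n ** 0.5

# Approximate 95% confidence interval: the mean plus or minus ~1.96 standard errors.
low, high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"Average weight: {mean:.2f} mg")
print(f"Range of probably true right answers (95% CI): {low:.2f} to {high:.2f} mg")
```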

So let’s say the biologist has two groups of 10,000 flies.  She feeds diet A to group A and diet B to group B.  At the end of the study, she weighs samples from group A and from group B.  She’ll get a RoPTRA (Range of Probably True Right Answers, or “confidence interval”) for group A and for group B.  If there is no overlap between the two groups (say group A was between 14 and 20, and group B was between 21 and 27), then the biologist is allowed to say “there is a statistically significant difference between the two groups.”

However, if there is aaaaaannnnnnyyyyy overlap at all between the two groups (say 14-20 and 19-25), then she is not allowed to say that there is a difference; she is only allowed to say “there was not a statistically significant difference between the two groups.”

Note that, either way, she’s not allowed to say anything about “truth.”  She’s only giving the most likely result based on the data.
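
To make the overlap rule concrete, here’s a small Python sketch of the check as described above.  The endpoints are the illustrative numbers from the fly example (14-20 vs. 21-27, and 14-20 vs. 19-25); in practice the intervals themselves would come out of the sampling step.

```python
def intervals_overlap(a, b):
    """True if two (low, high) confidence intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

def verdict(group_a, group_b):
    if intervals_overlap(group_a, group_b):
        return "There was not a statistically significant difference between the two groups."
    return "There is a statistically significant difference between the two groups."

# Illustrative endpoints from the text, not real data.
print(verdict((14, 20), (21, 27)))  # no overlap -> a significant difference can be claimed
print(verdict((14, 20), (19, 25)))  # overlap    -> no significant difference can be claimed
```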

And this is the basic rule for any two sets of confidence intervals.  Let’s say that instead of flies, she was measuring dudes at a college to see if they were getting chubbier over time.  If she took the confidence intervals from one year to the next, compared them, and they overlapped, then she should conclude that there are no statistically significant changes happening between those two years.

So, here’s the data claiming that 2015 was the hottest year on record:

[Image: graph.png]

From this, climatologists claimed that 2015 was the hottest year on record.

Sigh.  See where there is overlap between 2015 and 2014?  That means that what the scientists who used this data said is the exact opposite of what they were allowed to say.

Because there is overlap between 2014 and 2015, statistics does not allow them to say that there was a statistically significant difference between the two years.

It’s easier without the deceptive blue line there:

[Image: graph2]

In fact, if you look at the data from 2001 to 2014, they all look pretty similar.  Even 2015 is within the range of 2014, 2010, and 2005.

Was 2015 the warmest on record?  Maybe, but the data doesn’t show this.

The worst part is that the same scientists reported 2014 as the warmest on record, when the confidence interval for that year clearly overlaps with almost the entirety of the rest of the data presented.

Climate scientists aren’t using mathematics in their work, they’re using statistics, which means they have to explain their data in terms of statistics, which these scientists clearly are not doing.

So why would a scientist say that a graph means the opposite of what it actually means?  There are only two options:

  1. They’re not good at their jobs, i.e. they don’t know how to read a graph.
  2. They’re liars.

Now, Hanlon’s Razor notwithstanding, option 1 is probably not what’s going on here.

So what would cause someone who struggled all the way through graduate school (and, let’s face it, you don’t become a climatologist for the chicks and drugs; you do it because you care about the job) to risk their career fabricating results?

A few options spring to mind:

  1. They really, really think that global warming** is happening, even if the data doesn’t support it.  They feel that in order to head off a catastrophe, it’s okay for them to tell some white lies about science to save the planet.
  2. They need the money.  Scientists who publish “we didn’t find anything” papers don’t, uh, get published.  So they get fired.  This is a huge problem in academia and, I’d assume, at all research institutes.
  3. They want to be important.  They want to be the guy who broke the story, the guy everyone thinks is so great, the guy they all listen to now.

Here are my tips for combatting such non-science from scientists.

  1. Ask to see the confidence intervals when people tell you stuff.  If they can’t show them to you (or won’t), they’re not actually doing science.
  2. Be skeptical of experts until they prove that they’re not trying to game you.
  3. Become an expert yourself.

 

[*I’ve always felt a certain kinship with African Americans and Jews in this regard.  Being of Sicilian heritage, we three groups were the poster children of the unfit that Sanger sought to eradicate from the earth.  Goddam WASPs, always coming for us.]

[**Even after “climate change” got popular because of all those years when there was no rise, the true believers never abandoned “global warming”]

[***Additional note:  I’m not even nitpicking on methodology for how these things are measured, which is a whole lot of other cans of very wormy worms.]

Scientia Contra Scientitatem (vel Fides et Scientia, Manus in Manu)

Here’s an article titled “Math is Racist,” which is a misleading name.

Essentially, it profiles a professional mathematician (sounds like a data analyst to me) who trucks with the Occupy Wall Street crowd.  She claims that mathematics is being used to hurt poor people in loan applications, criminal sentencing, and so on.

A few thoughts:

1) It’s crazy easy to hurt poor people.  Want to not live by minorities?  Move somewhere with high property taxes.  Want to enmesh them in the criminal justice system and feel good about yourself at the same time?  Enact usurious vice taxes on things like cigarettes and alcohol and outlaw cheap vices like marijuana.  It’s not like you need to be a statistician to think of ways to hurt poor people.

2) One of the things they make you learn when you become a statistician is something called discriminant analysis, which is a fancy way of coming up with mathematical rules for separating things into groups.  The classic example is that you have a bunch of flowers that look alike but are actually different species.  You create a mathematical model based on the size of the petals and sepals, and it classifies them.

You don’t always get perfect rules to tell them apart, but you can do a lot better than random chance.  So, for example, if you had four types of flowers in a hat, picked one out, and randomly guessed which of the four types it was, you’d have a 25% chance of being correct.  Using discriminant analysis, you might be able to move that up to a much higher rate of accuracy (maybe 75% or 85%).
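
If you’re curious what that looks like in practice, here’s a minimal sketch on the classic flower data (the iris dataset, which has three species rather than the four-flowers-in-a-hat example above, so random guessing would be right about a third of the time).  It assumes you have scikit-learn installed; this is one common way to run a linear discriminant analysis, not the only one.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# The classic flower data: petal and sepal measurements for three species.
X, y = load_iris(return_X_y=True)

# Hold some flowers back so the rules are checked on flowers the model hasn't seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the discriminant rules on the training flowers.
model = LinearDiscriminantAnalysis()
model.fit(X_train, y_train)

# Random guessing among three species would be right about 33% of the time;
# the fitted rules do far better than that on the held-back flowers.
print(f"Accuracy on held-back flowers: {model.score(X_test, y_test):.0%}")
```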

When talking heads talk about computer algorithms to predict our behavior, what they’re really talking about is discriminant analysis.  When you buy some stuff on Amazon, it uses discriminant analysis to figure out what “type” of person you are and then selects other stuff that your “type” might want to buy.

Sometimes it works, sometimes it doesn’t.  But it’s not supposed to be perfect.  Amazon has an almost infinite number of products–if it randomly picked something to suggest, it would have a negligible chance of picking something you’d want.  But if it can improve that chance to even a few percentage points, it’s a success.

When you get turned down for a loan or face harsher penalties for crimes based on your zip code, credit score, and other things, that’s discriminant analysis.

3) My problem with the title of the article is that the profilee is clearly not anti-science.  She’s anti-a-particular-application-of-science, which is a very different thing.  It’s not that she doesn’t believe discriminant analysis is a real mathematical tool; she believes that employing it in loan applications or criminal sentencing is wrong.

A better way of looking at this is that she’s anti-scientism, which I’d define as the belief that “Science” and the scientific method are the only reliable guides for human understanding.

In a sense, anti-scientism is the entire underpinning of books like Frankenstein or Jurassic Park, which are essentially reframings of the question “Science tells us we can do this, but should we?”

4) We have a big problem politically (on both sides–but not with enlightened people like me who are moderates), wherein each side claims that the other is “anti-science.”  (Liberal friends, you may not know that conservatives think that you are the anti-science ones, but, like, it’s a thing.  You need to read more NRO.)

Sometimes the problem is legitimate dogmatic anti-scientism (opposition to an earth age in the billions of years or a belief that nuclear power is unsafe).  But that’s not really what we’re talking about here.  We’re talking about whether or not certain scientific methods or tools should be employed, which is a discussion of ethics.

Ultimately, ethics is something I have a hard time letting scientists police themselves on.  Scientifically, neuroscience can eventually tell us (maybe now?) which parts of the brain we could manipulate in order to create hordes of willing slaves, but nothing in the scientific method is going to tell us whether or not we should.

I mean, heck, science tells us that the H-bomb and mustard gas are super good ways of killing people, it tells us that forced chemical castration is possible, and that poor children are more likely to grow up to be societal drains.

But nothing in science or scientism can tell us not to gas the South Side of Chicago or chemically castrate the entire barrio population in São Paulo.

Instead, the woman from the article makes what is, to me, a very appropriate faith-based argument against using a scientific method to the detriment of people.

5) Faith is a funny thing, since it can be either deistic or non-deistic.  Lots of people believe in the golden rule because they think that a supernatural power wants them to.  Lots of others, who are atheists, believe in the golden rule because that’s the “right thing to do.”

But there’s nothing scientific about it.  It can’t be tested or created in a lab.  Sure, there can be studies on reciprocity in the natural world or game theory on outcome optimization and whatever, but nothing in the inductive scientific method could produce it.

Which means that believing that humans should obey the golden rule becomes a tautological “People should do this because people should do this.”  At this point, I’d argue, we’re talking about a belief that cannot be rooted only in logic or reason, which is another way of saying faith.

The great strength of science is answering the question “how?” and the great strength of faith is answering the question “should?”

6) This implies, I think, that in order to utilize science to its best, we need faith to inform us (just as faith is best when it’s underpinned by rational and logical criticism).  Neither can exist in a vacuum.  Both are necessary for the survival of the other.

Common Sense on Gun Control

Disclosure: When I was in high school, I earned a spot in a program sponsored by the Friends of the NRA, through which I was flown to DC for a week to study the US Constitution (mostly the 2nd Amendment parts of the US Constitution…). During this week, I once mentioned that I didn’t see any harm in gun registration and, for the first moment in my life, turned out to be the most lefty progressive person in a room. I generally don’t have a problem with gun ownership (I don’t own one, but I am licensed in my state, and I’ve done several gun safety classes from the time I was a Boy Scout through my mid-30s with trained NRA instructors, who are the finest gun safety instructors in the world). I think criminals should generally not have guns or access to them, and I don’t think that the gun laws we typically have on the books today (a few days’ waiting period, mandatory safety training, registration, licensing to own/carry/conceal) are overly onerous.

I think people who want to own guns should be able to with a minimum of headache and hassle, but I understand why some people want there, overall, to be fewer guns in our possession.

Anyway, maybe it’s the statistician talking, but I started doing some digging into murder rates in the US.

(you can see for yourself at: https://en.wikipedia.org/…/List_of_countries_by_intentional…, http://www.infoplease.com/ipa/A0873729.html, and http://www.washingtonpost.com/…/study-the-u-s-has-had-one-…/)

The good news: We are doing great, murder-wise. Like, I get it, all murder is bad, and there are a lot of things that go into reductions in murder-rates that I don’t want to get into here. But back when I was in middle school, the US overall was at about 10 deaths per 100,000. Now we are at less than half that.

The meh news with a positive spin: We have a murder rate around 4.5 per 100,000, which is really good compared to a lot of other countries that have areas with hugely concentrated poverty.

(Honduras, for example, where my beloved wife did her mission work, has a murder rate something like 20 times ours. Mexico’s is about 3 times ours. I hope anti-immigration people get a sense real quick of what people from the South are trying to get away from.)
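
(For what it’s worth, those “per 100,000” figures are just simple arithmetic. Here’s a tiny sketch with made-up murder counts and populations, not the actual numbers for any of these countries, showing how a rate and a “20 times” comparison fall out.)

```python
def rate_per_100k(murders, population):
    """Intentional-homicide rate per 100,000 people."""
    return murders / population * 100_000

# Made-up counts and populations, for illustration only.
country_a = rate_per_100k(murders=14_000, population=320_000_000)  # about 4.4 per 100,000
country_b = rate_per_100k(murders=7_500, population=8_500_000)     # about 88 per 100,000

print(f"Country A: {country_a:.1f} per 100,000")
print(f"Country B: {country_b:.1f} per 100,000")
print(f"Country B's rate is about {country_b / country_a:.0f} times Country A's")
```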

Yeah, it’s not as good as Canada’s (Canada overall has about the same murder rate, 1.6 per 100,000, as super progressive, hard-line Democrat Utah, at 1.8). But not all of Canada. Manitoba, the cleverly named Northwest Territories, and something called Nunavut (none of what? Canada? Why don’t you speak English!?) all have murder rates comparable to or higher than the US’s.

So, whereas the US has a higher rate overall, not all parts of it are equally murderous, and there’s not a good correlation between the blue- or red-ness of the states and their murder rates. (Utah and Vermont are really low; Weesiana and DC are really high, with DC being the typical “gun laws don’t work” example and Louisiana being the typical “no gun laws doesn’t work” example from the two parties, respectively.)

More meh: Mass shootings are a tiny percentage of homicides. They are just higher visibility. Trying to make gun-policy by focusing on mass shootings is not a good tactic, statistically, any more than focusing on foreign aid is a good way to discuss balancing the budget.

The bad: Lots of people still get murdered in the United States. Each of these is a very sad thing, and it would be great if we had better ways of preventing it. I recommend working and volunteering in low-income, high-poverty areas, since murder rates are highly correlated with poverty and better education is correlated with not-poverty.

The scary: If anyone is wondering what is going to kill their children, it will be obesity, cancer, and “unintentional injuries”. http://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm. It will very likely not be guns.

To quote Nick Naylor: “Well, the real demonstrated #1 killer in America is cholesterol. And here comes Senator Finistirre whose fine state is, I regret to say, clogging the nation’s arteries with Vermont Cheddar Cheese. If we want to talk numbers, how about the millions of people dying of heart attacks? Perhaps Vermont Cheddar should come with a skull and crossbones.”