Hello, world -
It’s me, on my soapbox again. The latest straw was a discussion of articles in Science which have been the subject of retraction or “expression of concern”.

According to Retraction Watch there are between 30 and 95 retractions each year. The ones you hear about are usually in medicine and the “hard” sciences – biology, physics, chemistry.

Retractions of articles published in Wiley-Blackwell journals over a two-year period showed social science to be dead last in the proportion of articles retracted, a contest I’m sure social scientists are happy to lose.

Let’s exclude the sheer stupidity of plagiarism (I mean, seriously, you managed to get a Ph.D. without learning that copying off the kid in the desk next to you is wrong? And you haven’t heard that we have computers now where people, including the original author, can see what you wrote and compare it with that other kid’s work?).

No, let’s talk about subtler stupidity. First up is the type of stupidity, hubris, or excessive faith in one’s own numbers that leads you to grab a significant r-squared and run with it. Every project I have ever worked on, every contract I have ever written, allows substantial time for “data quality” and “convergence”.

My fellow (experienced) statistical consultants never get the answer to this particular question wrong.

If a client objects that his or her data have no problems, I add a clause to the contract stating that we will refund the percentage of our fee that was applied to those data-quality analyses if we don’t find any problems with the data. How many times, in 25 years, have I had to pay a refund? (Extra bonus points if you can guess how many dollars this added up to.)

The correct answer is zero and zero.

Examples of the kinds of errors we see:

  1. Columns are off by one or two, so from point X onward all of your data are actually for the next gene/survey item/test question, etc.
  2. Dataset is supposed to include only one state/school/organism/race, etc., but includes others.
  3. Some of your survey respondents are dead. That is, they have a date of death entered in their medical records we merged with the survey data.
  4. Your data were entered wrong. People in the control group were entered as experimental or vice-versa.
  5. Your data were scored wrong. Questions that should have been marked right were marked wrong. Questions that should have been reverse-coded weren’t.

You can see these are not small, “inflate your standard errors a bit beyond the stated rate” type errors, but “completely f—ed up your data” type errors. And yet, it is no problem, because we EXPECT these. We run a PROC CONTENTS in SAS or codebook in Stata. We run descriptive statistics – means, frequencies, standard deviations – on everything. We run graphs of distributions and stare at them. And we catch these errors.
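The author’s tools here are SAS and Stata; as a rough illustration of the same “look at everything before you model” habit, here is a minimal Python sketch. All of the column names, codes, and records are hypothetical – the point is just running frequencies on every field (a poor man’s PROC FREQ / Stata tab) and flagging two of the error types above: respondents from outside the study state, and respondents with a date of death before the survey date.

```python
from collections import Counter
from datetime import date

# Hypothetical merged survey records; in practice these come from a file.
rows = [
    {"id": 1, "state": "CA", "group": "control",      "dod": None,              "survey_date": date(2010, 6, 1)},
    {"id": 2, "state": "CA", "group": "experimental", "dod": None,              "survey_date": date(2010, 6, 2)},
    {"id": 3, "state": "NV", "group": "control",      "dod": None,              "survey_date": date(2010, 6, 3)},  # wrong state
    {"id": 4, "state": "CA", "group": "contrl",       "dod": date(2009, 1, 15), "survey_date": date(2010, 6, 4)},  # dead, plus a typo
]

def frequencies(records, column):
    """Count every distinct value in a column -- typos jump right out."""
    return Counter(r[column] for r in records)

def quality_checks(records, expected_state="CA"):
    """Return (id, problem) pairs for obviously broken records."""
    valid_groups = {"control", "experimental"}
    problems = []
    for r in records:
        if r["state"] != expected_state:
            problems.append((r["id"], "outside study state"))
        if r["group"] not in valid_groups:
            problems.append((r["id"], "unknown group code"))
        if r["dod"] is not None and r["dod"] < r["survey_date"]:
            problems.append((r["id"], "answered survey after recorded death"))
    return problems

print(frequencies(rows, "group"))
print(quality_checks(rows))
```

Staring at the frequency table alone catches record 4’s “contrl” typo; the explicit checks catch the dead respondent and the out-of-state one.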

On the list of things I don’t understand, right after teenagers, comes why people will spend so much time worrying about whether they should use a mixed model or a general linear model, or puzzling out endogenous versus exogenous variables in a structural equation model, and not spend the time checking their data.

Stupid crap to stop doing, take two:
So, you checked all of your statistics every way from Tuesday, your data are great. You coded everything correctly, fixed all data entry errors.

You have 22,000 people in your organization. For reasons beyond me, you randomly sampled 3,000 to send the emailed survey. (Why? Because bytes are so expensive?) From your 3,000 people you get 92 responses, and based on this you write a report saying people are generally happy with all aspects of your organization. (When the person who conducted this study asked for my opinion, I sent a two-page report detailing the inadequacies in sampling and the survey used. He later complained to someone that I was overly negative. It’s good I did not go with my initial reaction on reading it, which was to send an email that said, “You’re f—ing kidding me, right?”)
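For scale, here is a back-of-envelope calculation (not from the original post) on those numbers. Even if you pretend the 92 respondents were a clean simple random sample – ignoring nonresponse bias, which is the real killer when 97% of invitees say nothing – the worst-case margin of error on any proportion is around ten percentage points:

```python
import math

invited, responded = 3000, 92

response_rate = responded / invited
# Worst-case (p = 0.5) 95% margin of error for a simple random sample,
# charitably ignoring nonresponse bias altogether.
margin = 1.96 * math.sqrt(0.5 * 0.5 / responded)

print(f"response rate: {response_rate:.1%}")   # about 3.1%
print(f"margin of error: +/- {margin:.1%}")    # about +/- 10 points
```

And that ±10 points is the *optimistic* case; with a 3% response rate, the people who bothered to answer may look nothing like the 2,908 who didn’t.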


Sometimes surveys work great. This is not one of those times. Polls generally work well for the presidential election because people understand both the questions and the multiple choice options.

Who are you voting for in the 2008 presidential election?
A. Obama – the African-American, Muslim, born in either Kenya or Indonesia who is a socialist/communist who wants to kill your grandmother and take your guns
B. McCain – the old white guy, two steps from death’s door (may already be dead for all we know), who looks eerily like the evil emperor in Star Wars (coincidence, I think not!) and hates gays and immigrants (especially gay immigrants)
C. Someone who has less chance of getting elected than a naked mole rat, who, in fact, for all we know, may actually be running from a political party controlled by naked mole rats.

Those kinds of surveys generally produce valid data. Yours, on the other hand, is more along the lines of:

How many people in your life would you consider to be “like family”? ___
Rate your level of happiness on a scale of 1(=extremely unhappy) to 10 (=extremely happy).

You have actual numbers. How much more objective could you be? And then you go on to talk about correlations with social support and happiness. You put these numbers into extremely complex models and control for stratification, distributions of the independent and dependent variables. You use the appropriate statistics.

Somewhere, you are missing the point. The point was where you asked those questions. My daughter is a pretty good athlete (as in, having won world and Olympic medals and now competing in MMA – she’s the uninjured one in this video). Coaches, managers, fans and random people are always telling me, “She’s just like family to me.”

My husband has suggested I send the tuition and car insurance bills to these people and see if they pay them. I’ve gotten so sick of it, I tell them,

“Excuse me, but you have a different definition of family than me. NO ONE is like family to me but my family. I may like other people a whole lot, but if you threaten one of these four children, I will shoot you dead if I have to, and not feel one bit of remorse looking down at your bleeding corpse.”

My children have a whopping amount of social support, even if it’s only from me. Just because you SAID that some number you got from a random question you asked is a measure of social support doesn’t mean that it IS.

One of the reasons fewer retractions are required in social science research (in my not-the-least-bit-humble opinion) is that it is far more difficult to replicate the sampling method and measures used to actually question the findings in the first place.

Now that I am really on my soapbox, I think I will just make this an ongoing, perhaps infinite, series on all of the things that are wrong with research, and particularly social science research (and yeah, marketing research guys, I’m including you!).

Comments

5 Responses to “Statistics Begins Here: Just Stop this Crap Right Now, Part 1”

  3. disgruntledphd on November 19th, 2010 2:15 pm

    Thanks for this post (I came by way of The Endeavour); it neatly sums up a lot of the problems I have had myself, and a lot of the mistakes I have seen others make.

  4. ecjs on November 19th, 2010 4:58 pm

    I came by way of the endeavour too. This was an interesting read, although I do not like the violence which your text demonstrates.

  5. admin on November 19th, 2010 5:07 pm

    Well, if it makes you feel better, I haven’t ever really shot anyone.
