I read this in a review of a study on teacher expectancy effects but it could really apply to so many other studies.
“If these results bear any relationship at all to reality, it is indeed a fortunate coincidence.”
Those of us who choose careers in research like to believe that it all works just as the textbooks describe: hypothesis, data collection, analysis, results, conclusion and *PRESTO* knowledge.
In a few weeks, I will be in San Diego at the Western Users of SAS Software conference presenting results of the past year of testing with Fish Lake and Spirit Lake: The Game.
Occasionally, colleagues ask about my interest in the nuts and bolts of data analysis and why I ‘bother’ presenting at SAS conferences instead of ‘the real thing’, like the American Educational Research Association or the National Council on Family Relations. One of the main reasons is that I like to be very transparent about how my data were collected, scored and analyzed. I find it odd that these “details” are given short shrift in publications when, in fact, every published conclusion rests on the assumption that these “details” were handled correctly.
Presenting the nuts and bolts of the data cleaning, coding and analysis assures any funding agency or consumer of the research that it was done correctly. Or, if anyone wants to dispute the way I’ve done the analyses, at least it is crystal clear exactly how the data were processed. In most cases, the reader has no idea and is simply taking it on faith that the researcher did everything correctly, which, given some of the bozos I know, is pretty shaky ground.
Once I have corrected any data entry problems, deleted outliers, accurately scored the measures and identified any statistical assumptions that need to be met, I know the data sets are in good shape and I can proceed to the analyses with confidence.
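The actual work behind a checklist like that happens in SAS in my case, but the logic is the same in any language. Here is a minimal, hypothetical Python sketch of those steps; the variable names, the 1 to 5 response scale, and the cutoff values are all invented for illustration, not taken from the Fish Lake or Spirit Lake data:

```python
# Hypothetical sketch of a data-cleaning pass: fix entry errors,
# drop outliers, score the measure, check an assumption.
# All values and thresholds are invented for illustration.
from statistics import mean, stdev

# Invented raw item responses on a 1-5 scale, including an
# impossible data-entry value (999)
raw = [3, 4, 999, 2, 5, 4, 3, 1, 4, 5]

# Step 1: correct data entry problems - here, drop values
# outside the valid 1-5 range
valid = [x for x in raw if 1 <= x <= 5]

# Step 2: delete outliers - here, anything more than 3 SD
# from the mean of the valid responses
m, s = mean(valid), stdev(valid)
clean = [x for x in valid if abs(x - m) <= 3 * s]

# Step 3: score the measure - here, a simple sum of items
total_score = sum(clean)

# Step 4: check an assumption before analysis - here, a
# minimum number of usable observations
ready_for_analysis = len(clean) >= 5
```

Trivial as it looks, every one of these decisions (what counts as a valid value, where the outlier cutoff sits, how the measure is scored) changes the numbers that end up in the results section, which is exactly why they belong in the presentation.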
Think about that next time someone with a turned-up nose says,
“I don’t go to that type of conference.”
Yeah? Well, I do.