Here is the scenario …
1. The researcher reports a return rate more than triple the average for emailed surveys, despite apparently using none of the features associated with higher return rates: incentives for completion, an advance postcard or email announcing the survey and incentives, or email or phone follow-ups to non-respondents. A 2005 study of 1,500 subjects drawn from the same population in the same state, on a similar topic, had a return rate less than one-third of what this researcher claims, even though the 2005 researchers used incentives AND advance notice AND, at four of five sites, emailed or called non-respondents.
2. The completion rate for every item was 97-98%. From the first item on the topic to the 50th, there was no drop-off in responses. There was almost no attrition from the study: over 98% of respondents finished it. The correlation between an item's position in the survey and the number of people who completed it was zero, because virtually every one of the 750+ respondents answered every question.
3. Responses came from three groups: Group 0, Group 1, and Group 2. Over half of the subjects, nearly 400, came from the same location, and at times as many as 9 respondents started the survey at the exact same minute of the same day. In one group, surveys were started ONLY between 10 and 11 in the morning or between 3 and 7:15 in the evening (well, 5% did come in between 7:15 and 9). None of the 141 surveys in that group started at any other time.
4. The quotation attributed to the "typical audience member" is, word for word, identical to one in a report from a study at a different site completed two years ago.
5. The report contains almost no methodological detail: nothing on sampling method, nothing on return rate, nothing on how the online survey was distributed, nothing about missing data, incentives, or follow-up, and only minimal discussion of possible sample bias. The return rate had to be computed from the raw data and a separate report on the number of students surveyed; the missing-data percentage likewise had to be computed from the raw data.
When someone questioned this, the response was:
“The data were reviewed by a statistician who said that he saw no problems.”
Your comments and opinions are eagerly awaited.