Sometimes, you can just eyeball it.
Really, if something truly is an outlier, you ought to be able to spot it. Take this plot, for example.
It should be pretty obvious that the vast majority of our sample for the Fish Lake game were students in grades 4, 5, and 6. Those in the lower grades are clearly exceptions. I don’t know who put 0 as their grade, because I doubt any of our users had no education.
I use these plots especially if I’m explaining why I think certain records should be deleted from a sample. For many people, it seems as if the visual representation makes it clearer that “some of these things don’t belong here.”
Did you know that you can get a plot from PROC FREQ just by adding an option, like so:
PROC FREQ DATA = datasetname ;
TABLES variable / PLOTS=FREQPLOT ;
RUN ;
This will produce the frequency plot seen above, as well as a table for your frequency distribution.
Well, if you didn’t know, now you know.
Previously, I discussed PROC FREQ for checking the validity of your data. Now we are on to data analysis, but, as anyone who does analysis for more than about 23 minutes can tell you, cleaning your data and doing analysis is seldom a two-step process. In fact, it’s more like a loop of two steps, over and over.
First, we have the basics.
PROC FREQ DATA = mydata.quizzes ;
TABLES passed / BINOMIAL ;
RUN ;
This will give me not only the percentage who passed a quiz that they took, but also the 95% confidence limits.
I didn’t have any real justification for hypothesizing any other population value. What proportion of kids should be able to pass a quiz that is ostensibly at their grade level? Half of them – as in, the “average” kid? All of them, since it’s their grade level? I’m sure there are lots of numbers one could want to test.
If you do have a specific proportion, say, 75%, you’d code it like this:
PROC FREQ DATA = in.quizzes ;
TABLES passed / BINOMIAL (P=.75) ;
RUN ;
Note that the P= has to be enclosed in parentheses or you’ll get an error.
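If you are curious where those confidence limits come from, here is a rough sketch in plain Python of the normal-approximation version (SAS also reports exact limits, so the numbers will not match exactly; the 236-of-770 count is back-calculated from the 30.65% reported just below):

```python
import math

def binomial_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence limits for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

def binomial_z(successes, n, p0):
    """z statistic for testing H0: proportion = p0, as with BINOMIAL (P=.75)."""
    p = successes / n
    return (p - p0) / math.sqrt(p0 * (1 - p0) / n)

# roughly 236 of the 770 quizzes were passed (30.65%)
p, lo, hi = binomial_ci(236, 770)
z = binomial_z(236, 770, 0.75)
print(round(p, 4), round(lo, 4), round(hi, 4), round(z, 1))
```

Testing against .75 is hopeless here, of course: a sample proportion of .31 is almost 30 standard errors below .75.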
So, out of the 770 quizzes that were taken by students, only 30.65% of them passed. However, the quizzes aren’t all of equal difficulty, are they? Probably not.
So, my next PROC FREQ is a cross-tabulation of quiz by passed. I don’t need the column percent or percent of total. I just want to know what percent passed or failed each quiz and how many players took that quiz. The way the game is designed, you only need to study and take a quiz if you failed one of the math challenges, so there will be varying numbers of players for each quiz.
PROC FREQ DATA = in.quizzes ;
TABLES quiz*passed / NOCOL NOPERCENT ;
RUN ;
The first variable will be the row variable and the one after the * will be the column variable. Since I’m only interested in the row percent and N, I included the NOCOL and NOPERCENT options to suppress printing of the column and total percentages.
Before I make anything of these statistics, I want to ask myself, what is going on with quiz22 (which actually comes after quiz2) and quiz4? Why did so many students take these two quizzes? I can tell at a glance that it wasn’t a coding error that made it impossible to pass the quiz (my first thought), since over a quarter of the students passed each one.
This leaves me three possibilities:
- The problem before the quiz was difficult for students, so many of them ended up taking the quiz (another PROC FREQ)
- One of the problems in the quiz was coded incorrectly, so some students failed the quiz when they shouldn’t have
- There was a problem with the server repeatedly sending the data that was not picked up in the previous analyses (another PROC FREQ).
Remember what I said at the beginning about data analysis being a loop? So, back to the top!
I’m in the middle of data preparation on a research project on games to teach fractions. This is the part of a data analysis project that takes up 80% of the time. Fortunately, PROC FREQ from SAS can simplify things.
1. How many unique records?
There are multiple quizzes in the game, and you only end up taking a quiz if you miss one of the problems, so knowing how many unique players my 1,000 or so records represent isn’t as simple as dividing the number of records by X, where X is a fixed number of quizzes.
PROC FREQ DATA = mydata.quizzes NLEVELS ;
TABLES username ;
RUN ;
This gives me the number of unique usernames. If you were dying to know, in the quizzes file for Fish Lake it was 163.
2. Are there data entry problems?
We had a problem early in the history of the project where, when the internet was down, the local computer would keep trying to send the data to our server, so we would get 112 of the same record once the connection was back up.
Now, it is very likely that a player might have the same quiz recorded more than once. Failing it the first time, he or she would be redirected to study and then have a chance to try again. Still, a player shouldn’t have TOO many of the same quiz. I thought this problem had been fixed, but I wanted to check.
To check if we had the same quiz an excessive number of times, I simply did this:
PROC FREQ DATA = in.quizzes ;
TABLES username*quiztype / OUT=check (WHERE = (COUNT > 10)) ;
RUN ;
This creates an output data set of those usernames that had the same quiz more than 10 times.
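For anyone following along outside SAS, the same check can be sketched in a few lines of Python with a Counter over (username, quiz) pairs (toy data, made-up usernames):

```python
from collections import Counter

# toy records: one (username, quiztype) pair per quiz record
records = [("ann", "quiz2")] * 3 + [("bob", "quiz4")] * 12 + [("cal", "quiz22")] * 2

counts = Counter(records)
# same idea as OUT=check (WHERE=(COUNT > 10)): keep pairs seen more than 10 times
suspicious = {pair: n for pair, n in counts.items() if n > 10}
print(suspicious)
```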
There were a few of these problems. The question then became how to identify and delete those without deleting the real quizzes. This took me to step 3.
3. The LAG function
The LAG function provides the value from the prior observation. I sorted the data by username, quiz type, number correct, and time, and assumed it would take a minimum of 120 seconds for even the fastest student to complete a study activity and then take a test a second time. Using the code below, I was able to delete all duplicate quizzes that occurred due to dropped internet connections.
proc sort data = check4 ;
by username quiztype numcorrect dt ;
run ;

data check5 ;
set check4 ;
lagu = lag(username) ;
lagq = lag(quiztype) ;
lagn = lag(numcorrect) ;
lagd = lag(dt) ;
if lagu = username & lagq = quiztype & lagn = numcorrect then ddiff = dt - lagd ;
if ddiff ne . & ddiff < 120 then delete ;
run ;
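The same sort-then-compare-to-the-previous-record idea, sketched in Python for anyone following along outside SAS (toy records with made-up values; times are in seconds):

```python
# each record: (username, quiztype, numcorrect, time_in_seconds)
records = [
    ("ann", "quiz2", 5, 1000),
    ("ann", "quiz2", 5, 1030),   # resent 30 seconds later: a server duplicate
    ("ann", "quiz2", 7, 1500),   # different score: a real retake
    ("bob", "quiz4", 3, 900),
]

# sort by user, quiz, score, time, just like the PROC SORT step
records.sort(key=lambda r: (r[0], r[1], r[2], r[3]))

kept = []
for rec in records:
    if kept:
        prev = kept[-1]
        same = prev[:3] == rec[:3]           # same user, quiz, and score
        if same and rec[3] - prev[3] < 120:  # too fast to be a real retake
            continue                         # drop the duplicate
    kept.append(rec)

print(len(kept))  # 3 records survive
```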
Having finished off my data cleaning in record time, I’m now ready to do more PROC FREQ-ing for data analysis – tomorrow.
(Actually, being 12:22 am, I guess it is technically tomorrow now.)
In the past, I have questioned the extent to which we really suck at math in the U.S. While I’m still a bit skeptical that the average child in developing countries is doing dramatically better than children in the U.S., one thing is pretty clear from our results to date, and that is that the average child living in poverty** in the U.S. is doing pretty darn bad when it comes to math.
About a week ago, I discussed the results from a test on fractions given as part of our Fish Lake game evaluation. The pretest score was around 22% correct. Not terribly good.
There were also two questions where children had to explain their answers:
Zoongey Gniw ate 1/3 of a fish. Willow ate 1/3 of a different fish. Zoongey Gniw said that he ate more fish. Willow says that he ate the same amount as she did, because they both ate 1/3 of a fish. Explain to Willow how Zoongey Gniw could be right.
Explain why each of the above figures represents ONE-FOURTH.
Answers were scored 2 points if correct, 1 if partially correct and 0 if incorrect.
Out of 4 points possible, the mean for 260 students in grades 3 through 7 was .42. In other words, they received about 10% of the possible points.
These two questions test knowledge that is supposed to be taught in 3rd grade and 96% of the students we tested were in fourth grade or higher.
PUH-LEASE don’t say,
“Well, those are hard questions. I’m not sure I could explain that.”
If that is the case, feel sad! These are easy questions if you understand basic facts about fractions. “Understand” is the key word in that sentence.
SO many people, including me when I was young, simply memorize facts and repeat them when prompted, like some kind of trained parrot, with no real understanding.
When understanding of mathematics is required, they fail. Yes, some of the items tested under the new Common Core standards are harder. That doesn’t show a failure of the standards or tests, but rather of the students’ knowledge.
This is one of those cases where “teaching to the test” is not a bad idea.
** The reason I limited my statement to children living in poverty is that the schools in our study had from 72% to 98% of their students receiving free lunch. Being a good little statistician, I don’t want to extrapolate beyond the population from which our sample was drawn.
I tell clients on our statistical consulting side all the time that if your conclusion is only valid when you look at one specific subset of your sample, with one particular statistical technique, you don’t really have a conclusion. You need to look for a convergence of results. Does the mean score increase? Does the proportion of people passing a test increase? Do the test scores still increase when you co-vary for the pretest score?
(This is for my friend, Dr. Longie, who tells me I always put too many numbers in things and should get to the point – no matter how we sliced it, the scores of students who played Fish Lake improved over 30% from their pretest. Analysis is continuing on Spirit Lake and other data from Fish Lake. There! Are you happy now?)
We are just at the very beginning stages of analyzing data from the second phase of our research grant funded by the U.S. Department of Agriculture. Coincidentally, we are in Maryland at the National SBIR Conference this week and got the chance to meet in person all of the folks whose email we have been receiving for years.
When we were in the middle of developing and testing Fish Lake, one of the interns in our office asked me,
“Are you sure this is going to work?”
I told her,
“No, I’m not sure. That’s why they call it research.”
School has now ended at all of our test sites and I have just completed cleaning the data for analysis from the first data set, which is the pre- and post-test data for Fish Lake, our game that teaches fractions as your avatar retraces the Ojibwe migration – canoeing, hunting and fishing your way across the continent.
So … what happened?
The first thing I did was compute the mean and standard deviation for the students who completed the pretest and the posttest. Then, I merged the datasets together and did a paired t-test for the 61 students who took both the pre-test and the post-test. I didn’t show you any of those results because I assumed (correctly) that the merge would have to be reviewed, since some people would have misspelled their username on the pretest or posttest. Surprisingly, I only found two of those, as well as one record that was just one of our interns testing the software. The programs that I developed to clean the data (programs presented at a couple of regional SAS software conferences) worked pretty well.
Then, I re-ran the analysis.
Pre-test mean = 22.4%, SD = 16.5%, N = 260
Post-test mean = 30.8%, SD = 17.4%, N = 63
So far, so good. We were not surprised by the low scores on the pretest. We knew that the majority of students in several of our test schools were achieving a year or two below grade level. The improvement from pre-test to post-test of 8.4 percentage points represented an improvement in test scores of 37.5%.
BUT …. what if the students who did not take the post-test were the lower performing students? Shouldn’t we do a pretest and post-test comparison only including matched pairs?
This brings us to ….
Result 2 – With Matched Pairs
Pre-test mean = 23.6%, SD = 17.4%, N = 63
Post-test mean = 30.8%, SD = 17.4%, N = 63
As hypothesized, the students who completed the post-test scored higher on the pretest than the average, but not dramatically so. The difference was still statistically significant (p < .01).
What about outliers? That standard deviation seems awfully high to me. When I look at the raw data, I find five players who have a 0 on the pretest or post-test, and one who had one of the highest scores on the pretest but whose test is blank after the first few questions.
Now, it is possible that those students just knew none of the questions – but it appears they just entered their username and (almost) nothing else. I deleted those 6 records and got this:
Result 3 – Matched Pairs with Outliers Deleted
Pre-test mean = 24.4%, SD = 16.2%, N = 57
Post-test mean = 32.7%, SD = 16.7%, N = 57
With a difference of 8.3 percentage points, this represents an improvement of 34% (p < .001).
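Those improvement percentages are nothing fancier than the gain divided by the pre-test mean; a quick sanity check in Python:

```python
def pct_improvement(pre, post):
    """Gain from pre-test to post-test, as a percentage of the pre-test mean."""
    return (post - pre) / pre * 100

print(round(pct_improvement(22.4, 30.8), 1))  # all post-tests vs. all pretests
print(round(pct_improvement(23.6, 30.8), 1))  # matched pairs
print(round(pct_improvement(24.4, 32.7), 1))  # matched pairs, outliers deleted
```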
Conclusion? Well, we are not even close to a conclusion because we have a LOT more data still to analyze, but what I can say is that the results are looking promising.
I’m preparing a data set for analysis and since the data are scored by SAS I am double-checking to make sure that I coded it correctly. One check is to select out an item and compare the percentage who answered correctly with the mean score for that item. These should be equal since items are scored 0=wrong, 1=correct.
When I look at the output for my PROC MEANS it says that 31% of the respondents answered this item correctly, that is, mean = .310.
However, the correct answer is D, and when I look at the results from my PROC FREQ it shows that 35% of the respondents chose ‘D’.
What is going on here? Is my program to score the tests off somewhere? Will I need to score all of these tests by hand?
I am sure those of you who are SAS gurus thought of the answer already (and if you didn’t, you’re going to be slapping your head when you read the simple solution).
By default, PROC FREQ gives you the percentage of non-missing records. Since many students who did not know the answer to the question left it blank, they were (rightfully) given a zero when the test was automatically scored. To get your FREQ and MEANS results to match, use the MISSING option, like so:
PROC FREQ DATA = in.score ;
TABLES item1 / MISSING ;
RUN ;
You will find that 31% of the total (including those who skipped the question) got the answer right.
Sometimes it’s the simplest things that give you pause.
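If you want to see the pitfall outside SAS, here is a tiny Python sketch with made-up data: the percentage among students who answered the item differs from the percentage of all students, once blanks are scored as zero.

```python
# 1 = correct, 0 = wrong, None = left blank (toy data for 25 students)
answers = [1] * 7 + [0] * 13 + [None] * 5

# what PROC FREQ shows by default: percent of NON-missing records
answered = [a for a in answers if a is not None]
pct_of_answered = sum(answered) / len(answered)

# what PROC MEANS shows once blanks are scored 0: percent of ALL records
scores = [a if a is not None else 0 for a in answers]
pct_of_total = sum(scores) / len(scores)

print(pct_of_answered, pct_of_total)  # 0.35 0.28
```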
Do you have a bunch of sites bookmarked with articles you are going to go back and read later? It’s not just me, is it?
One of my (many) favorite things at SAS Global Forum this year was the app. It included a function for emailing links to papers you found interesting. Perhaps the theory is that you would email these links to your friends to rub it in that their employers did not like them well enough to send them. I emailed links to myself to read when I had time. Finally catching up on coding, email and meetings today, I had a bit of time.
I was reading a paper by Lisa Henley on using a genetic algorithm for variable selection. It’s a really cool and relatively new concept (from the 1970s, compared to 1900 for the Pearson chi-square, for example).
In brief, here is the idea. You have a large number of independent variables. How do you select the best subset? One way to do it is to let the variables fight it out in a form of natural selection.
Let’s say you have 40 variables. Each “chromosome” will have 40 “alleles” that will randomly be coded as 0 or 1, either included in the equation or not.
You estimate the equation with each candidate set of variables included or excluded, and assess each equation based on a criterion, say, the Akaike Information Criterion (AIC) or the Root Mean Square Error (RMSE).
You can select the “winning” chromosome/equation head to head, keeping whichever has the better (lower) AIC or RMSE, although there are other methods of selection, like giving equations with better criterion values a higher probability of surviving.
You do this repeatedly until you have your winning equation. Okay, this is a bit of a simplification but you should get the general idea. I included the link above so you could check out the paper for yourself.
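Here is a stripped-down sketch of that loop in Python, on a made-up problem where the “criterion” simply penalizes leaving out a known-good variable or including a junk one (a real version would fit a model and use AIC or RMSE instead):

```python
import random

random.seed(1)

N_VARS = 10
GOOD = {0, 3, 7}  # pretend these are the variables that truly belong in the model

def fitness(chromosome):
    """Stand-in for AIC/RMSE: lower is better. Count wrong inclusions/exclusions."""
    chosen = {i for i, bit in enumerate(chromosome) if bit}
    return len(GOOD - chosen) + len(chosen - GOOD)

def crossover(a, b):
    """Splice two parent chromosomes at a random cut point."""
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    """Flip each 0/1 allele with a small probability."""
    return [bit ^ (random.random() < rate) for bit in c]

# random starting population of 0/1 "chromosomes"
population = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(30)]

for generation in range(60):
    # survival of the fittest: the better (lower-criterion) half stays
    population.sort(key=fitness)
    survivors = population[:15]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(15)]
    population = survivors + children

best = min(population, key=fitness)
print(best, fitness(best))
```

With the penalty criterion above, the winning chromosome should end up selecting (close to) exactly the “good” variables.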
Then, while I was standing there reading the paper, the ever-brilliant David Pasta walked by and mentioned another paper on the use of a genetic algorithm for model selection that was presented at the Western Users of SAS Software conference a couple of years back.
I don’t have any immediate use for GA in the projects I’m working on at this moment. However, I can’t even begin to count the number of techniques I’ve learned over the years that I had no immediate use for and then two weeks later turned out to be exactly what I needed.
Even though I knew the Genetic Algorithm existed, I wasn’t as familiar with its use in model selection.
You’ll never use what you don’t know – which is a really strong argument for learning as much as you can in your field, whatever it might be.
We’ve looked at data on Body Mass Index (BMI) by race. Now let’s take a look at our sample another way. Instead of using BMI as a variable, let’s use obesity as a dichotomous variable, defined as a BMI greater than 30. It just so happened (really) that this variable was already in the data set so I didn’t even need to create it.
The code is super-simple and shown below. The reserved SAS keywords are capitalized just to make it easier to spot what must remain the same. Let’s look at this line by line.
LIBNAME mydata "/courses/some123/c_1234/" ACCESS=READONLY;
PROC FREQ DATA = mydata.coh602 ;
TABLES race*obese / CHISQ ;
WHERE race NE "" ;
RUN ;
LIBNAME mydata "/courses/some123/c_1234/" ACCESS=READONLY;
Identifies the directory where the data for your course are stored. As a student, you only have read access.
PROC FREQ DATA = mydata.coh602 ;
Begins the frequency procedure, using the data set in the directory linked with mydata in the previous statement.
TABLES race*obese / CHISQ ;
Creates a cross-tabulation of race by obesity; the CHISQ option after the slash produces the second table you see below, of chi-square and other statistics that test the hypothesis of a relationship between two categorical variables.
WHERE race NE "" ;
Only selects those observations where we have a value for race (where race is not missing).
RUN ;
Pretty obvious? Runs the program.
Similar to our ANOVA results previously, we see that the obesity rates for black and Hispanic samples are similar at 35% and 38% while the proportion of the white population that is obese is 25%. These numbers are the percentage for each row. As is standard practice, a 0 for obesity means no, the respondent is not obese and a 1 means yes, the person is obese.
You can see from this that there is a statistically significant relationship between race and obesity. Another way to phrase this might be that the distribution of obesity is not the same across races.
The next three statistics give you the size of the relationship. A value of 1.0 denotes perfect agreement (be suspicious if you find that; it’s more often that you coded something wrong than that everyone of one race is different from everyone of another race). A value of 0 indicates no relationship whatsoever between the two variables. Phi can range from -1 to +1 for a 2x2 table, while Cramer’s V and the contingency coefficient range from 0 to 1. The latter seems more reasonable to me, since what does a “negative” relationship between two categorical variables really mean? Nothing.
From this you can conclude that the relationship between obesity and race is not zero and that it is a fairly small relationship.
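For the curious, those effect-size statistics are all simple functions of the chi-square value and the sample size; here is a sketch in Python with a made-up 2x2 table:

```python
import math

# made-up 2x2 table: rows = two groups, columns = not obese / obese
table = [[30, 70],
         [10, 90]]

n = sum(sum(row) for row in table)
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

# Pearson chi-square: sum of (observed - expected)^2 / expected over the cells
chi_sq = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)

phi = math.sqrt(chi_sq / n)                     # also Cramer's V for a 2x2 table
contingency = math.sqrt(chi_sq / (chi_sq + n))  # contingency coefficient

print(chi_sq, phi, round(contingency, 3))
```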
Next, I’d like to look at the odds ratios and also include some multivariate analyses. However, I’m still sick and some idiot hit my brand new car on the freeway yesterday and sped off, so I am both sick and annoyed. So … I’m going back to bed and discussion of the next analyses will have to wait until tomorrow.
So far, we have looked at
- How to get the sample demographics and descriptive statistics for your dependent and independent variable.
- Computing descriptive statistics by category
Now it’s time to dive into step 3, computing inferential statistics.
The code is quite simple. We need a LIBNAME statement. It will look something like this. The exact path to the data, which is between the quotation marks, will be different for every course. You get that path from your professor.
LIBNAME mydata "/courses/ab1234/c_0001/" ACCESS=READONLY;
DATA example ;
SET mydata.coh602 ;
WHERE race NE "" ;
RUN ;
I’m creating a data set named example. The DATA statement does that.
It is being created as a subset from the coh602 dataset stored in the library referenced by mydata. The SET statement does that.
I’m only including those records where they have a non-missing value for race. The WHERE statement does that.
If you already did that earlier in your program, you don’t need to do it again. Remember, though, that example is a temporary data set (you can tell because it doesn’t have a two-level name like mydata.example). It resides in working memory. Think of it as if you were working on a document and didn’t save it: if you closed the application, your document would be gone. Okay, so much for the data set. Now we are on to … ta da da
Inferential Statistics Using SAS
Let’s start with Analysis of Variance. We’re going to do PROC GLM. GLM stands for General Linear Model. There is a PROC ANOVA also and it works pretty much the same.
PROC GLM DATA = example ;
CLASS race ;
MODEL bmi_p = race ;
MEANS race / TUKEY ;
RUN ;
The CLASS statement is used to identify any categorical variables. Since with Analysis of Variance you are comparing the means of multiple groups, you need at least one CLASS statement with at least one variable that has multiple groups – in this case, race.
MODEL dependent = independent ;
Our model is of bmi_p – that is body mass index, being dependent on race. Your dependent variable MUST be a numeric variable.
The model statement above will result in a test of significance of difference among means and produce an F-statistic.
What does an F-test test?
It tests the null hypothesis that there is NO difference among the means of the groups – in this case, among the three groups: White, Black, and Hispanic. If the null hypothesis is not rejected, you treat all the group means as the same and you can stop.
However, if the null hypothesis is rejected, you certainly also want to know which groups are different from which other groups. After that significant F-test, you need a post hoc test (Latin for “after this”; never say all those years of Catholic school were wasted).
There are a lot to choose from, but for this I used TUKEY. The last statement in the code above requests the post hoc test.
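To demystify the F-statistic a little, here it is computed by hand in Python for three tiny made-up groups (PROC GLM is doing the same sums of squares, just on real data):

```python
# three made-up groups standing in for the White, Black, and Hispanic samples
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

all_values = [x for g in groups for x in g]
grand_mean = sum(all_values) / len(all_values)

# between-groups sum of squares: how far each group mean is from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# within-groups sum of squares: spread of each group around its own mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1
df_within = len(all_values) - len(groups)

# F is the ratio of the two mean squares; R-square is the share of variance explained
f_stat = (ss_between / df_between) / (ss_within / df_within)
r_square = ss_between / (ss_between + ss_within)

print(f_stat, r_square)  # F = 3.0, R-square = 0.5
```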
Let’s take a look at our results.
I have an F-value of 300.10 with a probability < .0001 .
Assuming my alpha level was .05 (or .01, or .001, or .0001), this is statistically significant and I would reject my null hypothesis. The differences between means are probably not zero, based on my F-test, but are they anything substantial?
If I look at the R-square, and I should, it tells me that this model explains 1.55% of the variance in BMI – which is not a lot. The mean BMI for the whole sample is 27.56.
You can see complete results here. Also, that link will probably work better with screen readers, if you’re visually impaired (Yeah, Tina, I put this here for you!).
Next, I want to look at the results of the TUKEY test.
We can see that there was about a 2-point difference between Blacks and Whites, with the mean for Blacks 2 points higher. There was also about a 2-point difference between Whites and Hispanics. The difference in mean BMI between White and Black samples and White and Hispanic samples was statistically significant. The difference between Hispanic and Black sample means was near zero with the mean BMI for Blacks 0.06 points higher than for Hispanics.
This difference is not significant.
So …. we have looked at the difference in Body Mass Index, but is that the best indicator of obesity? According to the World Health Organization, who you’d think would know, obesity is defined as a BMI of greater than 30.
The next step we might want to take is to examine our null hypothesis using a categorical variable: obese or not obese. That is our next analysis, and our next post.
Null hypothesis: There is no difference in obesity among Caucasians, African-Americans and Latinos.
Since my question only pertains to those three groups, let’s begin by creating a data set with just those subjects.
LIBNAME mydata "/courses/ab1234/c_0001/" ACCESS=READONLY;
DATA example ;
SET mydata.coh602 ;
WHERE race NE "" ;
RUN ;
Don’t forget to run the program!
Now, let’s do something new and use something relatively new, the tasks in SAS Studio. On the left screen, click on TASKS, then on STATISTICS, then click DATA EXPLORATION.
Once you click on DATA EXPLORATION, in the right window pane you’ll see several boxes, but the first thing you need to do is select the correct data set. To do that, click on the thing that looks like a sort of spreadsheet.
When you do that, you’ll see the list of libraries available to you. You need to scroll all the way down to the WORK library. This is where temporary data sets that you create are stored. Click on the WORK library to see the list of data sets in it.
Click on the + next to the variables and you’ll get a list of variables from which you can select. Scroll down and select the variable you want. First, as shown above, I selected RACE for the classification variable, and then bmi_p as the analysis variable.
This gives me a chart, and it appears that whites have a lower body mass index than black or Latino respondents in this survey.
My next analysis is to do the summary statistics. I simply click on SUMMARY STATISTICS under the statistics tab (it’s right under data exploration) and select the same two variables. You can click here to see the results. Mean BMI for both the black and Hispanic samples was 29, while for whites it was 27. Standard deviations for the three groups ranged from 5.7 to 6.9 which was actually less than I expected.
So, there are differences in body mass index by race/ethnicity, but that leaves a few questions:
- Do those differences persist when you control for age and gender?
- While there are differences in body mass index, that doesn’t necessarily mean more people are obese. Maybe there are more underweight white people. Hey, it’s possible.
Well, now you have a chart and a table to add to the table you created in the first analyses. In the next post, we can move on to those other questions.