Where we left off, the reliability was unacceptably low for our measure to assess students' knowledge of multiplication, division and other third- and fourth-grade math standards. We were sad.

One person, whose picture I have replaced with the mother from our game, Spirit Lake, so she can remain anonymous, said to me:

But there is nothing we can do about it, right? I mean, how can you stop kids from guessing?

mother from game

This was the wrong question. What we know about the measure could be summarized as this:

  1. Students in many low-performing schools were even further below grade level than we or the staff in their districts had anticipated. This is known as new and useful knowledge, because it helps to develop appropriate educational technology for these students. (Thanks to USDA Small Business Innovation Research funds for enabling this research.) 
  2. Because students did not know many of the answers, they often guessed at the correct answer.
  3. Because the questions were multiple choice, usually A-D, the students had a 25% probability of getting the correct answer just by chance, which introduced a significant amount of error when nearly all of the students were guessing on the more difficult items.
  4. Three-fourths of the test items were below the fifth-grade level. In other words, a seventh-grader who answered correctly only the items three or more years below grade level would still have scored 75% – generally, a C.

There are actually two ways to address this and we did both of them. The first is to give the test to students who are more likely to know the answers, so less guessing occurs. We did this, administering the test to an additional 376 students in low-performing schools in grades four through eight. While the test scores were significantly higher (a mean of 53% as opposed to a mean of 37% for the younger students), they were still low. The larger sample had a much higher reliability of .87. Hopefully, you remember from your basic statistics that restriction of range attenuates the correlation. By increasing the range of scores, we increased our reliability.

The second thing we did was remove the probability of guessing correctly by changing almost all of the multiple choice questions into open-ended ones. There were a few where this was not possible, such as which of four graphs shows students liked eggs more than bacon. We administered this test to 140 seventh-graders. The reliability, again, was much higher: .86.

However, did we really solve the problem? After all, these students also were more likely to know (or at least, think they knew, but that’s another blog) the answer. The mean went up from 37% to 46%. 

To see whether the change in item type was effective for lower-performing students, we selected a sub-sample of third- and fourth-graders from the second wave of testing. With this sample, we were able to see that reliability did improve substantially, from .57 to .71. However, when we removed four outliers (students who received a score of 0), reliability dropped back down to .47.
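If you are curious what that sub-sample check might look like in SAS, here is a rough sketch. The data set name, the grade variable and the total score variable are all my assumptions, and the item list mirrors the PROC CORR code that appears further down this page.

PROC CORR DATA = mydataset ALPHA NOCORR ;
WHERE grade IN (3, 4) ; * add AND total_score > 0 to the WHERE clause to drop the zero scorers ;
VAR item1 - item24 ;
RUN ;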

What does this tell us? Depressingly, and this is a subject for a whole bunch of posts, that a test at or near their stated ‘grade level’ is going to have a floor effect for the average student in a low-performing school. That is, most of the students are going to score near the bottom.

It also tells us that curriculum needs to start AT LEAST two or three years below the students’ ostensible grade level so that they can be taught the prerequisite math skills they don’t know. This, too, is the subject for a lot of blog posts. 

If you’re a teacher (or parent) and you’d like students to take the test for practice, you can see it here

—-

For schools who use our games, we provide automated scoring and data analysis. If you are one of those schools and you’d like a report generated for your school, just let us know. There is no additional charge.

Last post I wrote a little about local norms versus national norms and gave the example of how the best-performing student in the area can still be below grade level.

Today, I want to talk a little about tests. As I mentioned previously, when we conducted the pretest prior to students playing our game, Spirit Lake, the average student scored 37% on a test of mathematics standards for grades 2-5. These were questions that required them to, say, subtract one three-digit number from another or multiply two one-digit numbers.

Originally, we had written our tests to model the state standardized tests which, at the time, were multiple choice. This ended up presenting quite a problem. Here is a bit of test theory for you. A test score is made up of two parts – true score variance and error variance.

True score variance exists when Bob gets an answer right and Fred gets it wrong because Bob really knows more math (and the correct answer) compared to Fred.

Error variance occurs when, for some reason, Bob gets the answer right and Fred gets it wrong even though there really is no difference between the two. That is, the variance between Fred and Bob is an error.  (If you want to be picky about it, you would say it was actually the variance from the mean was an error, but just hush.)

How could this happen? Well, the most likely explanation is that Bob guessed and happened to get lucky. (It could happen for other reasons – Fred really knew the answer but misread the question, etc.)

If very little guessing occurs on a test, or if guesses have very little chance of being correct, then you don’t have to worry too much.

However, the test we used initially had four multiple-choice options for each question. The odds of guessing correctly were 1 in 4, that is, 25%. Because students turned out to be substantially further below grade level than we had anticipated, they did a LOT of guessing. In fact, for several of the items, the percentage of correct responses was close to the 25% students would get from randomly guessing.

When we computed the internal consistency reliability coefficient (Cronbach alpha) which measures the degree to which items in a test correlate with one another, it was a measly .57. In case you are wondering, no, this is not good. It shows a relatively high degree of error variance. So, we were sad.

SAS CODE FOR COMPUTING ALPHA

PROC CORR DATA = mydataset NOCORR ALPHA ;
VAR item1 - item24 ;
RUN ;

 

The very simple code above will give you coefficient alpha as well as the descriptive statistics for each item. Since we very wisely scored our items 0 = wrong, 1 = right, a mean of, say, .22 would indicate that only 22% of students answered that item correctly.

To find out how we fixed this, read the next post.

To buy our games or donate one to a school, click here. Evaluated and developed based on actual data. How about that? Learn fractions, multiplication, statistics – take your pick!

Fish Lake woman

I hate the concept of those books with titles like “something or other for dummies” or “idiot’s guide to whatever” because of the implication that if you don’t know microbiology or how to create a bonsai tree or take out your own appendix you must be a moron. I once had a student ask me if there was a structural equation modeling for dummies book. I told her that if you are doing structural equation modeling, you’re no dummy. I’m assuming you’re no dummy, and I felt like doing some posts on standardized testing without the jargon.

I haven’t been blogging about data analysis and programming lately because I have been doing so much of it. One project I completed recently was analysis of data from a multi-year pilot of our game, Spirit Lake. 

Buffalo

Before playing the game, students took a test to assess their mathematics achievement. Initially, we created a test that modeled the state standardized tests administered during the previous year, which were multiple choice. We knew that students in the schools were performing below grade level, but how far below surprised both us and the school personnel. A sample of 93 students in grades 4 and 5 took a test that measured math standards for grades 2 through 5. The mean score was 37%. The highest score was 63%.

Think about this for a minute in terms of local and national norms. The student, let’s call him Bob, who received a 63% was the highest among students from two different schools across multiple classes. (These were small, rural schools.) So, Bob would be the ‘smartest’ kid in the area. With a standard deviation of 13%, Bob scored two standard deviations above the mean.

Let’s look at it from a different perspective, though. Bob, a fifth-grader, took a test where three-fourths of the questions were at least a year, if not two or three, below his current grade level, and barely achieved a passing score. Compared to his local norm, Bob is a frigging genius. Compared to national norms, he’s none too bright. I actually met Bob and he is a very intelligent boy, but when most of his class still doesn’t know their multiplication tables, it’s hard for the teacher to find time to teach Bob decimals, and really, why should she worry, he’s acing every test. Of course, the class tests are a year below what should be his grade level.

One advantage of standardized testing is that if every student in your school or district is performing below grade level, it allows you to recognize the severity of the problem and not think, “Oh, Bob is doing great.”

He wouldn’t be the first student I knew who went from a ‘gifted’ program in one community to a remedial program when he moved to a new, more affluent school.

Get Fish Lake here (yes, that’s another game) before it is released on Steam next month! Learn fractions, canoe rapids, spear fish. Buy for yourself or donate to a school for under ten bucks!


Occasionally, when I am teaching about a topic like repeated measures Analysis of Variance, a brave student will raise a hand and ask,

Seriously, professor, WHEN will I ever use this?

The aspiring director of a library, clinic, afterschool program, etc. does not see how statistics apply to conducting an outreach campaign or HIV screening or running a recreational program for youth – or whatever one of hundreds of other good causes that students intend to pursue with their graduate degrees. Honestly, they often look at the required research methods and statistics courses as a waste of time mandated for some unknown reason by the university, probably to keep professors employed. Often, they will find a way to do a dissertation using only qualitative analysis and never think about statistics again.

This is a huge mistake.

For all of those people who say, “I never used statistics in my career”, I would answer, “well, I never used French in my career either and you know why – because I never learned it very well.”

Now, those people who don’t see a real use for French probably aren’t convinced. However, to me, it’s pretty evident that if I could speak French I could be making games in both French and English.

Actually, statistics can answer the very most important question in any social program – does it work?

So, I had written a couple of blog posts about the presentation I gave at SACNAS (Society for the Advancement of Chicanos and Native Americans in Science) where I discussed using statistics to identify the need for intervention in mathematics for students prior to middle school. I also gave examples of teaching statistics concepts in games.

The question is, did these games work for increasing student scores?

For this – surprise! Surprise! Drumroll – – – we used repeated measures Analysis of Variance. If you look at the graph below you can see that the students who played the games improved substantially more from pretest to posttest than the students in the control group.

graph showing increase from pretest to posttest

This was a relatively small sample because it was our first pilot study, conducted in two small rural schools that also happen to have very high rates of mobility and absenteeism, so we were only able to obtain complete data from 58 students.

Now, the results look impressive, but were these differences larger than one would expect by chance with four groups (two grades from each school) of a fairly small size?

Well, when we look at the ANOVA results, we see that the time by school interaction, which tests whether one school changed more over time than the other, is quite significant (F = 7.13, p = .01). Yes, the p value equaled exactly .0100.

The time by school by grade three-way interaction was not significant. It’s worth noting that the fifth grade at the intervention school had less time playing the game for logistical reasons – they had to schedule the computer lab as opposed to playing in their classroom, and because their class was scheduled later in the day, they sometimes missed playing the game altogether when school was let out early due to weather.
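For anyone wondering what this analysis looks like in SAS, here is a minimal sketch using PROC GLM with a REPEATED statement. The data set and variable names are my assumptions, not the actual study files; the REPEATED statement produces the time, time by school, time by grade and time by school by grade tests discussed above.

PROC GLM DATA = pilot_scores ;
CLASS school grade ;
MODEL pretest posttest = school grade school*grade ;
REPEATED time 2 / SUMMARY ;
RUN ;
QUIT ;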

One way that I could reanalyze these data – and I will – would be to look at it not by grade but by time spent playing. So, instead of four groups, I would have three: those who did not play the game at all (in other words, the control group), those who played it less than the recommended amount and those who played it the recommended amount.
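A sketch of how those three groups might be created in a DATA step follows. The data set name, the variable names and the cut-off value are all placeholders, since the actual recommended amount of play isn't specified here.

DATA regrouped ;
SET pilot_scores ;
LENGTH exposure $ 20 ;
IF group = "control" THEN exposure = "none" ;
ELSE IF sessions_played < 6 THEN exposure = "below recommended" ;
ELSE exposure = "recommended" ;
RUN ;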

My point is that repeated measures ANOVA is just one of the many statistical techniques that can answer the most important questions in social programs – whether something works and under what conditions it works best. There’s also the question of who it works best for – and statistics can answer that too.

So, my answer to the student who questions if he or she will ever use this is, “if you’re smart you will.”


Check out the games from which these data were collected. They are effective now and we are in the middle of a major update. Did I mention that you get free updates for life with all of our games?

For all of those who have asked us if these data are going to be published, the answer is yes, we have two articles in press that should come out in 2017.

We are working on more in our copious spare time that we do not have, but right now we are focusing on game updates and on our new, free iPad game, Making Camp.


Choke and arm bar

Yes, I am choking a blind lady but she’s a black belt and on the US Paralympic team so I don’t feel sorry for her

I will be the first to admit that I’m not the warm fuzzy type. Maybe you’re like me, you’d like to do good for your community but you just can’t see yourself as a physician.

Maybe your bedside manner is to snap at someone to quit being a whiner.

Or maybe you really are a sweet, kind person but you are not very extroverted. You just can’t see yourself looking someone in the eye and asking them to tell you about their problems at home. Perhaps you really genuinely care about children in your community and really would like to help them succeed in school, but the thought of speaking in front of 30 people makes you break out into a cold sweat – even if the 30 people are all under 13 years old.

Maybe, like me, you really like math. To be specific, maybe you really like analyzing data, looking for correlations, inspecting distributions. Maybe, you really like programming. Or that’s what we called it in my day – now all the cool kids call it coding.

Does that mean that we are condemned to be a bunch of Silicon Valley dwelling, Soylent swigging, soulless drones with nothing to keep us warm at night but our stock options? In fact, quite the opposite! These last few years I have been having a lot of fun working with statistics in two very different ways.

First of all, I’ve been working with our team at 7 Generation Games to make adventure games that teach statistical concepts.

Let me give you an example. Some items are more valuable than others. Why? Try to figure it out by looking at this distribution.

graphs showing frequency of different goods

Players can click on this interactive graph for help reading it. They have a sentence written with blanks to fill in to model academic language.

Once a student answers one or two questions in the game correctly, the reward is being able to play a related game – in this case, collecting items in the jungle. As you might guess, the more common items are worth less in the game.

jungle game

Here is a second example. Below, we have a section of our 3D game where the player is building a pyramid.

pyramid

To build your pyramid fast enough that the Emperor doesn’t decide to chop off your head, you want to get stronger than average workers. What is an easy way to determine if you have stronger than average workers? Find the median!

explanation on finding median

Players can also click a button to switch the page to an explanation in Spanish.

explanation of median in Spanish

Just because we were all out last night at the Latino Tech meet-up to celebrate Hispanic Heritage Month, don’t assume everything we make is focused on Latinos.

our staff

Here is yet another example of teaching statistics in a game, this one re-tracing the Ojibwe migration.

question on averages from Forgotten trail

In this case, the player computes an average to figure out how many miles need to be walked per day to get to the end of the trail in eight days.
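(To make up numbers purely for illustration: a player with 120 miles left would need to average 120 ÷ 8 = 15 miles per day.)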

Get this question right and you can play the next level, where you canoe down the river to meet up with your old uncle who will – surprise – pose another statistics problem before you can move on to the next level.

 Girl in canoe

So, there you have it! You can apply your knowledge of statistics to create adventure video games that teach students. As you can see, you can also apply knowledge of programming to meet the special needs of students, whether it is to have a page read to them (did you notice the read-it-to-me button in the page above?) or to have it translated into a second language.

I’ll bet that you thought I was going to talk about using statistics to evaluate whether the games worked. That is a post for another day.

_______

You can buy Forgotten Trail now. Only $4.99. Yep, under five bucks. Runs on Mac, Windows or Chromebook.

Buy our games

To be honest, when I first began studying statistics, social justice never entered the equation. Like most people in America, I think, I was concerned about problems like crime, poverty and the low educational attainment of minority groups. Like most people, my concern didn’t translate into much actual effort on my part.

No, I took my first statistics course because it seemed really interesting. I made a C+ in it because it was Monday, Wednesday and Friday afternoons and the fraternity parties started on Friday afternoon, so I missed every third class. When later research of mine showed a negative correlation between absenteeism and grades, I was not surprised. I had personal experience. I mention this, too, because I have seen too many women and minority students discouraged from science and technical fields when they were not at the top of the class right away.

It’s a very long journey from my first statistics class in 1978 to now. Although I started learning statistics because I was just very interested in what I would call “messing around with data” and I like programming a lot, along the way I learned something interesting.

A lot of money is allocated based on statistics. Maybe not directly so, not very often does someone say to you,

“That’s a very interesting statistic. Have $9 million!”

Statistics do come into play. About a quarter of a century ago, I realized that grant money often did not go to the programs where the funds were needed most or the staff were most effective. No, it went to the programs that were best at writing grant proposals. These proposals included statistics on needs assessment and evaluation of prior efforts. Often, people who were really good at helping low-income students raise their academic achievement or getting people with substance abuse disorders off of drugs were nowhere near as good at writing grant proposals.

For all of those proposals, statistics were required. What proportion of students in the target schools are achieving below grade level? What was the distribution of test scores of students in the previous three years and how does that compare to the state or national average? What evidence is there that the proposed program for academic enrichment will have any impact on the students at all?

Often, my very well meaning colleagues disagreed with the necessity for this type of analysis, even while they appreciated me doing it and made good use of the grant funds. Their point of view was that they knew what worked in their classrooms or clinics.

Personally, I feel that if all I had done in my career was bring tens of millions of dollars in grant money to programs that apply those funds to do good in their communities, that would have been a satisfactory accomplishment. However, I’d like to argue that I did a little bit more good than that, because I disagree with some of my esteemed colleagues that “I know it when I see it” is adequate for determining program effectiveness.

I can give you many, many reasons why statistics are essential. First of all, something I have seen over and over in my career is that what gets measured gets done. If you are measuring the number of tutoring sessions, or the number of times students play your games, or the duration of those sessions, that allows you to correlate the “dose” of treatment your students received with the “response” in terms of increased achievement.
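As a minimal sketch of what that dose-response check might look like in SAS (the data set and variable names here are placeholders, not from any actual program):

PROC CORR DATA = program_data ;
VAR sessions_attended ;
WITH achievement_gain ;
RUN ;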

Many times, I have seen programs that were initially judged ineffective because everyone who came through the door was lumped together, whether they were seen 10 times, once or not at all, having left before they ever saw a tutor, counselor or whatever. Tracking your interaction with people allows you to determine whether you are effective for people who spend some substantial amount of time with your program. It also lets you tell what percentage of people fall through the cracks – that is, who come in, fill out a form to be part of your program and then drop out almost immediately.

In brief, effective application of statistics can not only help you obtain money but also see that money from federal agencies, foundations, etc. is intelligently applied.

If you are interested, I will be speaking at the Society for the Advancement of Chicanos and Native Americans in Science annual conference in Long Beach on Saturday, on Discovery and Societal Impact with Statistical Science. You can come to hear much more on this topic (or just read my next blog post).

———

Check out our latest game, which we will soon be using to collect data. You can download Making Camp free for your iPad. Play with your children, hand them your iPad to do something productive or take a little break yourself (you deserve it).


If I were to give one piece of advice to a would-be program evaluator, it would be to get to know your data so intimately it’s almost immoral.

Generally, program evaluation is an activity undertaken by someone with a degree of expertise in research methods and statistics (hopefully!) using data gathered and entered by people whose interest is something completely different, from providing mental health services to educating students.

Because their interest in providing data is minimal, your interest in checking that data better be maximal. Let’s head on with the data from the last post. We have now created two data sets that have the same variable formats so we are good to go with concatenating them.
DATA answers hmph;
SET fl_answers ansfix1 ;
IF username IN("UNDEFINED","UNKNOWN") or INDEX(username,"TEST") > 0 THEN OUTPUT hmph;
ELSE OUTPUT answers;
RUN;

PRO TIP: I learned from a wise man years ago that one should not just gleefully delete data without looking at it. That is, instead of having a data set where you put the data you expect and deleting the rest, send the unwanted data to a data set. If it turns out to be what you expected, you can always delete the data after you look at it.

There should be very few people with a username of ‘UNDEFINED’ or ‘UNKNOWN’. The only way to get that is to be one of our developers who are entering the data in forms as they create and test them, not by logging in and playing the game. The INDEX function searches the variable in the first argument for the string given in the second argument and returns the starting position of that string, if found. So, INDEX(username, "TEST") > 0 looks for the word TEST anywhere in the username.
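For example, with made-up usernames, INDEX("MY_TEST_USER", "TEST") returns 4, because the string starts at the fourth character, while INDEX("grey_bear", "TEST") returns 0.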

Since we ask our software testers to put that word in the username they pick, it should delete all of the tester records. I looked at the hmph data set and the distribution of usernames was just as I expected and most of the usernames were in the answers data set with valid usernames.

Did you remember that we had concatenated the data set from the old server and the new server?

I hope you did, because if you didn’t, you will end up with a whole lot of the same answers in there twice.

Getting rid of the duplicates

PROC SORT DATA = answers OUT=in.all_fl_answers NODUP ;
by username date_entered ;

The difference between NODUP and NODUPKEY is relevant here. It is possible we could have a student with the same username and date_entered because different schools could have assigned students the same username. (We do our lookups by username + school). Some other student with the same username might have been entering data at the same time in a completely different part of the country. The NODUP option only removes records if every value of every variable is the same. The NODUPKEY removes them if the variables in the BY statement are duplicates.
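For contrast, here is a sketch of what the NODUPKEY version would look like. I am sending it to a differently named output data set (my own name choice) so nothing gets overwritten.

PROC SORT DATA = answers OUT = one_per_key NODUPKEY ;
BY username date_entered ;
RUN ;

With NODUPKEY, only the first record for each username and date_entered combination is kept, even if other variables differ, which, as explained above, is not what we want here.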

All righty then, we have the cleaned up answers data, now we go back and create a summary data set as explained in this post. You don’t have to do it with SAS Enterprise Guide as I did there, I just did it for the same reason I do most things, the hell of it.

MERGING THE DATA

PROC SORT DATA = in.answers_summary ;
BY username ;

PROC SORT DATA = in.all_fl_students ;
BY username ;

DATA in.answers_studunc odd ;
MERGE in.answers_summary (IN=a) in.all_fl_students (IN=b) ;
IF a AND b THEN OUTPUT in.answers_studunc ;
IF a AND NOT b THEN OUTPUT odd ;
RUN ;

The PROC SORT steps sort. The MERGE statement merges. The IN= option creates a temporary variable with the name ‘a’ or ‘b’. You can use any name so I use short ones.  If there is a record in both the student record file and the answers summary file then the data is output to a data set of all students with summary of answers.

There should not be any cases where there are answers but no record in the student file. If you recall, that is what set me off on finding that some were still being written to the old server.

LOOK AT YOUR LOG FILE!

There is a sad corner of statistical purgatory for people who don’t look at their log files because they don’t know what they are looking for. ‘Nuff said.

This looks exactly as it should. A consistent finding in pilot studies of assessment with educational games has been a disconcertingly low level of persistence, so it is expected that many players quit when they come to the first math questions. The fact that, of the 875 players, slightly fewer than 600 had answered any questions was somewhat expected. Also as expected, there were no records with answers but no matching student record – the 'odd' data set is empty.

NOTE: There were 596 observations read from the data set IN.ANSWERS_SUMMARY.
NOTE: There were 875 observations read from the data set IN.ALL_FL_STUDENTS.
NOTE: The data set IN.ANSWERS_STUDUNC has 596 observations and 11 variables.
NOTE: The data set WORK.ODD has 0 observations and 11 variables.

So, now, after several blog posts, we have a data set ready for analysis ….. almost.


Want to see these data at the source?

Check out our game, playable on Mac or Windows. Download Spirit Lake or Fish Lake  to play, or for Forgotten Trail, just click on the link provided, no download required.

Mom and kid

You can also donate a copy of the game to a school or give as a gift.

Further Reading

For more on SAS character functions check out Ron Cody’s paper An Introduction to Character Functions, an oldie but goodie from WUSS back in 2003.

Or you could read my last post!

This paper by Britta Kelsey from SAS Users Group International in 2005 will tell you more than you want to know about NODUP and NODUPKEY.

Occasionally, a brave student will ask me,

When will I ever use this?

The “this” can be anything from a mixed model analysis to nested arrays. (I have answers for both of those, by the way.)

I NEVER get that question when discussing topics like filtering data, whether for records or variables, because it is so damn ubiquitous.

computer in a field

Before I headed out to be, literally, testing in the field (you can read why here), I was working on an evaluation of the usability of one of our games, Fish Lake.

I had expected to find a correlation between performance and persistence but it didn’t quite turn out that way because the players who had 100% of the problems correct skewed the results.

My next thought was that many students played the game for a very short time, got the first answer correct and then quit. I decided to take a closer look at those people.

First step: from the top menu select TASKS, then DATA, then FILTER AND SORT

filter and sort

Second step: Create the filter. Click on the FILTER tab, select from the drop-down menu the variable to use to filter, in this case the one named “correct_Mean”, select the type of filter in the next drop-down menu, in this case EQUAL TO, and in the box, enter the value you want it to equal. If you don’t remember all of the values you want, clicking on the three dots next to that box will bring up a list of values. You can also filter by more than one variable, but in this case, I only want one, so I’m done.

Create filter

Third step:  Select the variables. Steps two and three don’t have to be done in a particular order, but you DO have to select variables or your procedure won’t run, since it would end up with an empty data set. I do the filter first so I don’t forget. I know the filter is the whole point and you’re probably thinking you’d never forget that but you’re probably smarter than me or never rushed.

Selecting variables

If you click the double arrows in the middle, that will select all of the variables.  In this case, I just selected the two variables I wanted and clicked the single arrow (the top one) to move those over.

Why include correct_mean, since obviously that is a constant?

Because I could have made a mistake somewhere and these aren’t all with 100% correct. (Turns out, I didn’t and they were, but you never know in advance if you made a mistake because if you did then you wouldn’t make it.)

I click OK and now I have created a data set of just the people who answered 100% correctly.
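If you prefer code to menus, a rough equivalent of that filter in a DATA step would look something like this. The output data set name is my own choice, and I am assuming the summary data set lives in the same library as in the earlier posts.

DATA perfect_scores ;
SET in.answers_summary ;
WHERE correct_mean = 1 ;
KEEP username correct_N correct_mean ;
RUN ;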

For a first look, I graphed the frequency distribution of the number of questions answered by these perfect scorers.  To do this,

  1. Go to TASKS > GRAPH > Bar Chart

bar chart menu to select type of graph

2. Click on the first chart to select it, that’s a simple vertical bar chart

data menu
3. Click on the DATA tab and drag correct_N under column to chart

appearance option

4. Under APPEARANCE click the box next to SPECIFY NUMBER OF BARS. The default here is one bar for each unique data value, which is already clicked. Caution with this if you might have hundreds of values, but I happen to know the max is less than 20.
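If you would rather produce the same chart in code, a sketch using PROC SGPLOT follows. Here 'perfect_scores' is just my assumed name for the filtered data set; by default, VBAR gives one bar per distinct value of correct_N, the same as the option described in step 4.

PROC SGPLOT DATA = perfect_scores ;
VBAR correct_N ;
RUN ;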

bar chart of number of answers

I thought I’d find a bunch who answered one question and a few who answered all of the questions, and maybe those few were data entry errors, say teachers who tested the game and shouldn’t be in the database. When I look at this graph, I’m surprised. There are a lot more people who answered 100% correctly than I expected and they are distributed a lot more across the number of questions than I expected. That’s the fun of exploratory data analysis. You never know what you are going to find.

SO, now what?

 


Want to see the game that generated these data? Canoe rapids, catch fish and learn fractions.

Fish lake splash screen

Runs on Mac and Windows.


So, now what?

I want to find out more about the relationship among persistence and performance. To do this, I’m going to need to merge the answers summary data set with demographics.

I’m going to go back to the Summary Data Set I created in the last post (remember that one) and just filter variables this time, keeping all of the records.

Again, I’m going to go to the TASKS menu, select DATA then FILTER AND SORT, this time, I’m going to have no filter and select the variables.

Since the pop-up window opens with the VARIABLES tab selected, I just click the variables I want, which happens to be “correct_N”,” correct_mean” and “username”, click the single arrow in between the panes to move them over, and click OK at the bottom of the pop-up window. Done! My data set is created.

variables selected

You can always click on PROGRAM from the main menu to write code in SAS Enterprise Guide, but being an old dinosaur type, I’d like to export this data set I just created and do some programming with it using SAS. Personally, I find it easier to write code when I’m doing a lot of merging and data analysis. I find Enterprise Guide to be good for the quick looks and graphics but for more detailed analysis, the old timey SAS Editor is my preference.  If you happen to be like me, all you need to do to output your data set is click on it in the process flow and select EXPORT.

export file option

You want to export this file as a stand-alone data set, not as a step in a project. Just select the first option and you can save it like any file, select the folder you want, give it the name you want. No LIBNAME statement required.

And it’s a beautiful sunny day in Santa Monica, so that’s it on this project for today.

—–

In the last post, I used SAS Enterprise Guide to filter out a couple of ‘bad’ records that came from test data, then I created a summary table of the number of questions answered and the percentage correct. Then, I calculated the mean percentage correct for the group, which came out to around 84%. That seemed a bit high to me.

Having (temporarily) answered the first question regarding the number of individual subjects and the average percent of correct answers from the 424 subjects, I turned to the next question:

Is there a correlation between percentage correct and the number of questions attempted? That is, do students who are getting the answers correct persist more often?

Since I had both variables, N and the mean correct (which, since the answers were scored 0 = incorrect, 1 = correct, gave me the percentage correct), from the summary tables I had created in the previous step, it was a simple procedure to compute the correlation.

I just went to the TASKS menu, selected MULTIVARIATE and then CORRELATIONS

Selection menu for correlations

Under ANALYSIS VARIABLES, I dragged correct_N, the N for the ‘correct’ variable – the variable that holds whether the student answered correctly, 0 (= no) or 1 (= yes) – so correct_N is the number of questions each student attempted. Under CORRELATE WITH, I dragged correct_mean, which has the percentage each student answered correctly.

Variables selected for correlation

Since it is just a bivariate correlation and the correlation of X with Y = the correlation of Y with X, it would make absolutely no difference if I switched the spots where I dragged the two variables.
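For the code-inclined, the equivalent of this point-and-click correlation is a short PROC CORR step. This is a sketch, with the summary data set name assumed from the earlier posts.

PROC CORR DATA = in.answers_summary ;
VAR correct_N ;
WITH correct_mean ;
RUN ;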

I click run and I get a somewhat unexpected result, you can see here, with a correlation of -.07.

I also note that the minimum number of answers attempted is 1. Now, I have done (and published) analyses of these data elsewhere, as this is an on-going project.

 


Other analyses from this same project can be found in:

Telling Stories with Your Data and

Yes, PROC FREQ Does That!


Because of these analyses of ‘Fidelity of Implementation’ – that is, the degree to which a project is implemented as planned – I am pretty sure that these data include a large proportion of students who only had the opportunity to play the game once.

So … I decided to run a scatter plot and check my suspicion. This is pretty simple. I just go to the TASKS menu and select GRAPH then SCATTER PLOT.

I selected 2-D Scatter Plot

2D scatter plot selected

Then, I clicked on the DATA tab, dragged correct_Mean under Horizontal and Correct_N under Vertical, then clicked RUN.
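The same plot can be produced in code with PROC SGPLOT. Again, this is just a sketch, with the data set name assumed.

PROC SGPLOT DATA = in.answers_summary ;
SCATTER X = correct_mean Y = correct_N ;
RUN ;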

Data window for scatter plot

This produced the graph below.

scatter plot of percentage correct by number of problems attempted

Now, this graph isn’t fancy but it serves its purpose, which is to show me that there IS in fact a correlation of mean correct and the number of problems attempted. Look at that graph a minute and tell me that you don’t see a linear trend – but it is pulled off by the line of 1.0 at the far end.

This did NOT fit my preconceived notion, though, that the lack of correlation was due to the players who played once, and so there would be a bunch of people who had answered 1 or 2 questions and got 100% of them correct. Actually, those 100-percenters were all over the distribution in terms of number of problems attempted.

This reminds me of a great quote by Isaac Asimov,

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny …’

Well, we shall see, as our analysis continues …

 


Want to see these data at the source?

Check out our game, playable on Mac or Windows. Download Spirit Lake or Fish Lake  to play, or for Forgotten Trail, just click on the link provided, no download required.

Mom and kid

You can also follow the link above to donate a copy of the game to a school or give as a gift.

 

 

The government is extremely fond of amassing great quantities of statistics. These are raised to the nth degree, the cube roots are extracted, and the results are arranged into elaborate and impressive displays. What must be kept ever in mind, however, is that in every case, the figures are first put down by a village watchman, and he puts down anything he damn well pleases.
Josiah Stamp

Any time you do anything with any data, your first step is to consider the wisdom of Sir Josiah Stamp and check the validity of your data. One quick first step is using the Summary Tables task from SAS Enterprise Guide. If you are not familiar with SAS Enterprise Guide, it is a menu-driven application for using SAS for data analysis. You can open a program window and write code if you like, and I do that every now and then, but that’s another post. In my experience, SAS Enterprise Guide works much better with smaller data sets – defined by me, as the blog owner, as less than 400,000 records or so. Your mileage may vary depending upon your system.

How to do it:

  1. Open SAS Enterprise Guide
  2. Open your data set – (FILE > OPEN > DATA)
  3. From the TASKS menu, select DESCRIBE and then SUMMARY TABLES. The window below will pop up
  4. Drag the variables to the roles you want for each. Since I have fewer than 450 usernames here, I just quickly want to see whether there are duplicates or errors (e.g. ‘gret bear’ is really the same kid as ‘grey bear’, with a typo). I also want to find out the number of problems each student attempted and the percent correct. So, I drag ‘username’ under CLASSIFICATION VARIABLES and ‘correct’ under ANALYSIS VARIABLES. You can have more than one of each, but it just so happens I only have one classification and one analysis variable I’m interested in right now.

window with options for data

 

5. Next click on the tab at left that says SUMMARY TABLES and drag your variables and statistics where you want them. I want ‘username’ as the row, so I drag it to the side, ‘correct’ as the column, N is already filled in as a statistic if you drag your classification variable to the table first. I also want the mean, so I drag that next to the N. Then, click RUN.

summary tables tab with statistics selected

Wait a minute! Didn’t I say I wanted the percent correct for each student? Why would I select mean instead of percent?

Because pctN would simply tell me what percent of the total N the responses from this username make up. I don’t want that. Since the answers are scored 0 = wrong, 1 = right, the mean will tell me what percentage of the questions were answered correctly by each student. Hey, I know what I’m doing here.
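If you wanted the same table in code rather than through the Summary Tables task, a rough PROC TABULATE equivalent would be the sketch below. The data set name is a placeholder.

PROC TABULATE DATA = mydataset ;
CLASS username ;
VAR correct ;
TABLE username, correct * (N MEAN) ;
RUN ;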

6. Look at the data! In looking at the raw data, I see that there are two erroneous usernames that shouldn’t be there. These data have been cleaned pretty well already, so I don’t find much to fix. Now, I want to re-run the analysis, deleting these two usernames.

7. At the top of your table, you’ll see an option that says “Modify Task”. Click that.

8. You’ll have the summary tables window pop up, this time with your data filled in. Click on the edit button at the top right of this window. You are about to create a task filter.

window with options for data

9. Under TASK FILTER, pull down the first box to show the variable ‘username’. Pull down the second box to show the option NOT EQUAL TO and then click the three dots next to the third box. This will pull up a list of all of your values for usernames. You can select the one you want to exclude and click OK. Next to the three dots, pull down to select AND, then go through this again to select the second username you want to delete. You can also just type in the values, but I tend to do it this way because I’m a bad typist with a bad short-term memory.
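The code equivalent of that task filter is just a WHERE statement. The two usernames below are placeholders, not the real erroneous ones.

DATA cleaned ;
SET mydataset ;
WHERE username NE "baduser1" AND username NE "baduser2" ;
RUN ;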

10. Create a SAS data set of the output. It’s super easy. Click on the RESULTS tab to the left and in the window that pops up, click SAVE RESULTS TO A DATA SET. Then, click RUN.

11. The most recently created data set should be your default data set for analysis, but click on it in your process flow diagram to activate it, just in case.

12. From the DESCRIBE menu, again select SUMMARY STATISTICS.

13. Drag ‘correct_mean’ under ANALYSIS VARIABLES and click RUN.
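In code, this last step is a one-statement PROC MEANS. This is a sketch, with my assumed name for the data set saved from the summary table results.

PROC MEANS DATA = summary_results N MEAN STD ;
VAR correct_mean ;
RUN ;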

The resulting table gives me my answer – the mean is .838 with a standard deviation of .26 for N = 424 subjects. So … the average subject answered 84% of the problems correctly. This, however, is just the first step. There are a couple more interesting questions to be answered with this data set before moving on. Read the next step here.

————–

Want to play the game that produced these data? Own a Mac or Windows computer? Have ten bucks?

Girl on TV playing game

Here you go.
