As I said in my last post, repeated measures ANOVA seems to be one of the procedures that confuses students the most. Let’s go through two ways to do an analysis correctly and the most common mistakes.

Our first example has people given an exam three times (a pretest, a posttest and a follow-up), and we want to see if the pretest differs from the other two time points.

proc glm data = example ;
model pre post follow = /nouni ;
repeated exams 3 contrast(1) /summary printm ;
run ;

Among other things, this will give you a table of Type III Sums of Squares that tells you that you have a significant difference across time. It will also give you contrasts between the first time point and each of the other two.

You can see all of the output produced here.

Because this uses PROC GLM, it requires that you have multiple VARIABLES, one for each of the times you measured people. This is in contrast to PROC MIXED, which requires multiple records for each subject. We’ll get into that another day.
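Just so the distinction is concrete, here is a minimal sketch of restructuring the wide layout into the long one. The data set and variable names, including the subject identifier id, are hypothetical:

* SKETCH - ONE RECORD PER SUBJECT (WIDE, FOR GLM) BECOMES THREE RECORDS
PER SUBJECT (LONG, FOR MIXED). ASSUMES A SUBJECT ID VARIABLE NAMED id ;
data example_long ;
set example ;
array exam_scores{3} pre post follow ;
do time = 1 to 3 ;
score = exam_scores{time} ;
output ; * WRITE ONE RECORD PER TIME POINT ;
end ;
keep id time score ;
run ;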

One thing that throws people all the time is the question, “Where did you get the exams variable?” In fact, I could have used any valid SAS name. It could have been “Nancy” instead of “exams” and that would have worked just as well. It’s a label we use for the factor measured multiple times. So, as counterintuitive as it sounds, there is NO variable named “exams” in your data set.

Let’s try a different example. This time, I have a treatment variable. I have administered two different treatments to my subjects. I want to see if treatment has any effect on improvement.

proc glm data = example ;
class treatment ;
model pre post follow = treatment /nouni ;
repeated exams 3 /summary ;
run ;

The fixed effect does *not* go in your REPEATED statement

In this case, I do need a CLASS statement to specify my fixed effect of treatment. A really common mistake that students make is to code the REPEATED statement like this:

repeated treatment 3 /summary ; *WRONG! ;

It seems logical, right? Why would you use a completely made up name instead of one of your variables? If you think about it for a minute, though, treatment wasn’t repeated. Each subject only received one type of treatment.

When you are asking whether one group improved more than the other(s), what you are asking is, “Is there an interaction effect?” You can see from the table of Type III Sums of Squares produced below that there was no interaction effect.


A significant effect for the repeated measure does not mean your treatment worked!

A common mistake is to look at the significance of the repeated measure and, because a significant change was found between times 1 and 3, conclude that the treatment had an effect. In fact, the non-significant interaction tells us the treatment had no impact: the change in exam scores did not differ across the levels of treatment.

There are a lot of other common mistakes but I need to go back to work so those will have to wait for another blog.

When I teach students how to use SAS to do a repeated measures Analysis of Variance, they remind me of those crazy foreign language majors I knew in college who were learning Portuguese and Italian at the same time.

I teach how to do a repeated measures ANOVA using both PROC GLM and PROC MIXED, because it seems very likely my students will run into both general linear models and mixed models in their careers. The problem is that they confuse the two, and the result is buggy code.

Let’s start with mistakes in PROC GLM today. Next time we can discuss mistakes in PROC MIXED.

Let’s say I have the simplest possible analysis – I’ve given the same students a pre- and a post-test and want to see if there has been a significant increase from time one to time two.

This will work just fine:

proc glm data = mydata.fl_pre_post ;
model pretest posttest = /nouni ;
repeated time 2 ;
run ;

Coding the REPEATED statement like this will also work:

repeated time 2 (1 2) ;

So will this:

repeated time ;

It almost seems as if anything or nothing after the variable name will work. That’s not true. First of all,

repeated time 2 (a b) ; IS WRONG

… and will give you an error: “Syntax error, expecting one of the following: a numeric constant, a datetime constant.”

“Levels gives the number of levels associated with the factor being defined. When there is only one within-subject factor, the number of levels is equal to the number of dependent variables. In this case, levels is optional. When more than one within-subject factor is defined, however, levels is required,”

SAS 9.2 User’s Guide

So, this explains why you can be happily using your REPEATED statement without bothering to specify the number of levels for a factor and then one day it doesn’t work. WHY? Because now you have two within-subject factors and you need to specify the number of levels, but you don’t know that. This is why, when teaching, I always include the number of levels. It will never cause your program to fail, even if it is sometimes unnecessary.
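Here is a sketch of the case where levels stop being optional (the data set and variable names are made up): with two within-subject factors, the four variables below are really 2 days x 2 tests, and SAS cannot infer that split on its own.

proc glm data = example2 ;
model day1_test1 day1_test2 day2_test1 day2_test2 = /nouni ;
repeated day 2, test 2 ; * TWO WITHIN-SUBJECT FACTORS - LEVELS REQUIRED ;
run ;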

One more cool thing about the REPEATED statement for PROC GLM: you can do a planned contrast super easily. Let’s say I have done three tests, a pretest, a posttest and a follow-up. I want to compare the posttest and follow-up to the pretest.

proc glm data = mydata.fl_tests ;
model pretest posttest follow = /nouni ;
repeated test_time 3 contrast(1) /summary ;
run ;

What this will do is compare each of the other time points to the first one. A common mistake students make is to use a CONTRAST statement here with test_time. This will NOT work, although it will work with PROC MIXED, but that is a story for another day.
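As a preview of that other day, here is a hedged sketch of the PROC MIXED version, which wants the long layout (one record per subject per test time) and where an explicit CONTRAST statement does work. The data set and variable names (fl_tests_long, subject, score) are hypothetical, and I am assuming test_time is coded 1, 2, 3 so the pretest is the first level:

proc mixed data = fl_tests_long ;
class subject test_time ;
model score = test_time ;
repeated test_time / subject = subject type = cs ;
contrast 'posttest vs pretest' test_time -1 1 0 ;
contrast 'follow-up vs pretest' test_time -1 0 1 ;
run ;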

I cannot believe that it’s been over two months since I’ve written a post. That is the longest I’ve gone in the ten years I have been writing this blog. I read somewhere that the average blog has the lifespan of a fruit fly – after 31 days most people give it up.

That seems to lead to a cottage industry in taking over dormant sites. This site isn’t exactly stagnant even when I am not blogging because people use it for reference.

I started getting emails about “a somewhat embarrassing page”. At first I was aghast that hackers had redirected clients to a porn site.

Fortunately, no, it was just a failed re-direct attempt that ended up breaking a link so you get a 404 page that literally says, “Well, this is somewhat embarrassing.”

The Invisible Developer spent a good bit of time while we were in New York deleting malware from the site. At first, I was feeling very guilty because I thought my cavalier attitude toward security issues with PHP was the reason, but we did clean up most of the problems pointed out in those comments years ago, so that wasn’t the culprit. I should admit here that Paul and Clint were right and I was wrong. Although we have no data of particular value to anyone on this site, hackers are interested in re-directing sites to get links and for other nefarious purposes.

As near as we can tell, it was a plugin on another site we hosted that had not been updated in years. We had several more or less abandoned domains of content we had created for clients over the years. They paid us, we created the content for their course or other purpose, and then just left it up. Kind of like all of that stuff in your closets that you just shove to the back because you have room.

That’s all cleaned up now. The site, not the closets. Those are still chaos. For all I know, there is an entire new civilization developing in that closet under the stairs. Or maybe Harry Potter lives there.

As for me, I have been teaching two courses during the past three months, where I usually teach only one in a year. After landing back in the U.S. in February, I have been criss-crossing the country. Since the beginning of the year, I think I’ve been in 11 cities, 3 states and 2 countries, but I may have forgotten a few.

We also released two new games: Fish Lake Adventure for the iPad, and a new version of AzTech: Meet the Maya, also for the iPad.

Get it in the app store

My lovely daughter, Ronda, headlined this show called Wrestlemania, which is why we were in New York. We have chosen very different careers, my daughters and I. The Perfect Jennifer, or, as she likes to call herself, “the normal one”, is a middle school history teacher, in case you were wondering. The Spoiled One is currently doing a semester abroad in London. She will be back in the U.S. next month and needs a summer internship. Her talents include Instagram, shopping and soccer. If your company doesn’t need any of those skills, she’s also a good writer. Darling Daughter Number One is 7 Generation Games’ CEO; she’s also a good writer, having co-authored a New York Times best seller, but she’s not looking for an internship.

So, anyway, I am back, well, for a couple of weeks. Next, I head to SAS Global Forum in Texas for a few days to give a couple of presentations on biostatistics and career advice. You’d think my career advice might be to study biostatistics but, maybe not…

Then, I come home for a couple more weeks and am off to a Tech Inclusion conference in Melbourne, Australia. My talk there is going to be, well, different from most – and that’s all I’m going to say about that.

So, now, I’m back to blogging. I have a few things to say about the infinite number of ways people can incorrectly code a repeated measures ANOVA, subdomains and number needed to treat. Between the next game, new website, two conferences and two grant proposals all coming due before June, I’m sure I’ll fit it in there somewhere.

Stop and read this. It may save you a whole lot of grief and panic.

Maybe you’ve heard that a stolen iPhone is nothing more than a brick. Perhaps you feel as if your data is safe.

You have a password and it’s not 123456.

You have Find My iPhone.

Allow me to burst your bubble by telling you what happened to me and why it could have been WAY worse. Also, turn off Siri right fucking now. If you cannot bear to part with it, turn it off when locked: go to Settings, then Siri and Search, and turn off answering when locked.

HOW THIEVES ALMOST GOT AWAY WITH EVERYTHING

On Thursday, when I got off the subway I noticed the side of my bag was unzipped. I didn’t see my phone, but my credit card and money were still in the pocket, so I didn’t think I had been robbed. I just figured I’d thrown the phone in with my computer. When I got home, I emptied my bag and still couldn’t find it. I used Find My iPhone and saw it was 7 miles away. So, I put it in lost mode.

Keep this in mind: the thieves had my phone for an hour at most before I noticed and locked it.

After I contacted people from my office and made sure I hadn’t left it there, I erased it.

HOW SIRI IS YOUR ENEMY

In the meantime, the thieves had gotten into my Yahoo email and my Facebook page. How did they do that?

Because if you don’t disable this when you get your phone, Siri will answer even when your phone is locked. Say,

“Siri, what’s my phone number? Siri, what’s my email?”

…. and Siri will tell you.

So, now the thief has your phone, your phone number and your email. TURN OFF SIRI NOW!

I never would have thought the default setup would have such a huge security flaw.

It gets worse.

Now the thief goes to Yahoo, enters your email and clicks “Forgot my password.” They have the reset sent to your phone. Guess what? The default is that messages show up on locked iPhones, so they get the reset message and enter a new password. Now, they have your email, your password and your phone.

Next, they go to Facebook and log in using that email. They say that they have lost the password and have the password reset code sent to your iPhone or to the email they have stolen.

Now the thief has your email, Facebook, phone and phone number.

By this time, maybe a few hours in, I had figured out what they were doing, ERASED my iPhone using the Find My iPhone app, deleted the Yahoo email from my Facebook and changed the phone number on my Yahoo account.

WATCH OUT FOR PHISHING EMAILS CLAIMING TO HAVE YOUR PHONE

This is where disaster really could have happened. So, I’m back in the office on Friday trying to do a million things, plus reset my password on everything and handle things that come up every day with two companies in two countries, and in one of my company accounts I get a message from “Find my iPhone”. It looks legit. It says we’ve found your iPhone. It gives the model of iPhone and the storage. How would a thief know that? If you think about it, duh, they have my iPhone. But I’m thinking someone jacking iPhones on the subway certainly doesn’t have the skills to create something this professional. So, I click on it. Nothing happens. Thank God for my internet provider that strips out malicious code.

What this was supposed to have done was take me to a page that asked for my Apple ID and password to prove I was me. I might have done it, too. I’m staying with my ISP for life now.

After I switched phones, I got the same message in a text to my new phone number. I can only guess that either a) they were still logged in when I changed it or b) they searched for me on Google.

!!!!! These were not some gifted thieves. There are actually SERVICES that do this for them! Want to get the Apple ID and password of a person whose phone you’ve stolen? Send the service all of the info you have and they will create the fake email and text messages!

 

EVEN IF YOUR PHONE IS DISABLED, THEY STILL HAVE YOUR SIM CARD

They can (and did) swap that into another phone. When I thought of it two days later, Dennis disabled the account with AT&T, and he got a message that it was now disabled on a Huawei phone, which is not sold in the US but is very popular in Chile.

HERE IS WHAT I DID WRONG BEFORE MY PHONE WAS STOLEN

Obviously the Apple default is a huge security flaw. I should have disabled Siri, as I never use it, and also disabled messages showing when locked.

Ironically, I had the Yahoo account on my Facebook account thinking it gave me EXTRA security. I hadn’t really used that account in years.

It was possible to reset my Yahoo account from a phone, so anyone who had my phone could get access to my email.

HERE IS WHAT I DID RIGHT

I had a second email account that could NOT be reset from a phone. I used that to lock the thief out before they thought of removing it.

When I changed the password and phone associated with my email and Facebook I picked “Log me out of other devices” so if they were logged in somewhere else they couldn’t just change it back.

My phone does not allow purchases, so even when someone had my SIM card they could not use it to buy anything. We turned this off with AT&T years ago.

None of my bank information is written down anywhere – not passwords, accounts, SSN, nothing. I memorized them. Logins for things like that software I bought five years ago (and its license), or that stupid forum on blogs, are written down. Those are not used for anything important.

Any information that might be important is recorded like this:

Password – same as for that computer we used to have in the living room

Had an Internet service provider that stripped out the script on the phishing email and saved me from a huge mistake.

Called AT&T to block the number so no one else could use the SIM card.

My social media accounts are not connected. Getting into my Facebook doesn’t allow you access to my Instagram, Twitter or anything else. Whenever Facebook asks to connect to anything I say No.

There is very little information in my social media profiles, and some of what has been put there automatically by Facebook is wrong. So, if anyone was hoping to use the information they got for identity theft, they are out of luck.

WHAT SHOULD YOU DO NOW?

At the very least, this second, disable Siri when locked and turn off notifications when locked.

Turn off purchases from your phone.

Turn off resetting your password from a phone.

Disconnect social media accounts from each other so if someone has one account they don’t have all of them.

And for the love of God, quit believing that bullshit that a stolen iPhone is no more than a brick!


Support my day job!

Wigwams at sunset

Learn Native American history, math and English all at the same time. You can play it on your iPad, the web or on your phone (if it isn’t stolen).

 

I was going to write more about reading JSON data but that will have to wait because I’m teaching a biostatistics class and I think this will be helpful to them.

What’s a codebook?

If you are using even a moderately complex data set, you will want a codebook. At a minimum, it will tell you the name of each variable, its type (character, numeric or date), its label (if it has one) and its position in the data set. It will also tell you the number of records and the number of variables in the data set. In SAS, you can get all of this by running a PROC CONTENTS. (Also from a PROC DATASETS, but we don’t cover that procedure in this class.)
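Getting it is a single step; for instance, for the sashelp.heart practice data set that ships with SAS:

proc contents data = sashelp.heart ;
run ;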

Run that, and you would see:

Output from PROC CONTENTS

The variable AgeAtDeath is the 12th variable in the data set. It is numeric, with a length of 8 and the label for it is “Age At Death”. Because it is a numeric variable, if you try to use it for any character functions, like finding a substring, you will get an error. (A substring is a subset of a string, so ‘ABC’ is a substring of ‘ABCDE’.)

Similarly, BP_Status is the 15th variable in the data set, it is a character, with a length of 7 and a label of “Blood Pressure Status”. Because it’s a character variable, if you try to do any procedures or functions that expect numeric variables, like find the mean, you will get an error. The label will be used in output, like in the table below.

Frequency distribution of blood pressure status

This is useful because you may have no idea what BP_Status is supposed to mean. HOWEVER, if you use “Blood Pressure Status” in your statements like the example below, you will get an error.

**** WRONG!!!
Proc means data=sashelp.heart ;
Var blood pressure status ;

Seems unfair, but that’s the way it is.

The above statement will assume you want the means for three separate variables named “blood”, “pressure” and “status”.

There are no variables in the data set named “blood” or “pressure” so you will get an error. There is a variable named “status”, but it’s something completely different, a variable telling if the subject is alive or dead.

Even if you don’t have a real codebook available, you should at a minimum start any analysis by doing a PROC CONTENTS so you have the correct variable names and types.

What about these errors I was talking about, though? Where will you see them?

LOOK AT YOUR SAS LOG!!

If you are using SAS Studio, it’s the second tab in the middle window, to the right of the tab that says CODE.

Click on that tab and if you have any SYNTAX errors, they will conveniently show up in red.

Also, if you are taking a course and want help from your professor or a classmate, the easiest way for them to help you is to copy and paste your SAS log into an email or, even better, download it and send it as an attachment.

Just because you have no errors in the SAS log doesn’t mean everything is all good, but it’s always the first place you should look.

To get a table of blood pressure status, you may have typed something like

Proc freq data=sashelp.heart ;
Tables status ;

That will run without errors but it will give you a table that gives status as alive or dead, not blood pressure as high, normal or optimal.
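What you wanted was the variable name from the codebook:

Proc freq data=sashelp.heart ;
Tables BP_Status ;
run ;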

PROC CONTENTS is a sort of “codebook light”. A real codebook should also include the mean, minimum, maximum and more for each variable. We’ll talk about that in the next post. Or, who knows, maybe I’ll finally finish talking about reading in JSON data.
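If you want a head start on that, the numeric half of a real codebook usually starts with a step like this (nothing here beyond standard PROC MEANS statistic options):

Proc means data=sashelp.heart n mean std min max ;
run ;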

Sometimes data changes shape and type over time. In my case, we had a game that was given away free as a demo. We saved the player’s game state – that is, the number of points they had, the objects they had earned in the game, etc. – as a JSON object. Happy Day! The game became popular. Schools started using it. We came out with a premium version with lots more activities, a bilingual Spanish-English version, a bilingual Lakota-English version. Life is good.
Making Camp Premium on Google Play 
Once schools started using our games, they wanted data on how much students played and how many problems they answered. This is when life started to get complicated. We added more fields to the JSON object to show which activities they had completed and whether they had won. Data for one person might look like this, or much, much longer. "{""points"":""8"",""first_trade"":""false"",""first_visit"":""true"",""has_wigwam"":""true"",""inventory"":""6,3,4,14"",""inventory_position"":""[{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":176,\""top\"":4},{\""left\"":-309,\""top\"":-254},{\""left\"":0,\""top\"":0},{\""left\"":619,\""top\"":-45},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":293,\""top\"":-44},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0},{\""left\"":0,\""top\"":0}]"",""milestone"":""EarnPage""}"

The JSON engine didn’t work (not surprisingly)

I actually didn’t expect it would work, because we didn’t actually have a JSON object but rather a JSON object converted to a string and saved in an SQL database. However, if something will only take a minute, I try it first:

libname testmcb JSON fileref=reffile ;

Okay, so what now? We have a variable record length, and the values we want might be in any column. It must be really difficult to figure out, right? Not really. This next bit looks complicated, but it actually took very little time to code. Before we get into the code, let’s talk about what I needed to do and why. The “why” is that I want to know how much kids played the games. There are three milestones – logging in, getting a wigwam and trading for items. However, some kids just liked playing the mini-games and they played A LOT without pausing to trade in their points. Also, you can’t just look at how many points they have, because they may have traded some points for items. Fortunately, all items cost two points, so I need to compute points + 2 x (number of items in inventory).
  • First, there were some records that did NOT start with points; they started with “milestone”, because we were coding the JSON object one way and then we switched. (Pause to swear under my breath at people here.) So, I need to decide which type of record it is.
  • Second, I want to read in those variables that are milestones in the game play. That is, is this their first time playing, did they get points, did they get a wigwam and did they trade for anything for their wigwam. I need to keep in mind that a variable might be coded 1 or true if the player passed that milestone and 0, false or null if not. (Pause for more swearing.)
  • Third, I want to create Boolean variables that show whether or not a player passed a particular milestone.
  • Fourth, find out the NUMBER of objects in the inventory.
  • Fifth, multiply the number in inventory by two and add it to the points
This actually took me very little time. Let’s look at it bit by bit (ha ha). The first line just assigns a file reference for where my text file is located.

FILENAME reffile '/home/annmaria.demars/data_analysis_examples/data2018/exceldata/mc_bilingualtest.txt';

DATA mcb_json ;
INFILE reffile DLM="," LRECL = 1337 ;
*** READ IN FILE. VARIABLES ARE SEPARATED (DELIMITED) BY A COMMA. LENGTH OF THE RECORD IS 1,337 ;
INFORMAT first_trade $40. first_visit $40. has_wigwam $40. ;
***** READ IN THE VARIABLES FOR TRADING, FIRST VISIT AND WIGWAM AS CHARACTER WITH A LENGTH OF 40 ;
INPUT firstcol $ 15-16 @@ ;
**** READ COLUMNS 15-16 AND STAY ON THAT LINE ;
IF firstcol = '":' THEN INPUT @43 points $ first_trade $ first_visit $ has_wigwam $ ;
ELSE INPUT @15 points $ first_trade first_visit has_wigwam ;
**** IF COLUMNS 15-16 MATCH THE QUOTE-COLON PATTERN, THE RECORD HAD MILESTONE AT THE BEGINNING ;
***** THE VARIABLES WE WANT START AT COLUMN 43. OTHERWISE, THEY START AT COLUMN 15 ;
*** THESE ARE SEPARATED BY COMMAS SO WE CAN JUST LIST THE VARIABLE NAMES ;
*** NOTE THAT I NEED TO SPECIFY THAT POINTS IS CHARACTER DATA SINCE IT IS NOT IN MY INFORMAT STATEMENT ;
IF INDEX(first_trade,"true") > 0 or INDEX(first_trade,"1") > 0 then traded = 0 ;
ELSE IF INDEX(first_trade,"false") or INDEX(first_trade,"0") then traded = 1 ;
IF INDEX(first_visit,"false") or INDEX(first_visit,"0") then play_twice = 1 ;
ELSE IF INDEX(first_visit,"true") or INDEX(first_visit,"null") or INDEX(first_visit,"1") then play_twice = 0 ;
IF INDEX(has_wigwam,"true") then wigwam = 1 ;
ELSE IF INDEX(has_wigwam,"null") > 0 or INDEX(has_wigwam,"0") > 0 then wigwam = 0 ;
*** ABOVE, I JUST USE A FEW IF-THEN STATEMENTS TO CREATE MY NEW VARIABLES ;
**** THE INDEX FUNCTION RETURNS THE POSITION IN THE VARIABLE IN THE FIRST ARGUMENT WHERE IT FINDS THE VALUE IN THE SECOND ARGUMENT ;
**** IF THE VALUE IS NOT FOUND, IT RETURNS A VALUE OF 0 ;
RUN ;

PROC FREQ ;
TABLES traded*first_trade first_visit*play_twice wigwam*has_wigwam ;
RUN ;

This last step isn’t strictly necessary, except that it is. Here I do a cross-tabulation to make sure that all of the variables were assigned correctly, and they were. For example, you can see that if the value of has_wigwam was 0 or null, the wigwam variable was set to 0. If has_wigwam was equal to “true”, the wigwam variable was set to 1.

Screen shot of table of has_wigwam by wigwam values

This would have worked for the whole JSON object except for the commas in the inventory. If you look at just a piece of the data, you can see that after the variables denoting milestones there is a variable that is actually an array, separated by commas.

""first_visit"":""true"",""has_wigwam"":""true"",""inventory"":""6,3,4,14"",

We’ll look at how to handle that in the next post.
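Jumping ahead a little: assuming the inventory string (e.g., 6,3,4,14) eventually gets read into a character variable named inventory – reading it is the subject of that next post – steps four and five might look something like this sketch:

* SKETCH, NOT FROM THE STEP ABOVE - COUNTW COUNTS THE COMMA-SEPARATED ITEMS,
INPUT CONVERTS THE CHARACTER points TO NUMERIC, AND EACH ITEM COST TWO POINTS ;
data mcb_scores ;
set mcb_json ;
if inventory ne "" then n_items = countw(inventory, ",") ;
else n_items = 0 ;
total_points = input(points, 8.) + 2 * n_items ;
run ;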

When I was young and knew everything, I would frequently see procedures or statistics and think, “When am I ever going to use THAT?” That was my thought when I learned about this new procedure to transpose a data set. (It was new then. Keep in mind, I learned SAS when I was pregnant with my first child. She is now CEO of an educational game company and the mother of three children.)

PROC TRANSPOSE is super-useful. You might only think of it as useful for converting data from the layout PROC GLM uses to the one PROC MIXED uses, or you might have no idea what the hell that means, and it is still super-useful.

Let me give you today’s example. I’m looking for data to use in a biostatistics class I’m teaching next month. It’s a small data set, with data on eight states included in the Centers for Disease Control’s Autism and Developmental Disabilities Monitoring Network.

The data looks like this:

As you can see, each state is a column. I would like to know, for example, what percentage of people with autism also have a physical disability. There is a way to do it by finding the mean across variables but I want to use this data set for a few examples and it would be much easier for me if each of those categories was a variable.
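For the record, the across-variables route I was avoiding would look something like this sketch (I’m making up the state column names here – use whatever the eight columns are actually called):

* SKETCH OF THE HARDER WAY - A MEAN ACROSS VARIABLES, WITHIN EACH ROW ;
data rowmeans ;
set mydata.autism ;
pct_across_states = mean(of Arizona Colorado Georgia) ; * ... LIST ALL EIGHT STATE COLUMNS ;
run ;

Transposing once, so each category becomes a variable, is easier.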

The code is super simple:

PROC TRANSPOSE DATA=mydata.autism OUT=mydata.autism2 NAME=state;
ID eligibility ;
RUN ;

Neither the NAME= option nor the ID statement is required, but they will make your life easier. First, let’s take a look at our new data.

Data set with one record for each state

Now, instead of state being a variable, we have one record for each state; the percent with an autism diagnosis only is one variable, the percent with emotional disturbance another, and so on. What the NAME= option does is give a name to the new variable that holds what used to be each column’s name. If you don’t use that option, the first column would be named _name_. With these data it would still be pretty obvious that this variable is the state, but in some cases it wouldn’t be obvious at all.

The ID statement is really necessary in this case, because otherwise each column is going to be named “COL1”, “COL2” and so on. Personally, I found the ID statement here confusing, because I normally think of an ID statement as giving the individual ID for each record, like a social security number or student ID. In this case, the values of the variable you give in the ID statement are going to be used to name the variables. So, as you can see above, the first column is named Autism (%), the second is named Emotional Disturbance (%) and so on.

So, that’s it. All I need to do to get means, standard deviation, minimum and maximum is :

PROC MEANS DATA=mydata.autism2 ;
RUN ;

So, that’s it.

By the way, I got this data set and a few others from SAS Curriculum Pathways, a nice source for small data sets to start off a course.


I live in opposite world, where my day job is making games and I teach statistics and write about programming for fun. You can check out our games here. You’re probably already pretty good with division, but you’ll learn about the Lakota language and culture with Making Camp Lakota, a bilingual (English-Lakota) game that teaches math.


I was reminded today how useful a SAS log can be, even when it doesn’t give you any errors.

I’m analyzing data from a study on educational technology in rural schools. The first step is to concatenate 10 different data sets. I want to keep the source of the data, that is, which data set it came from, so if there are issues with these data, outliers, etc. I can more easily pinpoint where it occurred.

I used the IN= option for each data set when I read them in and then some IF statements to assign a source.

DATA mydata.all_users18 ;
    SET sl_pre_users18 (in=slp)
        aztech_pre_clean (in=azp)
        AZ_maya_students18 (in=azms)
        fl_pretest_new18 (in=flpn)
        fl_pretest_old18 (in=flpo)
        ft_users18 (in=ft)
        mydata.fl_students18 (in=fls)
        mc_bilingual_students18 (in=mcb)
        mc_users18 (in=mc)
        mydata.sl_students18 (in=sls) ;

After I run the data step, I see that 425 observations do not have a value for “source”. How would you spot the error?

Of course, there is more than one way, but I thought the simplest thing was to search in the SAS log and see which of the data sets had exactly 425 observations. Yep. There it is. Took me 2 seconds to find.

147 PROC IMPORT DATAFILE=REFFILE
148 DBMS=XLSX
149 OUT=WORK.MC_bilinguaL_students18 replace;
150 GETNAMES=YES;

NOTE: The import data set has 425 observations and 2 variables.

So, I looked at the code again and, sure enough, I had misspelled “source”:

IF slp THEN source = "Spirit Pre" ;
ELSE IF azp THEN source = "Az Pre" ;
ELSE IF fls THEN source = "Fish Studn" ;
ELSE IF mcb THEN sourc = "M.Camp.Bil" ;

You might think I could have just read through the code, and you are right, but there were a lot of lines of code. In this case, I could immediately tell it was something to do with that specific data set and significantly reduce the code I needed to look at. I just started with the last place that data set was referenced and worked backward. Fortunately for me, it was in the very last place I called it.

The fact is, you will probably spend as much time debugging code as you do writing it. The log and logic are your friends. Also, no matter how long you have been programming, you still make typos.
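One more habit that makes the log even more useful: a quick frequency table of the new variable, with the MISSING option, right after the data step, so unassigned records jump out immediately.

proc freq data = mydata.all_users18 ;
tables source / missing ;
run ;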

Want to play one of the games from this study? Have a computer? Go ahead, maturity is over-rated.


I know people who are so obsessive about testing and validating their code that they spend more time on testing it than actually writing it and analyzing the output. I said I know people like that; I didn’t say I was one of them. However, it is good practice to validate your SAS code and, despite false rumors spread by my enemies, I do it sometimes.

Here is a simple example.  I believed that using the COMPRESS function with “l” for lower case or “I” for case-insensitive gave the same results. I wanted to test that. So, I ran two data steps

DATA USE_L;
set mydata.aztech_pre ;
q3 = compress(Q3,'ABCDEFGHIJKLMNOPQRSTUVWXYZ','l');
q5 = compress(Q5,'ABCDEFGHIJKLMNOPQRSTUVWXYZ','l');

… and a whole bunch more statements like that.

Then, I ran the exact same data step but with an “I” instead of an “l”.
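That is, the second step looked like this, with only the modifier changed:

DATA USE_I;
set mydata.aztech_pre ;
q3 = compress(Q3,'ABCDEFGHIJKLMNOPQRSTUVWXYZ','i');
q5 = compress(Q5,'ABCDEFGHIJKLMNOPQRSTUVWXYZ','i');
* ... AND THE REST OF THE STATEMENTS, SAME AS BEFORE ;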

Finally, I ran a PROC COMPARE step

PROC COMPARE base=USE_L compare=USE_I ;
Title "Using l for lowercase vs I for insensitive" ;
RUN ;

PROC COMPARE RESULTS SHOW NO DIFFERENCES

But, hey, maybe PROC COMPARE just doesn’t work. Is it really removing everything whether it is upper or lower case? To test this, I ran the procedure again comparing the dataset with the compressed results with the original data set.

PROC COMPARE base=mydata.aztech_pre compare=USE_I ;
Title "Comparing with and without compress function" ;
RUN ;

The result was a whole lot of output, which I am not going to reproduce here, but some of the most relevant was:

Values Comparison Summary

Number of Variables Compared with All Observations Equal: 24.
Number of Variables Compared with Some Observations Unequal: 16.
Number of Variables with Missing Value Differences: 10.
Total Number of Values which Compare Unequal: 694.

Looking further in the results, I can see a comparison of the results for each variable, by observation number:

         ||  q5
         ||  Base Value     Compare Value
     Obs ||  q5             q5
________ ||  ____________   ____________
         ||
       5 ||  150m           150
       6 ||  42 miles       42
      10 ||  one thousand
      12 ||  200 MILES      200

So, I can see that the data step is doing what I want, which is removing all of the text from the responses and only leaving numbers. This is important because the next step is comparing the responses to the questions with the answer key and I don’t want any mismatches to occur because the student wrote ‘200 miles’ instead of 200.

In case you are interested, this is the pretest for two games that are used to teach fractions and statistics. You can find Aztech: The Story Begins here and play it for free on your iPad, Mac, Windows or Chromebook computer.

Mayan god
Play Aztech!

Forgotten Trail can be played in a browser on any Mac, Windows or Chromebook computer.

Some people believe you can say anything with statistics. I don’t believe that is true, unless you flat out lie, but if you are a big fat liar, I am sure you would lie just as much without statistics.

However, a point was made today when Marshall and I were discussing, via email, our presentation for the National Indian Education Association. One point we made was that, while most vocational rehabilitation projects serve relatively few youth, the number at Spirit Lake has risen dramatically. He said,

You said the percentage of youth increased from 2% to 20% and then you said the percentage of youth served tripled. Which was it?

It depends on how you slice your data

There is more decision-making in even basic statistics than most people realize. We are looking at a pretty basic question here: “Did the percentage of the caseload that was youth, age 25 and under, increase?”

The first question is, “Increase from when to when?” That is, what year is the cutoff? In this case, that answer is easy. We had observed that the percentage of youth served was decreasing, and changes were undertaken in 2015 to reverse that trend. So, the decision was to compare 2015 and later with 2014 and earlier.

Percent of Youth on Caseload, by Year

How much of an increase is found depends on the year used for comparison and whether we use one year or an average.

The discrepancy between the 10x improvement and the 3x comes about because the percentage of youth served by the project varied from year to year, although the overall trend was downward. If we wanted to make ourselves look really good, we could compare the lowest year (2013, at 2%) with the highest year (2015, at 20%) and say the increase was 10x, but I don’t think that is the best representation, although it is true. One reason is that the changes we discussed in the paper weren’t implemented until 2015, so there is no justification for using 2013 as the basis.

The second question is how you compute the baseline. If we use all of the data from 2008-2014 to get a baseline, youth comprised 7% of the new cases added. Compare that to 2015, at 20.2%, and the percentage of youth served almost tripled.

However, we just started using the current database system in the 2012 fiscal year, and the only people from prior years in the data were those who had been enrolled prior to 2012 and were still receiving services. The further back in time we went, the fewer people there were in the system, and they were definitely a non-representative sample. Typically, people don’t continue receiving vocational rehabilitation services for three or four years.

You can see the number by year below. The 2018 figure is only through June of this year, which is when I took a snapshot of the database.

If we use 2013-2014  as a baseline, the percentage of youth among those served was 4%. If we use 2012-2014, it’s 6%. 

To me, it makes more sense to compare it to an aggregate over a few years. I averaged 2012 through 2014 because it gave a larger sample size, had representative data and also because I didn’t feel comfortable using the absolute lowest year as a baseline. Maybe it was just a bad year. As any good psychometrician knows, the more data points you have, the more reliable your measure.

The third question is how to select the years for comparison. I combined 2015-2018, again because it gave a larger sample size and because I did not want to just pick the best year as a comparison. Over that period, 18% of those served by the project were youth.

So … what have we learned? Depending on how you select the baseline and comparison years, we have either improved 10 times (from 2% to 20%), improved 2.6 times (from 7% to 18%), tripled (from 6% to 18%) or quintupled (from 4% to 20%) – and there are some other permutations possible as well.

Notice something here, though. No matter how we slice it, after 2014, the percentage of youth increased, and substantially so. This increase was maintained year after year. 

I thought this was an interesting example of being able to come up with varying answers in terms of the specific statistic but, no matter what, coming to the same conclusion: the changes in outreach and recruitment had a substantial impact.
