I was going to write more about reading JSON data but that will have to wait because I’m teaching a biostatistics class and I think this will be helpful to them.

What’s a codebook?

If you are using even a moderately complex data set, you will want a codebook. At a minimum, it will tell you the name of each variable, its type (character, numeric or date), its label (if it has one), and its position in the data set. It will also tell you the number of records and the number of variables in the data set. In SAS, you can get all of this by running a PROC CONTENTS. (Also from a PROC DATASETS, but we don't cover that procedure in this class.)
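For example, the entire program to get this information for the sashelp.heart data set used below is just two statements:

PROC CONTENTS DATA=sashelp.heart ;
RUN ;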

So, for the sashelp.heart data set, for example, you would see:

output from Proc contents

The variable AgeAtDeath is the 12th variable in the data set. It is numeric, with a length of 8, and its label is "Age At Death". Because it is a numeric variable, if you try to use it in any character functions, like finding a substring, you will get an error. (A substring is a subset of a string, so 'ABC' is a substring of 'ABCDE'.)

Similarly, BP_Status is the 15th variable in the data set. It is a character variable, with a length of 7 and a label of "Blood Pressure Status". Because it's a character variable, if you try to run any procedures or functions that expect numeric variables, like finding the mean, you will get an error. The label will be used in output, like in the table below.

Frequency distribution of blood pressure status

This is useful because you may have no idea what BP_Status is supposed to mean. HOWEVER, if you use “Blood Pressure Status” in your statements like the example below, you will get an error.

**** WRONG!!!
Proc means data=sashelp.heart ;
Var blood pressure status ;
run ;

Seems unfair, but that’s the way it is.

With the above statement, SAS will assume you want the means for three separate variables named "blood", "pressure" and "status".

There are no variables in the data set named "blood" or "pressure", so you will get an error. There is a variable named "status", but it's something completely different: a variable telling whether the subject is alive or dead.

Even if you don’t have a real codebook available, you should at a minimum start any analysis by doing a PROC CONTENTS so you have the correct variable names and types.

What about these errors I was talking about, though? Where will you see them?

LOOK AT YOUR SAS LOG!!

If you are using SAS Studio, it's the second tab in the middle window, to the right of the tab that says CODE.

Click on that tab and if you have any SYNTAX errors, they will conveniently show up in red.

Also, if you are taking a course and want help from your professor or a classmate, the easiest way for them to help you is to copy and paste your SAS log into an email, or even better, download it and send it as an attachment.

Just because you have no errors in the SAS log doesn’t mean everything is all good, but it’s always the first place you should look.

To get a table of blood pressure status, you may have typed something like

Proc freq data=sashelp.heart ;
Tables status ;
run ;

That will run without errors but it will give you a table that gives status as alive or dead, not blood pressure as high, normal or optimal.
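To get the table you actually wanted, use the variable name exactly as it appears in the codebook, BP_Status:

Proc freq data=sashelp.heart ;
Tables BP_Status ;
run ;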

PROC CONTENTS is a sort of “codebook light”. A real codebook should also include the mean, minimum, maximum and more for each variable. We’ll talk about that in the next post. Or, who knows, maybe I’ll finally finish talking about reading in JSON data.

When I was young and knew everything, I would frequently see procedures or statistics and think, "When am I ever going to use THAT?" That was my thought when I learned about this new procedure to transpose a data set. (It was new then. Keep in mind, I learned SAS when I was pregnant with my first child. She is now CEO of an educational game company and the mother of three children.)

PROC TRANSPOSE is super-useful. You might think it is only useful for restructuring data from the format you'd use with PROC GLM to the format you'd use with PROC MIXED, or you might have no idea what the hell that means, and it is still super-useful.

Let me give you today's example. I'm looking for data to use in a biostatistics class I'm teaching next month. It's a small data set, with data on eight states included in the Centers for Disease Control and Prevention's Autism and Developmental Disabilities Monitoring Network.

The data looks like this:

As you can see, each state is a column. I would like to know, for example, what percentage of people with autism also have a physical disability. There is a way to do it by finding the mean across variables but I want to use this data set for a few examples and it would be much easier for me if each of those categories was a variable.
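(If you're curious, that across-variables approach would be a DATA step using the MEAN function over the state columns. The sketch below is only illustrative; the state variable names are hypothetical, since they depend on what the columns in your file are actually called.)

DATA rowmeans ;
SET mydata.autism ;
/* hypothetical state column names; substitute your own */
pct_mean = MEAN(of Arizona Colorado Georgia Maryland Minnesota Missouri NewJersey Wisconsin) ;
RUN ;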

The transpose code is super simple:

PROC TRANSPOSE DATA=mydata.autism OUT=mydata.autism2 NAME=state;
ID eligibility ;
RUN ;

Neither the NAME= option nor the ID statement is required, but they will make your life easier. First, let's take a look at our new data.

Data set with one record for each state

Now, instead of state being a variable, we have one record for each state; the percent with an autism diagnosis only is one variable, the percent with emotional disturbance another, and so on. What the NAME= option does is give a name to the new variable that holds the name of each former column. If you don't use that option, that first column would be named _name_. With these data it would still be pretty obvious that this variable is the state, but in some cases it wouldn't be obvious at all.

The ID statement is really necessary in this case because otherwise each column would be named "COL1", "COL2" and so on. Personally, I found the ID statement here confusing, because I normally think of an ID statement as identifying the individual record, like a social security number or student ID. In PROC TRANSPOSE, the values of the variable you give in the ID statement are used to name the new variables. So, as you can see above, the first column is named Autism(%), the second is named Emotional Disturbance (%) and so on.

So, that’s it. All I need to do to get means, standard deviation, minimum and maximum is :

PROC MEANS DATA=mydata.autism2 ;
RUN ;


By the way, I got this data set and a few others from SAS Curriculum Pathways. It's a nice source for small data sets to start off a course.


I live in opposite world, where my day job is making games and I teach statistics and write about programming for fun. You can check out our games here. You're probably already pretty good with division, but you'll learn about the Lakota language and culture with Making Camp Lakota, a bilingual (English-Lakota) game that teaches math.

feather

Some people believe you can say anything with statistics. I don’t believe that is true, unless you flat out lie, but if you are a big fat liar, I am sure you would lie just as much without statistics.

However, a point was made today when Marshall and I were discussing, via email, our presentation for the National Indian Education Association. One point we made was that, while most vocational rehabilitation projects serve relatively few youth, the number at Spirit Lake has risen dramatically. He said,

You said the percentage of youth increased from 2% to 20% and then you said the percentage of youth served tripled. Which was it?

It depends on how you slice your data

There is more decision-making in even basic statistics than most people realize. We are looking at a pretty basic question here: "Did the percentage of the caseload that was youth, age 25 and under, increase?"

The first question is, "Increase from when to when?" That is, what year is the cutoff? In this case, the answer is easy. We had observed that the percentage of youth served was decreasing, and changes were undertaken in 2015 to reverse that trend. So, the decision was to compare 2015 and later with 2014 and earlier.

Percent of Youth on Caseload, by Year

How much of an increase is found depends on the year used for comparison and whether we use one year or an average.

The discrepancy between the 10x improvement and the 3x improvement comes about because the percentage of youth served by the project varied from year to year, although the overall trend was downward. If we wanted to make ourselves look really good, we could compare the lowest year, 2013 at 2%, with the highest year, 2015 at 20%, and say the increase was 10x. That is true, but I don't think it is the best representation. One reason is that the changes we discussed in the paper weren't implemented until 2015, so there is no justification for using 2013 as the basis.

The second question is how you compute the baseline. At first, I used the previous six years, 2008-2014, as the baseline; using all of that data, youth comprised 7% of the new cases added. Compare that 7% to 20.2% in 2015, and the percentage of youth served almost tripled.

However, we just started using the current database system in 2012 fiscal year and the only people from prior years in the data were those who had been enrolled prior to 2012 and still receiving services. The further back in time we went, the fewer people there were in the system, and they were definitely a non-representative sample. Typically, people don’t continue receiving vocational rehabilitation services for three or four years. 

You can see the number by year below. The 2018 figure is only through June of this year, which is when I took a snapshot of the database.

If we use 2013-2014  as a baseline, the percentage of youth among those served was 4%. If we use 2012-2014, it’s 6%. 

To me, it makes more sense to compare it to an aggregate over a few years. I averaged 2012 through 2014 because it gave a larger sample size, had representative data, and because I didn't feel comfortable using the absolute lowest year as a baseline. Maybe it was just a bad year. As any good psychometrician knows, the more data points you have, the more reliable your measure.

The third question is how to select the years for comparison. I combined 2015-2018, also because it gave a larger sample size and, again, I did not want to just pick the best year as a comparison. Over that period, 18% of those served by the project were youth.

So … what have we learned? Depending on how you select the baseline and comparison years, we have either improved 10 times (from 2% to 20%), improved 2.6 times (from 7% to 18%), tripled (from 6% to 18%) or quintupled (from 4% to 20%) – and there are some other permutations possible as well.

Notice something here, though. No matter how we slice it, after 2014, the percentage of youth increased, and substantially so. This increase was maintained year after year. 

I thought this was an interesting example of being able to come up with varying answers for the specific statistic, but no matter what, you come to the same conclusion: the changes in outreach and recruitment had a substantial impact.

I’m sure I’ve written about this before – after all, I’ve been writing this blog for 10 years – but here’s something I’ve been thinking about:

Most students don’t graduate with nearly enough experience with real data.

You can use government websites with de-identified data from surveys, and I do, but I teach primarily engineering and business students, so it would be helpful to have some business data, too. Unfortunately, businesses aren't lining up to hand me their financial, inventory and manufacturing data (bunch of jerks!).

So, I downloaded a free app, Medica Scientific, from the app store and ran a simulation of data for a medical device company. Some friends did the same, and this gave me 4 data sets, as if from 4 different companies.

Now that I have 4 Excel files with the data, before you get to uploading the files, I'm going to give you a tip. By default, SAS is going to import the first worksheet. So, move the worksheet you want to be first. In this case, it's a worksheet named "Financials". Since SAS will use the first worksheet no matter what, it could just as well be named "A whale ate my sandwich", but that wouldn't be as obvious.
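(If you'd rather not rearrange worksheets, PROC IMPORT also has a SHEET statement that names the worksheet directly. Here is a sketch, assuming REFFILE points at your workbook as in the generated code further down:)

PROC IMPORT DATAFILE=REFFILE DBMS=XLSX OUT=WORK.IMPORT1 REPLACE;
GETNAMES=YES;
SHEET="Financials";
RUN;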

While you are at it, take a look at the data, with the variable names in the first row. ALWAYS give your data at least a cursory glance. If it is millions of records, opening the file isn't feasible; we cover other 'quick looks' in class.

These steps and the next few use SAS Studio, which is super-duper helpful for online courses.

1. Upload the file into the desired directory
2. Under Tasks and Utilities select Utilities and then Import Data
3. Click select file and then navigate to the folder where your file is and click open
4. You’ll see a bunch of code but nothing actually happens until you click on the little running guy.

menus to select data to import

First select the data set

the import data window

Have you clicked the running guy? Good!


Okay, now you have your code. Not only has SAS imported your data file, it's also written the code for you.

FILENAME REFFILE '/home/annmaria/examples/simulation/Tech2Demo.xlsx';
PROC IMPORT DATAFILE=REFFILE DBMS=XLSX OUT=WORK.IMPORT1;
GETNAMES=YES;
RUN;
PROC CONTENTS DATA=WORK.IMPORT1;
RUN;

Now, if you had a nice professor who only gave you one data set, you would be done, which is why I showed you the easy way to do it.

However, very often, we want to compare several factories or departments or whatever it is.

Also, life comes with problems. Sigh.

One of your problems, which you'd notice if you opened the data set, is that the variables have names like "Simulation Day". I don't want spaces in my variable names.

My second problem is that I need to upload all of my files and concatenate them so I have one long file.

Let’s attack both of these at once. First, upload the rest of your files.

Now,  open a new SAS program and at the top of your file, put this:

OPTION VALIDVARNAME=V7 ;

It will make life easier in general if your variable names don't have spaces in them. The option above tells SAS to enforce the older V7 naming rules, so PROC IMPORT automatically converts the column headers into valid variable names without spaces.
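For instance, with the option in effect, I'd expect the import to behave like this (a sketch of the rule, not output from my actual log):

OPTION VALIDVARNAME=V7 ;
/* With V7 naming rules, PROC IMPORT turns a column header
like "Simulation Day" into the variable name Simulation_Day */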

Now, to import the remaining files, just create a new SAS program and copy and paste the code created by your IMPORT procedure FOUR TIMES (yes, four – the first file gets re-imported, too, as explained below).

From Captain Obvious:

Captain Obvious wearing her obvious hat

Although you’d think this would be obvious, experience has shown that I need to say it.

  • Do NOT copy the code in this blog post. Copy the code produced by your own IMPORT procedure; it will have your own directory name.
  • Do NOT name every output data set IMPORT1, because if you do, each step will replace the previous data set and you will end up with one data set and be sad.

Since I want to replace the first file, I’m going to need to add the REPLACE option in the first PROC IMPORT statement.

OPTION VALIDVARNAME=V7 ;

FILENAME REFFILE '/home/annmaria/examples/simulation/Tech2Demo.xlsx';
PROC IMPORT DATAFILE=REFFILE DBMS=XLSX
REPLACE
OUT=WORK.IMPORT1;
GETNAMES=YES;
RUN;
PROC CONTENTS DATA=WORK.IMPORT1;
RUN;

FILENAME REFFILE '/home/annmaria/examples/simulation/Tech2Demo2.xlsx';
PROC IMPORT DATAFILE=REFFILE DBMS=XLSX
REPLACE OUT=WORK.IMPORT2;
GETNAMES=YES;
RUN;
PROC CONTENTS DATA=WORK.IMPORT2;
RUN;

Do that two more times for the last two data sets.
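(Or, if copying and pasting four times offends you, a small macro loop can do the repetition. This is a hedged sketch that assumes the remaining files are named Tech2Demo2.xlsx through Tech2Demo4.xlsx, which may not match your actual file names.)

%MACRO importall ;
%DO i = 2 %TO 4 ;
FILENAME REFFILE "/home/annmaria/examples/simulation/Tech2Demo&i..xlsx";
PROC IMPORT DATAFILE=REFFILE DBMS=XLSX REPLACE OUT=WORK.IMPORT&i;
GETNAMES=YES;
RUN;
%END ;
%MEND importall ;
%importall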

Did you need to use the utility? Couldn't you just have written the code from the beginning? Yes. I just wanted to show you that the utility existed. If you only had one file and it had valid variable names, which is a very common situation, you would be done at that point.

In a real-life scenario, you would want to combine all of these into one file so you could compare clinics, plants, whatever. Super easy.

[IF you have write access to a directory, you could create a permanent data set here using a LIBNAME statement, but I'm going to assume that you are a student and you do not. By default, a one-level data set name like the one below is written to the temporary WORK library.]

DATA allplants ;
set import1 - import4 ;
run ;
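One more hedged suggestion: once the files are stacked, nothing in the data says which plant a record came from. The SET statement's INDSNAME= option (SAS 9.2 and later) can keep that information. A sketch:

DATA allplants ;
LENGTH plant $ 41 ;
set import1 - import4 INDSNAME = dsname ;
plant = dsname ; /* e.g. WORK.IMPORT1 */
run ;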

IF you get an error at this point, what should you do?

There are a few different answers to that question and I will answer them in my next post.

SUPPORT MY DAY JOB. IT'S FUN AND FREE!
YOU CAN DOWNLOAD A SPIRIT LAKE DEMO FOR YOUR WINDOWS COMPUTER FROM THE MICROSOFT STORE

I was supposed to be teaching statistics to undergraduate Fine Arts majors this semester but I’m going to Santiago to open a Latin American office for 7 Generation Games instead.

I’m a bit disappointed because even though when I was younger and got asked at cocktail parties what I did for a living, I would say,

I teach statistics to people who don’t want to learn it.

teaching Fine Arts majors would probably be a new experience.

I was planning on using Excel to teach that course. However, as I take a closer look at SAS Studio I think it might be feasible to use SAS.

First of all, it’s free for academics and you can use it on any device, including an iPad. I know because I’ve tested it.

Second, and more important for this group, you can use the tasks and do some real-life analyses with almost no coding.

For example, I want to know if the sample of students we tested on American Indian reservations who had a family member addicted to methamphetamine were, on average, over the cutoff for depressive symptoms. On the scale we used, the CESD-C, the cutoff score is 15.

Step 1: Run the code to assign the directory with the data I made available for the course, for example,

libname in "/home/annmaria.demars/data_analysis_examples";
run;

Step 2: Under the TASKS menu on the left, select STATISTICS and then t Tests

selecting t-tests


Step 3: Next to the DATA field you'll see a thing that looks kind of like a spreadsheet. It's supposed to symbolize a data file. Click on that and a box will come up that lets you pick the directory (library) and the file within it. In my case, it is the CESD_score file.

selecting the data

Step 4: Now that I have my data set selected, from the ROLES menu I select one-sample t-test.

Step 5: Click the + next to Analysis Variable and select the dependent variable; in my case, this is CESDTotal.

Data selected for one-sample t-test

Step 6: Now click on the OPTIONS tab. A two-tailed test is selected as the default. That's good; leave it. By default, the test is against a hypothesized mean of 0, but I want to change that to 15. Just click the little running guy at the top to get results.

options for t-test


I showed the results in a previous post; the mean for my sample of 18 youth was 21 (p < .05).

What if we did an UPPER one-tailed t-test? Then my p-value is .015 instead of .03.

What if we did a LOWER one-tailed test? Then my p-value is 1.0.

Getting these latter 2 tests takes about 5 seconds. All I need to do is change the option for tails and click on the running man again.
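(If you're curious what those same runs look like in code, the only thing that changes is the SIDES= option. A sketch, using the data set and variable from this example:)

PROC TTEST DATA=cesd_score H0=15 SIDES=U ; /* SIDES=2 two-tailed, U upper, L lower */
VAR CESDTotal ;
RUN ;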

Now, in just a few minutes, I have data under three different assumptions, from an actual study. My students and I can start discussing what that means.

Bottom line, check out SAS Studio. It may be more of an option for your students than you think.

monkey

Meet the howler monkey in Aztech Games


Speaking of baby steps for learning statistics, check out Aztech Games. You can play them in English or Spanish on your iPad. Learn statistics and Latin American history at the same time.

In a previous post, I asked what you would do if one person's score changed your results:

  • Would you throw them out?
  • Leave them in?
  • Does it depend on whether they support your hypothesis or not?

A few people suggested collecting more data, and I completely agree with their very valid point: if one person can change your results from significant to non-significant, you probably have a small sample size, which we did, and that is a problem for a number of reasons that warrant their own posts. It's not always possible to collect more data, though, due to time, money or other constraints (only so many people are considerate enough to die from rabies bites in a given year). In our case, we have a grant under review to follow up on this pilot study with a much larger sample, so if you are on the review committee, let me just take this opportunity to say that you are good-looking and your mother doesn't dress you funny at all.

A couple of other people commented on not getting too tied up with significance vs. non-significance, especially since a confidence interval with a sample size this small tends to be awfully wide. I agree with that also, but that, too, is a post in itself.

So, what would I do?

students at desk

First of all, I would check whether there were any problems in data entry. You'd laugh if you knew how often I have heard people trying to explain results due to an outlier, and that outlier turns out to be a data entry person who typed 00 instead of 20, or a student who just went down the column circling "Always" for everything.

For example, on this particular screening measure for depression, some of the items are reverse coded. If you did not pay attention to that and you just answered “A lot” for every item you would get an artificially depressed score (no pun intended). That was not the case here. I looked at the individual responses and, for example, the subject answered “Not at all” to “I felt down and unhappy” and “A lot” to “I felt happy”.

I checked to see that the measure was scored properly. Yes, their answers were consistent, with "Not at all" to all of the depressed items and "A lot" to all of the reverse-coded items. This was just a happy kid.

So, that wasn’t it.

Second, I checked to see if there was a problem with the subject. Occasionally, we will get a perfect score on the pre- or post-tests for our math games and, upon closer examination, it turns out that the prodigy is actually a teacher who wanted to see what our test was like for him/herself. Either that, or it was a really dumb kid who's failed fifth grade 37 times.

That wasn’t it, either. This student was in the same target age group from one of the same two American Indian reservations as the rest of the students.

After ruling out both non-sampling error and sampling error, I then went and did what most people recommended: I analyzed the data both ways. In my case, the one student did not change the results, so when I reported the results to staff from the cooperating reservations, I mentioned that there was one outlier, but 2/3 of the youth tested were above the screening cutoff for symptoms of depression; the cutoff score is 15, while the mean for the young people assessed on their reservation was 21. I should note that this was not a random sample, but rather a sample of young people who had a family member addicted to alcohol or drugs, mostly methamphetamine.

Since in this case the results did not change substantively, I just reported the results including the outlier.

If there HAD been a major difference, I would have reported both results, starting with the results without the outlier, stating that this was with one subject excluded, and that with the outlier included, the results were X.

I think the results without the outlier are more reliable, because if your finding of significance (or not) depends on that one person, it's not a very robust finding.

Here is my general philosophy of statistics and it has served me well in terms of preventing retracted results and looking like an idiot.

Look for convergence.

What I mean by that is to analyze your data multiple ways and, if possible, over multiple years with multiple samples. That's one reason I'm really grateful we've received USDA Small Business Innovation Research funding over multiple years. While university tenure committees are fond of seeing people crank out articles, the truth is, at least in education, psychology and most fields dealing with actual humans, it often takes quite some time for an intervention to see a response. Not only that, but there is a lot of variation in the human population. So, you are going to have a lot more confidence in your results if you have been able to replicate them with different samples, in different places, at different times.

If your significant finding only occurs with a specific group of 19 people tested on January 2, 2018 in De Soto, Missouri, and only when you don’t include the responses from Betty Ann McAfferty, then it’s probably not that significant, now is it?


What I do when I’m not blogging — make educational video games.  

girl in jungle

Please check our latest series in the app store for your iPad, Aztech Games, which teaches Latin American history and (what else) statistics. The first game in the series is free.

This is a hypothetical question, but it could easily happen. Let me give you a real example.

Using a mobile phone game, we administered a standard depression screening measure (the CESD-C) to 18 children living on or near an American Indian reservation. All children had a family member who was an alcoholic or addicted to drugs. I decided to do a one-sample t-test of the hypothesis that the mean for this population = 15, which is the cutoff value for symptoms of depression. Here is the code, although I didn't write it myself (more about that later).

PROC TTEST DATA=cesd_score SIDES=2 H0=15 plots(showh0);
var CESDTotal;
run;

The results are shown below, with  a mean of 21 and a range from 3 to 38.

ttest results

You can see that the t-value of 2.34 is significant at p < .05; that is, the mean for this sample is significantly different from the cutoff score of 15. You can see more results here. What if it hadn't been significant, though? What if, instead of .0317, the probability was .0517?

What if dropping this one person with a score of 3 changed the result? In fact, it did change the mean to 22, and the p-value to .0115. You can see all of those results here.
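(In code, dropping that one observation is a WHERE statement away. A sketch, assuming the outlier is the only subject with a total score of 3:)

PROC TTEST DATA=cesd_score SIDES=2 H0=15 ;
WHERE CESDTotal ne 3 ; /* excludes the one subject who scored 3 */
VAR CESDTotal ;
RUN ;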

So, let's say that, hypothetically, dropping this outlier WOULD change your results. Would you do it? Would you report it?

Think about it. In a couple of days, I will give you my answer and my justification.

As to not having coded it – I used the tasks in SAS Studio which I found to be pretty fun, but more on that in my next post.


Play Aztech: Meet the Maya – for your iPad in the app store, in Spanish and English.  The second in our series of bilingual games teaching basic statistics and Latin American history. Only $1.99 

girl in jungle

P.S. There is a third possibility here, which is changing the test from a two-tailed test to a one-tailed test. Surely, an argument can be made that we don't expect children with a family member who is addicted to alcohol or drugs to be less depressed than the cutoff score? They would either be equally depressed or more depressed. Personally, I don't buy that argument. I could accept that the sample might be more depressed than average, but I'm not sure one could justify that the mean necessarily MUST be more than the cutoff for depressive symptoms.


Let me say right off the bat that the number of contracts I've had where people wanted me to tell them what to do I can count on one hand – and I've been in business 30 years. Generally, whether it is an executive in an organization where I'm an employee or a client for my consulting services, people don't want me to tell them what to do:

Hey, you should do a repeated measures ANOVA.

Nope, they want me to DO it. It’s funny how often I find myself doing the same procedures for vastly different organizations, everywhere from the middle of Missouri to downtown Los Angeles to American Indian reservations in North Dakota to (soon) Santiago, Chile.

view over the top of my ipad

There are also those procedures I only use once in a great while, but that’s the topic of another post. Here are a couple of my go-to procedures.

Fisher’s Exact Test

Earlier this year I wrote about Fisher's Exact Test and how I had used this teeny bit of code:

PROC FREQ DATA = install ;
TABLES rural*install / CHISQ ; /* for a 2x2 table, CHISQ output includes Fisher's Exact Test */
RUN ;

It is an example of how you do it in SAS for everything from testing whether urban school districts have significantly more bureaucratic barriers to using educational technology than rural districts (they do) to whether mortality rates are lower in a specialized hospital unit than for patients with the same diagnosis in a standard unit.

Confidence Limits for the Mean

Working with small samples in rural communities, I often don’t have the luxury of a control group. I know this makes me sound like a terrible researcher and that I never read a quantitative methods or experimental design textbook. However, let me give you an example of the types of conversations I have all of the time.

Me:  I’d like to use your program as a control group. I’ll come in and test all of your students and then two months later, I’ll test them all again.

Principal/ Superintendent/ Program Director:  You mean you want me to take up two periods of class / counseling time for your tests?

Me: Yes.

Them: You wouldn’t actually be giving our students any services or educational program, you’d just be taking two hours from all of our students.

Me: Yes, and then I’ll compare their results to those of the students who do get services.

Them: What do our students get out of it?

You can see where this conversation is going. One solution might be to pay all of the students some amount to stay after school or come in for an extra counseling period or whatever is being compared, so they aren't missing out on services to take the test. However, Institutional Review Boards are cautious about substantial incentives, because they feel very low-income people might be coerced into participating – for some of the people in our research, $10 is a lot of money.

The result is that I don't always have a control group, but all is not lost. Being smarter than I look (yes, really), I often use standardized measures for which there is a lot of research documenting the mean, and I can do a one-sample test.

proc means data=cesd_score alpha=.05 clm mean std ;
var cesdtotal ;
run ;

This will give me the 95% confidence interval for the mean, and I can see if my sample is significantly different from the documented mean. For example, with a sample of 18 children from an American Indian reservation, the mean score on the CESD-C, a measure of depression, was 21. The cutoff for considering the respondent as showing depressive symptoms is 15. With a confidence interval from 15.6 to 26.4, I can say that there is a greater than 95% probability that the population mean meets the cutoff for depressive symptoms. Notice that the lower confidence limit is still above the screening cutoff point of 15.

There is an interesting question related to this specific study, but it will have to wait for tomorrow since I have to head to the airport in a few hours. This week, I’m heading to Missouri. If you want to meet up and talk statistics, video games or just drink beer, let me know.


Play Aztech: The Story Begins – free for your iPad in the app store, in Spanish and English.  The first in our series of bilingual games teaching math and history.

girl in jungle


Almost always when I get asked to teach anything my answer is:

No. 

I don't even think about it. Just, no. I'm too busy. Usually, I'll teach one graduate class a year and that's it. However, recently I had the opportunity to teach an introduction to statistics course and design the whole course from the ground up, which sounded like my idea of fun. The college is predominantly an arts school, with students majoring in screenwriting, dance, drama and a smattering of entertainment business majors.

Normally, when I teach graduate statistics courses, I use SAS, require students to learn at least a minimal amount of programming, and expect them to be able to do things like partition the sums of squares.

Julia in Trinidad

The Spoiled One NOT computing the area under the curve

It just so happens that The Spoiled One, who is a Creative Writing major (what does she want to be when she graduates? Unemployed, apparently), took statistics last year, which resulted in many 11 pm (2 am Eastern time, where she attends school) phone calls to me on things like how to compute the area under the curve between two z-scores.

Despite my best efforts, I believe she left the class with zero conviction that she would ever use statistics, and I really don’t blame her. There is not a lot of call in one’s daily life for looking up values in a z table, it being the 21st century and all and us having computers.

Here is my honest appraisal of my soon-to-be students: nearly 100% of them will be able to use skills such as creating graphs with Excel, computing averages, and understanding the difference between the median and the mean and when each measure is appropriate. I can tell them truthfully how they could use this information in deciding which contract to accept, in which film to invest and whether a particular dance studio is preferable to another in terms of business viability. There is less than a 10% chance that, as juniors and seniors in an arts college, they are going to change their minds and decide they want to go into a research career. If they do make that choice, everything they learn in this course will apply. What I did not do was include a lot of proofs, matrix algebra or computation.

I gave some thought to using JMP, because of the graphics, and to SAS Studio, because it is available free and we could use the tasks menu, which is pretty cool, but the fact is these students are most likely familiar with Excel, and the campus already has a license. It's installed on every computer in the lab. Installing the Analysis ToolPak is super-easy, whether you are using Office 365 or the regular Office (I hear some people call that the productivity suite).

So, if I am not having students use SAS or calculate the area under a curve, what am I doing?

One thing I am requiring is that every student create their own livebinder. You're welcome to take a look at the livebinder I'm preparing for my own purposes for the course. Just look under the livebinder assignment tab.

I have a lot more to write about this later. Right now,  I have guests on the way so I’ll try to post more tomorrow.


Want to learn statistics in a game?  Play Aztech: The Story Begins – free for your iPad in the app store, in Spanish and English.  The first in our series of bilingual games teaching math and history.

deer in back yard

What do a herd of deer and a sea lion have to do with statistics?

Friday, I was on the Spirit Lake Dakota Nation in North Dakota. Most of my time there was spent at the Spirit Lake Vocational Rehabilitation Project, an impressively effective group of people who help tribal members with disabilities get and keep jobs. A few years back, I wrote a system to track their data using PHP and MySQL. It is deliberately simple, because they wanted a basic database that would give reports on the number of people served, how many had jobs, and some demographic information. A research project used SAS to analyze the data to try to identify predictors of employment.

Due to a delayed flight, I spent the night with my friend in Minot, discussing, among other things, the decline in native speakers of Cree, and not the herd of deer in her backyard, which was commonplace enough to pass without comment.

harbor at night

Saturday, I was back home in California, on a dinner cruise in Marina del Rey. We were discussing how to analyze the data on persistence in our games to show that the re-design, with a longer lead-in story line and a higher proportion of game play early on, was effective. I suggested maybe we could use survival analysis. Really, it's the same scenario: how many people are alive after 2, 3 or 4 months, or how many people kept playing the game after the 2nd, 3rd or 4th problem.

sea lion on dock
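(A hedged sketch of what that might look like, with hypothetical data set and variable names, since we haven't actually pulled the game data yet:)

/* Hypothetical: problems_done = last problem a player reached,
quit = 1 if the player stopped there, 0 if still playing (censored) */
PROC LIFETEST DATA=gameplays PLOTS=SURVIVAL ;
TIME problems_done * quit(0) ;
STRATA version ; /* original vs. re-designed game */
RUN ;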

The deer, the large loud sea lion on the dock and I spent the exact same amount of time discussing the probability mass function for a Poisson distribution and proving the Central Limit Theorem.

My point is that everywhere I go, and that is a REALLY broad range of places, people are interested in the application of statistics, but SO much of school is focused on teaching how to compute the area under the normal curve, how to prove some theorem, or how to compute coefficients by plugging numbers into a formula on a calculator and inverting matrices. I'm not sure how helpful that ever was to a student, and I can guarantee you that the last time I computed the sums of squares without using a computer was about 35 years ago.

Whether you are using SAS, SPSS, Excel, R, JMP or any one of a dozen other statistical packages, the software lets you focus on what's really important. Does age actually predict whether or not someone is employed (in this case, no)? Do rural school districts have fewer bureaucratic barriers? Is this a reliable test? Did students who played these games improve their math scores?

When I was young, and many of the current statistical packages were either very new and limited or didn't yet exist, someone asked me if I was worried that I would be out of a job. I laughed and said no, because what computers were replacing was the computational part of statistics, and except for the tiny proportion of people who were going to be developing new statistics, the jobs were all going to be in applying formulas, not proving them, and sure as hell not computing them with a pencil and a piece of paper. A computer allows you to focus on what's important.

What IS important? That’s a good question and another post.


Having trouble teaching basic statistics to students? Start with Aztech: The Story Begins  — free from the app store (and it’s bilingual)

girl in jungle
