Everyone should pause every now and then and ask themselves this question:

What would you want to be doing if you weren’t doing this?

Whatever “this” is, your answer will be revealing. If you excitedly exclaim,

“I would start a restaurant, using my grandma’s recipes, but it would be more of a modern look – kind of a Tuscany theme. The Venice area is coming up, I’d open it there.”

… then you have spent far too much time thinking about what you could be doing instead of what you are doing now. Maybe you should really take some steps to do something else instead of whatever your “this” happens to be.

If you ask me what I would be doing if I weren’t making games and analyzing data, I’d be hard pressed to give you an answer.

My first answer would be to protest,

“But I WANT to be doing this.”

It’s not that I haven’t thought about it. If I was not working, I could be doing any of these things:

  • Going to the Aquarium of the Pacific with my family (I have a membership),
  • Going to Disneyland ( I have a membership)
  • Going out to eat with my family
  • Writing a paper for a conference
  • Teaching judo
  • Riding a bike on the beach
  • Reading a book

All of those are perfectly fine things to do – as evidence, I offer lunch today with The Perfect Jennifer and Darling Daughter Number Three – but notice that none of them is a full-time gig.

Darling daughters

If I could not be doing what I’m doing now, I wouldn’t mind going back to teaching college. I wouldn’t mind going back to teaching middle school if they would start classes at some reasonable hour, say 10 am.

I have two memberships to tourist attractions because I tend not to take off work very often. The memberships mean that I have some incentive to get out of the office. It’s not going to cost me anything to go to Disneyland and it’s wasting money if I don’t go because The Invisible Developer paid for me to have a membership.

Maybe, if your list of other things to do is too long, or if, when you come to one of them, you think longingly,

I really SHOULD go for a walk on the beach! I can’t remember the last time I did that!

… then you should take a break and do some of them.

At the moment, I’ve been doing better than usual at meeting friends for lunch, going to movies, calling my mom and cleaning the house. Asking myself on a regular basis,

“What would you be doing if you weren’t doing this?”

forces me to focus on the things that I am NOT doing, whether in my business or personal life, and to evaluate whether what I’m doing at this moment is worth giving up those other things.

Most of the time the answer is, “Yes.”

If, for you, most of the time the answer is, “No”, well, I think you know what you need to do.

+++++++++++

FEEL SMART AFTER READING THIS BLOG?

WANT TO BE EVEN SMARTER?

Check out 7 Generation Games – Games that make you smarter

Rocks

This IS my day job.

The heroine in Labyrinth (fun movie, by the way) complains,

“That’s not fair!”

and the Goblin King responds,

“You say that so often. I wonder what your standard of comparison is.”

Sunday evening and I’m taking a break from working on our latest game to offer another installment in Mama AnnMaria’s Guide on Not Getting Your Sorry Ass Fired. I was talking to my son-in-law, Mr. Perfect Jennifer, and he commented that many in his field – graphic art and animation – set their rates at what the client can be convinced to pay. If a client is somewhat naive about the going rate, they will charge two or three times as much – the sky’s the limit!

What is wrong with that? Well, for one thing, if you are vastly over-charging your clients, at some point they will find out that other people are not paying $775 per hour for a statistical consultant, and they will fire your sorry ass and hire someone more reasonable.

People starting out as freelancers or consultants are often unclear about what a “fair rate of pay” is.

Let me recommend this:

1. NEVER base your fees on what you “need to maintain a certain lifestyle”. Unbelievably, I have seen many people starting out set their fees exactly like this. They will tell me how expensive it is to live in San Francisco or New York City, as if that matters. Let me give you this example to show how stupid this is. You hire someone to clean your office every night. We’re not talking Trump Tower here, but just your standard executive office – desk, a couple of chairs, table. Dave’s Cleaning tells you that the cost will be $3,000 a month. When you ask how he can possibly justify $700 to clean one room, Dave explains that you are his only client. He lives in a studio apartment that costs $1,100 a month, and he has to eat, doesn’t he?

desk

2. Find statistics on rates for your field. For example, the American Statistical Association published this article in 2011 on rates for statistical consulting. You could look up your field on the Bureau of Labor Statistics site, although since that is for full-time employees, who receive benefits, you should estimate the cost to the employer as 20-30% more than whatever the median salary is (for example, a $100,000 median salary is really $120,000 to $130,000 to the employer once benefits are included). What, you mean to tell me that you have no idea what the average market rate is for your services? How the hell did you come up with a rate then?

3. Figure out where you fall in the range. Do NOT, NOT, NOT look at the top of the range and say, “I could charge that much.”
The inter-quartile range for statistical consultants in 2011 was $89 to $189 an hour. That means that 25% of consultants charged less than $89 and 25% charged $189 or more. The median was $130 an hour. Do you find yourself saying,

“Hey, I’m in at least the top 25% for sure, I could charge $189 an hour.”

What is your standard of comparison? According to the ASA, consultants who have a Ph.D. charge an average of $44 an hour more. Do you have a Ph.D.? What justifies your claim to be in the top 25%? How many years of experience do you have? How many years does the average person in your field have? What metrics do you have that justify your claim? How much have you brought in – sales, grant money, subscriptions, enrollment? Is that as much as the average person? If not, how do you justify even that $130 an hour?

4. Account for your “non-monetary costs”. I have a Ph.D., 30 years of experience, have brought in tens of millions of dollars in grant funds for clients, published articles in academic journals, and given hundreds of conference presentations. Would it surprise you to know that I do NOT charge in the top 25%? There are many non-monetary factors in my consulting work that do not apply to other consultants who charge more. I don’t come into the office before 10 a.m. I wear a suit about 10 times a year. I only take projects that interest me, working with people I like and respect. With rare exceptions (I’m looking at YOU, Belcourt, ND, in January), I don’t travel to places unless I really want to go there. On the other hand, when my children were all living at home and I really needed the money, I traveled more, got up earlier, wore more suits, worked with more jackasses and charged more money. Back then, I agreed to respond to client calls or emails within an hour. Now, it is within 24 hours. I also charge less now so I can choose the clients I want to work with. (BTW, we are not taking new clients at this time.)

So, that’s it: decide a fair rate based on what the market is paying and on where, judged by objective criteria, your skills and experience fall compared to the general population of whatever-you-do, and then figure in whatever non-monetary requirements you or the employer have.

Charge a fair price, based on the market and your documented accomplishments, not on what you need or what you think you’re worth, and you’re less likely to get your sorry ass fired.

–*—–*—–*—–

Feel smarter after reading this blog? Want to be even smarter? Buy our games, Fish Lake and Spirit Lake. Learn math, social studies and explore our virtual worlds.

7 Generation Games. Games that make you smarter.

7 Generation Games Logo

In the past, I have questioned the extent to which we really suck at math in the U.S. While I’m still a bit skeptical that the average child in developing countries is doing dramatically better than children in the U.S., one thing is pretty clear from our results to date, and that is that the average child living in poverty** in the U.S. is doing pretty darn bad when it comes to math.

About a week ago, I discussed the results from a test on fractions given as part of our Fish Lake game evaluation. The pretest score was around 22% correct. Not terribly good.

There were also two questions where children had to explain their answers:

   Zoongey Gniw ate 1/3 of a fish. Willow ate 1/3 of a different fish. Zoongey Gniw said that he ate more fish. Willow says that he ate the same amount as she did, because they both ate 1/3 of a fish. Explain to Willow how Zoongey Gniw could be right.

3 different ways of showing 1/4

 

Explain why each of the above figures represents ONE-FOURTH.

Answers were scored 2 points if correct, 1 if partially correct and 0 if incorrect.

Out of 4 points possible, the mean for 260 students in grades 3 through 7 was .42. In other words, they received about 10% of the possible points.

These two questions test knowledge that is supposed to be taught in 3rd grade, and 96% of the students we tested were in fourth grade or higher.

PUH-LEASE don’t say,

“Well, those are hard questions. I’m not sure I could explain that.”

If that is the case, feel sad! These are easy questions if you understand basic facts about fractions. “Understand” is the key word in that sentence.

SO many people, including me when I was young, simply memorize facts and repeat them when prompted, like some kind of trained parrot, with no more understanding than the parrot has.

When understanding of mathematics is required, they fail. Yes, some of the items tested under the new Common Core standards are harder. That doesn’t show a failure of the standards or tests, but rather of the students’ knowledge.

This is one of those cases where “teaching to the test” is not a bad idea.

** The reason I limited my statement to children living in poverty is that the schools in our study had from 72% to 98% of their students receiving free lunch. Being a good little statistician, I don’t want to extrapolate beyond the population from which our sample was drawn.

Sometimes, you can know too much programming for your own good. Yesterday, I was working on analyzing a data set and can I just say here that SAS’s inability to handle mixed-type arrays is the bane of my existence. (In case you don’t know, if you mix character and numeric variable types in an array, SAS will give you an error. If you know an easy way around this, you will be my new best friend.)
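
To be concrete, here is a toy example (the variables are made up) of the array SAS objects to, and the all-one-type arrays it will accept:

DATA demo ;
   LENGTH name $ 8 ;
   name = "Willow" ;
   score = 1 ;
   * ARRAY both{*} name score ;  /* SAS rejects this - an array cannot mix character and numeric variables */
   ARRAY chars{*} _CHARACTER_ ;  /* fine - all of the character variables */
   ARRAY nums{*} _NUMERIC_ ;     /* fine - all of the numeric variables */
RUN ;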

I started out doing all kinds of fancy things, using

ARRAY answersc{*} _CHARACTER_ ;

to get all of the character variables in an array and the DIM function to give the dimension of an array.

There were various reasons why creating new variables that were character using the PUT function or numeric using the INPUT function was a bad idea.
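
(If you haven’t run into this, the standard work-around is to make a new variable of the other type with PUT or INPUT – something like the made-up example below; it just wasn’t a good fit for this particular data set.)

DATA converted ;
   SET answers ;                 /* hypothetical input data set */
   score_c = PUT(score, 3.) ;    /* numeric score -> character copy */
   grade_n = INPUT(grade, 8.) ;  /* character grade -> numeric copy */
RUN ;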

It occurred to me that I was making the whole process unnecessarily complicated. I had a relatively small data set with just a few variables that needed to be changed to character. So, I opened the original CSV file in SAS Enterprise Guide by selecting IMPORT DATA and picking COMMA from the pull-down menu for how fields are delimited.

opening in SAS EG

Next, for each of the variables, I changed the name, label and type to what I wanted it to be.

change properties

If you’re one of those people who just click “NEXT” over and over when you are importing your data, you may not be aware that you can change almost everything in those field attributes. To change the data type in your to-be-created SAS data set, click in the box labeled TYPE. Change it from number to string, as shown below. Now you have character variables.
change here

Nice! Now I have my variables all changed to character.

One more minor change to make my life easier.

change length

We had some spam in our file, with spambots answering the online pretest and resulting in an input format and output format length of 600. Didn’t I just say that you can change almost anything in those field attributes? Why, yes, yes I did. Click in the box on that variable and a window will pop up that allows you to change the length.

That’s it. Done!

Which left me time to start on the data analysis that you can read about here.

I tell clients on our statistical consulting side all of the time that if your conclusion is only valid when you look at one specific subset of your sample, with one particular statistical technique, it isn’t much of a conclusion. You need to look for a convergence of results. Does the mean score increase? Does the proportion of people passing a test increase? Do the test scores still increase when you co-vary for the pretest score?
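
That last check is essentially an analysis of covariance. For a study with a treatment group and a comparison group, a minimal sketch would look something like this (the data set and variable names are hypothetical):

PROC GLM DATA = results ;
   CLASS group ;                      /* e.g., played the game vs. did not */
   MODEL posttest = pretest group ;   /* is the group effect still there after covarying for the pretest? */
RUN ;
QUIT ;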

(This is for my friend, Dr. Longie, who tells me I always put too many numbers in things and should get to the point – no matter how we sliced it, the scores of students who played Fish Lake improved over 30% from their pretest. Analysis is continuing on Spirit Lake and other data from Fish Lake. There! Are you happy now?) 

We are just at the very beginning stages of analyzing data from the second phase of our research grant funded by the U.S. Department of Agriculture. Coincidentally, we are in Maryland at the National SBIR Conference this week and got the chance to meet in person all of the folks whose email we have been receiving for years.

Me, our CMO and USDA staff

When we were in the middle of developing and testing Fish Lake, one of the interns in our office asked me,

“Are you sure this is going to work?”

I told her,

“No, I’m not sure. That’s why they call it research.”

School has now ended at all of our test sites and I have just completed cleaning the data for analysis from the first data set, which is the pre- and post-test data for Fish Lake, our game that teaches fractions as your avatar retraces the Ojibwe migration – canoeing, hunting and fishing your way across the continent.

So … what happened?

The first thing I did was compute the mean and standard deviation for the students who completed the pretest and the posttest. Then, I merged the datasets together and did a paired t-test for the 61 students who took the post-test and pre-test both. I didn’t show you any of those results because I assumed (correctly) that the merge would have to be reviewed because some people would have misspelled their username on the pretest or posttest. Surprisingly, I only found two of those, as well as one record that was just testing the software by one of our interns. The programs that I developed to clean the data (programs presented at a couple of regional SAS software conferences) worked pretty well.
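
For anyone who wants the mechanics, the steps above boil down to something like this – the data set and variable names here are hypothetical stand-ins, not our actual files:

/* Means and standard deviations for everyone who completed each test */
PROC MEANS DATA = pretest MEAN STD N ;
   VAR score ;
RUN ;

PROC MEANS DATA = posttest MEAN STD N ;
   VAR score ;
RUN ;

/* Match students by username, keeping only those who took both tests */
PROC SORT DATA = pretest ; BY username ; RUN ;
PROC SORT DATA = posttest ; BY username ; RUN ;

DATA matched ;
   MERGE pretest (IN = a RENAME = (score = pre_score))
         posttest (IN = b RENAME = (score = post_score)) ;
   BY username ;
   IF a AND b ;
RUN ;

/* Paired t-test on the matched records */
PROC TTEST DATA = matched ;
   PAIRED post_score * pre_score ;
RUN ;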

Then, I re-ran the analysis.

Result 1

Pre-test mean = 22.4%, SD = 16.5%, N = 260

Post-test mean = 30.8%, SD = 17.4%, N = 63

So far, so good. We were not surprised by the low scores on the pretest. We knew that the majority of students in several of our test schools were achieving a year or two below grade level. The improvement from pre-test to post-test of 8.4 percentage points represented an improvement in test scores of 37.5% (8.4 divided by the pretest mean of 22.4).

BUT …. what if the students who did not take the post-test were the lower performing students? Shouldn’t we do a pretest and post-test comparison only including matched pairs?

This brings us to ….

Result 2 – With Matched Pairs

Pre-test mean = 23.6%, SD = 17.4%, N = 63

Post-test mean = 30.8%, SD = 17.4%, N = 63

As hypothesized, the students who completed the post-test scored higher on the pretest than the average, but not dramatically so. The difference was still statistically significant (p < .01).

What about outliers? That standard deviation seems awfully high to me, and when I look at the raw data I find five players who have a 0 on the pretest or post-test, and one who had one of the highest scores on the pretest but whose test is blank after the first few questions.

Now, it is possible that those students just knew none of the questions – but it appears they just entered their username and (almost) nothing else. I deleted those 6 records and got this:

Result 3 – Matched Pairs with Outliers Deleted

Pre-test mean = 24.4%, SD = 16.2%, N = 57

Post-test mean = 32.7%, SD = 16.7%, N = 57

With a difference of 8.3 percentage points, this represents an improvement of 34% (p < .001).

Conclusion? Well, we are not even close to a conclusion because we have a LOT more data still to analyze, but what I can say is that the results are looking promising.

I’m preparing a data set for analysis and since the data are scored by SAS I am double-checking to make sure that I coded it correctly. One check is to select out an item and compare the percentage who answered correctly with the mean score for that item. These should be equal since items are scored 0=wrong, 1=correct.

When I look at the output for my PROC MEANS it says that 31% of the respondents answered this item correctly, that is, mean = .310.

However, the correct answer is D, and when I look at the results from my PROC FREQ it shows that 35% of the respondents answered ‘D’.
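
The two checks themselves are nothing fancier than this – here score1 is a hypothetical name for the scored (0/1) version of the item, and item1 is the raw response, the same name used in the fix below:

PROC MEANS DATA = in.score MEAN N ;
   VAR score1 ;     /* item scored 0 = wrong, 1 = correct */
RUN ;

PROC FREQ DATA = in.score ;
   TABLES item1 ;   /* raw response: A, B, C or D */
RUN ;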

What is going on here? Is my program to score the tests off somewhere? Will I need to score all of these tests by hand?

Real hand soaps

I am sure those of you who are SAS gurus thought of the answer already (and if you didn’t, you’re going to be slapping your head when you read the simple solution).

By default, PROC FREQ gives you the percentage of non-missing records. Since many students who did not know the answer to the question left it blank, they were (rightfully) given a zero when the test was automatically scored. To get your FREQ and MEANS results to match, use the MISSING option, like so:

PROC FREQ DATA = in.score ;
TABLES item1 / MISSING ;
RUN ;

You will find that 31% of the total (including those who skipped the question) got the answer right.

Sometimes it’s the simplest things that give you pause.

Do you have a bunch of sites bookmarked with articles you are going to go back and read later? It’s not just me, is it?

One of my (many) favorite things at SAS Global Forum this year was the app. It included a function for emailing links to papers you found interesting. Perhaps the theory is that you would email these links to your friends to rub it in that their employer did not like them well enough to send them to the conference. I emailed links to myself to read when I had time. Having finally caught up on coding, email and meetings, today I had a bit of time.

I was reading a paper by Lisa Henley

A Genetic Algorithm for Data Reduction.

It’s a really cool and relatively new concept – from the 1970s – compared to 1900 for the Pearson chi-square, for example.

In brief, here is the idea. You have a large number of independent variables.  How do you select the best subset? One way to do it is to let the variables fight it out in a form of natural selection.

Let’s say you have 40 variables. Each “chromosome” will have 40 “alleles” that will randomly be coded as 0 or 1, either included in the equation or not.

You compute the equation with these variables included or not and assess each equation based on a criterion, say, Akaike Information Criterion or the Root Mean Square Error.

You can select the “winning” chromosome/equation head to head – whichever has the better criterion value (for AIC or RMSE, the lower value) wins – although there are other methods of determination, like giving those with a better criterion value a higher probability of staying.

You do this repeatedly until you have your winning equation. Okay, this is a bit of a simplification but you should get the general idea. I included the link above so you could check out the paper for yourself.
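
Just to make the mechanics concrete, here is a very rough one-generation sketch in SAS macro language: random chromosomes only, scored by root mean square error, with no crossover, mutation or repeated generations. This is my own simplification, not the method from Henley’s paper, and the data set (work.mydata), dependent variable (y) and candidate predictors (x1-x10) are all made up:

%MACRO ga_sketch(nchrom=8, nvars=10);
   %DO c = 1 %TO &nchrom;
      /* One random "chromosome": each allele decides whether a given x variable is in (1) or out (0) */
      %LET model_vars = ;
      %DO v = 1 %TO &nvars;
         %IF %SYSEVALF(%SYSFUNC(RAND(uniform)) > 0.5) %THEN %LET model_vars = &model_vars x&v;
      %END;
      %IF %LENGTH(&model_vars) = 0 %THEN %LET model_vars = x1; /* avoid an empty model */
      /* "Fitness" for this chromosome = root mean square error of the equation it encodes */
      PROC REG DATA = work.mydata OUTEST = fit&c (KEEP = _rmse_) NOPRINT ;
         MODEL y = &model_vars ;
      RUN ;
      QUIT ;
   %END;
   /* Stack the fitness values; the "winner" is the chromosome with the lowest RMSE */
   DATA all_fits ;
      LENGTH chromosome $ 41 ;
      SET fit1-fit&nchrom INDSNAME = dsname ;
      chromosome = dsname ;
   RUN ;
   PROC SORT DATA = all_fits ; BY _rmse_ ; RUN ;
%MEND ga_sketch;

%ga_sketch ;

I used RMSE as the fitness measure here only because PROC REG writes _RMSE_ to the OUTEST= data set by default; pulling AIC instead takes a little more work.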

Then, while I was standing there reading the paper, the ever-brilliant David Pasta walked by and mentioned the name of another paper on use of Genetic Algorithm for Model Selection that was presented  at the Western Users of SAS Software conference a couple of years back.

I don’t have any immediate use for GA in the projects I’m working on at this moment. However, I can’t even begin to count the number of techniques I’ve learned over the years that I had no immediate use for and then two weeks later turned out to be exactly what I needed.

Even though I knew the Genetic Algorithm existed,  I wasn’t as familiar with its use in model selection.

You’ll never use what you don’t know – which is a really strong argument for learning as much as you can in your field, whatever it might be.

I think I need some advice on appreciating how great my life is.

I haven’t been posting for two weeks, I realized today. First, I got sick  and then when I got better, I was so far behind in working on our games that I just squashed bugs and held design meetings for days.

Now, I’m 99% back to my usual self. Here is what has happened lately, in semi-chronological order.

  1. The Spoiled One was elected senior class president at the La-di-da College Preparatory School, where they also renewed her scholarship for a fourth year and she earned a perfect 4.0. Actually, I think it is higher than that because Honors and AP classes count for 5 points in GPA. She was also signed to a club soccer team. It’s not very usual to start playing club soccer at 17 but she’s not a very usual kid.
  2. Darling Daughter Number 3 and Darling Daughter Number 1 co-authored a book that appeared on the New York Times best-seller list this week.
  3. In the past year, The Perfect Jennifer has gotten married, moved into a house and had her contract renewed to teach yet another year in downtown Los Angeles where she continues to be a blessing to her students (and lots of people consider her that besides me).
  4. Darling Daughter Number 3 had three movie premieres, including one this week for Entourage. I just watched it tonight with The Spoiled One. It was good.
  5. I spent all day on a set today for a commercial. Since there were big signs about not posting on social media, you will just have to wonder until it comes out.
  6. We received another grant from the U.S. Department of Agriculture to develop games for rural schools serving English language learners.
  7. Last month we had a successful Kickstarter campaign to develop another game, Forgotten Trail, to teach statistics.
  8. Darling Daughter Number 1 had a healthy baby and moved into a house, with her husband and three lovely children, that is about half a mile away from us.
  9. In August, I will be flying to Brazil to spend a week with all four of my daughters, while Darling Daughter Number 3 defends her world title.
The Perfect Jennifer and Niece

The Perfect Jennifer Teaches Tree-Climbing

So, basically, everything good you could imagine happening to anyone has happened to me.

Instead of savoring how awesome my life is, most of the time I focus on how much more I want to do on these games, teaching statistics, teaching judo, writing conference papers, reports, journal articles. Not that all of those things aren’t useful and important, but it occurred to me today that, while I’m certainly not unhappy, I feel as if I should be tap-dancing happy – and no tap-dancing has been happening.

We’ve looked at data on Body Mass Index (BMI) by race. Now let’s take a look at our sample another way. Instead of using BMI as a variable, let’s use obesity as a dichotomous variable, defined as a BMI greater than 30. It just so happened (really) that this variable was already in the data set so I didn’t even need to create it.
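
(If it hadn’t been, creating it would take a couple of lines in a DATA step – something like this, using the same variable names as the analysis below and assuming the LIBNAME statement further down has already run:)

DATA example ;
   SET mydata.coh602 ;
   IF bmi_p NE . THEN obese = (bmi_p > 30) ;  /* 1 = BMI over 30, 0 = not; missing BMI stays missing */
RUN ;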

The code is super-simple and shown below. The reserved SAS keywords are capitalized just to make it easier to spot what must remain the same. Let’s look at it line by line.

LIBNAME mydata "/courses/some123/c_1234/" ACCESS=READONLY;
PROC FREQ DATA = mydata.coh602 ;
TABLES race*obese / CHISQ ;
WHERE race NE "" ;
RUN ;

LIBNAME mydata "/courses/some123/c_1234/" ACCESS=READONLY;

Identifies the directory where the data for your course are stored. As a student, you only have read access.
PROC FREQ DATA = mydata.coh602 ;

Begins the frequency procedure, using the data set in the directory linked with mydata in the previous statement.

TABLES race*obese / CHISQ ;

Creates a cross-tabulation of race by obesity. The CHISQ option after the slash produces the second table you see below, with chi-square and other statistics that test the hypothesis of a relationship between two categorical variables.
WHERE race NE "" ;

Only selects those observations where we have a value for race (where race is not equal to missing).
RUN ;

Pretty obvious? Runs the program.

Cross-tabulation of race by obesity

 

Similar to our ANOVA results previously, we see that the obesity rates for black and Hispanic samples are similar at 35% and 38% while the proportion of the white population that is obese is 25%. These numbers are the percentage for each row. As is standard practice, a 0 for obesity means no, the respondent is not obese and a 1 means yes, the person is obese.

The CHISQ option produces the table below. The first three statistics are all tests of statistical significance of the relationship between the two variables.

Table with chi-square statistics

You can see from this that there is a statistically significant relationship between race and obesity. Another way to phrase this might be that the distribution of obesity is not the same across races.

The next three statistics give you the size of the relationship. A value of 1.0 denotes perfect agreement (be suspicious if you find that; it more often means you coded something wrong than that everyone of one race is different from everyone of another race). A value of 0 indicates no relationship whatsoever between the two variables. Phi and Cramer’s V range from -1 to +1, while the contingency coefficient ranges from 0 to 1. The latter seems more reasonable to me, since what does a “negative” relationship between two categorical variables really mean? Nothing.

From this you can conclude that the relationship between obesity and race is not zero and that it is a fairly small relationship.

Next, I’d like to look at the odds ratios and also include some multivariate analyses. However, I’m still sick and some idiot hit my brand new car on the freeway yesterday and sped off, so I am both sick and annoyed.  So … I’m going back to bed and discussion of the next analyses will have to wait until tomorrow.

So far, we have looked at

  1. How to get the sample demographics and descriptive statistics for your dependent and independent variable.
  2. Computing descriptive statistics by category 

Now it’s time to dive into step 3, computing inferential statistics.

The code is quite simple. We need a LIBNAME statement. It will look something like this. The exact path to the data, which is between the quotation marks, will be different for every course. You get that path from your professor.

LIBNAME mydata "/courses/ab1234/c_0001/" access=readonly;

DATA example ;
SET mydata.coh602;
WHERE race ne "" ;
run ;

I’m creating a data set named example. The DATA statement does that.

It is being created as a subset from the coh602 dataset stored in the library referenced by mydata. The SET statement does that.

I’m only including those records where they have a non-missing value for race. The WHERE statement does that.

If you already did that earlier in your program, you don’t need to do it again. However, remember, example is a temporary data set (you can tell because it doesn’t have a two-level name like mydata.example). It resides in working memory. Think of it as if you were working on a document and didn’t save it. If you closed that application, your document would be gone. Okay, so much for the data set. Now we are on to … ta da da

Inferential Statistics Using SAS

Let’s start with Analysis of Variance.  We’re going to do PROC GLM. GLM stands for General Linear Model. There is a PROC ANOVA also and it works pretty much the same.

PROC GLM DATA = example ;
CLASS race ;
MODEL bmi_p = race ;
MEANS race / TUKEY ;
RUN ;

The CLASS statement is used to identify any categorical variables. Since with Analysis of Variance you are comparing the means of multiple groups, you need at least one CLASS statement with at least one variable that has multiple groups – in this case, race.

MODEL dependent = independent ;

Our model is of bmi_p  – that is body mass index, being dependent on race. Your dependent variable MUST be a numeric variable.

The model statement above will result in a test of significance of difference among means and produce an F-statistic.

What does an F-test test?

It tests the null hypothesis that there is NO difference among the means of the groups, in this case, among the three groups – White, Black and Hispanic . If the null hypothesis is accepted, then all the group means are the same and you can stop.
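
Written out, the null hypothesis the F-test evaluates here is:

H0: mean BMI (White) = mean BMI (Black) = mean BMI (Hispanic)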

However, if the null hypothesis is rejected, you certainly also want to know which groups are different from which other groups. After that significant F-test, you need a post hoc test (Latin for “after that”. Never say all those years of Catholic school were wasted).

There are a lot to choose from, but for this I used TUKEY. The MEANS statement above requests the post hoc test.

Let’s take a look at our results.

I have an F-value of 300.10 with a probability < .0001.

Assuming my alpha level was .05 (or .01, or .001, or .0001), this is statistically significant and I would reject my null hypothesis. The differences between means are probably not zero, based on my F-test, but are they anything substantial?

If I look at the R-square, and I should, it tells me that this model explains 1.55% of the variance in BMI – which is not a lot. The mean BMI for the whole sample is 27.56.

You can see complete results here. Also, that link will probably work better with screen readers, if you’re visually impaired (Yeah, Tina, I put this here for you!).

ANOVA table

 

Next, I want to look at the results of the TUKEY test.

table of post hoc comparisons

 

We can see that there was about a 2-point difference between Blacks and Whites, with the mean for Blacks 2 points higher. There was also about a 2-point difference between Whites and Hispanics. The difference in mean BMI between White and Black samples and White and Hispanic samples was statistically significant. The difference between Hispanic and Black sample means was near zero with the mean BMI for Blacks 0.06 points higher than for Hispanics.

This difference is not significant.

So … we have looked at the difference in Body Mass Index, but is that the best indicator of obesity? According to the World Health Organization, who you’d think would know, obesity is defined as a BMI greater than 30.

The next step we might want to take is to examine our null hypothesis using a categorical variable, obese or not obese. That is our next analysis and next post.
