The heroine in Labyrinth (fun movie, by the way) complains,

“That’s not fair!”

and the Goblin King responds,

“You say that so often. I wonder what your standard of comparison is.”

Sunday evening and I’m taking a break from working on our latest game to offer another installment in Mama AnnMaria’s Guide on Not Getting Your Sorry Ass Fired. I was talking to my son-in-law, Mr. Perfect Jennifer, and he commented that many in his field – graphic art and animation – set their rates at what the client can be convinced to pay. If a client is somewhat naive about the going rate, they will charge two or three times as much – the sky’s the limit! What is wrong with that?  Well, for one thing, if you are vastly over-charging your clients, at some point they will find out that other people are not paying $775 per hour for a statistical consultant and they will fire your sorry ass and hire someone more reasonable.

People starting out as freelancers or consultants are often unclear as to what a "fair rate of pay" is.

Let me recommend this:

1.  NEVER base your fees on what you "need to maintain a certain lifestyle". Unbelievably, I have seen many people starting out set their fees exactly like this. They will tell me how expensive it is to live in San Francisco or New York City, as if that matters. Let me give you this example to show how stupid this is. You hire someone to clean your office every night. We're not talking Trump tower here, but just your standard executive office – desk, a couple of chairs, table. Dave's Cleaning tells you that the cost will be $3,000 a month. When you ask how he can possibly justify $700 a week to clean one room, Dave explains that you are his only client. He lives in a studio apartment that costs $1,100 a month, he has to eat, doesn't he?


2. Find statistics on rates for your field. For example, the American Statistical Association published this article in 2011 on rates for statistical consulting. You could look up your field on the Bureau of Labor Statistics site, although since that is for full-time employees, who receive benefits, you should estimate the cost to the employer as 20-30% more than whatever the median salary is. What, you mean to tell me that you have no idea what the average market rate is for your services? How the hell did you come up with a rate then?

3. Figure out where you fall in the range. Do NOT, NOT, NOT look at the top of the range and say, “I could charge that much.”
The inter-quartile range for statistical consultants in 2011 was $89 to $189 per hour. That means that 75% of the consultants charged $89 or more and 25% charged $189 or more. The median was $130 an hour. Do you find yourself saying,

“Hey, I’m in at least the top 25% for sure, I could charge $189 an hour.”

What is your standard of comparison? According to the ASA, consultants who have a Ph.D. charge an average of $44 an hour more. Do you have a Ph.D.? What justifies your claim to be in the top 25%? How many years of experience do you have? How many years does the average person in your field have? What metrics do you have that justify your claim? How much have you brought in, in sales, grant money, subscriptions or enrollment? Is that as much as the average person? If not, how do you justify that $130 an hour?

4. Account for your "non-monetary costs". I have a Ph.D., 30 years of experience, have brought in tens of millions of dollars in grant funds for clients, published articles in academic journals and given hundreds of conference presentations. Would it surprise you to know that I do NOT charge in the top 25%? There are many non-monetary factors in my consulting work that do not apply to other consultants who charge more. I don't come into the office before 10 a.m. I wear a suit about 10 times a year. I only take projects that interest me, working with people I like and respect. With rare exceptions (I'm looking at YOU, Belcourt, ND in January), I don't travel to places unless I really want to go there. On the other hand, when my children were all living at home and I really needed the money, I traveled more, got up earlier, wore more suits, worked with more jackasses and charged more money. Back then, I agreed to respond to client calls or emails within an hour. Now, it is within 24 hours. I also charge less now so I can choose the clients I want to work with. (BTW, we are not taking new clients at this time.)
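For point 2, here is a quick sketch of turning a full-time median salary into an equivalent hourly rate. The 25% benefits load and the 2,080-hour work year are illustrative assumptions on my part, not figures from the BLS:

```python
# Rough conversion from a full-time median salary to an equivalent hourly
# consulting rate. The benefits load (25%) and the hours per year
# (2,080 = 40 hours x 52 weeks) are illustrative assumptions.
def salary_to_hourly_rate(median_salary, benefits_load=0.25, hours_per_year=2080):
    cost_to_employer = median_salary * (1 + benefits_load)
    return cost_to_employer / hours_per_year

# A $104,000 salary works out to $62.50/hour once benefits are added.
print(salary_to_hourly_rate(104000))  # 62.5
```

Remember this is the cost to the employer of an employee; as a consultant you also carry overhead (marketing, unpaid admin time) that an employee does not, which is part of why consulting rates run above salary equivalents.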

So, that's it: decide a fair rate based on what the market is paying, on where your skills and experience fall, by objective criteria, compared to the general population of whatever-you-do, and on whatever non-monetary requirements you or the employer have.

Charge a fair price, based on the market and your documented accomplishments, not on what you need or what you think you're worth, and you're less likely to get your sorry ass fired.


Feel smarter after reading this blog? Want to be even smarter? Buy our games, Fish Lake and Spirit Lake. Learn math, social studies and explore our virtual worlds.

7 Generation Games. Games that make you smarter.




In the past, I have questioned the extent to which we really suck at math in the U.S. While I’m still a bit skeptical that the average child in developing countries is doing dramatically better than children in the U.S., one thing is pretty clear from our results to date, and that is that the average child living in poverty** in the U.S. is doing pretty darn bad when it comes to math.

About a week ago, I discussed the results from a test on fractions given as part of our Fish Lake game evaluation. The pretest score was around 22% correct. Not terribly good.

There were also two questions where children had to explain their answers:

   Zoongey Gniw ate 1/3 of a fish. Willow ate 1/3 of a different fish. Zoongey Gniw said that he ate more fish. Willow says that he ate the same amount as she did, because they both ate 1/3 of a fish. Explain to Willow how Zoongey Gniw could be right.

3 different ways of showing 1/4


Explain why each of the above figures represents ONE-FOURTH.

Answers were scored 2 points if correct, 1 if partially correct and 0 if incorrect.

Out of 4 points possible, the mean for 260 students in grades 3 through 7 was .42. In other words, they received about 10% of the possible points.

These two questions test knowledge that is supposed to be taught in 3rd grade and 96% of the students we tested were in fourth grade or higher.
PUH-LEASE don’t say,


“Well, those are hard questions. I’m not sure I could explain that.”


If that is the case, feel sad! These are easy questions if you understand basic facts about fractions. “Understand” is the key word in that sentence.

SO many people, including me when I was young, simply memorized facts and repeated them when prompted, like some kind of trained parrot, with no more understanding than one.

When understanding of mathematics is required, they fail. Yes, some of the items tested under the new Common Core standards are harder. That doesn’t show a failure of the standards or tests, but rather of the students’ knowledge.

This is one of those cases where “teaching to the test” is not a bad idea.

** The reason I limited my statement to children living in poverty is that the schools in our study had from 72% to 98% of their students receiving free lunch. Being a good little statistician, I don't want to extrapolate beyond the population from which our sample was drawn.





Sometimes, you can know too much programming for your own good. Yesterday, I was working on analyzing a data set and can I just say here that SAS’s inability to handle mixed-type arrays is the bane of my existence. (In case you don’t know, if you mix character and numeric variable types in an array, SAS will give you an error. If you know an easy way around this, you will be my new best friend.)

I started out doing all kinds of fancy things, using

ARRAY answersc{*} _CHARACTER_ ;

to get all of the character variables in an array and the DIM function to give the dimension of an array.

There were various reasons why creating new variables that were character using the PUT function or numeric using the INPUT function was a bad idea.

It occurred to me that I was making the whole process unnecessarily complicated. I had a relatively small data set with just a few variables that needed to be changed to character. So, I opened the original CSV file in SAS Enterprise Guide by selecting IMPORT DATA and picking COMMA from the pull-down menu for how fields are delimited.

opening in SAS EG

Next, for each of the variables, I changed the name, label and type to what I wanted it to be.

change properties

If you're one of those people who just click "NEXT" over and over when you are importing your data, you may not be aware that you can change almost everything in those field attributes. To change the data type in your to-be-created SAS data set, click in the box labeled TYPE. Change it from number to string, as shown below. Now you have character variables.
change here

Nice! Now I have my variables all changed to character.

One more minor change to make my life easier.

change length

We had some spam in our file, with spambots answering the online pretest and resulting in an input format and output format length of 600. Didn’t I just say that you can change almost anything in those field attributes? Why, yes, yes I did. Click in the box on that variable and a window will pop up that allows you to change the length.

That’s it. Done!

Which left me time to start on the data analysis that you can read about here.



I tell clients on our statistical consulting side all of the time that if your conclusion is only valid when you look at one specific subset of your sample, analyzed with one particular statistical technique, you don't really have a conclusion. You need to look for a convergence of results. Does the mean score increase? Does the proportion of people passing a test increase? Do the test scores still increase when you co-vary for the pretest score?

(This is for my friend, Dr. Longie, who tells me I always put too many numbers in things and should get to the point – no matter how we sliced it, the scores of students who played Fish Lake improved over 30% from their pretest. Analysis is continuing on Spirit Lake and other data from Fish Lake. There! Are you happy now?) 

We are just at the very beginning stages of analyzing data from the second phase of our research grant funded by the U.S. Department of Agriculture. Coincidentally, we are in Maryland at the National SBIR Conference this week and got the chance to meet in person all of the folks whose email we have been receiving for years.

Me, our CMO and USDA staff

When we were in the middle of developing and testing Fish Lake, one of the interns in our office asked me,

“Are you sure this is going to work?”

I told her,

“No, I’m not sure. That’s why they call it research.”

School has now ended at all of our test sites and I have just completed cleaning the data for analysis from the first data set, which is the pre- and post-test data for Fish Lake, our game that teaches fractions as your avatar retraces the Ojibwe migration – canoeing, hunting and fishing your way across the continent.

So … what happened?

The first thing I did was compute the mean and standard deviation for the students who completed the pretest and the posttest. Then, I merged the datasets together and did a paired t-test for the 61 students who took the post-test and pre-test both. I didn’t show you any of those results because I assumed (correctly) that the merge would have to be reviewed because some people would have misspelled their username on the pretest or posttest. Surprisingly, I only found two of those, as well as one record that was just testing the software by one of our interns. The programs that I developed to clean the data (programs presented at a couple of regional SAS software conferences) worked pretty well.
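The username-matching step behind that merge can be sketched like this. The usernames and scores below are made up for illustration, not our actual data:

```python
# Toy illustration of the matched-pairs merge: keep only usernames that
# appear in BOTH the pretest and the posttest. A misspelled username on
# either test drops that student out of the matched sample.
pretest  = {"jsmith": 20, "mwhite": 25, "bcrow": 15, "ktwo4": 30}
posttest = {"jsmith": 28, "mwhite": 30, "bcro": 22}   # "bcro" is a misspelling

matched = sorted(set(pretest) & set(posttest))
pairs = [(pretest[u], posttest[u]) for u in matched]

print(matched)  # ['jsmith', 'mwhite'] -- the misspelled username drops out
```

This is also why the matched N can be so much smaller than the pretest N: anyone who took only one of the two tests, or typed their username differently, falls out of the pairs until a human reviews the mismatches.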

Then, I re-ran the analysis.

Result 1

Pre-test mean = 22.4%, SD = 16.5%, N = 260

Post-test mean = 30.8%, SD = 17.4%, N = 63

So far, so good. We were not surprised by the low scores on the pretest. We knew that the majority of students in several of our test schools were achieving a year or two below grade level. The improvement from pre-test to post-test of 8.4 percentage points represented an improvement in test scores of 37.5%.
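To be explicit about the arithmetic, since percentage points and percent improvement are easy to mix up:

```python
pre, post = 22.4, 30.8
point_gain = post - pre                  # gain in percentage points
relative_gain = point_gain / pre * 100   # gain relative to the starting score

print(round(point_gain, 1))     # 8.4 percentage points
print(round(relative_gain, 1))  # 37.5 percent improvement
```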

BUT …. what if the students who did not take the post-test were the lower performing students? Shouldn’t we do a pretest and post-test comparison only including matched pairs?

This brings us to ….

Result 2 – With Matched Pairs

Pre-test mean = 23.6%, SD = 17.4%, N = 63

Post-test mean = 30.8%, SD = 17.4%, N = 63

As hypothesized, the students who completed the post-test scored higher on the pretest than the average, but not dramatically so. The difference was still statistically significant (p < .01).
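For anyone curious what a paired t-test is doing under the hood, here is a minimal sketch. The five pairs of scores are invented for illustration, not our study data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean of the per-student differences divided by
    the standard error of those differences; df = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Made-up scores for five students.
pre_scores  = [20, 25, 15, 30, 22]
post_scores = [28, 30, 22, 35, 29]
t, df = paired_t(pre_scores, post_scores)
print(round(t, 2), df)  # 10.67 4
```

The point of pairing is that each student serves as their own control: the test is on the within-student gains, not on two independent group means.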

What about outliers? That standard deviation seems awfully high to me and when I look at the raw data I find five players who have a 0 on the pretest or post-test, and one who had one of the highest scores on the pretest but whose post-test is blank after the first few questions.

Now, it is possible that those students just knew none of the questions – but it appears they just entered their username and (almost) nothing else. I deleted those 6 records and got this:

Result 3 – Matched Pairs with Outliers Deleted

Pre-test mean = 24.4%, SD = 16.2%, N = 57

Post-test mean = 32.7%, SD = 16.7%, N = 57

With a difference of 8.3 percentage points, this represents an improvement of 34% (p < .001).

Conclusion? Well, we are not even close to a conclusion because we have a LOT more data still to analyze, but what I can say is that the results are looking promising.



I’m preparing a data set for analysis and since the data are scored by SAS I am double-checking to make sure that I coded it correctly. One check is to select out an item and compare the percentage who answered correctly with the mean score for that item. These should be equal since items are scored 0=wrong, 1=correct.

When I look at the output for my PROC MEANS it says that 31% of the respondents answered this item correctly, that is, mean = .310.

However, the correct answer is D, and when I look at the results from my PROC FREQ it shows that 35% of the respondents answered 'D'.

What is going on here? Is my program to score the tests off somewhere? Will I need to score all of these tests by hand?

Real hand soaps

I am sure those of you who are SAS gurus thought of the answer already (and if you didn’t, you’re going to be slapping your head when you read the simple solution).

By default, PROC FREQ gives you the percentage of non-missing records. Since many students who did not know the answer to the question left it blank, they were (rightfully) given a zero when the test was automatically scored. To get your FREQ and MEANS results to match, add the MISSING option on the TABLES statement, like so (q27 here stands in for whatever your item variable is named):

PROC FREQ DATA = in.score ;

TABLES q27 / MISSING ;

RUN ;

You will find that 31% of the total (including those who skipped the question) got the answer right.
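A quick numeric illustration of why the two percentages differ. The counts below are made up to reproduce a 35%-versus-31% mismatch, not our actual item data:

```python
# 260 test-takers; suppose 230 answered the item and 80 of them chose 'D'
# (the correct answer). The blanks are scored 0 by the scoring program.
answered, correct, total = 230, 80, 260

pct_of_answered = correct / answered * 100  # what PROC FREQ reports by default
pct_of_total    = correct / total * 100     # what PROC MEANS sees (blanks = 0)

print(round(pct_of_answered))  # 35
print(round(pct_of_total))     # 31
```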

Sometimes it’s the simplest things that give you pause.



Do you have a bunch of sites bookmarked with articles you are going to go back and read later? It’s not just me, is it?

One of my (many) favorite things at SAS Global Forum this year was the app. It included a function for emailing links to papers you found interesting. Perhaps the theory is that you would email these links to your friends to rub it in that their employer did not like them well enough. I emailed links to myself to read when I had time. Having finally caught up on coding, email and meetings today, I had a bit of time.

I was reading a paper by Lisa Henley

A Genetic Algorithm for Data Reduction.

It’s a really cool and relatively new concept – from the 1970s – compared to 1900 for the Pearson chi-square, for example.

In brief, here is the idea. You have a large number of independent variables.  How do you select the best subset? One way to do it is to let the variables fight it out in a form of natural selection.

Let’s say you have 40 variables. Each “chromosome” will have 40 “alleles” that will randomly be coded as 0 or 1, either included in the equation or not.

You compute the equation with these variables included or not and assess each equation based on a criterion, say, Akaike Information Criterion or the Root Mean Square Error.

You can select the "winning" chromosome/equation head to head, whichever has the better (lower) AIC or RMSE, although there are other methods of selection, like giving those with the better criterion value a higher probability of staying.

You do this repeatedly until you have your winning equation. Okay, this is a bit of a simplification but you should get the general idea. I included the link above so you could check out the paper for yourself.
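Here is a toy sketch of that selection loop in Python. The fitness function is a made-up stand-in for fitting a model and computing AIC or RMSE, and the set of "useful" variables is invented for illustration:

```python
import random

random.seed(1)  # deterministic for the example

N_VARS = 10
TRUE_SUBSET = {0, 2, 5}  # made-up "useful" variables for this toy fitness

def fitness(chromosome):
    """Stand-in for -AIC or -RMSE: reward including the useful variables,
    penalize including the useless ones."""
    chosen = {i for i, bit in enumerate(chromosome) if bit}
    return len(chosen & TRUE_SUBSET) - len(chosen - TRUE_SUBSET)

def crossover(a, b):
    """Single-point crossover: splice the front of one parent to the back of the other."""
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    """Flip each 'allele' with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in chrom]

# Random starting population of 0/1 "chromosomes".
population = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(20)]
initial_best = max(fitness(c) for c in population)

for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # elitism: the fitter half always survives
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
```

Because the fitter half is carried over unchanged each generation (elitism), the best fitness can never get worse, which is one common way to keep the search from losing good solutions to unlucky mutations.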

Then, while I was standing there reading the paper, the ever-brilliant David Pasta walked by and mentioned another paper on the use of a Genetic Algorithm for model selection that was presented at the Western Users of SAS Software conference a couple of years back.

I don’t have any immediate use for GA in the projects I’m working on at this moment. However, I can’t even begin to count the number of techniques I’ve learned over the years that I had no immediate use for and then two weeks later turned out to be exactly what I needed.

Even though I knew the Genetic Algorithm existed,  I wasn’t as familiar with its use in model selection.

You’ll never use what you don’t know – which is a really strong argument for learning as much as you can in your field, whatever it might be.



I think I need some advice on appreciating how great my life is.

I haven’t been posting for two weeks, I realized today. First, I got sick  and then when I got better, I was so far behind in working on our games that I just squashed bugs and held design meetings for days.

Now, I’m 99% back to my usual self. Here is what has happened lately, in semi-chronological order.

  1. The Spoiled One was elected senior class president at the La-di-da College Preparatory School, where they also renewed her scholarship for a fourth year and she earned a perfect 4.0. Actually, I think it is higher than that because Honors and AP classes count for 5 points in GPA. She was also signed to a club soccer team. It’s not very usual to start playing club soccer at 17 but she’s not a very usual kid.
  2. Darling Daughter Number 3 and Darling Daughter Number 1 co-authored a book that appeared on the New York Times best-seller list this week.
  3. In the past year, The Perfect Jennifer has gotten married, moved into a house and had her contract renewed to teach yet another year in downtown Los Angeles where she continues to be a blessing to her students (and lots of people consider her that besides me).
  4. Darling Daughter Number 3 had three movie premieres, including one this week for Entourage. I just watched it tonight with The Spoiled One. It was good.
  5. I spent all day on a set today for a commercial. Since there were big signs about not posting on social media, you will just have to wonder until it comes out.
  6. We received another grant from the U.S. Department of Agriculture to develop games for rural schools serving English language learners.
  7. Last month we had a successful Kickstarter campaign to develop another game, Forgotten Trail, to teach statistics.
  8. Darling Daughter Number 1 had a healthy baby and moved into a house, with her husband and three lovely children, that is about half a mile away from us.
  9. In August, I will be flying to Brazil to spend a week with all four of my daughters, while Darling Daughter Number 3 defends her world title.
The Perfect Jennifer and Niece

The Perfect Jennifer Teaches Tree-Climbing

So, basically, everything good you could imagine happening to anyone has happened to me.

Instead of savoring how awesome my life is, I spend most of my time focused on how much more I want to do on these games, teaching statistics, teaching judo, writing conference papers, reports and journal articles. Not that all of those things aren't useful and important, but it occurred to me today that while I'm certainly not unhappy, I feel as if I should be tap-dancing happy, and no tap-dancing has been happening.

