To a statistician, this error message makes perfect sense:

`ERROR: Variable BP_Status in list does not match type prescribed for this list.`

but to someone new to both statistics and SAS it may be clear as mud.

Here is your problem.

The procedure you are using, whether PROC UNIVARIATE or PROC MEANS, is designed ONLY for numeric variables. You have tried to use it for a categorical variable.

This error means you’ve used a categorical variable in a list where only numeric variables are expected. For example, bp_status takes the values “High”, “Normal” and “Optimal”.

You cannot compute the mean or standard deviation of words, so the procedure throws an error.

So … what do you do if you need descriptive statistics?

Go back to your PROC UNIVARIATE or PROC MEANS and delete the offending variables. Re-run it with only numeric variables.

For your categorical variables, use PROC FREQ for a frequency distribution and/or PROC GCHART for a chart.
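The same idea holds outside SAS: a categorical variable calls for a frequency table, not a mean. Here is a quick sketch in Python with pandas, using made-up bp_status values:

```python
import pandas as pd

# Hypothetical bp_status values -- a categorical variable
bp_status = pd.Series(["High", "Normal", "Optimal", "Normal", "High", "Normal"])

# A frequency distribution (the PROC FREQ equivalent) works fine for categories...
print(bp_status.value_counts())

# ...while a mean of words makes no sense, which is exactly what the error says.
```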

Problem solved.

You’re welcome.

]]>Driving 90 miles to take The Spoiled One back to school and then turning right around and driving 90 miles home seemed like a waste of time that I did not have. The Invisible Developer pointed out that he also had work to do, on the spear fishing part of the game, and that he had picked her up on Friday.

So … away we went, and since she recently got her learner’s permit, The Spoiled One drove on the freeway for the first time. This was interesting in itself, since the 101 regularly makes the list of 10 most congested freeways in America.

Not only did she get nearly two hours of practice in driving, but I also got filled in on all of the latest news on her soccer team, college fairs, the campuses she was interested in visiting and life in general. If your child is 16 and still talks to you in a civil tone for two hours straight, count yourself among a lucky minority of parents.

Having raised four daughters, I know whereof I speak.

When we got to the school, she immediately began complaining (she’s not called The Spoiled One for nothing). According to her, she is living in “hell”. (See picture below for what hell looks like. It is surprisingly more scenic than I had imagined.)

What is so infernal about her school, I asked. They make her study. Even on Sundays. There is a study hall from 7 to 9 pm and she has to walk across the yard to get to the building. Yes, like prison.

Just as she was telling me this, I saw something in front of her dorm. It was a deer! I said we should go take pictures of it and she said we’d never be able to get close enough, and besides we were wasting time. She had to get to study hall and put away her clothes and books in her dorm room. Besides, her religion teacher had told the students to stay away from the deer because coyotes track them and students who got too close could get attacked by coyotes. (You would think a nun wouldn’t just go around making shit up, now wouldn’t you? Having spent a good bit of the last twenty-five years in North Dakota, I’m justifiably skeptical of the deer-coyote-mauled prep school student triumvirate.)

Just then, the deer walked through the gate onto the baseball field and I spotted a second one in there. So, we sneaked up on them and took pictures.

That’s when it occurred to me that sometimes the best use of my time is to “waste it”. Really, what better way to spend my time than talking to my daughter and watching deer grazing as the sun sets in the mountains?

But now, I really do need to finish that lecture.

=======

If you want to see what I’m wasting my time on the rest of the time, check out 7 Generation Games

]]>“And some of you, your code won’t run and you’ll swear you did exactly what was shown in the examples. But, of course, all of the rest of us will know that is not true.”

This perfectly describes my experience teaching. Take, for example, the problem with the LIBNAME statement.

I tell students,

Do not just copy and paste the LIBNAME from a Word document into your program. Often, this will cause problems because of extra formatting codes in the word processor. You may not see the code as any different from what you typed in, but it may not work. Type your LIBNAME statement into the program.

Apparently, students believe that when I say,

Do not just copy and paste the LIBNAME statement.

either, that what I really mean is,

Sure, go ahead and copy and paste the LIBNAME statement

or, that I did mean it but that is only because I want to force them to do extra typing, or because I am so old that I am against copying and pasting as a new-fangled invention and how the hell would I know if they copied and pasted it anyway.

Then their program does not work.

Very likely, their log looks something like this:

58 LIBNAME mydata “/courses/d1234455550/c_2222/” access=readonly;

59 run ;

NOTE: Library MYDATA does not exist.

**All quotation marks are not created equal.**

What you can see above, if you look very closely, is that the closing quote at the end of the path for the LIBNAME statement does not exactly match the opening quote. Therefore, your reference for your library was not

/courses/d1234455550/c_2222/

but rather, something like

/courses/d1234455550/c_2222/ access=readonly run ;

Which is not what you had in mind, and, as SAS very reasonably told you, that directory does not exist.
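If you doubt that those quotation marks really are different characters, it is easy to check. In Python, for instance, the straight quote SAS expects and the “smart” quotes a word processor inserts have three different code points:

```python
# The straight double quote vs. the curly open and close quotes
for ch in ['"', '\u201c', '\u201d']:
    print(repr(ch), ord(ch))
# Three different code points, so to SAS they are simply different characters.
```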

The simplest fix: delete the quotation marks and TYPE in plain straight quotes.

LIBNAME mydata '/courses/d1234455550/c_2222/' access=readonly;

If that doesn’t work, do what I said to begin with. Erase your whole LIBNAME statement and TYPE it into the program without copying and pasting.

Contrary to appearances, I don’t just make this shit up.

]]>

Here are the steps:

1. Compute the statistic of interest, that is, the mean, proportion, or difference between means

2. Compute the standard error of the statistic

3. Obtain the critical value. Do you have 30 or more in your sample, and are you interested in a 95% confidence interval?

- If yes, multiply the standard error by 1.96
- If no (fewer than 30), look up the t-value for your sample size at .95
- If no (different alpha level), look up the z-value for your alpha level
- If no (different alpha level AND fewer than 30), look up the t-value for your alpha level

4. Multiply the critical value you obtained in step #3 by the standard error you obtained in #2

5. Subtract the result you obtained in step #4 from the statistic you obtained in #1. That is your lower confidence limit.

6. Add the result you obtained in step #4 to the statistic you obtained in #1. That is your upper confidence limit.
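The six steps above can be sketched in a few lines of Python. This is a minimal sketch that always uses the large-sample z critical value; for fewer than 30 observations you would substitute the t-value, as step 3 says.

```python
from statistics import NormalDist, mean, stdev
import math

def conf_interval(data, alpha=0.05):
    m = mean(data)                            # step 1: statistic of interest
    se = stdev(data) / math.sqrt(len(data))   # step 2: standard error of the mean
    z = NormalDist().inv_cdf(1 - alpha / 2)   # step 3: critical value (about 1.96 for 95%)
    margin = z * se                           # step 4: critical value times standard error
    return m - margin, m + margin             # steps 5 and 6: lower and upper limits
```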

**Simplifying it with SAS**

Here is a homework problem:

The following data are collected as part of a study of coffee consumption among undergraduate students. The following reflect cups per day consumed:

3 4 6 8 2 1 0 2

A. Compute the sample mean.

B. Compute the sample standard deviation.

C. Construct a 95% confidence interval

I did this in SAS like so:

data coffee ;
input cups ;
datalines ;
3
4
6
8
2
1
0
2
;
run ;

proc means data = coffee mean std stderr ;
var cups ;
run ;

I get the following results.

Analysis Variable : cups

| Mean | Std Dev | Std Error |
|---|---|---|
| 3.2500000 | 2.6592158 | 0.9401748 |

These results give me A and B. Now, all I need to do to compute C is find the correct critical value. I look it up and find that it is 2.365.

3.25 - 2.365 * .94 = 1.03

3.25 + 2.365 * .94 = 5.47

That is my confidence interval: (1.03, 5.47).
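The same homework problem can be checked in a few lines of Python, using the t critical value of 2.365 from the table, as above:

```python
from statistics import mean, stdev
import math

cups = [3, 4, 6, 8, 2, 1, 0, 2]
m = mean(cups)                     # A: sample mean = 3.25
s = stdev(cups)                    # B: sample standard deviation, about 2.659
se = s / math.sqrt(len(cups))      # standard error, about 0.940
t = 2.365                          # critical t-value for df = 7, 95% confidence
ci = (m - t * se, m + t * se)      # C: confidence interval, about (1.03, 5.47)
```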

=========================

If you want to verify it, or just don’t want to do any computations at all, you can do this

proc means data = coffee clm mean std ;
var cups ;
run ;

You will end up with the same confidence intervals.

Prediction: At least one person who reads this won’t believe me, will run the analysis and be surprised when I am right.

]]>I thought it did make sense in statistical terms. Think of it this way:

The “true score” of the population in this case is the mean of what an infinite number of judges would rate a fighter’s performance. Of course, there is going to be variation around that mean. Some judges may tend to weight take downs a tiny bit more. Judges vary in their definition of a significant strike. Some judges are just going to be clueless or inattentive and give a score that is far from accurate. On the average, though, these balance out and the mean of all of those infinite judges’ scores should be the true score. Let’s say our fighter, Bob, had a true score of 27. The most common score we should see a judge give him is 27, but a 26 or 28 would not be totally unexpected. Given that the standard deviation of fight scores is low, we would be surprised to see him given a score of 25 or 29 and completely floored if he received a 24 or a 30.

Let’s say we have a second fighter, Fred. His true score is 29. The most common score we should see for him is a 29, but again, a 28 or a 30 would not be unexpected because there is variation in our sample of judges.

Here is the point … when fighters are far apart in the true score of their performance, judges should very seldom have a difference of opinion in who won. Even when Bob is scored high, for him, at 28 and Fred is scored his average of 29, Fred still wins. Let’s say the standard deviation of judge’s scores is 1. I believe it is really lower than that and I do know that the winner of a round has to get 10 points, but for ease of computation, just go with me.

For Bob to win, he must be rated at least two standard deviations above his true score (which occurs 2.5% of the time) and Fred must be rated below his true score, which occurs half the time. Since the scores for Bob and Fred are independent probabilities the probability of BOTH of these events happening is .025 x .5 = .0125

The other way for Bob to win is if Fred scores two standard deviations below his true score, which will occur 2.5% of the time AND for Bob to score above his true score. Again, the combined probability is .0125. SO …. only 2.5% of the time (.0125 + .0125) would Bob win. Since judges’ scores are independent, the probability of any one scoring it for him, causing a split decision is .025 + .025 + .025 = 7.5%

(If all **three** judges scored it for Bob, that would be a very, very low probability of .025 * .025 * .025 because, again, the judges’ scores are assumed independent of one another. In only about 0.002% of the cases would this occur. We should probably subtract that and the probability of two of them scoring it for Bob to be exact, but I have to finish grading papers tonight so we’ll just acknowledge that it is not exactly 7.5% and move on.)

Let’s go back to the fight that actually happened. I didn’t see it so I am going to take some people’s word that it was a close fight. They might be lying but let’s assume not.

In this case, Bob, who has a true score of 27, is not fighting Fred, but rather, Ignatz, who has a true score of 27.3 (with three judges, he’d get a 27, 27, 28 score). There is great overlap in Bob and Ignatz’s scores. To outscore Ignatz’s average score, Bob would need a score of 27.4 – well, a z-score of .4 or higher occurs about 35% of the time. Half of the time Ignatz is going to score 27.3 or lower, so the probability of him having an average or below score AND Bob having a score of 27.4 or higher is .5 * .35, or .175. So 17.5% of the time, a judge would give Bob a higher score. Since there are three judges, the probability of ONE of them giving him a higher score would be .175 + .175 + .175 = 52.5%
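Those probabilities are easy to reproduce with the normal distribution. Here is a sketch using the exact normal tail areas rather than the rounded 2.5% and 35% figures, still assuming a judge standard deviation of 1:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: a judge's error in units of standard deviations

# Bob (true score 27) vs. Fred (true score 29), a two-point gap:
# Bob must score 2 SDs above his true score AND Fred must score at or below average.
p_one_judge_for_bob_vs_fred = (1 - nd.cdf(2)) * 0.5

# Bob (27) vs. Ignatz (27.3): Bob needs 27.4 or more (z = 0.4) while
# Ignatz scores at or below his average.
p_one_judge_for_bob_vs_ignatz = (1 - nd.cdf(0.4)) * 0.5

print(p_one_judge_for_bob_vs_fred)    # close to the .0125 above
print(p_one_judge_for_bob_vs_ignatz)  # close to the .175 above
```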

There is also the small probability that it could go unanimous the other way, but that’s not really pertinent to our argument.

The point is simply this … if two fighters’ true scores are close, it is much less likely that you will see a unanimous decision than if their true scores are really far apart. The closer they are, the more that statement holds. So, no, it is not a stupid comment to say that you believe someone warranted a split decision rather than a unanimous decision. It may simply mean that you think the fighters were so close that you were surprised there was not any variance in favor of the only slightly better fighter.

Really, I think most people would find that a reasonable statement.

**Extra credit points:**

Give one reason why the Central Limit Theorem does not apply in the above scenario.

Answer this question:

Does the fact that the distribution of errors is necessarily non-symmetric in Fred’s case (cannot score above 30) negate the application of the Central Limit Theorem?

]]>Malicious obedience is discussed on the englishstackexchange page (who even knew this existed) as

“….when people set their boss up to fail by doing exactly as he or she says even though they know in their hearts that their actions are incorrect or not optimal.”

I would add that it also includes taking zero personal responsibility. For example, let’s say you are the administrative assistant in an organization and you have been running lots of personal errands during work hours. The boss tells you that you need to stay at your desk. However, part of your job is to take the mail to the post office and in today’s mail is a major grant proposal that needs to be postmarked today. You don’t mail it and when the company loses out on a huge amount of money you protest self-righteously that you were told to stay at your desk.

In this case, as very often happens in the work place, you had two conflicting directives – one to stay at your desk and a second to take the mail to the post office.

**Of two conflicting orders, you CHOSE to do the one that caused the company harm**.

I have seen this sort of thing played out over and over. Never once have I seen the individual involved accept any responsibility.

An article in Infoworld gives a great way to discuss this with an employee. I quote it here because I could not have said it better myself:

“I don’t know what you think you’re going to accomplish, but what you are going to accomplish is finding yourself another position – this isn’t acceptable, and I really don’t care how good you are at loopholing policies and guidelines to prove you didn’t violate any of them. What I care about is getting the job done well, and that isn’t what you’re doing. …You’ll need the documentation because employees who act this way are brilliant at denial – both to you and to themselves. And know in advance that the odds aren’t all that good – mostly, you’re putting yourself through this to satisfy yourself that you did the right thing. “

I really don’t know what other people who are maliciously obedient are trying to accomplish. As others have written, I think they are trying to sabotage their bosses because they are unhappy in their positions. As I have said before, if you are that unhappy in a job – quit.

In my youth, I have been that pain in the ass employee who did not work up to their potential due to being unhappy for a variety of reasons – not being paid enough, not having my own office, not having an expense account, working for a boss who was technologically illiterate – you get the idea. The point is, I was at fault – yes, even in the one position where my boss was an idiot (I’ve usually been amazingly lucky when it comes to bosses, but there will always be that one).

I had taken the job at that salary, with those benefits, with that boss (okay, in that case I might say the truth in advertising rule was violated because the boss did not announce during the interview, “I AM AN IDIOT,” but it was also my fault for not asking more questions.)

I can tell you what I was trying to accomplish and it is embarrassing to admit – I was trying to prove I was smarter than my boss. (Even the smart bosses I had – and that was all but one of them – I thought would have been smarter to have paid me more money, given me an expense account, etc. ) I was acting stupid. The time I spent hanging around trying to prove I was smarter than my boss was wasted.

My point, which you may have despaired of me having by now, is that the right thing for me to have done was either do the job to the best of my ability or quit.

Since I have written today about being a dumbass as an employee, in the interest of fair time, I guess I will have to write next about being a dumbass as a boss.

===================

Speaking of bosses and business - check out Spirit Lake: The Game version 4.0

]]>

How does that work? Does it work?

Initially, I was skeptical myself. I thought if students were really serious they would make the sacrifices to take the class in a “regular” setting. Interestingly, I had to take a class on a new system and had the option to sign up for a session held on a local campus or on-line. After looking at my schedule, I chose the on-line option. No one has *ever* accused me of being a slacker – in fact, it may be the only negative thing I’ve not been called. Still, I thought it was possible I might have conflicts those days, whether meeting with clients, employees or investors. The option of taking the course in smaller bits – an hour here or there – was a lot more convenient for me than several hours at a time. To be truthful, too, I didn’t really want to spend hours hanging out with people with whom I didn’t expect I would have that much in common. It wasn’t like a class on statistics that I was really interested in.

So … if we are willing to accept that students who sign up for on-line, limited-term classes might be just as motivated and hard-working as anyone else, do they work? I think the better question is how they work or for what type of students they work.

National University, where I teach, offers courses in a one-course-one-month format. Students are not supposed to take more than one course at a time and, although exceptions can be made, I advise against it. The courses work for those students (and faculty) who can block off a month, and then, during that month DEVOTE A LOT OF TIME TO THE COURSE. Personally, I give two-hour lectures twice a week. If a student cannot attend – and some are in time zones where it is 2 a.m. when I’m teaching – the lectures are recorded and they can listen to them at their leisure. Time so far – 16 hours in the month. Normally, a graduate course I teach will require 50-100 pages of reading per week. Depending on your reading speed, that could take you from one to four hours.

I just asked our Project Manager, Jessica, how long she thought it took the average person to read 75 pages of technical material. She said,

“Whatever it is, I’m sure it’s a lot more than you are thinking!”

Talking it over, we agreed it probably took around 3-5 minutes per page, because even if some pages you get right away, others you have to read two or three times to figure out wait, that -1 next to a capital letter in bold means to take the inverse of a matrix while the single quote next to it means to transpose the matrix. These are things that are not second nature to you when you are just learning a field. Discussing this made me think I want to reduce the required reading in my multivariate statistics course. Let’s say on the low end, then it takes five hours to read the assigned material and review it for a test or just for your own clarity. Now we are up to 20 hours a month + 16 = 36 hours.

I give homework assignments because I am a big believer in distributed practice. We have all had classes we crammed for in college that we can’t remember a damn thing about. Okay, well, I have, anyway. So, I give homework assignments every week, usually several problems like, “What is the cumulative incidence rate given the data in Table 2?” as well as assignments that require you to write a program, run it and interpret the results. I estimate these take students 4-5 hours per week. Let’s go on the low end and say 16 + 20 + 16 = 52 hours.

There is also a final paper, a final exam and two quizzes. The final and quizzes are allotted 5 hours total and are timed so students can’t go over. I think, based simply on page length, programs required and how often they call me, the average student spends 14 hours on the paper. Total hours for the course: 52 plus another 19 = 71 hours in four weeks.

IF students put in that amount of time, they definitely pass the course with a respectable grade and probably learn enough that they will retain a useful amount of it. The kiss of death in a course like this is to put off the work. It is impossible to finish in a week.

My personal bias is that I require students actually DO things with the information they learn. It is not just memorizing formulas and a lot of calculations, because I really do think students will forget that after a few weeks. However, if they have to post a question of serious personal interest and then conduct a study to answer that question, the whole time posting progress and discussions online with their classmates, then I think they WILL retain more of the material.

So, yes, students can learn online and they can learn in a compressed term. It IS harder, though, I think, both for the students and the instructor, and takes a lot of commitment on the part of both, which is why I don’t teach very many courses a year.

]]>

I wanted to test SAS Text Miner and was surprised to find the university did not have a license. No problem – and it really was, astoundingly, no problem – I had SAS On-Demand Enterprise Miner on a virtual machine using VMware.

I had installed it thinking – “This probably won’t work but what the hell.”

Here are all the links on this blog on getting SAS Enterprise Miner to work in all of its different flavors, because I am helpful like that.

Let me emphasize that you had better have the correct version of the Java Runtime Environment (JRE), don’t say I didn’t warn you, and after you have it running, whenever Java asks if you want to update, give it a resounding, “NO!”

So, surprisingly, running Windows 8.1 Pro on a 4GB virtual machine, it pops open no problem.

**Okay, now how to find your data.**

It turns out that even if you have SAS Enterprise Miner, you need to use SAS Studio to upload your data. So, you go to SAS Studio; on the top left-hand side of your screen, you see an UP arrow. Click on that arrow and you will be prompted to upload your data.

Not so fast …. where do you want to put your data?

You can only upload the data if you are a professor but since I am, that should be no problem. There is also a note on my login page that

The directory for all of your courses will be “courses/lalal123/”.

The LIBNAME for your courses should be

LIBNAME mydata "courses/lalal123" access = readonly ;

Except that it isn’t. In fact, my course directory is something like

"courses/lalal123/c_1223"

I found that out only by calling tech support a few months ago. Now, when I look in the left window pane, I see several directories, most of which I created and a few I did not. One of the latter is named my_content. If I click on the my_content directory, I see two subdirectories:

c_1223

and

c_7845

These are the directories for my two courses. How would you have known to look there if you didn’t call SAS tech support or read this blog? Damned if I know, but hey, you did read it, so good for you.

If you leave off the subdirectory … say you actually followed the instructions on your login page and in your start code had this:

LIBNAME mydata "courses/lalal123" access = readonly ;

run ;

Why, it would run without an error, but it would show your directory as empty of data sets, which is kind of true, because they are all in those subdirectories whose names you needed to find out.

So …. to recap

1. Use SAS Studio to upload the data to the directory you are using for your SAS Enterprise Miner course. (Seems illogical but it works, so just go with it.)

2. In the start code for your SAS Enterprise Miner project, have the LIBNAME statement *including the subdirectory* which is under the my_content directory.

Once you know what to do, it runs fine. You can access your data, create a diagram, and drag the desired nodes to it.

I’ve only been using this for testing purposes for use in a future course. For that it works fine. It is convenient to be able to pull it up on a virtual machine on my Mac. It is pretty slow but nowhere near as bad as the original version years ago, which was so slow as to be useless.

If you teach data mining – or want to – and your campus doesn’t have a SAS Enterprise Miner license, which I believe is equivalent to the cost of the provost’s first born and a kidney – you definitely want to check out SAS On-demand. It’s a little quirky, but so far, so good.

]]>

The presumption is this:

There isn’t a number like a t-value or F-value to use to test if an eigenvalue is significant. However, it makes sense that a meaningful eigenvalue should be larger than what you would get if you factor analyzed a set of random data.

Random data is, well, random, so it’s possible you might have gotten a really large or really small eigenvalue the one time you analyzed the random data. So, what you want to do is analyze a set of random data with the same number of variables and the same number of observations a whole bunch of times.

Horn, back in 1965, proposed that an eigenvalue should be higher than the average eigenvalue from the random data sets. Now, people suggest it should be higher than 95% of the random-data eigenvalues (which kind of makes sense to me).

Either way, it seems simple. Here is what I did and it seems right so I am not clear why other macros I see are much more complicated. Please chime in if you see what I’m missing.

- Randomly generate a set of random data with N variables and Y observations.
- Keep the eigenvalues.
- Repeat 500 times.
- Combine the 500 datasets (each will only have 1 record with N variables)
- Find the 95th percentile

%macro para(numvars,numreps) ;
%DO k = 1 %TO 500 ;
data A ;
array nums {&numvars} a1-a&numvars ;
do i = 1 to &numreps ;
do j = 1 to &numvars ;
nums{j} = rand("Normal") ;
if j < 2 then nums{j} = round(100*nums{j}) ;
else nums{j} = round(nums{j}) ;
end ;
drop i j ;
output ;
end ;
proc factor data = a outstat = a&k noprint ;
var a1-a&numvars ;
data a&k ;
set a&k ;
if trim(_type_) = "EIGENVAL" ;
%END ;
%mend ;

%para(30,1000) ;

data all ;
set a1-a500 ;
proc univariate data = all noprint ;
var a1-a30 ;
output out = eigvals pctlpts = 95 pctlpre = pa1-pa30 ;
*** You don't need the transpose but I just find it easier to read ;
proc transpose data = eigvals out = eigsig ;
Title "95th Percentile of Eigenvalues" ;
proc print data = eigsig ;
run ;

It runs fine and I have puzzled and puzzled over why a more complicated program would be necessary. I ran it 500 times with 1,000 observations and 30 variables and it took less than a minute on a remote desktop with 4GB RAM. Yes, I do see the possibility that if you had a much larger data set that you would want to optimize the speed in some way. Other than that, though, I can’t see why it needs to be any more complicated than this.

If you wanted to change the percentile, say, to 50, you would just change the 95 above. If you wanted to change the method from, say, Principal Components Analysis (the default, with communality of 1) to something else, you could just do that in the PROC FACTOR step above.

The above assumes a normal distribution of your variables, but if that was not the case, you could change that in the RAND function above.
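For a cross-check outside SAS, the same procedure can be sketched with NumPy. This is a sketch under the same assumptions (normal random data, eigenvalues of the correlation matrix); the function and parameter names are mine, not from any package:

```python
import numpy as np

def parallel_analysis(n_vars, n_obs, n_reps=100, pctl=95, seed=0):
    """Horn's parallel analysis: percentile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((n_reps, n_vars))
    for k in range(n_reps):
        x = rng.standard_normal((n_obs, n_vars))           # one random data set
        corr = np.corrcoef(x, rowvar=False)                # its correlation matrix
        eigs[k] = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, descending
    return np.percentile(eigs, pctl, axis=0)               # threshold per eigenvalue position
```

You would then retain a factor only if its observed eigenvalue exceeds the corresponding threshold.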

As I said, I am puzzled. Suggestions to my puzzlement welcome.

]]>

In short, the book devotes 90% or more of its pages to explaining the part that the computer does for you, which students will never need to do, and 10% or less to the decisions and interpretation that they will spend their careers doing. One might argue that it is good to understand what is going on “under the hood”, and I’m certainly not going to argue against that, but there is a limit on how much can be taught in any one course, and I would argue very strenuously that there needs to be a much greater emphasis on the part the computer cannot do for you.

There was an interesting article in Wired a few years ago on The End of Theory, saying that we now have immediate access to so much data that we can use “brute force”. We can throw the data into computers and “find patterns where science cannot.”

Um. Maybe not.

Let’s take an example I was working on today, from the California Health Interview Survey. There are 47,000+ subjects but it wouldn’t matter if there were 47 million. There are also over 500 variables measured on these 47,000 people. That’s over 23,000,000 pieces of data. Not huge by some standards, but not exactly chicken feed, either.

Let’s say that I want to do a factor analysis, which I do. By some theory – or whatever that word is we’re using instead of theory – I could just dump all of the variables into an analysis and magically factors would come out, if I did it often enough. So, I did that and came up with results that meant absolutely nothing because the whole premise was so stupid.

Here are a couple of problems

1. The CHIS includes a lot of different types of variables, sample weights, coding for race and ethnic categories, dozens of items on treatment of asthma, diabetes or heart disease, dozens more items on access to health care. Theoretically (or computationally, I guess the new word is), one could run an analysis and we would get factors of asthma treatment, health care access, etc. Well, except I don’t really see that the variables that are not on a numeric scale are going to be anything but noise. What the heck does racesex coded as 1= “Latin male”, 10 = “African American male” etc. ever load on as a factor?

2. LOTS of the variables are coded with -1 as inapplicable. For example, “Have you had an asthma attack in the last 12 months?”

-1 = Inapplicable

1 = Yes

2 = No

While this may not be theory, these two problems do suggest that some knowledge of your data is essential.

Once you get results, how do you interpret them? Using the default minimum eigenvalue of 1 criterion (which if all you learned in school was how to factor analyze a matrix using a pencil and a pad of paper, I guess you’d use the defaults), you get 89 factors. Here is my scree plot.

I also got another 400+ pages of output that I won’t inflict on you.

What exactly is one supposed to do with 500 variables that load on 89 factors? Should we then factor analyze these factors to further reduce the dimensions? It would certainly be possible. All you’d need to do is output the factor scores on the 89 factors, and then do a factor analysis on that.

I would argue, though, and I would be right, that before you do any of that you need to actually put some thought into the selection of your variables and how they are coded.

Also, you should perhaps understand some of the implications of having variables measured on vastly different scales. As this handy page on item analysis points out,

“Bernstein (1988) states that the following simple examination should be mandatory: “When you have identified the salient items (variables) defining factors, compute the means and standard deviations of the items on each factor. If you find large differences in means, e.g., if you find one factor includes mostly items with high response levels, another with intermediate response levels, and a third with low response levels, there is strong reason to attribute the factors to statistical rather than to substantive bases” (p. 398).”

And hold that thought, because our analysis of the 517 or so variables provided a great example …. or would it be using some kind of theory to point that out? Stay tuned.

]]>