Oct 28

SAS Enterprise Miner – Free, on a Mac – Bet You Didn’t See That Coming … but how the hell do you get your data on it?

I wanted to test SAS Text Miner and was surprised to find the university did not have a license. No problem – and it really was, astoundingly, no problem – soon I had SAS On-Demand Enterprise Miner running on a virtual machine using VMware.

I had installed it thinking, “This probably won’t work, but what the hell.”

Here are all the links on this blog on getting SAS Enterprise Miner to work in all of its different flavors, because I am helpful like that.

Let me emphasize that you had better have the correct version of the Java Runtime Environment (JRE) – don’t say I didn’t warn you – and after you have it running, whenever Java asks if you want to update, give it a resounding “NO!”

So, surprisingly, running Windows 8.1 Pro on a 4GB virtual machine, it pops open no problem.

Okay, now how to find your data.

Turns out that even if you have SAS Enterprise Miner, you need to use SAS Studio to upload your data. So, you go to SAS Studio and, at the top left-hand side of your screen, you see an UP arrow. Click on that arrow and you will be prompted to upload your data.

Not so fast …. where do you want to put your data?

You can only upload the data if you are a professor, but since I am, that should be no problem. There is also a note on my login page that

The directory for all of your courses will be “courses/lalal123/”.

The LIBNAME for your courses should be

LIBNAME mydata "courses/lalal123" access = readonly ;

Except that it isn’t. In fact, my course directory is something like

“courses/lalal123/c_1223”

I found that out only by calling tech support a few months ago where someone told me that. Now, when I look on the left window pane I see several directories, most of which I created, and a few I did not. One of the latter is named my_content. If I click on the my_content directory I see two subdirectories

c_1223

and

c_7845

These are the directories for my two courses. How would you have known to look there if you didn’t call SAS tech support or read this blog? Damned if I know, but hey, you did read it, so good for you.

If you leave off the subdirectory … say you actually followed the instructions on your login page and in your start code had this:

LIBNAME mydata "courses/lalal123" access = readonly ;
run ;

Why, it would run without an error, but it would show your directory is empty of data sets – which is kind of true, because they are all in those subdirectories whose names you needed to find out.

So …. to recap

1. Use SAS Studio to upload the data to the directory you are using for your SAS Enterprise Miner course. (Seems illogical but it works, so just go with it.)

2. In the start code for your SAS Enterprise Miner project, have the LIBNAME statement include the subdirectory under the my_content directory.

Once you know what to do, it runs fine. You can access your data, create a diagram, and drag the desired nodes to it.

I’ve only been using this for testing purposes for use in a future course. For that it works fine. It is convenient to be able to pull it up on a virtual machine on my Mac. It is pretty slow but nowhere near as bad as the original version years ago, which was so slow as to be useless.

If you teach data mining – or want to – and your campus doesn’t have a SAS Enterprise Miner license, which I believe is equivalent to the cost of the provost’s first born and a kidney – you definitely want to check out SAS On-demand. It’s a little quirky, but so far, so good.


Oct 23

Am I missing something here? All of the macros I have seen for the parallel analysis criterion for factor analysis look pretty complicated, but, unless I am missing something, it is a simple deal.

The presumption is this:

There isn’t a number like a t-value or F-value to use to test if an eigenvalue is significant. However, it makes sense that the eigenvalue should be larger than if you factor analyzed a set of random data.

Random data is, well, random, so it’s possible you might have gotten a really large or really small eigenvalue the one time you analyzed the random data. So, what you want to do is analyze a set of random data with the same number of variables and the same number of observations a whole bunch of times.

Back in 1965, Horn proposed that an eigenvalue should be retained only if it is higher than the average eigenvalue from the analyses of random data. More recently, people have suggested it should be higher than the 95th percentile of the random-data eigenvalues (which kind of makes sense to me).

Either way, it seems simple. Here is what I did and it seems right so I am not clear why other macros I see are much more complicated. Please chime in if you see what I’m missing.

  1. Randomly generate a set of random data with N variables and Y observations.
  2. Keep the eigenvalues.
  3. Repeat 500 times.
  4. Combine the 500 datasets  (each will only have 1 record with N variables)
  5. Find the 95th percentile

%macro para(numvars,numreps) ;
%DO k = 1 %TO 500 ;
data a ;
array nums {&numvars} a1 - a&numvars ;
do i = 1 to &numreps ;
do j = 1 to &numvars ;
nums{j} = rand("Normal") ;
if j < 2 then nums{j} = round(100*nums{j}) ;
else nums{j} = round(nums{j}) ;
end ;
output ;
end ;
drop i j ;
run ;

proc factor data = a outstat = a&k noprint ;
var a1 - a&numvars ;
run ;

data a&k ;
set a&k ;
if trim(_type_) = "EIGENVAL" ;
run ;

%END ;
%mend ;

%para(30,1000) ;

data all ;
set a1-a500 ;
run ;

proc univariate data = all noprint ;
var a1 - a30 ;
output out = eigvals pctlpts = 95 pctlpre = pa1-pa30 ;
run ;

*** You don't need the transpose but I just find it easier to read ;
proc transpose data = eigvals out = eigsig ;
Title "95th Percentile of Eigenvalues" ;
proc print data = eigsig ;
run ;

It runs fine and I have puzzled and puzzled over why a more complicated program would be necessary. I ran it 500 times with 1,000 observations and 30 variables and it took less than a minute on a remote desktop with 4GB RAM. Yes, I do see the possibility that if you had a much larger data set that you would want to optimize the speed in some way. Other than that, though, I can’t see why it needs to be any more complicated than this.

If you wanted to change the percentile, say, to 50, you would just change the 95 above. If you wanted to change the method from, say, Principal Components Analysis (the default, with communality of 1) to something else, you could just do that in the PROC FACTOR step above.

The above assumes a normal distribution of your variables; if that were not the case, you could change the distribution in the RAND function above.
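For anyone who wants to sanity-check the results outside of SAS, here is a rough sketch of the same procedure in Python with numpy – the function name and defaults are mine, not from any package:

```python
import numpy as np

def parallel_analysis(n_vars=30, n_obs=1000, n_reps=500, pctl=95, seed=0):
    """Eigenvalue thresholds from repeated analyses of random normal data."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((n_reps, n_vars))
    for k in range(n_reps):
        x = rng.standard_normal((n_obs, n_vars))
        corr = np.corrcoef(x, rowvar=False)
        # eigenvalues of the correlation matrix, sorted largest first
        eigs[k] = np.sort(np.linalg.eigvalsh(corr))[::-1]
    # e.g. the 95th percentile of each ordered eigenvalue across replications
    return np.percentile(eigs, pctl, axis=0)
```

Retain a factor only if its observed eigenvalue exceeds the corresponding threshold.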

As I said, I am puzzled. Suggestions to my puzzlement welcome.


Oct 22

I’m about to tear my hair out. I’ve been reading this statistics textbook, which shall remain nameless, ostensibly a book for graduate students in computer science, engineering and similar subjects. The presumption is that at the end of the course students will be able to compute and interpret a factor analysis, MANOVA and other multivariate statistics. The text spends 90% of the space discussing the mathematics of computing the results, 10% discussing the code to get these statistics, and 0% discussing the decisions one makes in the selection of communality estimates, rotation, post hoc tests or anything else.

In short, the book is almost entirely devoted to explaining the part that the computer does for you – the part students will never need to do – and 10% or less to the decisions and interpretation they will spend their careers doing. One might argue that it is good to understand what is going on “under the hood,” and I’m certainly not going to argue against that, but there is a limit on how much can be taught in any one course, and I would argue very strenuously that there needs to be a much greater emphasis on the part the computer cannot do for you.

There was an interesting article in Wired a few years ago on The End of Theory, saying that we now have immediate access to so much data that we can use “brute force”. We can throw the data into computers and “find patterns where science cannot.”

Um. Maybe not.

Let’s take an example I was working on today, from the California Health Interview Survey. There are 47,000+ subjects but it wouldn’t matter if there were 47 million. There are also over 500 variables measured on these 47,000 people. That’s over 23,000,000 pieces of data. Not huge by some standards, but not exactly chicken feed, either.

Let’s say that I want to do a factor analysis, which I do. By some theory – or whatever that word is we’re using instead of theory – I could just dump all of  the variables into an analysis and magically factors would come out, if I did it often enough. So, I did that and came up with results that meant absolutely nothing because the whole premise was so stupid.

Here are a couple of problems

1. The CHIS includes a lot of different types of variables: sample weights, coding for race and ethnic categories, dozens of items on treatment of asthma, diabetes or heart disease, dozens more items on access to health care. Theoretically (or computationally, I guess the new word is), one could run an analysis and we would get factors of asthma treatment, health care access, etc. Well, except I don’t really see that the variables that are not on a numeric scale are going to be anything but noise. What factor is a variable like racesex – coded 1 = “Latin male”, 10 = “African American male”, etc. – ever going to load on?

2. LOTS of the variables are coded with -1 as inapplicable. For example, “Have you had an asthma attack in the last 12 months?”
-1 = Inapplicable
1 = Yes
2 = No

While this may not be theory, these two problems do suggest that some knowledge of your data is essential.
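If you were going to analyze data coded this way, step one is recoding those -1s to missing before they get treated as real quantities. A minimal sketch with pandas – the variable names here are made up for illustration, not actual CHIS names:

```python
import numpy as np
import pandas as pd

# hypothetical CHIS-style coding: -1 = inapplicable, 1 = yes, 2 = no
df = pd.DataFrame({
    "asthma_attack": [1, 2, -1, 1, -1],
    "er_visits":     [3, 0, -1, 1, -1],
})

# treat "inapplicable" as missing rather than as a numeric value of -1
clean = df.replace(-1, np.nan)
```

Left as -1, “inapplicable” would drag down every mean and correlation it touches.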

Once you get results, how do you interpret them? Using the default minimum eigenvalue of 1 criterion (which if all you learned in school was how to factor analyze a matrix using a pencil and a pad of paper, I guess you’d use the defaults), you get 89 factors. Here is my scree plot.

[Scree plot showing 89 eigenvalues]

I also got another 400+ pages of output that I won’t inflict on you.

What exactly is one supposed to do with 500 variables that load on 89 factors? Should we then factor analyze these factors to further reduce the dimensions? It would certainly be possible. All you’d need to do is output the factor scores on the 89 factors, and then do a factor analysis on that.

I would argue, though – and I would be right – that before you do any of that, you need to actually put some thought into the selection of your variables and how they are coded.

Also, you should perhaps understand some of the implications of having variables measured on vastly different scales. As this handy page on item analysis points out,

“Bernstein (1988) states that the following simple examination should be mandatory: “When you have identified the salient items (variables) defining factors, compute the means and standard deviations of the items on each factor.  If you find large differences in means, e.g., if you find one factor includes mostly items with high response levels, another with intermediate response levels, and a third with low response levels, there is strong reason to attribute the factors to statistical rather than to substantive bases” (p. 398).”

And hold that thought, because our analysis of the 517 or so variables provided a great example …. or would it be using some kind of theory to point that out? Stay tuned.
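Bernstein’s examination is simple enough to script once you know which items load on which factor. A sketch – the item names, factor assignments and means below are all invented for illustration:

```python
import pandas as pd

# hypothetical salient items, the factor each loads on, and each item's mean
items = pd.DataFrame({
    "item":   ["q1", "q2", "q3", "q4", "q5", "q6"],
    "factor": [1, 1, 2, 2, 3, 3],
    "mean":   [4.6, 4.4, 2.9, 3.1, 1.2, 1.4],
})

# average item mean per factor; large gaps across factors suggest
# statistical rather than substantive factors, per Bernstein
factor_means = items.groupby("factor")["mean"].mean()
```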


Oct 19

I’ve written here before about visual literacy and Cook’s D is just my latest example.

Most people intuitively understand that any sample can have outliers – say, an 80-year-old man who is the father of a six-year-old child, or the new college graduate who is making $150,000 a year. We understand that those people may throw off our predictions, and perhaps we want to exclude those outliers from our models.

What if you have multiple variables, though? It’s possible that each individual value may not be very extreme but the combination is. Take this data set below that I totally made up, with mom’s age, dad’s age and child’s age.

Mom Dad Child

30 32 6
20 27 5
31 33 8
29 28 6
40 42 20
44 44 21
37 39 14
25 29 7
30 32 6
20 27 5
31 33 8
29 28 6
39 42 19
43 44 20
37 39 13
25 28 6
40 29 15

Look at our last record. The mother has an age of 40, the father an age of 29 and the child an age of 15. None of these individually are extreme scores. These aren’t even the minimum or maximum for any of the variables. There are mothers older (and younger) than 40, fathers younger (and older) than 29; 15 isn’t that extreme an age in our sample of children.  The COMBINATION, however, of a 40-year-old mother, 29-year-old father and 15-year-old child is an extreme case.

Enter Cook’s distance, a.k.a. Cook’s D, which measures the effect of deleting an observation. The larger the distance, the more influential that point is on the results. Take a look at my graph below.

[Cook’s D plot showing one very high point]

It is pretty clear that the last observation is very influential. Now, you might have guessed that if you had thought to look at the data. However, if you had 11 variables and 100 observations, it wouldn’t be so easy to see by looking at the data, and you might be really happy you had Cook around to help you out.

Let’s look at the data re-analyzed without that last observation. Here is what our plot of Cook’s D looks like now.

[Plot with 2 points moderately high but not vastly different]

This gives you a very different picture. While a couple of points are higher than the others, it is certainly not the extreme case we saw before.

In fact, dropping out that one point changed our explained variance from 89%  to 93%.

So … knowing how to use Cook’s D for regression diagnostics is our latest lesson in visual literacy.

You’re welcome.

Oct 19

Since it is the weekend, I decided to blog about weekend stuff. Look for more statistics tomorrow. For most of the past quarter-century, I have been roped into being a volunteer for one organization or another. Here is a very, very partial list:

I’ve been everything from Chair of the Board to chaperone. I’ve spoken at more conferences than I can count, certainly giving a few hundred presentations. I’ve raised hundreds of thousands of dollars.

Given that experience,  I’ve concluded that volunteers fall into three broad categories. Recognizing that fact is probably key to having a successful non-profit organization, because for most non-profits, volunteers are essential.

Category 1: People who are very excited to be a volunteer. These individuals derive a lot of their self-esteem from their position in the organization. Their enthusiasm may stem from a genuine passion for the mission of the organization, be it youth sports, individuals with disabilities or health care. Alternatively, the volunteer position may be an exciting departure from a boring day job, an opportunity to use more of their talents. Generally, it is both reasons. They are willing to do a lot of work. They are also willing to put up with authoritarian and unprofessional interactions with the organizations, because they are so enthusiastic and often, they are accustomed to being bossed around and devalued on their “day jobs”. There is a limit to their tolerance, though.

Category 2: People who are not at all excited to volunteer but have skills and talents your organization needs. These individuals are there out of obligation – they have a child on the team, a friend on the staff, or they really care deeply about the mission of the organization. These people do valuable work for the organization like raising money, providing free legal or accounting services. They have very little tolerance for authoritarian and unprofessional interactions with the organizations, because they would rather be somewhere else in the first place and they are accustomed to being the boss or highly valued on their “day jobs”.

Category 3: People who show up and don’t do any real work.

It seems pretty clear to me that organizations need both of the first two categories, and the more the better.

Not everyone sees it that way, obviously. Let me give you just a few examples, and again, this is a very partial list. I have witnessed


Do’s and don’ts

Well, first of all, don’t do any of those things above.

Second, say “Thank you.” A lot.

Think of these individuals just like donors who are giving you thousands of dollars, because they are. It would cost you a lot of money to replace their services. Treat them as you would professionals providing services for you. Would you ask your accountant to take a drug test? Would you tell your attorney to be sure he dressed professionally when he represents you in court? Don’t assume just because someone is working for free that he is a degenerate or an idiot.

It’s funny that most organizations seem to think what volunteers want is an engraved plaque or a certificate printed out from PowerPoint. Really, a little common courtesy goes a long way.

Oct 15

Ever wonder why, with goodness of fit tests, non-significance is what you want?

Why is it that sometimes a significant p-value means your hypothesis is correct – there is a relationship between the price of honey and the number of bees – and in other cases, significance means your model is rejected? Well, if you are reading this blog, it’s possible you already know all of this, but I can guarantee you that students who start off in statistics learning that a significant p-value is a good thing are often confused to learn that with model fit statistics, non-significance is (usually) what you want.

You are hoping to find non-significance when you are looking at model fit statistics because the hypothesis you are testing is that the full model – one that has as many parameters as there are observations – is different from the model you have postulated.

To understand model fit statistics, you should think about three models.

The null model contains only one parameter, the mean. Think of it this way: if all of your explanatory variables are useless, then your best prediction for the dependent variable is the mean. If you knew nothing about the next woman likely to walk into the room, your best prediction of her height would be 5’4″, if you live in the U.S., because that is the average height.

The full model has one parameter per observation. With this model, you can predict the data perfectly. Wouldn’t that be great? No, it would be useless. Using the full model is a bad idea because it is non-replicable.

Here is an example data set where I predict IQ using gender, number of M & M’s in your pocket and hair color.

EXAMPLE

Gender M&Ms Hair IQ
Male 10 redhead 100
Female 0 blonde 70
Male 10 blonde 60
Female 30 brunette 100

IQ = 50 + 1 x (M&Ms) + 20 x (Female) + 40 x (Redhead)
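A quick check – my own toy script, using 0/1 indicators for female and redhead – that this full model fits the four observations perfectly:

```python
# each row: (m&ms, female, redhead, observed IQ) from the toy table above
rows = [
    (10, 0, 1, 100),  # male, 10 M&Ms, redhead
    (0,  1, 0,  70),  # female, 0 M&Ms, blonde
    (10, 0, 0,  60),  # male, 10 M&Ms, blonde
    (30, 1, 0, 100),  # female, 30 M&Ms, brunette
]
preds = [50 + 1*mms + 20*female + 40*red for mms, female, red, _ in rows]
print(preds)  # [100, 70, 60, 100] -- matches the observed IQs exactly
```

Four parameters, four observations, zero error – and zero chance it generalizes.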

Is that replicable at all? If you selected another random sample of 4 people from the population do you think you could predict their scores perfectly using this equation?

No.

Also, I do not know why that woman has so many M & M’s in her pocket.

[The M & M store]

In between these two useless models is your model. The hypothesis you are testing is that your model, whatever it is, is non-significantly different from the full model. If you throw out one of your parameters, your new model won’t be as good as the full model – that one extra parameter may explain one case – but the question is, does the model without that parameter differ significantly from the full model? If it doesn’t, then we can conclude that the parameters we excluded from the model were unimportant.

We have a more parsimonious model and we are happy.

But WHY do more parsimonious models make us happy? Well, because that is kind of the whole point of model building. If you need a parameter for each person, why not just examine each person individually?  The whole point of a model is dimension reduction, that is, reducing the number of dimensions you need to measure while still adequately explaining the data.

If, instead of needing 2,000 parameters to explain the data gathered from 2,000 people you can do just as well with 47 parameters, then you would have made some strides forward in understanding how the world works.

Coincidentally, I discussed dimension reduction on this blog almost exactly a year ago, in a post with the title “What’s all that factor analysis crap mean, anyway?”

(Prediction: At least one person who follows this link will be surprised at the title of the post.)

Oct 12

PHP Rambling

October 12, 2014 | 5 Comments

I was reading a book on PHP just to get some ideas for a re-design I’m doing for a client, when I thought of this.

Although I think of PHP as something you use to put stuff into a database and take it out – data entry of client records, reports of total sales – it is possible to use it without any SQL intervention.

You can enter data in a form, call another file, and use the data entered to determine what you show in that file. The basic examples used in teaching are trivial – a page asks what’s your name, and the next page that pops up says “Howdy” plus your name.

We make games that teach math using Unity 3D, Javascript, PHP, MySQL and C# .

Generally, when a player answers a question, the answer is written to our database and the next step depends on whether the answer was correct. Correct, go on with the game. Incorrect, pick one of these choices to study the concept you missed. Because schools use our games, they want this database setup so they can get reports by student, classroom, grade or school.

What about those individual users, though? They can tell where they (or their child) are just by looking at the level in the game. So, I could drop the PHP altogether in that case and just use Javascript to serve the next step.

I could also use PHP.

In the case where we drop out the database interface altogether, is there a benefit to keeping PHP? I couldn’t think of one.

Still thinking about this question.

Oct 11

I’ve been looking high and low for a supplemental text for a course on multivariate statistics and I found this one –

The Multivariate Social Scientist, by Graeme Hutcheson & Nick Sofroniou

They are big proponents of generalized linear models, in fact, the subtitle is “Introductory statistics using generalized linear models”, so if you don’t like generalized models, you won’t like this book.

I liked this book a lot. Because this is a random blog, here is day one of my random notes

A generalized linear model has three components: a random component (the probability distribution assumed for the dependent variable), a systematic component (the linear predictor) and a link function connecting the two.

The systematic component takes the form

η = α + β1x1 + β2x2 + … + βnxn

They use η to designate the predicted variable instead of y-hat. I know you were dying to know that.

Obviously, since that IS a multiple regression equation (which could also be used for ANOVA), when you have linear regression the link function is simply the identity. With logistic regression, it is the logit function, which maps the log odds of the random component onto the systematic one.

The reason I think this is such a good book for students taking a multivariate statistics course is that it relates to what they should know.  They certainly should be familiar with multiple regression and logistic regression, and understand that the log of the odds is used in the latter.

The book also discusses the log link used in loglinear analyses, which I don’t necessarily assume every student will have used. I don’t say that as a criticism, merely an observation.


Oct 9

How to be amazing

October 9, 2014 | 1 Comment

I find it weird when I make people nervous. I’ve had people shake and stutter so much that I thought they had some sort of disability, only to find out later that it was a reaction to meeting me!

My family and friends say I’m intimidating, which I also find bizarre. I am, literally, a little old grandma.

[Me at lunch]

I said this to a friend who responded,

Are you kidding? You’re just amazing! Do you think we forget that you were the FIRST American to win a world judo championship, have a Ph.D. , published a book last year, started a company that made a million dollars in less than two years, then started another company to make games, came out with your first game this year,  published scientific articles. Oh, and you raised four successful kids, one of whom is also a world champion and making movies.

He went on to an embarrassing degree about a lot more stuff. I’m not one of those fake humble-brag people, like the super-models who claim to be “so fat”.

It’s just that … it’s not like that when it’s happening. Even to me, if I stop and pile it all up like that, it sounds impressive, but day to day, it’s not really like that at all.

Whether it’s winning a world championship, earning a Ph.D., building a company or making computer games, it’s not amazing when you’re in the middle of it.

For example, I spent the last week fixing up our next game, Fish Lake. I improved the graphics, added gravity so that when a character walks off a hill, it falls down instead of walking around on the air. I added artificial intelligence to make the animals run around at random instead of just stand there. I modified the css  so that the input boxes for the math problems stand out better. All of those are minor fixes in the grand scheme of things. The purpose of the game I was working on is to teach fractions, which are a super important part of understanding math, but if it’s not a fun game, kids aren’t going to play it.

Tomorrow, my day starts with reviewing the quizzes one person wrote, followed by reviewing a PowerPoint and video clip someone else wrote to teach about reading graphs and then testing some software for podcasts.

Hopefully, with enough days like this piled on top of one another, we’ll have an amazing game.

It’s just like my judo competition days, when I trained three times a day, every day. Looking back, winning a gold medal and being best in the world was amazing.

In the middle of it, though, it’s just getting up and working hard all day. Repeat a few thousand times.

Oct 8

You might have gotten the misimpression from my previous post – in which I said I don’t think students need to learn all that much matrix algebra – that I am a slacker as far as expecting students to come to courses with some prior knowledge. That’s not exactly the case. In fact, there are some things I just assume students coming into a multivariate statistics course should know, and even though some textbooks begin with these, all I can say is that if you have had three statistics courses and you still don’t know what a covariance is, something has gone awry in your education.

Diving into MANOVA was really what I wanted to blog about next, so maybe I will actually get to that in the context of analyzing missing data, but having failed already at my attempt to leave my desk before midnight, that will have to wait until next time.

Having found no significant differences between the missing and non-missing data, as I’d expected, I went on to do a couple more analyses where I was quite surprised not to find differences, but those will also have to wait for next time. I’m really only mentioning it here so I don’t forget. Wouldn’t you think that there would be differences in hospital length of stay and age by race and region? Well, I would, but I was wrong.

On a random note, I have to say, I really do love this remote desktop setup for teaching. It solves the problem of whether students have Windows or Macs, and of having to get the needed software installed. All the way around, I love it.

