Lately, I’ve seen a lot of examples of this …

Malicious obedience is discussed on the English Stack Exchange page (who even knew that existed?) as

“….when people set their boss up to fail by doing exactly as he or she says even though they know in their hearts that their actions are incorrect or not optimal.”

I would add that it also  includes taking zero personal responsibility. For example, let’s say you are the administrative assistant in an organization and you have been running lots of personal errands during work hours. The boss tells you that you need to stay at your desk. However, part of your job is to take the mail to the post office and in today’s mail is a major grant proposal that needs to be postmarked today. You don’t mail it and when the company loses out on a huge amount of money you protest self-righteously that you were told to stay at your desk.

In this case, as very often happens in the workplace, you had two conflicting directives – one to stay at your desk and a second to take the mail to the post office.

Of two conflicting orders, you CHOSE to do the one that caused the company harm.

I have seen this sort of thing played out over and over. Never once have I seen the individual involved accept any responsibility.

An article in InfoWorld gives a great way to discuss this with an employee. I quote it here because I could not have said it better myself:

“I don’t know what you think you’re going to accomplish, but what you are going to accomplish is finding yourself another position – this isn’t acceptable, and I really don’t care how good you are at loopholing policies and guidelines to prove you didn’t violate any of them. What I care about is getting the job done well, and that isn’t what you’re doing. …You’ll need the documentation because employees who act this way are brilliant at denial – both to you and to themselves. And know in advance that the odds aren’t all that good – mostly, you’re putting yourself through this to satisfy yourself that you did the right thing. “

I really don’t know what other people who are maliciously obedient are trying to accomplish. As others have written, I think they are trying to sabotage their bosses because they are unhappy in their positions. As I have said before, if you are that unhappy in a job – quit.

In my youth, I was that pain-in-the-ass employee who did not work up to my potential due to being unhappy for a variety of reasons – not being paid enough, not having my own office, not having an expense account, working for a boss who was technologically illiterate – you get the idea. The point is, I was at fault – yes, even in the one position where my boss was an idiot (I’ve usually been amazingly lucky when it comes to bosses, but there will always be that one).

I had taken the job at that salary, with those benefits, with that boss (okay, in that case I might say the truth in advertising rule was violated because the boss did not announce during the interview, “I AM AN IDIOT,” but it was also my fault for not asking more questions.)

I can tell you what I was trying to accomplish and it is embarrassing to admit – I was trying to prove I was smarter than my boss.  (Even the smart bosses I had – and that was all but one of them – I thought would have been smarter to have paid me more money, given me an expense account, etc. )  I was acting stupid. The time I spent hanging around trying to prove I was smarter than my boss was wasted.

My point, which you may have despaired of me having by now, is that the right thing for me to have done was either do the job to the best of my ability or quit.

Since I have written today about being a dumbass as an employee, in the interest of fair time, I guess I will have to write next about being a dumbass as a boss.

===================

Speaking of bosses and business - check out Spirit Lake: The Game version 4.0 

It makes math awesome.

 

Last year, I went from teaching in classrooms in a pretty building with a library on the ground floor to teaching on-line. I also went from the semester system to teaching the same content in four weeks. This has led curious friends of mine, used to teaching in the traditional format, to ask,

How does that work? Does it work?

Initially, I was skeptical myself. I thought if students were really serious they would make the sacrifices to take the class in a “regular” setting. Interestingly, I had to take a class on a new system and had the option to sign up for a session held on a local campus or on-line. After looking at my schedule, I chose the on-line option. No one has ever accused me of being a slacker – in fact, it may be the only negative thing I’ve not been called. Still, I thought it was possible I might have conflicts those days, whether meeting with clients, employees or investors. The option of taking the course in smaller bits – an hour here or there – was a lot more convenient for me than several hours at a time. To be truthful, too, I didn’t really want to spend hours hanging out with people with whom I didn’t expect I would have that much in common. It wasn’t like a class on statistics that I was really interested in.

So … if we are willing to accept that students who sign up for on-line, limited-term classes might be just as motivated and hard-working as anyone else, do they work? I think the better question is how they work or for what type of students they work.

National University, where I teach, offers courses in a one-course, one-month format. Students are not supposed to take more than one course at a time and, although exceptions can be made, I advise against it. The courses work for those students (and faculty) who can block off a month and then, during that month, DEVOTE A LOT OF TIME TO THE COURSE. Personally, I give two-hour lectures twice a week. If a student cannot attend – and some are in time zones where it is 2 a.m. when I’m teaching – the lectures are recorded and they can listen to them at their leisure. Time so far – 16 hours in the month. Normally, a graduate course I teach will require 50-100 pages of reading per week. Depending on your reading speed, that could take you from one to four hours.

I just asked our Project Manager, Jessica, how long she thought it took the average person to read 75 pages of technical material, and she said,

“Whatever it is, I’m sure it’s a lot more than you are thinking!”

Talking it over, we agreed it probably took around 3-5 minutes per page, because even if you get some pages right away, there are others you have to read two or three times to figure out that – wait – the -1 next to a capital letter in bold means to take the inverse of a matrix, while the single quote next to it means to transpose the matrix. These are things that are not second nature to you when you are just learning a field. Discussing this made me think I want to reduce the required reading in my multivariate statistics course. Let’s say, on the low end, it takes five hours a week to read the assigned material and review it for a test or just for your own clarity. Now we are up to 20 hours a month + 16 = 36 hours.

I give homework assignments because I am a big believer in distributed practice. We have all had classes we crammed for in college that we can’t remember a damn thing about. Okay, well, I have, anyway. So, I give homework assignments every week, usually several problems like, “What is the cumulative incidence rate given the data in Table 2?” as well as assignments that require you to write a program, run it and interpret the results. I estimate these take students 4-5 hours per week. Let’s go on the low end and say 16 + 20 + 16 = 52 hours.

There is also a final paper, a final exam and two quizzes. The final and quizzes are allotted 5 hours total and are timed so students can’t go over. I think, based simply on page length, programs required and how often they call me, the average student spends 14 hours on the paper. Total hours for the course: 52 plus another 19 = 71 hours in four weeks.

IF students put in that amount of time, they definitely pass the course with a respectable grade and probably learn enough that they will retain a useful amount of it. The kiss of death in a course like this is to put off the work. It is impossible to finish in a week.

My personal bias is that I require students to actually DO things with the information they learn. It is not just memorizing formulas and doing a lot of calculations, because I really do think students will forget that after a few weeks. However, if they have to post a question that is of serious personal interest and then conduct a study to answer that question, the whole time posting progress and discussions online with their classmates, then I think they WILL retain more of the material.

So, yes, students can learn online and they can learn in a compressed term. It IS harder, though, I think, both for the students and the instructor, and takes a lot of commitment on the part of both, which is why I don’t teach very many courses a year.

SAS Enterprise Miner – Free, on a Mac – Bet You Didn’t See That Coming … but how the hell do you get your data on it?

I wanted to test SAS Text Miner and was surprised to find the university did not have a license. No problem – and it really was, astoundingly, no problem – I got SAS On-Demand Enterprise Miner running on a virtual machine using VMware.

I had installed it thinking – “This probably won’t work but what the hell.”

Here are all the links on this blog on getting SAS Enterprise Miner to work in all of its different flavors, because I am helpful like that.

Let me emphasize that you had just better have the correct version of the Java Runtime Environment (JRE) – don’t say I didn’t warn you – and after you have it running, whenever Java asks if you want to update, give it a resounding “NO!”

So, surprisingly, running Windows 8.1 Pro on a 4GB virtual machine, it pops open no problem.

Okay, now how to find your data.

Turns out that even if you have SAS Enterprise Miner, you need to use SAS Studio to upload your data. So, you go to SAS Studio; on the top left-hand side of your screen, you see an UP arrow. Click on that arrow and you will be prompted to upload your data.

Not so fast …. where do you want to put your data?

You can upload the data only if you are a professor but, since I am, that should be no problem. There is also a note on my login page that

The directory for all of your courses will be “courses/lalal123/”.

The LIBNAME for your courses should be

LIBNAME mydata "courses/lalal123" access = readonly ;

Except that it isn’t. In fact, my course directory is something like

"courses/lalal123/c_1223"

I found that out only by calling tech support a few months ago. Now, when I look on the left window pane, I see several directories, most of which I created, and a few I did not. One of the latter is named my_content. If I click on the my_content directory, I see two subdirectories

c_1223

and

c_7845

These are the directories for my two courses. How would you have known to look there if you didn’t call SAS tech support or read this blog? Damned if I know, but hey, you did read it, so good for you.

If you leave off the subdirectory … say you actually followed the instructions on your login page and in your start code had this:

LIBNAME mydata "courses/lalal123" access = readonly ;
run ;

Why, it would run without an error, but it would show your directory is empty of data sets, which is kind of true, because they are all in those subdirectories whose names you needed to find out.

So …. to recap

1. Use SAS Studio to upload the data to the directory you are using for your SAS Enterprise Miner course. (Seems illogical but it works, so just go with it.)

2. In the start code for your SAS Enterprise Miner project, have the LIBNAME statement include the subdirectory that sits under the my_content directory, as in the sketch below.
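
Here is what that start code might look like, using the example directory names from my account above – if your directory names differ, substitute your own:

/* Start code sketch: the LIBNAME points at the course subdirectory under my_content */
LIBNAME mydata "courses/lalal123/c_1223" access = readonly ;
run ;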

Once you know what to do, it runs fine. You can access your data, create a diagram, and drag the desired nodes onto it.

I’ve only been using this for testing purposes for use in a future course. For that it works fine. It is convenient to be able to pull it up on a virtual machine on my Mac. It is pretty slow but nowhere near as bad as the original version years ago, which was so slow as to be useless.

If you teach data mining – or want to – and your campus doesn’t have a SAS Enterprise Miner license – which I believe costs the equivalent of the provost’s first-born and a kidney – you definitely want to check out SAS On-Demand. It’s a little quirky, but so far, so good.

 

 

Am I missing something here? All of the macros I have seen for the parallel analysis criterion for factor analysis look pretty complicated, but, unless I am missing something, it is a simple deal.

The presumption is this:

There isn’t a number like a t-value or F-value to use to test if an eigenvalue is significant. However, it makes sense that the eigenvalue should be larger than if you factor analyzed a set of random data.

Random data is, well, random, so it’s possible you might have gotten a really large or really small eigenvalue the one time you analyzed the random data. So, what you want to do is analyze a set of random data with the same number of variables and the same number of observations a whole bunch of times.

Horn, back in 1965, proposed that an eigenvalue should be higher than the average eigenvalue you get when you analyze sets of random data. Now, people are suggesting it should be higher than the 95th percentile of the eigenvalues you get from random data (which kind of makes sense to me).

Either way, it seems simple. Here is what I did and it seems right so I am not clear why other macros I see are much more complicated. Please chime in if you see what I’m missing.

  1. Generate a set of random data with N variables and Y observations.
  2. Keep the eigenvalues.
  3. Repeat 500 times.
  4. Combine the 500 data sets (each will have only 1 record containing the N eigenvalues).
  5. Find the 95th percentile of each eigenvalue.

%macro para(numvars,numreps) ;
%* numvars = number of variables, numreps = number of observations in each random data set ;
%DO k = 1 %TO 500 ;

* Generate one random data set ;
data A ;
array nums {&numvars} a1 - a&numvars ;
do i = 1 to &numreps ;
do j = 1 to &numvars ;
nums{j} = rand("Normal") ;
if j < 2 then nums{j} = round(100*nums{j}) ;
else nums{j} = round(nums{j}) ;
end ;
drop i j ;
output ;
end ;
run ;

* Factor analyze it and keep only the record of eigenvalues from the OUTSTAT data set ;
proc factor data = a outstat = a&k noprint ;
var a1 - a&numvars ;
run ;

data a&k ;
set a&k ;
if trim(_type_) = "EIGENVAL" ;
run ;

%END ;
%mend ;

%para(30,1000) ;

* Combine the 500 one-record data sets of eigenvalues ;
data all ;
set a1-a500 ;
run ;

* Find the 95th percentile of each eigenvalue across the 500 replications ;
proc univariate data = all noprint ;
var a1 - a30 ;
output out = eigvals pctlpts = 95 pctlpre = pa1 - pa30 ;
run ;

*** You don’t need the transpose but I just find it easier to read ;
proc transpose data = eigvals out = eigsig ;
run ;

Title "95th Percentile of Eigenvalues" ;
proc print data = eigsig ;
run ;

It runs fine and I have puzzled and puzzled over why a more complicated program would be necessary. I ran it 500 times with 1,000 observations and 30 variables and it took less than a minute on a remote desktop with 4GB RAM. Yes, I do see the possibility that if you had a much larger data set that you would want to optimize the speed in some way. Other than that, though, I can’t see why it needs to be any more complicated than this.

If you wanted to change the percentile, say, to 50, you would just change the 95 above. If you wanted to change the method from, say, Principal Components Analysis (the default, with communality of 1) to something else, you could just do that in the PROC FACTOR step above.

The above assumes a normal distribution of your variables, but if that were not the case, you could change the distribution in the RAND function above.
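
For what it’s worth, here are sketches of those three tweaks – these are fragments to drop into the corresponding steps of the macro above, not a complete program:

* 50th percentile instead of the 95th, in the PROC UNIVARIATE step ;
output out = eigvals pctlpts = 50 pctlpre = pa1 - pa30 ;

* Maximum likelihood instead of the default principal components, in the PROC FACTOR step ;
proc factor data = a outstat = a&k method = ml noprint ;

* A uniform instead of a normal distribution, in the data step ;
nums{j} = rand("Uniform") ;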

As I said, I am puzzled. Suggestions to my puzzlement welcome.

 

I’m about to tear my hair out. I’ve been reading this statistics textbook, which shall remain nameless, ostensibly a book for graduate students in computer science, engineering and similar subjects. The presumption is that at the end of the course students will be able to compute and interpret a factor analysis, MANOVA and other multivariate statistics. The text spends 90% of the space discussing the mathematics of computing the results, 10% discussing the code to get these statistics and 0% discussing the decisions one makes in the selection of communality estimates, rotation, post hoc tests or anything else.

In short, the book devotes nearly all of its space to explaining the part that the computer does for you, which students will never need to do, and 10% or less to the decisions and interpretation that they will spend their careers doing. One might argue that it is good to understand what is going on “under the hood,” and I’m certainly not going to argue against that, but there is a limit on how much can be taught in any one course, and I would argue very strenuously that there needs to be a much greater emphasis on the part the computer cannot do for you.

There was an interesting article in Wired a few years ago on The End of Theory, saying that we now have immediate access to so much data that we can use “brute force”. We can throw the data into computers and “find patterns where science cannot.”

Um. Maybe not.

Let’s take an example I was working on today, from the California Health Interview Survey. There are 47,000+ subjects but it wouldn’t matter if there were 47 million. There are also over 500 variables measured on these 47,000 people. That’s over 23,000,000 pieces of data. Not huge by some standards, but not exactly chicken feed, either.

Let’s say that I want to do a factor analysis, which I do. By some theory – or whatever that word is we’re using instead of theory – I could just dump all of  the variables into an analysis and magically factors would come out, if I did it often enough. So, I did that and came up with results that meant absolutely nothing because the whole premise was so stupid.

Here are a couple of problems:

1. The CHIS includes a lot of different types of variables: sample weights, coding for race and ethnic categories, dozens of items on treatment of asthma, diabetes or heart disease, and dozens more items on access to health care. Theoretically (or computationally, I guess the new word is), one could run an analysis and we would get factors of asthma treatment, health care access, etc. Well, except I don’t really see that the variables that are not on a numeric scale are going to be anything but noise. What the heck does racesex, coded as 1 = “Latin male”, 10 = “African American male”, etc., ever load on as a factor?

2. LOTS of the variables are coded with -1 as inapplicable. For example, “Have you had an asthma attack in the last 12 months?”
-1 = Inapplicable
1 = Yes
2 = No
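
In practice, that means those -1 codes need to be set to missing before the variables go anywhere near a factor analysis. A minimal sketch – the data set and variable names here are made-up placeholders, not the actual CHIS names:

/* Recode -1 (Inapplicable) to missing before analysis.                */
/* chis_raw, chis_recoded and the item names are hypothetical.         */
data chis_recoded ;
set chis_raw ;
array items {*} asthma_attack diabetes_tx health_access ;  /* list the survey items here */
do i = 1 to dim(items) ;
if items{i} = -1 then items{i} = . ;
end ;
drop i ;
run ;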

While this may not be theory, these two problems do suggest that some knowledge of your data is essential.

Once you get results, how do you interpret them? Using the default minimum eigenvalue of 1 criterion (which if all you learned in school was how to factor analyze a matrix using a pencil and a pad of paper, I guess you’d use the defaults), you get 89 factors. Here is my scree plot.

[Scree plot showing 89 eigenvalues]

I also got another 400+ pages of output that I won’t inflict on you.

What exactly is one supposed to do with 500 variables that load on 89 factors? Should we then factor analyze these factors to further reduce the dimensions? It would certainly be possible. All you’d need to do is output the factor scores on the 89 factors, and then do a factor analysis on that.

I would argue, though,  and I would be right, that before you do any of that you need to actually put some thought into the selection of your variables and how they are coded.

Also, you should perhaps understand some of the implications of having variables measured on vastly different scales. As this handy page on item analysis points out,

“Bernstein (1988) states that the following simple examination should be mandatory: “When you have identified the salient items (variables) defining factors, compute the means and standard deviations of the items on each factor.  If you find large differences in means, e.g., if you find one factor includes mostly items with high response levels, another with intermediate response levels, and a third with low response levels, there is strong reason to attribute the factors to statistical rather than to substantive bases” (p. 398).”
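
If you wanted to run Bernstein’s check in SAS, a sketch might look like the following – the item names are hypothetical stand-ins for whatever loads on a given factor, and chis_recoded is the made-up data set name from the sketch above:

/* Bernstein's check, sketched with made-up item names:               */
/* means and standard deviations of the items loading on one factor.  */
proc means data = chis_recoded mean std ;
var item12 item27 item30 ;   /* the salient items on Factor 1 */
title "Items loading on Factor 1" ;
run ;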

And hold that thought, because our analysis of the 517 or so variables provided a great example …. or would it be using some kind of theory to point that out? Stay tuned.

 

I’ve written here before about visual literacy and Cook’s D is just my latest example.

Most people intuitively understand that any sample can have outliers – say, an 80-year-old man who is the father of a six-year-old child, or a new college graduate who is making $150,000 a year. We understand that those people may throw off our predictions and perhaps we want to exclude those outliers from our models.

What if you have multiple variables, though? It’s possible that each individual value may not be very extreme but the combination is. Take this data set below that I totally made up, with mom’s age, dad’s age and child’s age.

Mom Dad Child

30 32 6
20 27 5
31 33 8
29 28 6
40 42 20
44 44 21
37 39 14
25 29 7
30 32 6
20 27 5
31 33 8
29 28 6
39 42 19
43 44 20
37 39 13
25 28 6
40 29 15

Look at our last record. The mother has an age of 40, the father an age of 29 and the child an age of 15. None of these is individually an extreme score. These aren’t even the minimum or maximum for any of the variables. There are mothers older (and younger) than 40, fathers younger (and older) than 29; 15 isn’t that extreme an age in our sample of children. The COMBINATION, however, of a 40-year-old mother, 29-year-old father and 15-year-old child is an extreme case.

Enter Cook’s distance, a.k.a. Cook’s D, which measures the effect of deleting an observation. The larger the distance, the more influential that point is on the results. Take a look at my graph below.

[Plot of Cook’s D showing one very high point]

It is pretty clear that the last observation is very influential. Now, you might have guessed that if you had thought to look at the data. However, if you had 11 variables and 100 observations it wouldn’t be so easy to see by looking at the data and you might be really happy you had Cook around to help you out.

Let’s look at the data re-analyzed without that last observation. Here is what our plot of Cook’s D looks like now.

[Plot of Cook’s D with two points moderately high but not vastly different]

This gives you a very different picture. While a couple of points are higher than the others, it is certainly not the extreme case we saw before.

In fact, dropping out that one point changed our explained variance from 89%  to 93%.
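
If you want to try this yourself, here is a minimal sketch of one way to get Cook’s D in SAS – the data set name is made up, and this is not necessarily the exact code I ran:

/* One way to get Cook's D; "families" is a made-up name for a data set  */
/* holding the mom, dad and child ages listed above.                     */
proc reg data = families plots(only) = cooksd ;
model child = mom dad ;
output out = diagnostics cookd = cooks_d ;   /* Cook's D for each observation */
run ;
quit ;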

So … knowing how to use Cook’s D for regression diagnostics is our latest lesson in visual literacy.

You’re welcome.

Since it is the weekend, I decided to blog about weekend stuff. Look for more statistics tomorrow. For most of the past quarter-century, I have been roped into being a volunteer for one organization or another. Here is a very, very partial list:

  • American Association on Mental Retardation
  • National Council on Family Relations
  • United States Judo Federation
  • Community Outreach Medical Services
  • American Youth Soccer Organization

I’ve been everything from Chair of the Board to chaperone. I’ve spoken at more conferences than I can count, certainly giving a few hundred presentations. I’ve raised hundreds of thousands of dollars.

Given that experience,  I’ve concluded that volunteers fall into three broad categories. Recognizing that fact is probably key to having a successful non-profit organization, because for most non-profits, volunteers are essential.

Category 1: People who are very excited to be a volunteer. These individuals derive a lot of their self-esteem from their position in the organization. Their enthusiasm may stem from a genuine passion for the mission of the organization, be it youth sports, individuals with disabilities or health care. Alternatively, the volunteer position may be an exciting departure from a boring day job, an opportunity to use more of their talents. Generally, it is both reasons. They are willing to do a lot of work. They are also willing to put up with authoritarian and unprofessional interactions with the organizations, because they are so enthusiastic and often, they are accustomed to being bossed around and devalued on their “day jobs”. There is a limit to their tolerance, though.

Category 2: People who are not at all excited to volunteer but have skills and talents your organization needs. These individuals are there out of obligation – they have a child on the team, a friend on the staff, or they really care deeply about the mission of the organization. These people do valuable work for the organization like raising money, providing free legal or accounting services. They have very little tolerance for authoritarian and unprofessional interactions with the organizations, because they would rather be somewhere else in the first place and they are accustomed to being the boss or highly valued on their “day jobs”.

Category 3: People who show up and don’t do any real work.

It seems pretty clear to me that organizations need both of the first two categories, and the more the better.

Not everyone sees it that way, obviously. Let me give you just a few examples, and again, this is a very partial list. I have witnessed

  • Volunteers told how to dress for an event,
  • Week-long required continuing education classes costing several hundred dollars,
  • Required training held hundreds of miles away,
  • Required drug testing (this was after I had been asked to coach for free at an event and pay my own way. My response was “Are you fucking kidding me?”),
  • “Two-hour” meetings running six hours late,
  • Volunteers told to “Show up at 7 a.m. , don’t be late and be sure you don’t leave early”,
  • Volunteers chastised, yelled at, berated by a board member or staff member

 

Do’s and don’ts

Well, first of all, don’t do any of those things above.

Second, say “Thank you.” A lot.

Think of these individuals just like donors who are giving you thousands of dollars, because they are. It would cost you a lot of money to replace their services. Treat them as you would professionals providing services for you. Would you ask your accountant to take a drug test? Would you tell your attorney to be sure he dressed professionally when he represents you in court? Don’t assume just because someone is working for free that he is a degenerate or an idiot.

It’s funny that most organizations seem to think what volunteers want is an engraved plaque or a certificate printed out from PowerPoint. Really, a little common courtesy goes a long way.

Ever wonder why, with goodness of fit tests, non-significance is what you want?

Why is it that sometimes a significant p-value means your hypothesis is correct – there is a relationship between the price of honey and the number of bees – and in other cases, significance means your model is rejected? Well, if you are reading this blog, it’s possible you already know all of this, but I can guarantee you that students who start off in statistics learning that a significant p-value is a good thing are often confused to learn that with model fit statistics, non-significance is (usually) what you want.

You are hoping to find non-significance when you are looking at model fit statistics, because the hypothesis you are testing is that the full model – one that has as many parameters as there are observations – is different from the model you have postulated.

To understand model fit statistics, you should think about three models.

The null model contains only one parameter, the mean. Think of it this way: if all of your explanatory variables are useless, then your best prediction for the dependent variable is the mean. If you knew nothing about the next woman likely to walk into the room, your best prediction of her height would be 5’4″, if you live in the U.S., because that is the average height.

The full model has one parameter per observation. With this model, you can predict the data perfectly. Wouldn’t that be great? No, it would be useless. Using the full model is a bad idea because it is non-replicable.

Here is an example data set where I predict IQ using gender, number of M & M’s in your pocket and hair color.

EXAMPLE

Gender   M&M’s   Hair      IQ
Male     10      redhead   100
Female    0      blonde     70
Male     10      blonde     60
Female   30      brunette  100

With four observations and four parameters, the full model predicts IQ perfectly:

Predicted IQ = 50 + 1 x (M&M’s) + 20 x (female) + 40 x (redhead)

(Check: the male redhead with 10 M&M’s is 50 + 10 + 0 + 40 = 100; the female blonde with none is 50 + 0 + 20 + 0 = 70; and so on for the other two rows.)

Is that replicable at all? If you selected another random sample of 4 people from the population do you think you could predict their scores perfectly using this equation?

No.

Also, I do not know why that woman has so many M & M’s in her pocket.

[Image: The M and M store]

In between these two useless models is your model. The hypothesis you are testing is that your model, whatever it is, is non-significantly different from the full model. If you throw out one of your parameters, your new model won’t be as good as the full model – that one extra parameter may explain one case – but the question is, does the model without that parameter differ significantly from the full model? If it doesn’t, then we can conclude that the parameters we have excluded from the model were unimportant.

We have a more parsimonious model and we are happy.

But WHY do more parsimonious models make us happy? Well, because that is kind of the whole point of model building. If you need a parameter for each person, why not just examine each person individually?  The whole point of a model is dimension reduction, that is, reducing the number of dimensions you need to measure while still adequately explaining the data.

If, instead of needing 2,000 parameters to explain the data gathered from 2,000 people you can do just as well with 47 parameters, then you would have made some strides forward in understanding how the world works.

Coincidentally, I discussed dimension reduction on this blog almost exactly a year ago, in a post with the title “What’s all that factor analysis crap mean, anyway?”

(Prediction: At least one person who follows this link will be surprised at the title of the post.)

I was reading a book on PHP just to get some ideas for a re-design I’m doing for a client, when I thought of this.

Although I think of PHP as something you use to put stuff into a database and take it out – data entry of client records, reports of total sales – it is possible to use it without any SQL intervention.

You can enter data in a form, call another file and use the data entered to determine what you show in that file. The basic examples used in teaching are trivial – a page asks what your name is, and the next page that pops up says “Howdy” + your name.

We make games that teach math using Unity 3D, Javascript, PHP, MySQL and C# .

Generally, when a player answers a question, the answer is written to our database and the next step depends on whether the answer was correct. Correct, go on with the game. Incorrect, pick one of these choices to study the concept you missed. Because schools use our games, they want this database setup so they can get reports by student, classroom, grade or school.

What about those individual users, though? They can tell where they/ their child is by just looking at the level in the game.  So, I could drop the PHP altogether in that case and just use Javascript to serve the next step.

I could also use PHP.

In the case where we drop out the database interface altogether, is there a benefit to keeping PHP? I couldn’t think of one.

Still thinking about this question.

I’ve been looking high and low for a supplemental text for a course on multivariate statistics and I found this one –

The Multivariate Social Scientist, by Graeme Hutcheson and Nick Sofroniou

They are big proponents of generalized linear models – in fact, the subtitle is “Introductory statistics using generalized linear models” – so if you don’t like generalized linear models, you won’t like this book.

I liked this book a lot. Because this is a random blog, here is day one of my random notes

A generalized linear model has three components:

  • The random component is the probability distribution assumed to underlie the response variable. (y)
  • The systematic component is the fixed structure of the explanatory variables, usually linear. (x1, x2 … xn)
  • The link function maps the systematic component on to the random component.

The systematic component takes the form

η = α + β1x1 + β2x2 + … + βnxn

They use η to designate the predicted variable instead of y-hat. I know you were dying to know that.

Obviously, since that IS a multiple regression equation (which could also be used for ANOVA), when you have linear regression, the link function is simply the identity. With logistic regression, it is the logit function, which maps the random component onto the systematic component through the log of the odds: η = ln(μ / (1 - μ)).
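
To make the link idea concrete, here is a hedged sketch in SAS – the data set and variable names are made up, and this is not code from the book – fitting the same systematic component with two different links in PROC GENMOD:

/* Identity link with a normal random component = ordinary linear regression */
proc genmod data = glm_example ;
model y = x1 x2 / dist = normal link = identity ;
run ;

/* Logit link with a binomial random component = logistic regression */
proc genmod data = glm_example descending ;   /* model the probability that passed = 1 */
model passed = x1 x2 / dist = binomial link = logit ;
run ;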

The reason I think this is such a good book for students taking a multivariate statistics course is that it relates to what they should know.  They certainly should be familiar with multiple regression and logistic regression, and understand that the log of the odds is used in the latter.

The book also discusses the log link used in loglinear analyses, which I don’t necessarily assume every student will have used. I don’t say that as a criticism, merely an observation.

 

 
