Mar

26

There are some things in life that I just have difficulty wrapping my brain around, and one of those is how some people can be so incompetent that they don’t know they’re incompetent.

Let’s take the example of people earning doctorates. You’d think that would be a pretty select crowd, right?

From 1960–69, about 16,000 Ph.D.'s were awarded annually in the United States.

From 1990–99, there were about 40,000 annual Ph.D. graduates.

That seems like a pretty steep jump in 30 years, but maybe science, technology, etc. were advancing at a rapid rate, we were in a race to space, make up whatever explanation you want, because, are you ready for this ... in 2013, we awarded over 125% of the number of degrees awarded a mere 14 years earlier, and that follows on already steep upward trends in the decades before.

There has been a dramatic increase in the number of institutions awarding doctorates.

So, here is a question for you …. who are the people educating all of these doctoral students?

At the risk of sounding like an old curmudgeon, even more than usual, I'd like to point out that it used to be that a professor supervised only a few doctoral students at a time. You worked closely with that person on your research for a year or two. Prior to that, you had 3-5 years of coursework, often with only a dozen or fewer students in a class. When I enrolled in the doctoral program, I had to agree not to work more than 20 hours a week during the term because being a doctoral student was a full-time job. All but two of my statistics courses were six hours a week, a three-hour lecture and a three-hour lab. In one of the two that didn't have a lab, structural equation modeling, you were just expected to spend that lab time figuring it out on your own, and believe me, it took more than an extra three hours.

When I look at what doctoral students are required to know in most institutions, I wonder – who is going to replace the people who are retiring?

If someone poses a statistical problem to me – say, determining whether three groups receiving different treatments improved from pretest to post-test – I can perform all of the steps required to answer the problem: pose the relevant hypotheses and post hoc tests, evaluate the reliability and validity of the measures used, clean the data in preparation for analysis. Not only can I lay out the research design and necessary steps, but I can code it, in SAS preferably, but in SPSS or Stata if someone prefers. Everyone I knew in graduate school was expected to be able to do this; it wasn't the special AnnMaria program.

Now, many people use consultants. I have friends who make their living consulting full-time on dissertations for doctoral students.

This leads me to the question, “What are their advisors doing if these students need a consultant?”

Isn’t that what your professors in your program are supposed to be doing, consulting with you?

The fact is that the vast majority of professors now are adjuncts, teaching a course here or there. I'm not bashing adjuncts per se. I teach as an adjunct now and then myself, and it is fine if you need a course on, say, programming or statistics. But if all you get is courses taught by someone tangentially tied to the university, you are missing out on the in-depth research and study that used to be required for a Ph.D.

The really alarming thing to me is that now we have whole waves of students who are being educated by people who don’t know any other system. So, we have people who cannot conduct a complete research project on their own, who have only vague concepts of what a ‘mixed model’ is – and they are teaching doctoral students!  Now, if you are in French literature or something, maybe that’s cool and mixed models aren’t very applicable. That’s not my point.

My point is this whole cutting costs by reducing full-time faculty to a tiny fraction has resulted in people who are poorly educated and don’t even know it! They don’t know what they don’t know and now they are passing their ignorance on to the next generation.

I came out of my Ph.D. program knowing one hell of a lot, simply because, if I wanted to graduate, there was no other option. The University of California didn’t give a damn if I had three kids (I did), or needed to work (I did) or that it costs one hell of a lot to provide that level of individual supervision (it did). The powers that be figured you needed this body of knowledge to get a Ph.D. and that was that. And now, that isn’t that. That worries me.

 

Mar

20

I can't believe I haven't written about this before – I'm going to tell you an easy (yes, easy) way to find standardized mortality rates and relative risk by strata, and to communicate them to a non-technical audience.

It all starts with PROC STDRATE. No, I take that back. It starts with this post I wrote on age-adjusted mortality rates, which many cohorts of students have found to be – and this is a technical term here – "really hard".


Here is the idea in a nutshell – you want to compare two populations, in my case, smokers and non-smokers, and see if one of them experiences an “event”, in my case, death from cancer, at a higher rate than the other. However, there is a problem. Your populations are not the same in age and – news flash from Captain Obvious here – old people are more likely to die of just about anything, including cancer, than are younger people. I say “just about anything” because I am pretty sure that there are more skydiving deaths and extreme sports-related deaths among younger people.

Captain Obvious wearing her obvious hat


So, you compute the risk stratified by age. I happened to have this exact situation here, and if you want to follow along at home, tomorrow I will post how to create the data using the sashelp library’s heart data set.
The code is a piece of cake


PROC STDRATE DATA=std4
REFDATA=std4
METHOD=indirect(af)
STAT=RISK
PLOTS(STRATUM=HORIZONTAL);
POPULATION EVENT=event_e TOTAL=count_e;
REFERENCE EVENT=event_ne TOTAL=count_ne;
STRATA agegroup / STATS;
RUN;

The first statement gives the data set name that holds your exposed sample data (the smokers) and your reference data set of non-exposed records (in this example, the non-smokers). You don't need these data to be in two different data sets and, in this example, they happen to be in the same one. The method used for standardization is indirect. If you're interested in the different types of standardization, check out this 2013 SAS Global Forum paper by Yang Yuan.

STAT=RISK will actually produce many statistics, including both crude risk estimates and estimates by strata for the exposed and non-exposed groups, as well as the standardized mortality rate – just a bunch of stuff. Run it yourself and see. The PLOTS option is what is of interest to me right now. I want plots of the risk by stratum.

The POPULATION statement gives the variable that holds the value for the number of people in the exposed group who had the event, in this case, death by cancer, and the count is the total in the exposed group.

The REFERENCE statement names the variable that holds the value of the number in the non-exposed group who had the event, and the total count in the non-exposed group (both those who died and those who didn’t).

The STRATA statement gives the variable by which to stratify. If you don’t need your data set stratified because there are no confounding variables – lucky you – then just leave this statement out.
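If you want to see the logic those statements implement without running SAS, here is a rough Python sketch of stratified risks and indirect standardization. The stratum counts below are entirely made up for illustration; PROC STDRATE does considerably more (confidence limits, tests), so this is only the arithmetic skeleton.

```python
# Hypothetical counts by age stratum (NOT the blog post's data).
strata   = ["<40", "40-49", "50-59", "60+"]
event_e  = [10,  40,  90, 160]   # deaths among the exposed (smokers)
count_e  = [400, 500, 600, 500]  # total exposed in each age group
event_ne = [5,   20,  50, 110]   # deaths among the non-exposed
count_ne = [600, 700, 800, 600]  # total non-exposed in each age group

# Crude risks, ignoring age entirely
crude_e  = sum(event_e)  / sum(count_e)
crude_ne = sum(event_ne) / sum(count_ne)

# Risk within each stratum for both groups
risk_e  = [e / n for e, n in zip(event_e, count_e)]
risk_ne = [e / n for e, n in zip(event_ne, count_ne)]

# Indirect standardization: apply the reference (non-exposed) stratum
# risks to the exposed group's stratum sizes to get expected events,
# then compare observed to expected.
expected = sum(r * n for r, n in zip(risk_ne, count_e))
smr = sum(event_e) / expected

for s, re_, rn in zip(strata, risk_e, risk_ne):
    print(f"{s}: exposed risk {re_:.3f}, reference risk {rn:.3f}")
print(f"crude exposed {crude_e:.3f}, crude reference {crude_ne:.3f}, SMR {smr:.2f}")
```

With these invented counts, the exposed group has about twice the deaths you would expect at the reference group's age-specific risks, which is exactly the kind of comparison the PROC produces.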

Below is the graph

risks by strata
The PLOTS statement produces plots of the crude estimate of the risk by strata, with the reference group risk as a single line. If you look at the graph above you can see several useful measures. First, the blue circles are the risk estimate for the exposed group at each age group and the vertical blue bars represent the 95% confidence limits for that risk. The red crosses are the risk for the reference group at each age group. The horizontal, solid blue line is the crude estimate for the study group, i.e., smokers, and the dashed, red line is the crude estimate of risk for the reference group, in this case, the non-smokers.

Several observations can be made at a glance.

  1. The crude risk for non-smokers is lower than for smokers.
  2. As expected, the younger age groups are below the overall risk of mortality from cancer.
  3. At every age group, the risk is lower for the non-exposed group.
  4. The differences between exposed and non-exposed are significant for the two younger age groups only; for the other two groups, the non-smokers, although they have a lower risk, fall within the 95% confidence limits for the exposed group.
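If you are wondering roughly where those vertical confidence bars come from, a normal-approximation interval for a single stratum's risk looks like this. The counts are hypothetical (the point is the formula, not these data), and PROC STDRATE has its own interval computations, so treat this as a sketch only.

```python
import math

# Hypothetical stratum: 40 deaths among 500 exposed people in one age group.
events, n = 40, 500
p = events / n                      # stratum risk estimate: 0.08
se = math.sqrt(p * (1 - p) / n)     # standard error of a proportion
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"risk {p:.3f}, 95% CI ({lower:.3f}, {upper:.3f})")
```

A reference-group risk falling inside that interval is the "not significantly different" situation described in point 4 above.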

There are also a lot more statistics produced in tables but I have to get back to work so maybe more about that later.

I live in opposite world

Speaking of work — my day job is that I make games for 7 Generation Games and for fun I write a blog on statistics and teach courses in things like epidemiology. Actually, though, I really like making adventure games that teach math and since you are reading this, I assume you like math or at least find it useful.


Share the love! Get your child, grandchild, niece or nephew a game from 7 Generation Games.

One of my favorite emails was from the woman who said that after playing the games several times while visiting her house, her grandson asked her suspiciously,

Grandma, are these games on your computer a really sneaky way to teach me math?

You can check out the games here and if you have no children to visit you or to send one as a gift, you can give one to a school – good karma. (But, hey, what’s with the lack of children in your life? What’s going on?)

Mar

10

SENSITIVITY AND SPECIFICITY – TWO ANSWERS TO “DO YOU HAVE A DISEASE?”

Both sensitivity and specificity address the same question – how accurate is a test for disease – but from opposite perspectives. Sensitivity is defined as the proportion of those who have the disease who are correctly identified as positive. Specificity is the proportion of those who do not have the disease who are correctly identified as negative.

Students and others new to biostatistics often confuse the two, perhaps because the names are somewhat similar. If I were in charge of naming things, I would have named one 'sensitivity' and the other something completely different, like 'unfabuloso'. Why I am never consulted on these issues is a mystery to me, too.

Specificity and sensitivity can be computed simultaneously, as shown in the example below using a hypothetical Disease Test. The results are in and the following table has been obtained:

 

                   Disease    No Disease
  Test Positive      240          40
  Test Negative       60         160

Results from Hypothetical Screening Test
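From that table, both statistics are just division. Here is a quick Python check (the cell names tp, fp, fn, tn are mine, not SAS output):

```python
# Counts from the hypothetical screening test table above.
tp, fp = 240, 40    # test positive: with disease, without disease
fn, tn = 60, 160    # test negative: with disease, without disease

sensitivity = tp / (tp + fn)   # 240/300 = 0.80
specificity = tn / (tn + fp)   # 160/200 = 0.80
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```

Both happen to be 80% in this example, which is a coincidence of the numbers chosen, not a general rule.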

COMPUTING SENSITIVITY AND SPECIFICITY USING SAS

Step 1 (optional): Reading the data into SAS. If you already have the data in a SAS data set, this step is unnecessary.

The example below demonstrates several SAS statements in reading data into a SAS dataset when only aggregate results are available. The ATTRIB statement sets the length of the result variable to be 10, rather than accepting the SAS default of 8 characters. The INPUT statement uses list input, with a $ signifying character variables.

The DATALINES; statement, on a line by itself, precedes the data. (Trivial pursuit fact: CARDS; will also work, dating back to the days when this statement was followed by cards with the data punched on them.) A semi-colon on a line by itself denotes the end of the data.

DATA diseasetest ;
ATTRIB result LENGTH= $10 ;
INPUT result $ disease $ weight ;
DATALINES ;
positive present 240
positive absent 40
negative present 60
negative absent 160
;
RUN;

Step 2: PROC FREQ

PROC FREQ DATA=diseasetest ORDER=FREQ ;
TABLES result*disease ;
WEIGHT weight ;
RUN;

Yes,  plain old boring PROC FREQ. The ORDER = FREQ option is not required but it makes the data more readable, in my opinion, because with these data the first column will now be those who had a positive result and did, in fact, have the disease. This is the numerator for the formula for sensitivity, which is:

 

Sensitivity = (Number with disease who tested positive) / (Total with disease).

 

TABLES variable1*variable2   will produce a cross-tabulation with variable1 as the row variable and variable2 as the column variable.

WEIGHT weightvariable will weight each record by the value of the weight variable. The variable was named 'weight' in the example above, but any valid SAS name is acceptable. Leaving off this statement will result in a table with only 4 subjects, one for each combination of result and disease, corresponding to the data lines above.

Results of the PROC FREQ are shown below. The bottom value in each box is the column percent.

Because the first category happens to be the “tested positive” and the first column is “disease present”, the column percent for the first box in the cross-tabulation – positive test result, disease is present – is the sensitivity, 80%. This is the proportion of those who have the disease (the disease present column) who had a positive test result.

 

Table of result by disease
(each cell: Frequency / Percent / Row Pct / Col Pct)

result       present     absent      Total
positive     240         40          280
             48.00       8.00        56.00
             85.71       14.29
             80.00       20.00
negative     60          160         220
             12.00       32.00       44.00
             27.27       72.73
             20.00       80.00
Total        300         200         500
             60.00       40.00       100.00

Output from PROC FREQ for Sensitivity and Specificity

The column percentage for the box corresponding to a negative test result and absence of disease is the value for specificity. In this example, the two values, coincidentally, are both 80%.

Three points are worthy of emphasis here:

  1. While the location of specificity and sensitivity in the table may vary based on how the data and PROC FREQ are coded, the values for sensitivity and specificity will always be diagonal to one another.
  2. This exact table produces four additional values of interest in evaluating screening and diagnostic tests: positive predictive value, negative predictive value, false positive probability and false negative probability. Further details on each of these, along with how to compute the confidence intervals for each, can be found in Usage Note 24170 (SAS Institute, 2015).
  3. The same exact procedure produces six different statistics used in evaluating the usefulness of a test. Yes, that is pretty much the same as point number 2, but it bears repeating.
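For the curious, here is how the other four values fall out of the very same 2x2 table, under one common set of definitions. Conventions for the false positive and false negative probabilities vary between texts, so check the Usage Note for the exact definitions SAS reports; the version below takes them as the complements of specificity and sensitivity.

```python
# Same 2x2 counts as the PROC FREQ example above.
tp, fp, fn, tn = 240, 40, 60, 160

ppv = tp / (tp + fp)            # positive predictive value: 240/280
npv = tn / (tn + fn)            # negative predictive value: 160/220
false_pos = fp / (fp + tn)      # 1 - specificity, under this convention
false_neg = fn / (fn + tp)      # 1 - sensitivity, under this convention
print(f"PPV {ppv:.3f}, NPV {npv:.3f}, FP {false_pos:.3f}, FN {false_neg:.3f}")
```

Notice that PPV and NPV are the row percentages of the same table, just as sensitivity and specificity are the column percentages.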

Speaking of that SAS Usage Note, you should really check it out.

Mar

3

In the early part of any epidemiology course, few things throw students as much as computing age-adjusted mortality. It seems really counter-intuitive that two populations could have the exact same mortality rate and yet one is significantly less healthy than the other.

Thinking about it for a moment, though, before diving into computation, makes it pretty clear.

Say you have a class of 30 members of a senior citizens center in Florida and 30 second-graders in Santa Monica. In each group, 4 members died this year. Is the mortality rate the same?


This is where age-adjusted mortality comes in. It just so happens that in fact the CRUDE MORTALITY RATE is the same.

Crude mortality rate is simply (# of people who died)/(population at midyear)

We take midyear population because the denominator is the population at risk and if you have already died you cannot die again. Poets talk about dying a thousand deaths but statisticians don’t believe in that crap.

Since, we will assume, no one is joining your community center or class, mid-year population = 28.

How did I get 28? I assumed that people died randomly throughout the year, so by the middle of the year, 2 of your 4 deaths had occurred, leaving 28 people at risk.

So, your crude mortality rate is 143 per 1,000.
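If you want to check that arithmetic yourself, it is three lines:

```python
deaths = 4
start_pop = 30
midyear_pop = start_pop - deaths // 2        # 2 of the 4 deaths by mid-year -> 28
crude_rate = deaths / midyear_pop * 1000     # deaths per 1,000 at risk
print(f"crude mortality rate: {crude_rate:.0f} per 1,000")   # prints 143
```

The same three lines give the same 143 for both the senior center and the second-grade class, which is exactly the problem with crude rates.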

Does it bother you that more of the second-graders died? Does that not seem right? That’s because you have some intuitive understanding of age-adjusted mortality.

Age-adjusted mortality is what you get when you apply ACTUAL age specific rates to a HYPOTHETICAL STANDARD POPULATION.

Let’s say we want to compare the mortality rates of two relatively small cities, each with the same size population and in each city, 74 people died in the last year. We are arguing that pollution is causing increased mortality but the main polluter in town points to the fact that City B has no more deaths than City A, on the other side of the state.

To compute the age-adjusted rates, we take the actual mortality rate for each age group in each city, as in the example below, and apply it to a standard population – let's say 10,000 children born that year, 20,000 people ages 1-5, and so on.

Age Group   Standard     City A Rate    Expected    City B Rate    Expected
            Population   per 100,000    Deaths A    per 100,000    Deaths B
< 1         10,000       160            16          70             7
1-5         20,000       20             4           12             2.4
6-40        50,000       30             15          14             7
41-65       10,000       50             5           45             4.5
Over 65     10,000       350            35          300            30
TOTAL       100,000                     75                         50.9

The cities may have different age distributions, so City A, which is a college town, has a lot more young people than City B. Given City A's mortality rates for each age group, one would expect 75 deaths in a standard population – that is, a population with the age distribution given above.

However, given the mortality rates by age in City B, one would expect only 50.9 deaths in a year. So, yes, City A has the same number of people and the same number of deaths, but if the people in City A are much younger, they should have FEWER deaths.
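You can verify the expected-death totals with a few lines of Python, using the numbers straight from the table above:

```python
# Standard population and the age-specific rates (per 100,000) from the table.
standard_pop = [10_000, 20_000, 50_000, 10_000, 10_000]
rates_a = [160, 20, 30, 50, 350]   # City A, deaths per 100,000
rates_b = [70, 12, 14, 45, 300]    # City B, deaths per 100,000

# Expected deaths = rate applied to each standard age-group, summed
expected_a = sum(r * n / 100_000 for r, n in zip(rates_a, standard_pop))
expected_b = sum(r * n / 100_000 for r, n in zip(rates_b, standard_pop))
print(round(expected_a, 1), round(expected_b, 1))   # prints 75.0 50.9
ratio = expected_a / expected_b                      # about 1.47
```

Same standard population, very different expected death counts – which is the whole point of age adjustment.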

The standardized mortality ratio is the observed number of deaths per year divided by the expected number of deaths.

Let's say we use the rates for City B, without the polluter, to produce our expected number of deaths: 50.9. The 75 deaths expected under City A's rates serve as our observed number.

SMR = 75 / 50.9 = 1.47

Usually, we multiply it by 100. So, this says the deaths in City A are 147% of what would be expected for this distribution of ages, based on the mortality rates in a city with no polluting plant.

Feel smarter after reading this?

Just need a little R & R now?


Check out my day job, making educational adventure games. Download one for Mac or Windows today.

Mar

1

This is part 3 of the series inspired by Cindy Gallop’s brilliant talk on finding talented women or minorities.

Not only is your company not hiring female or minority employees and not investing in female- or minority-led companies, but YOU ARE LITERALLY ADDING INSULT TO INJURY.

The tech workforce is disproportionately white and Asian male, and the white male proportion increases the higher one goes up the ladder. Link to Fortune article here. I don’t just make this shit up.

Here is what people say when told these facts about their company.

“We hire/fund solely based on merit.”

Which is saying, that Latinos, African-Americans, Native Americans, women are INFERIOR.  If you do not mean that, please tell me how “has less merit” is defined in your language.

There is actually a great deal of research documenting that women and non-white men are NOT judged equally. Here is a link to a summary of three of those studies. In fact, the identical pitch, when given by a man, was about twice as likely to be rated favorably as when given by a woman.

The first post in this series was about how people can’t find women and minority applicants because they don’t really want to find them. If they did, they would look harder.

The second was on how we hire men based on their potential but hire women based on their proven accomplishments. The same goes for African-Americans, Latinos and others who don't fit the stereotype. There are plenty of studies (here's a link to one of them) that show we give people "like us" the benefit of the doubt. They are rated more highly and are more likely to be hired. The "like us" includes "like the people who already work here".

This whole "they don't have merit" attitude, combined with judging one group of people on potential while another is judged on accomplishments, produces a vicious circle.

You hire Bob because he has all the qualifications for the job – degree in the right field, portfolio he created in college that highlights his skills – and he is your friend, Bubba’s son. I get that, I really do. We are a small company and we can’t afford to have people working for us who are lazy, faked their qualifications or just cannot get along with their co-workers. Bob is a known quantity and you want to mitigate risk.

So, now, Roberto, or Roberta, does NOT get the internship. When you are looking for a full-time employee, it’s not that you don’t like Latinos or women but Bob has experience and they don’t. Two years later, when you are looking for someone to promote to management, there is Bob, with two years of experience in your company and Roberto and Roberta are somewhere else.

Let’s go back to the beginning, though, when Bob is applying for his first internship or pitching his first startup. Let’s say you don’t know Bob, or Roberto or Roberta. How fucking DARE you start off by saying,

“Well, I’d give Roberto or Roberta the chance if one of them is the better candidate.”

Why do they have to be the BETTER candidate? Why can’t they be just as good?

Okay, now you’re back-pedaling,

Well, of course, if they were just as good.

What really, really makes me want to slap people is the assumption that Roberto or Roberta are not just as good, the willingness to accept the “we only hire for merit and all of the white, male people are better.” Define better.

Let me tell you what happens to the definition of better – it moves to fit your preconceived notions.

Sometimes, Maria and I look at the programs that decided not to fund us or accept us into their accelerator and we laugh a little bitterly. They accept/fund people with less traction, fewer users, no product, less experience, less education. Somehow, though, those people have "more merit".

It’s your money, it’s your program and you have every legal right to select people how you see fit.

Just DON’T go around telling people that you accepted all young white and Asian men because there were no good female, black, Latino or Native American entrepreneurs out there, because that just makes me want to slap you.

— Games that make you smarter –


Here is what I make when I am not ranting here. Buy one. You can also donate one to a classroom or school.
