The second time I taught statistics, I supplemented the textbook with assignments using real data, and I have been doing it in the twenty-eight years since. The benefits seem so obvious to me that it’s hard to believe that everyone doesn’t do the same. The only explanation I can imagine is that they are not very good instructors or not very confident. You see, the problem with real data is you cannot predict exactly what the problems will be or what you will learn.

For example, the data I was planning on using for an upcoming class came from eight tables in two different MySQL databases. Four data sets had been read into SAS in the prior year’s analysis, and now four new files, exported as CSV files, were going to be read in.

Easy enough, right? This requires some SET statements and a PROC IMPORT, a MERGE statement and we’re good to go. What could go wrong?

Any time you find yourself asking that question, you should do the mad scientist laugh, like this – moo wha ha ha.

Here are some things that went wrong:

The PROC IMPORT did not work for some of the datasets. No problem – I replaced it with an INFILE statement and an INPUT statement. It’s all good. They learned about FILENAME statements and file references and how to code an INPUT statement. Of course, being actual data, not all of the variables had the same length or type in every data set, so they also learned about the ATTRIB statement to set attributes.
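Here is a minimal sketch of that pattern, with a made-up file path and made-up variable names (the real assignments use the class data files):

FILENAME rawquiz 'C:\sasdata\quiz_2014.csv' ;

DATA mydata.quiz2014 ;
* ATTRIB forces the same lengths and types in every file so a later SET or MERGE works ;
ATTRIB username LENGTH = $49
       grade    LENGTH = $8
       total    LENGTH = 8 ;
INFILE rawquiz DLM = ',' DSD FIRSTOBS = 2 MISSOVER ;
INPUT username $ grade $ total ;
RUN ;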

Reading in one data set just would not work; it had some special characters in it, like an obelus (which is the name for the divide symbol – ÷ – now you know). Thanks to Bob Hull and Robert Howard’s PharmaSUG paper, I found the answer:

DATA sl_pre ;
SET mydata.pretest (ENCODING='ASCIIANY') ;
RUN ;

Every data set had some of the same problems – usernames with data entry errors that then got counted as another user, and data from testers mixed in with the subjects. The logical solution was a %INCLUDE of the code that fixes this.
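A minimal sketch, with a made-up path – the included file holds whatever code fixes the usernames and deletes the tester records:

%INCLUDE 'C:\sasdata\fix_users.sas' ;

Since the same code gets %INCLUDEd into the program for each table, the fixes live in one place and only have to be corrected once.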

In some data sets the grade variable was numeric and in others it was ‘numeric-ish’. I’m copyrighting that term, by the way. We’ve all seen numeric-ish data. Grade is supposed to be a number, and in 95% of the cases it is, but in the other 5% they entered something like 3rd or 5th. The solution is here:

nugrade=compress(upcase(grade),'ABCDEFGHIJKLMNOPQRSTUVWXYZ ') + 0 ;

and then here:

Data allstudents ;
set test1 (drop = grade rename = (nugrade = grade)) test2 ;
run ;

This gives me an opportunity to discuss two functions – COMPRESS and UPCASE, along with data set options in the SET statement.

Kudos to Murphy for a cool paper on the COMPRESS function.

I do start every class with back-of-the-book data because it is an easy introduction and since many students are anxious about statistics, it’s good to start with something simple where everyone can succeed. By the second week, though, we are into real life.

Not everyone teaches with real data because, I think, there are too many adjunct faculty members who get assigned a course the week before it starts and don’t have time to prepare. (I simply won’t teach a course on short notice.) There are too many faculty members who are teaching courses they don’t know well and reading the chapter a week ahead of the students.

Teaching with real, messy data isn’t easy, quick or predictable – which makes it perfect for showing students how statistics and programming really work.

I’m giving a paper on this at WUSS 14 in San Jose in September. If you haven’t registered for the conference, it’s not too late. I’ll post the code examples here this week so if you don’t go, you can be depressed about what you are missing.


Being the word chooser of this blog, I have decided that visual literacy means the ability to “read” graphic information. A post I saw today on Facebook earnings over time gave a prime example of this.

 

Chart of Facebook earnings by region

If you are a fluent “visualizer”, then just like a fluent reader can read a paragraph and comprehend it, summarize the main points and rephrase it, you could easily grasp the chart above. You would say:

  • Over two years, the number of users from the U.S. & Canada has grown relatively little.
  • The U.S./Canadian market has had the lowest number of users for the past two years.
  • Europe was the next-smallest market and grew about 20% over two years.
  • Asia was the second-largest “market”, second only to “the rest of the world”.
  • The U.S./Canada and European markets are shrinking as a percentage of Facebook users.

My point isn’t anything about Facebook or Facebook users. I don’t really care. What I do want to point out is that if you are reading this blog, you probably found all of those points so obvious that you wonder why I am even mentioning them. Of course, you are reading this blog, so no one needs to explain what those black letters on the screen mean, either.

My point, and I do have one, is that somehow, somewhere, you learned to read graphs like that, and that is an important skill. Most likely, you are fluent. That is, many people could perhaps puzzle out what that graph means, just like many people who are not proficient readers can sound out words and kind of figure out the meaning of a paragraph or two. Those people do not generally read War and Peace, or The Definitive Guide to JavaScript.

The need for visual literacy is all around you – and that’s my real point.


When we started the Dakota Learning Project to evaluate our educational games, I wondered if we had bitten off more than we could chew. We proposed to develop the games, pilot them in schools, collect data and analyze the data to see if the games had any impact. We were also going to go back and revise the games based on feedback from the students and teachers.

Some people told us this was far too much and we should just do a qualitative study observing the students playing the game and having them “think aloud”. Another competition we applied to for funding turned us down and one of the reasons they gave is that we were proposing too much.

We ended up doing a mixed methods design, collecting both qualitative and quantitative data and I’m very glad I did not listen to any of these people telling me that it was too much.

There is no substitute for statistics.

When I observed the students in the labs, I thought that perhaps the grade level assigned to specific problems was inconsistent with what the students could really do. For example:

Add and subtract within 1000 … is at the second-grade level 

Multiply one-digit numbers  … is at the third-grade level

It seemed to me that students were having a harder time with the supposedly second-grade problem, but I wasn’t sure if that was really true. Maybe I was seeing the same students miss it over and over. After all, we had 591 students play Spirit Lake in this round of beta testing. It was certainly possible I saw the same students more than once. It is definitely the case that students who were frustrated and just could not get a problem stuck in my mind.

So … I went back to the data. These data do double-duty because I’m teaching a statistics class this fall and I am a HUGE advocate of graduate students getting their hands on real data, and here was some actual real data to hand them. (I always analyze the data in advance so it is easy to grade the students’ papers, to give examples in class and so I don’t get students complaining that I am trying to get them to do my work for me, although they still do. Ha! As if.)

We had 1,940 problems answered so, obviously, students answered more than one problem each. Of those problems, 1,053, or 54.3%, were answered correctly on the first attempt. This made me quite happy because it is close to an ideal item difficulty level. Too easy and students get bored. Too hard and they get frustrated.

I used SAS Enterprise Guide to produce the chart below:

chart showing subtraction in the middle of difficulty range

You can see that the subtraction problem showed up about mid-range in difficulty. Now, it should be noted that the group gets more selective as you move along. That is, you don’t get to the multiplication problems unless you passed the subtraction problem. Still, it is worth noting that only 70% of fourth- and fifth-grade students in our sample answered correctly on the first try a problem that was supposedly a second-grade question.
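If you don’t have Enterprise Guide handy, you can rough out the same picture in Base SAS – a sketch, assuming a data set with one record per student per first attempt and made-up variable names problem_id and passed (1 = correct, 0 = not):

proc freq data = first_attempts ;
tables problem_id * passed / norow nocol nopercent ;
run ;

proc sgplot data = first_attempts ;
vbar problem_id / response = passed stat = mean ; * proportion answering correctly on the first try ;
run ;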

Because we want students to start the game succeeding, I added a simpler problem at the beginning. That’s the first bar with 100% of the students answering it correctly. I won’t get too excited about that yet, as I added it later in the study and only a few students were presented that problem. Still, it looks promising.

So, what did I learn that I couldn’t learn without statistics? Well, it reinforced my intuition that the subtraction problem was harder than the multiplication ones and told me that  a substantial proportion of students were failing it on the first try. It was not the same students failing over and over.

The second question, then, was whether the instructional materials made any difference. I’m pleased to tell you that they did. On the second (or higher) attempt, 85% of the students answered correctly. If you add that 85% of the 30% who failed the first go-round (roughly another 25% of all students) to the 70% who passed on the first attempt, you get about 95% of the students continuing on in the game. This made me happy because it shows that we are beginning at an appropriate level of difficulty. I would have liked 100%, but you can’t have everything.

I should note that the questions are NOT multiple choice, and in fact, the answer to that particular problem is 599, so it is not likely the student would have just guessed it on the second attempt.


More notes from the text mining class. …

This is the article I mentioned in the last post, on Singular Value Decomposition:

ftp://ftp.sas.com/techsup/download/EMiner/TamingTextwiththeSVD.pdf

Contrary to expectations, I did find time to read it on the ride back from Las Vegas, and it is surprisingly accessible even to people who don’t have a graduate degree in statistics, so I am going to include it in the optional reading for my course.

Many of these concepts, like start and stop lists, apply to any text mining software, but it just happens that the class I’m teaching this fall uses SAS.

———
In Enterprise Miner, you can only have 1 project open at a time, but you can have multiple diagrams and libraries, and of course, zillions of nodes, in a single project

In Enterprise Miner, you can use text or text location as a type of variable. Documents < 32K in size can be contained in the project as a text variable. If greater than 32K, give a text location.

Dictionaries

  • start lists – often used for technical terms
  • stop lists, e.g. articles like “the” and pronouns. These appear with such frequency in documents that they don’t contribute to our goal, which is to distinguish between documents. A stop list may also include words that are high frequency in your particular data – for example, mathematics in our data, because it is in almost every document we are analyzing (see the sketch below).

Synonym tables
Multi-word term tables – standard deviation is a multi-word term
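The custom stop list itself is just a SAS data set with a character variable named term (at least in the versions I have used – check yours). A minimal sketch of adding our over-used word to the shipped English stop list, assuming your installation has it as SASHELP.ENGSTOP and that mylib is a library you have already assigned:

data mylib.mystop ;
set sashelp.engstop end = last ;
output ;
if last then do ;
   term = 'mathematics' ;
   output ;
end ;
run ;

Point the stop list property at mylib.mystop instead of the default, and the extra word gets dropped along with the standard ones.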

Importing a dictionary — go to Properties. Click the … next to the dictionary (start or stop) you want to import. When it comes up with a window, click IMPORT.

Select the SAS library you want. Then select the data set you want. If you don’t find the library that you want, try this:

  1. Close your project.
  2. Open it again.
  3. Click on the 3 dots next to PROJECT START CODE in the property window.
  4. Write a LIBNAME statement that gives the directory where your dictionaries are located.
  5. Open your project again.

[Note:  Re-read that last part on start code. This applies to any time you aren't finding the library you are looking for, not just for dictionaries. You can also use start code for any SAS code you want to run at the start of a project. I can see people like myself, who are more familiar with SAS code than Enterprise Miner, using that a lot.]
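For what actually goes in that start code, a minimal example – the path is made up, so point it at wherever your dictionary data sets live:

LIBNAME dictlib 'C:\EM_projects\dictionaries' ;

Since the start code runs when the project opens, any library assigned there should show up when you go looking for your start and stop lists.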

Filter viewer – can specify minimum number of documents for term inclusion

 

—————-

Jenn and Chris

Speaking of Las Vegas, blogging has been a little slow lately since we took off to watch The Perfect Jennifer get married. It was a very small wedding, officiated by Hawaiian Elvis. Darling Daughter Number Three doubled as bartender and bridesmaid, then stayed in Las Vegas because she has a world title fight in a few days.

Given the time crunch, I was particularly glad I’d attended this course that gave me the opportunity to draft at least one week’s worth of lectures for the fall. When I finish these notes, my plan is to edit them and turn them into the last lecture in the data mining course. If it’s helpful to you, feel free to use whatever you like. I’ll try to remember to post a more final version in the fall. If you have teaching resources for data mining yourself, please let me know.

My crazy schedule is the reason I start everything FAR ahead of time.

 

Captain Obvious wearing her obvious hat

Captain Obvious is wearing a hat

Maybe this is obvious, but I have often found that what is obvious to some people is not so obvious to others, so here are a few random tips.

1. Enterprise Miner can take a REALLY long time to load during which you wonder if anything is happening at all.

task manager

Open up the Task Manager and look for something that says javaw.exe *32. You can see it near the bottom in the image above. The number next to it should be going up, from 30,000 to 50,000, etc. If it is, you should probably be patient for a few more minutes and your session will start.

2. Let’s say you want to change the properties of something. For example, I don’t want the data set to be partitioned into Training, Validation and Test in a 40, 30, 30 split. I want it to be 50, 50, 0.  So, I right-click on the DATA PARTITION node, get a drop-down menu and

diagram window with properties at left

 

there is all of this stuff from Edit Variables all the way down to Disconnect Nodes – where the hell are the properties to change? They’re on the left, in that window with the title Property! Funny, but it’s so easy to focus on the diagram window and completely forget about everything else. Click on a node and its properties will show up in the window.

3. While the three screens you see when you run the StatExplore node are pretty interesting, it would be nice to have a more detailed look at your data. Just go to the VIEW menu and you can get more statistics, like the cell chi-square values, descriptive statistics of numeric variables broken down by the levels of your target variable.

Menu with window options

Now that you are starting to see some of what you can do with Enterprise Miner, you’ll be wondering what MORE you can do, like decision trees, for example. I’m glad you asked that question …

After all of the effort to get Enterprise Miner installed, I thought it had better do something good. It is interesting to use. Unlike programming, where you can get a program to run but still get errors or unexpected results, so far (key phrase!) with Enterprise Miner I have found the problem to be knowing exactly what to select – for example, when creating data sources. Once you know that, however, it seems pretty hard to make an error.

Goat on a mountain

Enterprise Miner does do some pretty cool stuff, which makes it worth the pain of getting it installed. Even way cooler, unlike back in the day when no one could get their hands on it without paying approximately $4,893,089.16, their first-born child, their left kidney and an albino goat, if you are an instructor or a student, you can get it for free through SAS On-Demand for Academics.

(And, yes, for the record, I *am* aware that said goat is not an albino. I was fresh out of pictures of albino goats. Deal with it.) 

Speaking of Enterprise Miner,  I thought I would ramble on about the good parts for a few posts, since I’m getting ready to teach data mining in the fall and I hate to do anything at the last minute.

One of the good parts is StatExplore. At first glance, it looks good, but at second glance, it looks better.

All you need to do is create a diagram by going to the FILE menu, then selecting NEW and then DIAGRAM.

You can start by dragging a data source on to the diagram. In this example, I used the heart data set from the Framingham Heart Study, which happens to ship with Enterprise Miner in the SASHELP library.

I drag the data set from data sources to the diagram window.

Next, I click on the EXPLORE tab just above the diagram window. This gives you a bunch of icons. Enterprise Miner is just rife with icons. Never fear, though, if you have no idea what this bunch of colored boxes is supposed to mean versus  that bunch, just hover over the icon with your mouse and it will tell you.

diagram

Here is my diagram. Simple, no?  It gives you a bunch of cool stuff. First, you have the plot of chi-square values for all nominal variables.

Chi-square plot

You can see that sex has the highest chi-square (as in gender, not as in frequency of), followed by cholesterol status, smoking status and weight status.  I find this rather surprising. I knew women lived longer than men, but with all of the discussion of obesity, I thought weight would be higher up there.
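If you want to check numbers like these outside of Enterprise Miner, one PROC FREQ against the target gets you the same kind of chi-squares – a sketch, assuming the usual SASHELP.HEART variable names, with Status as the alive/dead target:

proc freq data = sashelp.heart ;
tables (sex chol_status smoking_status weight_status) * status / chisq ;
run ;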

The next chart gives me the worth of each variable in predicting my target, which in this example is death.

plot of variables in order of predictive value

The variable on the far left is age at start. Not surprisingly, the older people are when you start following them, the more likely they are to die in a given period of time. The next variable is Age at CHD Diagnosis, followed by two blood pressure measures, their cholesterol, then cholesterol status – weight status is down at the end.

statistics

 

This analysis produces A LOT of statistics. This I found interesting because, despite some people arguing that Enterprise Miner allows analysis by someone without an extensive programming or statistics background, certainly in the case of statistics, the more knowledge you have, the better use you can make of the results.

For example, in the top right (all three of the screen shots above are one screen; I broke them up in an attempt at legibility), the output pane gives descriptive statistics broken down by each level of the target variable. I can see how many people who died had missing data for age at CHD diagnosis, skewness and kurtosis values for variables by status (living or dead), the mode of weight status for people who were living or dead, and a whole lot more. Interestingly, 68% of the whole sample was overweight.

Scrolling through the statistics output I can get a good idea of the data quality – is it skewed, is it missing, is it missing at random.

Without some background in statistics, that’s probably no more than a bunch of numbers. Personally, I found it very helpful. That’s another assignment for the students: to write a brief summary of their data, including any concerns. There weren’t any real problems with these data except for the obvious fact that variables like cholesterol and cholesterol status, or smoking and smoking status, are going to be highly correlated. It would be a good idea to include one of each pair as an input in any predictive analyses and reject the other to prevent multicollinearity problems.

(NOTE to self: Make sure to explain variable roles, changing variable roles in EM and multi-collinearity.)

You might think that is plenty of output for running just one node but, in fact, there is much more here than meets the eye. More on that tomorrow because, speaking of overweight, I have been at a computer for 13 hours today and I want to hop on the bike and get some exercise in before I knock out the last task I need to do today. Although @sammikes just pointed out on Twitter that round is a shape, it is not the one I want to be in.

Most likely, you, too, have experienced homicidal urges when confronted with a problem you have spent five hours trying to solve on your computer, only to call tech support and have them report,

Well, it works fine on my computer.

You’d think if that solved the problem that they would offer to box up their computer and send it over to your house but, alas, they never do.

This is the reason that I test any software I use for class on several computers under different conditions. After having initially failed to get SAS On-Demand for Enterprise Miner to work with Boot Camp on the Mac, I tried it on a Lenovo machine running Windows 8. I had to install the JRE and ignore a few security warnings, but after that it worked.

[For how I did eventually get it working with boot camp, click here, and thank Jason Kellogg from SAS. ]

Next, I needed to upload some data. The SAS instructions say to use your favorite FTP client and coincidentally, I do have a favorite FTP client (Filezilla), so I downloaded it to the testing machine.

Only the professor can upload data to the class directory, and most professors probably have an FTP program on their personal computer (or maybe not, do you?) Even if you normally do, you may, like me, have borrowed a machine to use for testing or have a new computer. Whatever, this just reinforces my argument that you should never, never plan to use any kind of software in a class unless you have ample time to prepare.

I know that there are schools that ask adjuncts to teach on a week or two of notice. That seems to me a recipe for disaster for both the professor and students. Unless maybe you are teaching something that hasn’t changed in 50 years and requires no technology, like reading Chaucer, I recommend you follow the advice of Nancy Reagan and “Just say no.”

Here are my first few hints:

  1. Test the software on multiple machines and multiple operating systems.
  2. Make sure one of those machines is on the older, under-powered end of the spectrum, as students often don’t have a lot of extra cash and may not have the shiniest, newest machine like you have on your desk.
  3. Test it on the latest operating system. It may turn out that the version your school has does not work with Windows 8. (I did not have that problem with Enterprise Miner this time, but I’ve had it with other software in the past, so it is a good idea.)
  4. Find out what other software you might need, for example, some kind of FTP program in this case, and install it on your computer, if necessary.
  5. Give yourself plenty of time to do all of the above.

You might think these types of things would be handled by the information technology department at your university, and you may be really lucky and that will be so. In many schools, though, the IT department basically helps reset passwords, assigns school email addresses, helps to get discounts on software, uploads files to Blackboard and not much else.

For years, I have been trying to figure out where the $50,000 a year or so tuition goes. It isn’t to adjunct professors and it isn’t to the IT staff. It also isn’t  to buying the latest technology because, more and more often, students are expected to bring their own device.

You may think that none of the above should be your job, and you may be right, but I am just saying that if you can anticipate the frustrations your students will experience and solve their problems during the lecture by directing them to a link on your class website/blog, your life and theirs will both be a lot easier.

 

Thanks to Jason Kellogg from SAS Technical Support, SAS On-Demand Enterprise Miner is now running on my Mac using Windows 8.1 with Boot Camp. Here are his instructions.

Note: this is after you have created a SAS profile, registered a course and changed the security settings in Java – now you are here.

The steps are:
  1. Download and save jre-6u24-windows-i586.exe from http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html#jre-6u24-oth-JPR
  2. Open the Windows Run window and run "C:\users\[userid]\Downloads\jre-6u24-windows-i586.exe" STATIC=1, where [userid] is your user account name.
  3. Click OK to start the installation.
  4. After finishing the installation, right-click an empty area on the desktop and select “Create Shortcut” (NOTE: on Windows 8.1 this was NEW and then SHORTCUT).
  5. In the location, browse to Desktop and click Next.
  6. In the next screen, provide a name for the shortcut, for example “Enterprise MinerJWS”.
  7. Once the shortcut is created, right-click it and select Properties. In the Target field enter the following: "C:\Program Files (x86)\Java\jre1.6.0_24\bin\javaws.exe" https://academic93.oda.sas.com/SASEnterpriseMinerJWS/main.jnlp
  8. Click Apply.

You now have a clickable shortcut to Enterprise Miner. Please use it when starting Enterprise Miner.

This worked and I now have SAS Enterprise Miner working on my laptop, which is going to be extremely convenient.

PLEASE NOTE THAT ALL OF THE QUOTATION MARKS NEED TO BE THERE OR IT WILL GIVE YOU AN ERROR.

ALSO, under #7 that is all one command – the program path and the URL go on a single line, separated by a space.

 

Although it was still a huge pain in the ass to get started, it is leaps and bounds ahead of the first time I tried Enterprise Miner years ago.

 

chicken

Back then, it required back flips and sacrificing a chicken (okay, finding a machine running Windows XP, installing a bunch of files – just take my word it was a pain in the ass).  As for the on-demand version, it was so slow as to be useless.

In contrast, once I got up and running, it was not bad at all, and that was running off the wireless in the office. Now, our internet speed is good here, so your mileage may vary, but at least under good conditions it runs fine using a small dataset.

So, I just uploaded a dataset with 10,000 records and 6,000 variables. We’ll see what it does with that.

==== Random shameless plug =====

When I’m not playing around with statistical software, I’m running a company that makes adventure games to teach math. If you want your children to do something educational this summer, you can buy a copy here for $9.99.

 

Getting ready to teach a data mining course at the end of the year, I started looking through data sets I have on my desktop. Not sure what I will end up using. My first lesson, no matter what, is going to be on data quality.

The very first thing I did was a series of PROC FREQs. Then, I thought maybe that was a mistake. Perhaps I ought to start off with SAS Enterprise Guide or Enterprise Miner.  Here is how I did the first peek at data quality with Base SAS. I’m going to do the same thing with Enterprise Guide tomorrow and see if it would be easier. After that, I’ll try Enterprise Miner. I know I downloaded the SAS On Demand version a while back and haven’t done much with it lately.

(There is a new SAS for the Web offering but from what I have seen (admittedly, a while back), it requires you to set up a virtual machine with VMware and I did not have the time to do it nor could I find my Windows 8 or Windows 7 install disk. Must clean office.)

The first thing I did was pull out a data set with a couple of thousand student quiz records. Yes, I know in data mining we will get to data sets in the millions but this is the first exercise of the first class.

I did not expect to have 2,000 quiz records because we only have around 1,200 beta testers, and about 200 of those are teachers, who I would expect to get all of the in-game problems correct and so never be routed to a quiz. I also know from observation that some of the students never made it to the part of the game where they could do the quizzes. The first challenge page requires students to be able to read simple words and subtract two-digit numbers.

I did a super-simple PROC FREQ

proc freq data = in.realquiz ;
tables username*quiztype ;
run ;

and found that a couple of the users had supposedly taken the same quiz 40 or more times. One student showed as having taken the quiz 70 times and another 91 times. While that is theoretically possible, I was suspicious because after those three, the highest number was 7.
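Rather than scrolling through the whole crosstab to spot those, you can send the counts to a data set and sort them – a sketch, using the same variables as the PROC FREQ above:

proc freq data = in.realquiz noprint ;
tables username * quiztype / out = quizcounts ;
run ;

proc sort data = quizcounts ;
by descending count ;
run ;

proc print data = quizcounts (obs = 10) ;
run ;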

I went into the data set and looked at those particular records and the time stamp showed them coming in tenths of a second apart. Clearly, the student was not answering 5-7 questions in less than a tenth of a second.

We tracked these down to a particular school that was having issues with the firewall. It appeared that when the program couldn’t connect to the server, it tried again and again. When there was a connection, all of those records went through at once.
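Since the time stamps were what gave it away, here is a sketch of flagging the too-fast records – assuming the time stamp is a SAS datetime variable, called entered_dt here (the real variable name in our tables is different):

proc sort data = in.realquiz ;
by username quiztype entered_dt ;
run ;

data suspect ;
set in.realquiz ;
by username quiztype ;
gap = dif(entered_dt) ;          * seconds since the previous record ;
if first.quiztype then gap = . ; * do not compare across students or quizzes ;
if . < gap < 1 ;                 * keep records arriving less than a second apart ;
run ;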

LESSONS LEARNED

  1. Always look at the outliers. Don’t just toss them out. They can tell you things. In this case, taking a closer look at that PHP code is on my list of program fixes. If it happens at one school it can happen at others.
  2. Time stamps are your friend.  I try to include them whenever I can. Yes, it might take up a bit of time and space but there is nothing like it for detecting duplicate records – and fraud.
  3. Just because data has supposedly been cleaned up, never, never assume that it is problem-free.

 

At the moment, we are interested in knowing the most common failure point in the games. Do we need to add in more teaching and problems earlier? The games are designed to teach and test students in mathematics at the fourth and fifth grade levels. The teachers we work with often tell us that their average student is below grade level. So here was my next series of steps.

proc sort data = in.realquiz ;
by username quiztype ;

data test ;
set in.realquiz ;
by username quiztype ;
if first.quiztype ;

*** These first steps sort the dataset by username and type of quiz and then only retain the first instance of each. So, if a student actually did take the same quiz seven times, I am only interested in the fact that beginning the game, he or she could not do multiples of 3, not that it took seven tries to get there.

proc freq data = test ;
tables quiztype*pass / out=quizfreq ;
run;

*** This step shows both the quizzes students took and the result.

This is the point at which I began to become concerned, not about data quality but about what the data were beginning to reveal.

Table of quizzes by passing

Visually impaired – click here for HTML files of tables instead of png

Over half of the students failed and the quizzes they were failing seemed to be at the lower levels – around third-grade math.

LESSON LEARNED

You can get some very valuable information from some very simple statistics. A lot more about that, tomorrow, though, since I have to get back to work ….

As I mentioned yesterday, banging away at 7 Generation Games has led to less time for blogging and a whole pile of half-written posts shoved into cubbyholes of my brain. So, today, I reached into the random file and  coincidentally came out with a second post on open data …


The question for Day 11 of the 20-day blogging challenge was,

“What is one website that you can’t do without? Tell about your favorite features and how you use it in teaching.”

Well, I’m a big open data fan and I am a big believer in using real data for teaching. I couldn’t limit it to one. Here are four sites that I find super-helpful.

The Inter-university Consortium for Political and Social Research has been a favorite of mine for a long time. From their site:

“An international consortium of more than 700 academic institutions and research organizations, ICPSR provides leadership and training in data access, curation, and methods of analysis for the social science research community.

ICPSR maintains a data archive of more than 500,000 files of research in the social sciences.”

I like ICPSR but it is often a little outdated. Generally, researchers don’t hand over their data to someone else to analyze until they have used it as much as their interest (or funding) allows. On the other hand, it comes with good codebooks and often a bibliography of published research. As such, it’s great for students learning statistics and research methods, particularly in the social sciences.

For newer data, my three favorites are the U.S. census site, CHIS and CDC.

census logo

The census.gov data resources section is enough to make you drool when it comes to open data. They have everything from data visualization tools to enormous data files. Whether you are teaching statistics, research methods, economics or political science – it doesn’t matter if you’re teaching middle school or graduate school – you can find resources here.

 

Yes, that’s nice, but what if you are teaching courses in health care – biostatistics, nursing, epidemiology? Whatever your flavor of health-related interests, and whether you want your data and statistics in any form from raw data to publication, the Centers for Disease Control Data & Statistics section is your answer.

Last, only because it is more limited in scope, is the California Health Interview Survey site, where you can download public use files for analysis (my main use) as well as get pre-digested health statistics.

It all makes me look forward to diving back into teaching data mining  this fall.
