### Oct

#### 6

# How much matrix algebra do statistics students REALLY need?

Filed Under Algebra, Dr. De Mars General Life Ramblings, statistics | 3 Comments

Following a discussion using matrix algebra to show computation in a Multivariate Analysis of Variance, a doctoral student asked me,

“Professor, when will I ever use this? Why do I need to know this?”

He had a valid point. I’m always asking myself why I’m teaching something. Is it because it interests me personally, because it is in the textbook, or because students really need to know it?

Let’s take some things about matrix algebra we always teach students in statistics.

*What conformable means and why it might matter*

Two matrices are conformable if they can be multiplied together, which requires that the number of columns in the first matrix equal the number of rows in the second. When you multiply two matrices, each row of the first matrix is multiplied, element by element, by a column of the second matrix. You sum the products, and that sum is one element of the result: the first row times the first column gives the first element. You repeat this until you have multiplied all of the rows in the first matrix by all of the columns in the second.

So — you can multiply a 2 x 3 matrix by a 3 x 2 matrix, but you cannot multiply a 2 x 3 matrix by another 2 x 3 matrix.

Multiplying a matrix of dimension a x b by a matrix of dimension c x d, which requires that b = c, will give you a resulting matrix with a rows and d columns, that is, of dimension a x d.

This can give you results that sometimes seem counter-intuitive, like that the product of a 3 x 1 matrix and a 1 x 3 matrix is a 3 x 3 matrix, while the product of a 1 x 3 matrix and a 3 x 1 matrix is a 1 x 1 matrix.

It may seem weird that the result of matrix multiplication can either be a larger matrix than both of the matrices you multiplied, or smaller than both of them, but there it is.

If both matrices are square, that is, of dimension n x n, then the resulting product will also be an n x n matrix.

And, of course, any matrix can be multiplied by its transpose because the transpose of an m x n matrix will always be n x m .
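To make the dimension rules concrete, here is a quick sketch in JavaScript (the language the blog uses for games); the helper names matMul and transpose are my own, made up for illustration:

```javascript
// Minimal matrix multiply with a conformability check.
// Multiplying an (a x b) matrix by a (c x d) matrix requires b === c,
// and the product has dimensions a x d.
function matMul(A, B) {
  const aCols = A[0].length;
  const bRows = B.length, bCols = B[0].length;
  if (aCols !== bRows) {
    throw new Error("Not conformable: " + aCols + " columns vs " + bRows + " rows");
  }
  // Each element of the product is a row of A times a column of B, summed.
  return A.map(row =>
    Array.from({ length: bCols }, (_, j) =>
      row.reduce((sum, aik, k) => sum + aik * B[k][j], 0)
    )
  );
}

// Transpose: an m x n matrix becomes n x m, so A' times A always works.
function transpose(A) {
  return A[0].map((_, j) => A.map(row => row[j]));
}

const colVec = [[1], [2], [3]];      // 3 x 1
const rowVec = [[4, 5, 6]];          // 1 x 3
console.log(matMul(colVec, rowVec)); // a 3 x 3 matrix
console.log(matMul(rowVec, colVec)); // a 1 x 1 matrix
```

Multiplying a non-conformable pair, say a 1 x 2 by another 1 x 2, throws instead of silently producing nonsense.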

If a square matrix is of full rank, it means that none of the rows are linearly dependent. If you DO have linear dependence, it means you have redundant measures. Now, I could go on to prove this mathematically and all of it is very interesting to me.

I question, though, whether you really need to know anything about matrix algebra to understand that redundant measures are a bad thing.

Do you need matrix algebra to explain that we are going to apply coefficients (do you even need to refer to it as a vector?) to the values of each variable for each record and get a predicted score such that

predicted score = b0 + b1X1 + b2X2 + … + bnXn
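That one sentence is really all the matrix notation is compressing. As a sketch, assuming nothing beyond plain arrays of coefficients and values (the function name predict is invented for illustration):

```javascript
// Apply a set of coefficients to one record's variable values:
// predicted = b0 + b1*X1 + b2*X2 + ... + bn*Xn
function predict(intercept, coefficients, values) {
  if (coefficients.length !== values.length) {
    throw new Error("Need one coefficient per variable");
  }
  return coefficients.reduce((sum, b, i) => sum + b * values[i], intercept);
}

// e.g. b0 = 1.5, b = [2, 0.5], X = [3, 4]  ->  1.5 + 6 + 2 = 9.5
console.log(predict(1.5, [2, 0.5], [3, 4]));
```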

When I was in graduate school, calculators that did statistical analyses, even as simple as regression, cost a few hundred dollars which was the equivalent of three months of my car payment. Computer time was charged to your department by the hour. So … my first few courses, I did all of my homework problems using a pencil and paper, transposing and inverting matrices – and it was a huge pain in the ass.

Then, I got a job as a research assistant and one of the perks was hours of computer time. I thought I’d died and gone to heaven. It took me less than half an hour to get all of my homework done using SAS (which ran on a mini-computer and spit out printouts that I had to walk across campus to pick up).

My students are learning in a completely different environment. So … do they need to learn the same things in the same way I did? This is a question I ponder a lot.

### Oct

#### 3

# USDA is the biggest proponent of women in tech

Filed Under computer games, Dr. De Mars General Life Ramblings, The Julia Group | 1 Comment

**I’m pretty certain that I’m a woman in technology.**

Last night, I was using SAS on a virtual machine through a remote desktop connection to prepare data from the National Hospital Discharge Survey for use in examples of MANOVA and multinomial logistic regression.

Today, I was working on improving animation in the Javascript for a browser-based game that leads into the 3-D portion of an adventure game I designed to teach fractions.

Next week, I will start on a contract to completely re-do the PHP/MySQL database for a client to bring it to something more secure and up to date.

Oh, and I also was reviewing my notes for the graduate courses in biostatistics and advanced multivariate statistics that I’m teaching this fall.

Pretty certain that by any standard – writing code, founding companies, graduate degrees, university appointment, successful Kickstarter – I am definitely a woman in tech/STEM, whatever the day’s buzzword.

I read SO many articles, blog posts, tweets about the need for women in tech, women-led start-ups, women entrepreneurs.

*If you ask me, the U.S. Department of Agriculture is the greatest proponent of women in tech that there is, because they have actually put up money and funded us to do a prototype of an adventure game that teaches math.*

When results from that were positive, they funded us again with a Phase II Small Business Innovation Research award to develop the games for commercialization.

I have written here before about the troubling nature of the Black Girls Code, Latina Girls Code emphasis that seems to completely overlook the grown women who are here now. I am NOT saying those aren’t good programs. I assume they are but I have no personal experience. What I am saying is pretty much what I said in January.

It seems to me that when people are looking at minorities or women to develop in their fields, they are much more interested in the hypothetical idea of that cute 11-year-old girl being a computer scientist some day than of that thirty-something competing with them for market share or jobs. If there are venture capitalists or conference organizers or others out there that are sincerely trying to promote WOMEN who code, not girls, I’ve never met any.

(Since then, I have met a couple of conference organizers.)

I suppose Ada Lovelace was cool – my two-year-old granddaughter has a shirt with her picture on it. Still, I don’t think a trending hashtag of #fuckyeahadalovelace did anything for me as a woman in tech.

**You know what helped me as a woman in tech? Seed money from the USDA.** You can see what we did with it here at our 7 Generation Games site.

One thing Sheryl Sandberg got right in her book, Lean In, was that women tend to be judged on their accomplishments where men are judged on their potential. Of course, you also don’t want to be “too old” to be an innovator so by the time women have those accomplishments, they are past their prime as entrepreneurs according to those VCs who believe that people over 30 are too old to do a start-up.

It’s hard for me to complain about my life when my morning starts out with reading technical books with lines like, *“Figure 1 shows the sprite with the red and green blood particles for player and zombie”.*

My point is that our company is in the situation we are in not because of any “help minorities code” program but because USDA and our backers on Kickstarter gave us cold, hard cash to develop our products.

Want to help women in tech? Back them on Kickstarter. Buy their products. Tweet about their products and companies to help their marketing. Invest in their companies.

**USDA got it right.**

Thank you.

### Oct

#### 3

# SAS Tricks for Massaging Data into Shape

Filed Under Software, statistics, Technology | Leave a Comment

Today, I was thinking about using data from the National Hospital Discharge Survey to try to predict type of hospital admission. Is it true that some people use the emergency room as their primary method of care? Mostly, I wanted to poke around with the NHDS data and get to know it better for possible use in teaching statistics. Before I could do anything, though, I needed to get the data into a usable form.

I decided to use as my dependent variable the type of hospital admission. There were certain categories, though, that were not going to be dependent on much else, for example – if you are an infant born in a hospital, your admission type is newborn. I also deleted the people whose admission type was not given.

The next question was what would be interesting predictor variables. Unfortunately, some of what I thought would be useful had less than perfect data. For example, for discharge status, about 5% of the patients had a status of “Alive, disposition not stated”.

I also thought either diagnostic group or primary diagnosis would be a good variable for prediction. When I did a frequency distribution for each it was ridiculously long, so I thought I would be clever and only select those diagnoses where it was .05% or more, which is over 60 people. Apparently, there is more variation in diagnosis than I thought because in both cases that was over 330 different diagnoses.

Here is a handy little tip, by the way -

```sas
PROC FREQ DATA = analyze1 NOPRINT ;
    TABLES dx1 / OUT = freqcnt ;
RUN ;

PROC PRINT DATA = freqcnt ;
    WHERE PERCENT > 0.05 ;
RUN ;
```

This will print only the diagnoses that occurred more than the specified percentage of the time.
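For readers following along without SAS, the same count-and-filter logic can be sketched in JavaScript; the function name frequentCodes and the sample diagnosis codes are invented for illustration:

```javascript
// Count how often each diagnosis code occurs, convert to percentages,
// and keep only the codes above a cutoff -- the same idea as the
// PROC FREQ / PROC PRINT combination above.
function frequentCodes(codes, cutoffPercent) {
  const counts = new Map();
  for (const code of codes) {
    counts.set(code, (counts.get(code) || 0) + 1);
  }
  const result = [];
  for (const [code, n] of counts) {
    const percent = (100 * n) / codes.length;
    if (percent > cutoffPercent) {
      result.push({ code, percent });
    }
  }
  return result;
}

const dx1 = ["486", "486", "486", "V30", "250", "250", "401", "999"];
console.log(frequentCodes(dx1, 20)); // only the codes above 20% of records
```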

I thought what about the diagnoses that were at least .5% of the admissions? So, I re-ran the analyses with 0.5 and came up with 41 DRGs. I didn’t want to type in 41 separate DRGs, especially because I thought I might want to change the cut off point later, so I used a SAS format, like this. Note that in a CNTLIN dataset, which I am creating, the variables MUST have the names fmtname, label and start.

Also, note that the RENAME statement doesn’t take effect until you write out the new dataset, so your KEEP statement has to have the old variable name, in this case, drg.

```sas
Data fmtc ;
    set freqcnt ;
    if percent > 0.5 ;
    retain fmtname 'drgf' ;
    retain label "in5" ;
    rename drg = start ;
    keep fmtname drg label ;
run ;
```

Okay, so, what I did here was create a dataset that assigns the formatted value of in5 to every one of my diagnosis related groups that occurs in .5% of the discharges or more.

To actually create the format, I need one more step

```sas
proc format cntlin = fmtc ;
run ;
```

Then, I can use this format to pull out my sample

```sas
DATA analyze2 ;
    SET nhds.nhds10 ;
    IF admisstype in (1,2,3) ;
    IF dischargestatus in (1,3,4,6) & PUT(drg,drgf.) = "in5" THEN insample = 1 ;
    ELSE insample = 0 ;
RUN ;
```

I could have just selected the sample that met these criteria, but I wanted to have the option of comparing those I kept in and those I dropped out. Now, I have 71,869 people dropped from the sample and 59,743 that I kept. (I excluded the newborns from the beginning because we KNOW they are significantly different. They’re younger, for one thing.)
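The lookup-and-flag logic can be sketched in JavaScript for comparison; note that here every record is kept and flagged, whereas the SAS step above drops the other admission types first, and the function name flagSample is invented:

```javascript
// Mirror of the SAS format lookup: build a set of frequent DRGs once,
// then flag each record as in or out of the sample.
function flagSample(records, frequentDrgs) {
  const drgSet = new Set(frequentDrgs);
  return records.map(r => ({
    ...r,
    insample:
      [1, 2, 3].includes(r.admisstype) &&
      [1, 3, 4, 6].includes(r.dischargestatus) &&
      drgSet.has(r.drg)
        ? 1
        : 0,
  }));
}

const records = [
  { admisstype: 1, dischargestatus: 1, drg: "470" },
  { admisstype: 1, dischargestatus: 5, drg: "470" },
  { admisstype: 4, dischargestatus: 1, drg: "470" },
];
console.log(flagSample(records, ["470"]));
```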

So, now I am perfectly well set up to do a MANOVA with age and days of care as dependent variables. (You’d think there would be more numeric variables in this data set than those two, but surprisingly, even though many variables in the data set are stored as numeric they are actually categorical or ordinal and not really suited to a MANOVA.)

Anyway …. I think that MANOVA will be one of the first analyses we do in my multivariate course. It’s going to be fun.

### Oct

#### 2

# Matrix Algebra, Just Because

Filed Under Algebra, statistics | 1 Comment

I was talking to a friend of mine today who had taken a test for a new job recently and he had a hard time with the math portion of it. We were in college about the same time and he did perfectly fine in math, but it had been a while. This got me to thinking that I should review things like matrix algebra from time to time, just because it has been a while since I had any need to multiply a matrix without a computer. Well, actually, I can’t imagine that I will ever have such a need but since I’m teaching multivariate statistics and the textbooks generally have a lot of matrix algebra, I thought I should brush up on it whether I ever need it or not.

I had the normal equations for regression drilled into my brain in graduate school, and there was a time in my life, back when I actually had spare time, when I found solving systems of linear equations amusing to do. All of that was a very long time ago.

So …. as I sit here thinking what do my students need to know, I run into the Goldilocks problem yet again. Nothing seems just right. Teaching multiplying a scalar by a matrix seems a waste of time, no matter how brief. All you do is multiply every number in the matrix by that value. Okay, got it.

They should know what an identity matrix is. This could actually have some useful implications in statistics. If your correlation matrix is close to an identity matrix, with 1s on the diagonal and 0s off the diagonal, then it tells you that your variables are uncorrelated. If you analyzed a matrix of random data, this is exactly what you would expect to get.

If you multiply a matrix by the identity matrix, I, you are going to get the original matrix as a result, hence the name, identity matrix.

IA = A

This is analogous to the identity property of scalar (that is, regular numbers, not matrices) multiplication that 1X = X
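A sketch of that property, assuming a plain nested-array representation (identity and mul are hypothetical helper names, not from anything above):

```javascript
// Build an n x n identity matrix: 1s on the diagonal, 0s elsewhere.
function identity(n) {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => (i === j ? 1 : 0))
  );
}

// Plain row-by-column multiply (square matrices only, for brevity).
function mul(A, B) {
  return A.map(row =>
    B[0].map((_, j) => row.reduce((s, a, k) => s + a * B[k][j], 0))
  );
}

const A = [[2, 7], [1, 4]];
const I = identity(2);
console.log(mul(I, A)); // IA = A, the original matrix back
```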

For a 2 x 2 matrix of this form

a b

c d

the determinant is equal to

(ad – bc)

To find the inverse of a 2 x 2 matrix, you multiply the reciprocal of the determinant, that is, 1 / (ad – bc), by the following matrix

d -b

-c a
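Here is the same computation as a JavaScript sketch (det2 and inv2 are made-up names). Multiplying a matrix by its inverse gives back the identity matrix, which is an easy way to check the formula by hand:

```javascript
// Determinant and inverse of a 2 x 2 matrix [[a, b], [c, d]].
function det2(M) {
  const [[a, b], [c, d]] = M;
  return a * d - b * c;
}

function inv2(M) {
  const [[a, b], [c, d]] = M;
  const det = det2(M);
  if (det === 0) {
    throw new Error("Determinant is zero: no inverse exists");
  }
  // 1/det times [[d, -b], [-c, a]]
  return [[d / det, -b / det], [-c / det, a / det]];
}

const M = [[4, 7], [2, 6]];
console.log(det2(M)); // 4*6 - 7*2 = 10
console.log(inv2(M)); // [[0.6, -0.7], [-0.2, 0.4]]
```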

Here is a really good Khan Academy video on finding the inverse of a matrix.

This is particularly important in statistics because you will occasionally get a message on your output that the “determinant is zero” and it would be helpful to you if you understood what that meant and why it was important.

One important point here is that you need the determinant to find the inverse of a matrix. For example, to find the vector of regression coefficients, b, you would use this equation:

b = (X′X)⁻¹X′y

Notice here that you need to take the inverse of the product of the transpose of the X matrix and the X matrix. What if the determinant is zero? Well, you can’t divide by zero – SO THERE IS NO SOLUTION.

At this point, you want to start to chase down why the determinant is zero. Do you have redundant measures? Is there no variance in the sample?
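One way to see the redundant-measures case concretely: if one predictor column is an exact multiple of another, the determinant of X′X comes out to zero. A JavaScript sketch (the names xtx and det2 are invented for illustration):

```javascript
// X'X for an n-row matrix of predictor columns.
function xtx(X) {
  const cols = X[0].length;
  return Array.from({ length: cols }, (_, i) =>
    Array.from({ length: cols }, (_, j) =>
      X.reduce((s, row) => s + row[i] * row[j], 0)
    )
  );
}

// Determinant of a 2 x 2 matrix.
function det2(M) {
  return M[0][0] * M[1][1] - M[0][1] * M[1][0];
}

// Second column is exactly twice the first: linearly dependent,
// i.e. a redundant measure.
const X = [[1, 2], [3, 6], [5, 10]];
console.log(det2(xtx(X))); // 0, so no inverse and no solution
```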

All of this is very interesting to me personally, but aside from that, I keep asking myself whether the students really need an in-depth understanding of matrix algebra when it is all done by a computer. I really don’t know the answer to that, which is why I keep thinking about it.

### Oct

#### 1

# Flickering screens, stalled machines and not working like it used to

Filed Under Software, Technology | Leave a Comment

One definition of insanity is doing the same thing over and over, expecting different results. One thing that can drive programmers insane is doing the same thing over again and GETTING different results.

In a past life, working in tech support, I learned that whenever anyone calls and says,

I did it exactly like your example and it didn’t work for me.

- they are lying.

In my experience, when you have the same programming statements but get different results, something else is always different and that something is often the demands put on the system.

How can that be if your statements are the same? Let me give two examples, one using javascript and one using SAS.

**Javascript**

I had made a game using canvas and html5. The game had three layers. The background was the bottom layer, some game objects that mostly stayed in the same place were the middle layer and the top layer was a game object that changed with each move. The init function on load drew all the layers. On update, all three layers were updated by calling one function. All was well.

```javascript
function drawAll() {
    draw1();
    draw2();
    draw3();
}
```

Then, I made another game the exact same way and I could not get rid of the screen flicker every time the player piece on the top layer moved. I tried clearing the canvas between each re-draw which had solved the problem in the past. Nope. What finally did work, in case you run into this problem yourself, is that I only drew the background in the init function and never re-drew it.

```javascript
function init() {
    layer3 = document.getElementById("layer3");
    layer2 = document.getElementById("layer2");
    layer1 = document.getElementById("layer1");
    ctx = layer1.getContext('2d');
    ctx2 = layer2.getContext('2d');
    ctx3 = layer3.getContext('2d');
    window.addEventListener('keydown', getkeyAndMove, false);
    startwall();
    // The background (draw1) is drawn once here and never re-drawn.
    draw1();
    draw2();
    draw3();
}

function drawall() {
    draw2();
    draw3();
}
```

Problem solved. My conclusion was that the second program involved much more complicated drawing of objects. Instead of just placing an image here or there, the program needed to compute collisions, read lines from an array and draw objects, and the time it took was noticeable.

**SAS**

Several times I have written a program that worked wonderfully on a high performance computing cluster but crashed on my laptop, or failed on SAS on demand but worked beautifully on my desktop. The difference in all of those cases was that the processing requirements exceeded the capabilities of the machine. All is not lost in those cases. One pretty obvious but not always feasible solution is to use a different machine. When that isn’t an option, there are workarounds. For example, if I wanted students to analyze an enormous dataset, I could have them analyze the correlation matrix instead of trying to load a 100 GB dataset – but that is another post.

### Sep

#### 25

# Matrix of plots with SAS

Filed Under Software, statistics, Technology | Leave a Comment

When I was running out to the airport, I said I would explain how to get a single plot of all of your scatter plots with SAS. Let’s see if I can get this posted before the plane lands. Not likely but you never know …

It’s super simple

```sas
proc sgscatter data=sashelp.heart ;
    matrix ageatdeath ageatstart agechddiag mrw / group= sex ;
run ;
```

And there you go.

Statistical graphics from 10,000 feet. Is this a full service blog or what?

### Sep

#### 25

# Websurfing multivariate statistics

Filed Under Software, statistics, Technology | Leave a Comment

My life is upside down. All day, as my job, I was writing a program to get a little man to run around a maze, come out the other end and have a new screen come up with a math challenge question. Then, in the evening, I’m surfing the web for interesting bits to read on multivariate statistics.

I’m teaching a course this winter and could not find the Goldilocks textbook, you know, the one that is just right. They either had *no* details – just pointing and clicking to get results – or the wrong kind of details. One book had pages of equations, then code for producing some output with very little explanation and *no* interpretation of the output.

I finally did select a textbook but it was a little less thorough than I would like in some places. I decided to supplement it with required and optional readings from other sources. Thus, the websurfing.

One book I came across that is a good introduction for the beginning of a course is Applied Multivariate Statistical Analysis, by Hardle and Simar. You can buy the latest version for $61 but really, the introduction from 2003 is still applicable. I was delighted to see someone else start with the same point as I do – descriptive statistics.

Whether you have a multivariate question or univariate one, you should still begin with understanding your data. I really liked the way they used plots to visualize multiple variables. I knocked a few of these out in a minute using SAS Studio.

```sas
symbol1 v= squarefilled ;

proc gplot data=sashelp.heart ;
    plot ageatdeath*agechddiag = sex ;
    plot ageatdeath*ageatstart = sex ;
    plot ageatdeath*mrw = sex ;

Title "Male" ;
proc gchart data=sashelp.heart ;
    vbar mrw / midpoints = (70 to 270 by 10) ;
    where sex = "Male" ;
run ;

Title "Female" ;
proc gchart data=sashelp.heart ;
    vbar mrw / midpoints = (70 to 270 by 10) ;
    where sex = "Female" ;
run ;
```

If I had more time, I would show all of these plots on one page – a draftsman’s plot – but I’m running out to the airport in a minute. Maybe next time. Yes, I do realize these charts are probably terrible for people who are color-blind. Will work on that at some point also.

You can see that the age at diagnosis and death is linearly related. It seems there are many more males than females and the age at death seems higher for females.

You can see a larger picture here.

The picture with Metropolitan relative weight did not seem nearly as linear. The age plots make sense because, if you think about it, age at start and age at death HAVE to be related. You cannot be diagnosed at age 50 if you died when you were 30. It also seems as if there is more variance for women than men and the distribution is skewed positively for women.

The last two graphs seem to bear that out (you can see those here – click and scroll down), which makes me want to do a PROC UNIVARIATE and a t-test. It also makes me wonder if it’s possible that weight could be related to age at death for men but not for women. Or, it could just be that as you get older, you put on weight, and older people are more likely to die.

My point is that some simple graphs can allow you to look at your variables in 3 dimensions and then compare the relationships of multiple 3-dimensional graphs.

Gotta run – taking a group of middle school students to a judo tournament in Kansas City. Life is never boring.

### Sep

#### 22

# SAS Studio – where and wow

Filed Under Software, statistics, Technology | Leave a Comment

I’m pretty certain I did not deliberately hide these folders. When I opened up my new and improved SAS Studio, it had tasks but my programs were missing.

If this happens to you and you are full of sadness missing your programs, look to the top right of your screen where you see some horizontal lines. Click on those lines.

A menu will drop down that has the VIEW option. Select that and select FOLDERS. Now you can view the folders where your programs reside. Based on this, you might think I’m against using the tasks. You’d be wrong. I just like having the option to write SAS code if needed. The tasks are super easy to use and students are going to love this. Check my multiple regression for an example.

I selected the data set from the SASHELP library, the dependent and independent variables and there you are – ANOVA table, parameter estimates, plot of dependent observed vs predicted and if you scrolled down – not here because this is just a screen shot, but in SAS Studio, you’d see all of your diagnostic plots. Awesome sauce.

### Sep

#### 20

# FINALLY we look like we know what we’re doing

Filed Under Dr. De Mars General Life Ramblings, The Julia Group | 2 Comments

Yes, I do realize that I’m probably far more excited about our new website coming on line than is normal. Several points here on a Friday night:

- I completely disagree with those entrepreneurs who say, “You sell the sizzle not the steak” when what they mean is that they really don’t have a good product but just a good story a.k.a. a line of bullshit.
- I think we have benefited from never hiring anyone in our company who has experience as a middle manager.
- You’re better off having a great product and a lousy website than the other way around.
- Not having too much money can be a benefit when starting a business.

Thanks to Jon Sullivan for the yummy steak photo

Back in the paleolithic era when I was in undergraduate marketing classes, they drilled into us the four P’s – product, price, promotion and place. There were lots of things I learned in business school that I disagreed with, but one I have found to be true to this day is that the most important of those four P’s is product. If your product is terrible, you may get people to buy it once if it’s cheap enough, they live close enough or you advertise it enough, but they aren’t going to buy it again.

Since we began 7 Generation Games, our priority has been making math awesome. Our first game had a lot of problems, many of them due to incompatibilities with web browsers and to being stopped by school district firewalls. Ever call technical support and have the person on the other end of the line say to you,

“Well, it works on my computer.”

Yeah, it was like that. So, we have been working like crazy to add every feature, correct every bug reported by our infinitely patient and wonderful alpha and beta testers (we love you guys). We still have, literally, hundreds of improvements we want to make, and I expect we always will. I work on them every day. Spirit Lake: The Game works. It doesn’t crash, it has lots of math and kids like to play it. Fish Lake is in process. Making a good game was our highest priority and still is. We just hired another developer (yay!) to help us out, are ramping up the artwork for the next two games, hired people as testers, an audio engineer …

Now that we have more people working in our company we have started to implement some actual policies and procedures. We have a git repository, use a source management system, an issues tracking system, a file sharing system. We signed up for Amazon Web Services, Google Apps for Work, Basecamp, some payroll system Donna manages – a lot of stuff I thought would be useless for us at the beginning. This is why I am glad we never hired anyone who had been a middle manager – because I was right. That stuff would have been useless for us at the beginning. It would have wasted our time and kept us from doing the most important work of making a good product. When do you add that layer of management? When you find yourself swearing,

“Damn it, we NEED a way to make sure you’re not copying over the changes I just made!”

When you only have two people working, and both in the same house, one can holler upstairs to the other,

“Hey, I’m working on level 4 today, okay? So, don’t touch it.”

At that point, you don’t need version control. Now, we do. When we did hire a project manager, we hired someone who had run a small business for ten years who shared our idea of having the degree of management you absolutely need and no more.

Finally, finally, finally, we are updating the 7 Generation Games website which, I believe, Maria originally put together in four hours one afternoon. It isn’t as if we didn’t know it needed a huge improvement. We believed our less than infinite time was best spent improving the game, meeting with customers, getting their feedback, designing more levels. We’re a small company. At Unite 2014, I attended a session where a developer mentioned they had 50 people working on their game for 2 1/2 years and it still wasn’t finished – that’s 125 person-years! That’s just people making the game – not managers, marketing, accounting. We’ve spent something more than 2.5 person-years developing ours, which explains why we constantly feel like we need to put every spare second into development.

Having the luxury to worry about the website says something about how we have matured as a company. With new people hired to take the non-development work off of us and additional people picking up some of the development work, we no longer can say,

“Having a spiffy new website is the least of our problems.”

In fact, it’s been bugging the hell out of me for a few months now. Did I feel bad about it? Yes. Like the source management system, when it got to the point where it felt like,

“Damn it, we need this!”

instead of,

“Brother, I got 99 problems and that ain’t one of ’em”

it was time to get it done.

I’ve had people tell me that we should have been working on our website with bells, whistles and gold tassels before now because “VCs won’t be impressed if you don’t have a professional website.”

Hmm. Not sure VCs will be impressed if you don’t have a product, either. I know companies that started about the same time as 7 Generation Games and had terrific websites, brochures, every social media account you can imagine, unbelievably honed pitches – and they evaporated because they were all sizzle and no steak.

I’ve written before about Paul Hawken’s recommendation that in growing a business you do as much for yourself as possible. That’s a whole post in itself, but to cut to the point: you keep your overhead low, which means you don’t require external funding in the short run. You are more viable in the long run not just because you have low debt and low operating expenses, but because you also have the asset of everything you have learned yourself.

But we still hired someone else to update the website (-:

### Sep

#### 19

# Virtual Machine vs SAS On-demand for Academics

Filed Under Software, statistics, Technology | Leave a Comment

I’ve been pretty pleased with SAS Studio (the product formerly known as SAS Web Editor), so when Jodi sent me an email with information about using a virtual machine for the multivariate statistics course, I was a bit skeptical. Every time I’ve had to use a remote desktop connection to a virtual machine for SAS, it has been painfully slow. I’ve done it several times, but that was probably back in 2001, 2003 and 2008, when I was at sites that tried, and generally failed, to use SAS on virtual machines.

Your mileage may vary, and here is the danger of testing on a development machine: I have the second-best computer in the office, with 16GB of RAM and a 3.5 GHz Intel Core i7 processor. Everything from available space (175 GB) to download speed (27 Mbps) is probably better than the average student will have.

On the previous occasions when I was using SAS on a remote virtual machine, I had pretty good computers, too, for the time, but 6 to 13 years is a pretty dramatic difference in terms of technology.

That being said, the virtual machine offered levels of coolness not available with SAS Studio.

Firstly, size. I did a factor analysis with 206 variables and 51,000 observations because I’m weird like that. I wanted to see what would happen. It extracted 49 factors and performed a varimax rotation in 16.49 seconds. I don’t believe SAS Studio was created with this size of data set in mind.

Secondly, size again. The data sets on the virtual machine added up to several times more than the allowable space for a course directory in SAS on-demand.

Thirdly, it looked exactly like SAS because it was.

Now, I do realize that the virtual machine with SAS is probably only allowable if your university has a site wide license from SAS.

SAS Studio retains the significant advantage of being free and easy. It also seems to have morphed overnight. I don’t remember these tasks being on the left side, and while they look interesting and useful, they do NOT:

- Encompass all of the statistics students need to compute in my classes, e.g., population attributable risk.
- Explain where the heck my programs went that I wrote previously. I can still create a new program and save a program and it even shows the folders I had previously as choices to save the new program.

#1 is easily taken care of once I find out where the programs are saved: for statistics not available in the task selections, students can just write a program. I’ll look into that this weekend, since I have had to get up THREE days this week before 9 a.m. I am thinking I need to get some sleep.

From my initial takes of the latest versions of each, I think I will:

- Use SAS Studio for my biostatistics course because it is an easy, basic introduction AND, once I figure out where the programs are hidden, I can have students write some simple programs. (It may be in an obvious place but sleep deprivation does strange things to your brain.)
- Use the virtual machine for multivariate statistics because it allows for larger data sets and, although I did not have a similar size data set in SAS Studio, I am assuming it will run much faster.