As you may know, I'm a huge advocate of proactive analyses. I think everyone should be looking at their compensation decisions with respect to internal equity. Formally reviewing your compensation decisions will help you to identify potential problems - before someone else does and you end up in court.

The best "formal" tool for looking at compensation with respect to internal equity is regression analysis. Don't worry - I'm not suggesting you sign up for a statistics course. These analyses aren't typically done in-house; you'll retain outside consultants to do these analyses for you. Even though you won't be doing these studies yourself, it's still important you understand how these studies work.

I'll spare you the pain of a discussion of regression analysis, but you do need to understand one basic statistical concept - statistical significance. Before your math phobia kicks in, we're going to talk about statistical significance in terms of a coin flip. No hardcore math - I promise!

Let's assume that you and I are on a Skype video call, and we set up a friendly bet. I'm going to flip a coin 10 times. For every head, you pay me $1, and for every tail, I pay you $1. So, I flip the coin 10 times, tell you that I got 10 heads, and ask you to pay me $10.

10 heads out of 10 flips is pretty high, and you're a little suspicious of the results. You couldn't see the results of the coin flips on the webcam, so there's no visual evidence of me cheating. Have you been cheated? How do you know?

We can use statistics to tell whether you were cheated. The likelihood of getting a head on any flip is 50%. So, out of 10 flips we would expect 5 heads (50% multiplied by 10). But I told you I got 10 heads. Is the difference - my reported 10 heads minus the expected 5 heads - "big enough" for you to think you were cheated? The likelihood of getting 10 heads out of 10 flips is about 1 in 1,000 (1 in 1,024, to be exact). Is this rare enough for you to accuse me of cheating?
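For readers who want to check that "about 1 in 1,000" figure themselves, here's a minimal sketch in Python. It simply multiplies the 50% chance of a head by itself ten times:

```python
from fractions import Fraction

# Probability of a head on any single flip of a fair coin
p_head = Fraction(1, 2)

# Probability of 10 heads in 10 independent flips: (1/2)^10
p_ten_heads = p_head ** 10

print(p_ten_heads)         # 1/1024
print(float(p_ten_heads))  # roughly 0.001, i.e., about 1 in 1,000
```

Using exact fractions rather than decimals makes it easy to see where the "1 in 1,024" comes from.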

We can figure out if this is "rare enough" using standard deviation. Standard deviation is a statistical way of expressing the difference between what you *expected* to see and what you *actually* saw. Here's the general rule of thumb (used by social scientists and adopted by the court system):

If a difference is more than 2 units of standard deviation, the result is sufficiently rare that we can conclude chance is not the likely explanation

In our coin-flip game, 10 heads out of 10 flips is equivalent to about 3.16 units of standard deviation. Using our rule of thumb, you would accuse me of cheating.
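If you're curious where that number comes from, here's a short sketch. It uses the standard deviation of a binomial outcome (number of heads in n fair flips) and divides the gap between observed and expected heads by it - the usual normal-approximation shortcut, not the only way to do this calculation:

```python
import math

n, p = 10, 0.5           # 10 flips of a fair coin
observed = 10            # heads I reported
expected = n * p         # 5 heads expected by chance

# Standard deviation of the number of heads for a binomial outcome
std_dev = math.sqrt(n * p * (1 - p))   # sqrt(2.5), about 1.58

z = (observed - expected) / std_dev
print(round(z, 2))       # 3.16 - well past the 2-standard-deviation rule of thumb
```

Since 3.16 is comfortably above 2, chance is not the likely explanation for my run of heads.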

So what does all this coin-flipping have to do with compensation analysis?

After your consultant runs the regression analysis, she'll probably give you a summary that shows the average difference in pay between men and women (or whites and nonwhites, etc.). This summary will also show the units of standard deviation of the difference. So let's assume that the analysis shows a difference of $7,500 per year between men and women. Should you be concerned?

It depends... If the $7,500 difference is not statistically significant, then - from a statistical point of view - it's not different from a $0 difference. If it's not statistically significant, you can't conclude that the difference is due to anything other than chance. Neither can anyone else, which means that difference can't be used to argue discrimination.

But what if the difference is only $75 per year? $75 isn't much - in fact, it's about $1.44 per week. No big deal, right? It depends... If that $75 difference is statistically significant, someone might be able to use that result to support a claim of discrimination.

The bottom line is that the *size* of the difference is not what really matters, it's whether that difference is statistically significant.
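To see why size and significance can come apart, here's a simplified illustration with made-up numbers (real analyses use regression with controls for legitimate pay factors, so treat this only as intuition). The helper computes a two-sample z-statistic for a difference in group mean pay; whether it clears the 2-standard-deviation threshold depends on how large and how uniform the groups are, not on the dollar size of the gap:

```python
import math

def z_stat(mean_diff, sd1, n1, sd2, n2):
    """Two-sample z-statistic for a difference in group mean pay."""
    standard_error = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return mean_diff / standard_error

# Hypothetical: a $7,500 gap between two small groups with widely varying pay
print(round(z_stat(7_500, 20_000, 15, 20_000, 15), 2))  # about 1.03 - not significant

# Hypothetical: a $75 gap between two very large, very uniform groups
print(round(z_stat(75, 500, 50_000, 500, 50_000), 2))   # about 23.72 - highly significant
```

In the first case the $7,500 gap is statistically indistinguishable from $0; in the second, the $75 gap is extremely unlikely to be chance.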

*(Note: For those familiar with statistics, the likelihood presented above is based on a one-tailed test. Since we know that the observed outcome (10 heads) is greater than the expected outcome (5 heads), we are only looking at one side of the probability distribution. Essentially, the question is how likely it is that we would see a result at least as extreme as the one observed - here, 10 or more heads. Thus, a one-tailed test is appropriate in this case.)*

*Stephanie R. Thomas is an economic and statistical consultant specializing in EEO issues and employment litigation risk management. For more than a decade, she's been working with businesses and government agencies providing expert EEO analysis. Stephanie has published several articles on examining compensation systems with respect to equity. She is the host of The Proactive Employer, and is the Director of the Equal Employment Advisory and Litigation Support Division of MCG.*

WorldatWork has a certification course that specifically covers the statistical analysis of differences between male and female pay distributions -- Quantitative Methods (T3) at http://www.worldatwork.org/waw/adim/seminars/html/seminars-t3.jsp.

Posted by: Paul Weatherhead | 07/16/2010 at 06:45 PM

Stephanie,

Nice post!

Doesn't the idea of doing a proactive analysis present a management conundrum?

On the one hand, if your pay design team and management are sensitive to diversity issues, they should have no problem with doing a proactive analysis, right?

But on the other hand, if the same team does a proactive analysis, doesn't that analysis then become discoverable, subjecting the employer to legal vulnerabilities they wouldn't otherwise have had to face?

Thoughts?

Posted by: Paul Weatherhead | 07/16/2010 at 07:02 PM

Hi Paul,

Thanks for the comment. In response to your question, I don't think proactive analyses present a conundrum if they're done under the auspices of legal counsel. I advise my clients that their legal department - and ideally outside counsel - should be involved from the very beginning. I also advise that even though in-house personnel (e.g., pay design team) *may* be capable of doing the analyses, they should not do them for the very reason you mention in your question.

If the analyses are done by outside experts at the request of legal counsel, the results of those analyses are protected by the attorney-client privilege. Please keep in mind that I'm not an attorney, and can't give you a legal opinion on privilege, but it's been my experience that the analyses I've performed as an outside expert have been, in every project on which I've worked, protected from discoverability by the attorney-client privilege.

Privilege issues can get pretty complicated pretty quickly, particularly if internal personnel are involved. I discussed this issue with Paul Secunda of the Marquette University Law School a couple of weeks ago in my podcast (I can provide the details if you're interested in listening to the podcast).

One final thing to consider - if your in-house team performs this analysis, and you end up in litigation, you will likely have to defend the results of your analyses (either to a judge or a jury). An objective assessment by an outside expert will likely carry more weight than one performed by the defendant...

Analyses done on behalf of outside counsel by outside experts do not subject the employer to legal vulnerabilities they wouldn't otherwise have had to face. In fact, the opposite is true; by doing these proactive analyses, the employer can identify potential problem areas - before someone else does - and take corrective action where appropriate. This can reduce your vulnerability and go a long way in reducing your exposure to litigation.

Posted by: Stephanie R. Thomas, Ph.D. | 07/16/2010 at 08:19 PM

Best answer I have ever seen to the question, Stephanie. Thank you very much!

Posted by: Paul Weatherhead | 07/17/2010 at 07:49 AM