Salary surveys rarely agree on much of anything. Except in the most general terms, pay surveys usually report results that vary from one another.
There are many good reasons for survey differences.
There would be no market for redundant surveys that all said the same thing. Why pay twice for the same information?
Without variable results, buyers could not pick and choose among the information sources they prefer to rely on or to ignore.
Business models also make a big difference to the results published in the professional pay survey trade. That is probably the most significant reason for surveys to vary, because collecting and analyzing really good data carefully, consistently, and continually is an extremely expensive process. You must ensure there is a large market willing to pay for it, to cover your costs and offer a bit of profit.
Commercial surveys with a long-established history of high reliability and assured acceptance in boardrooms, government agencies and courts may cost the most because they remain the best. On the other hand, if you produce free or el cheapo calculators for job-seekers, especially if you want to steer them to your employment agency (or one of your advertisers), you tilt the numbers or selectively cherry-pick your sources to produce exaggerated anticipated pay figures that will please applicants. If you target employers, you similarly craft your survey product to their needs. Some vendors have been rumored to flaunt wildly high salaries to employees, whose complaints then prompt their management to buy different surveys showing lower numbers from the same vendor. Sneaky but effective, if you are a marketing type rather than an HR/comp professional and can appear on the doorstep right after a disgruntled employee has presented an outlandish pay rate from one of your “less accurate” free or cheap products.
Each separate survey usually asks different questions (or reports them in different ways) and uses different benchmark titles for different markets and industries. Making their results unique supports the other reasons above.
Most compensation surveys vary in rather consistent ways. Those from BigConsultingGiants run a bit high (concentrating on data from their massive and profitable clients, one may presume). Government surveys run low (lacking size/profit sensitivity, concentrating on well-established conservative employers, etc.). Various dot-com calculators produce REALLY high numbers while others run a bit low, depending on their sourcing and business models. Some medium price-range surveys are usually right on the norm or slightly above it. Basically, pay surveys generally reflect their demographic source characteristics; i.e., surveys of corporations tend to yield higher pay rates than surveys covering mom & pops, charities, small public agencies, etc., which offer non-cash reward advantages ranging from ownership and intrinsics to security, benefits and such. One of the problems with identifying the true “norm” of all organizations is that no aggressive employer wants to be merely the average enterprise.
A few companies offer data aggregation, which can be problematic due to obvious provenance credibility issues and especially hidden redundancy. If you participate in both the X survey and the Y survey and get their results either free or at a discount, while vendor A combines the copyrighted summaries from X and Y, vendor A's "composite survey" gives you nothing new or different from what you could derive yourself from the X and Y surveys you already have. In fact, it misleads you into thinking you have a third independent source when you don't, really: you just have the same data repeated again. It also may be impossible to learn how the X and Y data have been weighted in the amalgamation, if they were not simply averaged. Aggregation lets the vendor sell a truthy-looking knockoff product with minimal research and analysis costs, while legitimate surveyors (not marketing firms advertising themselves as pay data suppliers) disappear.
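The arithmetic behind that complaint is easy to see. Here is a minimal, hypothetical sketch (the function name, figures, and weights are all invented for illustration, not drawn from any actual vendor): a "composite" market rate built from two surveys' published medians is just a weighted mean of numbers a participant in both surveys already holds, and undisclosed weights let the same inputs yield noticeably different "market rates."

```python
def composite(median_x: float, median_y: float, weight_x: float = 0.5) -> float:
    """Blend two survey medians into a 'composite' figure.

    weight_x is the share given to survey X; aggregation vendors
    rarely disclose this value, yet it drives the result.
    """
    return weight_x * median_x + (1.0 - weight_x) * median_y

# Same two source medians, different undisclosed weights:
equal_blend = composite(95_000, 105_000)               # simple average
tilted_blend = composite(95_000, 105_000, weight_x=0.8)  # leans on survey X
print(equal_blend, tilted_blend)
```

Nothing in the output is independent information: anyone holding the X and Y results can reproduce any such blend, which is why a composite should never be counted as a third data source.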
Long story short, the more independent transparent surveys there are, the better off we all are. The shrinkage of the pool of comprehensive, reliable pay surveys that actually send out (or display) their data-gathering questionnaires and openly publish their methodologies and reliability statistics should be a matter of professional concern.
What other observations about compensation surveys should be made?
E. James (Jim) Brennan was Senior Associate of ERI Economic Research Institute, the premier publisher of interactive pay and living-cost surveys. After over 40 years in HR corporate and consulting roles throughout the U.S. and Canada, he’s pretty much been there done that (articles, books, speeches, seminars, radio/TV, advisory posts, in-trial expert witness stuff, etc.), serves on the Advisory Board of the Compensation and Benefits Review and will express his opinion on almost anything.
Image "Disagree Agree Undecided Keys" courtesy of Stuart Miles/FreeDigitalPhotos.net
It's inherently very noisy data. My own heuristic (YMMV) is that if you're within +/- 5% of some targeted number, you're there.
Posted by: Tony Bergmann-Porter | 01/14/2015 at 07:10 PM
Agree, Tony, that whatever method you use, such a tiny standard error suggests an extremely "normal" figure.
Posted by: E. James (Jim) Brennan | 01/14/2015 at 10:11 PM
I agree with Tony as well. The goal for me is to get close to the market (you will never get perfect accuracy) and build the ranges from there. That should satisfy the pay expectation well enough for most people to get them in the door; hopefully, other things we do will make the company a great place to work so they will want to stay.
Posted by: Scott Boynton | 01/15/2015 at 11:57 AM
Valid point, Scott. The old axiom "the perfect is the enemy of the good" frequently applies in these cases.
Posted by: E. James (Jim) Brennan | 01/18/2015 at 11:59 PM