Editor's Note: Data and analytics are big topics - and big priorities - in a lot of HR shops these days. Do they also bring big dilemmas? A Classic lesson drawn from our colleagues in Marketing, who went into this territory before us and learned a few things the hard way themselves.
As we in HR work to absorb and apply the possibilities that data and analytics present for the design and administration of employee pay, we often look to our peers in Marketing - always several steps ahead of us in this game - for lessons and insights. With this in mind, it is also appropriate that we heed the cautionary tales emerging from Marketing's leading-edge push into the Age of Algorithms - specifically, the degree to which Big Data may be creating a new potential and power to discriminate.
We're already seeing this dilemma rear its head in Marketing, as Michael Schrage highlights in his recent HBR blog post "Big Data's Dangerous New Era of Discrimination." In it, he shares a few examples of the segmentation opportunities that marketing analytics have created and raises important questions about how they might be used.
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:
• Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
• Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
• Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.
My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
HR systems are home to a host of demographic data about our employees. To what degree do we seek out and apply the insights that quite likely exist there as we manage, and try to improve, the productivity of our workforces?
Just one case in point. Several years ago, I wrote a post about a study titled "Rewarding a Multigenerational Workforce" that had just been released by WorldatWork. In the summary of findings, the study's authors noted that a majority of organizations responding (56%) did not, at that time, even consider generational differences when designing total rewards programs and gently chided employers for not realizing "the importance of evaluating the needs of each generation uniquely and rewarding them accordingly."
The study's intent was clearly not to encourage discrimination, but rather a more informed and evidence-based approach to reward design. Nonetheless, the concerns it raised for me - and for many who commented on that post - offer an early example of the dilemmas we may face as increasingly available data brings ever more specific revelations about how different worker segments prefer and respond to rewards. Sooner or later, we will need to ask ourselves the question Schrage raises (which I paraphrase below):
Where, in our corporate cultures and strategies, does value-added personalization and segmentation end and harmful discrimination begin?
Ann Bares is the Founder and Editor of Compensation Café, Author of Compensation Force and Managing Partner of Altura Consulting Group LLC, where she provides compensation consulting and survey administration services to a wide range of client organizations. She earned her M.B.A. at Northwestern University’s Kellogg School and enjoys reading in her spare time. Follow her on Twitter at @annbares.
I read this article twice, and I almost began to wonder whether the answer to Michael Schrage's question - about where the benefits of customized segmentation end and discrimination begins - just might be "Never."
If our corporate and societal responses are tailored and optimized to meet and satisfy each individual's motivations and preferences, is the "harm" of discrimination just a hypothetical definition or result - and not a practical outcome? Which brings us back around to the need to take actions that benefit individuals, whether they like it or not?
Now I'm sorry I even started thinking about this . . .
Posted by: Chris Dobyns | 03/26/2019 at 09:07 AM
Oh the fun of making a friend's head explode!
The need to take actions that benefit individuals, whether they like it or not? I feel the need to tread carefully there. Are we smart enough to understand the layers and nuances of, and interactions between, these individual differences (what we believe to be individuals' motivations and preferences, based on our own assumptions and predilections) and appreciate the full consequences of all the actions we believe should be taken "on their behalf?"
Having seen myself in action, I sure know that I am not.
Thanks for pushing the conversation forward, Chris!
Posted by: Ann Bares | 03/26/2019 at 10:40 AM
Funny (well, not really . . .).
And you know that when I talk about individual motivations and preferences, I'm referring of course to the near-term future state-of-play in that domain - where the datasets (and their availability) regarding the motivations and preferences of, say, Ann Bares are known factually, and aren't the product of somebody's best guess or assumption or bias. And the dataset (yes, about Ann . . .) is capable of being updated on a nearly real-time basis (since we're all always changing in some small ways).
Given all of that, do I really want someone taking actions to ensure I'm not being discriminated against, when, with a high order of probability, I'm NOT going to like what they're going to do?
Posted by: Chris Dobyns | 03/26/2019 at 11:45 AM
I suspect that, as with most advances in technology, there will be a great many blunders, abuses (intentional and unintentional), and more than a few spectacular pyres fueled with the best of intentions.
Two anecdotes come to mind. First, last fall, I listened to a senior analyst from a major competing consulting firm discuss the gender pay gap. His firm had analyzed tens of thousands of data points and concluded that, with the right multivariate analysis, 95% of a person's pay can be explained by the non-gender data in their HR record. But within a given organization, there might be 75 variables that drive pay. That's big data! Yet how many leaders, HR professionals, or employees can think in terms of 75 variables or explain their interplay? So, it's great that pay can be explained with big data, but who'll believe it?
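To make that concrete: the firm's actual model and variables weren't shared, so the sketch below is purely hypothetical - invented feature names, simulated data - but it shows the general shape of that kind of multivariate analysis: regress pay on the non-gender variables and see how much of the variance they explain.

```python
# Hypothetical sketch only: the consulting firm's real model and data
# were not published. We simulate HR records where pay is driven by
# job level, tenure, market premium, and performance -- not gender --
# then check how much pay variance the non-gender variables explain.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 10_000

df = pd.DataFrame({
    "job_level": rng.integers(1, 8, n),
    "tenure_years": rng.uniform(0, 25, n),
    "market_premium": rng.normal(1.0, 0.1, n),
    "performance": rng.integers(1, 6, n),
    "gender": rng.integers(0, 2, n),  # recorded, but never used to set pay
})
df["pay"] = (
    40_000
    + 12_000 * df["job_level"]
    + 800 * df["tenure_years"]
    + 20_000 * (df["market_premium"] - 1.0)
    + 2_500 * df["performance"]
    + rng.normal(0, 6_000, n)  # unexplained noise
)

# Regress pay on the non-gender variables only.
features = ["job_level", "tenure_years", "market_premium", "performance"]
model = LinearRegression().fit(df[features], df["pay"])
r2 = r2_score(df["pay"], model.predict(df[features]))
print(f"Share of pay variance explained by non-gender variables: {r2:.2f}")
```

With four drivers the result is easy to believe; with 75 interacting variables, the math works exactly the same way, but the intuition doesn't follow.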
Which brings me to my second anecdote. At a recent AI discussion, we were told of an artificial intelligence tool that had become quite adroit at predicting the diagnosis of a particular, but very complex, medical condition. The tool performed exceptionally well, repeatedly, but the doctors didn't understand why it was connecting certain "dots" when making its predictions. And because they didn't understand why, they decided to discontinue using the tool. They didn't believe it.
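Again, purely as illustration - the medical tool in this story wasn't described in any technical detail - one standard way practitioners probe which "dots" a black-box model is connecting is permutation importance: shuffle one input at a time and see how much the predictions degrade. A sketch on synthetic data:

```python
# Hypothetical sketch: probing a black-box classifier with permutation
# importance. The diagnostic task and data here are synthetic; the real
# medical tool from the anecdote was not described technically.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "diagnosis" task: 20 inputs, only a few truly informative.
X, y = make_classification(n_samples=2_000, n_features=20,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Shuffle each feature in turn and measure the accuracy drop; large
# drops mark the inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: accuracy drop when shuffled = "
          f"{result.importances_mean[i]:.3f}")
```

Checks like this can show *which* inputs matter, but not always *why* they matter - which is precisely the gap that killed the doctors' trust.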
Both examples, I hope, underscore the human element in handling, interpreting, using, and ultimately believing, big data. I suspect that many people are not "smart enough to understand the layers and nuances of" big data (and by "smart enough," I really mean smart in the sense of wisdom and experience rather than IQ).
So, I say blaze a big data trail, watch out for snakes - both on the ground and in the trees above you - and above all, remember that with big data should come big responsibility. We would do well to keep the Hippocratic Oath in mind when using big data to drive decisions.
Posted by: Joe Thompson | 03/26/2019 at 02:32 PM
Thanks Chris and Joe for the comments - much appreciated.
I do understand that the data reflect observed facts and micro-facts that can be known in real time, but there is still the need to interpret those facts, infer some meaning (backed, naturally, by extensive analysis), and determine the policy prescription that aligns with different factual "profiles." In the end, I think I come to the same conclusion Chris does: despite someone's data-based conclusion that there is a high probability I will (should?) like what they decide to do, I'm not so sure I will.
It is, as Joe underscores, the human element on both sides of this equation that makes this path tricky. With big data does come big responsibility. (Why does that sound like a Sheldon Cooperism?) Hopefully we're up to the task. :)
Posted by: Ann Bares | 03/26/2019 at 04:20 PM