As we in the rewards field move forward to better understand and embrace the promise of data and analytics, we do well to heed the lessons of the forerunners in this field. Who better to learn from than Google, famous for making all manner of decisions based on data crunching?
In an address at the company's re:Work conference, Google's VP of People Analytics and Compensation, Prasad Setty, shared the story of a very instructive failure that his team faced.
Setty noted that when he first joined Google in his role (7 years ago), one of the first exercises he was involved in was creating a rallying cry for the new People Analytics team. The result, a mission statement of sorts for the team, was as follows:
All people decisions at Google are based on data and analytics
"We wanted data to spit out our people decisions," Setty says.
Fast forward to what has historically been one of Google's most important rituals: a twice-annual summit built around making promotion decisions for software engineers. Promotions are a big deal at Google, a critical reward element - particularly for this core group of employees. Twice each year, hundreds of the firm's most senior engineers from around the world come to the Bay Area, where they camp out at the Santa Clara Marriott for several days of meetings and small group conversations. It is a complex process, involving nominations, appeals, appeals of appeals and so on, that Setty likens to the Supreme Court. The outcome of this very people-intensive proceeding: decisions on which software engineers should receive promotions.
In light of the enormous amounts of time and energy being invested in this process, the People Analytics team decided to step in and help their "engineering brothers and sisters" find a way to make these decisions more efficiently. Using the available promotion data and analytics, they developed an algorithm: a decision-making model to determine which employees should receive promotions. Here it is:
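To make the idea concrete, a scoring model of this general kind might look something like the toy sketch below. This is purely an illustration, not Google's actual formula; the inputs, weights and cutoff are entirely hypothetical.

# Hypothetical illustration only - not Google's actual promotion model.
# The features, weights and threshold below are made-up assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    avg_perf: float        # average performance rating, e.g. on a 1-5 scale
    years_in_level: float  # time spent at the current level
    peer_score: float      # normalized peer feedback, 0.0-1.0

def promotion_score(c: Candidate) -> float:
    # Weighted sum of the (hypothetical) inputs
    return 0.6 * c.avg_perf + 0.25 * c.years_in_level + 0.15 * (5 * c.peer_score)

def recommend_promotion(c: Candidate, cutoff: float = 3.5) -> bool:
    # Recommend promotion when the score clears the cutoff
    return promotion_score(c) >= cutoff

# Example: a strong performer two years into her current level
print(recommend_promotion(Candidate(avg_perf=4.2, years_in_level=2.0, peer_score=0.8)))

In practice a model like this would presumably be fit to historical promotion outcomes rather than hand-weighted, which is closer in spirit to what the People Analytics team did with its data.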
The People Analytics team was excited. Their model showed 90% accuracy in predicting about a third of the promotion cases, and their testing showed it to be reliable and stable across multiple promotion cycles. They thought that their engineering cohorts -- people who lived in the realm of search and ad algorithms all day long -- would love it. But they didn't. Not, according to Setty, one bit.
Why? Setty explains:
They didn't want to hide behind a black box. They wanted to own the decisions. They didn't want to use a model to do so.
While the promotion committees, unwilling to use the model for decision-making, did find it useful for examining and understanding the decisions they made, Setty took another lesson from it for himself and his team: what he refers to as his "Homer Simpson D'oh moment."
He elaborates on that lesson and the impact it had on his team's work going forward:
People Analytics wasn't going to be in the business of developing algorithms to substitute for or replace human decision makers. Instead, what we were going to be all about was to arm these people with much better relevant information so that they can be capable of making better decisions.
Makes a lot of sense to me. How about you?
Watch Setty's entire re:Work presentation here.
Anyone else with data analytic failure/lessons to share?
Ann Bares is the Founder and Editor of the Compensation Café, Author of Compensation Force, and Managing Partner of Altura Consulting Group LLC. Do you buy that our organizations are what we reward and that Everything We Do in Compensation is Communication? Check out the definitive book on the topic that Ann co-authored with Cafe cohorts Margaret and Dan right here. Ann also serves as President of the Twin Cities Compensation Network (the most awesome local reward network on the planet) and is a member of the Advisory Board of the Compensation & Benefits Review. She earned her M.B.A. at Northwestern University’s Kellogg School and is a foodie and bookhound in her spare time. Follow her on Twitter at @annbares.
Hi Ann,
Good post. It is fantastic to hear that their managers want to own their decisions and not hide behind a “black box” or HR. I don’t know all the underlying data that Google has to measure engineer performance, but in my experience it is a difficult function to measure completely objectively. You can measure how much code is written and the efficiency of that code, but how do you objectively measure innovation? I’m sure Google is better than most at quantifying performance. It sounds like their model was successful in predicting who would be promoted, but I did not read whether it predicted the performance of those promoted. There is a big difference between the two. If they could predict future performance, I wonder whether that would have some role in the process.
In my experience there are areas of a business that can be more objective in the measurement of performance than others. Data analytics is not a perfect science but can play a role if balanced appropriately. In the right context it can be used as a data input but not the only input. It is important that the approach fits the culture and workforce strategy.
Posted by: Trevor Norcross | 12/12/2014 at 01:01 PM
All human decisions are B.A.D. decisions, at best relying on the Best Available Data known at the time.
Posted by: E. James (Jim) Brennan | 12/12/2014 at 04:46 PM
Yes, good (and timely), as I was part of a working group today (Friday), including some of our senior managers - discussing primarily promotion budget allocations, but also the actual, individual promotion decisions, what they should be based on, and who should make (and own) them.
And before we attribute data-based decision-making infallibility too unequivocally to our friends in Mountain View . . . a funny story. About five years ago, when Google announced its "astounding" 10 percent across-the-board increase for its entire workforce, I asked a top-level Google HR manager where they came up with 10 percent as the "right" amount. And strangely, he couldn't trace back the basis for that decision. When I suggested the psycho-physics construct of Just Noticeable Differences (the sensory threshold where a "difference" is detectable to people), he acknowledged that could have been the (subconscious) basis. Et tu, data?
Posted by: Chris Dobyns | 12/13/2014 at 09:50 AM
Trevor:
All good points - we don't know (and aren't told) what kinds of measures go into the "Avrg Perf" element of the algorithm. And it sounds as though the Google People Analytics team has reached the same conclusion that you advocate - that analytics are better used for context and input to decision-making, rather than to spit out automatic decisions with no human intervention.
Jim:
B.A.D. is a very fitting acronym!
Chris:
Great Google story. Et tu, data indeed!
Thanks all for the comments and observations!
Posted by: Ann Bares | 12/15/2014 at 08:28 AM