
03/25/2019

Comments


I read this article twice, and I almost began to wonder whether the answer to Michael Schrage's question about where the benefits of customized segmentation end and discrimination begins just might be "Never."

If our corporate and societal responses are tailored and optimized to meet and satisfy each individual's motivations and preferences, is the "harm" of discrimination just a hypothetical definition or result - and not a practical outcome? Which brings us back around to the need to take actions that benefit individuals, whether they like it or not?

Now I'm sorry I even started thinking about this . . .

Oh the fun of making a friend's head explode!

The need to take actions that benefit individuals, whether they like it or not? I feel the need to tread carefully there. Are we smart enough to understand the layers and nuances of, and interactions between, these individual differences (what we believe to be individuals' motivations and preferences, based on our own assumptions and predilections) and to appreciate the full consequences of all the actions we believe should be taken "on their behalf"?

Having seen myself in action, I sure know that I am not.

Thanks for pushing the conversation forward, Chris!

Funny (well, not really . . .).

And you know that when I talk about individual motivations and preferences, I'm referring of course to the near-term future state-of-play in that domain, where the datasets (and their availability) regarding the motivations and preferences of, say, Ann Bares are known factually and aren't the product of somebody's best guess or assumption or bias. And the dataset (yes, about Ann . . .) is capable of being updated on a nearly real-time basis (since we're all always changing in some small ways).

Given all of that, do I really want someone taking actions to ensure I'm not being discriminated against when, with a high order of probability, I'm NOT going to like what they're going to do?

I suspect that, as with most advances in technology, there will be a great many blunders, abuses (intentional and unintentional), and more than a few spectacular pyres fueled with the best of intentions.

Two anecdotes come to mind. First, last fall, I listened to a senior analyst from a major competing consulting firm discuss the gender pay gap. His firm had analyzed tens of thousands of data points and concluded that 95% of a person's pay can be explained by non-gender data in their HR record with the right multivariate analysis. But, within a given organization, there might be 75 variables that drive pay. That's big data! Yet how many leaders, HR professionals, or employees can think in terms of 75 variables or explain their interplay? So, it's great that pay can be explained with big data, but who'll believe it?
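For anyone curious what that kind of analysis looks like in practice, here is a minimal sketch in Python: regress pay on non-gender HR variables and report the share of variance explained (R-squared). The variable names and data below are entirely made up for illustration; the analyst's actual model and its 75 variables were not shared.

# Sketch only: fit a multivariate regression of pay on non-gender HR
# variables and report R^2 (share of pay variance explained).
# All features and the synthetic data are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical HR-record features (stand-ins for the ~75 real drivers of pay).
hr = pd.DataFrame({
    "years_experience": rng.integers(0, 30, n),
    "job_level": rng.integers(1, 8, n),
    "performance_rating": rng.integers(1, 5, n),
    "market_index": rng.normal(1.0, 0.1, n),
})

# Synthetic pay built from those features plus noise, for illustration only.
pay = (40_000
       + 1_500 * hr["years_experience"]
       + 9_000 * hr["job_level"]
       + 2_500 * hr["performance_rating"]
       + 20_000 * (hr["market_index"] - 1.0)
       + rng.normal(0, 4_000, n))

model = LinearRegression().fit(hr, pay)
explained = r2_score(pay, model.predict(hr))
print(f"Share of pay variance explained by non-gender variables: {explained:.1%}")

The point of the sketch isn't the number it prints; it's that even a simple model like this becomes hard to narrate once you scale it from four illustrative variables to 75 real ones.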

Which brings me to my second anecdote. At a recent AI discussion, we were told of an artificial intelligence tool that had become quite adroit at predicting the diagnosis of a particular, but very complex, medical condition. The tool performed exceptionally well, repeatedly, but the doctors didn't understand why the tool was connecting certain "dots" when making its predictions. And, because they didn't understand why, they decided to discontinue using the tool. They didn't believe it.

Both examples, I hope, underscore the human element in handling, interpreting, using, and ultimately believing big data. I suspect that many people are not "smart enough to understand the layers and nuances of" big data (and by "smart enough," I really mean smart in the sense of wisdom and experience rather than IQ).

So, I say blaze a big data trail, watch out for snakes - both on the ground and in the trees above you, and above all, remember that with big data should come big responsibility. We would do well to keep the Hippocratic Oath in mind when using big data to drive decisions.

Thanks Chris and Joe for the comments - much appreciated.

I do understand that the data reflects observed facts and micro-facts that can be known in real time, but there is still the need to interpret those facts, infer some meaning (backed up, naturally, by extensive analysis), and determine the policy prescription that aligns with different factual "profiles." In the end, I think I come to the same conclusion Chris does - that despite someone's data-based conclusion that there is a high probability I will (should?) like what they decide to do, I'm not so sure I will.

It is, as Joe underscores, the human element on both sides of this equation that makes this path tricky. With big data does come big responsibility. (Why does that sound like a Sheldon Cooperism?) Hopefully we're up to the task. :)
