(This article was originally published on Inside HR on 5 November 2018)
Talent management can be a very subjective activity. Judging future potential or leadership capability – identifying “talent” – invariably involves judgement calls and opinions based on experience. All too often this equates to bias, inconsistency and poor outcomes.
It is no surprise then that organisations are increasingly taking a more data-driven approach to talent management, with the promise of replacing human prejudice and unpredictability with hard facts, numbers and consistency.
A data-driven approach offers huge potential, but it too is far from perfect, and if embraced unthinkingly then our experience suggests this can cause just as many problems as the historic people-led model.
Here are six of the challenges we most commonly see clients wrestling with:
Dirty and inconsistent data
Data-driven analysis is only as good as the data you are analysing. Many organisations struggle to get one robust set of numbers for their financial accounts, so is it any wonder that talent data is often flaky at best? There is plenty of research demonstrating that performance ratings vary significantly depending on the rater – and those ratings are at least based on evidence of performance. Ratings for future potential and succession readiness are even less reliable. A fund manager we worked with used performance ratings as a key data point and discovered a huge problem: on their five-point scale, all staff with a “2” rating were classified as underperforming for the purpose of the analysis, yet a significant proportion of managers were using “2” as a “needs development” rating for staff new in role. Most talent numbers come from people, with all the inconsistency that brings.
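The fund-manager example can be sketched in a few lines. Everything here – the records, the tenure threshold, the meaning of a “2” – is hypothetical, purely to show how a blanket classification rule mislabels staff whose managers use a rating differently:

```python
# Hypothetical ratings data: (employee, rating, months_in_role).
# Assumed five-point scale where "2" is ambiguous: some managers use it
# for genuine underperformance, others as "needs development" for new hires.
records = [
    ("A", 2, 4),   # new in role: manager meant "needs development"
    ("B", 2, 36),  # long-tenured: "2" genuinely signals underperformance
    ("C", 1, 24),
    ("D", 3, 12),
]

# Naive rule from the fund-manager story: treat every "2" as underperforming
naive_underperformers = [e for e, r, _ in records if r == 2]

# A more careful rule excludes new-in-role staff, whose "2" may mean
# something quite different (12 months is an invented cut-off)
careful_underperformers = [e for e, r, t in records if r == 2 and t >= 12]

print(naive_underperformers)    # ['A', 'B']
print(careful_underperformers)  # ['B']
```

The data themselves never change; only the interpretation of what a “2” means – which is exactly where the fund manager’s analysis went wrong.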
Getting distracted by irrelevant insights
Employee engagement is higher on Mondays. Sentiment analysis shows that 7% of high potential staff feel “disappointed” after lunch. High performers take less time off sick. These are all data-driven insights from recent clients – but what actions do they drive? What should managers do to reduce disappointment levels, or should they even care? Insights are not really insightful if they don’t lead to actions to change the outcome.
While a purist would advocate tracking all talent data and seeing what unexpected trends and correlations emerge, this is often not practical given time and resource constraints. It’s important to focus on data that can inform action and drive better business outcomes, not simply generate interesting charts and chin-stroking.
Analysis paralysis
Too much data – ironically, even data that are useful and insightful – can cause just as many problems as too little. Leaders who are used to having reams of talent data to inform their decisions can end up over-dependent on it. This in turn leads to analysis paralysis, where managers just ask for one more data point, another table, or a graph showing the data in a slightly different way. Decisions are deferred, creating inertia, as managers reach for the comfort blanket of yet another number before committing.
This tendency is common in industries and functions used to working with numbers. Investment analysts, bankers and actuaries can all be tempted to ask for more data points, extending their functional belief in the primacy of numbers into the talent arena.
Confusing correlation and causation
Managers who spot a strong correlation between talent data and other metrics often get excited, seeing it as vindication for a data-driven approach: if attendance on a leadership program correlates with future promotion, then this proves the effectiveness of the program, right? Not for the client who found that managers were nominating higher performers to attend the program in the first place. And just because we found a link between completion of all onboarding activity and higher engagement, it doesn’t mean we could prove to our client that the onboarding program caused the engagement. A correlation between two data points is a great starting point for more analysis, but it doesn’t mean much without causation. After all, remember that there’s a near-perfect correlation between cheese consumption and being killed by your bedsheets.
Overconfidence in the data
The underlying promise of taking a more data-driven approach to talent management is that numbers are more reliable than people: the data don’t lie. By swapping opinions for numbers we can remove the human error and bias from the process.
We’ve already discussed the problems with getting clean and consistent data, but even if the organisation can solve for that, we are still talking about talent data – and “talent” means people: messy, complicated human beings with emotions, values and attitudes, all of which change and impact on behaviour and performance.
Psychometric assessments are a prime example. Psychometrics can provide valuable insights, and they are more reliable than most other forms of selection, but they are still a long way off being perfectly predictive. All too often the use of numbers and talk of concepts such as predictive validity coefficients can lull people into thinking they are dealing with facts rather than educated guesswork.
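The point about validity coefficients is simple arithmetic: a coefficient is a correlation, and squaring it gives the share of performance variance the assessment actually explains – everything else is what the numbers do not capture. The coefficients below are illustrative, not measured values:

```python
# Illustrative (not measured) validity coefficients for selection methods.
# r is a correlation; r**2 is the proportion of variance explained.
for method, r in [("structured interview", 0.5), ("personality test", 0.3)]:
    explained = r ** 2
    print(f"{method}: r = {r}, variance explained = {explained:.0%}")

# structured interview: r = 0.5, variance explained = 25%
# personality test: r = 0.3, variance explained = 9%
```

Even a respectable-looking coefficient of 0.5 leaves three-quarters of the variance unexplained – educated guesswork, not fact.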
A lack of ownership
If the numbers are more reliable than the people, then whose fault is it when the wrong decision is made? It’s not my fault the newly promoted executive embezzled a million; the predictive algorithm said he was a future CEO in the making.
It’s not such a big step from using the data to inform your decisions to absolving yourself of all decisions in favour of letting the data decide. This is shaping up to be a far bigger societal issue – if an autonomous car kills a pedestrian whose fault is that? – but for now organisations need to remember that, no matter how good the data, it’s the people that they pay to make the decisions, and they need to be held accountable.