Metrics, metrics, metrics
It is frankly depressing that when you ask learning technologists to talk about the metrics around what they do, their natural inclination is to talk about the number of completions of their courses, not about the tangible value they have contributed to their business. It’s as if, in the spirit of the Olympic ideal, it’s the turning up, rather than the winning, that counts.
The harsh truth for most organisations is that, actually, it’s the scientific and focused investment in winning that secures success, attracts the most funding and builds more success. But to be fair, it’s not just learning technologists who suffer an inability to measure impact; it’s a malaise that also affects L&D, talent management and HR as a whole. All too often the focus is efficiency and cost saving rather than value add.
My personal view is that, in the world of learning, Kirkpatrick and Phillips are to blame for creating a group-think that evaluation of learning is all to do with levels, and a false assumption that measuring Return On Investment (RoI) is too hard and in most cases impossible to prove. The thinking almost seems to be, ‘We can never prove that learning is the reason for higher performance’ and ‘We can’t take the glory for doing our jobs’. Strangely, I agree. Cause and effect can be incredibly complex. But here is the rub.
THAT SHOULDN’T STOP US SETTING MEASURES OF SUCCESS FOR OUR LEARNING PROJECTS TO KNOW IF WE HAVE MADE A TANGIBLE DIFFERENCE.
And a failure to set these out in the design of any learning programme is just plain lazy!
If you live under the yoke of any CFO worth their salt, not measuring and not reporting your value add – your business contribution – is going to damn your training/learning organisation to ever-decreasing budgets! And if your proposition is focused on using digital learning to enable cost cutting, then – in that one measure of success – I can assure you, you will succeed. You will get less budget!
As you may have gathered from my tone, I would suggest you operate the 80:20 rule here and make sure that 80% of your programmes have a demonstrable measure of success. Being able to report real measures of success is essential if you want to be seen as a valued partner by your stakeholders and to be able to argue for the optimum funding for your team. Sadly, from our experience, we’d be lucky to see even 20% of learning measured in any form beyond who turned up!
So what can you do?
The more radical side of me would suggest relegating your learning evaluation programmes to the background and starting to implement learning impact programmes that measure, for each learning initiative: (1) strategic business success, (2) business unit performance success or (3) development in personal capability. This involves benchmarking where you are today and then measuring where people are after they have participated in the opportunities you’ve packaged to support learners’ learning cycles – both in terms of their confidence and competence and, more importantly, through demonstrable impact. And this means using a wider set of measures than those in an end-of-course learning assessment. It means measuring and analysing performance ratings and 180- and 360-degree feedback from peers, managers and customers. It means pulling in quality assurance process feedback and real business performance measures. It means being relentless in learning design about defining what outcomes are needed, and using multiple sources to show learning has made a difference, ideally with an eye to business KPIs.
With the advent of the much-hyped Big Data and machine intelligence, and the availability of the latest tools, this would appear to be easier than ever to achieve; the technology is more ready to support it than it has ever been. But there is an issue. The thinking on what and how we measure is still too weak in most organisations. And that is a massive issue – and the biggest barrier to excellence in L&D.
But there is more to this than questioning whether any learning programme without a measure should make it into production. There is more to this than budgets and funding your team. There is more to this than being a professional learning operation. There are some major changes happening among service providers – a shift in how some are operating, and a change that I expect to grow. It’s called Success-as-a-Service, and it starts to challenge what we buy when we purchase software. In the Success-as-a-Service model you’re no longer buying a piece of kit or a piece of software – you are buying a result. You’re not buying features; you are buying into outcomes. There is skin in the game from the supplier and a clear focus for buyers. It’s the veritable win-win. And a real sense of partnership.
This approach has the potential to be very disruptive and to fragment many existing software relationships, because it puts a big focus on services. The challenge for learning professionals is that if you’re not able to articulate your value add, you will be very low-hanging fruit for external providers offering a business-success proposition.
So whilst it may be tempting simply to react to the demands of your business and develop that learning programme, if you want a stronger chance of survival you’ll need to hold your customer down – sometimes kicking and screaming – and agree some measures of success! Because the simple truth is this: if someone offers them success rather than solutions, they will be quick to drop you and start buying! And who would blame them?
This article, written by director of research, David Perring, first appeared in E-learning Age magazine.