How Meaningful Measurement Leads to L&D Success

Meaningful Measurement
One of the great failures of today’s L&D is a fundamental lack of meaningful measurement. The data reported are too often about efficiency, and too seldom about effectiveness. The usual claim is that measuring effectiveness is too hard. But measurement is not, and should not be, too difficult. More importantly, measurement is a key element of playing a strategic role in the organization.

In theory, our learning interventions are supposed to have an impact. If we run sales training, sales outputs should improve: time to close should go down, or closure rates should go up. If we run customer service training, customer satisfaction should go up or complaints should go down. If we do internal compliance training, the incidence of things like sexual harassment and workplace bullying should go down. OK, I live in the real world too, and we know that by and large compliance training is CYA, but it could actually be impactful if we cared. And we definitely should care about whether our training is worth the dollars invested.

Learning and Development Framework

The basic framework is simple: there is some metric that is below what we expect or need it to be. Critically, we should be addressing core business metrics. If someone comes to us saying “we need a course on X”, we should ask how we’ll know whether we’ve successfully addressed it. If the requester can’t answer that question, you both need to do some work. Ultimately, you should have a real metric that the business unit tracks.

And I realize this is challenging, because it means you need to start talking business with them. They own the metric, and you’ll have to work with them on this. This is as it should be! Doing training in a vacuum is not a viable business approach. The notion that ‘if we build it, it is good’ is not a sound basis for expenditures.

Once you know what the problem is, it’s time to figure out what might lead to a fix. What should people be doing differently that would lead to an improvement? In principle, you should be doing performance consulting (the topic of a previous post in this series): matching the problem to a suite of known causes and applying the appropriate approach. So, if it’s a knowledge problem, create a resource; if it’s a skill problem, create training; and so on. Determine what will lead to that changed behavior. And ask whether the savings or revenue increase from the change will exceed its cost.
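To make that last comparison concrete, here’s a minimal back-of-the-envelope sketch in Python. The function name and all the figures are hypothetical placeholders, not a prescribed formula; substitute the estimates you and the business unit agree on.

```python
# Back-of-the-envelope cost/benefit check for a proposed intervention.
# All figures below are hypothetical placeholders.

def net_benefit(expected_metric_delta, value_per_unit, intervention_cost):
    """Estimated net benefit of the proposed change.

    expected_metric_delta: projected improvement in the business metric
        (e.g. 2 points of closure rate, 50 fewer errors per quarter).
    value_per_unit: estimated dollar value of one unit of that improvement.
    intervention_cost: total cost to design, deliver, and support the change.
    """
    expected_value = expected_metric_delta * value_per_unit
    return expected_value - intervention_cost

# Example: a 2-point lift in closure rate, worth $15,000 per point,
# against a $20,000 training effort.
print(net_benefit(expected_metric_delta=2, value_per_unit=15_000,
                  intervention_cost=20_000))  # 10000 -> worth pursuing
```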

There are several ways to track this. You can use survey tools to ask supervisors and fellow workers whether the new behavior is being exhibited. If you host the resource on a website, you can use web analytics; similarly, mobile access should be trackable. If the resources are hosted in your LMS, you can use its internal reporting to see whether they are being accessed. Or you can use the emerging Experience API (xAPI) standard and aggregate the data.
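To make the xAPI option concrete, here’s a minimal sketch of recording a resource access as an xAPI statement sent to a Learning Record Store. The endpoint, credentials, and activity IDs are hypothetical placeholders, and a real implementation would likely use a dedicated xAPI library rather than raw HTTP calls.

```python
# Minimal sketch: record that a worker accessed a resource via xAPI.
# Endpoint, credentials, and IDs below are hypothetical placeholders.
import requests  # assumes the 'requests' package is installed

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # your LRS
AUTH = ("lrs_username", "lrs_password")                   # your credentials

statement = {
    "actor": {"mbox": "mailto:worker@example.com", "name": "Example Worker"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/resources/new-process-job-aid",
        "definition": {"name": {"en-US": "New Process Job Aid"}},
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
# Statements accumulated in the LRS can later be aggregated and reported on.
```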

Finally, we develop the intervention that will lead to that changed behavior. Whether it’s a job aid or a course, we need to introduce the solution and prepare people to adopt it, including helping employees understand the rationale for the introduction and the appropriate ways to incorporate the change into their repertoire. And we evaluate the learning with post-course assessments, whether it’s about decisions to be made or how to use the resource.

The essential element here is to ascertain the necessary change and then evaluate the impact. If you develop a solution, is it being adopted, and is it having sufficient impact? If not, you need to tweak and tune. Ultimately, of course, you’re going to want to validate that the benefit is worth the cost of your efforts. This is how business should work.

The approach detailed here is, in fact, the core of the Kirkpatrick model (Levels 2-4, though note you start at Level 4 and work backwards, as identified here). Others complain that Kirkpatrick doesn’t directly evaluate the learning (correct), or that it is too linear or implies a causal relationship (that depends on how you implement it). There are other methods that use more qualitative data to determine the impact. I’m relatively ecumenical about the methods you use, but not about the importance of documenting the impact you are having.

Frankly, too much of what is done under the banner of Learning & Development is measured with inappropriate metrics. We’ll see measures of cost per hour of seat time, without knowing whether that time is leading to any change. Or we’ll find out how many people are being trained relative to the number of training staff, without knowing whether that training is having an impact. And people will benchmark these against industry averages. These data aren’t important! These numbers only become important once you document that the seat time or the training is actually improving business outcomes. Then, and only then, can you worry about how efficient you are being. Until then, you’re merely showing that you are wasting no more money than the average company. That’s not a particularly good place to be.

As a side note, one of the measures often used is user satisfaction (Level 1 in Kirkpatrick), i.e. did the trainees like the training, and do they think it’s valuable? This isn’t helpful. Empirically, there’s essentially zero correlation between what trainees think of the training and its actual impact. Having only this data may be more misleading than having nothing at all! You could penalize some efforts and reward others on a basis that has no bearing on the outcomes!
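If you want to check this in your own organization rather than take the research on faith, a quick sketch like the one below can compare satisfaction ratings against the observed change in the tracked metric; the arrays are hypothetical placeholders standing in for your own records.

```python
# Check whether post-course satisfaction ratings track actual performance change.
# The two arrays are hypothetical placeholders, one entry per trainee.
import numpy as np

satisfaction = np.array([4.5, 3.0, 5.0, 4.0, 2.5, 4.8, 3.5, 4.2])  # 1-5 ratings
performance_delta = np.array([0.02, 0.05, -0.01, 0.03,             # change in the
                              0.04, 0.00, 0.06, 0.01])             # tracked metric

r = np.corrcoef(satisfaction, performance_delta)[0, 1]
print(f"Correlation between satisfaction and impact: {r:.2f}")
# A value near zero suggests satisfaction scores are not predicting impact.
```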

Good Example of Learning Measurement

So how would this play out in a case study? For training, let’s say that we’ve determined that our performance isn’t up to snuff (whether customer satisfaction, sales, operations errors, whatever). We should figure out how much change we need (e.g. a delta on a metric) and what behavior should lead to that delta. Let’s assume a new process will be used. Then we develop training for that process, and verify that, after the learning experience, learners can indeed perform the task. Next we identify whether they are actually performing as desired in the workplace. Finally we evaluate the metric, comparing it to a baseline or running an A/B test (comparing a trained group against an untrained one), to see if we’ve achieved our goals.
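As a rough illustration of that last step, here’s a minimal sketch of comparing a trained group against an untrained control group. The numbers are hypothetical placeholders, and the two-sample t-test is just one reasonable choice of comparison for this kind of data.

```python
# Sketch of an A/B comparison: metric values for a trained group vs. an
# untrained control group. The numbers are hypothetical placeholders.
from scipy import stats

trained = [0.82, 0.79, 0.88, 0.85, 0.91, 0.84, 0.80, 0.87]    # e.g. task success rate
untrained = [0.74, 0.71, 0.78, 0.69, 0.75, 0.73, 0.77, 0.72]  # control group

t_stat, p_value = stats.ttest_ind(trained, untrained)
delta = sum(trained) / len(trained) - sum(untrained) / len(untrained)
print(f"Observed delta: {delta:.3f}, p-value: {p_value:.4f}")
# A small p-value alongside a delta that meets the target suggests the
# training is contributing to the improvement.
```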

It’s fair to expect that we won’t get it right the first time around. People and workplaces are complex. If we’re not achieving the desired impact, we need to determine why, and perhaps address workplace barriers, improve the training, or lower our expectations (doing the latter consciously is better than not evaluating at all). The point is that we should be measuring and tuning until we achieve our goals.

This plays out for interventions other than training, too. For performance support, we could determine that using a particular resource would reduce errors or increase success (see the earlier post on performance support). We can again determine a delta, figure out how people should use the support, make sure they’re aware of the resource and when and how to use it, and then see whether they are actually using it. This likely involves both some training and resource design.

Even social media and the building of community may be measured this way. As a previous post discussed, we should be looking at social resources too. Many people argue that just having activity in a social network is good, and that’s possible. But if we put it in a particular business unit, such as operations or even our own L&D, we should expect some improvements; we should be able to identify the metrics we’re being evaluated on, look for the desired improvements, and tune if we aren’t achieving them. Are people not sharing, or are they not even using the system? If so, why?

Measurement is a tool to use both formatively and summatively. If we are having an impact, we should be able to document it and take credit. And to do that successfully, we need to create a pilot, test it, and use data to improve it. The sad state of affairs is that L&D too often takes orders for courses and builds them, with the implicit assumption that following instructional design processes will yield success. This is a faulty assumption. We know that most learning experiences will yield a successful pass on a summative test, but that the learning will not persist into ongoing behavior. Yet this seems to be the predominant approach. There is much wrong with the instructional design we see, and we can easily fall prey to approaches that will not produce persistent change. We won’t know, however, unless we measure. Please, measure.