What do you use as a basis for evaluating employees? It’s easy to adopt a single simple measure, such as customer rating, and compare employees on that basis. It would also be wrong, for many reasons. Instead, you should have multiple measures and a clear idea of what ‘acceptable’ means. Here’s why.
Note that this is about improving organizational execution. The goal here is to gather data, and then apply it to individual development. This requires a nuanced notion of performance, and an associated approach to development. Tying development to specific opportunities is important.
In The End of Average, Todd Rose discusses his experience overcoming evidence that he was a low performer. He wasn’t aligned with the particular dimensions being evaluated, yet was capable in other ways. This is a big problem, and it’s true for schools and organizations alike.
The issue is mapping multiple dimensions down to a single number. In doing so, you lose detail. For instance, two customer service employees might both rate as only average at helping customers. Yet in person, one is friendly but not very knowledgeable, while the other is knowledgeable but has a gruff demeanor. With only a single measure, you might throw a product course or a customer interaction course at both of them, when each would benefit from a different one.
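To make the loss concrete, here’s a minimal sketch of how two opposite skill profiles collapse to the same single score. The dimension names and numbers are illustrative, not from any real rating scheme:

```python
# Two hypothetical profiles on a 1-5 scale. The dimensions
# ("friendliness", "product_knowledge") are made up for illustration.
employees = {
    "A": {"friendliness": 5, "product_knowledge": 2},
    "B": {"friendliness": 2, "product_knowledge": 5},
}

def single_score(scores):
    """Collapse multiple dimensions into one average rating."""
    return sum(scores.values()) / len(scores)

# Both profiles average to 3.5, hiding the opposite gaps.
print(single_score(employees["A"]), single_score(employees["B"]))
```

The single number says the two employees are identical; the per-dimension data says they need entirely different development.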
The opposite extreme is also problematic. Trying to identify too many criteria is a costly and ultimately unproductive effort. Instead, the goal is to find the sweet spot: the core criteria that contribute substantively to success. This isn’t new, but it’s important to be clear. There are many proposals for what the critical elements are, so the important step is to identify the ones critical to you, and to be able to assess them for each individual.
With this data, you can create a clear picture of where a person is in each category, and identify areas for improvement. This then creates a basis for evaluation and development. There is growing interest in competencies and the associated processes. There was a competency movement some years ago that was so cumbersome the results were out of date by the time they were completed. A more agile process is required, but it’s ultimately about defining performance metrics.
And, increasingly, the role of competency definition could and should be devolved to the appropriate community of practice. So, for example, instructional designers should be defining (or at least participating in defining) the component skills necessary for the ID roles played in the organization. Roles are often flexible, so the definitions have to be specific to the organization.
A second necessary characteristic is being clear about what acceptable performance is. One typical approach is to compare employees to each other, for instance comparing an under-performer with a star performer. The intent may be to move that under-performer towards star performance, but is that necessary? What would acceptable performance be?
The ideal is to have a criterion-referenced level of performance. That is, create a specification that says what acceptable performance is for each of the measures from the exercise above. Normative measurement – comparing people to each other – might normalize to a group that’s performing well below what could be expected. Instead, having a clear definition of performance is ideal.
This is a competency exercise: describing what the performance is at various levels, with clear definitions of, say, sub-par, acceptable, and desirable performance. This creates an objective basis for employee review, removing the personal component.
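The distinction from normative rating can be sketched in a few lines: each score is mapped to a fixed band defined in advance, independent of how anyone else scored. The thresholds and labels here are illustrative assumptions, not a prescribed scale:

```python
# Hypothetical criterion-referenced bands for one measure on a
# 0-100 scale. Thresholds are fixed in advance, not set by peers.
BANDS = [(80, "desirable"), (60, "acceptable"), (0, "sub-par")]

def rate(score):
    """Return the performance band for a raw score."""
    for threshold, label in BANDS:
        if score >= threshold:
            return label
    return "sub-par"
```

With this, `rate(72)` is “acceptable” whether the rest of the team scored 50 or 95, which is exactly the point of criterion-referenced measurement.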
This also creates an objective basis for designing learning materials to support development in the necessary areas. You can have materials that develop basic and advanced skills. You can still use those stellar performers as a guide, but perhaps they’re only stellar relative to their peers. Instead, you can determine what an individual should be doing, and help them develop toward it.
Technology provides a powerful tool to support this approach. We can use technology to collect the performance data as well as to develop the criteria for evaluating it. And we can support the improvement process as well.
We have increasingly adept tools to collect data. We can survey responsible individuals, whether customers, supervisors, or even peers. We can instrument systems to collect data automatically. This data can also go to the individuals for their own use. And we can collect it through a variety of channels.
Further, we can analyze this data to develop criteria. This may also include circulating questions to experts to develop and then validate criteria. Ultimately, we can define and host the criteria where it’s clear to employees what the criteria are.
Then we can align learning to these criteria. This means authoring the content with the criteria in mind. And, most importantly, we can deliver learning triggered by any gaps determined between data and the desired criteria.
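That last step, triggering learning from gaps between data and criteria, can be sketched as a simple lookup. All names, thresholds, and resource titles below are hypothetical placeholders for an organization’s own competencies and catalog:

```python
# Sketch of gap-triggered learning delivery. A competency is flagged
# when its score falls below the criterion-referenced threshold.
ACCEPTABLE = {"product_knowledge": 60, "customer_rapport": 60}
CATALOG = {
    "product_knowledge": "Product fundamentals module",
    "customer_rapport": "Customer interaction workshop",
}

def recommend(scores):
    """Suggest a resource for each competency below its criterion."""
    return [CATALOG[c] for c, threshold in ACCEPTABLE.items()
            if scores.get(c, 0) < threshold]
```

So an employee scoring 45 on product knowledge but 75 on rapport would be offered only the product module, addressing the point above about each person benefiting from different development.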
The goal is to create an active ecosystem of continual development. This is the opportunity to get more granular. And the overall approach is objective, so bias can be reduced. As long as the development of the criteria is scrutable and sensible, the process is not only effective but also maintains employee perceptions of fairness.
As the saying goes, “what’s measured matters.” If you can be clear about what you’re measuring, and why, you have a basis to systematically improve the organization. And that’s the goal, after all.