Don't Wait for the Game to End Before Determining the Score

Bob Dick
01/24/2012

HR Exchange Network

When peeling the onion of the learning metrics we as educators should be using, there is no particularly easy (or right) answer as to which layer will provide exactly the information we need in every instance, or at any given moment. Much has been written and discussed around the idea that we somehow have to move beyond the concept of "butts in seats" (course enrollments, lesson launches, training hours, and so on), but the business reality is that these numbers are, were, and will remain an extremely effective way to communicate effort. Even though they are educationally void of anything useful, they will continue to be demanded by upper management as the first line of justification for our existence within an organization.

Enrollment numbers matter, and fortunately for us they are perhaps the easiest numbers to glean and the easiest numbers to explain, so they cannot be dismissed within the context of this or any other discussion around learning metrics.

That said, as educators we have tried, and continue to try, to move beyond these numbers, because we understand that they tell us exactly nothing about how effective we are in our efforts to prepare our learners for the "next step." While there are certainly metrics that can be extended across most learning efforts, we have discovered at Humana that metrics and success are not a one-size-fits-all effort, and that planning at the front end of any project pays great dividends when designing and delivering meaningful and actionable metrics.

For any major project, we work with our partners to predetermine and agree upon what "winning" looks like. Winning can be defined many ways depending on the task: time to proficiency after leaving training, quality scores, retention, overall service levels. What you determine to be the final metric is far less important than the fact that you determine it. If you have no finish line, you have no way to know whether you have crossed it.
One consequence needs to be acknowledged before you begin: if the clearly defined and measurable finish line of your curriculum is challenging and fairly written, there is still going to be a percentage of your learners who fail to reach it. A good learning organization has a solid idea going in of what this fail (yes, I said "fail") percentage will be, and has an agreed-upon remediation plan in place, ready to fire when and as people fail. Within my learning organization, this process is where we generate the real results, and eventually the real ROI (as measured through time to proficiency, quality metrics, and retention statistics for the people we train).

As educators, we cannot reasonably expect to take corrective steps in a course simply by looking at test scores at its end. Testing throughout training must be done early and often. Our standard is that some form of assessment is given every 30 minutes of training. It is critical to note that this does NOT mean that every 30 minutes we stop the training and send everyone to the LMS to take a test, nor does it mean that every test is or should be formally graded. Very often the only purpose of these assessments (which may be as simple as a quick Q&A session between learners and facilitator) is to level set between learner and facilitator as to what level of understanding has been achieved, and whether or not it is prudent to move forward. This is a somewhat informal process that relies on the skill and professionalism of the learning professional in the classroom. Our work kicks in every time a scored test is given.

As mentioned above, waiting for the game to end before determining the score is not a particularly effective strategy. Within my organization, our first-level report, run several times throughout the day, identifies how many people have taken a given test, how many have passed, how many have failed, and the average score. These results are then compared to the agreed-upon standards for success. All well-written tests will have SOME "failure"; we are looking for instances where the level of failure exceeds what was expected. Where failure rates are higher than expected or average scores lower than expected, it is critical to the success of the training that the support team identify this as quickly as possible, so that actionable feedback can be provided to the learning facilitator while the situation is still correctable.
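To make the shape of that first-level report concrete, here is a minimal sketch in Python, assuming scored attempts can be pulled from the LMS as simple learner/score records. The thresholds, field names, and function name are illustrative assumptions, not Humana's actual standards or systems.

```python
from dataclasses import dataclass
from typing import List

# Illustrative thresholds only: the real standards are agreed upon per project.
PASS_MARK = 0.80            # hypothetical passing score
EXPECTED_FAIL_RATE = 0.10   # hypothetical expected percentage of failures
EXPECTED_AVG_SCORE = 0.85   # hypothetical expected average score

@dataclass
class Attempt:
    learner_id: str
    score: float  # fraction correct, 0.0 to 1.0

def first_level_report(test_id: str, attempts: List[Attempt]) -> dict:
    """Summarize one scored test: takers, passes, failures, average score,
    and whether the results fall short of the agreed-upon standards."""
    taken = len(attempts)
    passed = sum(1 for a in attempts if a.score >= PASS_MARK)
    failed = taken - passed
    avg_score = sum(a.score for a in attempts) / taken if taken else 0.0
    fail_rate = failed / taken if taken else 0.0
    return {
        "test": test_id,
        "taken": taken,
        "passed": passed,
        "failed": failed,
        "avg_score": round(avg_score, 3),
        # Some failure is expected on any well-written test; flag only when
        # failure exceeds expectation or the average score runs low.
        "needs_attention": fail_rate > EXPECTED_FAIL_RATE
                           or avg_score < EXPECTED_AVG_SCORE,
    }
```

Run against each scored test several times a day, a summary like this surfaces only the tests where failure exceeds what was agreed upon, which is exactly what the support team needs in order to get feedback to the facilitator in time.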

At the point where failure is identified, several pieces of data are collected and distributed to our learning managers, curriculum manager, and instructional designers. The questions we ask of that data include:

  • Are the results unusual compared to other training efforts using the same material?
  • Are there specific questions that were missed more often than others (our standard is to flag anything where fewer than 85 percent of our learners were successful; a sketch of this check follows the list)?
  • Is there anything that would indicate technical issues prevented a successful outcome?
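
The question-level check in the second bullet is straightforward to automate. A minimal sketch follows, assuming per-question response data can be exported from the LMS as (learner, question, correct) records; the data layout and function name are illustrative assumptions, and only the 85 percent standard comes from the process described above.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

SUCCESS_STANDARD = 0.85  # the 85 percent success standard from the bullet above

def flag_problem_questions(
    responses: Iterable[Tuple[str, str, bool]],  # (learner_id, question_id, correct)
) -> List[Tuple[str, float]]:
    """Return (question_id, success_rate) for every question answered correctly
    by fewer learners than the success standard, worst question first."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for _learner, question, was_correct in responses:
        total[question] += 1
        if was_correct:
            correct[question] += 1
    flagged = [
        (question, correct[question] / total[question])
        for question in total
        if correct[question] / total[question] < SUCCESS_STANDARD
    ]
    return sorted(flagged, key=lambda item: item[1])
```

A list like this gives the instructional designer a place to start: the flagged questions are the ones to check for accuracy, wording, and fit with the curriculum before any conclusion is drawn about the class or the facilitator.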

To the first bullet: If we have an instance where a class is underperforming when compared to other classes, it is possible that we need to provide coaching to the facilitator. However, it can also be an indicator that a system or process change has made the answer to a question or questions open to interpretation, or inaccurate. Until the second bullet is fully addressed, no real conclusion can be drawn on the first.

Expanding the second bullet: Once the accuracy of the question and its answer has been established, the instructional designer and manager work together to ensure that problem questions are worded fairly; that distractors are reasonable; that the questions themselves are relevant to the competencies we expect the learner to come away from the class with; and that each question can be tied directly to a specific point in the curriculum, as well as to a specific objective of the training. When all this analysis is complete, we can provide specific, focused, and actionable steps to our instructors in a time frame that allows them to have an impact.

While these steps certainly add time to our overall process, the results are undeniable. Pass rates are up nearly 20 percent in the last year, and average assessment scores are up over 5 percent over the same period. Quality scores, retention, and time to proficiency during our recent ramp-up in support of our major line of business (4,000 new associates onboarded in a three-month window) all exceeded expectations.

And the jump from those numbers to ROI is simple math.

