By Xapiapps | Resources, Instructional Design

Leveling Up with Kirkpatrick

Instructional design is something of a black art at the best of times. Much of what the best training designers do is the result of experience and intuition rather than strict adherence to theory or models. Of course, in order to break the rules you have to know them in the first place. This is why instructional designers who are starting out should pick a design model that feels like a good match for their style, subject and workflow. It will act as scaffolding upon which to build a logical structure that students can interpret and internalize.

While instructional models are more or less consistently employed by competent instructional designers, we see much less evaluation of that training once it has been delivered to actual trainees. When it comes to formal accredited education, evaluation is usually a requirement for recognition by independent accreditation bodies. The world of corporate training is more of a Wild West in this regard. In an industry incentivised to be compliant only at a shallow level, and under immense time pressure, it’s not surprising that there is no great drive to evaluate and refine training programs on a continuous basis.

While evaluation and refinement can be seen as a pure cost without any real return on investment, when done properly the advantages are tangible and substantial, as long as you have a solid and proven framework to guide the process. In the 1950s Donald Kirkpatrick developed his four levels of evaluation model as part of his doctoral thesis. Since its publication at the end of that decade, Kirkpatrick’s model has become one of the most widely recognized and tested evaluation frameworks.

One of the strengths of the Kirkpatrick model is its simplicity. Within this framework we look at a training course on four different levels:

  • Reaction: the reactions and thoughts trainees have about the training course.
  • Learning: the increase in knowledge and skill trainees gain from the training experience.
  • Behaviour: the behavioural change and improved performance that result from applying new skills in practice.
  • Results: the effect of the trainees’ improved performance on the business.

This framework has been a sensible tool for thinking about the effectiveness of training interventions for decades, but in our connected age it takes on a whole new level of meaning and value.

To read more on how xAPI and Kirkpatrick’s Model work together in perfect harmony, click here.

Applying Kirkpatrick

In order to use Kirkpatrick’s model, we need information to analyse. This means that we have to build information gathering into the training programs themselves as well as post-training interviews. On top of that, we also need to tap into management reporting structures in order to measure real changes in the performance of the organization. On a conceptual level this is logical and straightforward, but in practice it can be a complex process to get useful information out of trainees and business processes.

On the first evaluation level, where we want to get a sense of how trainees feel, we can use feedback forms, verbal observation, surveys and trainee reports. This is usually cheap and easy to do, and it helps us improve the training at a visceral level, ensuring that it is emotionally engaging.

Watch a practical application of Kirkpatrick’s Model here:

The next level, evaluating learning, is probably the most familiar to instructional designers: it amounts, more or less, to various forms of assessment. If the assessments are well designed and measure what they are meant to measure, you’ll have no issue here.

When we move on to the next level, where we have to evaluate human behaviour, things become more complicated. Measuring behavioural change is hard even for behavioural experts. For one thing, you can’t quantify it as easily as attitude or information recall, and it has to be measured over time, in contexts where you don’t have control.

When it comes to behavioural change we can’t just ask the opinion of the subject. We need to devise ways of observing behaviour in the most objective way possible.

Traditionally this is achieved through labour-intensive longitudinal interviews and subtle indicators that are hard to analyse. Yet this is one of the most crucial metrics we have to look at. If there is no behavioural change as a result of our training or (even worse) negative behaviours because of it, we have to question the purpose of that training. Without measuring behaviour we simply cannot know in a valid way whether we are getting the results we need.

The final level is a different beast than behaviour, but no less tricky. We have to look at the complex performance measurements of the business and link training outcomes to changes in those performance measures. As you can imagine, since so many variables affect business performance, it’s pretty hard to nail down a solid relationship between what you achieve through training and what happens at the bottom line. The problem here is not so much setting up data collection channels, but deciding which existing reporting and record keeping systems are the right ones to use.

This is one of the few truly useful applications of integrated performance management, where training inputs and agreed performance measures are negotiated with trainees as part of their job performance. That gives us a very direct link between training and business outcomes. Everything else has to be inferred from metrics such as volumes, profits and ROI. As you can probably tell, manual application of Kirkpatrick’s evaluation principles can be an expensive, laborious task.
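
To give a sense of the arithmetic involved (with purely illustrative numbers), a common back-of-the-envelope calculation is ROI = (benefit − cost) / cost. If a program costs $20,000 to design and deliver, and the performance gains attributed to it are worth $30,000, the ROI is (30,000 − 20,000) / 20,000 = 50%. The arithmetic is the easy part; pinning that $30,000 of benefit on the training is precisely what makes the fourth level so demanding.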

In some business contexts the potential gains may not outweigh the costs. Given how intensive the process can be, it’s not surprising that many organisations do only the most cursory evaluation or never step beyond the first or second evaluation levels. It doesn’t have to be that way, however: thanks to modern tools and design considerations, drawing the full benefit of evaluation at all four levels is more practical than ever.

Data and Design

We now live in a world where collecting data is not the problem; the problem is deciding which data are important and how to proceed with analysis. Data collection is going on all around us, all the time. Our smartphones and other electronic devices watch our behaviour and feed that information back to analysis engines, enabling assessments and predictions that are then put to use by marketers or designers looking to improve their products.

The key lesson that we can take away from these real world applications is that collection and analysis of data for evaluation purposes should be included at the initial design phase for training interventions. We are already familiar with second-level evaluation in the form of simple tests and exams.

We also see that it is quite common for trainers to ask students directly to evaluate the training, filling in a kind of “client satisfaction” sheet at some point after the training takes place. Most of the time, however, this is tacked on as a band-aid: an afterthought the designers of the training gave little consideration, included as a concession to what is often seen as a business need with nothing directly to do with education.

The truth, however, is that training designers are best positioned to also design the data collection and evaluation components of a course. We also no longer need to collect data manually. Manual collection has been one of the main reasons it has been so hard to scale and deepen evaluation: gathering data on paper with an army of assistants just isn’t cost-effective.

Instead, we can build data collection into e-learning material, harvest it from employee evaluations, and gather it through small, frequent questionnaires on a smartphone. As an instructional designer, there are a few stages of thought you need to go through in order to interweave evaluation into your course material:

  • What questions are you trying to answer in the evaluation?
  • What information do you need in order to evaluate the effectiveness of the training intervention?
  • Where is this information kept or from where can it be collected?
  • When is the best time to collect this information?
  • How should the data be analysed?
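
Modern tools make the collection side of this relatively painless. As a minimal sketch, assuming a hypothetical Learning Record Store (LRS) endpoint, credentials and course identifiers, this is roughly what it looks like when e-learning material records a quiz result as an xAPI statement the moment it happens, so that first- and second-level data are captured without any manual effort:

    import requests
    from datetime import datetime, timezone

    # Hypothetical LRS endpoint and credentials -- replace with your own.
    LRS_URL = "https://lrs.example.com/xapi/statements"
    LRS_AUTH = ("lrs_key", "lrs_secret")

    def record_quiz_result(learner_email, course_id, score, max_score):
        """Send one xAPI statement describing a quiz result to the LRS."""
        statement = {
            "actor": {"objectType": "Agent", "mbox": f"mailto:{learner_email}"},
            "verb": {
                "id": "http://adlnet.gov/expapi/verbs/completed",
                "display": {"en-US": "completed"},
            },
            "object": {
                "objectType": "Activity",
                # Hypothetical activity ID for the end-of-module quiz.
                "id": f"https://example.com/courses/{course_id}/quiz",
            },
            "result": {
                "score": {"raw": score, "max": max_score, "scaled": score / max_score},
                "success": score / max_score >= 0.8,
            },
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        # POST the statement to the LRS statements endpoint.
        response = requests.post(
            LRS_URL,
            json=[statement],
            auth=LRS_AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
            timeout=10,
        )
        response.raise_for_status()

    # Example: a learner scores 17 out of 20 on the end-of-module quiz.
    record_quiz_result("pat@example.com", "report-writing-101", 17, 20)

Once statements like this accumulate in the LRS, the same store can feed first-, second- and even third-level evaluation without any extra effort from trainers.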

Depending on the context of the training, you may have specific relational questions that form part of the evaluation. For example, if you are training employees to be more efficient report writers, you could expect their reports to grow shorter or to be produced more frequently. So one question is whether the training has improved the practice of report writing. To answer it we need to collect information such as managers’ views of report quality after the training took place, or the word lengths of the reports themselves.
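
To make the “word length” half of that question concrete, here is a hedged sketch (the folder layout is invented for illustration) that compares the average length of plain-text reports written before and after the course:

    from pathlib import Path

    def average_word_count(folder):
        """Average word count across all plain-text reports in a folder."""
        reports = list(Path(folder).glob("*.txt"))
        if not reports:
            return 0.0
        total_words = sum(len(report.read_text().split()) for report in reports)
        return total_words / len(reports)

    # Hypothetical folders holding reports written before and after the training.
    before = average_word_count("reports/before_training")
    after = average_word_count("reports/after_training")
    print(f"Average report length before: {before:.0f} words, after: {after:.0f} words")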

We could collect some of this automatically with software or send out surveys to the right people. Some of this information, such as that relating to the first and second levels of Kirkpatrick’s model, may need to be collected close to the time the training ends, or even during training, while third- and fourth-level evaluation can stretch out over months.

Regardless, the entire evaluation process must be integrated at the design phase.

Thinking about Thinking

The role of the instructional designer has broadened considerably over the years, and the expectation now is not just to craft solid training material and programs, but also to provide evidence that the time and money spent on that training is worthwhile. Instructional designers need to be aware of both the right evaluation frameworks and the best tools to produce this evidence. Creating training is no longer a “fire and forget” practice, but a process of iterative, evidence-based problem solving. Without the right approach and thought processes, these expectations can be daunting. Armed with the right knowledge and tools, however, training can become one of the strongest investments an organization can make, as long as it commits to the process from the ground up.

Want to see Xapify in action?

Schedule a live, customized demo for your team.
