How to use the Kirkpatrick Evaluation Model to measure learning effectiveness
The Kirkpatrick Evaluation Model was developed by Donald Kirkpatrick in the 1950s, and it is still the most widely used model for measuring training effectiveness. It is sometimes referred to as the Kirkpatrick Model of Training Evaluation.
Learning evaluation is an essential part of training and education programmes. It is a great way to assess whether you are using the core instructional design models correctly and effectively. It helps instructors and course providers understand the impact the training had on participants and organisations. It also creates opportunities for improvement and provides proof of effectiveness that can be presented to managers and high-level executives.
There are four Kirkpatrick levels of evaluation used to determine the effectiveness of your training. They progress from first impressions of the training, to acquiring knowledge, to applying that knowledge as lasting behaviour change, and finally to organisational effects.
Let’s explore the Kirkpatrick model a little further…
Level 1 – Reaction: Did they like it?
The first level is a snapshot of a learner’s experience with the programme. At this level, we are evaluating the 2 E’s: Engagement and Experience.
In a live classroom environment, the trainer can analyse the participation levels and body language to evaluate how engaged the participants are. In eLearning courses, trainers can look at the “digital body language”, coming from Learning Management System data, that can include:
- Course completion rates, drop-off rates and pass rates on assessments
- Time or duration spent online as well as the time of the day or day of the week
- Time spent on individual screens
- Learners’ preferences in terms of the type of digital learning content
- Their engagement and performance in learning activities
- Their levels of engagement with social learning in terms of numbers of shares, likes and comments in forums or social platforms
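The metrics above can be derived from a simple LMS export. As a rough sketch (the record structure and field names here are illustrative, not taken from any particular Learning Management System), computing completion, drop-off and pass rates might look like:

```python
from statistics import mean

# Hypothetical LMS export: one record per enrolled learner.
records = [
    {"learner": "a", "completed": True,  "passed": True,  "minutes_online": 42},
    {"learner": "b", "completed": True,  "passed": False, "minutes_online": 55},
    {"learner": "c", "completed": False, "passed": False, "minutes_online": 12},
]

completion_rate = sum(r["completed"] for r in records) / len(records)
drop_off_rate = 1 - completion_rate

# Pass rate among those who actually finished the course.
finishers = [r for r in records if r["completed"]]
pass_rate = sum(r["passed"] for r in finishers) / len(finishers)

avg_minutes = mean(r["minutes_online"] for r in records)

print(f"Completion: {completion_rate:.0%}, drop-off: {drop_off_rate:.0%}, "
      f"pass rate: {pass_rate:.0%}, avg minutes online: {avg_minutes:.1f}")
```

In practice you would pull these fields from your LMS's reporting export or API rather than hard-coding them; the calculations stay the same.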
The experience is all about what the participants thought of the training. Were they energised? Did they enjoy it? For the Kirkpatrick model of evaluation to be effective, you need to consider the relevance of the content, the performance of the trainer, and the training environment.
This feedback is usually gathered in the form of a happy sheet. When evaluating an online experience, try to understand whether the learner’s expectations and learning needs were met. Were they satisfied with the training? Useful aspects to ask about include:
- The number of clicks or steps they had to go through
- Any technical issues they encountered
- The speed of the course
- Their satisfaction with the duration of the course
- How useful it was – did they enjoy it, how did it make them feel, and what did they learn
- How relevant the content was
- Their experience with the different forms of content – videos, audio, exercises and the toolkit
- The navigation and ease of use
- Their experience with the support from the eModerator
- Their experience with social learning activity during the course
Remember that Level 1 evaluation has its limitations: it captures surface-level feedback and does not necessarily reflect real learning or behaviour change. It is still worth doing, as it demonstrates respect for participants, provides satisfaction results, and gives insight into what learners thought was valuable and what was not. Take every course evaluation as an opportunity to make the course better.
To make the most of this level, aim for a performance-focused evaluation that goes beyond satisfaction scores; you will get more in-depth feedback on the learners’ experience with the training. For example, phrase questions around the learner rather than the course providers or facilities, and offer answer options they can relate to. A few examples are:
- What were the three most important things you learned from this session?
- From what you learned, what do you plan to apply to your job?
- What kind of help might you need to apply what you learned?
Level 2 – Learning: Did they learn?
At this level, we are assessing whether the training met the learning objectives. The question at this level is: “What knowledge and skills did the learners acquire in this programme?”
A common way of measuring this is via tests, simulations and hands-on assignments. Testing knowledge and skills is a tangible way to discover whether learners learned what they were supposed to. Ask students to perform what they learned during or at the end of the session, so you can support them and bridge the learning gap as it appears.
A good reason to evaluate at Level 2 is to build employees’ confidence and prove their competence to management teams. Ideally, you should assess the participants’ knowledge of the topic before the training, with pre-course surveys, interviews and self-assessments.
Understanding the knowledge gap that the training has to cover makes it more relevant and ensures that the expected results of the programme are met. It also allows you to compare performance before and after the programme and discover whether participants can apply the new skills.
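The before-and-after comparison can be made concrete with pre- and post-course assessment scores. A minimal sketch, assuming hypothetical learners and scores on a 0–100 scale:

```python
# Hypothetical pre- and post-course assessment scores per learner (0-100).
pre_scores =  {"dana": 40, "eli": 55, "fay": 62}
post_scores = {"dana": 70, "eli": 80, "fay": 78}

# Learning gain per participant: post-course score minus pre-course score.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
avg_gain = sum(gains.values()) / len(gains)

# Flag anyone whose gain suggests the training missed its objectives for them
# (the 10-point threshold is an arbitrary example, not a standard).
needs_follow_up = [name for name, gain in gains.items() if gain < 10]

print(f"Average gain: {avg_gain:.1f} points; follow up with: {needs_follow_up}")
```

The per-learner gains also tell you where to bridge gaps individually, rather than relying on a single course-wide average.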
Level 3 – Behaviour: Are they applying it?
The third level focuses on the behavioural changes that happened because of the training. This can be measured by considering whether the learners are applying what they have learned on the job. Did the training have a measurable impact on performance?
A good way to measure this is via surveys, observations and quality inspections of the procedures and skills taught during the course. We are looking for behaviour changes here.
Find KPIs (key performance indicators) that can be used as a reference to measure improvement. These could take the form of time efficiency, or percentage increases or decreases in a specific action learned during the course. Find these data points and compare the numbers or attitudes before and after the training.
As instructional designers, we typically get this data from the participant’s line manager or team leader, either through qualitative feedback or as part of the organisation’s performance management process.
Use the numbers to improve the training in the specific areas where results and metrics were not met.
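A KPI comparison of this kind reduces to a simple baseline-versus-current calculation. As an illustrative sketch (the KPI, numbers and sample sizes are invented for the example), here is how a before/after change might be computed for average ticket handling time:

```python
# Hypothetical KPI snapshots before and after the course: average handling
# time (minutes) per support ticket, sampled across five working days.
before = [14.2, 15.0, 13.8, 16.1, 14.9]
after  = [12.1, 12.8, 11.9, 13.4, 12.5]

baseline = sum(before) / len(before)
current = sum(after) / len(after)

# Negative change means the team got faster after training.
change_pct = (current - baseline) / baseline * 100

print(f"Baseline: {baseline:.1f} min, after training: {current:.1f} min "
      f"({change_pct:+.1f}%)")
```

Remember that a shift like this only supports a Level 3 claim if other explanations (new tooling, staffing changes, seasonality) can be reasonably ruled out.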
Level 4 – Results: Does it support the business goals?
The fourth and final level is used to evaluate what business results have been achieved once the training goals were met. We are looking to answer: “How did the training impact the business?”
Did the training lead to sustained behaviour change, which in turn led to a measurable change in KPIs? As this level relates to general organisational results and overall business impact, it is typically measured with KPIs such as cost reduction, increased revenue, increased footfall, or increased customer satisfaction levels.
As instructional designers, we need to be able to show how eLearning courses impact business results. That said, Level 4 outcomes can be affected by many external factors, so they are most meaningful when the training’s contribution can be reasonably isolated.
Level 4 outcomes need to be defined in advance, as the training should be purposeful and aligned with the overall organizational goals.
Start at Level 4 and identify the results you want to achieve, then work backwards to Level 3 to think about what participants need to do on the job to achieve those results. And so on. This will make it easier to connect the training to organizational goals.