Kirkpatrick has moved to the next level. RIP.

by Geoff Stead

Donald Kirkpatrick, pioneer of learning evaluation, died on 9th May. The team at WorkLearnMobile offer our deepest condolences to his family and followers.

For more than 50 years, Kirkpatrick’s measurement model has been widely adopted across the learning industry to evaluate the impact of training programs. In essence, this four-step model helps determine the success (and continued existence) of training initiatives by allowing learning professionals to evaluate learners’ experience at four levels.

The four levels of Kirkpatrick’s evaluation model are often explained as follows:

  1. Reaction – what participants thought and felt about the training (satisfaction; “smile sheets”)
  2. Learning – the resulting increase in knowledge and/or skills, and change in attitudes. This evaluation occurs during the training in the form of either a knowledge demonstration or test.
  3. Behavior – transfer of knowledge, skills, and/or attitudes from classroom to the job (change in job behavior due to training program). This evaluation would occur 3–6 months post training while the trainee is performing the job. Evaluation usually occurs through observation.
  4. Results – the final results that occurred because of attendance and participation in a training program (can be monetary, performance-based, etc.)
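As a rough illustration only, the four levels above can be modelled as a simple evaluation record. The class names, fields, and sample programs below are hypothetical, not part of Kirkpatrick’s work; the sketch just shows how a team might track which levels a given program has been evaluated at.

```python
from dataclasses import dataclass
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    REACTION = 1   # satisfaction ("smile sheets")
    LEARNING = 2   # knowledge/skill gain, measured during training
    BEHAVIOR = 3   # on-the-job transfer, observed 3-6 months later
    RESULTS = 4    # business outcomes (monetary, performance-based)

@dataclass
class Evaluation:
    program: str
    level: KirkpatrickLevel
    metric: str
    score: float

# Hypothetical evaluations collected for one training program
evals = [
    Evaluation("Onboarding 101", KirkpatrickLevel.REACTION,
               "avg satisfaction (1-5)", 4.2),
    Evaluation("Onboarding 101", KirkpatrickLevel.LEARNING,
               "post-test pass rate", 0.87),
]

# Which levels still lack any evidence?
covered = {e.level for e in evals}
missing = [lvl.name for lvl in KirkpatrickLevel if lvl not in covered]
print(missing)  # the levels not yet evaluated
```

A gap report like this mirrors a common criticism of the model in practice: most programs are evaluated at levels 1 and 2 and never reach levels 3 and 4.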

But what about mobile? And what about learning that isn’t classroom based? How does this fit with performance support?

At Qualcomm, we have a multi-app, multi-vendor approach. Our Employee AppStore is filled with many different apps, available to all employees and covering a diverse range of topics. Some are closely related to learning and training, while others are for performance support.

Here is how Kirkpatrick’s model influences our Mobile Learning team as we create mobile training resources and learning apps to support our employees:

  1. Reaction: In mobile, user experience is critical. We invest heavily in iterative prototyping, usability studies, and learner feedback sessions throughout our design process. This helps us understand learners’ reactions to the mobile content, the medium, and the learning styles it supports.
  2. Learning: Most of our apps are NOT courses, but rather tools to support you. For some specific apps (like QC Lingo, a gamified way to remember key facts) the learning gains are measured in the app itself, but most apps are used alongside parallel learning. To find out whether an app is being used as intended, the classroom “before and after training” criterion can be adapted: compare learners’ knowledge, skills, or attitudes before and after the mobile experience. This helps show whether the app is making a difference while learners perform a task.
  3. Behavior: Investigate whether the app supports users’ on-the-job performance and productivity. This is a vital factor because mobile works best right at the point of need, which is what drives the desired learning behaviour. We use detailed analytics to understand who is using our apps, at what time of day, and in which situations.
  4. Results: Identify the performance outcomes of using an app and map them back to business drivers like increased production levels or higher sales. This helps determine how successful an app is and offers a basis for thinking about ROI.
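The analytics-driven approach in the list above can be sketched in a few lines. The event names and their mapping to evaluation levels here are entirely hypothetical, not Qualcomm’s actual telemetry; the point is only that app usage events, tagged by level, give evidence for each stage of the model without a classroom.

```python
from collections import Counter

# Hypothetical app analytics events, mapped to evaluation levels.
# Both the event names and the mapping are illustrative assumptions.
EVENT_TO_LEVEL = {
    "feedback_submitted": "Reaction",
    "quiz_completed": "Learning",
    "lookup_on_the_job": "Behavior",
    "sales_task_closed": "Results",
}

# A stream of events as an analytics pipeline might receive them
events = [
    "quiz_completed", "lookup_on_the_job", "lookup_on_the_job",
    "feedback_submitted", "quiz_completed",
]

# How much evidence do we have at each level?
evidence = Counter(EVENT_TO_LEVEL[e] for e in events)
print(dict(evidence))
```

In this sample stream there is evidence for Reaction, Learning, and Behavior, but none yet for Results, which is typically the hardest level to instrument.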

Undoubtedly, Kirkpatrick laid the foundation for evaluating the effectiveness of training programs, but if you take a step back, his vision can also support meaningful evaluation of any flavour of training, regardless of the medium (classroom, elearning, or mobile learning).
