A key message from my recent Kirkpatrick program was to start with the “top of the mountain”. In this metaphor the top, the peak, the target to reach, is the organizational vision. Strategic learning programs exist, therefore, to help the org reach this vision and should be evaluated as such.
My reflection during the program was that this, of course, is common sense. We should be working to support the organizational goals. The challenge then becomes prioritizing multiple needs – so only by forecasting potential impact up front can prioritization be done correctly. And this is one of the areas where there is a misconception about KirkP – it should be about starting with the end in mind and working backwards (not just dealing with level 1 in a standard way and then carrying on from there).
In terms of evaluating success, LEO have recently discussed the role of learning analytics (LA). Now, like a lot of things in L&D, I would say the problem with LA is that it has meant different things to different people. One of the earliest examples I saw, sold as LA, was Starfish Solutions (SS), which had a clear solution and goal – use existing LMS/VLE data to improve student retention. SS makes perfect sense for organizations where educational outcomes and student retention are the organization’s objectives. I liked SS’s solution (in part discussed with them at the BBWorld Conference back in ’09) but it also faced the challenge that, for many university courses, there was/is less need for ‘big’ data solutions – lecturers know their students in traditional models. It only really made sense when talking about large-scale education – the problem then, again, is that ‘large scale’ means different things to different people 😉
The LEO article does a good job of articulating the problems I have always had with L&D impact – especially how to assess it when there are so many other uncontrolled variables. As mentioned in my previous post on the KirkP Cert, this was the main challenge I wanted clarity on from the course. The recommended KirkP approach – identifying multiple ‘indicators’ (signs that behaviors are on track for the desired result[s]) which, taken together, show a correlation – was a key learning point for me. In this model, therefore, we are building a ‘chain of evidence’ akin to a court of law – “data, information and testimonies at each of the four levels that, when presented in sequence, demonstrate the value obtained from a business partnership initiative”.
What I really liked about this is the clarity of the evidence/steps up the ladder/mountain, from bottom to top:
- Individual Outcomes
- Team Outcomes
- Departmental/Divisional/Unit Outcomes
- Customer/Client Response
- Organizational Outcomes
- Customer Satisfaction
- Market/Industry Response
It is this breakdown of the benefit chain that I will likely adopt in my existing project planning documents.
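To make the “indicators that show a correlation” point a little more concrete, here is a minimal sketch. All of the metric names and numbers are invented purely for illustration – they are stand-ins for whatever behaviour indicator and borrowed business metric a real programme would actually track, not anything prescribed by the Kirkpatrick materials:

```python
# Hypothetical sketch: correlating a Level 3 behaviour indicator with a
# Level 4 result indicator borrowed from the business. All figures invented.

from math import sqrt

behaviour_indicator = [42, 48, 55, 61, 66, 70, 74, 79]   # e.g. % of managers using a coaching model, monthly
result_indicator    = [81, 82, 84, 85, 87, 88, 90, 91]   # e.g. team retention rate (%), monthly

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

r = pearson(behaviour_indicator, result_indicator)
print(f"Correlation between behaviour and result indicators: r = {r:.2f}")
# A high r does not prove causation; it is one link in the chain of
# evidence, presented alongside Level 1/2 data and stakeholder testimony.
```

Nothing fancy, but that single correlation figure, presented alongside the other levels of evidence and testimony, is exactly the kind of link the chain-of-evidence approach asks for.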
Let L&D then be clear, as the KirkP course was: stop trying to demonstrate impact through limited (or impossible) claims of direct causation and instead work with correlations. I would say this is a much better approach than simply seeing “measuring learning impact as a waste of time and money”, as the LEO article notes many people argue.
Therefore, I would argue, let us (i.e. L&D professionals) worry less about learning analytics and more about organizational analytics (i.e. goals and metrics) that can be seen trending over time, and aim to see where our investments have an impact. As recommended in the KirkP programme, do not reinvent the wheel; borrow metrics from elsewhere, as stakeholders will already be using them and (should) understand the logic behind them. This should then allow us, as I’ve hinted at before, to worry not about ROI but instead (as recommended by KirkP) Return on Expectations.
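As a rough illustration of what “borrowing” a metric and watching it trend might look like in practice, here is another minimal sketch. The metric, the dates and the launch month are all assumptions for illustration, not anything taken from the KirkP or LEO material:

```python
# Hypothetical sketch: trending a borrowed organizational metric (an invented
# monthly customer-satisfaction score) before and after a learning initiative.

monthly_csat = {
    "2023-01": 7.1, "2023-02": 7.2, "2023-03": 7.0, "2023-04": 7.3,
    "2023-05": 7.2, "2023-06": 7.6, "2023-07": 7.8, "2023-08": 7.9,
    "2023-09": 8.1, "2023-10": 8.0,
}
initiative_launch = "2023-05"  # month the programme went live (assumed)

# Zero-padded "YYYY-MM" strings compare correctly in date order.
before = [v for month, v in monthly_csat.items() if month < initiative_launch]
after  = [v for month, v in monthly_csat.items() if month >= initiative_launch]

print(f"Average CSAT before launch: {sum(before) / len(before):.2f}")
print(f"Average CSAT after launch:  {sum(after) / len(after):.2f}")
# The trend is then reported against stakeholder expectations (Return on
# Expectations) rather than being converted into a financial ROI figure.
```

The point is not the arithmetic but the source of the numbers: a metric the stakeholders already own and trust, tracked over time, rather than a new L&D-only measure invented for the evaluation.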
So what do I think of KirkP following the course and certification? Well, I’d have to agree with the LEO suggestions:
- It’s better than doing nothing… which is what most organisations are doing.
- Think about what level of evidence will be good enough for you. As the Kirkpatricks have pointed out, a chain of evidence that would not stand up to scientific rigour may be enough to convict someone in a court of law. If a level of evidence is good enough for a court then it’s probably good enough to convince your board to invest in L&D.
- Good enough develops into great.
Just noticed Training Zone (seemingly in collaboration with Learning Pool) have a “hub” for analytics:
https://www.trainingzone.co.uk/hub/learning-analytics