The above article is one of many to pick up on the outcomes of the first UK Higher Education TEF results. The standout piece of the story, for me, is that the measures being used to judge “teaching”, including:
- student satisfaction,
- drop-out rates,
- whether students go on to employment or further study after graduating.
are, as the article points out, “based on data and not actual inspections of lectures or other teaching.” Swap out “data” for “indicators” and you basically have the L&D model.
The Ofsted inspection of schools is, of course, more teaching focused but, even there, judgments of schools use other metrics. School teachers, for example, are expected to support “progress” that is influenced by factors beyond what they can immediately affect. The impact of other influences, like parenting, is not factored in.
Therefore, between Ofsted, TEF and L&D (via models like Kirkpatrick), we really do not seem to have cracked the nut of measuring how well we develop learning and improvement.
With TEF it feels like a missed opportunity to evaluate the quality of ‘traditional’ lecture-centric programmes versus more seminar-based or online models. Some included elements, such as student evaluation of facilities, are also surely difficult to interpret, considering most students will only know one HEI and thus have nothing to benchmark against. The cost of living in London presumably contributes to the poor perception of many London-centric institutions, including LSE.
So, beyond saying “well, universities haven’t cracked it either”, what can L&D departments learn? I’d be interested in hearing people’s thoughts. One item from me – with the growth of apprenticeships and accredited programmes, “training” is being reinvigorated but also reimagined with a performance focus and approaches like bitesize learning, away from the big “programmes”. Therefore, for me, the more metrics the merrier when trying to build a picture of our organisations.