Can L&D learn anything from The Teaching Excellence Framework (TEF) experience?

http://www.bbc.co.uk/news/education-40356423

The above article is one of many to pick up on the results of the first UK Higher Education TEF exercise. The standout part of the story, for me, is that the measures being used to judge “teaching”, including:

  • facilities,
  • student satisfaction,
  • drop-out rates,
  • whether students go on to employment or further study after graduating.

are, as the article points out, “based on data and not actual inspections of lectures or other teaching.” Swap out “data” for “indicators” and you basically have the L&D model.

The Ofsted inspection of schools is, of course, more teaching focused but, even there, judgments of schools use other metrics. School teachers, for example, are expected to support “progress” that is influenced by factors beyond what they can immediately impact. The effect of other factors, like parenting, is not factored in.

Therefore, between Ofsted, TEF and L&D (via models like Kirkpatrick) we really do not seem to have cracked the nut of measuring how well we develop learning and improvement.

With TEF it feels like a missed opportunity to evaluate the quality of ‘traditional’ lecture-centric programmes versus more seminar-based or online models. Some included elements, such as student evaluation of facilities, are also surely difficult to interpret, considering most students will only know one HEI and thus have nothing to benchmark against. The cost of London living presumably feeds into the poor perception of many London-centric organisations, including LSE.

So, beyond saying “well, universities haven’t cracked it either”, what can L&D departments learn? I’d be interested in hearing people’s thoughts. One item from me – with the growth of apprenticeships and accredited programmes, “training” is being reinvigorated but also reimagined, with a performance focus and approaches like bite-size learning moving away from the big “programmes”. Therefore, for me, the more metrics the merrier to try and build a picture of our organizations.

[Image: ‘Learning Technologies in the Workplace’ book by Donald H Taylor]

Thoughts on: “Learning Technologies in the Workplace”

So after picking up a copy a while back I’ve now had a skim through Donald Taylor’s book and thought I would capture a few thoughts here:

  1. I really like that it goes back to the origins of some of our key concepts (e.g. eLearning and technologies). No doubt due to my background studying history, I have a soft spot for books that consider historical perspectives.
  2. It does a nice job of linking those historical issues to the current state of play, with recommendations from and for the usual suspects: Jennings, Harrison, etc.
  3. It feels like the kind of book that could become somewhat seminal – the kind of history/good-practice balance that often acts as an entry point for people coming into an industry (or, in this case, HR generalists up-skilling in this area). What makes me say this are various, perhaps unintentional, attempts to establish standards – such as a move towards ‘e-learning’ over ‘eLearning’ and other variations. I know that example is basic semantics, but it is indicative of the industry that such things have never really been agreed – I’ve certainly tended to always use eLearning. A lot of Don’s webinars/presentations around the book’s launch have stressed that this kind of text has never really been done before for learning tech, and the question for him in authoring it was really “why not?”. My view would be that it’s just been presumed you can pick up bits and pieces from conferences, blogs, etc. rather than needing a ‘go-to’ text. I am certainly going to treat it as such and pass my copy around my team!
  4. The book adopts the approach of Clive Shepherd in using e-learning as the generic term, under which sit the traditional self-study model but also virtual classrooms, social tools, etc. Personally I prefer ‘online’ or ‘digital’ as the umbrella, under which ‘click next’ style content is what we call ‘e-learning’. Again it is semantics, but you do often get misunderstandings if you are not explicit – for example, a static PPT file is IMO a resource (or ‘piece of content’), not eLearning [oops, there I go again].
  5. The book also makes the point that much of “learning” technology is really about being inventive with workplace and commercial tech. This includes categorizations such as those in the below image. Personally this is an area that has always interested me – the scope to be more productive and innovative with tools beyond their initial design, while avoiding what the book refers to as “magazine management” (i.e. just running with the latest ideas without proper analysis).
  6. [Image: the book’s categorizations of workplace and commercial technology]
  7. The introduced APPA model (an aim; a people focus; a wide perspective; a pragmatic attitude) is sensible and gives a structure to the case studies and main arguments. Obviously there are lots of other ways you could classify successful projects, but it’s a useful mnemonic.
  8. There are a few big jumps in logic – for example, the section on justifying the aim/evaluating through metrics is strong, but then comes this statement: “If the Hawthorne effect can be accounted for, and if we have historical data and a control group, then ROI should be calculable in instances where we can calculate the value of the work of the employees concerned”. The statement is fine but easier said than done for many people (see the rough worked example after this list)! The chapter goes on to a pet peeve of mine – people using “ROI” as a term when it’s not actually ROI – and it’s good to see Don call this out.
  9. The classification of typical aims very nicely simplifies the complexities of different (particularly LMS) projects as being one of the below (with appropriate metrics):
    1. “Organisational infrastructure – Effective business-as-usual, risk avoidance, compliance
    2. More efficient L&D delivery – Cost savings in L&D, reduced admin, faster delivery
    3. More effective learning – Faster time to competence, better retention
    4. Part of organizational change – Defined by the sponsors of change”
  10. The above is a really nice way to consider whether the LMS is more than compliance (point 1), through to fulfilling roles such as being THE social platform in an organisation (an example of cross-organisation change, like point 4).
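
To make the caveat in point 8 concrete, here is a rough, purely hypothetical sketch of that ROI arithmetic. Every figure below is invented, and the genuinely hard part (as the book acknowledges) is producing a credible monetary value for the employees’ work in the first place.

```python
# A rough, illustrative ROI calculation in the spirit of the book's caveat:
# it only works if you can already monetise the employees' output and have
# controlled for the Hawthorne effect. All figures below are hypothetical.

programme_cost = 20_000           # total cost of the learning programme (GBP)
baseline_output_value = 250_000   # value of the work before, from historical data (GBP)
post_output_value = 285_000       # value of the work after, versus a control group (GBP)

benefit = post_output_value - baseline_output_value        # monetised improvement
roi_percent = (benefit - programme_cost) / programme_cost * 100

print(f"Net benefit: £{benefit - programme_cost:,}")       # £15,000
print(f"ROI: {roi_percent:.0f}%")                           # 75%
```

The arithmetic itself is trivial; isolating that uplift from everything else going on in the business, with historical data and a control group, is the “easier said than done” part.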

To summarize, the book reads a little like a “greatest hits” album: a compilation of my 10ish years of going to Learning Technologies shows and LSG webinars, with Don calling on his experience as chair to mention players like Jane Hart and their contributions to the industry (such as her Top Tools for Learning), as well as key concepts for good practice.

Overall, it is a great primer on development within and of organisations, covering introductions to Performance Consulting, Agile, network analysis and more – not just learning tech [which of course is the point – learning tech cannot survive if it just acts in a ‘learning bubble’]. I also attended his session at the Summer Forum and will post my notes on that one soon. There is even more from Don on topics around the book in this podcast.

CGS Corporate Learning Trends, Observations & Predictions 2017

Notes:

  1. A decent report based on a survey of L&D leaders.
  2. The headline takeaway is that digital has truly arrived, with increased use of video, mobile, social and micro formats. Perhaps more interesting is the strong intention to use “instructors” to the same extent (c.65%) or more (10%), suggesting a general increase in the mix/blend rather than a wholesale shift.
  3. The “greatest challenge” was budget (surprise surprise) with 47% reporting this issue.
  4. When considering the metrics to justify that budget, “employee engagement” was identified as the “most important”. I wonder how much this is due to the fact that employee engagement surveys are relatively easy? The KP programme would argue for combining different metrics and, indeed, the catch-all of “business metrics” was a close second to EE in the survey response list. A separate quote from Jack Welch points to three measures for overall company performance: employee engagement, customer satisfaction and cash flow.
  5. I had more issues with the question “So how do L&D professionals get employees excited about learning?” being answered with “by giving them what they want”. The report here is talking about “speed, efficiency, relevance and usability”, which is all well and good, but only if it actually helps improve their performance. We can give wonderful ‘development’ experiences that people want but that are completely irrelevant and fail to stop someone being made redundant a few weeks down the line.
  6. The old quote (included below) gets another run-out. Lots of issues with this, I’d say, including:
    1. retraining might be to solve original failures,
    2. the upgrade needs a clear as-is/to-be message,
    3. development in this context = performance and outcomes?

“People need maintenance and upgrades even more than machines do.  Retraining is maintenance.  Training is an upgrade.  Development is the next generation model.”

“Learning analytics”: a red herring?

A key message from my recent Kirkpatrick program was to start with the “top of the mountain”.  In this metaphor the top, the peak, the target to reach, is the organizational vision.  Strategic learning programs are, therefore, helping the org reach this vision and should be evaluated as such.

My reflection during the program was that this, of course, is common sense. We should be working to support the organizational goals. The challenge then becomes prioritizing multiple needs – only by forecasting potential impact up front can prioritization be done correctly. And this is one of the areas where there is a misconception about KirkP – it should be about starting with the end in mind and working backwards (not just dealing with Level 1 in a standard way and then carrying on from there).

In terms of evaluating success, LEO have recently discussed the role of learning analytics (LA). Now, like a lot of things in L&D, I would say the problem with LA is that it has meant multiple things to different people. One of the earliest examples I saw, sold as LA, was Starfish Solutions (SS), which had a clear solution and goal – use existing LMS/VLE data to improve student retention. SS makes perfect sense for organizations where educational outcomes and student retention are the organization’s objectives. I liked SS’s solution (in part discussed with them at the BbWorld conference back in ’09) but it also faced the challenge that, for many university courses, there was/is less need for ‘big’ data solutions – lecturers know their students in traditional models. It only made real sense when talking about large-scale education – the problem then, again, is that ‘large scale’ means multiple things to different people 😉
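
Purely as an illustration of that Starfish-style idea (and emphatically not their actual model), here is the kind of simple, indicator-based flag you could build from routine LMS/VLE data; the field names, thresholds and rule below are my own hypothetical choices.

```python
# Hypothetical sketch: flag potentially at-risk students from routine LMS/VLE
# activity data. Field names, thresholds and the rule itself are invented for
# illustration and are not Starfish Solutions' actual approach.

from dataclasses import dataclass

@dataclass
class StudentActivity:
    student_id: str
    logins_last_30_days: int
    assignments_submitted: int
    assignments_due: int

def at_risk(s: StudentActivity) -> bool:
    """Low engagement or falling behind on assessed work triggers a follow-up."""
    submission_rate = s.assignments_submitted / max(s.assignments_due, 1)
    return s.logins_last_30_days < 5 or submission_rate < 0.5

cohort = [
    StudentActivity("s001", logins_last_30_days=22, assignments_submitted=4, assignments_due=4),
    StudentActivity("s002", logins_last_30_days=3, assignments_submitted=1, assignments_due=4),
]

print([s.student_id for s in cohort if at_risk(s)])  # -> ['s002']
```

At the scale of a single seminar group this adds nothing a lecturer does not already know; across tens of thousands of enrolments it is exactly the kind of thing ‘big’ data solutions are for – which is the ‘large scale’ point above.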

The LEO article does a good job of articulating the problems I have always had with L&D impact – especially how to assess it when there are so many other uncontrolled variables. As mentioned in my previous post on the KirkP Cert, this was the main challenge I wanted clarity on from the course. The recommended KirkP approach is to identify multiple ‘indicators’ (suggesting behaviors are on track for the desired result[s]) that can show a correlation – a key learning point for me. In this model, therefore, we are building a ‘chain of evidence’ akin to a court of law – “data, information and testimonies at each of the four levels that, when presented in sequence, demonstrate the value obtained from a business partnership initiative”.

What I really liked about this is the clarity of the evidence/steps up the ladder/mountain, from bottom to top:

  1. Individual Outcomes
  2. Team Outcomes
  3. Departmental/Divisional/Unit Outcomes
  4. Customer/Client Response
  5. Organizational Outcomes
  6. Customer Satisfaction
  7. Market/Industry Response

It is this breakdown of the benefit chain that I will likely adopt in my existing project planning documents.

Let L&D then take the message the KirkP course made clear: stop trying to tackle issues through limited/impossible direct causation and instead work with correlations. I would say this is a much better approach than simply seeing “measuring learning impact as a waste of time and money”, as the LEO article mentions many people argue.
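
As a hypothetical illustration of working with correlations rather than causation, the sketch below checks whether a Level 3 behaviour indicator moves with a business metric borrowed from stakeholders; the monthly figures are invented.

```python
# Hypothetical sketch: does a leading 'indicator' track an organisational
# outcome over time? A strong correlation becomes one link in the chain of
# evidence; it is not, on its own, proof of causation. Figures are invented.

from statistics import correlation  # Pearson correlation, Python 3.10+

# % of sampled staff observed using the target behaviour each month (Level 3 indicator)
behaviour_indicator = [22, 30, 41, 48, 55, 63]

# Customer satisfaction score for the same months (a metric borrowed from stakeholders)
customer_satisfaction = [71, 72, 75, 78, 80, 83]

r = correlation(behaviour_indicator, customer_satisfaction)
print(f"Indicator vs outcome correlation: {r:.2f}")  # close to 1.0 for this invented data
```

Trend both over time, alongside the other levels of evidence, and you are building a picture rather than chasing a single causal number.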

Therefore, I would argue, let us (i.e. L&D professionals) not worry about learning analytics but instead focus on organizational analytics (i.e. goals and metrics) that can be tracked as trends over time, aiming to see where our investments have an impact. As recommended in the KirkP programme, do not reinvent the wheel: borrow metrics from elsewhere, as they will already be used by stakeholders and those same stakeholders (should) understand the logic. This should then allow us, as I’ve hinted at before, not to worry about ROI but instead (as recommended by KirkP) Return on Expectations.

So what do I think of KirkP following the course and certification?  Well I’d have to agree with the LEO suggestions:

  1. It’s better than doing nothing… which is what most organisations are doing.

  2. Think about what level of evidence will be good enough for you. As the Kirkpatricks have pointed out, a chain of evidence that would not stand up to scientific rigour may be enough to convict someone in a court of law. If a level of evidence is good enough for a court then it’s probably good enough to convince your board to invest in L&D.

  3. Good enough develops into great.

Microsoft Teams: The platform we’ve been waiting for?

What is a ‘learning platform’? It, perhaps, needs to support behavior change and knowledge sharing. Therefore, it has been good to try out what Microsoft have launched with Teams this week and to think about how it might be used.

Now, it could simply be used for team communication and sharing. However, my mind has wandered to how it might work as more of an ESN/LMS if you went for a topic focus – creating open/public teams per topic where the business feels it has a need.

Now there are possible problems – not least that a rather unhelpful banner prompts you to download a desktop app. Hi Microsoft – it’s 2017 calling, where is the mobile app prompt?!

[Image: Microsoft Teams desktop download prompt]

As for Microsoft – this might be the way to add structure to your sharing of documents and conversations. However, there are clearly problems with how this should work alongside Delve, Yammer and other options.

So what about the LMS? Well, there has, of course, been the “LMS/VLE is dead” narrative for a while; add to this a renewed discussion around disruption. Therefore, can Teams act in place of the LMS – for example, as a “learning experience platform”? Whilst you could argue with a lot of that article, this piece certainly resonates:

A disruptive change has occurred. Companies no longer look at their LMS as the core of their learning infrastructure. It’s now the back-end, and they are searching for a new employee experience, which demands a new set of tools.

There are many exciting things happening in the learning technology space: tools like Workplace by Facebook, Slack, and Skype are becoming enterprise-class, and these tools will likely become primary destinations for learners too. Now we need a new class of learning platforms that bring all this content together, deliver it in a compelling way, and give us the social and mobile experience we use every day throughout our life at home.