Reflecting on: “Here’s why you’re failing to create a learning culture”

Another great article from Laura Overton and the Towards Maturity team got me thinking this week.  The article considers “five common mistakes” that can stifle a learning culture.

Below are some of my reflections on these points – both from my own experience and what I’ve read, seen at shows, conferences, etc.

  1. You don’t trust staff to manage their own learning
    • I totally agree that everyone needs to own their part in continuous improvement and the part learning plays in that.
    • We are doing a lot to empower managers to coach and facilitate their team’s development.  The challenges I see are twofold:
      1. people feel they are too busy to take this on.  I tend to feel people are ‘doing this already’ without perhaps realizing it; and…
      2. how can ‘day to day’ learning be amplified?  Amplification across silos is a particular challenge.
    • The “trust” point is an interesting one, as I wonder how many L&D organisations are happy to trust individuals on what they need (with the risk of verging into solution-centric models rather than analyzing issues) but not on how to spend money.  In some ways this is fair, as it is where L&D has a governance role to play – consistency, economies of scale and consistent outcomes with controlled pilots/innovation, etc.  However, there is the risk of being a blocker…
  2. You are stifling staff contribution
    • “91% of learners like being able to learn at their own pace and they are more than capable of searching for the information they need” – my experience would suggest people generally struggle to search and retrieve (information skills are limited and overload is a problem).  This is where information systems are key: L&D needs to be embedded with comms and KM, architecture is all important, and much depends on what is already in place for online internal profiles – for example, ESNs.
    • I would, though, agree with the main points: it is all important to get people to share what they find, and user-generated content is part of this – so too is getting people to feed back after external training or conferences.  The latter have been known issues for a long time and remain so, I presume, in most organisations from what I have seen and heard.
    • Perhaps the issue here is with “personal development planning” and career development more generally.  Yes, it is a personal journey and one which will become more personalized via analytics, customization and technology like Filtered.  However, the fundamental reason an organisation invests in you (be it funding or just funding your time away from work) is to see a performance improvement now or in the future (see Degreed for a definition of learning culture) – so do we drop the “personal” to stress that it is a co-investment?  We could say “performance improvement plan” but that sounds rather draconian, as if people are on their “final warning”.  Anyone out there got a better name?  Should “plans” just be dropped altogether for ongoing small-scale development?  Then what about required accreditations (which are not going away any time soon)?  Lots of issues here for the workplace in general beyond L&D departments – for example, how do you budget for these more flexible requirements?
  3. Your content is inaccessible
    • Yep, a real problem with the traditional model of hiding things from search via SCORM, etc.  This ties in with some of what I’ve written under ethos about trying to change L&D to an open web approach – do we really need to hide behind logins?  Often it’s about having everything in one place but that is, in part, due to poor architecture and a lack of hyperlinking.
    • There remains, to me, a question over how much the best content is inaccessible.  Yes, the open web hosts enough to get by on most topics but do we still need to licence from vendors larger libraries of nice solutions like getAbstract?  I would say yes, even if many publishers have gone to the wall in the digital age.  The challenge then remains what it has been for probably 20 years or more – federated search across multiple resources.
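The federated-search challenge can be sketched in code: query several content sources, then merge and dedupe the hits. Everything below (the source functions, the result shape, the scores) is invented purely for illustration – real sources would be an LMS, an ESN, a licensed library like getAbstract, the open web, etc.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical content sources: each returns (title, url, score) tuples.
def search_lms(query):
    return [("SCORM module: " + query, "lms://42", 0.6)]

def search_library(query):
    return [("Summary: " + query, "https://example.org/1", 0.9),
            ("SCORM module: " + query, "lms://42", 0.5)]  # duplicate hit

def federated_search(query, sources):
    """Query all sources concurrently, dedupe by URL, rank by score."""
    with ThreadPoolExecutor() as pool:
        hits = [h for result in pool.map(lambda s: s(query), sources)
                for h in result]
    best = {}
    for title, url, score in hits:
        # Keep the highest-scoring copy of each URL.
        if url not in best or score > best[url][2]:
            best[url] = (title, url, score)
    return sorted(best.values(), key=lambda h: -h[2])

results = federated_search("sepsis", [search_lms, search_library])
```

The hard part in practice is not the merging but normalizing relevance scores across providers that rank very differently – which is arguably why this has stayed unsolved for 20 years.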
  4. You take learning away from work
    1. Again my ethos page stresses the need to consider learning as work and work as learning.  I ran a session last week for people in my organisation who have formal “learning” responsibilities in their roles.  The interesting outcome of the session, which was the first such event and therefore deliberately navel-gazing about how we work (via me picking various articles and thought pieces from Jane Hart, Donald Taylor, Saffron Interactive and others), was our consideration of where we are on some of these spectrums.  Effectively a benchmarking reflection exercise for the wider group.  I still doubt many organisations are actively giving people such time to reflect on external learning and bring it back in a productive way to influence behavior.
    2. The growing importance in the UK given to apprenticeships is in some ways reinforcing problems here but also targeting learning at the workplace performance.  It remains to be seen if the government’s approach with the Levy can survive the Brexit fallout and other challenges.
  5. You don’t reward learning
    1. Agreed, this can be a major problem.  I’ve previously left organisations frustrated at a lack of opportunities to make use of my skills and I suspect many, many others have had this problem.  I recently spoke to a colleague who had even been through a formal development programme only to have no role to go into at the end – again, apprenticeships should help here, with the formal development options leading to rewards.
    2. Sharing success can be driven via internal comms channels and we’re also using a combination of Open Badges and competency models to drive recognition.

Overall some really interesting points to reflect on and try to tackle going forward!

Can L&D learn anything from The Teaching Excellence Framework (TEF) experience?

http://www.bbc.co.uk/news/education-40356423

The above article is one of many to pick up on the outcomes of the first UK Higher Education TEF results.  The standout piece of the story, for me, is that the measures being used to judge “teaching”, including:

  • facilities,
  • student satisfaction,
  • drop-out rates,
  • whether students go on to employment or further study after graduating.

are, as the article points out, “based on data and not actual inspections of lectures or other teaching.”  Swap out “data” for “indicators” and you basically have the L&D model.

The Ofsted inspection of schools is, of course, more teaching focused but, even there, judgments of schools use other metrics.  School teachers, for example, are expected to support “progress” that is influenced by factors beyond what they can immediately impact.  The impact of other factors, like parenting, is not accounted for.

Therefore, between Ofsted, TEF and L&D (via models like Kirkpatrick) we really do not seem to have cracked the nut of measuring how well we develop learning and improvement.

With TEF it feels like a missed opportunity to evaluate the quality of ‘traditional’ lecture-centric programmes versus more seminar-based or online models.  Some included elements, such as student evaluation of facilities, are also surely difficult considering most students will only know one HEI and thus have nothing to benchmark against.  The cost of London living presumably impacts on the poor perception of many London-centric institutions, including LSE.

So, beyond saying “well, universities haven’t cracked it either”, what can L&D departments learn?  I’d be interested in hearing people’s thoughts.  One item from me – with the growth of apprenticeships and accredited programmes, “training” is being reinvigorated but also reimagined with a performance focus and approaches like bitesize learning away from the big “programmes”.  Therefore, for me, the more metrics the merrier to try and build a picture of our organizations.

Learning Tech Summer Forum 2017 #LTSF17

I nearly did not take up the offer of a free ticket for this year’s conference as I was not hugely optimistic about the session line up – yes there were some great presenters but nothing that really stood out.

The introduction to the day, from Donald Taylor, said the summer event tries to be more conversation than presentation (unlike the winter conference) but I felt ‘the best of the usual suspects’ might be a more apt way of describing the line up.

In the end, I chose to take up the offer of the ticket, primarily, for the opening keynote from Dr Itiel Dror, who I have not seen present before.

Unfortunately come the big day, after being awoken at 3am by the ‘song’ of the local urban fox population, I set out somewhat wearily.  Conversations around the event were mixed, in part as I was needing a caffeine drip, but a couple of first timers that I spoke to really seemed to find it useful.  More regular visitors had the usual hit and miss feel it seemed, indeed one overheard conversation in the gents went as far as someone saying “f*ck that’s an hour of my life I’m not getting back”, [yikes!].  I think you can always take something away from a session though – even if it’s just a reminder/refresh.  I always remember a few years back walking through the ExCel conference center and overhearing a delegate from the Oracle show that was on (I think I was there for BETT that was running in parallel) explaining to a friend: “sh*t I’ve become the old guy in the corner who doesn’t know anything”.  If nothing else, at least going to conferences and other networking events should help you reflect where you are on that particular journey!

Personally I found the keynote excellent and other sessions/conversations around neuroscience interesting in so much as the industry seems to be seriously trying to take a more scientific approach to things – ending a model of being “naive” as the keynote described L&D – rather than just replication of old models/approaches with new tech.  The science in this space is increasingly amazing and there really is a world of research out there I feel like I’m still only scratching the surface of, for example, this podcast is fascinating on David Eagleman’s inventions around cognition.

Brief notes from the sessions I attended are below (as always I’ve not edited these much from OneNote so there will be obvious errors).  As always there are plenty of other good reviews/reflections online, including from Kate Graham, Learnovate and Unicorn.

Keynote

Focused on the presenter’s research into “real learning” – work that has been done with various groups including surgeons and nurses – with a two-day workshop converted into a really funny and informative one-hour session.  It focused on some tools that could be taken away, across three perspectives:

  1. acquire – aka need to understand/do
  2. memory – aka need to retrieve in the long run
  3. apply – aka use it back at workplace

The point being that, for the brain, these are different – albeit intertwined [see my previous post for some more on this].  How can we help the brain deal with having limited resources, tips included not wasting brain power on:

  • inconsistent navigation
  • pointless images
  • realism for its own sake – exaggerate distinctiveness instead: simulations don’t have to mirror reality, so focus on the features you want to emphasize.

Remember – brain is active.  Not a camera.  A few nice, quick, activities were run through to show how we presume and add our own meaning.  As the brain is active allow development over time – learning objectives are boring and not suitable as a result.

We are creatures of habit so, as we all probably know in learning, change is difficult and ‘relearning’ is more difficult than learning.  The brain is built not to like rewiring away from habit – this is why change is hard.  This relates to terms people will have heard like “plasticity”.  However, that is twofold: neuronal (‘hardware’) and cognitive (‘software’).

So can you teach an old dog new tricks?  Perhaps predictably the response was “it depends”.  The science behind this is about how the previous learning was ingrained – for example, you become very familiar with driving but can still switch relatively easily to driving on the other side of the road when on holiday.

There were some of the more familiar recommendations, and what you would expect, about encouraging the use of mental representations and chunking.  The latter at least seems to be one evidence-based item that is well ingrained in learning practice.  Again familiar to (I presume) most learning professionals was the stress put on the importance of learner motivation, but there were recaps of some good studies showing that financial reward can kill internal motivation whilst external motivation factors (such as KPIs) rarely work.  There was the valuable point that Training Needs Analyses are often just wrong – as they ignore motivation (the M of KISME, of course): if people do not believe in it, if they do not want to change, then there will be no change.  People have to be on board – it has to be important for them, not just the company.

Motivation killers we can all relate to include being forced to do learning from a LMS.  The argument for breaking this was to increase the tension/risk – treat as a “kick in the ass” or the “terror of error”, with the latter allowing for learning from your mistakes.  An example of a solution he helped design/support was for sepsis with Australian medics.  Misdiagnosis in this area is rife and training failed to improve it as people would diagnose sepsis if they knew it was a sepsis course.  Therefore, instead they set participants up to fail by ‘sabotage’.  A memorable learning experience was created by treating it as “Low Blood Pressure Training” in which medics would lose their patient to sepsis as they were not anticipating the correct ‘answer’.  This is a really great example of how to create a salient mental point.  Some other good examples were run through – for example how you tend to remember bad dates more than good ones!

Another medical example was at a hospital in Boston where he deliberately did not use the provided hand sanitizer.  He then challenged the clinicians he was with as to why they did not challenge him as they walked around – again it created an emotional situation, much more effective than the previous approach of static posters that people ignored.  Other examples can be more on the fun scale than the difficult and challenging – for example, kids playing a version of Twister where the floor play area is replaced by a colored map to teach geography.

Certainly plenty of things to consider in how we might do things differently.

David Kelly: The now and the next of learning and technology

A whizz through some of the tech that is impacting the way we live and learn.  It was deliberately high level after his winter conference session went deeper into AR/VR.

Resources from his blog will probably be more useful than me listing out what was covered.

I personally came along as I like David’s online stuff (loved his meme-ing L&D) and do not think I’ve seen him present before.  It was good to not have (m)any “gosh I’ve not heard of this” moments, the most standout bits really being:

  • Data in business and for decision making is changing: Whoever you are (not just L&D specifically), you need to be part of the conversation of what this means for your org.
  • Create experiences: Email phishing example from the room – similar to the sepsis idea in the keynote, send people spam emails and see how many open them: make people learn not to do something in a realistic way but via a safe scenario.
  • Mobile apps increasingly splitting between Motivating and Manipulating: how is your org encouraging people?
  • Curation: needs a purpose [i.e. don’t do it for the sake of it – yes, yes, yes!]
  • VR: Examples where we are starting to replicate old issues in the new medium – similar to how Second Life went wrong (lecture theatres in a virtual world, etc.). Mentioned as particularly emergent in healthcare, but other areas need to be careful.
  • Daqri helmets were new to me: huge possibilities here in remote support, work and AR spaces.
  • IoT: Interesting point about the value being in a combination of tools, not in any one IoT item.  An example could be a headset getting data from IoT devices, all interacting and IBM Watson powered. Some of this will lead to job elimination.

Fosway group: Making sense of the digital learning market

A useful reminder/recap of sensible practice (I even got on the mic at one point) via a number of surveys with people using the app/website to respond in real time:

  • UX needs to be considered in conversations.
  • Importance of search engine within systems should not be underestimated.
  • Make use of focus groups and end users.
  • Changing from massification to personalisation.
  • Partner with IT to ensure you are big enough to wag the tail.
  • Think about the transition from implementation project to day-to-day from day 1 (such as reliance on vendors and contractors – an implementation team will take questions more seriously if they are sticking around to live with the consequences!).
  • Articulate user personas and scenarios.
  • Useful point that somewhat went against some of my previous thoughts: Harness analytics for individual not organisation – use to make AI intelligent.
  • Included positioning of Fosway model:
Fosway innovation model

Fosway Group innovation model

  • Innovation plan (photo above) [I wondered if it was sensible to pilot, or wrong to look through a tech lens?].

Stella Collins: Mind shift – moving people to a positive learning state

  • Mind shift: Moving people to a positive mindset.
  • How to get the correct internal environment for learning.
  • You don’t have to be a neuroscientist but it helps with understanding, your own interest and your own designs (the below takeaway was very useful):
Brain related hormones, triggers and impact

Stella’s brain guide

  • So what can you do as a facilitator and designer?  The answer is lots:
    • movement can help,
    • you can’t learn new things whilst asleep (you do not learn languages by listening to tapes at night) but learning will automatically stick overnight (‘let me sleep on it’ is true),
    • be on edge to learn from experience (i.e. not repeat same old stuff),
  • Unhelpful states: Argumentative, avoiding, not looking, depressed.
  • There was a good bit on what happens in the brain with neurotransmitters: chemicals transmitting in brain.

She went on to ask tables to consider some scenarios, and then there was a debrief on how we can use the science in the above table and what we know [tends] to work:

  • Increasing curiosity: Dopamine. We often tilt our heads to the side when curious: if you hang posters at a tilt it can encourage more curiosity to look at them. Like slimming, it can make you more happy.
    • Curiosity (ideas from table): Personalize, small tasks, text to side, music, new information, suspense and stories, branching and differences. Exercise, almonds, bananas, motivation, reward. Link learning to gaming. Start conversations with questions, not answers. Guessing gets brain going (shown to promote long time learning). Click bait. Slow reveal. Escape rooms.
  • Increasing creativity to release dopamine, serotonin and oxytocin. Dark chocolate can trigger serotonin. Oxytocin can come from things like hugs. Alpha waves in the brain can help with creativity: dreams, walking, etc. – i.e. we are more creative when not thinking specifically about the task.
    • If you think creativity, there it will be. Set up a challenge and give responsibility. Get the audience to do the work.
  • Graveyard slot: Let people relax and rest in afternoon or get them doing something [i.e. think about what you want to encourage in the brain for relevant learning states].  Allow relaxation can be useful – people tend to stop questioning so this can be the opportunity to throw ideas into people in more didactic approaches.
    • Graveyard: Make clear what engagement expectation is. Could increase activity. Rewards/sugar – or activities. OR could relax, mindful, stretching, dark choc. Mindfulness could be in eLearning. Reflect, mind gym, team work – more reflect: Shorter videos to deliver messages. Choice and reflection time on what doing.
  • Long term memory: Glutamate, serotonin, cortisol.
    • Long term memory: Guide reflection, allow time and guided questions for reflection, sleep rooms in the organisation, spacing and intentional repetition.
  • One major challenge is that you do not know how things will trigger in other people.  You never know what states you are going to create.

Donald H Taylor: Your learning technology implementation checklist

  • Don admitted there are perennial problems and the same kind of things hit again and again: his book is part of trying to fix this.
  • Checklist provided in session today helps with people doing implementations but also to ask right questions.  Focus, as in book, is “processes and people over tech”.
  • Challenges include ownership, networks/infrastructure, varied people, how persuade usage, etc.
  • To be a success: It has got to be the right thing for the job and fit in with the environment and the people.
  • Checklist – mindset, skills, method (pic below):
Mindset, Skills and Methods for learning techs

Checklist for learning tech implementation – sorry for the awful photo!

  • Perspective grid (pic below): Nothing new – being connected key. People still not talking to organisation. Best implementations build on IT relationship – not create new relationship.
External v Internal and L&D v Wider Org factors

Key considerations

  • Need element of conscious incompetence. Get perspective of what need. Consider L&D role in organisation.
  • Need to be connected in org: Can even do network analysis. Get your ambassadors.
  • Nemawashi principle (it comes from preparing tree roots for tree transport): Talk around a topic and get people on board, rather than presenting something to them – get shared ownership in advance [this was great and one of my big takeaways for the day: that there’s a useful name for this!].
  • Need to be a performance consultant – do not be a ‘solutioner’ with the latest shiny thing.
  • Online focus groups can help: Strict time limit. One person facilitates, one person notes and records. Limited questions – get people on the call to trust (this is one of the bits of advice and examples mentioned in the book).
  • Six step implementation method [relatively logical].
  • Rallying cry to finish: St Paul’s story again. L&D enables individuals and organisations to fulfill their potential.
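The network-analysis point above can be made concrete: even a simple degree count over a “who talks to whom” edge list will surface candidate ambassadors for an implementation. The names and edges below are entirely invented for illustration; a real version would pull pairs from a survey or ESN interaction data.

```python
from collections import Counter

# Hypothetical "who talks to whom" pairs gathered from a survey or ESN data.
edges = [("Asha", "Ben"), ("Asha", "Cara"), ("Asha", "Dev"),
         ("Ben", "Cara"), ("Eve", "Dev")]

# Degree: how many connections each person has in the network.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The most-connected people are natural ambassador candidates.
ambassadors = [name for name, _ in degree.most_common(2)]
```

For anything beyond a toy dataset you would likely reach for a dedicated library (e.g. networkx) and richer measures like betweenness centrality, but the principle – find the connectors before you launch – is the same.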
Learning Technologies book from Donald H Taylor

Thoughts on: “Learning Technologies in the Workplace”

So after picking up a copy a while back I’ve now had a skim through Donald Taylor’s book and thought I would capture a few thoughts here:

  1. I really like that it goes back to the origins of some of our key concepts (e.g. eLearning and technologies).  No doubt due to my history studying background, I have a soft-spot for books that consider historical perspectives.
  2. It does a nice job of linking those historical issues to the current state of play; with recommendations from and for the usual suspects: Jennings, Harrison, etc.
  3. It feels like the kind of book that could become somewhat seminal – the kind of history/good practice balance that often acts as an entry point for people coming into an industry (or, in this case, HR generalists up-skilling in this area).  What makes me say this are various, perhaps unintentional, attempts to establish standards – such as a move for the use of ‘e-learning’ over ‘eLearning’ and other variations.  I know that example is basic semantics but it is indicative of the industry that such things have never really been agreed – I’ve certainly tended to always use eLearning.  A lot of Don’s webinars/presentations around the book’s launch have stressed that this kind of text has never really been done before for learning tech, and the question really for him in authoring it was “why not?”.  My view would be that it’s just been presumed you can pick up bits and pieces from conferences, blogs, etc. rather than needing a ‘go to’ text.  I am certainly going to treat it as such and pass my copy around my team!
  4. The book adopts the approach of Clive Shepherd in using e-learning as the generic term, under which includes the traditional self study model but also virtual classrooms, social tools, etc.  Personally I prefer ‘online’ or ‘digital’ as the umbrella, under which ‘click next’ style content is what we call ‘e-learning’.  Again it is semantics but you do often get misunderstandings if you are not explicit – for example, a static PPT file is IMO a resource (or ‘piece of content’) not eLearning [oops there I go again].
  5. The book also makes the point that much of “learning” technology is really about being inventive with workplace and commercial tech.  This includes categorizations such as those in the below image.  Personally this is an area that has always interested me – the scope to be more productive and innovative with tools beyond their initial design, whilst avoiding what the book refers to as “magazine management” (i.e. just running with the latest ideas without proper analysis).
  6. [Photo: the book’s categorization of workplace and commercial technologies]
  7. The introduced APPA model (an aim; a people focus; a wide perspective; a pragmatic attitude) is sensible and gives a structure to the case studies and main arguments.  Obviously there are lots of other ways you could classify successful projects but its a useful mnemonic.
  8. There are a few big jumps in logic – for example, the section on justifying the aim/evaluating through metrics is strong, but then there is this statement: “If the Hawthorne effect can be accounted for, and if we have historical data and a control group, then ROI should be calculable in instances where we can calculate the value of the work of the employees concerned”.  Fine in principle, but easier said than done for many people!  The chapter goes on to a pet peeve of mine – people using “ROI” as a term when it’s not actually ROI – and it’s good to see Don call this out.
  9. The classification of typical aims very nicely simplifies the complexities of different (particularly LMS) projects as being one of the below (with appropriate metrics):
    1. “Organisational infrastructure – Effective business-as-usual, risk avoidance, compliance
    2. More efficient L&D delivery – Cost savings in L&D , reduced admin, faster delivery
    3. More effective learning – Faster time to competence, better retention
    4. Part of organizational change – Defined by the sponsors of change”
  10. The above is a really nice way to consider whether the LMS is more than compliance (point 1), through to fulfilling options such as being THE social platform in an organisation (an example of cross-organisation change, like point 4).
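The ROI caveat in point 8 can be made concrete with a worked example. All the figures below are invented; the point is simply that ROI needs a measured benefit net of the control group, not just a cost line – which is exactly why it is “easier said than done”.

```python
# Invented figures for a training programme evaluated against a control group.
programme_cost = 50_000       # total cost of the intervention
trained_value_gain = 120_000  # measured uplift in work value, trained group
control_value_gain = 40_000   # uplift the control group showed anyway

# Only the uplift beyond the control group is attributable to training
# (this is the step that, in principle, controls for the Hawthorne effect).
net_benefit = trained_value_gain - control_value_gain

# Classic ROI formula: (net benefit - cost) / cost, as a percentage.
roi_percent = (net_benefit - programme_cost) / programme_cost * 100
```

With these made-up numbers the programme returns 60% – but note that every input assumes you can put a monetary value on the work of the employees concerned, which is the hard part in practice.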

To summarize, the book reads a little like a “greatest hits” album – a compilation of my 10ish years of going to Learning Technologies shows and LSG webinars – with Don calling on his experience as chair to mention players like Jane Hart and their contributions to the industry (such as her top tools for learning) as well as key concepts of good practice.

Overall, it is a great primer on development within, and of, organisations, covering introductions to Performance Consulting, Agile, network analysis and more – not just learning tech [which of course is the point – learning tech cannot survive if just acting in a ‘learning bubble’].  I also attended his session at the Summer Forum and will post my notes on that one soon.  There is even more from Don on topics around the book on this podcast.

CGS Corporate Learning Trends, Observations, & Prediction 2017

Notes:

  1. A decent report based on a survey of L&D leaders.
  2. The headline takeaway is that digital has truly arrived, with increased use of video, mobile, social and micro formats.  Perhaps more interesting is the strong intention to use “instructors” to the same degree (c.65%) or more (10%), suggesting a general increase in the mix/blend rather than wholesale shifts.
  3. The “greatest challenge” was budget (surprise surprise) with 47% reporting this issue.
  4. When considering the metrics to justify that budget – “employee engagement” was identified as the “most important”.  I wonder how much this is due to the fact that employee engagement surveys are relatively easy?  The KP programme would argue for combining different metrics and, indeed, the catch all of “business metrics” was a close second to EE in the survey response list.  A separate quote from Jack Welch reverts to three measures for overall company performance: employee engagement, customer satisfaction and cash flow.
  5. I had more issues with the question “So how do L&D professionals get employees excited about learning?” being answered with “by giving them what they want”.  The report here is talking about “speed, efficiency, relevance and usability” and that’s all well and good, but only if it actually helps improve their performance.  We can give wonderful ‘development’ experiences, that people want, that are completely irrelevant and fail to stop someone being made redundant a few weeks down the line.
  6. There’s a run-out for the old quote (included below).  I’d say there are lots of issues with this, including:
    1. retraining might be needed to solve original training failures,
    2. the upgrade needs a clear as-is/to-be message,
    3. development in this context = performance and outcomes?

“People need maintenance and upgrades even more than machines do.  Retraining is maintenance.  Training is an upgrade.  Development is the next generation model.”