UKeIG: Digital Literacy in the Workplace

This one-day workshop really got me thinking, and my thoughts (as articulated below) are probably still not very tidy.

What does being ‘digitally literate’ even mean?  What does digital literacy look like?  What does it mean to different industries/sectors?  How does it compare to Information Literacy?

Perhaps predictably for a CILIP group event, the first couple of presentations were quite focused on Information Literacy [in the SCONUL sense], and the day did continue to think a lot about electronic resources and e-information.  This said, it did highlight how different people have different views on DL; mine, for example, would be more in line with the Belshaw model than with how information professionals might consider the topic [note I tend not to call myself an info pro anymore!].

The day’s presentations covered key activities related to the topic.  My interest in attending was particularly around the training of ‘clients’ (although a number of delegates made the point of not calling it ‘training’, to increase engagement), to up-skill staff and students (the latter for the large number of delegates working in education).  The “don’t call it training” advice will be well known to L&D folks, and Wendy Foster’s session on the City Business Library made the point perfectly: it should be outcomes/WIIFM focused, i.e. not “database training” but “creating business-to-business contacts”.  eLearning was also mentioned as increasingly important for library/information professionals – and I made the point on Twitter that some of us have moved away from the ‘traditional’ profession via this route.


Personally, when I think about digital literacy, I’m thinking digital competency and capability.  This includes how people can be encouraged to be open to technological change and to continue developing their knowledge and skills, both within the requirements of their role and for possible future needs.  Indeed, in the initial brainstorm of what it meant for us, I made the point of saying that it really can mean anything and everything.  I continued by arguing that we need to “get on with it” rather than worrying about definitions, in a similar way to how L&D faffed about with what “coaching” meant, only for people to go ahead and crack on with it (in various guises).

The different perceptions, semantics and language used around the topic continued to come up throughout the day, and I couldn’t help but feel that businesses have adopted “digital transformation” as a buzzword, largely via IT Services, whilst a lot of professions have been left behind.  This is an interesting one for libraries/information considering eLib was a very ‘early’ series of service transformations (again for education – and a key part of my MA dissertation) that arguably (at least in my dissertation) was not followed through, or at least not maintained.  eLib, however, is largely the cause of the LMS language divide between the workplace (where LMS means a learning management system) and UK higher ed (where it means a library management system – hence the preference for ‘VLE’ over ‘LMS’).  Anyways, I’m getting waylaid by semantics and history (as I tend to be)…

The day considered various pieces of research, such as the ‘Google Generation’, which got me thinking about laziness, ‘busy-itis’ and other factors which might be as important as UI/UX decisions.


A couple of sessions referenced Information Literacy in the Workplace by Marc Forster.  I don’t think I’ve ever looked at this [at c.£50 (it’s a Facet book, after all) I’m unlikely to], nor at the also-referenced Information Literacy Landscapes by Lloyd.  Overall there remained a feeling that we were talking about a narrow subset of the digital skills I would consider people need.  I quite liked this model when reflecting on the day and Googling alternatives, and, for workplaces aligning to the apprenticeship standards, perhaps functional skills frameworks are the standard to be applied.

The JISC session nicely considered the wider issues (Flexing our digital muscle: beyond information literacy) but, unsurprisingly again, was very HE orientated – their model of “digital capability”, however, could be flexed for other environments.  Is the model of creation, problem-solving and innovation (in addition to an information focus) the way to go when thinking about digital skills – i.e. should they just be embedded at appropriate (Bloom’s taxonomy?) levels of technical capability?

Overall, there is a huge impact on productivity from information overload, a lack of digital skills and related issues.  If we (as in our organisations and the UK overall) are to improve, perhaps we need to recognise this and invest in people for longer-term impact and improvement.  Whilst one session, correctly, pointed out that work is about “KPIs not coursework”, that is also an oversimplification.  As required skills are changed by technology, the knowledge, skills and behaviours will change and be reinforced.  In terms of quick wins, the starting point may well be developing some shared vocabulary within your own organisation, which you can then use to support people.

Can L&D learn anything from The Teaching Excellence Framework (TEF) experience?

http://www.bbc.co.uk/news/education-40356423

The above article is one of many to pick up on the outcomes of the first UK Higher Education TEF results.  The standout piece of the story, for me, is that the measures being used to judge “teaching”, including:

  • facilities,
  • student satisfaction,
  • drop-out rates,
  • whether students go on to employment or further study after graduating.

are, as the article points out, “based on data and not actual inspections of lectures or other teaching.”  Swap out “data” for “indicators” and you basically have the L&D model.

The Ofsted inspection of schools is, of course, more teaching focused but, even there, judgments of schools use other metrics.  School teachers, for example, are expected to support “progress” that is influenced by factors beyond what they can immediately impact.  The impact of other factors, like parenting, is not factored in.

Therefore, between Ofsted, TEF and L&D (via models like Kirkpatrick) we really do not seem to have cracked the nut of measuring how well we develop learning and improvement.

With TEF it feels like a missed opportunity to evaluate the quality of ‘traditional’ lecture-centric programmes versus more seminar-based or online models.  Some included elements, such as student evaluation of facilities, are also surely difficult considering most students will only know one HEI and thus have nothing to benchmark against.  The cost of London living presumably impacts the poor perception of many London-centric institutions, including LSE.

So, beyond saying “well, universities haven’t cracked it either”, what can L&D departments learn?  I’d be interested in hearing people’s thoughts.  One item from me – with the growth of apprenticeships and accredited programmes, “training” is being reinvigorated but also reimagined with a performance focus and approaches like bitesize learning away from the big “programmes”.  Therefore, for me, the more metrics the merrier to try and build a picture of our organisations.

RefME as an example for viral apps

Do you remember Harvard referencing and building lengthy bibliographies during your student days?  If yes, did you find it time consuming?  Yep, thought so.  Did you ever use the techniques again (presuming you’ve not continued in academia)?  No?  Didn’t think so.

Even if we’re kind to academic referencing it is, at best, a necessary evil to show the development of research skills and the correct attribution of ideas (i.e. to avoid plagiarism).  I’ve been to two events this week: the first, on Adobe products, will get a longer post, but the second, on RefME, showed how a tool can go viral with users if it is really well targeted at solving an actual problem.

RefME has built a following of more than one million users more quickly than Facebook or Twitter did – all thanks to the humble citation!  Why?  Well, those negative experiences of citation and bibliography building are finally being tackled by this very easy-to-use app.

When I studied there were some tools in this space, some institutionally backed, but none as easy as RefME appears from the demos at the #BLEevent.  Overall, I took away a key message here – focus on your audience (whoever they may be) and their challenges/frustrations.  If you are facing a lack of adoption with your corporate/institutional technology then it is probably fair to presume that it is (a) not easy enough to use and/or (b) not solving a problem that is felt keenly enough.

It will be interesting to see if any institutions opt to stand against it as a way of students ‘cheating’ by not having to spend hours formatting their own lists … not to mention all the librarians who will have another thing they ‘own’ taken away from them by technology.

CILIP Update March 2015: Digital workplaces and metrics

I tend to skim CILIP Update magazine when on the train.  This month, a couple of articles jumped out – both felt like they needed a bit more reflection.

The first (“Information management leaders – we have work to do”) would sound familiar to a lot of support services (such as HR, L&D, etc.).  The article argues for a strategic future for information management as CILIP’s IM Project progresses.  It mentions major trends, such as Big Data, and ends with a summary of the role in that “most of our information management enhancing [sic] the digital workplace”.  This will sound similar to some of my posts here and what I’ve argued elsewhere.  In the article the focus is on being “strategic information advisors” ensuring “easy to access, relevant and valid information”, but this could be swapped, depending on your personal focus, for learning and other areas.  Personally, the blurring of these areas is becoming such that we should perhaps be looking at the skills, such as those related to mobile development and data analysis, rather than the expertise background.  Unfortunately this would involve a lot of organisational transformation, and challenges for organisations, such as CILIP, created around ‘professions’.

“Using metrics to demonstrate the value of your service” was the second article, looking at some of the automatic statistics and more qualitative approaches that can be used.  This harked back, to some extent, to a previous event I attended.

I recently argued with a colleague that, in L&D, we will fail if we are seen as ‘a breed apart’.  People will learn in their own ways.  There is perhaps a need for support services to be the ‘go-to experts’ – after all, we’ve seen what ‘learning’ looks like to some people – but so embedded that this is just part of common practice.  The centralised, decentralised, embedded, etc. arguments will rage on for support services, but if we operate in a digital workplace environment then that blurring may help us concentrate on the support rather than the service.

The inevitable backlash to ‘curation’

One of the popular terms of the last eighteen months or so, both on the wider web and specifically in L&D circles, has been ‘curation’ – indeed I mentioned it back in August 2013.

Well, inevitably, the backlash has begun – or at least the backlash against people who “don’t get it”.  Ultimately my take on this has not really changed…

Curation is nothing new.

Directories drove the early web until search improved.  We now see ‘live’, largely automated, directories aggregating content on an ongoing basis – albeit at the risk of rehashing old ideas and not moving the conversation forward.  Quality curation is one way to raise, above the noise, genuinely new insight, research, data, etc.

Information skills are essential to any non-automated approach, and there is certainly an argument that, where ‘time is money’, some level of automated curation (as part of a personal learning and information system) could be supplemented by people in your organisation focusing on information management/curation and distribution (rather than everyone spending time managing their own, with the potential duplication of effort).  However, I see two major challenges:

  1. Personal network versus “supported learning network”.  The inevitable problem for any kind of internal awareness, communication or learning curation will be that the content has already been captured by an individual’s personal system.  For example, a colleague may share something on my team’s internal social tool which I have already engaged with via Twitter.  We have moved past restrictions enforcing only ‘work tools on work time’, so how can we balance this without boring ourselves and our audiences via multiple sharing/discussion streams?
  2. ‘Human touch’ curation capabilities are limited.  The cutbacks of recent decades to information-related teams mean that the focus is more likely to fall on the individual, supported by groups such as internal communications (for distributing key messages) and knowledge/records management (for longer-term curation).  I see the recent focus of L&D on curation – capturing quality content and sharing it appropriately – as one area where my information background and learning technologies cross over: quality content has been the core reason for libraries, and now we are seeing a transformation of learning away from ‘our stuff’ towards recognising the value in UGC and integration with third-party materials.  Ultimately we would want everyone’s daily work to be built around a single company virtual space which can do everything we might need around learning, sharing, communication, etc.  The challenge is that this system realistically does not exist and, in all probability, existing businesses face fragmentation and silos.

So I would say let’s strive to ensure our organisations curate appropriately, but recognise that curation will have failings and is not the solution to every form of learning/content need.