“Totally unrealistic”? Reflecting on categorising learning topics within games

This post was triggered by the Twitter thread below. Nuance is, of course, often lost in Twitter character limits, but was my immediate response on reading @DTWillingham’s article fair, or was I being too emotional (given my work in learning and the time I have spent in the world of video games)?

Trigger thread

Firstly, let’s all agree games are hugely powerful for learning. Indeed, I often blame Sid Meier for my choice of History for undergraduate studies (although, of course, a number of good teachers and other factors were at play).

Second, I would recommend you look at the original article. The idea is a really interesting one. The numbered points below are mostly where I disagreed with the article on a first read-through, with some reflections included below each point. Many of these have little to do with the (knowledge and skills) learning specifically but are important in terms of the framing of the learning environment and motivation (thinking in KISME terms). “Design for motivation” is arguably a skill in itself, as articulated in this new learning designer competency framework.

  1. “if my kids are representative”
    1. I appreciate this is a newspaper opinion piece, but anecdotal starting points are not great. I also appreciate most of my views are very anecdotal, based on my own experiences 🙂
  2. “I gave in to increased gaming time but gravely told my children they should choose educational games”
    1. This is a hugely “first world problem” problem statement. When I was in the age bracket being discussed (8 to 18) I got one game for my birthday and one for Christmas. If gaming is a concern for a parent then I would rather see an article encouraging them to be active in the choices: either choosing the games themselves or selecting them together with their children.
  3. “it’s usually impossible to know what, if anything, kids will learn from a video game based on a simple description of it”
    1. I really like the opening of this part but not the bit I have italicised. Yes, a description will not likely cover this, but a gaming experience is also intensely personal. There are many levels of competence in gaming skill, many games are non-linear, and players will pay differing levels of attention. Therefore, just as in an education environment, it is incredibly difficult to say what people “will learn” – only what we are encouraging and supporting them to learn. This also runs counter to some game design – for example, the deliberately open design of the latest Zelda game.
  4. “The Entertainment Software Rating Board rates games for objectionable content like sex and violence. That’s helpful, but it should be as easy for parents to guide their kids toward enriching games as it is to shield them from unacceptable ones.”
    1. Surprisingly, given the author, this massively oversimplifies learning. The ESRB, the BBFC, etc. are dealing with a very small taxonomy – for example, I just looked at GTA V on the ESRB site (presuming it would be the game with the most ‘warnings’) and it is only rated on 7 items – albeit there are levels within this model (“intense”, “strong”, etc., which is probably how we get to the 30 categories the article mentions). If we were to map “topics” as mentioned earlier, what would be the appropriate taxonomy? Cataloguers and librarians the world over would be quick to tell you this is difficult; video games themselves were used in my Librarianship MA as an example of how hard it is to fit things into the Dewey Decimal Classification – do they go under games, technology, etc.?
  5. “boring”, education-first, games
    1. I previously considered whether podcasts were the rebirth of “edutainment”. I don’t think we would say edutainment as a concept is entirely bad. Indeed, most people will remember their more “fun” teachers over some of the others. However, I would agree that “chocolate-covered broccoli” learning design isn’t very helpful in general, similar to forced gamification in workplace learning. At the most recent school I worked at, most made-for-education “games” tended to frustrate the kids, as they are the first to see when learning is being ‘forced’ into a game environment. Similarly, potentially educational games, like Minecraft, were often misused for what can probably best be described as ‘di**king about’. However, the experience of course varied enormously between the games and the children in terms of preference and practice. That said, some serious games undoubtedly do work, and the science has been worked on for a long time – even if just thanks to the age-old learning paradigm of simulating and practising activities in safe(r) environments.
  6. “To make them fun, game creators either make the content less academic (and claim education will still benefit) or remove the tests (and claim kids will still learn). But the effect of either change on learning is unpredictable.”
    1. “learning is unpredictable” – I think this is the nub of the matter. It is unpredictable and difficult, which is really why I was saying it is unrealistic to try and rate learning in such media. Indeed, the article references the evidence that some games designed to help with memory do not work (which is in part why I said the vast majority of game-driven learning is really accidental).
  7. “playing Tetris, do improve spatial thinking skills, an ability linked to success in math and science”
    1. But the designers probably did not anticipate this, and the evidence only becomes clear over time. It would be very difficult to classify such outcomes at the point of publication.
  8. “not quiz players on it”
    1. This is, of course, a very “education” way to talk about learning (going back in part to the original reason this site was called what it is). It probably doesn’t help to reinforce parental expectations of testing everything. The article does double back to say learning happens “because you thought about it, not because you were quizzed”, but I would say it is weak on the fact that repetition to counter the forgetting curve is key here (see the short note after this list). For example, I learned Caribbean geography from Pirates! (like the other article mentioned in the thread, but with Black Flag rather than Pirates!) as I played for many hours over a long period of time; however, I also had that knowledge reinforced through following football/soccer, looking at maps, watching the Olympics, etc. We know who “Prince Harry is married to” due to constant exposure to that content; I know very little about less-exposed celebrities/royals.
  9. “They have to think about it, and that’s guaranteed only if the information is crucial to playing. Even then, “will they think about it?” isn’t always obvious.”
    1. I wouldn’t say it is guaranteed even in that case: repetition, interest, existing level of knowledge, etc. would all impact this. Also, you do not necessarily think about spatial thinking skills – that benefit is more incidental, as in the Tetris example.
  10. Roller Coaster Tycoon
    1. As the article suggests, the gamer would need an interest to pick up on the more scientific elements rather than playing for fun/crashes. It would also depend a lot on existing knowledge, which would in turn be affected by age, literacy levels, etc.
    2. This could come down to something like sticking a recommended reading level on a game: for example, I loved Shadowrun but got bored with Shadowrun Returns as there was far too much text to read. A text rating would help parents and gamers of all ages. The text could also potentially be exported from the game files and analysed away from the game (a crude sketch of this idea follows this list). This might help people determine if the game is too complex – for example, if they are going to have to sit through a huge tutorial reading activity. That said, in another context I would happily play more ‘interactive book’ type experiences.
  11. “Someone who understands cognition needs to evaluate gameplay. The video gaming industry could arrange for that.”
    1. This is the really difficult bit from a practical perspective. You may understand cognition, but could you get through the game? A single rater’s analysis is unlikely to map to the possible variations in player experience. Would you be better off analysing pro players (for example on Twitch or YouTube)? I doubt “Game makers submit a detailed description of a new game, which is then evaluated by three professional raters”, as with the ESRB, would be anywhere near sufficient for the complexity of knowledge, skills and behaviours a game may change.
    2. There would also be potential cost implications – gaming is notoriously an industry of low price inflation (even though the tech involved and the size of games have transformed), with small and big developers regularly disappearing into bankruptcy.
  12. “they owe parents that much.”
    1. A nice way to wrap up the article. However, given any parent will be at least 16 years old, I would say the industry does not really owe you anything unless you have chipped in by playing games yourself over those years. As with film ratings and Parental Advisory labels, it would also only be of use to the small number of parents who care.
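
A short note on the forgetting curve mentioned in point 8. Ebbinghaus modelled retention as an exponential decay and, while the exact parameters vary by study and material, a commonly quoted simplified form is:

R(t) = e^(−t/S)

where R is the probability of recall after time t and S is the relative strength of the memory. Each reinforcement (another session of Pirates!, a map, a Caribbean fixture in the football) effectively increases S and flattens the curve, which is why long, repeated exposure beats a single quiz.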
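
On point 10.2’s text-rating idea, here is a crude illustration of how exported game text could be scored away from the game. To be clear, this is purely my own sketch (the function and sample text are hypothetical) using the published Flesch-Kincaid grade-level formula with a naive syllable counter; real tooling would need proper tokenisation and handling of game markup:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# e.g. run over dialogue exported from a game's script files
sample = "You awaken in the sewers. The run went wrong. Find the fixer and get paid."
print(f"Approximate reading grade: {flesch_kincaid_grade(sample):.1f}")
```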

The ease with which this information would appear to parents/purchasers also perhaps gives more credit than is due to some of the systems involved. The PlayStation store, for example, does not even offer a ‘wish list’ or ‘save for later’ type of option. The Steam Store allows various tagging, but again we come back to how difficult a taxonomy would be. The article and Twitter thread both mentioned Assassin’s Creed; if we take Valhalla, you could argue you would come away with a rough idea of:

  • English and Norwegian geography
  • some (stereotyped) Anglo-Saxon and Norse cultural aspects
  • elements of medieval religious practice
  • different weapon types
  • and probably some other knowledge pieces.

However, as with learning from films and other media, perhaps the most interesting point is away from such obvious content. Instead, Valhalla’s approach to same-sex relationships could be a transformational learning experience: for example, if a sexist homophobe played the game then maybe, just maybe, they might have some of their beliefs and resulting behaviours changed. That said, did Ubisoft consult with relevant bodies to ensure their representation was appropriate? This is a challenge that could be cast at many sources of information, of course – for example, whether The Crown should come with a health/education warning.

As I tweeted, I would love to work in gaming at some point; indeed, one of those ‘sliding doors’ moments in my younger years was turning down a job at Codemasters. However, on reflection, I still don’t think the article’s suggestion is the best way to go. Education consultants working for the developers would seem preferable to external rating and verification. DTWillingham is, of course, a luminary in this area (hell, the LA Times publishes his articles!) but, whilst I love the idea of this job existing, I still feel it would be incredibly difficult to bring to fruition in a way that is of value to parents or anyone else.

Docebo Shape: First impressions

Firstly, kudos to Docebo for giving everyone free trials of this new tool.

Secondly, kudos for a funny launch video:

What is “Shape”?

Shape is the latest tool to offer AI auto-conversion of content into learning content. It would appear to be going for the “do you really need an instructional designer for this” market. Obviously this is a debatable starting ground for a product, but so too is the starting point of “only instructional designers can create learning”, so hey ho. Shape seems to be entering some of the space of tools like Wildfire, and perhaps the quiz area of tools like Quillionz, which I have used a bit in the past.

My experiment

I recently needed to build a new learning module based on an overhauled document. The doc effectively amounts to a policy or practice document with some specific “do this” points related to expected behaviours.

Therefore, I thought I would see what Docebo’s new AI tool can do with the raw content of the policy doc in comparison to what I came up with (in Articulate Rise 360).

When you upload content it goes through the below steps (after you say whether you want a small, medium or large project):

The extraction to production workflow

Of these steps, the only manual intervention is to give the Shape (yes, each project/presentation is itself a “Shape”) a title. The system does auto-suggest three titles, but you can create your own.

The output

What you get is effectively a short video: the tool picks out key text and automatically overlays it on selected stock images, with a selected audio track (about 15 tracks are included and you can upload your own).

This can be previewed in the browser (all I have done so far) or published elsewhere.
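
To make the “picks out key text” step concrete – and to be clear, this is purely my guess at the kind of approach involved, not Docebo’s actual method – a naive frequency-based extractive pass looks something like this (all names and the sample text are hypothetical):

```python
import re
from collections import Counter

def key_sentences(text: str, top_n: int = 3) -> list[str]:
    """Score sentences by average word frequency and return the top few,
    preserving their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "is", "are", "in", "that", "as"}
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in stopwords)

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:top_n]
    return [s for s in sentences if s in top]

policy = ("Staff must report incidents within 24 hours. Reports go to the duty manager. "
          "Training is refreshed annually. Staff must report near misses as incidents.")
print(key_sentences(policy, top_n=2))
```

Even this simple scoring hints at why the results are decent on policy-style text (frequent key terms surface) yet context-blind – hence, presumably, the “interest”/interest-rate mix-up mentioned under concerns below.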

Concerns

One concern that should probably be held is what happens to your data – for example, how much the AI is improving itself by saving content that may be your copyright, etc.

There are some predictable issues with the AI – for example, use of “interest” in the context of ‘an interest in something’ leads to a background graphic about interest rates. A lot of the images are also stock-image rubbish, but that was probably predictable.

The stock images used as backgrounds vary in quality, which is a little odd, as you would have thought they would all be of a similar size to avoid scaling issues; I certainly saw one or two that looked pixelated.

Some of the background choices were not great for contrast, making the overlaid text hard to read.

The music was very ‘meh’.

I found the default speed a little fast for reading but it does at least force a little concentration 😉

Overall, the model is questionable from a cognitive load and redundancy perspective, given the distraction of the transitions and images.

The good

The output looks mostly professional and is in line with modern short adverts; for example, this kind of thing could easily be done in Shape (note that images are included, although you have to upload your own videos if you want to use them – at least in the free trial version):

You can edit the Shape to change colours, images, etc., to deal with some of the issues I raised under concerns about contrast (although the result is still probably not great for accessibility?).

Perhaps most importantly, the AI does a pretty good job of spotting the key elements in the source material, although there was some weird stuff toward the end.

The “medium” solution I requested came back at just over 3 minutes, which suggests this is going for a decent “short and punchy” length rather than trying to be too clever.

Overall

Is it worth it? Well, for basic advertisements this seems great – it would be an easy way to create content for campaigns – but I’m not sure micro-learning itself in this format is hugely helpful. That said, if we compare this with what was possible a few years back, the ease with which we can now create content is hugely impressive.

Docebo have a track record of improving their products and I know they have some really good people on their team so hopefully Shape can become a useful tool to Docebo’s LXP customers and beyond.

To time or not to time

Expected duration. Time on task. Lock stepped vs open. Start and end dates. Peer pressure motivation. Collaborative vs independent.

All of the above are all too well known to online learning developers. Does your design measure progress? Is it via time on task? Do you lock access based on progress, enforce weekly or other spacing, use pre- and post-testing to adapt the experience, or some other method? These issues are often tied to whether you are simply allowing people to access content or asking them to undertake more collaborative activities.

This week I have had the chance to pick up a few “courses” (well, resources really in some cases) and this has got me thinking again about the temporal aspect of online learning. For example, is there value in Coursera basically unenrolling you from their courses to fit their schedule, with the option to re-enrol on the next session? This is partly because there are discussion activities but, in reality, the timing adds nothing to the learning experience for those wanting to pass through the course at their own pace.

Google, for example, advertise that they have opportunities via Coursera, yet the company known for “organiz[ing] the world’s information and mak[ing] it universally accessible and useful” lock these “job-training solutions” to their/Coursera’s timelines rather than those of the interested party.

This expectation of working through at someone else’s pace is poor instructional practice when, in reality, many such courses are combinations of async activities such as videos, reflections, quizzes, etc. The defence for the model is probably facilitator support (i.e. being able to have someone online to help with questions). However, this seems contradictory to the idea of flat-rate charging ($39 a month* as in the below image) without the traditional Coursera “audit” (i.e. FREE) access option. If the intention is to increase completion rates by forcing a time-based fear/scarcity mode of motivation, this is similarly poor, given there is not the personal support you would have in, say, a traditional university course to give you a hand and nudge you along to the final deadline.

Ultimately, it feels that if this is the model then these courses need to be designed to allow joining at any time with, say, monthly cohorts for discussion boards (a toy sketch of this follows below). Indeed, we were designing along these lines for rolling-start degrees back in c.2010.
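
Here is that toy sketch of the rolling-cohort idea (my own illustration, not anything Coursera actually implements): content opens on enrolment, and the learner is simply grouped into a monthly cohort for discussion purposes.

```python
from datetime import date

def discussion_cohort(enrolled_on: date) -> str:
    """Group learners into a monthly cohort for discussion boards;
    the course content itself stays open from day one."""
    return f"{enrolled_on.year}-{enrolled_on.month:02d}"

print(discussion_cohort(date(2021, 2, 14)))  # -> 2021-02
```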

More broadly, it feels like MOOCs continue to fail at their stated objectives time and time again.

* Also, Google obviously have enough money to support skills development as a CSR activity without charging for such items.

Some reflections on week one of the #LTDX21 event (with a bit on the latest The Learning Hack podcast)

Learning Technologies, of course, is normally a big physical conference and exhibition, and I had hoped to attend this year (amazingly, I don’t think I have been since 2016 – where did that time go?). However, with travel and event restrictions there has been the inevitable move to a “digital experience” this year. The free sessions I have attended at this year’s LTDX21 have really reminded me of three things about Learning Tech as an event:

(1) The “free” sessions, normally on the exhibition floor of the physical event, vary enormously, and it is a lot better to attend the “paid for” conference event if you can.

(2) The major benefit of the event, for me, is bumping into people you normally see once a year (or less) for a quick catch-up.

(3) There is value in just browsing the exhibition for trends, new entrants, etc. – I have yet to attend a virtual event which does this kind of thing well, getting the balance right between viewing “exhibitor information” and having salespeople harangue you via LinkedIn and email.

With regards to the first point above, and specifically the sessions, the ones I have attended in the first three days have varied between the incredibly introductory and the very thought-provoking. Kudos to Omniplex for the thought-provoking session – one that really picked at the shared learning-industry conscience over our role in organisations (and impact on wider society), with calls for improving practice. A good example of bringing emotion in – by highlighting real-world examples (from big stories like Grenfell through to smaller-scale ones).

One problem with the less interesting sessions was that product demos, which would normally be restricted to exhibition booths, and presentations (with a product focus) that would normally appear in the “theatres” seemed to have blurred together in this format. The answer here is probably to look beyond the titles and descriptions to try and second-guess the nature of the presentation – this isn’t really an issue if you commit a day or two to an event and can walk away from less interesting sessions, but it is more annoying when you are blocking out calendar time for virtual events.

From the sessions I have attended, I could see some of the ongoing challenges for online learning – for example, discussions in the session chat showed a drive towards wanting to display learning in Microsoft Teams (in part due to Viva?), while at the same time we had presentations around old concepts rebranded as new. I would really advocate everyone in the industry listen to the podcast below. A lot of people are still very blinkered by the companies they have experience of, and I really don’t think people realise what is actually “new”. As Dr Chen points out, for example, doing more than SCORM is not new. There also seems to be a growing trend of huge content libraries and aggregators (perhaps because of LinkedIn Learning’s success), which I personally feel have a role but are just part of the puzzle. Anyway, if you haven’t already, listen to the latest Learning Hack podcast – it was timely given all of this:

Regarding point 3: attending only a few sessions, you also miss the general feel of an event. Today, I am going to try and follow the event’s hashtag more closely to pick up some of the more general trends. Thanks, as always, to all the tweeters out there on #LTDX21.