More on educational games: the example of Mission 1.5

Using mobile gaming technology, Mission 1.5 educates people about climate solutions and asks them to vote on the actions that they want to see happen.

https://www.mission1point5.org/about

This new climate-change-related online activity is an interesting idea, combining a series of what are basically multiple-choice questions (giving the user options for what their government should do to meet the 1.5 degree challenge) with calls to action for individual- and national-level behaviour change.

Responses from your selected country will be aggregated and submitted to your government as your “vote”…

What will we do with the results?

Your vote, and those from your country, will be compiled and presented to your government to encourage bolder climate action. Votes will also be counted in a global tally. So stay tuned for the results!

https://www.mission1point5.org/about

Presumably this vote piece is only prearranged with the 12 countries (plus the EU) that are listed. In addition, the game mechanics themselves are a little odd, given your choice for each point is really between two items, as one is clearly a ‘red herring’. The onscreen results from the ‘quiz’ record 10, 700 or 1,000 points depending on your answer to each question, and these combine into a total score for tackling the 1.5 challenge across multiple areas such as “farms and foods”, “transport”, etc.

Example question from the “Farms & Food” topic.
A section’s “vote” (which acts like a summary/debrief of the ‘correct’ answers for each section).
Overall scoring in keeping temperature change down.

Does it educate?

The first quote included above specifically states the resource “educates people”. Obviously I could write a lot here about what educating someone actually means versus learning something, etc. What I will try to focus on is whether someone is likely to learn anything from the activity. The answer, of course, will be “it depends”.

If we take the cattle example in the above screenshot, a fair amount of prerequisite knowledge is required – for example, a reading level sufficient to comprehend “livestock” and “plant-based diet”, albeit with mobile-friendly graphics as visual clues. Beyond reading ability, there is no real information on the different options and what they mean – the light touch on any kind of knowledge content could thus be confusing, and if you really wanted or needed to learn something from this you would likely have to do some research away from the resource. This is not helped by the text being image-based, meaning you cannot simply select it and ask your browser to search the web for more information.

Therefore, I am tempted to say this resource might be quite useful for a school to run through in a group, i.e. with a teacher/facilitator in place to use it to foster discussion, rather than as a learning resource per se.

How could it be improved?

10, 700 and 1,000 don’t obviously relate to the 1.5 degree temperature target, and it is not very clear from the onscreen graphics how many ‘points’ your choices need, as a minimum, to meet your country’s requirements. Indeed, there is a contradiction between not wanting to add to the temperature and needing a high score. It would be better if the scoring were somehow reversed – e.g. starting from your country’s high-carbon baseline and cutting it by a percentage target until 1.5 is within reach.
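
As a minimal sketch of that reversed scoring idea (all numbers and action names below are invented for illustration, not taken from the game):

```python
# A hypothetical "reversed" scoring model: start from a country's baseline
# emissions index and subtract the saving of each chosen action, aiming for
# a percentage reduction target instead of accumulating abstract points.

BASELINE_EMISSIONS = 100.0  # invented index for a country's current emissions
TARGET_REDUCTION = 0.45     # illustrative: cut 45% to stay on a 1.5 pathway

# Invented actions and savings, standing in for the game's answer options.
ACTIONS = {
    "shift subsidies to plant-based farming": 8.0,
    "electrify public transport": 12.0,
    "none of the above": 0.0,
}

def remaining_emissions(selected: list[str]) -> float:
    """Emissions index left after applying the selected actions."""
    savings = sum(ACTIONS[action] for action in selected)
    return max(BASELINE_EMISSIONS - savings, 0.0)

selected = ["electrify public transport", "shift subsidies to plant-based farming"]
remaining = remaining_emissions(selected)
target = BASELINE_EMISSIONS * (1 - TARGET_REDUCTION)

print(f"Remaining: {remaining:.0f}, target: {target:.0f} or lower")
print("On track for 1.5" if remaining <= target else "Pick bolder actions")
```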

There is also a risk of oversimplification here: presumably the carbon impact of some choices would be greater in some countries than in others (this complexity might be built in, but I doubt it).

The “none of the above” option on the vote works neither as a learning summary nor as a mock-up of democratic action – particularly if the intention of the resource is actual democratic input…

Reliable information on public opinion on climate action

This is given, in a related YouTube video’s description, as a reason for the website’s vote element:

Mission 1.5 YouTube introduction

However, it is clearly a limited activity, with just three (well, two) options to consider per question, and the user is then very heavily prompted to select the ‘best’ option from each section’s three questions as the vote. I must admit I voted a few “none of the above” responses in a Brewster’s Millions style mood.

Summary

Overall this feels like one of those examples where someone wants to achieve educational outcomes but has limited content and a desire to reduce instruction (to the point of irrelevance), and really only manages to leverage the gaming expertise involved (which seems considerable from the “about” page) for graphics/UI and little else. It also highlights the incredible difficulty of building content for a global audience with no personalisation or clear target audience.

“Totally unrealistic”? Reflecting on categorising learning topics within games

This post was triggered by the Twitter thread below. Nuance is of course often lost in Twitter character limits, but was my immediate response on reading @DTWillingham’s article fair, or was I being too emotional (given my work in learning and time spent in the world of video games)?

Trigger thread

Firstly, let’s all agree games are hugely powerful for learning. Indeed, I often blame Sid Meier for my choice of History for undergraduate studies (although, of course, a number of good teachers and other factors were at play).

Second, I would recommend you look at the original article. The idea is a really interesting one. The numbered points below are mostly where I disagreed with the article on first read-through, with some reflections included below each point. Many of these have little to do with the (knowledge and skills) learning specifically but are important in terms of the framing of the learning environment and motivation (if we consider this through KISME). “Design for motivation” is arguably a skill in itself, as articulated in this new learning designer competency framework.

  1. “if my kids are representative”
    1. I appreciate this is a newspaper opinion piece, but anecdotal starting points are not great. I also appreciate most of my views are very anecdotal, based on my own experiences 🙂
  2. “I gave in to increased gaming time but gravely told my children they should choose educational games”
    1. This is a hugely “first world problem” problem statement. When I was in the age bracket being discussed (8 to 18) I got one game for my birthday and one for Christmas. If gaming is a concern for a parent then I would rather see an article encouraging them to be active in the choices: either choosing the games themselves or being involved with the children in the selection.
  3. “it’s usually impossible to know what, if anything, kids will learn from a video game based on a simple description of it”
    1. I really like the opening of this part but not the bit I have italicised. Yes, a description will not likely cover this, but a gaming experience is intensely personal. There are so many levels of competence in gaming skill, many games are non-linear, and players will pay differing levels of attention. Therefore, just like in an education environment, it is incredibly difficult to say what people “will learn” – only what we are encouraging and supporting them to learn. This also counters some game design – for example, the deliberately open design of the latest Zelda game.
  4. “The Entertainment Software Rating Board rates games for objectionable content like sex and violence. That’s helpful, but it should be as easy for parents to guide their kids toward enriching games as it is to shield them from unacceptable ones.”
    1. Surprisingly, given the author, this massively oversimplifies learning. The ESRB, the BBFC, etc. are dealing with a very small taxonomy – for example, I just looked at GTA V on the ESRB (presuming it would be the game with the most ‘warnings’) and it is only rated on 7 items – albeit there are levels to this model (“intense”, “strong”, etc., which is probably how we get to the 30 categories the article mentions). If we were to map “topics” as mentioned earlier, what would be the appropriate taxonomy? Cataloguers and librarians the world over would be quick to tell you this is difficult; video games themselves were used in my Librarianship MA as an example of how hard it is to fit things into Dewey Decimal Classification – under games, technology, etc.?
  5. “boring”, education-first, games
    1. I previously considered whether podcasts were the rebirth of “edutainment”. I don’t think we would say the concept is entirely bad. Indeed, most people will remember their more “fun” teachers over some of the others. However, I would agree that “chocolate-covered broccoli” learning design isn’t very helpful in general, similarly to forced gamification in workplace learning. At the most recent school I worked at, most made-for-education “games” tended to frustrate the kids, as they are the first to see when learning is being ‘forced’ into a game environment. Similarly, potentially educational games, like Minecraft, were misused for what can probably best be described as ‘di**king about’. However, the experience of course varied enormously between the games and the children in terms of preference and practice. That said, some serious games undoubtedly do work, and the science has been worked on for a long time, even if just thanks to the age-old learning paradigm of simulating and practising activities in safe(r) environments.
  6. “To make them fun, game creators either make the content less academic (and claim education will still benefit) or remove the tests (and claim kids will still learn). But the effect of either change on learning is unpredictable.”
    1. “learning is unpredictable” – I think this is the nub of the matter. It is unpredictable and difficult, which is really why I was saying it is unrealistic to try to rate learning in such media. Indeed, the article references evidence that some games designed to help with memory do not work (which is in part why I said the vast majority of game-driven learning is really accidental).
  7. “playing Tetris, do improve spatial thinking skills, an ability linked to success in math and science”
    1. But the designers probably did not anticipate this, and such evidence only becomes clear over time. It would be very difficult to classify such outcomes at the point of publication.
  8. “not quiz players on it”
    1. This is of course a very education-centric way to talk about learning (going back in part to the original reason this site was called what it is). It probably doesn’t help to reinforce parental expectations of testing everything. The article does double back to say learning happens “because you thought about it, not because you were quizzed”, but I would say it is weak on the fact that repetition, to counter the forgetting curve, is key here. For example, I learned Caribbean geography from Pirates! (like the other article mentioned in the thread, but with Black Flag rather than Pirates!) because I played for many hours over a long period of time; however, I also had that knowledge reinforced through following football/soccer, looking at maps, watching the Olympics, etc. We know who “Prince Harry is married to” due to constant exposure to that content; I know very little about less-exposed celebrities/royals.
  9. “They have to think about it, and that’s guaranteed only if the information is crucial to playing. Even then, “will they think about it?” isn’t always obvious.”
    1. I wouldn’t say it is guaranteed even in that case; repetition, interest, existing level of knowledge, etc. would all impact this. Also, you do not necessarily think about spatial thinking skills – that benefit is more incidental, as in the Tetris example.
  10. Roller Coaster Tycoon
    1. As the article suggests, the gamer would need an interest in order to pick up on the more scientific elements rather than playing for fun/crashes. It would also depend a lot on existing knowledge, which would be impacted by age, literacy levels, etc.
    2. This could revert to something like sticking a recommended reading level on a game; for example, I loved Shadowrun but got bored with Shadowrun Returns as there was far too much reading text. A text rating would help parents and gamers of all ages. The text could also potentially be exported from code and analysed away from the game (see the sketch after this list) – this might help people determine if the game is too complex, for example if they are going to have to sit through a huge tutorial reading activity. That said, in another context I would happily play more ‘interactive book’ type experiences.
  11. “Someone who understands cognition needs to evaluate gameplay. The video gaming industry could arrange for that.”
    1. This is the really difficult bit from a practical perspective. You may understand cognition, but could you get through the game? Your analysis is unlikely to map to the possible variations in the experience. Would you be better off analysing pro players (for example on Twitch or YouTube)? I doubt “Game makers submit a detailed description of a new game, which is then evaluated by three professional raters”, as for the ESRB, would be anywhere near sufficient for the complexity of knowledge, skills and behaviours a game may change.
    2. There would also be potential cost implications – gaming is notoriously a low-price-inflation industry (even though the tech involved and the size of games have transformed), with small and big designers regularly disappearing into bankruptcy.
  12. “they owe parents that much.”
    1. A nice way to wrap up the article. However, given that a parent would have to be at least 16 years old, I would say the industry does not really owe you anything unless you have chipped in by playing games yourself in those years. As with film ratings and Parental Advisory labels, it would also only be of use to the small number of parents who care.
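
On the reading-level idea from point 10: as a rough sketch of the kind of analysis exported game text could support, here is a minimal Flesch-Kincaid grade calculation with a crude syllable counter (the sample string is invented, not taken from any real game):

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group count - rough, but fine for an indicative rating."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Invented stand-in for text exported from a game's dialogue/tutorial files.
sample = "Select your runners. Jack into the matrix. Watch the security grid."
print(f"Approximate reading grade: {flesch_kincaid_grade(sample):.1f}")
```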

The ease with which this information would appear to parents/purchasers also perhaps gives more credit than is due to some of the systems involved. The PlayStation store, for example, does not even offer a ‘wish list’ or ‘save for later’ type of option. The Steam store allows various tagging, but again we come back to how difficult a taxonomy would be. The article and Twitter thread both mentioned Assassin’s Creed; if we take Valhalla, you could argue you would learn a rough idea of:

  • English and Norwegian geography
  • some (stereotyped) Anglo-Saxon and Norse cultural aspects
  • elements of medieval religious practice
  • different weapon types
  • and probably some other knowledge pieces.

However, as with learning from films and other media, perhaps the most interesting point is away from such obvious content. Instead, Valhalla’s approach to same-sex relationships could be a transformational learning experience: for example, if a sexist homophobe played the game then maybe, just maybe, they might have some of their beliefs and resulting behaviours changed. That said, did Ubisoft consult with relevant bodies to ensure their representation was appropriate? This challenge could of course be cast at many sources of information – for example, whether The Crown should come with a health/education warning.

As I tweeted, I would love to work in gaming at some point; indeed, one of those ‘sliding doors’ moments in my younger years was turning down a job at Codemasters. However, on reflection, I still don’t think the article’s suggestion is the best way to go – education consultants working for the developers would seem preferable to external rating and verification. DTWillingham is, of course, a luminary in this area (hell, the LA Times publishes his articles!) but while I love the idea of this job existing, I still feel it would be incredibly difficult to bring to fruition in a way that is of value to parents or anyone else.

Docebo Shape: First impressions

Firstly, kudos to Docebo for giving everyone free trials of this new tool.

Secondly, kudos for a funny launch video:

What is “Shape”?

Shape is the latest tool to offer AI auto-conversion of content into learning content. It would appear to be going for the “do you really need an instructional designer for this?” market. Obviously this is debatable ground on which to start a product, but so too is the starting point of “only instructional designers can create learning”, so hey ho. Shape seems to be entering some of the space of tools like Wildfire, and perhaps the quiz area of tools like Quillionz, which I have used a bit in the past.

My experiment

I recently needed to build a new learning module based on an overhauled document. The doc effectively amounts to a policy or practice document with some specific “do this” points related to expected behaviours.

Therefore, I thought I would see what Docebo’s new AI tool can do with the raw content of the policy doc in comparison to what I came up with (in Articulate Rise 360).

When you upload content it goes through the steps below (after you say whether you want a small, medium or large project):

The extraction to production workflow

Of these steps, the only manual intervention is to give the Shape (yes, each project/presentation is itself a “Shape”) a title. The system does auto-suggest three titles, but you can create your own.

The output

What you get is effectively a short video: the tool picks out key text and overlays it automatically on selected stock images, with a selected audio track (about 15 tracks are included and you can upload your own).
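
Docebo have not published how Shape’s extraction actually works, but as a rough illustration of what a “pick out the key text” step might involve, here is a naive frequency-based extractive sketch (the policy snippet is invented):

```python
import re
from collections import Counter

def extract_key_sentences(text: str, top_n: int = 3) -> list[str]:
    """Score each sentence by the document-wide frequency of its words,
    keep the top_n, and return them in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = set(sorted(sentences, key=score, reverse=True)[:top_n])
    return [s for s in sentences if s in ranked]

# Invented policy snippet, standing in for an uploaded source document.
policy = ("Staff must report incidents within 24 hours. "
          "Incidents are reviewed by the duty manager. "
          "The policy applies to all sites. Tea is available in the kitchen.")

for key_text in extract_key_sentences(policy, top_n=2):
    print("Overlay:", key_text)
```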

This can be previewed in browser (all I have done so far) or published elsewhere.

Concerns

One concern that should probably be held is what happens to the data – for example, how much the AI is improving itself by saving content that may be your copyright.

There are some predictable issues with the AI – for example, use of “interest” in the context of ‘an interest in something’ leads to a background graphic about interest rates. A lot of the images are also stock-image rubbish, but that was probably predictable.

The stock images used as backgrounds vary in quality, which is a little odd as you would have thought they would all be of similar size to avoid scaling issues, etc. I certainly saw one or two that looked pixelated.

Some of the background choices were not great for contrast and being able to see the text.

The music was very ‘meh’.

I found the default speed a little fast for reading but it does at least force a little concentration 😉

Overall, the instructional model is questionable, given the distraction of the transitions and images in terms of cognitive load and redundancy.

The good

The output looks mostly professional and is in line with modern short adverts – for example, this kind of thing could easily be done in Shape (note that images are included, although you have to upload your own videos if you want to use them, at least in the free trial version):

You can edit the Shape to change colours, images, etc. to deal with some of the issues I raised under concerns about contrast (although still probably not great for accessibility?).

Perhaps most importantly, the AI does a pretty good job of spotting the key elements from the source material, although there was some weird stuff toward the end.

The “medium” solution I requested came back at just over 3 minutes, which suggests this is going for “short and punchy” rather than trying to be too clever.

Overall

Is it worth it? Well, for basic advertisements this seems great – it would be an easy way to create content for campaigns – but I’m not sure micro-learning itself in this format is hugely helpful. That said, if we compare this with what was possible a few years back, the ease with which we can now create content is hugely impressive.

Docebo have a track record of improving their products, and I know they have some really good people on their team, so hopefully Shape can become a useful tool for Docebo’s LXP customers and beyond.

So my Open Badges are gone then?

I am presumably very late to this problem, but I was just checking links on my LinkedIn profile and realised both of my public badge account links were broken.

My Credly link was relatively easily fixed: by going into my account I could get a new link to show my profile with one badge (from the LPI).

My older Mozilla Backpack, which had a variety of random badges attached, however, seems to have gone. The help page is, well, not very helpful:

I didn’t get an email notification from Mozilla. What should I do?

Do you have more than one email address that the zipped file could have been sent to?

Have you checked your spam folder for the email?

Unfortunately, we’re sorry to say, there is no way to resend the Mozilla email containing your badges. If not, you may wish to contact the original issuer(s) of the badge(s) that were in your backpack, to see if they can provide you with a copy of the badge or re-award it to you.

Badgr support

I knew Badgr was taking over from Mozilla (actually quite a while ago – in 2019, looking at their website) but had not realised that my badges would be basically gone; the link I had saved no longer displays anything useful. As I used bit.ly, I know only three people have actually followed the link from LinkedIn, but even so, it will have looked a bit bad that I had broken links on my LinkedIn profile without realising. More care needed in keeping an eye on my profile, I guess!
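
For what it’s worth, a tiny script like the sketch below would have caught the breakage much earlier (the URLs are placeholders, not my actual profile links):

```python
# Check that links pinned to a profile still resolve.
import requests

# Placeholder URLs - substitute whatever is actually on your profile.
PROFILE_LINKS = [
    "https://www.credly.com/users/example/badges",
    "https://backpack.openbadges.org/share/example",
]

for url in PROFILE_LINKS:
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = str(response.status_code)
    except requests.RequestException as exc:
        status = f"error ({exc.__class__.__name__})"
    print(f"{url} -> {status}")
```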

I have been an advocate for Open Badges, but this really seems a shoddy situation – one that reminds us yet again of the risks of relying on online services (as opposed to keeping offline records, such as CVs and certificates, of such achievements).