More on educational games: the example of Mission 1.5

Using mobile gaming technology, Mission 1.5 educates people about climate solutions and asks them to vote on the actions that they want to see happen.

https://www.mission1point5.org/about

This new climate-change-related online activity is an interesting idea, combining a series of what are basically multiple-choice questions (that give the user options for what their government should do to meet the 1.5 degree challenge) with calls to action for individual and national-level behaviour change.

Responses from your selected country will be aggregated and submitted to your government as your “vote”…

What will we do with the results?

Your vote, and those from your country, will be compiled and presented to your government to encourage bolder climate action. Votes will also be counted in a global tally. So stay tuned for the results!

https://www.mission1point5.org/about

Presumably this vote piece is only prearranged with the 12 countries (plus the EU) that are listed. In addition, the game mechanics themselves are a little odd given that the choice for each question is really between two items, as one of the three is clearly a 'red herring'. The onscreen results from the 'quiz' award 10, 700 or 1,000 points depending on your answer to each question and combine into a total score for tackling the 1.5 challenge across multiple areas such as "farms and foods", "transport", etc.

Example question from the “Farms & Food” topic.
A section’s “vote” (which acts like a summary/debrief of the ‘correct’ answers for each section).
Overall scoring for keeping the temperature change down.

Does it educate?

The first quote included above specifically states that the resource "educates people". Obviously I could write a lot here about what educating someone actually means versus learning something, etc. What I will try to focus on is whether someone is likely to learn anything from the activity. The answer, of course, will be "it depends".

If we take the cattle example in the above screenshot, there is a lot of pre-requisite knowledge required – for example, a reading level sufficient to comprehend "livestock" and "plant-based diet", albeit with mobile-game-style friendly graphics as visual clues. Beyond reading ability, there is no real information on the different options and what they mean – the light touch to any kind of knowledge content could therefore be confusing, and if you really wanted/needed to learn something from this you would likely have to do some research away from the resource. This is not helped by the text being image based, which means you cannot simply select the text and ask your browser to search the web for more information.

Therefore, I am tempted to say this resource might be quite useful for a school to run through in a group, i.e. with a teacher/facilitator in place to use it to foster discussion, rather than as a learning resource per se.

How could it be improved?

The point values of 10, 700 and 1,000 don't obviously relate to the 1.5 degree target, and it is not very clear from the onscreen graphics what minimum number of 'points' your choices need in order to meet your country's requirements. Indeed, there is a contradiction between not wanting to add to the temperature and needing a high score. It would be better if the scoring were somehow reversed – e.g. starting with your country's high carbon total and then cutting it towards a percentage reduction target consistent with 1.5 degrees.
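As a thought experiment, here is a minimal sketch of what that reversed, carbon-budget style scoring could look like. The baseline total, reduction target and per-choice savings are purely illustrative assumptions on my part, not values taken from Mission 1.5.

```python
# Hypothetical "reversed" scoring: start from a baseline emissions total and
# subtract the savings of each chosen action, then check the remainder against
# a 1.5-degree-style reduction target. All numbers are illustrative only.

BASELINE_EMISSIONS = 1000   # arbitrary units for a country's current emissions
TARGET_REDUCTION = 0.45     # e.g. a "cut emissions by 45%" style target

# Illustrative per-choice savings for one topic area.
choices = {
    "promote plant-based diets": 120,
    "improve livestock practices": 60,
    "none of the above": 0,
}

def remaining_emissions(selected):
    """Subtract the savings of each selected action from the baseline."""
    savings = sum(choices[c] for c in selected)
    return max(BASELINE_EMISSIONS - savings, 0)

def meets_target(selected):
    """True if the remaining emissions hit the reduction target."""
    return remaining_emissions(selected) <= BASELINE_EMISSIONS * (1 - TARGET_REDUCTION)

selected = ["promote plant-based diets", "improve livestock practices"]
print(remaining_emissions(selected), meets_target(selected))  # 820 False
```

Framed this way, a lower remaining total is unambiguously better, which matches the 'don't add to the temperature' message rather than contradicting it.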

There is also a risk here of oversimplifying as, presumably, the carbon impact of some choices would be greater in some countries than in others (this complexity might be built in but I doubt it).

The "none of the above" option on the vote works neither as a form of learning summary nor as a mockup of democratic action, particularly if the intention of the resource is to gather actual democratic input…

Reliable information on public opinion on climate action

This is given, in a related YouTube video’s description, as a reason for the website’s vote element:

Mission 1.5 YouTube introduction

However, it is clearly a limited activity, with just three (well, two) options to consider per question, and the user is then very heavily prompted to select the 'best' option from each section's three questions as the vote. I must admit I cast a few "none of the above" votes in a Brewster's Millions style mood.

Summary

Overall this feels like one of those examples where someone wants to achieve educational outcomes but has limited content and a desire to reduce instruction (to the point of irrelevance), and really only manages to apply the gaming expertise involved (which seems considerable from the "about" page) to the graphics/UI and little else. It also highlights the incredible difficulty of building content for a global audience with no personalisation or clear target audience.

“The more things change, the more they remain the same” MoodleMoot Ireland and UK Online 2021

Last week I attended the IE&UK MoodleMoot which was, of course, held online this year.

My feelings about the presentations will sound something like a follow-up to my “not all innovation is created equal” post which itself was triggered by the global MoodleMoot last year. Why? Well, while Covid loomed large over this year’s event the ‘transformation’ taking place all sounded very familiar.

The change

The opening keynote pointed out the scale of change in the last year, and the scale of global challenges beyond Covid too. The growth in online learning was set out via a particularly striking metric:

Whilst this might be expected given the Covid situation, it is still impressive that so many people have launched Moodle sites during the period. Moodle offers a low-cost solution, but there are, of course, plenty of other options out there, so that growth remains impressive IMO. It is perhaps also worth pointing out that the growth will be across multiple types of organisation, whereas this MoodleMoot was primarily university focused in terms of the presentations (and, I would imagine, the audience too).

I did have to agree with some of the keynote points – for example, Martin Dougiamas stressed that too much of the pivot/transformation has simply been video-based learning sessions, webinars, etc. He also, I think, agreed with me that it has really been a time of reinvention, not a revolution, for him and others who have been in the industry a while. Overall, the lessons learned in the last 20-odd years are clearly being inconsistently applied in the world of online learning.

The same

As with the global MoodleMoot, much of what was being discussed has not really changed in the last 10+ years. This might be a sign that digital learning is now mature in further and higher education (and elsewhere), or it might just be that the agenda for the event did not really get beyond the basics, as many presentations were by people pushed online by Covid. In fact, away from those 'this is how we responded to Covid' type sessions, the agenda was quite sponsor-heavy – it is perhaps difficult to sell presenting at user conferences if it doesn't involve a day out of the office?

In no particular order, here are some items I took away from the event. As always, accuracy is reliant on my memory and note-taking, and (as I have said) a lot of these points are far from 'new new':

  1. Moodle is only part of the solution. The opening keynote stressed that Moodle's role is to help you build a learning system; it is not a learning system in itself. This is probably not the way Moodle has been sold to many educators, but it should be remembered. The great desire in education institutions has always been to have a 'one size fits all' solution, but Moodle has 200+ plugins and, of course, most people will be reliant on various other tools (from Microsoft to Articulate and beyond). I have grouped some of the other related content/conversations under #12 below, not to mention different learning/teaching approaches (see #2).
  2. Social construction. The keynote also reinforced the point that Moodle was designed for social learning. We heard some presentations where the transformation has been little more than moving from Moodle as a file store to Moodle as it was intended to be used (i.e. as a platform with active discussions, activities, etc). Better late than never, I guess, but whether this matters also really depends on what else you are using (see #12).
  3. Moodle 4.0’s promise of improved visuals and UX could probably have been from any point in the system’s version history. Obviously these changes are continuous, and needed, but 4.0 probably shouldn’t be oversold given this conversation is certainly a ‘deja vu’ topic. Fingers crossed 4.0 is a big leap forward though.
  4. Brickfield’s accessibility checker. This tool sounds really useful but long overdue given Blackboard, for example, has had Ally for a while (this could be seen as a comment on commercial vs OS development – however, Brickfield’s full offer is still commercial as a plugin/partner). Related to #1, you would likely need to invest here to extend Moodle into a full offering that meets modern accessibility requirements.
  5. eAssessment. Whilst I would have thought most universities had an online LMS/VLE pre-Covid, they may not have 'pushed the button' on online assessment before being forced to in the last year. Catalyst did a good presentation on the need to consider various things before launching eAssessment, including making sure the load on servers is not a problem – something I suffered the pain of with Blackboard hosting back in about 2010. Obviously we often rely on trial and error in tech, but in high-stakes assessments this is not really an option. Catalyst were selling their hosting and load-testing services, but you can argue for a move away from this simultaneous style of assessment to other models.
    • Gradescope, now part of the Turnitin group, looked good as a way of getting written exams and non-essay exams into a digital tool – this in part tackles some age-old questions, such as how to deal with diagrams and maths in eAssessment. However, at the same time, a lot of what was being talked about (rubrics, etc) is, again, not really new. Related to #1, you would likely need to invest here to extend Moodle into a full offering if you are in an assessment-heavy sector.
    • Poodll also showed some nice functionality in this area. Although they advertise themselves as focused on language learning, I would say their tools could be used more widely – for example, with the rise of voice operation over typing, you could author questions in different ways.
  6. Plugin evaluation. UCL presented a few sessions, including one on their new-ish plugin evaluation process. This was all fairly straightforward and I would imagine many orgs have something similar, either for Moodle, Blackboard building blocks, etc. etc.
    • Of other plugins and themes mentioned, the Moove theme sounded great for simplifying the user interface – it was mentioned by Hibernia College (interestingly they have 17[!] in their Digital Development team).
    • Hibernia also mentioned the MyFeedback [?] plugin which sounded good for one of the problems I have with Moodle – namely the need to better aggregate, for the student, a view of their gradebook across modules/courses.
  7. Studygroup pre-arrival course and course development processes. This is a course that can be licensed by institutions to help international students know more about their new location pre-arrival, including some of the cultural differences they might want to be aware of. As with pre-start-date induction materials in workplace learning, this looked like a good idea.
    • In terms of the design process discussed (Aims > Pedagogy > Limitations > Content Development > Content Transfer > Test Systems > Amendments) it all made sense and not a bad model for others to use.
    • Bolton Uni did a session later on developing a Masters course during lockdown, the most interesting bit for me, again, was their approach (Pedagogy > Design and Structure > Validate > Upskill Staff > Deliver, Support and Track > Enhance). Bolton admitted they were “new kids on the block” for online learning development but this seemed to be working for them (I presented on something similar in 2011).
    • I liked an example from Nottingham of a different type of learning activity, namely an attempt to create an escape room type experience made up of video, puzzle quizzes and other Moodle elements. Overall, this was a nice example of thinking ‘outside the box’ when faced with the Covid challenge.
  8. Global search engine. Another UCL session, this showed their results from comparing three global search tools (Azure, Elastic and Solr?). The presentation was good in showing the different findings in terms of the indexing impact and the search results across the three tools. Content management and discovery have long been problems in learning platforms and a solution to this really should be in the core code. Indeed, I have found such a search tool useful within Totara in the past, not least as the logs are illuminating about what your users are looking for.
  9. Upgrades. UCL also presented on a move to continuous-release upgrades. This was an interesting one given the problems VLE upgrades have long caused, with Higher Ed having long relied on the 'big' summer upgrade. However, that model goes against the desire for always-available online learning and avoiding downtime. The UCL session got into some of the management of code and cloud vs local data integration. Overall, one for hosting teams, but it also highlights the issues for teams like UCL in managing this themselves versus using external hosting services.
  10. Moodleboard development from DCU. More the kind of thing I like from user conferences – how to do something new. In this case it was ‘boards’ via a tool developed as a new plugin that does some of what popular external tools (like Padlet) can do but all internal to Moodle and keeping the data your own.
  11. Tile format for courses. A lot of the presentations either mentioned this specifically or were clearly using it. Very interesting from a “the same” perspective as, in many ways, it goes back to the same logic as the old WebCT UI just with a more modern look. Some of the examples looked good – for example the Royal College of Midwives managing to move their 3 day residential programme to an online course looked like a real achievement.
  12. What data where in the ecosystem. An OU session looked at the new options related to user profile fields in release 3.11. Overall this really came back to the age-old question of what systems you have, where the data needs to be, etc. There are clear use cases here for creating additional fields that can then be used in different ways – the old Blackboard community system, for example, allowed you to filter what users of that system saw, so employer-sponsored students could see their company's content. Other use cases include changing what a fully online student sees in their dashboard versus a blended or campus-based student, based on their profile fields (see the sketch after this list). As someone who has completed an online MSc, this is the kind of functionality universities could get a lot better at in order to personalise and filter out the noise.
    • Dundalk presented on moving their student support hub online. I would imagine many institutions have offered this for a while, but how they do it, and what sits on Moodle versus other parts of the ecosystem, no doubt varies. Indeed, having to navigate a university student record system, VLE/LMS, intranet, website, library and elsewhere for information (rather than a simple, single Google-style interface) is a well-documented challenge for online education providers and a great example of service providers carving the customer experience up by department rather than by what a customer needs. Indeed, Dundalk mentioned that a major driver for the move was student feedback that their Moodle was the logical place for more than just modules and programmes.
    • For many, a key part of their ecosystem will now be webinar and virtual classroom tools, so it was good to see BigBlueButton still being plugged in the face of Teams and Zoom becoming so dominant (at least in my experience).
  13. Academic staff upskilling and PD. Most of the sessions touched on this, and no matter how many learning technologists, templates and other aids are in place, most academic institutions presenting still seemed to have the model of tutors being the end user who 'owns' the digital space. There are, of course, debates to be had over the rights and wrongs of this.
    • It was interesting to see Birkbeck deciding to go back to the drawing board for how to go fully online (decisions > policy [including adopting basic standards for Moodle and ABC Learning Design] > roles).
    • Galway showed a nice example of using H5P to teach academics how to use H5P – there seemed to be some clever setup tricks in their approach that would be good to see shared in a way that others could use. However, it seemed like it would need quite a lot of setup – indeed, the presenter mentioned it was being used for small groups (of 5), as would be the case in a face-to-face workshop, rather than something that could be used more widely at a larger scale[?].
  14. Data analytics was considered in a few presentations, including with regards to search (#8) and in terms of the wider ecosystem (#12).
    • Intelliboard presented on how to use data for early intervention, proactive advising and more. Much of this sounded very similar to what I got excited about with Starfish's solution at BBWorld in 2009.
    • Chichester showed some interesting work in rationalising module evaluation – cutting the number of templates to allow comparison across departments/schools – and how they have used templates and question banks to do this. There were some nice displays of data with the chart.js plugin [?], but overall it was a little mind-blowing that a university would, until recently, have been doing this on paper and not in a consistent way.
    • There was also some data analysis and consideration of machine learning models from a Hungarian institution that looked interesting for what it might mean for the metrics being used.
    • Another session from Intelliboard was good in showing the research and data related to online learning and how there are many things we do know – for example, the correlation between perceived instructor competence and whether the instructor is seen as caring about the student [i.e. we give a perception of competence to people we like!]. Homophily, the perception of time (i.e. you need to answer student queries quickly), etc. were also considered in this.
  15. Video sharing from OneDrive. This was a practical and super simple presentation – I would hope no one attending this event actually uploads video directly to Moodle, and I would imagine most organisations have a video platform to use (which is probably preferable to the OneDrive examples shared).
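As a footnote to #12 above, here is a hypothetical sketch of the profile-field idea – this is not Moodle's actual API, just an illustration of how dashboard items could be shown or hidden based on custom profile fields such as study mode or sponsoring employer.

```python
# Hypothetical illustration of filtering dashboard items by custom profile
# fields (study mode, sponsoring employer, etc.). Not Moodle code – a sketch.

from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    profile: dict = field(default_factory=dict)  # e.g. {"study_mode": "online"}

# Each dashboard item declares the profile values it is relevant to;
# an empty dict means "show to everyone".
dashboard_items = [
    {"title": "Campus parking update",  "show_if": {"study_mode": "campus"}},
    {"title": "Online exam guidance",   "show_if": {"study_mode": "online"}},
    {"title": "Acme sponsored content", "show_if": {"sponsor": "Acme"}},
    {"title": "Library opening hours",  "show_if": {}},
]

def visible_items(student, items):
    """Return only the items whose conditions match the student's profile."""
    return [
        item["title"]
        for item in items
        if all(student.profile.get(k) == v for k, v in item["show_if"].items())
    ]

student = Student("A. Learner", {"study_mode": "online", "sponsor": "Acme"})
print(visible_items(student, dashboard_items))
# ['Online exam guidance', 'Acme sponsored content', 'Library opening hours']
```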

The future

I am well aware that we all take our own things from such events – for example, there were more developer/technical-focused sessions during the conference which are not related to my area of expertise but would have been of use to others, and I did not get to attend everything due to other events, emails, etc.

Therefore, I appreciate that my complaining about basic Moodle operations is unfair given that this is new to many. What is more worrying is that the beginner-level and basic material in some of the sessions came from higher education institutions that are really behind the sector overall and will continue to waste tuition fees and government money in various areas.

The real change going forward might be with Moodle's model itself. The creation of Moodle US, alongside the monetisation via Moodle Workplace and Mobile, is an interesting change for what is theoretically still an OS project. Of course, it is also a fair point to say one shouldn't criticise the project if not contributing financially or via time (such as in testing or development).

My main takeaways

  1. We need to challenge ourselves to not just learn from the last 20-odd years but also apply those lessons.
  2. There were a few plugins and themes for me to look into (those I have taken time to highlight above).
  3. Moodle 4.0 is a huge opportunity but probably one not to get too excited about.

“Totally unrealistic”? Reflecting on categorising learning topics within games

This post was triggered by the below Twitter thread. Nuance is of course often lost in Twitter character limits, but was my immediate response on reading @DTWillingham's article fair, or was I being too emotional (given my work in learning and time spent in the world of video games)?

Trigger thread

Firstly, let's all agree games are hugely powerful for learning. Indeed, I often blame Sid Meier for my choice of History for undergraduate studies (although, of course, a number of good teachers and other factors were at play).

Second, I would recommend you look at the original article. The idea is a really interesting one. The numbered points below are mostly where I disagreed with the article on first read-through, with some reflections included below each point. Many of these have little to do with the (knowledge and skills) learning specifically but are important in terms of the framing of the learning environment and motivation (if we consider this in terms of KISME). "Design for motivation" is arguably a skill in itself, as articulated in this new learning designer competency framework.

  1. “if my kids are representative”
    1. I appreciate this is a newspaper opinion piece but anecdotal starting points are not great. I also appreciate most of my views are very anecdotal based on my own experiences 🙂
  2. “I gave in to increased gaming time but gravely told my children they should choose educational games”
    1. This is a hugely "first world problem" statement. When I was in the age bracket being discussed (8 to 18) I got one game for my birthday and one for Christmas. If gaming is a concern for a parent then I would rather see an article encouraging them to be active in the choices – either choosing the games themselves or being involved with the children in the selection.
  3. "it's usually impossible to know what, if anything, kids will learn from a video game based on a simple description of it"
    1. I really like the opening of this part but not the bit I have italicised. Yes, a description will not likely cover this, but a gaming experience is intensely personal. There are so many levels of competence to gaming skill, many games are non-linear and players will pay differing levels of attention. Therefore, just like in an education environment, it is incredibly difficult to say what people "will learn" – only what we are encouraging and supporting them to learn. This also runs counter to some game design – for example, the deliberately open design in the latest Zelda game.
  4. “The Entertainment Software Rating Board rates games for objectionable content like sex and violence. That’s helpful, but it should be as easy for parents to guide their kids toward enriching games as it is to shield them from unacceptable ones.”
    1. Surprisingly, given the author, this massively oversimplifies learning. The ESRB, the BBFC, etc. are dealing with a very small taxonomy – for example, I just looked at GTA V on the ESRB site (presuming it would be the game with the most 'warnings') and it is only rated on 7 items, albeit with levels within this model ("intense", "strong", etc., which is probably how we get to the 30 categories the article mentions). If we were to map "topics" as mentioned earlier, what would be the appropriate taxonomy? Cataloguers and librarians the world over would be quick to tell you this is difficult; video games themselves were used in my Librarianship MA as an example of how hard it is to fit things into the Dewey Decimal Classification – do they go under games, technology, etc.?
  5. “boring”, education-first, games
    1. I previously considered whether podcasts were the rebirth of "edutainment", and I don't think we would say that edutainment as a concept is entirely bad. Indeed, most people will remember their more "fun" teachers over some of the others. However, I would agree that "chocolate-covered broccoli" learning design isn't very helpful in general, similarly to forced gamification in workplace learning. At the most recent school I worked at, most made-for-education "games" tended to frustrate the kids, as they are the first to see when learning is being 'forced' into a game environment. Similarly, potentially educational games, like Minecraft, were misused in what can probably best be described as 'di**king about'. However, the experience of course varied enormously between the games and the children in terms of preference and practice. That said, some serious games undoubtedly do work, and the science has been worked on for a long time, even if just thanks to the age-old learning paradigm of simulation and practice of activities in safe(r) environments.
  6. “To make them fun, game creators either make the content less academic (and claim education will still benefit) or remove the tests (and claim kids will still learn). But the effect of either change on learning is unpredictable.”
    1. “learning is unpredictable” – I think this is the nub of the matter. It is unpredictable and difficult which is really why I was saying it is unrealistic to try and rate learning in such media. Indeed the article references the evidence that some games designed to help with memory do not work (which is in part why I said the vast majority of game driven learning is really accidental).
  7. “playing Tetris, do improve spatial thinking skills, an ability linked to success in math and science”
    1. But the designers probably did not anticipate this, and the evidence only becomes clear over time. It would be very difficult to classify such outcomes at the point of publication.
  8. “not quiz players on it”
    1. This is of course a very education-system way to talk about learning (going back in part to the original reason this site was called what it is). It probably does not help, as it reinforces parental expectations of testing everything. The article does double back to say learning happens "because you thought about it, not because you were quizzed", but I would say it is weak on the fact that repetition to counter the forgetting curve is key here. For example, I learned Caribbean geography from Pirates! (like the other article mentioned in the thread, though that one used Black Flag rather than Pirates!) because I played for many hours over a long period of time; however, I also had that knowledge reinforced through following football/soccer, looking at maps, watching the Olympics, etc. We know who Prince Harry is married to due to constant exposure to that content; I know very little about less exposed celebrities/royals.
  9. “They have to think about it, and that’s guaranteed only if the information is crucial to playing. Even then, “will they think about it?” isn’t always obvious.”
    1. I wouldn't say it is guaranteed even in that case – repetition, interest, existing level of knowledge, etc. would all impact this. Also, you do not necessarily think about spatial thinking skills; that benefit is more incidental in the Tetris example, etc.
  10. Roller Coaster Tycoon
    1. As the article suggests, the gamer would need an interest to pick up on the more scientific elements rather than playing for fun/crashes. It would also depend a lot on existing knowledge, which would be impacted by age, literacy levels, etc.
    2. This could revert to something like sticking a recommended reading level on a game. For example, I loved Shadowrun but got bored with Shadowrun Returns as there was far too much text to read. A text rating would help parents and gamers of all ages. The text could also potentially be exported from the game's code and analysed away from the game (see the sketch after this list). This might help people determine if the game is too complex, for example if they are going to have to sit through a huge tutorial reading activity. That said, in another context I would happily play more 'interactive book' type experiences.
  11. “Someone who understands cognition needs to evaluate gameplay. The video gaming industry could arrange for that.”
    1. This is the really difficult bit from a practical perspective. You may understand cognition, but could you get through the game? Your analysis is unlikely to map to the possible variations in the experience. Would you be better off analysing pro players (for example on Twitch or YouTube)? I doubt "Game makers submit a detailed description of a new game, which is then evaluated by three professional raters", as for the ESRB, would be anywhere near sufficient for the complexity of knowledge, skills and behaviours a game may change.
    2. There would also be potential cost implications – gaming is notoriously a low-price-inflation industry (even though the tech involved and the size of games have transformed), with small and large studios regularly disappearing into bankruptcy.
  12. “they owe parents that much.”
    1. A nice way to wrap up the article. However, if we take it that a parent would have to be at least 16 years old, I would say the industry does not really owe you anything unless you have chipped in by playing games yourself over those years. As with film ratings and Parental Advisory labels, it would also only be of use to the small number of parents who care.
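On the text-analysis point in #10 above, here is a rough sketch of how exported game text could be given an indicative reading-level score using the standard Flesch reading-ease formula. The syllable counter is a crude vowel-group approximation, so treat the output as illustrative rather than something a ratings body could rely on.

```python
# Rough reading-level scoring for exported game text using the Flesch
# reading-ease formula; higher scores mean easier text (90+ is very easy).

import re

def count_syllables(word):
    """Approximate syllables as groups of consecutive vowels (crude)."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return None
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) \
                   - 84.6 * (syllables / len(words))

sample = ("Welcome to the tutorial. Select a target and press the attack button. "
          "Dialogue choices will affect your reputation with each faction.")
print(round(flesch_reading_ease(sample), 1))
```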

The ease with which this information would appear to parents/purchasers also perhaps gives more credit than is due to some of the systems involved. The PlayStation store, for example, does not even offer a 'wish list' or 'save for later' type of option. The Steam Store allows various tagging, but again we come back to how difficult a taxonomy would be. The article and Twitter thread both mentioned Assassin's Creed; if we take Valhalla, you could argue you would pick up a rough idea of:

  • English and Norwegian geography
  • some (stereotyped) Anglo Saxon and Norse cultural aspects
  • elements of medieval religious practice
  • different weapon types
  • and probably some other knowledge pieces.

However, as with learning from films and other media, perhaps the most interesting point is away from such obvious content. Valhalla's approach to same-sex relationships, for example, could be a transformational learning experience: if a sexist homophobe played the game then maybe, just maybe, they might have some of their beliefs and resulting behaviours changed. That said, did Ubisoft consult with relevant bodies to ensure their representation was appropriate? This is a challenge that could be cast at many sources of information, of course – for example, whether The Crown should come with a health/education warning.

As I tweeted, I would love to work in gaming at some point; indeed, one of those 'sliding doors' moments in my younger years was turning down a job at Codemasters. However, on reflection, I still don't think the article's suggestion is the best way to go. Indeed, education consultants working for the developers would seem preferable to external rating and verification. DTWillingham is, of course, a luminary in this area (hell, the LA Times publishes his articles!) but, whilst I love the idea of this job existing, I still feel it would be incredibly difficult to bring to fruition in a way that is of value to parents or anyone else.

Docebo Shape: First impressions

Firstly, kudos to Docebo for giving everyone free trials of this new tool.

Secondly, kudos for a funny launch video:

What is "Shape"?

Shape is the latest tool to offer AI auto-conversion of existing content into learning content. It would appear to be going for the "do you really need an instructional designer for this?" market. Obviously this is debatable ground on which to start a product, but so too is the starting point of "only instructional designers can create learning", so hey ho. Shape seems to be entering some of the space of tools like Wildfire and perhaps the quiz area – like Quillionz, which I have used a bit in the past.

My experiment

I recently needed to build a new learning module based on an overhauled document. The doc effectively amounts to a policy or practice document with some specific "do this" points related to expected behaviours.

Therefore, I thought I would see what Docebo’s new AI tool can do with the raw content of the policy doc in comparison to what I came up with (in Articulate Rise 360).

When you upload content it goes through the below steps (after you say whether you want a small, medium or large project):

The extraction to production workflow

Of these steps, the only manual intervention is to give the Shape (yes, each project/presentation is itself a "Shape") a title. The system does auto-suggest three titles, but you can create your own.

The output

What you get is effectively a short video: the tool picks out key text and automatically overlays it on selected stock images, with a selected audio track (about 15 tracks are included and you can upload your own).

This can be previewed in browser (all I have done so far) or published elsewhere.

Concerns

One concern that should probably be held is what happens to the data – how much the AI is improving itself by saving content that may be your copyright, etc.

There are some predictable issues with the AI – for example, use of "interest" in the context of 'an interest in something' leads to a background graphic about interest rates. A lot of the images are also stock-image rubbish, but that was probably predictable.

The stock images used as backgrounds vary in quality, which is a little odd as you would have thought they would all be of a similar size to avoid scaling issues, etc. I certainly saw one or two that looked pixelated.

Some of the background choices were not great for contrast and being able to see the text.

The music was very ‘meh’.

I found the default speed a little fast for reading but it does at least force a little concentration 😉

Overall, the model is questionable given the distraction of the transitions and images in relation to cognitive load and redundancy.

The good

The output looks mostly professional and is in line with modern short adverts – for example, this kind of thing could easily be done in Shape (note that images are included, although you have to upload your own videos if you want to use them – at least in the free trial version):

You can edit the Shape to change colours, images, etc. to deal with some of the issues I raise under 'Concerns' about contrast (although the result is still probably not great for accessibility?).

Perhaps most importantly, the AI does a pretty good job of spotting the key elements from the source material, although there was some weird stuff towards the end.

The "medium" solution I requested came back at just over 3 minutes, which suggests this is aiming for "short and punchy" rather than trying to be too clever.

Overall

Is it worth it? Well, for basic advertisements this seems great – it would be an easy way to create content for campaigns – but I'm not sure micro-learning itself in this format is hugely helpful. That said, if we compare this with what was possible a few years back, the ease with which we can now create content is hugely impressive.

Docebo have a track record of improving their products and I know they have some really good people on their team so hopefully Shape can become a useful tool to Docebo’s LXP customers and beyond.

To time or not to time

Expected duration. Time on task. Lock stepped vs open. Start and end dates. Peer pressure motivation. Collaborative vs independent.

All of the above are all too well known to online learning developers. Does your design measure progress? Is it via time on task? Do you lock access based on progress, enforce weekly or other spacing, use pre- and post-testing to adapt the experience, or some other method? These issues are often tied to whether you are allowing people simply to access content versus undertaking more collaborative activities.

This week I have had the chance to pick up a few "courses" (well, resources really in some cases) and this has got me thinking again about the temporal aspect of online learning. For example, is there value in Coursera basically unenrolling you from their courses to fit their schedule, with the option to re-enrol on the next session? This is partly because there are discussion activities but, in reality, the timing adds nothing to the learning experience for those wanting to pass through the course at their own pace.

Google, for example, advertise that they have opportunities via Coursera yet the company known for “organiz[ing] the world’s information and mak[ing] it universally accessible and useful” lock these “job-training solutions” to their/Coursera’s timelines rather than those of the interested party.

This expectation of working through at someone else's pace is poor instructional practice when, in reality, many such courses are combinations of async activities such as videos, reflections, quizzes, etc. The defence of the model is probably facilitator support (i.e. being able to have someone online to help with questions). However, this seems contradictory to the idea of flat-rate charging ($39 a month*, as in the below image) without the traditional Coursera "audit" (i.e. free) access option. If the intention is to increase completion rates by forcing a time-based fear/scarcity mode of motivation, that is similarly poor given there is not the personal support you would have in, say, a traditional university course to give you a hand and nudge you along to the final deadline.

It feels that if this is the model then these courses need to be designed to allow any-time joining, with, say, monthly cohorts for discussion boards. Indeed, we were designing something similar for rolling-start degrees back in c.2010.
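For what it's worth, the grouping logic behind that kind of model is trivial – here is a minimal sketch (with hypothetical learner data) of assigning any-time enrolments to monthly discussion cohorts:

```python
# Sketch of "join any time, discuss with a monthly cohort": learners work
# through async content at their own pace but are grouped for discussion
# boards by enrolment month. Learner data below is purely hypothetical.

from datetime import date

def cohort_for(enrolment_date):
    """Key the discussion cohort by the enrolment month."""
    return enrolment_date.strftime("%Y-%m")

def cohort_members(enrolments):
    """Map each monthly cohort to the learners who enrolled in that month."""
    cohorts = {}
    for learner, enrolled_on in enrolments.items():
        cohorts.setdefault(cohort_for(enrolled_on), []).append(learner)
    return cohorts

enrolments = {
    "alice": date(2021, 3, 2),
    "bola": date(2021, 3, 28),
    "chen": date(2021, 4, 5),
}
print(cohort_members(enrolments))
# {'2021-03': ['alice', 'bola'], '2021-04': ['chen']}
```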

Ultimately it feels like MOOCs continue to fail at their stated objectives time and time again.

* Also, Google obviously have enough money to support skills development as a CSR activity without charging for such items.