There are many ways in which we’re not using technology well, and yet there’s still excitement over new technologies. We’re naturally excited by the new and shiny, but is there any substance behind the hype? Let’s investigate, with a keen eye on real learning potential.
Most of these technologies, for ease of reference (and marketing), have emerged with short acronyms. We’ll talk about a whole suite of them: AI, AR, VR, and VW, before touching on ARGs and then WC.
One of the biggest areas of activity has been in adaptive learning and artificial intelligence (AI). They’re not the same thing, so let’s be clear. We can hardwire separate paths to create adaptive systems, so hypothetically we could have separate learning experiences for learners characterized by some criteria that distinguish them (e.g. learning styles, though their utility has been debunked). Or we can have a system that dynamically adapts based upon the learner’s recent actions. This can also be somewhat simplistic, whereby recent success or failure triggers a simple algorithm to keep the learner in place or advance the difficulty. The non-intelligent approaches are neither particularly new nor of interest. What’s happening with AI, however, is of interest.
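To make the distinction concrete, here is a minimal sketch (with hypothetical names and thresholds) of the simple, non-intelligent adaptation described above: a run of recent successes or failures triggers a rule that advances the difficulty, drops it, or holds the learner in place.

```python
def next_difficulty(current_level: int, recent_results: list[bool],
                    window: int = 3, max_level: int = 5) -> int:
    """Advance one level after `window` consecutive successes,
    drop one level after `window` consecutive failures, else hold.
    (Illustrative rule only; real systems vary.)"""
    recent = recent_results[-window:]
    if len(recent) == window and all(recent):
        return min(current_level + 1, max_level)
    if len(recent) == window and not any(recent):
        return max(current_level - 1, 1)
    return current_level
```

Note there is no model of *why* the learner succeeded or failed; that absence is exactly what separates this from the intelligent approaches discussed next.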
There has been a long history of intelligent tutoring systems (ITS), whereby systems look at what actions the learner has taken and determine what to do next. These are typically characterized as consisting of three models: a learner model, a domain model, and a tutoring model. The tutoring model embodies your pedagogical strategy, and can range from simply putting learners back on the right path to more advanced pedagogies, like allowing some learner strategies to play out before intervening; the type of intervention can vary from direct instruction to graduated hints. Further, the system can have the learner work on one thing until it’s known, or cycle between several different topics. The learner model typically tracks evidence for what the learner knows, or not, and can assign probabilities to these. It’s the domain model that’s been problematic.
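The learner model’s probability tracking is often done with something like Bayesian knowledge tracing; a minimal sketch, with illustrative parameter values (the slip, guess, and learning rates here are assumptions, not values from any particular system):

```python
def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.3) -> float:
    """One Bayesian knowledge tracing step: update the probability
    that the learner knows a skill, given one observed response.

    p_slip  : chance of answering wrong despite knowing the skill
    p_guess : chance of answering right without knowing it
    p_learn : chance of acquiring the skill after this opportunity
    """
    if correct:
        evidence = p_known * (1 - p_slip)
        total = evidence + (1 - p_known) * p_guess
    else:
        evidence = p_known * p_slip
        total = evidence + (1 - p_known) * (1 - p_guess)
    posterior = evidence / total          # belief after seeing the response
    return posterior + (1 - posterior) * p_learn  # then allow for learning
```

A correct answer shifts the estimate up, an incorrect one shifts it down, and the tutoring model can consult these estimates when choosing what to present next.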
The early models, starting with the MYCIN system, were based upon a representation of expert knowledge, and these were costly and problematic to build. As a consequence, they tended to be built around abstract and logical worlds such as mathematics and programming. (The Carnegie Learning work was gestated around one such model for algebra.) These systems built expert models of teaching and of problem-solving in the domain. To be scalable, approaches needed to find a way to develop a basis for adapting that goes beyond domain specifics, or find ways to capture the knowledge in tags and relationships.
More recently several startups have developed mechanisms to do this adaptation by strongly characterizing content development in ways that support the adaptation. They can send learners different questions depending on how they do, diagnosing specific areas that need work. Two of the startups in this space are creating such solutions, though they require specific content building skills or have relatively simple adaptation. Another new approach is mining texts for knowledge, and can auto-generate questions from source material.
Importantly, these systems typically demonstrate better outcomes than typical classroom instruction, achieving performance a standard deviation better, but not the two standard deviations that individual tutoring provides. Of course, this depends on what you’re assessing. Is it knowledge, or actual skills? Intelligent tutoring systems address performance within a specific problem, while most systems merely select different problems based upon the last one. And the problems largely still involve choosing the right answer, not addressing the problem-solving process used.
Then the issue becomes one of meaningful learning experiences. If learning is about deep practice, can these systems provide it? Generating knowledge questions from text isn’t really going to lead to meaningful new abilities. And how about learning to learn? This is a powerful outcome of good instruction, but how can that be built into the learning experience delivered by the system? And what is the role of mentors in these learning experiences?
I think that there’s tremendous promise in AI for learning in terms of reliable and bias-free results, but I think we still need to answer questions of curricula and pedagogy, including meta-learning, before we’re ready to truly capitalize on the opportunities. Right now, we may be better focused on using better design to create rich media content and meaningful practice than adapting the entire learning experience. So maybe we want to look at AR instead of AI.
As an alternative to adaptive technology, another smart approach is to use the user’s context to tailor information. Augmented reality (AR) is a technology with some already demonstrated success in supporting performance and learning. AR involves layering specific information onto the existing world by sensing the context, whether location or what’s seen through the camera, and matching to it. A classic example is holding up a phone with a camera and screen, where the system recognizes the image and adds information to it. It can be pointing out restaurants in the direction you’re looking, or layering images of parts or system components onto the view of an engine. The information can be audio instead: a continuing narrative as you move or accomplish a task, triggered by location or other cues.
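The location-triggered case can be sketched simply: match the user’s coordinates against annotated points of interest and return the information to layer on. The points, labels, and radius below are purely illustrative.

```python
import math

# Hypothetical annotated locations: (latitude, longitude, overlay text).
POINTS = [
    (40.7580, -73.9855, "Restaurant row: 3 options ahead"),
    (40.7614, -73.9776, "Engine room: tap to view components"),
]

def nearby_overlays(lat: float, lon: float, radius_m: float = 100.0):
    """Return overlay text for annotated points within radius_m of the
    user, using an equirectangular distance approximation (adequate
    at city scale)."""
    overlays = []
    for p_lat, p_lon, info in POINTS:
        dx = (p_lon - lon) * math.cos(math.radians(lat)) * 111_320
        dy = (p_lat - lat) * 111_320  # ~meters per degree of latitude
        if math.hypot(dx, dy) <= radius_m:
            overlays.append(info)
    return overlays
```

Real AR systems add image recognition and orientation sensing on top of this, but the core pattern, sensing context and matching it to stored information, is the same.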
A variety of systems now exist to support context-sensitive information. They provide the capability to specify the context and the additional information to be presented. And there are powerful reasons to instrument the world instead of trying to put it in the head.
Certainly, AR can be a powerful adjunct to learning. Exposing the underlying workings of something, or pointing out specific items relevant to the learner’s goals, can augment formal instruction in important ways. However, another role may be more performance support than formal learning. Helping someone accomplish a task in the moment can be a valuable outcome whether or not any specific ‘learning’ happens.
AR is, indeed, ready for prime time, and already seeing real use cases. And there’s an extension that’s also of interest, VR.
Virtual Reality, or VR, is a different take than AR. While AR augments reality, VR creates its own. The typical implementation is a set of goggles that presents visuals (stereo, one to each eye) that create a separate reality. You see a completely simulated world. What’s different from regular virtual worlds, which present just an image on a computer screen, comes in two separate ways: first, it’s 3D, owing to the stereo eye images; second, it’s responsive, in that as you move, the scene adapts relative to where you were and where you’ve moved to.
The benefits here are several-fold. First, the world doesn’t have to be real, and it can be at any scale. It’s highly immersive too, in that your vision is completely enveloped and the world reacts to your motion (the original motion sickness problems have largely been eliminated).
Most importantly, you can create a fully envisioned 3D world to be explored. Learners can move around or through any creation of importance, at any scale. You can visit molecules or galaxies. What’s more, you can take action. With suitable (and non-trivial) programming, you can have interactions that create experiences.
There are downsides. While the prices have dropped dramatically, the sets still have additional costs on top of your existing hardware. The costs to develop the worlds can be somewhat steep. And the lack of awareness of the rest of the world has the potential to be dangerous.
Overall, however, in the right place and time the learning outcomes can justify the expense, and things will get more powerful and cheaper over time.
Virtual Worlds were a version of Virtual Reality presented on a computer screen and navigated by mouse or keys. No extra headset was required, though originally they required separate applications; eventually, browser-based versions emerged. They’re not new, but they’re re-emerging.
There were some overheads involved; they could be quite processor-intensive to handle the digital image rendering, and acting in the worlds could require some learning overhead. Depending on the capabilities involved, these could prove to be substantial barriers.
There were two main opportunities that these worlds provided: 3D and social. The combination had significant value in specific situations. When the situation benefited from sharing information in an environment where space mattered, there were real opportunities for learning.
Of course, the original hype around these worlds was overblown. The overheads meant that they couldn’t be a panacea. Using these worlds just for social communication or just for 3D didn’t make sense, as there were lower-overhead solutions. Consequently, these environments underwent a collapse after the initial excitement. Over time, however, they have returned, with much more careful attention to where they make sense.
One other acronym, alternate reality games or ARGs, also has learning opportunities. ARGs are games that, instead of being housed inside a computer or console, permeate the real world through channels like phone, email, text messages, or real world incidents. Spread out over time and space, they can be single player but more frequently are multi-player.
Sharing benefits with serious games, such as situating important decisions in a thematic story, ARGs add the possibility of using the communication tools typically used in the workplace. So, for instance, in a demo we once created, the player took on a sales role, engaging with a potential client to close a deal by sending the right information in response to the dialog.
The infrastructure to support such games is evolving. It used to be that you needed to custom-code the systems, but increasingly there are platforms available. These platforms provide one location to support cross-channel communications so a text message can trigger a call or an email, or any other such mapping desired.
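The cross-channel mapping such platforms provide can be sketched as a simple routing table: an inbound event on one channel triggers an outbound action on another. The channels, keywords, and actions below are hypothetical, chosen to echo the sales demo described above.

```python
# Hypothetical routing table: (inbound channel, trigger keyword)
# maps to (outbound channel, action to take).
ROUTES = {
    ("text", "price"): ("email", "Send pricing sheet"),
    ("email", "proposal"): ("call", "Schedule follow-up call"),
}

def route(channel: str, message: str):
    """Return the outbound (channel, action) pair for an inbound
    event, or None if no mapping is defined."""
    for (in_channel, keyword), action in ROUTES.items():
        if channel == in_channel and keyword in message.lower():
            return action
    return None
```

A real platform would add scheduling, player state, and delivery integrations, but this mapping of one channel’s input to another channel’s output is the core capability described.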
While mobile learning has been relatively mainstream for a number of years now, a new facet is becoming available: wearable computing (WC). Wearable technology, whether in form factors such as Google’s Glass eyewear or wrist-worn devices such as watches, is increasingly hitting the mainstream. While the glasses supported AR, as above, the wrist opportunities provide new capabilities.
The way to conceptualize wearables is by their usage. The attention time a wearable gets is seconds, while a pocketable (e.g. a smartphone) gets seconds to perhaps some minutes. (A tablet really is used for at least minutes and potentially up to a few hours at a time.) What does this give us?
A few seconds is generally not considered the right span for learning. In fact, it’s short even for performance support. However, we can imagine a few use cases. For example, a quick quiz question reinforcing knowledge, or maybe even a mini-scenario, might make sense. Similarly, a quick contact after a performance experience could transform it into a learning experience. Naturally, reminders of events, pointers along a path, or a quick question and answer could be considered a form of support.
As with many new technologies, our initial uses will mimic previous solutions. New opportunities tend to come after we become familiar with the new affordances. The point is to map learning to the technology (not the other way around).
And that’s the crux of new technologies. We want to map them to good pedagogy, not use them to perpetuate old mechanisms. We want to start from designing the learning experience we need, and then figure out what technologies can help. The reverse, where we look for ways a new technology can be used for learning, is backwards. Look at the core capabilities, and then design with this new opportunity on tap when it makes sense.
There are and will be new technologies arising continuously. It is our role to evaluate their new and unique offerings, and capitalize on them in the service of learning and improving performance outcomes. Whether it’s through providing meaningful practice, augmenting learning or performance, or connecting people, we want to align technology with how we think, work, and learn. In this way, we can truly advance our abilities to improve our situation.