An Instructional Design Process Review by Dr. Clark Quinn


There is much buzz around design processes these days, and it’s worth looking at what the excitement is about. While there is a bit of ‘hype’, there is also real value to be found, so it pays to dig into the details.

Why should we even consider reviewing our design processes? In addition to problems with the outcomes, there are some structural problems. What we want is a repeatable and predictable process that yields optimal outcomes under pragmatic constraints. And we’re far from it.

The original process model for learning design is ADDIE: Analysis, Design, Development, Implementation, and Evaluation. These are all good steps, but there’s a problem: too often they’re followed in a strictly linear fashion. And that’s not a recipe for success.

In software engineering, it has long been recognized that a linear, so-called ‘waterfall’ model of design doesn’t work. With materials like concrete and metal, the properties are well known. When designing for humans, however, the audience’s ideas of what they want are likely to change as they become aware of the opportunities the process creates. The resulting change in requirements effectively breaks a linear process. To be fair, most process proponents have advocated an iterative model of ADDIE for a long time now, but the anecdotal evidence suggests that’s not what actually takes place.

Another revolution in software engineering has been the Agile approach, sparked by the Agile Manifesto. As a consequence, new learning design approaches have been proposed, including Allen’s SAM and Torrance’s LLAMA model. The core concept is teams working in short ‘sprints’ to develop the next iteration and then planning forward. This has a number of benefits, including rapid outcomes, regular testing, and genuine collaboration. However, there are more nuances than just being iterative. So let’s run through some of the possible characteristics and options, evaluating the case for each.

Principles

To be sure, iteration is good. However, it has to be formative as well: each iteration is evaluated, and the feedback refines the design. At every iteration there needs to be a test or a review to determine how well the output is meeting the desired criteria.
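To make the loop concrete, here is a minimal sketch in Python of the iterate-evaluate-refine cycle being described; the evaluate and refine functions, and the toy criterion, are hypothetical stand-ins for whatever review or testing a team actually uses:

    # Rough sketch of an iterative, formative loop: every pass is evaluated
    # against the desired criteria and the feedback drives the next refinement.
    # All names here are illustrative placeholders, not a prescribed tool.
    from dataclasses import dataclass

    @dataclass
    class Feedback:
        meets_criteria: bool
        notes: str

    def evaluate(prototype, criteria):
        # Stand-in for a learner test or expert review against the criteria.
        return Feedback(meets_criteria=criteria(prototype), notes="review notes")

    def refine(prototype, feedback):
        # Stand-in for revising the design in response to the feedback.
        return prototype + " (revised)"

    def iterate_design(prototype, criteria, max_iterations=3):
        for _ in range(max_iterations):
            feedback = evaluate(prototype, criteria)   # test or review each pass
            if feedback.meets_criteria:
                break                                  # criteria met; stop early
            prototype = refine(prototype, feedback)    # feedback refines the design
        return prototype

    # Toy example: iterate until the (arbitrary) criterion of two revisions is met.
    print(iterate_design("storyboard v1", lambda p: p.count("(revised)") >= 2))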

The above implies a measurement focus. In software development, good processes stipulate what the inputs and outputs of the software will be, including a test suite, before development begins. Ideally, there should similarly be learning outcomes (as well as usability and engagement metrics) that the output should meet when done. In practice, of course, we’re often testing just to find the major problems. In usability, a heuristic approach alternates between expert review and user testing, with a goal of catching some 80-90% of problems.
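For readers who want the software analogy spelled out, here is a minimal test-first sketch in Python; the passing threshold, quiz-scoring function, and answer data are all hypothetical, chosen only to show acceptance criteria being stipulated before the build:

    # Minimal sketch of the test-first practice referenced above: the acceptance
    # criterion is written down before the thing being tested is built.
    import unittest

    PASSING_SCORE = 0.8  # assumed acceptance criterion, defined up front

    def score_quiz_attempt(answers, answer_key):
        # Built only after (and in order to satisfy) the test below.
        correct = sum(1 for a, k in zip(answers, answer_key) if a == k)
        return correct / len(answer_key)

    class TestAcceptanceCriteria(unittest.TestCase):
        def test_meets_passing_threshold(self):
            # Expected inputs and outputs are stipulated before development.
            score = score_quiz_attempt(list("bcada"), list("bcadb"))
            self.assertGreaterEqual(score, PASSING_SCORE)

    if __name__ == "__main__":
        unittest.main()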

A related principle is that development should produce small working products that can be elaborated, rather than one monolithic output. This makes obvious sense for modular software; how it maps to learning content is a more open question. One approach, sketched below, is to develop the final practice or assessment first, then add the intermediate practice needed to get the learner to that point, and finally determine the minimum concepts and examples that need to be developed.
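As a purely illustrative sketch of that ordering (the content items and helper function below are hypothetical), the build sequence works backwards from the final assessment:

    # Illustrative build order: final practice/assessment first, then the
    # intermediate practice, then only the concepts/examples those require.
    final_assessment = "scenario: handle an escalated customer complaint"

    intermediate_practice = [
        "practice: acknowledge and restate the complaint",
        "practice: choose an appropriate resolution",
    ]

    def minimum_support_for(practice_items):
        # Stand-in for deriving only the concepts/examples the practice needs.
        return [f"concept/example supporting: {item}" for item in practice_items]

    build_order = (
        [final_assessment]
        + intermediate_practice
        + minimum_support_for([final_assessment] + intermediate_practice)
    )

    for step, item in enumerate(build_order, start=1):
        print(step, item)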

Another principle is to stay with the lowest technology possible for as long as possible. The core premise is that the more you invest in development, the harder the result is to throw away. A cruder version of this is to not start development until the design is fully rendered as a storyboard. As tools get easier to use this may become less of an issue, but the freedom to throw work away and start again is valuable for ensuring a genuine willingness to act on feedback.

Another mantra that arose out of usability is so-called ‘situated’ design. Here designers don’t work in a vacuum; particularly in the analysis phase, they visit the actual performance environment. That environment holds clues to where the real problems lie (for instance, users might not mention problems they’ve already worked around by posting a sticky note to remind them of some step or important information). This is now an aspect of performance consulting, and the realities of the need should indeed be validated before determining the solution.

Also arising out of the user interface field was the move to ‘participatory’ design, in which representative users are active in the actual design process. These users (and other stakeholders) provide a reality check, serving as the voice of the customer, and their developing awareness helps surface design opportunities that weren’t visible when the requirements were set. The flexibility to support such emergent possibilities is a rich way to create a true learner experience.

Similarly, an important component of the agile method, as mentioned earlier, is its collaborative nature. Research makes clear that the best outputs come when people work together in productive ways. Having one individual responsible for all the design (let alone all the design and development) is suboptimal. At a minimum, have several checkpoints where people collaborate, including at the beginning and at every review point.

One issue that arises is maintaining predictability. Many people are concerned that iteration implies an uncertain number of revisions. SAM addresses this by pragmatically fixing three iterations for each of the two loops in the approach (which, as I understand it, can be adjusted based upon smart expectations about the scope and complexity of the project).

An interesting approach comes from software engineering. Watts Humphrey, after a career in quality control for coding, developed personal and team processes that decreased code errors and improved estimates. His key element for the latter was to document estimates and then compare them to actual outcomes. The realization was that estimates were too often never reviewed, so the discrepancies never decreased. When the errors were reviewed, however, estimate accuracy increased.
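A minimal sketch of that practice, assuming nothing more than a running log of estimated versus actual hours (the project names and numbers are made up), might look like this:

    # Minimal sketch of documenting estimates and reviewing them against actuals,
    # so the discrepancies get examined rather than forgotten.
    estimates = [
        # (project, estimated_hours, actual_hours)
        ("onboarding module", 40, 62),
        ("compliance refresh", 25, 31),
        ("sales scenario pack", 60, 66),
    ]

    for name, est, actual in estimates:
        print(f"{name}: estimated {est}h, actual {actual}h, ratio {actual / est:.2f}")

    overall = sum(a for _, _, a in estimates) / sum(e for _, e, _ in estimates)
    print(f"overall actual/estimate ratio: {overall:.2f} (use to calibrate the next estimate)")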

Goals & Implications

It is clear that we may not want to incorporate all of these elements into our approaches, but we do want to work towards processes that balance optimal outcomes with pragmatic scope. Certain goals should be at the top of our list.

First, we should be looking to create learning that is effective as well as efficient. If we develop a course when it’s not necessary, we’ve missed an opportunity and wasted money. Similarly, if we develop a course focused on the wrong things, we’re not being efficient with resources. This suggests effective performance consulting up front, and a process for working with SMEs that results in the right objectives.

Once we’ve determined a real learning need, we need to ensure that our learning design will be effective. That means sufficient meaningful practice, specific feedback, and the minimum of concepts and examples needed to achieve the necessary outcomes. Prototyping and refinement, i.e. iteration, is the way to ensure practice that meets the necessary criteria. Incorporating regular, lightweight cycles of expert review and user testing is also appropriate.

We also want the experience to be emotionally engaging. Working collaboratively is one of the best ways to tap into the creativity that generates engagement, including compelling narratives and appropriate, varied use of media.

Pragmatics

All of the above needs to be managed so that the process is still repeatable and affordable. Instituting regular checkpoints, between team members when creativity is desirable and with stakeholders when reviews are required, is one approach. Templates for learning design quality can reduce the need for extensive review. While the initial effort may be higher, with practice the effort required goes down and the ability to predict goes up.

An essential component is beginning to measure the impact of the learning. This focus on metrics has two roles: first, to ensure the learning design is focused on a meaningful change; second, to both inform and document success.

There may be an overall increase in the effort required. This should be offset by not developing courses when courses aren’t needed, and it becomes easier to justify once you start measuring and documenting the impact you’re having on the organization.

It’s time we stop assuming that a single designer/developer (or a handoff from designer to developer) can take a given objective, prepare content and an associated quiz in a linear fashion, and have any meaningful impact. If we acknowledge the complexity of the human brain, we need processes that draw upon what’s known about maximizing outcomes under realistic constraints. Our design processes need to reflect the 21st century just as much as our learning designs do.