The Realities of Artificial Intelligence and Adaptive Learning


Feature post by Clark Quinn

There’s been quite the spate of discussion of late about Artificial Intelligence (AI) and adaptive learning. You’ve no doubt seen the commercials where Watson converses with talents from Bob Dylan to teacher Ashley Bryant, in the latter of which great learning outcomes are proposed. If we’re to plan accordingly, it’s important to know what is real, where we are, and where we are going. We’ve previously touched on AI, but it’s worth going deeper.

To start, we need to clarify what AI really is. Artificial intelligence can mean a number of things: doing smart things with computers, or doing smart things with computers the way people do them. The distinction is important. Computers work differently from our brains: our brains are serial consciously, but parallel underneath. Computers are serial underneath, though we can have multiple processors, and there are now parallel hardware architectures as well. Still, parallel processing is hard to do in computing, whereas we’re naturally that way.

Mimicking human approaches has been a long-standing effort in AI, as a mechanism to confirm our understanding. If we can get the same results from a computer simulation, we can suggest that we have a solid model of what’s happening.  Of course, the connectionist work, inspired by frustration with some artifacts of cognition, shows that some of the earlier symbolic models were approximations as opposed to accurate representations.

A second area of AI is machine learning. Here, we don’t try to model what’s happening; instead, we simply provide inputs and feedback on the outputs. With a learning algorithm – some procedure whereby the computer changes its approach to better match the desired output – the machine eventually learns what to do. And the resulting ‘rules’ may be opaque to semantic inspection: we can’t necessarily intuit what rules are being used, even if the output is good! Yet, if the results are good, we should be happy.
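
To make that learning loop concrete, here’s a minimal sketch, not any particular system’s method: a perceptron nudging its weights toward an AND-gate target purely from feedback on its outputs. The data, learning rate, and epoch count are all invented for illustration.

```python
# Minimal sketch of a learning algorithm: a perceptron that nudges its
# weights toward the desired output. The AND-gate data is invented for
# illustration.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # learning rate: how far each correction moves the weights

for epoch in range(20):
    for (x1, x2), target in zip(inputs, targets):
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output          # feedback on the output
        weights[0] += rate * error * x1  # the 'rules' emerge from updates,
        weights[1] += rate * error * x2  # not from any explicit model
        bias += rate * error

print(weights, bias)  # the learned parameters resist intuitive reading
```

Notice that nothing in the final weights announces ‘this computes AND’ – the behavior is learned from feedback, not modeled.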

From an instructional perspective, we want whatever is best. We would like to embody what’s known about how to best facilitate learning, for one, and have explicit rules about what to do when.  This can include deciding the next problem or determining an appropriate intervention.  Learning science research has led to robust results that are still imperfectly embodied in our learning environments, and we could do better.  We’d also like to see what emerges from learner interactions.  With vast numbers of online learners, we can start determining what works via the power of big data and analytics. We can learn what behaviors lead to more effective outcomes.
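
As a toy sketch of that analytics step – the behavior counts and scores below are invented, and real learning analytics would involve far larger datasets and more careful inference – we can ask which logged behaviors track with outcomes:

```python
from statistics import correlation  # Python 3.10+

# Invented example data: per-learner behavior counts and an outcome score.
practice_attempts = [3, 7, 2, 9, 5, 8, 1, 6]
forum_posts       = [0, 2, 5, 1, 3, 0, 4, 2]
final_scores      = [55, 82, 48, 90, 70, 85, 40, 75]

# Pearson correlation as a first-pass signal of what behaviors matter.
for name, behavior in [("practice attempts", practice_attempts),
                       ("forum posts", forum_posts)]:
    r = correlation(behavior, final_scores)
    print(f"{name}: r = {r:.2f}")
```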

The Possibilities

So what do we do with artificial intelligence to support learning? There are several categories. We can adjust (adapt) the learning to the learner: while learning styles are not a sound basis for adaptation, learner behaviors are. If learners are struggling, we can return to component skills until those are mastered; if they’re succeeding, we can accelerate the difficulty. We can also adapt on the basis of learner approaches to problems. At least in well-defined domains, we can build models of good performance and detect when learners go awry.
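
A minimal sketch of such behavior-based adaptation might look like the following; the thresholds and skill names are assumptions for illustration, not anyone’s production rules:

```python
# A sketch of behavior-based adaptation: thresholds and skill names are
# invented for illustration.
def next_activity(recent_results, difficulty, component_skills):
    """Pick the next step from recent pass/fail results (True = success)."""
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate < 0.5 and component_skills:
        # Struggling: drop back to an unmastered component skill.
        return ("review", component_skills[0], difficulty)
    if success_rate > 0.8:
        # Succeeding: accelerate the difficulty.
        return ("advance", None, difficulty + 1)
    return ("continue", None, difficulty)  # otherwise, stay the course

print(next_activity([False, False, True], difficulty=3,
                    component_skills=["factoring"]))
# -> ('review', 'factoring', 3)
```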

The specific benefits include instruction from an instructor that doesn’t get tired, has no bias, and is accessible whenever needed. We can have rules that move the learner forward or back purely on behavior, independent of any teacher assessment of the student. We can train tools that make independent evaluations of student writing on specific assignments, assessing quality of thought. And we can intervene when students are at risk, exhibiting behaviors that correlate with a lack of success.
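
For the at-risk case, a hypothetical flagging rule could be as simple as the following – the signals and thresholds are invented stand-ins for whatever a real dataset shows actually correlates with lack of success:

```python
# Hypothetical at-risk flagging: the signals and thresholds are assumptions,
# standing in for empirically derived correlates of failure.
def at_risk(days_since_login, missed_deadlines, avg_quiz_score):
    signals = 0
    if days_since_login > 7:
        signals += 1
    if missed_deadlines >= 2:
        signals += 1
    if avg_quiz_score < 60:
        signals += 1
    return signals >= 2  # two or more warning signs triggers intervention

if at_risk(days_since_login=10, missed_deadlines=3, avg_quiz_score=72):
    print("Flag for instructor follow-up")  # a human stays in the loop
```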

One area of continuing interest is the possibility of coaching not just domains, but learning skills: developing meta-learning. One possibility is a coach implemented across separate exploratory environments, focusing on, say, systematicity of exploration rather than the domains themselves. The notion is to develop learning-to-learn skills across curricula, rather than skills specific to them.

The ultimate point is to leverage the abilities of computers to do symbolic operations in a reliable and repeatable way.  To the extent we can capture the specifics of what we think are good teaching practices, we can embody them in a system.

The Reality

The reality, however, is that we’re still in the early stages. A recent report funded by the Gates Foundation and conducted by SRI found little benefit to adaptive software, as yet.  Our knowledge of learning science is developing, but still embryonic. And the technology, while powerful, still depends on our ability to apply it.

The fact is that previous efforts to build expert systems led to an AI ‘winter’ when the reality didn’t live up to the promises, and we’re only somewhat further along. ‘Knowledge engineers’ worked with experts to build systems that did what the experts said they did. And those systems didn’t work: the correlation between what the experts said they did and what observation actually revealed was essentially zero. Further research suggests that experts literally don’t have conscious access to 70% of what they do.

And while research in the learning sciences has advanced significantly since then, and we have great principles for designing learning experiences, there are still gaps. We don’t fully know which interventions are best, and, more importantly, what the role of great teachers is, or how best to incorporate social learning. We have good heuristics that people can apply, but the adaptation needed is still uncertain.

In well-defined domains, like mathematics and programming, we’ve created intelligent tutoring systems that can achieve specific outcomes, but even those have had trouble transferring into practice. It’s instructive that when a famous algebra tutoring system was commercialized, significant effort was required around the system to make it actually implementable in schools.

One of the important elements is actually thinking about the types of learning we need. It’s not just about the pedagogy, but it’s also about the curriculum. We’re increasingly recognizing that what’s important is not the ability to solve rote problems, but the ability to map issues to solvable problems and then apply practical approaches.  Yet our curricula have yet to reflect this reality.  So, while there have been AI systems developed that can take content such as that in Wikipedia and develop knowledge questions about that content, such tests are seldom what’s going to make a difference in an individual’s ability to do.

Real learning comes from approaching complex problems, experimenting with alternatives, and accessing resources in the process of solving them. It works even better when it’s collaborative, and it clearly benefits from coaching.  While direct instruction works on typical school curricula, a problem-based curriculum works better for retention until needed and transfer to other important problems. As yet, however, problem-based tutoring by computers – where the system is playing a mentoring role – is not ready for prime time.

There are things about our cognitive architecture that aren’t optimal, and things we can do that computers can’t. Our brains are amazing pattern matchers and meaning makers, but they’re really bad at repeating rote and complex steps. Computers are the opposite. Together they’re quite powerful, but what we want from our systems is to assign the right role to each component, human and technological.

The Future

What works best is providing meaningful problems and keeping individuals in the loop to evaluate both the process and the product. Learning designs that involve learners in peer evaluation, coupled with instructor evaluation, are still the best models going. The technologies needed are collaboration and communication tools, not robots.

This is not to say that there is no role for AI in learning: the increasing ability to auto-mark, and to execute specific, known interventions, is valuable (triggering human intervention is one option). And the opportunities will only increase. We should be excited about, and actively investigating, the possibilities. We do have needs for scalability and efficiency, and similarly for increasing the development of abilities.

As yet, however, and perhaps always, there is a role for humans in the loop. We’re making advances, but the best advice is still to design on sound learning principles, not to count on intelligent systems to do our work for us. For now, at least.