Initially, I was quite enthusiastic when GPT was able to generate new workshop structures for me within seconds, given the appropriate prompt. My students were also highly successful in creating prompts for generating creative learning methods that could be used in the classroom. Funky, cool stuff, although – since GPT is a black box – we’ll never know how an idea was generated or whether it actually works in practice.
However, when it came to understanding causal relationships within a learning process, such as the logical connection by which, for example, a presentation is followed by a discussion, I got pretty much nothing out of GPT. All attempts to string together a coherent sequence of contextualized learning activities failed.
As a Large Language Model (LLM), GPT cannot handle context, especially evolving meaning within a context, nor can it handle complexity. The more complex a prompt becomes, the more likely the model is to pick arbitrary points of focus. It can manage small steps, perhaps, but not complex, interrelated ones.
We need to distinguish between the almost unlimited theoretical probabilities of tokens and context-based, adaptive, pre-structured human learning processes, which are intrinsically driven by social and emotional factors, wonderfully described in Damasio’s ‘Descartes’ Error’. Human learning processes evolve in phases of increasing cognitive, metacognitive, social, and emotional complexity, with each phase interdependent and interwoven with the others. Let’s call this the analog paradigm.
It occurred to me that we could perhaps build a ‘quasi-causal’ AI. In this concept, we would not work with data snippets such as tokens, but with more complex data objects (representing the learning stages) that carry the weights of adjacent connections, such as the probability of the next three or four learning phases within a sequence.
In our model at NEXTGEN.LX, we work with 12 categories of learning phases. This means that each data object would need a small database recording how likely each of the 12 categories is to precede or follow it, ideally within a radius of three to four segments. The advantage is obvious: we would extract far more information from a limited pool of data.
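To make this concrete, here is a minimal Python sketch of one way such ‘quasi-causal’ weights could be estimated from observed sequences. Everything in it is invented for illustration: the phase labels are hypothetical stand-ins for our 12 categories, and the toy corpus is not real data.

```python
from collections import Counter, defaultdict

# Hypothetical stand-ins for the 12 learning-phase categories;
# the real NEXTGEN.LX taxonomy is not reproduced here.
PHASES = [
    "orientation", "information_gathering", "problem_definition",
    "data_collection", "data_analysis", "hypothesis", "experimentation",
    "presentation", "discussion", "reflection", "transfer", "assessment",
]

def build_follower_table(sequences, radius=3):
    """For each phase, estimate how likely every other phase is to
    appear among the next `radius` steps of a learning sequence."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i, phase in enumerate(seq):
            for follower in seq[i + 1 : i + 1 + radius]:
                counts[phase][follower] += 1
    # Normalize raw counts into probabilities per phase.
    return {
        phase: {f: n / sum(followers.values()) for f, n in followers.items()}
        for phase, followers in counts.items()
    }

# Toy corpus of observed learning sequences (invented for illustration).
corpus = [
    ["information_gathering", "problem_definition", "data_collection",
     "data_analysis", "presentation", "discussion"],
    ["orientation", "information_gathering", "problem_definition",
     "data_collection", "data_analysis", "reflection"],
]
assert all(p in PHASES for seq in corpus for p in seq)

table = build_follower_table(corpus, radius=3)
print(table["problem_definition"])
# data_collection and data_analysis each ~0.33; presentation and reflection ~0.17
```

Each entry of the table is, in effect, the small per-object database described above: given a phase, it answers which phases plausibly come next within the chosen radius.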
This is the idea behind a very specific, tailor-made AI model for finding more appropriate connections and stringing together logical sequences of learning activities. However, two key problems remain. The first is that the problem of sense-making is circumvented more elegantly, but not solved. The second is that the effort and cost of such an AI model are not proportionate to the benefit to users. Some examples: data collection comes before data analysis; information gathering comes before problem definition, et cetera. These are fairly trivial and simple relations that are easy for us to understand, yet they remain outside the realm of AI. The logical sequencing of learning activities is too simple a problem to build complex (and complicating) solutions around it.
I hope I haven’t burst anyone’s bubble.
A useful application of AI certainly lies in developing proven recommendation systems, especially for applications such as large method libraries. To this end, we need quality labeling of data, similar to what Amazon or Netflix have developed: no Terra Incognita, and no hallucinating libraries.
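As a rough illustration of what such a recommendation layer over a method library might look like, here is a minimal user-based collaborative filter in Python. The ratings matrix, the teacher/method framing, and all numbers are invented; a production system in the Amazon/Netflix mold would rest on far richer quality labels.

```python
import numpy as np

# Toy matrix of quality ratings (1-5) that teachers gave to methods in
# a method library; 0 means "not yet rated". All values are invented.
ratings = np.array([
    [5, 4, 0, 1],   # teacher 0
    [4, 5, 1, 0],   # teacher 1
    [0, 1, 5, 4],   # teacher 2
    [1, 0, 4, 5],   # teacher 3
], dtype=float)

def similarity(a, b):
    """Cosine similarity over the methods both users have rated."""
    both = (a > 0) & (b > 0)
    if not both.any():
        return 0.0
    return float(a[both] @ b[both]
                 / (np.linalg.norm(a[both]) * np.linalg.norm(b[both])))

def recommend(user, top_k=1):
    """Score the user's unrated methods by similar users' ratings."""
    sims = np.array([similarity(ratings[user], r) for r in ratings])
    sims[user] = 0.0                   # ignore self-similarity
    scores = {}
    for m in range(ratings.shape[1]):
        if ratings[user, m] == 0:      # only methods not yet rated
            raters = ratings[:, m] > 0
            weight = sims[raters].sum()
            if weight > 0:
                scores[m] = (sims[raters] @ ratings[raters, m]) / weight
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(0))  # [2]: the method teacher 0 is most likely to value
```

The point is simply that a system like this recommends from methods that real users have already rated and vetted, which is exactly what keeps the library from hallucinating.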
Compared to the vast and almost infinite space of probability calculations, our analog brain works rather differently. In terms of the number of distinct learning activities, human learning processes appear rather modest. We count about eight learning stages (or phases) in Problem-Based Learning and about five stages in Design Thinking. However, all phases create an intrinsic, complex network of new and unique connections. They are anything but stochastic wanderings or rigid After-A-comes-B algorithms. Human learning activities form evolving, interactive networks. They create experiences and are alive, remaining in constant exchange. They are not an output of a computation.
For these reasons, AI seems to be the wrong kind of technology to model human learning processes. Perhaps quantum computing will one day come closer to representing the human paradigm.
Picture: Joana Kompa with DALL-E 2