More often than not, my co-founder and I feel like Bill Gates in the famous 1995 video in which he tries to explain the Internet to David Letterman. To the audience, the joke was on the strange new technology, not on the fact that our world was about to change forever. It is hard to explain our new paradigm, but I shall try.

A New Paradigm of Process-Based Social Learning

In a nutshell, we can model all formal human learning processes in algorithmic form: organizational learning processes, methods, hacks… no matter how complex, you name it. We empower teams to design all types of L&D by themselves. Put differently: every company in the world that needs to run workshops or any other L&D events is a potential client of ours. We call our concept Social Learning Design (SLD), which evokes collaborative and immersive learning experiences (LX).

It took us more than two years of research, backed by decades of experience, to develop a unique taxonomy that is both intuitively easy for users to understand and precise enough to describe organizational processes in great detail. What are the implications of being able to model virtually any process?

From an Investor’s Perspective

It goes something like this: You want to buy software for specialized onboarding processes? Well, you can model all of those processes with nextgen.lx. Want to replicate the same complex and costly change-management methods or leadership training programs that you find at KPMG, BCG or McKinsey? Well, you can model, share, modify and distribute all of these processes with nextgen.lx. You don’t have qualified experts to deliver the programs? Our platform provides mobile support for novice and intermediate facilitators, coaches and workshop moderators. SLD is disruptive because it makes exclusive learning processes, which were previously impossible to scale, accessible to many.

So we don’t do e-learning or personalized learning. We do social learning and team development. We do processes, not content. In the age of AI, the production of content has, in our view, become a secondary issue.

How Do We Start Developing a Pro-Social AI?

Now that users around the world have gained a good understanding of the basic workings of AI in a plethora of applications, I look forward to the future development of an AI for SLD/LX. But where should we start? We are in the people business.

This is why the first thing that comes to mind is ethics. Humans should not only have the final say on design, but also drive reasoning and social negotiation at every stage. Any AI that replaces human learning (or decisions about learning) deprives its users of their autonomy and self-organization. Therefore, the probability spaces that AI opens up must be explicitly transparent to users. When making recommendations, we need to distinguish between suggestions based on (a) ‘tried and tested’, verified training data and (b) experimentally generated data. Each recommendation needs to be labelled with the type of data it is based on.
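The distinction between (a) and (b) can be made concrete in code. The sketch below is a minimal illustration, not our actual implementation; the class names, labels and the example activity are hypothetical, chosen only to show how every suggestion could carry its provenance visibly to the user.

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    VERIFIED = "tried and tested"              # (a) backed by verified training data
    EXPERIMENTAL = "experimentally generated"  # (b) generated, not yet validated

@dataclass
class Recommendation:
    """A learning-design suggestion that always carries its data provenance."""
    activity: str
    provenance: Provenance

    def label(self) -> str:
        # The label is shown to users so the probability space stays transparent.
        return f"{self.activity} [{self.provenance.value}]"

rec = Recommendation("peer-feedback round", Provenance.EXPERIMENTAL)
print(rec.label())  # → peer-feedback round [experimentally generated]
```

The point of the design is that the provenance field is mandatory: a recommendation without a label simply cannot be constructed.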

Learning from Best Practice

There are already best-practice AI models on the market that work this way. For example, AlphaFold, developed by Alphabet’s subsidiary DeepMind, indicates the probability that a protein’s fold structure has been predicted correctly, labelling regions as ‘high confidence’ or ‘low confidence’. The latest model even predicts interactions between proteins, which is currently fueling a biochemical revolution. (https://deepmind.google/technologies/alphafold/) Graphic below: protein structure prediction with confidence score, DeepMind 2024

Unlike protein structures and their interactive behavior, which can be predicted with ever greater certainty, human learning processes are not predetermined. They are the result of negotiations and creative thinking, usually with several competing options to choose from. There is no inherent right or wrong. Instead, we have to experiment with these options based on well-reasoned assumptions. What I take from AlphaFold is the indication of probability and the nature of the probabilistic space. In the case of SLD, for example, it would not be ‘high confidence’ versus ‘low confidence’, but perhaps other parameters, such as (a) ‘congruence level with similar-type processes’, or (b) ‘generated deep-social impact scenario’.

Solution (b) would imply that the architecture of our processes, the sequence of learning activities, as well as the quality of social outcomes are measured and fed back into the training data. Since experimental data can be validated by real-world results, data eventually moves from (b) experimentally generated to (a) validated. This follows scientific protocol: hypothesize, test the hypothesis, verify and evaluate the results.
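That promotion step, from experimentally generated to validated, can be sketched as a simple rule. Everything here is illustrative: the status strings, the 0–1 outcome scale and the threshold are assumptions of mine, standing in for whatever metric actually measures the quality of social outcomes.

```python
from dataclasses import dataclass

@dataclass
class ProcessRecord:
    """A learning process plus the outcome measured after it was run."""
    design: str
    status: str           # "experimental" or "validated"
    outcome_score: float  # measured quality of social outcomes (hypothetical 0-1 scale)

def promote(record: ProcessRecord, threshold: float = 0.8) -> ProcessRecord:
    # Scientific protocol in miniature: an experimental design is tested in
    # the real world, and a strong measured outcome validates its data.
    if record.status == "experimental" and record.outcome_score >= threshold:
        record.status = "validated"
    return record

r = promote(ProcessRecord("retrospective workshop", "experimental", 0.91))
print(r.status)  # → validated
```

In practice the validation criterion would be negotiated by the teams themselves rather than a fixed threshold, in keeping with the conversational approach described below.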

Unlike on social networks, the underlying algorithms do not implicitly steer the conversation between users and nextgen.lx. As with AlphaFold, where scientists still have to make complex decisions independently, recommendations remain on the screen and open for discussion. Such an approach is true to the spirit of our craft. Diana Laurillard called her groundbreaking process model the ‘Conversational Framework’. In this sense, we offer a cultural tool for conversations about the structure of desired social learning processes and their outcomes.

There is a huge difference between an organizational culture based on manipulative, ‘correct’ recommendations and one that empowers open conversations about how an organization should evolve. The first works through deception, the latter through empowering ownership.

The second similarity with AlphaFold is that we also work with visualization. By visualizing processes, we can make faster and better decisions than by looking at text alone. In this respect, a purely text-based interface like ChatGPT’s is the poorest option. Unlike text, a visual language transcends cultural and social barriers, and our eyes can absorb information from graphics much faster than from text. The Cognitive Theory of Multimedia Learning has shown that a combination of text and standardized graphics works best, which is what we are developing at nextgen.lx. In addition, together with groups of expert learning designers, we have verified that plain LLMs are unsuited to learning design.

Another best-practice example is DeepL, a Germany-based text translation and optimization company. In each language, we deal with alternative options such as synonyms, different types of delivery (formal, informal or automatic), style (simple, business, academic or casual), as well as customization options (glossary). In this way, users are motivated to search for detailed recommendations that meet their needs. Google Translate, by the way, cannot compete. Advanced user choice remains the number one criterion for all professional design platforms.
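The principle of transparent user choice can be reduced to a small sketch: the system enumerates alternatives per dimension, and the user, not the system, selects among them. The option sets below are loosely modelled on the DeepL-style dimensions named above; the function name and values are illustrative assumptions, not an actual API.

```python
# Hypothetical option sets per design dimension; illustrative values only.
DESIGN_OPTIONS = {
    "delivery": ["formal", "informal", "automatic"],
    "style": ["simple", "business", "academic", "casual"],
    "glossary": ["custom terms enabled", "custom terms disabled"],
}

def list_choices(dimension: str) -> list:
    """Return every alternative for one dimension, so the user makes the
    final call instead of being handed a single 'correct' answer."""
    return DESIGN_OPTIONS.get(dimension, [])

print(list_choices("style"))  # → ['simple', 'business', 'academic', 'casual']
```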

Below: Screenshot of DeepL (author): It is all about indicating choices transparently.

Summary

This is why we need to develop a specific AI based on educational logic and our visual taxonomy. It would be counterproductive to simply plug ChatGPT into our product. As we develop, for example, multi-professional teams, we must also consider how teams at different levels, such as novice and advanced teams, can learn from each other. It is a completely different setup. This opens the prospect of a true SLD AI application. Prosocial AI is thus more than a recommendation system that says ‘This might be a good match for the user data we previously tracked about you’. An SLD AI is an advanced conversational, social tool that helps teams make more insightful and confident decisions. It doubles as a scientific tool that allows users to test their hypotheses and generate high-quality data.

Image below: Our process-based approach – Not too different from DNA Sequencing: https://www.nextgenlx.com/ Copyright 2024 by nextgenlx.

