How does AI determine social practice? Yesterday, I was talking to a mother who was deeply upset about the perceived unfair treatment of her son. The school essay of her son, whom I know personally and to whom I attribute a high level of intelligence, was graded significantly lower than that of a classmate who is known for doing his homework with AI. My first assumption was that the underlying format of the exam and its assessment was obviously geared towards the writing of ironed-out texts. In such a format, individuality and originality become disruptive factors, so the AI text wins hands down.
The information society outsources its epistemology to non-human systems
There are many tasks that Large Language Models (LLMs) cannot accomplish, such as understanding contexts and their semantics, reasoning, or distinguishing relevant from less relevant or irrelevant examples. AI can neither feel nor think. If the above-mentioned essay required an epistemological genesis, i.e. a documentation of its creation, the true linguistic struggle with the text, the self-deceiving classmate would look pretty clueless.
In contrast to problem-based learning (PBL), which generates new knowledge, AI is based on old knowledge, on trained data sets, featuring a remix of probabilities that purports to be able to predict relevant outcomes. LLMs are currently being developed in an unprecedented race by the digital overlords of Silicon Valley, who are sitting on bursting cash reserves.
On the one hand, their AI models are driving a frivolous exploitation of humanity’s cultural heritage (without, of course, compensating its intellectual owners); on the other, they are standardizing expectations of what human language, image and music horizons should look like. If AI were developed according to the standards of human-centred design, AI products would look rather different. At the moment, the models of Microsoft, Alphabet, Nvidia, Meta and many others are being imposed on us, whether we like it or not.
The colonization of futures
At a time when we are talking about postcolonial discourses, we completely overlook the fact that the next big colonial story, the colonization of our future through the standardization by LLMs, has long been underway. After the exploitation of the outer, physical world, the hunt has begun for the exploitation of the inner, emotional-cognitive world, and I would not be surprised if the latter is one day left in a similarly desolate state as our planetary ecosphere. Who cares, though? The seemingly God-given arguments of increasing efficiency and productivity justify pretty much anything. What happens to culture, society and ourselves in the process appears to be of secondary concern.
Where would OpenAI be today if Ilya Sutskever and Jan Leike were driving its development? As Leike posted on ‘X’: “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
I am less concerned about the partially great, partially creepy technological innovations that are presented to us on a daily basis than I am about the decoupling of cultural and technological development, since a strong culture and civil society can more readily correct maldevelopments.
There is no shortage of smokescreen throwers who wish to prevent this. When Elon Musk and thousands of other AI experts called for a moratorium on development in March 2023, they were secretly hoping for nothing more than a head start for their own companies. After all, nothing paralyses the technological development of competitors more effectively than publicly instrumentalized moral concerns.
A new unbridgeable chasm: the AI-autonomy threshold
I would call the new phenomenon of human disempowerment the autonomy threshold. It implies that people with a sufficient level of education are generally able to handle the new tools reasonably well, because they actually understand them as tools and toys and not as ‘thought amplifiers’ à la Gyro Gearloose or as surrogates for a lack of imagination. These elites, to which most of us belong, draw on a privileged educational socialization. They/we operate above the ‘AI-autonomy threshold’.
For everyone else who is unable to read, write and calculate sufficiently well – a large proportion of primary school children in Germany, according to the IGLU study – AI will become a lifelong crutch, with companies having a commercial interest in keeping their customers in a lifelong dependency. From travel planning to financial decisions, translations, online shopping, e-learning, project planning, restaurant selection, fitness and lifestyle advice, the automated formulation of emails and chat messages, job applications and even dating, individual and social life is being outsourced to AI systems. This means that people can largely dispense with acquiring equivalent competencies. All people need is an AI that tells them what to do next in a plausible and convincing way. Experts call this psychological flattery ‘alpha persuasion’.
The end result is an externally determined, optimized life plan with the greatest possible security, predictability and few surprises, at least none that are unpleasant. Even these are carefully dosed within profile-based context windows.
Distressing historicity, liberating historicity
Oh no, please no disturbing stories and narratives with subtitles here! Engaging with other people means engaging in relationship work, such as inviting others into our life projects, reaching out to them and accepting them in their otherness – finding humanity in connection.
There is no comfort in historicity. Historicity reveals itself in individual as well as collective stories of suffering and success, in contradictions, paradoxes, crises and existential uncertainties that do not conform to the weighting of an AI system. Historical life is unique: we are only children once, we only experience youth once, we only pass through the stages of adult life once.
Societies likewise go through unique, non-repeatable phases. Uniqueness can be influenced by our actions and to some extent assessed (Daniel Kahneman & Amos Tversky), but it can never be completely calculated. In Hegelian dialectics, historicity as an unfinishable process resists controllability, functionalization and finalization.
In this respect, I don’t think much of post-apocalyptic visions in which one or more AIs, as in the great ‘Matrix’ film series, actually control humanity completely. What I do see as an already realized variant, however, is the relative loss of freedom, in a Kantian sense, through the transfer of human self-determination to external systems: The more we give away, the more we lose. The more we lose, the less competent we become. The less competent we become, the more dependent we get.
At this point at the latest, it becomes clear that we find ourselves in a dilemma in a hyper-complex world: On the one hand, we need AI systems to better navigate the complexity of our world; on the other hand, we need to preserve our humanity by never subordinating people to systems. More than that, our needs and desires must guide and inform the AI systems. This turns out to be the ultimate challenge.
Historicity unfolds a fascinating power in its open-ended horizons of understanding, its lively immediacy and authenticity, which cannot be forced into the corset of functional formulas or bullet points. Our historicity holds unimagined emancipatory potential, as we get a chance to grasp the fragility of freedom. We become aware of human vulnerability and that there is no automatism, no inevitably progressing world spirit for creating a better future. On the contrary, freedom must be won anew with every generation, experienced, lived democratically (John Dewey) and reinvented. Freedom is more than just a Sunday speech.
About social glue and the new loneliness
As a school principal recently pointed out to me, there is an incredible amount of loneliness among younger people. Despite the latest in digital networking, from TikTok to WhatsApp and Instagram, many young people feel fairly lonely and isolated. Social media might bring people together, but it is not able to connect them permanently on a deeper emotional level.
In addition, dissent in social media is usually easier to experience than consensus, which complicates communication enormously. We constantly have to evaluate what to expect from expectations in the digital realm. We must always be prepared for rejection, for the termination of communication by others. Managing expectations about expectations costs a great deal of psychological strength.
Put simply, we have run out of social glue. Instead, human emotions are staged as media-produced micro-sensations. Social resonance comes in the format of wishful thinking. And while the all-seeing digital eye has demystified the world, surveying every last corner with drone cameras, even ecocide has been turned into a boring nuisance. The disintegration of social relationships, which began with the triumph of social media, is continuing in the age of AI systems.
If you don’t believe it, just google ‘AI Girlfriend’: the personalization of voice assistants, as is currently the case with ChatGPT’s GPT-4o, is catapulting anthropomorphism into high orbit. The youngest generation of digital natives is growing up under the illusion of talking to quasi-humans. The Turing Test sends its regards and, like Schrödinger’s cat, without basic trust in other people we feel half alive and half dead at the same time. Or very lonely.
Where genuine social resonance is no longer required, to paraphrase the German sociologist Hartmut Rosa, there is logically no need to develop the ability to relate. In the promise of rationalizing personal freedom of choice, the certainty of independence from others via digital resources is ascribed a higher value than the development of happy world-relationships with friends, communities and loved ones. Companies are capitalizing on a Hikikomori for all: the fear of analogue social encounters and society can be both amplified and appeased ad infinitum by corresponding products – a successful and perfidious business model.
Am I not awfully pessimistic?
I believe that I am critical of media, certainly not pessimistic, as we know from our research and experience how we can constructively design contemporary learning settings in such a way that human freedom, creativity and personal development can be given the space they deserve. I would therefore like to conclude with Hegel’s thought that freedom is the awareness of necessity. The necessity to endorse social constructivism and self-organization.
It means that in order to shape our history in a self-determined way, we need to accept that, on the one hand, we are dependent on technologies such as AI, quantum computing, CRISPR and many others, while, on the other, developers need to (a) openly assess the effects and risks of technologies at both the personal and the societal level, and (b) involve users and affected groups in the development of technological systems in a participatory manner.
We are a long way from such co-creation.
The dream of community-driven AI and decolonization
What we as educational designers are striving for are new, fear-free and inclusive learning and innovation spaces. The development of teams plays a central role in this. Anyone who has followed the latest developments, for example the presentations at the last Google I/O, quickly recognizes how a monopoly of a few tech companies is striving for global (data) dominance via AI. Everything is turning into AI, particularly services. There is no app, no database that is not linked to AI. This is countered by the idea of self-determined communities that decide for themselves which technologies they consider desirable to master their environment.
This morning, I stumbled across the enlightening podcast ‘How to bridge gaps’ with the Senegalese philosopher Souleymane Bachir Diagne. Diagne very succinctly points out the shortcomings of the non-inclusive universalism of the Enlightenment from an African perspective. Kant and Hegel followed the maxim that all people were equal – unless they had a different skin colour. Kant’s noble pure and practical reason contrasts with his racist geography and anthropology that cannot be swept under the carpet.
An exclusive technological universalism tells people: “Let us develop technology according to our design and our rules, offering a superior approach and resources that local and regional communities cannot even muster. Leave the organization of the world to us.” This is the voice of digital neocolonialism.
Diagne complements Western universalism with the African concept of Ubuntu. ‘Ubuntu’ is a difficult term to translate and connotes community, social connectedness, brotherhood and sisterhood, i.e. we need to be connected not only with our minds, but also with our hearts if sustainable communities are to succeed. Such an approach is fundamental to the development of technology education. To me, it includes the idea of ownership, of technological sovereignty, to the extent that we connect through what we create together. People are proud of what they create through their own efforts. It is about strengthening local and regional value creation as well as networking into global communities in the technology sector, as a kind of ‘glocal’ (local plus global) social hub.
The question we are asking ourselves, alongside team development, is therefore how we can build such new, community-oriented, local social hubs sustainably and what the tools can look like that encourage and promote genuine, communal co-creation.
