Why are computers still so dull? Where are the thinking machines we have been promised?

Above: Scene from the movie ‘Chappie’ (2015), directed by Neill Blomkamp

One of the most famous artificial intelligence (AI) entities in modern popular culture is arguably the HAL 9000 computer in the classic ‘2001: A Space Odyssey’; the insider joke being that shifting each letter one step forward in the alphabet turns ‘HAL’ into ‘IBM’. While HAL was creepy and evil, viciously attempting to kill the spaceship’s crew, we have in the meantime happily accepted the first wave of AI without much suspicion. Apple’s SIRI, Microsoft’s CORTANA and Facebook’s ‘M’ (the latter still in development, but watch out for it) represent the latest generation of commercialized AI in the form of friendly personal assistants. Who wouldn’t like to have a digital servant at their disposal?
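
The letter shift is easy to verify; a minimal, purely illustrative Python sketch:

```python
def shift_letters(word, offset=1):
    # Shift each uppercase letter forward in the alphabet, wrapping Z to A.
    return "".join(chr((ord(c) - ord("A") + offset) % 26 + ord("A")) for c in word)

print(shift_letters("HAL"))  # -> IBM
```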

CORTANA, for example, is courteous and friendly and diligently sends complex user-profiling data back to her master, in this case Microsoft. The other models mentioned are no different in their data-delivering loyalty. Commercial AI comes, obviously, with a programmed, built-in agenda: to make a profit for its owners. The only convincing way to create a truly private assistant would be the development of local AI. Speech recognition and machine learning have made tremendous leaps in usability over the past decade. But why is the humble PC sitting on my desk still as uninspiring as a rock? Why don’t I believe anything that SIRI says? My personal and disappointing experience with AI came in the form of a car navigation system that sent me in continuous loops around the city, with the effect that I missed my flight. Then again, how do we define the ambiguous term ‘intelligence’?

A well-known procedure to test ‘machine intelligence’ is the Turing Test, which has inspired generations of science fiction writers. To dispel a common myth: the Turing Test was not designed to prove whether computers can or cannot think. It was designed to instruct a computer to lie (we may also say ‘to fake’ or ‘to make-believe’) in such a manner that a human dialogue partner cannot tell whether the conversation partner is human or machine. The Turing Test is a test of performance, not a test to prove if or how machines are capable of mental states.

The claim that in the very near future computers will be capable of consciousness is one of the most fascinating public debates. When will we become obsolete? When will the Terminator knock at our door? Looking at my home computer, probably not anytime soon. Followers of ‘Transhumanism’ (‘h+’ in short) and advocates of strong AI (the label for the idea of emerging self-conscious machines), such as one of their most prominent speakers, Ray Kurzweil, cite two key arguments for why the end of humanity as we know it is inescapable and nigh. Stephen Hawking believes in the inevitable advent of strong AI as well.

Pro Singularity: The Complexity-Threshold Argument and the Reverse-Engineering Argument

Firstly, it is argued that the performance of massive parallel computing increases exponentially. This is why, at some stage, consciousness may spring into existence once a certain threshold of complexity is reached. The analogy being drawn is that a single neuron cannot create consciousness, but billions of neurons can. Secondly, by reverse-engineering the human brain, software can simulate precisely the same functions as neuronal networks. It is therefore anticipated to be only a matter of time until ‘singularity’, the advent of machine consciousness, arrives. If it does, so transhumanists conclude, biological intelligence becomes obsolete and we will eventually be replaced by the ‘next big thing’ of evolution, the ‘h+’. So much for cheerful prospects.

The Simulation-Reality Argument

One of the most ardent critics of this claim is Yale computer scientist David Gelernter. For Gelernter, to start with, simulations are not realities. We may, for example, simulate the process of photosynthesis in a software program while de facto no real photosynthesis has taken place. Computers, according to Gelernter, are simply made out of the wrong stuff. No matter how sophisticated or complex a software simulation of a process is, it cannot transform actual carbon dioxide into sugar and oxygen. We can simulate the weather, but nobody gets wet. We can simulate the brain, but no mind emerges. The underlying argument states that digital, quantum and biological modes of computation encompass fundamentally different types of causation and therefore cannot be substituted for one another. Consciousness, so Gelernter concludes, is an emergent biological property of the brain.

Above: Big Brother is watching you. In Stanley Kubrick’s ‘2001: A Space Odyssey’ (1968) this was the legendary HAL 9000.

The Mind-Brain Unity Argument

Brains develop organically over an entire lifetime. Our minds, as emergent properties of the brain, are intrinsically linked to the unique structure of neural pathways. The brain is not simply ‘hardware’; it is the physical embodiment of life-long learning processes. This is why we cannot ‘upload’ a mind into a computer: we cannot separate the mind from its brain. For the same reason we cannot run several minds on the same brain, like we run several programs on a single computer. There is only one mind per brain and it is not portable.

The Psychological Goal-Setting Argument (Ajzen-Vygotsky Hypothesis)

Besides the obvious physical differences between brains and computers, the cognitive differences could not be greater. AI developer Stephen Wolfram argues that the ability to set goals is an intrinsically human ability. Software can only execute those objectives that it was programmed and designed to pursue. An AI cannot meaningfully set goals for itself or others. The reason for this, according to Wolfram, is that goals are defined by our particulars: our particular biology, our particular psychology and our particular cultural history. These are domains that machines have no access to or understanding of. One could also argue in reverse: because human life develops and grows within social scaffolding (a concept developed by psychologist Lev Vygotsky, the founder of a theory of human cultural and bio-social development), deeply embedded in semantics, it is experienced as meaningful, which is a necessary prior condition to define goals and purpose. This would be the psychological extension to Wolfram’s argument.

Above: The lovable Japanese service-robot Pepper recognizes a person’s emotional states and is programmed to be kind, to dance and to entertain. Is the idea of AI-driven robots as sweet, helpful assistants necessarily bad? It is easy to see that the idea could be reversed (imagine military robots), giving weight to Isaac Asimov’s ‘Three Laws of Robotics’: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Setting goals also depends on a person’s attitudes and underlying subjective norms in order to form intentions. If we expected machines to set goals, they would not only be required to understand socially embedded semantics, but also to develop attitudes and a subjective model of desirable outcomes. This requirement has been extensively researched in Icek Ajzen’s Theory of Planned Behavior (TPB). We may coin the hypothesized inability of an AI to set meaningful goals the ‘Ajzen-Vygotsky hypothesis‘. The bar is set even higher when we consider not only individual goal-setting, which could be arbitrary, but the ability to engage in consensual and cooperative goal-setting.
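
As a rough and much-simplified rendering of what ‘forming intentions’ means in the TPB: behavioral intention (BI) is commonly modeled as a weighted function of the attitude toward the behavior (A_B), the subjective norm (SN) and perceived behavioral control (PBC), with the weights estimated empirically for each behavior and population:

$$ BI \approx w_1 \, A_B + w_2 \, SN + w_3 \, PBC $$

A machine that possesses neither attitudes nor subjective norms has no non-arbitrary way of supplying the right-hand side of this equation.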

Arguing for Weak AI instead

What machines unequivocally do get better at is pattern recognition, such as the ability to analyze our habits (e.g., which types of products or restaurants we prefer), speech recognition or the reading of emotional states, for example via webcam facial analysis or the heart-rate measurements of fitness wristbands. AI is getting better at assembling and updating profiles of us and at responding to profile changes accordingly, which is a novel, interactive quality of modern IT.
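
To make ‘pattern recognition’ concrete, here is a deliberately minimal sketch of profile matching as simple similarity between feature vectors; all feature names and numbers are invented for illustration:

```python
import math

# Hypothetical habit profiles: (visits_per_week, avg_spend_usd, late_night_ratio)
profiles = {
    "fast-food fan":   (5.0,  8.0, 0.6),
    "fine-dining fan": (1.0, 60.0, 0.1),
}

def classify(user):
    # Return the label of the nearest stored profile (Euclidean distance).
    return min(profiles, key=lambda label: math.dist(user, profiles[label]))

print(classify((4.0, 10.0, 0.5)))  # -> fast-food fan
```

Real systems normalize their features and use far richer models, but the underlying principle of matching a user against stored profiles is the same.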

In our role as eager social network users, we continuously feed AI the required raw material: precious user data. Higher-level interaction based on refined profiles can be very useful. AI can, for example, assist us via a single voice command, rendering the use of multiple applications obsolete. AI can manage applications for us in the background while we focus on the task at hand. On the darker side, AI may compare our profile and actions to those of others for strategic purposes, without our knowledge and consent, which represents a more dystopian possibility (or already established NSA practice).

Machine Learning is not an Easy Task when there is Little Data Available and Environments are Complex – Another Argument for Weak AI

Psychologist Gary Marcus looks at the trustworthiness of AI for real-world applications. The problem with machine learning, according to Marcus, is that AI does not do well when relying on limited data sets or in complex situations within stochastic environments. Consider self-driving cars: the driving styles of drivers in Shanghai, Stuttgart, New York, Singapore, Rome or Calcutta are entirely different, making a standardized AI driving algorithm not only impractical, but potentially life-endangering.

Above: Many car manufacturers are currently working on self-driving cars. Here, Mercedes’ concept study, the F 015 ‘Luxury in Motion’. Non-car companies such as Google and Apple have joined the race.

Machine learning usually involves several data sets: a training set, a test set and a (real-world) task set. Real-world scenarios do not provide conveniently pre-structured situations and data (such as, e.g., in chess or for recommendation systems); they consist of an almost infinite number of situations. What happens when it snows, or in heavy rain, or when an unexpected obstacle appears that has not been captured in the system’s database before? We don’t want a cleaning robot to bang against our furniture too often. Trusting a robot to take care of a child is a recipe for disaster.
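
The gap between curated data sets and an open world can be illustrated with a toy sketch (all numbers made up): a classifier that passes its test set perfectly can still fail as soon as reality leaves the training distribution, for instance when snow makes an obstacle look as bright as clear road:

```python
# Toy 'obstacle detector': the input is a single brightness reading from a camera.
train = [(0.9, "road"), (0.8, "road"), (0.2, "obstacle"), (0.1, "obstacle")]
test  = [(0.85, "road"), (0.15, "obstacle")]

# Naive fit: put the decision threshold halfway between the class means.
road_mean = sum(x for x, label in train if label == "road") / 2      # 0.85
obst_mean = sum(x for x, label in train if label == "obstacle") / 2  # 0.15
threshold = (road_mean + obst_mean) / 2                              # 0.5

def predict(x):
    return "road" if x > threshold else "obstacle"

print(all(predict(x) == label for x, label in test))  # True: test set passed
# Snow reflects light: a snow-covered obstacle looks as bright as clear road.
print(predict(0.95))  # -> road (a potentially fatal misclassification)
```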

Marcus suggests developing cognitive psychological models for AI (e.g., applying a variety of ways to recognize objects, not just a single algorithm) to improve the accuracy of applied AI in specific contexts: if it looks like a dog, barks like a dog and behaves like a dog, the probability is high that we are indeed dealing with a dog and not a hyena or a goat, whereas a single low-resolution camera input may easily deceive the AI. If programmers want AI to learn efficiently from sparse data sets, so Marcus argues, they should study how children learn: highly efficiently and without much prior knowledge.
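
Under the assumption of several independent cue detectors (all of them invented placeholders here), Marcus’s multi-cue idea can be sketched as a simple majority vote, which a single deceived cue cannot overturn:

```python
from collections import Counter

# Invented placeholder cues; real detectors would be learned models.
def looks_like_dog(obs):
    return obs.get("shape") == "dog"

def sounds_like_dog(obs):
    return obs.get("sound") == "bark"

def moves_like_dog(obs):
    return obs.get("gait") == "trot"

CUES = (looks_like_dog, sounds_like_dog, moves_like_dog)

def is_dog(obs):
    # Majority vote across independent cues is harder to fool than any single cue.
    votes = Counter(cue(obs) for cue in CUES)
    return votes[True] > votes[False]

# A blurry frame fools the shape cue alone, but it is outvoted by sound and gait:
print(is_dog({"shape": "goat", "sound": "bark", "gait": "trot"}))  # -> True
```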

Despite what some people think, AI today is nowhere near what science fiction suggests. For now, we had better not base missile-guidance systems on Deep Learning algorithms.

Above: Systems can be deterministic, but non-computable

The Non-Computational Pattern Argument

An intriguing argument against strong AI was formulated by Sir Roger Penrose, and it can be reformulated in the context of mind-environment interaction. Penrose demonstrates in his lecture “Consciousness and the foundations of physics” how a system can be fully deterministic, ruled by the logic of cause and effect, and still be non-computational. It is possible to define a set of simple mathematical rules for the creation of intersecting polyominoes whose sequence is output as a unique, non-repeating and unpredictable pattern. There is no algorithm, so Penrose argues, that can describe the evolving pattern.

My immediate question was how this thought-experiment is any different from how we learn in the real world. Each new situation creates unique neuronal pathways in our brain. Since we assume, in addition, upward as well as downward causation between brain (as the biological organ) and mind (the action executed by the organ), cognitive structures evolve (a) non-repetitively and (b) in a self-restructuring manner.

Memories form by weaving subjective and objective information into the fabric of an autobiographical narrative. To claim, counter-factually, that narratives are still somehow ‘computed’ by an infinite number of interconnected internal and external processes misses the point that there is no single algorithm, or program, that can account for the genesis of mind. This counter-claim is dismissed by infinite regress.

The ‘Emotional Intelligence’ and Body Argument

Ray Kurzweil is well aware that ‘intelligence’ cannot evolve in abstraction. This is why he emphasizes the importance of ’emotional intelligence’ for strong AI, to which there are at least two objections. The first objection is that there cannot be emotions when there is no physical body to evoke them from, only software. Computational cognition lacks semantics without the information provided by an embedded, existential ontology, which implies existential vulnerability. The second objection is that the concept of emotional intelligence itself is a good example of deeply flawed pop-psychology. There is no compelling evidence in the field of psychology that emotional intelligence exists and could be validated as a scientific concept.

Above: The movie HER (2013), directed by Spike Jonze, explores the human need for companionship. The main protagonist, Theodore (Joaquin Phoenix), falls in love with an AI, Samantha, who eventually outgrows the relationship with her human partner. As a body-less entity, she develops the ability to establish loving relationships with hundreds of users simultaneously and, after an upgrade, a liking for other operating systems which are more similar to herself.

The Multimodal Argument – The Flexibility of Mind

What scientists seem to ignore in the debate about AI is that the human mind can switch between entirely different mental modes, some of which are likely to be more computational (like calculating costs and benefits) and some of which appear to be less computational, or not computational at all (such as reflecting on the meaning and quality of experiences and the value of specific goals). The human mind can effortlessly switch between subjective, objective and inter-subjective modes of operation and perspectives. We can see things from the inside out or from the outside in. In mental simulation, we can reverse assumptions of causation, which is our reality check. As a result of this flexibility, we have developed a plethora of mind-states involving imagination, heuristics, the ability to hold and detect false beliefs or to distinguish between illusion and true states. It is because we make mistakes, and because we experience how painful these mistakes can be, that mental self-monitoring and forethought derive meaning. The multimodal argument rests on the assumption that an entity is capable of conscious experience, bringing us to the qualia argument.

The Qualia Argument

In the Philosophy of Mind, qualia are conceptualized as the subjective qualities of experiencing consciousness. We could argue with Daniel Kahneman that this includes forming memories based on those experiences (the experiencing versus the remembering self). In the Mary’s Room thought-experiment, philosopher Frank Jackson demonstrates the non-physical properties of mental states, pointing to what philosopher David Chalmers calls the ‘hard problem of consciousness‘: our inability to explain how and why we have qualia.

The thought experiment is as follows: Mary lives her entire life in a room devoid of color—she has never directly experienced color in her entire life, though she is capable of it. Through black-and-white books and other media, she is educated on neuroscience to the point where she becomes an expert on the subject. Mary learns everything there is to know about the perception of color in the brain, as well as the physical facts about how light works in order to create the different color wavelengths. It can be said that Mary is aware of all physical facts about color and color perception.

After Mary’s studies on color perception in the brain are complete, she exits the room and experiences, for the very first time, direct color perception. She sees the color red for the very first time and learns something new about it: namely, what red looks like.

Jackson concluded that if physicalism were true, Mary ought to have gained complete knowledge about color perception by examining the physical world. But since she learns something new when she leaves the room, physicalism must be false.

An AI may, in the same manner as Mary, collect information about human interaction and emotions by learning how to read patterns based on programmed algorithms, but it will never be able to experience them. This can be considered a philosophical argument against strong AI (or in support of weak AI that assists us by synthesizing and applying useful information). Linking the multimodal argument to the qualia argument: if the realization of qualia, as a prerequisite, cannot be achieved by machine learning, then subsequent multimodal mental operations cannot be performed by AI either.

Anthropomorphized Technology: AI, Gender and Social Attitudes

The question posed in a title by science fiction author Philip K. Dick, ‘Do Androids Dream of Electric Sheep?’, could be answered, from what has been elaborated, in many ways: (a) yes, if androids have been programmed to do so; (b) not really, but Turing-wise their dreams seem convincingly real; or (c) no, because machines are fundamentally incapable of sentience and self-cognition.

As a big fan of thought-experiments, I thoroughly enjoyed movies such as ‘Chappie‘, ‘HER‘ or ‘Ex Machina’. A common theme running through all of these stories is the inability of an AI to truly connect to a human understanding of life. Another dominant theme, rather sadly, is the sexual and erotic exploitation of AI by men for the fulfillment of their fantasies (not elaborating on Japanese robot girls here, which is a cultural chapter by itself). It is unlikely that intelligent AI will appear anytime soon when all that people can think of is satisfying their most primal urges by creating digital sex slaves, or creating criminal collaborators as elaborated in the movie ‘Chappie’ (2015).

Above: The sexualization of AI to pass the Turing test is a theme in the movie ‘Ex Machina’ (2015) by Alex Garland. Another, more humorous example would be the figure of Gigolo Joe, played by Jude Law, a male prostitute ‘Mecha’ (robot) programmed with the ability to mimic love in Spielberg’s ‘A.I.’ (2001).

The two most commonly quoted arguments for why most AI are formatted as female are that (a) lone male programmers who work on AI create de facto virtual girlfriends as a compensatory reaction to their social deprivation and (b) men and women alike find a female AI less intimidating and more pleasant to interact with than a male AI. It is revealing how we anthropomorphize technology (as we have, e.g., anthropomorphized Gods), which is worthy of a separate inquiry.

Beyond the obsession with creating artificial intelligence, how about creating artificial kindness, artificial respect, artificial understanding or artificial empathy? We could distribute these qualities among those humans who dearly lack them.

Summary

As weak AI continues to develop, prospects for the advent of strong AI remain in the realm of science fiction. There are compelling arguments that singularity will not emerge anytime soon and may, in fact, never materialize. One of the key arguments is that biological, digital and quantum systems are based on fundamentally different types of causation. They are not identical and require technological translation. AI can be understood, in this light, as the translation between human consciousness and information processing in the digital and the quantum domain in order to serve human needs and goals.

Digital assistants and service robots have already become useful and self-optimizing extensions of our social life. As with all technology, AI is subject to potential abuse, since the ethics of goal-setting, for better or worse, still remains a unique quality of fallible programmers within the open domain of human imagination.

The Advent of Online Education (Part II)

The following entry has been inspired by my participation in ‘Effective Online Tutoring’ with Oxford University. Part I explored current trends in online education and its pedagogical implications. The second part explores the psychology behind online learning.

A PDF-version of this post is available at The Advent of Online Education, Joana Stella Kompa – Part 2

1. An Introduction to Gilly Salmon’s 5-Stage Model

2. A Note on PBLonline: E-moderating versus e-tutoring

3. Typical Anxieties and Psychological Needs of Virtual Students

4. The Short and Beautiful Life of Virtual Learning Communities: A Global Outlook


Our Minds Keep On Evolving

When René Descartes laid down the foundation of mind-body dualism in the second and sixth of his ‘Meditations on First Philosophy’ (Cottingham et al., 1985), he could not possibly have anticipated that the human mind would evolve a significant stretch further. Not only is the mind now perceived as a non-dualist, emergent and supervening quality of the body (Davidson, 1970), but, even more puzzlingly, the representation of identity is embedded in data, big and small. One of these additional ‘layers of identity representation’ is made of ‘Big Data’ (Pentland, 2012): the electronic trails we leave behind as a passive narrative of our lives as we move through the world, such as our credit-card transactions or any records of digital subscriptions and economic expenditure. The second layer of identity representation, our active digital participation, comprises constructs such as emails, SMS messages, social network postings or blogs; our digital alter ego. Philosophers Andy Clark and David Chalmers also speak of the ‘extended mind’, which encompasses smartphones, tablets, personal computers and their immediate access to networked knowledge bases. Clark and Chalmers coined the term ‘active externalism’ for the phenomenon whereby the environment drives cognitive processes (Clark & Chalmers, 1998). The digital environment has become a useful extension of our natural mind.

The cultural divide between indigenous populations and traditional life-worlds on the one hand and developed countries on the other is growing, and so is the global digital divide. This delicate notion, combined with the concept of the additional symbolic embeddedness of mind, plays a key role in understanding the creation of online learning platforms. It becomes obvious that high levels of (media) literacy and pro-social, communicative competence form prerequisites to successfully participate in this new world.

This two-fold dilemma of literacy requirements and given limitations of social life-world backgrounds confronts online tutors on many levels, in particular in courses inviting an international and multi-cultural audience of students. In the following we shall have a closer look at typical issues that global learners face on the individual and collective level.

1. An Introduction to Gilly Salmon’s 5-Stage Model of Online Learning

Gilly Salmon’s operational 5-Stage Model of Online Learning (Salmon, 2011, 2014) has become the gold standard for designing online courses, for good reasons. Firstly, her model recognizes the logical difficulties and the progressive familiarity that learners experience in an online learning environment. Secondly, each preceding stage constitutes a prerequisite to engage successfully on the next level. An extended summary goes as follows:

Stage 1 ACCESS AND MOTIVATION. The student can access the online learning platform and is supported to gain confidence in navigating and managing the virtual learning environment (VLE). To offer an encouraging, constructive and reassuring personal welcome is important since many beginners experience considerable anxieties in the unfamiliar territory. Corresponding technical student support is of the essence.

Stage 2 ONLINE SOCIALISATION. The student establishes a digital identity by setting up a public profile and introduces him- or herself to study colleagues. The student engages in first social online activities (‘e-tivities’) and information exchanges hosted by the e-moderator, which include an introduction to ‘house rules’ and netiquette. The e-moderator weaves participants together in the ‘Welcome Forum’ in order to evoke mutual social interest among students during introductions. Building an online community continues throughout the course.

Stage 3 INFORMATION EXCHANGE. Students actively start on their studies while the tutor scaffolds discussions and social interaction. Helpful tools are learning contracts and general agreements among the study group, for example to publish a minimum number of weekly contributions and to respond in a structured manner to colleagues’ postings. Students also implicitly learn time and resource management to support the ongoing dialogue. Information exchange can be facilitated both formally (such as in official discussion forums) and informally (such as in a ‘Common Room’ or extended social networks). This is the time for early first assignments and feedback.

Stage 4 KNOWLEDGE CONSTRUCTION. More complex tasks are introduced. Students become contributors and authors in the collaborative creation of new knowledge. The moderator assumes the role of an assisting ‘guide on the side’ rather than a ‘sage on the stage’ and facilitates discussions and activities. The moderator ensures that discussions don’t run off-topic, that discourse is not dominated by a few and that the learning process remains enjoyable, lively and insightful to all. Students learn to construct new knowledge collaboratively while the tutor takes a step back and monitors rather than continuously intervenes.

Stage 5 DEVELOPMENT. Students develop a virtual community in which they support each other mutually and gain increasing autonomy as strong, self-directed learners. Students also critically assess their newly gained competences, the cross-contextual application of developed solutions and the roles that they have assumed during the learning process.

Salmon’s model appears to be a robust and valid theoretical model for storyboarding online courses. She also wrote a very useful paper on typical problems occurring in the described stages and their solutions, in the online publication ‘80:20 for E-Moderators’ (Salmon, 2006, p. 145-153). Most courses aim to keep stages 1-3 as short and effective as possible in order to maximize the time for knowledge construction and students’ development.

2. A Note on PBLonline

Noteworthy for educators is the compatibility of Salmon’s functional workflow with Problem-Based Learning (PBL) pedagogy, which follows a similar path of social engagement, allocation of resources, construction of new knowledge, solutions development and, finally, mutual assessment. A comparative graphic is enclosed below.

[Comparative graphic: the PBL workflow mapped onto Gilly Salmon’s 5-Stage Model]

Salmon emphasizes the constructivist philosophy (Blais, 1988) underlying the 5-Stage Model, based on the premise that learners actively construct authentic mental models of the task and challenge at hand. PBL shares this conceptual approach. However, Salmon’s e-moderating and PBLonline tutoring appear to address different dimensions. E-moderating as proposed by Gilly Salmon seems to refer to the logical, functional development of the online learning experience (familiarization, information exchange, socialization and knowledge construction), whereas e-tutoring refers to ensuring a high-quality mental structure of students’ dialogue. The quality of mental structure encompasses internal criteria such as:

  • the specificity with which a student responds to someone else’s arguments
  • the anchoring of key-arguments in references and research
  • the degree of critical thinking
  • the ability to apply reasoning in context
  • the ability to actively explain and evaluate concepts, not only copying and reformulating them
  • the sensitivity towards self- and group biases
  • the ability to question one’s own premises and to be able to draw logical conclusions from well-argued prior positions, and finally
  • the degree of meta-cognitive reasoning, that is, the ability to justify the validity of an argument with good reasons (‘Why is this a good argument?’)

The tutor-based pedagogy of PBL is well suited to online adaptation since new knowledge is created in small, structured teams. Cheaney and Ingebritsen (2005) point out the significant changes that take place in the translation of face-to-face PBL to PBLonline. They note a growing preference for synchronous over asynchronous communication in PBLonline as the course progresses. Savin-Baden (2006) argues that PBLonline is necessarily different from its face-to-face version: the type of dialogue and the means of giving and receiving information have changed, and so has the authenticity of the problem itself, while the authorship of contributions differs greatly from face-to-face environments (Savin-Baden, 2006, p. 13). The topic of redesigning PBL for online participation is perhaps deserving of a separate investigation. For now we can conclude that the paramount aims of PBL, such as the development of higher cognitive and social skills in the context of typical real-world problems, are in line with the overarching structure of Salmon’s model.

3. Typical Anxieties and Psychological Needs of Virtual Students

Particularly at the beginning of a course, many participants face a number of anxieties that negatively affect their learning and socialization. In the following I have listed some of the most commonly encountered psychological obstacles:

Readiness Anxiety is the fear of not being able to cope with the course before it even starts. Possible solutions are to prepare students by assisting them with the setting up of software, sorting out login and navigation procedures or making sure that students can order required textbooks on time. A ‘Student Readiness Assistance’ needs to be prepared and offered ahead of the official program start.

Technophobia is the fear of not being able to handle the basic technologies required to communicate, or the worry of handling technological resources insufficiently. Solutions are, for example, the availability of an IT helpdesk, online videos demonstrating the use of the VLE, or online brochures with easy-to-follow step-by-step instructions. Different students might have different preferences when choosing assisting media. As the number of digital natives increases, technophobia might be on the overall decline.

Publishing Anxiety is the natural shyness of posting online, based on a general lack of self-esteem (“My contributions are not good enough”) or the fear of unknown negative consequences of online exposure. Shy and silent students require encouragement and need to be reminded that they operate in a safe, risk-free and highly supportive environment. Weaving quiet students into the communicative fabric of other students is a skill exercised by the e-moderator (Salmon, 2006).

Cognitive Overload Anxiety expresses the unpleasant perception of not being able to cope with ongoing tasks on a multilevel platform. It is important to assist students in planning external commitments ahead of time and to help with time-management issues once they arise.

Social isolation and loneliness: Since online learning is a more solitary activity, the feeling of being socially isolated and disconnected from others is not uncommon. Not fitting into an existing group or not feeling fully recognized as an individual may worsen such a depressed outlook. Unlike initial anxieties (which may be more easily resolved), the feeling of isolation and loneliness can carry on throughout a course. The emotional need to belong to a group or to connect to a ‘study buddy’ is too often neglected and might turn online studies into a lonely and even sad experience. Social weaving by the e-moderator during introductions, or meeting study colleagues in more familiar social networks ‘outside’ the official VLE, may facilitate social bonding and exchange.

Cognitive Space Anxiety expresses the underlying fear of either not covering enough concepts in a program (the scope is too narrow) or covering too much ground (the scope is too wide). Such anxieties translate into students viewing a program as too easy or too demanding. Publishing ‘Weekly Study Notes’ as a resource for all students might dispel such perceptions. Weekly study notes should give a brief but comprehensive overview of the most typical and significant concepts that have been developed in one’s field of expertise and that are about to be investigated. Allowing for a critical view of theories in the light of actual evidence and context might further open a more sober perspective on the validity of theoretical models. Weekly study notes furthermore level out the tricky issue of students joining in with greatly varying levels of prior knowledge. Professional scholarly discourse should moderate students’ individual schemata and allow for more balanced subsequent discussions. Leveling the playing field for students via study notes (by covering basic conceptual knowledge at the beginning of a learning unit) might counter the often criticized weakness of PBL, namely that students lack the depth of specialized knowledge (Lee & Kwan, 2014). Course designers have to decide on the depth and breadth of the syllabus at any given stage: when studies turn into explorations and when they go into detail.

4. The Short and Beautiful Life of Online Learning Communities

From my personal experience of participating in online courses at Oxford and Liverpool University, most online groups start naturally with minor hiccups and the usual smaller confusions, just like in any face-to-face class. However, by the end of each unit it is hard for everybody to say goodbye. We have grown together. Emotional and meaningful personal bonds that form during studies are the rule, not the exception, and tutors have the privilege to witness the magic of social crystallization. Online learning works, and it works well. Online education teaches us that we can form meaningful and productive relationships on a truly grand scale.

By the end of this century the UN expects the world population to grow to around 11 billion people, of whom roughly 4 billion will live in Africa, 5 billion in Asia and 1 billion each in the Americas and Europe (Rosling, 2014). In order to meet the enormous demand for training and education of people from all over the world, online education appears to hold a central key to our global future.

References

Blais, D. M. (1988). Constructivism: A theoretical evolution in teaching. Journal of Developmental Education, 11(3), 2-7.

Cheaney, J. and Ingebritsen, T. S. (2005). Problem-based Learning in an Online Course: A case study. The International Review of Open and Distance Learning. Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/267/433

Clark, A., and Chalmers, D. J. (1998). The extended mind. Analysis 58: 7-19. Retrieved from: http://consc.net/papers/extended.html

Cottingham, J., Stoothoff, R., Murdoch, D. (1985). The Philosophical Writings of Descartes, Vol. I. Cambridge University Press.

Davidson, D. (1970). Mental Events. In: Foster, L. & Swanson, J., eds., Experience and Theory, p. 79-101. Humanities Press

Pentland, A. (2012). Reinventing Society in the Wake of Big Data. Retrieved from: http://www.edge.org/conversation/reinventing-society-in-the-wake-of-big-data

Rosling, H. (2014). Don’t Panic: The Truth About Population, BBC 2 Documentary. Retrieved from: http://www.youtube.com/watch?v=CQWoeT2jXSo

Salmon, G. (2011). E-moderating: The key to teaching and learning online (3rd ed.). New York: Routledge.

Salmon, G. (2014). The 5-Stage Model. Retrieved from: http://www.gillysalmon.com/five-stage-model.html

Salmon, G. (2006). 80:20 for E-Moderators. In: Mac Labhrainn, I., McDonald Legg, C., Schneckenberg, D., Wildt, J. (eds.), The Challenge of eCompetence in Academic Staff Development. Galway: CELT. Retrieved from: http://www.ecompetence.info/uploads/media/ch16.pdf

Lee, K. Y. and Kwan, C. Y. (2014). PBL: What is it? “The Use of Problem-Based Learning in Medical Education”. McMaster University. Retrieved from: http://fhs.mcmaster.ca/mdprog/pbl_whatis.html

Savin-Baden, M. (2006). ‘The Challenge of Using Problem-based Learning Online’. In: Savin-Baden, M. & Wilkie, K. (eds.), Problem-based Learning Online. Maidenhead: McGraw Hill, 3-13. Retrieved from: https://www.mcgraw-hill.co.uk/openup/chapters/0335220061.pdf