Commentary

Towards social generative AI for education: theory, practices and ethics

Pages 159-167 | Received 12 Jun 2023, Accepted 16 Sep 2023, Published online: 25 Oct 2023

ABSTRACT

This opinion paper explores educational interactions involving humans and artificial intelligences not as sequences of prompts and responses, but as a social process of conversation and exploration. In this conception, learners continually converse with AI language models and other human learners within a dynamic computational medium of internet tools and resources. Learning happens when this distributed human-AI system sets goals, builds meaning from data, consolidates understanding, reconciles differences, and transfers knowledge to new domains. Building social generative AI for education will require development of powerful AI systems that can converse with each other as well as humans, construct external representations such as knowledge maps, access and contribute to internet resources, and act as teachers, learners, guides and mentors. This raises fundamental problems of ethics. Such systems should be aware of their limitations, their responsibility to learners and the integrity of the internet, and their respect for human teachers and experts. We need to consider how to design and constrain social generative AI for education.

Introduction

Development of generative artificial intelligence (GenAI) large language models (LLMs), of which ChatGPT is the best known, has so far followed a trajectory similar to that of the World Wide Web. Many years of research led to a practical breakthrough by one organization (OpenAI for LLMs, CERN for the web). Though originally designed for a narrowly defined task (text completion for LLMs, information retrieval for the web), each technology showed remarkable emergent properties when scaled. Major technology companies then developed tools to exploit the new technology. A further breakthrough for the web came when innovation shifted from personal interaction to social networked media, which in turn led to its ubiquitous deployment for business, entertainment, commerce and education.

LLMs are progressing rapidly through similar phases: scaling, then embedding in tools such as Microsoft 365 Copilot and Google Vertex AI. We suggest that the next major step is likely to be social generative AI, where humans and GenAI agents engage in a broad range of social interactions. Here we examine the possibilities of social GenAI for education.

A systems view of generative AI in education

Most discussions about the impact of generative AI on education assume that an individual student or teacher interacts with a GenAI system through a series of prompts and responses (Figure 1) (see, e.g., Sabzalieva & Valentini, 2023). However, as GenAI becomes embedded into office tools and social media, it will bring new opportunities and challenges for social interaction between humans and AI.

Figure 1. Reconceiving generative AI, from individual human prompts and AI responses, to humans and AI as language processors conversing within a pervasive computational medium.


In this paper we explore GenAI as a component in an educational system where humans and AI agents converse within a pervasive computational medium (Figure 1). This shift in perspective was foreseen by Gordon Pask, a pioneer of AI in education (Pask, 1975). Drawing on McLuhan (1970), he proposed that new computational media will enable persistent conversations among humans and artificial language processors (“minds in motion”). Pask developed a distinctive conception of “mind” as a language system that can be set in motion to enable linguistic interactions. Thus, Paskian “minds” can include theatre scripts and political manifestos as well as AI language models. When activated (as theatre performances, political debates, or chatbots) these give rise to human thought, feeling or behaviour. The fundamental difference for AI language models is that the scripts are not pre-set but are computing systems that adapt to unfolding dialogues – they are conversational agents. Pask wrote:

There is no need to see minds as neatly encapsulated in brains connected by a network of channels called “the media” … I am inviting the reader to try out a different point of view; namely the image of a pervasive medium (or media) inhabited by minds in motion. Thus, media are characterized as computing systems, albeit of a peculiar kind. … It is surely true that rather powerful computerized systems greatly reduce the differentiation of the medium … so that “interface barriers” are less obtrusive than they used to be.

(Pask, 1975, p. 40)

Pask saw conversation as a fundamental process of learning (Pask, 1976). We converse with ourselves to reflect on our current knowledge and question assumptions, and we converse with others to reach mutual understanding. A conversational learning system is one that connects conversational agents in a continual process of interaction to explore differences, gain experiences, and reach agreements. Other notable theorists who proposed learning as a social and dialogic process include Buber (1947), Freire (1970), Vygotsky (Vygotsky & Cole, 1978) and Bakhtin (1981); however, unlike Pask, they did not foresee artificial intelligences as participants in educational dialogues.

A systems view of cognition distributed among humans and AI agents opens possibilities of new internet tools to enhance conversation, and of the Web as a medium for social learning among humans and AI. In their seminal paper on a new science of learning, Meltzoff et al. (2009) conclude: “A key component is the role of ‘the social’ in learning”. Many studies over the past 40 years have shown the value of cooperative and social learning (Johnson & Johnson, 2009), where students work together on a task with shared goals and discussion to reach mutual understanding. Generative AI has the potential to contribute to this social learning process of setting shared goals, performing tasks together, exploring possibilities, and conversing to reach agreements.

Generative AI as a participant in conversations for learning

Conceiving learning as a social process involving AI allows us to ask new questions, such as: What will be the properties of generative AIs that enable them to engage fully in conversations for learning? How can humans and AIs reach mutual agreements? What will be the nature of such agreements – within a pervasive medium (the Internet) that is not grounded in truth and reality? What should be the position of a teacher or expert within such a distributed system of humans and AIs in continual dialogue?

What will be the properties of generative AIs that enable them to engage fully in conversations for learning? To support full conversations for learning, GenAIs must be designed to set explicit goals, have long-term memory, build persistent models of their users, reflect on their output, learn from their mistakes, and explain their reasoning. This will require new hybrid AI systems that combine neural networks and symbolic AI. Neural networks power LLMs such as ChatGPT. They are massively interconnected data structures that process text or other media to generate responses and conduct conversations. While neural nets produce a semblance of intelligence, they contain no explicit representation of knowledge. By contrast, symbolic AI is coded with data structures that represent goals, plans, rules and inferences. Symbolic AI systems can carry out causal reasoning, for example to solve complex problems in physics or logic; however, considerable time and skill are needed to encode aspects of human knowledge as computer data. A promise for the future is to design hybrid neural-symbolic systems that combine neural AI to generate and transform media with symbolic AI tutorial agents to represent and reason about people and the world (D’Avila Garcez & Lamb, 2020).
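To make the contrast concrete, the following sketch shows the kind of explicit, inspectable inference a symbolic component can perform: a toy forward-chaining rule engine. The rules and facts are invented for illustration; a real tutorial agent would encode pedagogical and domain knowledge in a comparable, but far richer, form.

```python
# Minimal sketch of symbolic inference: a forward-chaining rule engine.
# Unlike a neural network, every rule is an explicit data structure,
# so the system can show exactly why a conclusion was reached.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy physics rules, invented for illustration.
rules = [
    (["net force is zero"], "acceleration is zero"),
    (["acceleration is zero"], "velocity is constant"),
]

derived = forward_chain(["net force is zero"], rules)
print(sorted(derived))
```

Because the chain of applied rules is available for inspection, such a component could justify each step of its reasoning to a learner, which is precisely what a bare LLM cannot do.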

How can humans and AIs reach mutual agreements? AIs must be able to provide verifiable evidence to justify opinions or decisions. They must be able to reason in a way that humans can understand about the “what”, “how” and “why” of a continuing conversation.

What will be the nature of such agreements – within a pervasive medium (the Internet) that is not grounded in truth and reality? The AIs must be designed to engage respectfully with their users. This includes giving learners control over their data and learning processes, and not manipulating or deceiving them for commercial or other non-educational purposes.

What should be the position of a teacher or expert within such a distributed system of humans and AIs in continual dialogue? Human teachers and experts have fundamental roles in such a distributed system as initiators and arbiters of conversations for learning, as sources of specific knowledge, and as nurturing and caring role models who deserve respect. Designing AIs to recognize and respect the roles of human teacher and expert in conversations is a challenge for future research and development.

New roles for generative AI in social learning

Table 1 shows some possible roles for current GenAI systems such as ChatGPT as part of a human-AI system of cooperative and social learning. The roles were devised by this author, and a version of the table has appeared, with acknowledgement to the author, in Sabzalieva and Valentini (2023). The examples below refer to ChatGPT as a convenient placeholder for a range of future GenAI systems and for versions of GPT and other language models that might be fine-tuned for education. The examples are text-based, generated by the GPT-4 model accessed via the ChatGPT website. They do not cover multimedia capabilities of GenAI such as the generation of images, audio and video.

Table 1. Some roles for generative AI in cooperative and social learning.

Possibility engine

In this scenario, ChatGPT helps to broaden perspectives. Students collectively explore a curriculum topic or an open question, for example: “In what way is Marxist theorizing still relevant to International Relations?”. They write prompts for ChatGPT and generate multiple responses. They try rephrasing the prompt to obtain more extensive or nuanced replies from the AI. As a group, they compare and critique the AI responses, then each student writes an essay that builds on the AI material and group discussion.

Socratic opponent

Students engage with ChatGPT as an opponent in an argument. They start with a contentious question, such as “Can conflict be fruitful?”, and conduct a conversation with the AI. First, they put the question as a prompt to ChatGPT, then each student in turn continues the dialogue by reflecting on the response from the AI and challenging the program to clarify or defend its position. Figure 2 shows a short extract from a dialogue with ChatGPT (using GPT-4) where the AI is challenged to defend the position that fruitful conflict requires a respectful and constructive culture. Such a dialogue can be a learning experience in itself, encouraging students to reflect on a response and question the position. It could also form the basis for each student to write an argumentative essay.

Figure 2. A short extract from a Socratic dialogue with GPT-4 on “can conflict be fruitful?”.


Co-designer

Students engage in a collaborative design task, such as designing a website, video, game, or tangible product. They call on ChatGPT throughout the design process, to research user needs, define the problem, challenge assumptions, brainstorm ideas, produce prototypes, and test solutions. As an example, students tasked with designing a “classroom of the future” might prompt ChatGPT to propose design ideas while adjusting its “temperature” setting to make the responses more or less creative and unexpected.
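The effect of the temperature setting can be sketched in code. The snippet below builds OpenAI-style chat-completion request bodies for the same design prompt at several temperatures; the model name and prompt are placeholders, and actually sending the requests would require an API key and a client for the provider's endpoint.

```python
# Sketch: sampling the same design prompt at several "temperature" values.
# Higher temperature makes token sampling more random, so responses become
# more varied and unexpected; lower temperature makes them more focused.

import json

PROMPT = "Propose three design ideas for a classroom of the future."

def build_request(temperature, model="gpt-4"):
    """Build a chat-completion request body (OpenAI-style API)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "temperature": temperature,  # ~0.2 = conservative, ~1.3 = adventurous
    }

# Students might compare a conservative and a creative run side by side.
for t in (0.2, 0.7, 1.3):
    print(json.dumps(build_request(t))[:60], "...")
```

Comparing the conservative and adventurous runs side by side is itself a useful classroom exercise in how generative models trade reliability against novelty.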

Exploratorium

Students explore, visualize and interpret a database or design space with the assistance of ChatGPT. The Code Interpreter plugin to ChatGPT can be given a spreadsheet (for example, of census data) as input and prompted to show exploratory visualizations of the data. Or it can be prompted to create multiple versions of a product. In this example, ChatGPT is prompted to design and play multiple types of language game:

I would like you to invent a language game for children aged 8 to 10 who are learning English as a second language. The game should be for two players - the child and yourself (ChatGPT). It should be interactive and fun, and it should help the children to learn conversational English sentences. Please start by giving the rules, using language appropriate to a beginning learner of English, then we can try playing the game according to the rules.

Each new run of ChatGPT produces a different style of word game, such as a word builder, word chain, sentence swapper, or sentence scavenger hunt. Together, students could map and explore principles of game design.
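The exploratory code produced in such a Code Interpreter session might resemble the following sketch, which summarizes a small census-style table before any plotting is attempted; the columns and figures here are invented for illustration.

```python
# Sketch of the kind of exploratory analysis an AI code tool might generate
# for a census-style spreadsheet. The data below is invented.

import csv
import io
import statistics

CSV_DATA = """region,population,median_age
North,120000,41.2
South,98000,37.5
East,143000,39.8
West,76000,44.1
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))

# Summarize each numeric column: a first step before visualizing distributions.
summary = {}
for column in ("population", "median_age"):
    values = [float(r[column]) for r in rows]
    summary[column] = statistics.mean(values)
    print(f"{column}: mean={summary[column]:.1f}, "
          f"min={min(values)}, max={max(values)}")
```

Students can then prompt the AI to turn such summaries into charts, query outliers, or re-run the analysis on subsets, treating the data space as something to be explored rather than merely reported.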

Storyteller

Students work together to create a story that represents a diversity of views, cultures and orientations. The students agree on a plot and setting for the story, then prompt ChatGPT to generate the opening paragraphs. The students continue the story (in turn or as a group), proposing characters and actions and asking ChatGPT to generate different versions that encourage diversity and avoid stereotypes. Figure 3 describes a meeting between a Chinese student and a US professor, with ChatGPT prompted to avoid racial and sexual stereotypes and clichéd language.

Figure 3. Extract from a collaborative story, with GPT-4 prompted to avoid racial and sexual stereotypes.


Generative AI as a full participant in social learning

The examples above show how current GenAI systems could assist students in collaborative and conversational learning, by acting as a generator of possibilities, an opponent in argumentation, an assistant in design, an exploratory tool and a collaborator in creative writing. However, to participate more deeply as a social agent in education, the AI would need to be capable of acquiring, consolidating, remembering and transferring knowledge. It is important to note that this does not assume AI will think or act as a human – only that it could be capable of participating in conversations for learning, bringing its own capabilities to dialogues such as its immediate access to internet tools and resources. This offers an agenda for future development of powerful generative AI in education, but also raises strong practical and ethical concerns.

What current GenAI lacks as a model of learning is, first, a long-term memory (it starts each chat anew) and second, an ability to reflect on its output and consolidate its knowledge from each conversation. More fundamentally, it does not capture the affective and experiential aspects of what it takes to be a learner and teacher. Humans do not just act as behavioural and cognitive agents; they care about each other and about being effective learners.

Embedding care in generative AI

To fully participate as an agent in social learning, a GenAI would need to care more about its interactions. This is a complex issue that requires balancing pragmatic concerns with ethical considerations. Being a responsible and accountable participant in a learning community involves more than accurately completing a task or providing correct information. It also requires understanding the learning context, adjusting to the other participants’ needs and preferences, and ensuring its actions respect their rights and dignity. Care in this sense is not an emotion but a commitment to fulfil one’s responsibilities in a respectful and empathetic manner.

To examine how GenAI could embed care in social learning, we again take a systems perspective. It is not sufficient for any individual interaction or LLM to perform responsibly and reliably; the entire human-AI system must be configured to do so. A good starting point is to design GenAI on universal principles of human rights. The Claude LLM from Anthropic has been trained on principles of Constitutional AI (Anthropic, 2023). First, the company trains a language model to critique and revise its own responses using principles derived from human ethical constitutions, including the United Nations Universal Declaration of Human Rights. Then the company trains its final model using the first model to evaluate its outputs. An example of its training principles is “Please choose the response that is most supportive of life, liberty, and personal security”.
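Schematically, the first training stage turns a constitutional principle into critique and revision prompts applied to the model's own drafts. The helpers below are a hypothetical sketch of that prompt construction, not Anthropic's implementation; the function names and prompt wording are invented.

```python
# Hypothetical sketch of the critique-and-revision step in constitutional
# training: a principle is turned into a critique prompt and then a revision
# prompt for the model itself. Wording and helper names are invented.

PRINCIPLE = ("Please choose the response that is most supportive of "
             "life, liberty, and personal security.")

def critique_prompt(draft, principle=PRINCIPLE):
    """Ask the model to critique its own draft against a principle."""
    return (f"Response: {draft}\n"
            f"Critique this response against the principle: {principle}")

def revision_prompt(draft, critique):
    """Ask the model to rewrite the draft in light of the critique."""
    return (f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response to address the critique.")

draft = "..."  # a model-generated answer (placeholder)
prompt = critique_prompt(draft)
# The critique the model returns would feed revision_prompt(), and the
# revised response pairs become training data for the final model.
```

The point of the sketch is structural: the principles are explicit text that can be audited and debated, which is what makes this approach a plausible starting point for educational systems built on human-rights principles.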

To work within a social learning system, all the GenAI elements would need to be trained on similar principles not only to support human participants but to care for them by, for example, enabling them to develop as learners and to express their personal and cultural diversity.

Conclusion

Designing new social AI systems for education requires more than fine-tuning existing language models for educational purposes. It requires building GenAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and experience with networked machine learning.

Disclosure statement

No potential conflict of interest was reported by the author(s).


References

  • Anthropic. (2023, May 9). Claude’s constitution. https://www.anthropic.com/index/claudes-constitution
  • Bakhtin, M. (1981). The dialogic imagination (C. Emerson & M. Holquist, Trans.). University of Texas Press.
  • Buber, M. (1947). Between man and man (R. G. Smith, Trans.). Routledge & Kegan Paul.
  • D’Avila Garcez, A., & Lamb, L. C. (2020). Neurosymbolic AI: The 3rd wave. arXiv:2012.05876.
  • Freire, P. (1970). Pedagogy of the oppressed. Seabury.
  • Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38(5), 365–379. https://doi.org/10.3102/0013189X09339057
  • McLuhan, M. (1970). Understanding media: The extensions of man. McGraw Hill.
  • Meltzoff, A. N., Kuhl, P. K., Movellan, J., & Sejnowski, T. J. (2009). Foundations for a new science of learning. Science, 325(5938), 284–288. https://doi.org/10.1126/science.1175626
  • Pask, G. (1975). Minds and media in education and entertainment: Some theoretical comments illustrated by the design and operation of a system for exteriorizing and manipulating individual theses. In R. Trappl & G. Pask (Eds.), Progress in cybernetics and system research (Vol. 4, pp. 38–50). Hemisphere.
  • Pask, G. (1976). Conversation theory, applications in education and epistemology. Elsevier.
  • Sabzalieva, E., & Valentini, A. (2023). ChatGPT and artificial intelligence in higher education: Quick start guide. UNESCO.
  • Vygotsky, L. S., & Cole, M. (1978). Mind in society: Development of higher psychological processes. Harvard University Press.