Editorial

Learning, teaching, and assessment with generative artificial intelligence: towards a plateau of productivity


The launch of ChatGPT (https://openai.com/blog/chatgpt) on 30 November 2022 took the world (including the education world) by storm, accelerating a new wave of innovation – the age of generative artificial intelligence (AI). For decades, AI researchers have been working on generative models that can generate speech, text, music, and more, but with the recent rise of deep learning, generative AI allows users to create content at a speed and accuracy that were not possible before. With advances in recent months, generative AI has become accessible to the general public. For language generation, we now have ChatGPT and GPT-4, Bard, LLaMA, and BLOOM; for image generation, there is DALL·E 2 (and now 3), Midjourney, and Stable Diffusion; Google’s MusicLM can be used to generate music; Meta’s Make-a-Video can be used to generate videos. By the time you read this, this list is likely to be outdated.

With these fast-paced technological changes, generative AI is a potentially disruptive technology. It has stimulated a lot of buzz in the field of learning, with many debates, opinion pieces, webinars, and research investigations. This special issue captures and identifies some of the recent discourse on generative AI, notably its potential intended and unintended effects in the broad domain of learning.

With a short four-month runway from the launch of the special issue call to publication, we aimed to be relevant and timely to the ongoing discourse. Recognizing the discordant and noisy conversations in the world, this issue intends to provide assurance, certainty, and guidance on ways forward. We are glad to have received seven high-quality articles, peer-reviewed by esteemed researchers and experts in the field – we believe we all shared the desire to provide theoretical and substantiated views on this topic.

The seven articles are summarized below. The first three articles are conceptual in nature and broadly emphasize the social aspect of generative AI in teaching and learning. The next two articles highlight teaching and learning in practice with an empirical paper and a generative AI application. The last two articles focus on assessment, providing student perspectives and a pedagogical framework for teachers.

Conceptual frameworks for thinking about generative AI and learning

When discussing the role of generative AI in education, it is often likened to a calculator. Just as calculators force us to rethink student assessment and what should be taught in maths education, generative AI forces us to rethink assessment and curriculum in literacy and other areas. However, in their brief report, Lodge et al. (Citation2023) note that this is an overly simplistic metaphor and ask, “It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence?” They suggest that generative AI is a much more multifaceted tool than calculators, and instead ask us to think about this new toolbox in terms of a two-dimensional typology: (1) whether the tool is being used individually or collaboratively, and (2) whether the tool is offloading or extending cognition. Calculators are tools that are (typically) used individually and offload cognition; generative AI can span all four quadrants of this typology. Lodge et al. insightfully bring generative AI into conversation with both older and contemporary theories from human-computer interaction and psychology, including cognitive offloading, extended mind, self-regulated learning, and hybrid learning.

Tan et al. (Citation2023), in “Leveraging Generative Artificial Intelligence based on Large Language Models for collaborative learning”, conceptualize how generative AI can be used to support collaborative learning. Guided by the principles of collaborative learning, they identify the roles of generative AI and the interactive spaces between participants and AI in an ecosystem, and advocate for a human-AI collaboration research agenda in teaching and learning. Additionally, recognizing the current affordances and limitations of generative AI, the report articulates challenges and approaches for each member of the ecosystem (e.g., students, teachers, school leaders, and AI developers) towards the sustainable, reliable, ethical, and responsible use of generative AI for collaborative learning.

Finally, Sharples (Citation2023) provides a commentary that explores the social nature of generative AI in “Towards social generative AI for education: Theory, practices and ethics”. Drawing on yet another analogy, Sharples suggests that just like the worldwide web had a breakthrough when it took a social turn, so too will generative AI. To articulate a theoretical perspective on social generative AI in education, Sharples draws on Gordon Pask’s conversation theory. Pask was a cybernetician, educational psychologist, and technology developer who developed conversation theory as a general framework for thinking about teaching and learning – as well as other kinds of human and machine interaction – in the 1970s. This theory is seemingly all-but-forgotten today, but Sharples reminds us of its relevance for conceptualizing a theoretical perspective on the role that generative AI could play in education. He also articulates what must take place to have truly conversational generative AI in education. (Hint: the technology is not there yet!) At the same time, he describes several roles that even large language models today, like ChatGPT, could play in supporting the social nature of learning.

While these three articles come from different perspectives and theoretical framings, they all converge on the idea that the relationship between people and generative AI is inherently social. It is social in the sense that people can now have conversations with AI, in a similar fashion to how people converse in social settings; but it is also social in that generative AI could support interactions across people in collaborative and social learning settings.

Teaching and learning with generative AI

Ulla et al. (Citation2023), in ““To generate or stop generating response”: Exploring EFL teachers’ perspectives on ChatGPT in English language teaching in Thailand”, examine generative AI – namely, ChatGPT – as a language teaching tool from the perspectives of 17 tertiary educators in Thailand. Using a qualitative approach, the authors found that these instructors, who had used ChatGPT for at least three months, generally held positive attitudes towards ChatGPT and its range of applications, such as lesson preparation and language-creation activities. They also recognized its limitations and highlighted the need for conscientious implementation of its use in education. This research article, although exploratory, provides empirical evidence of tertiary educators’ attitudes towards generative AI: they expect positive outcomes from ChatGPT yet remain cautious, advocating for monitored use.

Ali et al. (Citation2023), in the brief report “Supporting self-directed learning and self-assessment using TeacherGAIA, a generative AI chatbot application: Learning approaches and prompt engineering”, pioneer a teaching-assistant chatbot, TeacherGAIA, for K-12 students. Developed using GPT-4 and an iterative process of prompt engineering, and departing from the didactic default mode of ChatGPT, the application is designed to support student interactions based on the goal of the learning approach (e.g., knowledge construction), cognitive guidance, and social-emotional support. While acknowledging some limitations, this publicly accessible application has the potential to provide personalized, just-in-time support to students across a wide variety of subjects and topics. Rather than purely transmitting facts, the chatbot is designed to encourage student inquiry and learning. Teachers can set a learning approach that complements their teaching, and the chatbot could also take some of the assessment load off time-pressed teachers.

These two articles highlight the teaching and learning aspect of generative AI in practice and spotlight its diverse and versatile application. From language learning tools in higher education to a teaching assistant chatbot for multiple K-12 subjects, generative AI has the potential to be a useful learning tool, but more empirical studies on its effectiveness are needed.

Assessment with generative AI

While most of the papers in this special issue focus on the potential of generative AI to positively impact education, Gorichanaz (Citation2023) focuses on perhaps the most obvious negative consequence, that of cheating, in the article “Accused: How students respond to allegations of using ChatGPT on assessments”. However, instead of focusing on the commonly discussed teacher’s perspective (e.g., how to prevent and detect cheating or how to redesign assessments in light of generative AI), Gorichanaz usefully expands this conversation by considering the student’s perspective. Specifically, he focuses on how students react to allegations of cheating by analysing 49 Reddit threads in which students claimed they were accused of cheating and discussed how to navigate those accusations; importantly, over three-quarters of the students claimed they were falsely accused. Five themes were identified, spanning from matters of immediate interest to accused students – including the role of trust and taking a legalistic stance towards accusations – to wider concerns, such as the role of higher education in society and how we should rethink assessments. While similar themes are likely to persist as generative AI tools evolve, we believe this paper should motivate future work to continue monitoring how students, teachers, and administrators discuss cheating and assessment in the age of generative AI – and what these discussions suggest about policies and practices for maintaining both trust and value in educational institutions.

Building on the theme of rethinking assessment, the brief report by Hsiao et al. (Citation2023), “Developing a Framework to Re-design Writing Assignment Assessment for the Era of Large Language Models”, puts forth a pedagogical framework for assessing writing assignments in view of generative AI and its possible misuse. Set in the context of higher education, this framework has six dimensions: purpose (related to the learning goals and context of the course), function and focus (balancing formative and summative assessment), grading criteria (critical thinking over mechanical aspects), modes (blended assessment approaches), authenticity (real and relevant tasks), and administration (detecting unauthorized generative AI usage). Focusing on critical thinking in assessment design, the authors derived a set of implications for crafting course learning goals with generative AI – using the criteria of accuracy, precision, relevance, depth, balance, and logic – which served as stimulus for the six dimensions of the framework in workshops with 30 educators at a Dutch university. The workshops helped participants think about how to redesign their writing assignments and were well received. This paper highlights the assessment changes required in the face of generative AI, which can produce relatively accurate and human-like answers. The design-based solutions offered in this paper provide a pedagogical and practical way forward for educators, and seem useful for other levels of education too, given the emphasis on critical thinking.

These concluding articles complete the “triangle” of learning, teaching, and assessment. However, by changing the nature of assessment, we thereby also change the learning process and how we teach. Therefore, generative AI is not only a tool that directly affects student learning, but also indirectly influences how and what they learn.

The hype cycle of generative AI in education

To provide a higher-level synthesis of the papers, we descriptively analyse the articles according to the Gartner Hype Cycle (Linden & Fenn, Citation2003) and articulate gaps and further research that is needed. We began this editorial by highlighting the buzz and hype around generative AI, so it is fitting that we categorize the “hype” according to each article’s viewpoint. Interestingly, according to Gartner (Citation2023), in the commercial world generative AI is considered to be at the peak of inflated expectations. Perhaps academia is ahead of commercial adopters.

Based on our reading of each article’s expectations of generative AI, we estimate the status of generative AI along its possible trajectory. Figure 1 displays our mapping of the articles onto the hype cycle. We recognize that there are limits to, and criticisms of, this hype cycle, but we offer it to bring some more clarity to the hype.

Figure 1. Estimated expectations of articles on generative AI mapped onto Gartner’s hype cycle. Original image (“Gartner hype cycle”, https://commons.wikimedia.org/wiki/File:Gartner_Hype_Cycle.svg) from Jeremykemp at English Wikipedia, CC BY-SA 3.0, via Wikimedia commons.


Ulla et al. highlight some of the positive expectations of generative AI. Still, the article highlights educators’ overall caution in using ChatGPT; thus, we categorize it as descending from the “peak of inflated expectations”. Gorichanaz shows that generative AI may not live up to its hype for students, who may constantly fear being accused of using such tools (even when they do not), while AI detectors might not live up to their promise for instructors or institutions. However, the article does point to some suggestions (e.g., rethinking assessments and policies) to help us climb out of the “trough of disillusionment”. Tan et al., Hsiao et al., and Ali et al. all describe issues in generative AI and ways forward. Tan et al.’s articulation of human-AI collaboration is conceptual, while Hsiao et al. provide a more usable pedagogical assessment framework and Ali et al. develop an actual generative AI application. We therefore categorize these in ascending order on the “slope of enlightenment”, with the most practical solution, i.e., Ali et al.’s, at the end, closer to the “plateau of productivity”. Sharples gives us a glimpse of what he believes is needed to enact truly conversational and social generative AI; we are not there yet, but if we reach that point, we can envision it to be on the “plateau of productivity”.

We have not positioned Lodge et al.’s contribution on the hype cycle: on the one hand, they seem very optimistic about the many possibilities enabled by generative AI; on the other, they recognize that the different roles these tools can play vary in productivity – some (e.g., using AI to enhance self-regulated learning) are more productive than others (e.g., failing to learn as a result of too much cognitive offloading). In this way, perhaps Lodge et al. are helping us see different possible plateaus that are more or less productive but may mutually coexist. One question that remains: how do we get students (especially those from disadvantaged backgrounds) to use generative AI in more productive ways?

What this descriptive analysis using the Gartner Hype Cycle shows is that the articles overall hold mindful expectations and are not overly disillusioned about the promise of generative AI. Rather, for the most part, the articles provide principled solutions to address some of the issues of generative AI.

Research gaps and opportunities for generative AI in education

Still, gaps remain. We articulate five research gaps which are also opportunities for further investigation on the role of generative AI in learning and education.

First, more empirical studies are needed to ascertain the gains and downsides of using generative AI for learning and teaching. In the short term, researchers and practitioners (possibly working together in research-practice partnerships) could design, implement, and evaluate innovative instructional designs, pedagogies, and teaching practices involving generative AI. For instance, (how) can generative AI act as an effective personalized tutor for students? Additionally, more studies examining the long-term learning benefits of generative AI tools used by teachers or students are needed.

Second, the ethical aspects of generative AI in education have to be further fleshed out. While some papers discuss the ethical concerns of using generative AI, the overall discourse could go deeper. More philosophical and pragmatic discussion on the ethical use of generative AI, as well as guidelines, are needed (e.g., see Williamson et al., Citation2023). For instance, how can learners use generative AI legitimately (e.g., as a cognitive extender; see Lodge et al., Citation2023)? What is the authorship role of generative AI in co-constructing student output – and relatedly, what constitutes plagiarism when students use generative AI that was trained on other people’s (copyrighted) text or art? What are the data privacy concerns with using these tools, and how should these be navigated in educational environments? How do we ensure students do not use these tools for problematic or inappropriate purposes – without resorting to surveillance? What is at stake if students and teachers over-rely on tools that are developed by large companies with commercial or political motives?

Third, as education is a key levelling field in many societies, we would hope that generative AI in education facilitates this. Thus, free and easy access to generative AI technologies for all learners is crucial. However, some generative AI tools charge fees and may be quite costly. Moreover, inequities in the use of educational technology go beyond cost and access; they are also shaped by many socio-cultural considerations (Reich & Ito, Citation2017). Furthermore, generative AI may reinforce certain inequities through biases in how it responds to certain prompts, potentially disadvantaging students from particular demographic, socio-cultural, or linguistic backgrounds. Further research is needed on how lack of access to generative AI, low adoption of these tools, or inequitable effects of their use affect underprivileged students and societies.

Fourth, the articles in this issue focused almost exclusively on large language models, and in many cases ChatGPT (or GPT-4) in particular. There is much more to generative AI, and the kinds of generative AI tools available are likely to evolve over the next few years. The issues surrounding the use of other generative AI tools, such as image generation tools, are likely different from those of text generation tools. Future work on generative AI tools in education should look at how the full range of these tools will affect teaching and learning.

Finally, the papers in this special issue have predominantly focused on how generative AI might change how we learn, teach, and assess. However, it also has widespread implications for what we learn, an issue that was briefly touched upon by Hsiao et al. (Citation2023). More work is needed on understanding how generative AI can and should change the nature of educational goals and curricula. Although Lodge et al. (Citation2023) remind us that it is not a perfect metaphor, we have seen how calculators change the nature of what needs to be taught in maths education. Similarly, although there was a lot of excitement around how the World Wide Web might change the ways in which we learn (Spiro & DeSchryver, Citation2009; Spiro et al., Citation1992), it seems likely that the web has played a bigger role in changing the nature of what’s important to teach (e.g., ways of synthesizing knowledge rather than individual facts; Wineburg, Citation2018). Thus, future research could focus on what students need to learn in the age of generative AI. An obvious aspect of this is teaching generative AI literacy, but what exactly does this entail? Should we focus on information literacy, so people can better scrutinize sources to discern what is true or what is likely coming from real people? Should we teach prompt engineering? Or should we double down on basic skills to ensure people do not lose those skills by becoming over-reliant on generative AI?

Learning in the age of generative AI

It has been less than one year since ChatGPT launched, and it seems to be the beginning of a new technological age with potential benefits and challenges for the field of learning. However, generative AI is not new: it has been around in some form since the 1950s (Cao et al., Citation2023). What has changed is generative AI’s performance and relative accessibility. Therein lie opportunities to leverage the affordances of generative AI in ethical ways for learning. Amidst the buzz and hype, the seven articles in this issue provide theoretically based perspectives, empirical studies, and insightful thoughts to bring understanding and clarity to learning with generative AI.

We hope this special issue provides our readers with resources to make greater sense of generative AI for learning, teaching, and assessment. While not all areas are covered, we believe this fast-tracked issue provides timely insights and well-informed perspectives for harnessing the potential of generative AI to reach a plateau of productivity. We encourage academics and educators to reflect on the various learning perspectives and concerns, reconsider content and assessment criteria, and regenerate older learning theories and frameworks that may still be surprisingly relevant in this day and age.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Ali, F., Choy, D., Divaharan, S., Tay, H. Y., & Chen, W. (2023). Supporting self-directed learning and self-assessment using TeacherGAIA, a generative AI chatbot application: Learning approaches and prompt engineering. Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2258886
  • Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv preprint arXiv:2303.04226. https://doi.org/10.48550/arXiv.2303.04226
  • Tan, S. C., Chen, W., & Chua, B. L. (2023). Leveraging generative artificial intelligence based on large language models for collaborative learning. Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2258895
  • Gartner (2023). Gartner places generative AI on the Peak of Inflated Expectations on the 2023 hype cycle for emerging technologies. https://www.gartner.com/en/newsroom/press-releases/2023-08-16-gartner-places-generative-ai-on-the-peak-of-inflated-expectations-on-the-2023-hype-cycle-for-emerging-technologies
  • Gorichanaz, T. (2023). Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2254787
  • Hsiao, Y.-P., Klijn, N., & Chiu, M.-S. (2023). Developing a framework to re-design writing assignment assessment for the era of large language models. Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2257234
  • Linden, A., & Fenn, J. (2003). Understanding Gartner’s hype cycles (Strategic Analysis Report No. R-20-1971). Gartner, Inc.
  • Lodge, J. M., Yang, S., Furze, L., & Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2261106
  • Reich, J., & Ito, M. (2017). From good intentions to real outcomes: Equity by design in learning technologies. Digital Media and Learning Research Hub.
  • Sharples, M. (2023). Towards social generative AI for education: Theory, practices and ethics. Learning: Research and Practice.
  • Spiro, R. J., & DeSchryver, M. (2009). Constructivism: When it's the wrong idea and when it's the only idea. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 106–123). Routledge/Taylor & Francis Group.
  • Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T. M. Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 57–75). Lawrence Erlbaum Associates, Inc.
  • Ulla, M. B., Perales, W. F., & Busbus, S. O. (2023). ‘To generate or stop generating response’: Exploring EFL teachers’ perspectives on ChatGPT in English language teaching in Thailand. Learning: Research and Practice. https://doi.org/10.1080/23735082.2023.2257252
  • Williamson, B., Macgilchrist, F., & Potter, J. (2023). Re-examining AI, automation and datafication in education. Learning, Media and Technology, 48(1), 1–5. https://doi.org/10.1080/17439884.2023.2167830
  • Wineburg, S. (2018). Why learn history (when it’s already on your phone). University of Chicago Press.
