Editorial

Artificial intelligence: Why is it our problem?

Received 26 Mar 2024, Accepted 19 Apr 2024, Published online: 14 May 2024

Not every new technology or public media hype warrants the attention of philosophers and theorists of education. In recent years, we have witnessed many educational trends and technologies that have garnered significant buzz, such as the gamification of learning (Dicheva et al., 2015), massive open online courses (MOOCs) (Pappano, 2012), and the use of virtual reality in classrooms (Freina & Ott, 2015). While these innovations have sparked much interest and discussion, few have warranted deep philosophical or theoretical examination.

However, some technological advancements, like artificial intelligence (AI), present genuine mysteries that go beyond mere novelty and unfamiliarity. In such cases, we are confronted with not only a lack of knowledge about the technology itself but also a lack of understanding of how to approach and conceptualize its implications. AI's potential to disrupt education and the roles of teachers and students presents a complex challenge that demands the attention of philosophers and theorists. The impact of AI on education can be viewed through two contrasting scenarios, each with far-reaching consequences.

If educators remain passive and allow the elemental forces of market and culture to dictate the infusion of AI into education, we risk the deskilling of the current generation of students, as well as a new AI divide in the labor force. In an alternative scenario, where we proactively work to reformat education in light of this transformative technology, AI has the potential to improve educational outcomes for all and realize its tremendous equalizing potential. Both scenarios are real possibilities, and neither is easily ignored. To distinguish between passing fads and developments that merit philosophical and theoretical engagement, we must consider a technology's potential to disrupt fundamental aspects of education, the existence of unresolved conceptual or ethical questions, and the need for new frameworks to understand and guide its implementation. Most importantly, we must estimate how impactful doing nothing can be.

At its core, philosophy is a form of tool-making: the making of intellectual tools. As John Dewey (1916) argued, philosophy is not a mere intellectual exercise but a means of developing practical tools for solving problems and navigating the complexities of the world. In the case of AI, developing new philosophical and theoretical tools is crucial for understanding and navigating the profound disruptions it brings to education, and their consequences.

In this sense, philosophy and theory are not just about acquiring knowledge but about developing ways of thinking that enable us to approach new challenges and mysteries. When faced with a transformative technology like AI, philosophers and theorists of education have a crucial role to play in crafting the intellectual tools needed to comprehend and navigate this uncharted territory. Novelty has a tendency to escape understanding because sometimes we have no conceptual apparatus to make sense of it. AI's role in education is profoundly disruptive because it challenges our existing frameworks for understanding educational tools and their impact on learning. As we grapple with the implications of AI, the work of philosophers and theorists will be essential in shaping the future of education in the age of artificial intelligence.

Almost a quarter century ago, Nicholas Burbules's (2000) paper ‘Why Philosophers of Education Should Care About Technology Issues’ argued that philosophers of education need to critically engage with the profound implications of new information and communication technologies in education. He highlighted the opportunities and dangers these technologies present, such as the risk of creating an information caste society, the commercialization of education, and the deinstitutionalization of traditional schooling. Burbules emphasized the need for philosophical analysis and critique of the ontological, epistemological, ethical, and identity issues raised by these technologies, as well as their impact on learning, pedagogy, and curriculum. He called for philosophers of education to play an active role in shaping the development and implementation of these technologies in ways that align with educational values and goals. His call is as relevant today as it was in 2000, or even more so.

AI as a tool

Certainly, AI can be seen as a tool, a concept well-explored by educational theorists such as Lev Vygotsky. In his sociocultural theory, Vygotsky (1978) emphasized the importance of tools, both physical and mental, in mediating learning and cognitive development. He argued that ‘the use of artificial means, the transition to mediated activity, fundamentally changes all psychological operations just as the use of tools limitlessly broadens the range of activities within which the new psychological functions may operate’ (p. 55).

From this perspective, the role of education is to help students internalize socially developed ways of using tools, which in turn shapes their cognitive processes. As Vygotsky (1981) noted, ‘the inclusion of a tool in the process of behavior (a) introduces several new functions connected with the use of the given tool and with its control; (b) abolishes and makes unnecessary several natural processes, whose work is accomplished by the tool; and (c) alters the course and individual features (the intensity, duration, sequence, etc.) of all the mental processes that enter into the composition of the instrumental act, replacing some functions with others (i.e. it re-creates and reorganizes the whole structure of behavior just as a technical tool re-creates the whole structure of labor operations)’ (pp. 139–140).

However, AI seems to be more than just another tool; it is a ‘super tool’ that can perform tasks that would typically require human cognitive processes, such as researching or writing essays. When AI ‘abolishes and makes unnecessary several natural processes’, it arguably does too much of it. This raises concerns about AI's potential to undermine the very cognitive and emotional growth that education seeks to foster.

In my usage, the term ‘super’ does not always imply something positive, akin to how ‘supersized’ in the context of junk food or drinks does not necessarily denote a desirable quality. ‘Super’ can signify both excellence and excess. Certain uses of AI might result in intellectual ‘deskilling’ (Zuboff, 1988), where students rely on technology for tasks they should be able to do independently. In the worst-case scenario, teachers continue assigning tasks that can be easily completed by AI, and students receive good grades without actually learning anything. In the best-case scenario, educators revise all assignments and raise their expectations, so that the use of AI still challenges students, fosters cognitive growth, and provides an alternative route to developing post-AI higher thinking skills. However, it is easy to say ‘revise all assignments’; we also need to give teachers guidelines on exactly how to do it. In other words, we need theory.

The disruptive nature of AI in education arises from its potential to fundamentally change the role of tools in learning and cognitive development. Traditional tools, as conceptualized by Vygotsky and others, mediate and enhance human cognitive processes. However, AI's ability to independently perform complex cognitive tasks challenges this paradigm. As educators and theorists navigate this new landscape, it is crucial to develop frameworks that guide the responsible integration of AI in education, ensuring it supports students’ cognitive, social, and emotional growth.

In navigating the intricate landscape of AI in education, the work of Peters and Green (2024) offers a valuable philosophical insight. Their exploration in ‘Wisdom in the Age of AI Education’ delves into the essence of wisdom amidst the technological upheaval brought about by AI. Peters and Green argue that in an era where information is abundant and learning environments are increasingly digitized, the cultivation of wisdom stands as a pivotal educational objective. They posit that AI, despite its transformative potential, introduces complex ethical and practical challenges that demand a thoughtful integration into educational frameworks. This perspective underscores the necessity of a philosophical approach to AI in education, one that transcends mere technical proficiency to embrace a holistic understanding of wisdom’s role in shaping a future where technology and human values coalesce.

Viewing AI merely as a tool overlooks its unique nature; it is the most unconventional tool we have encountered. Consequently, Vygotsky’s theory requires revision, particularly its focus on internalization. If students in their future lives will always work in tandem with AI, the interaction with AI is not so much internalized as it is externalized. The skills transcend their being as an attribute of a person; they become the attribute of an AI-human pair. This shift calls for a reevaluation of traditional learning theories to accommodate the evolving dynamics of human-AI collaboration in education.

AI as a teacher

AI's potential to be viewed as an instructor, collaborator, learning assistant, or coach further complicates its role in education. While the role of a teacher or instructor has been explored and debated for centuries, AI presents a new challenge to our understanding of these roles. As Vygotsky (1978) noted, ‘human learning presupposes a specific social nature and a process by which children grow into the intellectual life of those around them’ (p. 88). This highlights the importance of social interaction and guidance in the learning process, traditionally provided by human instructors.

However, AI's ability to engage in interactive, personalized, and adaptive instruction blurs the line between tool and teacher. Luckin (2010) contends that AI technologies have the capability to tailor the learning experience to individual needs by offering adaptive scaffolding and support (p. 38). This suggests that AI can take on some of the roles typically associated with human instructors, such as providing guidance, feedback, and support tailored to individual learners’ needs.

Moreover, AI's capacity for natural language interaction and its growing ability to understand and respond to learners’ questions and concerns further positions it as a potential collaborator or coach. Woolf (2010) observes that AI tutors can simulate several essential roles of human instructors by engaging in dialogue with learners, answering questions, and offering explanations and feedback (p. 62).

However, the nature of AI as a learning collaborator or coach is also different from that of a human instructor. While human teachers bring their own experiences, emotions, and social understanding to the learning process, AI's ‘intelligence’ is based on predictive algorithms and data. This raises questions about the quality and depth of the interaction and support AI can provide, as well as its ability to foster the kind of emotional and social growth that is often a key part of the teacher-student relationship. As of today, the quasi-instructor role that AI may play does not extend to the relational sphere, which is essential for the motivation to learn.

Furthermore, the use of AI as a learning collaborator or coach raises ethical concerns about the potential for AI to perpetuate biases or to be used in ways that exacerbate existing inequalities in education (Zawacki-Richter et al., 2019). As educators and theorists navigate the integration of AI into educational settings, it is crucial to consider these issues and to develop frameworks that ensure AI is used in ways that support, rather than undermine, the goals of education.

AI's potential to act as a collaborator, learning assistant, or coach challenges traditional understandings of the role of instructors in education. While AI can provide adaptive, personalized support to learners, it is important to recognize the limitations and potential risks associated with relying on AI in these roles. As we continue to explore the possibilities and implications of AI in education, we must strive to develop approaches that harness its potential while remaining grounded in the fundamental human values and goals of education.

While AI can be considered a kind of instructor, it is an exceptionally atypical one. Although AI has the potential to possess extensive knowledge and serve as a valuable resource for delivering differentiated instruction, it falls short as a relational partner and is unlikely to inspire strong motivation to learn. Essentially, AI is a teacher that requires supervision by a human teacher to be truly effective.

AI as a textbook

AI's potential to serve as a textbook or a source of information adds another layer of complexity to its role in education. Traditionally, textbooks have been a cornerstone of educational practice, providing students with a structured, curated, and authoritative source of knowledge. Apple (1992) cites A. Graham Down: ‘textbooks, for better or worse, dominate what students learn. They set the curriculum, and often the facts learned, in most subjects’ (p. 6).

However, AI-generated content differs from traditional textbooks in several key ways. First, AI can generate information dynamically, adapting to learners’ queries and needs in real-time. This flexibility contrasts with the static nature of printed textbooks, which are typically fixed in their content and structure. Cope and Kalantzis (2009) point out that the introduction of digital media has transformed the educational textual landscape, allowing for educational content that is more interactive, multimedia-rich, searchable, and fluid (p. 87).

Second, AI-generated content is not subject to the same rigorous editorial and quality control processes as traditional textbooks. While textbooks are typically authored by subject matter experts and undergo extensive review and revision before publication, AI-generated content is the product of algorithms and training data, which can be biased, incomplete, or even inaccurate. AI systems are only as good as the data used to train them, and they have been shown to inherit the biases present in that data.

Given these differences, the use of AI as a textbook or information source in education raises important questions about the role of authority, expertise, and critical thinking in the learning process. Educators and theorists must grapple with how to help students navigate and critically evaluate AI-generated content, developing the skills needed to identify potential biases, errors, or limitations.

While AI has the potential to serve as a dynamic and adaptive source of information in education, it also presents unique challenges related to quality, bias, and transparency. As AI becomes increasingly prevalent in educational settings, it is crucial to develop frameworks and strategies that help educators and learners critically engage with AI-generated content, fostering the skills and dispositions needed to navigate this new informational landscape. Yes, AI is a ‘super’ textbook, but not a very good textbook after all.

Enter the philosophers

When a phenomenon, such as AI in education, eludes categorization and understanding, it can lead to a range of negative consequences. As Wittgenstein (1922) famously argued, ‘the limits of my language mean the limits of my world’ (proposition 5.6). In other words, our ability to conceptualize and categorize a phenomenon is intimately tied to our ability to understand and engage with it effectively.

In the case of AI in education, the difficulty in categorizing it as a tool, collaborator, or textbook, among other roles, reflects a deeper challenge in understanding its nature and implications. Without a clear conceptual framework to guide our thinking, we risk making mistakes in how we use and respond to AI in educational settings.

These mistakes can manifest in various ways. For example, an overreliance on AI as a tool or collaborator may lead to the deskilling of students and teachers. Attempts to replace teachers with AI may undermine the relational, and consequently, motivational basis of education. Similarly, an uncritical acceptance of AI-generated content may expose students to biased or inaccurate information, undermining the goals of education.

Moreover, a limited understanding of AI in education can result in an inability to foresee and address potential negative social impacts. Crucially, without intervention, AI is likely to benefit those already privileged, leaving it out of reach for those who most need its equalizing potential. Without a strong conceptual framework to steer our approach, we risk neglecting or underestimating these issues until they have already inflicted damage.

In light of these challenges, the philosophical community has a crucial role to play in helping to categorize, understand, and guide the development and use of AI in education. By drawing on the rich tradition of educational theory and philosophy, we can work to develop new conceptual frameworks that can help educators, policymakers, and the public make sense of AI's role in education.

This may involve adapting existing theories to the unique challenges posed by AI, such as extending Vygotsky’s (1978) concept of mediated learning to account for the ways in which AI can shape and support student learning. Alternatively, it may require the development of entirely new theoretical frameworks that can capture the distinct features and implications of AI in education.

These are just a few examples where we need to engage theory. The goal of this philosophical work should be to provide a foundation for the responsible and effective use of AI in education. By helping to categorize and understand this new phenomenon, philosophers and theorists can contribute to the development of policies, practices, and pedagogies that harness the potential of AI while mitigating its risks and negative consequences.

Conclusion

The advent of AI in education poses a significant challenge for philosophers and theorists. As we have explored, AI's role in education defies easy categorization, blurring the traditional boundaries between tools, collaborators, and textbooks. This ambiguity reflects a deeper challenge in comprehending AI's nature and its implications for educational practices.

The risks associated with AI in education, such as the potential deskilling of students and teachers, the spread of biased information, and the erosion of privacy and equity, underscore the urgent need for philosophical engagement. Without a clear conceptual framework, we risk mishandling AI's integration into educational settings, which could lead to detrimental consequences.

To navigate this complex landscape, it is imperative that we draw on the rich traditions of educational theory and philosophy. By developing new conceptual frameworks or adapting existing ones, we can better understand and guide AI's role in education. This endeavor requires not only theoretical insight but also practical engagement with AI tools to ensure our frameworks are grounded in reality.

As we move forward, the goal of our philosophical work should not be limited to understanding AI in education but should extend to shaping its development and use in ways that align with our fundamental values and aspirations. By actively engaging with AI, both experientially and conceptually, philosophers and theorists can play a crucial role in ensuring that this powerful technology enhances the transformative power of learning, rather than undermining it.

Alexander M. Sidorkin

California State University, Sacramento, CA, USA

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Apple, M. W. (1992). The text and cultural politics. Educational Researcher, 21(7), 4–19. https://doi.org/10.2307/1176356
  • Burbules, N. C. (2000). Why philosophers of education should care about technology issues. In L. Stone (Ed.), Philosophy of Education 2000 (pp. 37–44). Philosophy of Education Society.
  • Cope, B., & Kalantzis, M. (2009). New media, new learning. In D. R. Cole & D. L. Pullen (Eds.), Multiliteracies in motion: Current theory and practice (pp. 87–104). Routledge.
  • Dewey, J. (1916). Democracy and education: An introduction to the philosophy of education. Macmillan.
  • Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: A systematic mapping study. Journal of Educational Technology & Society, 18(3), 75–88.
  • Freina, L., & Ott, M. (2015). A literature review on immersive virtual reality in education: State of the art and perspectives. In Proceedings of the International Scientific Conference eLearning and Software for Education (Vol. 1). https://doi.org/10.12753/2066-026X-15-020
  • Luckin, R. (2010). Re-designing learning contexts: Technology-rich, learner-centred ecologies. Routledge.
  • Pappano, L. (2012, November 2). The year of the MOOC. The New York Times.
  • Peters, M. A., & Green, B. J. (2024). Wisdom in the age of AI education. Postdigital Science and Education. https://doi.org/10.1007/s42438-024-00460-w
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  • Vygotsky, L. S. (1981). The instrumental method in psychology. In J. V. Wertsch (Ed.), The concept of activity in Soviet psychology (pp. 134–143). M. E. Sharpe.
  • Wittgenstein, L. (1922). Tractatus logico-philosophicus (C. K. Ogden, Trans.). Kegan Paul, Trench, Trubner & Co.
  • Woolf, B. P. (2010). Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Morgan Kaufmann.
  • Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0
  • Zuboff, S. (1988). In the age of the smart machine: The future of work and power. Basic Books.
