
AI and its implications for research in higher education: a critical dialogue

Pages 563-577 | Received 22 Jun 2023, Accepted 05 Oct 2023, Published online: 25 Mar 2024

ABSTRACT

This article weighs in on the developing discourse on AI's role in higher education research through a structured dialogue between two diametrically opposed academics. Utilising a dialectical framework, the discourse transcends surface-level debates to grapple with ethical, methodological, and epistemological questions that are often overlooked but are paramount to the field's integrity. Russell advocates for AI as an indispensable tool for academic research, while Rachel raises critical questions about AI's potential to undermine the very essence of academic inquiry. Through this disputation, the article reveals the complex tensions between efficiency and ethical concerns, between innovation and integrity. Rather than offer facile solutions, it exposes the intricate challenges and invites the reader to consider the broader implications for academic research practice. The article serves as a catalyst for a more rigorous, nuanced dialogue, aiming to shift the discourse from the technological to the ethical and epistemological. It is essential reading for anyone committed to understanding the transformational potential and pitfalls of AI in higher education research.

Introduction

At the intersection of technological innovation and academic inquiry, the growing capability of Artificial Intelligence (AI) marks a critical juncture for research in higher education. This is not merely a matter of augmenting research with advanced tools. Instead, AI is starting to disrupt established methodologies, ethical paradigms, and fundamental principles that have long guided scholarly work. The aim of this article is to engage in a rigorous dialogue on AI's role in higher education research. Specific activities, such as writing, data analytics and automated content analysis in literature reviews, are particularly susceptible to this disruption.

The dialectical approach we employ in this article traces its roots to ancient philosophical dialogues and is particularly adept at dissecting complex issues (Sirén, 2012). Through this lens, we present the contrasting perspectives of the two authors: Russell, who extols the virtues of AI, and Rachel, who raises critical questions about its integration into academic research. The purpose is not to sow discord, but to illuminate the diverse opinions within the academic community on the integration of AI in higher education research. The debate draws us into the power dynamics and fundamental assumptions that underlie this topic, demonstrating that dialectics are not neutral but reveal historical, social, and institutional forces that are, at times, made explicit (Xie et al., 2023). The discussion navigates pivotal themes such as the purpose, practice and ethics of academic research, as well as the impact of AI on the sourcing and review of knowledge. The aim is not to advocate one viewpoint over another, but to foster a comprehensive dialogue that embraces diversity of thought and informs a balanced perspective (Korac-Kakabadse et al., 1999).

We invite the reader to join this intricate intellectual journey, noting that the mission is not to win an argument but to deepen our understanding and broaden our insights. We see this text as an initial but significant step toward a robust, informed, and proactive discourse on the future of academia in an age increasingly dominated by AI (Chubb et al., 2021).

Literature review

Artificial Intelligence (AI) has dramatically altered the landscape of academic research, acting as a catalyst for both methodological innovation and broader shifts in scholarly paradigms (Pal, 2023). Its transformative power is evident across multiple disciplines, enabling researchers to engage with complex datasets and questions at a scale previously unimaginable (Juarez-Orozco et al., 2019; Oren et al., 2020). Notably, AI's capacity for automation liberates researchers from time-consuming and often monotonous tasks that traditionally constituted the initial stages of research (Brynjolfsson & McAfee, 2011; Hui, 2020). This automation isn't merely about expediency; it also serves to enhance the reliability and reproducibility of research. As many studies show, AI can significantly reduce human error in data collection and analysis (Burger et al., 2023; Neyedli et al., 2011).

Beyond the sphere of automation, AI's capabilities extend to more intellectually demanding tasks. For instance, machine learning algorithms excel at conducting intricate content analyses, identifying trends, and even spotlighting gaps in existing bodies of research (Burger et al., 2023; Nguyen-Trung et al., 2023). These functionalities allow researchers to focus on the conceptual and theoretical aspects of their work, enabling a deeper level of engagement with their research questions. Moreover, AI's natural language processing capability has proven invaluable in literature review tasks, where algorithms can scan and summarise vast quantities of literature, presenting researchers with cogent summaries and highlighting areas for future investigation (Müller et al., 2022; Tauchert et al., 2020).
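To make this summarisation step concrete, the following minimal Python sketch condenses an abstract with an off-the-shelf model from the Hugging Face transformers library. The model choice and the sample abstract are illustrative assumptions, not drawn from the studies cited above.

```python
# A minimal sketch of AI-assisted literature summarisation using the
# Hugging Face `transformers` library. The abstract below is an
# invented placeholder, not a real paper.
from transformers import pipeline

# Load a general-purpose summarisation model (an assumption; any
# seq2seq summarisation checkpoint could be substituted).
summariser = pipeline("summarization", model="facebook/bart-large-cnn")

abstracts = [
    "Artificial intelligence is increasingly used to automate stages of "
    "the research cycle, from screening literature to analysing data. "
    "This study surveys recent applications and identifies open ethical "
    "questions around consent, privacy, and methodological transparency.",
]

for text in abstracts:
    result = summariser(text, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])
```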

The transformative impact of AI is perhaps most vividly captured in its capacity to facilitate interdisciplinary research. AI's ability to quickly and efficiently analyse datasets from diverse academic fields allows researchers to make connections they might not have otherwise considered, opening the door to novel interdisciplinary projects and collaborations (Beretta et al., 2021; Kusters et al., 2020). This not only expands the scope of individual research efforts but also serves to enrich academic discourse as a whole.

AI's transformative role is not merely operational; it also has profound implications for the epistemological foundations of research. By enabling more complex and large-scale analyses, AI allows for a more nuanced understanding of phenomena, effectively expanding the parameters of what can be known and studied (Bzdok et al., 2019; Hey & Hooper, 2020; Müller et al., 2022). In this sense, AI acts as a transformative force that augments human capabilities, rather than merely serving as a tool for automating repetitive tasks. It is an active participant in the research process, shaping not only how research is conducted but also what kinds of questions can be asked and answered (Burger et al., 2023; Chubb et al., 2021). Therefore, the facilitative role of AI in academic research cannot be dismissed as mere technological window-dressing; rather, it marks a methodological pivot of consequence.

AI's impact penetrates the core structures of academic procedures, wielding a frequently underappreciated effect on the entire research cycle (Burger et al., 2023; Nguyen-Trung et al., 2023). For instance, natural language processing algorithms don't merely ‘scan’ existing literature; they deconstruct complex academic narratives into data points, thereby reframing how scholars engage with extant knowledge (Tauchert et al., 2020). This algorithmic interloping is equally transformative in data analytics. Here, machine learning models serve not merely as advanced calculators but as heuristic devices that refract academic inquiry through new methodological lenses. They facilitate a kind of data hermeneutics, making legible patterns and trends that might escape even the most discerning human eye. Thus, what is often misconceived as automated number-crunching is, in fact, a paradigm shift in how data is understood and utilised (Sathya et al., 2022).
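As a hedged sketch of the pattern-finding described above, the snippet below clusters a handful of invented abstracts by vocabulary to surface latent themes. The corpus, cluster count, and feature choices are illustrative assumptions only.

```python
# A toy illustration of machine-driven theme discovery: clustering
# short texts by TF-IDF vocabulary. Real systems operate on far
# larger corpora with richer representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "machine learning for automated literature screening",
    "ethics of consent when reusing research data",
    "deep learning models for medical image analysis",
    "data privacy obligations in AI-assisted studies",
]

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Inspect the most heavily weighted terms per cluster.
terms = vectoriser.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = centre.argsort()[-3:][::-1]
    print(f"cluster {i}:", [terms[t] for t in top])
```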

The writing process too is not immune to AI's encroachment. Beyond mere grammar checks or stylistic suggestions, AI-driven writing aids have the potential to recalibrate the researcher's relationship with their own text. These tools can nudge authors towards more coherent argumentative structures and even flag logical inconsistencies, thereby interrogating the text's epistemic integrity (Abd-Elsalam & Abdel-Momen, 2023; Pividori & Greene, 2023).
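The tools cited above are far more sophisticated, but a toy sketch can illustrate the general shape of automated writing feedback: scoring readability and flagging overloaded sentences. The draft text and the threshold are invented for illustration.

```python
# A simplified illustration of automated writing feedback, using the
# `textstat` library for a readability score plus a naive length flag.
# The 30-word threshold is an arbitrary assumption, not a standard.
import textstat

draft = (
    "The integration of artificial intelligence into the research cycle, "
    "which has been accelerating for a decade and which touches every "
    "stage from literature review to peer review, raises questions that "
    "institutions have only begun to address."
)

print("Flesch reading ease:", textstat.flesch_reading_ease(draft))

for sentence in draft.split(". "):
    if len(sentence.split()) > 30:  # naive proxy for an overloaded sentence
        print("Consider splitting:", sentence[:60], "...")
```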

Similarly, AI's role in the publication process ought to be conceived not merely as an efficiency-boosting measure but as a radical rethinking of peer-review paradigms (Kousha & Thelwall, 2023). By automating initial scans of submitted papers, AI algorithms serve as gatekeepers of academic rigour, filtering out manuscripts that fail to meet baseline quality criteria before they ever reach human reviewers (Checco et al., 2021; Kousha & Thelwall, 2023).
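A heavily simplified sketch of such algorithmic gatekeeping follows: a classifier trained on surface features of submissions to flag those below a baseline. Every feature, label, and number here is invented for illustration; production screening systems are far richer and require careful validation.

```python
# A hedged sketch of automated manuscript triage: a logistic
# regression over invented surface features. Purely illustrative.
from sklearn.linear_model import LogisticRegression

# Each row: [word_count, n_references, has_methods_section (0/1)]
X_train = [
    [8000, 45, 1],
    [7200, 60, 1],
    [1500, 3, 0],
    [2000, 5, 0],
]
y_train = [1, 1, 0, 0]  # 1 = passes baseline screen, 0 = desk reject

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_submission = [[6500, 38, 1]]
print("passes initial screen:", bool(clf.predict(new_submission)[0]))
```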

AI doesn't just speed up the practice of doing research; it changes how we think about research problems, how we look at data, and even what we consider to be knowledge. This is not a minor change: it is redefining the very standards of academic rigour. It also opens a Pandora's box of ethical and methodological complexities that are far from peripheral; they are, in fact, central to the discourse (Christou, 2023; Ryan et al., 2021). One glaring issue is data privacy. AI's voracious appetite for data is well-known, but the ethical dimensions surrounding data handling are often glossed over. This creates a dilemma for researchers, who must grapple with the tension between leveraging AI's capabilities and ensuring data privacy. Equally tricky is the issue of informed consent. In a traditional research model, informed consent is a non-negotiable ethical cornerstone. However, the multifaceted nature of AI complicates this straightforward contractual agreement. For instance, machine learning algorithms can reuse data for analyses that were never contemplated in the original informed consent. These ethical complexities require us to reconsider our approach to participant ethics, particularly when participants' data can be used in multiple ways (Liaw et al., 2020; Tozzi & Cinelli, 2021).

The challenges extend to methodology as well. The computational basis of AI could inadvertently sway researchers towards methods that favour quantitative measurements, potentially side-lining qualitative insights. This trend has been noted in the literature, where scholars argue that the rise of AI and machine learning techniques can create a methodological bias, privileging quantifiable data over nuanced qualitative analysis (Anis & French, 2023; Feuston & Brubaker, 2021). This computational determinism could inadvertently narrow the research scope, leading to skewed or incomplete findings. Moreover, the ‘black box’ nature of many AI algorithms challenges the very tenets of scientific research, which values transparency and replicability (Bakken, 2019; Dafoe, 2013). In essence, the integration of AI into academic research is not a mere add-on but a complex intervention that reconfigures ethical and methodological landscapes. These complexities are foundational to the integrity of AI-augmented research. They demand a reframing of existing ethical guidelines and methodological approaches, necessitating a multidisciplinary dialogue that goes beyond the conventional boundaries of individual academic disciplines. Failure to address these intricacies not only risks undermining the scientific credibility of research findings but also raises serious ethical questions that could have long-term implications. Similarly pressing are the limitations inherent to AI algorithms themselves, especially when they fall short of capturing the causal relationships that form the backbone of rigorous scholarly inquiry.

While AI has proven revolutionary in the scope and efficiency of data analysis, it is imperative to scrutinise its limitations critically, particularly when the end goal is causal inference. Current AI algorithms, designed primarily for pattern recognition, are not equipped to discern causality (Bishop, 2021; Ganguly et al., 2023) – a critical aspect of academic research. This limitation is not a minor hiccup but a fundamental challenge to the base principles of scholarly inquiry. Moreover, AI's shortcomings become glaring when applied to disciplines that are deeply rooted in qualitative methodologies, such as the humanities and social sciences. While algorithms can churn through vast amounts of quantitative data, they are at a loss when nuanced understanding and contextual interpretation are required. For example, AI algorithms are ill-suited for capturing the multifaceted nature of human behaviour and social constructs (Ligo et al., 2021; Sloane & Moss, 2019). This is not just a technological shortcoming; it is a profound misalignment with the epistemological frameworks that these disciplines are built upon.
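The causality limitation is easy to demonstrate. In the synthetic example below, a regression model ‘predicts’ drownings from ice-cream sales with a strong fit, even though both are driven by a shared confounder (temperature); pattern recognition alone cannot distinguish this association from a causal relationship. All data are fabricated for illustration.

```python
# Synthetic demonstration: a strong statistical fit with no causal link.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, 200)                 # hidden confounder
ice_cream = temperature * 2 + rng.normal(0, 2, 200)    # driven by temperature
drownings = temperature * 0.5 + rng.normal(0, 1, 200)  # also driven by temperature

model = LinearRegression().fit(ice_cream.reshape(-1, 1), drownings)
print("R^2 of ice cream 'predicting' drownings:",
      round(model.score(ice_cream.reshape(-1, 1), drownings), 2))
# A high R^2, yet ice cream does not cause drowning: both variables
# track temperature. The model cannot tell the difference.
```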

While some argue that the limitations of AI are temporary and will be overcome as the technology matures, this perspective often dismisses the foundational differences between machine learning and human intellectual inquiry (Brooks, 2021; Dreyfus & Dreyfus, 1988). For instance, the ‘black box’ nature of many AI algorithms not only raises ethical concerns but also challenges the transparency and replicability that scientific research demands. These challenges are not peripheral but are central to academic endeavour and require rigorous scrutiny. Any attempt to side-line or ignore these limitations not only compromises the integrity of individual research projects but also risks undermining the credibility of the academic field as a whole.

In closing, this literature review scrutinised four distinct yet interconnected dimensions of AI's influence on academic research. We analysed AI's transformative capabilities, which are extensive but not universally applicable across all academic fields. We explored AI's function as a supplementary tool, shedding light on its specific roles and limitations in varied research tasks, from literature reviews to data analysis. Importantly, we examined the ethical and methodological challenges that come with AI's integration into academic settings, emphasising that this is not merely a technological transition but also a significant ethical and methodological turning point. Finally, we addressed scepticism towards AI, highlighting the valid concerns regarding its epistemological and ethical standing.

The points raised here form the contextual backdrop for an upcoming debate between Russell and Rachel, where these themes will be brought to life in a direct exchange that aims to exemplify many of the concerns and perspectives outlined in this review. This literature review thus serves as the foundation for an ongoing, critical discourse that is neither academic window-dressing nor speculative futurism. Instead, it is an urgent and critical inquiry into AI's role in academic research.

The aim of this discourse is not to reach a simple consensus but to foster a nuanced understanding that reconciles AI's capabilities with its limitations, ethical dilemmas, and epistemological challenges. This discussion must extend beyond the allure of new technology to grapple with the fundamental principles that underpin rigorous academic research. This is not merely an academic exercise; it is a prerequisite for any future integration of AI into academia. We hope that we have equipped the reader with the intellectual tools necessary to engage in the upcoming debate, as well as the broader discourse that is as complex as it is pivotal. The goal is to encourage not just contribution but also critical questioning and, where necessary, a critique of the ongoing narrative surrounding AI in academic research.

Introducing the debaters

I am Russell, a firm advocate for AI's transformative role in academia. My philosophy is simple – the essence of academia lies not in the medium of its output, but in its capacity to stimulate innovative thinking, generate new knowledge, and contribute to solving real-world educational problems. As such, AI is not a threat but a powerful ally in our mission. AI can help us transcend traditional constraints, catalyse innovative pedagogical practices, and drive high-impact research. Let us not mistake the medium for the message. Text or writing serves as a crucial vehicle for conveying research and insights, but they are not the heart of our mission. Our true essence lies in the creation and application of knowledge. So, let us leverage AI not as a mere tool, but as a partner in intellectual exploration and problem-solving.

I am Rachel, a critic of the adoption of AI in academia. My very identity as an academic researcher is bound up in my ability to think, read and write critically. Do I want to ship this out to AI? No – where would that leave me as an academic? I am willing to concede that AI could ably assist in some of my teaching functions – who wouldn’t want help with providing feedback to hundreds of students? But when it comes to my research, that is a different story. And yes, there is an important link between research and teaching. If I expect my students to write their own assignments, how could I then use AI to write my research articles? That would be absolute hypocrisy.

Debate

The debate proper explores four fundamental areas of academia, each currently witnessing transformative impacts brought about by AI. Before addressing these topics, we begin by establishing the context with a discussion on the purpose of academic research. Subsequently, we delve into the following practices: finding, reading, and evaluating information; writing; reviewing articles; and original thought. Each section will illustrate how AI is altering these academic practices. In conclusion, we shed light on our own identities as academics engaged in this vibrant discussion. Prepare to immerse yourself in a deep dive into the intriguing intersection of academia and artificial intelligence.

The purpose of academic research

Russell: I feel it's pertinent to start by considering the purpose of research specifically into higher education. We often measure the success of our research by the number of publications and citations it generates, and the prestige of the journals where it's published. But isn't the ultimate objective of our work to solve real-world problems and make an impact on people's lives? How did we get to this position where our focus appears to have been swayed towards academic publications, as opposed to impact? Is it due to the ‘publish or perish’ mantra that has taken root, setting the pace of scholarly life and often overshadowing the broader mission of academia – to contribute to society? This pressure has led many to view academic work as a private good, a commodity that can be traded for status, promotions, and grant funding. In contrast, the impact of research – its power to bring about change and betterment in society – is essentially a public good. It's a societal benefit that, while harder to quantify, holds far more value in the grand scheme of things.

I argue that our work as educators and researchers should be more service- than status-driven. Shouldn't our personal careers advance on the back of our work making a genuine difference in the world, rather than the other way around? By embracing AI and its transformative power, we could refocus our efforts on driving research that brings tangible improvements to educational mindsets and practices. We are now in an era where we can harness the prowess of AI, not just as a tool, but as an active partner in the scholarly process.

AI has the potential to manage operations related to research into higher education, design research programmes in this field, analyse data sets relevant to higher education, and support the thinking and writing processes of scholars in this area. This collaborative partnership reveals the potential of AI to augment the insights needed to inform and shape new practices and focal points in research into higher education.

Rachel: I am not just a researcher in higher education but a professor of higher education. How did I get there? By undertaking careful and methodical research, and yes, by ensuring my research was disseminated in appropriate channels. When seeking advice from a senior leader about my chance of promotion, what did that leader scrutinise? Not my personal narrative about my strengths and the impacts I had made, nor my teaching portfolio or the aspects of my CV which elaborated my research philosophy, grants obtained, supervision and so on. They focused on the number of research publications. So, I think Russell is correct when he says that we are bound up in a system driven by publishing, despite more recent research assessment rounds allowing us to include commentary on the impact of our research.

Although I agree with Russell’s pitch for our research to make a difference, I disagree with his ideas about AI and its role in research. While I can entertain the use of AI tools to assist in my research, I cannot, as Russell suggests, allow AI to be ‘an active partner in the scholarly process’. To do so would undermine my credibility as a researcher – it would no longer be my (or my team’s) ideas, or my ability to design and conduct research. I would not be the one analysing the data, or writing up the results. We can achieve the outcome Russell seeks – more impactful research – without invoking AI as a partner.

The practice of finding, reading and evaluating the literature

Rachel: Okay, I have a confession to make. I find the process of doing a literature review the most tedious aspect of the research process. But this does not mean I’m willing to hand over this aspect to AI. Finding, reading, evaluating and synthesising literature might be tedious, but it can also spark interest and enthusiasm when you learn more about the field or practice and how you might contribute. Moreover, doing a literature review is a necessary stage for researchers to go through to appreciate the breadth and depth of a topic, to map out the field, and to identify gaps in knowledge. We already have search engines to assist with finding literature, and librarians – human experts in this area – who can help ensure we are using the best search terms and strategies. We have referencing tools that help us manage our sources. And, yes, we need to read the articles ourselves – not AI. We can, as Inger Mewburn, aka the Thesis Whisperer, advises, ‘read like a mongrel’ (2011), meaning that we can scan articles to identify the pieces that are the most relevant. Most importantly though, we make judgements about the articles – do they draw on appropriate sources? Are they methodologically sound? Have they analysed their data and made conclusions in a convincing way? I doubt that AI can replicate the thinking that goes into evaluating articles.

Russell: The preservation of personal engagement in the literature review process, which Rachel rightly stresses, needs to be considered in light of the substantial time and energy it consumes. Rachel herself confesses to the tedious nature of this process, underscoring a paradox: Are we not diluting our intellectual potential by directing it towards procedural tasks, when it could be more effectively employed in deep analysis and synthesis? While I agree with Rachel on the crucial role of personal engagement in developing understanding, I suggest that this involvement is best served in the more intellectually demanding stages of the literature review. Rather than a preoccupation with information seeking, our intellectual capital would yield greater dividends in analysing targeted, relevant, extracted data. It is here that new insights, relationships, and patterns can be discovered, enriching our contributions to our respective fields (Atanassova et al., 2019).

Rachel’s emphasis on the irreplaceability of human judgement in the evaluation of literature quality and relevance underscores a significant issue. The allocation of significant intellectual resources in the early stages of a literature review, primarily triaging, seems an inefficient use of our expertise. Would our judgement not be more effectively applied in the meta-analysis phase, discerning patterns, relationships, and insights? This is where the transformative power of AI tools becomes evident. Tools like Rayyan, Dimensions, AskYourPDF, Elicit, Scite and DistillerSR, utilising AI, can automate the more laborious elements of a literature review. They are able to parse large volumes of data, providing targeted, relevant content for analysis (Irfan et al., 2019). The result is a shift in intellectual workload, focusing it where it is most effective – the higher-order thinking and intellectual exploration stage. AI tools are not just facilitators; they are essential for the modern academic. They allow us to push beyond traditional boundaries of knowledge, creativity, and innovation by freeing more time for meta-analysis, critical thinking, and exploration (Kitchin, 2014). The use of AI tools in this context is more than an act of keeping pace with an increasingly digitised academic landscape; it’s about harnessing the potential of technology to augment our capabilities and elevate academic pursuits (Lund et al., 2023).
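As a rough sketch of the triage such tools perform, the snippet below ranks candidate papers by cosine similarity to a research question using TF-IDF vectors. The titles and query are invented, and commercial tools use far richer representations than this.

```python
# A simplified illustration of literature triage: rank papers by
# similarity to a research question. Corpus and query are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {
    "Paper A": "AI-assisted screening for systematic literature reviews",
    "Paper B": "Soil moisture sensing with wireless networks",
    "Paper C": "Large language models for summarising research articles",
}

query = "using AI to automate literature reviews"

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(list(papers.values()) + [query])

# Compare the query (last row) against every paper, highest first.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for title, score in sorted(zip(papers, scores), key=lambda p: -p[1]):
    print(f"{title}: {score:.2f}")
```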

Embracing AI tools’ transformative potential is not about replacement but augmentation. They supplement our capabilities by automating routine tasks, thereby freeing us to focus on more critical and innovative thinking. The incorporation of these tools into the literature review process marks a significant paradigm shift in academia, where efficiency, relevance, and intellectual exploration take centre stage. As we tread the path of academia, it is clear that the utilisation of AI is a necessity, not an option, for the modern scholar.

The practice of academic writing

Russell: My relationship with the ChatGPT application while writing an article has evolved into a dynamic, intensive, and synergistic collaboration. This isn't a typical user-tool relationship, but rather a nuanced dance characterised by rapid, fluid interaction and co-creation that pushes the boundaries of traditional human–computer interaction. ChatGPT acts as an interactive sounding board for me, assisting in the development, refinement, and organisation of my ideas. As I interact with the AI, I don't merely receive cold, calculated responses, but participate in a dialogic form of co-construction. This form of communication involves the exchange of ideas, suggestions, edits, and feedback in real-time, in a manner that fuses human creativity, judgment, and strategic direction with the AI’s capability to process language.

Over the course of a single project, there may be hundreds of cycles of this human-AI interaction as I engineer prompts that will render what I need. And through this process, I've realised that I'm not just using a tool, I'm part of a dialogue. This discourse takes the form of ‘dialogic creation’ – while I am the architect sketching out my design, ChatGPT, as the builder, co-contributes to the process of identifying gaps, inconsistencies, or areas for improvement in the work. The most remarkable part is that this is not a one-way street. In fact, I've found myself learning from the AI, refining my design and even discovering new insights that I had previously overlooked. At the same time, I am actively engineering my interactions with the system in order to provide high-level directives, context, constraints, and clarifications. It's not just a transactional interaction, it's a real-time developmental process in which the AI learns from every directive I give, refining its subsequent responses and better adapting to my style, objectives, and content requirements. For me, this process is iterative, dynamic, and truly transformative.
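A minimal sketch of this iterative prompt-refine loop, using OpenAI's Python client, might look as follows. The model identifier, prompts, and two-cycle loop are placeholder assumptions (an API key is required), and this does not reproduce Russell's actual workflow.

```python
# A hedged sketch of a 'dialogic creation' loop with the OpenAI
# Python client. Prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "You are a critical reader of academic drafts."},
    {"role": "user",
     "content": "Identify gaps in this argument: AI improves research "
                "efficiency, therefore universities should adopt it."},
]

for _ in range(2):  # a couple of refinement cycles, for illustration
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # an assumed model identifier
        messages=messages,
    )
    answer = reply.choices[0].message.content
    print(answer[:200], "...")
    # Feed the response back with a follow-up directive, mirroring the
    # iterative human-AI exchange described above.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user",
                     "content": "Now suggest one counter-argument."})
```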

Rachel: Russell describes what he terms ‘dialogic creation’ with ChatGPT in his academic writing. But it isn’t really his writing though, is it? I love academic writing and see it as a core part of my academic identity – a link that is well documented in the literature. For example, Hyland (2002) posited that ‘Academic writing, like all forms of communication, is an act of identity: it not only conveys disciplinary “content” but also carries a representation of the writer’ (my emphasis added). My writing represents me as an academic.

I realise I am perhaps unusual in enjoying writing; I am aware for many it is not enjoyable. Moreover, for me, writing comes quite easily – again perhaps unusual. I have confidence in my writing, and I usually write in collaboration with co-authors, getting the benefit of other real minds providing me with feedback. Why then would I ever entertain the thought of using AI to help me write? Russell comments that AI can identify gaps, inconsistencies and areas of improvement – my co-authors (and ultimately reviewers) can do that, and most importantly I wouldn’t be having my identity as a researcher undermined. If I cannot write up my own work, then where does that leave me as an academic? But I do have a small confession to make – as I wrote this paragraph, I noticed some text had been underlined – I had used a spurious word and the word processor was telling me it would be clearer for readers if I deleted a word – which I then did. So, is it a matter of degrees? What artificial input am I willing to accept to improve my writing? Word processing suggestions yes, but wholesale use of ChatGPT suggesting rewrites of my work – a firm NO.

The practice of reviewing articles

Rachel: Reviewing journal articles is a core part of my academic work, which I see as a professional duty. Peer review benefits both journals (through helping editors to publish only quality and significant research) and authors (by getting feedback to improve the manuscript) (Fischer, 2010). But it also benefits me as a reviewer. As Mahmić-Kaknjo et al. (2021) noted, doing peer review allows reviewers: to keep up-to-date with advances in the field; to identify future directions for their research; to network with professional peers; and can assist in career development, since reviewing can count towards contributing to the research environment. Furthermore, engaging in peer review can support researcher identity development (Gardner & Willey, 2019). Regarding the latter point, through reviewing, early career researchers were able to benchmark their thinking by seeing other reviewers’ comments, and for some, they also had a sense of joining a professional community; one subsequently met the author at a conference, making an instant connection. Had AI done their review, they would not have this sense of development, nor would they make deep connections with peers.

As an established academic I admit that my appetite for peer review is wearing thin due to the constant haranguing from what is now a multitude of journals vying for my input. I find myself being far more selective in what I agree to take on. So, yes, the thought of farming out my reviewing to AI is very tempting, but I think it would do a disservice to the authors. Is AI more astute than I am? Could it recognise when seminal work had been missed? Can it ascertain if the authors have critically engaged with the literature? Can it spot a misalignment in aims, methods, findings and conclusions? Can it detect if the analysis is flawed? I’m not sure I want to hear Russell’s response to these questions.

Russell: Thank you for your thought-provoking stance, Rachel. Your inquisitive outlook perfectly highlights the essential debate between AI's capabilities and human expertise in academic review. You ask, ‘Is AI more astute than I am?’ Let's be clear: AI is not about surpassing human intellect. Instead, it leverages the power of machine learning and Natural Language Processing to complement and enhance our abilities, providing faster and more efficient access to information. However, it falls short in making critical judgment calls. AI's strength resides in its capacity to search for, gather, and succinctly summarise extensive amounts of data rapidly. But the acumen, judgment, and critical thinking are human faculties that are irreplaceable and indispensable to academic review.

That said, you've raised the issue of whether AI can recognise seminal work – yes, it certainly can. AI's potential to identify important works lies in its ability to rapidly cross-reference a multitude of sources, thereby distinguishing the degree to which the work has influenced or shaped a field of study. Tools like Elicit, Scite, Dimensions, Rayyan, DistillerSR and Covidence can detect frequently cited papers and thus potentially identify seminal works. However, the interpretive reading, understanding the broader context, and the appreciation of the intellectual impact of a work remains a human endeavour.
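One way such citation-based screening can be prototyped is against the public Crossref REST API, which exposes citation counts for indexed works; the query and output format below are illustrative, and citation counts alone are of course a crude proxy for seminal status.

```python
# A sketch of citation-count screening via the Crossref REST API
# (no API key required). The query is an invented example.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "artificial intelligence higher education",
            "rows": 5,
            "sort": "is-referenced-by-count",
            "order": "desc"},
    timeout=30,
)
for item in resp.json()["message"]["items"]:
    title = item.get("title", ["(untitled)"])[0]
    cites = item.get("is-referenced-by-count", 0)
    print(f"{cites:>6}  {title[:70]}")
```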

Similarly, detecting a misalignment between aims, methods, findings, and conclusions, or discerning whether an analysis is flawed, remains an emerging capability. Currently it requires well-engineered prompts, though such detection is widely expected to mature in the near future. AI can already identify structural inconsistencies, repetition, or a lack of cohesion between different parts of a paper, hence prompting a closer human examination of these aspects.
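As a speculative sketch of how such alignment checks might be prototyped today, the snippet below compares the semantic similarity of a stated aim and a conclusion using sentence embeddings; a low score would merely prompt human scrutiny, not deliver a verdict. The model choice, example sentences, and threshold are all assumptions.

```python
# A speculative prototype of an aims/conclusions alignment check
# using the `sentence-transformers` library. Threshold is arbitrary.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

aim = "We aim to measure the effect of AI tutoring on exam performance."
conclusion = "Our findings concern student attitudes toward group work."

score = util.cos_sim(model.encode(aim), model.encode(conclusion)).item()
print(f"aim-conclusion similarity: {score:.2f}")
if score < 0.4:  # arbitrary illustrative threshold
    print("Possible misalignment - flag for human review.")
```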

On your question about AI's capability to gauge whether authors have critically engaged with the literature, AI does indeed have the capacity to detect patterns and identify common themes and gaps in the literature. But critical engagement with the literature, the synthesis of various viewpoints, and judgement of the depth and breadth of that engagement remain difficult for AI to assess.

While I argue that AI's role in the academic review process is profound and transformative, it is not without its limitations. My position, therefore, is not to replace human reviewers with AI but to incorporate AI as a tool that can assist us in making the review process more efficient. The human element is integral to the process of academic review, and it is through a collaborative approach between human reviewers and AI that we can achieve the best outcomes.

AI can also play a role in mitigating bias in the review process. We know review bias occurs, both unintentionally and intentionally, stemming from the reviewer's inclinations. By providing an initial objective assessment, AI tools can help reduce such biases, ensuring a more equitable review process. This does not negate the need for human reviewers, but it does offer a potential check and balance that could improve the fairness and quality of reviews. Moreover, given the speed at which AI and machine learning technologies are advancing, we might witness AI systems that can understand context, semantics, and even nuance to an extent previously thought to be solely within the human domain. These advances could allow AI to provide deeper, more insightful analysis and feedback during the review process, further enhancing its utility as an assistant to human reviewers.

But I agree, Rachel, it's essential that AI is not, and should not be, the final word. Its power is its ability to rapidly triage articles to highlight potential issues, leaving the more in-depth and nuanced feedback to humans. As you said, Rachel, the connections formed, the development of researcher identity, and the satisfaction of contributing to one's field are elements that AI can't replicate. So, I concur that AI offers a tool for augmentation rather than replacement (Hassani et al., 2020).

The practice of original thought

Russell: Rachel, I am sure we share the relentless pursuit of original thought, which is undoubtedly a cornerstone of our scholarly endeavours. Given the ever-accelerating pace of technological advancements, the data deluge, and the rapid expansion of our global knowledge base, I view AI as a unique opportunity rather than an insurmountable challenge. In this context, AI emerges not as an adversary but as a powerful ally, a catalyst in our collective quest for original thought (Bubeck et al., 2023). AI also shoulders the overwhelming task of managing and processing large data volumes. Advanced tools such as AskYourPDF, Papers, and Elicit facilitate rapid understanding and sifting through extensive literature, enabling researchers to identify gaps and fostering novel hypothesis generation (Xu et al., 2021).

In our prior conversation, Rachel, you rightly highlighted the significance of intellectual engagement and critical analysis in academic research. While AI's capabilities may not fully match these human attributes, it is steadily demonstrating potential beyond mere data processing. Machine learning advancements and Natural Language Processing have empowered AI models, like OpenAI's GPT-4, to generate content that, while not yet rivalling human ingenuity, serves as an intellectual springboard, inspiring original thought. Indeed, critics argue that AI's algorithms simulate originality, confined by their programming and training data. Yet, we must acknowledge that human cognition is likewise shaped by cultural, educational, and personal experiences. Thus, rather than anthropomorphising AI, we should leverage its computational prowess and pattern recognition to enhance our cognitive processes. So, while AI may not replicate the invaluable human touch in academia – the intellectual engagement, the critical analysis, the personal connections – we can enhance these aspects by using AI as a collaborator, a helper, and a catalyst for thought. I believe by embracing AI in our academic pursuits we can illuminate new intellectual paths and spur original ideas, propelling our collective quest for knowledge forward.

Rachel: Alright Russell – I think I am coming around. I am feeling a little more comfortable knowing that we agree on the need for academics to be the masters of original thought. But, given your comments about the rapid developments in AI, how long before we are working with AI that can indeed match (or surpass) our intellect? I suspect I am going to remain perturbed for my foreseeable academic future. In the meantime, I am willing to concede that perhaps I should harness the power of AI to assist in aspects of my research. I could use AI to help find and summarise literature, and to identify possible research gaps. Maybe I will even use AI to review my draft writing, but I am still not willing to co-partner in the writing process, as I do think this would undermine my academic identity. But am I advantaged as an academic because writing comes so easily to me, and English is my first language? I can see that allowing peers who struggle with academic writing in English to embrace the power of AI in assisting their writing might level the academic playing field. For, as Russell so rightly espouses, it is so important that our scholarly endeavour benefits society.

Concluding comments

In this lively dialectic, we have dissected the multifaceted perspectives surrounding the role of AI in academic research. The debate has been far from superficial, challenging foundational beliefs and ethical considerations that underpin academic practice. Russell embodies the potential of AI, invoking the optimism often associated with technological futurists, while Rachel stands as a sentinel, cautioning against the uncritical adoption of AI, thereby echoing the scepticism of academic purists.

Rachel's nuanced position exemplifies the quintessential dilemma of the modern academic – eager to innovate but cautious of the unforeseen consequences. Her willingness to embrace AI for certain tasks, such as literature reviews, indicates an openness to change. However, her hesitance to let AI into the realm of academic writing speaks to a deeper ideological conflict, one that questions the very essence of human intellectual contribution. This is not an isolated sentiment but rather a reflection of the broader academic community's struggle to define the boundaries of AI's influence. Rachel's concerns extend to the peer review process, a cornerstone of academic integrity. She grapples with the idea of AI-assisted reviews, pondering whether algorithmic neutrality can ever replace human intuition and ethical judgement.

Russell, on the other hand, represents a view that sees AI as more than a mere tool. To him, AI is a transformative force with the potential to revolutionise how we conduct research, write, and even think. Unlike Rachel, Russell sees no sacrilege in allowing AI to participate in academic writing or peer review. Instead, he views AI as a partner in intellectual pursuits, capable of eliminating bias and augmenting human capabilities. His perspective adds a layer of complexity to the debate, challenging us to rethink our own biases and assumptions about the role of technology in academic life. However, it is crucial to point out that Russell's optimism is not blind; it is based on a calculated understanding of AI's capabilities and limitations.

The contrasting perspectives offered by Rachel and Russell serve as microcosms of the larger debate within higher education. Both sides bring valid points to the table – AI promises efficiency and data-driven insights but also raises ethical, epistemological, and ontological concerns. This is not a debate about the future; it is a debate about the present. AI is already here, transforming various facets of academic life, from research methodologies to administrative procedures. The pressing question now is not whether to integrate AI, but how to do it in a way that aligns with our core academic values and ethical commitments. The discussion around AI in academia cannot be a monologue; it must be a dialogue that involves not just researchers and academics but also ethicists, policymakers, and students. As technology evolves, so too will its ethical and practical implications. Therefore, conversations like the one sparked by this debate must continue to occur in academic circles, policy discussions, and even casual conversations among stakeholders. It is not about reaching a final verdict, but about keeping the dialogue open, critical, and informed.

As academics, we have a collective responsibility to shepherd this technological revolution in a direction that is consistent with our values. Whether we lean towards Russell's enthusiastic endorsement, align with Rachel's cautious scepticism, or find a different path altogether, the choices we make today will shape the academic landscape for generations to come. The challenge, then, is to maintain an ongoing, adaptive dialogue that allows us to continually reassess and redefine our relationship with AI as it evolves. This is not just a technological imperative, but an ethical, epistemological, and existential one as well.

Acknowledgements

We would like to acknowledge the use of Dimensions, Scite, Elicit and ChatGPT (OpenAI).

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Abd-Elsalam, K. A., & Abdel-Momen, S. M. (2023). Artificial intelligence's development and challenges in scientific writing. Egyptian Journal of Agricultural Research, 101(3), 714–717.
  • Anis, S., & French, J. A. (2023). Efficient, explicatory, and equitable: Why qualitative researchers should embrace AI, but cautiously. Business & Society, 62(6), 1139–1144. https://doi.org/10.1177/00076503231163286
  • Atanassova, I., Bertin, M., & Mayr, P. (2019). Mining scientific papers: NLP-enhanced bibliometrics. Frontiers Media SA. https://www.frontiersin.org/articles/10.3389/frma.2019.00002/full
  • Bakken, S. (2019). The journey to transparency, reproducibility, and replicability. Journal of the American Medical Informatics Association, 26(3), 185–187. https://doi.org/10.1093/jamia/ocz007
  • Beretta, V., Desconnets, J.-C., Mougenot, I., Arslan, M., Barde, J., & Chaffard, V. (2021). A user-centric metadata model to foster sharing and reuse of multidisciplinary datasets in environmental and life sciences. Computers & Geosciences, 154, 104807. https://doi.org/10.1016/j.cageo.2021.104807
  • Bishop, J. M. (2021). Artificial intelligence is stupid and causal reasoning will not fix it. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2020.513474
  • Brooks, R. A. (2021). A human in the loop: AI won't surpass human intelligence anytime soon. IEEE Spectrum, 58(10), 48–49. https://doi.org/10.1109/MSPEC.2021.9563963
  • Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. MIT Press.
  • Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Tulio Ribeiro, M., & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712. Retrieved March 01, 2023, from https://ui.adsabs.harvard.edu/abs/2023arXiv230312712B
  • Burger, B., Kanbach, D. K., Kraus, S., Breier, M., & Corvello, V. (2023). On the use of AI-based tools like ChatGPT to support management research. European Journal of Innovation Management, 26(7), 233–241. https://doi.org/10.1108/EJIM-02-2023-0156
  • Bzdok, D., Nichols, T. E., & Smith, S. M. (2019). Towards algorithmic analytics for large-scale datasets. Nature Machine Intelligence, 1(7), 296–306. https://doi.org/10.1038/s42256-019-0069-5
  • Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(1), 1–11. https://doi.org/10.1057/s41599-020-00703-8
  • Christou, P. A. (2023). How to use Artificial Intelligence (AI) as a resource, methodological and analysis tool in qualitative research? The Qualitative Report.
  • Chubb, J., Cowling, P. I., & Reed, D. (2021). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society. https://doi.org/10.1007/s00146-021-01259-0
  • Dafoe, A. (2013). Science deserves better: The imperative to share complete replication files. PS: Political Science & Politics, 47(1), 60–66. https://doi.org/10.1017/S104909651300173X
  • Dreyfus, H. L., & Dreyfus, S. E. (1988). Mind over machine: The power of human intuition and expertise in the era of the computer. IEEE Expert, 2(2), 110–111. https://doi.org/10.1109/MEX.1987.4307079
  • Feuston, J. L., & Brubaker, J. R. (2021). Putting tools in their place: The role of time and perspective in human-AI collaboration for qualitative analysis. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–25. https://doi.org/10.1145/3479856
  • Fischer, C. C. (2010). A value-added role for reviewers in enhancing the quality of published research. Journal of Scholarly Publishing, 42(2), 226–237. https://doi.org/10.3138/jsp.42.2.226
  • Ganguly, N., Fazlija, D., Badar, M., Fisichella, M., Sikdar, S., Schrader, J. r., Wallat, J., Rudra, K., Koubarakis, M., Patro, G. K., Amri, W. Z. E., & Nejdl, W. (2023). A review of the role of causality in developing trustworthy AI systems. ArXiv, abs/2302.06975.
  • Gardner, A., & Willey, K. (2019). The role of peer review in identity development for engineering education researchers. European Journal of Engineering Education, 44(3), 347–359. https://doi.org/10.1080/03043797.2018.1500526
  • Hassani, H., Silva, E. S., Unger, S., TajMazinani, M., & Mac Feely, S. (2020). Artificial intelligence (AI) or intelligence augmentation (IA): What is the future? AI, 1(2), 143–155. https://www.mdpi.com/2673-2688/1/2/8
  • Hey, T. J. G., & Hooper, V. (2020). AI3SD video: AI for science: Transforming scientific research.
  • Hui, G. (2020). Artificial intelligence and the future of labour demand.
  • Hyland, K. (2002). Genre: Language, context, and literacy. Annual Review of Applied Linguistics, 22(1), 113–135. https://doi.org/10.1017/S0267190502000065
  • Irfan, R., Rehman, Z., Abro, A., Chira, C., & Anwar, W. (2019). Ontology learning in text mining for handling big data in healthcare systems. Journal of Medical Imaging and Health Informatics, 9(4), 649–661. https://doi.org/10.1166/jmihi.2019.2681
  • Juarez-Orozco, L. E., Martinez-Manzanera, O., Storti, A. E., & Knuuti, J. (2019). Machine learning in the evaluation of myocardial ischemia through nuclear cardiology. Current Cardiovascular Imaging Reports, https://doi.org/10.1007/s12410-019-9480-x
  • Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 205395171452848. https://doi.org/10.1177/2053951714528481
  • Korac-Kakabadse, N., Korac-Kakabadse, A., & Kouzmin, A. (1999). Dysfunctionality in “citizenship” behaviour in decentralized organizations. Journal of Managerial Psychology, https://doi.org/10.1108/02683949910292132
  • Kousha, K., & Thelwall, M. A. (2023). Artificial intelligence to support publishing and peer review: A summary and review. Learned Publishing.
  • Kusters, R., Misevic, D., Berry, H., Cully, A., Cunff, Y. L., Dandoy, L., Díaz-Rodríguez, N., Ficher, M., Grizou, J., Othmani, A., Palpanas, T., Komorowski, M., Loiseau, P., Frier, C. M., Nanini, S., Quercia, D., Sebag, M., Fogelman, F. S., Taleb, S., … Wehbi, F. E. Z. (2020). Interdisciplinary research in artificial intelligence: Challenges and opportunities. Frontiers in Big Data, https://doi.org/10.3389/fdata.2020.577974
  • Liaw, S.-T., Liyanage, H., Kuziemsky, C. E., Terry, A. L., Schreiber, R., Jonnagaddala, J., & de Lusignan, S. (2020). Ethical use of electronic health record data and artificial intelligence: Recommendations of the primary care informatics working group of the international medical informatics association. Yearbook of Medical Informatics, 29(1), 51–57. https://doi.org/10.1055/s-0040-1701980
  • Ligo, A. K., Rand, K., Bassett, J., Galaitsi, S. E., Trump, B. D., Jayabalasingham, B., Collins, T., & Linkov, I. (2021). Comparing the emergence of technical and social sciences research in artificial intelligence. Frontiers of Computer Science.
  • Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
  • Mahmić-Kaknjo, M., Utrobičić, A., & Marušić, A. (2021). Motivations for performing scholarly prepublication peer review: A scoping review. Accountability in Research, 28(5), 297–329. https://doi.org/10.1080/08989621.2020.1822170
  • Mewburn, I. (2011). Reading like a mongrel. Thesis Whisperer Blog. https://thesiswhisperer.com/2011/03/08/reading-like-a-mongrel/
  • Müller, H., Pachnanda, S., Pahl, F. B., & Rosenqvist, C. (2022). The application of artificial intelligence on different types of literature reviews - A comparative study. 2022 International Conference on Applied Artificial Intelligence (ICAPAI), 1–7.
  • Neyedli, H. F., Hollands, J. G., & Jamieson, G. A. (2011). Beyond identity: Incorporating system reliability information into an automated combat identification system. Human Factors: The Journal of the Human Factors and Ergonomics Society, https://doi.org/10.1177/0018720811413767
  • Nguyen-Trung, K., Saeri, A. K., & Kaufman, S. (2023). Applying ChatGPT and AI-powered tools to accelerate evidence reviews. https://doi.org/10.31219/osf.io/pcrqf
  • Oren, O., Gersh, B. J., & Bhatt, D. L. (2020). Artificial intelligence in medical imaging: Switching from radiographic pathological data to clinically meaningful endpoints. The Lancet Digital Health, https://doi.org/10.1016/s2589-7500(20)30160-6
  • Pal, S. (2023). A paradigm shift in research: Exploring the intersection of artificial intelligence and research methodology. International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences, 11(3). https://doi.org/10.37082/IJIRMPS.v11.i3.230125
  • Pividori, M. D., & Greene, C. S. (2023). A publishing infrastructure for AI-assisted academic authoring. bioRxiv.
  • Ryan, M., Antoniou, J., Brooks, L. D., Jiya, T., Macnish, K., & Stahl, B. C. (2021). Research and practice of AI ethics: A case study approach juxtaposing academic discourse with organisational reality. Science and Engineering Ethics, 27(2). https://doi.org/10.1007/s11948-021-00293-x
  • Sathya, K. B. S., Jebamani, B. J. A., & Fowjiya, S. (2022). Deep learning. https://doi.org/10.4018/978-1-6684-6001-6.ch001
  • Sirén, C. (2012). Unmasking the capability of strategic learning: A validation study. The Learning Organization, https://doi.org/10.1108/09696471211266983
  • Sloane, M., & Moss, E. (2019). AI’s social sciences deficit. Nature Machine Intelligence, 1(8), 330–331. https://doi.org/10.1038/s42256-019-0084-6
  • Tauchert, C., Bender, M., Mesbah, N., & Buxmann, P. (2020). Towards an integrative approach for automated literature reviews using machine learning. https://doi.org/10.24251/hicss.2020.095
  • Tozzi, A. E., & Cinelli, G. (2021). Informed consent and artificial intelligence applied to RCT and Covid-19.
  • Xie, T., Pentina, I., & Hancock, T. (2023). Friend, mentor, lover: Does chatbot engagement lead to psychological dependence? Journal of Service Management, https://doi.org/10.1108/josm-02-2022-0072
  • Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X., Wu, Y., Dong, F., & Qiu, C.-W. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. https://doi.org/10.1016/j.xinn.2021.100179