
Student perceptions of peer cheating behaviour during COVID-19 induced online teaching and assessment

Pages 966-980 | Received 23 Jan 2023, Accepted 18 Aug 2023, Published online: 28 Sep 2023

ABSTRACT

In this article we report on a study of higher education students’ (N = 256) perceptions of their peers’ willingness to cheat, pressure to cheat, and frequency of cheating in online assessments at an Australian university in Singapore during the COVID-19 induced Online Teaching and Assessment period (COTA). MANOVA was used to identify differences in perception between COTA and In-Person Teaching and Assessment (IPTA), as well as differences between academic disciplines and stages of study. The findings demonstrate that students perceived an increase across all areas of online cheating during COTA, and that these perceptions varied significantly by discipline but not by stage of study. Inductive qualitative thematic analysis was then used to explore the reasons behind the perceived increases, identifying themes related to anonymity, material access, pressure to achieve, lack of consequences, and peer group access. The implications of this research offer deeper insight into assessment security, design, and student concerns during emergency online teaching periods, which can inform future institutional policies.

Introduction

Despite the easing of COVID-19 era pandemic measures in many parts of the world, at the time of writing, online distance learning has remained above pre-pandemic levels (Middleton, Citation2022). Consequently, changes in teaching and assessment are underway as part of a long-term global shift to virtual learning (García-Morales et al., Citation2021). Even as in-person learning resumes, blended learning has been described as a ‘mandatory component’ of face-to-face instruction post-pandemic (Turnbull et al., Citation2021).

The rapid shift to what we call COVID Online Teaching and Assessment (COTA) has led to concerns among educators on how to safeguard academic integrity (Gamage et al., Citation2020). Safeguarding academic integrity is essential, as dishonest behaviours during study can result in graduates joining the workforce with a negatively impacted ability to learn (Carpenter et al., Citation2006), and students who self-report as likely to engage in academically dishonest behaviours express the same likelihood of acting dishonestly in other areas of life (Guerrero-Dib et al., Citation2020). As a result, protecting academic integrity is important for maintaining trust in Higher Education Institutions (HEIs) (Sefcik et al., Citation2020) which is increasingly questioned by the public (Roe, Citation2023). Therefore, in addition to preventing violations through education, training, and detection, there is also a need to understand students’ perceptions of cheating, as peer pressure has been found to influence cheating in online assessments (Akbulut et al., Citation2008), and students who believe that peers are cheating without consequence may be more likely to cheat (Ives et al., Citation2017).

This research aims to understand the impact of COTA on students’ perceptions of cheating among their peers. We report our findings from a survey (N = 256) at a private Australian university campus in Singapore which explored students’ perceptions of peers’ willingness to cheat, pressure that students felt to cheat, and students’ perceived frequency of cheating behaviours during the COTA period, building on the research of Walsh et al. (Citation2021). We undertake the first study in this sociocultural and geographical area to explore attitudes towards cheating during COTA and the first study to explore differences across both stages of study and discipline of study using mixed MANOVA. This study furthers the field by providing data for comparison regarding the effects of COTA on student perceptions of cheating in the context of a Singapore-Australia private HEI.

Defining academic integrity, academic dishonesty and cheating

We define academic integrity as participants’ dedication to norms and values, including honesty, fairness, trust, respect, and responsibility (Lynch et al., Citation2021), while we classify academic dishonesty as intentional actions which seek to violate the code of academic integrity. Clarity is required, as multiple factors can lead to unintentional violations of academic integrity (Amigud, Citation2020), and there are differences between institutions’ and individuals’ definitions of academic dishonesty (Parkinson et al., Citation2022).

Our investigation focused on academic cheating in online assessments during COTA. The definition of cheating used in this study, and communicated to survey respondents in the pre-survey screening and in the survey itself, was a simplified version of Cizek’s (Citation2004) definition as follows: ‘“Cheating” means an intentional behaviour that breaks the rules of a test or assignment, and gives a student an unfair advantage over others’.

Academic cheating in the COVID-19 pandemic

The COVID-19 pandemic led to the migration of much of society onto online platforms to allow for physical distancing (Adedoyin & Soykan, Citation2020). At the time of writing, many of the restrictions globally, and in Singapore, have been lifted.

Before the pandemic led to the need for COTA, it was argued that online courses should deter students from cheating due to more flexible study schedules (Grijalva et al., Citation2006; Stuber-McEwen et al., Citation2009). It has been claimed that concerns about increases in academic cheating in online versus face-to-face courses are based more on anecdotal assumptions than on empirical evidence (Beck, Citation2014; Fuller et al., Citation2020). However, Eaton (Citation2020) contends that the apparent deterrent effect of online learning reflects the fact that pre-pandemic studies tended to focus on voluntary online classes with a more mature student base. Lanier (Citation2006) found that cheating was more prevalent in online courses, with Watson and Sottile (Citation2010) finding that students were four times more likely to cheat in online classes than in person.

While more investigation among specific student populations is needed (Holden et al., Citation2021), there is evidence that online assessments provide more opportunities for academic cheating (Miller & Young-Jones, Citation2012). Online assessment may result in increased plagiaristic behaviours (Clarke et al., Citation2023), while access to the Internet for students has been described as precipitating a loss of control over assessment integrity (St-Onge et al., Citation2022).

This explains why tools to deter online cheating were in development prior to the global periods of COTA (Chuang et al., Citation2017; Diedenhofen & Musch, Citation2017). However, just as technological solutions have been developed to combat cheating, other tools continue to emerge which allow students more opportunities to cheat, including personal devices which may be used in invigilated exams (Luck et al., Citation2022), AI-powered writing assistants (Roe, Renandya & Jacobs, Citation2023) and use of machine tools to engage in back translation (Jones & Sheridan, Citation2015) leading to a ‘technological arms race’ scenario (Roe & Perkins, Citation2022). This has led to speculation on the future of academic integrity in light of the recent advancements in Generative Artificial Intelligence tools, such as ChatGPT (Cotton et al., Citation2023; Perkins, Citation2023), especially given the challenges with technological methods to detect the use of these tools (Perkins et al., Citation2023; Sadasivan et al., Citation2023).

During the COTA period, Jenkins et al. (Citation2022) found an increase in first-time cheating among psychology students (N = 214), attributable to increased stress and pressure. Daniels et al. (Citation2021) found that among a group of Canadian undergraduate students (N = 98) engagement and goals for study achievement decreased during COTA, and Lancaster and Cotarlan (Citation2021) reported an increase in requests for assistance in contract cheating in the early stages of the pandemic. During COTA in Australia, Henderson et al. (Citation2022) found that 216 of 7,839 surveyed students self-reported cheating on examinations, and that use of disallowed materials and seeking the help of others were the most common methods of cheating.

Student perceptions of academic cheating during the COVID-19 pandemic

In a study focused on beliefs and perceptions regarding cheating, King et al. (Citation2009) found that 73.6% of students (N = 121) believed that it was easier to cheat online than in traditional classes. During a COTA period, Walsh et al. (Citation2021) found that in the United States, 81% of STEM students surveyed (n = 299) believed that cheating had increased in frequency in online learning as opposed to face-to-face learning.

In Bahrain, Baniamer and Muhamed (Citation2022) found that students believed that most online cheating occurred by saving material on separate devices for assessments and by using several browsers to locate data. Regarding examinations, Reedy et al. (Citation2021) found that among 1921 students in Australia, 52.27% believed that there was no significant difference in cheating between a traditional examination and an alternative assessment or examination conducted over the Internet. In contrast, academic staff believed the opposite, perceiving that student cheating was more common and easier in online examinations and assessments. In Israel, Amzalag et al. (Citation2021) found that students did not perceive cheating to be higher in online examinations, while educators perceived the opposite. These findings suggest that more research is required to understand the differences in perceptions between faculty members and students.

Materials and methods

The aim of this research was to understand the attitudes of students from different disciplines and stages of study towards peer cheating during COTA at an Australian university campus in Singapore. As classes gradually moved back to face-to-face mode in the first quarter of 2022, we surveyed students in order to:

  1. Understand whether students perceived that their peers were more willing to cheat, felt more pressure to cheat, or engaged in cheating behaviours more frequently during the COTA period, and whether they believed these changes would continue in the future, regardless of assessment type.

  2. Identify whether significant variations in these perceptions exist between the stages and disciplines of study.

  3. Investigate why students felt that changes in cheating attitudes and behaviours occurred.

We adopted a mixed-methods approach to answer our research questions. The first and second research questions are primarily answered by quantitative data and statistical analysis, whereas the third is answered by the analysis of qualitative data. Data was collected from currently enrolled students using a survey comprising 12 closed-ended questions exploring respondent demographics and Research Questions 1 and 2, and one open-ended qualitative text entry question to address Research Question 3. A version of Walsh et al.’s (Citation2021) survey (delivered to 299 STEM students across 31 institutions in the United States) was used, but modified to explore additional areas and geared towards students from a single institution in Singapore, therefore enabling an assessment of differences between students based on discipline and stage of study.

Ethical approval for the survey and its distribution was provided by the institution’s ethical review board, and the survey link was distributed via email to all enrolled students. The list of questions asked in the survey are shown in Appendix 1.

The choice to focus on student perceptions of peer cheating, as opposed to asking for self-reported instances of cheating, was made to enable a sensitive investigation of the challenging environment faced by students during COTA. Although the perceptions of others may not be fully accurate, they still provide valuable insights into a community's norms and values which will be shaped by the respondent’s own attitudes, behaviours, and experiences. For example, a respondent who may have engaged in cheating behaviours may perceive that cheating is common among peers, which allows for an understanding of instances of cheating behaviours without directly asking respondents whether they engaged in these behaviours themselves.

Although the move to distance education meant that interactions between students were more limited or structured, respondents may have witnessed or heard about instances of cheating, and students may form perceptions based on their experiences in the online environment, such as discussions, group work, observed behaviours, and online social interactions. For example, a student might notice a peer consistently performing well on online tests but struggling in discussions or other interactive elements of the course. Although these perceptions may be more accurate during IPTA, where respondents may be able to draw on more concrete evidence of cheating behaviours (such as witnessing a peer looking at another student's paper), valid perceptions can still be gathered in an online environment.

The survey did not describe specific assessment types but referred to all online assessment methods used during COTA. In Singapore, this began in early 2020 and was gradually phased out as the pandemic eased, with a full return to IPTA by March 2023. While we recognise that assessment methods may have a significant effect on cheating behaviours, and including this separation may have provided more nuanced insight into the subject matter, the decision was made to view all assessments together rather than investigate the differences between different assessment types. The rationale for this is to provide a holistic view of cheating behaviours across all types of assessment and to ascertain overarching trends or patterns in students’ views of cheating behaviours.

The independent within-subjects factor of the quantitative analysis was the mode of teaching and assessment employed (IPTA or COTA), and the independent between-subject factors were the stages and disciplines of study. The stages of study explored included pre-university, undergraduate (first, second, and third years), and postgraduate study (Master’s and Doctoral levels). Disciplines of study consisted of Non-degree (including Foundation and English for Academic Purposes (EAP) programmes), Business (including Business, Tourism & Hospitality), Sciences (including all Science and Information Technology programmes) and Psychology/Health (including Psychology, Health, and Education programmes), mirroring the arrangement of subjects across the Schools of the institution.

The dependent variables (DVs) examined were how much respondents believed their peers were willing to cheat, felt pressured to cheat, or engaged in cheating behaviours.

Results and discussion

Prior to data analysis, the data were cleansed to remove students who were not located in Singapore and those who took <90 s to complete the survey. A total of 246 valid responses were obtained for the quantitative questions, and 172 for the qualitative question.
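The screening step described above can be sketched as a simple filter. The record structure and field names (`location`, `duration_seconds`) are hypothetical illustrations, as the actual Qualtrics export schema is not described in the article.

```python
# Illustrative sketch of the data-cleansing step: keep only respondents
# located in Singapore who spent at least 90 seconds on the survey.
# Field names are hypothetical; the real export schema may differ.

def cleanse(responses, min_seconds=90):
    """Return only valid responses under the two screening rules."""
    return [
        r for r in responses
        if r.get("location") == "Singapore"
        and r.get("duration_seconds", 0) >= min_seconds
    ]

raw = [
    {"id": 1, "location": "Singapore", "duration_seconds": 312},
    {"id": 2, "location": "Singapore", "duration_seconds": 45},   # too fast
    {"id": 3, "location": "Malaysia",  "duration_seconds": 200},  # not in Singapore
]
valid = cleanse(raw)
print([r["id"] for r in valid])  # → [1]
```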

Table 1 summarises the responses following data cleansing across the independent between-subject factors of study stage and discipline.

Table 1. Frequency of responses across the stages and disciplines of study.

Table 2 compares the mean values obtained for all dependent variables across the two modes of teaching: IPTA and COTA. Responses ranged from 1 to 5, with higher responses indicating an increase in the DVs. Data are expressed as mean ± standard deviation for IPTA and COTA.

Table 2. Mean responses across study DVs.

Quantitative results

Mixed MANOVA was used to analyse the effect of teaching context on student perceptions of cheating across the six disciplines and five stages of study. The two teaching modes assessed were IPTA (in-person teaching and assessment before COVID-19) and COTA (online teaching and assessment during COVID-19).

Assumption checking

Preliminary assumption checking was done to evaluate normality, outliers, multicollinearity, linearity, homogeneity, and sphericity. Shapiro-Wilk tests indicated non-normal data, but Q-Q plots and z-scores of skewness and kurtosis indicated acceptably normal data. No univariate outliers were found due to single-item Likert-type dependent variables. No multivariate outliers (Cook’s distance <1) or multicollinearity (all VIF values <5) were observed, and the scatterplots indicated linear relationships. Pillai’s trace was used to interpret results due to unequal sample sizes among groups.
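As an illustration of the skewness/kurtosis z-score check mentioned above, the following sketch uses the common large-sample standard errors (√(6/n) for skewness, √(24/n) for excess kurtosis) and a ±1.96 cut-off. This is one conventional rule of thumb, not necessarily the exact procedure applied in the study.

```python
import math

def skewness(xs):
    """Population (biased) Fisher-Pearson skewness."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def excess_kurtosis(xs):
    """Population (biased) excess kurtosis (normal distribution = 0)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3.0

def normality_z_scores(xs):
    """Approximate z-scores using the large-sample standard errors
    sqrt(6/n) for skewness and sqrt(24/n) for excess kurtosis."""
    n = len(xs)
    return (skewness(xs) / math.sqrt(6 / n),
            excess_kurtosis(xs) / math.sqrt(24 / n))

# A perfectly symmetric sample has zero skew; |z| < 1.96 is
# often read as acceptably normal at this sample size.
z_skew, z_kurt = normality_z_scores([1, 2, 2, 3, 3, 3, 4, 4, 5])
print(abs(z_skew) < 1.96 and abs(z_kurt) < 1.96)  # → True
```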

Discipline of study

Examining the discipline of study, Box’s M test (p = .001) and Levene’s test (p > .05) were used to assess homogeneity. No significant differences were found between disciplines (F(9,702) = 1.765, p = .071). However, a significant difference was observed in the mode of study (IPTA vs. COTA) (F(9,702) = 23.063, p < .001; partial η2 = .230). The analysis showed that respondents felt that their peers were more likely to cheat during COTA than IPTA, with increased cheating pressure and behaviours. A significant interaction effect between mode and disciplines of study (F(3,232) = 2.412, p = .011; Pillai’s trace = .90; partial η2 = .030) indicated the need for follow-up univariate tests. Post-hoc Tukey’s HSD tests were used to examine the differences between disciplines in the three DVs. Business students (2.51 ± 1.14) perceived less cheating willingness compared to Psychology/Health (3.27 ± 1.18, p < .001) and Science students (3.00 ± 1.20, p = .044) during COTA. Business students (2.42 ± 1.13) also reported lower cheating behaviour perceptions than Psychology/Health (3.17 ± 1.12, p < .001) and Science students (3.00 ± 1.20, p = .044). Non-degree students (2.50 ± 1.21) perceived lower cheating levels than Psychology & Health students during COTA (3.17 ± 1.21, p = .045).

Stage of study

Exploring the stage of study, homogeneity of variances was confirmed using Levene’s Test of Homogeneity of Variance (p > .05), but Box’s M test indicated that homogeneity of variance-covariance matrices did not hold (p ≤ .001). Mixed MANOVA remained suitable since all group sizes were over 30 (Allen & Bennett, Citation2008).

The differences between stages of study in the scores across all DVs were not significant (F(6,466) = 1.765, p = .740; Pillai’s trace = .015; partial η2 = .008). No interaction was found between the mode and stage of study (F(6,466) = .721, p = .632; Pillai’s trace = .018; partial η2 = .009). Of the students, 20.3% thought that cheating would be more frequent if teaching and assessment were online/blended, 29.3% thought it would remain the same, and 29.8% thought it would be less frequent. 40.6% of students thought cheating was less acceptable in online classes, 51.1% thought there was no difference, and 8.2% thought it was more acceptable. The quantitative results indicate a need for further analysis through qualitative techniques to understand recurrent themes contributing to the increase in cheating perceptions and behaviours, and to enrich our understanding of the data.

Qualitative results

Q10 of the survey asked students to explain the reasons for the perceived rise in cheating. 172 responses (3,087 words) were collected, with an average of 17.9 words per response. Thematic analysis was used based on its record of successful use to generate insights into the effects of the COVID-19 pandemic on student experiences (Mutinda & Liu, Citation2021). The ‘organic’ approach (Braun & Clarke, Citation2016) was used, constructing themes through engagement with the data and recognising patterns of shared meaning inductively (Braun & Clarke, Citation2019). Coding was undertaken reflexively and analysis at the semantic level was performed.

Responses were extracted from Qualtrics and entered into MAXQDA for data familiarisation. The data was coded and analysed inductively to create an unrefined code map, from which themes were then developed. This analysis yielded five major themes. Themes featuring in at least 5% of responses were classified as ‘major’, while those below this threshold were classified as ‘minor’. Minor themes featured in 1.2–3.5% of qualitative responses, while major themes appeared in 7.6–34% of responses. Minor and major themes were at times represented by the same single answer and often co-occurred. The major themes generally captured the largest range of respondents’ views on why online cheating increased during COTA. These included anonymity (n = 59, 34%), access to material (n = 20, 11.6%), lack of consequences (n = 18, 10.4%), pressure to achieve (n = 17, 9.9%), and access to people (n = 13, 7.6%). Other minor themes included increased difficulty of material (n = 6, 3.5%), inability to learn the material (n = 4, 2.3%), lack of instructor support (n = 4, 2.3%), and language ability (n = 2, 1.2%) (Table 3).
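The 5% major/minor split described above can be expressed as a simple classification over the reported theme counts. This is a sketch using the figures from the article (172 qualitative responses); the theme labels mirror the reported themes.

```python
# Counts of each theme as reported in the article (172 responses total).
THEME_COUNTS = {
    "anonymity": 59,
    "access to material": 20,
    "lack of consequences": 18,
    "pressure to achieve": 17,
    "access to people": 13,
    "increased difficulty of material": 6,
    "inability to learn the material": 4,
    "lack of instructor support": 4,
    "language ability": 2,
}
TOTAL_RESPONSES = 172

def classify(counts, total, threshold=0.05):
    """A theme is 'major' if it features in at least `threshold`
    (here 5%) of responses, and 'minor' otherwise."""
    return {name: ("major" if n / total >= threshold else "minor")
            for name, n in counts.items()}

labels = classify(THEME_COUNTS, TOTAL_RESPONSES)
print(sum(v == "major" for v in labels.values()))  # → 5
```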

Table 3. Major and minor themes.

Major theme one: anonymity

The most common theme was anonymity. Students mentioned a teacher, supervisor, or invigilator as a deterrent to cheating, which is usually associated with invigilated exams but may also reflect reduced direct supervision more generally. Research has found that students mostly cheat on exams, not written assignments, despite faculty perceiving the opposite to be true (Harper, Bretag & Rundle, Citation2021). In many responses, this theme intersected with a perceived lack of immediate consequences, such as being caught. This is demonstrated in the example labelled as Response 1 (R1).

R1: It may be a situation where they feel like they can’t get caught as easily. Since the lecturer/tutor doesn’t have their eyes on them, they can’t be watched out for.

In this case, the respondent identified that the other students felt as though they were less likely to be caught, as they were not continuously observed or checked by their assessor. Other responses suggested that physical presence is important, as shown below:

R2: Because there are no physical supervisor near them so some students studying online might attempt to cheat but this will only lead to them feeling guilty and not rewarding if they cheat.

This common pattern of anonymity and lack of physical supervision recurred throughout the data. The role of absent physical presence or supervision as a motivating factor is supported by Henderson et al.’s (Citation2022) finding that supervised online exam-takers engaged in less cheating during COTA than unsupervised exam-takers.

Major theme two: access to material

‘Access to material’ was mentioned in 20 responses. This regularly intersected with the first theme, in which access to prohibited resources is a motivating factor; however, the lack of anyone knowing about such access is also common.

R3: If some materials are not allowed for exam, they can have it with them in online subjects without anyone knowing.

R4: There’s no invigilators in an online exam as compared to a physical exam. And because a large part of the grade is on the line for students, they would do anything that can help them a little bit from reading textbooks, having their notes.

‘Access to material’ as a theme reflects the many types of resources students alluded to, including devices, notes, textbooks, and online resources. This theme describes a method by which students cheat; overall, a lack of invigilation and consequences appears to allow students to access external information. Such responses are again related to the traditional model of invigilated examination.

Major theme three: lack of consequences

Under this theme, respondents explained the view that students were more likely to engage in online cheating because of reduced risk of being identified or ‘caught’. This theme often intersected with ‘Anonymity’.

R6: It is probably easier and less risky to cheat in online quizzes and tests than when taking them physically, you are also in front on computers or phones.

R7: Because of the lack of accountability and the sense of no consequences for their actions.

A sense of being able to avoid detection and increased temptation due to a lack of monitoring is distinct from anonymity, as the low risk of being caught was not always related to being anonymous, but attributable to the context, for example, being at home.

Major theme four: pressure to achieve

This theme focuses on the instrumentality of assessment and results as driving increased cheating behaviours when an opportunity arises, also in line with Henderson et al.’s (Citation2022) findings that financial, time, and failure pressures were major motivations for cheating.

R9: They want to get good grades to boost their GPA so they can get into postgraduate subjects more easily than others.

R10: I guess because they want to have a better grade in a easier way. But I also believe most students are good enough and honest enough to not cheat. And get good grades by themselves.

These responses focused on the need for students to achieve as high a grade as possible, in some cases, referring to future plans and requirements for study.

Major theme five: access to people

The final major theme, explicitly referred to in 13 responses, was access to people, which often co-occurred with the themes of anonymity, lack of supervision, and access to material.

R11: Everyone has the same kind of question so it’s easier to cheat with their peers.

R12: Much easier methods to cheat through collusion with fellow classmates or external tutors.

‘Access to people’ in this theme suggests that students recognise the significance of the co-action effect, as in a social setting (especially in a high stakes environment with limited supervision), a ‘bad’ example can and most likely will have a ‘bad’ effect on others (Radomskaya & Bhati, Citation2022). The mention of external tutors suggests a relationship with written assessments in the style of an essay or project, rather than an examination.

Minor themes (subject difficulty, inability to learn, lack of instructor support, and language ability) act as secondary factors which contribute to a complex picture of multiple influences on perceived increases in cheating. The minor themes correspond to Schultz et al.’s (Citation2022) findings that failure to invest in interaction at the beginning of a study period was significantly correlated with increased searching for examination material on the Internet.

Discussion

The results of this investigation showed that students perceived an increase in cheating during COTA, and that this varied by discipline of study but not by stage of study. Overall, students believed that their peers were more willing to cheat, felt more pressured to cheat, and engaged in cheating more during COTA compared to IPTA.

The thematic analysis yielded five major and four minor themes, which can be grouped as factors that motivate cheating and factors that enable cheating. Behavioural, psychological, and environmental factors drive cheating motivation, interacting with contextual elements such as lack of instructor support, anonymity, lack of consequences, and perceived difficulty. Our results align with existing literature on student cheating behaviour and perceptions, especially in COTA. Our findings confirm prior research that online learning can lead to perceived higher cheating rates (Lanier, Citation2006; Watson & Sottile, Citation2010), or at least to the perception of increased academic dishonesty in a cohort.

In terms of subject-specific differences, our findings extend prior work by Jenkins et al. (Citation2022) by examining cheating not only in the discipline of Psychology but across various subject areas, suggesting a potential subject-dependent dimension of academic dishonesty. Our research supports the work of King et al. (Citation2009) and Walsh et al. (Citation2021), showing that students find it easier to cheat online. Our thematic analysis revealed that anonymity, a lack of instructor support, and perceived difficulty of materials underpinned these perceptions. This contrasts with the findings of Reedy et al. (Citation2021) and Amzalag et al. (Citation2021): our respondents generally agreed that online exams are more likely to be cheated on, regardless of subject or year. This discrepancy may require further research that considers cultural and institutional differences in academic dishonesty. Solutions such as simulated invigilated exams or supervised online writing workshops may help reduce cheating, but HEIs must find better technological solutions to combat cheating while minimising students’ cognitive load.

Limitations

Krásničan et al. (Citation2022) outline several limitations commonly associated with contract cheating research, although these can be generalised to academic integrity research as a whole. One major issue is non-specific language: cheating is an emotive term that can have a range of connotations depending on the student’s context. To remedy this, we offered a definition within our survey to guide students. This also relates to the broader limitation that academic integrity and dishonesty are not universally defined, although it is argued that some aspects of the student experience related to the causes of academic dishonesty, such as performance orientation, may be universal (Roe, Citation2022). Despite this debate, it remains difficult to ‘pin down’ precisely what ‘cheating’ means in our survey.

Our survey relied on self-reported data, although this was enquiring about the perceptions of other students and not the respondents themselves. In addition, it is possible that an increased focus by institutions and the promotion of academic integrity issues may result in students perceiving cheating behaviours as more prevalent than they are. Equally, grapevine discussions about students who have not faced any consequences may exaggerate the results.

Our research goals were focused on exploring students’ perceptions of cheating, and we did not seek to compare this to the collected data on academic misconduct violations. Although such data could theoretically demonstrate the accuracy of such perceptions, this was not within the scope of our research but is an area for further investigation.

Finally, COTA is not equivalent to online teaching or assessment. The COTA period was an exceptional event and ‘emergency’ online teaching and assessment cannot be generalised to future, well-planned online teaching with assessments adjusted accordingly. As a result, further research is encouraged in this area as universities worldwide continue to develop online and blended learning in the post-pandemic world.

Conclusion

The results of the quantitative analysis underscore that students generally perceived their peers as more likely to cheat, more pressured to cheat, and engaged in more frequent cheating behaviours during COTA periods. Discipline-level differences were observed, with Business students and non-degree students reporting lower perceived levels of cheating behaviours during COTA than their counterparts in the Psychology & Health, and Science disciplines. However, the study stage did not appear to influence these perceptions. Future perspectives on cheating in blended and online assessments are divided, with a plurality of respondents anticipating a decrease in cheating frequency. Most respondents believed that the acceptability of cheating in online subjects remained unchanged, yet a significant minority viewed online cheating as less acceptable.

The qualitative analysis demonstrated that several major and minor themes contributed to perceptions of increased cheating, which relate to both the context of the exam (e.g., increased anonymity) and the practical techniques of engaging in cheating behaviours (e.g., using the Internet to search for information).

Overall, our research adds depth to the existing body of knowledge by exploring how factors such as the subject of study, year of study, and perceived difficulty influence students’ perceptions and behaviours related to cheating during online study periods, offering a more detailed understanding of this issue in the post-COTA context.

Ethical approval

Ethical approval for this study was granted by the James Cook University Human Research Ethics Committee (reference number: H8779).

Acknowledgements

We would like to acknowledge Lisa L. Walsh’s assistance in sharing survey information to assist with our questionnaire development.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Adedoyin, O. B., & Soykan, E. (2020). COVID-19 pandemic and online learning: The challenges and opportunities. Interactive Learning Environments, 31(2), 863–875. https://doi.org/10.1080/10494820.2020.1813180
  • Akbulut, Y., Şendağ, S., Birinci, G., Kılıçer, K., Şahin, M. C., & Odabaşı, H. F. (2008). Exploring the types and reasons of Internet-triggered academic dishonesty among Turkish undergraduate students: Development of Internet-Triggered Academic Dishonesty Scale (ITADS). Computers & Education, 51(1), 463–473. https://doi.org/10.1016/j.compedu.2007.06.003
  • Allen, P., & Bennett, K. (2008). SPSS for the health & behavioural sciences. Thomson.
  • Amigud, A. (2020). Cheaters on Twitter: An analysis of engagement approaches of contract cheating services. Studies in Higher Education, 45(3), 692–705. https://doi.org/10.1080/03075079.2018.1564258
  • Amzalag, M., Shapira, N., & Dolev, N. (2021). Two sides of the coin: Lack of academic integrity in exams during the corona pandemic, students’ and lecturers’ perceptions. Journal of Academic Ethics, 20(2), 243–263. https://doi.org/10.1007/s10805-021-09413-5
  • Baniamer, Z., & Muhamed, B. (2022). Cheating in online exams: Motives, methods and ways of preventing from the perceptions of business students in Bahrain. In A. Hamdan, A. E. Hassanien, T. Mescon, & B. Alareeni (Eds.), Technologies, artificial intelligence and the future of learning post-COVID-19: The crucial role of international accreditation (pp. 267–282). Springer International Publishing. https://doi.org/10.1007/978-3-030-93921-2_16
  • Beck, V. (2014). Testing a model to predict online cheating—Much ado about nothing. Active Learning in Higher Education, 15(1), 65–75. https://doi.org/10.1177/1469787413514646
  • Braun, V., & Clarke, V. (2016). (Mis)conceptualising themes, thematic analysis, and other problems with Fugard and Potts' (2015) sample-size tool for thematic analysis. International Journal of Social Research Methodology, 19(6), 739–743.
  • Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597.
  • Carpenter, D. D., Harding, T. S., Finelli, C. J., Montgomery, S. M., & Passow, H. J. (2006). Engineering students’ perceptions of and attitudes towards cheating. Journal of Engineering Education, 95(3), 181–194. https://doi.org/10.1002/j.2168-9830.2006.tb00891.x
  • Chuang, C. Y., Craig, S. D., & Femiani, J. (2017). Detecting probable cheating during online assessments based on time delay and head pose. Higher Education Research & Development, 36(6), 1123–1137. https://doi.org/10.1080/07294360.2017.1303456
  • Cizek, G. J. (2004). Cheating in academics. In C. Spielberger (Ed.), Encyclopedia of applied psychology. Elsevier.
  • Clarke, O., Chan, W. Y. D., Bukuru, S., Logan, J., & Wong, R. (2023). Assessing knowledge of and attitudes towards plagiarism and ability to recognize plagiaristic writing among university students in Rwanda. Higher Education, 85(2), 247–263. https://doi.org/10.1007/s10734-022-00830-y
  • Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1–12. https://doi.org/10.1080/14703297.2023.2190148
  • Daniels, L. M., Goegan, L. D., & Parker, P. C. (2021). The impact of COVID-19 triggered changes to instruction and assessment on university students’ self-reported motivation, engagement and perceptions. Social Psychology of Education, 24(1), 299–318. https://doi.org/10.1007/s11218-021-09612-3
  • Diedenhofen, B., & Musch, J. (2017). PageFocus: Using paradata to detect and prevent cheating on online achievement tests. Behavior Research Methods, 49, 1444–1459.
  • Eaton, S. E. (2020). Academic integrity during COVID-19: Reflections from the University of Calgary. https://prism.ucalgary.ca/handle/1880/112293
  • Fuller, R., Joynes, V., Cooper, J., Boursicot, K., & Roberts, T. (2020). Could COVID-19 be our ‘There is no alternative’ (TINA) opportunity to enhance assessment? Medical Teacher, 42(7), 781–786. https://doi.org/10.1080/0142159X.2020.1779206
  • Gamage, K. A. A., de Silva, E. K., & Gunawardhana, N. (2020). Online delivery and assessment during COVID-19: Safeguarding academic integrity. Education Sciences, 10(11), 301. https://doi.org/10.3390/educsci10110301
  • García-Morales, V. J., Garrido-Moreno, A., & Martín-Rojas, R. (2021). The transformation of higher education after the COVID disruption: Emerging challenges in an online learning scenario. Frontiers in Psychology, 12. https://www.frontiersin.org/article/10.3389/fpsyg.2021.616059
  • Grijalva, T. C., Kerkvliet, J., & Nowell, C. (2006). Academic honesty and online courses. College Student Journal, 40(1), 180–185.
  • Guerrero-Dib, J. G., Portales, L., & Heredia-Escorza, Y. (2020). Impact of academic integrity on workplace ethical behaviour. International Journal for Educational Integrity, 16, 1. https://doi.org/10.1007/s40979-019-0049-x
  • Harper, R., Bretag, T., & Rundle, K. (2021). Detecting contract cheating: Examining the role of assessment type. Higher Education Research & Development, 40(2), 263–278.
  • Henderson, M., Chung, J., Awdry, R., Mundy, M., Bryant, M., Ashford, C., & Ryan, K. (2022). Factors associated with online examination cheating. Assessment & Evaluation in Higher Education, 1–15. https://doi.org/10.1080/02602938.2022.2144802
  • Holden, O. L., Norris, M. E., & Kuhlmeier, V. A. (2021). Academic integrity in online assessment: A research review. Frontiers in Education, 6. https://www.frontiersin.org/article/10.3389/feduc.2021.639814
  • Ives, B., Alama, M., Mosora, L. C., Mosora, M., Grosu-Radulescu, L., Clinciu, A. I., Cazan, A.-M., Badescu, G., Tufis, C., Diaconu, M., & Dutu, A. (2017). Patterns and predictors of academic dishonesty in Romanian university students. Higher Education, 74(5), 815–831. https://doi.org/10.1007/s10734-016-0079-8
  • Jenkins, B. D., Golding, J. M., Le Grand, A. M., Levi, M. M., & Pals, A. M. (2022). When opportunity knocks: College students’ cheating amid the COVID-19 pandemic. Teaching of Psychology, 50(4), 407–419. https://doi.org/10.1177/00986283211059067
  • Jones, M., & Sheridan, L. (2015). Back translation: An emerging sophisticated cyber strategy to subvert advances in ‘digital age’ plagiarism detection and prevention. Assessment & Evaluation in Higher Education, 40(5), 712–724. https://doi.org/10.1080/02602938.2014.950553
  • King, C. G., Guyette, R. W., & Piotrowski, C. (2009). Online exams and cheating: An empirical analysis of business students’ views. Journal of Educators Online, 6(1). https://eric.ed.gov/?id=EJ904058.
  • Krásničan, V., Foltýnek, T., & Henek Dlabolová, D. (2022). Limitations of contract cheating research. In S. E. Eaton, G. J. Curtis, B. M. Stoesz, J. Clare, K. Rundle, & J. Seeland (Eds.), Contract cheating in higher education: Global perspectives on theory, practice, and policy (pp. 29–42). Springer International Publishing. https://doi.org/10.1007/978-3-031-12680-2_3
  • Lancaster, T., & Cotarlan, C. (2021). Contract cheating by STEM students through a file sharing website: A COVID-19 pandemic perspective. International Journal for Educational Integrity, 17, 1. https://doi.org/10.1007/s40979-021-00070-0
  • Lanier, M. M. (2006). Academic integrity and distance learning. Journal of Criminal Justice Education, 17(2), 244–261. https://doi.org/10.1080/10511250600866166
  • Luck, J.-A., Chugh, R., Turnbull, D., & Rytas Pember, E. (2022). Glitches and hitches: Sessional academic staff viewpoints on academic integrity and academic misconduct. Higher Education Research & Development, 41(4), 1152–1167. https://doi.org/10.1080/07294360.2021.1890697
  • Lynch, J., Salamonson, Y., Glew, P., & Ramjan, L. M. (2021). “I’m not an investigator and I’m not a police officer”—A faculty’s view on academic integrity in an undergraduate nursing degree. International Journal for Educational Integrity, 17, 1. https://doi.org/10.1007/s40979-021-00086-6
  • Middleton, K. V. (2022). Considerations for future online testing and assessment in colleges and universities. Educational Measurement: Issues and Practice, 41(1), 51–53. https://doi.org/10.1111/emip.12497
  • Miller, A., & Young-Jones, A. D. (2012). Academic integrity: Online classes compared to face-to-face classes. Journal of Instructional Psychology, 39(3–4), 138–145.
  • Mutinda, G., & Liu, Z. (2021). Perceptions on the implications of the COVID-19 pandemic on university students' wellbeing in Kenya – a thematic analysis approach. Higher Education Research & Development, 41(7), 2247–2261.
  • Parkinson, A. L., Hatje, E., Kynn, M., Kuballa, A. V., Donkin, R., & Reinke, N. B. (2022). Collusion is still a tricky topic: Student perspectives of academic integrity using assessment-specific examples in a science subject. Assessment & Evaluation in Higher Education, 47(8), 1416–1428. https://doi.org/10.1080/02602938.2022.2040947
  • Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2). https://doi.org/10.53761/1.20.02.07
  • Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2023). Game of Tones: Faculty detection of GPT-4 generated content in university assessments. arXiv. https://doi.org/10.48550/arXiv.2305.18081
  • Radomskaya, V., & Bhati, A. S. (2022). Hawker centres: A social space approach to promoting community wellbeing. Urban Planning, 7(4), 167–178. https://doi.org/10.17645/up.v7i4.5658
  • Reedy, A., Pfitzner, D., Rook, L., & Ellis, L. (2021). Responding to the COVID-19 emergency: Student and academic staff perceptions of academic integrity in the transition to online exams at three Australian universities. International Journal for Educational Integrity, 17, 1. https://doi.org/10.1007/s40979-021-00075-9
  • Roe, J. (2022). Reconceptualizing academic dishonesty as a struggle for intersubjective recognition: A new theoretical model. Humanities and Social Sciences Communications, 9, 1. https://doi.org/10.1057/s41599-022-01182-9
  • Roe, J. (2023). Discursive construction of contract cheating and degradation of higher education: Comments on the Daily Mail Online. Forum: Qualitative Social Research, 14(2).
  • Roe, J., & Perkins, M. (2022). What are Automated Paraphrasing Tools and how do we address them? A review of a growing threat to academic integrity. International Journal for Educational Integrity, 18, 1. https://doi.org/10.1007/s40979-021-00094-6
  • Roe, J., Renandya, W. A., & Jacobs, G. M. (2023). A review of AI-powered writing tools and their implications for academic integrity in the language classroom. Journal of English and Applied Linguistics, 2(1). https://doi.org/10.59588/2961-3094.1035
  • Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? arXiv. https://doi.org/10.48550/arXiv.2303.11156
  • Schultz, M., Lim, K. F., Goh, Y. K., & Callahan, D. L. (2022). OK google: What’s the answer? Characteristics of students who searched the internet during an online chemistry examination. Assessment & Evaluation in Higher Education, 47(8), 1458–1474. https://doi.org/10.1080/02602938.2022.2048356
  • Sefcik, L., Striepe, M., & Yorke, J. (2020). Mapping the landscape of academic integrity education programs: What approaches are effective? Assessment & Evaluation in Higher Education, 45(1), 30–43. https://doi.org/10.1080/02602938.2019.1604942
  • St-Onge, C., Ouellet, K., Lakhal, S., Dubé, T., & Marceau, M. (2022). COVID-19 as the tipping point for integrating e-assessment in higher education practices. British Journal of Educational Technology, 53(2), 349–366. https://doi.org/10.1111/bjet.13169
  • Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat: Frequency and type of academic dishonesty in the virtual classroom. Online Journal of Distance Learning Administration, 12(3), 1–10.
  • Turnbull, D., Chugh, R., & Luck, J. (2021). Transitioning to E-Learning during the COVID-19 pandemic: How have Higher Education Institutions responded to the challenge? Education and Information Technologies, 26(5), 6401–6419. https://doi.org/10.1007/s10639-021-10633-w
  • Walsh, L. L., Lichti, D. A., Zambrano-Varghese, C. M., Borgaonkar, A. D., Sodhi, J. S., Moon, S., Wester, E. R., & Callis-Duehl, K. L. (2021). Why and how science students in the United States think their peers cheat more frequently online: Perspectives during the COVID-19 pandemic. International Journal for Educational Integrity, 17, 1. https://doi.org/10.1007/s40979-021-00089-3
  • Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Educational Foundations and Technology. https://mds.marshall.edu/eft_faculty/1

Appendix 1:

Supplementary Material: List of survey questions

The questions below were presented in an online survey. Before completing the survey, respondents were provided with information about the purpose of the study to ensure informed consent. Multiple-choice options were provided for Q1–Q6, 5-point Likert scales were used for Q7–Q9 and Q11, and an open-ended text box was provided for Q10.

  1. Are you currently living or studying in Singapore?

  2. What is your nationality?

  3. What is your gender identity?

  4. How many years have you been studying at JCU Singapore?

  5. Which of the following best describes your program of study?

  6. Which of the following best describes your primary field of study?

  7. ‘Cheating’ means an intentional behavior that breaks the rules of a test or assignment, and gives a student an unfair advantage over others. Do you think cheating happens more frequently in online subjects rather than face to face subjects?

  8. Based on your experience in face to face subjects before the COVID-19 Pandemic:

    1. How willing are students to cheat?

    2. How often do students feel pressured to cheat?

    3. How often do students cheat?

  9. Based on your experience with these activities in online subjects during the COVID-19 Pandemic:

    1. How willing are students to cheat?

    2. How often do students feel pressured to cheat?

    3. How often do students cheat?

  10. If you feel students are more likely to cheat in online subjects, why is this?

  11. If university subjects and assessment remain in a fully online or blended mode (a mixture of online and offline) permanently, do you think cheating will stay the same, become more frequent, or become less frequent?

  12. Do you believe it's more or less acceptable to engage in academic cheating in online subjects?