
A countless variant simulation-based toolkit for remote learning and evaluation

Article: 2203437 | Received 27 Oct 2021, Accepted 09 Apr 2023, Published online: 23 Apr 2023

Abstract

The COVID-19 pandemic has brought about a profound transformation in the educational landscape in recent months. Educators worldwide have been challenged to tackle academic issues they could never have imagined. Among the most stressful situations faced by students and teachers is the implementation of online assessments. This paper proposes a system that includes exam prototypes for computer architecture modules at the higher education level. The system generates a wide range of questions and variations on the server side, supported by a set of simulators, resulting in a vast number of unique examination proposals. It streamlines the monitoring process for the teacher, as it eliminates the possibility of two students receiving similar exams, and it reduces student stress by allowing them to practice with a limitless number of exam samples. This paper also highlights several indicators that demonstrate the advantages of this framework.

PUBLIC INTEREST STATEMENT

This article describes a web application with which it is possible to generate a virtually infinite number of student assessments for a given topic, using a combination of 60–80 variable inputs per test (about 2–4 inputs per question). The answers to the questions are obtained through functions programmed in PHP, so the correction of the exams is automatic and immediate, and much more versatile and portable than the “calculated questions” already available in different e-learning platforms. The number of combinations is so high that each student has, in practice, an endless number of exams to take. At the time of the evaluation, the student obtains a unique and exclusive copy of the exam, completely different from that of any other student; therefore, it is suitable for online evaluation. The exams are universally accessible via the Internet, and the modular programming of the tool facilitates its portability to many subjects.

1. Introduction

The evaluation process in university education is often seen as a necessary evil by students and teachers. However, it is an important aspect of the education system, as it allows for the individual evaluation of the work done and the level of knowledge students acquire. Students are motivated to seek greater understanding and develop self-control in challenging situations through evaluation.

Conducting evaluations requires careful consideration, as it determines not only a student’s level of knowledge but also their ability to apply it. In most technical studies, the evaluation should test a student’s ability to apply, generalize, or synthesize concepts rather than their ability to memorize information.

Another challenge in assessments is that some students may be motivated only to pass the course rather than to truly learn the material. To address this, the assessment should be structured so that passing the exam requires genuine learning. Additionally, monitoring mechanisms should be in place during exams to deter students from cheating, which ultimately devalues the credibility of the evaluation process and of the educational system.

In the past, in-person evaluations in a supervised classroom were the primary mechanism for fair evaluations. However, the COVID-19 pandemic has made this approach unfeasible. Fortunately, technology provides evaluators with tools such as virtual exams through video conferencing, randomized questions, anti-plagiarism software, and time constraints to ensure fairness in the evaluation process. Limiting the time available for the exam makes it more difficult for students to cheat or receive assistance from others.

In this work, a learning platform has been developed in which each exam is generated anew. The writing of the statements, the calculation of the solutions, and the correction of the test are obtained automatically from a reduced set of input variables, which can be combined in a practically infinite number of ways. Unlike other tools, such as the “calculated questions” available on platforms like Blackboard (2010), Canvas (2011) or Moodle (Dougiamas, 2021), in this proposal answers are not obtained from simple mathematical expressions or from simple, poorly portable algorithms. Simulation modules programmed in the PHP language are used here, making the platform easily portable and extensible to many knowledge areas. In addition, the system presented in this work can solve anything from simple problems, such as a basic binary-decimal conversion, to the complete operation of a CPU.

The rest of the paper is structured as follows. Section 2 presents the origin of the proposed evaluation system. Subsequently, Section 3 analyzes the state of the art regarding online learning and assessment systems. The features of the proposed evaluation system are described in Section 4, where we explain the different modules of the framework, the question types, the input/output data formats, etc. Section 5 presents how each assessment is designed. Finally, results and conclusions are presented in Section 6.

2. Proposed evaluation system developed within the context of the COVID-19 pandemic

The period of confinement due to the COVID-19 pandemic affected Spanish universities from 13 March 2020 until the June 2020 exam period, and the situation was repeated between November of that same year and June 2021. At that time, it became clear how important it was to have non-face-to-face learning and evaluation alternatives. In this context, professors of the Fundamentals of Computing subjects at the Higher Technical School of Industrial Engineers of the University of Malaga (Spain), together with some students of the module, developed a new system for student self-learning and for evaluating their progress in those subjects. The main idea was to build a set of simulators (Romero & Bandera, 2021) that allow innumerable tests of basic concepts of computer architecture to be carried out through exercises with immediate and justified solutions. At the same time, these simulators are used to generate knowledge tests in which the randomness of the questions and the time available to take the exam are combined. This further ensures that a student cannot get help or advice from classmates taking the same exam simultaneously.

The simulators are entirely transparent to the student. They are used, firstly, to generate a unique variant of the exam; secondly, to ensure that the generated variants include congruent and balanced questions; and thirdly, to obtain an automated correction from the system. We want to remark that the student receives only a single exam version during the examination. In short, the simulators generate the exam versions and correct them automatically, but they do not intervene, nor are they accessed, during the exam time.

On the other hand, each version of the test is nearly random, and the number of possible combinations of every partial test is extremely high (it far exceeds 10^48 combinations). This diversity of generated proposals led us to use the tool in an additional way: not only to examine students but also to let them practice before the examination day. Hence, students can try repeatedly, avoiding the painful “surprise” of not knowing what the exam structure will be. With this, an increase in the degree of training is expected.

3. State of the art

Distance Education is not new. Its origin dates back to the last part of the 19th century and the beginning of the 20th century, when most people could travel only short distances to reach Higher Education institutions. In the last 20–25 years, thanks to the rise of the Internet, many institutions have been promoting their distance education offers. On the one hand, this has meant the possibility of opening new channels and reaching new potential students and, on the other, the opportunity to transform teaching to face, with guarantees, the highly competitive landscape in which we find ourselves (Poehlein, 1996). In 2000, Volery and Lord (2000) identified three critical aspects for achieving quality distance learning: i) the technology used (ease of access and navigation, friendly user interface), ii) the type of instructor (attitude towards the students, technical ability with tools, class interaction, etc.), and iii) the technological skills of the students.

It is essential to highlight that, with online education, the role of the student shifts from being a mere passive recipient of knowledge to being an active agent of learning (Candy et al., 1994). In this sense, Draves (2002) presented ten reasons why distance learning is more popular and cognitively better than face-to-face learning. Regarding the teaching staff, Medina and Miranda (Medina-Herrera & Miranda-Valenzuela, 2019) determined the characteristics that make a teacher more likely to achieve great acceptance by students in courses that move from face-to-face to synchronous online delivery. The results show that the best teachers are usually young, with excellent technological skills, ease of interpersonal interaction, and good social skills. Experienced teachers with good technical skills and outstanding teaching techniques also stand out.

In 2000, Hmieleski and Champagne showed that about 98% of course evaluations were paper-based (see Hmieleski & Champagne, 2020). Since then, online course evaluation has become widespread in many universities, and much research has been done to identify advantages, disadvantages, similarities, and differences. An interesting review of research works comparing online and face-to-face evaluation methods is presented by Morrison (2013). He remarks that caution must be exercised in assuming that online and paper evaluations yield similar results, even when the same instrument is used with the same population. While this does not indicate whether one version is preferable to another, it suggests that reactivity to the medium might influence the results and, hence, the weight that can be placed on them. Ultimately, decisions on whether to opt for online or paper evaluations might be taken on the grounds of cost savings rather than for educational reasons, and both the literature review and the data in the present study indicate that when time and timeliness are at a premium, these are essential considerations.

There are many comparisons of assessment results in the literature using online and face-to-face exams. Many present similar results for both methods (Avery et al., 2006; Donovan et al., 2006; Gamliel & Davidovitz, 2005), while others show just the opposite, that some types of exams produce better results than others (Burton et al., 2012; Carini et al., 2003; Kasiar et al., 2002). Stowell et al. (2012) show that online assessments result in a significantly lower number of students sitting the exam than the face-to-face case (about 20% fewer). This may be due to the anxiety that this type of evaluation produces in students and to their awareness that teachers lose control of the conditions in which each student takes the exam: a student can take it without ever having attended class, with the help of a partner, in groups, etc. This can make them think that this type of evaluation will reduce their rating.

Online evaluation has been in widespread use for many years. C. Smaill presented one of the first frameworks (Smaill, 2006): a web-based system for learning and evaluation focused on Electrical Engineering. Grün and Zeileis (2009) designed an R package for the automatic generation of standardized statistical exams. After that, we can mention some other popular learning and training systems, like Moodle (Dougiamas, 2021), OLAT (2012), OpenOLAT (2014b), ONYX (2014a) and Blackboard (2010). One of the latest approaches was described by Zeileis et al. (2014), who extend their previous framework to provide interfaces for several output formats, such as PDF, HTML or XML. Most of the above approaches are complete e-learning platforms with the possibility of remote assessment. In all of them, the evaluation is carried out through a set of questions whose solution is obtained through “simple” mathematical formulas.

The above frameworks have a significant drawback: most of them have their own way of defining new questions. A specific user interface is used to define every question of the exam, and teachers need to use a concrete format to write them, which inherently limits the type and range of questions.

Recently, and due to the worldwide COVID-19 pandemic, several authors have analyzed the massive and widespread use of online teaching and assessment, as well as the results and consequences it is producing. Having a good distance education system available would make it possible to cope with confinement periods like the ones recently experienced. This would allow many students worldwide, including those from developing countries or even the third world, to enjoy the right to education with strong guarantees. In this sense, Basilaia and Kvavadze (2020) present how the use of a distance education tool affected a European country. On the other hand, Hasan and Bao (2020) analyze the psychosocial damage that a period of confinement can produce in the population of a third-world country if teaching is degraded. Finally, George (2020) demonstrates that the use of appropriate online strategies for teaching and assessment during COVID-19 prevents students’ performance from deteriorating; at the same time, he analyzes the main benefits of this type of methodology and shows some examples of possible online exams.

The integrity of academic evaluation has received more attention than ever because of online learning (Dendir & Maxwell, 2020; Noorbehbahani et al., 2022). In this regard, Manoharan describes an approach to reduce cheating in multiple-choice examinations in STEM subjects, using a personalization that creates as many versions of the examination scripts as there are students (Manoharan, 2019, 2021).

Finally, it is essential to note that tools like Chegg (2005) have become very popular in recent years, and students use them extensively to cheat on exams. Analyzing this platform, it can be seen that it includes homework-solving services, 24/7 tutoring, and an extensive database of questions and answers that can help students in their learning process. Still, the worrying aspect is that it can also be used to solve online exams more efficiently. Broemer and Recktenwald (2021) discuss this problem, noting that it is not a new issue but a long-standing one. However, for obvious reasons, between 2020 and 2022 these types of platforms became especially popular, which is generating a broad debate in Academia.

4. Randomization in questions and simulation

This section describes the internal part of the proposed evaluation framework and method. We want to point out that the approach presented here is far from an e-learning platform like those shown in the previous section. The main goal has been to develop a set of simulators that, apart from being used for autonomous learning by the student, can be used as a basis for developing questions and assessment systems that can be operated remotely and on-site.

The use of random questions is as old as the tests themselves. In one way or another, teachers have resorted to a random change of some element in every exam question to vary the test enough that students study the concept instead of memorizing the answer. In most cases, the random (and variable) part of the question is obtained from a discrete set of elements, which can be numbers (e.g., “How much is 4 + {7, 8, 9}?”), boolean values (e.g., “The 16’s complement of 5 {is, is not} 11”), or even colors (e.g., “If the traffic light is {green, red, yellow}, can I cross the road?”). Normally, for each element of the set (which we could call the “input”), there is a different answer that can either be calculated by means of a mathematical expression or, in most cases, must be established more or less manually. This second case is the one that requires special attention, as it surely represents the overwhelming majority of examination questions.

Except for those cases in which the maximum size of the input set (the variable elements in the question) is tiny, lecturers, especially in the field of Computer Science and Engineering, must spend considerable time calculating the corresponding result (i.e., the output set/responses) of each question. And it is precisely for this reason that the input set is usually small (it usually comes from a collection of previous exams).

But what if we could automate the calculation of the answer? Suppose a computer where exams are automatically designed using tools (simulators) that can, for instance, calculate the output torque in a complex system of gears and motors. The teacher could produce the answer tables rather quickly and thus have as many input layouts as students in the course. Even in this case, the teacher’s effort and the time invested are high enough to make the temptation to reduce the number of variants hard to resist. But what if we go further? What if the editor that generates the exam text incorporates a specific simulator for the content of each exam question? In this case, the input set could grow by several orders of magnitude without the teacher needing to determine the specific answer to each input, since the simulator can calculate non-trivial solutions immediately.
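As a toy illustration of this idea (a sketch written for this paper, not code taken from the actual platform), the variable part of a statement can be drawn from a huge input set while its answer is computed on the fly by a small “simulator” function instead of being looked up in a handwritten table:

```php
<?php
// Toy sketch: the input set has 2^15 elements, far more than could ever be
// tabulated by hand, yet every answer is computed immediately by a function.

function answerTwosComplement(int $n, int $bits = 16): string {
    // 16-bit two's-complement representation of -$n.
    return str_pad(decbin((1 << $bits) - $n), $bits, '0', STR_PAD_LEFT);
}

$n = random_int(1, 1 << 15);                      // variable part of the statement
$statement = "Give the 16-bit two's complement representation of -$n.";
$correct   = answerTwosComplement($n);            // computed, never written by hand
```

The same pattern scales from a basic binary conversion to the complete operation of a CPU, which is precisely what the simulators described next provide.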

4.1. The CASIUM simulators

In the specific case of the Computer Fundamentals course, which has served as an improvised training ground for this work, the set of contents was defined according to the ACM Curricula Recommendations (ACM/IEEE-CS, 2013). Hence, our tool covers highly varied course subjects: the binary representation of information, instruction set architecture, computer data path design, memory information storage, memory hierarchy, input and output systems, and even some operating system issues, such as managing CPU usage or memory virtualization. Fortunately, in the months before confinement, a group of teachers and students of the course had begun to develop a set of specific simulators for each of the subjects above, supported by the CASIUM project (Computer Architecture Simulators, University of Málaga) (Romero & Bandera, 2021). In a first step, these tools aimed not at evaluating the student but at facilitating learning. Thus, simulators were developed for each of the main parts that constitute the architectural foundations of a computer. They have been classified into five modules, and some of them are described below:

Module 1: Data Representation:

a) Alphanumeric encoder: This tool has been designed to study the encoding and decoding of characters in alphabets worldwide. It has been designed so the student understands the different ways of encoding information (ASCII, ANSI, ISO, UNICODE, etc.) and the problems derived from incorrect text decoding.

b) Numeric encoder-1: This tool includes two novel encoders for non-real numbers (natural and integer), with a detailed display of the encoding process for the most commonly used non-real number computer representation formats (BCD, natural binary, two’s complement, etc.).

c) Numeric encoder-2: The last tool of this module is related to real numbers. It shows the floating point number encoding and decoding process using the IEEE-754 format, the most popular real number internal representation in current microprocessors.
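For illustration, the server-side answer to an encoding question of this kind can be obtained in PHP with a single call (a minimal sketch; the simulator itself additionally displays every intermediate step of the conversion):

```php
<?php
// Single-precision IEEE-754 encoding of a real number, as a hexadecimal string.
// pack('G', ...) produces a big-endian 32-bit float (available since PHP 7.0.15).
function ieee754Hex(float $x): string {
    return strtoupper(bin2hex(pack('G', $x)));
}

echo ieee754Hex(-18.375);   // prints C1930000
```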

Module 2: Basic Digital Electronics:

a) Multiplexer: The first tool of this module simulates the most used combinational component in the microprocessor design. Its operation is very simple, and we only implement two parts: a signal decoder and a bus decoder.

b) Registers and Counters: This simulator can teach students about the operation of a single register, where the binary information is stored, how counters work, and why timing is required in sequential circuits.

c) Adder–Subtractor circuit: This tool was designed to work using three modes with several combinations. The work modes are adder mode (basic), subtractor mode, and adder–subtractor mode. It also incorporates a viewer of the inner full-adder circuits (Figure 1).

Figure 1. Main screen of one of the eighteen simulators developed for the subject (adder-subtractor module, which incorporates the simulator of a full adder).


Module 3: CPU Components:

a) Instructions Memory and Control Unit: The first tool simulates the first step of every instruction execution on any processor. It shows how the program counter (PC) output bus is used to select (read) an instruction from a small, randomly initialized memory. The selected instruction appears on the circuit’s output bus. It is well known in this knowledge area that an instruction is nothing more than a binary sequence, indistinguishable from any other set of bits; therefore, the sequences must be interpretable as instructions and not as data. For this reason, in this tool the memory has two working modes:

● In primary mode, the memory only stores binary sequences. In this case, nothing would differentiate it from a data memory except the implicit property that it is “read-only”.

● In decoded mode, the binary sequence is interpreted as a MIPS or ARM instruction (depending on the configurable option).

b) Register Bank: This tool could be the most important one. Considering that the register bank participates in two of the five stages of a MIPS processor (an essential pillar in the teaching of the Computer Architecture area), a perfect understanding of its operation is necessary for good learning of the course. A bank with 32 registers of 32 bits each (randomly initialized) is simulated, with two read ports and one write port.

c) Data Memory: The programming of this tool reuses many of the classes and methods used in the corresponding ones for the instruction memory and the register bank. It has two modes of operation:

● ROM mode, with read-only accesses.

● R/W mode, where read and write operations can be used.

Module 4: Global Simulators:

a) MIPS microprocessor: This tool seems to be only a combination of many of the previously explained tools (Arithmetic Unit, Instruction Memory, Register Bank and Data Memory). However, the complexity of this module goes much further since, in this case, the inputs and outputs of the four previous modules are linked. In fact, the set of tools constitutes a complete simulator of the reduced MIPS processor.

b) Delays in the CPU: This tool aims to demonstrate how signal propagation delays on a digital circuit influence the clock frequency of a given microprocessor. It is proposed to use a CPU based on a simplified version of the MIPS processor as a reference, widely used in teaching many Computer Fundamentals and Architecture courses.

Module 5: Input/Output and Operating Systems:

a) Preemptive multitasking and time quanta: A microscopic view of how the operating system deals with multitasking using time slicing. Any process can be blocked due to interrupts (Figure 2).

Figure 2. Quantum! A simulator for distributing CPU time slots between different processes using preemption in a multi-core architecture.


b) Interrupts and Daisy Chain: Three devices can raise external interrupts, which may be masked individually or globally. The interrupt acknowledge response is managed by a daisy-chain module.

c) Wator: A population dynamics simulation of a toroidal ocean, using multi-threading and CPU-intensive computation, combined with the system’s process explorer to teach preemption and multitasking.

Many of these applications (compiled for Windows systems) have been published for free on the Microsoft Store platform so that they can easily be distributed among students (The Microsoft Store, 2021).

5. Test design

The design of every test with this tool is quite simple (a scheme is shown in Figure 3). Each exam statement, for a given student at a particular place (IP address) and date/time, is retrieved from a URL after the student enters their ID card number. The resulting page, built using the PHP and JavaScript languages, generates an exclusive test from a set of base questions. The generated exam can operate in two main modes. First, in training mode, each page reload produces a different test version, allowing the student to check their ability some days (or even hours and minutes) before the exam. Secondly, the tool can be used in release mode, that is, during the exam; in this case, the page statements do not depend on the current retrieval date/time but on a fixed seed parameter, so they can easily be replicated later (after the exam). The working mode changes automatically depending on the retrieval time of the test.
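A minimal sketch of this automatic mode switch is shown below; the field names, the exam time window and the separator are assumptions made for illustration, not the platform’s actual code:

```php
<?php
// Build the seed string that drives the randomization of one test page.
// In release mode a fixed timestamp (here, the official exam start) is used,
// so the very same exam can be regenerated later; in training mode the
// current time is used, so every reload yields a different version.
function buildSeedString(string $studentId, string $publicIp, string $privateIp,
                         string $secretKeyword, DateTime $now,
                         DateTime $examStart, DateTime $examEnd): string
{
    $inExamWindow = ($now >= $examStart && $now <= $examEnd);
    $timePart = $inExamWindow ? $examStart->format('Y-m-d H:i:s')   // release mode
                              : $now->format('Y-m-d H:i:s');        // training mode
    return implode('|', [$studentId, $timePart, $publicIp, $privateIp, $secretKeyword]);
}

// The 160-bit SHA-1 digest of this string is later sliced into the
// per-question random values (see Section 5.1).
$hash = sha1(buildSeedString('12345678Z', '150.214.40.1', '10.0.0.2', 'secret',
                             new DateTime(),
                             new DateTime('2021-06-10 09:00'),
                             new DateTime('2021-06-10 11:00')));
```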

Figure 3. A simplified scheme of the CASIUM test system.


As shown in Figure 4, every question statement of the exam contains random values chosen from a vast input set.

Figure 4. Exam question statement example, with the random parameters in bold.


5.1. Generation of the input set (random values) of the questions

The random values that populate the input sets of the questions are calculated as follows:

  1. A single input character string is built, made up of the concatenation of the student’s ID number, the date and exact time (in seconds) of the test page generation, the public and private IP, and a secret keyword.

  2. A 160-bit string is obtained using the SHA1 algorithm (Secure Hash Algorithm); see Table 1. The potential number of combinations of the hash sequence is therefore 2^160, which is around 10^48.

    Table 1. (Nearly) random values calculation

  3. Each exam (there are five of them to date (Romero & Bandera, 2021)) contains around 50 input variables of different types (Booleans, integers in a range, elements of a discrete set, etc.), so about 50 random numbers are generated from arbitrary substrings of the hash sequence (see Note 1). Let us look at an example question:

Convert the float number –val1 ($rnd1) to IEEE754, and represent the result in val2 ($rnd2)

In the example, the numbers $rnd1 and $rnd2 are 5 and 1 bit wide, respectively, and they could be the first 6 bits of the SHA1 string. A specific function converts each random number into the corresponding element of the input set. Thus, in the example, if $rnd1 is 01001, the corresponding value is val1 = 10010.011 (18.375), where the centre bits are the random part. As can be seen, the correspondence between $rnd1 and val1 is designed so that a non-random part is maintained and the generated questions are of similar difficulty. The resulting statement for a particular student at a specific moment is: “Convert the float number −18.375 to IEEE754 and represent the result in hexadecimal”.
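The derivation of $rnd1 and $rnd2 from the hash can be sketched as follows (the bit positions and the mapping onto val1 are purely illustrative assumptions; each question uses its own mapping function):

```php
<?php
// Expand the 40 hexadecimal digits of the SHA-1 digest into a 160-bit string,
// then slice fixed-width fields out of it for each question parameter.
$seedString = '12345678Z|2021-06-10 09:00:00|150.214.40.1|10.0.0.2|secret';

$hashBits = '';
foreach (str_split(sha1($seedString)) as $hexDigit) {
    $hashBits .= str_pad(base_convert($hexDigit, 16, 2), 4, '0', STR_PAD_LEFT);
}

$rnd1 = bindec(substr($hashBits, 0, 5));        // 5-bit field: 0..31
$rnd2 = bindec(substr($hashBits, 5, 1));        // 1-bit field: 0 or 1

// Illustrative mapping: a fixed leading bit and a fixed fractional part keep
// all generated values at a comparable difficulty; the random bits go in between.
$val1Bits = '1' . str_pad(decbin($rnd1), 5, '0', STR_PAD_LEFT) . '.011';
$val2     = ($rnd2 === 1) ? 'hexadecimal' : 'binary';
```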

5.2. Implementation of simulators and right—answer calculation

It is clear that, when the number of students is high, it would be unfeasible for the teacher if every question had an answer that could not be calculated automatically. In this work, we have reimplemented around 12 of the 18 existing C# simulators (see Section 4.1) in the PHP language, to automatically obtain the answer to each statement of the exam (see Note 2). With these simulators, both the exam and the correct answer to every question are prepared on the server side.
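A minimal sketch of this server-side pairing of statement and answer is shown below (the variable names are illustrative; the real question templates and simulator ports are richer than this):

```php
<?php
// The statement text and its correct answer are generated together on the
// server, so correction needs no manual intervention.
$val1   = -18.375;                                    // derived from $rnd1 (Section 5.1)
$format = 'hexadecimal';                              // derived from $rnd2

$statement = "Convert the float number $val1 to IEEE754 and represent the result in $format";
$answer    = strtoupper(bin2hex(pack('G', $val1)));   // "C1930000", computed by the PHP simulator

// Both strings are kept server-side; in release mode the student sees only the statement.
```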

In the training mode, the answers can be seen by the student using a custom button labelled “reveal”, as can be seen in Figure 5.

Figure 5. Part of an exam question statement (in training mode). After each question, an element appears with the text “reveal” (red button arrow), which, when marked with the mouse, shows the corresponding answer (red top arrow). Also note that when you fill in any field, the submit button is highlighted (in a bright color).


In addition, both the training and release modes allow the student’s answer to be submitted to the database, where it is stored together with the correct answer, a time stamp, and the random base string (which adds redundancy and may be required for security) (Figure 6).

Figure 6. Each group of 3 or 4 questions is submitted independently, and several submissions can be made, all of which are saved in a database. The green button indicates a successful submission after any change in any field. A pop-up message advises the proper storage of the information in the database.


Finally, a third operating mode (feedback mode) is incorporated into this tool. It automatically evaluates each section by comparing the given and correct answers according to similarity criteria established by the teacher, greatly facilitating the grading work. Since the student can submit multiple solutions, the evaluation uses the last one delivered.
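How such a similarity criterion might be implemented is sketched below (the normalization and the threshold are assumptions made for illustration; the actual criteria are configured per question by the teacher):

```php
<?php
// Automatic evaluation of one answer field: ignore case and whitespace,
// then give full credit if the similarity exceeds a configurable threshold.
function scoreAnswer(string $given, string $correct, float $threshold = 0.9): float {
    $g = strtolower(preg_replace('/\s+/', '', $given));
    $c = strtolower(preg_replace('/\s+/', '', $correct));
    similar_text($g, $c, $percent);                    // similarity as a percentage
    return ($percent / 100.0) >= $threshold ? 1.0 : 0.0;
}

echo scoreAnswer('c193 0000', 'C1930000');   // prints 1
```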

Regarding the possibility that students use the simulators on which this system is based as additional help during the exam, we understand that the skill and time needed to transfer specific questions to the simulators, check the results, and copy the obtained answers back into the examination test would make this feasible only for very outstanding students.

6. Results and discussion

In this section, we present a self-assessment of the learning platform used in recent years in teaching Computer Architecture. We also highlight the exceptional circumstances experienced in the 2020 academic year and the results obtained during this period. However, any statistical conclusion derived from this last period must be handled carefully when compared with other academic years. Data from the roughly 109 students who took the exam showed that they all passed, with an average grade 0.6 points higher than the previous year. However, it is impossible to determine whether this improvement is due to the exam’s new format or to the absence of cheating. It was found that students answered, on average, around four complete exams in the training modality, in addition to those they could take without handing in (using the “reveal answer” option). And while it is nearly impossible to determine whether they could have cheated during the actual exam, anonymous inquiries among their peers indicate that “it was almost impossible because all the exams were so different, and there was hardly any time.”

An analysis of a more extended period, from 2017 to 2022, shows the proposal’s effectiveness. The CASIUM tool was not available in the first three years of that period; during those years, the average grade of the students was 6.9 points out of a maximum of 10 in 2017, 7.2 in 2018, and 7.0 in 2019. Once the tool was implemented, the average grade increased to 7.6 in 2020, 7.8 in 2021, and 7.7 in 2022. Although the number of students differed in each academic year, the results are statistically comparable.

Another notable aspect of the comparison with previous results is that the system was not yet fully implemented in 2020: only two modules were available, so the remaining topics had to be evaluated traditionally. In addition, the assessment was carried out online during the lockdown period. In 2021, although the teaching was face-to-face, the evaluation was still online. Normality was recovered in 2022, when teaching and evaluation were entirely face-to-face.

The most crucial aspect of the proposed evaluation method lies in the enormous number of test combinations, which makes it practically impossible for two randomly chosen tests to show any similarity. Compared, for instance, with a Moodle scenario in which 20 questions are selected randomly, each from a pool of ten alternatives, the proposed system offers any number of questions with an average of 256 combinations per question, whose answers are automatically calculated; with 20 such questions this already yields 256^20 = 2^160, around 10^48 different exams. Furthermore, the number of combinations can grow without increasing the complexity, so the only way to learn the answers is to know the solution procedure.

In conclusion, this article shows how it is possible to design an exam with countless variants that facilitate evaluation and training in complicated academic circumstances, such as those experienced in 2020, through subject-specific simulators.

The tool presented here offers a unique and innovative approach to creating exams that students worldwide can easily access. The versatility of the programming languages used and the existing equations and simulators available in PHP/JavaScript make it easier and more efficient for teachers to generate questions and answers for their exams.

Moreover, the modular structure of the code makes it easily extensible to many subjects as long as a programmable result can be obtained from sets of input parameters. The ability for users to incorporate their routines provides a high degree of customization and flexibility.

This system does not depend on any platform, whether paid or freely accessible, despite the limitations that these may have. Additionally, it does not use any special user-interface (GUI) to create new questions and exams. Instead, any module implementing a mathematical function can be used as a source to generate new questions.

It is even possible for any teacher without knowledge of PHP to use the new artificial intelligence tools to program their questions and obtain solutions. But above all, it offers students the opportunity to self-assess as often as necessary, receiving automatic feedback on their progress.

These factors make this proposal an attractive option for teachers looking to create exams that are accessible to a global audience.

Acknowledgments

We want to thank all the teachers and students of the Computer Fundamentals subject for their willingness to collaborate in the development of this work, and especially those who, with their valuable suggestions, have contributed to improving the system. We also want to thank the Department of Computer Architecture of the University of Malaga for facilitating virtual teaching through financial and human resources. And, of course, many thanks to the University for financing this work through the Educational Innovation Project PIE19-096 (CASIUM) and the INNOVA22 program that belongs to the UMA Internal Teaching Plan.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The work was supported by the Spanish National Government (MINECO) [PID2019-105396RB-I00]; Universidad de Málaga [PIE19-096, PIE22-099].

Notes on contributors

Felipe Romero

The CASIUM group, an acronym for Computer Architecture Simulators, was established in 2018 with professors and students from 4 Spanish universities (Málaga, Seville, Granada and Almería), with the aim of developing computer applications that support teaching in the area of Architecture and Computer Technology. From 2018 to 2020, they developed about 40 applications for Windows, many of them available in the Microsoft Store. Due to the circumstances caused by the COVID-19 pandemic, the research has been reoriented to online teaching and assessment through the programming of web applications using languages such as PHP and SQL. The group is led by Drs. Luis F. Romero and Gerardo Bandera, professors at the University of Malaga and authors of numerous publications on High Performance Computing. The CASIUM group has participated in four teaching innovation projects with funding from the University of Malaga and the Junta de Andalucía.

Notes

1. While the SHA1 sequence is not, in fact, a random number, this term will be used throughout the paper for clarity.

2. It is clear that this enormous translation and implementation effort of C# objects to PHP is only rewarded if an exam is designed in a way that can be reused in many future examinations. In fact, after programming an exam for every chapter of the course, it can be considered that we have an examination tool that can be reused for decades.

References

  • ACM/IEEE-CS Joint Task Force on Computing Curricula. (2013, December). Computer science curricula 2013 (Tech. Rep.). ACM Press and IEEE Computer Society Press. https://doi.org/10.1145/2534860
  • Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? The Journal of Economic Education, 37(1), 21–37. https://doi.org/10.3200/JECE.37.1.21-37
  • Basilaia, G., & Kvavadze, D. (2020). Transition to online education in schools during a SARS-CoV-2 coronavirus (COVID-19) pandemic in Georgia. Pedagogical Research, 5(4), em0060. https://doi.org/10.29333/pr/7937
  • Blackboard Inc. (2010). Blackboard Learn 9.1. http://www.blackboard.com/
  • Broemer, E., & Recktenwald, G. (2021). Cheating and Chegg: A retrospective. 2021 ASEE Virtual Annual Conference Content Access, Long Beach, USA. https://doi.org/10.18260/1-2–36792
  • Burton, W. B., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1), 58–69. https://link.springer.com/article/10.1007/s12528-012-9053-3
  • Candy, P. C., Crebert, G., & O’Leary, J. (1994). Developing lifelong learners through undergraduate education. National Board of Employment, Education and Training, Australian Government Publishing Service.
  • Instructure Inc. (2011). Canvas LMS and the Instructure Learning Platform. https://canvas.instructure.com/
  • Carini, R. M., Hayek, J. C., Kuh, G. D., Kennedy, J. M., & Ouimet, J. A. (2003). College students responses to web and paper surveys: Does mode matter? Research in Higher Education, 44(1), 1–19. https://doi.org/10.1023/A:1021363527731
  • Chegg Inc. (2005). Chegg Study Pack. https://www.chegg.com/study-pack
  • Dendir, S., & Maxwell, R. S. (2020). Cheating in online courses: Evidence from online proctoring. Computers in Human Behaviour Reports, 2, 100033. https://doi.org/10.1016/j.chbr.2020.100033
  • Donovan, J., Mader, C., & Shinsky, J. (2006). Constructive student feedback: Online vs. traditional course evaluations. Journal of Interactive Online Learning, 5(5), 283–296.
  • Draves, W. A. (2002). Teaching online. Learning Resources Network (LERN).
  • Gamliel, E., & Davidovitz, L. (2005). Online versus traditional teaching evaluation: Mode can matter. Assessment & Evaluation in Higher Education, 30(6), 581–592. https://doi.org/10.1080/02602930500260647
  • George, M. L. (2020). Effective teaching and examination strategies for undergraduate learning during COVID-19 school restrictions. Journal of Educational Technology Systems, 49(1), 23–48. https://doi.org/10.1177/0047239520934017
  • Grün, B., & Zeileis, A. (2009). Automatic generation of exams in R. Journal of Statistical Software, 29(10), 1–14. https://doi.org/10.18637/jss.v029.i10
  • Hasan, N., & Bao, Y. (2020). Impact of “e-Learning crack-up” perception on psychological distress among college students during COVID-19 pandemic: A mediating role of “fear of academic year loss”. Children and Youth Services Review, 118, 105355. https://doi.org/10.1016/j.childyouth.2020.105355
  • Hmieleski, K., & Champagne, M. V. (2020). Plugging into course evaluation. The Technology Source.
  • Kasiar, J. B., Schroeder, S. L., & Holstad, S. G. (2002). Comparison of traditional and web-based course evaluation processes in a required, team-taught pharmacotherapy course. American Journal of Pharmaceutical Education, 66(3), 268–270.
  • Manoharan, S. (2019). Cheat-resistant multiple choice examinations using personalization. Computers and Education, 130, 139–151. https://doi.org/10.1016/j.compedu.2018.11.007
  • Manoharan, S. (2021). On individualized online assessments in STEM subjects. Proceeding of the IEEE International Conference on Engineering, Technology & Education (TALE), December, 2021, Wuhan, China. https://doi.org/10.1109/TALE52509.2021.9678631
  • Dougiamas, M. (2021). Moodle. https://moodle.org (visited 2021-05-31).
  • Medina-Herrera, L. M. and Miranda-Valenzuela, J. C. (2019). What kind of teacher achieves student engagement in a synchronous online model?. Proceedings of the IEEE Global Engineering Education Conference (EDUCON), Dubai, United Arab Emirates.
  • The Microsoft Store. (2021, June). URL (Windows only): ms-windows-store://publisher/?name=Luis F Romero (visited 2021-05-31).
  • Morrison, K. (2013). Online and paper evaluations of courses: A literature review and case study. Educational Research and Evaluation, 19(7), 585–604. https://doi.org/10.1080/13803611.2013.834608
  • Noorbehbahani, F., Mohammadi, A., & Aminazadeh, M. (2022). A systematic review of research on cheating in online exams from 2010 to 2021. Education and Information Technologies, 27(6), 8413–8460. https://doi.org/10.1007/s10639-022-10927-7
  • OLAT 7.6 – User manual. (2012). IT Services, Universität Zürich. http://www.olat.org/
  • ONYX Testsuite. (2014a). BPS Bildungsportal Sachsen GmbH. http://onyx.bps-system.de/
  • OpenOLAT 9.4 – User manual. (2014b). http://www.openolat.org
  • Poehlein, G. W. (1996). Universities and information technologies for instructional programmes: Issues and potential impacts. Technology Analysis & Strategic Management, 8(3), 283–290. https://doi.org/10.1080/09537329608524251
  • Romero, L. F., & Bandera, G. (2021). The CASIUM project. https://casium.uma.es
  • Smaill, C. (2006). The implementation and evaluation of OASIS, a web-based learning and assessment tool in electrical engineering. 36th ASEE/IEEE Frontiers in Education Conference. https://doi.org/10.1109/FIE.2006.322482
  • Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465–473. https://doi.org/10.1080/02602938.2010.545869
  • Volery, T., & Lord, D. (2000). Critical success factors in online education. International Journal of Education Management. https://www.emerald.com/insight/content/doi/10.1108/09513540010344731/full/html
  • Zeileis, A., Umlauf, N., & Leisch, F. (2014). Flexible generation of e-learning exams in R: Moodle quizzes, OLAT assessments, and beyond. Journal of Statistical Software, 58(1), 1–36. https://doi.org/10.18637/jss.v058.i01