
Academics’ perception of final examinations in engineering education

Received 13 Nov 2022, Accepted 09 Nov 2023, Published online: 26 Nov 2023

ABSTRACT

This study investigates academics’ perceptions of and experiences with traditional final examinations in engineering courses. An online survey of 40 academics was conducted to understand their beliefs and rationale for the use of final examinations in their teaching practices. The survey was followed by in-person interviews with 11 academics to clarify the outcomes of the online survey. The results indicated that most academics considered the final examination an effective and equitable way of assessing student knowledge and skills. However, the findings also showed that final examinations generally may not provide students with adequate feedback, and traditional final examinations may not be effective in testing the practical and soft skills required by industry.

1. Introduction

Final examinations (or exams) have arguably been the most common assessment item in engineering education, and have long been used to assess students’ knowledge, understanding, and problem-solving skills. However, there is a growing body of literature that questions the effectiveness of final examinations in testing student skills and understanding (Price et al. Citation2008; Williams Citation2014; Williams and Brennan Citation2004). For example, Price et al. (Citation2008) suggested a shift from summative to formative assessment, away from marks and grades towards evaluative feedback focused on intended learning outcomes, while Williams and Brennan (Citation2004) pointed out that well-constructed formative feedback has high motivational value for enhancing learning. Although Dochy et al. (Citation1999), as well as Mills and Treagust (Citation2003), noted that the needs of today’s engineering graduates have shifted towards developing practical skills and industry experience, little has changed in assessment practices in engineering education.

The Griffith School of Engineering and Built Environment (EBE) has traditionally used final examinations administered by Griffith University to assess students’ knowledge and skills at the end of the semester. About ten years ago, there was a major shift in the School’s teaching practices towards experiential learning, especially for first-year courses (Li, Öchsner, and Hall Citation2019; Palmer and Hall Citation2011). As part of this transformation, final examinations in some courses were replaced with alternative assessments. This replacement instigated a series of in-person conversations and discussions about the effectiveness of final examinations and other types of assessment in engineering education. Driven by these changes, this study investigates academic perceptions of final examinations and compares them to the current literature on this topic.

2. Literature context

The following section reviews the literature related to the use of final examinations in higher education and discusses the existing practices and concerns regarding this type of assessment. In this review, the traditional final examination refers to a closed-book, invigilated, time-constrained examination offered at the end of the semester (Bloxham and Boyd Citation2007; Suskie Citation2018). The term alternative assessment usually refers to alternative ways of measuring student knowledge and skills, such as projects, oral presentations, essays, journals, and portfolios (Suskie Citation2018). In the literature, traditional final examinations are commonly referred to as high-stakes assessments because they carry a significant weighting towards the final grade (Franke Citation2018), while coursework such as quizzes, labs, or other low-weight assessments offered throughout the course is considered low-stakes assessment (Gibbs and Lucas Citation1997).

Traditional final examinations have historically been used to assess student knowledge, and educators and students have long been accustomed to this practice. Current literature indicates the following appealing aspects of final examinations for students:

  • Students seem to prefer written examinations (especially with multiple-choice questions) because they have a good understanding of what they need to produce to successfully pass this assessment (Ben-Chaim and Zoller Citation1997; Traub and MacRury Citation1990; Van de Watering et al. Citation2008).

  • Compared to alternative assessment methods such as projects and oral presentations, the final examination may be more ‘convenient’ for students, especially for those who already carry a substantial load of class activities. For such students, it may be challenging to complete several large projects during a semester, so a single 2-hour final examination at the end of the semester may be the preferable option. Howell et al. (Citation2020) compared student satisfaction in experiential courses (without a final examination) and courses with a final examination in the first year of an engineering degree and noted that the courses with final examinations generally received better student satisfaction scores.

Teachers use final examinations for several reasons, including:

  • Invigilated examinations are perceived as a fair evaluation of student performance, and they limit the opportunity for students to plagiarise compared to other assessment items which may not be supervised by the teacher (Wilkinson Citation2009).

  • Final examinations generally have a large impact on grades, and thus they draw particular attention from students (Goorts Citation2020). Even if students do not engage in course activities, they will still study for the final examination to receive at least a passing grade so that they can complete the course. Tuckman (Citation1998) noted that students will study for final examinations, but they may not necessarily keep up with regular assigned activities within the course.

  • Compared to alternative assessment methods, final examinations seem to be less time-consuming and less labour-intensive to implement (Gratchev and Jeng Citation2018; Kandlbinder Citation2007). In addition, Flores et al. (Citation2015) noted that for some specific courses or larger classes, final examinations can still be the preferred method of assessing student knowledge and understanding.

Despite the wide use of traditional final examinations in higher education, the literature has also identified major concerns, which are discussed below.

  • High-stakes summative assessments such as final examinations are commonly associated with the ‘backwash’ effect, whereby students focus only on what will gain them higher grades (Biggs Citation1999). Caspersen et al. (Citation2017) argued that grades and tests may not directly inform teachers about the quality of student learning, while Struyven et al. (Citation2003) noted that traditional examinations lead students to learn only for the assessment rather than to retain and build on the knowledge gained. Furthermore, Chansarkar and Raut‐Roy (Citation1987) studied the performance of undergraduate students under different methods of assessment and concluded that student performance was worse under examinations.

  • Final examinations generally do not provide sufficient feedback that students can process and use to refine their learning strategies for future situations (Knight Citation2002; Price, Handley, and Millar Citation2011; Sendziuk Citation2010). Carless et al. (Citation2011) and Williams (Citation2014) noted that when an assessment is given at the end of the semester, there is limited scope for students to apply insights from the teacher’s comments. Scoles et al. (Citation2013) suggested the use of exemplars to close the feedback gap for examinations. However, this practice raises concerns among teachers as it may encourage a ‘cutting corners’ approach, a spoon-feeding effect, or memorisation of the examination questions.

  • Students who undertake projects and/or oral presentations with more formative feedback tend to view such formative assessments as a fairer and more effective process than students who are assessed by more traditional methods such as examinations or written tests (Flores et al. Citation2015). For such alternative assessments, formative feedback seems to give students a better understanding of the subject (Sadler Citation1989, Citation1998), and it may improve students’ performance as well (Collett, Gyles, and Hrasky Citation2007; Sly Citation1999).

  • It appears that, in some cases, final examinations may be ineffective at evaluating certain types of outcomes (Knight Citation2002), soft skills (Flores et al. Citation2015), or attributes such as lifelong learning skills, teamwork skills, the ability to handle diversity and sustainability, hard work, and good organisation (Parsons Citation2007). Therefore, alternative assessments should be used depending on the learning outcome being assessed.

  • The literature indicates that ‘examination stress’ or ‘test anxiety’ can not only affect student performance but also have detrimental effects on well-being and lead to negative health outcomes (Putwain Citation2007; Roome and Soan Citation2019). Sung et al. (Citation2016) argued that the performance of highly test-anxious students was consistently lower than that of low test-anxious students of the same ability. It is possible that the cognitive capacity of highly test-anxious students is impaired by stress, reducing their working memory (Beilock Citation2008) and leaving them feeling less prepared (Chamberlain, Daly, and Spalding Citation2011).

3. Research questions

Most previous studies have analysed student perceptions and student performance in final examinations. There appears to be limited research into academics’ perceptions of traditional final examinations, and their beliefs and rationales for the use of this type of assessment. Accordingly, this study seeks to bridge this gap by exploring the opinions of engineering academic staff. The research questions were as follows:

  1. What are the perceptions of academic staff on the use of final examinations as an effective tool to assess student knowledge, understanding, and skills?

  2. What are the major concerns held by academics regarding the use of final examinations?

  3. What type of course do academics believe would be the most suitable for the use of final examinations?

It is noted that this study does not aim to establish the best assessment practice; it only provides a view of academics’ perceptions of traditional final examinations (limited to academics from the Griffith School of EBE) in the context of the current literature. The authors have used both traditional examinations and alternative assessments (instead of final examinations) in their teaching practices, and their selection of assessment type has been guided by several factors, including the type of course, learning outcomes, course content, and its structure. The authors believe that both final examinations and alternative assessments, when designed to assess the students’ knowledge, understanding, and skills required by the course learning objectives, can be successfully used in teaching practices.

4. Methods

This study consisted of an anonymous online questionnaire, followed by a series of in-person interviews. Although no incentives were offered for participation, the academics who completed the survey or participated in the interviews seemed to be intrinsically motivated to express their opinions on final examinations and alternative types of assessment. This study was conducted in accordance with ethical standards managed by the Griffith University Ethics Committee (Ref No: 2021/459).

4.1. Online survey

The study started with a survey to explore how academics perceive final examinations. An invitation to complete the online questionnaire was emailed to all academic staff (82 academics) from the School of Engineering and Built Environment at Griffith University and also circulated to academics in engineering schools at four other universities in Australia.

The survey was open for a period of four weeks over July and August 2021 and was designed using Microsoft Forms™ to collect data on the following dimensions: perceptions of examination fairness and effectiveness, the importance of feedback, and perceptions of alternative assessments. The questionnaire included the set of statements shown in Table 3, for each of which respondents could select a response from Strongly Agree to Strongly Disagree. There were also two multiple-choice questions regarding assessment practices used in courses, and one open-ended question that prompted participants to reflect on their experience with the final examination or other assessment approaches.

Table 1. Demographics of survey respondents (N = 40).

A total of 40 responses were received, and all 40 respondents completed all the survey questions. In addition, 27 out of 40 academics provided additional comments in the open-ended question on the use of final examinations or alternative assessment items.

4.2. Interviews

To gain a better understanding of the survey results, a series of semi-structured in-person interviews (Magaldi and Berler Citation2020) was conducted. The interview questions were designed to explore perceptions of the use of final examinations for different engineering courses and class sizes; the importance of final examinations in providing students with feedback; and the effectiveness of final examinations in relation to industry practices. Each interviewee was asked a few questions to start a conversation, and their responses were recorded. The questions were as follows:

  1. What kind of courses or subjects do you think are best suited to using final examinations?

  2. Do you think final examinations assist in preparing students for the industry?

  3. Do you think final examinations are an effective way to provide feedback to students?

  4. Do students ever approach you after final examinations to receive feedback?

All the interviewees were based at the School of Engineering and Built Environment at Griffith University and were selected based on the following factors: 1) willingness to participate; 2) representation of the different engineering disciplines; and 3) representation of different academic levels. It is noted that the authors and the interviewees did not share any interests or prior common understanding of the discussed topics that could have influenced the outcome of the interviews.

The interviews were conducted individually either in person or via Teams. The length of interviews varied from 15 minutes to 30 minutes, depending on how much the participant was willing to contribute. All answers were noted and confirmed with the interviewees.

4.3. Data analysis

The results of the online survey were analysed by grouping positive and negative responses to better understand the academics’ perception of final examinations. The open-response items, including the data from the in-person interviews, were analysed to identify the recurrence and co-occurrence of keywords or themes that reflected the participants’ opinions on final examinations and alternative types of assessment.
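The analysis scripts are not published with this article; purely as an illustration, the short Python sketch below (using hypothetical responses and an illustrative keyword list, not the study’s data) shows how Likert responses of this kind could be collapsed into positive/negative/neutral percentages and how keyword recurrence could be tallied across open-ended comments.

from collections import Counter

# Hypothetical Likert responses to one survey statement (illustrative only).
responses = [
    "Strongly agree", "Agree", "Neutral", "Disagree", "Agree",
    "Strongly disagree", "Agree", "Neutral", "Strongly agree", "Agree",
]

POSITIVE = {"Agree", "Strongly agree"}
NEGATIVE = {"Disagree", "Strongly disagree"}

def group_likert(answers):
    """Collapse Likert answers into positive/negative/neutral percentages."""
    total = len(answers)
    pos = sum(a in POSITIVE for a in answers)
    neg = sum(a in NEGATIVE for a in answers)
    return {
        "positive_%": round(100 * pos / total, 1),
        "negative_%": round(100 * neg / total, 1),
        "neutral_%": round(100 * (total - pos - neg) / total, 1),
    }

# Hypothetical open-ended comments and an illustrative keyword list.
comments = [
    "Exams are fair but give students little feedback.",
    "Marking projects for a large class is too time-consuming.",
    "Feedback after the final exam comes too late to be useful.",
]
keywords = ["feedback", "fair", "time-consuming", "stress"]

def keyword_recurrence(texts, terms):
    """Count how many comments mention each keyword (simple recurrence tally)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for term in terms:
            if term in lowered:
                counts[term] += 1
    return counts

print(group_likert(responses))               # e.g. 60.0% positive, 20.0% negative, 20.0% neutral
print(keyword_recurrence(comments, keywords))  # e.g. feedback: 2, fair: 1, time-consuming: 1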

5. Participants

5.1. Online survey

There were 40 responses to the survey, with 29 participants from Griffith University (72.5%), 7 from James Cook University (17.5%), and 1 each (2.5%) from the Queensland University of Technology, The University of Queensland, the University of Tasmania, and the University of the Sunshine Coast. The demographics of the participants are shown in Table 1. Most of the participants were mid-career or senior academics, which is consistent with their long teaching experience (at least 10 years of teaching).

5.2. Interviews

Interviews were conducted either face-to-face or via Teams with eleven academics of different ages and academic positions. The details of these academics are given in Table 2.

Table 2. Demographics of interview respondents (N = 11).

6. Results and discussion

The results of the survey are summarised in Table 3, where the responses are grouped as positive (Agree or Strongly agree) and negative (Disagree or Strongly disagree) and presented as a percentage of the total responses. The percentage of ‘neutral’ responses is not shown but can be readily estimated from Table 3 when necessary.
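As an illustration of this estimate (with hypothetical figures, not values taken from Table 3): a statement with 77.5% positive and 10% negative responses would have a neutral share of 100% − 77.5% − 10% = 12.5%.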

Table 3. Results of academic survey.

From this survey, the following key points regarding the academics’ perception of final examinations can be drawn:

6.1. Integrity

77.5% of participants agreed or strongly agreed that final examinations provided an equitable way of assessing students. The academics’ comments suggested that final examinations provided a supervised, controlled environment in which each student worked on their task without the assistance of others, and thus cheating/plagiarism was minimised. This examination setup made it possible to assess the knowledge and understanding of each student. The quotations below are from responses to the online survey.

Final exams are one of the few times we can ensure with integrity that a student is demonstrating particular skills.

Exam is the only way to ensure student learning and avoid cheating.

Given the issues of plagiarism, there should still be some form of invigilated tests to assess students’ understanding of fundamental concepts.

The final exam is a good way to assess individual learning.

The integrity of assessment items has been a topic of concern among academics. The literature suggests that invigilated examinations limit cheating among students compared to other assessment types that allow students to seek help from others (Parsons Citation2007). Unfortunately, cheating has become an even greater problem since many assessment items, including examinations, moved to online delivery (Harmon and Lambrinos Citation2008), especially in the past two years due to COVID restrictions (Slade et al. Citation2022). As the final examination is typically heavily weighted and often determines the student’s final mark or grade, it seems logical that teachers wish to ensure that they can still assess each individual student’s knowledge and skills.

6.2. Class size

The comments provided by the participants suggested that final examinations tended to work well when the class size was relatively large, in contrast to alternative assessments such as projects, where marking could be time-consuming and labour-intensive. The quotations below are from responses to the online survey.

In a class of > 100 students, there is no easy way of assessment, given all the other tasks academics need to do.

Exams are easier (therefore cheaper) to mark than papers.

I think an independent, applied project tied to an oral presentation and report would be the most effective way to assess student knowledge. The problem is that some of my courses have more than 130 students in them … . it would take me more than 3 weeks of full-time marking to mark all of the assessments.

The only reason I use a final exam is because of the class size. Final exam is among the few methods that are manageable with the current ‘budget’.

The literature suggests that alternative methods of assessment such as projects (Gratchev and Jeng Citation2018), portfolios (Vigeant Citation2021), or oral presentations (Akimov and Malin Citation2020) may take significantly more time, in preparation and/or marking, compared to traditional final examinations. Considering the relatively high academic workloads, it is understandable that the majority of the respondents preferred final examinations over other assessments.

6.3. Effectiveness

The majority of respondents agreed that final examinations were effective in assessing student knowledge (77.5%) and that final examinations encouraged students to study (82.5%). The quotations below are from responses to the online survey.

In my opinion, a properly designed final exam may motivate/excite students to learn.

[The Final exam] also encourages them to study, dependent on the weighting of the exam/if it is a hurdle.

The literature indicates that students pay a high level of attention to the final examination as it is a high-stakes assessment. Hattingh et al. (Citation2019), who surveyed more than 200 students, noted that many students prioritise the content of the final examination; however, they only study what will be covered in this assessment, while some students admit to memorising the content without understanding it. According to Case and Marshall (Citation2004), when approaching examinations, a significant portion of students tend to adopt a surface approach to learning.

Although students agree that they have a better understanding of the material after the examination, they also quickly forget it after the assessment has passed (Hattingh, Dison, and Woollacott Citation2019). This poses a question as to whether final examinations help students retain any knowledge. The current survey also showed that only 47.5% of the respondents believed that final examinations helped students retain long-term knowledge.

when I was a student, I was capable of doing very well on exams even when I knew my grasp of the material was shallow/limited, and I didn’t retain much of what I crammed.

6.4. Test anxiety

90% of respondents acknowledged that students might feel anxiety about the final examination, which could affect their academic performance. However, there was also a belief that this could benefit students, as it provided them with opportunities to develop strategies for coping with such anxiety. The quotations below are from responses to the online survey.

With regard to student stress impacting performance, part of the education process should be how to help students manage performing under stress. Not that they will be delivering their best performance, but that they can provide a sufficiently adequate performance.

Traditional final examinations (i.e. short, closed-book examinations with high weighting) have long been associated with stress and anxiety that may affect student performance (Roney and Woods Citation2003; Trifoni and Shahini Citation2011). An extensive course load, studying all night before the examination, and students’ failure to allocate sufficient time to their studies have been reported as the main contributing factors to anxiety (Khoshhal et al. Citation2017). It is not surprising that some students consider the final examination an unfair type of assessment (Hattingh, Dison, and Woollacott Citation2019) when their final grade for the whole course depends on how they perform on the day of the examination.

6.5. Feedback to students

Only 40% of academics agreed that final examinations were effective in providing feedback. The comments were mostly related to 1) the timing of final examinations, i.e. at the end of the term, when it seems too late to provide students with feedback, and 2) the fact that students were not interested in receiving any feedback after they had successfully passed the final examination. The quotations below are from the interviews.

After the final exam, they [students] don’t come back to review and get feedback (Participant 10).

When the course is finished, if the students have passed, students don’t come for feedback regarding their marks unless they are high achievers with unusually low marks. I don’t think they really look for feedback at the end (Participant 9).

In a sense, the grade does give them some feedback (Participant 2).

Exams tend to happen at the end of a trimester or semester, so it is too late to provide any feedback for them to correct behaviours, or it’s very difficult to correct behaviour (Participant 8).

The literature generally considers end-of-semester examinations as summative assessments (or assessments of learning) because students do not usually receive formative feedback after the final examination (Parsons Citation2007). Hattingh et al. (Citation2019) reported that students find feedback valuable as it facilitates their understanding; however, only a very small number of students receive feedback on their examinations. The results of this study suggest that although the interviewed academics seem willing to discuss the final examination results with their students, many students do not wish to do so. This can be due to the following factors: a) student feedback literacy (Carless and Boud Citation2018; Gratchev Citation2023), whereby students may not be interested in receiving feedback once they are satisfied with their mark; b) as the final examination is offered at the end of the semester, students may feel that they cannot use this feedback to improve their performance in the course (Carless et al. Citation2011); and c) students may feel intimidated about approaching the lecturer after the end of the term to review their final examination work, or they may not want to bother their lecturer.

Interestingly, some academics viewed final examinations more as a ‘feedforward’ tool than as an assessment that gives feedback afterwards.

I think they [final exams] are an effective way when a student arrives at the exam and does the exam, it’s fairly reasonable for them to know if they’ve understood the content on the exam or not before the mark came. There is an element of feedback just by participating in that (Participant 8).

Feeding forward by using exemplars may be a solution to the feedback issue, as it allows students to take control of the feedback process. Exemplars can be authentic student work from previous cohorts or teacher-constructed examples based on the instructor’s experience with issues that students typically deal with (Sadler Citation2010). Scoles et al. (Citation2013) found that students who engaged with the exemplars adopted a deep approach to learning, and they also showed better academic performance in the final examination (Handley and Williams Citation2011).

6.6. Practical skills and industry relevance

Less than half of the respondents (45%) believed that final examinations prepared students for industry, while only 32% agreed that final examinations were effective in assessing students’ practical skills.

I think final exams are not an authentic way that is replicated in a workplace environment (Participant 8).

I don’t think there are many situations in the industry that parallel the exam experience. I don’t think the engineering industry has many situations where you get 1 hour to solve a problem, but you are not allowed to get help from anybody (Participant 7).

I don’t think final exams help with critical knowledge for industry because exams are not designed for long-term retention of that knowledge (Participant 9).

However, several respondents felt that when properly designed, final examinations could still be used as an effective tool to test students’ practical skills and their ability to solve real-world engineering problems.

I use a partially open-book examination as it models closer to real-life engineering situations (Online survey).

Exams put students under some controlled stress environment where they have to recollect procedures for solving problems that might be suitable to some workplaces (Participant 8).

When students go into the industry then they should be able to understand the language that the industry person is talking … and they need to have a thorough knowledge of the basics. that actually comes only from the testing or assessing the students based on these exams (Participant 5).

Interestingly, a few respondents mentioned that they used open-book examinations, which, according to the literature, seem to provide a means of assessing not only student knowledge but also students’ understanding of how to apply this knowledge (Parsons Citation2007). Open-book examinations are perceived as a step towards student-centred and constructivist learning (Williams and Wong Citation2009), and they tend to encourage creative thinking (Theophilides and Dionysiou Citation1996). Additionally, open-book examinations have benefits for students because many students consider them less threatening than traditional closed-book examinations (Myyry and Joutsenvirta Citation2015). According to Anaya et al. (Citation2010), open-book examinations use a format that closely resembles a realistic work environment, where analysing and evaluating information allows students to solve problems creatively, leading to the development of critical thinking. Although open-book examinations provide an alternative to traditional closed-book examinations, they may not be as popular because, according to Shine et al. (Citation2004), significant effort is required to prepare such an examination, especially at a time when academic teaching loads are high.

6.7. Assessments and course content

When final examinations are not appropriate for assessing certain skills or attributes linked to the course learning outcomes, other methods of student assessment can be used. The comments provided by the respondents in the online survey and interviews indicated that many academics used a variety of formative and summative assessment items such as projects, quizzes, labs, and/or presentations during a term. These coursework assessments, compared to the final examination, were commonly used to assess student skills that might be difficult or not technically possible to assess during a 2-hour final examination.

An attempt was made to determine for what type of course the traditional final examination would be most suitable. Many academics believed that final examinations would be most suitable for a course with a lot of technical content (including mathematics), or theoretical content that needs to be tested.

For theoretical and analytical courses, a final exam is still the most effective way of assessing the student’s understanding of concepts and their applications in solving real-world problems (Online survey).

I think heavily mathematical-based courses, certainly not design ones (Participant 8).

Final exams are good when you have a heavier theoretical course (Participant 9).

To test the concepts … .or understand the theory behind the application (Participant 4).

7. Limitations

A few limitations in this study may affect the generalisability of the findings to different populations:

  • The online survey and interviews were primarily conducted among engineering academics at Griffith University. Although the results agree to some extent with the international literature, they mostly represent the opinions of academics from the Griffith School of Engineering and Built Environment (EBE). In addition, as there were only 40 responses, a wider survey with more responses would provide a deeper understanding of academics’ perceptions of final examinations.

  • Historically, final examinations in the Griffith School of EBE have been invigilated, short-duration (2–3 hour) examinations offered to students after the end of the semester. The data obtained, presented, and discussed in this study mostly relate to this type of examination.

  • The survey questions were designed to collect the participants’ opinions on the concerns regarding final examinations that were identified during the literature review. For this reason, the survey questions were mostly related to feedback, stress, anxiety, and the practical importance of traditional final examinations.

  • It was not possible to establish any correlations between the type of course, course content, and the use of final examinations in the online survey, as the respondents were from different disciplines and taught different content, which could be mostly theoretical, practical, or a combination of both. Future research into this area is recommended.

8. Concluding remarks

Based on the survey responses from a range of Australian academics and interviews with academics at Griffith University, the following conclusions regarding academics’ perceptions of traditional final examinations can be drawn:

  • Final examinations can provide an effective and equitable way of assessing student knowledge and understanding. Most participants believed that invigilated final examinations minimised cheating and could be an effective assessment item for those courses that focused on technical/mathematical content.

  • Traditional final examinations may not be effective in testing students’ practical skills or soft skills, and they may not be effective in preparing students for practical tasks required by industry.

  • Final examinations may not be effective in providing students with feedback because they are generally offered at the end of the term when it may be too late to change anything. However, this can also be due to student unwillingness to seek feedback on their final examination performance, especially when they are satisfied with their mark.

  • Many participants admitted that test anxiety could affect student performance, but they also believed that if students could learn to control this anxiety, they might find this useful in their later careers.

Acknowledgments

This study was carried out in accordance with ethical approval (2021/459) from the Griffith University Human Research Ethics Committee.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Ivan Gratchev

Dr. Ivan Gratchev is an Associate Professor at the School of Engineering & Built Environment, Griffith University, Australia. His research interests are in geotechnical engineering and engineering education. During his academic career, Dr. Gratchev has taught several geotechnical courses (including soil mechanics, rock mechanics, and geotechnical engineering practice) using a project-based approach. His teaching achievements have been recognised by his peers and students through several learning and teaching citations and awards. Dr. Gratchev has written the textbooks “Soil Mechanics through Project-Based Learning” and “Rock Mechanics through Project-Based Learning”, which are popular learning tools among students.

Simon Howell

Simon Howell is a Lecturer at the School of Engineering & Built Environment, Griffith University, Australia. Simon is the first-year coordinator for Engineering students on the Gold Coast, and he also manages the professional practice and employability skills stream at the School of Engineering and Built Environment. Simon is focused on inspiring students about their future careers, and he believes in developing the next generation of engineers and designers through hands-on projects. Simon works to engage with industry to source site visits for students, as well as to assist industry to promote graduate and internship opportunities to the student cohort.

Sascha Stegen

Dr. Sascha Stegen is a Senior Lecturer at the School of Engineering & Built Environment, Griffith University, Australia. He has over 26 years of experience in the electrical industry and specialises in the integration of solar, wind, bio, and geothermal energy into the electricity grid as well as off-grid systems, electromobility, and wireless charging. Sascha is the Program Director for the Master of Electronics and Energy Engineering as well as the Program Director for the Master of Electronic and Communication Engineering.

References

  • Akimov, A., and M. Malin. 2020. “When Old Becomes New: A Case Study of Oral Examination as an Online Assessment Tool.” Assessment & Evaluation in Higher Education 45 (8): 1205–1221. https://doi.org/10.1080/02602938.2020.1730301.
  • Anaya, L., N. Evangelopoulos, and U. Lawani 2010. “Open Book Vs. Closed Book Testing: An Experimental Comparison.” In 2010 Annual Conference & Exposition, Louisville, Kentucky, 15–929.
  • Beilock, S. L. 2008. “Math Performance in Stressful Situations.” Current Directions in Psychological Science 17 (5): 339–343. https://doi.org/10.1111/j.1467-8721.2008.00602.x.
  • Ben-Chaim, D., and U. Zoller. 1997. “Examination-Type Preferences of Secondary School Students and Their Teachers in the Science Disciplines.” Instructional Science 25 (5): 347–367. https://doi.org/10.1023/A:1002919422429.
  • Biggs, J. 1999. “What the Student Does: Teaching for Enhanced Learning.” Higher Education Research & Development 18 (1): 57–75. https://doi.org/10.1080/0729436990180105.
  • Bloxham, S., and P. Boyd. 2007. Developing Effective Assessment in Higher Education: A Practical Guide: A Practical Guide. England: Open University Press.
  • Carless, D., and D. Boud. 2018. “The Development of Student Feedback Literacy: Enabling Uptake of Feedback.” Assessment & Evaluation in Higher Education 43 (8): 1315–1325. https://doi.org/10.1080/02602938.2018.1463354.
  • Carless, D., D. Salter, M. Yang, and J. Lam. 2011. “Developing Sustainable Feedback Practices.” Studies in Higher Education 36 (4): 395–407. https://doi.org/10.1080/03075071003642449.
  • Case, J., and D. Marshall. 2004. “Between Deep and Surface: Procedural Approaches to Learning in Engineering Education Contexts.” Studies in Higher Education 29 (5): 605–615. https://doi.org/10.1080/0307507042000261571.
  • Caspersen, J., J. C. Smeby, and P. Olaf Aamodt. 2017. “Measuring Learning Outcomes.” European Journal of Education 52 (1): 20–30. https://doi.org/10.1111/ejed.12205.
  • Chamberlain, S., A. L. Daly, and V. Spalding. 2011. “The Fear Factor: Students’ Experiences of Test Anxiety When Taking A-Level Examinations.” Pastoral Care in Education 29 (3): 193–205. https://doi.org/10.1080/02643944.2011.599856.
  • Chansarkar, B. A., and U. Raut‐Roy. 1987. “Student Performance Under Different Assessment Situations.” Assessment and Evaluation in Higher Education 12 (2): 115–122. https://doi.org/10.1080/0260293870120204.
  • Collett, P., N. Gyles, and S. Hrasky. 2007. “Optional Formative Assessment and Class Attendance: Their Impact on Student Performance.” Global Perspectives on Accounting Education 4:41. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=6df1b13b6de651a7062aaa7c0da3cb74c3dcf329.
  • Dochy, F. J. R. C., M. Segers, and D. Sluijsmans. 1999. “The Use of Self-, Peer and Co-Assessment in Higher Education: A Review.” Studies in Higher Education 24 (3): 331–350. https://doi.org/10.1080/03075079912331379935.
  • Flores, M. A., A. M. Veiga Simão, A. Barros, and D. Pereira. 2015. “Perceptions of Effectiveness, Fairness and Feedback of Assessment Methods: A Study in Higher Education.” Studies in Higher Education 40 (9): 1523–1534. https://doi.org/10.1080/03075079.2014.881348.
  • Franke, M. 2018. “Final Exam Weighting as Part of Course Design.” Teaching & Learning Inquiry 6 (1): 91–103. https://doi.org/10.20343/teachlearninqu.6.1.9.
  • Gibbs, G., and L. Lucas. 1997. “Coursework Assessment, Class Size and Student Performance: 1984‐94.” Journal of Further and Higher Education 21 (2): 183–192. https://doi.org/10.1080/0309877970210204.
  • Goorts, K. 2020. “Replacing Final Exams with Open-Ended Course Projects in Engineering Education.” Teaching Innovation Projects 9 (1): 1–18. https://doi.org/10.5206/tips.v9i1.10328.
  • Gratchev, I. 2023. “Replacing Exams with Project-Based Assessment: Analysis of Students’ Performance and Experience.” Education Sciences 13 (4): 408. https://doi.org/10.3390/educsci13040408.
  • Gratchev, I., and D. S. Jeng. 2018. “Introducing a Project-Based Assignment in a Traditionally Taught Engineering Course.” European Journal of Engineering Education 43 (5): 788–799. https://doi.org/10.1080/03043797.2018.1441264.
  • Handley, K., and L. Williams. 2011. “From Copying to Learning: Using Exemplars to Engage Students with Assessment Criteria and Feedback.” Assessment & Evaluation in Higher Education 36 (1): 95–108. https://doi.org/10.1080/02602930903201669.
  • Harmon, O. R., and J. Lambrinos. 2008. “Are Online Exams an Invitation to Cheat?” The Journal of Economic Education 39 (2): 116–125. https://doi.org/10.3200/JECE.39.2.116-125.
  • Hattingh, T., L. Dison, and L. Woollacott. 2019. “Student Learning Behaviours Around Assessments.” Australasian Journal of Engineering Education 24 (1): 14–24. https://doi.org/10.1080/22054952.2019.1570641.
  • Howell, S., W. Hall, and N. Emerson. 2020. “Implementing an Integrated First-Year Engineering Curriculum with Mixed Teaching Approaches.” ASEAN Journal of Engineering Education 4 (1): 1–10. https://doi.org/10.11113/ajee2020.4n1.13.
  • Kandlbinder, P. 2007. “Writing About Practice for Future Learning.” In Rethinking Assessment in Higher Education, edited by D. Boud, and N. Falchikov, 169–176. London: Routledge.
  • Khoshhal, K. I., G. A. Khairy, S. Y. Guraya, and S. S. Guraya. 2017. “Exam Anxiety in the Undergraduate Medical Students of Taibah University.” Medical Teacher 39 (sup1): S22–S26. https://doi.org/10.1080/0142159X.2016.1254749.
  • Knight, P. T. 2002. “Summative Assessment in Higher Education: Practices in Disarray.” Studies in Higher Education 27 (3): 275–286. https://doi.org/10.1080/03075070220000662.
  • Li, H., A. Öchsner, and W. Hall. 2019. “Application of Experiential Learning to Improve Student Engagement and Experience in a Mechanical Engineering Course.” European Journal of Engineering Education 44 (3): 283–293. https://doi.org/10.1080/03043797.2017.1402864.
  • Magaldi, D., and M. Berler. 2020. “Semi-Structured Interviews.” In Encyclopedia of Personality and Individual Differences, edited by Virgil Zeigler-Hill, and Todd K. Shackelford, 4825–4830. https://link.springer.com/referenceworkentry/10.1007/978-3-319-24612-3_857.
  • Mills, J. E., and D. F. Treagust. 2003. “Engineering Education—Is Problem-Based or Project-Based Learning the Answer.” Australasian Journal of Engineering Education 3 (2): 2–16.
  • Myyry, L., and T. Joutsenvirta. 2015. “Open-Book, Open-Web Online Examinations: Developing Examination Practices to Support University students’ Learning and Self-Efficacy.” Active Learning in Higher Education 16 (2): 119–132. https://doi.org/10.1177/1469787415574053.
  • Palmer, S., and W. Hall. 2011. “An Evaluation of a Project-Based Learning Initiative in Engineering Education.” European Journal of Engineering Education 36 (4): 357–365. https://doi.org/10.1080/03043797.2011.593095.
  • Parsons, D. 2007. “Encouraging Learning Through External Engineering Assessment.” Australasian Journal of Engineering Education 13 (2): 21–30. https://doi.org/10.1080/22054952.2007.11464003.
  • Price, M., K. Handley, and J. Millar. 2011. “Feedback: Focusing Attention on Engagement.” Studies in Higher Education 36 (8): 879–896. https://doi.org/10.1080/03075079.2010.483513.
  • Price, M., B. O’Donovan, C. Rust, and J. Carroll. 2008. “Assessment Standards: A Manifesto for Change.” Brookes eJournal of Learning and Teaching 2 (3): 1–2.
  • Putwain, D. 2007. “Researching Academic Stress and Anxiety in Students: Some Methodological Considerations.” British Educational Research Journal 33 (2): 207–219. https://doi.org/10.1080/01411920701208258.
  • Roney, S. D., and D. R. Woods. 2003. “Ideas to minimize exam anxiety.” Journal of Engineering Education 92 (3): 249–256. https://doi.org/10.1002/j.2168-9830.2003.tb00765.x.
  • Roome, T., and C. A. Soan. 2019. “GCSE Exam Stress: Student Perceptions of the Effects on Wellbeing and Performance.” Pastoral Care in Education 37 (4): 297–315. https://doi.org/10.1080/02643944.2019.1665091.
  • Sadler, D. R. 1989. “Formative Assessment and the Design of Instructional Systems.” Instructional Science 18 (2): 119–144. https://doi.org/10.1007/BF00117714.
  • Sadler, D. R. 1998. “Formative Assessment: Revisiting the Territory.” Assessment in Education Principles, Policy & Practice 5 (1): 77–84. https://doi.org/10.1080/0969595980050104.
  • Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in Higher Education 35 (5): 535–550. https://doi.org/10.1080/02602930903541015.
  • Scoles, J., M. Huxham, and J. McArthur. 2013. “No Longer Exempt from Good Practice: Using Exemplars to Close the Feedback Gap for Exams.” Assessment & Evaluation in Higher Education 38 (6): 631–645. https://doi.org/10.1080/02602938.2012.674485.
  • Sendziuk, P. 2010. “Sink or Swim? Improving Student Learning Through Feedback and Self-Assessment.” International Journal of Teaching and Learning in Higher Education 22 (3): 320–330.
  • Shine, S., C. Kiravu, and J. Astley. 2004. “In defence of open-book engineering degree examinations.” International Journal of Mechanical Engineering Education 32 (3): 197–211. https://doi.org/10.7227/IJMEE.32.3.2.
  • Slade, C., G. Lawrie, N. Taptamat, E. Browne, K. Sheppard, and K. E. Matthews. 2022. “Insights into How Academics Reframed Their Assessment During a Pandemic: Disciplinary Variation and Assessment as Afterthought.” Assessment & Evaluation in Higher Education 47 (4): 588–605. https://doi.org/10.1080/02602938.2021.1933379.
  • Sly, L. 1999. “Practice Tests as Formative Assessment Improve Student Performance on Computer‐Managed Learning Assessments.” Assessment & Evaluation in Higher Education 24 (3): 339–343. https://doi.org/10.1080/0260293990240307.
  • Struyven, K., F. Dochy, and S. Janssens 2003. “Students’ Perceptions about New Modes of Assessment in Higher Education: A Review.” In Optimising New Modes of Assessment: In Search of Qualities and Standards. Innovation and Change in Professional Education, edited by M. Segers, F. Dochy, and E. Cascallar, Vol. 1. Dordrecht: Springer. https://doi.org/10.1007/0-306-48125-1_8
  • Sung, Y. T., T. Y. Chao, and F. L. Tseng. 2016. “Reexamining the Relationship Between Test Anxiety and Learning Achievement: An Individual-Differences Perspective.” Contemporary Educational Psychology 46:241–252. https://doi.org/10.1016/j.cedpsych.2016.07.001.
  • Suskie, L. 2018. Assessing Student Learning: A Common Sense Guide. San Francisco: John Wiley & Sons.
  • Theophilides, C., and O. Dionysiou. 1996. “The Major Functions of the Open-Book Examination at the University Level: A Factor Analytic Study.” Studies in Educational Evaluation 22 (2): 157–170. https://doi.org/10.1016/0191-491X(96)00009-0.
  • Traub, R. E., and K. A. MacRury. 1990. Multiple-Choice Vs. Free-Response in the Testing of Scholastic Achievement. Ontario: Ontario Institute for Studies in Education.
  • Trifoni, A., and M. Shahini. 2011. “How Does Exam Anxiety Affect the Performance of University Students.” Mediterranean Journal of Social Sciences 2 (2): 93–100.
  • Tuckman, B. W. 1998. “Using Tests as an Incentive to Motivate Procrastinators to Study.” The Journal of Experimental Education 66 (2): 141–147. https://doi.org/10.1080/00220979809601400.
  • Van de Watering, G., D. Gijbels, F. Dochy, and J. Van der Rijt. 2008. “Students’ Assessment Preferences, Perceptions of Assessment and Their Relationships to Study Results.” Higher Education 56 (6): 645. https://doi.org/10.1007/s10734-008-9116-6.
  • Vigeant, M. 2021. “A Portfolio Replacement for a Traditional Final Exam in Thermodynamics.” Education for Chemical Engineers 35:1–6. https://doi.org/10.1016/j.ece.2020.11.010.
  • Wilkinson, J. 2009. “Staff and Student Perceptions of Plagiarism and Cheating.” International Journal of Teaching and Learning in Higher Education 20 (2): 98–105.
  • Williams, P. 2014. “Squaring the Circle: A New Alternative to Alternative-Assessment.” Teaching in Higher Education 19 (5): 565–577. https://doi.org/10.1080/13562517.2014.882894.
  • Williams, R., and J. Brennan. 2004. Collecting and Using Student Feedback Data: A Guide to Good Practice. York, UK: Higher Education Academy. http://www.heacademy.ac.uk/resources/detail/CollecAbstract.
  • Williams, J. B., and A. Wong. 2009. “The Efficacy of Final Examinations: A Comparative Study of Closed‐Book, Invigilated Exams and Open‐Book, Open‐Web Exams.” British Journal of Educational Technology 40 (2): 227–236. https://doi.org/10.1111/j.1467-8535.2008.00929.x.