
Evaluation Of Student Engagement With Peer Feedback Based On Student-Generated MCQs

Pages 27-37 | Published online: 15 Dec 2015

Abstract

In this paper, we describe the usage patterns observed on four different modules and analyse the detailed outcomes of two case studies based on the use of the PeerWise system, which encourages students to create multiple choice questions (MCQs) for their peers and allows them to evaluate and comment on MCQs written by their peers. The case studies evaluated data collected from using PeerWise with different student cohorts taking the same modules over two consecutive academic years. Between the years, interventions were introduced to attempt to increase student engagement. Although increased levels of participation were observed in cohorts on different modules, the authors are aware that other factors can have a strong influence, including the awarding of marks for participation, the year of study and the students' perceived value of the activity. The evaluation suggests that the early and specific interventions applied did influence the pattern of student usage by increasing student engagement with the PeerWise system.

1. Introduction

Technological innovations over the last twenty years have seen the use of computer-based training (CBT) programs in education evolve into full-blown virtual learning environments (VLEs). These systems offer a range of ways in which staff and students can communicate within a course, e.g. discussion fora, chatrooms and quizzes. Chickering and Gamson (1987) suggested that course design should encourage learners to take an active role in the construction of their own learning, Bandura (1977) discussed the impact of social learning, and Vygotsky (1978) described his ideas of collaborative learning in terms of social interaction involving a community of learners and instructors. A common theme in all of these ideas is that learners should acquire and share experiences. According to Collis & Moonen (2006) and Hamer (2006), a Contributing Student Pedagogy (CSP) is an approach which encourages students to be active contributors to the learning experiences of themselves and others, and to value the contributions of others. The use of student-generated multiple choice questions (MCQs) for learning is one example of this approach, and has a wide range of documented benefits (Fellenz, 2004), including the development of a deeper understanding of the subject content, a shift from acquiring knowledge to using knowledge, and a sense of ownership of the subject content. Wickersham and McGhee (2008) also suggest that this deeper learning is evidenced when learners do not simply regurgitate information but reflect on it to produce knowledge.

PeerWise (Denny et al., 2008) is one such CSP system which can be used to help students develop a deeper understanding of the subject being learned. It is a web-based software application, developed within the Computer Science department at the University of Auckland, that allows students to contribute questions, answers, comments and ratings for any of the questions placed within the system. It has been used on many computing programmes around the world, as it provides a mechanism to build up an online repository of MCQs to which students have contributed and which, over time, can become an invaluable teaching asset for both staff and students.

The authors of the PeerWise system assert that asking students to write MCQs, and to provide appropriate explanations of their answers, provides a richer and deeper learning experience than simply answering practice questions provided by staff. Yu (2009) suggests that the quality of engagement with CSP can be improved by scaffolding, for example by providing generic question stems with sample questions and by supporting two-way discussions between question authors and responders.

In this work we describe our experience of working with PeerWise over a number of years on a range of modules, with both large and small class sizes, from first to fourth year. After examining the data collected at the end of the first year, we noted that although students had added plenty of questions and answered many of them, their usage of the feedback aspects of the system, providing explanations to questions and commenting on questions, was low to non-existent for many. On the whole, student usage has been positive, but it has highlighted issues which can arise with some students, including uncertainty about how to go about writing questions, lack of engagement beyond simply answering questions and low participation rates in certain student cohorts. Our assertion is that although the students could clearly use the system with little or no help, even the more mature student cohorts required a guiding hand to help them see the value in providing explanations and comments on the questions within the system. Asking students to write explanations for their own questions and to comment on their peers' questions and answers offers them an opportunity for greater understanding of the topics being studied. We therefore decided to see if we could bring about a change in student participation, particularly with respect to commenting on other students' questions and answers, by providing additional scaffolding as an intervention intended to bring about this change of working practice for the benefit of the students.

This paper presents the evaluation of our intervention of providing additional early guidance in our modules. Section 2 identifies related work in this field of study, and the PeerWise usage on four modules is described in section 3. The results of this usage are statistically analysed in section 4, a general discussion and future work are presented in section 5, and our conclusions are presented in section 6.

2. Related Work

The authors of PeerWise (Denny et al., 2009a, 2009b) have published studies focussing on specific aspects of student use of the system, including topic coverage and question quality. Purchase et al. (2010) report that a question repository of acceptable quality was generated by students with no guidance from instructors. They suggest that focused guidance may improve the quality of questions, but can place constraints on peer judgements so that students no longer had ownership of their contributions. Barak and Rafaeli (2004) describe the use of their Questions Sharing and Interactive Assignments (QSIA) system for supporting student-generated questions, and state that their students were not given instruction in how to evaluate their peers' contributions and were expected to generate their own rules and criteria.

In contrast, other authors choose to provide specific guidance and scaffolding. Fellenz (2004) provided students with an introduction to Bloom's taxonomy and guidance on designing multiple choice items through examples, extensive tutorials and discussions. Students were given a quite prescriptive specification for the structure of their questions (stem, a set number of distracters and an explanation), and the students' submissions were on paper rather than supported by technology. Nicol (2007) comments that the work of Fellenz is aimed at producing very high-quality questions, but takes the view that the focus should be on the learning process, not the output, and that it is not necessary for students to produce extremely high-quality tests.

Wu and Yu (2009) describe an "Integrative Model" in which students were involved in regular question-generating activities following instruction and received regular staff feedback on their questions. They compared learners' perceived task value for students who were given minimal instruction and for students following the integrative model. They comment that perceived task value influences engagement in a learning task and concluded that a systematic learning process leads students to observe the advantages of the introduced technique. Yu (2009) describes a system which has been developed to provide specific forms of scaffolding for the use of student-generated MCQs. For example, the system can provide generic question stems with sample questions, access to model questions and support for two-way discussions between question authors and responders/assessors.

Liaw, Chen and Huang (2008) reported that web-based collaborative learning systems allow learners more opportunities to get involved. The impact of Web 2.0 on Higher Education is not completely understood, although studies of the use of collaborative Web 2.0 tools are well underway, e.g. Minocha (2009) and Trentin (2009) with wikis, Kerawalla et al. (2009) with blogs, and Cann (2008) and Jucevičienė (2010) with social networking tools. With recent advances in mobile technologies and the falling cost of such devices, affordability has increased the debate about their potential use in an educational context. The use of mobile technology has become practically ubiquitous amongst students in higher education, with Harley et al. (2010) describing it as "the dominant mode of communication" amongst this population. The availability of collaboration and communication tools and devices provides flexibility for students to get involved at any time or place, challenging the restricted access of the traditional classroom. Many of our younger learners, typified as digital natives by Prensky (2001), appear perfectly able and willing to use a variety of different technologies and social software applications with little or no introductory material. Minimal guidance approaches have been reviewed by Kirschner et al. (2006), cited by Hamer (2006), as those in which learners, rather than being presented with essential information, must discover or construct essential information for themselves. PeerWise is one such system, giving each student the opportunity to integrate what they have discovered about the topic being researched into a community of practice (Wenger, 1998). Kirschner et al. argue that there should be support for "strong instructional guidance" when using these systems, and Kawase et al. (2010) showed a link between the quality and amount of annotations that students provide in such systems and overall student performance.

3. PeerWise Usage

The learning activities described in this section were implemented across modules delivered during academic years 2009/10 and 2010/11 within the School of Engineering and Computing at Glasgow Caledonian University; see Figure 1 for details. These modules were chosen because they provide data from a range of differing cohort traits, including year of study, class size and programme subject area.

Figure 1 List of modules used in the evaluation.

In the first year of using the PeerWise system there was no specific coordinated approach to the delivery and expectations of the system. The students were given access to the system and a brief account of its purpose and basic usage; no further direction was given other than the criteria for the minimum participation required in order to gain credit.

In the second year, tutorials or group discussions within the modules were used to provoke dialogue on how to write "good" questions, or on how to evaluate questions. The aim was not to provide guidance on question-writing as such: there was no discussion of Bloom's taxonomy, for example. Rather, the intended outcome was that students would gain an understanding of the learning process which they could follow to benefit from the use of PeerWise, including answering questions, evaluating questions and receiving peer feedback. Typically, the activities were structured as follows:

Stage 1: Introduction session to the PeerWise system with allocation of user identifiers to allow access.

Stage 2: Students were given an initial set of multiple choice questions to attempt. The questions were designed to be of varied quality. Issues which were represented in the “poor” questions included: superficial content, multiple correct answers and poorly-designed distracters. However, the students were given no indication that this was anything other than a straightforward exercise.

Stage 3: A set of answers and explanations was given out and students were asked to look at these and discuss them. As was the case for the questions, these were designed to be of varied quality; issues included incorrect answers, ambiguous explanations and missing explanations. The instructor then initiated a whole-group discussion of the answers and explanations, during which it became apparent to the students that the provided answers and explanations were not necessarily correct or helpful. The discussion considered the feedback which might be provided to the author of each question, either through a majority choice of an answer different from the author's, or through comments made by respondents.

Stage 4: Students were asked to rate the questions, including the answers/explanations, on the same scale of 0 to 5 as used in PeerWise. These ratings were then discussed by the whole tutorial group. It was emphasized by the instructor that these ratings would provide valuable feedback to the question setter, and that it was important to justify ratings.

Stage 5: After the preparatory stages the students were expected to use the PeerWise system as specified by the work to be undertaken on each module.

Sections 3.1 to 3.4 that follow describe the specific activities carried out in each of the modules. A summary of the measures collected on PeerWise usage for the modules is shown in Figure 2. In addition to the basic data on the number of students and the total numbers of questions and answers submitted, the chosen measures include the total number of comments and the number of students who wrote comments, as well as the mean and variance of the comment counts. These are likely to be indicative metrics of engagement with the process of providing peer feedback; a sketch of how such measures might be derived from an activity export follows Figure 2.

Figure 2 PeerWise usage data.
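As an illustration of how the measures in Figure 2 can be derived, the short Python sketch below computes them from a hypothetical per-student activity export. This is a hedged sketch only: the file name and the column names ("questions", "answers", "comments") are assumptions for illustration and are not PeerWise's actual export format.

import pandas as pd

# Hypothetical export: one row per student with that student's activity counts.
# The file name and column names are illustrative assumptions, not the PeerWise format.
usage = pd.read_csv("peerwise_export.csv")

summary = {
    "students": len(usage),
    "total_questions": int(usage["questions"].sum()),
    "total_answers": int(usage["answers"].sum()),
    "total_comments": int(usage["comments"].sum()),
    "students_who_commented": int((usage["comments"] > 0).sum()),
    "mean_comments_per_student": usage["comments"].mean(),
    "variance_of_comments": usage["comments"].var(ddof=1),  # sample variance
}
print(summary)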

3.1 Tutorial support in Introduction to Database Development

PeerWise was included as a learning and assessment activity within an introductory databases module in sessions 2009/10 and 2010/11. The module was delivered over a short (6 week) timescale as part of a set of short introductory modules within a first year course which is common to all computing programmes. The main assessment instrument was a hand-in assignment (i.e. 100% coursework), but there was also an online MCQ test. To encourage engagement with PeerWise, a component of the overall module mark (10%) was awarded on the basis of that participation. Participation was required to be completed within weeks 2 to 5 of the module. To attain full credit, students were required to contribute at least 5 questions and answer 10 questions contributed by others. No formal requirement was placed on commenting on questions.

3.1.1 Learning activity

In the first delivery, students were given guidance on how to access the PeerWise system and directed to the PeerWise user guide for instructions on how to use the system, with no further guidance. In the second delivery, tutorials following the stages described in section 3 above were conducted in small-group (<20) sessions with the same instructor for each group. They took place in week 4, by which time students had been introduced to PeerWise and the requirement for them to use it had been explained. Figure 3 shows excerpts from the materials used in this tutorial; the content presented to students at each of the stages described in section 3 is shown for one question. The full activity presented a range of questions illustrating different issues in the design of the question, answer or explanation. Similar activities were used in the other modules.

Figure 3 Excerpts from learning activity support material in Introduction to Database Development.

3.1.2 Evaluation

Initial evaluation of the second delivery has focused on evidence of engagement in the learning process. We have not yet considered the content of the questions (see future work later), and a discussion of the question quality and topic coverage of the first delivery is provided in a previous paper by Paterson et al. (2012). Columns 2 & 3 of Figure 2 show data for this module, and the results suggest that the two cohorts were broadly similar in terms of overall engagement with PeerWise. The numbers of questions and answers created were broadly similar. A small number of students in both cases contributed more questions than required, and a similar majority of the students answered more questions than required.

Unlike the other measures, the number of comments and the number of students who wrote comments more than doubled. Denny et al. (2009a) used the total number of characters in comments as a measure, rather than simply the number of comments, to avoid giving equal emphasis to trivial and thoughtful comments. Review of the comment text showed that the range of depth of the comments was broadly similar between cohorts, and we have therefore used the number of comments as the basis for comparison. These measures suggest that the guidance provided in the tutorials for the second cohort has had a significant impact on engagement with the feedback aspect of the learning process (see section 4 for detailed analysis). However, the actual numbers are still very low, with the number of comments written accounting for approximately 5% of the number of answers submitted.

An important purpose of ratings and comments is to provide peer feedback to the question author. We currently do not have any measure of whether students "completed the circle" by looking at and learning from the feedback. In future deliveries we plan to introduce, as part of the assessment, an activity which requires students to reflect on peer feedback.

3.2 Tutorial support in Web Systems Development

PeerWise was included as a learning and assessment activity within a final year (Honours) Web Systems Development module in sessions 2009/10 and 2010/11. The cohort was small (<20) in each of these years. These students are assessed by an examination (50%) and a coursework (50%), and a component of the coursework mark (10%) was awarded on the basis of participation with PeerWise. A completion deadline of two weeks before the end of the module was in place to minimise interference with other hand-ins or exam preparation. To attain minimum credit, students were required to contribute at least 10 questions, with the clear understanding that minimum effort gets minimum marks and that their overall usage would be compared against each other's to gain additional marks. PeerWise has a facility, a Leaderboard, which allows users to see how they are performing in relation to other users of the system via a set of points that are automatically allocated for participation. Dourish and Belloti (1992) define this awareness as "an understanding of the activities of others, which provides a context for one's own activity".

3.2.1 Learning activity

In the first delivery the PeerWise system was introduced in a four-hour laboratory session in week 2, with a brief demonstration of how to use the system and an informal discussion of what was expected of the students. In the second delivery, again in week 2, the approach was broadly similar with an additional, specific intervention: the discussion in the introductory session was broadened to consider the benefit of effectively commenting on and grading the questions of peers. The focus was clearly on the collaborative nature of the students' work and their involvement with the PeerWise system, not specifically on the potentially competitive aspect of the system. Students were actively encouraged to explore the PeerWise system in the four-hour class.

3.2.2 Evaluation

Columns 4 & 5 of Figure 2 show data for this module. The overall level of engagement in both years was impressively high: despite the small cohorts, a substantial repository of questions was created each time. The number of answers and comments, and the proportion of students who wrote comments, suggest that these students are more open to engagement with the learning process than the first year students in the introductory module. This may be due to the greater maturity of final year students. It may also be a consequence of a competitive spirit among a small group of students who know each other and are motivated by the competitive element in the marking, supported by the leaderboard feature. However, the increase in the number of comments written by students in the second cohort appears striking, suggesting that the intervention in this case has encouraged these students to engage with the feedback aspect of the process.

A questionnaire with both open and closed question types was given to students upon completion of the PeerWise activity. It asked students to comment on their use of PeerWise and on the value they perceived in using such a system. Many students agreed that PeerWise was "easy to use", "would have been useful on other modules", and that it helped them to learn. On the negative side, some students commented that they were unsure whether the answers provided were always correct and that a lot of the questions were too "easy".

3.3 Tutorial support in Artificial Intelligence

PeerWise was included as a learning and assessment activity within a 3rd year (pre-Honours year) undergraduate Games Artificial Intelligence module delivered over a 12 week period during the session 2010/11. The cohort was small (<20). The main assessment instruments were an examination (70%) and a coursework (30%). To encourage engagement with PeerWise, a component of the coursework mark (10%) was awarded on the basis of that participation. To attain full credit, students were required to contribute at least 10 questions and show evidence of participation in rating and commenting on questions.

3.3.1 Learning activity

PeerWise was introduced within a tutorial following the stages described in section 3, in which students were asked to critically evaluate a set of five questions provided by the instructor. This led to discussion of the value of critical evaluation and peer feedback. The tutorial took place in week 8 of the module.

3.3.2 Evaluation

Column 6 of Figure 2 shows values for measures indicative of student engagement for this module. As this was the first time PeerWise had been used on this module there are no comparable data from previous years. Comparison with other modules suggests that the level of engagement with commenting is comparable with that found in the Introduction to Database Development module when a similar intervention was implemented, but not quite as high as in the Web Systems Development module.

3.4 Tutorial support in IT Project Management

PeerWise was included as a recommended revision tool within a new final year (Honours) IT Project Management module, with 24 students. The module was delivered over a 12 week period during the session 2010/11. The assessment instruments included a hand-in group assignment (24%), an individual assignment (6%) and a formal written exam (70%). The use of PeerWise was optional in this module, with no assessment marks assigned to students for their engagement with it.

3.4.1 Learning activity

PeerWise was introduced as an exam revision tool in week 8, following the stages described in section 3. Participation was encouraged by explaining the purpose of PeerWise at a scheduled lecture session and by posting instructions for accessing PeerWise on the VLE. In week 11 an exam revision tutorial included four example questions with appropriate solutions. These were discussed with the class, and students were once again reminded to participate in PeerWise and the potential benefits of its usage were highlighted.

3.4.2 Evaluation

The level of engagement with the PeerWise process was very low on this module and it is therefore not included in Figure 2. Only 4 out of 24 students submitted one or more questions, while 10 answered one or more questions. No comments were written at all. A questionnaire was distributed to students after completion of the module, asking them to comment on their use of PeerWise or barriers to its use. Comments included, "I had enough information off the lecture notes and off books and online resources, therefore I didn't require to access PeerWise", and, "…did not think it would be helpful". Of the 9 respondents to the questionnaire, 7 rated "not enough time" as the biggest barrier to using PeerWise. Interestingly, no students commented on the fact that there were no marks available for participation.

4. Statistical Analysis

Amongst other data collected from a variety of different cohorts of students (see Figure 2), the number of comments posted by students in response to other students' questions was collected for two independent populations of students over the academic years 2009/10 and 2010/11. The two years of data were collected from different cohorts of students studying on two different programmes, each at a different level of their education. All populations of students used the same PeerWise system as their main tool for contributing to the data being collected.

From these figures it appeared that the number of comments had increased markedly between the years, and a statistical analysis was carried out to determine whether this difference was significant.

4.1 Method

The statistical tests compared the number of comments provided by one population against the number provided by the second population, using a hypothesis test to determine whether there was a significant difference between the populations. As there is no pairing or matching between the populations in the two samples, this is a two independent sample problem. In this case, we write our null and alternative hypotheses as:

  • H0: µA = µB, i.e. there is no difference between the mean number of comments provided by population A and those provided by population B.

  • HA: µA ≠ µB, i.e. there is a difference between the mean number of comments provided by population A and those provided by population B.

It is customary to compare the variances of the two samples to ascertain whether an equal or unequal variance test is appropriate. Usually a ratio of less than 3 between sample variances would imply equal variances. In our data this was not the case, and so an unequal variance test, e.g. the Welch t-test, is appropriate. In using this parametric test we have assumed that the number of comments is normally distributed; if this is not the case, then a non-parametric test such as the Mann-Whitney test would be appropriate. A minimal sketch of how such a comparison can be run is given below.
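As a minimal sketch (not the analysis script actually used for this paper), the comparison described above could be carried out in Python with SciPy as follows. The comment counts shown are illustrative placeholders rather than the study data.

import numpy as np
from scipy import stats

# Illustrative placeholder data: comments per student in each cohort (not the study data).
comments_a = np.array([0, 0, 1, 2, 0, 3, 1, 0, 0, 2])   # population A (first year of delivery)
comments_b = np.array([1, 4, 2, 5, 0, 3, 6, 2, 1, 4])   # population B (second year of delivery)

# Rule of thumb from section 4.1: sample variances within a factor of ~3 suggest
# an equal-variance t-test; otherwise use the unequal-variance (Welch) t-test.
var_a, var_b = comments_a.var(ddof=1), comments_b.var(ddof=1)
equal_var = max(var_a, var_b) / min(var_a, var_b) < 3

# Parametric test: Student's t if variances are comparable, Welch otherwise.
t_stat, t_p = stats.ttest_ind(comments_a, comments_b, equal_var=equal_var)

# Non-parametric alternative if the counts cannot be assumed normally distributed.
u_stat, u_p = stats.mannwhitneyu(comments_a, comments_b, alternative="two-sided")

print(f"t = {t_stat:.3f}, p = {t_p:.4f} (equal_var={equal_var})")
print(f"U = {u_stat:.1f}, p = {u_p:.4f}")
# Reject H0 at the 5% level of significance if p < 0.05.

The same variance check and choice of test would be applied separately to each of the two case studies.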

4.2 Results

Based on our data from Figure 2 we calculated the following values:

Figure 4 Test statistic, Welch-Satterthwaite (v) values and p-values.

Figure 5 Test statistic, Mann-Whitney (U) and p-values.
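For reference, the Welch test statistic and the Welch-Satterthwaite approximation to the degrees of freedom v reported in Figure 4 take their standard forms, where x̄, s² and n denote the sample mean, sample variance and sample size of each population:

t = (x̄A − x̄B) / √(s²A/nA + s²B/nB)

v ≈ (s²A/nA + s²B/nB)² / [ (s²A/nA)²/(nA − 1) + (s²B/nB)²/(nB − 1) ]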

The p-values for both the equal variance and unequal variance tests are shown in Figure 4 above, while Figure 5 shows the p-values for the non-parametric equivalent test. The confidence level was taken to be 0.95 and, as the p-values are all below 0.05 irrespective of test type, there is evidence at the 5% level of significance to reject H0. We can therefore conclude that there is a statistically significant difference between the populations in both of our case studies.

5. Discussion And Future Work

In both of the case studies the data highlight a significant difference between the mean number of comments provided by population A and those provided by population B. However, a difference in the means between the years does not necessarily imply that the changes are purely due to the intervention techniques used; when working with students, one always has to be aware of the variability between cohorts. It does, however, suggest that this has been a successful intervention technique which should be repeated. Also, although the number of comments has improved significantly, our study says nothing about the nature of the comments submitted. Future evaluation of our data, e.g. comparing the length of the comments and the quality of the comments as ranked by staff and/or students, could be carried out. This would also allow us to consider whether students rate the same questions as highly or as poorly as the staff/experts do. It is likely that the quality of the questions is important if the students are to really learn anything from the system. This leads us to consider whether staff should actively deposit good questions in the system to start the students off, rather than beginning with an empty course.

In the AI module (section 3.3) we do not have any data from before the intervention/scaffolding process took place, and yet, interestingly, the data values are comparable to those captured after the intervention on the other two modules. This has encouraged us to consider the possibility that the type of student cohort may well be significant. Also of interest was the fact that the IT Project Management module (section 3.4) had very low participation rates; although no student specifically identified the lack of marks as a reason for the lack of engagement, it may be of value to evaluate this in a future study.

The chosen technology platform, PeerWise, worked well with no reported technical issues, and most students seemed perfectly comfortable using it with very little instruction. Clearly, a few students decided, for whatever reason, not to engage fully with the system even though marks were given for participation; perhaps they did not think it worth the effort to learn a new system for the marks available to them. Some students suggested that it could be useful to use PeerWise in other modules they were taking, while others argued that it would not be appropriate in all subject areas. Again, this leads to possible follow-on studies to evaluate what students think they gain or lose from participating in such systems. Do greater participation levels relate to student performance, and if so, on this module only or across the board? Another area of interest could be usage patterns, which might help staff spot failing or disengaging students early, or identify typical student behaviour, e.g. do students tend to spread their workload or cram everything in near the end of the assessment period, so that only the latest questions ever get answered?

On the staff side the system was fairly straightforward to use and seems to be worth the effort required to set up the user administration and to collect the usage data at the end. The success of such a system depends on the way in which students are motivated to collaborate with their peers. Their participation, by contributing questions towards a body of MCQs, provides an asset which could benefit other student cohorts on similar modules. The success of such a system may also depend on the level of staff involvement and their motivations for asking students to use such systems.

6. Conclusions

From our data it was observed that nearly all the students posted and answered questions with varying levels of success, irrespective of year of study, cohort or module. However, the posting of comments occurred at a very low to non-existent level in the first year of our PeerWise usage. It was decided to introduce additional support for our students (see section 3 for details) to encourage increased participation, specifically with respect to the feedback aspects of PeerWise.

The analysis of our data, presented in section 4, implies that our intervention in the usage of PeerWise produced a significant increase in the amount of participation that our students demonstrated, particularly in the number of comments written. Although no single intervention was applied uniformly across modules and by all staff, our results show that providing early additional guidance on what is expected from the system at the very beginning has improved student participation significantly on both modules for which we have two years of data. These early lessons about expectations and about how to author good questions have also been successful on the AI module, for which we have only one set of data.

Overall, a significant number of questions on a range of topics were contributed by students, building towards a repository of online MCQs. We believe that staff should view students as partners in helping to build a blended learning environment containing these resources. This may be a shift from the norm, but it can build into a sizeable asset for very little staff effort and can be reused on other similar modules. The primary purpose of this would not be the creation of a high-quality repository of questions and answers, but to encourage students to participate fully in the entirety of their own learning through peer support, personal reflection and critical thinking, ultimately helping to improve the quality of their overall learning experience.

The analysed data show that the early effort in clarifying our expectations of the students' work has been worthwhile. We can therefore conclude from our evaluation of student engagement with peer feedback based on student-generated MCQs that our interventions have been successful in improving the levels of student engagement in the area of peer support.

Acknowledgments

The original work was supported by a grant from the HE Academy General Development Fund. PeerWise was created at and is hosted by the University of Auckland, New Zealand, and the use of this facility is greatly appreciated. Thanks especially to Paul Denny for his valuable support and advice, to Steven Walters for practical advice with the statistical aspects of the paper and to all the students who used the PeerWise system.

References

  • Bandura A., (1977), Social Learning Theory, New York, General Learning Press
  • Barak M. & Rafaeli S., (2004), On-line question-posing and peer-assessment as means for web-based knowledge sharing in learning, International Journal of Human-Computer Studies 61(1), 84-103.
  • Cann A. J., (2008), Web 2.0 comes of age: disintermediation and the long tail in higher education.
  • Chickering AW. & Gamson Z.F., (1987), Seven Principles For Good Practice In Undergraduate Education, AAHE Bulletin, 3-7.
  • Collis B. & Moonen J., (2006), Engaged Learning with Emerging Technologies, Springer Netherlands, chapter The Contributing Student: Learners as Co-Developers of Learning Resources for Reuse in Web Environments, 49-67
  • Denny P., Luxton-Reilly A. & Hamer J., (2008), The PeerWise system of student contributed assessment questions, in 'ACE '08: Proceedings of the tenth conference on Australasian computing education', Australian Computer Society, Inc., Darlinghurst, Australia, 69-74
  • Denny P., Luxton-Reilly A. & Simon B., (2009a), Quality of Student Contributed Questions Using PeerWise, in Margaret Hamilton & Tony Clear, eds., 'Eleventh Australasian Computing Education Conference (ACE 2009)', ACS, Wellington, New Zealand, 45-53
  • Denny P., Luxton-Reilly A., Hamer J. & Purchase H., (2009b), Coverage of course topics in a student generated MCQ repository, in 'ITiCSE '09: Proceedings of the 14th annual ACM SIGCSE conference on Innovation and technology in computer science education', ACM, New York, NY, USA, 11-15
  • Dourish P. and Belloti V., (1992), Awareness and coordination in shared workspaces, In Proc. Computer Supported Cooperative Work (CSCW’92).
  • Fellenz M., (2004), Using assessment to support higher level learning: the multiple choice item development assignment, Assessment & Evaluation in Higher Education 29(6), 703-719
  • Hamer J., (2006), Some experiences with the “contributing student approach”, in ‘ITICSE ’06: Proceedings of the 11th annual SIGCSE conference on Innovation and technology in computer science education’, ACM, New York, NY, USA, 68-72
  • Harley D., Acord S.K., Earl-Novell S., Lawrence S. & King C.J., (2010), Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. University of California, Berkeley, CA: Center for Studies in Higher Education.
  • Jucevičienė P. G. V., (2010), A Conceptual Model of Social Networking in Higher Education., Electronics and electrical engineering, ISSN 1392-1215. No.6 (102)
  • Kawase R., Herder E. & Nejdl W., (2010), Annotations and Hypertrails with SpreadCrumbs: An Easy Way to Annotate, Refind and Share, in 'WEBIST 2010: Proceedings of the 6th International Conference on Web Information Systems and Technologies'
  • Kerawalla L., Minocha S., Kirkup G. & Conole G., (2009), An empirically grounded framework to guide blogging in higher education, Journal of Computer Assisted Learning, 25, 31-42.
  • Kirschner P.A., Sweller J. & Clark R.E., (2006), Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching, Educational Psychologist, 41(2), 75-86.
  • Liaw S.S., Chen G.D. & Huang H.M., (2008), Users' attitudes toward Web-based collaborative learning systems for knowledge management, Computers and Education, 50, 950-961
  • Minocha S, (2009), An empirically-grounded study on the effective use of social software in education, Education + Training, Vol. 51 Iss: 5/6, 381 - 394
  • Nicol D., (2007), ‘E-assessment by design: using multiple-choice tests to good effect’, Journal of Further and Higher Education 31(1), 53.
  • Paterson J.H., Wilson J.N. & Leimich P., (2012), Uses of Peer Assessment in Database Teaching and Learning, in Data Security & Security Data: 27th British National Conference on Databases, Lecture Notes in Computer Science, Vol. 6121, 135-146.
  • Prensky M., (2001), ‘Digital natives, digital immigrants’, On the Horizon 9(5). Lincoln: NCB University Press.
  • Purchase H., Hamer J., Denny P. & Luxton-Reilly A., (2010), The quality of a PeerWise MCQ repository, in 'Proceedings of the Twelfth Australasian Conference on Computing Education - Volume 103', Australian Computer Society, Inc., Darlinghurst, Australia, 137-146
  • Trentin G, (2009). Using a wiki to evaluate individual contribution to a collaborative learning project., Journal of Computer Assisted Learning, 25, 43-55.
  • Vygotsky L., (1978), Mind in society: the development of higher psychological processes, Cambridge, MA: Harvard University Press
  • Wickersham L.E. & McGhee P., (2008), Perceptions of satisfaction and deeper learning in an online course, The Quarterly Review of Distance Education, 9(1), 73-83
  • Wenger E., (1998), Communities of Practice: Learning, meaning, and identity, Cambridge University Press
  • Wu C-P & Yu F-Y., (2009), Changing Students’ Perceived Value and Use of Learning Approaches for Online Student-Generated Questions via an Integrative Model, in ‘Workshop Proceedings of The 17th International Conference on Computers in Education: ICCE 2009’, 30-34
  • Yu F.-Y., (2009), Scaffolding student-generated questions: Design and development of a customizable online learning system, Comput. Hum. Behav. 25(5), 1129-1138
