Editorial

Evidence and evaluation in learning technology research

Pages 1-4 | Published online: 27 Jan 2017

This issue sees the change of name of the journal from ALT-J, Research in Learning Technology to Research in Learning Technology – The Journal of the Association for Learning Technology. It might seem a small change in the reordering of the title and subtitle of the journal, but it will require a commitment from all of us involved in the journal to change our habits of referring to it simply as ‘ALT-J’. The new title reflects our growing recognition of the importance of research in informing learning technology practice and the development of policy. The change in title also reflects our understanding of the community who produce and read such research. We hope that the self-explanatory Research in Learning Technology title better represents the aim of the journal to publish papers from a broad inter-disciplinary field, which encompasses all sectors of education and industry. Our intention is to be more inclusive to current and potential authors, reviewers and readers across the world.

As we have debated and prepared for this title change, it is the needs of this broad readership that have preoccupied us. There is currently a focus on ‘evidence’ within this community, and a plethora of evaluative studies is becoming available as the field tries to respond to calls from governments, funders and institutions for evidence-informed practice. Learning technology researchers have an important role to play here and we are keen that the journal provides a forum where researchers can share their evidence, open it up to peer review and use the review process and subsequent publication, discussion and citation to develop their ideas. The journal plays a key role in the Association for Learning Technology's aim to raise the profile of research in learning technology. More widely, we hope that the journal, and the research community it represents, contributes to the effective use of learning technology.

With this in mind, how does this issue help us to assess what kinds of evaluative evidence are needed and how we can go about producing it? Clearly, we need to be able to produce convincing evidence arising from evaluative studies with explicitly stated research questions and appropriately selected methodology. In addition, evaluating learning technology often has its own challenges, from the common situation of working in multi-disciplinary and multi-national teams (McAndrew, Taylor and Clow 2010) to the perennial but inexplicable comparison of online to face-to-face teaching (Bethel and Bernard 2010).

The first paper in this issue, Zhen Li's study of e-learning community, demonstrates how we might respond to some of the challenges above. Li uses Archer's (2007) account of reflexivity as the interplay between the causal powers of structural and cultural properties and the subjects themselves to analyse how two groups of learners experienced the formation of learning communities within their online programs. There is a clear perspective for the data collection and analysis: “It should be kept in mind that it is their – the learners' – communities that we hope will emerge and it is therefore important to examine their experiences from their own perspectives”. Archer encourages us to look at people's responses to environmental structures, and we must do so through their eyes. What Li finds is that there are striking differences between the student experiences in the two programs. Contrary to expectations, learning communities emerged in the course underpinned by a didactic, content-driven pedagogy, not in the course led by constructivist design principles. Li shows that these findings can be understood by applying Archer's notions of structure and agency to an examination of the student responses to the different course pedagogies, set within the broader social and cultural structures of the Chinese context.

For those looking for ‘evidence’, this small-scale study may not seem relevant, as it involves just over 300 learners in two professional development programs and in a Chinese context. However, for me this paper demonstrates the importance of research that prioritizes the learner's perspective; of theoretically informed research; and of careful, open-minded analysis by the researcher. Reading this paper should change how we look at our own data and challenge us to think about how we interpret it. As a field, we need to be clear about the kinds of evaluative research we value and use the publication channels we have available to showcase them.

The second paper is also a report of a small-scale evaluative study. Ming Nie and her colleagues report on how an innovative use of learning technology transformed the educational experience. They examined the impact of providing e-books preloaded with course materials on the study patterns of work-based distance learners. Taking an action research approach allowed them to consider the opportunities and challenges of introducing the technology as experienced by the staff and students involved over time. For anyone interested in, or being asked to provide, evidence of “what works”, this study exemplifies the benefit of having clear evaluation questions. The study aims to assess the extent to which “these devices could enhance flexibility in curriculum delivery to better accommodate the needs and demands of highly mobile work-based learners”. They found that students both valued and made use of having their course resources to hand, allowing these time-poor students to make better use of their study time by studying more in public places, on the move and without internet access. For colleagues looking for ways of improving the flexibility and convenience of study materials, this paper provides some evidence that e-books are at least worth a second look.

Next, Macgregor, Spiers and Taylor report on what seems to be a similar evaluation of an e-learning innovation – audio feedback. Students who received audio feedback felt it was more detailed and easier to understand than did students in the written feedback group. In addition, staff spent less time producing the audio feedback than the written feedback. Having established student perceptions, the evaluation then tackles a more ambitious question. It is explicitly foregrounded with a statement about why it was conducted: “Anecdotal evidence gathered by a number of evaluations has hypothesised that audio feedback may be capable of enhancing student learning more than other approaches”. The aim here is to provide evidence that is more convincing than the anecdotes. As we so often see in educational research, despite positive student feedback, the quasi-experimental design provides no statistical support for an association between audio feedback and improvements in learning outcomes.

These first three papers are more than just descriptions of practice. As Norris and Lefrere point out in their paper, such accounts of experience are now more commonly shared on the web, and in the informal communities of practice that have emerged there, than in peer-reviewed journals. Rather, the papers here ask tough evaluative questions – questions which preoccupy much of our current search for evidence. They ask “how can the use of technology transform the educational experience?” and “which technologies are useful in which contexts?”

Studies such as these add to a growing corpus of knowledge but are not, on their own, enough. A further challenge is how to pull together studies from a rapidly growing field. What do we need to make sense of all these evaluative studies? As Bethel and Bernard (2010) have proposed in relation to distance education, we need to understand how to choose models for synthesizing our research. They arrange existing models on a continuum from systematic to purposeful and give some guidance to researchers choosing a methodology suitable for their synthesis. Synthesizing research evidence is particularly challenging in our field. Learning technology research is interdisciplinary, and that brings with it a wide range of research approaches and designs. The field is growing rapidly, with more studies being published, and we are being asked to provide recommendations and guidance for practice and policy almost as soon as the latest technology has been made available and the first implementation studies conducted.

Alongside evaluative studies, and the synthesis and interpretation of them, I think we need challenging conversations. Research in Learning Technology aims to be a place to provoke conversations within our community. For example, Norris and Lefrere in this issue challenge us to consider the state of the development of online practices, presenting us with a model and examples onto which we can map our own institution's evolution of online learning. They are frank about the role that technology can play in reducing the costs of education to both institutions and their students. They present us with a range of ways of describing how institutions are using technology to transform their infrastructure, staffing and provision. They challenge us to reconsider how we conceive of higher education fundamentally – emphasizing the role of the learner in selecting their own learning experiences, resources and certification that meet their current needs from those available. This paper contributes to the ongoing conversation about how institutions can make use of technology to respond “nimbly” to a changing environment.

In the final paper in this issue, Borovik presents a provocative assessment of how the needs of the discipline are served – or not – by learning technology. Borovik reminds us not to accept current wisdom on the adoption of ICT in university level teaching, but to assess its suitability for our own context. Using the example of the teaching of mathematics, Borovik explains the ways in which e-learning has failed to meet the expectations of this community and provides a wish list of what is needed now. Such a direct challenge to the use of virtual learning environments within a “one size fits all” approach is a welcome addition to the journal. It is written explicitly for an audience outside of mathematics and I hope it provokes conversations within your own disciplines about what such a paper might look like from your own perspective.

This issue contributes a mix of practical evaluations and provocative conversations to our field in a way which, I think, suits Research in Learning Technology well. This collection of papers adds to our understanding of how to conduct and synthesize evaluative research in ways which help us to meet our aims of informing practice and policy. It also demonstrates the need for space within the journal in which we can debate how technology can serve our needs, whether that is the needs of the sector in difficult financial times or the needs of our discipline. It is good to have these conversations and I hope you all find something in this issue that not just informs but also provokes you.

References

  • Archer M. Making our way through the world: Human reflexivity and social mobility. New York: Cambridge University Press, 2007.
  • Bethel E.C., Bernard R.M. Developments and trends in synthesizing diverse forms of evidence: Beyond comparisons between distance education and classroom instruction. Distance Education 2010; 31(3): 231–56.
  • McAndrew P., Taylor J., Clow D. Facing the challenge in evaluating technology use in mobile environments. Open Learning 2010; 25(3): 233–49.