Report on Project Activity

Doing it your way: The Variation in, and Importance of, Personal Style in Teaching Quantitative Methods for University Social Science Students


Abstract

This paper reviews and discusses ideas and advice on quantitative methods teaching in the social sciences that were presented and debated in a series of workshops led by experienced university teachers of quantitative methods for social science students. Despite considerable similarity in course content for introductory statistics modules at the undergraduate level, there is often great freedom for teachers and so huge variation in how statistics is taught. This involves different approaches to theory, examples, practical exercises and so on. We argue that there is no single style that would be most effective for everyone, but instead it is important for teachers to teach in a style that suits them.

Introduction

Teaching any subject can be a daunting experience for new lecturers. Teaching statistics to social scientists is particularly difficult for many in UK social science departments because lecturers are relatively isolated, with little guidance from colleagues as to what to teach or how to teach it. In some ways this isolation means that those teaching quantitative methods to social scientists are left alone to choose what they want to do, albeit within the boundaries of the university's or department's course requirements. This freedom is valuable, allowing lecturers to decide for themselves what they want to do and how they want to do it.

This paper discusses how this freedom for university teachers of quantitative methods has produced a remarkable diversity in the nature and style of presentation of those courses. While it will always remain difficult to enthuse students who would rather be doing other things and who see little use of quantitative methods in their substantive courses, there are at least many ways in which to try. Moreover, as different approaches have proved successful, lecturers probably need not worry about trying to find some optimal way to construct their courses and do their teaching. Indeed both the approaches taken by very successful quantitative methods teachers and the advice that they give is to teach in a style that works for the teacher personally. This implies some recognition that different styles will suit different teachers, and indeed different students. Successful quantitative methods teaching, like other teaching, involves being sufficiently comfortable with what you are doing and how you are doing it to be able to relax and let your enthusiasm for the subject matter show. This helps the students relax and so helps them become enthusiastic too.

The paper first introduces the Oxford Quantitative Methods (QM) teachers’ workshops from which most of the material for this paper is drawn and then goes on to describe some of the key sources of variation between undergraduate social science introductory courses (such as depth, breadth and nature of the material covered). The paper then turns to a series of more specific topics including how to start courses, how to enthuse students, examples, exercises, the role of statistical theory and statistical literacy, the role of statistical computing, and how to deal with mixed ability groups. The paper concludes with some further discussion.

The workshops

The Oxford Quantitative Methods (QM) Teachers Project (Oxford Quantitative Methods Teachers Project 2012) is part of the ESRC's Curriculum Innovation and Researcher Development Initiative and arose out of the need for improvement in undergraduate quantitative methods teaching on most social science degrees, as documented by, among others, MacInnes (2009). The Oxford project aims to contribute to the envisioned step change in the quality of quantitative methods skills in the UK undergraduate social science community by helping teachers improve their courses, providing a variety of sources of advice, techniques and suggestions, different aspects of which will be taken up by different teachers.

The main component of the project consists of a series of workshops for teachers, or would-be teachers, of quantitative methods for undergraduate social scientists, and a website with slides and videos from those presentations to provide access to a wider audience. In addition, the website hosts a discussion forum and an archive of QM teaching materials, which are available to all who are interested. This article focuses on the workshops we have run to date.

The broad theme of the workshops has been the speakers' own experiences of teaching on social science undergraduate quantitative methods introductory courses, including what they think has worked well for them. Aspects covered have included aims and objectives of undergraduate courses, content, structure, style, the balance of statistical theory and practical examples, the nature of practical advice, mode of assessment, and the choice of software. Speakers have also shared their favourite teaching examples.

Presenters so far have come from a variety of backgrounds – politics, sociology and statistics. They have taught quantitative methods to undergraduate social scientists in the UK, Europe and, disproportionately, North America. They are Robert Andersen, Alan Agresti, Sean Carey, Andy Field, Andrew Gelman, Manfred te Groetenhuis, Paul Kellstedt, William Jacoby, Wendy Olsen and Laura Stoker. This paper is based on the advice and ideas that these speakers presented, but also on some of the observations and comments that workshop participants made.

The workshops were focused on teaching introductory undergraduate courses in statistics for social sciences but there was also some discussion of introductory graduate courses. Many of the pedagogical issues were similar for both even though the course aims and objectives differed, primarily in the extent to which students are being trained to conduct statistical analyses themselves.

The importance of context

There are striking differences between the kinds of courses that people teach as introductory statistics for social sciences, and much of the variation is linked to disciplinary, national and institutional context.

The aims and objectives of courses vary according to disciplinary norms and priorities and subject benchmarks, such as those set by the Quality Assurance Agency for Higher Education (QAA) in the UK. For example, psychologists learn a larger range of statistical tests and how to select between them, while sociologists and political scientists are more likely to focus on regression modelling. The ambitions of different courses vary in both quality and quantity. Whereas US social science departments require undergraduates to do a 14 (or even 28) week course with two to four hours a week of teaching time, even the most ambitious UK departments will rarely provide more than one hour a week for ten weeks. (For a review of the situation for politics courses in the UK, and a discussion of an exceptionally long quantitative methods module, see Adeney & Carey (2009).)

This has corresponding consequences for the amount that can be taught, but it is also the case that UK social scientists simply have lower expectations than their North American counterparts of undergraduate students' mathematical abilities and of what they are expected to learn (MacInnes 2009).

The problem of low expectations in the UK is apparent from a comparison of course content, even after making some allowance for within-nation university prestige rankings and corresponding variation in school achievement levels. There may be some justification for UK universities having lower expectations of social science undergraduates. Several university teachers with experience of working in both the UK and North American or European universities remarked on the poor mathematical abilities of otherwise very high-achieving UK social science students. This is perhaps because many of them did not do particularly well at GCSE maths and then took arts-only AS and A-levels (Advisory Council on Mathematics Education 2011).

Rather than working harder to bring students up from a low mathematical starting point, the response of many if not most UK social science departments has been almost the opposite. Even when faculty want to train their undergraduates more thoroughly in quantitative methods, there is pressure in departments to improve National Student Survey (NSS) satisfaction scores. Since compulsory statistics courses are unpopular, there was a perception among workshop participants and their colleagues that introducing or expanding them would reduce scores and potentially jeopardise league table placements and future student enrolments (see also Adeney & Carey 2009). Although this seems like an improper use of quantitative social research, it nevertheless appears that NSS statistics may be driving down the quality of statistical education in the social sciences.

Another intriguing difference between North America and most UK departments is the extent to which teaching an introductory quantitative methods course is seen as service teaching for a core course which other lecturers rely on their students having done. Sometimes there are specific skills that students are expected to know for later courses, such as how to conduct and interpret certain statistical analyses. This is clearly the case with psychology on both sides of the Atlantic (QAA 2014, Chamberlain et al. forthcoming). Such requirements naturally shape the course content, sometimes to the extent that there is relatively little room for teachers to choose what is taught.

Whereas significant amounts of quantitative research seem to be taught in leading universities across all the social sciences in North America, there are still many UK social science departments where little quantitative research is done or taught (MacInnes 2009, Parker et al. 2010). Those who do teach statistics in such departments often face a battle to persuade students of the relevance of quantitative methods, but they do have a lot of freedom to choose what is taught. In these circumstances there can be enormous variation in course aims, objectives and content, from traditional statistical theory with proofs of key results to equation-light statistical literacy courses driven by examples that illustrate concepts. The idea of embedding quantitative methods teaching in substantive social science courses has also become increasingly popular (for examples see Clough 2012, University of Manchester 2014), with some arguing that it is best if quantitative methods modules are not stand-alone but integrated into substantive modules and reflective of the ethos of the department (Adeney & Carey 2009, Chamberlain et al. forthcoming).

It also matters a lot how quantitative methods teaching is organised within a university (i.e. the institutional context). Some universities have a structure whereby statistics is taught across the social sciences by (social) statisticians or a methodology department. This can happen with undergraduate courses, graduate courses or both. One advantage of this approach is efficiency; another is that those teaching the courses are experts in statistics. The disadvantage is that courses need to be very general, so some of the examples might be difficult for students from particular disciplines to relate to. There is also the need to achieve some kind of consensus on course content to satisfy the interests of different disciplines; for example, psychologists are still keen to study ANOVA, which is more peripheral for political science. For these and similar considerations, and for reasons of space, there is no systematic discussion here of what statistical concepts, results and techniques should be taught, although they were discussed in detail in the workshops.

Personal approaches to specific aspects of quantitative methods teaching

After taking the contextual factors into account, the most striking impression from the five workshops we have hosted thus far is the variety of teaching styles that are used and how these reflect the different personalities of the teachers. Some are lively and comical in the way they discuss material while others are calmer and more understated. Examples for some are silly made-up stories, for others they are pertinent real-world research and policy issues, and for others they are abstract illustrations. People use examples sometimes to introduce theory and sometimes to illustrate a theoretical point after it has been made. Some have a more formal division between lectures and exercises outside lecture time; others have classes dominated by hands-on exercises within classes.

This section is divided into several subsections covering specific aspects of teaching and course design. The aim is to highlight some of the key sources of variation in main approaches to these different issues. Although there are themes running across each subsection (e.g. a more formal versus a more relaxed approach) it is not possible to identify a set of broad approaches within which the choices on these particular issues follow directly. It is certainly not possible to say that if you take, say, a more formal approach to some aspects then you must take a formal approach to others.

First lecture: how to start the course

As well as describing the course aims and objectives, many workshop presenters recommended some kind of motivating content. This could include a discussion of why statistical analysis is important, both for the discipline and more broadly for social research and particularly policy-making, and indeed for the everyday life of most people, especially when they are trying to deal with complex information.

Motivation is a key concern, as there is a lack of interest in quantitative methods courses among social science students and a widespread view among students, in the UK and other countries, that numeracy skills are of low importance (e.g. Murtonen et al. 2008, Williams et al. 2008, Adeney & Carey 2009, Chamberlain et al. forthcoming), with Williams and colleagues, for example, arguing that this perception is a bigger problem than the lack of numerical skills itself.

Learning statistics is often justified to students as the acquisition of a transferable skill, which although true (e.g. Janda 2001, Andersen & Harsell 2005) is not a view commonly held by students, as outlined above. It may therefore be helpful to explain how the material is relevant for what students are interested in and want to do. Paul Kellstedt recommends telling students that an understanding of the research will enable them to be cleverer than the political campaign managers and journalists that many politics students aspire to be. More generally, it can help to tell students that, although most of them will forget the details, they will gain some useful thinking skills that will help them sort sense from nonsense in statistical information presented to them throughout life. It is a valuable goal of statistical education, and education more widely, to help develop critical thinking skills and an appreciation of the merits of careful investigation, both for future study and careers and in life (Blastland and Dilnot 2007, Halpern 2014).

More immediately for students, their quantitative methods courses should help them with their more substantive courses. With greater statistical literacy it will be easier for them to use critical evaluation, sort out who and what to believe and reconcile different research findings – something that they have to do in most substantive courses. While it should be acknowledged that they are unlikely to find clear cases of statistical errors, they should at least be able to think and write more intelligently about the relative strengths and weaknesses of different studies, about the methods and research design, and be more perceptive about what they do not say as well as what they do.

Another approach to starting a course is to illustrate a key concept. For example, Gary King has started courses by getting students to identify a very basic drawing of a house as a house. This is used to make the point that statistical and social scientific models are representations, and despite being simplifications and abstractions they nonetheless can identify important characteristics. Those who are teaching quantitative methods without a (suitable) preceding course on philosophy of social science or research design often find it necessary to start by introducing the basic ideas about causal theory development and testing so students can see what the statistics are really for.

Finally, while the lecturer can explain the benefits of learning statistics and tell motivating stories to enthuse students at the start of a course, it might also be helpful to acknowledge prejudices and prior concerns. If disinterest, fear and hostility are acknowledged, they can be addressed more clearly. Lecturers should do their best to convince students that the course material will be more interesting and more accessible than they feared, and that they will be more able to cope. As outlined above, lack of interest and lack of confidence are key barriers to learning. But an “I hate this too so I share your pain” approach is unlikely to work: teachers do need to be positive about what they are teaching.

The need for motivation is one of the main points that teachers agree on. Enthusiasm and showing why you value (or even love) quantitative methods is important, but plenty of true believers rightly question whether it helps to proselytise or to tell students they are misguided if they do not value quantitative social research. Bob Andersen also rightly argues that no one should say quantitative methods are better than qualitative methods per se: which are appropriate depends on the research question.

Using examples

Our workshop participants and presenters seem to agree that good examples are necessary, not least to get and keep the attention of students. But there is some difference according to whether these are used to introduce and motivate methods, to answer questions and solve puzzles, or to illustrate techniques after the statistical concepts have been explained. Relevance is important to student learning, and this is reflected in the approach of many QM teachers in recommending real-world, practical and discipline-relevant examples. Interesting examples that are not discipline-specific can also help, especially if they are related to everyday life (for example from newspapers) or close to student interests (e.g. sport and music).

Real-world data also have the advantage, and the disadvantage, that students can focus on the broader substantive topic to which the data pertain: advantageous because students learn both something about the world and a statistical point and its relevance, but problematic if they lose track of the statistical issues and/or feel that the statistical analysis of the data was so woefully over-simplified that the experience serves to strengthen their doubts about the intellectual integrity of quantitative research. So there is also a question as to whether examples need to be real or whether they might be better if they are made up. For instance, Andy Field creates elaborate and funny examples with zombies and other horror movie paraphernalia. Finding light-hearted examples with data that illustrate a point clearly can be hard work. But while it is easier to make such scenarios up, it may be harder to convince students of their relevance, and this may have a negative impact on learning. On the other hand they may be more engaging and therefore motivating.

Bad examples are also handy. People often learn more quickly from the mistakes of others (Kahneman 2011), or at least delight in the schadenfreude. The failure of the 1936 Literary Digest poll of US Presidential vote intention (Bryson 1976) is a classic example of bad sampling. Sean Carey recommends asking students to draw conclusions from inadequate data to help them think about what data they would need in order to make the point they want to make.
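The sampling lesson behind the Literary Digest failure can be demonstrated in miniature. The sketch below uses entirely invented numbers (a hypothetical electorate in which telephone owners lean towards one candidate, loosely echoing the 1936 situation) to show how a poll drawn from a biased sample frame misses the population figure that a simple random sample of the same size recovers:

```python
import random

random.seed(1)

# Invented electorate of 10,000: 40% own telephones.
# Phone owners favour candidate A (70% support); non-owners do not (35%).
population = (
    [("phone", 1) for _ in range(2800)] + [("phone", 0) for _ in range(1200)] +
    [("none", 1) for _ in range(2100)] + [("none", 0) for _ in range(3900)]
)
true_support = sum(v for _, v in population) / len(population)  # 0.49

# A "Literary Digest"-style poll: sample only from the phone-owner frame.
frame = [v for g, v in population if g == "phone"]
biased = [random.choice(frame) for _ in range(1000)]

# A simple random sample from the whole electorate.
srs = [random.choice(population)[1] for _ in range(1000)]

print(f"true support:  {true_support:.2f}")
print(f"biased sample: {sum(biased) / len(biased):.2f}")  # near 0.70
print(f"random sample: {sum(srs) / len(srs):.2f}")        # near 0.49
```

The biased poll is precise (a large sample gives a small standard error) but wrong, which is exactly the point students are meant to take away: sample size cannot rescue a bad sample frame.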

Whether real, modified or imaginary, it is clear that good design and use of examples requires a lot of time and effort to be effective.

Exercises and activities

Active, or experiential, learning is widely praised but not always put into practice. This approach, developed from Piaget and Cook's (1952) theory of discovery learning in young children, has since been adapted for other ages, including adults, most notably by Kolb (1984) and Kolb & Fry (1975). There are many well-established benefits to this approach in higher education (e.g. Justice et al. 2007, Machemer & Crawford 2007, Cherney 2008, Cavanagh 2011).

However, there are differing views as to what this specifically involves and how to incorporate it into courses. Perhaps most dramatically, Andrew Gelman uses nearly all of his class time for hands-on exercises, and students do these exercises while he is talking about other things. Often this involves students generating data for later parts of a lecture. His book Teaching Statistics: A Bag of Tricks provides many wonderful in- and out-of-class exercises, together with an excellent discussion of teaching statistics (Gelman & Nolan 2002). Gelman notes that teachers love reading about practical exercises but many do not actually use this approach, because of limited class time and an awkwardness with trying something new, or fear of losing control. This is likely to be a particular issue in the UK, where class time is limited and there is a substantial syllabus to get through, but Wendy Olsen from Manchester has used active learning so much that she says her lectures are more like mass tutorials.

A certain confidence is needed for this approach, and a lot of time and effort is required in preparation. However, such activities done well have the potential to improve the student experience and learning, both by making it more engaging and by basing it on active learning. It must be noted, however, that commentators such as Kolb, among many others, believe that active learning is only one part of effective learning. There is a need too for skills such as abstract conceptualisation, and in practice different people have different learning strengths: some learn better from experience and some from more abstract thinking, so activities may best be used as part of a range of teaching techniques.

Laura Stoker also stressed the importance of different learning styles and hence the need for the lecturer to teach to all styles, with reference to the VARK (Visual, Aural, Read/Write and Kinaesthetic) framework (see Fleming 2001). Not least, discussing different learning styles is a helpful, non-threatening way to acknowledge that some students will not find it easy to learn statistics just by listening to lectures or just by reading the textbook.

Lecturing style

As well as the importance of teaching to different learning styles, personal style also comes into play. Many of the workshop speakers emphasised the need to do what suits you and it is clear that there is a wide variety of approaches. Some use chalk-and-talk, some heavily detailed and structured presentation slides, some do extemporaneous lecturing (but with a lot of prior preparation!) and some use mixtures of these.

Humour is a very important part of the way that some lecturers teach, but not all, and different people choose to be funny in different ways. Perhaps here it is most clear that personal style matters: there is no point in trying to deliver crude or wacky jokes if you are not naturally inclined to do so. What is useful about humour in teaching is that it can help relax and refocus students, or help them develop a strong mental image to help them remember a concept (Neumann et al. 2009). These things can be achieved without humour, and certainly without jokes: something silly or unexpected can do.

Sean Carey finds it useful to use pictures of people, sometimes statisticians but more often people that epitomise the story that is being told. Some lecturers use one-to-one discussions with members of the audience to capture attention since people often like listening to others’ conversations. While this style has its merits it also has the risk that students can quickly lose faith in teachers who cannot comfortably answer questions.

Assessments

Several teachers emphasised the virtue of frequent assessment, both because students learn from doing homework and because it is important to ensure that they keep up with the course. Statistics is much more of a cumulative learning subject than most substantive social science courses, so students cannot understand the later material if they did not learn the more basic concepts. Students need to be told this, but the course should also ideally be designed with regular strong incentives to keep up. Workshop presenters also emphasised variety in assessment style. Remarkably, Bob Andersen recommends giving students exam questions in advance (or just reading them out loud for undergraduates). He says this has very little effect on their exam performance but substantially reduces their anxiety levels.

Statistical computing

There was some debate over whether to teach students how to actually analyse data for themselves, especially when there is very little space for quantitative methods overall, and this goes back to the issue of active learning as outlined above. Undergraduates are typically taught what others have done while graduate students train to be researchers themselves. In this vein undergraduate quantitative methods courses might better use the limited time available to teach more methods or to help students understand them better, rather than teaching them how to apply a very limited subset of statistical techniques. After all, the vast majority will go on to become consumers, not producers, of statistics. However, many students do learn better by direct experience and even if they may not be producers themselves, experience will help them understand the statistics they will read. This is echoed by Bill Jacoby, who argues that students cannot become effective consumers without some basic experience on the production side as well. John MacInnes goes further and tells his students that learning statistics is like riding a bike: perfect theoretical knowledge of how to do it does not help much, practice does.

Statistical computing does not need to be part of the introductory statistics course per se, and it is often an additional separate course. There is also variation in the choice of statistical software, with SPSS, Stata and R being the most common, and the extent to which students are encouraged to use syntax or point and click. These choices depend on a variety of factors including cost, institutional norms, the purpose of the course and its place in the degree programme, student ability, time available, and what will be most useful for students (both during and after their degree).

Although many of our workshop presenters took the view that statistical computing is an important transferable skill which is valuable for undergraduates as well as graduates and so should be an important component, Bob Andersen cautions against teaching too much too fast. Teaching people to be able to run statistical analyses that they don't fully understand leads to “dangerous people”, he says. There are arguably already too many mistakes in quantitative social research and it might be better to have less research with fewer errors.

Role of mathematical statistics

Most of the workshop speakers emphasised teaching concepts, but there was some variation as to what this implied in practice. Some argued for downplaying and minimising the use of equations, while others felt that it is still important for students to be able to understand some proofs of basic results. Which equations and which proofs, if any, are important depends a lot on the course aims and objectives, but also on the personal style and preferences of the lecturer. For instance, Laura Stoker is particularly fond of using expectation algebra.

There is also some variation between lecturers in how strictly statistical theory should be presented. A classic problem here is the interpretation of frequentist confidence intervals. Students are naturally inclined to think that a 95% confidence interval has a 95% chance of including the population mean. But the strict interpretation is that 95% of intervals constructed in this way (with repeated random sampling etc.) will include the population mean, which is not the same thing. A Bayesian 95% credible interval for the mean has the easier interpretation and is practically the same as the frequentist 95% confidence interval with a decent sample size. So many people, such as Alan Agresti, are happy to provide the Bayesian interpretation for a frequentist confidence interval in introductory statistics courses for social science students, because there is no practical difference worth worrying about and it is best not to get caught up in the issue. Indeed it is often the statisticians, not the quantitative social scientists, who are the most relaxed about deviating from formal but convoluted interpretations to more natural but approximate interpretations of statistical analyses, since they have the knowledge and experience to judge when approximation would be misleading. Of course some want their students to be exposed to the two different approaches, and many find it philosophically exciting. Others question whether undergraduates can handle formal Bayesian analysis, even if the basic intuition is appealing.
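The strict frequentist claim lends itself to demonstration by simulation, an exercise of the active-learning kind discussed earlier. The sketch below (with made-up population parameters) draws repeated samples and counts how often the usual 95% interval captures the true mean:

```python
import math
import random
import statistics

random.seed(42)

MU, SIGMA = 100.0, 15.0   # invented population parameters
N, TRIALS = 30, 2000      # sample size and number of repeated samples

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)
    # 1.96 is the normal approximation; the t-multiplier for n=30
    # (about 2.045) would give slightly wider intervals.
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    covered += lo <= MU <= hi

print(f"{covered / TRIALS:.3f} of the intervals contain the true mean")
```

The observed coverage comes out close to, and typically a little under, 0.95: each individual interval either contains the true mean or does not, and it is the procedure, over repeated sampling, that succeeds 95% of the time.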

More generally, our workshop presenters said that it is common to find that students struggle with statistical inference. Several workshop speakers recommended focusing first on identifying and describing patterns in data before turning to the question of how those data relate to the real world. Bob Andersen suggests emphasising that statistical inference is a guess, albeit an intelligent one. He finds that once students appreciate that we never know whether a particular sample is representative, and that what we are doing is trying to guess what the overall population looks like, they understand the theory better.

Dealing with mixed-ability classes and lack of confidence

Students come with different skill sets and abilities, and also varying levels of confidence in their ability to learn statistics, although overall there is a common tendency for students to have a low level of confidence in their ability to do statistical tasks that is not always related to their actual ability (see, for example, Chamberlain et al. forthcoming). Their level of enthusiasm and motivation is often affected by this, although students can be reluctant for other reasons too.

Managing mixed-ability classes is an ongoing issue (e.g. Reid et al. 1981, Ireson et al. 2002, Abraham 2008, Wilkinson & Penney 2013, Ofsted 2013), especially in compulsory state education. There is much less research on this in university education, perhaps because there is a much smaller range of abilities. However, increased university participation in the UK, USA and Europe has made this a growing issue, as illustrated by the subject coming up in all the workshops. There is scant research on the benefits or otherwise of mixed-ability undergraduate classes, but research on compulsory education (Reid et al. 1981, Ireson et al. 2002, Smith and Sutherland 2003, Boaler 2008) has illustrated the advantages of mixed-ability classes in terms of learning, confidence and motivation, although the picture on academic achievement is not so clear for maths (Ireson et al. 2002).

Perhaps the best starting point for all classes, whether mixed ability or otherwise but particularly for mixed ability, is simply to teach better! Whatever helps you enthuse, motivate, relax and grab the attention of your students is likely to help, especially with the less able and more fearful students. Similarly, greater clarity of exposition should help everyone, but with mixed ability classes and groups with diverse interests there is an increased need to provide a wider range of illustrations, examples and ways of making the same point. Sean Carey goes so far as to say that the three main priorities need to be variation, variation and variation (see also Adeney & Carey 2009). This includes variation of examples, styles, assessment types and personnel. Others disagree, saying that such variety can be bewildering, especially when substantive examples change frequently. Manfred te Grotenhuis argues that discipline-specific organisation of classes and examples is particularly important here. The mixed ability debate thus dovetails with the learning styles and active learning debates.

Simplifying things for students can help. Andy Field notes that there is no need to bombard students with a multiplicity of statistical tests and complex formulae: it makes them feel confused and stupid if they cannot work out the ‘right’ test, and undermines their confidence. Laura Stoker suggests organising material in lists so that what students need to know is well defined and limited, e.g. three problems with this and four kinds of that. Wendy Olsen recommends helping students up from a very low skill base, starting with the interpretation of frequency and cross-tabulation tables and then building up in a very structured way, with well-defined objectives in terms of the skills to be mastered at each stage. Much of this might well be done in the context of a substantive course, where the relevance of the empirical data can be more thoroughly discussed.

Another solution is to provide more resources. Simply having more and longer lectures might help, but there is a danger of losing the attention of more able or skilled students in slower, more basic classes. Differentiation within the class is one solution. This could be differentiation by task, or differentiation by support, the latter involving additional and more tailored support for different kinds of student. Tailored support might include computing workshops and surgeries, office hours, how-to guides, and message boards on a virtual learning environment. Chamberlain et al. (forthcoming) provide some evidence of the need for the latter.

Group work

A related issue is group work. Group work inside or outside class time is perennially controversial in all university education, mostly because some more diligent students resent free riding by those they perceive as lazier. Laura Stoker provides an important counterpoint, arguing that it is important not just to understand what the lecturer is saying but to be able to explain the material to others. In this way, more able or skilled students benefit from trying to teach others in their group. The limited pedagogical literature on this in university education indicates that group work can be beneficial (Cooper & Mueck 1990, Cooper & Robinson 2000, Mulryan-Kyne 2010, Cavanagh 2011), as does the more extensive research at the pre-16 level (see, for example, Ireson et al. 2002 and Boaler 2008 for a discussion). In secondary education, research on co-operative learning – working in pairs or small groups – in mathematics has found strong positive impacts on learning if the methods incorporate two key elements: group goals and individual accountability (Slavin et al. 2013). Co-operative learning is “especially well-suited to mathematics, as it helps pupils to understand their own misconceptions in the process of constructing meaning” (Slavin et al. 2013).

Conclusion

While this paper has emphasised variation in course design and delivery and the freedom of lecturers to make choices according to personal style, there are several common themes that have emerged from our workshops on quantitative methods teaching for undergraduate social scientists. One is the effort required. For most social scientists who do not have degrees in statistics, this first and foremost implies making sure you know and understand the material well yourself. As with all other teaching, it is important to know what you are talking about. Even then, good quantitative methods teaching requires a lot of time and effort in preparation. Laura Stoker, for example, described how she liked to lecture extemporaneously, but only after lots of preparation! It is hard work to devise a good statistics course and teach it well, but it is rewarding for both the teacher and the students.

Another recurrent theme was the need to enthuse and motivate. The volume and variety of examples, activities and exercises matter a lot here. While they always require a lot of time and effort in preparation, there is considerable freedom to choose ones that work for you personally.

Finally, as well as variation and freedom of choice, the paper also discussed how quantitative methods teaching is shaped by disciplinary, national and institutional context. On the one hand, the marginalisation of quantitative research in many UK social science departments means that many quantitative methods lecturers here have enormous freedom over course content, because their colleagues are uninterested. On the other hand, the limited time, resources, expectations, ambition and links with substantive courses all mean that quantitative methods teachers in UK social science face greater challenges and constraints than those in North America.

Acknowledgements

We are especially grateful to the Oxford QM teachers’ workshop presenters (listed in the paper) and participants, on whose views and observations this paper is mainly based. We would also like to thank Sean Carey and John MacInnes for excellent comments on earlier drafts. The workshops were kindly funded by the Economic and Social Research Council with a Researcher Development Initiative award (Grant Number ES/J01155X/1) as part of the Quantitative Methods Initiative.

References

  • Abraham, J. (2008) Pupils' perceptions of setting and beyond: a response to Hallam and Ireson. British Educational Research Journal 34 (6), 855–863.
  • Adeney, K. and Carey, S. (2009) Contextualising the teaching of statistics in political science. Politics 29 (3), 193–200.
  • Advisory Council on Mathematics Education (2011) Mathematical needs: mathematics in the workplace and higher education. London: ACME.
  • Andersen, K. and Harsell, D.M. (2005) Assessing the impact of a quantitative skills course for undergraduates. Journal of Political Science Education 1 (1), 17–27.
  • Boaler, J. (2008) Promoting ‘relational equity’ and high mathematics achievements through an innovative mixed ability approach. British Educational Research Journal 34 (2), 167–194.
  • Blastland, M. and Dilnot, A. (2007) The tiger that isn't: seeing through a world of numbers. London: Profile.
  • Bryson, M.C. (1976) The literary digest poll: making of a statistical myth. The American Statistician 30 (4), 184–85.
  • Cavanagh, M. (2011) Students' experiences of active engagement through cooperative learning activities in lectures. Active Learning in Higher Education 12 (1), 23–33.
  • Chamberlain, J.M., Hiller, J. and Signoretta, P. (forthcoming) Counting better? An examination of the impact of quantitative method teaching on undergraduate social science students' statistical anxiety and confidence to complete statistical tasks. Active Learning in Higher Education.
  • Cherney, I.D. (2008) The effects of active learning on students' memories for course content. Active Learning in Higher Education 9, 152–71.
  • Clough, E. (2012) Integrating quantitative methods into the politics curriculum: a seminar-based approach. Available at http://www.ncl.ac.uk/gps/research/project/4256 (accessed 05 January 2014).
  • Cooper, J.L. and Mueck, R. (1990) Student involvement in learning: cooperative learning and college instruction. Journal of Excellence in College Teaching 1, 68–76.
  • Cooper, J.L. and Robinson, P. (2000) The argument for making large classes small. New Directions in Teaching and Learning 81, 5–16.
  • Gelman, A. and Nolan, D. (2002) Teaching statistics: a bag of tricks. New York: Oxford University Press.
  • Halpern, D.F. (2014) Thought and knowledge: an introduction to critical thinking (5th ed.) NY: Psychology Press.
  • Ireson, J., Hallam, S., Hack, S., Clark, H. and Plewis, I. (2002) Ability grouping in English secondary schools: effects on attainment in English, mathematics and science. Educational Research and Evaluation: An International Journal on Theory and Practice 8 (3), 299–318.
  • Janda, K. (2001) Teaching research methods: the best job in the department. The Political Methodologist 10 (1), 6–7.
  • Justice, C., Rice, J., Warry, W., Inglis, S., Miller, S. and Sammon, S. (2007) Inquiry in higher education: reflections and directions on course design and teaching methods. Innovative Higher Education 31, 201–14.
  • Fleming (2001) VARK: a guide to learning styles. Available at www.vark-learn.com (accessed 05 January 2014).
  • Kahneman, D. (2011) Thinking, fast and slow. London: Allen Lane.
  • Kolb, D.A. (1984) Experiential Learning. Englewood Cliffs, NJ: Prentice Hall.
  • Kolb, D.A. and Fry, R. (1975) Toward an applied theory of experiential learning. In Theories of Group Process ( ed. C. Cooper). London: John Wiley.
  • MacInnes, J. (2009) Proposals to support and improve the teaching of quantitative research methods at undergraduate level in the UK. Swindon: ESRC.
  • Machemer, P.L. and Crawford, P. (2007) Student perceptions of active learning in a large cross-disciplinary classroom. Active Learning in Higher Education 8, 9–30.
  • Mulryan-Kyne, C. (2010) Teaching large classes at college and university level: challenges and opportunities. Teaching in Higher Education 15 (2), 175–185.
  • Murtonen, M., Olkinuora, E., Tynjälä, P. and Lehtinen, E. (2008) “Do I need research skills in working life?”: University students' motivation and difficulties in quantitative methods courses. Higher Education 56 (2), 599–612.
  • Neumann, D.L., Hood, M. and Neumann, M.M. (2009) Statistics? You must be joking: the application and evaluation of humor when teaching statistics. Journal of Statistics Education 17 (2). Available at http://www.amstat.org/publications/jse/v17n2/neumann.pdf (accessed 14 February 2014).
  • Ofsted (2013) The report of Her Majesty's Chief Inspector of Education, Children's Services and Skills: Schools. London: Ofsted.
  • Oxford Quantitative Methods Teachers Project (2012) Available at http://www.sociology.ox.ac.uk/qmteachers (accessed 20 March 2014).
  • Parker, J., Dobson, A., Scott, S., Wyman, M. and Sjöstedt Landén, A. (2010) International bench-marking review of best practice in the provision of undergraduate teaching in quantitative methods in the social sciences. Faculty of Humanities and Social Sciences: Keele University.
  • Piaget, J. and Cook, M.T. (1952) The origins of intelligence in children. New York: International Universities Press.
  • Quality Assurance Agency for Higher Education (QAA) (2014) Subject benchmark statements. Available at http://www.qaa.ac.uk/AssuringStandardsAndQuality/subject-guidance/Pages/Subject-benchmark-statements.aspx (accessed 10 March 2014).
  • Reid, M.I., Goacher, B. and Vile, C. (1981) Mixed ability teaching: problems and possibilities. Educational Research 24 (1), 3–10.
  • Slavin, R., Sheard, M., Hanley, P., Elliott, L., Chambers, B. and Cheung, A. (2013) Effects of co-operative learning and embedded multimedia on mathematics learning in key stage 2: final report. Institute for Effective Education, University of York.
  • Smith, C. and Sutherland, M.J. (2003) Setting or mixed ability? Teachers’ views of the organisation of pupils for learning. Journal of Research in Special Educational Needs 3 (3), 141–146.
  • University of Manchester (2014) Enriching social science teaching with empirical data project. Available at http://www.socialsciences.manchester.ac.uk/essted/ (accessed 10 March 2014).
  • Williams, M., Payne, G.L., Hodgkinson, L. and Poole, D. (2008) Does British sociology count? Sociology students’ attitudes toward quantitative methods. Sociology 42 (5), 1003–1021.
  • Wilkinson, S.D. and Penney, D. (2013) The effects of setting on classroom teaching and student learning in mainstream mathematics, English and science lessons: a critical review of the literature in England. Educational Review.
