
An intervention study: teaching the comparison method to enhance secondary students’ comparison competency


Abstract

To tackle the growing challenges facing our societies, such as climate change, we need to understand scientific knowledge and methods. Developing scientific literacy in schools is therefore necessary. To do so, we first need to be able to assess competencies associated with scientific literacy; second, educators need meaningful tools that can be implemented in geography classes. One important competency students learn in geography is comparison. Although students perform comparisons in geography classes regularly, we do not know their level of comparison competency. Research is also needed on potential tools to teach the comparison method efficiently in geography classes. Therefore, in this study, we assessed the comparison competency of 83 French and German secondary students and tested a tool to enhance comparison competency in an intervention study using a pre- and post-test control group design. Results indicate that students initially possessed low levels of comparison competency. Our intervention allowed students from the experimental group to improve their comparison skills significantly. The improvement in their post-test scores was positively correlated with the use of the comparison method during the intervention. This shows that teachers should include explicit instruction on the comparison method to help students develop their scientific literacy.

Introduction

Our societies face many challenges today: climate change, the COVID-19 pandemic, the growing world population and more. These issues give rise to much debate and are sometimes appropriated for political ends. To understand them and to consider solutions, scientific knowledge and knowledge of the scientific method are essential. Developing scientific literacy is therefore one of the most important educational challenges today, since it can enable students to understand the debates and act accordingly. Scientific literacy, the ability to think critically and understand scientific issues, has long been identified as crucial (OECD, Citation2019). Scientific literacy is a key competency (Rychen & Salganik, Citation2001, p. 16) that encompasses not only the content knowledge related to scientific questions but also procedural and epistemic knowledge about how science is practised and how scientific knowledge is produced (OECD, Citation2019, p. 99). This also includes critical perspectives on science practices and a deep understanding of the strengths and limitations of science. To reinforce scientific literacy, scientific methods must be practised and taught in schools, and students’ competencies related to scientific methods need to be assessed. This is also true for the science of geography (Chang & Kidman, Citation2019).

Comparison is one of the most important scientific methods (Piovani & Krawczyk, Citation2017, p. 822). It is used, for example, in urban studies and geography. Comparison is used to generalize or theorize from cases (Krehl & Weck, Citation2020, p. 1867), to build models, to reflect on processes or on the specificity of examples through contrasting (Nijman, Citation2007, p. 4). Understanding how the comparison method is used by geographers can help students critically reflect on comparisons that are made in other contexts, for example in recent debates around the consequences of climate change or migration in different contexts.

Comparisons are frequently referred to in geography curricula (e.g. in France: Ministère de l’Education Nationale et de la jeunesse, Citation2020, p. 2; in Germany: DGfG, Citation2017). Comparison tasks are also common in geography textbooks, accounting for 9.18% of tasks (Simon, Budke, & Schäbitz, Citation2020, p. 6). Comparison, one of the common cognitive demands in geography education (Bourke & Lane, Citation2017), is therefore among the geographical skills secondary students need to learn.

However, there is no assessment of secondary students’ levels of comparison competency. Neither do we know how to reinforce comparison competency and thus teach it as a part of scientific literacy. In recent years, various authors have identified the need for research on the assessment of students’ learning in geography education (Kidman & Chang, Citation2022; Lane & Bourke, Citation2019). Educators lack instruments for assessing key geographical skills (Lane & Bourke, Citation2019, p. 11; Bourke & Lane, Citation2017), although some tests have been developed, for example to assess spatial thinking (Bednarz & Lee, Citation2019). Research is also needed to determine which teaching tools are effective at improving geographical education (Kidman & Chang, Citation2022, p. 170) and how formative and summative assessment can be integrated (Bourke & Mills, Citation2022, p. 17). Therefore, we designed an intervention study involving French and German secondary students in which we assessed their competency and tested a tool to enhance it. Whereas differences between German and French students will be analyzed in another article, in this paper we investigate the following research questions:

  1. How competent are secondary students in the different dimensions of comparison competency?

  2. To what extent does the use of the comparison method applied in the intervention have an impact on students’ comparison competency?

This article begins with the theoretical framework used as a basis for the intervention (Theoretical background). This is followed by a description of the test and of the intervention (Methods). The results section presents the test results and effects of the intervention (Results). Then, we discuss the implications for the enhancement of comparison competency (Discussion).

Theoretical background

Comparison competency

To compare means to select comparison units (e.g. countries or migration routes) and juxtapose them with comparison variables (criteria such as economic growth or obstacles to migrants) to identify similarities and/or differences (Namy & Gentner, Citation2002, p. 6). Comparison is a fundamental tool of human reasoning which allows us to draw general conclusions from specific observations (Gick & Holyoak, Citation1983, p. 31) and to apply them to new examples (Loewenstein & Gentner, Citation2001, p. 211). It is also one of the fundamental modes of spatial thinking identified by Gersmehl and Gersmehl (Citation2007, p. 184).

Comparison is also a research tool widely used in the social and natural sciences to build rules (Lijphart, Citation1971, p. 691) or identify specific peculiarities (Piovani & Krawczyk, Citation2017, p. 3). Geographers too use comparison as an important research strategy to investigate geographical spaces and build theory or analyze local variations (Kantor & Savitch, Citation2005). The objectives and methodology of geographic comparisons are much discussed in the scientific community. For example, the scientific debate on the definition of a “global city” (Sassen, Citation1999; Robinson, Citation2006) revolves around the possibility to formulate general characteristics for a group of cities based on the comparison of specific examples. The comparison methodology itself often “remains implicit” (Krehl & Weck, Citation2020, p. 1858) and there is no common model for the comparison method among geographers. This shows how comparison is a dynamic and diverse research tool: the way comparisons are made has theoretical and epistemological implications.

To date, the comparison process has been modelled only in geography education research, by Wilcke and Budke (Citation2019, p. 7). Their model allows for a wide variety of comparisons and can serve as a first step to teach this complex method. In this model, students first have to decide on a specific question to be resolved with the comparison. Second, they select comparison units and variables. Then, they identify similarities and differences and provide an answer to the initial question, while weighting variables and deriving explanations. Ideally, each step of this method should be reflected upon and justified.
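To make the juxtaposition step of this model concrete, the following minimal sketch (our illustration only; the units, variables and values are hypothetical and not part of the original model or study) encodes comparison units and variables as data and derives similarities and differences:

```python
# Toy sketch of the comparison step "identify similarities and differences".
# All units, variables and values below are invented for illustration.

units = {
    "Unit A": {"economic growth": "high", "obstacles to migrants": "many"},
    "Unit B": {"economic growth": "high", "obstacles to migrants": "few"},
}
variables = ["economic growth", "obstacles to migrants"]

similarities, differences = [], []
for var in variables:
    values = {name: data[var] for name, data in units.items()}
    # A variable on which all units agree is a similarity; otherwise a difference
    (similarities if len(set(values.values())) == 1 else differences).append((var, values))

print("Similarities:", similarities)
print("Differences:", differences)
```

In the full model, this juxtaposition would be preceded by choosing the question, units and variables, and followed by weighting the variables and deriving an answer to the initial question.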

Comparison in geography education can therefore be defined as an important competency which students need to master and which encompasses four specific dimensions (see Figure 1).

Figure 1. Competency model for comparison in geography education (further explained and detailed in Simon & Budke, Citation2020, p. 5). Own elaboration.

First, comparison requires the ability to formulate a geographical question, to autonomously choose the data that serve as a basis for comparison, and to select comparison units and variables, following the comparison process described by Wilcke and Budke (Citation2019) (“First dimension: planning and implementation of comparison processes,” see Figure 1). Secondly, comparing involves the ability to reflect on the comparison process and justify the selection of comparison elements (“Second dimension: reflection and argumentative justification of comparison processes,” see Figure 1). Thirdly, comparison requires the ability to recognise and analyse how comparison variables differently impact comparison units (“Third dimension: acknowledgement and analysis of interrelations between geographical information,” see Figure 1). Finally, students must be able to achieve comparison goals, such as inferring rules or identifying the specificity of given examples (“Fourth dimension: achievement of comparison objectives,” see Figure 1).

Applying the comparison competency model to French, German and English geography textbook tasks enabled us to demonstrate that comparison tasks mainly served content-related purposes and did not foster scientific reasoning and argumentation skills (Simon et al., Citation2020; Simon & Budke, Citation2020). In another study, most German university students in geography did not display high levels of comparison competency, and many of them were not able to structure their comparison texts in effective ways (Simon & Budke, Citation2023). Their competency levels were especially low in the first, second and third dimensions of comparison competency (Figure 1). However, we do not know how competent students from secondary schools are at performing comparisons. This study thus aims to provide an explorative assessment of the comparison skills of secondary students in Germany and France.

Interventions to foster geographical skills

In the Road map for twenty-first century geography education, Bednarz, Heffron, and Huynh (Citation2013, p. 8) described the need to investigate the “characteristics of effective geography teaching.” One way to implement this research objective is to carry out intervention studies in experimental or quasi-experimental conditions. Intervention studies in geography are, however, rather scarce (Abricot, Zuniga, Valencia-Castaneda, & Miranda-Arredondo, Citation2022) and often focus on two geographical competencies: systems thinking (e.g. Cox, Elen, & Steegen, Citation2019) and spatial thinking (e.g. Lee & Bednarz, Citation2012). To date, no intervention study has examined how to promote comparison competency in geography education. Yet, this is crucial if we want to enhance geographical and scientific literacy while teaching geographical methods such as comparison in secondary education.

Furthermore, interventions can help us understand which tools are more efficient for teaching a scientific method. Scientific methods and scientific literacy are often observed to be better assimilated when accompanied by argumentation to build evidence-based arguments or to justify and reflect along the scientific process (Cavagnetto, Citation2010; Jiménez-Aleixandre & Erduran, Citation2007). Indeed, scientific methods such as comparison are much more complex than just following specific steps like a kind of recipe. For example, comparison requires intense reflection on comparison variables, comparison units and the relative weight of comparison elements, meaning students must be able to use arguments to support the comparison process (Wilcke & Budke, Citation2019). Cavagnetto (Citation2010, pp. 10–13) describes different approaches, or orientations, used in argument-based interventions aiming to foster scientific literacy and identifies an immersive approach as the best strategy for teaching argumentation. Cox et al. (Citation2019) stressed the importance of explicit teaching strategies as decisive for the success of classroom interventions. As no intervention has been designed to date to foster comparison competency, there is no research on which tools are valuable for fostering it. Therefore, we conducted an intervention study in which we used the comparison method explicitly as a scaffold (Vygotsky, Citation1978) to foster the development of comparison competency. Our design uses a “mixed” approach (Cavagnetto, Citation2010, p. 11), first taking elements from the “structured” method (explicitly learning the structure of comparison) and then immersing students in a research situation.

Methods

To investigate our research questions, we conducted a quasi-experimental intervention study with a pre- and post-test control group design.

Sample

83 students from two secondary schools, two classes in Germany and two classes in France, participated in this study. There were 44 students in the experimental group (29 French, 15 German) and 39 in the control group (31 French, 8 German). Since the quasi-experimental design was subject to reduced internal validity due to the non-randomized group attribution of participants, we controlled for socio-demographic variables such as school results, age, and former geographical education; students were enrolled in similar school forms, with both the French and German experimental groups enrolled in specialized geography classes.Footnote1 We also performed a t-test on the results of the pre-test to check whether students from France and Germany could be pooledFootnote2: in both the intervention and control groups, results showed that they were not significantly different from each other (experimental group: t(42) = 1.604, p = .116, two-tailed; control group: t(37) = 1.302, p = .201, two-tailed). The intervention took place in October 2021 in Germany and in December 2021 in France. Students from both groups took the pre- and post-tests just before and after the intervention. Students and parents were informed of the study beforehand and consented to it. All tests and data were anonymized for analysis.

Pre-and post-tests

Since textbook comparison tasks often only require students to reproduce information, they do not allow us to assess competency in all dimensions of our competency model (Simon & Budke, Citation2020). Therefore, we formulated an open task (see Box 1) that allowed for varied answers and enabled us to assess different levels of comparison competency. To complete the test, students could use different migration stories provided to them in the test form. Students had one A4 page to provide an answer, which was expected to be given in essay form so that we could assess students’ skills in argumentation (Paniagua, Swygert, & Downing, Citation2019, p. 111) and knowledge of comparison processes (Wilcke & Budke, Citation2019, p. 8).

Box 1. Task for the assessment of comparative competencies (pre-test). Own elaboration.

“Perform a comparison of migration stories, based on your personal knowledge and/or one or more of the following texts.”

The post-test was based on the same principles as the pre-test but the subject was changed to control for former subject knowledge and concentrate only on comparison competency. Therefore, in the post-test, we collected testimonies about different housing situations in big cities which students had to compare. The task from the post-test is presented in Box 2.

Box 2. Task for the assessment of comparative competencies (post-test). Own elaboration.

“Perform a comparison of housing situations in big cities, based on your personal knowledge and/or one or more of the following texts.”

Both the pre- and post-test were rated using our validated and reliable assessment tool, which allows us to analyse students’ achievements in the different dimensions of comparison competency (Simon & Budke, accepted; see Table 1).

Table 1. Comparison competency assessment: list of categories to measure comparison competency (Simon & Budke, Citation2020, p. 5). Own elaboration.

To ensure the reliability of our scoring, we calculated inter-rater agreement on 50% of the German tests and intra-rater reliability on 25% of the French tests (with a two-month interval between ratings). We obtained a percent agreement between judges of 93.5% and a Cohen’s kappa of .837 on the German tests, and a percent agreement of 95.7% and a Cohen’s kappa of .905 on the French tests, which is considered almost perfect (Landis & Koch, Citation1977, p. 165).
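For readers who want to reproduce such agreement statistics, the following minimal sketch (with invented ratings, not the study’s data) computes percent agreement and Cohen’s kappa using scikit-learn:

```python
# Minimal sketch of the inter-rater agreement statistics reported above.
# The two rating vectors are hypothetical; in the study, each entry would
# be the category assigned to one test by one rater.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 3, 1, 0, 2, 3, 1, 2, 2]  # invented example ratings
rater_b = [3, 2, 3, 1, 1, 2, 3, 1, 2, 2]

percent_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.1%}")
print(f"Cohen's kappa:     {kappa:.3f}")
```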

Intervention

After both groups took the pre-test, students in the experimental group received an intervention course over six 45-minute lessons, while students from the control group did not receive any treatment. We decided not to alter the regular curriculum of the control group, first, because the subject-specific topic used in the intervention was not assessed in the post-test and, second, because we could then compare our intervention involving explicit instruction with the regular curriculum. The intervention was based on a digital learning unitFootnote3 available as an Open Educational Resource. All documents, videos, maps, texts and interviews of this digital learning unit were developed in collaboration with scientists from the Collaborative Research Center CRC-806.Footnote4 These documents on past migration were juxtaposed with documents and sources on recent migration. Students could choose between different questions related to migration, such as reasons for migration, migration routes, research on migration or obstacles to migration. Figure 2 gives an overview of the intervention.

Figure 2. Overview of the intervention study and different analyses. Own elaboration.

First phase of the intervention

In the first phase, students followed instructions on the digital learning unit, which allowed them to get to know the comparison elements (variables, units) and the different comparison steps (see Figure 3). The comparison method presented in this part was based on the model by Wilcke and Budke (Citation2019), adapted to be the main teaching tool used during the intervention to train comparison competency.

Figure 3. Comparison steps as provided to students during the intervention. Translated from German. Own elaboration.

Second phase of the intervention: written comparison task (table and text)

To apply the comparison method learnt in the first phase, in the second phase of the intervention students had to carry out a comparison. The comparison process was divided into steps which were directly identified in the digital learning unit and corresponded to the different steps of the comparison method (see Figure 3). Each step was to be justified using arguments. To complete the task, students were provided with a table,Footnote5 which was used along with the comparison method as a scaffold to guide students in the investigative decision-making process involved in the comparison and to enhance their justification of the comparison process. While the comparison units were given (recent and past migrations), the task was formulated very openly, contrary to existing textbook tasks, which are often very closed in their formulation (Simon et al., Citation2020). It allowed for great autonomy in the choice of the comparison question and comparison variables. Many different elements could be found in very diverse documents. For example, to answer the question comparing the migratory routes between Africa and Europe used in past and in recent migrations, students could consult different maps, texts and a film, but could also add detail to their answers by consulting the texts on the obstacles to migration. Students had to write a text summarizing their findings. We analysed students’ individual answers (table and text) to the task, assessing whether they had properly implemented the steps of the comparison method (Steps 1 to 6, see Table 2). Moreover, we checked whether they had properly explained their decisions and/or results in the different steps (for example, justifying results or the choice of variables) (transversal task, see Table 2). This allowed us to rate students on a scale of 10 points to evaluate the implementation of the method as a didactic tool.

Table 2. Assessment tool to evaluate subtasks corresponding to comparison steps (based on Figure 1). Own elaboration.

These first and second phases of the intervention were designed to allow students to learn and practice autonomously selecting comparison elements, justifying their choices and reflecting on the weighting of variables and on the comparison contexts.

Third phase: group discussions

In the third phase, group discussions were organized. Participants had to compare their approaches with those of other students and defend their own ideas through argumentation. This phase reinforced the use of the comparison method as a tool, while revising comparison elements and process.

Quantitative analysis

Descriptive statistics were performed on the pre- and post-tests to assign the performance of each student to different levels within the competency model. We analyzed the groups’ performances and progress between pre- and post-test using t-tests and a one-way analysis of covariance (ANCOVA) with pre-test scores used as a covariate.

Results obtained during the intervention (see Table 2) were also correlated with the difference between scores in the pre- and post-tests.

Finally, the results of the experimental group in the post-test were compared to results obtained in a previous study of university students’ comparison competency (Simon & Budke, Citation2023). Since the assumption of homogeneity of variances was violated, a Welch’s t-test was used.
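The following sketch outlines how analyses of this kind can be run in Python with scipy and statsmodels; the data frame and its column names ('pre', 'post', 'group', 'method_score') are hypothetical stand-ins for the study data, not the authors’ actual code:

```python
# Sketch of the reported analysis pipeline (hypothetical data frame 'df'
# with columns 'pre', 'post', 'group' and, for the experimental group,
# 'method_score' holding the 10-point intervention rating).
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def run_analyses(df: pd.DataFrame, university_scores) -> None:
    exp = df[df["group"] == "experimental"]

    # Paired t-test: progress between pre- and post-test within a group
    t, p = stats.ttest_rel(exp["post"], exp["pre"])
    print(f"paired t-test: t = {t:.3f}, p = {p:.3f}")

    # One-way ANCOVA: group effect on post-test scores, pre-test as covariate
    model = smf.ols("post ~ pre + C(group)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Correlation of intervention scores with pre-/post-test gains
    r, p_r = stats.pearsonr(exp["method_score"], exp["post"] - exp["pre"])
    print(f"Pearson r = {r:.3f}, p = {p_r:.3f}")

    # Welch's t-test against another sample when variances are unequal
    t_w, p_w = stats.ttest_ind(exp["post"], university_scores, equal_var=False)
    print(f"Welch's t-test: t = {t_w:.3f}, p = {p_w:.3f}")
```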

Results

Secondary students’ comparison competency: results of the pre-test

The pre-test was designed to gain insight into existing comparison competencies and to uncover any possible differences between the experimental and the control group. The experimental group obtained a mean of 10.05 ± 2.96 out of 28 possible points, whereas the control group achieved a higher mean of 11.51 ± 2.427. Comparison competency in the two groups was thus quite low, since they attained under half of the maximum possible points. The difference between groups was significant (t(81) = 2.448, p = .017, two-tailed). This can be explained by the fact that groups were not randomized. Both teachers also explained that students from the control groups had generally better grades than those from the experimental groups. As a consequence, we then used the pre-test as a covariate in all analyses to control for this fact. We also correlated the personal data of participants to control for other possible covariates, but no significant relation was found between the pre-test results and other variables such as gender or age. We classified both groups’ results in the different dimensions of the comparison competency model (Simon & Budke, Citation2020, p. 5) (see Figure 4).

Figure 4. Results obtained in the pre-test by both the experimental and the control group in the different dimensions of comparison competency. The higher the level, the higher the competency. Own elaboration.

Results in the first dimension of comparison competency (Planning and implementation of comparison processes, see Figure 4) showed good levels, since 88.64% of students in the experimental group and 75.92% in the control group achieved level 3. However, students mostly did not explicitly select their comparison elements. Only 4.54% of students from the experimental group and 7.69% from the control group explicitly chose a question to answer, and only 6.82% of the experimental group and 10.26% of the control group explicitly chose comparison variables. Comparison units, on the contrary, were often cited (by 90.9% of students from the experimental group and 89.74% of the control group).

Results in the second dimension of comparison competency (Reflection and argumentative justification of comparison processes, see Figure 4) showed that argumentation did not support the comparison process in most tests, with only 31.82% of students from the experimental group and 56.41% from the control group attaining level 1 in this dimension. Only two students (5.13%) of the control group attained level 2 in this dimension by justifying their choice of comparison units.

In dimension 3 of our competency model (Acknowledgement and analysis of interrelations in geographical information, see Figure 4), 72.73% in the experimental group and 87.18% in the control group achieved level 2. This was mainly due to the fact that they used more than one variable to compare the units. Level 3, a more complex level of this dimension, at which students must be able to weigh variables, was only attained by one student from the experimental group and four students from the control group (10.26%).

Results in dimension 4 of comparison competency (Achievement of comparison objectives, see Figure 4) were better, since 38.64% of students in the experimental group and 48.72% in the control group achieved level 4. However, 22.72% of the experimental group and one student in the control group did not achieve any objective with their comparison, since they only juxtaposed comparison units along variables without stating whether the comparison units were different or similar. Also, no student achieved level 2 or level 3; 38.64% of students in the experimental group and 48.72% in the control group achieved only level 1.

Overall, the results of the pre-test showed rather low levels of comparison competency in both groups in dimensions 2 and 3 of comparison competency related to argumentation and interrelations between comparison elements. Relatively higher levels were found in dimensions 1 and 4 related to comparison processes and comparison objectives.

Effects of the intervention

To assess the effects of the intervention, the scores and net differences of the control group and of the experimental group in the pre- and post-tests were calculated. Results are presented in Table 3.

Table 3. Descriptive statistics of results from both groups in the pre- and post-test. Own elaboration.

To ascertain whether the differences between pre- and post-test scores were significant in both groups, we performed t-tests on paired samples. When checking the assumptions for the t-test, we found only mild outliers, so we decided not to exclude them from our data. In both groups, the differences between the pre- and post-test scores were not normally distributed, as shown by the Shapiro-Wilk test (control group: p = .022; experimental group: p = .005). However, since our samples were reasonably large (N > 30), we proceeded with the analysis (Stone, Citation2010, p. 1563). In the experimental group, post-test scores were significantly higher than pre-test scores, t(43) = 4.069, p < .001 (two-tailed), d = .613. In the control group, post-test scores were significantly lower than pre-test scores, t(38) = −3.810, p < .001 (two-tailed), d = .61. In both groups, the differences were significant with medium effect sizes (Cohen, Citation1988).

To confirm that the intervention had an impact on scores, we performed an ANCOVA with pre-test scores entered as a covariate (Dugard & Todman, Citation1995). The assumption of homogeneity of regression slopes was met with regard to the dependent variable, as the interaction term was not statistically significant (p = .152). The residuals were normally distributed, as indicated by the Shapiro-Wilk test, p = .175. However, three mild outliers were found and the homogeneity of variance was not satisfied, as Levene’s test showed (p = .022). The outliers were identified as the highest achievers in the post-test, with scores of 19 points (out of 28 possible points). They stood out by only one point compared to the next highest-achieving students (18 points). Therefore, after careful consideration, we included their results in the calculations and went forward with the ANCOVA. The calculation showed that, after adjusting for the pre-test, post-test results differed significantly between the two groups, F(1, 80) = 20.258, p < .001, partial η2 = .202. This confirmed that the intervention had a positive impact on the comparison competency of the experimental group.
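As an illustration, the assumption checks reported here (homogeneity of regression slopes, normality of residuals, homogeneity of variances) could be reproduced as follows; the sketch reuses the hypothetical 'df' from the earlier block and is not the authors’ code:

```python
# Sketch of the ANCOVA assumption checks (hypothetical 'df' with columns
# 'pre', 'post', 'group', as in the pipeline sketch above).
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def check_ancova_assumptions(df) -> None:
    # Homogeneity of regression slopes: the pre x group interaction term
    # should be non-significant in the expanded model
    slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
    print(sm.stats.anova_lm(slopes, typ=2))  # inspect the interaction row

    # Normality of the ANCOVA model residuals (Shapiro-Wilk)
    model = smf.ols("post ~ pre + C(group)", data=df).fit()
    print(stats.shapiro(model.resid))

    # Homogeneity of variances across groups (Levene's test)
    groups = [g["post"].to_numpy() for _, g in df.groupby("group")]
    print(stats.levene(*groups))
```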

The influence of the implementation of the comparison method on comparison competency

Finally, we investigated whether scores in the implementation of the comparison method (see Table 2) could be linked to the difference between pre- and post-test scores in the experimental group. Results showed a nearly strong (Cohen, Citation1988) positive correlation between the two elements, r = .470, p = .001. We can therefore assume that the observed progress between pre- and post-test in this group was correlated with the systematic use of the teaching tool presenting the comparison method (see Figure 3).

When classifying the results of the experimental group in the post-test into the different dimensions of comparison competency, we could observe that all dimensions of comparison competency improved between tests (see Figure 5).

Figure 5. Results obtained by the experimental group in the pre- and post-tests in the different dimensions of comparison competency. The higher the level, the higher the competency. Own elaboration.

In Dimension 1 (“Planning and implementation of comparison processes,” see Figure 5), 54.5% of students achieved level 4 while 45.5% achieved level 3. This showed that more students chose all elements of the comparison process in the post-test than in the pre-test, including a question to answer with the comparison. In dimension 2 (“Reflection and argumentative justification of comparison processes,” see Figure 5), there was also an improvement in the use of argumentation, not only to justify answers (from 31.8% of students in the pre-test to 43.18% in the post-test attaining level 1 in this dimension) but also to justify the choice of comparison elements (6.82% of students in the post-test thereby attaining level 2). In Dimension 3 (“Acknowledgement and analysis of interrelations in geographical information,” see Figure 5), higher levels were also achieved, with 15.91% of students attaining level 4, while none had achieved this level in the pre-test. Finally, students also performed better in dimension 4 (“Achievement of comparison objectives,” see Figure 5), where level 4 was achieved by 63.64% of students (in the pre-test, only 38.64% of students obtained this level).

Overall, the intervention and the use of the comparison method as a tool had a positive impact on post-test scores and on comparison competency in the experimental group. When compared to the pre-test results of first-year university students (a mean of 10.82 ± 2.29) in our previous study, which used the same assessment tool (Simon & Budke, Citation2023), the post-test results of the secondary students (a mean of 12.68 ± 3.92) were significantly better (95% CI [.22, 3.49], t(49.07) = 2.28, p = .026).

Discussion

In this study, we conducted an intervention with 83 secondary students from France and Germany using a quasi-experimental design and provided a first assessment of how competent secondary students are at performing a comparison. We also tested the use of the comparison method to reinforce comparison competency. This first assessment of secondary students’ comparison competency, together with the test of a tool to enhance it, provides an initial response to calls for more research on assessing and enhancing geographical skills (Bourke & Mills, Citation2022; Kidman & Chang, Citation2022; Lane & Bourke, Citation2019).

Results showed that most secondary students in our study mastered only low levels of comparison competency before the intervention. In the pre-test, students did not reflect on the comparison process while selecting comparison elements. Many students did not use argumentation to justify their decisions in the comparison process. These initial results align with our previous research on textbook tasks. Textbook comparison tasks often fail to foster the development of comparison competencies, since they do not enhance higher-order thinking and focus only on comparison results (Simon et al., Citation2020; Simon & Budke, Citation2020), which can explain the low levels achieved in the pre-test. Secondary students’ argumentative skills were also found to be insufficient in various studies (Budke, Schiefele, & Uhlenwinkel, Citation2010, p. 68; Uhlenwinkel, Citation2015, p. 59). The better results found in the fourth dimension of comparison competency (“Achievement of comparison objectives,” see Figure 1) are also in accordance with our initial analysis of textbook tasks, which showed that textbook tasks performed better in this dimension (Simon et al., Citation2020; Simon & Budke, Citation2020). Students were able to achieve higher results since they had already been confronted with tasks that enhance this dimension of comparison competency.

Our intervention showed that enhancing the comparison competency of secondary students is possible via the explicit communication of the comparison method, used as a tool in the first phase of the intervention and as a prompt in the teaching instructions to build a comparison during the immersion phase (see Figure 2). Although Cavagnetto (Citation2010, p. 15) described the “immersion” intervention as the best intervention form for developing argumentation, in our study the mixed organisation (first, the explicit teaching of the comparison method and then immersion in a research situation using the method) proved valuable. This aligns with other studies in which explicit teaching strategies were successful at enhancing skills such as systems thinking (e.g. Cox et al., Citation2019). Indeed, in our study, explicitly teaching the different steps of the comparison method and using the method throughout the implementation phase allowed the competency to be reinforced, since the use of the method correlated with progress between pre- and post-tests. This enabled all dimensions of comparison competency to be bolstered, since we could see improvements in both the methodological dimensions (dimensions 1 and 2) and the content-related dimensions (dimensions 3 and 4) of comparison competency. This suggests a possible interdependency between progress in the methodological and content-related dimensions of comparison competency and points to the success of explicit instruction in metacognitive strategies that teach students how to learn.

Finally, in the post-test, the secondary students from German and French high schools (“Gymnasium” and “lycée”) in the experimental group showed a significant increase in comparison competency, which allowed them to perform better than university students evaluated without the teaching intervention (Simon & Budke, Citation2023). This demonstrates the great potential of the explicit teaching and use of the comparison method in secondary geography classes to reinforce comparison competency. Although the intervention was successful, we did not test its long-term efficacy, even though the post-test showed that students could transfer the generic model to other comparison tasks. Further research could assess whether progressive teaching of the comparison method, used regularly as a scaffold (Vygotsky, Citation1978) during secondary education, would ensure that the progress made by secondary students lasts. Also, since the comparison method was used as a basis for the development of the assessment tool (Simon & Budke, Citation2020, Citation2023; see Table 1), our results suggest that providing students with the assessment tool (or parts of it) would allow them to assess their own progress, making the assessment tool both summative and formative. More generally, the teaching of geographical methods in secondary education should be expanded to prepare students for university and to strengthen scientific literacy.

Acknowledgements

The authors would like to thank Prof. Dr. Frank Schäbitz for his support, Michelle Wegener, and students and teachers who participated in the study for their collaboration.

Disclosure statement

The authors declare no conflict of interest.

Additional information

Funding

This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)–Project Number 57444011–SFB806. The digital learning unit (OER) was funded by the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the DiGeo project under the funding code 16DHB3003.

Notes

1 In Germany, all participants from the experimental group were enrolled in an advanced elective geography class: “Leistungskurs Geographie”. In France participants were enrolled in the history-geography, geopolitics and political sciences stream: “Spécialité Histoire-géographie, géopolitique, sciences politiques”.

2 Differences between the two countries will be explored in another article.

4 The CRC-806 “Our way to Europe” was funded in three phases from 2009 to 2021. In this interdisciplinary project, archaeologists, climate researchers, geoscientists, etc. explored factors, obstacles and possible routes for human dispersal from Africa to Europe. It involved our project which aimed to disseminate research results while doing educational research. More information can be found here: https://www.sfb806.uni-koeln.de/

5 The table and instructions for students can be found on the following pages of the digital learning unit: https://www.ilias.uni-koeln.de/ilias/ilias.php?ref_id=4325913&obj_id=350011&cmd=layout&cmdClass=illmpresentationgui&cmdNode=hb&baseClass=ilLMPresentationGUI

References

  • Abricot, N., Zuniga, C. G., Valencia-Castaneda, L., & Miranda-Arredondo, P. (2022). What learning is reported in social science classroom interventions? A scoping review of the literature. Studies in Educational Evaluation, 74, 101187.
  • Bednarz, S., Heffron, S., & Huynh, N. (2013). A road map for 21st century geography education: Geography education research. Washington, DC: Association of American Geographers.
  • Bednarz, R., & Lee, J. (2019). What improves spatial thinking? Evidence from the spatial thinking abilities test. International Research in Geographical and Environmental Education, 28(4), 262–280.
  • Bourke, T., & Lane, R. (2017). The inclusion of geography in TIMSS: Can consensus be reached? International Research in Geographical and Environmental Education, 26(2), 166–176.
  • Bourke, T., & Mills, R. (2022). Binaries and silences in geography education assessment research. In T. Bourke, R. Mills, & R. Lane (Eds.), Assessment in geographical education: An international perspective (pp. 3–27). Cham: Springer.
  • Budke, A., Schiefele, U., & Uhlenwinkel, A. (2010). ‘I think it’s stupid’ is no argument: Investigating how students argue in writing. Teaching Geography, 35(2), 66–69.
  • Cavagnetto, A. R. (2010). Argument to foster scientific literacy: A review of argument interventions in K–12 science contexts. Review of Educational Research, 80(3), 336–371.
  • Chang, C.-H., & Kidman, G. (2019). Curriculum, pedagogy and assessment in geographical education – For whom and for what purpose? International Research in Geographical and Environmental Education, 28(1), 1–4.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Cox, M., Elen, J., & Steegen, A. (2019). The use of causal diagrams to foster systems thinking in geography education: Results of an intervention study. Journal of Geography, 118(6), 238–251.
  • DGfG. (2017). Bildungsstandards im Fach Geographie für den Mittleren Schulabschluss mit Aufgabenbeispielen. Bonn: Selbstverlag DGfG.
  • Dugard, P., & Todman, J. (1995). Analysis of pre-test-post-test control group designs in educational research. Educational Psychology, 15(2), 181–198.
  • Gersmehl, P. J., & Gersmehl, C. A. (2007). Spatial thinking by young children: Neurologic evidence for early development and “educability”. Journal of Geography, 106(5), 181–191.
  • Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1–38.
  • Jiménez-Aleixandre, M. P., & Erduran, S. (2007). Argumentation in science education: An overview. In S. Erduran & M. P. Jiménez-Aleixandre (Eds.), Argumentation in science education: Perspectives from classroom-based research (pp. 3–27). Dordrecht: Springer.
  • Kantor, P., & Savitch, H. V. (2005). How to study comparative urban development politics: A research note. International Journal of Urban and Regional Research, 29(1), 135–151.
  • Kidman, G., & Chang, C.-H. (2022). Assessment and evaluation in geographical and environmental education. International Research in Geographical and Environmental Education, 31(3), 169–171.
  • Krehl, A., & Weck, S. (2020). Doing comparative case study research in urban and regional studies: What can be learnt from practice? European Planning Studies, 28(9), 1858–1876.
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
  • Lane, R., & Bourke, T. (2019). Assessment in geography education: A systematic review. International Research in Geographical and Environmental Education, 28(1), 22–36.
  • Lee, J., & Bednarz, R. (2012). Components of spatial thinking: Evidence from a spatial thinking ability test. Journal of Geography, 111(1), 15–26.
  • Lijphart, A. (1971). Comparative politics and the comparative method. American Political Science Review, 65(3), 682–693.
  • Loewenstein, J., & Gentner, D. (2001). Spatial mapping in preschoolers: Close comparisons facilitate far mappings. Journal of Cognition and Development, 2(2), 189–219.
  • Ministère de l’Education Nationale et de la jeunesse. (2020). Thème 4: L’Afrique australe: Un espace en profonde mutation (8-10 heures). Paris: Ministère de l’Education Nationale et de la jeunesse.
  • Namy, L. L., & Gentner, D. (2002). Making a silk purse out of two sow’s ears: Young children’s use of comparison in category learning. Journal of Experimental Psychology. General, 131(1), 5–15.
  • Nijman, J. (2007). Introduction—Comparative urbanism. Urban Geography, 28(1), 1–6.
  • OECD. (2019). PISA 2018 science framework (pp. 97–117). OECD.
  • Paniagua, M., Swygert, K. A., & Downing, S. M. (2019). Written tests: Writing high-quality constructed-response and selected-response items. In R. Yudlowsky, Y.S. Park, & S. Downing (Eds.), Assessment in health professions education (2nd ed.). London and New York: Routledge.
  • Piovani, J. I., & Krawczyk, N. (2017). Comparative studies: Historical, epistemological and methodological notes. Educação & Realidade, 42(3), 821–840.
  • Robinson, J. (2006). Ordinary cities: Between modernity and development. New York: Routledge.
  • Rychen, D., & Salganik, L. (2001). The definition and selection of key competencies. Bern: Hogrefe & Huber Publishers.
  • Sassen, S. (1999). The global city: New York, London, Tokyo. Princeton, NJ: Princeton University Press.
  • Simon, M., & Budke, A. (2020). How geography textbook tasks promote comparison competency—An international analysis. Sustainability, 12(20), 8344.
  • Simon, M., & Budke, A. (2023). Students’ comparison competencies in geography: Results from an explorative assessment study. Journal of Geography in Higher Education, 0(0), 1–21.
  • Simon, M., Budke, A., & Schäbitz, F. (2020). The objectives and uses of comparisons in geography textbooks: Results of an international comparative analysis. Heliyon, 6(8), e04420.
  • Stone, E. R. (2010). T test, paired samples. In N. Salkind (Ed.), Encyclopedia of research design (pp. 1560–1565). Thousand Oaks, California: SAGE Publications Inc.
  • Uhlenwinkel, A. (2015). Geographisches Wissen und geographische argumentation. In A. Budke, M. Kuckuck, M. Meyer, F. Schäbitz, K. Schlüter, & G. Weiss (Eds.), Fachlich argumentieren. Didaktische Forschungen zur Argumentation in den Unterrichtsfächern (Vol. 7, pp. 46–61). Münster: Waxmann.
  • Vygotsky, L. S. (1978). Interaction between learning and development. In L. S. Vygotsky & M. Cole (Eds.), Mind in society: Development of higher psychological processes (pp. 79–91). Cambridge, MA: Harvard University Press.
  • Wilcke, H., & Budke, A. (2019). Comparison as a method for geography education. Education Sciences, 9(3), 225.