International Journal of Advertising
The Review of Marketing Communications
Volume 43, 2024 - Issue 3

Perspectives: replication is more than meets the eye

Pages 580-599 | Received 01 Feb 2023, Accepted 02 Aug 2023, Published online: 08 Sep 2023

Abstract

Drawing on recent research and debates in the social sciences, this paper situates replication in an advertising research context. We clarify the role of replication in the field and outline the challenges inherent in replication studies in advertising research. We further elaborate on how researchers can engage in replication research to increase the truth value of advertising research while overcoming the obstacles such research faces. Finally, we discuss how advertising scholars, reviewers, and editors can facilitate replication research to reduce the share of false-positive results and accumulate knowledge in the discipline. We see replication as critical in advertising research, given the high variability of experimental factors and the applied nature of the field. A better understanding of replications and the challenges of advertising research should therefore inspire scholars to engage in more replication attempts and reviewers and editors to consider such studies for publication.

Introduction

Various fields in the social sciences have recently faced a replication crisis in which new research has failed to duplicate the results of previous research (e.g. Open Science Collaboration 2015), sparking a debate about the need for more replication research. In advertising research, too, there is agreement that replication studies are lacking in the field and that leading journals should publish more of them (e.g. Carlson 2015; Eisend, Franke, and Leigh 2016; Kerr, Schultz, and Lings 2016; Park et al. 2015; Reid 2014; Reid et al. 1981).

How rare are replications in advertising research? Park et al. (2015) surveyed the replication studies published between 1980 and 2012 in four advertising journals (IJA, JA, JAR, and JCIRA). The total number of replication articles was 184, or 6.4% of all research articles published in the survey period, and the share of replication articles rose gradually from 3.5% in the 1980s to 15.1% in the 2010s. Park et al. (2015) also found that intrastudy replications were more frequent than interstudy replications (56.6% and 44.6%, respectively) and that the former were responsible for the increase in replications in the 2000s. Overall, these results support the claim that replication is lacking in advertising research.

Replication is frequently understood as duplicating the method of a previous study and analyzing whether the new results match those of the original study (Hubbard and Armstrong 1994). However, in a recent article, Nosek and Errington (2020, 2) observe the following: ‘According to common understanding, replication is repeating a study’s procedure and observing whether the prior finding recurs. This replication definition is intuitive, easy to apply, and incorrect.’ Instead, Nosek and Errington (2020) define replications as studies for which outcomes consistent (inconsistent) with a prior scientific claim would increase (decrease) confidence in the claim. Moreover, advertising research has focused chiefly on broad categories of replication research, such as intra- versus interstudy replications and exact versus extension replications (Park et al. 2015; Schultz, Kerr, and Kitchen 2022). In contrast, recent research in other fields emphasizes the importance of understanding the different types and purposes of replication research. For example, scholars have distinguished between direct and conceptual replications (Crandall and Sherman 2016), between replications in the laboratory and in the field (Mortensen and Cialdini 2010), and between replications aiming to weed out false-positive results and those investigating boundary conditions (Hüffmeier, Mazei, and Schultze 2016). Thus, the overall picture is that replication research is more complex than previous research and debate in advertising suggest.

The low number of advertising replication studies and the field’s incomplete grasp of replication research make it essential to provide a deeper understanding of replication, outline its different types, and explain why each is needed.

Replication research is essential for increasing confidence in scientific claims, building knowledge within a field, and increasing public trust in science (National Academy of Sciences 2018). For advertising research, which has seen a growing academic–practitioner divide (Ang et al. 2023), increasing confidence in the field’s research could also help to bridge this gap. In this context, it should be noted that replication research is not the only way to improve confidence in scientific claims. Scholars have argued in favor of research practices such as relying on substantive rather than statistical significance (Sawyer and Peter 1983), requiring preregistration of empirical studies (Nosek et al. 2018), and meta-analysis (Schmidt 1992) as means to increase confidence in research results. However, their advocates acknowledge that these practices supplement rather than replace replication studies. They also remain rare in advertising research (e.g. a search of IJA, JA, and JAR found two articles with preregistered empirical studies).

Replication research increases confidence in science primarily by reducing the rate of false-positive research results. False-positive results are problematic not only because they diminish the truth value of research but also because they lead to fruitless research efforts and ineffective policy changes (Lewandowsky and Oberauer 2020; Simmons, Nelson, and Simonsohn 2011). However, false-positive results have different causes, and their detection requires different types of replication research. Thus, the first aim of this article is to explicate the problems with and causes of false-positive results and the replication research needed to weed them out.

Moreover, replication studies serve more purposes than weeding out false-positive results. They can test the generalizability of theories and their boundary conditions (Brandt et al. 2014; Maner 2014; Nosek et al. 2022), study underlying processes by introducing mediators (Brandt et al. 2014), and provide additional data for the estimation of effect sizes (Brandt et al. 2014; Hüffmeier, Mazei, and Schultze 2016). Thus, the second aim of this article is to help advertising scholars select a purposeful replication strategy by delineating the different types of replication studies and what each can and cannot do.

All social sciences face challenges that threaten the credibility of replication research (Brandt et al. 2014). Advertising research shares many of these challenges with other fields but also has particular ones (e.g. the use of highly context-dependent stimuli). Thus, the third aim of the article is to outline the challenges that replication research in advertising faces.

To meet our aims, we draw on recent research and debates in psychology and other fields to situate replication in an advertising research context and propose how advertising scholars, reviewers, and editors can improve and facilitate replication research to increase confidence in our field’s research results.

The problems with false-positive results

A false-positive result (or Type I error) is the rejection of a true null hypothesis, meaning that the result indicates an effect or relationship that does not exist. Because of the probabilistic nature of statistical significance testing, some statistically significant results will be false positives by pure chance. Contrary to common assumption, the rate of false positives is not equal to the statistical significance level (i.e. an α-level of p < .05 does not mean that less than 5% of all results are false positives), because this assumption overlooks the Bayesian base rate (see, e.g. Pinker 2021, 149–154) and statistical power. Three factors determine the rate of false positives: 1) the prior probability of an effect (i.e. the likelihood that an effect exists), 2) the statistical significance level, and 3) the statistical power of the study (Ioannidis 2005; Miller and Ulrich 2022; Pashler and Harris 2012; see also Wacholder et al. 2004). For example, if the prior probability of an effect is 10% and the power is 35% – rates that could be realistic for advertising research – the rate of false positives is 56% (Pashler and Harris 2012, 532). Because the prior probability is unknown, the exact rate of false positives in any field cannot be known, and scholars have criticized the work of Ioannidis (2005) and Pashler and Harris (2012) for exaggerating the problem (Stroebe 2016). However, even if the exact rate of false positives is considerably less than half, these analyses point to a substantial problem with false positives in the scientific literature.
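
To make the arithmetic in this example explicit, the expected share of false positives among statistically significant results follows from the three factors above (a standard calculation rather than a quotation from the cited sources; we write π for the prior probability of a true effect, α for the significance level, and 1 − β for the statistical power):

\[ \text{False-positive rate} = \frac{\alpha(1-\pi)}{\alpha(1-\pi) + (1-\beta)\pi} = \frac{.05 \times .90}{.05 \times .90 + .35 \times .10} = \frac{.045}{.080} \approx 56\%. \]

With 90% of tested effects non-existent, the .05 significance level yields .045 false positives per hypothesis tested, whereas the 35% power yields only .035 true positives, so false positives outnumber true ones.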

In addition to false positives resulting from chance, there are at least three other causes of false-positive results. First, results may be influenced by experimenter or tacit knowledge effects (Hüffmeier, Mazei, and Schultze 2016). For example, experimenter expectations or behavior may lead to demand artifacts (i.e. the experimenter provides cues that guide the participants’ answers; Darley and Lim 1993). Second, questionable research practices (QRP; John, Loewenstein, and Prelec 2012) such as p-hacking (Simmons, Nelson, and Simonsohn 2011) and HARKing (‘Hypothesizing After the Results are Known;’ Kerr 1998) inflate the number of significant results and lead to the dissemination of false-positive results (see overview in Bergkvist 2020). For example, simulations have shown that carefully selecting control variables and altering other factors under the experimenter’s control can increase the chance of a false-positive result more than tenfold (Simmons, Nelson, and Simonsohn 2011). Third, several social science fields have had high-profile cases of outright cheating in which researchers faked or manipulated their results (e.g. Funder et al. 2014; Hamblin 2018; Levelt Committee, Noort Committee, and Drenth Committee 2012). While the full extent of academic fraud is unknown, it is, sadly, a phenomenon that likely contributes to the rate of false-positive results in the literature.

False-positive results are problematic for several reasons. Published results, including false positives, tend to be ‘sticky,’ thus impeding the ability of science to get closer to the truth: ‘…once published, there is no systemic ethic of confirming or disconfirming the validity of an effect. False effects can remain for decades, slowly fading or continuing to inspire and influence new research…’ (Nosek, Spies, and Motyl 2012, 619). Recent research shows that studies that failed to replicate in some of the large independent replication efforts (e.g. Open Science Collaboration 2015) are cited more frequently than studies whose results were replicated (Serra-Garcia and Gneezy 2021; but see Clark, Connor, and Isch 2023). Moreover, false-positive results waste resources: researchers may build new research on them and thereby pursue fruitless research programs, and false positives may lead to ineffective policy changes (Lewandowsky and Oberauer 2020; Simmons, Nelson, and Simonsohn 2011).

In sum, false-positive results are likely more common than most scholars believe, and several serious problems are associated with them. While chance is the main contributor to false-positive results, experimenter effects, QRP, and cheating also contribute to their dissemination. Recognizing these different causes is important because weeding them out requires different types of replication research.

The need for different types of replication research

Different purposes require different types of replication research. For example, replication studies suitable for weeding out false-positive results are not necessarily appropriate for studying generalizability and boundary conditions, and weeding out false positives caused by chance may require a different type of study than weeding out false positives caused by experimenter effects. Thus, developing a replication research strategy requires a fine-grained comprehension of the different types of replication research. While scholars agree that replication studies differ in kind, they have yet to reach a consensus on how to classify them: there are replication typologies with, for example, two categories (Crandall and Sherman 2016), three categories (Hudson 2023), and four categories (Easley, Madden, and Dunn 2000).

Our literature search identified one typology sufficiently fine-grained to distinguish between studies suitable for the different primary purposes of replication research (i.e. weeding out false positives, studying generalizability and boundary conditions, and studying underlying processes) while taking into account that false positives have different causes (chance, experimenter effects, and QRP and cheating). This hierarchically arranged typology with five categories of replication research was developed by Hüffmeier, Mazei, and Schultze (2016). The typology reconciles different frameworks and terminologies, distinguishing between exact replications, close replications, constructive replications, and conceptual replications in the laboratory and the field (Table 1). It sharpens the distinction between the different types of replication and specifies each type’s purpose. Importantly, the typology distinguishes replications by the original researchers from replications by independent researchers. This distinction is essential because replications by the original authors are particularly suited to identifying false positives caused by chance, whereas replications by independent researchers are better suited to identifying false positives caused by QRP or fraud. By clearly distinguishing the purpose of each replication type, the typology also helps readers interpret the findings of replication studies.

Table 1. Typology of replication studies from Hüffmeier, Mazei, and Schultze (2016).

While the different types of replication studies serve different purposes, it should be noted that they all provide additional data for the estimation of effect sizes (Brandt et al. 2014; Hüffmeier, Mazei, and Schultze 2016). Additional data are necessary to gain confidence in population effect sizes and to understand the heterogeneity in effect sizes across studies. Accumulated evidence can be synthesized through meta-analyses, which offer more precise estimates of population effect sizes (Flora 2020).
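
For readers who want the mechanics, the meta-analytic synthesis referred to here is, in its simplest fixed-effect form, the inverse-variance weighted mean of the individual study estimates (a textbook formula included for illustration; the symbols are ours, and heterogeneous effect sizes would call for a random-effects model instead):

\[ \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i}, \]

where \(\hat{\theta}_i\) is the effect size estimate from study \(i\), \(v_i\) its sampling variance, and \(k\) the number of studies. Because the variance of the combined estimate is \(1/\sum_i w_i\), each additional replication narrows the confidence interval around the population effect size.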

Exact replications

Studies, particularly experiments, have four basic elements: sample, treatment, measured outcome, and setting (Hudson 2023; Machery 2020; Nosek and Errington 2020). Exact replications should duplicate all these elements of the original study, meaning they should use a sample from the same population, use the same stimuli (treatment), measure the same outcomes using identical measures, and carry out the study in the same setting as the original study. A further requirement is that the original authors execute the exact replication (Hüffmeier, Mazei, and Schultze 2016). By duplicating all elements of the original study, including the research team, a successful exact replication lends additional support to the original research results. Conversely, failure to replicate casts doubt on the original results. Thus, the primary purpose of exact replications is to reduce the rate of false-positive results caused by chance (Hudson 2023).¹

Single empirical findings are interesting but insufficient to establish a valid theory. Every theoretical relation, especially a newly established one, needs at least one exact replication study (e.g. Hüffmeier, Mazei, and Schultze 2016). This requirement is even more relevant for theoretically unexpected results or findings not strongly connected to a theoretical framework (Stroebe and Strack 2014). Therefore, when a finding is truly new, an exact replication reduces the likelihood of false-positive findings and constitutes a valuable contribution (Hudson 2023; Hüffmeier, Mazei, and Schultze 2016; Nosek, Spies, and Motyl 2012).

Close replications

Close replications are similar to exact replications, except that independent researchers execute them (Hüffmeier, Mazei, and Schultze 2016). The intent is to reproduce the original study with minimal differences. The primary purpose of close replications is to rule out false positives caused by experimenter or tacit knowledge effects, which are contingent on the executing researchers (Hüffmeier, Mazei, and Schultze 2016). Close replications can also suggest the presence of QRP or fraud in previous studies. In addition, they can reduce the rate of false-positive results obtained by chance, provided they are sufficiently similar to the original study (Hüffmeier, Mazei, and Schultze 2016; Simons 2014). However, it is challenging to design close replications similar enough to the original study to rule out explanations of divergent results other than a false positive in the original study: factors such as a time lag or a different setting frequently provide alternative explanations of close replication results (see discussion below).

Constructive replications

Constructive replications have two parts: an exact or close replication of the original study and added elements (Hüffmeier, Mazei, and Schultze 2016). The added elements typically investigate boundary conditions (e.g. population differences) or advance and refine theoretical relationships by excluding confounds or adding mediators or moderators (Hüffmeier, Mazei, and Schultze 2016). Constructive replications contribute to a field by advancing and refining the theoretical context and, through the exact or close replication part, reducing the rate of false positives (Hüffmeier, Mazei, and Schultze 2016).

An essential consideration in constructive replications that include an exact or close replication of the original study is that the added elements should not interfere with the design elements of the original study. Thus, scholars should measure added mediating or moderating variables after the variables included in the original study and ensure that changes in instructions or other factors do not influence the replication part of the study.

Conceptual replications in the laboratory and the field

Conceptual replications vary some or all of the basic elements of the original study (sample, treatment, measured outcomes, setting) while keeping the underlying theoretical process unchanged. Conceptual replications thus aspire to ‘comparability to the original study … only in the aspects that are deemed theoretically relevant’ (Hüffmeier, Mazei, and Schultze 2016, 87). This reflects the main difference between the first three categories of replications and conceptual replications: a conceptual replication examines the theoretical processes underlying the phenomenon independent of the methodology (Crandall and Sherman 2016; Hudson 2023), whereas exact, close, and constructive replications aim to duplicate – or at least partially reproduce – the methodological elements of the original study. The difference between conceptual replications in the laboratory and in the field lies, as their names indicate, in the setting.

The primary purpose of conceptual replications is to act as robustness checks, testing whether theories hold under different conditions. Conceptual replications are forward-looking (Hudson 2023) because they cannot rule out false-positive results in the original study (Hüffmeier, Mazei, and Schultze 2016). On the contrary, unless preceded by exact replications or appropriate close or constructive replications, conceptual replications could lead to the widespread dissemination of false results because of publication bias (Pashler and Harris 2012). Once new theories and effects have been established through exact or close replications, conceptual replications can rule out potential flaws in the original research design (Brandt et al. 2014) and explore generalizability across contexts and methods, boundary conditions, and moderating variables (Hüffmeier, Mazei, and Schultze 2016; Nosek et al. 2022), with a focus on the truth value of the theory. Conceptual replications demonstrate the methodological independence of an effect, as they can rule out phenomena such as experimenter effects, narrow operationalizations, or sample bias (Hüffmeier, Mazei, and Schultze 2016).

Conceptual replications in the field help to establish valid theories with real-world scope, as they allow the researcher to rule out that a laboratory finding is either ‘a laboratory artifact or too weak to prevail under the uncontrolled conditions of field studies’ (Hüffmeier, Mazei, and Schultze 2016, 89; see also Maner 2016; Mortensen and Cialdini 2010). Conceptual replications are the last step in developing theories, and a sufficient number of exact, close, and constructive replications is essential before pursuing them.

Challenges with replication in advertising research

Differences in the four components of experiments (sample, treatment, measured outcomes, setting) make executing replications and interpreting their results challenging in all social science fields (Machery 2020; Nosek and Errington 2020). Due to the nature of the field and the phenomena it studies, advertising research faces challenges that are less prevalent in other fields. Advertising is a constantly evolving social phenomenon: its strategies and tactics, and their effects on people, change between situations and over time. Consequently, advertising stimuli used in one setting at one point in time may yield different results than the same stimuli in a different setting or at another point in time (Schultz, Kerr, and Kitchen 2022). Advertising research is also sensitive to changes in the stimuli and other factors that could influence the results.

Consequently, there are several factors that advertising scholars need to take into account when evaluating or designing replication studies. Table 2 offers an overview of these factors (the table focuses on advertising research challenges and is not an exhaustive list of replication challenges; for an overview of generic replication challenges, see Brandt et al. 2014). Failure to hold these factors constant or control for them makes meaningful interpretation of exact or close replication results challenging or impossible. At the same time, the factors suggest potential moderators that could be investigated in constructive and conceptual replications.

Table 2. Factors likely to lead to divergent results in advertising replication studies.

Differences in the sample between the original and the replication study make comparisons between studies perilous (Factor 1). Most advertising studies rely on convenience samples such as students or online snowball samples (Sarstedt et al. 2018). Since convenience samples are challenging to reproduce, there is a considerable risk that sample differences cause replication results to diverge from the original study. For example, some studies sample marketing/advertising students, some sample business students, and others sample university students in general; these groups likely differ in important respects (e.g. their knowledge of advertising tactics), potentially influencing the results. Similarly, studies have found that samples from different online panels differ, and results may be incomparable (Peer et al. 2017). Accordingly, the samples in exact and close replication studies should be similar in all relevant aspects to those in the original studies for results to be comparable. Constructive and conceptual replication studies could instead sample from different populations to investigate boundary conditions or moderating effects.

Changes in stimuli frequently affect study results (Factor 2). The complex nature of advertising stimuli makes it particularly important to keep materials constant across studies (Wells 2001). Even small changes in features such as picture resolution, legibility of text, and sound quality could cause divergent results in replication studies. In exact or close replications, differences in the materials could therefore cause differences in the results, and replication authors must use precisely the same materials as the original study. In constructive or conceptual replications, deliberately introduced differences in stimulus features can help determine the conditions necessary to (re)produce an effect. For example, physical attractiveness is a critical factor in the effectiveness of celebrity endorsements (see overview in Bergkvist and Zhou 2016), and the perceived attractiveness of the celebrity could be affected by picture quality or other changes in the picture. Thus, advertising scholars should consider differences in the stimuli when conducting or evaluating exact or close replication studies, whereas a constructive replication could add a new stimulus (to compare with the original) and a conceptual replication could change the stimulus to investigate generalizability across stimuli.

Measurement operationalizations are critical for comparability across studies (Factor 3). Studies of advertising research practice have found considerable variability in the measures advertising scholars use (Bergkvist and Langner 2017, 2019; Bruner 1998). A dearth of standardized measures of common advertising constructs increases the likelihood that measures differ between studies (Bergkvist 2021). Differences in measurement could cause differences in study results, so advertising scholars must hold measures constant in exact and close replications. In constructive and conceptual replications, researchers can test the potential effects of alternative measures; an effect replicated with alternative measures demonstrates the methodological independence of the effect and increases confidence in the theorized effect.

The type of study (Factor 4), the data collection setting (Factor 5), and the stimuli presentation (Factor 6) could cause differences in results if they are not held constant across the original study and exact or close replications. For example, an online replication of a study originally executed in a laboratory might not yield comparable results, and divergent results would be open to multiple interpretations. Similarly, individual and group data collection are incomparable, and stimulus presentation on a large screen is not comparable to presentation on a computer screen. Again, exact and close replication studies must hold these factors constant, while constructive and conceptual replications could investigate the effects of changes in them.

Some factors affect both the sample and stimuli components of experiments. A time interval between the original and replication study (Factor 7) could make samples incomparable and change the perception of stimuli (Schultz, Kerr, and Kitchen 2022). The duration of the publication process at most journals means that there will be a considerable interval between a published study and a replication based on the published article, increasing the likelihood that the sample’s characteristics and the perception of the stimuli have changed. There are similar problems with a change in geographic location (Factor 8). Advertising scholars work in different locations, and differences in culture and economic development may affect research results. For exact replications, the original researchers must ensure that the time lag between the original study and the exact replication is sufficiently short and that the geographic location is the same or similar (the same constraint applies to constructive replications with an exact replication part). For close replications, an implication of the time and geography factors is that the results will in most cases not be comparable to the original study unless the replication is carried out shortly after the original study (which usually means well ahead of journal publication) by researchers in a geographic location similar to that of the original study. On the other hand, time and geography are well worth investigating in constructive and conceptual replications.

Scholars attempting to replicate a previous study need detailed information about the particulars of that study, which is frequently missing from advertising journal articles (Bergkvist and Langner 2017; Sarstedt et al. 2018). Insufficient information about the original study’s sample, materials, procedures, or measures (Factor 9) makes it challenging or impossible to design the replication study satisfactorily (Brandt et al. 2014; Eisend, Franke, and Leigh 2016). Thus, original authors should provide all necessary information, and replication study authors should ensure they have sufficient information to design their study.

In conclusion, the nine factors in Table 2 show that exact and close replication studies in advertising research are challenging. Slight changes in any factor may make results incomparable and exact and close replication results difficult to interpret: it is impossible to know whether changes in the research-design factors caused a failure to replicate or whether the original results were false positives. Consequently, in most cases, the original authors are the only ones who could execute a replication similar enough to the original study to rule out false positives, and they would have to do it shortly after the original study (to prevent time differences from affecting findings). Scholars attempting exact or close replications must control for as many factors as possible and consider the effects of research design differences when interpreting their results. Moreover, scholars attempting constructive and conceptual replications could use the factors in Table 2 as a starting point for identifying moderating variables to include in their studies (Hudson 2023).

Discussion

In the following two sections, we present recommendations for increasing the truth value of advertising research and ways of overcoming existing obstacles to replication research. We summarize the recommendations in Table 3.

Table 3. Recommendations for replication research.

Increasing the truth value of advertising research

It is a truism among scholars that research results have an uncertain truth value (e.g. Nosek, Spies, and Motyl 2012). However, many advertising scholars appear to underestimate how common false positives are in the literature, judging by the faith placed in results from single studies and the common misunderstanding that the rate of false positives equals the standard significance level of p < .05. Although we do not know the true rate of false positives, it is safe to assume that it is far higher than most of us would like to admit (cf. Ioannidis 2005; Pashler and Harris 2012).

Currently, replication in advertising research is rare, and a majority (more than 70%) of replication studies are constructive or conceptual replications (Park et al. 2015). In a similar vein, calls for more replication research in advertising tend to (implicitly) focus on conceptual replications, or ‘reinquiries,’ and overlook exact or close replications (Carlson 2015; Eisend, Franke, and Leigh 2016; Reid 2014; but see Royne 2018). Although conceptual replications are essential for establishing a theory’s validity across contexts and methods, they cannot weed out false positives (Hüffmeier, Mazei, and Schultze 2016; Nosek, Spies, and Motyl 2012). Thus, advertising research needs more, and more purposeful, replication research, especially exact and close replications, if we want to reduce the rate of false-positive results in the literature. Figure 1 provides a decision tree that authors can use to decide which type of replication study is required.

Figure 1. Decision tree for selecting the type of replication study.

Considering the ‘stickiness’ of false positives and the lack of exact replications, coupled with the challenges of replicating advertising research (Table 2), we argue that studies of unestablished advertising theories or with unusual findings should include at least one exact replication carried out by the original authors before the results are published in the leading advertising journals. Such a requirement would not eliminate false-positive results from the literature; results sometimes replicate by chance, and exact replications risk duplicating flaws (if any) in the original study (Brandt et al. 2014; Hudson 2023). However, an exact replication requirement would considerably reduce false positives and increase our belief in the veracity of advertising theories, hypotheses, and results.

Requiring exact replications does not obviate the need for conceptual replications. Once an effect has been established with exact and/or close replications, constructive and conceptual replications should further investigate the generalizability of the theory and its robustness in different settings and with different methodologies. Of particular importance are conceptual replications in the field that demonstrate that the original effect is not a laboratory artifact (Bergkvist and Langner 2023; Maner 2016; Mortensen and Cialdini 2010). Note, however, that original studies that were never replicated should not be conceptually replicated (Pashler and Harris 2012). In these cases, a study testing the same theory or effect should be regarded as an original study requiring replication to rule out false-positive results (i.e. exact, close, or constructive replication, depending on the circumstances) before publication. The same caveat applies to the conceptual replication of studies where the passage of time could have affected the expected results.

A requirement for all replication research is that it should be preregistered (for an overview, see Bergkvist 2020), clearly stating whether the original results are expected to replicate. Without preregistration, the post hoc interpretation of results lacks credibility (Hudson 2023), as interpreting replication results as confirming or disconfirming the original results without an ex-ante hypothesis is akin to HARKing (Kerr 1998). Without preregistration, studies replicating the methods or theories of previous research could, at most, claim that their results are ‘congruent’ or ‘incongruent’ with prior results.

Enabling replication research requires that original studies provide detailed documentation of their methods and make materials available (Brandt et al. 2014). These practices have been highlighted as desirable by researchers arguing for open science in other disciplines because they improve replicability (see Dienlin et al. 2021; Lewis 2020 for a discussion of open science practices). In addition, scholars should take care to label their replication studies appropriately: not all intrastudy replications are labeled as replications, and most studies do not specify the type of replication. Missing or inappropriate labels confuse readers and hamper article searches. Thus, scholars should properly label replication studies (e.g. by adhering to the terminology in this article).

It should also be stressed that credible conceptual replications require careful use of pretests and/or manipulation checks to demonstrate that the study addressed the relevant theoretical constructs (Crandall and Sherman 2016; Fabrigar, Wegener, and Petty 2020; Hüffmeier, Mazei, and Schultze 2016). In this way, the researcher can ensure that the study’s theoretical process is replicated (Hüffmeier, Mazei, and Schultze 2016). Not testing the manipulations leaves the replication study open to the critique that the manipulations may not match the construct of interest (Fabrigar, Wegener, and Petty 2020) and makes it unclear whether a non-replication indicates that the theory did not hold under the new conditions or that the study failed to create the required theoretical conditions. Even when the original study did not conduct a pretest or manipulation check, close replication researchers are encouraged to test whether the original manipulations map onto the intended constructs, as this facilitates the interpretation of a non-replication (Fabrigar, Wegener, and Petty 2020).

Overcoming obstacles to replication research

Several scholars have noted how features of the academic world, such as editors’ and reviewers’ preference for novel findings and career-related publication requirements, disincentivize scholars from searching for the truth and from carrying out replication research (e.g. Giner-Sorolla 2012; Kerr 1998; Nosek, Spies, and Motyl 2012; Nosek et al. 2022; Schaller 2016). Thus, it is unlikely that we will see an increase in replication research unless there are systemic changes in academia. While these changes must come from all entities in society concerned with research, including governments and university leaders, we focus on the roles of scholars, journal editors, and reviewers.

Scholars are frequently reluctant to carry out replication research because it brings less academic prestige than novel research (Giner-Sorolla 2012). While novel research will most likely remain more prestigious than replication research for the foreseeable future, scholars should be aware that replication research can be part of, and add value to, research on novel topics. An exact replication increases confidence in novel results and should increase the likelihood of journal acceptance. If the exact replication is part of a constructive replication, the additional study could extend the original study’s results by including mediating and moderating variables. Thus, a carefully planned series of studies could include both an exact replication and an extension of the original study with relatively little additional effort.

Adding a constructive replication to an original study is an attractive avenue for researchers, as its inclusion increases confidence in the original claim and advances understanding of the effect’s underlying process or boundary conditions. Thus, the combined contribution of an original study followed by a constructive replication should substantially increase a paper’s acceptance probability. A recent example of an article with an original study followed by a constructive replication (including an exact replication), which could serve as a model for future research, is the study by Coleman, Royne, and Pounders (2020).

Replication research has an opportunity cost in that it reduces the funds available for other research (Lewandowsky and Oberauer 2020). However, replication can also reduce costs in the long run, because relying on non-replicated research could mean allocating funds to the study of non-existent effects. Advertising scholars who recognize these benefits, and the value of constructive replication in particular, may be more inclined to engage in replication research.

Moreover, collaborative replication efforts could address the additional costs of replication research. Researchers can establish collaborative networks in which multiple teams work together to replicate and validate findings. By pooling resources and expertise, these efforts can increase the rigor and generalizability of replication studies, leading to more reliable and robust conclusions while overcoming resource limitations. The American Academy of Advertising (AAA) and the European Advertising Academy (EAA) could facilitate such collaboration by establishing networks or platforms where researchers can connect, share ideas, and collaborate on replication projects. These networks can help overcome resource limitations, foster a sense of community around replication efforts, and provide support, guidance, and access to expertise for researchers conducting replications. Similarly, the AAA and EAA could facilitate the sharing of research data from both original studies and replication attempts. Centralized repositories or platforms where researchers can access and analyze shared datasets would foster collaboration, increase the sample size of replication studies, and enhance the robustness of conclusions.

Journal editors and reviewers are the gatekeepers determining what research is published in academic journals, and in that role they are key to increasing the amount of replication research. Through author guidelines, journal editors have an ‘important lever to impose or discourage certain practices’ (De Pelsmacker 2021, 845), and recent research suggests that publication guidelines promoting transparency and best practices increase the reproducibility of the results in published studies (Brown, McGrath, and Sacco 2022). Recently, several marketing journals have amended their author guidelines with requirements for research transparency (e.g. the Journal of Marketing; Marketing Science) and preregistration of experiments (Marketing Letters), policies that should encourage replication research, although they fall short of requiring exact replication before publication. Advertising journal editors could enable independent replications by requiring full disclosure of research methods (e.g. questionnaires and stimuli should be publicly available at the time of publication). They could also make exact replication a requirement for publishing novel results. Moreover, journal editors could encourage conceptual replications by inviting registered reports, that is, manuscripts that are reviewed and accepted (or rejected) for publication before data collection (Chambers 2019). This means that manuscripts with relevant research problems and adequate theoretical foundations and methodology are accepted irrespective of the empirical results, reducing publication bias in published studies and increasing the likelihood of publishing replication studies with null results (Nosek et al. 2022). A first step could be to invite submissions to a special issue of studies based on registered reports.

While reviewers have limited influence on journal policies, they directly influence what research is accepted or rejected for publication. To this end, reviewers must see the value of replication research and be open to recommending acceptance of carefully executed replication studies. Reviewers should also recognize the differences between types of replication studies (exact, close, constructive, conceptual) and be able to evaluate each properly. Maner (2014) suggests that reviewers of replication studies should insist on clear and transparent reporting of methodological rather than theoretical details, focus on statistical power and methodological soundness, show a willingness to recommend acceptance of studies with non-replicating results, and avoid affording precedence to the original study (i.e. regarding the original study as better or more accurate than the replication). Unfortunately, there are indications that replication research has low status among reviewers and that efforts to increase reviewers’ knowledge and understanding have limited effects (Nosek, Spies, and Motyl 2012). Enhancing reviewers’ receptiveness to replication research can begin by adding the keyword ‘replication’ to journal reviewer databases and manuscript handling systems, enabling editors to direct replication manuscripts to reviewers who are genuinely interested in this type of research.

In the longer term, advertising scholars must foster a research tradition that values, encourages, and is willing to publish replication research. This change can begin in our roles as educators. Many students run surveys and experiments each year as a final step toward obtaining bachelor’s, master’s, or doctoral degrees. Encouraging these students to perform close, conceptual, or constructive replications would allow them to gain extensive methodological knowledge and an understanding of the importance of replication (Grahe et al. 2012; Smits and Cuykx 2017). Conducting a replication study comes with numerous challenges that the student needs to think through, and the process also builds theoretical knowledge, since students must consider the theoretical foundations of the studies they replicate. Integrating replication into our educational activities could thus cultivate a generation of scholars who have direct experience conducting replication studies and recognize their significance.

Conclusion

Replication is critical in advertising research, considering the high variability of experimental factors (Table 2) and the applied nature of the field. To this end, authors, editors, and reviewers share a responsibility to increase the number of replication studies (Hubbard and Armstrong 1994). We hope that a better understanding of replications and the challenges inherent in advertising research will inspire scholars to do more replication research and reviewers and editors to accept it for publication.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1. Exact replication results are never definitive; there is always a risk that uncontrolled extraneous factors affect the experimental setting. Nevertheless, most scholars regard exact replications as valid tests of false positives in the original study (Brandt et al. 2014; Crandall and Sherman 2016; Easley, Madden, and Dunn 2000; Hudson 2023; Nosek and Errington 2020).

References

  • Ang, L., C. Buzeta, M. Hirose, M.J.C. van Loggerenberg, G. van Noort, R. Uribe, and H.A.M. Voorveld. 2023. An international perspective of the academic–practitioner divide in advertising: An exploratory study into its causes and solutions. International Journal of Advertising 42, no. 1: 181–200.
  • Bergkvist, L. 2020. Preregistration as a way to limit questionable research practice in advertising research. International Journal of Advertising 39, no. 7: 1172–80.
  • Bergkvist, L. 2021. Measure proliferation in advertising research: Are standard measures the solution? International Journal of Advertising 40, no. 2: 311–23.
  • Bergkvist, L., and K.Q. Zhou. 2016. Celebrity endorsements: A literature review and research agenda. International Journal of Advertising 35, no. 4: 642–63.
  • Bergkvist, L., and T. Langner. 2017. Construct measurement in advertising research. Journal of Advertising 46, no. 1: 129–40.
  • Bergkvist, L., and T. Langner. 2019. Construct heterogeneity and proliferation in advertising research. International Journal of Advertising 38, no. 8: 1286–302.
  • Bergkvist, L., and T. Langner. 2023. A comprehensive approach to the study of advertising execution and its effects. International Journal of Advertising 42, no. 1: 227–46.
  • Brandt, M.J., H. IJzerman, A. Dijksterhuis, F.J. Farach, J. Geller, R. Giner-Sorolla, J.A. Grange, M. Perugini, J.R. Spies, and A. van ‘t Veer. 2014. The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology 50, no. January: 217–24.
  • Brown, M., R.E. McGrath, and D.F. Sacco. 2022. Preliminary evidence for an association between journal submission requirements and reproducibility of published findings: A pilot study. Journal of Empirical Research on Human Research Ethics 17, no. 3: 267–74.
  • Bruner, G.C., II. 1998. Standardization & justification: Do Aad scales measure up? Journal of Current Issues & Research in Advertising 20, no. 1: 1–18.
  • Carlson, L. 2015. The Journal of Advertising: Historical, structural, and brand equity considerations. Journal of Advertising 44, no. 1: 80–4.
  • Chambers, C. 2019. What’s next for registered reports? Nature 573, no. 7773: 187–9.
  • Chan, K., L. Li, S. Diehl, and R. Terlutter. 2007. Consumers’ response to offensive advertising: A cross-cultural study. International Marketing Review 24, no. 5: 606–28.
  • Clark, C.J., P. Connor, and C. Isch. 2023. Failing to replicate predicts citation declines in psychology. Proceedings of the National Academy of Sciences of the United States of America 120, no. 29: e2304862120.
  • Coleman, J.T., M.B. Royne, and K.R. Pounders. 2020. Pride, guilt, and self-regulation in cause-related marketing advertisements. Journal of Advertising 49, no. 1: 34–60.
  • Crandall, C.S., and J.W. Sherman. 2016. On the scientific superiority of conceptual replications for scientific progress. Journal of Experimental Social Psychology 66, no. September: 93–9.
  • Darley, W.K., and J.-S. Lim. 1993. Assessing demand artifacts in consumer research: An alternative perspective. Journal of Consumer Research 20, no. 3: 489–95.
  • De Pelsmacker, P. 2021. What is wrong with advertising research and how can we fix it? International Journal of Advertising 40, no. 5: 835–48.
  • Dienlin, T., N. Johannes, N.D. Bowman, P.K. Masur, S. Engesser, A.S. Kümpel, J. Lukito, et al. 2021. An agenda for open science in communication. Journal of Communication 71, no. 1: 1–26.
  • Easley, R.W., C.S. Madden, and M.G. Dunn. 2000. Conducting marketing science: The role of replication in the research process. Journal of Business Research 48, no. 1: 83–92.
  • Eisend, M., G.R. Franke, and J.H. Leigh. 2016. Reinquiries in advertising research. Journal of Advertising 45, no. 1: 1–3.
  • Fabrigar, L.R., D.T. Wegener, and R.E. Petty. 2020. A validity-based framework for understanding replication in psychology. Personality and Social Psychology Review 24, no. 4: 316–44.
  • Flora, D.B. 2020. Thinking about effect sizes: From the replication crisis to a cumulative psychological science. Canadian Psychology / Psychologie Canadienne 61, no. 4: 318–30.
  • Friestad, M., and P. Wright. 1994. The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research 21, no. 1: 1–31.
  • Funder, D.C., J.M. Levine, D. Mackie, C.C. Morf, S. Vazire, S.G. West, … Task Force on Publication and Research Practices, Society for Personality and Social Psychology. 2014. Notice: PSPB articles by authors with retracted articles at PSPB or other journals: Stapel, Smeesters, and Sanna. Personality and Social Psychology Bulletin 40, no. 1: 132–5.
  • Gilbert, D.T., G. King, S. Pettigrew, and T.D. Wilson. 2016. Comment on ‘Estimating the reproducibility of psychological science.’ Science 351, no. 6277: 1037.
  • Giner-Sorolla, R. 2012. Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science 7, no. 6: 562–71.
  • Grahe, J.E., A. Reifman, A.D. Hermann, M. Walker, K.C. Oleson, M. Nario-Redmond, and R.P. Wiebe. 2012. Harnessing the undiscovered resource of student research projects. Perspectives on Psychological Science 7, no. 6: 605–7.
  • Hamblin, J. 2018. A credibility crisis in food science. The Atlantic, September 24. https://www.theatlantic.com/health/archive/2018/09/what-is-food-science/571105/.
  • Hubbard, R., and J.S. Armstrong. 1994. Replications and extensions in marketing: Rarely published but quite contrary. International Journal of Research in Marketing 11, no. 3: 233–48.
  • Hudson, R. 2023. Explicating exact versus conceptual replication. Erkenntnis 88, no. 6: 2493–514.
  • Hüffmeier, J., J. Mazei, and T. Schultze. 2016. Reconceptualizing replication as a sequence of different studies: A replication typology. Journal of Experimental Social Psychology 66, no. September: 81–92.
  • Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2, no. 8: e124.
  • John, L.K., G. Loewenstein, and D. Prelec. 2012. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science 23, no. 5: 524–32.
  • Kerr, G., D.E. Schultz, and I. Lings. 2016. ‘Someone should do something’: Replication and an agenda for collective action. Journal of Advertising 45, no. 1: 4–12.
  • Kerr, N.L. 1998. HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review 2, no. 3: 196–217.
  • Laroche, M., M.V. Nepomuceno, L. Huang, and M.-O. Richard. 2011. What’s so funny? Journal of Advertising Research 51, no. 2: 404–16.
  • Levelt Committee, Noort Committee, and Drenth Committee. 2012. Flawed Science: The Fraudulent Research Practices of Social Psychologist Diederik Stapel (English translation of the Dutch report ‘Falende wetenschap: De frauduleuze onderzoekspraktijken van social-psycholoog Diederik Stapel’).
  • Lewandowsky, S., and K. Oberauer. 2020. Low replicability can support robust and efficient science. Nature Communications 11, no. 1: 358.
  • Lewis, N.A., Jr. 2020. Open communication science: A primer on why and some recommendations for how. Communication Methods and Measures 14, no. 2: 71–82.
  • Machery, E. 2020. What is a replication? Philosophy of Science 87, no. 4: 545–67.
  • Maner, J.K. 2014. Let’s put our money where our mouth is: If authors are to change their ways, reviewers (and editors) must change with them. Perspectives on Psychological Science 9, no. 3: 343–51.
  • Maner, J.K. 2016. Into the wild: Field research can increase both replicability and real-world impact. Journal of Experimental Social Psychology 66, no. September: 100–6.
  • McNiven, M.D., D.M. Krugman, and S.F. Tinkham. 2012. The big picture for large-screen television viewing: For both programming and advertising. Journal of Advertising Research 52, no. 4: 421–32.
  • Miller, J., and R. Ulrich. 2022. Optimizing research output: How can psychological research methods be improved? Annual Review of Psychology 73: 691–718.
  • Mortensen, C.R., and R.B. Cialdini. 2010. Full-cycle social psychology for theory and application. Social and Personality Psychology Compass 4, no. 1: 53–63.
  • National Academy of Sciences. 2018. The science of science communication III: Inspiring novel collaborations and building capacity. Proceedings of a colloquium. Washington, DC: The National Academies Press.
  • Nosek, B.A., and T.M. Errington. 2020. What is replication? PLoS Biology 18, no. 3: e3000691.
  • Nosek, B.A., J.R. Spies, and M. Motyl. 2012. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science 7, no. 6: 615–31.
  • Nosek, B.A., C.R. Ebersole, A.C. DeHaven, and D.T. Mellor. 2018. The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America 115, no. 11: 2600–6.
  • Nosek, B.A., T.E. Hardwicke, H. Moshontz, A. Allard, K.S. Corker, A. Dreber, F. Fidler., et al. 2022. Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology 73: 719–48.
  • Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science 349, no. 6251: aac4716.
  • Park, J.H., O. Venger, D.Y. Park, and L.N. Reid. 2015. Replication in advertising research, 1980–2012: A longitudinal analysis of leading advertising journals. Journal of Current Issues & Research in Advertising 36, no. 2: 115–35.
  • Pashler, H., and C.R. Harris. 2012. Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science 7, no. 6: 531–6.
  • Peer, E., L. Brandimarte, S. Samat, and A. Acquisti. 2017. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology 70, no. May: 153–63.
  • Pinker, S. 2021. Rationality: What it is, why it seems scarce, why it matters. London: Allen Lane.
  • Reid, L.N. 2014. Green grass, high cotton: Reflections on the evolution of the Journal of Advertising. Journal of Advertising 43, no. 4: 410–6.
  • Reid, L.N., L.C. Soley, and R.D. Winner. 1981. Replication in advertising research: 1977, 1978, 1979. Journal of Advertising 10, no. 1: 3–13.
  • Royne, M.B. 2018. Why we need more replication studies to keep empirical knowledge in check: How reliable is truth in advertising? Journal of Advertising Research 58, no. 1: 3–7.
  • Sarstedt, M., P. Bengart, A.M. Shaltoni, and S. Lehmann. 2018. The use of sampling methods in advertising research: A gap between theory and practice. International Journal of Advertising 37, no. 4: 650–63.
  • Sawyer, A.G., and J.P. Peter. 1983. The significance of statistical significance tests in marketing research. Journal of Marketing Research 20, no. 2: 122–33.
  • Schaller, M. 2016. The empirical benefits of conceptual rigor: Systematic articulation of conceptual hypotheses can reduce the risk of non-replicable results (and facilitate novel discoveries too). Journal of Experimental Social Psychology 66, no. September: 107–15.
  • Schmidt, F.L. 1992. What do data really mean? Research findings, meta-analysis, and cumulative knowledge in psychology. American Psychologist 47, no. 10: 1173–81.
  • Schultz, D.E., G. Kerr, and P. Kitchen. 2022. Replication and George the Galapagos tortoise. Journal of Marketing Communications 28, no. 3: 313–28.
  • Serra-Garcia, M., and U. Gneezy. 2021. Nonreplicable publications are cited more than replicable ones. Science Advances 7, no. May: 1–7.
  • Simmons, J.P., L.D. Nelson, and U. Simonsohn. 2011. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22, no. 11: 1359–66.
  • Simons, D.J. 2014. The value of direct replication. Perspectives on Psychological Science 9, no. 1: 76–80.
  • Smits, T., and I. Cuykx. 2017. Reflectie: Replicatie als wetenschapseducatie [Reflection: Replication as science education]. Tijdschrift voor Communicatiewetenschap 45, no. 2: 145–6.
  • Stroebe, W. 2016. Are most published social psychological findings false? Journal of Experimental Social Psychology 66, no. September: 134–44.
  • Stroebe, W., and F. Strack. 2014. The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science 9, no. 1: 59–71.
  • Varan, D., M. Nenycz-Thiel, R. Kennedy, and S. Bellman. 2020. The effects of commercial length on advertising impact: What short advertisements can and cannot deliver. Journal of Advertising Research 60, no. 1: 54–70.
  • Wacholder, S., S. Chanock, M. Garcia-Closas, L. El Ghormli, and N. Rothman. 2004. Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. Journal of the National Cancer Institute 96, no. 6: 434–42.
  • Wells, W.D. 2001. The perils of N = 1. Journal of Consumer Research 28, no. 3: 494–8.
  • Zarantonello, L., K. Jedidi, and B. Schmitt. 2013. Functional and experiential routes to persuasion: An analysis of advertising in emerging versus developed markets. International Journal of Research in Marketing 30, no. 1: 46–56.
  • Zhang, Y., and B.D. Gelb. 1996. Matching advertising appeals to culture: The influence of products’ use conditions. Journal of Advertising 25, no. 3: 29–46.