
A review of experimental research on organizational trust

Pages 102-139 | Received 23 Dec 2021, Accepted 11 May 2023, Published online: 13 Jun 2023

ABSTRACT

Trust profoundly shapes organisational, group, and dyadic outcomes. Reflecting its importance, a substantial and growing body of scholarship has investigated the topic of trust. Much of this work has used experiments to identify clear, causal relationships. However, in contrast to theoretical work that conceptualises trust as a multi-faceted (e.g. ability, benevolence, integrity), multi-level (e.g. interpersonal, intergroup), and dynamic construct, experimental scholarship investigating trust has largely investigated benevolence-based trust in dyadic relationships. As a result of the relatively limited set of paradigms experimental scholars have used to investigate trust, many questions related to different forms and types of trust remain un- and under-explored in experimental work. In this review, we take stock of the existing experimental trust scholarship and identify key gaps in our current understanding of trust. We call for future experimental work to investigate ability-based and integrity-based trust, to advance our understanding of the interplay between relationship history and trust, to study trust as a multi-level construct, to focus on the consequences of trust including the hazards of misplaced trust, and to study trust maintenance. To support these lines of inquiry, we introduce an ideal-typical process model to develop or adapt appropriate trust experiments.

Introduction

Trust profoundly shapes organisational, group, and dyadic outcomes (Barney & Hansen, Citation1994; de Jong & Elfring, Citation2010; Dirks & Ferrin, Citation2002). Reflecting its importance, organisational scholars have devoted substantial and sustained attention to studying trust (de Jong et al., Citation2017). These investigations have developed important theory and involved a variety of empirical methods (Lyon et al., Citation2012).

In this review, we focus on experimental investigations of trust in the organisational sciences. We introduce a comprehensive framework to synthesise existing scholarship using experimental methods to study trust and identify substantial gaps in our understanding of trust in organisational settings. We build on the substantial trust literature that has identified a number of important distinctions. For example, existing scholarship characterises trust as multi-faceted (e.g. affect- or cognition-based, McAllister, Citation1995), multi-level (e.g. interpersonal or interorganisational, Zaheer et al., Citation1998), cross-level (e.g. forming between individuals and collectives, McEvily et al., Citation2002), and dynamic (e.g. swift or based on a long shadow of the past, Meyerson et al., Citation1996). These distinctions afford greater precision in understanding what trust is, but they have also made the trust literature complicated and fragmented. This growing complexity reflects a maturing literature, and it calls for a deliberate effort to integrate extant findings.

In synthesising prior work, we identify both strengths of existing experimental investigations of trust in the organisational sciences along with relevant weaknesses and gaps. In particular, our review highlights a shortage of experimental work on the consequences of trust. Many investigations simply presume that trust has positive implications (Schilke et al., Citation2021), which is problematic given that trust not only has potential negative consequences (McAllister, Citation1997; Neal et al., Citation2016) but also creates opportunities for exploitation (McEvily et al., Citation2003; Schilke & Huang, Citation2018; Yip & Schweitzer, Citation2016). Experimental methods are particularly well-suited to identify causal relationships between trust and key outcomes in organisational research. This capacity is especially important to address heightened concerns about endogeneity in non-experimental designs that focus on performance outcomes as a dependent variable (Shaver, Citation1998).

Further, we raise concerns about the methodological fragmentation in trust research. We identify the most frequently used experimental procedures and describe the substantial diversity in the approaches scholars have used to study trust experimentally. We introduce a framework for contrasting different methods to develop programmatic research, conducive to both experimental replication and the cumulative progress of knowledge.

Finally, we disentangle assessments of trust perceptions, intentions, and behaviours and delineate how each of these constructs can be effectively measured in vignette, behavioural, and field experimental designs. Our work underscores the need to measure and manipulate trust in consistent ways to enhance the construct validity and replicability of trust research.

Surveys of trust scholarship

Although a substantial literature has used experimental methods to study trust, no recent review has integrated this body of scholarship (see Dirks & Ferrin, Citation2001; Kramer, Citation1999 for relevant reviews published more than 20 years ago). In the time since these papers were published, trust research employing experimental designs has developed both conceptually with discussions of useful trust conceptualizations and methodologically with the creation of promising new designs.

We focus our review on experimental investigations of trust, because – notwithstanding potential limitations of experimental designs – experimental studies enable us to identify clear, causal relationships and because experimental investigations of trust represent a large and growing body of scholarship. First, compared to other investigative methods, randomised experimental designs allow researchers to exercise greater control over potential confounding factors and to establish causality when studying both the antecedents and consequences of trust (Brewer, Citation1985; Shadish et al., Citation2002; Stone-Romero, Citation2011). Further, experimentation affords insight into the underlying mechanisms that contribute to trust formation and outcomes (Di Stefano & Gutierrez, Citation2019; Kramer, Citation1999; Spencer et al., Citation2005). Experimental games are also uniquely useful for their ability to capture trust as a behavioural (rather than exclusively attitudinal) phenomenon (Barrera, Citation2008). Finally, experiments provide empirical insight into phenomena that can be difficult to evaluate with other methodologies (Aviram, Citation2012), such as trust violation and repair.

Although it is clear that experimental methodology has made important contributions to our understanding of trust, our review identifies three limitations that have constrained this understanding. First, most experimental investigations focus on only a limited number of variables at a time (in contrast to surveys, for example, which allow researchers to capture a relatively large number of variables and integrate them into more complex research models). This practice is consistent with the principle of parsimony (Axelrod, Citation1997), but it can make it difficult for readers to evaluate how a study’s findings fit into the broader nomological network surrounding trust (de Jong et al., Citation2017). This can be problematic because important gaps as well as relevant interdependencies between different constructs may go unnoticed.

Second, methodological fragmentation characterises the experimental trust literature. Without common experimental methods, it is difficult to compare and integrate extant findings, especially when different experimental procedures yield conflicting results (e.g. Hill et al., Citation2009; Naquin & Paulson, Citation2003). As a result, it is unclear whether conflicting findings reflect weak relationships, moderated relationships, or artifacts of different methodological choices. Of course, methodological diversity also affords potential advantage. For example, when scholars find consistent results across paradigms, this scholarship provides compelling, convergent evidence (Lucas, Citation2003b; Lykken, Citation1968). Taken together, we call for scholars to make deliberate and informed choices when selecting experimental methods to investigate trust (LeBel et al., Citation2017).

Third, trust is a complex and multifaceted concept with related yet distinguishable dimensions of trustworthiness perceptions, trusting intentions, and trusting behaviours (Mayer et al., Citation1995). Explicating the nuanced differences among these approaches is beyond the scope of this review, but see McEvily and Tortoriello (Citation2011, pp. 38–40) for a related discussion. Our view is that each of these three facets of trust (perceptions, intentions, and behaviour) enhances our understanding of trust. In experimental work, scholars have measured trust using attitudinal self-reports or intentions as well as behavioural manifestations of trust. Trust is an inherently latent construct, and consistent with the logic of reflective measurement models, scholars have assumed that trust causes observable indicators. This perspective is supported by experimental work that has found that attitudinal and behavioural measures of trust tend to converge (e.g. Glaeser et al., Citation2000; McEvily et al., Citation2012; Schweitzer et al., Citation2006; Schweitzer et al., Citation2018). Still, we assert that scholars should account for the theoretical and potentially important practical distinctions between attitudes, intentions, and behaviours, which are often empirically conflated (Dietz & Den Hartog, Citation2006). We call for future scholars to avoid defining trust as an intention in the same article in which they measure trust as an action. That is, we call for experimental trust scholars to take seriously the challenge of construct-measurement correspondence.

We contribute to the trust literature by addressing each of the deficiencies outlined above and developing relevant guidelines and recommendations for how to avoid common pitfalls. First, we offer an integrative framework encompassing the constructs most commonly investigated in organisational trust experiments in order to integrate existing knowledge and identify gaps in the literature that may be fruitfully addressed through further experimentation. Second, we address concerns of methodological fragmentation in trust research by offering a systematic overview of the most frequently utilised experimental procedures. In this respect, our review can serve as a starting point from which researchers can identify established procedures and measures and access streamlined suggestions for designing new behavioural or vignette experiments whenever existing practices will not suffice. Finally, we assist in disentangling assessments of trust perceptions, intentions, and behaviours by delineating how each dimension can be effectively captured in behavioural, vignette, and field experimental designs.

Overview

Next, we introduce our methodological approach to the review and identify key themes in the extant trust literature. We then describe the most common experimental approaches in organisational trust research and highlight particularly noteworthy and innovative methods. In reviewing experimental measures, we offer advice for aligning experimental methods with research objectives. We conclude with a call for future experimental inquiry into trust.

Method

Sample

Our objective was to conduct a review of experimental methods in trust research within the field of organisational studies. Our sample consists of scholarly articles published through the end of 2020. We did not limit the starting date but note that experimental trust research became prominent in the late 1990s. To identify articles for inclusion in our analysis, we defined a relevant set of journals in which we conducted our search. Specifically, we chose to start our search in the eight core management journals according to the Texas A&M/University of Georgia Productivity Rankings,Footnote1 which are commonly considered both high-quality and largely representative of the organisational studies discipline. In addition, we included articles from the Journal of Trust Research, given the journal’s pertinent focus and because it has become one of the primary outlets for experimental trust scholarship in the organisational sciences. Finally, to ensure that we did not overlook important journals that publish experimental trust scholarship, we conducted a Web of Science search for articles containing ‘trust*’ and ‘experiment*’ in the abstract, restricting our search to the Management category. The top four journals that emerged from this search were Journal of Management Information Systems, Organizational Behavior and Human Decision Processes, Management Science, and Journal of Applied Psychology. As a result, we added the Journal of Management Information Systems and Management Science to our list. We acknowledge that, like any other sampling approach, a focus on specific journals may result in certain publications being overlooked. Nonetheless, we deemed such a focus necessary in order to strike an acceptable balance between tractability and comprehensiveness in our systematic coding of relevant work.

Using a variety of databases to scan each journal, we then searched for articles containing the terms ‘trust’ and ‘experiment’ anywhere in the text. In selecting articles to include in our sample, we read the abstract of each article in the search results (and the full text, when necessary) and included studies that met two criteria: articles must have (a) utilised an experimental design where some independent variable was manipulated and participants were randomly assigned to conditions, and (b) either manipulated or measured trust, trustworthiness, distrust, or a closely related construct (such as trusting intentions). Our systematic coding thus excludes quasi-experimental designs, where assignment to study condition is not random (Shadish & Luellen, Citation2005), but we will address such quasi-experimental designs in our discussion section. We covered experiments centred on trust in both individual and collective referents (for instance, in a team or a company, e.g. Nakayachi & Watabe, Citation2005; Schabram et al., Citation2018). This procedure yielded a total of 204 studies published across 119 individual articles.

Coding

We adopted an iterative coding scheme by which we initially recorded the trust definition used within the paper, any independent variables and method(s) of manipulation, any dependent variables and method(s) of measurement, general experimental procedures, and whether the experimental method was newly developed, adapted from earlier research, or replicated verbatim. After recording this information in a portion of our sample and observing initial trends (Strauss & Corbin, Citation1990), we expanded our coding scheme to include the trust referent (i.e. the receiver of trust, or trustee), the presence or absence of deception, and the maximum potential performance incentive that participants were told they could earn above and beyond any show-up payment. We additionally coded whether each study represented a vignette or a behavioural experiment. All studies coded as ‘vignette’ were comprised of tasks in which participants were asked to read a description of a hypothetical subject or situation (e.g. a CEO delivering unfortunate news to their employees) and answer survey items regarding their perceptions (Aguinis & Bradley, Citation2014; Mutz, Citation2011). Studies were coded as ‘behavioural’ if their design constituted participants’ engagement in a task in which they were required to take action and make decisions, often (albeit not always) in incentive-compatible ways, rather than merely indicating intentions in a hypothetical situation. Similar to vignette experiments, behavioural experiments may include survey measures following task completion, often capturing participants’ trust attitudes toward another actor in the experiment (e.g. McAllister, Citation1995). We made our coding sheet publicly available on the Open Science Framework at https://doi.org/10.17605/OSF.IO/8P9U3.
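For researchers who wish to adopt or extend a similar coding scheme, the following minimal sketch illustrates how one coded study record could be represented programmatically. It is our own illustration rather than the actual coding instrument used for this review; all field names and the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CodedStudy:
    """One experiment coded for a systematic review; all field names are illustrative."""
    article_id: str                      # e.g. a DOI or citation key
    trust_definition: Optional[str]      # definition cited in the paper, if any
    independent_vars: List[str]          # manipulated variables
    dependent_vars: List[str]            # measured variables, including trust measures
    design: str                          # "behavioural" or "vignette"
    trust_referent: str                  # e.g. "individual", "team", "organisation"
    uses_deception: bool
    max_incentive_usd: float             # maximum performance-based bonus; 0 if none
    paradigm_status: str                 # "new", "adapted", or "replicated"

# Example record with hypothetical values
example = CodedStudy(
    article_id="doi:10.0000/example",
    trust_definition="Mayer et al. (1995)",
    independent_vars=["violation type"],
    dependent_vars=["perceived trustworthiness"],
    design="vignette",
    trust_referent="individual",
    uses_deception=False,
    max_incentive_usd=0.0,
    paradigm_status="adapted",
)
```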

Findings

Summary observations

We start with descriptive observations of this literature. First, Figure 1 provides a graphical representation of time trends, based on the number of experimental trust articles published in each of our source journals over five-year blocks. The developments shown in this figure point to a substantial increase in the use of experimental methods in trust research over time. Most notably, Organizational Behavior and Human Decision Processes (OBHDP) is the most frequent publisher of this type of research, accounting for 47 out of the 119 total articles in our sample. The Journal of Applied Psychology is the second most frequent publisher of these articles, with a total of 18 articles in our observation period. The Academy of Management Review did not publish any experimental articles (as this journal only covers conceptual work).

Figure 1. Frequency of experimental trust publications by journal.

Behavioural experiments and vignettes

Of the 204 studies, 125 (61%) were behavioural experiments and 79 (39%) were vignettes. Many experimental trust investigations leverage the ability of behavioural tasks to personally invest participants in the study and capture trust perceptions and/or trusting behaviours, thus overcoming potential concerns that actors may often not follow through on their perceptions and intentions with corresponding behaviour (Baumeister et al., Citation2007). Nonetheless, vignettes also play a central role in trust experimentation. For instance, employing vignette tasks enables scholars to explicate the emotional and cognitive processes in play during scenarios in organisational settings that may be difficult to create under laboratory conditions.

Monetary incentives

In 77 of the 204 experiments in our sample, experimenters informed participants that they would receive additional compensation based upon the decisions they and/or their counterpart made in the experiment. In many cases, participants were indeed rewarded according to their and their counterpart’s decisions, but in other cases participants were paid predetermined or random bonus amounts. The maximum performance-based payment we observed was $300 to be distributed to the individual or team with the stock portfolio of highest value at the end of a three-week cooperative investment task (Wilson et al., Citation2006). The median performance-based monetary incentive (in the experiments that offered a bonus payment) was $6, not including show-up payments.

Financial incentives are likely to substantially motivate participants and focus their attention. Notably, the type of incentives may matter. For example, Brase (Citation2009) found that study incentive type (extra credit vs. flat show-up fee vs. flat show-up fee and performance-based payment) had a significant effect on task performance; individuals who received an additional performance-based payment achieved significantly higher performance than those in either of the other conditions. In addition, variations in the magnitude of performance-based incentives can change trusting behaviour (Parco et al., Citation2002). As a result, scholars should recognise that different incentive types and structures may not only change results but also represent meaningful theoretical contrasts. For example, by varying incentives, scholars can learn how extrinsic and intrinsic motives influence trust development (van der Werff et al., Citation2019).

More generally, incentives can often be an effective means to help increase both mundane realism (to the extent that trust is economically consequential in field settings) and experimental realism (to the extent that these incentives can make participants take the trust experiment more seriously) (on the distinction between mundane and experimental realism, see Aronson et al., Citation1990). Of course, not every experiment needs to use incentives, especially if it is designed to capture trust attitudes rather than behaviour and the experimenter can find other ways to create involvement, such as through engaging topics or video/virtual-reality stimuli, for example (Lonati et al., Citation2018).

Deception

We observed the use of deception in 61 of the 204 experiments in our sample (30%). A longstanding debate exists in the social sciences regarding the extent to which deception should or should not be employed in experiments. There are clear-cut differences in the acceptability of deception between the fields of sociology and psychology on the one hand and economics on the other. Sociologists and psychologists often view deception as a necessary tool, whereas experimental economists seek to avoid the use of deception entirely, to the extent that many economic laboratories ban the use of deception (Dickson, Citation2011). We take a middle-ground position similar to Cook and Yamagishi (Citation2008), who recommend that deception be employed only in situations where it would be impossible or highly impractical to do without it. While we cannot offer definitive answers to this debate, it is important for reviewers and readers to be aware (and be understanding) of disciplinary differences in the acceptability of deception. Further, those scholars who do decide to use deception should be aware of relevant institutional stances and guidance on the issue. For example, deception is addressed in the American Psychological Association’s Ethical Principles of Psychologists and Code of ConductFootnote2 and the American Sociological Association’s Code of Ethics,Footnote3 including a discussion of the conditions under which deception would be considered ethical and the measures researchers should take to mitigate potentially adverse effects of deception. Similarly, many institutional review boards follow specific procedures with respect to approving studies involving deception.Footnote4

Trust as an independent vs. dependent variable

Only 14 studies in our sample manipulated trust or trustworthiness as an independent variable. In contrast, 179 studies measured trust or closely related factors as a dependent measure.Footnote5 Studies in which trust is an independent variable are typically designed to understand the consequences of varying the perceived trustworthiness of an actor, whereas experiments that measure trust as a dependent variable focus on studying the antecedents of trust – that is, input factors which may result in increased or decreased (perceptions of) trust or trustworthiness.

In general, manipulating trust or trustworthiness is challenging. Researchers often cannot directly manipulate how individuals interpret an entity or situation; instead, they can vary observable characteristics and information, which in turn may influence participants’ perceptions. That is, studies that investigate trust as an independent variable typically manipulate closely associated proxies such as indicators of an actor’s benevolence or integrity. For instance, in an experimental design employed by Starke and Notz (Citation1981), participants take a pre-test for Machiavellianism several days prior to visiting the laboratory for a behavioural experiment. Upon receiving instructions from the experimenter, participants learn that they will be paired with another participant to engage in a joint bargaining task. Each participant is told that their partner ostensibly received either a high score on the test for Machiavellianism, indicating that they possess traits (e.g. being manipulative) that are often indicative of an untrustworthy individual, or a low score, indicating the possession of trustworthy characteristics. In order to ensure validity in these research situations where trust is manipulated through proxies, manipulation checks are critical to ascertain whether the manipulation had its intended effect (Podsakoff & Podsakoff, Citation2019).

As a second example, Ferrin and Dirks (Citation2003) also manipulated trust. In their investigation, these authors conducted an experiment in which participants first engage in a joint problem solving task (in which they rate the usefulness of certain items in a survival situation, where each participant has half the necessary information for task completion). After this first task, ‘initial trust’ is manipulated by delivering information about their partner’s ostensible performance during the task and the extent to which they shared necessary information (either sharing all necessary information and performing well or sharing little relevant information and shirking responsibility).

In behavioural experiments, researchers must convince participants that another individual or entity has acted in a way that is either trustworthy or untrustworthy. In contrast, scholars can manipulate trust more directly in vignettes. For example, Tetlock et al. (Citation2013) conducted an experiment in which participants read one of four descriptions of a single firm. In the low-trustworthiness condition, participants are told that employees at this firm tend to work as infrequently as possible to earn their wage, whereas participants in the high-trustworthiness condition are told that employees at this firm work diligently and take great care in their work.

As previously noted, our review of the literature revealed that it has been much more common to measure trust as a dependent variable rather than manipulate trust as an independent variable. The most common ways in which trust or perceived trustworthiness tend to be measured are through either an attitudinal or a behavioural approach (or in some experiments, both). Attitudinal trust is most often assessed using survey measures, whereas behavioural trust is typically measured in terms of the extent to which an action or decision in a behavioural experiment requires participants to assume risk at the hands of another actor (consistent with Mayer et al.’s (Citation1995) definition of trust). Of course, not every risk-taking behaviour is a good representation of trust, just as perceptual measures differ in their construct validity. For example, a frequently used measure of attitudinal trust is Mayer and Davis's (Citation1999) trust scale. Though the original scale was developed to study employees’ trustworthiness perceptions of their organisation’s top management (e.g. ‘top management is very concerned about my welfare,’ ‘most people can be counted on to do what they say they will do’), these items are often adapted to assess the trustworthiness of another actor in general. We refer to McEvily and Tortoriello (Citation2011) for a more in-depth discussion of different survey measures of trust. To measure behavioural trust, many studies have participants risk their own money in the trust or investment game (Berg et al., Citation1995), which we will summarise below.

Definitional convergence

A total of 54 of the 119 papers in our sample (45%) referenced either Mayer et al. (Citation1995) or Rousseau et al. (Citation1998) in their definition of trust, with 12 of these articles referencing both seminal definitions. Of the remaining 65 articles that do not cite either of these definitions, 24 reference another definition or briefly state their own without citation, and the remaining 41 offer no explicit trust definition whatsoever. It appears that there is a fair degree of definitional convergence amongst researchers who included an explicit definition (54 of 78, roughly 69%). Nonetheless, it is striking that a considerable number of publications do not include any clarification of their conceptualisation, especially as the meaning of trust may in principle vary substantially. For instance, beyond Mayer et al.’s (Citation1995) or Rousseau et al.’s (Citation1998) conceptualisation of trust in relational terms, the term trust can have a different meaning when researchers study it in its generalised form (i.e. propensity to trust) and understand it as ‘a belief in the overall benevolence of human nature’ (Yao et al., Citation2017, p. 86). Relational and generalised trust differ in fundamental ways (Schilke et al., Citation2021), and both can be studied experimentally (e.g. Cao & Galinsky, Citation2020), making it critical for researchers to explicitly state the type of trust they are investigating. In sum, researchers cannot ensure conceptual clarity without offering a specific trust definition.

In addition to calling for greater conceptual coherence across articles, we also call for greater conceptual coherence within articles. It is essential that definitions of trust align with experimental methods. As David Schoorman explained at the Nebraska Symposium on Motivation in 2014, ‘you may decide to use a different definition, but once you subscribe to their definition, you have to live up to it in your methods’ (Lyon et al., Citation2015, p. 3). Different trust conceptualizations imply different guidelines and boundaries for potential operationalisation. An explicit statement of how, specifically, trust has been conceptualised must serve as the starting point for ensuring construct validity in experimental research.

Degree of replication and prominent experimental designs

Our analysis of the literature shows that trust research employing experimental methods is characterised by considerable methodological fragmentation. Based on the frequency of original experimentation, adaptation, and replication of existing procedures, we discovered that 86 individual studies used a newly created paradigm (42.2%), 107 studies adapted (i.e. changed to fit research context) an existing experimental paradigm (52.4%), and only 11 studies replicated an existing design verbatim (5.4%). This diversity reflects both the conceptual richness and complexity of trust and the lack of coherence in the trust literature.

In Table 1, we list the original paradigms that were used (i.e. adapted or replicated) at least twice in our sample. Please see the Appendix for a detailed summary of the experimental procedures, strengths, and limitations of the ten most frequently used experimental designs in our sample.

Table 1. Summary of most frequently used experimental designs.

The most frequently used experimental paradigm, by far, is that of the trust or investment game (Berg et al., Citation1995). In this game, two participants are matched and assigned to the role of either Player A or B. Player A receives a starting allotment (e.g. $10) and is asked how much of their starting allotment they would like to send to Player B. This amount is then typically tripled, and Player B must choose what amount to return to Player A, ranging from $0 to the tripled amount. The amount that Player A chooses to send is typically used as a measure of behavioural trust in the partner. Common variations include a binary version of the game where Player A can only keep or send all the money (e.g. Schilke & Huang, Citation2018), an online version (e.g. Piff et al., Citation2010), and an extension to multiple rounds with the same pairings (Bottom et al., Citation2002; Lount et al., Citation2008; Schweitzer et al., Citation2006). The trust game captures benevolence-based trust; passing money to the trustee reflects a belief that the trustee will act with positive intentions toward the trustor (Levine & Schweitzer, Citation2015). A key strength of the trust game is that it closely reflects the four key parameters of trust originally proposed by Coleman (Citation1990): (1) the decision to trust is voluntary, (2) a time lag exists between the trust and the trustworthiness decision, (3) the trustee can abuse or honour the trustor’s trust only if the trustor does indeed exhibit trust, and (4) if the trustee (fully) abuses the demonstrated trust, the trustor will be in a worse position than if no trust had been shown, producing vulnerability (Alós-Ferrer & Farolfi, Citation2019).
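To make the game’s payoff logic concrete, the following sketch implements a single round of the continuous trust game as described above (a $10 endowment and a tripling multiplier). The function and variable names are ours, and a real implementation would add participant matching, anonymity, and payment procedures.

```python
def trust_game_round(amount_sent: float, amount_returned: float,
                     endowment: float = 10.0, multiplier: float = 3.0):
    """Payoffs for one round of a Berg et al. (1995)-style trust game.

    amount_sent: Player A's (trustor's) transfer, the behavioural trust measure.
    amount_returned: Player B's (trustee's) back-transfer out of the multiplied amount.
    """
    assert 0 <= amount_sent <= endowment
    multiplied = amount_sent * multiplier
    assert 0 <= amount_returned <= multiplied

    payoff_trustor = endowment - amount_sent + amount_returned
    payoff_trustee = multiplied - amount_returned
    return payoff_trustor, payoff_trustee

# Example: A sends the full endowment (maximal behavioural trust); B returns half of $30
print(trust_game_round(amount_sent=10.0, amount_returned=15.0))  # -> (15.0, 15.0)
# If B returns nothing, A ends up worse off than by not trusting, illustrating vulnerability
print(trust_game_round(amount_sent=10.0, amount_returned=0.0))   # -> (0.0, 30.0)
```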

Though scholars have closely linked attitudinal measures of trust with passing behaviour in the trust game (see Schweitzer et al., Citation2006), trust game behaviour is a somewhat limited measure of trust. First, it only reflects benevolence-based trust (Levine & Schweitzer, Citation2015). Second, behavioural trust in the trust game may conflate trust with other constructs, such as reciprocity, altruism (Cox, Citation2004), or betrayal aversion (Bohnet et al., Citation2008). Trust game behaviour is also sensitive to even subtle changes in implementation (Johnson & Mislin, Citation2011). As a result, there has been ample discussion and analysis of the trust game and its different variants (Brülhart & Usunier, Citation2012; Johnson & Mislin, Citation2011; Tzieropoulos, Citation2013; Yamagishi et al., Citation2013).

To address some of these limitations, Levine and Schweitzer (Citation2015) introduced the rely-or-verify game to measure integrity-based trust. In this game, the Red Player (the trustee) has perfect information regarding the amount of money in a jar, and they make a claim to their counterpart whether the sum of the coins is odd or even. The Blue Player (the trustor) can choose to rely upon this claim or to verify its veracity at a cost. The payoff schedule is designed such that the Red Player benefits most from telling a lie and having it relied upon, while the Blue Player benefits most from relying on a truthful claim. Mayer et al. (Citation1995) noted that ‘the relationship between integrity and trust involves the trustor’s perception that the trustee adheres to a set of principles that the trustor finds acceptable’ (p. 719). The rely-or-verify game is particularly well-suited for capturing integrity-based trust because the Blue Player’s choice reflects their determination of whether their counterpart will tell a lie and breach principles that the Red Player would view as acceptable or tell the truth and uphold them. Of course, the rely-or-verify game, like most other behavioural games, lacks contextual content, focuses on short-term relationships, and involves relatively low stakes. We thus advocate for complementing behavioural experiments with other methodological approaches, including vignette, survey, or field studies to develop a fuller understanding of trust.
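To illustrate the incentive structure described for the rely-or-verify game, the sketch below uses placeholder payoffs that we chose only to respect the orderings above (a lie that is relied upon pays the Red Player the most; relying on a truthful claim pays the Blue Player the most; verification is costly). The published parameters may differ.

```python
# Hypothetical payoff table for one round of a rely-or-verify style game.
# Keys: (red_action, blue_action); values: (red_payoff, blue_payoff).
PAYOFFS = {
    ("truth", "rely"):   (8, 10),   # relying on a truthful claim is best for Blue
    ("truth", "verify"): (8, 7),    # Blue pays a verification cost
    ("lie",   "rely"):   (12, 2),   # successful deception is best for Red, worst for Blue
    ("lie",   "verify"): (2, 7),    # the lie is detected; Blue still pays the cost
}

def rely_or_verify_round(red_action: str, blue_action: str):
    """Return (trustee_payoff, trustor_payoff); Blue's choice to rely indicates integrity-based trust."""
    return PAYOFFS[(red_action, blue_action)]

print(rely_or_verify_round("lie", "rely"))  # -> (12, 2)
```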

For instance, Kim et al. (Citation2004) developed a series of noteworthy trust vignettes which they continued to adapt in 2006 and 2013. In the original experiment, participants are asked to watch video footage of interviews with potential new hires and read their transcripts. These materials describe an interviewee whose references stated that they were involved with an accounting violation at their previous workplace. Depending upon condition, this violation reflects either a lack of ability or a lack of integrity. The interviewee’s response to this claim also varies across conditions, such that they either apologise or deny responsibility for the violation. Participants are asked to rate the perceived ability and integrity of the individual and indicate whether they would hire them, which reflects a behavioural intention measure of trust. In this way, Kim and colleagues have designed a vignette experiment and template to study how different violation responses may be particularly appropriate or inappropriate to assist in rebuilding trust, given context. Incorporating two of the bases for trust (i.e. ability and integrity) from the Mayer et al. (Citation1995) model allowed Kim and colleagues to further explicate how these bases contribute to perceptions of trustworthiness as a whole and how a perceived lack of one or the other may be particularly damaging across different situations.

Finally, Aven et al.’s (Citation2019) study incorporates a particularly creative manipulation and is designed to address a longstanding difficulty in experimental research. As scholars have noted, trust is a dynamic and history-dependent process (Blau, Citation1964). While the individual appraisals of behaviours and interpersonal interactions can be noted immediately after experiencing an event, these appraisals gather and manifest in individual cognitions over time, and subsequent behaviours and events may contradict previously formed beliefs about others. Individuals with a long history of interacting with each other have the privilege (or misfortune) of observing many actions of another actor, enabling them to construct a well-informed idea of the extent to which they can trust this actor. Within the confines of an experiment, it is tricky to manipulate a rich history or relationship.

To investigate how existing relationships influence trust, Aven et al. (Citation2019) utilised a sampling technique in which participants are asked to bring someone to the experiment site with whom they shared a relationship of either fewer than three years (considered ‘weak tenure’), three to five years (‘moderate tenure’), or more than five years (‘strong tenure’). Additionally, participants who are randomly assigned to the ‘stranger’ control condition are matched with a participant unknown to them. Participants are then assigned to act as either a banker or an auditor in an audit simulation. Bankers are instructed to prepare three financial statements for a hypothetical firm which either over-reported earnings or reported them accurately. In the over-report condition, bankers are monetarily incentivized to over-report without being caught, while auditors are incentivized to catch any errors. In the control condition, dyad partners are incentivized to achieve the same goal of reporting and auditing accurately. The authors were primarily interested in understanding the connection between relationship strength and monitoring practices as mediated by trust. The sampling method used in the Aven et al. (Citation2019) study may sacrifice some of the causality associated with truly random samplingFootnote6 but can nonetheless make a significant contribution to our understanding of the consequences of long-standing pre-existing relationships that would be virtually impossible to create within the confines of an experimental study.

Investigating the interplay between relationship history and trust represents an important direction for future scholarship, but several scholars have advanced our understanding of relationships and trust within cleverly designed experiments. For example, Wilson et al. (Citation2006) designed an experiment in which participants meet three times per week over the course of three weeks (either online or face-to-face) to make stock purchasing decisions. Participants are incentivized to coordinate their individual decisions – if each team member selects to purchase the same stock, the team is granted an additional share of this stock which adds to the final value of their portfolio. Although this experiment does not yield insight into the influence of prior familiarity between participants, longitudinal designs of this nature allow for investigation into the effects of prior social interactions between participants on later coordination and trust decisions (also see Schilke et al., Citation2013).

Selected substantive findings

In Figure 2, we depict the most commonly investigated antecedents and consequences of trust in our study sample. We used a threshold of six or more studies to determine inclusion of determinants in this graphic and a threshold of two studies to determine inclusion of consequences (as relatively few studies in our sample investigated trust outcomes).

Figure 2. Most investigated determinants and consequences of trust in experimental research.

Figure 2 conveys only a portion of the rich and growing literature investigating the antecedents and consequences of trust, and we discuss only some of this work here. First, the role of contracts in the development and maintenance of trust is an often-studied yet complex topic (Lumineau, Citation2017), and the observed effects often depend on the type of contract under investigation (Schilke & Lumineau, Citation2018). For instance, promotion contracts which highlight positive behaviour may foster trusting intentions at a higher rate than prevention contracts which specify the absence of negative behaviour (Weber & Bauman, Citation2019). Further, Harmon et al. (Citation2015) found that a letter violation (failure to fulfil a documented expectation expressed in the contract) results in greater loss in trust than a spirit violation (failure to fulfil an undocumented but tacitly agreed upon expectation).

Emotions represent another key antecedent of trust (Dunn & Schweitzer, Citation2005), and specific emotions differently influence trust. For instance, Gino and Schweitzer (Citation2008) showed that incidental gratitude leads people to become more trusting, whereas incidental anger harms trust.

When individuals have had some interpersonal contact with one another – e.g. through simple conversation beyond the context of whatever joint task they will engage in – they tend to trust each other more once the task begins. Research has also shown effects of communication medium (face-to-face vs. online interaction) on trust development (Naquin & Paulson, Citation2003). When engaging either face-to-face or over the phone, individuals can interpret verbal cues which would otherwise not be present in online interactions to assist in determining the extent to which their partner can be trusted. The presence of these verbal cues appears to allow individuals to engage in other-focused perspective taking, which ultimately contributes to the ability to make more accurate trust judgments (Schilke & Huang, Citation2018). Interestingly, Wilson et al. (Citation2006) found that while trust between individuals in computer-mediated groups started at a lower point than trust between individuals that met face-to-face (resulting from the relative lack of available social context cues for group members to interpret), the trust levels between these group types became roughly equivalent over time as online groups gradually exchanged social information.

A variety of leader characteristics have been shown to affect subordinate trust in the leader. For instance, leader prototypicality has been shown to result in greater subordinate trust (Giessner & van Knippenberg, Citation2008). Similarly, leader vision, clear vision implementation, and charismatic communication style are also drivers of subordinate trust (Kirkpatrick & Locke, Citation1996), alongside consideration of subordinates’ inputs (Korsgaard et al., Citation1995).

However, individuals who feel that they are in a position of power (especially if that power is unstable) tend to be less trusting of others (Mooijman et al., Citation2019; Schilke et al., Citation2015). In general, individuals are more trusting of supervisors, arbitrators, and others in positions of power, especially if these more powerful individuals are transparent and consistent in regard to procedural justice. For example, Johnson and Lord (Citation2010) found that participants trusted an experimenter more when they distributed their scores and compensation in a just manner (via an indirect effect through increased sense of self-identity).

Violation type refers to the dimension of trust (integrity, benevolence, or ability) that has been breached, ultimately damaging trust in an interpersonal relationship. Violation response refers to the way an actor who has committed a breach of trust deals with this breach, often by apologising or denying responsibility or even the existence of the breach. For instance, Kim et al. (Citation2006) found that after violations of integrity, trust repair was more successful following an apology placing some blame on external factors. In contrast, after an ability-based violation, trust repair was more successful if the individual in violation offered an apology with an internal focus (taking full responsibility). Therefore, the efficacy of a violation response appears to depend on the nature of the violation.

Even though investigations of trust’s consequences have been rare, the variety of consequences examined is noteworthy. For instance, Welsh and Navarro (Citation2012) found that individuals are more likely to incorporate base rate information (i.e. prior probabilities available before additional information is provided) from a trustworthy source than from a less trustworthy source.

We observed multiple studies in our sample that investigated the role of initial trust between individuals engaging in dyads or groups as a driver of cooperative behaviours. For example, van Dijke et al. (Citation2018) found that group members contribute more to a common resource pool if they trust the group member with the greatest authority. Starke and Notz (Citation1981) investigated whether initial trust between two negotiators has an effect on cooperation or negotiation outcomes but found no significant effects of trust.

Findings from our sample also suggest that trust influences governance and contracting preferences. Mellewigt et al. (Citation2017) found that individuals in business relationships featuring high partner-specific trust tend to prefer alliance over acquisition in order to access their partner’s vital resources. In addition, individuals tend to prefer a lesser degree of contract specificity within an existing business relationship if they have reason to believe this business tie exhibits in-context trustworthiness (referring to the extent to which the business tie can be expected to make good on arrangements in this particular context, rather than generally; Connelly et al., Citation2012).

We also observed some investigation of the effect of trust on joint task performance. Ferrin and Dirks (Citation2003) sought to determine to what extent initial trust in an unfamiliar partner would affect subsequent joint performance but found no effect of the initial trust condition (high vs. low trust in partner) on successful completion of a joint task which required information sharing. However, Meier et al. (Citation2019) discovered that groups of three individuals with a high degree of initial trust tended to complete an online block-clicking task to a greater extent than groups with a low degree of initial trust, perhaps because individuals in these groups expended relatively less effort to monitor the actions of their groupmates.

The relationship between trust and perceived conflict has also received some attention. Individuals report a lower degree of perceived conflict when they trust each other (Huang et al., Citation2015) and when they both trust a third-party mediator (Ross & Wieland, Citation1996). In fact, individuals who initially perceive their groupmates as trustworthy feel that the group is closer and more cohesive (Zand, Citation1972).

Taken together, a growing literature has expanded our understanding of the determinants and consequences of trust. This experimental work has leveraged the ability of experiments to identify causality and capture crucial theoretical processes that would be difficult to measure with other methodologies.

Discussion

Choosing and applying an existing method or starting fresh?

Though many experimentalists have converged on the use of the trust game, several alternatives exist. In this section, we offer guidance for scholars seeking to study trust experimentally. Specifically, we offer suggestions for scholars to either use an existing experimental paradigm or to develop a new one. These suggestions build on and synthesise related discussions of methodological best practices (e.g. Bolinger et al., Citation2022; Lonati et al., Citation2018; Stone-Romero, Citation2011), with a specific emphasis on trust research. We summarise our recommendations in Figure 3.

Figure 3. Guidelines for developing or adapting an appropriate trust experiment.

First, scholars should clearly conceptualise and define trust with respect to their theoretical framework. This conceptualisation should then guide their operationalisation (rather than vice versa). Deciding whether to employ a behavioural experiment, a field experiment, or a vignette is the next step. Behavioural experiments afford the possibility of capturing action rather than perceptions and intentions (Reypens & Levine, Citation2017), but vignettes can serve as powerful instruments in situations where constructing a behavioural experiment to study a hypothesised relationship is impractical (Aguinis & Bradley, Citation2014; Cavanaugh & Fritzsche, Citation1985; Finch, Citation1987; Wallander, Citation2009). For instance, when a primary research goal is to examine concrete organisational settings, rather than build general theory (Bitektine et al., Citation2018), researchers may opt for either a field experiment or a vignette study. Field experiments enable scholars to study trust in situ but lack the flexibility and experimental control of vignette studies. A key concern of vignette studies, however, is that participants may struggle to place themselves in the context of the study. Most notably, participants cannot be realistically asked to imagine that they hold a role that is well beyond their realm of expertise. For example, it is unrealistic to expect undergraduate students or Mturk workers to imagine themselves as the CEO of a Fortune 500 company.

We also caution that vignettes can only capture behavioural intentions rather than actual trusting behaviours. It is always possible that participants may misreport how they would actually act out of social desirability concerns (Baumeister et al., Citation2007). For instance, participants may be reluctant to report the intention to engage in a non-trustworthy behaviour, but when faced with incentives, individuals may succumb to the temptation to act differently from how they report they would hypothetically (Ajzen et al., Citation2004). In general, behavioural experiments afford a key advantage over vignette studies: they capture behaviour rather than behavioural intentions. However, constructing behavioural tasks and manipulations that are both clear and consistent with the underlying theoretical constructs and framework often represents a challenge.

Similar to criticisms of economic experiments (Dickson, Citation2011), behavioural trust experiments often have a high level of abstraction. That is, behavioural trust experiments are often designed to be intentionally abstract. As a result, these experiments may be well-suited to explicating trust’s role as a social mechanism and studying how trust is established, breached, or restored between two or more individuals, but findings from these studies may not directly extend to concrete organisational settings, such as how supervisors and subordinates would actually act.

A particularly promising approach to study how manipulating a variable affects trusting behaviour within an organisational context involves conducting a field experiment (Chatterji et al., Citation2016; Eden, Citation2017). Field experiments, however, are difficult to run, and our sample included only three field experiments (Earley, Citation1988; Korsgaard et al., Citation1998; Rose et al., Citation2021).

For an example of a field experiment designed to study trust outcomes, consider Baldassarri (Citation2015), despite the fact that this paper did not meet the inclusion criteria for our review as it was published in the American Journal of Sociology. Her lab-in-the-field design supplements field interviews and archival data with laboratory-style experiments conducted in the field setting – farmer associations in rural Uganda. In this design, an initial survey determining the nature of the sample’s social links and networks is followed by participants engaging in multiple versions of the dictator game with either strangers or individuals with whom they were familiar in order to examine cooperation dynamics across multiple rounds, especially under the threat of potential sanctioning. In Baldassarri’s (Citation2015) study, individuals contribute much more to a familiar individual from their village than to unknown individuals, and significantly more still if the other actor was another farmer from the same producer organisation. After running multiple variations and instances of these games, Baldassarri (Citation2015) concluded that general altruistic behaviour, group solidarity, and reciprocity arising through communication served as mechanisms which contributed to the farmers’ trusting behaviour. By first interviewing these farmers and investigating existing relationships between individuals before conducting behavioural games, Baldassarri (Citation2015) could leverage existing relationships between participants to construct an independent variable in order to study the mechanisms (including trust) which contribute to long-term trends of cooperation between actors with relationship tenure.

Getting an organisation to allow researchers to perform field experiments that could plausibly affect productivity or work relationships is no easy task. Even if access is granted, constructing and employing a field experiment on trust in an ethical manner and also being able to control relevant extraneous factors is clearly challenging (Bitektine et al., Citation2022). Nonetheless, studying trust through field experiments provides a unique opportunity to test whether theoretical predictions hold in natural conditions. Chatterji et al. (Citation2016) have argued in favour of increased field experimentation to investigate questions in the strategy literature, and this logic also applies to experimental trust scholarship. Further field experimentation may resolve existing questions regarding the extent to which causal attributions regarding the antecedents and consequences of trust can be extended from lab findings to organisational contexts. Field experiments also provide the benefit of internal validity stemming from the ability to vary individual factors regarding treatments. However, Chatterji et al. (Citation2016) note that background factors inherent to the field environment may interact with the treatment, obfuscating results and the relationships which researchers seek to examine.

Once a general type of experiment has been selected that suits the needs of the research context, one should carefully review previously conducted experiments to ascertain whether an existing method can be used or extended for these purposes. We hope that Table 1 helps to inform researchers’ decisions about which experimental paradigms to use.

Suggestions for developing a new behavioural or vignette experiment

In crafting a new behavioural paradigm to study trust, researchers should attend to the core elements identified by Cook and Cooper (Citation2003): actors’ potential underlying motivations, incentive structures in games and resulting strategies, and social context factors. First, actors’ motivations refer to underlying assumptions which can lead to individually predetermined intentions regarding behaviour (and in this case, how these assumptions may lead to behaviours which influence games, potentially regardless of experimenters’ manipulations). Cook and Cooper (Citation2003) note that general motives which may influence behaviours in games include the assumption of egoism, altruism, competition, or cooperation (McClintock, Citation1972; Yamagishi & Yamagishi, Citation1994). These potential social orientations may lead to a tendency for participants to interpret incentive structures in particular ways and act in accordance with these underlying assumptions, rather than acting mainly on perceptions of trust regarding another actor.

Second, regardless of intrinsic social orientations which may influence perceptions generally, the nature of built-in payoff structures may prompt participants to engage in behaviours they may not otherwise consider under different conditions for incentivization. For instance, if Player A in the trust game is given an unusually large sum at the start of the game ($500, for example), this may prompt risk aversive behaviours.

Third, social factors, such as the social comparisons participants make with each other, can profoundly influence trust (Dunn et al., Citation2012). We suggest that scholars interested in studying social features build on existing paradigms and the growing literature that has advanced our understanding of social factors in trust. For example, scholars should take care to establish a payoff structure which does not motivate individuals to act in ways that will obfuscate potential trusting behaviours or intentions. Once individual social orientations are accounted for (if possible), a reasonable baseline for social context should be established and held constant for all participants (e.g. dyadic interaction featuring online communication with the game or task being performed for a single instance or round). Once these core factors have been accounted for with the intention of application across the entire sample, individual social context factors may be varied across groups with the goal of manipulating an independent variable or a set of independent variables.

As for vignette experiments, we suggest researchers consult and follow the series of guidelines offered by Aguinis and Bradley (Citation2014), many of which can be directly applied to trust research. First, researchers need to choose a specific type of vignette. Aguinis and Bradley (Citation2014) differentiate so-called paper people studies from policy capturing and conjoint analysis studies. In brief, paper people studies involve gauging individuals’ explicit responses to specific scenarios or subjects and are typically utilised to investigate explicit processes and attitudes of which participants are reasonably aware. Policy capturing and conjoint analysis studies, on the other hand, are primarily designed to assess implicit cognitive mechanisms that contribute to participants’ decision processes.

One of the next decisions is choosing among between-subjects, within-subjects, and mixed designs. In between-subjects vignette experiments, each participant reads a single vignette depending upon their treatment group, and comparisons are drawn across participants. Within-subjects designs require participants to read a set of vignettes, and comparisons are drawn across vignettes within the same individual. In mixed designs, participants within a group read the same set of vignettes, but different groups are given different sets. Although Aguinis and Bradley (Citation2014) caution against using a between-subjects design in many scenario experiments, this approach has been usefully employed in researching trust (e.g. Baer et al., Citation2018; Kim et al., Citation2004).
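As a minimal sketch of the three allocation logics (the vignette labels and group names below are made up for illustration), the designs differ only in how vignettes are assigned to participants:

```python
import random

# Hypothetical vignette conditions for a trust-repair study
vignettes = ["apology", "denial", "no_response"]

def between_subjects(participants):
    # Each participant reads exactly one vignette; comparisons are drawn across people.
    return {p: random.choice(vignettes) for p in participants}

def within_subjects(participants):
    # Each participant reads every vignette in a random order; comparisons are drawn within people.
    return {p: random.sample(vignettes, k=len(vignettes)) for p in participants}

def mixed_design():
    # Participants within a group share the same vignette set, but sets differ across groups.
    return {"group_A": ["apology", "denial"], "group_B": ["apology", "no_response"]}

print(between_subjects(["p1", "p2", "p3"]))
print(within_subjects(["p1", "p2"]))
print(mixed_design())
```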

Next, researchers should decide whether to employ any technology beyond written text to help participants become immersed in the vignette task at hand. Incorporating video or audio recordings for participants to view and react to alongside a baseline text description of the scenario may help participants feel more engaged in the situation (Lucas, Citation2003a). For example, returning to Kim and colleagues’ series of trust vignettes involving a hiring scenario (Kim et al., Citation2004; Kim et al., Citation2006; Kim et al., Citation2013), these vignette experiments include both a written transcript of the account and video footage of the interview for participants to engage with. Incorporating elements which contribute to participant immersion in this way may curb the most common and worrying criticism of vignette studies: the fear that participants may not have taken the task seriously.

The final steps in designing a vignette experiment involve selecting the number of independent variables to manipulate and the number of levels for each variable. Aguinis and Bradley (Citation2014) suggest either an attribute-driven design or an ‘actual derived cases’ approach for this final preparatory step. In an attribute-driven design, experimenters select independent variables that are orthogonal to one another. A potential downside of this approach is that crossing too many orthogonal variables in a single vignette experiment may produce scenarios that are unrealistic. In an ‘actual derived cases’ approach, experimenters instead construct scenarios based on values that are plausible in actual organisational settings.
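For illustration (a hypothetical set of attributes of our own choosing, not an example from Aguinis and Bradley), fully crossing orthogonal attributes in an attribute-driven design quickly multiplies the number of scenarios, some of which may strain realism:

```python
from itertools import product

# Hypothetical, independently manipulated vignette attributes for a trust study
attributes = {
    "violation_type": ["competence", "integrity"],
    "response": ["apology", "denial"],
    "relationship_length": ["new hire", "ten-year colleague"],
}

scenarios = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(scenarios))  # 2 x 2 x 2 = 8 fully crossed vignettes
print(scenarios[0])
# Adding two more binary attributes would already yield 32 scenarios, which is why
# an 'actual derived cases' approach instead anchors attribute values in combinations
# observed in real organisational settings.
```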

Once a behavioural task or a vignette is ready for application, another critical question pertains to the type of study participants. There is an ongoing debate regarding the extent to which data obtained from student participants generalise to wider populations (Hanel & Vione, Citation2016). In many cases, both in behavioural and vignette experiments, samples are largely comprised of students (Falk & Heckman, Citation2009), and we note that there is mounting evidence that students’ responses are often generalisable to other populations (Fréchette, Citation2015).

Limitations of our review

A notable limitation of this review is its scope, namely its focus on articles published in a restricted list of eleven journals. Although we took care to evaluate all applicable research within the focal outlets of our analysis, promising and effective experimental trust research exists in other journals, both inside and outside organisational studies, particularly in economics, sociology, and psychology. Our paper has therefore certainly not captured all promising experimental research on trust and can essentially speak only to research published in these eleven journals, which is why we urge readers to also examine work in other fields before developing new experiments. In particular, our primary focus on what are often considered top journals may have produced a sample with a bias toward novelty over replication, which may in part explain our critical assessment of the state of replication in the experimental trust literature. Of course, the sample of articles covered in this review may also be biased in a number of other ways that cannot be easily identified, which is why we welcome further reviews of experimental trust research that use other sampling approaches. Future reviews may also focus on subfields of trust, such as trust recovery.

More broadly, our review only addresses trust research employing experimental methodology, which is of course only one of several methods in the trust researcher’s toolkit (see Lyon et al., Citation2012 for an overview). No doubt, we not only need additional research using experiments (as discussed in the next section) but also investigations employing a wide variety of other empirical techniques (Falk & Heckman, Citation2009). For instance, even though they were not included in our systematic review, we see substantial value in quasi-experimental designs (see Grant & Wall, Citation2009 for a comprehensive discussion). Random assignment is a key strength of experimental methods (Podsakoff & Podsakoff, Citation2019), but it may at times be impractical and/or unethical, making quasi-experiments a useful alternative (Bitektine et al., Citation2022; Stone-Romero, Citation2011). Further, compared to true experiments, quasi-experiments can make it more feasible to access the population to which the study strives to generalise and to conduct longitudinal research over longer time periods. It is therefore not surprising that quasi-experiments have an important place in trust research, as exemplified by the seminal study by Mayer and Davis (Citation1999).

Call for future research

In conducting this review, we sought to survey existing scholarship regarding common features and trends in experimental research on trust within organisational studies. Our analyses reveal a reasonable degree of definitional convergence within this literature. However, they also point to a lack of paradigmatic convergence (beyond the trust game), and we hope this review will serve as a basis for making an informed choice among available designs.

As scholars use experimental methods to advance our understanding of trust, we call for additional research in several specific areas. First, we call for the use of experimental paradigms beyond the trust game. While the trust game represents the dominant experimental paradigm to measure trust, it may suffer from potential confounds, and it only measures benevolence-based trust. We call for future work to expand our understanding of integrity-based trust by using the rely-or-verify game (Levine & Schweitzer, Citation2015), and we call for future work to develop paradigms to assess ability-based trust (see Reimann et al., Citation2022 for a possible starting point).

Second, we call for future work to expand our understanding of the influence of different organisational settings on trust. Vignette-based scholarship represents the most popular method to advance our understanding of organisational factors, but we also call for creative experimental methods to study their influence on trusting behaviour, both in the lab and the field.

Third, we call for future experimental studies to investigate the interplay between trust and relationships. This work should explore relationship tenure and the maintenance of trust over time, and it should further advance our understanding of trust recovery.

Fourth, we call for experimental investigations of trust at higher levels of analysis, such as between groups and organisations. Although experiments have been increasingly common in strategy and organisation theory research (Di Stefano & Gutierrez, Citation2019; Schilke et al., Citation2019), and this trend also applies to the experimental study of trust (e.g. Connelly et al., Citation2012; Mellewigt et al., Citation2017), the vast majority of the studies in our sample focused on an individual trusting another individual – that is, the micro level. We have also observed cross-level analysis of the development of an individual’s trust in a collectivity or vendor organisation (Baer et al., Citation2018). However, we have little experimental insight into the process of trust development at the level of groups (but see Kugler et al., Citation2007 for an exception) or even organisations. In designing an experiment with the purpose of studying this area, one might examine whether groups view trustees as more or less trustworthy while varying factors such as consensus (de Jong et al., Citation2021; Haack et al., Citation2021) or other team characteristics. Further examining this broad topic would lead to valuable insight regarding group-level trust dynamics stemming from aggregated individual perceptions (Fulmer & Ostroff, Citation2021; Schilke & Cook, Citation2013).

Fifth, we call for experimental work to focus on the consequences of trust. In comparison to the number of studies that investigate trust’s antecedents, we found relatively few experimental studies that investigate the consequences of trust, especially in organisational contexts. As noted by de Jong et al. (Citation2017), despite important non-experimental work on this topic, there is a clear need for further investigation of the effects of trust on work-related outcomes. While further explicating trust’s role as a mechanism of cooperation that enables organisations to reap performance benefits is clearly valuable, it would also be beneficial to understand the causal effects of trust on other outcomes, such as loyalty and commitment. Aside from studies focusing on how trust between two individuals leads to greater joint outcomes or performance in coordinated behavioural tasks (e.g. Meier et al., Citation2019), we observed no investigation of the relationship between trust and other work-related outcomes in our sample of experimental work.

Sixth, on a related note, we call for future work to investigate the hazards of misplaced trust. That is, in contrast to the broad view that more trust is better, we call for scholarship that identifies key contingencies that explain when people are likely to be too trusting, such as in censored environments in which they learn only limited information from a counterpart (see Schweitzer et al., Citation2018). At the group level, Langfred (Citation2004) found an interaction effect between trust and autonomy such that self-managing teams with high trust in each other and high levels of individual autonomy suffer negative performance consequences as a result of decreased monitoring efforts and coordination errors. We believe that the hazards of trust represent a topic ripe for experimentation (Schilke & Huang, Citation2018; Yip & Schweitzer, Citation2016).

Seventh, we call for scholarship on the maintenance of trust. While noteworthy experimental research has been conducted on the topics of trust violation, trust repair, and the interaction between the two, we observed no focus on the process of trust maintenance. In order to understand trust as a dynamic rather than static process, researchers should seek to understand the efficacy of maintenance practices in ensuring that trust remains stable in interpersonal relationships (Gustafsson et al., Citation2021). In addition, other than studies that focused on the development of trust between two parties in the presence of an arbitrator or mediator, we observed no experimental investigation of third-party trust or ‘trust transfer’ from a familiar third-party individual to one of their connections, despite previous calls for research in this area (de Jong et al., Citation2017). This topic is especially pertinent to organisational contexts, where individuals may be familiar with others in their network without actually engaging with them and where information regarding perceived trustworthiness can be passed along by familiar ties. Experiments studying the relationships that form after a familiar tie indicates to one party how much another party should be trusted might therefore yield fruitful insights.

Conclusion

Trust profoundly shapes organisational, group, and dyadic outcomes. Reflecting its importance, a growing literature has investigated trust. This work has fundamentally advanced our understanding of the complexity and multi-faceted nature of trust, but our experimental investigations have yet to catch up to our growing theoretical understanding.

Acknowledgments

The authors are thankful for the insightful comments provided by the Editor-in-Chief, Guido Möllering, and by two anonymous reviewers. The authors gratefully acknowledge the insightful comments and suggestions on earlier drafts of the manuscript provided by Bart de Jong, Allison Gabriel, and Tamar Kugler. The authors are appreciative of able research assistance by Mackay Sennyey and Elizabeth Yee.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

Research support was provided by a National Science Foundation CAREER Award (1943688) granted to the first author. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Notes on contributors

Oliver Schilke

Oliver Schilke is an associate professor of management and organisations (with tenure) at the Eller College of Management and an associate professor of sociology (by courtesy) at the University of Arizona. He is also the founder and director of the Center for Trust Studies at the Eller College of Management. His research interests include trust, collaboration, organisational routines/capabilities, and microinstitutional processes. He received his Ph.D. from the University of California–Los Angeles. He has published in journals such as the Academy of Management Journal, Administrative Science Quarterly, American Sociological Review, Annual Review of Sociology, Journal of Applied Psychology, Organization Science, Proceedings of the National Academy of Sciences, and Strategic Management Journal.

Andy Powell

Andy Powell is a Research Technician in the Department of Neuroscience, The University of Arizona. He is broadly interested in group dynamics (both within and between social groups). More specifically, his research focuses on interpersonal and organisational trust spanning multiple levels of analysis in addition to the experience of social status and its influence on group-oriented behaviours.

Maurice E. Schweitzer

Maurice E. Schweitzer is the Cecilia Yen Koo Professor, Professor of Operations, Information, and Decisions and Management, at the Wharton School of Business at the University of Pennsylvania. His research interests include trust, deception, negotiations, and decision making.

Notes

4 See, for example, the guidance by Indiana University’s Human Research Protection Program at https://research.iu.edu/compliance/human-subjects/guidance/deception.html

5 In addition, eleven experiments both manipulated trust as an independent variable and measured behavioural or attitudinal trust as a dependent variable (for reasons other than only checking the efficacy of manipulations).

6 We will return to this issue when discussing quasi-experimental approaches below.

References

  • Aguinis, H., & Bradley, K. J. (2014). Best practice recommendations for designing and implementing experimental vignette methodology studies. Organizational Research Methods, 17(4), 351–371. https://doi.org/10.1177/1094428114547952
  • Ajzen, I., Brown, T. C., & Carvajal, F. (2004). Explaining the discrepancy between intentions and actions: The case of hypothetical bias in contingent valuation. Personality and Social Psychology Bulletin, 30(9), 1108–1121. https://doi.org/10.1177/0146167204264079
  • Alós-Ferrer, C., & Farolfi, F. (2019). Trust games and beyond. Frontiers in Neuroscience, 13, 887. https://doi.org/10.3389/fnins.2019.00887
  • Aronson, E., Ellsworth, P. C., Carlsmith, J. M., & Gonzales, M. H. (1990). Methods of research in social psychology (2nd ed.). McGraw-Hill.
  • Audia, P. G., Locke, E. A., & Smith, K. G. (2000). The paradox of success: An archival and a laboratory study of strategic persistence following radical environmental change. Academy of Management Journal, 43(5), 837–853. https://doi.org/10.2307/1556413
  • Aven, B., Morse, L., & Iorio, A. (2019). The valley of trust: The effect of relational strength on monitoring quality. Organizational Behavior and Human Decision Processes, 179–193. https://doi.org/10.1016/j.obhdp.2019.07.004
  • Aviram, H. (2012). What would you do? Conducting web-based factorial vignette surveys. In L. Gideon (Ed.), Handbook of survey methodology for the social sciences (pp. 463–473). Springer.
  • Axelrod, R. (1997). The complexity of cooperation: Agent-based models of competition and collaboration. Princeton University Press.
  • Baer, M. D., van der Werff, L., Colquitt, J. A., Rodell, J. B., Zipay, K. P., & Buckley, F. (2018). Trusting the “look and feel”: Situational normality, situational aesthetics, and the perceived trustworthiness of organizations. Academy of Management Journal, 61(5), 1718–1740. https://doi.org/10.5465/amj.2016.0248
  • Baldassarri, D. (2015). Cooperative networks: Altruism, group solidarity, reciprocity, and sanctioning in Ugandan producer organizations. American Journal of Sociology, 121(2), 355–395. https://doi.org/10.1086/682418
  • Barney, J. B., & Hansen, M. H. (1994). Trustworthiness as a source of competitive advantage. Strategic Management Journal, 15(8), 175–190. https://doi.org/10.1002/smj.4250150912
  • Barrera, D. (2008). The social mechanisms of trust. Sociologica, 2(2), 1–32. https://doi.org/10.2383/27728
  • Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396–403. https://doi.org/10.1111/j.1745-6916.2007.00051.x
  • Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142. https://doi.org/10.1006/game.1995.1027
  • Bies, R. J., & Shapiro, D. L. (1987). Interactional fairness judgments: The influence of causal accounts. Social Justice Research, 1(2), 199–218. https://doi.org/10.1007/BF01048016
  • Bitektine, A., Lucas, J., Schilke, O., & Aeon, B. (2022). Oxford Research Encyclopedia of Business and Management. https://doi.org/10.1093/acrefore/9780190224851.013.284
  • Bitektine, A., Lucas, J. W., & Schilke, O. (2018). Institutions under a microscope: Experimental methods in institutional theory (pp. 147-167).
  • Blader, S. L., & Chen, Y.-R. (2011). What influences how higher-status people respond to lower-status others? Effects of procedural fairness, outcome favorability, and concerns about status. Organization Science, 22(4), 1040–1060. https://doi.org/10.1287/orsc.1100.0558
  • Blau, P. M. (1964). Exchange and power in social life. Wiley.
  • Bohnet, I., Greig, F., Herrmann, B., & Zeckhauser, R. (2008). Betrayal aversion: Evidence from Brazil, China, Oman, Switzerland, Turkey, and the United States. American Economic Review, 98(1), 294–310. https://doi.org/10.1257/aer.98.1.294
  • Bolinger, M. T., Josefy, M. A., Stevenson, R., & Hitt, M. A. (2022). Experiments in strategy research: A critical review and future research opportunities. Journal of Management, 48(1), 77–113. https://doi.org/10.1177/01492063211044416
  • Bolton, G. E., Katok, E., & Ockenfels, A. (2004). How effective are electronic reputation mechanisms? An experimental investigation. Management Science, 50(11), 1587–1602. https://doi.org/10.1287/mnsc.1030.0199
  • Bottom, W., Gibson, K., Daniels, S., & Murnighan, J. (2002). When talk is not cheap: Substantive penance and expressions of intent in rebuilding cooperation. Organization Science, 13(5). https://doi.org/10.1287/orsc.13.5.497.7816
  • Brase, G. L. (2009). How different types of participant payments alter task performance. Judgment and Decision Making, 4(5), 419–428. https://doi.org/10.1017/S1930297500001248
  • Brewer, M. (1985). Experimental research and social policy: Must it be rigor versus relevance? Journal of Social Issues, 41(4), 159–176. https://doi.org/10.1111/j.1540-4560.1985.tb01149.x
  • Brülhart, M., & Usunier, J.-C. (2012). Does the trust game measure trust? Economics Letters, 115(1), 20–23. https://doi.org/10.1016/j.econlet.2011.11.039
  • Buck, S., Nutefall, J., & Bridges, L. (2012). “We thought it might encourage participation.” Using lottery incentives to improve LibQUAL+™ response rates among students. The Journal of Academic Librarianship, 38(6), 400–408. https://doi.org/10.1016/j.acalib.2012.07.004
  • Cao, J., & Galinsky, A. D. (2020). The diversity-uncertainty-valence (DUV) model of generalized trust development. Organizational Behavior and Human Decision Processes, 161, 49–64. https://doi.org/10.1016/j.obhdp.2020.03.007
  • Cavanaugh, G. F., & Fritzsche, D. J. (1985). Using vignettes in business ethics research. Research in Corporate Social Performance and Policy, 7, 279–293.
  • Chatterji, A. K., Findley, M., Jensen, N. M., Meier, S., & Nielson, D. (2016). Field experiments in strategy research. Strategic Management Journal, 37(1), 116–132. https://doi.org/10.1002/smj.2449
  • Cheshin, A., Amit, A., & van Kleef, G. A. (2018). The interpersonal effects of emotion intensity in customer service: Perceived appropriateness and authenticity of attendants’ emotional displays shape customer trust and satisfaction. Organizational Behavior and Human Decision Processes, 144, 97–111. https://doi.org/10.1016/j.obhdp.2017.10.002
  • Chua, R. Y. J., Morris, M. W., & Mor, S. (2012). Collaborating across cultures: Cultural metacognition and affect-based trust in creative collaboration. Organizational Behavior and Human Decision Processes, 118(2), 116–131. https://doi.org/10.1016/j.obhdp.2012.03.009
  • Cojuharenco, I., & Karelaia, N. (2020). When leaders ask questions: Can humility premiums buffer the effects of competence penalties? Organizational Behavior and Human Decision Processes, 156, 113–134. https://doi.org/10.1016/j.obhdp.2019.12.001
  • Coleman, J. S. (1990). Foundations of social theory. Harvard University Press.
  • Colquitt, J. A., Scott, B. A., Judge, T. A., & Shaw, J. C. (2006). Justice and personality: Using integrative theories to derive moderators of justice effects. Organizational Behavior and Human Decision Processes, 100(1), 110–127. https://doi.org/10.1016/j.obhdp.2005.09.001
  • Connelly, B. L., Miller, T., & Devers, C. E. (2012). Under a cloud of suspicion: Trust, distrust, and their interactive effect in interorganizational contracting. Strategic Management Journal, 33(7), 820–833. https://doi.org/10.1002/smj.974
  • Cook, K. S., & Cooper, R. M. (2003). Experimental studies of cooperation, trust and social exchange. In E. Ostrom & J. Walker (Eds.), Trust, reciprocity and gains from association: interdisciplinary lessons from experimental research (pp. 277–333). Russell Sage Foundation. https://doi.org/10.7758/9781610444347.7
  • Cook, K. S., & Yamagishi, T. (2008). A defense of deception on scientific grounds. Social Psychology Quarterly, 71(3), 215–221. https://doi.org/10.1177/019027250807100303
  • Cox, J. C. (2004). How to identify trust and reciprocity. Games and Economic Behavior, 46(2), 260–281. https://doi.org/10.1016/S0899-8256(03)00119-2
  • Dasgupta, P. (1988). Trust as a commodity. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 49–72). Blackwell.
  • De Cremer, D., van Dijke, M., Schminke, M., De Schutter, L., & Stouten, J. (2018). The trickle-down effects of perceived trustworthiness on subordinate performance. Journal of Applied Psychology, 103(12), 1335–1357. https://doi.org/10.1037/apl0000339
  • de Jong, B. A., & Elfring, T. (2010). How does trust affect the performance of ongoing teams? The mediating role of reflexivity, monitoring, and effort. Academy of Management Journal, 53(3), 535–549. https://doi.org/10.5465/amj.2010.51468649
  • de Jong, B. A., Gillespie, N., Williamson, I., & Gill, C. (2021). Trust consensus within culturally diverse teams: A multistudy investigation. Journal of Management, 47(8), 2135–2168. https://doi.org/10.1177/0149206320943658
  • de Jong, B. A., Kroon, D. P., & Schilke, O. (2017). The future of organizational trust research: A content-analytic synthesis of scholarly recommendations and review of recent developments. In P. A. M. Van Lange, B. Rockenbach, & T. Yamagishi (Eds.), Trust in social dilemmas (pp. 173–194). Oxford University Press.
  • Desmet, P. T. M., De Cremer, D., & van Dijk, E. (2011). In money we trust? The use of financial compensations to repair trust in the aftermath of distributive harm. Organizational Behavior and Human Decision Processes, 114(2), 75–86. https://doi.org/10.1016/j.obhdp.2010.10.006
  • Di Stefano, G., & Gutierrez, C. (2019). Under a magnifying glass: On the use of experiments in strategy research. Strategic Organization, 17(4), 497–507. https://doi.org/10.1177/1476127018803840
  • Dickson, E. S. (2011). Economics versus psychology experiments. In A. Lupia, D. P. Green, J. H. Kuklinski, & J. N. Druckman (Eds.), Cambridge handbook of experimental political science (pp. 58–70). Cambridge University Press.
  • Dietz, G., & Den Hartog, D. N. (2006). Measuring trust inside organisations. Personnel Review, 35(5), 557–588. https://doi.org/10.1108/00483480610682299
  • Dirks, K. T., & Ferrin, D. L. (2001). The role of trust in organizational settings. Organization Science, 12(4), 450–467. https://doi.org/10.1287/orsc.12.4.450.10640
  • Dirks, K. T., & Ferrin, D. L. (2002). Trust in leadership: Meta-analytic findings and implications for research and practice. Journal of Applied Psychology, 87(4), 611–628. https://doi.org/10.1037/0021-9010.87.4.611
  • Dirks, K. T., Kim, P. H., Ferrin, D. L., & Cooper, C. D. (2011). Understanding the effects of substantive responses on trust following a transgression. Organizational Behavior and Human Decision Processes, 114(2), 87–103. https://doi.org/10.1016/j.obhdp.2010.10.003
  • Dunn, J. R., Ruedy, N. E., & Schweitzer, M. E. (2012). It hurts both ways: How social comparisons harm affective and cognitive trust. Organizational Behavior and Human Decision Processes, 117(1), 2–14. https://doi.org/10.1016/j.obhdp.2011.08.001
  • Dunn, J. R., & Schweitzer, M. E. (2005). Feeling and believing: The influence of emotion on trust. Journal of Personality and Social Psychology, 88(5), 736–748. https://doi.org/10.1037/0022-3514.88.5.736
  • Earley, P. C. (1988). Computer-generated performance feedback in the magazine-subscription industry. Organizational Behavior and Human Decision Processes, 41(1), 50–64. https://doi.org/10.1016/0749-5978(88)90046-5
  • Eden, D. (2017). Field experiments in organizations. Annual Review of Organizational Psychology and Organizational Behavior, 4(1), 91–122. https://doi.org/10.1146/annurev-orgpsych-041015-062400
  • Falk, A., & Heckman, J. J. (2009). Lab experiments are a major source of knowledge in the social sciences. Science, 326(5952), 535–538. https://doi.org/10.1126/science.1168244
  • Ferrin, D. L., & Dirks, K. T. (2003). The use of rewards to increase and decrease trust: Mediating processes and differential effects. Organization Science, 14(1), 18–31. https://doi.org/10.1287/orsc.14.1.18.12809
  • Finch, J. (1987). The vignette technique in survey research. Sociology, 21(1), 105–114. https://doi.org/10.1177/0038038587021001008
  • Fréchette, G. (2015). Laboratory experiments: Professionals versus students. In G. R. Fréchette & A. Schotter (Eds.), Handbook of experimental economic methodology (pp. 360–390). Oxford University Press.
  • Fulmer, C. A., & Ostroff, C. (2021). Trust conceptualizations across levels of analysis. In N. Gillespie, C. A. Fulmer, & R. J. Lewicki (Eds.), Understanding trust in organizations: A multilevel perspective (pp. 14–41). Routledge.
  • Giessner, S. R., & van Knippenberg, D. (2008). “License to Fail”: Goal definition, leader group prototypicality, and perceptions of leadership effectiveness after leader failure. Organizational Behavior and Human Decision Processes, 105(1), 14–35. https://doi.org/10.1016/j.obhdp.2007.04.002
  • Gino, F., & Schweitzer, M. E. (2008). Blinded by anger or feeling the love: How emotions influence advice taking. Journal of Applied Psychology, 93(5), 1165–1173. https://doi.org/10.1037/0021-9010.93.5.1165
  • Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., & Soutter, C. L. (2000). Measuring trust. Quarterly Journal of Economics, 115(3), 811–846. https://doi.org/10.1162/003355300554926
  • Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi-experimentation. Organizational Research Methods, 12(4), 653–686. https://doi.org/10.1177/1094428108320737
  • Gustafsson, S., Gillespie, N., Searle, R., Hope Hailey, V., & Dietz, G. (2021). Preserving organizational trust during disruption. Organization Studies, 42(9), 1409–1433. https://doi.org/10.1177/0170840620912705
  • Haack, P., Schilke, O., & Zucker, L. G. (2021). Legitimacy revisited: Disentangling propriety, validity, and consensus. Journal of Management Studies, 58(3), 749–781. https://doi.org/10.1111/joms.12615
  • Hanel, P. H. P., & Vione, K. C. (2016). Do student samples provide an accurate estimate of the general public? PloS One, 11(12), e0168354–e0168354. https://doi.org/10.1371/journal.pone.0168354
  • Harmon, D. J., Kim, P. H., & Mayer, K. J. (2015). Breaking the letter vs. spirit of the law: How the interpretation of contract violations affects trust and the management of relationships. Strategic Management Journal, 36(4), 497–517. https://doi.org/10.1002/smj.2231
  • Hart, E., & Schweitzer, M. E. (2020). Getting to less: When negotiating harms post-agreement performance. Organizational Behavior and Human Decision Processes, 156, 155–175. https://doi.org/10.1016/j.obhdp.2019.09.005
  • Hill, N. S., Bartol, K. M., Tesluk, P. E., & Langa, G. A. (2009). Organizational context and face-to-face interaction: Influences on the development of trust and collaborative behaviors in computer-mediated groups. Organizational Behavior and Human Decision Processes, 108(2), 187–201. https://doi.org/10.1016/j.obhdp.2008.10.002
  • Holtz, B. C. (2015). From first impression to fairness perception: Investigating the impact of initial trustworthiness beliefs. Personnel Psychology, 68(3), 499–546. https://doi.org/10.1111/peps.12092
  • Huang, L., Gino, F., & Galinsky, A. D. (2015). The highest form of intelligence: Sarcasm increases creativity for both expressers and recipients. Organizational Behavior and Human Decision Processes, 131, 162–177. https://doi.org/10.1016/j.obhdp.2015.07.001
  • Hunt, J. S., & Budesheim, T. L. (2004). How jurors use and misuse character evidence. Journal of Applied Psychology, 89(2), 347–361. https://doi.org/10.1037/0021-9010.89.2.347
  • Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865–889. https://doi.org/10.1016/j.joep.2011.05.007
  • Johnson, R. E., & Lord, R. G. (2010). Implicit effects of justice on self-identity. Journal of Applied Psychology, 95(4), 681–695. https://doi.org/10.1037/a0019298
  • Kennedy, J. A., & Schweitzer, M. E. (2018). Building trust by tearing others down: When accusing others of unethical behavior engenders trust. Organizational Behavior and Human Decision Processes, 149, 111–128. https://doi.org/10.1016/j.obhdp.2018.10.001
  • Keren, G. (2007). Framing, intentions, and trust–choice incompatibility. Organizational Behavior and Human Decision Processes, 103(2), 238–255. https://doi.org/10.1016/j.obhdp.2007.02.002
  • Kim, P. H., Cooper, C. D., Dirks, K. T., & Ferrin, D. L. (2013). Repairing trust with individuals vs. groups. Organizational Behavior and Human Decision Processes, 120(1), 1–14. https://doi.org/10.1016/j.obhdp.2012.08.004
  • Kim, P. H., Dirks, K. T., Cooper, C. D., & Ferrin, D. L. (2006). When more blame is better than less: The implications of internal vs. external attributions for the repair of trust after a competence- vs. integrity-based trust violation. Organizational Behavior and Human Decision Processes, 99(1), 49–65. https://doi.org/10.1016/j.obhdp.2005.07.002
  • Kim, P. H., Ferrin, D. L., Cooper, C. D., & Dirks, K. T. (2004). Removing the shadow of suspicion: The effects of apology versus denial for repairing competence- versus integrity-based trust violations. Journal of Applied Psychology, 89(1), 104–118. https://doi.org/10.1037/0021-9010.89.1.104
  • Kirkpatrick, S. A., & Locke, E. A. (1996). Direct and indirect effects of three core charismatic leadership components on performance and attitudes. Journal of Applied Psychology, 81(1), 36–51. https://doi.org/10.1037/0021-9010.81.1.36
  • Koehler, J. J., & Mercer, M. (2009). Selection neglect in mutual fund advertisements. Management Science, 55(7), 1107–1121. https://doi.org/10.1287/mnsc.1090.1013
  • Korsgaard, M. A., Roberson, L., & Rymph, R. D. (1998). What motivates fairness? The role of subordinate assertive behavior on managers’ interactional fairness. Journal of Applied Psychology, 83(5), 731–744. https://doi.org/10.1037/0021-9010.83.5.731
  • Korsgaard, M. A., Schweiger, D. M., & Sapienza, H. J. (1995). Building commitment, attachment, and trust in strategic decision-making teams: The role of procedural justice. Academy of Management Journal, 38(1), 60–84. https://doi.org/10.2307/256728
  • Kramer, R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring questions. Annual Review of Psychology, 50(1), 569–598. https://doi.org/10.1146/annurev.psych.50.1.569
  • Kugler, T., Bornstein, G., Kocher, M. G., & Sutter, M. (2007). Trust between individuals and groups: Groups are less trusting than individuals but just as trustworthy. Journal of Economic Psychology, 28(6), 646–657. https://doi.org/10.1016/j.joep.2006.12.003
  • Langfred, C. (2004). Too much of a good thing? Negative effects of high trust and individual autonomy in self-managed teams. Academy of Management Journal, 47(3), 385–399. https://doi.org/10.2307/20159588
  • Lazzarini, S. G., Miller, G. J., & Zenger, T. R. (2008). Dealing with the paradox of embeddedness: The role of contracts and trust in facilitating movement out of committed relationships. Organization Science, 19(5), 709–728. https://doi.org/10.1287/orsc.1070.0336
  • LeBel, E. P., Berger, D., Campbell, L., & Loving, T. J. (2017). Falsifiability is not optional. Journal of Personality and Social Psychology, 113(2), 254–261. https://doi.org/10.1037/pspi0000106
  • Levin, I. P. (1987). Associative effects of information framing. Bulletin of the Psychonomic Society, 25(2), 85–86. https://doi.org/10.3758/BF03330291
  • Levin, I. P., & Gaeth, G. (1988). How consumers are affected by the framing of attribute information before and after consuming the product. Journal of Consumer Research, 15(3), 374–378. https://doi.org/10.1086/209174
  • Levine, E. E., & Schweitzer, M. E. (2015). Prosocial lies: When deception breeds trust. Organizational Behavior and Human Decision Processes, 126, 88–106. https://doi.org/10.1016/j.obhdp.2014.10.007
  • Levine, E. E., & Wald, K. A. (2020). Fibbing about your feelings: How feigning happiness in the face of personal hardship affects trust. Organizational Behavior and Human Decision Processes, 156, 135–154. https://doi.org/10.1016/j.obhdp.2019.05.004
  • Lim, K. H., Sia, C. L., Lee, M. K. O., & Benbasat, I. (2006). Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. Journal of Management Information Systems, 23(2), 233–266. https://doi.org/10.2753/MIS0742-1222230210
  • Lonati, S., Quiroga, B. F., Zehnder, C., & Antonakis, J. (2018). On doing relevant and rigorous experiments: Review and recommendations. Journal of Operations Management, 64(1), 19–40. https://doi.org/10.1016/j.jom.2018.10.003
  • Lount, R. B., Zhong, C.-B., Sivanathan, N., & Murnighan, J. (2008). Getting off on the wrong foot: The timing of a breach and the restoration of trust. Personality and Social Psychology Bulletin, 34(12), 1601–1612. https://doi.org/10.1177/0146167208324512
  • Lucas, J. W. (2003). Status processes and the institutionalization of women as leaders. American Sociological Review, 68(3), 464–480. https://doi.org/10.2307/1519733
  • Lucas, J. W. (2003). Theory-testing, generalization, and the problem of external validity. Sociological Theory, 21(3), 236–253. https://doi.org/10.1111/1467-9558.00187
  • Lumineau, F. (2017). How contracts influence trust and distrust. Journal of Management, 43(5), 1553–1577. https://doi.org/10.1177/0149206314556656
  • Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70(3), 151–159. https://doi.org/10.1037/h0026141
  • Lyon, F., Möllering, G., & Saunders, M. (2015). Introduction. Researching trust: The ongoing challenge of matching objectives and methods. In Handbook of research methods on trust (pp. 1–22). Edward Elgar.
  • Lyon, F., Möllering, G., & Saunders, M. N. K. (2012). Handbook of research methods on trust (2nd ed.). Edward Elgar.
  • Malhotra, D., & Murnighan, J. K. (2002). The effects of contracts on interpersonal trust. Administrative Science Quarterly, 47(3), 534–559. https://doi.org/10.2307/3094850
  • Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84(1), 123–136. https://doi.org/10.1037/0021-9010.84.1.123
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. https://doi.org/10.2307/256727
  • McAllister, D. J. (1997). The second face of trust: Reflections on the dark side of interpersonal trust in organizations. Research on Negotiation in Organizations, 6, 87–112.
  • McClintock, C. G. (1972). Social motivation—a set of propositions. Behavioral Science, 17(5), 438–454. https://doi.org/10.1002/bs.3830170505
  • McElroy, J. C., Summers, J. K., & Moore, K. (2014). The effect of facial piercing on perceptions of job applicants. Organizational Behavior and Human Decision Processes, 125(1), 26–38. https://doi.org/10.1016/j.obhdp.2014.05.003
  • McEvily, B., Perrone, V., & Zaheer, A. (2003). Trust as an organizing principle. Organization Science, 14(1), 91–103. https://doi.org/10.1287/orsc.14.1.91.12814
  • McEvily, B., Radzevick, J. R., & Weber, R. A. (2012). Whom do you distrust and how much does it cost? An experiment on the measurement of trust. Games and Economic Behavior, 74(1), 285–298. https://doi.org/10.1016/j.geb.2011.06.011
  • McEvily, B., & Tortoriello, M. (2011). Measuring trust in organisational research: Review and recommendations. Journal of Trust Research, 1(1), 23–63. https://doi.org/10.1080/21515581.2011.552424
  • McEvily, B., Weber, R., Bicchieri, C., & Ho, V. (2002). Can groups be trusted? An experimental study of collective trust. Handbook of Trust Research.
  • Meier, S., Stephenson, M., & Perkowski, P. (2019). Culture of trust and division of labor in nonhierarchical teams. Strategic Management Journal, 40(8), 1171–1193. https://doi.org/10.1002/smj.3024
  • Mellewigt, T., Thomas, A., Weller, I., & Zajac, E. J. (2017). Alliance or acquisition? A mechanisms-based, policy-capturing analysis. Strategic Management Journal, 38(12), 2353–2369. https://doi.org/10.1002/smj.2664
  • Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary groups. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 166–195). SAGE Publications, Inc.
  • Mislin, A. A., Campagna, R. L., & Bottom, W. P. (2011). After the deal: Talk, trust building and the implementation of negotiated agreements. Organizational Behavior and Human Decision Processes, 115(1), 55–68. https://doi.org/10.1016/j.obhdp.2011.01.002
  • Mooijman, M., van Dijk, W. W., van Dijk, E., & Ellemers, N. (2019). Leader power, power stability, and interpersonal trust. Organizational Behavior and Human Decision Processes, 152, 1–10. https://doi.org/10.1016/j.obhdp.2019.03.009
  • Mutz, D. C. (2011). Population-based survey experiments (Student ed.). Princeton University Press.
  • Nakayachi, K., & Watabe, M. (2005). Restoring trustworthiness after adverse events: The signaling effects of voluntary “hostage posting” on trust. Organizational Behavior and Human Decision Processes, 97(1), 1–17. https://doi.org/10.1016/j.obhdp.2005.02.001
  • Naquin, C. E., & Paulson, G. D. (2003). Online bargaining and interpersonal trust. Journal of Applied Psychology, 88(1), 113–120. https://doi.org/10.1037/0021-9010.88.1.113
  • Neal, T., Shockley, E., & Schilke, O. (2016). The “dark side” of institutional trust (pp. 177–192).
  • Oldham, G. R. (1975). The impact of supervisory characteristics on goal acceptance. Academy of Management Journal, 18(3), 461–475. https://doi.org/10.2307/255677
  • Parco, J. E., Rapoport, A., & Stein, W. E. (2002). Effects of financial incentives on the breakdown of mutual trust. Psychological Science, 13(3), 292–297. https://doi.org/10.1111/1467-9280.00454
  • Piff, P., Kraus, M., Côté, S., Cheng, B., & Keltner, D. (2010). Having less, giving more: The influence of social class on prosocial behavior. Journal of Personality and Social Psychology, 99(5), 771–784. https://doi.org/10.1037/a0020092
  • Pitesa, M., Goh, Z., & Thau, S. (2018). Mandates of dishonesty: The psychological and social costs of mandated attitude expression. Organization Science, 29(3), 418–431. https://doi.org/10.1287/orsc.2017.1190
  • Podsakoff, P. M., & Podsakoff, N. P. (2019). Experimental designs in management and leadership research: Strengths, limitations, and recommendations for improving publishability. The Leadership Quarterly, 30(1), 11–33. https://doi.org/10.1016/j.leaqua.2018.11.002
  • Pruitt, D. G., & Lewis, S. A. (1975). Development of integrative solutions in bilateral negotiation. Journal of Personality and Social Psychology, 31(4), 621–633. https://doi.org/10.1037/0022-3514.31.4.621
  • Rafaeli, A., Sagy, Y., & Derfler-Rozin, R. (2008). Logos and initial compliance: A strong case of mindless trust. Organization Science, 19(6), 845–859. https://doi.org/10.1287/orsc.1070.0344
  • Reimann, M., Hüller, C., Schilke, O., & Cook, K. S. (2022). Impression management attenuates the effect of ability on trust in economic exchange. Proceedings of the National Academy of Sciences, 119(30), e2118548119. https://doi.org/10.1073/pnas.2118548119
  • Reypens, C., & Levine, S. S. (2017). To grasp cognition in action, combine behavioral experiments with protocol analysis. In R. J. Galavan, K. J. Sund, & G. P. Hodgkinson (Eds.), Methodological challenges and advances in managerial and organizational cognition (Vol. 2, pp. 123–146). Emerald.
  • Rose, S. L., Sah, S., Dweik, R., Schmidt, C., Mercer, M., Mitchum, A., Kattan, M., Karafa, M., & Robertson, C. (2021). Patient responses to physician disclosures of industry conflicts of interest: A randomized field experiment. Organizational Behavior and Human Decision Processes, 27–38. https://doi.org/10.1016/j.obhdp.2019.03.005
  • Ross, W. H., & Wieland, C. (1996). Effects of interpersonal trust and time pressure on managerial mediation strategy in a simulated organizational dispute. Journal of Applied Psychology, 81(3), 228–248. https://doi.org/10.1037/0021-9010.81.3.228
  • Rousseau, D., Sitkin, S., Burt, R., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404. https://doi.org/10.5465/amr.1998.926617
  • Sah, S., & Loewenstein, G. (2015). Conflicted advice and second opinions: Benefits, but unintended consequences. Organizational Behavior and Human Decision Processes, 130, 89–107. https://doi.org/10.1016/j.obhdp.2015.06.005
  • Sah, S., Malaviya, P., & Thompson, D. (2018). Conflict of interest disclosure as an expertise cue: Differential effects due to automatic versus deliberative processing. Organizational Behavior and Human Decision Processes, 147, 127–146. https://doi.org/10.1016/j.obhdp.2018.05.008
  • Schabram, K., Robinson, S. L., & Cruz, K. S. (2018). Honor among thieves: The interaction of team and member deviance on trust in the team. Journal of Applied Psychology, 103(9), 1057–1066. https://doi.org/10.1037/apl0000311
  • Schilke, O., & Cook, K. S. (2013). A cross–level process theory of trust development in interorganizational relationships. Strategic Organization, 11(3), 281–303. https://doi.org/10.1177/1476127012472096
  • Schilke, O., & Huang, L. (2018). Worthy of swift trust? How brief interpersonal contact affects trust accuracy. Journal of Applied Psychology, 103(11), 1181–1197. https://doi.org/10.1037/apl0000321
  • Schilke, O., Levine, S. S., Kacperczyk, O., & Zucker, L. G. (2019). Call for papers-special issue on experiments in organizational theory. Organization Science, 30(1), 232–234. https://doi.org/10.1287/orsc.2018.1257
  • Schilke, O., & Lumineau, F. (2018). The double-edged effect of contracts on alliance performance. Journal of Management, 44(7), 2827–2858. https://doi.org/10.1177/0149206316655872
  • Schilke, O., Reimann, M., & Cook, K. S. (2013). Effect of relationship experience on trust recovery following a breach. Proceedings of the National Academy of Sciences, 110(38), 15236–15241. https://doi.org/10.1073/pnas.1314857110
  • Schilke, O., Reimann, M., & Cook, K. S. (2015). Power decreases trust in social exchange. Proceedings of the National Academy of Sciences, 112(42), 12950–12955. https://doi.org/10.1073/pnas.1517057112
  • Schilke, O., Reimann, M., & Cook, K. S. (2021). Trust in social relations. Annual Review of Sociology, 47(1), 239–259. https://doi.org/10.1146/annurev-soc-082120-082850
  • Schweitzer, M. E., Hershey, J. C., & Bradlow, E. T. (2006). Promises and lies: Restoring violated trust. Organizational Behavior and Human Decision Processes, 101(1), 1–19. https://doi.org/10.1016/j.obhdp.2006.05.005
  • Schweitzer, M. E., Ho, T.-H., & Zhang, X. (2018). How monitoring influences trust: A tale of two faces. Management Science, 64(1), 253–270. https://doi.org/10.1287/mnsc.2016.2586
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Shadish, W. R., & Luellen, J. K. (2005). Quasi-experimental designs. Encyclopedia of Statistics in Behavioral Science. https://onlinelibrary.wiley.com/doi/abs/10.1002/0470013192.bsa521
  • Shah, R. H., & Swaminathan, V. (2008). Factors influencing partner selection in strategic alliances: The moderating role of alliance context. Strategic Management Journal, 29(5), 471–494. https://doi.org/10.1002/smj.656
  • Shaver, J. M. (1998). Accounting for endogeneity when assessing strategy performance: Does entry mode choice affect FDI survival? Management Science, 44(4), 571–585. https://doi.org/10.1287/mnsc.44.4.571
  • Sniezek, J. A., & Van Swol, L. M. (2001). Trust, confidence, and expertise in a judge-advisor system. Organizational Behavior and Human Decision Processes, 84(2), 288–307. https://doi.org/10.1006/obhd.2000.2926
  • Spencer, S. J., Zanna, M. P., & Fong, G. T. (2005). Establishing a causal chain: Why experiments are often more effective than mediational analyses in examining psychological processes. Journal of Personality and Social Psychology, 89(6), 845–851. https://doi.org/10.1037/0022-3514.89.6.845
  • Starke, F. A., & Notz, W. W. (1981). Pre- and post-intervention effects of conventional versus final offer arbitration. Academy of Management Journal, 24(4), 832–850. https://doi.org/10.2307/256180
  • Stewart, K. J. (2003). Trust transfer on the world wide web. Organization Science, 14(1), 5–17. https://doi.org/10.1287/orsc.14.1.5.12810
  • Stewart, K. J. (2006). How hypertext links influence consumer perceptions to build and degrade trust online. Journal of Management Information Systems, 23(1), 183–210. https://doi.org/10.2753/MIS0742-1222230106
  • Stone-Romero, E. F. (2011). Research strategies in industrial and organizational psychology: Nonexperimental, quasi-experimental, and randomized experimental research in special purpose and nonspecial purpose settings. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology, Vol. 1: Building and developing the organization (pp. 37–72). American Psychological Association.
  • Strauss, A., & Corbin, J. M. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Sage Publications, Inc.
  • Tetlock, P. E., Vieider, F. M., Patil, S. V., & Grant, A. M. (2013). Accountability and ideology: When left looks right and right looks left. Organizational Behavior and Human Decision Processes, 122(1), 22–35. https://doi.org/10.1016/j.obhdp.2013.03.007
  • Tzieropoulos, H. (2013). The trust game in neuroscience: A short review. Social Neuroscience, 8(5), 407–416. https://doi.org/10.1080/17470919.2013.832375
  • van der Werff, L., Legood, A., Buckley, F., Weibel, A., & de Cremer, D. (2019). Trust motivation: The self-regulatory processes underlying trust decisions. Organizational Psychology Review, 9(2-3), 99–123. https://doi.org/10.1177/2041386619873616
  • van Dijke, M., De Cremer, D., Langendijk, G., & Anderson, C. (2018). Ranking low, feeling high: How hierarchical position and experienced power promote prosocial behavior in response to procedural justice. Journal of Applied Psychology, 103(2), 164–181. https://doi.org/10.1037/apl0000260
  • van Dijke, M., Mayer, D. M., & De Cremer, D. (2010). The role of authority power in explaining procedural fairness effects. Journal of Applied Psychology, 95(3), 488–502. https://doi.org/10.1037/a0018921
  • Wallander, L. (2009). 25 years of factorial surveys in sociology: A review. Social Science Research, 38(3), 505–520. https://doi.org/10.1016/j.ssresearch.2009.03.004
  • Walther, J. B. (1995). Relational aspects of computer-mediated communication: Experimental observations over time. Organization Science, 6(2), 186–203. https://doi.org/10.1287/orsc.6.2.186
  • Wang, L., & Murnighan, J. K. (2017). The dynamics of punishment and trust. Journal of Applied Psychology, 102(10), 1385–1402. https://doi.org/10.1037/apl0000178
  • Weber, L., & Bauman, C. W. (2019). The cognitive and behavioral impact of promotion and prevention contracts on trust in repeated exchanges. Academy of Management Journal, 62(2), 361–382. https://doi.org/10.5465/amj.2016.1230
  • Welsh, M. B., & Navarro, D. J. (2012). Seeing is believing: Priors, trust, and base rate neglect. Organizational Behavior and Human Decision Processes, 119(1), 1–14. https://doi.org/10.1016/j.obhdp.2012.04.001
  • Wilson, J. M., Straus, S. G., & McEvily, B. (2006). All in due time: The development of trust in computer-mediated and face-to-face teams. Organizational Behavior and Human Decision Processes, 99(1), 16–33. https://doi.org/10.1016/j.obhdp.2005.08.001
  • Wood, A. (2020). A nonverbal signal of trustworthiness: An evolutionarily relevant model. Journal of Trust Research, 10(2), 134–158. https://doi.org/10.1080/21515581.2021.1922912
  • Yamagishi, T., Mifune, N., Li, Y., Shinada, M., Hashimoto, H., Horita, Y., Miura, A., Inukai, K., Tanida, S., Kiyonari, T., Takagishi, H., & Simunovic, D. (2013). Is behavioral pro-sociality game-specific? Pro-social preference and expectations of pro-sociality. Organizational Behavior and Human Decision Processes, 120(2), 260–271. https://doi.org/10.1016/j.obhdp.2012.06.002
  • Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and Japan. Motivation and Emotion, 18(2), 129–166. https://doi.org/10.1007/BF02249397
  • Yao, J., Zhang, Z.-X., Brett, J., & Murnighan, J. K. (2017). Understanding the trust deficit in China: Mapping positive experience and trust in strangers. Organizational Behavior and Human Decision Processes, 143, 85–97. https://doi.org/10.1016/j.obhdp.2016.12.003
  • Yip, J. A., & Schweitzer, M. E. (2016). Mad and misleading: Incidental anger promotes deception. Organizational Behavior and Human Decision Processes, 137, 207–217. https://doi.org/10.1016/j.obhdp.2016.09.006
  • Zaheer, A., McEvily, B., & Perrone, V. (1998). Does trust matter? Exploring the effects of interorganizational and interpersonal trust on performance. Organization Science, 9(2), 141–159. https://doi.org/10.1287/orsc.9.2.141
  • Zand, D. E. (1972). Trust and managerial problem solving. Administrative Science Quarterly, 17(2), 229–239. https://doi.org/10.2307/2393957

Appendix: Description of the eleven most frequently used experimental designs

In this appendix, we briefly describe the eleven most frequently used experimental designs identified in our review and note their relative strengths and limitations.

(1) Berg et al. (Citation1995)

Description: The investment game, or trust game, was utilised significantly more often than any other experimental design in our sample. In this game, participants are matched in dyads and assigned to the role of sender or receiver. The sender must decide how much of their starting allotment (e.g., $10) to send to their partner (typically in $1 increments). This amount is tripled upon transfer, and the receiver must then decide what amount, between $0 and three times the amount sent, to return to their partner.

Strengths and limitations: The amount that the sender chooses to send to their partner is often used as a behavioural measure of benevolence-based trust. If participants choose to send a significant portion of their starting funds to their partner, this indicates a willingness to make themselves vulnerable based on the belief that the receiver will act with positive intentions toward the sender. The trust game, for this reason, is well suited for investigating how varying relationship or individual characteristics affect perceptions of benevolence. Unless significantly adapted and repurposed, this method is not particularly appropriate for measuring either integrity- or ability-based trust, and it is unclear whether it directly generalises to trust in non-monetary settings.
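The following sketch (our own illustrative coding, not the original implementation) summarises how the game’s two decisions are typically translated into the reported measures: the amount sent as the behavioural indicator of trust and the proportion returned as the indicator of trustworthiness or reciprocity.

```python
def play_trust_game(endowment: int, amount_sent: int, amount_returned: float, multiplier: int = 3):
    """One round of the Berg et al. (1995) game with the standard tripling rule."""
    assert 0 <= amount_sent <= endowment
    pot = multiplier * amount_sent
    assert 0 <= amount_returned <= pot
    sender_payoff = endowment - amount_sent + amount_returned
    receiver_payoff = pot - amount_returned
    trust = amount_sent / endowment                      # behavioural trust measure (sender)
    reciprocity = amount_returned / pot if pot else 0.0  # trustworthiness measure (receiver)
    return sender_payoff, receiver_payoff, trust, reciprocity

print(play_trust_game(endowment=10, amount_sent=6, amount_returned=9))
# -> (13, 9, 0.6, 0.5)
```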

(2) De Cremer et al. (Citation2018), Study 2a

Description: Participants initially respond to a generalised trust scale (adapted from Yamagishi & Yamagishi, Citation1994) and are then told they will engage in a group task with four other participants, with one person being assigned to act as manager, one as supervisor, and the remaining three as subordinates (in reality, all participants are assigned to the supervisor role). Next, participants are given information regarding their manager’s trustworthiness, ostensibly derived from the manager’s responses to the trust scale. Depending upon condition, they are told that, relative to the average person, their manager can or cannot be trusted. Participants next receive an email from the manager containing a participation manipulation. In the high participation condition, participants read that the manager actively seeks supervisors’ opinions on organisational decisions, while in the low participation condition, participants read that the manager will not incorporate supervisor feedback in their decision-making.

After receiving this information, participants are told that they will supervise three subordinates as they complete three tasks. In light of this information, they are asked to indicate on a 7-point scale the extent to which they would like to monitor and control their subordinates’ decisions. Participants’ responses to this question serve as a measure of trusting behaviour toward subordinates.

The authors adapted this design in a subsequent study to gauge how a supervisor’s monitoring behaviour affects subordinates’ trust in that supervisor. In this study, all participants are assigned to the subordinate role and are matched to one of the supervisors from the previous study. Participants are shown the extent to which their supervisor plans to monitor and control them before being asked to rate the trustworthiness of their supervisor on a 7-point scale.

Strengths and limitations: This series of experiments was designed to investigate how perceptions of trust may trickle down between levels in organisations. The design can thus be fruitfully utilised to investigate how trust transfers across hierarchical levels in organisations. In addition, it could be extended to study how perceptions of trustworthiness across departments within an organisation affect those departments’ ability to coordinate.

This design is limited in the sense that participants do not engage in a group task after responding to the survey measures, so we cannot gauge the effects of manager trustworthiness on supervisors’ actual behaviour. Moreover, while participants’ responses to the scale about how closely they would prefer to control their subordinates are treated as a measure of trusting behaviour, this measure arguably captures distrust intentions instead. If the design were extended to include an actual group task in which supervisors must decide how much effort to invest in monitoring and controlling subordinates, we could gain better insight into, for example, the trickle-down effects of (mis)trust on monitoring costs.

(3)

Kim et al. (Citation2004), Study 1

Description: Kim and colleagues designed a series of hiring vignettes for the purpose of investigating trust violation and repair. In their procedure, participants take the role of a manager tasked with both hiring and supervising a senior-level tax accountant. Participants watch a video recording (supplemented with a written transcript) of a recruiter interviewing a potential new hire. In the interview, it emerges that the applicant allegedly made an important error on a client’s tax return at their former workplace. Depending upon condition, this trust violation is attributed to either a lack of competence or a lack of integrity on the part of the applicant. The applicant’s immediate response to this allegation also varies across two levels: they either apologise for the violation and promise that it will never happen again, or they deny responsibility for the transgression, instead blaming internal politics at their previous workplace.
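
A hedged sketch of the resulting 2 (violation type) x 2 (response) between-subjects structure follows; the condition labels and the assignment routine are our illustration rather than the authors’ materials.

    import itertools
    import random

    VIOLATION_TYPES = ["competence", "integrity"]
    RESPONSES = ["apology", "denial"]

    # The four vignette cells implied by crossing the two manipulations.
    CONDITIONS = list(itertools.product(VIOLATION_TYPES, RESPONSES))

    def assign_condition(rng=random):
        """Randomly assign a participant to one of the four vignette conditions."""
        violation_type, response = rng.choice(CONDITIONS)
        return {"violation_type": violation_type, "response": response}

    print(assign_condition())  # e.g. {'violation_type': 'integrity', 'response': 'denial'}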

Strengths and limitations: The experimental design contains powerful elements that can be fruitfully applied to a variety of other research questions. For instance, the violation response manipulations could be broadened to include responses that have not been studied in this context (e.g. the applicant stating that they have learned from the infraction). In addition, this design allows the violation type to be varied: the video interview and transcript can reasonably be altered to depict violations of trust resulting from a lack of ability, integrity, or benevolence. This experimental design can therefore address a broad set of research questions about which responses to a trust violation afford the greatest trust repair.

(4)

Levin (Citation1987)

Description: In Levin’s (Citation1987) original design, participants are sorted into two groups and are asked to consider purchasing ground beef described as either 75% lean or 25% fat. Participants are asked to rate the extent to which they associate the hypothetical product with four indicators of quality (e.g. good tasting) on a 7-point scale.

For the purpose of studying trust, Keren (Citation2007) repurposed this design to measure how advertisement framing affects which vendor participants perceive as trustworthy and from whom they choose to purchase. In the first study of this type, participants read that one vendor advertises their product as 75% lean and the other as 25% fat. Depending upon condition, they are told that both butchers are considered trustworthy locally, that only one is trustworthy (without being told which one), or that neither is trustworthy. While the dependent variable in this initial study is purchasing intention, the design was further adapted to measure more generally how various vendor requests and advertising phrases affect consumer trust.

Strengths and limitations: This original vignette study was adapted for a specific research context to explore the effects of positive framing on consumer choice and negative framing on consumer trust. While this experiment yielded interesting results of ‘trust-choice incompatibility’ (Keren, Citation2007, p. 252), its scope appears relatively narrow in terms of applications to organisational settings.

(5)

Sah and Loewenstein (Citation2015), Study 1

Description: In the first study incorporating this design, participants are assigned to act as either advisors or advisees. Advisees are shown nine dots from a 30-by-30 grid in which each dot can be empty (white) or filled in (black). They are tasked with estimating the total number of black dots on the full grid. Depending upon condition, advisees receive advice either from a single primary advisor or from a primary and then a secondary advisor. In groups with two advisors, half of the primary advisors are notified of the existence of the secondary advisor, while half are not. Advisees first hear from their primary advisor before hearing separately from their secondary advisor, if one exists.

The incentives for primary and secondary advisors also differ. Primary advisors are explicitly told the correct number of dots and have access to the grid to verify this information for themselves. They are told that they can maximise their reward by getting their advisee to guess a number that overshoots the correct answer. Secondary advisors are not explicitly told the correct number, but they have access to the grid to count. Their reward is maximised if their advisee’s guess is accurate to within ten dots. After seeing the nine-dot section and hearing from all of their advisors, advisees give their estimates and respond to a 5-point scale regarding the extent to which they trusted their advisor(s) during the experiment.
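
The sketch below (our illustration, not the authors’ code) mirrors the stimulus and incentive structure just described. The grid size and nine-cell sample follow the description, while the proportion of black dots, the reward amounts, and the primary advisor’s exact payment schedule are placeholders, since the original only states that overshooting guesses pay the primary advisor more.

    import random

    GRID_ROWS, GRID_COLS = 30, 30  # 900 cells, each either black or white

    def make_grid(p_black=0.5, rng=random):
        """Generate the full grid as a list of booleans (True = black dot); p_black is an assumption."""
        return [rng.random() < p_black for _ in range(GRID_ROWS * GRID_COLS)]

    def advisee_sample(grid, n=9, rng=random):
        """Advisees only see a small random sample of cells before estimating."""
        return rng.sample(grid, n)

    def primary_advisor_reward(estimate, true_count, rate=0.01):
        """Placeholder schedule: primary advisors earn more the further the guess overshoots."""
        return max(0, estimate - true_count) * rate

    def secondary_advisor_reward(estimate, true_count, tolerance=10, bonus=1.0):
        """Secondary advisors are paid when the advisee's guess is accurate to within ten dots."""
        return bonus if abs(estimate - true_count) <= tolerance else 0.0

    grid = make_grid()
    print(sum(grid), advisee_sample(grid))  # true black-dot count and the advisee's limited view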

Strengths and limitations: This experimental design is narrowly focused in the sense that it is specifically tailored to situations in which an individual receives conflicted advice from a well-informed advisor and unconflicted advice from a less well-informed one. However, the context of the experiment could be altered to study more generally how individuals incorporate information from disagreeing sources and how trust forms in such situations. Insights in this area might be especially pertinent to uncertain competitive environments in which individual decision-makers or organisations have scarce information.

(6)

Cheshin et al. (Citation2018), Study 1

Description: Participants read a vignette which asks them to imagine that they have been prompted to visit a store to buy a cell phone because of a sale advertised for their preferred device. Upon arriving, participants learn that the phone is still available, but that the sale is either still going or has just ended, depending upon condition. Participants next watch a video showing the store employee’s reaction upon giving this news. Their reaction is always appropriate in terms of valence (i.e. happy when the sale is ongoing and unhappy when it has ended), but is either mild or intense, depending upon condition. These dimensions are conveyed through the actors’ facial expressions, movements, and speech. After viewing the vignette materials, participants respond to five items on a 7-point scale regarding their trust in the sales associate.

Strengths and limitations: This experimental design for studying the relationship between vendors’ emotional displays and consumers’ subsequent trustworthiness perceptions is unique in how extensively its visual materials were developed. Such materials help immerse research subjects in the vignette scenario. For the same reason, however, this study might be difficult for other researchers to adapt, given the careful attention to detail required to create comparable visual materials.

(7)

Kennedy and Schweitzer (Citation2018), Study 1

Description: Kennedy and Schweitzer (Citation2018) ran a series of vignette experiments to investigate how individuals form trust perceptions of someone who denounces a position or suggestion as unethical. In these written scenarios, participants first read a description of a company whose manufacturing process relies on an important input that will soon become illegal. All participants read a series of ethical suggestions for how the company might address this concern, ostensibly written by a prior participant referred to as Presenter A. The materials after this point vary depending upon condition.

In the treatment condition (accusation), participants next read a series of unethical suggestions made by another presenter, referred to as Presenter B. After reading these suggestions, participants are shown Presenter A’s reaction, in which they directly call Presenter B unethical. In the first of the three control conditions, participants also read Presenter B’s suggestions before seeing Presenter A’s reaction, in which they state that they have no further comments. In the second and third control conditions, participants read Presenter A’s additional comments directly following their own presentation. In the second control group (moral pronouncement), participants read an additional statement from Presenter A, stating that any solution involving selling the illegal product in developing countries would be illegal. In the third control group, Presenter A offers no further comments after their own presentation.

Strengths and limitations: Similar to various studies above that were designed to investigate trust’s role in specific social contexts (e.g. Sah & Loewenstein, Citation2015), the design used by Kennedy and Schweitzer (Citation2018) is especially appropriate for addressing research questions involving trust after accusations, but it is likely difficult to adapt to other contexts.

(8)

Sah et al. (Citation2018), Study 4

Description: In this vignette experiment, participants read materials from a female college graduate’s blog, including her general biography and one of three posts about interior design in which the blogger gives tips on how to make a small apartment appear larger. In all conditions, this blog post is sponsored by a housing company called Apartment Guide. In one post, the blogger explicitly mentions the existence of this paid sponsorship and defines the contractual agreement between parties. The second post only implicitly mentions this relationship without explaining it. The third post makes no mention of the sponsorship. Participants are asked to rate multiple items on a 7-point scale regarding the blogger’s perceived trustworthiness along three dimensions: expertise, benevolence, and integrity (Mayer et al., Citation1995).

Strengths and limitations: This experimental design and others of its type can prove valuable for trust experimentation because their scripts are adaptable and easily modified. While replicating or redesigning a user interface for a blog website would require significant effort, this general setup can be broadly adapted to a variety of research questions involving how individuals perceive the trustworthiness of online personalities or of organisations with an online presence.

(9)

Stewart (Citation2003), Study 1

Description: Participants begin by reading materials on a source website that gives general information about laptops and what to look for when buying one. They are told that the task involves first viewing this general information to help them determine their shopping criteria and then browsing a vendor site to select a product.

The source website belongs to a computing magazine and is designed to act as a source of trust that can transfer to the laptop vendors linked from it. Pretests with subjects in the original study indicated that they were familiar with the magazine and trusted it, so the source website was expected to elicit high initial trust that could be placed onto network ties. Depending upon condition, the website contains zero, one, or nine links to potential laptop vendors. After reading through this page, participants indicate the factors that are most important to them when buying a laptop and complete a survey measuring their trusting beliefs in the source website.

Next, participants either follow one of the provided links or, if their source website has no links, click a link in the instruction bar. All participants end up on the same target website, which sells laptops. They browse for a laptop that fits their criteria and, after selecting it, rate the trustworthiness of the target site.

Strengths and limitations: This study has been adapted by both Lim et al. (Citation2006) and Stewart (Citation2006) to further study how consumers may place trust in unfamiliar vendors or organisations when they perceive a network tie to a familiar, trusted organisation. These adaptations include familiar brand logos and conditions designed to untangle the extent to which consumers’ perceptions of trustworthiness transfer between advertisers and vendors. This study design is thus especially well-suited for investigating trust transfer in online vendor contexts, where numerous factors could be varied in future work (e.g. including or excluding an explicit statement of the business relationship).

(10)

van Dijke et al. (Citation2018), Study 3

Description: Participants are told that they will be assigned to online groups of five members (in reality, they interact with predetermined responses from a computer for the duration of the experiment). They then receive the following instructions: the task will last for two rounds, and participants will be given 100 starting points in each round (which represent lottery tickets for a prize drawing after the experiment concludes). Participants must decide how many points to contribute to an organisational pool in each round. Regardless of the total number of pooled points, the pool will be distributed equally among participants; if the pool reaches 250 points, however, the points will be doubled before being distributed. Ostensibly, each five-person group comprises one individual in a high-ranking position, two in middle positions, and two in low-ranking positions.
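
The following sketch (ours; the function name and the example contributions are illustrative) captures the pooling rule just described, with the 250-point threshold and doubling applied before the equal split.

    def round_payoffs(contributions, endowment=100, threshold=250, multiplier=2):
        """Return each group member's points for one round under the pooling rule."""
        pool = sum(contributions)
        if pool >= threshold:          # the pool is doubled only if it reaches 250 points
            pool *= multiplier
        share = pool / len(contributions)  # the pool is always split equally
        return [endowment - c + share for c in contributions]

    # Example: if all five members contribute 60 points, the 300-point pool is doubled
    # to 600, each member receives a 120-point share, and ends the round with 160 points.
    print(round_payoffs([60, 60, 60, 60, 60]))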

After being shown a network connection popup on their computer terminal, participants are randomly assigned to either a middle or a low-ranking position before the first round of contributions begins. Once the first round concludes, participants are told that the highest-ranking member will take a while to evaluate their contributions. In the meantime, the experimenters manipulate participants’ sense of power by asking them to describe either a situation in which they held power over another actor or a situation in which another actor held power over them.

Procedural justice is next manipulated across two levels. Participants are told that the highest-ranking member has finished their evaluation and will decide how to split the points after the next round. Participants are either given a chance to explain their contribution or are not given such a chance. Immediately after, participants respond to five survey items regarding the highest-ranking member’s perceived benevolence and six survey items regarding their perceived integrity (both adapted from Mayer & Davis, Citation1999).

Strengths and limitations: We see this design as having broad potential for studying the development of trust in situations that require sharing and depend on actors’ benevolence. The procedure could be modified to incorporate confederates instead of simulated responses, or to manipulate subordinate voice and the degree to which the high-ranking member incorporates feedback, to name a few potential variables of interest. As for limitations, the incentive structure used in this experiment might lead to lower participant investment in the task than prepaid or flat postpaid incentives would. The evidence on whether postpaid lottery incentives increase response rates in survey studies is mixed (Buck et al., Citation2012). While such incentives are not inherently problematic, experimental trust researchers can benefit from keeping in mind the consequences of their chosen incentive systems.

(11)

Welsh and Navarro (Citation2012), Study 1a

Description: Participants are asked to read a vignette stating that they are researchers attempting to determine whether predators may pose a threat to humans in a certain area. Participants read that they have access to a relatively old data sample and a relatively new data sample, with which they must predict the future rate of predator attacks on humans. The nature of the comparatively old data varies by condition. In the high trustworthiness condition, these prior observations were collected relatively recently by the research team in the same location. In the low trustworthiness condition, the prior observations were collected in the distant past, in a distant location, by an individual not on the research team. In all conditions, the sample size of the new data is smaller than that of the old data. The old data imply that either 25% (e.g. 50 of 200 predators) or 75% (e.g. 150 of 200 predators) pose a threat to humans, with the new data implying the alternative rate. Participants are asked to indicate how many predators in the area pose a threat to humans.
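
As a hedged illustration of the judgement the task calls for (not the authors’ analysis), one normative benchmark is to pool the two samples’ raw counts, which weights each sample by its size and treats both as equally trustworthy; the new sample’s size below is a placeholder, because the description only states that it is smaller than the old one.

    def pooled_threat_rate(old_threats, old_total, new_threats, new_total):
        """Pool raw counts from both samples and return the combined threat rate."""
        return (old_threats + new_threats) / (old_total + new_total)

    # Old data: 50 of 200 predators (25%) posed a threat; new, smaller sample implying
    # the alternative 75% rate (30 of 40 is a placeholder size).
    rate = pooled_threat_rate(50, 200, 30, 40)
    print(rate)  # ~0.33: the larger, older sample dominates a size-weighted estimate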

Strengths and limitations: This experimental design is one of the few in this sample not intended for use in studying interpersonal trust. While this does not detract from its potential value in studying base rate neglect, many organisational trust researchers may not find this method suitable for their investigations.