Research Article

Examining the Differential Effectiveness of Fear Appeals in Information Security Management Using Two-Stage Meta-Analysis

Paul Benjamin Lowry, Gregory D. Moody, Srikanth Parameswaran, & Nicholas James Brown

ABSTRACT

Most of the information security management research involving fear appeals is guided by either protection motivation theory (PMT) or the extended parallel processing model (EPPM). Over time, extant research has extended these theories, as well as their derivative theories, in a variety of ways, leading to several theoretical and empirical inconsistencies. The large body of fragmented, and sometimes conflicting, research has muddied the broader understanding of what drives protection motivation and defensive motivation. We provide guidance to the security discourse by offering the first study in the literature to employ two-stage meta-analytic structural equation modeling (TSSEM), which combines covariance-based structural equation modeling and meta-analysis. Information systems (IS) researchers have traditionally combined meta-analysis and structural equation modeling for such purposes using older approaches that have several serious statistical flaws. Using 341 systematically selected empirical security articles (representing 383 unique studies) and TSSEM, we pool five datasets to compare and test six models inspired by issues in the broader fear-appeals literature, from which we examine the effects of constructs and paths in the security fear-appeals literature. We confirm the importance of both the threat- and coping-appraisal processes; establish the central role of fear and show that it is more important than threat; show that efficacy is a stronger predictor of protection motivation than is threat; demonstrate that response costs as currently measured are ineffective but that maladaptive rewards have a strong negative effect on protection motivation and a positive effect on defensive motivation; and provide evidence that dual models of danger control and fear control should be used.

Supplementary Material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/07421222.2023.2267318

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1 Protection motivation (PM) refers to people’s attitudes, intentions, or behaviors aimed at protecting themselves from a perceived threat by following the recommended action [Citation58, Citation119, Citation126].

2 Formally, a threat is defined as an individual’s perception that something or someone has the intention to cause them harm [Citation153].

3 A threat can be perceived to exist by the individual only if both conditions are met. For example, an illness that results in death (high severity) but that has been completely eradicated (no susceptibility) results in no threat, whereas the current threat of H1N1 flu is considered high due to the highly uncomfortable symptoms (severity) and the ease with which it is passed to others (high susceptibility).

4 Coping with a perceived threat is possible only if the person believes that the response can reduce or remove the threat (i.e., by reducing the severity or susceptibility) and that they can execute this response. Extending the previous example, an individual who perceives H1N1 flu as a threat will appraise their ability to cope as high if they believe that the H1N1 vaccine reduces their susceptibility to H1N1 flu and that they are able to receive the vaccine. However, if the individual is either unable to obtain the vaccine or does not believe it reduces the possibility of getting H1N1 flu, the coping appraisal will be low.

5 Effect size is fundamentally about “what works” and thus has long been a bridge between research and practice. Effective analysis of effect size allows scientists to move beyond the noise of significance testing (significant vs. not significant) and find the signal (does the effect matter, and does it work in a meaningful way that would change practice outcomes?). In basic meta-analysis, studies are compared by calculating an effect size, which “reflects the magnitude and directionality of the association between the two variables” [Citation11, p. 478]. Meta-analysis has been applied in many fields, and it has been used successfully in traditional management and IS research [e.g., Citation57, Citation59, Citation84, Citation98, Citation107, Citation133, Citation160]. Meta-analysis is designed primarily for testing simple pairings of relationships between one independent variable and one dependent variable and for testing simple moderation.
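As a simple illustration of the correlation-based effect sizes discussed in this note, the following R sketch (using simulated data and illustrative variable names, not data from the article) shows how a correlation conveys both the magnitude and the direction of an association:

```r
# Illustrative only: a correlation as an effect size.
set.seed(1)
x <- rnorm(200)
y <- 0.4 * x + rnorm(200)
cor(x, y)  # the sign gives directionality; the absolute value gives magnitude
```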

6 Papers that had only descriptive data or data that could not be used to derive correlations were excluded. Whenever a study should have been in scope but the authors did not provide correlation data, we contacted the authors and requested the correlations (or raw data) twice; studies were excluded if the authors did not respond or could not provide the necessary data.

7 For example, if authors published a conference paper and used the same dataset for a subsequent journal publication, then the conference publication was excluded from the sample population. We retained the journal version to ensure that the unit of analysis involved only independent studies [Citation14, Citation75, Citation98].

8 The two data files with five datasets are necessary because of an interesting nomological network difference between PMT and the EPPM: PMT uses two first-order representations of threat (severity + vulnerability) and two first-order representations of efficacy (self-efficacy + response efficacy). Conversely, the EPPM combines these into second-order constructs called threat and efficacy. Thus, to test the EPPM-related models (2, 4, and 6), all the existing first-order threats and efficacies needed to be collapsed into second-order constructs, and we could not use any first-order versions of these. By contrast, all the PMT-related models required only first-order versions of these constructs, whereas any second-order version had to be excluded.

9 If any reliability data were missing, we used an assumed level of 0.80, but only if the study’s other reliabilities were 0.80 or higher and the authors followed expected methodological procedures, as suggested by research [Citation3, Citation11]. Otherwise, the study was dropped.
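A minimal R sketch of the standard correction for attenuation that underlies this kind of reliability adjustment, with illustrative values only (the article cites [Citation3, Citation11] for its exact procedure):

```r
# Illustrative values only; 0.80 stands in for a missing reliability as described above.
r_obs   <- 0.35                                 # observed correlation between two measures
alpha_x <- 0.85                                 # reported reliability of measure X
alpha_y <- 0.80                                 # assumed reliability of measure Y
r_corrected <- r_obs / sqrt(alpha_x * alpha_y)  # disattenuated correlation used in pooling
```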

10 We also followed other meta-analytic studies in dealing with results that are generalizable across a broad domain [Citation64, Citation133]; consequently, we did not consider the number of measurement items used for a given measure and treated all such measures equally, as suggested by leading guides to meta-analysis [Citation2, Citation14, Citation22]. However, we did gather reliability statistics for each measure (α) to make adjustments for measurement reliability, as suggested by leading MASEM research [Citation11].

11 Assessing the file drawer problem for MASEM, including TSSEM, is still in its infancy [Citation32]. Thus, we used conventional meta-analysis (i.e., analyzing single correlations instead of correlation matrices) to assess the file drawer problem. We used the random-effects model to aggregate the observed effect sizes of each hypothesis in the best fit models (Models 3, 5, 6). The observed effect sizes were Fisher r-to-z transformed before applying the random-effects model [Citation13]. The resultant aggregated effect sizes were used to assess the file drawer problem [Citation130].
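A minimal R sketch of this file drawer assessment using the metafor package, assuming a hypothetical data frame dat with one row per primary study for a given hypothesis (columns ri for the observed correlation and ni for the sample size; the names are illustrative, not from the article):

```r
library(metafor)
es <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)  # Fisher r-to-z transformation
re <- rma(yi, vi, data = es, method = "REML")                 # random-effects aggregation
fsn(yi, vi, data = es, type = "Rosenthal")                    # fail-safe N for the file drawer problem
```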

12 A type II error occurs when a statistical test fails to reject a null hypothesis that is false. Statistical power is the probability that a statistical test will reject a null hypothesis when the alternative hypothesis is true [Citation43].
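A minimal base-R illustration of statistical power (the values are illustrative, not from the article), using a two-sample t-test with a medium-sized effect:

```r
# Power of a two-sample t-test with 50 participants per group and effect size d = 0.5.
power.t.test(n = 50, delta = 0.5, sd = 1, sig.level = 0.05)$power  # approximately 0.70
```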

13 Multicollinearity exists when a factor in the research model is highly correlated with one or more other factors in the model [Citation6].

14 The VIF score of a focal predictor is calculated as 1 divided by 1 – R2, where R2 is the R-squared value obtained by regressing that focal predictor on all other predictors in the model [Citation63, Citation108].
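A minimal R sketch of this calculation, using illustrative predictor names (x1 is the focal predictor; x2 and x3 are the other predictors in a hypothetical data frame dat):

```r
r2  <- summary(lm(x1 ~ x2 + x3, data = dat))$r.squared  # R-squared of the focal predictor regressed on the others
vif <- 1 / (1 - r2)                                     # variance inflation factor
```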

15 See the following URL for full documentation of and access to “metaSEM” for implementing TSSEM: https://cran.r-project.org/web/packages/metaSEM/vignettes/Examples.html
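For readers unfamiliar with the package, the following is a minimal, hypothetical sketch of the two TSSEM stages in metaSEM; the construct names, the list of correlation matrices (my.cors), and the sample sizes (my.n) are illustrative and do not reproduce the article’s models:

```r
library(metaSEM)

# Stage 1: pool the correlation matrices with a random-effects model
stage1 <- tssem1(Cov = my.cors, n = my.n, method = "REM", RE.type = "Diag")
summary(stage1)

# Stage 2: fit a structural model to the pooled matrix (RAM specification via lavaan syntax)
ram <- lavaan2RAM("PM ~ Fear + Efficacy
                   Fear ~ Threat",
                  obs.variables = c("PM", "Fear", "Efficacy", "Threat"))
stage2 <- tssem2(stage1, Amatrix = ram$A, Smatrix = ram$S, Fmatrix = ram$F,
                 intervals.type = "LB")  # likelihood-based confidence intervals
summary(stage2)
```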

16 TSSEM has already been extensively developed and compared to other techniques (e.g., traditional MASEM, SEM, regression, and meta-analysis) and has been peer reviewed in multiple elite interdisciplinary journals, including multiple publications in Psychological Methods, the top APA journal on methodology and statistics [Citation27, Citation31, Citation38, Citation79]; Structural Equation Modeling, the leading interdisciplinary journal on SEM [Citation28, Citation29, Citation39]; Behavior Research Methods [Citation30, Citation78]; Journal of Applied Psychology [Citation35]; Research Synthesis Methods [Citation37, Citation40]; Health Psychology Review [Citation41]; Multivariate Behavioral Research [Citation77]; and other outlets. However, to date, its use is absent from top management journals, which are still using the older, flawed version of MASEM.

17 To resolve this issue with traditional MASEM, the PCM in TSSEM is not an observed correlation matrix and is not used as an “inputted covariance matrix” to fit the structural model (as in traditional MASEM). Only the final covariance matrix is used to fit the model with a CB-SEM analysis.

18 To resolve this issue with traditional MASEM, TSSEM accounts for the sample variation and interdependency among effect sizes across primary studies by weighting the PCM using the ACM [Citation38, Citation121]. Such weighting means that the larger the sampling variation, the lesser the weight given to the correlations, an approach consistent with traditional meta-analysis [Citation32, Citation75]. Consequently, TSSEM provides each observation with its appropriate weight to be considered when estimating the final parameter values [Citation38, Citation121].
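As a rough sketch of this weighting idea (assuming r is the vector of pooled Stage 1 correlations, rho_theta the model-implied correlations, and V the ACM), the Stage 2 discrepancy function weights each residual by the inverse of its sampling variability:

```r
# Sketch only: weighted least squares discrepancy. Larger sampling variation in V
# implies a smaller weight for the corresponding correlation.
wls_fit <- function(r, rho_theta, V) {
  d <- r - rho_theta
  as.numeric(t(d) %*% solve(V) %*% d)
}
```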

19 To resolve this issue with traditional MASEM, through the weighting approach, TSSEM uses the total sample size corresponding to each effect size to compute the final covariance matrix rather than using ad hoc approaches such as the harmonic mean or mean sample size [Citation32, Citation38]. Consequently, in TSSEM, the type I and type II error rates and the standard errors of parameter estimates are controlled so that correct inferences can be made from them [Citation39, Citation108].

20 A fixed-effects model assumes effect-size homogeneity, in which a single true population effect size applies to all the primary studies included in the analyses [Citation108]. This model applies when primary studies in a meta-analytic dataset share common conditions and measures. Thus, the pooled effect size is considered the population effect size, and effect-size variations across studies are attributed to sampling errors [Citation108].

21 By contrast, a random-effects model assumes effect-size heterogeneity, in which each study has its own true population effect size [Citation76]. This model applies when there is variation in study conditions and measures, as is the norm for organizational or behavioral datasets. Thus, the pooled effect size is the mean of the distribution of effect sizes, and the effect-size variations across studies are attributed to sampling errors and between-study heterogeneity [Citation108].
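A minimal metafor sketch contrasting the two pooling models described in these two notes, assuming illustrative vectors of Fisher-z effect sizes (yi) and sampling variances (vi):

```r
library(metafor)
rma(yi, vi, method = "FE")    # fixed effects: one true effect; weights = 1 / vi
rma(yi, vi, method = "REML")  # random effects: weights = 1 / (vi + tau^2), where tau^2 is between-study variance
```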

22 The latest standard is to exclude nonpositive definite matrices before proceeding with the TSSEM Stage 1 analysis [Citation143]. A nonpositive definite matrix indicates that the values of the matrix are inconsistent [Citation41] and violate structural equation modeling assumptions [Citation32, Citation130]. Adjusting effect sizes for reliability, a procedure applied in our analysis as is common for meta-analysis, is one of the sources of nonpositive definiteness [Citation41, Citation143, Citation159].
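A minimal sketch of this screening step, assuming my.cors is a list of reliability-adjusted correlation matrices and my.n the matching sample sizes (the names are illustrative); metaSEM provides is.pd() to flag nonpositive definite matrices:

```r
library(metaSEM)
pd   <- is.pd(my.cors)   # TRUE for positive definite matrices
keep <- which(pd)        # indices of matrices to retain
my.cors <- my.cors[keep]
my.n    <- my.n[keep]    # drop the corresponding sample sizes as well
```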

23 A good-fitting model will have a comparative fit index (CFI) and Tucker-Lewis Index (TLI) of 0.9 or higher, a root mean square error of approximation (RMSEA) of 0.08 or lower, and standardized root mean square residual (SRMR) values below 0.08 [Citation72, Citation108].

24 Crucially, the chi-square test is sensitive to large sample sizes, such that the larger the sample (e.g., > 400), the higher the likelihood that the null hypothesis is rejected [Citation121]. This issue is more pronounced in TSSEM (and traditional MASEM) because the total sample sizes are extremely large and, as a result, exact fit rarely holds [Citation78]. Thus, in conducting TSSEM, we use RMSEA, SRMR, CFI, and TLI to assess model fit, as is standard TSSEM practice [Citation78, Citation121].

25 The AIC is an especially effective means of non-nested model selection, because it estimates the quality of each model relative to that of each of the other models [Citation5, Citation16, Citation101, Citation141]. The BIC has properties comparable to those of the AIC but is slightly more conservative in penalizing potentially unwieldy non-nested models that have a higher likelihood of overfitting. Like the AIC, the BIC is recognized by leading methodologists as an effective criterion for model selection among a finite set of non-nested models, based on a likelihood function, with the lowest BIC indicating the most preferred model [Citation111, Citation122, Citation152]. However, in both cases, these must be based on the same covariance matrix, and thus they cannot be used to compare models with different variables and different datasets. This is why we can only horse race Models 4 and 6.
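For reference, both criteria trade off fit (the log-likelihood) against complexity (the number of free parameters); a minimal sketch, assuming logL is the model log-likelihood, k the number of free parameters, and n the sample size:

```r
aic <- 2 * k - 2 * logL        # AIC: constant penalty of 2 per parameter
bic <- k * log(n) - 2 * logL   # BIC: penalty grows with n, so it is more conservative for large samples
```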

26 An indirect effect between a given pair of independent and dependent variables indicates the effect of that independent variable on the dependent variable transmitted through an intervening (i.e., mediating) variable [Citation108]. The indirect effect is estimated using the product of the direct effects on the indirect route between the independent and dependent variables [Citation108, Citation143]. For example, in Model 6, the indirect effect of 0.090 (0.049, 0.130) between T2n and PM is the product of the T2n → Fear and Fear → PM direct effects. This indirect effect indicates the extent to which the effect of T2n on PM is transmitted through the intervening variable, Fear. The total effect refers to the sum of the direct and indirect effects between a given pair of independent and dependent variables [Citation108]. For example, in Model 6, the total effect of 0.194 (0.157, 0.230) is the sum of the direct effect of 0.104 (0.046, 0.162) and the indirect effect of 0.090 (0.049, 0.130) between T2n and PM. The total effect of 0.194 (0.157, 0.230) indicates that a one standard deviation increase in T2n results in a 0.194 standard deviation increase in PM.
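A minimal R restatement of the Model 6 arithmetic reported in this note:

```r
direct   <- 0.104              # direct effect of T2n on PM
indirect <- 0.090              # (T2n -> Fear) * (Fear -> PM)
total    <- direct + indirect  # 0.194, the reported total effect
```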

27 For example, in PMT Model 5, the average efficacy effect size is 0.226 and the average threat effect size is 0.0585, making the efficacy effect roughly 384 percent as large as the threat effect; in EPPM Model 6, efficacy is 0.307 and threat is 0.104, roughly 295 percent as large.

28 Response costs work well only in incomplete models when maladaptive rewards are not considered.

29 Only maladaptive rewards work as a positive predictor of DM, and only threat severity acts as a negative predictor.

Additional information

Funding

This work was supported by the Hong Kong Research Grants Council General Research Fund (GRF) [11520816]; City University of Hong Kong Strategic Research Grant (SRG) [7004122]; The University of Hong Kong and University Grants Council start-up and seed grants; City University of Hong Kong and University Grants Council Start-Up Grant [7200256]; Hong Kong Research Grants Council General Research Fund (GRF) [147712].

Notes on contributors

Paul Benjamin Lowry

Paul Benjamin Lowry ([email protected]; corresponding author) is an Eminent Scholar and the Suzanne Parker Thornhill Chair Professor in Business Information Technology (BIT) at the Pamplin College of Business at Virginia Tech, where he serves as the BIT PhD and Graduate Programs Director. He received his PhD in Management Information Systems from the University of Arizona. Dr. Lowry’s research interests include organizational and behavioral security and privacy; online deviance, online harassment, and computer ethics; human-computer interaction, social media, and gamification; and business analytics, decision sciences, innovation, and supply chains. He has over 290 publications, including more than 165 journal articles in the Journal of Management Information Systems (JMIS), Information Systems Research (ISR), MIS Quarterly, and others. He is a member of the Editorial Board of JMIS. He is also a senior editor of the Journal of the AIS and an associate editor of ISR.

Gregory D. Moody

Gregory D. Moody ([email protected]) is Lee Professor of Information Systems and a Troesh Scholar in the Management, Entrepreneurship and Technology Department in the Lee Business School at the University of Nevada, Las Vegas, and Director of the Graduate MIS program. He received a PhD from the University of Pittsburgh and a PhD from the University of Oulu, Finland. Dr. Moody has published in the Journal of Management Information Systems, Information Systems Research, MIS Quarterly, Journal of the AIS, and other journals. His interests include IS security and privacy, e-business (electronic markets and trust), and human-computer interaction (Web site browsing and entertainment). He is a senior editor of Information Systems Journal and Transactions on Human-Computer Interaction.

Srikanth Parameswaran

Srikanth Parameswaran ([email protected]) is an assistant professor of management information systems in the School of Management at Binghamton University. He earned his PhD in Management Science and Systems from the State University of New York at Buffalo. Dr. Parameswaran’s research interests are in the areas of IT innovation, meta-analytic structural equation modeling, user-generated and web content mining, and technology-mediated health outcomes. His research has been published in the Journal of Management Information Systems, Journal of the Association for Information Systems, Organizational Research Methods, Information & Management, Journal of the American Medical Informatics Association, Information Systems Frontiers, and other journals. He was featured on the Poets & Quants list of the “Top 50 Undergraduate Business Professors of 2021.”

Nicholas James Brown

Nicholas Brown ([email protected]) is an assistant professor of information systems at the Kelley School of Business at Indiana University. He earned his PhD in Business Information Technology at the Pamplin College of Business at Virginia Tech. Dr. Brown’s research interests include privacy in machine learning systems, information security management, and IS designs that modify consumer behaviors and decisions. His work has been published in the Journal of Management Information Systems, Computers & Security, and other outlets. He has served as an ad hoc reviewer for leading conferences and journals and has won “Best Reviewer” and “Runner-up Best Reviewer” awards at multiple ICIS conferences. He has substantial industry experience and MBA teaching experience.
