Editorial

The nuts and bolts of publishing quantitative research

This article details how to handle research method-related issues when publishing quantitative research in Human Resource Development International and other social science journals. First, we must acknowledge the need for an Introduction that presents a clear, compelling rationale for the study, scientifically interesting research questions and hypotheses, a strong theoretical framework, relevant empirical literature, and a discussion of the study’s possible scientific contributions. With that foundation in mind, important pointers are presented for publishing quantitative work, starting with the Method section and followed by the Results, Discussion, and Limitations and Recommendations for Future Research sections of a quantitative research article.

Method

One of two effective ways to communicate the research questions/hypotheses is to list them explicitly in the paragraph before the Method section rather than taking a narrative approach. Narrative approaches can be confusing, especially to new researchers, because it can be difficult to extract what is being hypothesised; thus, keeping the reading audience in mind, listing the research questions/hypotheses simplifies things for the reader. The other useful approach is to introduce each research question/hypothesis at the end of the corresponding subsection of the Introduction. Either way, after putting forward the research questions/hypotheses, include a figure that presents the model (e.g. a path model) with arrows linking all the research variables. For the sake of clarity, models that label each path with its corresponding hypothesis are most illuminating.

Design

The research design, which must be clearly communicated and must speak to the research problem, refers to the structure of the study. The design’s primary role is to limit the chances of drawing invalid inferences from the data (Dannels 2010). In plain language, it is the strategy used to make certain the research methods align with answering the research questions or testing the hypotheses. The design can be descriptive (i.e. answering What? questions) or explanatory (i.e. answering Why? or How? questions). Importantly, the results of descriptive research provide the essential preliminary evidence that supports conducting later explanatory, causal research (Reio 2016). Neither is superior per se, as each contributes significantly to what we know about social phenomena. What matters most is that the design chosen is the one required to address the research questions/hypotheses.

When presenting the design, we must be clear about its causal or non-causal nature. The researcher needs to set forth the strengths and weaknesses inherent in the selected design and disclose how the design constrains the inferences drawn later in the article. For example, in a descriptive study, we would advise that the results cannot support causal inferences. In an explanatory study, we could speak to causality, while acknowledging possible ecological validity issues.

As part of considering the research design, it is important to address possible common method variance (CMV) bias through the design when using monomethod approaches (e.g. self-reports; observations), as correlations among variables can be inflated or deflated, thereby skewing the results and possibly invalidating the research findings (see Podsakoff et al. 2003). Fortunately, there are procedural and statistical approaches to reduce CMV’s likelihood. More recently, scholars have begun to challenge strongly the notion that CMV is really a significant problem in social science research. For example, Bozionelos and Simmering (2022) found little evidence of significant CMV issues in six of the major human resource management journals over the past ten years. Although Spector (2006) has likewise argued for years that CMV is not really a significant concern, Editors and reviewers nonetheless tend to demand that it be addressed in the paper. Consequently, when designing the study, the first step in controlling for possible CMV bias is to address procedural remedies (e.g. assure participant anonymity, clarify that there are no right or wrong answers, provide clear instructions, use well-validated measures, collect independent and dependent variable data at different times, include a theoretically relevant control variable as an extra measure in the study [e.g. affect; social desirability]). Second, after the data have been collected, statistical remedies like Harman’s single-factor diagnostic test (a preliminary step), the general factor covariate technique, and the unmeasured latent method factor technique can also be employed to test for possible CMV bias (Podsakoff et al. 2003). The point is to be certain to discuss how possible CMV bias was addressed in the study and, based on the procedural and statistical steps taken, how CMV was or was not likely to be a significant issue.
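For those unfamiliar with Harman’s single-factor diagnostic, a minimal sketch follows. It approximates the test with the first unrotated principal component of the item matrix; the DataFrame name `items` and the 50% threshold (a conventional rule of thumb) are assumptions, not prescriptions.

```python
# Harman's single-factor diagnostic (preliminary CMV check): if one
# unrotated factor accounts for the majority of the variance across
# all survey items, CMV may be a concern worth further testing.
import pandas as pd
from sklearn.decomposition import PCA

def harman_single_factor(items: pd.DataFrame) -> float:
    """Proportion of variance explained by the first unrotated component."""
    standardized = (items - items.mean()) / items.std()
    pca = PCA(n_components=1)
    pca.fit(standardized.dropna())
    return float(pca.explained_variance_ratio_[0])

# Hypothetical usage:
# items = pd.read_csv("survey_items.csv")
# if harman_single_factor(items) > 0.50:
#     print("First factor dominates -- apply the stronger remedies above.")
```

A passed (or failed) diagnostic is only preliminary evidence; the general factor covariate and unmeasured latent method factor techniques remain the stronger tests.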

Participants

One of the almost maddening issues when reading the results of a quantitative study is the lack of sufficient information to acquire a sense of who the participants were. Authors must describe the participants by reporting the demographics of interest (e.g. gender, age, ethnicity, SES). This effort supports secondary data analysis and future replication efforts (see Dannels 2010). One of the most egregious oversights occurs when the author, working to save space, makes a statement like, ‘the ethnic composition of the sample is representative of the research population’, without presenting any ethnic group data. Thus, we have no idea what the ethnic breakdown of the sample might be, which could have theoretical, empirical, practical, and ethical relevance. Effective authors often display a table that thoroughly presents frequencies and percentages of categorical data (e.g. gender, ethnic group), and the means, standard deviations, and ranges of continuous data (e.g. age; years of experience).
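A minimal sketch of how such a table might be assembled, assuming the sample sits in a pandas DataFrame; the column names are hypothetical:

```python
# Participant-description table: frequencies and percentages for
# categorical demographics; means, standard deviations, and ranges
# for continuous ones.
import pandas as pd

def describe_sample(df: pd.DataFrame, categorical: list, continuous: list) -> None:
    for col in categorical:
        counts = df[col].value_counts()
        pct = (counts / len(df) * 100).round(1)
        print(pd.DataFrame({"n": counts, "%": pct}), "\n")
    for col in continuous:
        s = df[col].dropna()
        print(f"{col}: M = {s.mean():.2f}, SD = {s.std():.2f}, "
              f"range = {s.min()}-{s.max()}")

# describe_sample(sample, ["gender", "ethnic_group"], ["age", "years_experience"])
```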

Research Measures

Indubitably, we need high-quality operationalisations of variables for the sake of making reliable and valid inferences (Reio 2021). The author should provide information about the measure’s name, how it is administered, and on whom it was normed, accompanied by reliability (e.g. test-retest, Cronbach’s alpha [α]) and validity (e.g. construct) evidence. Discuss the advantages and disadvantages associated with the measure in question. For example, if prior researchers reported uneven results in yielding acceptable reliability scores (generally, researchers expect reliability scores of no less than .70), this should be disclosed and noted as a possible limitation. Because new measures commonly demonstrate lower reliabilities, especially in cross-cultural research, Chretien et al. (2020) urge researchers not to relegate low-alpha measures automatically to the file drawer of forgotten data. Although beyond the scope of this article, Chretien and colleagues offer a number of statistical remedies that warrant in-depth examination for expertly handling low alphas. Thus, low alphas are not automatically ‘bad’; rather, they can be examined systematically to provide support for retaining a low-alpha measure in a study.
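For reference, a short sketch of the standard Cronbach’s alpha computation; the data layout (complete cases, respondents in rows, items in columns) is an assumption:

```python
# Cronbach's alpha from the classic formula:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix with no missing values."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# alpha = cronbach_alpha(scale_items)  # scale_items is hypothetical
# print(f"alpha = {alpha:.2f}")  # values below .70 warrant the checks above
```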

It is appropriate, too, to inform the reader that the measure has been used successfully in empirical research with similar samples in similar settings. For example, in a Chinese workplace study where the research measure was developed initially for use in a western setting, clarify why the scale is appropriate for use in this non-western setting. Citing prior researchers who have used the measure in non-western settings like China helps justify the measure’s use and better supports the valid interpretation of the scores derived from it.

In addition, a sample item from each of the scales and/or subscales should be included in the write-up. If a Likert scale is used, present the anchors; for example, on a behavioural engagement scale (Shuck, Adelson, and Reio 2017) where the respondent is asked to rate their level of agreement with a statement like ‘I do more than what is expected of me’, the authors would report the anchors of 1 = ‘strongly disagree’ to 5 = ‘strongly agree’. If a number of measures in the study share the same Likert anchors, one could state that a battery of measures shares the same anchors to avoid repetition.

Procedures (protocol)

The importance of providing clear, step-by-step guidance as to how the data were collected ethically cannot be overemphasised because it guides possible replication efforts. Institutional Review Board guidance is typically presented in this section to convince the reader that ethical research practices were followed to the letter.

Replicability is vital for building confidence in the scientific merit of one’s research. Although non-replicable research suggests that the research design or methods may be inadequate or require further refinement, paradoxically it may still warrant examination and subsequent publication in that it contributes to the research literature. Maxwell, Lau, and Howard (2015) note that the problem with replication in social science research like psychology, or by extension HRD research, is that many studies are not replicable because of small sample sizes (e.g. due to language and geographic location barriers) and low statistical power, issues that especially plague cross-cultural research. Such conditions set the stage for sometimes uneven and unreplicable research findings, where the null hypothesis is rejected when it was in fact true (false positive; Type I error) or where one fails to reject the null hypothesis when it was false (false negative; Type II error). In plain language, a Type I error occurs when the researcher thinks they have found a significant effect when there was not one. A Type II error, on the other hand, occurs when the researcher overlooked an effect when there actually was one (Cohen et al. 2003). It may be unrealistic, therefore, to claim on the basis of a single replication study that the prior research was invalid, even when the replication has an appropriately large sample size and sufficient statistical power to identify true effects (Maxwell, Lau, and Howard 2015). With null findings in replication research, a stronger contribution might be to offer additional steps one might take to replicate the research more amply before concluding that the previous findings do not hold.
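Low statistical power can be addressed before data collection with an a priori power analysis. A sketch using statsmodels, assuming a two-group comparison and the conventional (not prescribed) targets of a medium effect, α = .05, and power = .80:

```python
# A priori power analysis: sample size per group needed to detect a
# medium standardized effect (d = .50) in a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group: {n_per_group:.0f}")  # approximately 64
```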

The information should include justification for the data collection approach. If a web-based survey was used to collect the data, tell us why this approach was selected over other viable data collection approaches (e.g. paper-and-pencil, telephone, texting). Dillman (2007), for instance, provides guidance on sampling approaches (e.g. random, stratified, multi-stage) and on successfully administering a survey, including means to ensure clear instructions and content validity and, ultimately, to optimise response rates.

Conventional wisdom dictates that trained researchers should collect the data. On a multi-author paper, the individual who collected the data should be identified. For example, where the second author was primarily responsible for collecting the data, the authors might state, ‘The data were collected by the second author’. The qualifications of those collecting the data should also be made clear if they are not among the authors. Thus, in a workplace study, if graduate students or supervisors were charged with data collection, assure the reader that those individuals received appropriate research method instruction to maintain the ethical requirements and fidelity of the study. Likewise, if a corporate trainer was selected to deliver an intervention, the reader needs assurance that the trainer had the training required to deliver the intervention in alignment with the protocol(s) designated for the study.

Results

The Results section is where the researcher describes and presents what they discovered after analysing the data. It is in this section, then, that the demographic data are examined and the research questions/hypotheses are tested. First, present the descriptive data (e.g. demographics) in a table with enough information (e.g. frequencies) to allow the reader to understand the composition of the sample. Tie the discussion of the demographic data to the table, focusing on the variables that are most theoretically and empirically interesting. Second, present a zero-order correlation table of the research variables, along with means and standard deviations, as this information is important for supporting meta-analytic work (Dannels 2010). The correlations should be examined preliminarily to determine whether the magnitude and direction of the relationships were roughly as expected. If the direction of a relationship is opposite to what was expected, check for input errors (Cohen et al. 2003). Third, each research question/hypothesis being tested should be restated at the beginning of the relevant passage, followed by the analyses used to answer the research question or test the hypothesis. This approach is especially helpful because it keeps the reader from having to go back and forth locating the actual research question or hypothesis. For example, it is no service to the reader when the author simply states that H3 was supported, as the reader cannot be expected to recall exactly what that hypothesis was. Simply restating it up front is far more appropriate.
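A minimal sketch of the correlation table with means and standard deviations appended, assuming the continuous research variables sit in a pandas DataFrame:

```python
# Zero-order correlation table with M and SD columns, the format
# that supports later meta-analytic work.
import pandas as pd

def correlation_table(df: pd.DataFrame) -> pd.DataFrame:
    table = df.corr().round(2)
    table["M"] = df.mean().round(2)
    table["SD"] = df.std().round(2)
    return table

# print(correlation_table(study_vars))  # study_vars is hypothetical
```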

Information about normality (skewness should be between −1.0 and +1.0) and about how missing data and outliers were handled should be presented. Further, the data-analytic techniques should be described, along with tests of their assumptions and a justification for their use. It would be prudent also to note possible limitations associated with the statistical technique, assuring the reader that the technique, despite its limitations, is the best approach among competing approaches. For example, when conducting exploratory factor analysis, the researcher has numerous possible rotations (e.g. orthogonal, oblique), but if the researcher, for theoretical reasons, expects the factors to correlate, then an oblique rotation would be best (Cohen et al. 2003).
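A short data-screening sketch covering the skewness band named above, plus a simple univariate outlier flag; the |z| > 3 cut-off is a common convention and an assumption here, as is the column name:

```python
# Preliminary screening: skewness within -1.0/+1.0 and a |z| > 3
# univariate outlier count for a single variable.
import numpy as np
from scipy import stats

def screen_variable(x: np.ndarray, name: str) -> None:
    x = x[~np.isnan(x)]
    skew = stats.skew(x)
    z = np.abs(stats.zscore(x))
    print(f"{name}: skewness = {skew:.2f} "
          f"({'ok' if -1.0 <= skew <= 1.0 else 'check'}), "
          f"outliers (|z| > 3): {int((z > 3).sum())}")

# screen_variable(df["engagement"].to_numpy(), "engagement")
```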

Be sure to report effect sizes and confidence intervals (see Reio and Callahan 2004), because they add so much richness to the interpretation of the analyses beyond null hypothesis statistical testing (i.e. p < .05). Effect size speaks to the magnitude of differences between groups or the strength of relationships between variables, while confidence intervals add precision to a study; that is, a 99% confidence interval is the range of values that, with 99% confidence, contains the true population mean (Cohen et al. 2003). Again, this type of information is meaningful for those conducting meta-analytic work (Dannels 2010).
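As an illustration, a sketch of Cohen’s d for two independent groups with a confidence interval for the mean difference; the pooled degrees of freedom are a simple approximation (Welch’s correction is an alternative):

```python
# Effect size (Cohen's d) and a confidence interval for the mean
# difference between two independent groups.
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) +
                         (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def mean_diff_ci(a: np.ndarray, b: np.ndarray, confidence: float = 0.95):
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    t_crit = stats.t.ppf((1 + confidence) / 2, df=len(a) + len(b) - 2)
    return diff - t_crit * se, diff + t_crit * se
```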

Increasingly, moderation and mediation analyses are being used to test hypotheses in organisational research. The contributions of such analyses are hard to overstate because they allow for far more sophisticated tests that can yield significant new theoretical, empirical, and practical insights. As mentioned before, be certain to support the use of moderation and mediation analysis and discuss any possible limitations.
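For instance, a minimal moderation test can be run as an OLS regression with a mean-centred interaction term; a sketch using statsmodels, with placeholder variable names (x, m, y):

```python
# Moderation via OLS: the x_c:m_c interaction coefficient tests
# whether m moderates the x-y relationship.
import pandas as pd
import statsmodels.formula.api as smf

def moderation_model(df: pd.DataFrame):
    d = df.copy()
    d["x_c"] = d["x"] - d["x"].mean()  # centring eases interpretation
    d["m_c"] = d["m"] - d["m"].mean()
    return smf.ols("y ~ x_c * m_c", data=d).fit()

# print(moderation_model(df).summary())
```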

All tables used to present the findings should follow The Chicago Manual of Style format used by Human Resource Development International. Be certain to define all acronyms and include the ‘N’ for each table, along with the symbols used to identify p-values. Further, state findings plainly; interpretation of p-values ≥ .05 (i.e. ‘null’ results) should not be attempted because such interpretations would not be tenable. At best, suggest that p ≥ .05 results indicate the need for future research.

Discussion

The Discussion section is where the author attempts to make sense of everything related to the study in light of the research literature. Thus, the author finally gets to reveal what they found, compare and contrast it with prior research, consider its theoretical, empirical, and practical implications, specify the study’s possible limitations accompanied by recommendations for future research, and end by offering thoughtful conclusions. Unfortunately, in my experience as Editor and reviewer, many authors fail to attend sufficiently to the Discussion section, unnecessarily increasing the manuscript’s likelihood of being rejected for publication.

In the opening paragraph, briefly describe the possible weaknesses of the study related to design (e.g. a cross-sectional design precludes making causal claims), measure quality (e.g. a measure had a problematic Cronbach’s alpha), or unforeseen issues (e.g. a less than desirable sample size due to COVID) and how they were handled appropriately. Then, end the paragraph in a positive way (Crane et al. 2017). For example, the author might say, ‘Notwithstanding the aforementioned issues, the findings largely corroborate the results of prior organizational studies and offer interesting and consequential contributions to HRD research and practice’.

One of the more fascinating parts of a paper is where the author enters into a conversation with the literature by comparing and contrasting the findings with what was cited earlier in the manuscript. Take some of the more seminal or significant prior research studies, make the comparisons, and proffer an explanation as to why the current findings did or did not support those studies. Be sure to make meaning through the theories undergirding the research to explain the theoretical, empirical, and practical significance of the findings. For example, if the findings support the prior research, state that this is the case and indicate how the findings enrich what we already knew about the theories and prior research undergirding the study. Conversely, in a workplace incivility study where the researcher found a far less powerful negative association between uncivil and creative behaviour than prior research suggested, explain the inconsistent findings through incivility and creativity theories. The researcher might suggest the possible presence of unmeasured moderator or mediator variables and the need for additional research to explain what we do not yet understand, thereby extending theory and research further. Taking these actions demonstrates how the researcher has extended the research literature and bolsters the scientific merit of the research.

Limitations and recommendations for future research

Research studies always have limitations, and it is vital that the researcher discloses the potential limitations and how they were addressed through the research design, data collection protocols, or analyses. For instance, in a study using a web-based survey where the response rate was but 5%, discuss how low response rates are common when web-based surveys are used and describe the steps taken to ascertain whether the low response rate was actually problematic. The researcher could take a random sample of 30 individuals from the research population, call them, and compare the findings to the data already collected. If the results are consistent between the two, then the researcher could claim that, although such a low response rate was not ideal, there is evidence that responders and non-responders were similar, indicating a lack of evidence that the sample was biased (Rogelberg and Luong 1998). Limitations are opportunities for recommending new research, so the researcher could recommend new research that would surmount the low response rate, increase the sample size, and be more representative of the population. In general, once future studies have been recommended that would overcome the current study’s limitations, additional recommendations for research are strongly urged. Overall, recommending at least six new research studies that would extend the current research would further support its scientific merit.
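The responder/non-responder comparison described above can be as simple as an independent-samples test on a key variable; a sketch, with hypothetical array names:

```python
# Nonresponse-bias check: compare survey responders with a small
# follow-up sample on a key variable. A non-significant difference
# is consistent with (though does not prove) similar groups.
import numpy as np
from scipy import stats

def nonresponse_check(responders: np.ndarray, followup: np.ndarray) -> None:
    t, p = stats.ttest_ind(responders, followup, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")

# nonresponse_check(survey_scores, followup_scores)
```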

Conclusions

The road to publishing rigorous, quality peer-reviewed research can be burdened with quandaries, but steps can be taken to make the publishing journey much less onerous. Researchers need a strong sense of the ‘nuts and bolts’ of actually designing and conducting a study, analysing the results, and interpreting the findings through theoretical lenses. Theoretical, empirical, and practical implications must be addressed, along with disclosure of the study’s possible limitations. Finally, recommendations for future research to surmount the limitations and extend the findings are a must, as attention to such matters is what Editors, expert reviewers, and the scholarly audience expect to see.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Bozionelos, N., and M. J. Simmering. 2022. “Methodological Threat or Myth? Evaluating the Current State of Evidence on Common Method Variance in Human Resource Management Research.” Human Resource Management Journal 32 (1): 194–215.
  • Chretien, J., K. Nimon, T. G. Reio Jr., and J. Lewis. 2020. “Responding to Low Coefficient Alpha: Potential Alternatives to the File Drawer.” Human Resource Development Review 19 (3): 215–239. https://doi.org/10.1177/1534484320924151.
  • Cohen, J., P. Cohen, S. G. West, and L. S. Aiken. 2003. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. 3rd ed. Mahwah, NJ: Lawrence Erlbaum.
  • Crane, A., I. Henriques, B. W. Husted, and D. Matten. 2017. “Twelve Tips for Getting Published in Business & Society.” Business & Society 56 (1): 3–10.
  • Dannels, S. N. 2010. “Meta-analysis.” In The Reviewer’s Guide to Quantitative Methods in the Social Sciences, edited by G. R. Hancock, and R. O. Mueller, 343–355. New York: Routledge.
  • Dillman, D. A. 2007. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. Hoboken, New Jersey: Wiley & Sons.
  • Maxwell, S. E., M. Y. Lau, and G. S. Howard. 2015. “Is Psychology Suffering from a Replication Crisis? What Does ‘Failure to Replicate’ Really Mean?” The American Psychologist 70 (6): 487–498. https://doi.org/10.1037/a0039400.
  • Podsakoff, P. M., S. B. MacKenzie, J. Lee, and N. P. Podsakoff. 2003. “Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies.” Journal of Applied Psychology 88 (5): 879–903. https://doi.org/10.1037/0021-9010.88.5.879.
  • Reio, T. G., Jr. 2016. “Nonexperimental Research: Strengths, Weaknesses and Issues of Precision.” European Journal of Training & Development 40 (8/9): 676–690.
  • Reio, T. G., Jr. 2021. “The Ten Research Questions: An Analytic Tool for Critiquing Empirical Studies and Teaching Research Rigor.” Human Resource Development Review 20 (3): 374–390. https://doi.org/10.1177/15344843211025182.
  • Reio, T. G., and J. L. Callahan. 2004. “Affect, Curiosity, and Socialization-Related Learning: A Path Analysis of Antecedents to Job Performance.” Journal of Business & Psychology 19 (1): 3–22. https://doi.org/10.1023/B:JOBU.0000040269.72795.ce.
  • Rogelberg, S. G., and A. Luong. 1998. “Nonresponse to Mailed Surveys: A Review and Guide.” Current Directions in Psychological Science 7 (2): 60–65.
  • Shuck, B., J. L. Adelson, and T. G. Reio Jr. 2017. “The Employee Engagement Scale: Initial Evidence for Construct Validity and Implications for Theory and Practice.” Human Resource Management 56 (6): 953–977.
  • Spector, P. E. 2006. “Method Variance in Organizational Research: Truth or Urban Legend?” Organizational Research Methods 9 (2): 221–232. https://doi.org/10.1177/1094428105284955.
