Communication Methods and Measures (CMM) has made tremendous progress since its inaugural issue in 2007 and since the last editorial (Matthes et al., 2016), as partially evidenced by its impact factor and alternative metrics in recent years. The aims and scope of the journal have remained unchanged since 2007:

CMM aims to bring developments in methodology, both qualitative and quantitative, to the attention of communication scholars, to provide an outlet for discussion and dissemination of methodological tools and approaches to researchers across the field, to comment on practices with suggestions for improvement in both research design and analysis, and to introduce new methods of measurement useful to communication scientists or improvements on existing methods.

Yet, as the previous editorial team (Matthes et al., 2016, p. 2) also astutely noted, much work remains to be done:

compared with other disciplines within the social and behavioral sciences, the field of communication sometimes lags a bit behind … There is a clear lack of standardization for the field … New measurement scales are often proposed and used in substantive research without proper replication or validation, and the widespread use of outdated data analysis techniques or other forms of statistical practice that experts have discredited.

In addition, new challenges and issues have emerged for CMM since the introduction of computational methods to the social sciences (e.g., Lazer et al., 2009) and to the communication discipline in particular (e.g., van Atteveldt & Peng, 2018). Computational methods are new, fast developing, and interdisciplinary, and there is not yet a clear benchmark for what qualifies as a novel contribution to communication science research methodology.

Given where CMM is today, and given the issues we have seen in the field of communication science research methodology and witnessed in submissions to the journal, the editorial team sees a need to lay out clearly a set of standards for research methodology in communication science and expectations for the articles that CMM strives to publish. Simply put, we need to take communication science and research methodology seriously.

What is communication science?

Communication science is the discipline devoted to the social scientific study of human communication: its processes, behaviors and practices, and effects. It encompasses a wide range of topics and approaches that examine how individuals, groups, organizations, and societies create, share, process, and use information transmitted through various channels and media, across contexts such as interpersonal/relational, organizational, mass media, and social and digital media. Researchers in the discipline use a variety of methodologies, both quantitative and qualitative, to investigate communication phenomena. Burleson (1992) eloquently argued that the study of “communication per se” should concern not the specific content, context, function, or effects of human communication but its processes and patterns, a view echoed by Berger et al. (2010), who held that communication science should be concerned with regularities in communicative conduct that may not be directly observable.

Given such a scope for communication science, research methodologies that do not concern communication fall outside the realm of communication science and hence are not a good fit for CMM. For example, manuscripts that deal solely with the computations and algorithms of textual analysis, machine learning, or large language models (LLMs) are not taking communication seriously. Researchers from the disciplines of engineering, computer science, and information technology are certainly welcome to submit their work to CMM, but they have to make their work relevant to communication science. Otherwise, their work is analogous to articles on the mathematics behind statistical procedures or the technicalities of statistical software packages: it simply belongs to a different discipline.

Another case is manuscripts on qualitative methods. Social science methods can be both quantitative and qualitative, and CMM aims to publish both. On one hand, there is a clear overlap between qualitative methods in the social sciences and the humanities. On the other hand, not all qualitative methods are scientific. Qualitative methods that are mainly interpretative, subjective, and/or critical in nature, while entirely legitimate research methods squarely concerned with text and information (i.e., they take communication seriously), are not considered scientific. Scientific research should be empirical, systematic, objective, generalizable, and reproducible, although, at the initial and exploratory stages of a research project, the researcher might engage in some interpretative and subjective activities as part of the deduction and abduction processes.

What constitutes novel contributions to research methods?

As stated in the submission guidelines, CMM seeks to devote as many pages as possible to novel contributions to the methods and measurement literature that have broad appeal and applicability. A “novel” contribution to research methods can come in different forms: 1) a new statistical analysis procedure that is relatively unknown to the communication discipline, 2) a creative approach to study design, measurement, and/or data analysis that addresses a research problem, 3) the development and validation of a new measurement scale, or 4) the refinement and modification of an existing scale, among others. Reviews and summaries of existing methodological practices in the literature without critical analysis and evaluation tend to be descriptive and would not qualify as novel contributions. Nor would essays that recommend “best practices” based on personal opinions/preferences without newly generated empirical evidence (from comparing alternative options). Examples and demonstrations are often required to make a point or illustrate a method, but the manuscript should not be framed around the examples and illustrations. Good methods are guided by theories (e.g., Greenwald, 2012); the reverse might not be true: innovations and contributions to theory testing and development do not necessarily constitute contributions to methodology. Novel and creative approaches to conceptualization and theory construction are extremely important and valuable to communication science research. Nevertheless, such work is best sent to journals that publish work in theory testing and development or that target researchers in a specific area of inquiry, rather than to CMM.

Novel contributions can also take the form of state-of-the-art assessments of critical issues in communication research methods. Such papers should consist of reflections on empirical research in communication science, for example, examinations of the appropriateness of the assumptions, procedures, and principles of research vis-à-vis the goals of the researcher. They elucidate the criteria for assessing the quality of communication science research and enumerate the standards that any good communication science study should meet. At the same time, however, we remind authors and readers that no single criterion or standard should dictate the only right way to conduct high-quality research in communication science. For example, self-report data, despite their known deficiencies, might remain the best way to access what is in the mind of our object of study (e.g., measures of emotions; see Mauss & Robinson, 2009; Scherer, 2005). Or consider pre-registration: it is a valuable practice to guard against researcher degrees of freedom, but it has also been misused, with too much flexibility (e.g., allowing a substantial amount of latitude to deviate from the pre-registered plan), as an excuse (e.g., for not complying with reviewers’ and editors’ legitimate requests for revisions), or as a dogma (i.e., the tendency to equate pre-registration with study quality and unfairly dismiss research without pre-registration). In large-scale longitudinal studies, big data, and exploratory research, it might not be possible for hypotheses, data collection, and data analysis procedures to be pre-determined and hence properly pre-registered. Novel contributions might lie in identifying such problems and finding new, better, and multiple ways to clean the water in the bathtub rather than throwing the baby out with the bathwater.

Expectations for specific areas

Roughly speaking, recent submissions to CMM can be organized into three groups: 1) scale development and validation, 2) computational methods, and 3) other methodological issues. The editorial team was formed to cover these areas. Below we discuss our expectations for submissions in each.

Scale development and validation

Scale development and validation are foundational to the empirical and theoretical advancement of any research field. Scales are instruments designed to measure abstract and latent concepts by quantifying the attributes and characteristics that differentiate units of analysis. Once a proposed construct and its measurement scale are published, they spawn new research, and future studies often use the scale directly as “established” without further validation efforts. CMM hence expects and applies high standards of conceptual and empirical rigor to such submissions.

A sound concept explication process, clearly specifying conceptual and operational definitions, is the prerequisite. Sufficient justification should be given for content validity, situated in the theoretical context in which the scale is constructed and is to be interpreted. Exploratory factor analysis (EFA) is apt for the stage of developing measurement items but will not suffice for a construct validation paper without confirmatory factor analysis (CFA), which tests the proposed measurement model against alternative models. When there is a sound theoretical basis and a clear understanding of, and/or hypotheses about, the factor structure and psychometric properties of the scale under investigation, the researcher may bypass EFA and perform CFA directly. When both EFA and CFA are performed, they should use data collected from independent samples. Performing either analysis on a randomly split half of the same data is not acceptable: by design, random splitting means the two halves should be equivalent, including in the factor structure to be obtained, so nothing is confirmed or replicated in such a practice.
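
To make the two-sample workflow concrete, the sketch below runs EFA on a development sample and then fits the hypothesized model with CFA on an independent validation sample. It is a minimal illustration under stated assumptions, not a prescribed pipeline: the package choices (factor_analyzer, semopy), file names, item names (q1–q6), and the two-factor structure are all hypothetical.

```python
# Minimal sketch of the two-sample EFA -> CFA workflow described above.
# File names, item names, and the two-factor structure are hypothetical;
# factor_analyzer and semopy are illustrative package choices.
import pandas as pd
from factor_analyzer import FactorAnalyzer
import semopy

# Two independently collected samples, each with items q1..q6.
dev_sample = pd.read_csv("development_sample.csv")
val_sample = pd.read_csv("validation_sample.csv")

# Stage 1: EFA on the development sample only, to refine items and
# suggest a factor structure.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(dev_sample)
print(efa.loadings_)

# Stage 2: CFA on the independent validation sample, testing the
# measurement model implied by theory and the EFA results.
model_desc = """
F1 =~ q1 + q2 + q3
F2 =~ q4 + q5 + q6
"""
cfa = semopy.Model(model_desc)
cfa.fit(val_sample)
print(semopy.calc_stats(cfa))  # report multiple fit indices (CFI, TLI, RMSEA, ...)
```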

Samples for empirical testing should be selected from the target population for which the use of the scale is intended, with sample sizes justified. For CFA, statistical power for the model as a whole and for individual parameter estimates can be estimated to derive the necessary sample size. Model fit decisions should be based on multiple indices, and the deliberations behind them, where due, should not be omitted or glossed over. Exploratory structural equation modeling (ESEM; Asparouhov & Muthén, 2009) may be an option for combining EFA and CFA in the early stages of scale development. Post-hoc model modifications should be used sparingly in CFA, only with sound theoretical and/or methodological justifications, and should be transparently discussed and carefully interpreted. Model re-specification (e.g., trimming a non-significant indicator) yields a revised hypothesized structure, making additional data collection for validation a necessity. Construct validity and criterion validity should be assessed with regard to external correlates in the “nomological net” (Cronbach & Meehl, 1955). Evidence of internal and external consistency (Anderson et al., 1987) and of convergent and discriminant validity (Campbell & Fiske, 1959) should be presented. Model-based reliability coefficients, such as omega (Hayes & Coutts, 2020; McDonald, 1999), rest on less stringent assumptions than Cronbach’s alpha and are thus preferred (see also Raykov, in press). Preferably, the factor structure, psychometric properties, and other empirical results are replicated across multiple independent samples, along with tests of measurement invariance (Clark & Donnellan, 2021).
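
As a pointer to what a model-based reliability coefficient looks like in practice, the snippet below computes McDonald's omega for a unidimensional scale from standardized CFA loadings, using omega = (Σλ)² / ((Σλ)² + Σθ), where θ are the error variances. The loading values are invented for illustration, and the formula assumes a congeneric one-factor model with uncorrelated errors.

```python
# McDonald's omega from standardized CFA estimates of a one-factor scale:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
# The loadings below are invented for illustration.
import numpy as np

loadings = np.array([0.78, 0.72, 0.69, 0.81, 0.66, 0.74])
errors = 1.0 - loadings**2  # standardized solution, uncorrelated errors

omega = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())
print(f"McDonald's omega = {omega:.3f}")  # ~0.875 for these values
```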

The importance of theoretical justification throughout the process cannot be over-emphasized. Rigorous concept explication hinges on a convincing theoretical basis; item development and modification in the exploratory stage require theoretical familiarity and justification; and model validation is, in essence, the testing of theoretical predictions. Although empirical guidelines and conventions exist for researchers to draw on, numerous methodological decisions throughout the process, such as factor extraction in EFA, model fit and adjustment, factor interpretation, or even the naming of a factor, involve theoretical considerations and implications. As with other research endeavors, scale development and validation are both science and art: methodological rigor fused with theoretical finesse.

Computational methods

Computational methods are a vibrant area of research out of which a significant amount of innovation, novel approaches, and novel objects of research flow into our field. The communication discipline has embraced computational methods over the past decades, which has contributed to the growth of the discipline and to the issues and volumes of CMM. In addressing contemporary issues in communication, such computational research draws on one or a combination of three fundamental areas of computational methods: 1) advanced analytical methods that use computational power and novel algorithms to generate better or easier data summaries, improved content and textual analysis, and more rigorous insights; 2) computational tools that ensure the quality, and even the possibility, of data collection and measurement of human communication experience online, for example, tools for data donations or robust access to platform APIs; and 3) computational methods necessary to study the effects of the algorithms of social media platforms.
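
As one small illustration of the second area, robust data collection from platform APIs typically means handling rate limits and transient failures explicitly rather than hoping requests succeed. The sketch below shows a generic retry-with-backoff pattern in Python; the endpoint and parameters are hypothetical placeholders, and any real platform API brings its own authentication and terms of use.

```python
# Generic retry-with-backoff pattern for robust API data collection.
# The endpoint and query parameters are hypothetical placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=5,                                # retry up to five times
    backoff_factor=1.0,                     # exponential backoff between attempts
    status_forcelist=[429, 500, 502, 503],  # retry on rate limits and server errors
)
session.mount("https://", HTTPAdapter(max_retries=retries))

resp = session.get(
    "https://api.example.com/v1/posts",     # hypothetical endpoint
    params={"query": "climate", "limit": 100},
    timeout=30,
)
resp.raise_for_status()
posts = resp.json()
```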

Desirable as it is, the high pace of innovation in computational methods incurs costs to the field: additional effort must go toward developing quality criteria, best practices, and a coherent methodological canon. The goal of methodological development should, therefore, ultimately be the advancement of the field, not merely an improvement on some arbitrary “state of the art.” Contributions to CMM are thus expected to recognize the special responsibility associated with computational methods and to facilitate understanding, validation, and critique across all three areas.

Computational approaches frequently favor the consistency of internal concepts (e.g., monosemic representations of words through a single embedding) over ecological validity, which often leads to a tradeoff between internal precision and generalizability. Robustness checks must be implemented and cross-validation performed. Novel methods must aim to offer independent validation not only within the context of the method’s original development but also for applications in communication research. In doing so, they should adhere to the highest established standards for validation in all involved disciplines.
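
To give one concrete, deliberately simple form such validation can take, the sketch below cross-validates an automated text classifier against human-coded labels. The file name, column names, and model choice are our assumptions; the point is the pattern of reporting out-of-sample performance rather than in-sample fit.

```python
# K-fold cross-validation of an automated classifier against
# human-coded labels: a basic robustness/validation check.
# The file and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

coded = pd.read_csv("human_coded_sample.csv")  # columns: 'text', 'label'

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, coded["text"], coded["label"],
                         cv=5, scoring="f1_macro")
print(f"Macro F1 across folds: {scores.mean():.2f} (SD = {scores.std():.2f})")
```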

As responses to a volatile information environment, computational methods are flexible and easy to use. Both features threaten to contribute to a fragmentation of insights that makes it hard for peers to extend them. Disappearing content (Buehling, 2023), a growing landscape of specialized tools, and undocumented design decisions encumber the vital social processes that ensure the consolidation of evidence. We urge computational scholars to actively counteract such erosive forces. Cumulative insight requires integrative work: building on extant research, replicating prior findings, publishing data and tools, and contributing to collective projects.

Research transparency and replicability

John et al. (2012) and Flake and Fried (2020) draw attention to questionable research practices (QRPs; see also Matthes et al., 2015; Vermeulen & Hartmann, 2015), which are a major source of the replication crisis (Świątkowski & Dompnier, 2017). In building a cumulative communication science, CMM seeks to cultivate a culture of transparent reporting and data sharing. Supplemental materials (e.g., data files, data collection instruments, syntax and code, packages, etc.) can be shared on public repositories (e.g., the Open Science Framework) and used to supply information that cannot fit in the main text. Key information that should be part of standard reporting, whether in the main text, footnotes, or supplemental materials, includes the handling of missing data, checks of distributional properties and potential violations of assumptions (e.g., multivariate normality), the screening and management of univariate and multivariate outliers, the choice of estimation methods, model modification steps and decisions, and the name and version of the software used. We expect authors to provide details of their research processes and decision-making, and we encourage them to adopt open science practices regarding open data and materials for reproducibility and replicability.
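
Much of that standard reporting can be generated by the analysis script itself. The snippet below is a minimal sketch of such self-documenting boilerplate: printing software versions, summarizing missing data, and flagging outliers under an explicit, disclosed rule. The file and variable names are hypothetical.

```python
# Self-documenting analysis preamble: report software versions,
# missingness, and an explicit outlier rule. Names are hypothetical.
import sys
import numpy as np
import pandas as pd

print(f"Python {sys.version.split()[0]}, "
      f"pandas {pd.__version__}, numpy {np.__version__}")

df = pd.read_csv("study_data.csv")
print(df.isna().sum())  # per-variable missingness, to be reported

# Example univariate outlier rule (|z| > 3 on the scale mean); the rule
# and how flagged cases are handled should be disclosed in the manuscript.
z = (df["scale_mean"] - df["scale_mean"].mean()) / df["scale_mean"].std()
print(f"Cases flagged as potential outliers: {int((z.abs() > 3).sum())}")
```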

Looking forward

This issue marks the beginning of the second term of the current Editor-in-Chief and editorial team. We have been honored and privileged to take responsibility for the editorship of the one and only journal in our field devoted to social science research methods. We are grateful for and humbled by the tremendous hard work and accomplishments of the previous editorial teams led by David Ewoldsen, Andrew Hayes, and Jörg Matthes over the first 14 years (2007–2020), which brought CMM to where it stands today. We strive to maintain the high quality of the research published in CMM and aspire to extend its impact beyond the communication science discipline. We also acknowledge the ongoing trust and support of the Communication Theory and Methodology division of the Association for Education in Journalism and Mass Communication. We thank all members of the current editorial board, all past and current reviewers and authors, and the Taylor & Francis staff for their ongoing support.

Disclosure statement

No potential conflict of interest was reported by the author(s).


Notes on contributors

Lijiang Shen

Lijiang Shen is a Professor in the Department of Communication Arts & Sciences, Pennsylvania State University. His primary area of research considers the impact of message features and audience characteristics in persuasive health communication, message processing, and the process of persuasion/resistance to persuasion, as well as quantitative research methods in communication. His research has been published in major communication and related journals.

Ye Sun

Ye Sun is an Associate Professor in the Department of Media and Communication at the City University of Hong Kong. Her research addresses various questions related to media effects and persuasion in health and environmental communication contexts, using quantitative research methods including experiments, surveys, and meta-analyses.

Pascal Jürgens

Pascal Jürgens is a professor of computational communication at Trier University, Germany. His research addresses the effects of algorithms on audiences and society, methods of data collection in personalized information environments, and computational analyses. Past notable works include publications on political communication on Twitter, measurement of search engine personalization, and effects of platforms on the diversity of news exposure.

Baohua Zhou

Baohua Zhou is a professor and associate dean of the School of Journalism, Fudan University. He is also a research fellow of the Center for Information and Communication Studies, director of the Computational and AI Communication Research Center, and PI of MOE Laboratory for National Development and Intelligent Governance at Fudan University, China. His research focuses on new media and society, computational and AI communication, media effects, and public opinion. His work has been published in peer-reviewed journals including New Media & Society, Computers in Human Behavior, International Journal of Communication, Information Processing and Management, and other leading journals in China.

Marko Bachl

Marko Bachl is an assistant professor at the Institute for Media and Communication Studies at Freie Universität Berlin. His digital research methods group aims to develop, evaluate, and teach innovative digital and computational research methods. He was an associate editor for Communication Methods and Measures from 2021 to 2023.

References

  • Anderson, J. C., Gerbing, D. W., & Hunter, J. E. (1987). On the assessment of unidimensional measurement: Internal and external consistency, and overall consistency criteria. Journal of Marketing Research, 24(4), 432–437. https://doi.org/10.1177/002224378702400412
  • Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16(3), 397–438. https://doi.org/10.1080/10705510903008204
  • Berger, C. R., Roloff, M. E., & Roskos-Ewoldsen, D. R. (2010). What is communication science? In C. R. Berger, M. E. Roloff, & D. R. Roskos-Ewoldsen (Eds.), The handbook of communication science (pp. 2–21). Sage.
  • Buehling, K. (2023). Message deletion on Telegram: Affected data types and implications for computational analysis. Communication Methods and Measures, 1–23. https://doi.org/10.1080/19312458.2023.2183188
  • Burleson, B. (1992). Taking communication seriously. Communication Monographs, 59(1), 79–86. https://doi.org/10.1080/03637759209376250
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. https://doi.org/10.1037/h0046016
  • Clark, D. A., & Donnellan, M. B. (2021). What if apples become oranges? A primer on measurement invariance in repeated measures research. In J. F. Rauthmann (Ed.), The handbook of personality dynamics and processes (pp. 838–854). Academic Press.
  • Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957
  • Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
  • Greenwald, A. G. (2012). There is nothing so theoretical as a good method. Perspectives on Psychological Science, 7(2), 99–108. https://doi.org/10.1177/1745691611434210
  • Hayes, A. F., & Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But … Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
  • John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
  • Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabási, A. L., Brewer, D. … Van Alstyne, M. (2009). Computational social science. Science, 323(5915), 721–723. https://doi.org/10.1126/science.1167742
  • Matthes, J., Marquart, F., Naderer, B., Arendt, F., Schmuck, D., & Adam, K. (2015). Questionable research practices in experimental communication research: A systematic analysis from 1980 to 2013. Communication Methods and Measures, 9(4), 193–207. https://doi.org/10.1080/19312458.2015.1096334
  • Matthes, J., Niederdeppe, J., & Shen, F. (2016). Reflections on the need for a journal devoted to communication research methodologies: Ten years later. Communication Methods and Measures, 10(1), 1–3. https://doi.org/10.1080/19312458.2016.1136514
  • Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition and Emotion, 23(2), 209–237. https://doi.org/10.1080/02699930802204677
  • McDonald, R. P. (1999). Test homogeneity, reliability, and generalizability. In Test theory: A unified treatment (pp. 76–120). Lawrence Erlbaum Associates.
  • Raykov, T. (in press). Coefficient alpha and reliability of communication science measurement scales: A note on Hayes and Coutts. Communication Methods and Measures.
  • Scherer, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729. https://doi.org/10.1177/0539018405058216
  • Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. https://doi.org/10.5334/irsp.66
  • van Atteveldt, W., & Peng, T. (2018). When communication meets computation: Opportunities, challenges, and pitfalls in computational communication science. Communication Methods and Measures, 12(2–3), 81–92. https://doi.org/10.1080/19312458.2018.1458084
  • Vermeulen, I., & Hartmann, T. (2015). Questionable research and publication practices in communication science. Communication Methods and Measures, 9(4), 189–192. https://doi.org/10.1080/19312458.2015.1096331
