Editorial Preface

Using case research to advance process theory

Introduction

Before diving into process theory and its potential for being integrated with case research, it is important to be clear about the purpose of theory in science and how case study research methods relate to theory in general. Eisenhardt (1989) discusses at length the process of "building"1 theory from cases, but never formally defines theory, assuming, perhaps, that we all share a common meaning for the term. Considering Gregor's (2006) description of multiple types of theory, it is clear that various types of theory can exhibit different dynamics, such that generalizations fully applicable to one sort of theory may be misleading for others.

My own general definition of theory holds that the support level garnered by a theory is an ever-changing attribute rather than part of the identity of the theory itself. A theory is a proposed relationship among entities whose level of support can rise and fall with each new test (Niederman, 2021). If we dismiss early-stage theory as not being theory just because it hasn't been "proven," how will it be made available for widespread testing?

A more traditional view is put forward by Popper (1992), an important philosopher of science, who conceived of theory as statements about relationships among constructs in the world. He further distinguished theory from non-theory by contrasting propositions that are refutable with those that are not. If by its structure a proposition cannot be refuted (e.g., "My opinion is correct whether consistent with observations or not, just because I am me"; this is not refutable, whether or not it might be true), then there is no way to show that the statement is false (if it is), and it should not be added to the accumulation of knowledge. Popper went further, suggesting that test results aligned with predictions have no value, arguing largely that the next test might disconfirm the same prediction. Such a view presumes that knowledge must be absolute and permanent, rather than a tentative approximation of best knowledge at a point in time. The number of reversals and extensions of our best collective understandings throughout history should be solid evidence that our common understandings evolve rather than remain static.

Thus, a productive way to look at theory is as a tool for accumulating knowledge applied generally across domains of interest with variations in specifics depending on the character of the domain. Ideally, as more observations are gathered, our understanding, remaining imperfect, becomes ever clearer about the robustness (e.g., how often predictions are supported) and the strength (the degree to which the prediction is, or can be expected to be, fulfilled) for each instance.

Another productive way to think about theory is as a living being, analogous to a tree within a forest, where the forest represents an accumulation of knowledge. Many seedlings are likely to be present (e.g., initial theories with limited evidence gathered to either support or refute their propositions). Over time, due to lack of testing, mediocre robustness and strength, or simply being about unimportant phenomena, some theories stagnate while others grow and flourish. Of course, this assumes a rather idealistic perspective where theories are not assumed to be "true" just because they were published a single time in a respectable journal and tested by the same authors who proposed them. I'm not suggesting anything illicit about such work or views, just that the scholarly community should consider these to be fragile saplings requiring replication and further testing rather than giant oaks (which themselves are, on occasion, toppled) (Niederman & March, 2015).

If we pursue this analogy, there are two main things that scholars do relative to theory, either of which can be done using case methods. First, theories are initiated. This can be done inductively, by extending observations of past cases to the next ones; abductively, by observing underlying patterns such that new case results can be predicted even when the particular outcome has not before been directly observed; and deductively, by projecting from an axiom, premise, or assumption what must logically follow. Once a theory is initiated, scholars may enact the second part of theory building by testing it to observe whether its predictions are borne out by empirical outcomes in whole, in part, or not at all. Where they are, the theory gains in support (robustness and/or strength); where they are not, a search for more knowledge should follow: were conflicting results functions of different methods; were there significant variations within the same method (e.g., different statistical tests or variations in samples from a population of interviewees); are there contingencies that influence outcomes; and/or has the phenomenon materially changed in the interval between tests?

Case methods

Having considered theory, we can turn more fully to its relationship with case study methods. Eisenhardt (1989) defines a case as a study conducted in a single location, leaving the definition of "location" unspecified. In organizational studies it generally refers to a particular organization or department, though that organization or department may have employees or offices in many "locations" across the globe. Nonetheless, the idea of a holistic setting representing a particular instance seems the key to her understanding of case research. Note the inherent subjectivity, or choice by the researcher (subject to the acceptance of reviewers and editor), of what constitutes a setting. A study of IT in New Zealand in comparison with other nations, for example, or the study of IT in the hospital systems of New Zealand, or the study of IT in a particular New Zealand hospital could each represent a case. Note how each example can be shifted from a single case to a multiple case study by (1) considering multiple countries; (2) considering multiple social services within New Zealand; and/or (3) considering multiple hospitals.

Eisenhardt (1989) references grounded theory, where Glaser and Strauss (1967) treated each interview as a sort of case. The first transcript provides an opportunity to generate an initial tentative theory, with each subsequent interview testing that initial theory and providing an opportunity to refine its content. Clearly, cases at any level of analysis, drawing on any sort of data collection from participant observation to individual trials in an experiment, can substitute for interviews and transcripts as a source for initiating theory through cases.

Cases can also be used to test theory. Yin (1984) describes two distinct ways to do this, each connected to the selection of particular cases. In one approach, he suggests selecting from a population of cases the one that should be most likely to support the predictions of the theory. If the predictions are supported, the allocated robustness of the theory should increase. If the predictions are not supported, investigation of the reasons should follow, as always with data refuting theory. The second approach is to select a least likely case in which the theory's prediction might still be supported. Whether or not the prediction is supported, the breadth or boundary within which the theory is thought to hold should be adjusted accordingly.

Processes as action sequences

Turning now to process: process theory is one of several theory "families," or meta-theories (Niederman & March, 2019). As with variance theory, co-evolution theory, and systems theory, these categories are really too broad to be considered "a theory" but should be viewed as paradigms within which theories about specific propositions in particular domains can be generated. These meta-theories act like paradigms in the sense of defining terms (e.g., what is a process in process theory; what is a construct in variance theory; what is a system in systems theory), the types of relationships among them (e.g., what elements are included), and rules for evaluating propositions. These meta-theories are largely independent of one another, though they overlap in details; for example, all might allow propositions to be represented as predictive sentences, diagrams, or mathematical formulae. As one would expect, the key element in process theory is "process": how it is constructed and how particular processes relate to the outcomes generated from following them.

At this point, it is important to consider what constitutes a "process." Processes incorporate a sense of time and duration. A sequence of actions2 unfolding in a particular order represents a particular sort of process. This view of process plays off the views of time articulated by Husserl (1964, 2017) and Bergson (1998, 2015). Both hold that time and duration need to be viewed as continuous and indivisible. Alternatively, we can freeze moments, approximating the actuality of time for utilitarian purposes. Individual static "frames," whether ephemeral or of relatively long duration, can be measured in ways that are of importance for taking action in the material world. When such frames are considered sequentially, they can be studied for changes over time that took place in the past (and might serve as a guide to prediction for the future). Note that such consideration does not examine the dynamics of change but measures the distances between beginning and ending points of objects. Consider that when you lift a spoon to your mouth, the entire activity occurs in a single motion whose dynamics are not explained merely by noting the start at the plate and the ending at the mouth (returning the spoon to the plate: is it the same dynamic, a similar one running in reverse, or a different one altogether?). Shifting from the movie as a sequence of frames to the illusion where motion becomes visible, new characteristics emerge, related not to the muscles involved in lifting a spoon but to the fellowship over an after-dinner bowl of ice cream or over a hot bowl of soup in a refugee shelter. Such emergent properties flow from purpose, free will, consciousness, and intention.

Processes as we tend to look at them within an IS setting can be simplified to a sequence of actions, as one would find in a plan describing an intended series of steps (or the observation of actual steps taken), or to a more general template applicable to multiple cases (Niederman et al., 2018). Note that a particular template can be implemented in many instances, with variance in the faithfulness with which it is followed (DeSanctis & Poole, 1994; Markus & Silver, 2008; Niederman et al., 2018). Alternate conceptualizations of process include (1) a set of stages, like the Capability Maturity Model (Paulk et al., 1993), where change or movement between levels is not necessary (the organization may remain continually in a single level or move higher or lower among the levels) and (2) descriptions of deterministic patterns of change, which may include cycles, punctuated equilibria, continued growth or progress, and/or step functions.

Process theory

Process theory can be viewed as one of several families of "metatheory." In this sense a metatheory is not a theory but a framework for constructing a particular kind of theory. As a framework it is comprised of a generic structure and a set of rules and practices particular to that theory type. These rules and practices act as a central tendency across the many instances of theory in that family, but are subject to variation and exceptions. Process theory addresses the full range of theory instances pertaining to process, the steps by which entities change from state to state. In contrast, variance metatheory addresses the relationships between entities or constructs, and network metatheory addresses the positioning of nodes or individuals within a network, the shape of networks, and the evolution of networks over time. An instance of process theory might be exemplified as a proposition of the form: "Using all steps in the system development lifecycle (SDLC) yields better, more thorough, and more maintainable systems."

Applying case method to map a phenomenon, generate theory, and test theory relative to processes should be reasonably straightforward. Taking a case as data collected in a particular "location" or situation, say a particular department or organization installing a software package update, the actions of key agents over time can be recorded in their sequential unfolding, perhaps documenting durations and intervals between them (Niederman et al., 2018). Sometimes a particular process is repeated (as when an IT department updates multiple software packages as new versions become available); in such settings, multiple observations of process action sequences as actually enacted can be documented. Further analysis should reveal the value of individual actions, subsets of sequences, lengths of duration, and other specifications of process that aid or detract from the level of success of that process in each instance and in general. Initial observations may be generalized as theory (subject to testing of the robustness and strength of relationships to indicate the level of support existing for that theory). Thus, cases can generate empirical results for mapping key elements within a domain and/or initiating or testing theory.
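As an illustration of recording enacted action sequences and comparing them to a template, consider the following minimal Python sketch. The template, the enacted sequences, and their success scores are all invented for the example; the fidelity measure here is simply a sequence-similarity ratio, one of many ways faithfulness to a template could be operationalized.

```python
from difflib import SequenceMatcher

# Hypothetical template: the planned action sequence for a package update.
TEMPLATE = ["announce", "backup", "install", "test", "rollback_check", "signoff"]

# Invented observations: enacted sequences from repeated updates,
# each paired with a success score for that instance.
enacted = [
    (["announce", "backup", "install", "test", "signoff"], 0.90),
    (["install", "test", "signoff"], 0.60),
    (["announce", "backup", "install", "rollback_check", "test", "signoff"], 0.95),
]

def fidelity(sequence, template=TEMPLATE):
    """Similarity (0..1) between an enacted sequence and the template."""
    return SequenceMatcher(None, sequence, template).ratio()

# Pair each instance's fidelity with its outcome for further analysis.
for actions, success in enacted:
    print(f"fidelity={fidelity(actions):.2f} success={success}")
```

With many such paired observations, one could begin to examine whether faithfulness to the template, or particular omitted steps, relates systematically to outcomes.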

Process theory and Gregor’s taxonomy

Considering process theory through cases from another perspective, we can use Gregor's (2006) heavily utilized taxonomy of theory types as a basis for further reflection.

  1. Analysis. Gregor's (2006) first theory type is entitled "analysis" but largely consists of developing understanding and vocabulary for the domain. It references taxonomy or categorization as a key element in analysis. It differs from taxonomy in that it need not be comprehensive nor, necessarily, capable of assigning every instance to a particular category. It acknowledges that key elements may change over time, particularly as technologies (e.g., shifting from departmental to enterprise systems) form part of the socio-technical system and are constantly evolving. Note that a process like the SDLC comes ready-made with a set of actions and their sequence already defined, though with variation among the proponents of particular versions.

  2. Prediction. Where a process theory is proposed of the type "Using the sequence of actions in the SDLC (or perhaps a particular flavor of it) as prescribed will produce better long-term system success," we can shift this into a prediction of the sort: "The next project using the SDLC will have better outcomes than a similar one that does not use it." Assuming we can find reasonably similar projects with adequately comparable staff, tools, deadlines, budgets, and all sorts of other potential influences, and that we can measure outcomes on common scales, we can then test this prediction. We might make the prediction more sophisticated by suggesting that in a sample of the next 50 projects, we will see a statistically significant better outcome from those that follow the SDLC. Or we might consider whether the 50 instances form definable clusters based on one or multiple differences in action sequences, and whether there are systematic differences in outcome. The results could lead to more confidence in the robustness and strength of the theory or to refutations that suggest revision or the addition of contingencies to the theory statements.

  3. Understanding. Considering the same general theory regarding the SDLC, we might want to propose a rationale for our findings or perhaps test multiple possible rationales. For example, we might propose the amount and/or manner of testing as accounting for SDLC success. We might discover that the SDLC performs best when testing comprises a larger percentage of total process activity and/or that it works best when testing is broken into many short bursts rather than held out as a final process step. Such a specification of testing quantity or quality can serve as an explanation for SDLC success overall. This understanding, however, can be further tested in terms of the relative influence of testing versus risk analysis, breadth of information requirements, rate of acquiring user feedback, or many other potential influences on project outcomes.

  4. Prediction and understanding. Combining the findings of theory types 2 and 3 above should serve as an example of prediction and understanding. We can predict that the SDLC provides better performance, and that this is because of how it treats testing.

  5. Design science. It might be considered a bit of a stretch to consider a typical design science or action research project as a case, but the design, construction, and evaluation of a particular artifact can certainly be confined within a particular setting or location. Design through multiple prototype iterations, or across multiple artifacts testing the same design principles, can be viewed as multiple theory-testing cases. Lessons from design science projects come in two forms: those dealing with the specifics of the particular case and those dealing with the design processes themselves. Both provide opportunities for Gregor type 2-4 theorizing; the first in terms of proposing particular relationships between features, affordances, and outcomes for specific artifacts or families thereof, and the second in terms of processes and granular-level methods which can lead to more effective or efficient design projects. The latter fits well within the rubric of action-sequence process theorizing when identified segments or entire phases seem to produce replicable results.
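The prediction test described under type 2 above can be sketched concretely. The following Python fragment, with entirely invented outcome scores for hypothetical SDLC and non-SDLC projects, uses a simple permutation test (one of several statistical tests one might choose) to ask how often random relabeling of projects would produce a mean difference as large as the one observed.

```python
import random
import statistics

# Invented outcome scores (say, long-term success on a 0-100 scale) for
# hypothetical projects that did and did not follow the SDLC.
sdlc = [72, 80, 75, 90, 68, 85, 77, 83, 79, 88]
non_sdlc = [65, 70, 60, 74, 58, 69, 72, 61, 66, 71]

def mean_diff(a, b):
    """Difference in mean outcome between the two groups."""
    return statistics.mean(a) - statistics.mean(b)

def permutation_p(a, b, trials=10_000, seed=0):
    """One-sided p-value: the share of random relabelings of projects that
    yield a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = mean_diff(a, b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if mean_diff(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return hits / trials

print(f"observed difference: {mean_diff(sdlc, non_sdlc):.1f}")
print(f"p-value: {permutation_p(sdlc, non_sdlc):.4f}")
```

A small p-value would add to the theory's robustness; a large one would prompt the search for contingencies or revisions described above.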

Conclusion

Ultimately, the purpose of science is to accumulate knowledge, privileging domains where it can make a practical difference. Theory provides a mechanism for displaying the state of knowledge at a given time in a format that encourages a constant improvement cycle of stating, testing, observing, and revising (where needed). Process can be viewed as action sequences, either as abstract plans or as concrete observed actions. This is a pragmatic way to look at activities and their outcomes. Advantages include connecting academics to practice and providing prescriptive insights for practitioners. Practitioners can use such general theory to initiate their own local customization for their unique organizational circumstances, should they choose a path toward continual improvement. Other views of process, such as the trajectory shape of multiple instances of particular activities unwinding, may also be useful where regularities are observed or central tendencies serve as helpful guides; consider the Gartner hype cycle (Dedehayir & Steinert, 2016) and learning curves (Kemerer, 1992). But these are less specific and less prescriptive than examining the action sequence, where resultant knowledge can influence practitioner decisions and actions. Combining process and theory within the rubric of case methods, in their array of flavors and variations, promises the ability to address an expanded set of the "how" and "why" socio-technical questions of interest to practice and academics in the IS domain.

Additional information

Notes on contributors

Fred Niederman

Dr. Fred Niederman serves as Shaughnessy Endowed Professor at Saint Louis University. He was selected as a Fellow of the Association for Information Systems in 2020. He serves as editor in chief for Communications of AIS. He recently published a monograph on process theory with NOW Publishers. His work on the co-evolution of IT worker skills and generations of technology platforms was recognized as a 'publication of the year' by AIS in 2015. His areas of research interest include IS personnel, IS project management, and philosophy of science applied to IS. He has served as senior editor for Journal of AIS, selected twice as senior editor of the year, and for Project Management Journal. He has served as program chair for ICIS (2010) and as a member of the doctoral consortium faculty (2018). He is recognized as a member of the "circle of compadres" of the KPMG PhD Project.

Notes

1 I highlight "building" here because it is a vague and problematic term. Generally, when I have come across it in the literature, it has meant to "originate" and develop theory, with the implicit premise that a proposition is not a theory until it is somehow "proven." Eisenhardt (1989) uses it ambiguously, stating that theory generally comes from " … previous literature, common sense, and experience" (p. 532). However, the language of "building" theory can obscure rather than clarify by conflating initiation and development, as if a proposition could miraculously grow into a theory; there are no standardized guidelines specifying the amount and type of evidence that would create such a transition, or what would happen if a cycle of three studies supporting a proposition were followed by three refuting it. Would it no longer be a theory, rather than a discredited or modestly robust and strong one? Assuming a sequence of such cycles, would it have a history of being, not being, then again becoming a theory?

2 Actions are defined as activities undertaken through the will of an actor, generally an individual. Note that an action can alternatively be undertaken without particular intention, as the result of habit or default. In contrast, events are activities, or the results of activities, that happen to the actor. For example, if my department chair offers me a raise, to me that is an event (it happens to me); to the department chair it is an action (he or she takes the action). In this sense, whether something is an action or an event is a question of the perspective from which it is viewed. An activity refers to an action/event combination from an external omniscient viewpoint, as if it "just happened" or occurred without any intention at all, such as a ripe apple falling from a tree.

References

  • Bergson, H. (1998). Creative evolution (A. Mitchell, Trans.). Dover Publications.
  • Bergson, H. (2015). Time and free will: An essay on the immediate data of consciousness (F. L. Pogson, Trans.). Martino Publishing.
  • Dedehayir, O., & Steinert, M. (2016). The hype cycle model: A review and future directions. Technological Forecasting and Social Change, 108, 28–41. https://doi.org/10.1016/j.techfore.2016.04.005
  • DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147. https://doi.org/10.1287/orsc.5.2.121
  • Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. https://doi.org/10.2307/258557
  • Glaser, B., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. Weidenfeld and Nicolson.
  • Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642. https://doi.org/10.2307/25148742
  • Husserl, E. (1964). The phenomenology of internal time-consciousness (M. Heidegger, Ed.; J. S. Churchill, Trans.). Indiana University Press.
  • Husserl, E. (2017). Ideas: General introduction to pure phenomenology (W. R. B. Gibson, Trans.). Martino Fine Books.
  • Kemerer, C. F. (1992). How the learning curve affects CASE tool adoption. IEEE Software, 9(3), 23–28. https://doi.org/10.1109/52.136161
  • Markus, M. L., & Silver, M. S. (2008). A foundation for the study of IT effects: A new look at DeSanctis and Poole’s concepts of structural features and spirit. Journal of the Association for Information Systems, 9(10), 5. https://doi.org/10.17705/1jais.00176
  • Niederman, F. (2021). The philosopher’s corner: A minimalist view of theory: Why this promises advancement for the IS discipline. ACM SIGMIS Database: The DATABASE for Advances in Information Systems, 52(4), 119–130. https://doi.org/10.1145/3508484.3508491
  • Niederman, F., & March, S. (2015). Reflections on replications. Transactions on Replication Research, 1, 1–16. https://doi.org/10.17705/1atrr.00007
  • Niederman, F., & March, S. T. (2019). Broadening the conceptualization of theory in the information systems discipline: A meta-theory approach. ACM SIGMIS Database: The DATA BASE for Advances in Information Systems, 50(2), 18–44. https://doi.org/10.1145/3330472.3330476
  • Niederman, F., March, S. T., & Müller, B. (2018). Using process theory for accumulating project management knowledge: a seven-category model. Project Management Journal, 49(1), 6–24. https://doi.org/10.1177/875697281804900102
  • Paulk, M. C., Curtis, B., Chrissis, M. B., & Weber, C. V. (1993). Capability maturity model, version 1.1. IEEE Software, 10(4), 18–27. https://doi.org/10.1109/52.219617
  • Popper, K. (1992). Conjectures and refutations: The growth of scientific knowledge (Routledge Classics) (Vol. 17, 5th ed.). Routledge.
  • Yin, R. (1984). Case study research. Sage Publications.
