Exploring the relationship between departmental characteristics and research performance

Dag W. Aksnes, Siri Brorstad Borlaug, Thea Eide & Bjørn Stensaker
Pages 12-20 | Received 02 Jun 2022, Accepted 02 Jun 2023, Published online: 29 Jun 2023

ABSTRACT

Many recent higher education reforms worldwide have been legitimated by their potential impact on the performance of universities and colleges. However, we know less about the actual impact of the changes implemented. This article examines the extent to which research performance can be associated with specific organizational characteristics at the department level. The analysis is based on Norwegian university departments, where high- and low-performing departments have been selected as cases for further investigations. The policy context is the organizational reform in Norway from 2016 onwards aiming at reorganizing the higher education landscape through institutional mergers. The key findings indicate that there are few distinct departmental characteristics associated with research performance, such as elected or appointed leadership, single or multi-campus organization, or departmental size. However, the study reveals that highly productive individuals do matter and suggests that cultural dimensions and working conditions may be interesting factors to pursue in further research.

Introduction

Governmental attempts to make universities and colleges more effective and efficient have been a recurrent effort in various reform initiatives during the last decades throughout the world (Christensen 2011; Enders, de Boer, and Weyer 2013). Although reforms tend to be country specific, there are also a number of commonalities among them, including the ambitions of providing institutions with more autonomy, streamlining institutional governance, developing new incentive structures for universities, introducing accountability schemes and performance targets, and aligning organisational structures to strategic aims (Capano 2011; Frølich, Christensen, and Stensaker 2019; Hicks 2012; Thomas et al. 2020).

While much attention has been devoted to the reform ambitions and their consequences for university organising and functioning (Bleiklie, Enders, and Lepori 2015; Ramirez and Christensen 2013), we have far less knowledge about the actual impact of the initiatives taken with respect to research performance.

The current study aims to contribute to this area by exploring the potential links between research performance at department level and selected organisational characteristics of these departments. By comparing high- and low-performing departments in the area of research in a sample of Norwegian universities, our study is guided by the following research question: What is the relationship between research performance and organisational characteristics at departmental level?

From a policy perspective, this study is highly relevant, as it sheds light on popular assumptions behind reform attempts, not least arguments advocating more professional organisational structures within universities and the need for more effective ways to govern universities.

Analytical framework

What factors explain high performance in research?

In the literature, research performance has been defined and understood in different ways. Research as an activity involves many different operations, and performance can therefore include participation in research projects, success in achieving research funding, patents stemming from the research conducted, PhD supervision and the number of PhD candidates supervised, the number of research articles produced in peer-reviewed journals, or the impact of the articles published (Ramsden 1994). It is perhaps because of this huge variety that many studies have tried to avoid the concept of performance and instead used productivity as the key term (Dundar and Lewis 1998). The latter concept focuses mainly on the number of articles a researcher or a research unit produces, although sometimes citations as a measure of impact are also included (Bazeley 2010). With the huge increase in scientific publication outlets, research performance measured by citation indicators has become a central measure (Perkmann et al. 2021). We also observe an increase in the use of bibliometric indicators in the context of research evaluation, as well as in higher education policy more generally. However, the application of bibliometric indicators for assessing research performance has always been controversial (Aksnes, Langfeldt, and Wouters 2019).

Studies of research productivity and performance have found that individual characteristics and system-level characteristics are factors explaining much of the variation in research performance (Bazeley 2010; Hesli and Lee 2011; Ramsden 1994). Key findings suggest that high research performance is associated with demographic characteristics related to age, gender and being a professor, but also with personal characteristics such as motivation, ambitions and orientation towards research (Kwiek 2018; Rørstad and Aksnes 2015). Other studies have suggested that research performance is strongly correlated with system-level input factors such as the level of research funding available, the (competitive) design of the domestic research system, and the economic incentive structures provided (Aghion et al. 2010). A generic insight from these studies is that the more resources invested into the system, the more research is produced.

Studies have also highlighted the social aspects of research and their potential impact on research performance. These studies underline that research is embedded in cultural practices and distinct social interactions that enable the socialisation of individuals and the development of strong norms and values driving performance (Quimbo and Sulabo 2014; Smeby and Try 2005; Way et al. 2019). However, cultural factors are not always found to be key drivers of research performance (Edgar and Geare 2013).

The effects of unit size on research performance have been investigated several times, using departments or research groups as analytical entities. However, results have been mixed. Some claim that research performance tends to rise as group size increases, but at a certain threshold this effect tails off (von Tunzelmann et al. 2003). This threshold varies between research fields, as the social organisation of research also matters (Kyvik 1995; Whitley 2000).

The staff composition of the department also has an impact on research performance (Bauer et al. 2013; Carayol and Matt 2004). Departments with more staff in full professor positions publish more than departments with a high proportion of postdocs and doctoral students (Horta and Lacy 2011). Moreover, the gender composition of the staff may matter, as women on average publish fewer publications than men (Nygaard, Aksnes, and Piro 2022). Other factors influencing research performance and productivity are the time available for research (Ajjawi, Crampton, and Rees 2018) and teaching modes (Horta and Lacy 2011).

Theoretical assumptions on the relationship between research performance and organisational characteristics

Reform initiatives intended to change institutional governance and internal organisational structures are often based on rationalistic assumptions of organising – that research is an activity that can be governed in instrumental ways driving productivity (Christensen 2011; Smeby and Try 2005). These initiatives can be seen as reactions to more inherent ways of organising academic work – where research is seen as a culturally and socially embedded activity (Bazeley 2010). The latter institutional explanations tend to emphasise norms, values and distinct practices as factors driving research productivity, while the former instrumental explanations give more weight to the formalisation of practices and organisational structure, including the need for hierarchy in decision-making (Ramirez and Christensen 2013).

These two theoretical lenses may provide the basis for empirically testable expectations with respect to research performance, for example related to how and in what ways formal leadership matters for research performance. In the instrumental perspective, formal leadership is important, not least in relation to the professionalisation of the leadership function (Frølich, Christensen, and Stensaker 2019; Paradeise et al. 2009). In this perspective, being able to select and appoint prominent academic leaders will positively affect research performance. An institutional perspective, on the other hand, would portray formal academic leadership as a more symbolic feature having little impact on research performance.

Instrumental and institutional explanations may also be relevant when considering the potential impact of organisational size and the importance of formal organisational structures for research performance. In the instrumental perspective, larger size is a factor that drives economies of scale, positively affecting research performance (Jordan, Meador, and Walters 1989; Kyvik 1995), as larger units provide more and better research support, offer more networking and collaborative opportunities, create more opportunities for research partnerships, and allow more efficient ways of organising teaching (Fox and Nikivincze 2021; Wills, Ridley, and Mitev 2013).

In the institutional perspective, organisational size and formal structure play much less prominent roles as factors determining and impacting research performance. In this perspective, where culture and social organising are important, a strong degree of formalisation would rather be seen as procedures and practices adding bureaucracy – a development imagined to stifle research productivity (Wills, Ridley, and Mitev 2013), not least for innovative and highly productive individuals (Horta and Santos 2020; Kwiek 2018). The argument is that too formally organised departments also create complex decision-making structures. The institutional perspective would emphasise that decentralised organising and academic autonomy – perhaps in geographically separated departments as part of larger multi-campus universities – are important organisational characteristics boosting performance (Edgar and Geare 2013). Hence, it may not be the larger departments or the existence of formal research groups that are important for research performance but the individual researchers who have the discretion to pursue their own research agendas, regardless of the type of organising they are embedded in (Heesen 2017; Kwiek 2018). In this perspective, one could also imagine that a high student:staff ratio could be tackled in different ways – depending on the specific disciplinary traditions, values and norms associated with linking students to research activities (Ramsden 1994; Smeby and Try 2005).

Empirical context, data and methods

Empirical context

The empirical context for the study is the structural reform in Norway from 2016 onwards, where the government's ambition was to reorganise the higher education landscape through institutional mergers. Key objectives driving the reform were economies of scale and increased quality in research and education. The reform reduced the number of public higher education institutions from 32 to 21. The mergers were both horizontal and vertical, including universities merging with university colleges and university colleges merging with other university colleges – the latter often with the aim of becoming universities.

The intended outcome of the reforms was more streamlined institutions with larger and more solid departments boosting both research and educational productivity and quality. However, not all institutions merged and thus the Norwegian higher education landscape is currently composed of institutions with different organisational characteristics – creating a natural experiment setting allowing for departmental comparisons on a number of dimensions. It should also be mentioned that research performance – i.e. productivity in research output – is an element in the result-based funding system of higher education in Norway. As such, it is an important dimension high on the agenda of universities and colleges.

Data and methods

The main data source is the Current Research Information System, Cristin, which contains complete and verified data on the publication output of all Norwegian researchers (Sivertsen 2018). Unlike commercially produced databases like Scopus or Web of Science (WoS), where book publishing and publication in national languages are less adequately covered (Aksnes and Sivertsen 2019), this database has complete coverage of all types of scholarly and scientific publications. The completeness and the quality of the database make it very well suited for bibliometric analysis.

Citation statistics for the publications are based on data from a local version of WoS maintained by the Norwegian Agency for Shared Services in Education and Research. This means that the citation analysis is limited to WoS-indexed publications only (see below).

Data on staff and students have been retrieved from the national database for higher education statistics in Norway (DBH), where data on the numbers of employees and students are available at department level. Finally, to obtain information on the leadership model and campus organisation of the departments, we used information available on the institutions' webpages, combined with a short e-mail survey to departments with limited information on their webpages, in which we asked whether the head of department was elected or appointed.

The sample includes a total of 291 departments covering the entire Norwegian HE sector (universities and university colleges). Based on data from these registers, we identified the top and lower quartiles of departments with respect to research outputs in both merged and non-merged higher education institutions. To account for year-to-year variations in research performance, we selected the period 2017–2020 and calculated the average research performance of these departments for the whole period (see Note 1). To make the measure of research performance more robust and to reflect different dimensions of the activity, we also developed a composite index based on three measures: (i) productivity per staff member, measured as publication points in the Cristin database; (ii) the proportion of publications in high-quality outlets (based on rankings of journals/publishers in Cristin); and (iii) the citation rate of the publications, measured as a relative citation index. The publication point indicator is designed to be field neutral, allowing cross-disciplinary comparisons (Sivertsen 2018). Field neutrality is also obtained in the citation indicator by normalising the citation counts by field – a common procedure in evaluative citation analyses (Aksnes, Langfeldt, and Wouters 2019). The departments analysed encompass all fields of learning. As our study employs field-neutral bibliometric performance indicators, we have not included field as a variable in our analysis.
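
To make the field-normalisation step concrete, the sketch below computes a relative citation index of the kind described above: each publication's citation count is divided by the average citation rate of its field, and the department's score is the mean of these ratios. This is a minimal illustration only – the data structures, field names, function name and numbers are invented, not the authors' actual computation or the Cristin/WoS data model.

```python
from statistics import mean

def relative_citation_index(publications, field_baselines):
    """Mean field-normalised citation ratio for one department.

    publications    -- list of (citations, field) tuples, one per publication
    field_baselines -- dict mapping field -> average citations per publication
                       in that field (the normalisation reference)
    """
    ratios = [cites / field_baselines[field] for cites, field in publications]
    return mean(ratios)

# Invented example: two fields with different citation cultures. A raw count
# of 12 in physics is only average, while 8 in economics is well above average.
baselines = {"economics": 4.0, "physics": 12.0}
dept_pubs = [(8, "economics"), (2, "economics"), (12, "physics")]
print(round(relative_citation_index(dept_pubs, baselines), 2))  # 1.17
```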

We ranked the departments on each of the three measures. Then we calculated the average rank score, and the departments in the top and lower quartiles were selected for further analysis. The resulting dataset includes 147 departments.
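
A sketch of how such a rank-and-select step can be implemented is shown below, using pandas. The column names and values are hypothetical, and averaging the per-indicator ranks into one score is our reading of the procedure described above, not the authors' published code.

```python
import pandas as pd

# Invented scores for eight departments on the three indicators described
# above (column names are ours, not the authors' variable names).
df = pd.DataFrame({
    "dept":              ["A", "B", "C", "D", "E", "F", "G", "H"],
    "points_per_staff":  [1.2, 0.4, 0.9, 1.5, 0.3, 0.8, 1.1, 0.5],
    "share_top_outlets": [0.25, 0.05, 0.15, 0.30, 0.08, 0.12, 0.20, 0.10],
    "rel_citation_idx":  [1.3, 0.6, 1.0, 1.4, 0.5, 0.9, 1.2, 0.7],
})

indicators = ["points_per_staff", "share_top_outlets", "rel_citation_idx"]
# Rank each indicator separately (rank 1 = best), then average the ranks.
df["avg_rank"] = df[indicators].rank(ascending=False).mean(axis=1)

# Keep only the top and lower quartiles of the averaged ranking.
q1, q3 = df["avg_rank"].quantile([0.25, 0.75])
top_quartile = df[df["avg_rank"] <= q1]
lower_quartile = df[df["avg_rank"] >= q3]
print(top_quartile["dept"].tolist(), lower_quartile["dept"].tolist())
```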

As socio-organisational variables, we used the following measures:

  • Size (staff numbers)

  • Student-staff ratio

  • Leadership model

  • Multi-campus vs single campus

  • Productivity skewness

The latter variable was included because both productivity and citation distributions are very skewed at the level of individuals (Bornmann and Leydesdorff 2017; Ruiz-Castillo and Costas 2014). Therefore, the average overall score of a department is often strongly influenced by the contributions of a few individuals. In order to assess the importance of this factor, we calculated the Gini coefficient at department level using individual publication counts. A coefficient of 1 means that all publications of a department are attributed to a single person, while a coefficient of 0 means that all individuals have identical publication counts. In the analysis, the original sample of departments was divided into four equal groups according to their Gini index. In order of increasing skewness, these groups are termed low, moderate, high and very high.
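
As an illustration of this step, the following minimal sketch computes the Gini coefficient from a department's individual publication counts, using the standard formula for ordered values; the example counts are invented and this is not necessarily the exact estimator the authors used.

```python
def gini(counts):
    """Gini coefficient of individual publication counts within a department.

    0 means everyone has the same count; values approaching 1 mean the
    output is concentrated in a single person.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula for ordered values:
    # G = (2 * sum_i i * x_i) / (n * sum_i x_i) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([5, 5, 5, 5]))   # 0.0  - perfectly even output
print(gini([0, 0, 0, 20]))  # 0.75 - one prolific individual dominates
```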

It is important to underline that this methodological design tests possible correlations between research performance and departmental characteristics, not their possible causal relationships. However, correlations are still interesting, as such analyses may help qualify current discussions on research performance – both by eliminating irrelevant assumptions concerning organisational factors and research performance and by generating more sophisticated hypotheses regarding this relationship.

Results

Research performance and department size

In the analysis of department size, the number of academic full-time equivalents (FTEs) and the number of professors/research positions are used as variables. Our dataset consists of units with quite large variations in size, where the number of academic positions ranges from nine to more than 100. Small departments typically have 10–20 academic positions, and large departments 50 or more.

When comparing the top and lower quartiles in the departmental performance rank, we see that the top quartile departments are larger; this holds for both size variables (academic FTEs and professor/research positions). The results are shown using the median and arithmetic mean as central tendency measures (Table 1).

Table 1. Average number of employees per department by research performance rank groups.

At the same time, the variation in size is quite large for both groups, and this is illustrated in the box plot in Figure 1, showing data distributed into quartiles, the mean (x) and outliers.

Figure 1. Distribution of departments by number of professors/research positions and research performance rank groups.

Research performance and teaching load – student-staff ratios

When it comes to teaching load, average student-staff ratios are used as a proxy. Comparing the two groups, we observe that the lower quartile has a higher average ratio than the top quartile (Table 2): 15.3 versus 10.9, respectively. Thus, high-performing departments tend to have fewer students per employee.

Table 2. Average student-staff ratios per departmentᵃ by research performance rank groups.

The underlying distribution is shown in Figure 2.

Figure 2. Distribution of departments by student-staff ratios* and research performance rank groups.
Note. *In cases where figures are not available at department level, we have used the average for the faculties instead.

Research performance and leadership model

Norwegian HEIs may choose their leadership model. Of the 147 departments included in this analysis, only 11% had elected department heads, while the remaining 89% had appointed heads (Table 3). In the lower quartile rank group, all departments applied an appointed leadership model, but this model also accounted for the large majority of departments in the top quartile group (78%).

Table 3. Distribution of departments by leadership model and research performance rank group.

The lack of a larger sample of departments with an elected leadership model makes it difficult to draw conclusions regarding this organisational factor. We therefore made an additional case study of one large university applying mixed leadership models. Here, all departments were included, not just the high- and low-performing groups, and their national rank position was used as the variable (Figure 3). We observe that there are large variations within each leadership model, but as shown in the figure, we only find elected leaders in the top performing departments, not in the departments with lower performance.

Figure 3. Case study: one university (n = 43). Relationship between national performance rank and leadership model.

Research performance and geographically separated departments

The analysis of multi-campus institutions with geographically separated departments revealed that the large majority of these were in the lower quartile group (Table 4). Only 3% of the top quartile departments are spread across several campuses, compared with 30% of the lower quartile group. Multi-campus departments here refer to units present in different cities in Norway, where the geographical distance may vary from a few dozen kilometres to several hundred.

Table 4. Distribution of departments by campus organisation and research performance rank group.

Research performance and skewness

In the analysis of staff composition, we did not observe large age differences; the average age differed by only four years between the groups (44 vs 48). However, the proportion of men was higher in the top than in the lower quartile group (58% vs 48%). Still, differences in gender composition explain only a small part of the variance in the performance rank order of the departments: the correlation coefficient (Pearson r = 0.26) suggests a weak relationship between the two dimensions. Moreover, it was found that the departments in the top quartile group tend to have a large skewness in the individual productivity of staff (Table 5). In most cases, this is due to the presence of one or a few highly prolific individuals.

Table 5. Skewness in research productivity and research performance rank groups.ᵃ
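
For readers wishing to reproduce this kind of check, a correlation between department-level gender composition and performance rank can be computed as in the sketch below. The series are invented for illustration only, and scipy's standard pearsonr is our choice of tooling, not necessarily the authors'.

```python
from scipy.stats import pearsonr

# Invented department-level series: proportion of men on the staff and the
# department's position in the national performance rank (1 = best).
share_men = [0.62, 0.55, 0.48, 0.58, 0.51, 0.45, 0.60, 0.40]
performance_rank = [3, 10, 25, 8, 18, 30, 5, 28]

r, p = pearsonr(share_men, performance_rank)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```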

Discussion

The first dimension analysed – the relationship between departmental size and research productivity – suggests that larger departments indeed have higher research productivity than smaller ones. While the difference is not very large, these results support earlier studies suggesting a positive relationship between departmental size and productivity (Jordan, Meador, and Walters 1989; Kyvik 1995; von Tunzelmann et al. 2003). The relationship still seems robust, as we find larger high-performing departments both when measuring size through the number of academic FTEs and when looking at departments with academic staff holding the highest academic qualifications. Earlier studies have found that academic qualifications are important for research productivity (Fox and Nikivincze 2021; Rørstad and Aksnes 2015; Smeby and Try 2005), and while this might well be true, our study also suggests that size is an interesting intermediate factor that may affect the relationship between academic qualifications and research productivity. However, the fact that there are large variations in size among both high- and lower-performing departments also suggests that departmental size in itself is a factor with limited explanatory power for research productivity.

A related dimension is whether geographical distance affects research productivity – specifically, whether multi-campus departments are unaffected in their research productivity by being located at different geographical locations. With respect to this relationship, relatively few merged institutions have ended up with a multi-campus departmental structure, and most of both the top- and low-performing departments have a single-campus organisation. Still, most of the multi-campus departments are found to be poorly performing units. Geographical distance may in this respect function as a hindrance to the establishment of larger networks and research opportunities (see also Kyvik 1995).

Regarding the third dimension – the relationship between research productivity and whether the department leadership is elected or appointed – we have less solid data to build upon. As both merged and non-merged institutions seem to have mostly appointed leadership, the number of departments having elected leadership is rather limited, and the model is mostly found in some of the older and more established universities. Still, it is interesting that the lowest performing quartile of departments all have appointed leadership, while more than one-fifth of the top performing departments have elected leadership. Hence, it seems that having elected leadership at department level is not hindering these departments from being high-performance units, or alternatively, that appointed leadership is not the cure for poor performance. As such, the rationalisation of the university (Ramirez and Christensen 2013) – measured through more professional (appointed) leadership – seems to have few links to research performance. The results should still not be interpreted as an indication that leadership is unimportant. For example, leadership may have an impact on organisational culture and working climate, which we know tend to have a positive impact on research productivity (Fox and Nikivincze 2021; Heinze et al. 2009; Smeby and Try 2005). Our results may suggest, though, that the ability to create such beneficial working conditions is less related to whether the leadership is elected or appointed.

The fourth dimension investigated the relationship between student:staff ratio and research performance – with the expectation that a lower student:staff ratio would enable more time to conduct research, positively impacting overall research productivity. This expectation was supported in our analysis: high-performing departments do have a slightly lower student:staff ratio than low-performing departments.

The fifth dimension explored the skewness of performance and confirms earlier studies indicating the importance of relatively few highly prolific researchers for the overall research performance of departments (Fox and Nikivincze 2021; Kwiek 2018). Studies that have explored the impact of individuals in more detail have also found that the existence of large collaborative networks is a factor related to highly productive individuals (see e.g. Ramsden 1994). Given the limited impact of departmental size found in our data, this suggests that such networks are more international and to a lesser extent established within the departments the highly productive individuals are affiliated with (Kwiek 2018).

Future research agendas

The current study has an exploratory purpose, although our findings of weak correlations suggest that there is not a strong relationship between research performance and the departmental characteristics studied. The only strong correlation we identify underlines the importance of a few high performers in research who are able to boost the overall research performance of the department they belong to (Fox and Nikivincze 2021; Kwiek 2018).

Our study is limited to Norway and situated in a policy context with mergers of institutions. Although we believe that the findings using Norway as a case have general relevance, we think our research design could also be applied to other settings, including research assessments and similar exercises.

Thus, our study points to the value of an institutional perspective on research performance, and exploring the organisational contexts that embed such academic high performers would be an interesting follow-up study. Previous studies of such performers have underlined the relationship between high performance and academic rank (being a professor), their academic networks – not least internationally – but also the working climate they are embedded within (Fox and Nikivincze 2021; Kwiek 2018; Shin, Lee, and Kim 2018). The current study did not explore such cultural factors, and an interesting question from a governance perspective is to what extent such cultural factors reflect specific patterns of organising. More qualitative approaches are needed to shed light on these issues, including the specific working conditions of productive researchers, the ways and extent to which they are embedded in (or perhaps shielded from) departmental activities, and the internal governance structures facilitating such academic high performers.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Norges Forskningsråd [grant number 298969].

Notes on contributors

Dag W. Aksnes

Dr. Dag W. Aksnes is a research professor at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) and affiliated with the Centre for Research Quality and Policy Impact Studies (R-QUEST). Aksnes’ research covers various topics within the field of bibliometrics, such as studies of citations, citation analyses and assessments of research performance.

Siri Brorstad Borlaug

Dr. Siri Brorstad Borlaug is a senior researcher at NIFU. She is the deputy director of studies of research and innovation and of R-QUEST. Her main fields include research policy, research quality, organisation studies and particularly higher education institutions, and university-society relations.

Thea Eide

Thea Eide is a research assistant at NIFU and has a master’s degree in sociology. Her research interests are management and organisation of higher education, organisational changes in higher education and the relationship between science and policy making.

Bjørn Stensaker

Dr. Bjørn Stensaker is a professor of higher education at University of Oslo, and a research professor at NIFU. He has a special interest in studies of policy reform, governance and organisational change in higher education institutions. He has published widely on these topics in a range of international journals and books.

Notes

1 It should be noted that the last year of observation coincides with the Covid pandemic (from March 2020). However, due to the publication lag from research to published paper (usually one year or more), the impact of the pandemic is unlikely to be of relevance for the study.

References

  • Aghion, P., M. Dewatripont, C. Hoxby, A. Mas-Colell, and A. Sapir. 2010. “The Governance and Performance of Universities: Evidence from Europe and the US.” Economic Policy 25 (61): 7–59. https://doi.org/10.1111/j.1468-0327.2009.00238.x
  • Ajjawi, R., P. E. S. Crampton, and C. E. Rees. 2018. “What Really Matters for Successful Research Environments? A Realist Synthesis.” Medical Education 52 (9): 936–950. https://doi.org/10.1111/medu.13643
  • Aksnes, D. W., L. Langfeldt, and P. Wouters. 2019. “Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories.” SAGE Open 9 (1): 1–17. https://doi.org/10.1177/2158244019829575
  • Aksnes, D. W., and G. Sivertsen. 2019. “A Criteria-Based Assessment of the Coverage of Scopus and Web of Science.” Journal of Data and Information Science 4 (1): 1–21. https://doi.org/10.2478/jdis-2019-0001
  • Bauer, H. P., G. Schui, A. von Eye, and G. Krampen. 2013. “How Does Scientific Success Relate to Individual and Organizational Characteristics? A Scientometric Study of Psychology Researchers in the German-Speaking Countries.” Scientometrics 94 (2): 523–539. https://doi.org/10.1007/s11192-012-0760-3
  • Bazeley, P. 2010. “Conceptualizing Research Performance.” Studies in Higher Education 35 (8): 889–903. https://doi.org/10.1080/03075070903348404
  • Bleiklie, I., J. Enders, and B. Lepori. 2015. “Organizations as Penetrated Hierarchies: Environmental Pressures and Control in Professional Organizations.” Organization Studies 36 (7): 873–896. https://doi.org/10.1177/0170840615571960
  • Bornmann, L., and L. Leydesdorff. 2017. “Skewness of Citation Impact Data and Covariates of Citation Distributions: A Large-Scale Empirical Analysis Based on Web of Science Data.” Journal of Informetrics 11 (1): 164–175. https://doi.org/10.1016/j.joi.2016.12.001
  • Capano, G. 2011. “Government Continues to Do Its Job. A Comparative Study of Governance Shifts in the Higher Education Sector.” Public Administration 89 (4): 1622–1642. https://doi.org/10.1111/j.1467-9299.2011.01936.x
  • Carayol, N., and M. Matt. 2004. “Does Research Organization Influence Academic Production?: Laboratory Level Evidence from a Large European University.” Research Policy 33 (8): 1081–1102. https://doi.org/10.1016/j.respol.2004.03.004
  • Christensen, T. 2011. “University Governance Reforms: Potential Problems of More Autonomy?” Higher Education 62 (4): 503–517. https://doi.org/10.1007/s10734-010-9401-z
  • Dundar, H., and D. R. Lewis. 1998. “Determinants of Research Productivity in Higher Education.” Research in Higher Education 39 (6): 607–631. https://doi.org/10.1023/A:1018705823763
  • Edgar, F., and A. Geare. 2013. “Factors Influencing University Research Performance.” Studies in Higher Education 38 (5): 774–792. https://doi.org/10.1080/03075079.2011.601811
  • Enders, J., H. de Boer, and E. Weyer. 2013. “Regulatory Autonomy and Performance: The Reform of Higher Education re-Visited.” Higher Education 65 (1): 5–23. https://doi.org/10.1007/s10734-012-9578-4
  • Fox, M. F., and I. Nikivincze. 2021. “Being Highly Prolific in Academic Science: Characteristics of Individuals and Their Departments.” Higher Education 81 (6): 1237–1255. https://doi.org/10.1007/s10734-020-00609-z
  • Frølich, N., T. Christensen, and B. Stensaker. 2019. “Strengthening the Strategic Capacity of Public Universities: The Role of Internal Governance Models.” Public Policy and Administration 34 (4): 475–493. https://doi.org/10.1177/0952076718762041
  • Heesen, R. 2017. “Academic Superstars: Competent or Lucky?” Synthese 194 (11): 4499–4518. https://doi.org/10.1007/s11229-016-1146-5
  • Heinze, T., P. Shapira, J. D. Rogers, and J. M. Senker. 2009. “Organizational and Institutional Influences on Creativity in Scientific Research.” Research Policy 38 (4): 610–623. https://doi.org/10.1016/j.respol.2009.01.014
  • Hesli, V. L., and J. M. Lee. 2011. “Faculty Research Productivity: Why do Some of our Colleagues Publish More Than Others?” PS: Political Science and Politics 44 (2): 393–408. https://doi.org/10.1017/S1049096511000242
  • Hicks, D. 2012. “Performance-based University Research Funding Systems.” Research Policy 41 (2): 251–261. https://doi.org/10.1016/j.respol.2011.09.007
  • Horta, H., and T. A. Lacy. 2011. “How Does Size Matter for Science? Exploring the Effects of Research Unit Size on Academics’ Scientific Productivity and Information Exchange Behaviors.” Science and Public Policy 38 (6): 449–460. https://doi.org/10.3152/030234211X12960315267813
  • Horta, H., and J. M. Santos. 2020. “Organisational Factors and Academic Research Agendas: An Analysis of Academics in the Social Sciences.” Studies in Higher Education 45 (12): 2382–2397. https://doi.org/10.1080/03075079.2019.1612351
  • Jordan, J. M., M. Meador, and S. J. K. Walters. 1989. “Academic Research Productivity, Department Size, and Organization: Further Results.” Economics of Education Review 8 (4): 345–352. https://doi.org/10.1016/0272-7757(89)90020-4
  • Kwiek, M. 2018. “High Research Productivity in Vertically Undifferentiated Higher Education Systems: Who Are the Top Performers?” Scientometrics 115 (1): 415–462. https://doi.org/10.1007/s11192-018-2644-7
  • Kyvik, S. 1995. “Are Big Departments Better Than Small Ones?” Higher Education 30 (3): 295–304. https://doi.org/10.1007/BF01383753
  • Nygaard, L. P., D. W. Aksnes, and F. N. Piro. 2022. “Identifying Gender Disparities in Research Performance: The Importance of Comparing Apples with Apples.” Higher Education 84 (5): 1127–1142. https://doi.org/10.1007/s10734-022-00820-0
  • Paradeise, C., E. Reale, I. Bleiklie, and E. Ferlie. 2009. University Governance. Western European Comparative Perspectives. Dordrecht: Springer.
  • Perkmann, M., R. Salandra, V. Tartari, M. McKelvey, and A. Hughes. 2021. “Academic Engagement: A Review of the Literature 2011-2019.” Research Policy 50 (1): 104114. https://doi.org/10.1016/j.respol.2020.104114
  • Quimbo, M., and E. C. Sulabo. 2014. “Research Productivity and its Political Implications in Higher Education Institutions.” Studies in Higher Education 39 (10): 1955–1971. https://doi.org/10.1080/03075079.2013.818639
  • Ramirez, F. O., and T. Christensen. 2013. “The Formalization of the University: Rules, Roots, and Routes.” Higher Education 65 (6): 695–708. https://doi.org/10.1007/s10734-012-9571-y
  • Ramsden, P. 1994. “Describing and Explaining Research Productivity.” Higher Education 28 (2): 207–226. https://doi.org/10.1007/BF01383729
  • Rørstad, K., and D. W. Aksnes. 2015. “Publication Rate Expressed by Age, Gender and Academic Position - A Large-Scale Analysis of Norwegian Academic Staff.” Journal of Informetrics 9 (2): 317–333. https://doi.org/10.1016/j.joi.2015.02.003
  • Ruiz-Castillo, J., and R. Costas. 2014. “The Skewness of Scientific Productivity.” Journal of Informetrics 8 (4): 917–934. https://doi.org/10.1016/j.joi.2014.09.006
  • Shin, J. C., S. J. Lee, and Y. Kim. 2018. “Does Governance Matter? Empirical Analysis of Job Satisfaction and Research Productivity.” In Higher Education Governance in East Asia, edited by J. C. Shin, 243–259. Singapore: Springer Nature.
  • Sivertsen, G. 2018. “The Norwegian Model in Norway.” Journal of Data and Information Science 3 (4): 3–19. https://doi.org/10.2478/jdis-2018-0017
  • Smeby, J. C., and S. Try. 2005. “Departmental Contexts and Faculty Research Activity in Norway.” Research in Higher Education 46 (6): 593–619. https://doi.org/10.1007/s11162-004-4136-2
  • Thomas, D. A., M. Nedeva, M. M. Tirado, and M. Jacob. 2020. “Changing Research on Research Evaluation: A Critical Literature Review to Revisit the Agenda.” Research Evaluation 29 (3): 275–288. https://doi.org/10.1093/reseval/rvaa008
  • von Tunzelmann, N., M. Ranga, B. Martin, and A. Geuna. 2003. The Effects of Size on Research Performance: A SPRU Review. Report Prepared for the Office of Science and Technology, Department of Trade and Industry. Brighton: SPRU.
  • Way, S., A. C. Morgan, D. Larremore, and A. Clauset. 2019. “Productivity, Prominence, and the Effects of Academic Environment.” PNAS 116 (22): 10729–10733. https://doi.org/10.1073/pnas.1817431116
  • Whitley, R. 2000. The Intellectual and Social Organization of the Sciences. New York: Oxford University Press.
  • Wills, D., G. Ridley, and H. Mitev. 2013. “Research Productivity of Accounting Academics in Changing and Challenging Times.” Journal of Accounting and Organizational Change 9 (1): 4–25. https://doi.org/10.1108/18325911311307186