Research Article

“It’s Not so Easy to Measure Impact”: A Qualitative Analysis of How Universities Measure and Evaluate Their Communication


ABSTRACT

Universities are key actors at the intersection of science and society. Their strategic communication and its effective implementation are essential, as is the measurement and evaluation (M&E) of its impact. Despite the growing relevance of M&E in university communication, however, empirical studies on the subject are rare. Therefore, this study explores M&E practices of central communication departments at Swiss universities through semi-structured interviews and document analysis. Findings show that M&E is still in its infancy at most universities. The study identifies the resources and skills of communication professionals, digital technologies, and a culture of learning as the primary factors enabling or constraining M&E. Findings further show that the organizational context, notably the (perceived) degree of competition, the availability of personnel resources, and the professionalism of the communication department, rather than the sector of higher education itself, influences M&E practices. In addition to resources, our findings point to an engaged leadership that actively supports a culture of learning and the further development of skills in communication departments as the most important factor for M&E practices.

Introduction

Universities are at the forefront of science communication, being key actors at the science-society interface (Engwall, Citation2008; Marcinkowski et al., Citation2014). Apart from creating new scientific knowledge and providing society with a skilled workforce, universities play a key role in enhancing public understanding of science and establishing a dialogue with the broader public (Laredo, Citation2007). In recent decades, the competition among universities for funding and talent and the expectation to actively engage with societal needs have risen (Krücken, Citation2021; Pinheiro et al., Citation2015). Since communication is crucial to all these areas, the personnel and material resources for strategic communication at universities have grown (Entradas & Bauer, Citation2022; Entradas et al., Citation2023; Fürst et al., Citation2022; Höhn, Citation2011). With the expansion of communication activities, questions about the impact and effectiveness of university communication have followed suit (Jensen, Citation2014; Marcinkowski et al., Citation2013). Along with this, measurement and evaluation (M&E) of university communication have gained importance. While the terms measurement and evaluation are often used interchangeably, evaluation is the systematic assessment of the value of an object – in this case, university communication – and is based on measurement, which uses quantitative and qualitative social science research methods such as surveys or content analyses to collect data on the effects of communication (Buhmann & Volk, Citation2022; Pellegrini, Citation2021).

Despite the growing relevance of the evaluation of communication in public sector organizations (for an overview see, e.g., Luoma-aho & Canel, Citation2020) and especially in scientific organizations (Niemann et al., Citation2023; Raupp & Osterheider, Citation2019), empirical studies dedicated to examining M&E of university communication are almost absent in scholarship. We seek to remedy this with a qualitative study analyzing the M&E of universities’ central communication departments. Our study combines a document analysis of evaluation reports from eight selected universities in Switzerland with semi-structured interviews conducted with staff and heads of central communication departments as well as with members of university leadership. The respective universities were sampled based on an empirically derived typology (Fürst et al., Citation2024) to represent four distinct types of institutional university communication. The study seeks to understand how, if at all, central communication departments at Swiss universities conduct M&E, and with what focus. Furthermore, we analyze which factors enable and constrain M&E and explore the special role that university leadership plays in this context. The study provides in-depth insights into how impact measurement of university communication is adopted, or rather ignored, and has practical implications for universities and strategic science communication more broadly.

Literature review

Research on M&E of university communication sits at the intersection of science communication, higher education research, and strategic communication and public relations (PR). In the field of science communication, empirical studies on M&E in university contexts are still limited, but scholarly interest has been growing in recent years, as reflected in commentaries and edited volumes (e.g., Jensen & Gerber, Citation2020; Niemann et al., Citation2023) as well as empirical studies assessing the effectiveness of training programs (e.g., Rodgers et al., Citation2018), citizen science campaigns (e.g., Andersen et al., Citation2021), science festivals (e.g., Adhikari et al., Citation2019; Pennisi & Lakey, Citation2018) or institutional communication by science centres (e.g., King et al., Citation2015) and museums (e.g., Chen et al., Citation2017). These contributions demonstrate that M&E efforts in science communication are driven by the need for evidence-based findings on the effectiveness of communication strategies that enable better-informed decision-making processes (Ziegler et al., Citation2021). Such expectations are related to a broader professionalization of science communication (Raupp & Osterheider, Citation2019), but also tied to the increased expectations of external stakeholders (e.g., science funders or politicians) for ‘hard evidence’ on the effectiveness and impact of science communication (e.g., Jensen & Gerber, Citation2020). This is also affected by recent changes in the higher education system, as outlined in the following section.

Changes in higher education and their impact on communication

The higher education sector has undergone fundamental changes over the past three decades: New public management (NPM) reforms have re-shaped the public sector, importing norms and practices from the private sector and re-structuring public services as quasi-markets (Fredriksson & Pallas, Citation2018; Marcinkowski et al., Citation2014). The NPM reforms have resulted in changing governance of universities, triggering an increased competition for talent, funding, and public visibility on the organizational level (Engwall, Citation2008; Krücken, Citation2021). As a result, public communication has become more important for higher education institutions (Entradas & Bauer, Citation2022; Fürst et al., Citation2022), reflected in the expansion of communication departments (Engwall, Citation2008; Schwetje et al., Citation2020) and communication activities (Engwall, Citation2008; Höhn, Citation2011; Marcinkowski et al., Citation2013; Schwetje et al., Citation2020). The growing trend towards more professional and strategic communication among universities (Fürst et al., Citation2022) also poses new questions about the impact of communication strategies, emphasizing the need to evaluate university communication. While such trends are often assumed to apply to higher education communication in general, more detailed analyses reveal that there are considerable differences among the central communication departments between universities: These differ with regard to the level of professionalism, the diversity of channels used and stakeholders addressed, but also the available resources for communication (Bühler et al., Citation2007; Fürst et al., Citation2022; Höhn, Citation2011). Universities also face, or at least perceive, different degrees of competition, for example, for research funds or students, which might also impact their communication (Lepori et al., Citation2014; Marcinkowski et al., Citation2014). Quite plausibly, such characteristics of communication departments may also have implications for M&E practices in universities.

Conceptualizing M&E of university communication

Research on M&E of communication in organizational contexts stems from the notion that organizations use “deliberate and purposive communication (…) to reach set goals” (Holtzhausen & Zerfass, Citation2013, p. 74) and asks whether communication contributes to achieving predefined organizational goals or missions. Accordingly, an evaluation necessitates the a priori definition of communication goals, which are usually derived from the university’s strategic goals in the planning phase. The quantitative or qualitative data generated through measurements provide indicators to compare achieved results with target goals and allow evaluation of the value created through communication. On the one hand, evaluation thereby enables communicators in organizations to assess the contribution of communication to organizational goals and societal impact. On the other hand, such data can also be used for learning and optimizing internal processes at the level of the communication department or single projects (Buhmann & Likely, Citation2018).

Numerous models, methods, and metrics have been developed for monitoring and evaluating communication efforts in different contexts and with different purposes (Buhmann & Volk, Citation2022). Especially PR research has dealt extensively with questions of M&E in private and public sector organizations (e.g., Zerfass et al., Citation2017), and many of the conceptual logics and models of evaluation apply to universities as well (Volk, Citation2023).

One of the most widely used models is AMEC’s integrated evaluation framework (AMEC, Citation2020), developed for evaluating communication in all types of organizations (for an overview see, e.g., Macnamara & Gregory, Citation2018). Despite variations among the models, most agree that communication evaluation can be conducted across several stages, namely inputs, outputs, outcomes, and impacts. Raupp and Osterheider (Citation2019) adapted these four stages for university communication. They are described as follows (see also the illustrative sketch after the list):

  • Inputs focus on the efforts and resources invested in a communication activity, such as the time spent and personnel costs associated with, e.g., creating press releases, processing journalists’ requests, or producing social media content.

  • Outputs can be further distinguished into primary and secondary outputs: Primary (sometimes also called ‘internal’) outputs comprise the communication activities that are created or published, often quantifiable e.g., as the number of press releases or social media posts published, the quantity of print products, newsletters, or website updates. Secondary (sometimes also called ‘external’) outputs assess to what extent target groups were (potentially) reached through communication activities, measured e.g., as the achieved media coverage (pick-ups and reprints), web traffic, and social media reach (views or impressions).

  • Outcomes can also be subdivided into direct and indirect outcomes: Direct outcomes (sometimes also called ‘outtakes’) focus on measuring what target groups do in response to a communication activity, i.e. on their immediate reactions, responses, or engagement with communication, measured, e.g., via social media interactions (likes, shares, comments), followers or subscriptions, or downloads. Indirect outcomes refer to the short- or medium-term effects of a communication activity on target groups, particularly changes in cognitions, attitudes, emotions, or behavioral intentions, such as beneficial relationships with alumni or a positive organizational image among (members of) the public.

  • Impacts (sometimes also called ‘outflows’) refer to the long-term and/or substantial value created through communication activities at the organizational or societal level, which can be both quantitative and qualitative. Examples include reputation value or position in rankings, the number of third-party-funded grants or collaborative projects, or student enrolment numbers.
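To make these stages more tangible, the following sketch shows how a communication department might organize indicators by stage for internal reporting. It is an illustrative example only; the indicator names and figures are hypothetical and are not drawn from the framework literature or from the study data.

```python
from dataclasses import dataclass, field

# Illustrative only: stage names follow the adapted four-stage framework above;
# the indicators and figures are hypothetical examples, not study data.
@dataclass
class EvaluationStage:
    name: str
    indicators: dict = field(default_factory=dict)

stages = [
    EvaluationStage("input", {"personnel_hours": 120, "budget_chf": 8000}),
    EvaluationStage("output_primary", {"press_releases": 14, "social_media_posts": 52}),
    EvaluationStage("output_secondary", {"media_pickups": 37, "impressions": 410_000}),
    EvaluationStage("outcome_direct", {"likes": 1900, "shares": 240, "downloads": 85}),
    EvaluationStage("outcome_indirect", {"image_survey_mean": 4.1}),  # e.g., 5-point scale
    EvaluationStage("impact", {"new_enrolments": 310, "third_party_grants": 6}),
]

# Compile a minimal summary report, grouped by stage.
for stage in stages:
    summary = ", ".join(f"{key}={value}" for key, value in stage.indicators.items())
    print(f"{stage.name}: {summary}")
```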

Empirical research on measurement and evaluation of university communication

While a comprehensive approach to M&E of university communication assesses communication along these four dimensions, the scarce existing research shows that evaluation practice looks quite different. Early surveys of German universities show a modest extent of M&E in university communication and reveal that, during the early 2000s, 28% of German universities evaluated their communication practices (Bühler et al., Citation2007). A few years later, in 2009, this was still the case for less than half of German universities (Höhn, Citation2011). Yet when they did, universities mostly compiled media clippings or relied on self-evaluations of data such as event visitor numbers, and on personal, non-systematic observations (Bühler et al., Citation2007; Höhn, Citation2011). Similarly, an early study of communication departments at Spanish universities (Busto Salinas, Citation2013) showed that most evaluated media presence (97%) and tracked visits to the institutional web page (78%). How evaluation data is ultimately used has hardly been researched; however, Sauter-Sachs (Citation1992) gives the example of an image survey among Swiss citizens, the results of which were used as an argument for hiring more employees for university communication.

The findings on university communication fit into the larger picture that most public sector organizations do not evaluate their activities overly thoroughly: They often focus on easily measurable indicators and effects, namely news clippings and the quantitative extent of website/intranet use, but less frequently measure mid-term and long-term effects of their communication (Zerfass & Volk, Citation2020). In light of the lack of up-to-date empirical research on the evaluation of university communication, we ask:

RQ1:

How, if at all, do central communication departments at universities evaluate their communication?

Factors influencing measurement and evaluation of communication

Given infrequent evaluations of communication across organizations, research in the field of science communication and PR has attempted to identify factors that lead to such a “deadlock” (Macnamara, Citation2015). Among the identified constraining factors are a lack of sufficient time or budget for evaluation at the organizational level (Besley, Citation2020; Jensen, Citation2014), the absence of standards and clear regulations regarding data protection and privacy (Economou et al., Citation2023), missing methodological competencies and knowledge (Zerfass et al., Citation2017), and lack of motivation or even disinterest among the communicators at the individual level (Buhmann & Brønn, Citation2018; Nothhaft & Stensson, Citation2019).

Facilitating factors are primarily found at the organizational level and comprise a holistic approach to evaluation, sufficient investment of resources, alignment with organizational processes and structures, and a supportive culture (Gilkerson et al., Citation2019; Romenti et al., Citation2019). In addition, recent research has identified new technologies, customizable digital tools, and processes of automation as enablers, since organizations can nowadays collect more data than ever before, in almost real-time, and constantly monitor their communication efforts as well as digital stakeholder behavior (Economou et al., Citation2023; Fitzpatrick & Weissman, Citation2021).

What remains unclear is whether the enabling and constraining factors identified in the above-mentioned studies also influence the evaluation practices in communication departments at universities, which differ considerably from corporations, NGOs, and other organizations (Musselin, Citation2007; Raupp & Osterheider, Citation2019). Against this background, we ask:

RQ2:

Which factors enable and constrain evaluation practices of central communication departments at universities?

The role of organizational leadership

Leadership expectations for reporting on the impact of communication and demonstrating accountability have been identified as another factor that can promote the use of evaluation (e.g., Gilkerson et al., Citation2019; Swenson et al., Citation2019): By demanding transparency over communication spending and effectiveness, organizational leaders can exert pressure on communication departments to measure performance and justify decisions based on data. However, to date, empirical research has mostly examined communicators’ perceptions of organizational leadership’s expectations regarding the reporting of communication impact. Conversely, only very few studies have surveyed organizational leaders regarding their expectations or understanding of the impact of communication (e.g., Marcinkowski et al., Citation2013; for corporate communication see Brønn, Citation2014). To our knowledge, leaders’ expectations toward impact measurement and evaluation of communication remain largely under-researched, particularly for universities, where organizational leadership exerts a strong influence on the practices and routines in communication departments (Engwall, Citation2008; Fürst et al., Citation2022; Schwetje et al., Citation2020). We therefore ask:

RQ3:

What expectations does university leadership have toward communication evaluation?

Methods

This study uses data from semi-structured interviews with 17 heads, deputies, and employees of central communication departments and 13 members of university management (i.e., rectors, deputy rectors, and secretaries general) at eight Swiss universities that are representative of the higher education landscape in Switzerland. Moreover, it relies on an analysis of 17 documents comprising evaluation reports as well as relevant strategic and organizational documents. Notably, our study did not analyze the decentral level, i.e., M&E of communication by faculties, departments, or institutes, which nonetheless also contributes to the communication of higher education institutions (Entradas & Bauer, Citation2022).

Sampling procedure

Switzerland, where the study was conducted, hosts 42 universities of three major types: research universities (RU), universities of applied sciences (UAS), and universities of teacher education (UTE). The selection of eight universities for this study was based on a quantitative whole-population survey of 203 communication practitioners from 37 universities of all organizational types. Survey data were aggregated at the organizational level, and hierarchical cluster analysis was used to identify four ‘types’ of university communication departments, based on 10 indices and variables that encompassed the level of professionalism, diversity, and intensity of communication as well as the degree of strategic orientation and proximity to university leadership (Fürst et al., Citation2024); a schematic sketch of such a clustering procedure follows the cluster descriptions below:

  1. The minimalists (5 universities) have low intensity of communication, meaning they devote the least personnel resources to communication, produce the least output, and perceive a medium level of competition with other universities. They also have low to medium levels of professionalism and diversity and the least pronounced strategic orientation of communication.

  2. The well-resourced competitors (16 universities) have the strongest competitive orientation and greatest diversity in their communication portfolio. They show very high levels of intensity in communication, meaning that they have by far the largest communication teams, produce an above-average amount of output, and perceive the highest level of competition with other universities. In contrast, they have low to medium levels of strategic orientation and a below-average degree of professionalism in their communication departments and among their staff.

  3. The specialized strategists (9 universities) have low intensity of communication overall, meaning that they commit comparatively few personnel resources to communication, produce less output than most other universities, and perceive the lowest level of competition with other universities. This type shows a very strong strategic orientation, but low diversity, and a medium level of professionalism among the communication department and its staff.

  4. The professional all-rounders (7 universities) have an overall medium to high intensity of communication, meaning that their communication teams are of medium size and their perceived competition with other universities is average while they produce and disseminate the largest amount of output. This cluster also has a strongly pronounced strategic orientation, high diversity, and by far the most pronounced professionalism of the communication department and its staff.
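As background on the method behind this typology, the following sketch illustrates how a hierarchical cluster analysis of organization-level communication indices can be run in principle. The data, index names, and linkage settings are illustrative assumptions and do not reproduce the actual procedure or data of Fürst et al. (Citation2024).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical organization-level data: rows = universities, columns = indices
# (e.g., professionalism, diversity, intensity, strategic orientation, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(37, 10))      # 37 universities, 10 indices (illustrative)
X = zscore(X, axis=0)              # standardize indices before clustering

# Agglomerative clustering with Ward's linkage, cut into four clusters.
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")

for c in range(1, 5):
    print(f"Cluster {c}: {np.sum(clusters == c)} universities")
```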

For contextualization, we provide results from the standardized survey of 37 universities (see Table 1): Findings show that two-thirds of communication departments carry out evaluation themselves (65%), with the minimalists cluster clearly lagging behind. Overall, most communication departments in universities are involved in evaluation practices; however, 10% of the respondents stated that their department does not conduct any evaluations. Moreover, on a 7-point scale from 0 to 6, evaluations are only moderately used to optimize communication processes (M = 3.6), with the specialized strategists and professional all-rounders clusters scoring above average in their utilization of data for optimization.

Table 1. Sample description of clusters.

From each of the four clusters, we selected the two most typical universities for our interview study (see Table 2), conducting 30 interviews in total at these eight institutions. Most interviews were conducted in person, often at the premises of the analyzed institution, and in a few cases online via Teams or Zoom due to COVID-19 restrictions. For each institution, the first point of contact was the head or deputy head of communication, who approved the participation of the university in the study. In each interview, the interviewee was asked to share the names of other relevant potential interviewees within the central communication department or among university leadership. Once the interviews with communicators were finalized, interviews with university leadership were conducted.

Table 2. Selection of eight cases.

Data collection and analysis

The 30 interviews were conducted in two waves, from July 2021 to September 2021 and from March 2022 until March 2023, in German, French, and Italian. Semi-structured interview guides comprising seven larger themes were used (see appendix section 1). These were adapted to communicators and university leaders, respectively, and allowed for openness and flexibility in the individual interviews. In addition, 17 documents from seven universities were obtained through the heads of the central communication department at each university. The documents were provided in three different formats:

  1. Word or PDF documents (61%), which described communication strategies or marketing plans, or comprised self- and external evaluation reports;

  2. PowerPoint presentations and illustrations (27%), which described strategies, measures, and measurement results;

  3. Excel sheets (12%), which comprised data from measurements.

Most documents were created between 2019 and 2021, while a few were older (i.e. 2017 or 2015). The vast majority of documents were intended for the internal staff at the communication department with a few exceptions, in which documents were either intended for the university leadership or publicly available online (e.g., communication evaluation report in the context of wider organizational evaluation). For a few documents, it was not possible to determine the target audience or the specific author(s) of a document.

The interview transcripts and documents were analyzed using qualitative content analysis in MAXQDA following Rädiker and Kuckartz (Citation2019) and Mayring (Citation2014). The codebook comprised seven categories with sixteen sub-categories for evaluation and was used to analyze both the interview transcripts and documents (see appendix section 2). Categories were developed both deductively from the literature (e.g., stages of evaluation, methods of evaluation, time of evaluation) and inductively from the material (e.g., expectations from leadership, subcategories of enabling and constraining factors). The data set was analyzed by the first author, including continuous iterations between data and literature and discussions with all other authors.

Findings

Evaluation practices (RQ1)

With regard to RQ1 (how, if at all, communication departments at universities evaluate their communication), the interviews with university communicators and the content analysis of documents revealed similarities and differences concerning the evaluation stages, as well as the responsibilities for and the utilization of evaluation insights.

Stages of evaluation

Input measurement is conducted systematically by only one university (of the professional all-rounders type); elements of input measurement are broached by two other universities, either as part of controlling efforts (well-resourced competitor) or through a mention in communication guidelines (professional all-rounder).

The most advanced input measurement was found at a university of applied sciences (professional all-rounder), which has developed a systematic “controlling of communication”. Input measures included financial budget and working hours at the level of single tasks and campaigns and were readily available in a dashboard. The head of marketing commented: “Planning and controlling efforts have certainly increased [in recent years].”

While input is rarely measured, all universities – including the minimalists – measure primary outputs, such as the number of media releases or social media posts created. Almost all universities also measure secondary outputs such as views or the amount of news coverage. Typically, data on media coverage and clippings are sourced through media monitoring software, online visits to websites are quantified through Google Analytics, and social media reach and impressions are obtained from the platforms directly. Most interviewees gave weight to secondary outputs and direct outcomes of digital communication, as illustrated by the following quote from a head of communication (specialized strategist): “What we measure is effectiveness at the behavioral level; we monitor the reach and interaction on social media, those are indicators that point in a good direction.”

In one university, advertising equivalence values (AVEs) – the equivalent cost of coverage in the media based on paid advertisements – are used for measuring the success of media coverage, even though AVEs have long been identified as an unsound and invalid metric (cf. AMEC, Citation2020; Watson & Zerfass, Citation2011). Interestingly, the interviewee was somewhat aware of its possible pitfalls:

Well, sound measures … We had this once and then stopped and now we use it again. Here, the advertising equivalency is shown. So if we place half a page [in the newspaper], that’s equivalent to 5,000 CHF. But that’s a little bit to be taken with a grain of salt. […] And maybe you’re also saying we’re doing it wrong. […] It’s a bit nit-picky on the part of the provider who offers this. – Team Leader (well-resourced competitor)

Despite a strong focus on quantitative indicators such as reach or impressions, one head of communication (well-resourced competitor) emphasized the importance of qualitative measures: “I think at universities, compared to companies, the qualitative response is also very important. That is, that people have the feeling it was well done.”

Most communicators reported evaluating direct outcomes such as reactions and engagement on social media channels and/or websites, but they often do not move beyond “a purely quantitative analysis” (Head of Communication, professional all-rounder). Another frequent indicator of success is the sheer number of participants at events such as open days:

We can see each week how many people are registered [for an event], that allows us to conclude if this [campaign] may have had a result because there are more people registered this week. – Social Media Manager (specialized strategist)

Few universities measure indirect outcomes based on audience or population surveys, for example, with the goal of evaluating readers’ interest and perceived credibility of in-house print products. Only one university commissions regular surveys from an opinion polling company to capture the university’s public image among stakeholders such as the public or students. Two universities conducted surveys as part of one-time usability tests to optimize their websites:

We did a broad usability test of our website […] with an agency that specializes in exactly that. That’s pretty resource intensive but has helped us a lot. It’s very, very, very informative. We derived several measures on how to optimize the web presence based on that. – Head of Communication (well-resourced competitor)

Three universities conduct impact measurements, focusing mainly on enrolment numbers of new students in specific study programs. Two of those universities adapted the logic of a customer journey and used various metrics along the typical touchpoints that students have with the university. As the following quote illustrates, the impact measures are understood and traced through the stages of cognitive, conative, and behavioral outcomes towards acquiring a target number of new students in a continuous cycle of optimization, powered by real-time data:

We do a lot of individual campaigns for all study programs … multi-layered campaigns that work by first generating attention, then stimulating a thought process and then an action … We have an entire monitoring cockpit where we see, okay, the campaigns are working. The inscription numbers are rolling in. And if we identify a problem somewhere—the video is not being clicked at all, nobody is watching that—then the video needs to be removed or placed differently. – Head of Marketing (professional all-rounder)

A communicator at one of the universities not measuring impact reflected on the difficulty of tracing it:

It’s not so easy to measure impact. Likes and similar metrics don’t tell us much about whether communication had the effect of getting more people interested in studying […] in the time frame of a campaign. – Team Leader (well-resourced competitor)

Overall, while external communication is evaluated by all universities, internal communication is not measured at all by half of the departments. Only one UAS, belonging to the professional all-rounders cluster, conducts evaluation at all four levels, as illustrated in Table 3. Another university from the professional all-rounders cluster has recently adopted a stage model for the evaluation of its communication.

Table 3. Methods used to evaluate communication input, output, outcome, and impact.

In terms of responsibility for evaluation, one university (professional all-rounder) stands out with a full-time employee for research and analytics as part of the marketing department. In all other universities, evaluation tasks are mostly conducted as self-evaluations, with communicators evaluating their own area of expertise such as media relations, social media, or the web. Often, this self-evaluation is intrinsically motivated by staff in the department. All universities had the quantification of media clippings conducted by external service providers.

Evaluation reporting

In addition to the statements of the interviewees, the document analysis provided insights into the underlying M&E concepts and measurement data and how these were recorded in writing or prepared in reports: Communication strategy or concept documents often specified the channels subjected to evaluation. Evaluation data was then compiled in different formats (e.g., PowerPoint presentations, Excel sheets) and partly condensed in reports, often at varying frequencies, from monthly to quarterly to annual editions. Reports were not structured by evaluation stages but typically differentiated by channels or communication goals. Four documents included information on evaluation concepts and responsibilities.

In the document analysis, we identified seven types of M&E reports (an illustrative sketch of how such metrics can be compiled follows the list):

  1. Media monitoring reports, differentiated by subdivisions/schools and countries/continents, including metrics such as the number of media reports or the top 10 media releases and their potential reach in readership;

  2. social media reports for the university’s social media channels (Twitter, Instagram, Facebook, LinkedIn, etc.), including metrics such as the total number of followers, follower growth per month, competitor analysis of followers and engagements of other universities in Switzerland; impressions, engagements, clicks, reach per story, and relative growth compared to previous years; top and flop posts based on impressions;

  3. website reports, including metrics such as page views, differentiated by microsites/topics, location of users, and relative growth compared to the previous quarter of the year;

  4. app reports, used by employees and students, including metrics such as downloads of the app, visits differentiated by topics (e.g., canteen), search terms, etc.;

  5. newsletter and mailing reports, sent to employees and students, including metrics such as open and click rate, top 5 articles, and relative growth compared to the previous year;

  6. corporate publishing reports of the university’s own magazines, directed towards employees or students and the general public, including metrics such as number of print/e-editions, clicks, reach differentiated by region, number of advertisements, but also a commissioned readership survey using metrics such as interest, awareness, perceived transparency, and credibility; and

  7. usability reports for the university websites, using, e.g., interviews and surveys, ergonomic tests, including metrics such as searchability, comprehensibility, information, and usability.
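To illustrate the kinds of metrics these reports contain, the following sketch shows how a basic social media report (follower growth, engagement rate, and top/flop posts by impressions) could be compiled from exported platform data. The column names and figures are hypothetical and are not taken from the analyzed documents.

```python
import pandas as pd

# Hypothetical monthly export of post-level data from a social media platform.
posts = pd.DataFrame({
    "post_id":     ["a1", "a2", "a3", "a4"],
    "impressions": [12000, 3400, 56000, 800],
    "engagements": [480, 95, 2100, 12],
})

followers_last_month, followers_this_month = 18200, 18650

report = {
    "follower_growth_abs": followers_this_month - followers_last_month,
    "follower_growth_pct": round((followers_this_month / followers_last_month - 1) * 100, 1),
    "engagement_rate_pct": round(posts["engagements"].sum() / posts["impressions"].sum() * 100, 2),
    "top_post": posts.loc[posts["impressions"].idxmax(), "post_id"],
    "flop_post": posts.loc[posts["impressions"].idxmin(), "post_id"],
}
print(report)
```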

Overall, these documents show a strong focus on relatively simple, quantitatively driven output and outcome measures – and very few universities provided us with all of the above types of reports. Qualitative indicators were almost absent in the documents, even if some concepts mentioned the quality of communication as an important factor of communicative success. In a few cases, quantitative data was commented on in reports with qualitative interpretations and contextual information, e.g., about external events influencing social media metrics.

The analysis further shows that reports are merely descriptive, documenting past performance (“summative evaluation”). When it comes to real-time monitoring of digital communication activities (“processual evaluation”), three universities use such tools: a cockpit including a heatmap (professional all-rounder), a risk management map (minimalist), and dashboards (specialized strategist), all of which allow for a continuous overview and monitoring of digital communication activities. Few reports or presentation slides include concrete suggestions for improvement (“formative evaluation”) or explanations for why certain activities were more successful than others. Hence, based only on the reports, with no accompanying oral presentation of or commentary on the results, it remains unclear in most cases what can be learned from the data and how communication activities could be optimized. Notably, M&E reports included very few elements of benchmarking against the performance of other universities.

Utilization of evaluation

Concerning the utilization of evaluation data to optimize communication, two interviewees emphasized the relevance of collecting data as part of understanding stakeholders’ interests or channel preferences (“formative evaluation”) and of using these insights for strategy development or channel selection, as illustrated by the following quote:

Today, we can measure much more. That’s cool, also for deriving learnings. And based on these learnings, and also internal and external surveys we have adapted our channels accordingly. – Head of Communication (well-resourced competitor)

One interviewee mentioned that analytics and data about trending topics are used for approaching scientists with requests for content (specialized strategist). Another interviewee added that the use of evaluation data for benchmarking purposes is important as well, that is, comparing the university’s own performance on social media channels with the performance of other universities (professional all-rounder). Finally, data gathered through evaluations are utilized by communication departments (professional all-rounder) to justify requests to university leadership for increases in budget and personnel. One communicator reported:

My first position was the result of an evaluation and later I got a second position for media work. That was also the result of an evaluation, where people [from university management] saw that the press office was actually understaffed. – Deputy Head of Communication (well-resourced competitor)

Enabling and constraining factors of evaluation (RQ2)

With regard to RQ2 (which factors enable and constrain evaluation practices), the interviews with university communicators and leadership reveal three factors, each with both enabling and constraining aspects.

First, the resources and skills of communication professionals emerged in the interviews with communicators and university leadership as both an enabling and a constraining force for evaluating communication activities, at both the organizational and the individual level. While sufficient knowledge, time, and resources enable evaluation, their absence hinders it. Many communicators reported a lack of time and priority as the main obstacles to evaluation:

I think more staff [would be needed] so that we have more time to evaluate the things better that we do. At the moment, we have so much on our plate that we just produce, produce, produce. But you can’t measure or optimize anything. We really don’t have the time, and that’s actually a shame. – Event Manager (professional all-rounder)

Second, digital technologies were mentioned in interviews and documents mainly as enabling factors for evaluation. This includes the potential to customize and automate measurements through analytics provided by tools such as Google Analytics, to collaborate across departments, units, and even organizations through tools like DeskNet, and to source data in real time from different social media platforms in an integrated interface such as dashboards or heatmaps. In several interviews, communicators agreed that digitalization enables evaluation, but they diverged in how they use digital technologies. Some embraced available tools to benchmark their own performance against that of the higher education “industry” through new tools and market research products (professional all-rounder) or engaged external service providers offering automated sentiment analyses and usability tests (well-resourced competitor). Using external tools rather than those built in-house sometimes led to resistance within the organization:

I’m totally into [using] software that you don’t have to build yourself … I don’t want to deal with technical issues. I want to make content. So we did it—behind the back of the IT department, who don’t really think it’s funny when we use external systems—but [those systems] are made for us, and they’re being developed further. – Head of Communication (well-resourced competitor)

While in this particular case, circumventing the IT department was successful, this approach entails several risks related to data security, maintenance, and lack of support from IT. Another group of communicators seemed keen to keep tools for evaluation as an in-house project, thus maintaining control over the process:

In part, outsourcing can be even more costly than doing it yourself. You have to explain everything and check and control it over and over. There are certain jobs where you have to understand the DNA of our university … Then you better do it yourself right away. – Head of Communication (well-resourced competitor)

Third, the existence or lack of a culture of learning was described as influencing both the conduct and the perceived benefits of evaluation. One team leader from a well-resourced competitor reflected that in earlier times, a culture of indifference towards communication evaluation prevailed: “In the past, evaluation was really not an issue at all. You didn’t monitor anything, you just had to produce. ‘Fire and forget!’ is what I used to call it.” In contrast, a few interviewees today perceived a culture of continuous improvement, which influences the way evaluation is viewed, conducted, and supported, reflecting a formative and processual rather than a summative approach:

Today, we don’t just take shots in the dark: when we publish something, we measure the results as precisely as possible … We are always continuously developing. That is typical for our culture. … The idea of innovation is really lived in communication. – Head of Communication (well-resourced competitor)

This perception was particularly pronounced in the professional all-rounders and well-resourced competitors clusters (especially for UAS), where a strong culture of learning was connected to perceived competition and market pressure to attract and retain students and staff, mirrored in a more marketing-driven logic of the communication departments. The following quote illustrates how competition acts as an enabler of M&E:

That’s also a difference to [research] universities. At universities of applied sciences, we are much more business-oriented. We’ve only been around for 20 years and had to fight harder … so we’ve expanded these efforts [in M&E] because strategy also has to do with analytics, with evaluation, with getting to know the target groups better and knowing what works and what doesn’t. – Head of Communication (well-resourced competitor)

The opposite was the case for the minimalist cluster (particularly the RU), in which interviewees perceived less competitive pressure and did not indicate a strong culture of learning. This also coincided with less elaborate M&E activities.

Expectations from university leadership (RQ3)

Expectations from university leadership can contribute to the advancement, or lack, of M&E activities at universities, which is why we examine university leadership’s expectations toward communication evaluation (RQ3). The analysis shows substantial differences between organizations, both in university leaders’ expectations toward evaluation and in the communicators’ perception of these expectations. While most university leaders support and approve of evaluation in communication departments, one gave it little thought:

The communication department maybe conducts a survey or something. But because I am well connected [with people], I found the spontaneous feedback to be almost the most honest, because that cannot be faked. – Rector (well-resourced competitor)

A communicator reported that the university management was not interested in evaluation reports, but associated communicative success mostly with a report of media clippings in the most important daily newspaper:

At the media desk, we report every month to the university management, but I don’t know if they read it. We have rarely heard any feedback. … The perception is rather anecdotal, like ‘I read that in the newspaper.’ – Deputy Head of Communication (well-resourced competitor)

However, in two cases leaders were intrinsically and personally interested in communication evaluation, as illustrated by the following quote:

As rector, I have an account on all these social media platforms myself and I’m always up to date on what’s going on, what we’re doing. I always look at it, actually several times a day … and from time to time, I get asked ‘Why do you do that? That’s a lot of effort!’ For me, it’s no effort at all. I find it highly exciting. I also have a special media report … every morning at 10:00 on my mail. I look what is happening. – Rector (professional all-rounder)

However, such high personal involvement breeds high expectations. From the perspective of communicators, in one university (professional all-rounder) there were pronounced expectations from leaders to legitimize budgets but also to justify strategic decisions regarding communication, as illustrated by the following quote:

The fact that you have to explain things more, why we are doing this, has become more common. And that’s not even necessarily just the budgets, but also: ‘Why are you doing these channels? Why this and this?’ – Head of Marketing (professional all-rounder)

Similarly, university leaders described increased pressures to legitimize public spending and protect organizational reputation as a driver for organization-wide evaluation:

I think a lot has changed, also in the sense that we now have to account for the money we have … This was simply not the case in the past, or not to the same extent. Reporting within the university was already there, but it wasn’t done systematically. Then came the evaluations, where the performance of units was systematically surveyed … and the ‘lawyerification’ … before you say anything, you always have a lawyer on your side, and you can’t just say something out loud. – Secretary General (well-resourced competitor)

A further concern and motivation for evaluation was monitoring communication to protect the reputation of the university against possible crises: “We also had to start monitoring […] so that we don’t suddenly get tangled up in a crisis.” – Rector (professional all-rounder)

Furthermore, it was notable that a few university leaders attributed the position in university rankings partially to a positive influence of communication and reputation building, while at the same time acknowledging that this is difficult to measure:

We suspect that the decline in the last rankings, a few places, seven places actually, has something to do with the drop in reputation, but we have no proof of this really. – Deputy Rector (Well-resourced Competitor)

Other university leaders, however, were skeptical about rankings in general and explicitly rejected them as a measure of impact or success.

When it comes to the use of evaluation data, leaders at a third of the universities furthermore expected competitive analyses, such as benchmarking products, services, and campaigns against other relevant players in higher education in terms of the relation between inputs and outcomes:

[We] could now make a comparison and say, ‘How many employees in the communication departments do other universities have?’ And then compare these numbers along the lines of: ‘What’s the input in other communication departments? How does it relate to outcomes in terms of reputation, in terms of public perception?’ – Deputy Rector (well-resourced competitor)

Discussion

Our study indicates that universities’ communication departments still rarely assess the effectiveness of their communication – even though this communication, in general, has become more important, extensive, and professionalized (Engwall, Citation2008; Entradas et al., Citation2023; Fürst et al., Citation2022; Schwetje et al., Citation2020). Findings show a strong focus of universities’ M&E activities on primary outputs, such as the number of press releases, and secondary outputs like media coverage or social media reach and website impressions. The dominance of output measures aligns with previous, considerably older studies on university communication (Bühler et al., Citation2007; Höhn, Citation2011) and across sectors (cf. Zerfass et al., Citation2017), and comes with pitfalls: While outputs and direct outcomes indicate quantities, they do not provide universities with an understanding of the actual effects on relevant target groups at the outcome stage (indirect outcomes) or the impact of communication. Moreover, the fact that M&E of internal communication is hardly institutionalized is noteworthy, especially since the relevance of internal communication has grown during the COVID-19 pandemic. Recent studies from the U.S. emphasize both the pitfalls of malfunctioning internal communication (Lemon & VanDyke, Citation2024) as well as its enabling role as a facilitator of interdisciplinary collaboration within universities (Lemon & VanDyke, Citation2023). This points to a discrepancy between the increased relevance of internal communication structures and their evaluation in higher education. Furthermore, it is interesting that student enrolment is tracked, while other possible indicators of impact – such as the acquisition of third-party funding or collaborations – were not mentioned by the communication professionals in our sample. Overall, with only a few universities moving towards more comprehensive and mature M&E practices, the evaluation of university communication is still largely in its infancy. Yet, the fact that the relevance of evaluation is undisputed by university communicators points to a professionalization process – albeit at a different pace.

Findings show three enabling and constraining factors that align with prior studies (e.g., Jensen, Citation2014; Ziegler et al., Citation2021): First, sufficient knowledge, time, and resources were found to advance evaluation, while their lack hinders it. Many communicators reported a lack of time and priority as the main obstacles to evaluation. Second, digital technologies were found to be mainly an enabling factor for evaluation, echoing findings of previous studies (e.g., Fitzpatrick & Weissman, Citation2021). The potential to customize and automate measurements through analytics and to collaborate across departments and units were identified as facilitators. Constraints included risks related to data security and maintenance, as observed in earlier studies (e.g., Economou et al., Citation2023), lack of support from IT, as well as the costs of technologies and tools, whether developed in-house or purchased externally. Third, interviews showed that a culture of indifference towards M&E prevailed in earlier times, while today, more universities strive for continuous improvement and learning in a formative sense. A culture of learning (or the lack thereof) was shown to influence how M&E was viewed, conducted, and supported within the communication departments. Interestingly, all of the identified factors have been shown to be influential in previous studies and literature reviews, suggesting that the existing systematization of factors (cf. Economou et al., Citation2023; Gilkerson et al., Citation2019; Romenti et al., Citation2019) is comprehensive and applicable to both private and public sector organizations.

The study shows that expectations from university leadership toward M&E differ widely, ranging from being disinterested to highly supportive. We examined and contrasted the perspective of university leaders with the perceptions of communicators and found that both actors partly perceived increasing pressures to legitimize actions and public spending, in line with previous studies (e.g., Engwall, Citation2008; Marcinkowski et al., Citation2013). University leaders who were personally interested in communication had higher expectations and more often supported evaluation initiatives. Communicators at such universities showed motivation and more elaborate activities in the field of communication evaluation. The opposite also seemed true: In universities where leaders were disinterested or even suspicious of M&E, communication departments had less elaborate evaluation activities. Leaders had three types of expectations towards evaluation: gaining a competitive advantage through evaluation by benchmarking products, services, and campaigns; protecting the reputation of the university through monitoring; and showcasing value-for-taxpayers-money through communication evaluation. It is interesting to note that what is measured in terms of impact by communication departments – student enrolment figures – was hardly mentioned by the university management as an indicator of success or impact. This indicates a discrepancy between what communication departments measure as an indicator of impact and what university leaders attribute to communication and could possibly explain why university management ascribes less strategic influence to communication departments than communication practitioners do (Fürst et al., Citation2022).

Finally, our study reveals interesting differences across the four clusters of universities, which were based on a prior quantitative analysis that distinguished four types of communication departments: minimalists, well-resourced competitors, specialized strategists, and professional all-rounders. A comparison shows notable differences in M&E:

  1. The minimalists (RU, UTE) were characterized by low to medium levels of professionalism, few resources, low levels of output, and an average level of competition with other universities. They conducted the most basic level of evaluation, focusing only on output measurements. They also reported that they lacked resources and skills for M&E.

  2. The well-resourced competitors (RU, UAS) were characterized by a below-average level of professionalism, but a strong competitive orientation and the highest personnel resources, while producing a lot of output. They conducted more advanced evaluations, including output and outcome measurements and partly input measurements, but did not assess impacts. Within this cluster, Case D (UAS) seems to be advancing towards more mature evaluation practices, supported by a pronounced culture of learning.

  3. The specialized strategists (UTE, UTE) were characterized by medium levels of professionalism, few personnel resources, low levels of output, and comparatively low competition with other universities. They conducted standard output and simple outcome measurements and tracked impact based on new student registrations, but nothing more. We assume that the specialized strategists may attribute a particularly high priority to new student registrations, as this cluster comprises by far the highest proportion of UTEs of all clusters, which have the lowest student numbers. In this cluster, however, university leadership seemed little interested in M&E, and there were few indications of a culture of learning.

  4. Finally, the professional all-rounders (UAS, UTE) were characterized by the highest level of professionalism, a medium perceived level of competition with other universities, medium personnel resources, and the largest amount of output produced and disseminated. This cluster had the most mature evaluation practices, including measurements along all stages, as well as a pronounced culture of learning and a supportive university leadership. Within this cluster, Case H (UAS) stands out as the most advanced in applying digital technologies to optimize M&E, driven by a strong marketing logic and with a full-time staff member dedicated to M&E.

Overall, these differences support the assumption that the characteristics of communication departments, as well as their positioning vis-à-vis leadership, are critical to understanding and explaining differences in the adoption of M&E practices. Despite the small sample of cases, our findings lead us to assume that differences among universities can be explained by the cluster characteristics, notably the different degrees of competition universities face, the resources they have, and the output they produce: both the professional all-rounders and the well-resourced competitors have the most resources for communication and produce the most output while navigating a competitive environment. This coincides with the most elaborate use of M&E for optimization and learning purposes (especially Case H).

Conclusion

This study explores the evaluation practices of communication departments at Swiss universities through semi-structured interviews and document analysis. It adds insights to the sparse and outdated research on M&E practices in university communication (cf. Bühler et al., Citation2007; Höhn, Citation2011), capturing not only the perspective of communicators but also that of university leadership, mirroring previous research regarding the relevance of university leaders (e.g., Engwall, Citation2008; Fürst et al., Citation2022; Marcinkowski et al., Citation2013). Contrasted with findings from other sectors, including the public sector (Zerfass et al., Citation2017), our results suggest that evaluation practices at universities are similarly focused on outputs, likewise fall short at the level of impact measurement, and face similar constraints.

A key implication of this study is that the organizational context and the characteristics of the communication department, notably the (perceived) degree of competition and the intensity and professionalism of communication work, rather than the sector, determine the maturity of M&E practices. Our findings clearly show that the level of resources available for M&E and an engaged leadership that actively supports a culture of learning and the development of skills in communication departments are crucial. Future research could systematically test whether differences in M&E are indeed primarily driven by organizational and departmental specificities, rather than by individual-level characteristics or differences across sectors.

The main limitations of this study are its single-country perspective and its focus on M&E in central communication departments, thereby neglecting the evaluation practices of decentralized communicators (e.g., in faculties and schools), which are also important in university communication (cf. Entradas & Bauer, Citation2022). Moreover, as this study only analyzed public universities in Switzerland, future studies should shed light on differences across higher education systems with varying degrees of market orientation and across public versus private universities to identify further macro-level and organizational-level factors driving M&E practices. Future studies should also rely on mixed methods (including ethnographic and observational methods) to minimize the risk of social desirability effects. Finally, given that technological innovations may open up new possibilities for M&E but may come with ethical and privacy concerns (Economou et al., Citation2023), future studies should scrutinize whether digital data and AI-based tools are used responsibly and ethically in M&E (Volk & Buhmann, Citation2023).

Our study also has several practical implications: M&E can equip university communication departments with the evidence needed to demonstrate to leadership how communication contributes to the overall goals of the university, which can be particularly helpful in times of cost-cutting. Our findings show that university leadership has an important role to play both in ensuring sufficient resources and in encouraging a culture of learning, which are necessary for advancing the M&E of communication at universities. Communication professionals who work in well-resourced departments and already conduct evaluations should reflect on how they present the services provided by their department to university leadership and whether they condense evaluation data into meaningful reports. Based on our findings, there seems to be a lack of reporting to university leadership on how communication contributes to achieving the strategic objectives of the university, as evaluation data is hardly ever compiled in strategy-oriented year-end reports. In communication departments with scarce resources and little leadership interest in evaluation, communication professionals can start by conducting small-scale pilot evaluations of selected, strategically relevant projects or formats (e.g., open days) using less costly, informal evaluation methods such as feedback forms or graffiti walls (see Grand & Sardo, Citation2017). They can compile evaluation data from such pilot projects in reports to help convince leadership to allocate resources for evaluation. Importantly, communication professionals should not use lack of time as an excuse for not conducting meaningful evaluations, as this is ultimately also a matter of priorities and willingness (cf. the discussion of practitioners’ strategic disinterest in evaluation in Nothhaft & Stensson, Citation2019).

Against the backdrop of digitalization, communication practitioners should be wary of an overemphasis on digital communication and quantitative metrics, which could result in a negative spiral in which channels and formats whose “success” is harder to capture in evaluations lose importance in the communication portfolio. Qualitative indicators, such as the tonality of media coverage, event feedback, or sentiment in social media comments, are important for better understanding whether university communication actually affects target audiences’ opinions of or attitudes toward science and thus contributes to universities fulfilling their third mission as public organizations in society. Rather than focusing on social media metrics or attendance numbers at university events such as open days or science fairs (critically discussed in Weingart & Joubert, Citation2019), emphasis should be placed on evaluating the real engagement and participation of audiences, which aligns with the third mission of universities and the expectations of societal stakeholders. This could be done, for example, using informal and gamified, often low-cost evaluation methods (for an overview, see Grand & Sardo, Citation2017).

In Switzerland, the results of this study may serve as a source of inspiration and guidance for universities aspiring to develop their M&E efforts. To support knowledge transfer, we offered all participating universities an in-person briefing on the results of the study, and six accepted the offer. The results strongly resonated with the lived reality of practitioners and confirmed a widespread ambition among communication practitioners to advance in this field.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Swiss National Science Foundation under grant agreement no. 184992.

Notes

1 The data were collected in two waves due to the maternity leave of the researcher conducting most interviews (first author). However, since the interview study was focused on long-term developments and established practices in university communication, the overall period of data collection from July 2021 to March 2023 does not compromise the quality and comparability of data.

2 The interviews were conducted mainly by the first author and some by the third author; see note 1 regarding the two waves of data collection.

References

  • Adhikari, B., Hlaing, P. H., Robinson, M. R., Ruecker, A., Tan, N. H., Jatupornpimol, N., Chanviriyavuth, R., & Cheah, P. Y. (2019). Evaluation of the Pint of Science festival in Thailand. Public Library of Science ONE, 14(7), e0219983. https://doi.org/10.1371/journal.pone.0219983
  • AMEC. (2020). Barcelona Principles 3.0. https://amecorg.com/barcelona-principles-3-0-translations/
  • Andersen, T. O., Dissing, A. S., Varga, T. V., Rod, N. H., & Delcea, C. (2021). The SmartSleep Experiment: Evaluation of changes in night-time smartphone behavior following a mass media citizen science campaign. Public Library of Science ONE, 16(7), e0253783. https://doi.org/10.1371/journal.pone.0253783
  • Besley, J. C. (2020). Five thoughts about improving science communication as an organizational activity. Journal of Communication Management, 24(3), 155–161. https://doi.org/10.1108/JCOM-03-2020-0022
  • Brønn, P. (2014). How others see us: Leaders’ perceptions of communication and communication managers. Journal of Communication Management, 18(1), 58–79. https://doi.org/10.1108/JCOM-03-2013-0028
  • Bühler, H., Naderer, G., Koch, R., & Schuster, C. (2007). Hochschul-PR in Deutschland: Ziele, Strategien und Perspektiven [Public relations of German higher education institutions: Goals, strategies, and perspectives]. Deutscher Universitäts-Verlag.
  • Buhmann, A., & Brønn, P. S. (2018). Applying Ajzen’s theory of planned behavior to predict practitioners’ intentions to measure and evaluate communication outcomes. Corporate Communications, 23(3), 377–391. https://doi.org/10.1108/CCIJ-11-2017-0107
  • Buhmann, A., & Likely, F. (2018). Evaluation and measurement. In R. L. Heath & W. Johansen (Eds.), The international encyclopedia of strategic communication (Vol. 1, pp. 625–640). Wiley Blackwell.
  • Buhmann, A., & Volk, S. C. (2022). Measurement and evaluation: Framework, methods, and critique. In J. Falkheimer & M. Heide (Eds.), Research handbook on strategic communication (pp. 475–489). Edward Elgar Publishing. https://doi.org/10.4337/9781800379893.00039
  • Busto Salinas, L. (2013). University communication departments. Analysis of the situation in Spain. Estudios sobre el mensaje periodístico, 19, 641–649. https://doi.org/10.5209/rev_ESMP.2013.v19.42147
  • Chen, G., Xin, Y., & Chen, N.-S. (2017). Informal learning in science museum: Development and evaluation of a mobile exhibit label system with iBeacon technology. Educational Technology Research & Development, 65(3), 719–741. https://doi.org/10.1007/s11423-016-9506-x
  • Economou, E., Luck, E., & Bartlett, J. (2023). Between rules, norms and shared understandings: How institutional pressures shape the implementation of data-driven communications. Journal of Communication Management, 27(1), 103–119. https://doi.org/10.1108/JCOM-01-2022-0009
  • Engwall, L. (2008). Minerva and the media. Universities protecting and promoting themselves. In C. Mazza, P. Quattrone, & A. Riccaboni (Eds.), European universities in transition: Issues, models and cases (pp. 31–48). Edward Elgar.
  • Entradas, M., & Bauer, M. W. (Eds.). (2022). Public communication of research universities. Routledge.
  • Entradas, M., Marcinkowski, F., Bauer, M. W., Pellegrini, G., & Wolniak, R. (2023). University central offices are moving away from doing towards facilitating science communication: A European cross-comparison. Public Library of Science ONE, 18(10), e0290504. https://doi.org/10.1371/journal.pone.0290504
  • Fitzpatrick, K. R., & Weissman, P. L. (2021). Public relations in the age of data: Corporate perspectives on social media analytics (SMA). Journal of Communication Management, 25(4), 401–416. https://doi.org/10.1108/JCOM-09-2020-0092
  • Fredriksson, M., & Pallas, J. (2018). New public management. In R. L. Heath & W. Johansen (Eds.), The international encyclopedia of strategic communication (pp. 1–6). John Wiley & Sons. https://doi.org/10.1002/9781119010722.iesc0119
  • Fürst, S., Vogler, D., Schäfer, M. S., & Sörensen, I. (2024). From “minimalists” to “professional all-rounders”: Typologizing Swiss universities’ communication practices and structures. Communications: The European Journal of Communication Research. Manuscript in press. Preprint available at https://osf.io/vrk8g
  • Fürst, S., Volk, S. C., Schäfer, M. S., Vogler, D., & Sörensen, I. (2022). Assessing changes in the public communication of higher education institutions: A survey of leaders of Swiss universities and colleges. Studies in Communication Sciences, 22(3), 515–534. https://doi.org/10.24434/j.scoms.2022.03.3489
  • Gilkerson, N. D., Swenson, R., & Likely, F. (2019). Maturity as a way forward for improving organizations’ communication evaluation and measurement practices. Journal of Communication Management, 23(3), 246–264. https://doi.org/10.1108/JCOM-12-2018-0130
  • Grand, A., & Sardo, A. M. (2017). What works in the field? Evaluating informal science events. Frontiers in Communication, 2, 22. https://doi.org/10.3389/fcomm.2017.00022
  • Höhn, T. D. (2011). Wissenschafts-PR. Eine Studie zur Öffentlichkeitsarbeit von Hochschulen und außeruniversitären Forschungseinrichtungen [Science PR: A study on public relations at higher education institutions and non-university research organizations]. UVK.
  • Holtzhausen, D., & Zerfass, A. (2013). Strategic communication – Pillars and perspectives of an alternative paradigm. In A. Zerfaß, L. Rademacher, & S. Wehmeier (Eds.), Organisationskommunikation und Public Relations [Organizational communication and public relations] (pp. 73–94). Springer VS. https://doi.org/10.1007/978-3-531-18961-1_4
  • Jensen, E. (2014). The problems with science communication evaluation. Journal of Science Communication, 13(1), C04. https://doi.org/10.22323/2.13010304
  • Jensen, E., & Gerber, A. (2020). Evidence-based science communication. Frontiers in Communication, 4, 78. https://doi.org/10.3389/fcomm.2019.00078
  • King, H., Steiner, K., Hobson, M., Robinson, A., & Clipson, H. (2015). Highlighting the value of evidence-based evaluation: Pushing back on demands for ‘impact’. Journal of Science Communication, 14(2), A02. https://doi.org/10.22323/2.14020202
  • Krücken, G. (2021). Multiple competitions in higher education: A conceptual approach. Innovation: Organization & Management, 23(2), 163–181. https://doi.org/10.1080/14479338.2019.1684652
  • Laredo, P. (2007). Revisiting the third mission of universities: Toward a renewed categorization of university activities? Higher Education Policy, 20(4), 441–456. https://doi.org/10.1057/palgrave.hep.8300169
  • Lemon, L. L., & VanDyke, M. S. (2023). Addressing grand challenges: Perceptions of interdisciplinary research and how communication structures facilitate interdisciplinary research at US research-intensive universities. Journal of Communication Management, 27(4), 522–538. https://doi.org/10.1108/JCOM-04-2022-0035
  • Lemon, L. L., & VanDyke, M. S. (2024). Pandemic problems in the ivory tower: Exploring employee engagement during the COVID-19 crisis. International Journal of Strategic Communication, 18(1), 38–55. https://doi.org/10.1080/1553118X.2023.2235333
  • Lepori, B., Huisman, J., & Seeber, M. (2014). Convergence and differentiation processes in Swiss higher education: An empirical analysis. Studies in Higher Education, 39(2), 197–218. https://doi.org/10.1080/03075079.2011.647765
  • Luoma–aho, V., & Canel, M.-J. (2020). Introduction to public sector communication. In V. Luoma–aho & M.-J. Canel (Eds.), The handbook of public sector communication (pp. 1–25). Wiley Blackwell. https://doi.org/10.1002/9781119263203.ch0
  • Macnamara, J. (2015). Breaking the measurement and evaluation deadlock: A new approach and model. Journal of Communication Management, 19(4), 371–387. https://doi.org/10.1108/JCOM-04-2014-0020
  • Macnamara, J., & Gregory, A. (2018). Expanding evaluation to progress strategic communication: Beyond message tracking to open listening. International Journal of Strategic Communication, 12(4), 469–486. https://doi.org/10.1080/1553118X.2018.1450255
  • Marcinkowski, F., Kohring, M., Friedrichsmeier, A., & Fürst, S. (2013). Neue Governance und die Öffentlichkeit der Hochschulen [New governance and the publics of higher education institutions]. In E. Grande, D. Jansen, O. Jarren, A. Rip, U. Schimank, & P. Weingart (Eds.), Neue Governance der Wissenschaft: Reorganisation – externe Anforderungen – Medialisierung (pp. 257–288). Transcript. https://doi.org/10.1515/transcript.9783839422724.257
  • Marcinkowski, F., Kohring, M., Fürst, S., & Friedrichsmeier, A. (2014). Organizational influence on scientists’ efforts to go public: An empirical investigation. Science Communication, 36(1), 56–80. https://doi.org/10.1177/1075547013494022
  • Mayring, P. (2014). Qualitative content analysis: Theoretical foundation, basic procedures and software solution. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-395173
  • Musselin, C. (2007). Are universities specific organisations? In G. Krücken, A. Kosmützky, & M. Torka (Eds.), Towards a multiversity? Universities between global trends and national traditions (pp. 63–84). Transcript.
  • Niemann, P., van den Bogaert, V., & Ziegler, R. (Eds.). (2023). Evaluationsmethoden der Wissenschaftskommunikation [Evaluation methods in science communication]. Springer VS. https://doi.org/10.1007/978-3-658-39582-7
  • Nothhaft, H., & Stensson, H. (2019). Explaining the measurement and evaluation stasis: A thought experiment and a note on functional stupidity. Journal of Communication Management, 23(3), 213–227. https://doi.org/10.1108/JCOM-12-2018-0135
  • Pellegrini, G. (2021). Evaluating science communication: Concepts and tools for realistic assessment. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (pp. 305–322). Routledge. https://doi.org/10.4324/9781003039242
  • Pennisi, L., & Lackey, N. Q. (2018). A multiyear evaluation of the NaturePalooza science festival. The Journal of Extension, 56(7). Article 8. https://doi.org/10.34068/joe.56.07.08
  • Pinheiro, R., Langa, P. V., & Pausits, A. (2015). One and two equals three? The third mission of higher education institutions. European Journal of Higher Education, 5(3), 233–249. https://doi.org/10.1080/21568235.2015.1044552
  • Rädiker, S., & Kuckartz, U. (2019). Analyse qualitativer Daten mit MAXQDA: Text, Audio und Video [Analyzing qualitative data with MAXQDA: Text, audio, and video]. Springer VS. https://doi.org/10.1007/978-3-658-22095-2
  • Raupp, J., & Osterheider, A. (2019). Evaluation von Hochschulkommunikation [Evaluation of university communication]. In B. Fähnrich, J. Metag, S. Post, & M. S. Schäfer (Eds.), Forschungsfeld Hochschulkommunikation (pp. 181–205). Springer VS.
  • Rodgers, S., Wang, Z., Maras, M. A., Burgoyne, S., Balakrishnan, B., Stemmle, J., & Schultz, J. C. (2018). Decoding science: Development and evaluation of a science communication training program using a triangulated framework. Science Communication, 40(1), 3–32. https://doi.org/10.1177/1075547017747285
  • Romenti, S., Murtarelli, G., Miglietta, A., & Gregory, A. (2019). Investigating the role of contextual factors in effectively executing communication evaluation and measurement. Journal of Communication Management, 23(3), 228–245. https://doi.org/10.1108/JCOM-12-2018-0131
  • Sauter-Sachs, S. (1992). Public Relations der Universität am Beispiel der Universität Zürich [Public relations of universities: The case of the University of Zurich]. Haupt.
  • Schwetje, T., Hauser, C., Böschen, S., & Leßmöllmann, A. (2020). Communicating science in higher education and research institutions. Journal of Communication Management, 24(3), 189–205. https://doi.org/10.1108/JCOM-06-2019-0094
  • Swenson, R., Gilkerson, N., Likely, F., Anderson, F. W., & Ziviani, M. (2019). Insights from industry leaders: A maturity model for strengthening communication measurement and evaluation. International Journal of Strategic Communication, 13(1), 1–21. https://doi.org/10.1080/1553118X.2018.1533555
  • Volk, S. C. (2023). Evaluation der Wissenschaftskommunikation: Modelle, Stufen, Methoden [Evaluation of science communication: Models, phases, methods]. In P. Niemann, V. van den Bogaert, & R. Ziegler (Eds.), Evaluationsmethoden der Wissenschaftskommunikation (pp. 33–49). Springer VS. https://doi.org/10.1007/978-3-658-39582-7_3
  • Volk, S. C., & Buhmann, A. (2023). Digital corporate communication and measurement and evaluation. In V. Luoma-Aho & M. Badham (Eds.), Handbook on Digital Corporate Communication (pp. 118–133). Edward Elgar Publishing. https://doi.org/10.4337/9781802201963.00018
  • Watson, T., & Zerfass, A. (2011). Return on investment in public relations. A critique of concepts used by practitioners from communication and management sciences perspectives. PRism, 8(1), 1–14.
  • Weingart, P., & Joubert, M. (2019). The conflation of motives of science communication — causes, consequences, remedies. Journal of Science Communication, 18(3), 1–13. https://doi.org/10.22323/2.18030401
  • Zerfass, A., Verčič, D., & Volk, S. C. (2017). Communication evaluation and measurement: Skills, practices and utilization in European organizations. Corporate Communications: An International Journal, 22(1), 2–18. https://doi.org/10.1108/CCIJ-08-2016-0056
  • Zerfass, A., & Volk, S. C. (2020). Aligning and linking communication with organizational goals. In V. Luoma–aho & M. Canel (Eds.), The handbook of public sector communication (pp. 417–434). Wiley. https://doi.org/10.1002/9781119263203.ch27
  • Ziegler, R., Hedder, I. R., & Fischer, L. (2021). Evaluation of science communication: Current practices, challenges, and future implications. Frontiers in Communication, 6, 669744. https://doi.org/10.3389/fcomm.2021.669744

Appendix

A) Interview guides

The 30 interviews were conducted in two waves, from July 2021 to September 2021 and from March 2022 until March 2023 (see note 2), in three languages (German, French, and Italian). Semi-structured interview guides comprising seven larger themes were used. The guides were adapted to communicators and university managers, respectively, and allowed for openness and flexibility in the individual interviews. The following is an English translation of an exemplary interview guide for an interview with a communication professional (Table A1) and with members of university leadership (Table A2).

Table A1. Exemplary interview guide with university communication professional (CP).

Table A2. Exemplary interview guide with member of university leadership (UL).

B) Codebook

The interview transcripts and documents were analyzed using qualitative content analysis in MAXQDA, following Rädiker and Kuckartz (Citation2019) and Mayring (Citation2014). The codebook comprised seven categories with sixteen sub-categories for evaluation and was used to analyze both the interview transcripts and the documents. Categories were developed both deductively from the literature (e.g., stages of evaluation, methods of evaluation, time of evaluation) and inductively from the material (e.g., expectations from leadership, subcategories of enabling and constraining factors). The first author analyzed the data set, iterating continuously between data and literature and discussing the coding with all other authors.

The codebook is available at https://doi.org/10.17605/OSF.IO/ZMDCU