
Reactions to experts in deliberative democracy: the 2016–2018 Irish Citizens’ Assembly


ABSTRACT

Many citizens support the involvement of experts in political decision-making, yet we know little about how citizens react to expert opinions. Bridging recent evidence on technocratic attitudes and deliberative democracy, we study citizen responses to experts during influential deliberative mini-publics. Combining automated speech transcription of over 380,000 spoken words and quantitative text analysis, we estimate the topic prevalence in all expert testimonials, Q&A sessions, and other agenda items in the Irish Citizens’ Assembly (2016–2018), one of the prime examples of impactful deliberative forums. We find that inputs of experts structure subsequent discussions but do not dominate them. This correlation persists with various measures of topic prevalence and is robust to several modelling approaches. We also find that participants tended to react less strongly to testimonials by female experts. These conditional effects should encourage organisers to invite experts with diverse backgrounds in order to enhance inclusive decision-making.

Introduction

Representative party government has experienced challenges across the globe (Mair, 2013). Mistrust towards parties and politicians, the rise of populist and challenger parties, and higher policy complexity have encouraged citizens and governments to look for alternative ways of making political decisions (Alexiadou & Gunaydin, 2019; Bertsou & Caramani, 2022; Wratil & Pastorella, 2018). The establishment of deliberative assemblies around the world was one reaction to these developments. Between 2019 and 2021 alone, 123 representative deliberative processes in OECD and EU member states were initiated or completed (OECD, 2021). In these deliberative forums, citizens discuss policy issues, often resulting in recommendations for policymakers. So-called ‘deliberative mini-publics’ are a popular and effective form of deliberation (Bedock, 2017; Curato et al., 2021). Mini-publics provide an alternative way of contributing to democracy, offering a platform through which randomly selected citizens can have their voices heard and can hear the voices of others.

Recent surveys underscore that citizens support deliberative assemblies. For example, 78 per cent of respondents in the 2020 Irish Election Study agreed or strongly agreed with the statement that ‘politics in Ireland would benefit from more Citizens’ Assemblies’ (Walsh & Elkink, 2021, p. 660). Surveys fielded in 15 Western European democracies show that support for deliberative assemblies is highest among less educated citizens with a low sense of political competence (Pilet, Bol, Vittori, & Paulis, 2022). Deliberative assemblies can supplement and improve representative democracy if the forums are conducted effectively and if political decision-makers consider the assembly’s recommendations. Deliberative mini-publics are also one of the few institutionalised forums where experts and citizens interact directly (Dryzek et al., 2019; Elstub, 2018). Mini-publics can also change public views on policies. For example, experimental work shows that endorsement of expert information by members of a mini-public can reduce misperceptions even among non-participants (Muradova, Culloty, & Suiter, 2023). While many citizens approve of expert involvement, we do not know how citizens react to experts during political discussions.

In this paper, we study whether and to what degree citizens discuss topics mentioned by experts in Q&A sessions that followed intensive discussions in small groups. Moreover, we investigate whether reactions differ depending on the experts’ gender or professional background (Harris, Farrell, Suiter, & Brennan, 2021; Karpowitz, Mendelberg, & Shaker, 2012; Roberts et al., 2022). If expert testimonies fulfil their function of enabling discussions, we would expect that Q&A sessions cover the topics mentioned in preceding expert presentations. Failing to find a relationship between experts’ issue emphasis and subsequent conversations could imply that members simply ignore experts when deliberating. In this case, future assemblies could save significant time and expense, and focus on alternative inputs.

We assess the agenda-setting power of experts by focusing on one of the prime examples of deliberative mini-publics: the Irish Citizens’ Assembly, conducted between 2016 and 2018. As Dryzek et al. (2019, p. 1145) summarise, the Irish assemblies ‘reinvigorated the political landscape after the political disasters that the global financial crisis unleashed on Ireland.’ Experts and citizens discussed a range of issues, and the assemblies’ recommendations reshaped Irish society (Devaney et al., 2020; Farrell, Suiter, Harris, & Cunningham, 2020; Suiter, Farrell, Harris, & Murphy, 2022). The Constitutional Convention (2012–2014), which preceded the Citizens’ Assembly, covered, amongst other issues, the question of legalising same-sex marriage. The Citizens’ Assembly included the issue of legalising abortion. The assemblies’ recommendations had a substantial influence on the decision to hold referenda on both topics (Elkink, Farrell, Marien, Reidy, & Suiter, 2020; Field, 2018). The Irish assemblies received international recognition and praise (Farrell & Suiter, 2019) and serve as role models for the success of deliberative mini-publics. Despite the importance and popularity of deliberative assemblies, full texts of speeches and discussions from deliberative mini-publics have rarely been studied systematically (for notable exceptions see Muradova, Walker, & Colli, 2020; Parkinson, De Laile, & Franco-Guillén, 2022; Parthasarathy, Rao, & Palaniswamy, 2019). Farrell and Stone (2020, p. 228) note that ‘[w]hile deliberative democracy is now long established as a dominant field of interest in political theory, its empirical application is more recent.’

We contribute to the empirical assessment of deliberative democracy by combining automated speech transcription (Proksch, Wratil, & Wäckerle, 2019; Wratil, Wäckerle, & Proksch, 2022) and quantitative text analysis (Benoit et al., 2018; Grimmer, Roberts, & Stewart, 2022). Our novel text corpus comprises over 380,000 spoken words from 64 expert presentations, 24 Q&A sessions, and 19 other agenda items. This corpus allows us to test whether expert testimony predicts the content of subsequent discussions. Following the approach introduced by Parthasarathy et al. (2019), we use topic models (Roberts, Stewart, & Tingley, 2014, 2019) to measure recurrence of and reactions to expert presentations.

The empirical analysis reveals three main findings. First, the topics raised in expert testimonials are regularly picked up in subsequent Q&A sessions. This effect is strongest when experts focus on a single topic. We fail to detect this relationship for other agenda items, such as opening addresses, voting proceedings, or introductions. Participants react to experts, rather than to topics discussed during previous Q&A sessions. Second, we find that an expert’s gender moderates this relationship. Third, both first-hand accounts from witnesses and insights from academic experts structure subsequent Q&A sessions.

These findings translate into recommendations for the structure of deliberative assemblies. Expert testimonials fulfil their intended function of initiating discussions, but in most cases, they do not completely dominate subsequent agenda items. Moreover, our findings speak to concerns regarding inequality and power in deliberative democracy (Lupia & Norton, 2017). The stronger reactions to male experts support experimental work uncovering a gender gap in voice and authority in deliberative participation (Karpowitz et al., 2012). We conclude that organisers of mini-publics should invite a diverse pool of experts to support inclusive decision-making.

Experts and deliberative mini-publics

A growing body of work aims to devise best practices for structuring deliberative mini-publics (e.g. Felicetti, Niemeyer, & Curato, 2016). While the organisation and structure of deliberative mini-publics vary substantively (Farrell & Field, 2022), most assemblies share some core characteristics. The most important of these are representativeness and deliberation (Curato et al., 2021). A representative sample of citizens should discuss issues collaboratively. Curato et al. (2021) also state that these assemblies must be carefully structured and consequential. Being carefully structured is necessary because the design of deliberative mini-publics depends on the context, and certain features will work better in different settings or with different issues. To be consequential, their output must inform policymakers, voters, and other relevant parties.

Aside from these core features, there is considerable scope for creativity and optimisation. The OECD (2020) offers 12 models of representative deliberative processes. Escobar and Elstub (2017) provide a somewhat clearer framework, outlining five different models. All models involve a phase in which participants are informed about the topic they are deliberating. Testimonies by academics, lobbyists, or individuals with personal experience of the issue are supposed to provide this information and initiate discussions.

Ensuring that participants are well-informed on the topic they are discussing is key to effective deliberation. Information plays both an ‘instrumental’ and a ‘procedural’ role. The instrumental role is that participants must have a certain level of understanding of the topic on which they are deliberating. The procedural role is that the information phase increases participants’ knowledge of the subject and empowers them to discuss an issue on equal terms. Deliberation without an information phase can be counterproductive. Cognitive errors become amplified through participants’ influence over one another, and polarisation can cause people to take more extreme positions than they held before the deliberation. Information can address these problems by moving people beyond the limited range of arguments they might have made without it (Sunstein, 2005).

While the information phase is a crucial ingredient for effective deliberation, the question remains as to what its effect is in practice. As Rossiter (2022) points out, much of the previous quantitative work is limited to analysing speech length or word counts instead of identifying agenda setters and dynamics during these discussions. Only a few studies investigate agenda-setting effects and the content of deliberative assemblies systematically (Kostovicova & Paskhalis, 2021; Parkinson et al., 2022; Parthasarathy et al., 2019). Understanding how citizens react to expert opinions is vital to improving the institutional design of these mini-publics. It is possible that participants largely ignore expert testimonies and rely primarily on their prior knowledge of issues. It is also possible that participants go too far in the other direction, blindly accepting whatever experts say and ignoring their initial opinions on the issue (Lafont, 2015). This would entirely negate the purpose of deliberation. In this case, one could simply ask the experts for their recommendations. This, however, could be highly problematic when experts push their own agendas (Brown, 2014).

Existing studies on the information phase in deliberative mini-publics conclude that experts influence deliberation. For instance, Muradova et al. (2020) find that proposals repeated by several experts were more likely to be recommended. Similarly, Goodin and Niemeyer (2003) conclude that information phases have the largest impact on participants’ opinions by triggering ‘deliberation within’. Other analyses suggest that combining evidence with deliberation can lead to greater knowledge of the issue (Setälä, Grönlund, & Herne, 2010). Based on these findings and the intended function of testimonials, we expect that experts set the agenda in deliberative mini-publics and that citizens respond to experts.

Hypothesis 1: The proportion of a discussion dedicated to a particular topic will be higher if that topic was discussed to a greater extent by an expert in the immediately preceding sessions.

We also investigate whether the gender of experts or their professional background conditions their influence. Prior work has demonstrated that people of different genders are treated, and act, differently in deliberative forums. Karpowitz et al.’s (2012) experimental work reveals power asymmetries between male and female participants. Evidence from a Swiss assembly identifies gender gaps in attendance and participation (Gerber, Schaub, & Mueller, 2019). Similarly, the text-as-data study by Parthasarathy et al. (2019) finds that men are more likely to speak, have higher agenda-setting power, and are more likely to receive responses from officials.

Evidence from televised leaders’ debates also shows that citizens respond differently to male and female politicians (Boussalis, Coan, Holman, & Müller, 2021). While many studies identify gender biases in discussions and citizen reactions, work on the Constitutional Convention, a predecessor of the Citizens’ Assembly in Ireland, suggests that women participated more in small group discussions than men (Harris et al., 2021). Yet, overall, prior work suggests that female experts exert lower influence in deliberative forums than male experts.

Hypothesis 2: Female experts exert a lower influence on topics discussed during Q&A sessions than male experts.

We also explore whether influence differs depending on the expert’s professional background. Deliberative mini-publics usually seek to invite expert witnesses from various backgrounds. Some experts may work in industry, politics, or think tanks. Others may have been personally affected by the topic under discussion. Organisers of assemblies invite academics as experts for several reasons. First, academics are considered to provide factual, scientific evidence on a given topic. Second, public trust in academics tends to be high, relative to trust in politicians or lobbyists (Aitken, Cunningham-Burley, & Pagliari, 2016). For example, according to the 2022 Ipsos MORI Veracity Index, 83 per cent of Irish respondents trust scientists, while 48 per cent trust charity executives and only 27 per cent trust politicians (Ipsos, 2022). Third, many assemblies are monitored or established by academics who may use their networks to recommend colleagues as expert witnesses.

Evidence on academics’ influence in deliberative forums is scarce. Interviews with expert witnesses reveal that some witnesses working in the private sector were accused of being an ‘industry mouthpiece’ (Roberts, Lightbody, Low, & Elstub, 2020). Non-academic witnesses perceived academics as ‘loose cannons’ who cannot be held accountable for their statements. Consequently, academics may be freer to express their own opinions. In contrast, the language of academics could be less engaging than the rhetoric of non-academic experts, who often share more personal insights. As a result, engagement with academics could be lower. A case study of assemblies on measures to contain the Covid-19 virus supports this possibility: whether an expert came from academia had no influence on opinion change (Leino, Kulha, Setälä, & Ylisalo, 2022). The authors conclude that expert hearings may not dominate the content discussed in mini-publics. Overall, the qualitative evidence is mixed. Academics could have a higher or lower influence on discussions than experts from other fields. Therefore, we refrain from formulating a directional hypothesis, while still exploring differences between academics and experts from other backgrounds.

The Irish Citizens’ Assembly

We study citizen reactions to experts during the Irish Citizens’ Assembly, conducted at various weekends between 2016 and 2018. The Citizens’ Assembly is one of five mini-publics that have taken place in Ireland over the past decade (Farrell & Suiter, 2019). The other four were the ‘We the Citizens’ pilot in 2011 (Farrell, Suiter, & O’Malley, 2013), the Constitutional Convention (2012–2014), and the more recent Citizens’ Assemblies on Gender Equality (2020–2021) and Biodiversity Loss (2022). These assemblies were notable because of their effect on landmark referenda (Elkink et al., 2017, 2020) and their systematisation (Farrell, Suiter, & Harris, 2019, 2020).

A representative sample of 99 citizens participated in the Citizens’ Assembly. Participants were chosen with the help of an independent market research company. These citizens met regularly over two years. Each meeting began with an opening address, followed by three to five experts usually speaking on either side of a particular aspect of the issue. Academics, industry experts, advocacy groups, and citizens with first-hand experience of the issues were invited as experts. Following these expert presentations, members engaged in private group roundtable discussions. Each group would then draw up questions to ask the experts in the subsequent Q&A session. Figure 1 summarises the order of agenda items during the Citizens’ Assembly. Black vertical bars indicate public sessions that included live recordings. The grey bars are private sessions that were not recorded. Roundtable discussions, which preceded the Q&A sessions, were not recorded and therefore could not be considered for the analysis.

Figure 1. The order and availability of agenda items during the Irish Citizens’ Assembly. Note: items are ordered chronologically.


Table 1 shows the structure of a typical day of deliberation. The day started with a welcome speech, followed by an expert testimonial. After this (recorded) expert testimonial, private roundtable discussions took place. The Q&A sessions immediately followed this roundtable discussion. The (recorded) Q&As are thus the direct consequence of the small group discussions.

Table 1. Structure of a day during the Irish Citizens’ Assembly (26 November 2016).

Text corpus, data, and methods

We assembled a new text corpus of the full texts of all expert testimonials, Q&A sessions, and other agenda items that were live-recorded and uploaded to YouTube. Since the recordings are incomplete for the Constitutional Convention (2012–2014), we limited the analysis to the Citizens’ Assembly, which was held on several weekends between November 2016 and January 2018 (Figure A2). We downloaded auto-generated English captions of all videos following the approach recommended by Proksch et al. (2019). We reviewed a sample of videos consisting of 34,237 words in the corpus and found a word error rate of 0.7 per cent.
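A word error rate of this kind is conventionally computed as the word-level edit distance between the automatic transcript and a hand-corrected reference, divided by the length of the reference. A minimal sketch of that calculation (an illustration, not the authors' code):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)
```

A rate of 0.7 per cent thus means roughly seven transcription errors per thousand reference words.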

We focus on the four most important issues during the Citizens’ Assembly. These issues were: The Eighth Amendment of the Constitution, discussing the legalisation of abortion in Ireland; Challenges and Opportunities of an Ageing Population; Making Ireland a Leader in Tackling Climate Change; and The Manner in Which Referenda are Held. The remaining issue on Fixed Term Parliaments had insufficient agenda items for a meaningful quantitative analysis. Table 2 summarises the number of documents in the text corpus that are used to detect topic emphasis across agenda items. Our text corpus consists of three levels: the four main issues; several agenda items on each issue (e.g. expert testimonials, Q&A sessions, and other items); and topics that emerged from our textual analysis of agenda items.

Table 2. Overview of documents in text corpus.

Treating Q&A sessions as proxies for the private, non-recorded roundtable discussions requires that experts provide a relevant response to the questions they receive from the citizens. To test this assumption, we randomly selected one Q&A session on each of the four issues, manually extracted all questions, and coded whether or not the expert’s response directly addressed the question. The four Q&A sessions comprised 18,500 words, 50 questions, and 69 answers. We calculate the proportions of answers that address the initial question (Table 3).

Table 3. Coding results of expert responses to questions.

The category Direct response in Table 3 comprises expert reactions during the Q&A sessions coded as ‘direct response’ or ‘very direct response’. Marginally direct/no response includes answers by experts coded as only a ‘marginally direct response’ or as ‘no response’ to a question. Overall, 63 out of 69 responses (91 per cent) directly or very directly addressed the question. In all four Q&A sessions, the experts responded to at least 80 per cent of the questions. SI Section C lists coding instructions and provides examples of direct responses.

In addition, surveys conducted during the Citizens’ Assembly underscore that almost all respondents agreed that they had ample speaking opportunities and that no single member at the table dominated the roundtable discussion (Farrell et al., 2019, p. 118). The qualitative coding and the survey evidence support the assumption that Q&A sessions provide a balanced summary of the topics raised by citizens.

We closely follow Parthasarathy et al.’s (2019) approach to identifying agenda setters during deliberative assemblies. Using the quanteda R package (Benoit et al., 2018), we pre-process the text corpus by removing stopwords, replacing some incorrect transcriptions, and removing very infrequent terms and words with only a single character. Afterwards, we run topic models for each of the four main issues (Roberts et al., 2019). Separate topic models for each issue ensure that the models pick up topics covering specific aspects of a given issue. We generated diagnostic values to select the most appropriate number of topics.
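The pre-processing is done with quanteda in R; for illustration, the same sequence of steps can be sketched in Python. The stopword list and minimum frequency threshold below are illustrative assumptions, not the authors' settings:

```python
from collections import Counter

# Illustrative stopword list; quanteda ships a much longer built-in list.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "it"}

def preprocess(documents, min_count=2):
    """Tokenise on whitespace, drop stopwords and one-character tokens,
    then remove terms occurring fewer than `min_count` times in the corpus."""
    tokenised = [
        [w for w in doc.lower().split() if w not in STOPWORDS and len(w) > 1]
        for doc in documents
    ]
    counts = Counter(w for doc in tokenised for w in doc)
    return [[w for w in doc if counts[w] >= min_count] for doc in tokenised]
```

Trimming rare terms in this way shrinks the vocabulary and stabilises the topic model estimates on short spoken-word documents.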

For each of the four issues, we test topic models ranging from 4 to 20 topics, resulting in 17 topic models per issue. We selected topic numbers with a high held-out likelihood, high semantic coherence, and low residuals. We examined the topics and labelled each topic based on its most frequent and exclusive terms and by reading documents with high probabilities of belonging to a specific topic. Figure A3 provides a detailed overview of model diagnostics. Based on the model diagnostics and manual assessments of various topic models, we selected the following configurations: Abortion (Eighth Amendment): 10 topics; Ageing Population: 8 topics; Climate Change: 9 topics; Referenda: 7 topics.

After determining the number of topics, we calculated the proportion of each topic in each agenda item. Figure 2 shows the topics, their prevalence, and their most frequent and exclusive terms (FREX). The figure underscores that the topic models identified meaningful and relevant clusters within the discussion of each agenda item. For example, prevalent topics on abortion are the trimesters of pregnancy, assaults and mental health, fertilisation, human rights, and travelling abroad to get an abortion. Quality of life, pensions, the economy, housing, and poverty are topics discussed during the testimonials and Q&A sessions on an ageing population. The climate change discussion centred around energy, agriculture, and community engagement. The topics on reforming referenda include media balance, turnout, and citizens’ initiatives. Figure 2 also shows that ‘procedural’ topics on votes and results appear across all agenda items. As we show below, our results become even stronger after excluding these topics from the analysis.

Figure 2. Estimated topic proportions across the four main issues.


Variables and models

Having summarised our dataset and measure of topic prevalence, we outline the main variables and regression models. The dependent variable Topic Recurrence builds on the approach introduced by Parthasarathy et al. (2019) and measures the maximum topic proportion of the subsequent Q&A sessions allocated to a given topic. For each text, we investigate the next five agenda items in the text corpus to determine if any of them are Q&A sessions. If there are Q&A sessions among the subsequent texts, we collect the proportion of those texts dedicated to the topic. If there are multiple Q&A sessions in the following five items, we calculate the maximum and average values across these items. Neither value is a perfect representation of the data, for different reasons. We choose the maximum value as our main measure because certain Q&A sessions are limited to questions on specific topics. We also rerun all models using the average topic prevalence in subsequent Q&A sessions (Table A3). The substantive conclusions are the same.
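This look-ahead measure can be sketched as follows, assuming each agenda item carries a type label and a vector of estimated topic proportions (the data structure is our illustration, not the authors' code):

```python
def topic_recurrence(items, index, topic, window=5, use_max=True):
    """Topic Recurrence for the agenda item at `index`: the maximum (or average)
    prevalence of `topic` across Q&A sessions among the next `window` agenda
    items. Returns None when no Q&A session falls inside the window."""
    qa_props = [
        item["topics"].get(topic, 0.0)
        for item in items[index + 1 : index + 1 + window]
        if item["type"] == "qa"
    ]
    if not qa_props:
        return None
    return max(qa_props) if use_max else sum(qa_props) / len(qa_props)
```

The main specification uses the maximum; passing `use_max=False` gives the average variant used in the robustness check.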

The independent variable Topic Prevalence measures the proportion of the observed agenda item allocated to the given topic. The variable Expert Input captures whether the observation is an expert testimonial or a different type of agenda item (such as opening addresses, voting procedures or introductions). We include these other agenda items as a counterfactual since we only expect a correlation between the topics of expert inputs and Q&A sessions, but not between topics of other agenda items and Q&A sessions. The interaction effect between Topic Prevalence and Expert Input captures whether experts’ inputs are more likely to increase the proportion of a topic discussed in subsequent Q&A sessions.

We run fractional logistic regression models because the dependent variable Topic Recurrence is bounded between 0 and 1. In all models, we control for the length of each agenda item (in words) and the length of subsequent Q&A sessions. We control for the number of additional agenda items occurring between the observed agenda item and the relevant subsequent Q&A sessions, since a larger gap may decrease attention to an expert testimonial. Finally, we include issue- and topic-fixed effects in all models. To test for different reactions conditional on actor-related characteristics, we only consider agenda items of expert testimonials and subsequent discussions during Q&A sessions. We interact the variable Topic Prevalence with Gender for Hypothesis 2 and with Professional Background for the exploratory analysis.

Results

In this section, we test whether citizens respond to experts in deliberative mini-publics. We also explore how experts’ gender and professional backgrounds condition the strength of reactions and summarise various robustness tests.

Reactions to expert testimonials

First, we assess the degree to which Q&A sessions discuss topics mentioned during expert testimonials and other agenda items. Table 4 presents the main results from the fractional logistic regression models. Models 1 and 2 use the continuous measure of topic prevalence; Models 3 and 4 use a discrete measure based on terciles (low, medium, and high prevalence). In both model specifications, we observe the expected interaction between expert inputs and topic prevalence. The interaction effect between the session topic proportion and the session type is large, positive, and statistically significant in all models. Increasing emphasis on a topic by an expert correlates with a more extensive discussion of this topic in subsequent Q&A sessions. Figure 3 plots predicted proportions for both model specifications. The continuous model (a) predicts an emphasis of around 0.5 [0.4, 0.7] when an expert focuses almost exclusively on one topic. We do not observe this effect for other agenda items, showing that the agenda-setting power is limited to expert inputs. The results are similar, but somewhat smaller in effect size, when using the discrete measure (b).

Figure 3. Predicting maximum topic proportions in subsequent Q&A sessions. Shaded areas/vertical bars show 95 per cent confidence intervals. Predicted values are based on the interaction between Expert Input and Topic Prevalence in Models 1 and 3 of Table 4.


Table 4. Predicting issue emphasis in Q&A sessions.

All topic models identify ‘procedural topics’ on voting and assembly procedures (Figure 2). These topics do not fall into the category of substantive discussions (see also Wratil et al., 2022). Models 2 and 4 exclude these topics from the sample. The coefficient of the interaction effect increases for the reduced sample from 2.25 (Model 1) to 2.73 (Model 2), highlighting that the substantive topics, not the procedural topics, drive our results.

We conduct several additional analyses to assess the robustness of these findings (see SI Sections D and E). First, we calculate the average topic prevalence in the five subsequent Q&A sessions instead of the maximum proportions. Figures A4 and A5 show that the substantive results stay the same. Second, we test whether a given agenda item or topic drives the findings. We employ a jackknife-style regression approach (Neumayer & Plümper, 2017) by excluding one issue at a time and storing the relevant regression coefficients.

The interaction coefficient between Topic Prevalence and Expert Input remains statistically significant (Figure A6). We also run jackknife-style models on the topics (Figure A7). The interaction coefficient is comparable and statistically significant across all specifications. Neither a single issue nor a single topic drives our results. Third, we separate each document into chunks of 500 words instead of using the full agenda item as the unit of analysis (SI Section E). Reducing the length of each document allows for a more detailed identification of topics and avoids entire documents being allocated to only one topic (Cross & Greene, 2020). The main findings persist when dividing documents into smaller units. Overall, we detect a robust relationship between expert inputs and Q&A discussions.
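The jackknife-style check amounts to refitting the model repeatedly, leaving out one issue (or topic) at a time and collecting the coefficient of interest. Its skeleton, with the refitting step abstracted into a caller-supplied function (our illustration, not the authors' code):

```python
def jackknife_by_group(observations, group_of, estimate):
    """Leave-one-group-out estimates: for every group label, re-run
    `estimate` on the subsample excluding that group and store the result."""
    labels = sorted({group_of(obs) for obs in observations})
    return {
        g: estimate([obs for obs in observations if group_of(obs) != g])
        for g in labels
    }
```

In the paper's setting, `group_of` would return the issue (or topic) of each observation and `estimate` would refit the fractional logit and return the interaction coefficient; if every leave-one-out estimate stays significant, no single group drives the result.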

Differences across gender and professional backgrounds

Do actor-related characteristics condition the influence of experts? We limit the corpus to expert testimonials to compare how actor-related characteristics (gender and professional background of a speaker) correlate with participants’ reactions. We examine 64 expert testimonials, 41 of which were delivered by industry experts (24 male and 17 female) and 23 of which were delivered by academic experts (17 male and 6 female).

We conduct two types of analyses. In the first analysis, we identify the topic with the highest prevalence in each expert testimonial. Afterwards, we extract the maximum proportion of this topic in the subsequent Q&A sessions. We then calculate our measure of Expert Influence by subtracting the topic proportion during the expert testimonial from the topic proportion during the Q&A session. Higher values imply that experts managed to draw a lot of attention to their most prominent topic. More specifically, a value of 0 means that the emphasis on the topic during the Q&A sessions was identical to the emphasis on this topic in the expert testimonial. A value close to –1 indicates that the Q&A sessions did not discuss a topic at all, even though the expert focused almost entirely on this topic.
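This measure can be sketched directly, assuming each testimonial and each Q&A session is summarised as a dictionary of estimated topic proportions (the data structure is our illustration, not the authors' code):

```python
def expert_influence(expert_topics, qa_sessions):
    """Expert Influence: the maximum prevalence of the expert's most prevalent
    topic across subsequent Q&A sessions, minus that topic's prevalence in the
    testimonial itself. A value of 0 means identical emphasis; values near -1
    mean the Q&A sessions ignored the expert's dominant topic."""
    top_topic = max(expert_topics, key=expert_topics.get)
    qa_max = max(session.get(top_topic, 0.0) for session in qa_sessions)
    return qa_max - expert_topics[top_topic]
```

Because a testimonial rarely allocates 100 per cent to one topic, the measure can also be mildly positive when the Q&A discusses the expert's top topic more than the expert did.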

First, we visualise the raw data and the distribution of Expert Influence for the 64 expert testimonials in Figure 4. Black squares report the average values; the horizontal bars show 95 per cent confidence intervals. The graph reveals substantial differences between male and female speakers. The influence of female experts tends to be lower than that of male experts. We also observe a lower influence of academics on Q&A sessions relative to witnesses who work in other areas.

Figure 4. The difference between an expert’s most prevalent topic and the same topic’s prevalence in subsequent Q&A sessions. Each grey dot marks one expert testimonial. The black squares indicate the average values. Horizontal bars show 95 per cent confidence intervals.

Next, we run ordinary least squares regression models with Expert Influence as the dependent variable. Model 1 in Table 5 includes gender and professional background as our main independent variables. Model 2 controls for the gap between the testimonial and the next Q&A session, since a longer time span between a testimonial and the next discussion may reduce participants’ awareness of the topics raised. We also control for the issue being discussed to account for unobserved heterogeneity across issues.

Table 5. Predicting the difference between the recurrence of the experts’ most prevalent and the maximum prevalence of the same topic in subsequent Q&A sessions.

The regression analysis supports the descriptive evidence from Figure 4. In both models, female experts are less influential than male experts. The coefficients of –0.22 for Female and –0.24 for Background: Academia in Model 2 correspond to over 55 per cent of the standard deviation of the dependent variable, pointing to a sizeable difference between male and female experts, and between academics and other types of experts. These analyses suggest that female experts and academics exert lower influence on Q&A sessions than male experts and speakers from other professional backgrounds.
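The effect-size statement above can be checked with simple arithmetic. This is an illustrative sketch: the standard deviation of Expert Influence is not reported in this excerpt, so the value 0.4 below is an assumption chosen to be consistent with the claim that –0.22 exceeds 55 per cent of it.

```python
# Back-of-the-envelope effect sizes: coefficient magnitude as a share of
# the dependent variable's standard deviation (sd_dv is assumed, not
# taken from the article).
coef_female, coef_academia = -0.22, -0.24
sd_dv = 0.4                                   # assumption for illustration
share_female = abs(coef_female) / sd_dv       # 0.55
share_academia = abs(coef_academia) / sd_dv   # 0.60
```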

The small sample size of 64 expert testimonials might be problematic since outliers or influential values could drive our findings. To test for this possibility, we run jackknife-style regression models, excluding one expert at a time, and store the regression coefficients of interest. Figure A8 shows that the results for female experts and academics do not depend on any specific testimonial.
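The logic of this jackknife-style check can be illustrated with a minimal sketch. The data and the simple bivariate slope below are invented for illustration and are not the article's specification (which uses multivariate OLS in R); the point is only the leave-one-out loop.

```python
# Sketch of a jackknife-style robustness check: re-estimate a simple
# regression slope leaving out one observation at a time, then inspect
# whether the coefficients stay close to the full-sample estimate.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

x = [0, 1, 2, 3, 4, 5]                       # toy data
y = [0.1, 0.9, 2.1, 2.9, 4.2, 5.0]
loo = [slope(x[:i] + x[i + 1:], y[:i] + y[i + 1:]) for i in range(len(x))]
# if no single observation drives the result, all leave-one-out slopes
# remain close to the full-sample slope
```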

To assess the robustness of these results, we rerun the fractional logistic regressions after limiting the sample to expert testimonials. The coefficients of interest in Table 6 are the interactions between Topic Prevalence and Gender (Model 1) and Professional Background (Model 2). A negative interaction coefficient in Model 1 implies that participants react less strongly to female experts. A negative interaction effect in Model 2 means that experts working in academia have a lower influence on topics discussed in subsequent Q&A sessions. We include the same set of control variables and fixed effects as in the main analysis. The negative and statistically significant interaction term in Model 1 of Table 6 confirms that female experts exert lower influence than male experts. We do not find conclusive evidence regarding the influence of academics, though: while the coefficient has the expected negative sign, it is small and statistically insignificant.

Table 6. Predicting the difference between the recurrence of the experts’ most prevalent and the maximum prevalence of the same topic in subsequent Q&A sessions.

The analyses, based on two measures of expert influence, suggest that participants react less strongly to female experts. The evidence regarding experts working in academia is mixed and inconclusive: reactions to experts offering first-hand experience are at least as strong as reactions to academics. The relatively low number of experts does not allow us to assess possible mechanisms in detail. Potentially, the influence of experts varies across topics, which would require an in-depth qualitative analysis. We hope future research will identify potential mechanisms and further assess the robustness of our results.

Discussion and conclusion

How do citizens respond to experts when discussing complex policy issues? And do expert testimonies have the desired effect of informing participants in deliberative mini-publics and structuring subsequent Q&A sessions? Both questions are central to our understanding of expert influence in politics (Bertsou & Caramani, Citation2022) and the design of deliberative mini-publics worldwide. Using transcripts of all recorded agenda items during the influential Irish Citizens’ Assembly, we measure topic prevalence in expert presentations and Q&A sessions. An extensive qualitative coding approach revealed that experts almost always provide direct and relevant responses to the questions that followed the roundtable discussions. Q&A sessions are a good indicator of citizens’ discussions and priorities. We find that a higher focus on a certain topic in expert presentations predicts the prevalence of this topic in the following Q&A sessions. Moreover, we uncover differences depending on the gender and professional background of experts.

Our results speak to prior work on deliberative mini-publics and can contribute to the design of future deliberative forums. First, the results underscore the relevance of experts in deliberative mini-publics. Q&A sessions, the direct consequence of citizen deliberation, pick up the issues emphasised by experts. Second, in line with Leino et al. (Citation2022), Parthasarathy et al. (Citation2019), and Roberts et al. (Citation2020), we find that not all experts influence the agenda. Third, the higher levels of influence exerted by male speakers and non-academics translate into recommendations for future assemblies. The lower influence of academics suggests that assemblies should invite more practitioners and representatives of organisations. While academics are vital for providing background information and summarising scientific evidence, first-hand experiences are at least as important during Q&A sessions. This finding supports Roberts et al.’s (Citation2022) recommendation to focus on diversity and inclusion among participants and expert witnesses. Inclusive decision-making requires experts from a variety of backgrounds. Based on our findings, we recommend selecting a diverse group of experts so that a range of views feeds into the recommendations for policymakers.

While our study relies on over 350,000 words, 64 expert testimonials, and 24 Q&A sessions, the analysis considers only one assembly in a Western European country. Similar combinations of qualitative validation and quantitative text analysis in future work will enhance our understanding of effective deliberation and assess whether our findings and recommendations are generalisable.

Supplemental material

Acknowledgements

We thank Killian Daly for contributing valuable input at the early stages of the project, Letícia Barbabela for research assistance, and James Cross, David Farrell, Sarah King, Joseph Lacey, and students in the MSc Politics and Data Science at University College Dublin for comments on previous versions of the paper.

Data availability statement

The data and R scripts required to verify the reproducibility of the results in this article are available on Harvard Dataverse at https://doi.org/10.7910/DVN/4Y1TBU.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the University College Dublin Ad Astra Start Up Grant.

Notes on contributors

Stefan Müller

Stefan Müller is an Assistant Professor and Ad Astra Fellow in the School of Politics and International Relations at University College Dublin. His research interests include political representation, party competition, political communication, public opinion, and computational social science.

Garrett Kennedy

Garrett Kennedy is a Research Executive at Red C Research and a former student of Politics and Data Science at University College Dublin. His research interests include party competition, political economy, political communication, and computational social science.

Tomás Maher

Tomás Maher is a former Politics and Data Science MSc student at University College Dublin.

Notes

1 These are Citizens’ Assembly, Citizens’ Jury/Panel, Consensus Conference, Planning Cell, G1000, Citizens’ Council, Citizens’ Dialogue, Deliberative Poll/Survey, World Wide Views, Citizens’ Initiative Review, The Ostbelgien Model, The City Observatory. They also state that these can be clustered into four types of purpose: 1. Informed citizen recommendations on policy questions; 2. Citizen opinion on policy questions; 3. Informed citizen evaluation of ballot measures; and 4. Permanent representative deliberative models.

2 These are Citizens’ juries, planning cells, consensus conferences, deliberative polls, citizens’ assemblies. Curato et al. (Citation2021, p. 7) argue that citizens’ initiative reviews should be considered a sixth model.

3 This value is considerably lower than the error rate of 3 per cent reported in Proksch et al. (Citation2019). We corrected Irish words that were always spelled incorrectly before calculating the word error rate (see SI Section B).

4 The total sample of texts used in the regression models is slightly smaller (see Table A1). The difference in observations occurs because the regression models only consider agenda items that were followed by a Q&A session within a window of two days and within the five subsequent agenda items.

5 We use the following R packages for preparing, analysing, and visualising the data: base (R Core Team, Citation2022), car (Fox & Weisberg, Citation2019), cowplot (Wilke, Citation2020), furrr (Vaughan & Dancho, Citation2022), ggeffects (Lüdecke, Citation2018), quanteda (Benoit et al., Citation2018), rio (Chan, Chan, Leeper, & Becker, Citation2021), stm (Roberts et al., Citation2019), texreg (Leifeld, Citation2013), tidyverse (Wickham et al., Citation2019), and xtable (Dahl, Scott, Roosen, Magnusson, & Swinton, Citation2019).

6 We exclude the following topics: General Proceedings; Voting/Results; Assembly Procedures; Voting; Ballots; Ballots: Results; Assembly Contributions.

References

  • Aitken, M., Cunningham-Burley, S., & Pagliari, C. (2016). Moving from trust to trustworthiness: Experiences of public engagement in the Scottish health informatics programme. Science and Public Policy, 43(5), 713–723.
  • Alexiadou, D., & Gunaydin, H. (2019). Commitment or expertise? Technocratic appointments as political responses to economic crises. European Journal of Political Research, 58(3), 845–865.
  • Bedock, C. (2017). Reforming Europe: Institutional engineering in Western Europe. Oxford: Oxford University Press.
  • Benoit, K., Watanabe, K., Wang, H., Nulty, P., Obeng, A., Müller, S., & Matsuo, A. (2018). Quanteda: An R package for the quantitative analysis of textual data. Journal of Open Source Software, 3(30), 774.
  • Bertsou, E., & Caramani, D. (2022). People haven’t had enough of experts: Technocratic attitudes among citizens in nine European democracies. American Journal of Political Science, 66(1), 5–23.
  • Boussalis, C., Coan, T. G., Holman, M. R., & Müller, S. (2021). Gender, candidate emotional expression, and voter reactions during televised debates. American Political Science Review, 115(4), 1242–1257.
  • Brown, M. B. (2014). Expertise and deliberative democracy. In S. Elstub, & P. McLaverty (Eds.), Deliberative democracy: Issues and cases (pp. 50–68). Edinburgh: Edinburgh University Press.
  • Chan, C. H., Chan, G. C., Leeper, T. J., & Becker, J. (2021). Rio: A Swiss-army knife for data file I/O. R package version 0.5.29.
  • Cross, J. P., & Greene, D. (2020). Talk is not cheap: Policy agendas, information processing, and the unusually proportional nature of European Central Bank communications policy responses. Governance, 33(2), 425–444.
  • Curato, N., Farrell, D. M., Geissel, B., Grönlund, K., Mockler, P., Pilet, J.-B., … Suiter, J. (2021). Deliberative mini-publics: Core design features. Bristol: Bristol University Press.
  • Dahl, D. B., Scott, D., Roosen, C., Magnusson, A., & Swinton, J. (2019). xtable: Export Tables to LaTeX or HTML. R package version 1.8-4.
  • Devaney, L., Brereton, P., Torney, D., Coleman, M., Boussalis, C., & Coan, T. G. (2020). Environmental literacy and deliberative democracy: A content analysis of written submissions to the Irish Citizens’ Assembly on climate change. Climatic Change, 162(4), 1965–1984.
  • Dryzek, J. S., Bächtiger, A., Chambers, S., Cohen, J., Druckman, J. N., Felicetti, A., … Warren, M. E. (2019). The crisis of democracy and the science of deliberation. Science, 363(6432), 1144–1146.
  • Elkink, J. A., Farrell, D. M., Marien, S., Reidy, T., & Suiter, J. (2020). The death of conservative Ireland? The 2018 abortion referendum. Electoral Studies, 65, 102142.
  • Elkink, J. A., Farrell, D. M., Reidy, T., & Suiter, J. (2017). Understanding the 2015 marriage referendum in Ireland: Context, campaign, and conservative Ireland. Irish Political Studies, 32(3), 361–381.
  • Elstub, S. (2018). Deliberative and participatory democracy. In A. Bächtiger, J. S. Dryzek, J. Mansbridge, & M. E. Warren (Eds.), The Oxford handbook of deliberative democracy (pp. 187–202). Oxford: Oxford University Press.
  • Escobar, O., & Elstub, S. (2017). Forms of mini-publics. New Democracy. URL: https://www.newdemocracy.com.au/2017/05/08/forms-of-mini-publics/.
  • Farrell, D. M., & Field, L. (2022). The growing prominence of deliberative mini-publics and their impact on democratic government. Irish Political Studies, 37(2), 285–302.
  • Farrell, D. M., & Stone, P. (2020). Sortition and mini-publics: A different kind of representation. In R. Rohrschneider, & J. Thomassen (Eds.), The Oxford handbook of political representation in liberal democracies (pp. 228–246). Oxford: Oxford University Press.
  • Farrell, D. M., & Suiter, J. (2019). Reimagining democracy: Lessons in deliberative democracy from the Irish front line. Ithaca: Cornell University Press.
  • Farrell, D. M., Suiter, J., & Harris, C. (2019). ‘Systematizing’ constitutional deliberation: The 2016–18 citizens’ assembly in Ireland. Irish Political Studies, 34(1), 113–123.
  • Farrell, D. M., Suiter, J., Harris, C., & Cunningham, K. (2020). The effects of mixed membership in a deliberative forum: The Irish constitutional convention of 2012–2014. Political Studies, 68(1), 54–73.
  • Farrell, D. M., Suiter, J., & O’Malley, E. (2013). Deliberative democracy in action Irish-style: The 2011 we the citizens pilot citizens’ assembly. Irish Political Studies, 28(1), 99–113.
  • Felicetti, A., Niemeyer, S., & Curato, N. (2016). Improving deliberative participation: Connecting mini-publics to deliberative systems. European Political Science Review, 8(3), 427–448.
  • Field, L. (2018). The abortion referendum of 2018 and a timeline of abortion politics in Ireland to date. Irish Political Studies, 33(4), 608–628.
  • Fox, J., & Weisberg, S. (2019). An R companion to applied regression (Third edition). Thousand Oaks, CA: Sage.
  • Gerber, M., Schaub, H.-P., & Mueller, S. (2019). O sister, where art thou? Theory and evidence on female participation at citizen assemblies. European Journal of Politics and Gender, 2(2), 173–195.
  • Goodin, R. E., & Niemeyer, S. J. (2003). When does deliberation begin? Internal reflection versus public discussion in deliberative democracy. Political Studies, 51(4), 627–649.
  • Grimmer, J., Roberts, M. E., & Stewart, B. M. (2022). Text as data: A new framework for machine learning and the social sciences. Princeton: Princeton University Press.
  • Harris, C., Farrell, D. M., Suiter, J., & Brennan, M. (2021). Women’s voices in a deliberative assembly: An analysis of gender rates of participation in Ireland’s Convention on the Constitution 2012–2014. The British Journal of Politics and International Relations, 23(1), 175–193.
  • Ipsos. (2022). Ipsos veracity index 2022. URL: https://www.ipsos.com/en-ie/ipsos-veracity-index-2022.
  • Karpowitz, C. F., Mendelberg, T., & Shaker, L. (2012). Gender inequality in deliberative participation. American Political Science Review, 106(3), 533–547.
  • Kostovicova, D., & Paskhalis, T. (2021). Gender, justice and deliberation: Why women don’t influence peacemaking. International Studies Quarterly, 65(2), 263–276.
  • Lafont, C. (2015). Deliberation, participation, and democratic legitimacy: Should deliberative mini-publics shape public policy? Journal of Political Philosophy, 23(1), 40–63.
  • Leifeld, P. (2013). texreg: Conversion of statistical model output in R to LaTeX and html tables. Journal of Statistical Software, 55(8), 1–24.
  • Leino, M., Kulha, K., Setälä, M., & Ylisalo, J. (2022). Expert hearings in mini-publics: How does the field of expertise influence deliberation and its outcomes? Policy Sciences, 55(3), 429–450.
  • Lüdecke, D. (2018). ggeffects: Tidy data frames of marginal effects from regression models. Journal of Open Source Software, 3(26), 772.
  • Lupia, A., & Norton, A. (2017). Inequality is always in the room: Language & power in deliberative democracy. Daedalus, 146(3), 64–76.
  • Mair, P. (2013). Ruling the void: The hollowing of western democracy. London: Verso.
  • Muradova, L., Culloty, E., & Suiter, J. (2023). Misperceptions and minipublics: Does endorsement of expert information by a minipublic influence misperceptions in the wider public? Political Communication, online first. doi:10.1080/10584609.2023.2200735
  • Muradova, L., Walker, H., & Colli, F. (2020). Climate change communication and public engagement in interpersonal deliberative settings: Evidence from the Irish Citizens’ Assembly. Climate Policy, 20(10), 1322–1335.
  • Neumayer, E., & Plümper, T. (2017). Robustness tests for quantitative research. Cambridge: Cambridge University Press.
  • OECD. (2020). Innovative citizen participation and new democratic institutions: Catching the deliberative wave.
  • OECD. (2021). OECD database of representative deliberative processes and institutions. URL: https://airtable.com/shrHEM12ogzPs0nQG/tbl1eKbt37N7hVFHF/viwxQgJNyONVHkmS6?
  • Parkinson, J., De Laile, S., & Franco-Guillén, N. (2022). Mapping deliberative systems with big data: The case of the Scottish independence referendum. Political Studies, 70(3), 543–565.
  • Parthasarathy, R., Rao, V., & Palaniswamy, N. (2019). Deliberative democracy in an unequal world: A text-as-data study of south India’s village assemblies. American Political Science Review, 113(3), 623–640.
  • Pilet, J.-B., Bol, D., Vittori, D., & Paulis, E. (2022). Public support for deliberative citizens’ assemblies selected through sortition: Evidence from 15 countries. European Journal of Political Research, online first. doi:10.1111/1475-6765.12541
  • Proksch, S.-O., Wratil, C., & Wäckerle, J. (2019). Testing the validity of automatic speech recognition for political text analysis. Political Analysis, 27(3), 339–359.
  • R Core Team. (2022). R: A language and environment for statistical computing. R foundation for statistical computing. Vienna, Austria.
  • Roberts, J. J., Lightbody, R., Low, R., & Elstub, S. (2020). Experts and evidence in deliberation: Scrutinising the role of witnesses and evidence in mini-publics, a case study. Policy Sciences, 53, 3–32.
  • Roberts, J. J., Salamon, H., Reggiani, M., Lightbody, R., Reher, S., & Pirie, C. (2022). Inclusion and diversity among expert witnesses in deliberative mini-publics.
  • Roberts, M. E., Stewart, B. M., & Tingley, D. (2019). stm: An R package for structural topic models. Journal of Statistical Software, 91(2), 1–40.
  • Roberts, M. E., Stewart, B. M., Tingley, D., Lucas, C., Leder-Luis, J., Gadarian, S. K., … Rand, D. G. (2014). Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4), 1064–1082.
  • Rossiter, E. L. (2022). Measuring agenda setting in interactive political communication. American Journal of Political Science, 66(2), 337–351.
  • Setälä, M., Grönlund, K., & Herne, K. (2010). Citizen deliberation on nuclear power: A comparison of two decision-making methods. Political Studies, 58(4), 688–714.
  • Suiter, J., Farrell, D. M., Harris, C., & Murphy, P. (2022). Measuring epistemic deliberation on polarized issues: The case of abortion provision in Ireland. Political Studies Review, 20(4), 630–647.
  • Sunstein, C. R. (2005). Group judgments: Statistical means, deliberation, and information markets. New York University Law Review, 80(3), 962–1049.
  • Vaughan, D., & Dancho, M. (2022). furrr: Apply mapping functions in parallel using futures. R package version 0.3.0.
  • Walsh, C. D., & Elkink, J. A. (2021). The dissatisfied and the engaged: Citizen support for citizens’ assemblies and their willingness to participate. Irish Political Studies, 36(4), 647–666.
  • Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., … Yutani, H. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686.
  • Wilke, C. O. (2020). cowplot: Streamlined plot theme and plot annotations for ggplot2. R Package Version 1.1.1.
  • Wratil, C., & Pastorella, G. (2018). Dodging the bullet: How crises trigger technocrat-led governments. European Journal of Political Research, 57(2), 450–472.
  • Wratil, C., Wäckerle, J., & Proksch, S.-O. (2022). Government rhetoric and the representation of public opinion in international negotiations. American Political Science Review, online first. doi:10.1017/S0003055422001198