Inquiry
An Interdisciplinary Journal of Philosophy
Research Article

Stereotypes and self-fulfilling prophecies in the Bayesian brain

Received 23 Jun 2022, Accepted 01 Jan 2023, Published online: 13 Jan 2023

ABSTRACT

Stereotypes are often described as being generally inaccurate and irrational. However, for years, a minority of social psychologists has been proclaiming that stereotype accuracy is among the most robust findings in the field. This same minority also opposes the majority by questioning the power of self-fulfilling prophecies and thereby the construction of social reality. The present paper examines this long-standing debate from the perspective of predictive processing, an increasingly influential cognitive science theory. In this theory, stereotype accuracy and self-fulfilling prophecies are two sides of the same coin, namely prediction error minimisation, pointing to a new middle course between the two existing views. On the one hand, predictive processing indicates that depicting stereotypes as generally inaccurate runs counter to their actual purpose of making the social world predictable, which supports the minority view. On the other hand, predictive processing supposes that expectations, including stereotypes, permanently affect perception and behaviour and thereby co-construct social reality, which supports the majority view. Therefore, from a predictive processing perspective, stereotypes are largely rational and not per se inaccurate, and self-fulfilling prophecies are omnipresent and greatly affect social reality. This new middle course appears to fit the empirical data better than the two existing views.

1. Introduction

In social psychology, there is an ongoing debate as to whether or not social perception is dominated by error and bias (see Jussim 2017). This debate has two facets. First, it involves the question of whether stereotypes are accurate or per se inaccurate. Most social psychologists assert the latter (for an overview, see Jussim 2012). For example, Miller and Turnbull (1986) write: ‘The term stereotype refers to those interpersonal beliefs and expectancies that are both widely shared and generally invalid’ (233). Similarly, Stangor (2016) states that ‘[s]tereotypes are problematic because they are negative, inaccurate, and unfair’ (4). However, a minority headed by Jussim (2012, 2017) argues that stereotype accuracy is one of the most robust findings in social psychology but that ideological reasons seem to prevent many social psychologists from acknowledging this.

Second, social psychologists are divided as to how much power expectations have to shape reality and distort perceptions by becoming self-fulfilling prophecies. A self-fulfilling prophecy occurs when expectations about a situation affect the (perceived) situation in such a way that the expectations become (partly) true. Many early studies find rather strong self-fulfilling effects (e.g. Darley and Gross 1983; Rosenhan 1973; Rosenthal and Jacobson 1968). Moreover, research on stereotype threat (Steele and Aronson 1995) and unconsciously produced self-fulfilling prophecies (Chen and Bargh 1997) provides evidence for the idea that stereotypes shape reality. That is why most social psychologists emphasise the reality-shaping power of expectations (for an overview, see Jussim 2012). But again, a minority headed by Jussim (2012, 2017) questions the majority view regarding the relevance and power of self-fulfilling prophecies: while they do not argue that self-fulfilling prophecies are non-existent, they state that their effects are small and might even dissipate over time.

We see that the debate as to whether or not social perception is dominated by error and bias leads to two diametrically opposed factions: a majority stating that stereotypes are (mainly) inaccurate and that people co-create social reality by means of their expectations; and a minority headed by Jussim stating that stereotypes are often accurate and that people (mainly) discover social reality. In between these two factions, there seems to be no middle course.

The present paper provides such a middle course, arguing that stereotypes are often accurate and self-fulfilling prophecies relevant and powerful. It does so by analysing the foundations of this debate from a predictive processing perspective: how do humans learn about groups (relevant for stereotype accuracy), and how do they act upon the world (relevant for the power of self-fulfilling prophecies)? The theory of predictive processing has become increasingly influential in cognitive science and is currently one of the most studied theories in the field (cf. Hahn 2014; Walsh et al. 2020). In short, predictive processing is a unified brain theory that describes the brain as a probabilistic prediction machine that runs on Bayesian principles and constantly works to minimise prediction error (Clark 2016; Friston 2010).

For the present paper, this idea of a Bayesian brain has three relevant implications. First, the brain generates a model of the world by detecting the patterns that reveal its underlying structure, and it then uses this model to make predictions. This suggests that stereotypes should be roughly accurate (unless the learning sample is unrepresentative of the group as a whole). Second, the brain does not passively assemble bottom-up sensory input until the percept is complete; rather, it uses top-down predictions to anticipate and ‘explain away’ bottom-up sensory input. Therefore, people perceive their predictions of bottom-up sensory input but not the input itself. This suggests that social perception is constantly influenced by our top-down predictions, including our stereotypes. Third, a mismatch between top-down predictions and bottom-up sensory input can be resolved by acting upon the sources eliciting that input in such a way that the consequent bottom-up sensory input coincides with one’s top-down prediction. This suggests that social reality is constantly co-created by our top-down predictions, promoting self-fulfilling prophecies. Together, the three implications convey that individuals infer largely accurate stereotypes from their social environments, use them as action-steering top-down predictions and thereby reproduce their social environment in a self-fulfilling manner, contributing to (the maintenance of) stereotype accuracy. In fact, from a predictive processing perspective, stereotype accuracy (stemming from perceptual inference) and self-fulfilling prophecies (stemming from active inference) are two sides of the same coin: prediction error minimisation. This gives rise to a new middle course in the social psychological debate as to whether or not social perception is dominated by error and bias: stereotypes are often accurate and self-fulfilling prophecies relevant and powerful. And as will be shown throughout the paper, this middle course also appears to fit the empirical data better than the debate’s two current factions presented above.

The paper is structured as follows: Section 2 defines stereotypes in more detail, describes how they are manifested in the Bayesian brain and analyses what this implies for the question of stereotype accuracy. Section 3 discusses why self-fulfilling prophecies constitute a fundamental aspect of predictive processing, how they affect social perception, social behaviour and social reality, and why this constructivist perspective is superior to perceptual realism.

2. Stereotypes in the Bayesian brain

Stereotypes are commonly defined as beliefs about the attributes of social groups (henceforth called ‘group-specific beliefs’) (Kite and Whitley 2016; Stangor 2016). However, there are three differing perspectives regarding whether all group-specific beliefs are stereotypes and whether they are always inaccurate. First, Allport (1979), whose book The Nature of Prejudice is a foundational text in social psychology, defines stereotypes as a subgroup of group-specific beliefs. Allport asserts that, while there are rational beliefs about groups, stereotypes comprise group-specific beliefs that are faulty exaggerations, unjustifiably resistant to change and thus irrational. For instance, the belief that the Swiss consume more chocolate relative to other groups does not constitute a stereotype as it can be statistically proven and is therefore rational (McCarthy 2021). In contrast, the belief that all Swiss eat more chocolate than members of other groups constitutes a stereotype as it is a clearly inaccurate overgeneralisation and is therefore irrational. As such, according to Allport, though there are also accurate and rational group-specific beliefs, stereotypes are definitionally inaccurate and irrational.[1] Second, in recent decades, many social psychologists such as Fiske (1998) and Stangor (2016) have ignored or rejected the potential accuracy and rationality of group-specific beliefs. As a result, these scientists concur with Allport’s description of stereotypes being inaccurate; however, they use the term ‘stereotypes’ to refer to group-specific beliefs in general. In other words, all group-specific beliefs are stereotypes and therefore inaccurate according to these scientists. Third, a minority of social psychologists headed by Jussim (2012, 2017) also use the term ‘stereotypes’ to refer to group-specific beliefs in general, yet they leave the question of (in)accuracy open.

As can be seen, the difference between Allport’s and Jussim’s understanding of stereotypes is only terminological. While Allport’s terminology differentiates between rational group-specific beliefs and stereotypes, Jussim’s terminology differentiates between accurate stereotypes (which are rational) and inaccurate stereotypes (which are irrational). Unlike Allport and Jussim, Stangor and Fiske seem to exclude the existence of rational and accurate group-specific beliefs. There are three possible reasons for this: (1) group-specific beliefs are per se inaccurate; (2) the accuracy of a group-specific belief is generally not assessable; (3) group-specific beliefs can be accurate if they are formed in a rational and thus nuanced way, but people tend to form group-specific beliefs that are all-or-none, making them inaccurate.

Reasons (1) and (2) are implausible. First, Jussim (2017) argues that, if all group-specific beliefs were inaccurate, then both the belief that two groups differ and the belief that two groups do not differ would be inaccurate. Because these two beliefs cannot simultaneously be inaccurate, Jussim rejects the claim that group-specific beliefs are per se inaccurate. Second, while there certainly are group-specific beliefs whose accuracy is not assessable, the accuracy of many group-specific beliefs can be determined (e.g. via objective criteria, group behaviour, agreement with other perceivers, agreement with experts and agreement with the group’s self-reports and self-perceptions). For instance, the beliefs that men are on average taller, earn more money, do less parenting and are more competitive than women are easily assessable (Brynin 2017; Buchanan, McFarlane, and Das 2016; Flory, Leibbrandt, and List 2015; Gneezy, Niederle, and Rustichini 2003; Koslowski 2021; Niederle and Vesterlund 2007).

This leaves the final reason: group-specific beliefs are (mostly) inaccurate because people tend not to form them in a rational and thus nuanced way, leading to inaccurate overgeneralisations. From a social science perspective, Jussim (2012, 2017) provides and discusses a vast body of evidence indicating that this is not the case. Indeed, quite the opposite is true: stereotype accuracy is one of the most significant and replicable findings in social psychology (Jussim, Crawford, and Rubinstein 2015). This finding corresponds with the evolutionary perspective, which suggests that, overall, stereotypes should be accurate rather than inaccurate since inaccurate beliefs come with fitness costs (Little 2017; Marczyk 2017). So, the depiction of stereotypes as being mostly inaccurate clashes with both empirical social research and theoretical evolutionary research. But regarding the cognitive science perspective, a question remains: do people actually form beliefs in a rational manner, leading to nuanced and not overgeneralised group-specific beliefs?[2] According to the theory of predictive processing, which has become increasingly influential in cognitive science, they do (Clark 2013, 2016; Hohwy 2013).

Predictive processing, a Bayesian approach to brain function, describes the brain as a probabilistic prediction machine. Instead of passively converting bottom-up sensory input into a percept of the external world, the brain tries to actively predict bottom-up sensory input by using its best model of what is likely in the external world (Hohwy 2007, 2013). That is, the brain creates a generative model of its environment: by categorising experiences, it abstracts higher-order similarities from them, and, in this way, inductively infers the world’s underlying structure (cf. Tenenbaum, Griffiths, and Kemp 2006, 2011). Next, it uses this model to predict its current environment, which succeeds if the current environment is (at least to some degree) organised by the same patterns as previous environments. Throughout this process, the brain tries to minimise prediction error, meaning that it tries to minimise discrepancies between bottom-up sensory input from the current environment and top-down predictions formed by the generative model. Accordingly, the brain aims to avoid being surprised by anticipating possible outcomes. Put differently, the brain aims to minimise uncertainty, which is crucial for living beings.[3] As Macrae and Bodenhausen (2000) state: ‘[K]nowing what to expect – and exactly where, when, and from whom to expect it – is information that renders the world a meaningful, orderly, and predictable place’ (94).

There are two (non-exclusive) ways in which a mismatch between bottom-up sensory input and top-down prediction can be minimised: perceptual inference and active inference. To what extent each of them minimises prediction error depends on the precision (i.e. reliability) assigned to the bottom-up sensory input and the top-down prediction, respectively (these precision weightings are also part of the generative model).[4] As we will see, perceptual inference is directly linked to stereotype accuracy, whereas active inference is directly linked to self-fulfilling prophecies. As a consequence, the accuracy of stereotypes and the existence of self-fulfilling prophecies are ultimately two sides of the same coin: prediction error minimisation. This inherently links the two phenomena, resulting in a self-perpetuation of largely accurate stereotypes through self-fulfilling prophecies. In this section, we focus on the connection between perceptual inference and stereotype accuracy; section 3 then focuses on the connection between active inference and self-fulfilling prophecies and their interaction with stereotypes and stereotype accuracy.

Perceptual inference is the process by which the brain minimises prediction error by updating its generative model and thereby adjusting its predictions to align with incoming sensory input. Put differently, perceptual inference nuances predictions (including stereotypes, see next paragraph). This occurs in an approximately Bayesian way: the brain’s prior predictions are combined with incoming sensory input (i.e. likelihood), resulting in a posterior prediction. As mentioned above, the estimated precision the brain assigns to prediction and sensory input is essential to the updating process. If sensory input is given relatively high precision, the posterior prediction is more strongly affected by it. In contrast, if the brain assigns relatively high precision to its prior prediction, the posterior prediction will resemble the prior prediction rather than the sensory input (Otten, Seth, and Pinto 2017).
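This weighting can be made concrete with a standard textbook formulation. The following equations are a minimal formal sketch, assuming (purely for illustration, and not drawn from the works cited here) that prior prediction and sensory input are Gaussian with known precisions, where a precision π is the inverse variance of the corresponding signal:

```latex
% Precision-weighted Bayesian updating of a single predicted quantity
% (minimal sketch under a Gaussian assumption; \mu = mean, \pi = precision).
\mu_{\text{post}} = \frac{\pi_{\text{prior}}\,\mu_{\text{prior}} + \pi_{\text{sens}}\,\mu_{\text{sens}}}{\pi_{\text{prior}} + \pi_{\text{sens}}},
\qquad
\pi_{\text{post}} = \pi_{\text{prior}} + \pi_{\text{sens}}.
```

If the sensory precision dominates, the posterior prediction tracks the input; if the prior precision dominates, the posterior stays close to the prior prediction – exactly the two cases described in the previous paragraph.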

In the Bayesian brain, stereotypes are, therefore, predictions about a group’s specific attributes that the brain inductively infers based on previous experiences involving that group. Stereotypes function as predictors, helping people to anticipate the behaviour of groups or their members and thereby to avoid surprises when interacting with them. For example, a Chinese woman in the Netherlands notices that the Netherlanders she meets are usually much taller than the average Chinese person. Because of this, she infers that Netherlanders are generally tall, leading to the prediction that, given a random Netherlander and a random Chinese person, the Netherlander is likely to be taller than the Chinese person. In other words, it is not a coincidence that the Netherlanders the woman meets are usually much taller than the average Chinese person. Instead, her observation reveals a specific (i.e. non-random) attribute of the category ‘Netherlanders’. Importantly, the Chinese woman will maintain the prediction of Netherlanders being tall if she encounters a group of Netherlanders who are equal in height to the average Chinese person. This is because she has accumulated ample evidence for her prediction, which means that she considers it relatively precise; this decreases the influence of new evidence. Only if such groups of Netherlanders become increasingly frequent will Bayesian updating at some point result in a revision of her prediction that Netherlanders are generally tall.
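The dynamics of this example can also be illustrated numerically. The sketch below is not taken from the paper or from any cited study; it assumes, purely for illustration, Gaussian beliefs and hypothetical numbers (fifty encounters with tall Netherlanders, then one group of short ones) and shows why a well-evidenced prediction is barely moved by a single counter-instance:

```python
# Illustrative sketch only: precision-weighted updating of a belief about a
# group attribute, here the average height of Netherlanders (in cm).
# All numbers are hypothetical and chosen merely to mirror the example above.

def update(mean, precision, observation, obs_precision):
    """One conjugate Gaussian (Bayesian) update of a belief given a new observation."""
    new_precision = precision + obs_precision
    new_mean = (precision * mean + obs_precision * observation) / new_precision
    return new_mean, new_precision

# Start from an essentially uninformative prior, then accumulate 50 encounters
# with Netherlanders who are around 183 cm tall.
mean, precision = 170.0, 1e-4
obs_precision = 1 / 7.0 ** 2            # each encounter counts as fairly noisy evidence
for _ in range(50):
    mean, precision = update(mean, precision, 183.0, obs_precision)
print(round(mean, 1))                    # ~183.0: 'Netherlanders are generally tall'

# A single group of Netherlanders of average Chinese height barely shifts the belief,
# because the accumulated prior is now far more precise than the new evidence.
mean, precision = update(mean, precision, 167.0, obs_precision)
print(round(mean, 1))                    # still ~182.7
```

Only a long run of such counter-instances would shift the belief appreciably, mirroring the gradual revision process described above.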

Perceptual inference as an essential part of predictive processing has two major implications for the debate on stereotype accuracy. First, if the brain updates its generative model in an approximately Bayesian manner, stereotypes, which are part of the generative model, should be largely rational. This is because Bayesianism is frequently used as the epistemic criterion for forming rational beliefs and expectations; for example, Bayesian decision theory has become the dominant theoretical model for rational decision-making and Bayesian epistemology is one of the most important approaches in contemporary epistemology (Talbott 2008).[5] Moreover, as long as the group members people have encountered in the past are roughly representative of the groups to which they assign them, people’s stereotypes should also be accurate. Therefore, the stance that stereotypes are mostly inaccurate is implausible from a predictive processing perspective. If stereotypes were inaccurate, they would fail in their function of making the current environment more predictable and thereby minimising uncertainty. Second, though the brain’s predictive processes run continuously, most processes are non-conscious (Clark 2016). Though people constantly update their generative models to accommodate incoming sensory input, they are not aware of most of the predictions they have inductively inferred from previous environments. Based on this idea, Hinton (2017) concludes that implicit stereotypes (often also called implicit bias) are not the product of cognitive bias, but rather of a Bayesian brain learning in a ‘biased environment’. He writes:

If the predictive brain were to sample randomly or comprehensively then stereotypical associations would not be picked up if they did not represent the state of the world. However, people are born into culture, and communicate within social networks. Thus, the implicit stereotypical associations picked up by an individual do not reflect a cognitive bias but the associations prevalent within their culture – evidence of ‘culture in mind’. (1)

There are two ways in which an environment can be ‘culturally biased’. First, one’s environment is shaped by cultural norms (i.e. legal and/or social norms) that promote certain attributes in group X and others in group Y. Let’s imagine a country where asylum seekers have to wait months or years before receiving a decision about whether or not they can stay. Furthermore, in this state of uncertainty, they have little to do, live on the breadline and are unable to earn money legally. It would not be surprising if in that country asylum seekers were more likely to break the law by working illegally than those who have access to the legal labour market. If criminal statistics confirmed this, holding the stereotype that asylum seekers in that country are more likely to come into conflict with the law than, for example, nationals would be rational. However, it is important to note that this stereotype would not reflect the nature of asylum seekers but rather the structural conditions they face during that country’s asylum procedure.

Second, one’s environment biases the sampling of group members (or second-hand group descriptions) such that the sample systematically misrepresents the group to which they belong. For instance, there is a widespread stereotype that (female) blondes are less intelligent than their counterparts with a different hair colour (Kyle and Mahler 1996; Weir and Fine-Davis 1989). This stereotype is manifested in countless blonde jokes, which were particularly popular in the 1980s and 1990s. Moreover, ‘having a blonde moment’ is an English expression for displaying scatter-brained behaviour. Despite this clear association between blondeness and a lack of intelligence, there is no empirical evidence that blonde women are less intelligent than those with a different hair colour (Zagorsky 2016). If this is the case, how could the stereotype of the ‘dumb blonde’ nevertheless manifest? Hollywood productions of the last 70 years probably played a decisive role. The ‘Hollywood blonde’ is typically very attractive but somewhat unintelligent or naïve. Marilyn Monroe established this archetype (Thomas 1997). However, the stereotype has also been used in more recent films, and the Hollywood blonde has been portrayed in an increasingly one-dimensional way (Hornaday 2014). Therefore, the biased portrayal of blonde women in the film and media industry disproportionately exposed viewers to a non-representative subsample of blonde women. As a result, people adopted the inaccurate stereotype that blonde women are unintelligent.

To summarise, in the Bayesian brain, stereotypes are predictions about a group’s specific attributes, and they serve to minimise uncertainty when interacting with that group. The brain inductively infers these predictions from experience and updates them following approximately Bayesian principles. Therefore, stereotypes reflect the underlying structure of the holder’s environment, including prevalent cultural norms. Because of their Bayesian formation, these stereotypes should be largely rational and, if the encountered group members roughly represent the group as a whole, largely accurate. This cognitive science perspective on stereotype accuracy corresponds with both empirical social research (Jussim 2012, 2017) and theoretical evolutionary research (Little 2017; Marczyk 2017). Still, it has to be emphasised that cultural bias can affect the sampling of new evidence, leading to inaccurate stereotypes based on overly salient subgroups believed to represent the group as a whole.

3. Self-fulfilling prophecies in the Bayesian brain

So far, the present paper supports the minority faction in the debate as to whether or not social perception is dominated by error and bias: predictive processing suggests that stereotypes are often accurate. Let’s continue with the second facet of this debate, namely the question of how relevant and powerful self-fulfilling prophecies are. As mentioned in the introduction, the term ‘self-fulfilling prophecy’ describes the phenomenon wherein expectations about a situation affect the situation in such a way that the expectations become (partly) true. Consequently, in the absence of these expectations, the situation would have evolved in a different way (Merton 1948). In the psychological literature, the finding that teachers’ expectations about their students influence their students’ IQ scores constitutes a classic example of a self-fulfilling prophecy (cf. Rosenthal and Jacobson 1968).

As can be seen, the power of self-fulfilling prophecies is directly linked to the power that expectations have in shaping social reality. The majority view in social psychology says that people do co-create social reality by means of their expectations and therefore attributes great power to self-fulfilling prophecies. In contrast, the minority view headed by Jussim argues that people (mainly) discover social reality and thus attributes little power to self-fulfilling prophecies. Here, Jussim seems to opt for some version of perceptual realism, which sees perception as a primarily bottom-up process (cf. Firestone and Scholl 2016; Gibson 1979). People do not co-construct reality by means of top-down processes since ‘all the information needed for perception is provided by the stimulus environment, and … our perceptual apparatus evolved to pick up just that information which allows us to perceive the world the way it really is’ (Kihlstrom 2017, 29). Accordingly, there is little room left for self-fulfilling prophecies.

Predictive processing strongly opposes such a concept of perception since top-down predictions constitute a fundamental part of the theory, making it constructivist in nature. As previously mentioned, the brain tries to actively predict bottom-up sensory input by using its best model of what is likely in the external world (Hohwy 2007, 2013). Thus, people do not perceive the bottom-up sensory input itself but their top-down predictions of that input. In principle, this leaves the door wide open for self-fulfilling prophecies. But how exactly do self-fulfilling prophecies take place in the predictive processing framework? In the previous section, we noted that there are two ways in which a mismatch between bottom-up sensory input and top-down prediction can be minimised: perceptual inference and active inference, with the former being linked to stereotype accuracy and the latter being linked to self-fulfilling prophecies. So, let us now examine the mechanism of active inference more closely.

During active inference, the brain resolves a mismatch between bottom-up sensory input and top-down prediction through action (Clark 2016; Pezzulo, Rigoli, and Friston 2015).[6] More precisely, the brain acts upon the sources eliciting the sensory input in such a way that the consequent sensory input coincides with one’s top-down prediction. As Ramstead, Kirchhoff, and Friston (2020) write: ‘[T]he expectations of the organism, as they figure under the generative model, are brought about by the organism in a kind of self-fulfilling prophecy through active inference’ (235). In this process, the more precise the top-down prediction is estimated to be, relative to the bottom-up sensory input, the more likely active inference becomes (Clark 2015, 2016). Such active inference may aim at internal bodily processes. For example, if I predict that a painkiller will relieve my pain, not knowing that it is actually a placebo, I can resolve the resulting prediction error by releasing endogenous opioids (Büchel et al. 2014; Ongaro and Kaptchuk 2019). Here, the self-fulfilling prophecy does not affect the environment itself (the placebo remains inert) but the individual’s perception of it. However, active inference can also aim at the environment itself. For example, if I predict my home to be clean and tidy but see that it is currently a mess, I can resolve the resulting prediction error by cleaning up.
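The division of labour between the two routes can be sketched schematically. The following toy example is not an implementation of the formal free-energy treatments cited above; it simply assumes, for illustration, that whichever side is estimated as more precise dominates, using hypothetical names and values for the tidiness example:

```python
# Toy sketch (illustrative only): resolving a prediction error either by
# revising the prediction (perceptual inference) or by acting on the world
# (active inference), depending on which side is estimated as more precise.

def minimise_prediction_error(predicted, sensed, prior_precision, sensory_precision):
    if prior_precision > sensory_precision:
        # Active inference: act on the source of the input until it matches the prediction.
        sensed = predicted                      # e.g. tidy up the messy flat
        return predicted, sensed, "acted on the world"
    # Perceptual inference: precision-weighted belief update toward the input.
    total = prior_precision + sensory_precision
    predicted = (prior_precision * predicted + sensory_precision * sensed) / total
    return predicted, sensed, "revised the prediction"

# Prediction 'my home is tidy' (tidiness on a 0-1 scale); the input says it is a mess.
print(minimise_prediction_error(predicted=0.9, sensed=0.2,
                                prior_precision=5.0, sensory_precision=1.0))
# -> (0.9, 0.9, 'acted on the world')
print(minimise_prediction_error(predicted=0.9, sensed=0.2,
                                prior_precision=1.0, sensory_precision=5.0))
# -> (0.316..., 0.2, 'revised the prediction')
```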

If predictive processing as a constructivist theory is better at explaining social reality than perceptual realism, we should find examples where top-down predictions affect perceptions and co-create social reality. As the following two subsections will show, we do find such examples. We start with the influence of predictions on social visual perception.

3.1. How predictions affect social visual perception

There are several examples where top-down predictions (i.e. stereotypes) initiate active inferential processes affecting social visual perception (Bach and Schenke 2017; Freeman et al. 2011; Krosch et al. 2013; Wilson, Hugenberg, and Rule 2017). For instance, under identical luminance, a Black face is perceived as darker than a White face (Levin and Banaji 2006). Thus, when we see typical Black/White facial characteristics, we predict the face to be correspondingly dark or light and, due to active inference, also perceive it that way. Similarly, racial categorisation affects body perception. Black men are perceived as taller, heavier, more muscular and more physically formidable than White men despite the lack of actual physical differences in perceptual stimuli (Wilson, Hugenberg, and Rule 2017). Again, the stereotypes we hold about Black men lead to top-down predictions which (partly) come true when seeing Black men due to active inferential processes.

These findings are in conflict with perceptual realism since the experiments’ modifications of perceptual stimuli by themselves cannot explain the differences in perception. If we see the world as it is, how can facial characteristics influence the skin’s darkness/lightness? But of course, the self-fulfilling prophecies depicted here only concern perceived reality: when an individual perceives a Black face as darker than a White face with identical luminance, the Black face does not actually get darker. Still, these examples demonstrate how early perceptual processes are affected by top-down predictions and thereby by stereotypes.

Researchers argue that, in some cases, the effects of top-down predictions on perceptual processes potentially have severe implications. Shooter bias is a well-researched phenomenon in social psychology (for a meta-analysis, see Mekawi and Bresin 2015). In a virtual shooter task in which participants had to shoot armed targets and not shoot unarmed targets, participants were more likely to shoot Black targets holding a harmless object than White targets doing so. Furthermore, compared to White targets, participants were quicker to shoot armed Black targets and slower to not shoot unarmed Black targets (Correll et al. 2002). Importantly, shooter bias is not a mere laboratory finding; it manifests in real-world police behaviour. An analysis of 990 fatal police shootings in 2015, using data compiled by The Washington Post, showed that Black civilians shot by police were more than twice as likely to have been unarmed as White civilians shot by police (Nix et al. 2017). Now, several authors assert that the shooter bias may not only be the product of individual racial prejudice but also of beliefs about the danger posed by specific groups (Correll et al. 2011; Kahn and Davies 2017; Sadler et al. 2012). In a Bayesian brain, these beliefs reflect the holder’s environment (which might be biased) and can affect perception. Regarding this connection, Barrett (2017) argues that, for a moment, some police officers might actually see a weapon in an unarmed civilian’s hand due to top-down predictions influencing perceptual processes. She asserts that the fact that this seems to happen more often if the civilian is Black than if they are White could be linked to American stereotypes about race learned through cultural exposure. Such indirect experiences, gained, for example, when watching TV or consuming other types of mainstream media, fine-tune people’s predictions about how dangerous individuals of certain ethnicities or socioeconomic status are. As Barrett writes: ‘Your mind is not only a function of your brain but also of the other brains in your culture’ (249). To conclude, active inference induced by culturally shaped racial stereotypes might contribute to shooter bias.[7]

3.2. How predictions affect social reality

Let us now continue with cases where active inferential processes not only affect how people perceive social reality but also co-create social reality through action. We start by putting the classic social psychological example of a self-fulfilling prophecy into predictive processing terms: teachers’ expectations about their students influence students’ achievements (e.g. Gentrup et al. 2020; Rosenthal and Jacobson 1968). A teacher forms an unfounded top-down prediction of how ‘good’ or ‘bad’ a student is and assigns high precision to this prediction. This prediction affects the teacher’s actions through active inferential processes: for example, when interacting with students or their work, the teacher tends to look for signs that confirm their predictions and to discard disconfirming signs as noise. In turn, the student notices the teacher’s feedback, which can be subtle in the form of body language, more direct in the form of a verbal comment, and/or straightforward in the form of a grade. Naturally, the teacher’s feedback provides the student with a basis for self-assessment. Accordingly, the student might adopt the teacher’s hypothesis about how ‘good’ or ‘bad’ they are and integrate this idea into their generative model through perceptual inference. In turn, this updated generative model then affects the student’s behaviour through active inferential processes: for instance, when frustrated because they do not immediately comprehend something, a student who feels that they are good thinks that they simply have to try harder, whereas a student who feels that they are bad thinks that they are simply unintelligent and gives up. The way such students behave provides new evidence for their teacher’s as well as their own prior predictions of how ‘good’ or ‘bad’ they are, further increasing the precision of these predictions and the likelihood of active inference overriding perceptual inference. The teacher’s prophecy is fulfilled, and the cycle of mutual prediction, which was originally unfounded, continues based on increasing evidence.[8]
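As a rough illustration of this loop, the following sketch simulates the mutual reinforcement just described. It is a deliberately simplified toy model with hypothetical parameters (how strongly the expectation colours feedback, how strongly feedback shapes the student’s self-belief), not a reconstruction of any study’s design:

```python
# Toy simulation (illustrative only) of the teacher-student prediction loop:
# an initially unfounded teacher expectation biases feedback, the student's
# self-belief tracks that feedback, the resulting behaviour confirms the
# expectation, and the expectation gains precision over repeated interactions.

def simulate(teacher_expectation, rounds=10, true_ability=0.5):
    student_self_belief = true_ability
    teacher_precision = 1.0
    for _ in range(rounds):
        # Feedback mixes the student's actual performance with the expectation;
        # the more precise the expectation, the more it colours the feedback.
        weight = teacher_precision / (teacher_precision + 1.0)
        performance = 0.5 * true_ability + 0.5 * student_self_belief
        feedback = weight * teacher_expectation + (1 - weight) * performance
        # The student integrates the feedback into their self-model (perceptual inference) ...
        student_self_belief = 0.7 * student_self_belief + 0.3 * feedback
        # ... and the confirming behaviour raises the precision of the expectation.
        teacher_precision += 1.0
    return round(student_self_belief, 2)

print(simulate(teacher_expectation=0.9))   # self-belief drifts upward toward the high expectation
print(simulate(teacher_expectation=0.1))   # self-belief drifts downward toward the low expectation
```

The point of the sketch is only that, once the expectation is treated as precise, it keeps generating the very evidence that sustains it.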

What is the magnitude of self-fulfilling prophecies’ effects? The example of the last paragraph involves a dyadic relationship between one teacher and one student. A meta-analysis including 464 studies that were conducted over the first 25 years of research on the effects of self-fulfilling prophecies on dyadic relations found an average effect size of d = 0.63 (Rosenthal 1994). However, in everyday life, individuals are not only confronted with one perceiver’s expectations but with many perceivers’ expectations. For a long time, researchers hypothesised that, if several perceivers impose the same expectations upon a subject, self-fulfilling prophecies accumulate. This is in line with predictive processing as additional perceivers with similar expectations should increase the ascribed precision of these expectations and thereby their influence on action. Recent experiments provided the first evidence for this hypothesis (Madon et al. 2004, 2006, 2018). For instance, an experimental study found that, on average, self-fulfilling prophecies associated with a single perceiver’s expectations had an effect size of d = 0.65 – similar to the meta-analytic value found by Rosenthal. Adding a second perceiver with similar expectations led to an additional self-fulfilling prophecy with an effect size of d = 1.04. Therefore, both perceivers combined elicited a self-fulfilling prophecy with an effect size of d = 1.69 (Madon et al. 2018). This suggests that self-fulfilling prophecies might not only accumulate but also become disproportionately stronger with each additional perceiver holding similar expectations, since the second perceiver’s added effect (d = 1.04) exceeded the first perceiver’s (d = 0.65).

Group membership and stereotypes play an important role in self-fulfilling prophecies. First, predictions people form before engaging in individual interactions build on group-specific beliefs (unless one applies a uniform prior prediction). In other words, when an individual lacks direct evidence, their prior prediction about an unknown person must be based entirely on the stereotypes they hold. For example, a teacher sees that a student has a foreign name. Because they hold the stereotype that foreigners are not good at writing, they form the prior prediction that the student’s essay will not be very good, which influences the teacher’s behaviour. This might explain why, in a German study, an essay apparently written by a pupil with a Turkish-sounding name obtained a significantly lower grade and poorer secondary school recommendations than the same essay apparently written by a pupil with a German-sounding name (Sprietsma 2013). Second, since stereotypes are often culturally shared, members of the stereotyped group should frequently be confronted with people holding them, which increases the magnitude of self-fulfilling effects (cf. Madon et al. 2018). Therefore, stereotypes in general and culturally shared stereotypes in particular tend to promote stereotype-consistent behaviour in interpersonal interactions.

In addition to these effects, culturally shared stereotypes can also promote stereotype-consistent behaviour in non-interpersonal interactions; namely, when members of a stereotyped group observe situations in which fellow group members with whom they identify are treated and/or behave in a manner consistent with the stereotype. Observing such situations makes the portrayed stereotype salient. In turn, the theory of stereotype threat says that a salient stereotype tends to promote stereotype-consistent behaviour (Steele and Aronson 1995). In predictive processing terms: observing a stereotype-consistent situation increases the stereotype’s precision, which in turn promotes active inferential processes that aim at confirming the stereotype. A recent meta-analysis found that, in experimental settings, content usually shown in mass media (e.g. ads, newspaper articles, cartoons or excerpts of TV series) portraying a negative stereotype leads to stereotype threat with an average effect size of d = 0.38 (Appel and Weber 2021). The fact that a brief exposure to such content leads to substantial stereotype threat prompts the following hypothesis: outside the laboratory, where stereotypic portrayals of groups are omnipresent, stereotype threat is likely even stronger and might be almost constantly present. Altogether, these findings suggest that (culturally shared) stereotypes can reproduce themselves through interpersonal and non-interpersonal interactions, leading to a kind of ‘large-scale self-fulfilling prophecy’.

The gender–science stereotype seems to provide a well-researched example of such a ‘large-scale self-fulfilling prophecy’. This stereotype holds that men are more talented at science and math than women and is, for example, used as an explanation for the underrepresentation of women in STEM majors (science, technology, engineering, and mathematics) (Banaji and Greenwald 2013; Makarova, Aeschlimann, and Herzog 2019). But while the roots of the gender–science stereotype are frequently linked to genetic differences between the sexes (e.g. Goldenberg 2015), strong evidence indicates that the stereotype is instead socially created. Researchers have found that women’s implicit association of science with masculinity as measured in an implicit association test (IAT) predicts worse math achievement, weaker self-ascribed ability, greater negativity towards math and less frequent participation (Nosek, Banaji, and Greenwald 2002; Nosek and Smyth 2011). Interestingly, explicit stereotypes have less predictive validity regarding these variables than implicit stereotypes. Therefore, it seems that implicit stereotypes play a crucial role in the maintenance of the gender–science stereotype and the gender gap in STEM fields. Similarly, in an analysis of more than half a million gender–science IATs across 34 countries, Nosek et al. (2009) found the following: (1) the level of a nation’s implicit gender–science stereotype predicted national gender differences in 8th-grade science and mathematics achievement; and (2) concerning this achievement gap, explicit gender–science stereotypes did not provide additional predictive validity. Based on these findings, the authors assert that implicit gender–science stereotypes and gender differences in science participation and performance seem to be mutually reinforcing, thereby contributing to the persistent gender gap in STEM fields.

Here, as in any self-fulfilling prophecy, belief and behaviour are hypothesised to mutually reinforce each other. Girls grow up in environments in which women are described and portrayed as being less talented at math and science than men, a claim that seems to be confirmed by the fact that women are underrepresented in STEM fields. Unsurprisingly, many girls implicitly (and also explicitly) adopt the gender–science stereotype when growing up. In turn, holding this stereotype can affect girls’ behaviour by means of stereotype threat. For instance, women who were told that a math test produces gender differences performed worse on the test than those in a control group (Nguyen and Ryan 2008; Spencer, Steele, and Quinn 1999). Such stereotype threat is positively correlated with subjects’ implicit gender–science stereotype (Franceschini et al. 2014), their teachers’ implicit gender–science stereotype (Carlana 2019) and their mothers’ explicit endorsement of the gender–science stereotype (Tomasetto, Alparone, and Cadinu 2011). Furthermore, stereotype threat occurs before girls are even aware that there is a gender–science stereotype.[9] In one study, six-year-old children displayed no signs of endorsing or even being aware of the gender–science stereotype (Galdi, Cadinu, and Tomasetto 2014). Nonetheless, girls showed automatic associations consistent with this stereotype. Moreover, their math performances were significantly worse in a stereotype-consistent condition than in a stereotype-inconsistent condition, with the magnitude of the girls’ gender–science stereotypes mediating the deterioration of their performances. In contrast, boys’ scores remained consistent across the two conditions. Finally, research has shown that, if women hold a strong implicit gender–science stereotype, no stereotypic cue is needed to initiate stereotype threat because the stereotype is chronically accessible (Kiefer and Sekaquaptewa 2007).

The gender–science stereotype seems to self-perpetuate by affecting women’s behaviour in such a way that they confirm it, leading to a large-scale self-fulfilling prophecy. In predictive processing terms, women infer from their social environment that they are less talented at math and science than men, and they integrate this belief into their generative model (perceptual inference). As part of their generative model, this belief leads to the prediction that they (and other women) will not excel, but rather underperform, in STEM fields. This prediction becomes increasingly precise if the social environment constantly provides (subtle) evidence to support it and, in particular, if the gender–science stereotype is directly made salient. In turn, the more precise the gender–science stereotype is assessed to be, the more it promotes stereotype-consistent behaviour through active inferential processes. This then provides new evidence that confirms the stereotype and thereby increases its precision. Finally, if the stereotype is chronically accessible, predictions derived from the stereotype have permanently high precision and therefore a permanent effect on behaviour.

The studies discussed above, including the exemplary case of the gender–science stereotype, provide ample evidence for the reality-shaping power of expectations. People do not merely discover social reality, as Jussim proposes, but constantly co-construct it by means of their actions. These actions are steered by top-down predictions, which in turn are inferred from previous social environments. The result is a self-reproducing cycle that people only leave if their predictions start to change. Therefore, Jussim’s abandoning of social constructivism and steering towards perceptual realism seems premature. Here, it is important to note that the theory of predictive processing does not question Jussim’s assumption that there is an objective reality and that the accuracy of beliefs can be judged against this objective reality (Clark 2016; Friston et al. 2012; Kiebel, Daunizeau, and Friston 2009). As Section 2 has shown, predictive processing even supports Jussim’s claim that stereotypes are often accurate. However, the precise configuration of the objective reality (and thus of accurate stereotypes) is not independent of the people living in it, as their actions, resulting from their predictions, impact this reality, giving rise to the relevance and power of self-fulfilling prophecies.

We see that, since perceptual inference and active inference are two sides of the same coin (i.e. prediction error minimisation), stereotype accuracy and self-fulfilling prophecies are inherently connected too: individuals infer stereotypes from their social environments, integrate them into their generative model, use them as action-steering top-down predictions and thereby reproduce their social environment. As a result of such self-fulfilling prophecies, the accuracy of stereotypes is maintained or even increases. This connection between the two phenomena provides a new middle course between the majority view and the minority view in a long-standing social psychological debate: stereotypes are often accurate and self-fulfilling prophecies relevant and powerful.

4. Conclusion

Predictive processing theory offers a fundamental perspective on the debate as to whether social perception is dominated by error and bias. First, it assumes that the brain constantly predicts sensory input by means of a generative model constructed based on prior experiences. Because the generative model is updated in an approximately Bayesian way and its predictions aim to minimise uncertainty, it is implausible that stereotypes are per se inaccurate or irrational (although the brain’s learning environment can be biased, which can result in inaccurate stereotypes). Therefore, regarding the above-mentioned debate, predictive processing supports the minority view headed by Jussim which states that stereotypes are often accurate. Second, the brain can resolve prediction errors by acting on the world such that its predictions come true. Therefore, the general idea of self-fulfilling prophecies is inherent in predictive processing and implies that top-down predictions constantly affect social perception and behaviour. This emphasised relevance of self-fulfilling prophecies supports the majority view in the above-mentioned debate. Accordingly, predictive processing provides a new middle course in the debate on stereotype accuracy and self-fulfilling prophecies, endorsing the minority position regarding stereotype accuracy and the majority position regarding self-fulfilling prophecies. And this middle course also appears to fit the empirical data better than the existing majority or minority view.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung [grant number 186151].

Notes

1 The rationality and the accuracy of a belief are closely linked, yet not the same. A belief’s accuracy refers to whether the belief is true, whereas a belief’s rationality refers to how the belief was formed. Typically, a rationally formed belief leads to a true belief. But as we will see towards the end of Section 2, there are exceptions. Conversely, an irrationally formed belief typically leads to a wrong belief. Still, it is possible that, by chance, an irrationally formed belief turns out to be true regardless.

2 While Jussim shows that stereotypes are often accurate and infers from this finding that people therefore form beliefs in a rational manner, a cognitive science perspective on belief formation can tell us whether they really do.

3 Predictive processing is closely related to Karl Friston’s ‘free energy principle’ (Friston 2010; Friston, Kilner, and Harrison 2006). According to the free energy principle, living beings must minimise free energy in order to maintain homeostasis and survive. Given certain assumptions about the shape of the probability densities, minimising free energy corresponds to minimising average prediction error (Friston 2010; Hohwy 2017).

4 Therefore, next to accommodating sensory input, there is ‘a constant kind of second-order assessment (known as “precision estimation”) that determines the weighting assigned to specific predictions at all levels of processing and to different aspects of the incoming sensory signal’ (Clark 2015, 5).

5 There are biases that seem to distort the Bayesian formation of stereotypes; examples include outgroup homogeneity bias (Park and Judd 1990), ultimate attribution error (Pettigrew 1979) and linguistic intergroup bias (Maass et al. 1989). However, these biases are unlikely to result in the inaccuracy of all stereotypes. For example, they should not affect beliefs about differences among outgroups. Moreover, the presence of cognitive biases in general (e.g. Kahneman 2011; Nyhan and Reifler 2010) might cast doubt on the assumption of a Bayesian brain (cf. Williams 2018). However, there are Bayesian defences against these doubts (see Tappin and Gadsby 2019).

6 While the term ‘action’ tends to be associated with intentionality, it is used in a broader sense in the literature on predictive processing (including the present paper).

7 Unfortunately, no study has yet been conducted to directly test this hypothesis. Nevertheless, the study by Wilson, Hugenberg, and Rule (2017) presented above appears to hint in this direction: Black men were perceived as bigger and more physically formidable than White men despite the lack of actual physical differences in perceptual stimuli. Furthermore, the authors state: ‘Biased formidability judgments in turn promoted participants’ justifications of hypothetical use of force against Black suspects of crime’ (59). Therefore, it seems likely that top-down predictions, including racial stereotypes, also affect visual perception in a ‘shooter task’ situation and thereby contribute to the shooter bias.

8 Clark (2016) writes about the self-fulfilling potential of mutual prediction in the predictive processing framework: ‘Mutual prediction … can greatly enhance interpersonal understanding. But when coupled with the profound effects of expectation upon perception and action, it can also provide a worrying recipe for self-fulfilling psycho-social knots and tangles’ (74–75).

9 This finding is in conflict with the typically assumed mechanisms behind stereotype threat (e.g. fear of confirming the stereotype) as they require stereotype awareness (cf. Aronson and Good 2003; Galdi, Cadinu, and Tomasetto 2014; Schmader, Johns, and Forbes 2008). However, the absence of stereotype awareness is not a problem for a predictive processing account of stereotype threat since the theory of predictive processing assumes that most processes and predictions are non-conscious anyway (Clark 2016).

References

  • Allport, G. W. 1979. The Nature of Prejudice. 2nd ed. New York: Perseus.
  • Appel, M., and S. Weber. 2021. “Do Mass Mediated Stereotypes Harm Members of Negatively Stereotyped Groups? A Meta-Analytical Review on Media-Generated Stereotype Threat and Stereotype Lift.” Communication Research 48 (2): 151–179. doi:10.1177/0093650217715543.
  • Aronson, J., and C. Good. 2003. “The Development and Consequences of Stereotype Vulnerability in Adolescents.” In Adolescence and Education: Vol. 2. Academic Motivation of Adolescents, edited by F. Pajares, and T. Urdan, 299–330. Greenwich, CT: Information Age.
  • Bach, P., and K. C. Schenke. 2017. “Predictive Social Perception: Towards a Unifying Framework from Action Observation to Person Knowledge.” Social and Personality Psychology Compass 11 (7): e12312. doi:10.1111/spc3.12312.
  • Banaji, M. R., and A. G. Greenwald. 2013. Blind Spot. New York: Random House.
  • Barrett, L. F. 2017. How Emotions Are Made – The Secret Life of the Brain. New York: Houghton Mifflin Harcourt.
  • Brynin, M. 2017. The Gender Pay Gap. Report 109. Equality and Human Rights Commission (EHRC) Research. Accessed March 22, 2017. https://www.equalityhumanrights.com/sites/default/files/research-report-109-the-gender-pay-gap.pdf.
  • Buchanan, T., A. McFarlane, and A. Das. 2016. “A Counterfactual Analysis of the Gender Gap in Parenting Time: Explained and Unexplained Variances at Different Stages of Parenting.” Journal of Comparative Family Studies 47 (2): 193–219. doi:10.3138/jcfs.47.2.193.
  • Büchel, C., S. Geuter, C. Sprenger, and F. Eippert. 2014. “Placebo Analgesia: A Predictive Coding Perspective.” Neuron 81 (6): 1223–1239. doi:10.1016/j.neuron.2014.02.042.
  • Carlana, M. 2019. “Implicit Stereotypes: Evidence from Teachers’ Gender Bias.” The Quarterly Journal of Economics 134 (3): 1163–1224. doi:10.1093/qje/qjz008.
  • Chen, M., and J. A. Bargh. 1997. “Nonconscious Behavioral Confirmation Processes: The Self-Fulfilling Consequences of Automatic Stereotype Activation.” Journal of Experimental Social Psychology 33 (5): 541–560. doi:10.1006/jesp.1997.1329.
  • Clark, A. 2013. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” Behavioral and Brain Sciences 36 (3): 181–204. doi:10.1017/S0140525X12000477.
  • Clark, A. 2015. “Radical Predictive Processing.” The Southern Journal of Philosophy 53 (S1): 3–27. doi:10.1111/sjp.12120.
  • Clark, A. 2016. Surfing Uncertainty. Oxford: Oxford University Press.
  • Correll, J., B. Park, C. M. Judd, and B. Wittenbrink. 2002. “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals.” Journal of Personality and Social Psychology 83 (6): 1314–1329. doi:10.1037/0022-3514.83.6.1314.
  • Correll, J., B. Wittenbrink, B. Park, C. M. Judd, and A. Goyle. 2011. “Dangerous Enough: Moderating Racial Bias with Contextual Threat Cues.” Journal of Experimental Social Psychology 47 (1): 184–189. doi:10.1016/j.jesp.2010.08.017.
  • Darley, J. M., and P. H. Gross. 1983. “A Hypothesis-Confirming Bias in Labeling Effects.” Journal of Personality and Social Psychology 44 (1): 20–33. doi:10.1037/0022-3514.44.1.20.
  • Firestone, C., and B. J. Scholl. 2016. “Cognition Does Not Affect Perception: Evaluating the Evidence for “Top-Down” Effects.” Behavioral and Brain Sciences 39: e229. doi:10.1017/S0140525X15000965.
  • Fiske, S. T. 1998. “Stereotyping, Prejudice, and Discrimination.” In The Handbook of Social Psychology, edited by D. T. Gilbert, S. T. Fiske, and G. Lindzey, 4th ed., Vol. 2, 357–411. New York: McGraw-Hill.
  • Flory, J. A., A. Leibbrandt, and J. A. List. 2015. “Do Competitive Workplaces Deter Female Workers? A Large-Scale Natural Field Experiment on Job Entry Decisions.” The Review of Economic Studies 82 (1): 122–155. doi:10.1093/restud/rdu030.
  • Franceschini, G., S. Galli, F. Chiesi, and C. Primi. 2014. “Implicit Gender–Math Stereotype and Women’s Susceptibility to Stereotype Threat and Stereotype Lift.” Learning and Individual Differences 32: 273–277. doi:10.1016/j.lindif.2014.03.020.
  • Freeman, J. B., A. M. Penner, A. Saperstein, M. Scheutz, and N. Ambady. 2011. “Looking the Part: Social Status Cues Shape Race Perception.” PLoS ONE 6 (9): e25107. doi:10.1371/journal.pone.0025107.
  • Friston, K. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11 (2): 127–138. doi:10.1038/nrn2787.
  • Friston, K., R. A. Adams, L. Perrinet, and M. Breakspear. 2012. “Perceptions as Hypotheses: Saccades as Experiments.” Frontiers in Psychology 3: 151. doi:10.3389/fpsyg.2012.00151.
  • Friston, K., J. Kilner, and L. Harrison. 2006. “A Free Energy Principle for the Brain.” Journal of Physiology-Paris 100 (1-3): 70–87. doi:10.1016/j.jphysparis.2006.10.001.
  • Galdi, S., M. Cadinu, and C. Tomasetto. 2014. “The Roots of Stereotype Threat: When Automatic Associations Disrupt Girls’ Math Performance.” Child Development 85 (1): 250–263. doi:10.1111/cdev.12128.
  • Gentrup, S., G. Lorenz, C. Kristen, and I. Kogan. 2020. “Self-Fulfilling Prophecies in the Classroom: Teacher Expectations, Teacher Feedback and Student Achievement.” Learning and Instruction 66: 101296. doi:10.1016/j.learninstruc.2019.101296.
  • Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
  • Gneezy, U., M. Niederle, and A. Rustichini. 2003. “Performance in Competitive Environments: Gender Differences.” The Quarterly Journal of Economics 118 (3): 1049–1074. doi:10.1162/00335530360698496.
  • Goldenberg, S. 2015. “Why Women Are Poor at Science, by Harvard President.” The Guardian. https://www.theguardian.com/science/2005/jan/18/educationsgendergap.genderissues.
  • Hahn, U. 2014. “The Bayesian Boom: Good Thing or Bad?” Frontiers in Psychology 5: 765. doi:10.3389/fpsyg.2014.00765.
  • Hinton, P. 2017. “Implicit Stereotypes and the Predictive Brain: Cognition and Culture in ‘Biased’ Person Perception.” Palgrave Communications 3 (1): 17086. doi:10.1057/palcomms.2017.86.
  • Hohwy, J. 2007. “Functional Integration and the Mind.” Synthese 159 (3): 315–328. doi:10.1007/s11229-007-9240-3.
  • Hohwy, J. 2013. The Predictive Mind. Oxford: Oxford University Press.
  • Hohwy, J. 2017. “How to Entrain Your Evil Demon.” In Philosophy and Predictive Processing, edited by T. K. Metzinger and W. Wiese, 1–15. Frankfurt am Main: MIND Group. http://www.predictive-mind.net/DOI?isbn=9783958573048.
  • Hornaday, A. 2014. “At Its Worst, the Dumb Blonde Is a Tired Stereotype. At Its Best, It’s a Sublime Example of Cinematic Subversion.” Washington Post, May 3. https://www.washingtonpost.com/lifestyle/style/in-praise-of-the-dumb-blonde-an-archetype-in-need-of-saving/2014/05/01/4fdf67b4-cfbe-11e3-a6b1-45c4dffb85a6_story.html.
  • Jussim, L. 2012. Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy. New York: Oxford University Press.
  • Jussim, L. 2017. “Précis of Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy.” Behavioral and Brain Sciences 40: e1. doi:10.1017/S0140525X1500062X.
  • Jussim, L., J. T. Crawford, and R. S. Rubinstein. 2015. “Stereotype (In)Accuracy in Perceptions of Groups and Individuals.” Current Directions in Psychological Science 24 (6): 490–497. doi:10.1177/0963721415605257.
  • Kahn, K. B., and P. G. Davies. 2017. “What Influences Shooter Bias? The Effects of Suspect Race, Neighborhood, and Clothing on Decisions to Shoot.” Journal of Social Issues 73 (4): 723–743. doi:10.1111/josi.12245.
  • Kahneman, D. 2011. Thinking, Fast and Slow. London: Macmillan.
  • Kiebel, S. J., J. Daunizeau, and K. J. Friston. 2009. “Perception and Hierarchical Dynamics.” Frontiers in Neuroinformatics 3: 20. doi:10.3389/neuro.11.020.2009.
  • Kiefer, A. K., and D. Sekaquaptewa. 2007. “Implicit Stereotypes and Women’s Math Performance: How Implicit Gender-Math Stereotypes Influence Women’s Susceptibility to Stereotype Threat.” Journal of Experimental Social Psychology 43 (5): 825–832. doi:10.1016/j.jesp.2006.08.004.
  • Kihlstrom, J. F. 2017. “Realism and Constructivism in Social Perception.” Behavioral and Brain Sciences 40: 28–30. doi:10.1017/S0140525X15002344.
  • Kite, M. E., and B. E. Whitley. 2016. Psychology of Prejudice and Discrimination. New York: Routledge.
  • Koslowski, A. 2021. “Capturing the Gender Gap in the Scope of Parenting Related Leave Policies Across Nations.” Social Inclusion 9 (2): 250–261. doi:10.17645/si.v9i2.3852.
  • Krosch, A. R., L. Berntsen, D. M. Amodio, J. T. Jost, and J. J. Van Bavel. 2013. “On the Ideology of Hypodescent: Political Conservatism Predicts Categorization of Racially Ambiguous Faces as Black.” Journal of Experimental Social Psychology 49 (6): 1196–1203. doi:10.1016/j.jesp.2013.05.009.
  • Kyle, D. J., and H. I. Mahler. 1996. “The Effects of Hair Color and Cosmetic Use on Perceptions of a Female’s Ability.” Psychology of Women Quarterly 20 (3): 447–455. doi:10.1111/j.1471-6402.1996.tb00311.x.
  • Levin, D. T., and M. R. Banaji. 2006. “Distortions in the Perceived Lightness of Faces: The Role of Race Categories.” Journal of Experimental Psychology: General 135 (4): 501–512. doi:10.1037/0096-3445.135.4.501.
  • Little, A. C. 2017. “An Evolutionary Approach to Accuracy in Social Perception.” Behavioral and Brain Sciences 40: e8. doi:10.1017/S0140525X15002356.
  • Maass, A., D. Salvi, L. Arcuri, and G. R. Semin. 1989. “Language Use in Intergroup Contexts: The Linguistic Intergroup Bias.” Journal of Personality and Social Psychology 57 (6): 981–993. doi:10.1037/0022-3514.57.6.981.
  • Macrae, C. N., and G. Bodenhausen. 2000. “Social Cognition: Thinking Categorically About Others.” Annual Review of Psychology 51: 93–120.
  • Madon, S., M. Guyll, R. Spoth, and J. Willard. 2004. “Self-Fulfilling Prophecies: The Synergistic Accumulative Effect of Parents’ Beliefs on Children’s Drinking Behavior.” Psychological Science 15 (12): 837–845. doi:10.1111/j.0956-7976.2004.00764.x.
  • Madon, S., L. Jussim, M. Guyll, H. Nofziger, E. R. Salib, J. Willard, and K. C. Scherr. 2018. “The Accumulation of Stereotype-Based Self-Fulfilling Prophecies.” Journal of Personality and Social Psychology 115 (5): 825–844. doi:10.1037/pspi0000142.
  • Madon, S., J. Willard, M. Guyll, L. Trudeau, and R. Spoth. 2006. “Self-Fulfilling Prophecy Effects of Mothers’ Beliefs on Children’s Alcohol Use: Accumulation, Dissipation, and Stability Over Time.” Journal of Personality and Social Psychology 90 (6): 911–926. doi:10.1037/0022-3514.90.6.911.
  • Makarova, E., B. Aeschlimann, and W. Herzog. 2019. “The Gender Gap in STEM Fields: The Impact of the Gender Stereotype of Math and Science on Secondary Students’ Career Aspirations.” Frontiers in Education 4: 60. doi:10.3389/feduc.2019.00060.
  • Marczyk, J. 2017. “Why Would We Expect the Mind to Work That Way? The Fitness Costs to Inaccurate Beliefs.” Behavioral and Brain Sciences 40: 33–34. doi:10.1017/S0140525X1500237X.
  • McCarthy, N. 2021. “Switzerland Comes First for Chocolate Consumption.” Statista. https://www.statista.com/chart/3668/the-worlds-biggest-chocolate-consumers/.
  • Mekawi, Y., and K. Bresin. 2015. “Is the Evidence from Racial Bias Shooting Task Studies a Smoking Gun? Results from a Meta-Analysis.” Journal of Experimental Social Psychology 61: 120–130. doi:10.1016/j.jesp.2015.08.002.
  • Merton, R. K. 1948. “The Self-Fulfilling Prophecy.” The Antioch Review 8 (2): 193–210. doi:10.2307/4609267.
  • Miller, D. T., and W. Turnbull. 1986. “Expectancies and Interpersonal Processes.” Annual Review of Psychology 37 (1): 233–256. doi:10.1146/annurev.ps.37.020186.001313.
  • Nguyen, H.-H. D., and A. M. Ryan. 2008. “Does Stereotype Threat Affect Test Performance of Minorities and Women? A Meta-Analysis of Experimental Evidence.” Journal of Applied Psychology 93 (6): 1314–1334. doi:10.1037/a0012702.
  • Niederle, M., and L. Vesterlund. 2007. “Do Women Shy Away from Competition? Do Men Compete too Much?” The Quarterly Journal of Economics 122 (3): 1067–1101. doi:10.1162/qjec.122.3.1067.
  • Nix, J., B. A. Campbell, E. H. Byers, and G. P. Alpert. 2017. “A Bird’s Eye View of Civilians Killed by Police in 2015.” Criminology & Public Policy 16 (1): 309–340. doi:10.1111/1745-9133.12269.
  • Nosek, B. A., M. R. Banaji, and A. G. Greenwald. 2002. “Harvesting Implicit Group Attitudes and Beliefs from a Demonstration Web Site.” Group Dynamics: Theory, Research, and Practice 6 (1): 101–115. doi:10.1037/1089-2699.6.1.101.
  • Nosek, B. A., and F. L. Smyth. 2011. “Implicit Social Cognitions Predict Sex Differences in Math Engagement and Achievement.” American Educational Research Journal 48 (5): 1125–1156. doi:10.3102/0002831211410683.
  • Nosek, B. A., F. L. Smyth, N. Sriram, N. M. Lindner, T. Devos, A. Ayala, Y. Bar-Anan, et al. 2009. “National Differences in Gender–Science Stereotypes Predict National Sex Differences in Science and Math Achievement.” Proceedings of the National Academy of Sciences 106 (26): 10593–10597. doi:10.1073/pnas.0809921106.
  • Nyhan, B., and J. Reifler. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32 (2): 303–330. doi:10.1007/s11109-010-9112-2.
  • Ongaro, G., and T. J. Kaptchuk. 2019. “Symptom Perception, Placebo Effects, and the Bayesian Brain.” Pain 160 (1): 1–4. doi:10.1097/j.pain.0000000000001367.
  • Otten, M., A. K. Seth, and Y. Pinto. 2017. “A Social Bayesian Brain: How Social Knowledge Can Shape Visual Perception.” Brain and Cognition 112: 69–77. doi:10.1016/j.bandc.2016.05.002.
  • Park, B., and C. M. Judd. 1990. “Measures and Models of Perceived Group Variability.” Journal of Personality and Social Psychology 59 (2): 173–191. doi:10.1037/0022-3514.59.2.173.
  • Pettigrew, T. F. 1979. “The Ultimate Attribution Error: Extending Allport’s Cognitive Analysis of Prejudice.” Personality and Social Psychology Bulletin 5 (4): 461–476. doi:10.1177/014616727900500407.
  • Pezzulo, G., F. Rigoli, and K. Friston. 2015. “Active Inference, Homeostatic Regulation and Adaptive Behavioural Control.” Progress in Neurobiology 134: 17–35. doi:10.1016/j.pneurobio.2015.09.001.
  • Ramstead, M. J., M. D. Kirchhoff, and K. J. Friston. 2020. “A Tale of Two Densities: Active Inference is Enactive Inference.” Adaptive Behavior 28 (4): 225–239. doi:10.1177/1059712319862774.
  • Rosenhan, D. L. 1973. “On Being Sane in Insane Places.” Science 179 (4070): 250–258. doi:10.1126/science.179.4070.250.
  • Rosenthal, R. 1994. “Interpersonal Expectancy Effects: A 30-Year Perspective.” Current Directions in Psychological Science 3 (6): 176–179. doi:10.1111/1467-8721.ep10770698.
  • Rosenthal, R., and L. Jacobson. 1968. “Pygmalion in the Classroom.” The Urban Review 3 (1): 16–20. doi:10.1007/BF02322211.
  • Sadler, M. S., J. Correll, B. Park, and C. M. Judd. 2012. “The World Is Not Black and White: Racial Bias in the Decision to Shoot in a Multiethnic Context.” Journal of Social Issues 68 (2): 286–313. doi:10.1111/j.1540-4560.2012.01749.x.
  • Schmader, T., M. Johns, and C. Forbes. 2008. “An Integrated Process Model of Stereotype Threat Effects on Performance.” Psychological Review 115 (2): 336–356. doi:10.1037/0033-295X.115.2.336.
  • Spencer, S. J., C. M. Steele, and D. M. Quinn. 1999. “Stereotype Threat and Women’s Math Performance.” Journal of Experimental Social Psychology 35 (1): 4–28. doi:10.1006/jesp.1998.1373.
  • Sprietsma, M. 2013. “Discrimination in Grading: Experimental Evidence from Primary School Teachers.” Empirical Economics 45 (1): 523–538. doi:10.1007/s00181-012-0609-x.
  • Stangor, C. 2016. “The Study of Stereotyping, Prejudice, and Discrimination within Social Psychology: A Quick History of Theory and Research.” In Handbook of Prejudice, Stereotyping, and Discrimination, edited by T. D. Nelson, 2nd ed., 3–27. New York: Psychology Press.
  • Steele, C. M., and J. Aronson. 1995. “Stereotype Threat and the Intellectual Test Performance of African Americans.” Journal of Personality and Social Psychology 69 (5): 797–811. doi:10.1037/0022-3514.69.5.797.
  • Talbott, W. 2008. “Bayesian Epistemology.” In The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), edited by E. N. Zalta. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2016/entries/epistemology-bayesian/.
  • Tappin, B. M., and S. Gadsby. 2019. “Biased Belief in the Bayesian Brain: A Deeper Look at the Evidence.” Consciousness and Cognition 68: 107–114. doi:10.1016/j.concog.2019.01.006.
  • Tenenbaum, J. B., T. L. Griffiths, and C. Kemp. 2006. “Theory-Based Bayesian Models of Inductive Learning and Reasoning.” Trends in Cognitive Sciences 10 (7): 309–318. doi:10.1016/j.tics.2006.05.009.
  • Tenenbaum, J. B., C. Kemp, T. L. Griffiths, and N. D. Goodman. 2011. “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science 331 (6022): 1279–1285. doi:10.1126/science.1192788.
  • Thomas, J. B. 1997. “Dumb Blondes, Dan Quayle, and Hillary Clinton: Gender, Sexuality, and Stupidity in Jokes.” The Journal of American Folklore 110 (437): 277–313. doi:10.2307/541162.
  • Tomasetto, C., F. R. Alparone, and M. Cadinu. 2011. “Girls’ Math Performance under Stereotype Threat: The Moderating Role of Mothers’ Gender Stereotypes.” Developmental Psychology 47 (4): 943–949. doi:10.1037/a0024047.
  • Walsh, K. S., D. P. McGovern, A. Clark, and R. G. O’Connell. 2020. “Evaluating the Neurophysiological Evidence for Predictive Processing as a Model of Perception.” Annals of the New York Academy of Sciences 1464 (1): 242–268. doi:10.1111/nyas.14321.
  • Weir, S., and M. Fine-Davis. 1989. “‘Dumb Blonde’ and ‘Temperamental Redhead’: The Effect of Hair Colour on Some Attributed Personality Characteristics of Women.” The Irish Journal of Psychology 10 (1): 11–19. doi:10.1080/03033910.1989.10557730.
  • Williams, D. 2018. “Hierarchical Bayesian Models of Delusion.” Consciousness and Cognition 61: 129–147. doi:10.1016/j.concog.2018.03.003.
  • Wilson, J. P., K. Hugenberg, and N. O. Rule. 2017. “Racial Bias in Judgments of Physical Size and Formidability: From Size to Threat.” Journal of Personality and Social Psychology 113 (1): 59–80. doi:10.1037/pspi0000092.
  • Zagorsky, J. L. 2016. “Are Blondes Really Dumb?” Economics Bulletin 36 (1): 401–410.