COGNITIVE & EXPERIMENTAL PSYCHOLOGY

The effect of challenging people’s fundamental assumptions about a task: Introducing uncertainty for reducing overprecision

Nirit Yuviler-Gavish, Doron Faran & Mark N. Berman
Article: 2196102 | Received 16 Aug 2022, Accepted 23 Mar 2023, Published online: 09 Apr 2023

Abstract

Most of us understand that our rationality is bounded by our cognitive limitations, knowledge, set of beliefs, etc. Generally, however, people are not sufficiently aware of their own bounded rationality: they underestimate the extent of these bounds and, as a result, demonstrate overprecision in making their decisions. In the current research we evaluate a new method to reduce this overprecision and improve the understanding of bounded rationality: challenging a person’s fundamental assumptions about a task by introducing uncertainty. Our study’s 120 participants were asked to predict the actions of a virtual player in a series of rounds of SET®, a popular card game. The fundamental assumption was challenged by changing the virtual player’s choice rule after 60 rounds. We juxtaposed this with a second method, explanation, in which information about bounded rationality was given at the beginning of the task and after rounds 10 and 62. Four experimental groups played the game in a 2 by 2 design, with the conditions Change (yes or no) and Explanations (yes or no). The results demonstrate that both methods, Explanation and Change, increased the post-knowledge of results (PKR) times. We show that the new method of improving the understanding of bounded rationality, challenging the fundamental assumptions about a task through introducing uncertainty, is effective and should be evaluated further.

1. Introduction

Bounded rationality is a concept coined by Simon (Citation1972, Citation1990) to describe rational choice that takes into account the cognitive limitations of the decision maker. In the decision-making process, the decision maker generates alternatives, either estimating the probability distributions of the possible outcomes or disregarding the outcome probabilities and using a satisficing strategy. The decision maker wishes to make the best rational choice, but is bounded by their knowledge, set of beliefs, cognitive limitations, time, and so on.

The notion of bounded rationality can explain several phenomena in human behaviour. For example, people tend to use their intuition to make decisions, and intuitive thoughts are based on the ease with which mental models come to mind (Kahneman, Citation2003). Collet (Citation2009) proposed that habits reflect bounded rationality, because they are the products of the individual’s history and are rules that can be followed without explicitly referring to them. In organizations, bounded rationality can explain the tendency to make decisions in a way that simplifies the information processing and computational load placed on human decision makers (Morecroft, Citation1983), and the fact that choices are shaped by standard operating procedures as a way of coping with uncertainty (Carter, Citation1971). Ferreira et al. (Citation2010) found that physicians, in their decision making, rely on simple heuristics associated with environmental factors rather than on diagnostic clinical practice guidelines, in order to save cognitive energy and enable robustness and simplicity.

Bounded rationality, although it can lead to fast, frugal, computationally cheap, and satisfying decisions (Gigerenzer, Citation2020), can also lead to bad decisions (Kahneman, Citation2003). We can assume that most people have a general understanding of the bounds of their rationality, and have experienced situations in which their knowledge, computation limitations, environmental constraints, time pressure, etc., caused them to make non-optimal decisions. In many cases, however, people underestimate the extent of these bounds, and, as a result, demonstrate overprecision in making their decisions. Interestingly, they might even consider others as less sophisticated decision-makers than themselves, according to the cognitive hierarchy and level k models (Camerer et al., Citation2004; Chong et al., Citation2016; Koriyama & Ozkes, Citation2021).

Overprecision is one of the most robust types of overconfidence. It can be defined as excessive certainty regarding the accuracy of one’s beliefs (Moore & Healy, Citation2008). Studies demonstrate that people are often too sure that they know the correct answer (Alpert & Raiffa, Citation1982; Klayman et al., Citation1999; Soll & Klayman, Citation2004). Moore et al. (Citation2015) presented several domains in which overprecision is pronounced. One is medicine, where physicians tend to focus on a diagnosis without considering a sufficient number of alternatives (Arkes et al., Citation1981; Christensen-Szalanski & Bushyhead, Citation1981; Hill et al., Citation2005). In addition, the overprecision of investors can explain their tendency to trade when it is better not to, contributing to phenomena such as market volatility and speculative price bubbles (Scheinkman & Xiong, Citation2003). In companies as well, forecasts about future demand tend to demonstrate overprecision (Ben David et al., Citation2013; Du et al., Citation2011; Makridakis et al., Citation2009).

A common strategy to reduce people’s overprecision when making decisions is to give them suitable explanations about the tendency to overprecision and how to avoid it. For example, Koriat, Lichtenstein and Fischhoff (Citation1980) demonstrated that when participants were asked to consider evidence contradicting their answer, their overprecision decreased. Other successful forms of explanations were to guide people to consider multiple alternatives (Hirt & Markman, Citation1995) and to decompose the alternatives into smaller sub-alternatives (Fischhoff et al., Citation1978).

Nevertheless, explanations may not be helpful, since people may ignore them or, in the worst case, they may stimulate processes that produce the opposite of the intended effect. For example, explanations might give the person a better feeling of knowing without leading to better performance. Koriat (Citation1993) demonstrated that the feeling of knowing is not directly correlated with the actual retrieval of information and is affected by the latter’s accessibility. Explanations can also lead to an illusion of competence. Koriat and Bjork (Citation2005) found that participants who studied cue words together with their associated target words judged their ability to recall a target when shown only its cue as higher than their actual performance warranted. In sum, the use of explanations that emphasize retrieving certain information as a method to reduce overprecision is questionable.

In the current research, we present a new method to decrease overprecision. Our method challenges the person’s fundamental assumptions about the task and introduces uncertainty. Our hypothesis is that when people encounter uncertainty and a demonstration showing that one of their fundamental assumptions about the task is invalid, they might be less confident in their decisions, and hence their tendency toward overprecision should decrease. Introducing uncertainty into a task has been shown to be successful in the past when done by other means, e.g., by introducing multiple changes in the importance of various subcomponents in complex tasks (Gopher et al., Citation1989), by adding a secondary task that forced pilots to explore new strategies of performing a task (Seagull & Gopher, Citation1997), or by forcing trainees to explore the possible strategy space (Yechiam et al., Citation2001).

The basis for our suggested method can be linked to the distinction between System 1 and System 2 cognitive processes (Kahneman, Citation2003; Stanovich & West, Citation2000). The cognitive processes of System 1 are fast, automatic, effortless, associative and implicit. The cognitive processes of System 2 are slower, effortful, serial, and conscious. System 1 can lead us to make fast decisions, but these decisions are prone to the limitations of our bounded rationality. System 2 can control System 1 and lead to slower, but more accurate, decisions, but it needs to be activated. We expected that challenging an individual’s fundamental assumptions about the task by introducing uncertainty would prompt System 2 to take control, which, in turn, would reduce overprecision and slow down decision-making processes.

We examined whether challenging participants’ fundamental assumption about a task and introducing uncertainty would lead to a drop in their overprecision, which would be demonstrated by slower decision-making times and increased reflection time. Decision-making times should reflect overprecision, since they indicate the time allocated to considering all possible options, and indeed shorter reaction times have previously been correlated with confidence (Camchong et al., Citation2007; Giardini et al., Citation2008; Sanchez & Dunning, Citation2020). Decision-making times might be influenced by other factors, such as experience with the task, and not only by overprecision; we nevertheless relied on them, because asking participants to report how much they considered other options might have affected their performance and made the study much longer and more tiring. We compared this method to the more standard method of giving participants explanations about their bounded rationality during the study. Hence, four experimental groups were evaluated in a 2 by 2 design. We hypothesized that both methods, changing the rule and giving explanations, would increase the amount of time participants dedicated to making their decisions, but that the effect of explanation would be limited.

In addition to the decision-making time, we hypothesized that the time after participants received feedback about their decision and before they moved on to the next round would also increase under the two methods, changing the rule and giving explanations. In the literature, this period of time is termed post knowledge of results (PKR). PKR is defined as the period of time between giving the feedback, the Knowledge of Results (KR), for step n and the start of the next step, n + 1. There is evidence that the more detailed the KR is, the greater the KR effects and performance improvements will be, provided the PKR period is prolonged: the PKR should be long enough for the learner to digest and utilize the KR information and its implications (Newell, Citation1991; Travalos & Pratt, Citation1995). Without a sufficiently long PKR interval, the learner is unable to process the KR information and use it in the next step. Hence, our second hypothesis was that the effect would include the PKR as well.

2. Materials and methods

2.1. Materials

To evaluate our hypotheses, we used the visual perception game SET®. This popular card game has already been recognized as a powerful tool for studying questions in the field of cognition (Jacob & Hochstein, Citation2008; Nyamsuren & Taatgen, Citation2013a; Nyamsuren & Taatgen, Citation2013b; Nyamsuren & Taatgen, Citation2013c; Taatgen et al., Citation2003; Yuviler-Gavish, Faran, & Berman, Citation2020). The game SET® consists of 81 cards, each showing one, two or three identical geometric figures. The figures vary from card to card in shape (oval, squiggle or diamond), colour (red, purple or green) and filling (solid, shaded or no filling). A set is defined as a triplet of cards in which all three cards either match or differ in each of the four properties (number, shape, colour and filling), considered individually. In the standard SET® game, 12 cards are laid out, and players compete to find a set. The first player to find a set wins the round and takes the three cards, which are replaced with new cards for the next round.
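As an informal illustration (ours, not part of the study materials), the sketch below encodes the set-validity rule just described: a card is represented by its four properties, and a triplet is a valid set exactly when, for each property, the three values are either all identical or all different. The card encoding and function names are our own assumptions.

```python
# Minimal sketch of the SET validity rule described above.
from itertools import combinations

PROPERTIES = ("number", "shape", "colour", "filling")

def is_set(card_a, card_b, card_c):
    """Return True if the three cards form a valid set."""
    for prop in PROPERTIES:
        values = {card_a[prop], card_b[prop], card_c[prop]}
        if len(values) == 2:  # two cards match and one differs -> not a set
            return False
    return True

def find_sets(array_of_cards):
    """Return all valid sets within a laid-out array of cards."""
    return [triplet for triplet in combinations(array_of_cards, 3)
            if is_set(*triplet)]

# Example: three cards that differ in every property form a set.
cards = [
    {"number": 1, "shape": "oval",     "colour": "red",    "filling": "solid"},
    {"number": 2, "shape": "squiggle", "colour": "purple", "filling": "shaded"},
    {"number": 3, "shape": "diamond",  "colour": "green",  "filling": "none"},
]
assert is_set(*cards)
```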

In our study, participants were asked to predict the actions of a virtual player in a series of rounds of SET®, using a variation of the game where different sets were assigned different values. Participants were rewarded for correctly anticipating the action of the virtual player, given two possible choices for each round. Participants, however, were only partially informed of the rules by which the virtual player made its choice, and they had to discover the relevant principles through exploration.

Challenging the fundamental assumption about the task through introducing uncertainty was done by changing the virtual player’s choice rules. In the first 60 rounds, the virtual player chose an entirely red set; then, once participants had become accustomed to this rule, in the next 30 rounds it chose an entirely purple set. We assume that a fundamental assumption participants hold about the task is that the virtual player does not change its choice rules.
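The following sketch, purely illustrative and using the hypothetical card representation from the previous snippet (the study’s software is not published with the article), captures the choice-rule manipulation: the virtual player always picks the candidate set that is entirely of its currently preferred colour, and in the Change condition that preferred colour switches from red to purple after round 60.

```python
# Illustrative sketch (our assumption) of the virtual player's choice rule.
def virtual_player_choice(round_number, option_a, option_b, change_condition):
    """Return the set the virtual player selects in a given round.

    option_a, option_b: the two candidate sets (triplets of card dicts)
    offered to the participant; exactly one of them is entirely of the
    currently preferred colour in every round.
    """
    if change_condition and round_number > 60:
        preferred = "purple"   # rule changed after round 60 (Change condition)
    else:
        preferred = "red"      # rounds 1-60, and all 90 rounds in No Change

    def entirely(colour, option):
        return all(card["colour"] == colour for card in option)

    return option_a if entirely(preferred, option_a) else option_b
```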

2.2. Design

Participants were asked to predict the actions of a virtual player facing an array of 12 cards. The virtual player could choose between two possible sets from the array. Participants were rewarded for correctly anticipating the choice of the virtual player. In the instructions to the game, participants were informed that each set was assigned a value, and that the virtual player would choose the option worth the greatest number of points. Participants were not told what values were assigned to each set.

The design included two independent variables—methods to improve the understanding of bounded rationality. These methods were Change and Explanation, and were manipulated in a 2 by 2 design. In the Change condition, the virtual player chose the red set in the first 60 rounds and the purple set in the next 30 rounds. In the Explanation condition, explanations about bounded rationality were given three times: before the task, after round 10 and after round 62. Participants were randomly assigned to four groups: Change-Explanation group, in which the choice rule was changed and explanations were supplied; Change-No Explanation group, in which the choice rule was changed but no explanations were supplied; No Change-Explanation group, in which the choice rule remained the same for the entire 90 rounds of the study but explanations were supplied; and No Change-No Explanation group, in which the choice rule remained the same and no explanations were supplied.

2.3. Participants

The participants were 120 undergraduate students (76 males, 44 females) from Braude College of Engineering, Israel, each assigned randomly to one of the four groups, as follows: 31 participants (22 males, 9 females) were assigned to the Change-Explanation group, 29 participants (14 males, 15 females) to the Change-No Explanation group, 29 participants (21 males, 8 females) to the No Change-Explanation group, and 31 participants (19 males, 12 females) to the No Change-No Explanation group. Participants’ average age was 24.1, with a range of 18 to 37. All participants had normal or corrected-to-normal visual acuity. Three participants, one in each of the Change-No Explanation, No Change-Explanation and No Change-No Explanation groups, were colour-blind. This did not appear to affect their performance, given the specific colours selected for the task, and two of them indeed found the rule; their data were therefore retained in the analysis.

2.4. Experimental task

The experimental task entailed 90 rounds and for each round, participants were asked to predict which of two possible sets drawn from an array of 12 cards would be chosen by a virtual player. Participants were told that the virtual player would choose the set that maximized its profits, which were calculated based on certain rules that were never revealed, and that the location of the sets in the array had no meaning. Participants were informed that they would receive 1 point for each correct prediction, and that their goal was to maximize their points. To encourage participants to take the task seriously, they were also informed that the highest scorers would receive a small monetary reward.

After participants made their prediction in each round, they pressed a button marked “submit” to move to the next screen, in which they were shown the actual selection of the virtual player. Figure 1 presents a typical screenshot showing the participant’s prediction, outlined in pink (top), followed by a screenshot showing the virtual player’s actual selection, outlined in yellow (bottom). Participants’ response times, i.e., the time from the appearance of the array and choices to when the participant pressed the “submit” button and the time from displaying the virtual player’s actual selection to when the participant pressed the “Next Round” button, were measured for each round.

Figure 1. Screenshots from one round of the experimental task, showing the participant’s prediction (top, outlined in pink) and the virtual player’s actual choice (bottom, outlined in yellow).
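The two per-round times described above could be logged as in the sketch below. This is our own schematic, assuming the interface exposes callbacks for displaying the array, showing the feedback screen, and waiting for the two button presses; it is not the authors’ experimental software.

```python
# Schematic timing logic (assumed, not the original software): decision time
# runs from the appearance of the array until "Submit" is pressed; PKR time
# runs from the display of the virtual player's actual selection until
# "Next Round" is pressed.
import time

def run_round(show_array, wait_for_submit, show_feedback, wait_for_next_round):
    show_array()                                   # display the 12 cards and the two options
    t_array_shown = time.monotonic()
    prediction = wait_for_submit()                 # blocks until "Submit" is pressed
    decision_time = time.monotonic() - t_array_shown

    show_feedback(prediction)                      # reveal the virtual player's actual choice
    t_feedback_shown = time.monotonic()
    wait_for_next_round()                          # blocks until "Next Round" is pressed
    pkr_time = time.monotonic() - t_feedback_shown

    return prediction, decision_time, pkr_time
```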

Each group was exposed to the same arrays and options in each of the 90 rounds. Recall, however, that the virtual player’s selections differed between the Change and the No Change groups.

The Change-Explanations and the No Change-Explanations groups were given three explanations about bounded rationality. The first explanation was given to them at the beginning of the task, as follows:

When you (or any other rational person) make a decision on how to behave, you take into account three things:

  1. Purpose: What is the goal (or goals) you want to achieve, and what is its/their relative importance to you?

  2. Status: What is the situation you are in right now—i.e., what are you free to do or what is restricting you in your environment?

  3. Knowledge: What action do you need to take to achieve your goal under existing environmental conditions?

For example: You want to get to the other side of the road as quickly as possible (let’s say, to catch the train), while also staying healthy and safe; you identify the traffic on the road as busy and crowded. It is clear to you that jumping into the road in such a situation puts you in danger. If so, you will have to decide what is best and plan accordingly: to run across the road, or to stand still and miss the train. Anyone else who looks at you from the side and sees your behaviour (which is the result of the decision you made) can only guess why you did it. Had they been asked in advance how you would behave, they would not necessarily have known the real reason.

The same applies to you: To be able to explain someone else’s behaviour with certainty, or to predict how they will act beforehand, you must know for sure all three considerations that guide them—purpose, status, and knowledge—which are not necessarily your own. Any less than full knowledge will only allow you to guess.

The second explanation was given to participants in the relevant groups after round 10, once they had gained some experience in the task and were ready to receive the information. It emphasized that our rationality might be bounded by our own purposes, status and knowledge, as follows:

Note: You are currently trying to guess how the “player” in front of you (the computer) will decide. We have learned before that to determine how others should behave, you must know for sure all three considerations that guide them—purpose, status, and knowledge. You may be assuming that the player’s considerations are similar to yours but are you really sure?

The third explanation was given to participants in the aforementioned groups after round 62, right after the rule change for the Change-Explanations group, and gave another hint by emphasizing that the assumption that the virtual player’s considerations are fixed might be invalid, as follows:

You may have already realized that the considerations of the “player” in front of you (the computer) in this situation are not necessarily the same as yours. You might also be assuming that these considerations are fixed, again—like yours. Is this really the case?

2.5. Procedure

This research complied with the American Psychological Association Code of Ethics and was approved by the Institutional Review Board at Braude College. Informed consent was obtained from each participant. All methods were performed in accordance with the relevant guidelines and regulations.

The experiment took place in a computer lab at Braude College, Israel. The participants met in groups of approximately 10, but each participant worked individually at a desktop computer. Each group was assigned randomly to one experimental condition. The experiment lasted approximately one hour.

After gathering in the lab, participants signed a consent form and completed a personal details questionnaire. Following this, an experimenter described the tasks ahead. The rules of SET® were explained by a video, and several examples were presented to ensure participants understood what constituted a set. Participants practiced identifying sets by playing 13 rounds of the ordinary SET® game (i.e., finding sets from among 12 cards) on the computer. The experimenter then explained the experimental task and the payment procedures, and ensured that participants understood how to perform the task on the computer. Participants in the Change-Explanations and No Change-Explanations groups received explanations about bounded rationality at the beginning of the task and after rounds 10 and 62, as described above. Participants in the other groups received no information about bounded rationality. The experimenter remained in the lab throughout the session to help with any technical problems. At the end of the session, participants were thanked and paid for their participation.

All participants received a fixed payment of NIS 50 (approximately $14). In each group, the five participants with the highest scores in the experimental task (the most correct predictions) also received bonuses based on their performance. The participant with the highest score received a bonus of NIS 100, and four runners-up received a bonus of NIS 50 each.

3. Results

The results were analysed by calculating, for each participant, the number of rounds completed before they found the virtual player’s choice rule and from then on made only correct decisions. For the Change-Explanations and Change-No Explanation groups, this was calculated separately for rounds 1–60 and for rounds 61–90. The results of participants who had not found the rule by round 60 were not included in the analysis of rounds 61–90, since they were unaware of any change in the rule. Because the performance of participants who did not find the rule in the No Change-Explanations and No Change-No Explanations groups might have been similar to that of participants who did not find the rule in the Change-Explanations and Change-No Explanation groups, the former participants’ results for rounds 61–90 were also excluded from the analysis (34 participants were excluded altogether).

Five dependent measures were calculated for rounds 1–60 and analysed with an analysis of variance (ANOVA) with two independent variables, Change (yes or no) and Explanations (yes or no). The dependent measures were: number of rounds completed until finding the rule; mean decision time per round until finding the rule (incorrect decisions); mean time before pressing the “Next Round” button per round (PKR) until finding the rule (incorrect decisions); mean decision time per round after finding the rule (correct decisions); and mean time before pressing the “Next Round” button per round (PKR) after finding the rule (correct decisions). The times of rounds 10 and 62 were excluded from the analysis of the mean time before pressing the “Next Round” button, since some participants received explanations after these rounds. In addition, the times of round 1 were excluded from the analysis, since some participants used this round to adjust to the system.

For rounds 61–90, for the first three measures, only the results of participants in the Change condition were included (because the others either found the rule earlier or their results were excluded from the analysis, as described above). Hence, the only independent variable was Explanations, and independent-samples t-tests (equal variances assumed) were performed to analyse the results. For the last two measures, the analysis of rounds 61–90 was performed with an ANOVA, as for rounds 1–60.

Times are reported in seconds.
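To make the analysis pipeline concrete, the following schematic shows how the dependent measures and tests described above could be computed. The record fields, helper names, and the use of statsmodels and scipy are our assumptions for illustration; this is not the authors’ analysis script.

```python
# Schematic analysis sketch (our assumptions, not the authors' code).
# Each participant's log is a list of per-round dicts with the fields
# "round", "correct", "decision_time" and "pkr_time"; the per-participant
# summary table df is a pandas DataFrame with columns such as "pkr_after",
# "change" and "explanation".
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def rounds_until_rule_found(records):
    """Rounds completed before the participant made only correct decisions."""
    for i in range(len(records)):
        if all(r["correct"] for r in records[i:]):
            return i
    return len(records)            # rule never found

def mean_times(records, found_at, after_rule, excluded=(1, 10, 62)):
    """Mean decision time and mean PKR time, before or after finding the rule.

    Rounds 1, 10 and 62 are dropped from the PKR average, as described above.
    """
    subset = [r for r in records if (r["round"] > found_at) == after_rule]
    decision = [r["decision_time"] for r in subset]
    pkr = [r["pkr_time"] for r in subset if r["round"] not in excluded]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return mean(decision), mean(pkr)

def two_by_two_anova(df, measure="pkr_after"):
    """2 x 2 ANOVA with Change and Explanations as between-subject factors."""
    model = smf.ols(f"{measure} ~ C(change) * C(explanation)", data=df).fit()
    return anova_lm(model, typ=2)

def explanation_t_test(explanation_values, no_explanation_values):
    """Independent-samples t-test, equal variances assumed (rounds 61-90)."""
    return stats.ttest_ind(explanation_values, no_explanation_values,
                           equal_var=True)
```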

3.1. Rounds 1–60

3.1.1. Number of rounds needed to find the rule

The mean number of rounds needed to find the rule was not significantly different for the Change (M = 24.9, SD = 24.6) and No Change (M = 24.7, SD = 24.0) conditions (F(1,116) = 0.001, p = 0.97). It was also not significantly different for the Explanations (M = 27.1, SD = 24.5) and No Explanations (M = 22.5, SD = 23.9) conditions (F(1,116) = 1.03, p = 0.31). The interaction between Change and Explanations was also not significant (F(1,116) = 0.26, p = 0.61).

3.1.2. Mean decision time per round until finding the rule

The mean time it took participants to make a decision per round until finding the rule was not significantly different for the Change (M = 7.0, SD = 5.5) and No Change (M = 9.0, SD = 9.7) conditions (F(1,92) = 1.61, p = 0.21). It was also not significantly different for the Explanations (M = 7.9, SD = 24.5) and No Explanations (M = 8.1, SD = 23.9) conditions (F(1,92) = 0.02, p = 0.99). The interaction between Change and Explanations was also not significant (F(1,92) = 0.90, p = 0.35).

3.1.3. Mean time before pressing the “Next Round” button per round (PKR) until finding the rule

As in the previous measures, the mean time before pressing the “Next Round” button per round until finding the rule was not significantly different for the Change (M = 4.0, SD = 7.4) and No Change (M = 3.4, SD = 7.4) conditions (F(1,92) = 0.11, p = 0.74). It was also not significantly different for the Explanations (M = 3.6, SD = 3.7) and No Explanations (M = 3.8, SD = 7.8) conditions (F(1,92) = 0.03, p = 0.87). The interaction between Change and Explanations was also not significant (F(1,92) = 1.56, p = 0.22).

3.1.4. Mean decision time per round after finding the rule

The mean decision time per round after finding the rule was not significantly different for the Change (M = 2.6, SD = 1.2) and No Change (M = 2.5, SD = 0.9) conditions (F(1,82) = 0.06, p = 0.81). It was also not significantly different for the Explanations (M = 2.8, SD = 1.0) and No Explanations (M = 2.4, SD = 1.0) conditions (F(1,82) = 3.71, p = 0.06). The interaction between Change and Explanations was also not significant (F(1,82) = 0.94, p = 0.34).

3.1.5. Mean time before pressing the “Next Round” button per round (PKR) after finding the rule

As in the previous measures, the mean time before pressing the “Next Round” button per round after finding the rule was not significantly different for the Change (M = 1.0, SD = 0.5) and No Change (M = 0.9, SD = 0.3) conditions (F(1,82) = 0.93, p = 0.34). The mean time in the Explanations condition (M = 1.0, SD = 0.5), however, was significantly higher than in the No Explanations condition (M = 0.8, SD = 0.3; F(1,82) = 4.69, p = 0.03). The interaction between Change and Explanations was not significant (F(1,82) = 0.08, p = 0.78). The results are displayed in Figure 2.

Figure 2. Rounds 1–60: Mean time before pressing the “Next Round” button per round (PKR) after finding the rule.

Note. With standard error bars

To sum up, for rounds 1–60 the only significant difference was in the mean time before pressing the “Next Round” button (PKR) after finding the rule, which was higher for the Explanations condition than for the No Explanations condition. The detailed results for each round are presented in Figure 3.

Figure 3. Rounds 2–9, 11–60: Mean time before pressing the “Next Round” button per round (PKR) after finding the rule.

Note. With standard error bars

3.2. Rounds 61–90

3.2.1. Number of rounds needed to find the rule

The mean number of rounds needed to find the rule was not significantly different for the Explanations (M = 3.7, SD = 6.2) and No Explanations (M = 2.7, SD = 2.5) conditions (t(39) = 0.66, p = 0.52).

3.2.2. Mean decision time per round until finding the rule

The mean decision time per round until finding the rule was not significantly different for the Explanations (M = 6.0, SD = 10.0) and No Explanations (M = 3.6, SD = 4.1) conditions (t(40) = 1.03, p = 0.31).

3.2.3. Mean time before pressing the “Next Round” button per round (PKR) until finding the rule

The mean time before pressing the “Next Round” button per round until finding the rule was, similarly to the previous measures, not significantly different for the Explanations (M = 5.0, SD = 7.9) and No Explanations (M = 2.4, SD = 4.1) conditions (t(40) = 1.35, p = 0.19).

3.2.4. Mean decision time per round after finding the rule

The mean decision time per round after finding the rule was not significantly different for the Change (M = 2.2, SD = 0.8) and No Change (M = 5.9, SD = 24.9) conditions (F(1,80) = 1.07, p = 0.30). It was also not significantly different for the Explanations (M = 6.5, SD = 26.0) and No Explanations (M = 2.0, SD = 0.8) conditions (F(1,80) = 1.27, p = 0.26). The interaction between Change and Explanations was not significant, either (F(1,80) = 1.12, p = 0.29).

3.2.5. Mean time before pressing the “Next Round” button per round (PKR) after finding the rule

The mean time before pressing the “Next Round” button per round after finding the rule was significantly higher for the Change condition (M = 0.8, SD = 0.4) than for the No Change condition (M = 0.6, SD = 0.3; F(1,80) = 9.07, p = 0.003). There was no significant difference between the mean time in the Explanations (M = 0.7, SD = 0.3) and No Explanations (M = 0.7, SD = 0.4) conditions (F(1,80) = 0.39, p = 0.53). The interaction between Change and Explanations was not significant (F(1,80) = 0.73, p = 0.40). The results are displayed in Figure 4.

Figure 4. Rounds 61–90: Mean time before pressing the “Next Round” button per round (PKR) after finding the rule.

Note. With standard error bars

To sum up, for rounds 61–90, as for rounds 1–60, the only significant difference was in the mean time before pressing the “Next Round” button (PKR) after finding the rule, but for rounds 61–90 it was higher only for the Change condition compared to the No Change condition, and not for the Explanations condition compared to the No Explanations condition. The detailed results are presented in Figure 5.

Figure 5. Rounds 63–90: Mean time before pressing the “Next Round” button for each round (PKR) after finding the rule.

Note. With standard error bars

Following the experiment, a follow-up transfer task showed no statistically significant differences between the experimental conditions (see the Appendix).

4. Discussion

Most of us understand that our rationality is bounded by our cognitive limitations, knowledge, set of beliefs, etc. (Simon, Citation1972, Citation1990). This phenomenon has been described across several research studies and fields (Carter, Citation1971; Collet, Citation2009; Ferreira et al., Citation2010; Kahneman, Citation2003; Morecroft, Citation1983). In general, however, people are not sufficiently aware of their own bounded rationality and demonstrate overprecision (Moore & Healy, Citation2008) in making their decisions. In the current research we evaluated, alongside the more common method of providing explanations (Fischhoff et al., Citation1978; Hirt & Markman, Citation1995; Koriat et al., Citation1980), a new method to reduce overprecision: challenging participants’ fundamental assumptions about the task and introducing uncertainty.

Participants in our study were asked to predict the actions of a virtual player in a series of rounds of SET®, a popular card game. Challenging the fundamental assumption about the task was done by changing the virtual player’s choice rules after 60 rounds. This method was juxtaposed with a second method that provided information about bounded rationality at the beginning of the task and after rounds 10 and 62. Four experimental groups played the game in a 2 by 2 design, with the conditions Change (yes or no) and Explanations (yes or no).

The results were separated into two stages: rounds 1–60, before the change in the rule, and rounds 61–90, after the rule was changed for two of the groups. In the first stage, explanations significantly increased the time between when participants received the feedback on their decision and when they moved on to the next round (PKR). The effect of Change was not significant in this stage, as expected, since this manipulation was activated only in the second stage. In the second stage, explanations had no significant effect, but participants for whom the rule was changed significantly increased the time between receiving the feedback on their decision and moving on to the next round (PKR). Both effects were evaluated only for the rounds after participants found the rule.

Our hypothesis was that both methods, challenging the fundamental assumption about the task by changing the rule and introducing uncertainty, and giving explanations about bounded rationality, would increase participants’ decision time and PKR time, but that the effect of explanations would be limited. This hypothesis was partially supported.

First, in contrast to our expectation, neither method had a significant effect on decision time, but both increased the PKR, the time after participants received feedback on their decision and before moving on to the next round. It is not surprising that the effect of both methods, Change and Explanations, was reflected in the PKR, since participants used this period to process the information. They did not need more time to make a decision, because they already knew the correct answer, but nevertheless we saw an increase in PKR time. We conjecture that they might have felt a need to build up their knowledge structure about their bounded rationality and their overprecision based on the new information they received, whether it was given through the explanations or by challenging their fundamental assumptions about the task through introducing uncertainty. Hence, the PKR was longer than would otherwise have been necessary.

Second, the effect of the explanations was limited, in line with our assumptions, and was demonstrated only for rounds 1–60. In addition, although we did not hypothesize about the effect of the explanations on how fast participants would find the rule, it is interesting to note that the explanations did not reduce the number of rounds needed to find the rule and affected only the PKR.

Third, the effects were limited to the rounds after finding the rule, i.e., for correct decisions. Although overprecision can take place when both incorrect decisions (i.e., before finding the rule) and correct decisions are made, in the current study it was demonstrated only for the latter. One might question whether longer decision times when the decisions are correct reflect overprecision, but since the rule was chosen and changed arbitrarily, and not based on a logic which could serve as a hint, participants’ strategy in making decisions should have been to evaluate their decisions constantly and expect sudden changes in the rule. Hence, short decision times can be perceived as overprecision even when they do not harm performance. Furthermore, long decision times before finding the rule might reflect the difficulty of the task and not only overprecision, and hence the decision times after finding the rule reflect overprecision more accurately.

We demonstrated that the new method to reduce overprecision, challenging the fundamental assumption about the task through introducing uncertainty, was effective, in line with our hypothesis. Nonetheless, since the experiment ended in round 90, it is not certain whether this effect will last, or will fade away similarly to the effect of explanations. In addition, the transfer task did not indicate any significant effect produced by the conditions. Future research should try to prolong the experiment to address this question. In addition, future research should evaluate the transfer of training from the specific conditions of our experiment to more general understanding of our own bounded rationality. For example, the experimenters could allow participants to have a deeper experience with this paradigm, and then evaluate its effect. The evaluation can take place with tasks that reflect this understanding, such as questions about bounded rationality situations.

To sum up, the current research has laid the foundations for a new method to activate System 2 (Kahneman, Citation2003; Stanovich & West, Citation2000) in tasks that rely on our rationality, enabling us to make more effortful and conscious decisions. Our goal is that the understanding of how bounded our rationality is will not be limited to a specific situation, but will improve daily-life decisions in the domains of economics, industry, society, interpersonal relationships, etc. There is, however, still a long way to go.

Ethics approval

The research involved human participants and was approved by Braude College of Engineering’s ethical committee.

Consent to participate

All participants signed an informed consent form to participate.

Consent for publication

Participants signed an informed consent form agreeing to publication of the research results without their personal information.

Acknowledgements

This research was supported in part by Braude College of Engineering Karmiel, Israel.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This research was supported in part by Braude College of Engineering, Israel.

Notes on contributors

Nirit Yuviler-Gavish

Dr. Nirit Yuviler-Gavish is the head of the Department of Industrial Engineering and Management at Braude College of Engineering Karmiel, Israel, and a faculty member.

Doron Faran

Dr. Doron Faran is a faculty member in the Department of Industrial Engineering and Management at Braude College of Engineering Karmiel, Israel.

Mark N. Berman

Mark N. Berman is a faculty member in the Department of Mathematics at Braude College of Engineering Karmiel, Israel.

References

  • Alpert, M., & Raiffa, H. (1982). A progress report on the training of probability assessors. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 294–14). Cambridge University Press. https://doi.org/10.1017/CBO9780511809477.022
  • Arkes, H. R., Wortmann, R. L., Saville, P. D., & Harkness, A. R. (1981). Hindsight bias among physicians weighing the likelihood of diagnoses. The Journal of Applied Psychology, 66(2), 252–254. https://doi.org/10.1037/0021-9010.66.2.252
  • Ben David, I., Graham, J. R., & Harvey, C. R. (2013). Managerial miscalibration. The Quarterly Journal of Economics, 128(4), 1547–1584. https://doi.org/10.1093/qje/qjt023
  • Camchong, J., Goodie, A. S., McDowell, J. E., Gilmore, C. S., & Clementz, B. A. (2007). A cognitive neuroscience approach to studying the role of overconfidence in problem gambling. Journal of Gambling Studies, 23(2), 185–199. https://doi.org/10.1007/s10899-006-9033-5
  • Camerer, C. F., Ho, T. H., & Chong, J. K. (2004). A cognitive hierarchy model of games. The Quarterly Journal of Economics, 119(3), 861–898. https://doi.org/10.1162/0033553041502225
  • Carter, E. E. (1971). The behavioral theory of the firm and top-level corporate decisions. Administrative Science Quarterly, 16(4), 413–429. https://doi.org/10.2307/2391762
  • Chong, J. K., Ho, T. H., & Camerer, C. (2016). A generalized cognitive hierarchy model of games. Games and Economic Behavior, 99, 257–274. https://doi.org/10.1016/j.geb.2016.08.007
  • Christensen-Szalanski, J. J., & Bushyhead, J. B. (1981). Physicians’ use of probabilistic information in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance, 7(4), 928–935. https://doi.org/10.1037/0096-1523.7.4.928
  • Collet, F. (2009). Does habitus matter? A comparative review of Bourdieu’s habitus and Simon’s bounded rationality with some implications for economic sociology. Sociological Theory, 27(4), 419–434. https://doi.org/10.1111/j.1467-9558.2009.01356.x
  • Du, N., Budescu, D. V., Shelly, M. K., & Omer, T. C. (2011). The appeal of vague financial forecasts. Organizational Behavior and Human Decision Processes, 114(2), 179–189. https://doi.org/10.1016/j.obhdp.2010.10.005
  • Ferreira, A. P. R. B., Ferreira, R. F., Rajgor, D., Shah, J., Menezes, A., Pietrobon, R., & Fretheim, A. (2010). Clinical reasoning in the real world is mediated by bounded rationality: Implications for diagnostic clinical practice guidelines. PLoS ONE, 5(4), e10265. https://doi.org/10.1371/journal.pone.0010265
  • Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 4(2), 330–344. https://doi.org/10.1037/0096-1523.4.2.330
  • Giardini, F., Coricelli, G., Joffily, M., & Sirigu, A. (2008). Overconfidence in predictions as an effect of desirability bias. In M. Abdellaoui & J. D. Hey (Eds.), Advances in decision making under risk and uncertainty (pp. 163–180). Springer. https://doi.org/10.1007/978-3-540-68437-4_11
  • Gigerenzer, G. (2020). What is bounded rationality? In R. Viale (Ed.), Routledge handbook of bounded rationality (pp. 55–69). Routledge. https://doi.org/10.4324/9781315658353
  • Gopher, D., Weil, M., & Siegel, D. (1989). Practice under changing priorities: An approach to training of complex skills. Acta Psychologica, 71(1–3), 147–179. https://doi.org/10.1016/0001-6918(89)90007-3
  • Hill, L. D., Gray, J. J., Carter, M. M., & Schulkin, J. (2005). Obstetrician-gynecologists’ decision making about the diagnosis of major depressive disorder and premenstrual dysphoric disorder. Journal of Psychosomatic Obstetrics & Gynecology, 26(1), 41–51. https://doi.org/10.1080/01443610400023023
  • Hirt, E. R., & Markman, K. D. (1995). Multiple explanation: A consider-an-alternative strategy for debiasing judgments. Journal of Personality and Social Psychology, 69(6), 1069. https://doi.org/10.1037/0022-3514.69.6.1069
  • Jacob, M., & Hochstein, S. (2008). SET recognition as a window to perceptual and cognitive processes. Perception & Psychophysics, 70(7), 1165–1184. https://doi.org/10.3758/PP.70.7.1165
  • Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449–1475. https://doi.org/10.1257/000282803322655392
  • Klayman, J., Soll, J. B., Gonzalez-Vallejo, C., & Barlas, S. (1999). Overconfidence: It depends on how, what, and whom you ask. Organizational Behavior and Human Decision Processes, 79(3), 216–247. https://doi.org/10.1006/obhd.1999.2847
  • Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100(4), 609–639. https://doi.org/10.1037/0033-295X.100.4.609
  • Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one’s knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(2), 187–194. https://doi.org/10.1037/0278-7393.31.2.187
  • Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 107–118. https://doi.org/10.1037/0278-7393.6.2.107
  • Koriyama, Y., & Ozkes, A. I. (2021). Inclusive cognitive hierarchy. Journal of Economic Behavior & Organization, 186, 458–480. https://doi.org/10.1016/j.jebo.2021.04.016
  • Makridakis, S., Hogarth, R. M., & Gaba, A. (2009). Forecasting and uncertainty in the economic and business world. International Journal of Forecasting, 25(4), 794–812. https://doi.org/10.1016/j.ijforecast.2009.05.012
  • Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517. https://doi.org/10.1037/0033-295X.115.2.502
  • Moore, D. A., Tenney, E. R., & Haran, U. (2015). Overprecision in judgment. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (pp. 182–209). Wiley. https://doi.org/10.1002/9781118468333.ch6
  • Morecroft, J. D. (1983). System dynamics: Portraying bounded rationality. Omega, 11(2), 131–142. https://doi.org/10.1016/0305-0483(83)90002-6
  • Newell, K. M. (1991). Motor skill acquisition. Annual Review of Psychology, 42, 213–237. https://doi.org/10.1146/annurev.ps.42.020191.001241
  • Nyamsuren, E., & Taatgen, N. A. (2013a). The effect of visual representation style in problem-solving: A perspective from cognitive processes. PLoS ONE, 8(11), e80550. https://doi.org/10.1371/journal.pone.0080550
  • Nyamsuren, E., & Taatgen, N. A. (2013b). SET as an instance of a real-world visual-cognitive task. Cognitive Science, 37(1), 146–175. https://doi.org/10.1111/cogs.12001
  • Nyamsuren, E., & Taatgen, N. A. (2013c). The synergy of top-down and bottom-up attention in complex task: Going beyond saliency models. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, Austin, TX (Vol. 35, pp. 3181–3186).
  • Sanchez, C., & Dunning, D. (2020). Decision fluency and overconfidence among beginners. Decision, 7(3), 225–237. https://doi.org/10.1037/dec0000122
  • Scheinkman, J. A., & Xiong, W. (2003). Overconfidence and speculative bubbles. The Journal of Political Economy, 111(6), 1183–1220. https://doi.org/10.1086/378531
  • Seagull, F. J., & Gopher, D. (1997). Training head movement in visual scanning: An embedded approach to the development of piloting skills with helmet-mounted displays. Journal of Experimental Psychology: Applied, 3, 163–180. https://doi.org/10.1037/1076-898X.3.3.163
  • Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161–176.
  • Simon, H. A. (1990). Bounded rationality. In J. Eatwell, M. Milgate, & P. Newman (Eds.), Utility and probability (pp. 15–18). Palgrave Macmillan. https://doi.org/10.1007/978-1-349-20568-4_5
  • Soll, J. B., & Klayman, J. (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 299–314. https://doi.org/10.1037/0278-7393.30.2.299
  • Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? The Behavioral and Brain Sciences, 23(5), 645–665. https://doi.org/10.1017/S0140525X00003435
  • Taatgen, N. A., van Oploo, M., Braaksma, J., & Niemantsverdriet, J. (2003). How to construct a believable opponent using cognitive modeling in the game of SET. In Proceedings of the fifth international conference on cognitive modeling (pp. 201–206). Universitätsverlag Bamberg.
  • Travalos, A. K., & Pratt, J. (1995). Temporal locus of knowledge of result: A meta-analytic review. Perceptual and Motor Skills, 80(1), 3–14. https://doi.org/10.2466/pms.1995.80.1.3
  • Yechiam, E., Erev, I., & Gopher, D. (2001). On the potential value and limitations of emphasis change and other exploration-enhancing training methods. Journal of Experimental Psychology: Applied, 7(4), 277–285. https://doi.org/10.1037/1076-898X.7.4.277
  • Yuviler-Gavish, N., Faran, D., & Berman, M. (2020). The effect of complexity on training for exploration of non-intuitive rules in theory of mind. Journal of Cognitive Enhancement, 4, 323–332. https://doi.org/10.1007/s41465-019-00158-z

Appendix

In the transfer task, participants listened to a short story about a common daily life situation, and were then asked to anticipate in as much detail as possible the actions of another person. The transfer task was given before and after the 90 rounds of the experimental task in order to evaluate the conditions’ effect on changes in participants’ answers. The transfer task’s text was as follows (see Figure 6 for the picture):

The person in the picture owes you a sum of money that he promised to return to you a week ago. Since that date has passed, you try to contact him by phone, but in vain—he is never available. You are now walking down the street and stopping before crossing in front of a red light (you are in the black circle). Suddenly you notice that the person you have been chasing for a week is coming quickly from the left side (in the direction of the arrow) looking straight ahead. Now he comes in front of you, stops suddenly and turns half to the right with his gaze turned to you. You so desperately want to catch him… what will he do now?

Write in detail your prediction regarding the next step of the person in the picture, and why you think he/she will do what you predict. You do not have to specify how you will act, but keep in mind that the prediction you give will help you to choose your course of action.

Figure 6. The picture presented to participants in the transfer task.

The transfer task was analysed by calculating the percentage of participants in each group who changed their answer the second time the task was given, using an ANOVA with two independent variables, Change (yes or no) and Explanations (yes or no). The dependent measure was whether or not the participants changed their answer. The percentage of participants who changed their answer in the transfer task was not significantly different for the Change condition (M = 0.20, SD = 0.4) compared to the No Change condition (M = 0.27, SD = 0.5; F(1,110) = 0.62, p = 0.43). There was no significant difference between the Explanations (M = 0.24, SD = 0.4) and No Explanations (M = 0.24, SD = 0.4) conditions (F(1,110) = 0.001, p = 0.98). The interaction between Change and Explanations was not significant, either (F(1,110) = 0.06, p = 0.81).