
Modelling situated intent for human-autonomy teaming: a human-centric approach

Received 04 Apr 2022, Accepted 24 Mar 2024, Published online: 09 Apr 2024

Abstract

Entering an era where humans and synthetic agents are supposed to collaborate and cooperate, adequate models of human intent are crucial for coordinated teamwork. Unfortunately, although there is a need for such models, the concept of intent is ambiguous and approaches to model intent from a human-centric perspective are scarce. Building upon theoretical and methodological foundations, this study aims to address these gaps by presenting a conceptualisation of intent alongside a modelling approach. Specifically, leveraging the six levels of cognitive control outlined in the Joint Control Framework, a provisional model of human intent is presented alongside a definition and operationalisation of the concept. Building on these foundations, a novel approach for modelling situated intent is proposed. Utilising seven scenario-based interviews, the value of these contributions is demonstrated through an example case in the context of Manned-Unmanned Teaming. It is concluded that intent should be understood as a multi-faceted concept shaped by situated constraints, where intent is formed through commitment to choices made via context-situation and means-end reasoning. It is also concluded that the approach is useful, particularly since it can glean insights from choices considered and committed, both being essential in the design of synthetic teammates’ capability to adapt to their human partner’s agency.

Relevance to human factors/ergonomics theory

With theoretical and methodological foundations, a provisional model of human intent is accompanied by a definition and operationalisation of intent to enhance understanding of this often ambiguous concept. Building upon these foundations, this paper introduces and demonstrates an approach for modelling situated intent from a human-centric perspective. These contributions deepen the understanding of human intent, particularly in the context of designing and developing systems that can effectively account for and adapt to human agency.

Introduction

Entering an era of Human-Autonomy Teaming (HAT), where humans and synthetic agents are supposed to work as teammates, the question ‘What does it mean to understand intent?’ is brought to the forefront. For instance, in aviation, unmanned aircraft are expected to act as human-like wingmen (Department of Defence 2018), requiring them to account for and adapt to their human partner’s agency in anticipated and unanticipated situations. Facilitating such understanding requires adequate intent models (Klein et al. 2004), enabling synthetic wingmen to reason about what their human partner is doing, why, and what will happen next (Sukthankar et al. 2014). Unfortunately, the concept of intent is often ambiguous (Bratman 1987), and approaches to model intent from a human-centric perspective are scarce in the literature (Norling 2008). These ambiguities and gaps in research make it difficult to know what and how to model human intent in the early design and development of HAT capabilities.

With a theoretical and methodological foundation, this work sheds light on several aspects of intent that should be considered when modelling human intent, providing a provisional model of human intent together with a definition and operationalisation of the concept. Additionally, a proposed approach for modelling situated intent in the context of HAT is described. An example case involving Manned-Unmanned Teaming (MUM-T) is used to demonstrate the value of these contributions.

Human-autonomy teaming

Teams are usually understood as two or more individuals performing specified roles by interacting adaptively, interdependently, and dynamically toward a common and valued goal (Salas, Sims, and Burke 2005), and are often used as a strategy of choice to cope with complex and dynamic tasks (Salas, Cooke, and Rosen 2008). With recent advances in artificial intelligence, machine learning, and cognitive modelling, the concept of HAT has become increasingly practical and applicable (O’Neill et al. 2020). In this context, Autonomy refers to synthetic agents embedded in computer-based systems (e.g. robots) capable of being independent, self-governing, viable in their respective roles and contexts (Kaber 2018), and perceived as teammates rather than tools (Lyons et al. 2021; O’Neill et al. 2020). As such, it has been argued that a new interaction paradigm is emerging (Tokadli and Dorneich 2019) in which humans and synthetic teammates complement each other with their respective strengths (Lyons et al. 2021). In aviation, this paradigm shift is evident as increasingly autonomous unmanned aircraft are developed to perform dull, dirty, dangerous, and difficult tasks (Gupta, Ghonge, and Jawandhiya 2013) and are expected to act as human-like wingmen by 2040 (Department of Defence 2018). Such MUM-T is envisioned to provide revolutionary operational synergies with improved mission effectiveness, survivability, and situation awareness by combining the inherent strengths of each aircraft platform.

Although HAT, and by extension MUM-T, is envisioned to provide several benefits, problems affecting task and team performance persist. For instance, Klein et al. (2004) argue that all team members must commit to work together, maintain common ground, and be mutually predictable and directable, whether human or not. However, a conundrum persists where increased autonomy reduces humans’ ability to understand and predict their synthetic teammates (Endsley 2017), with an emerging need for team-oriented intent, shared mental models, and communication (Lyons et al. 2021). Research exemplifying how these problems have been addressed includes agent transparency (Chen et al. 2018; Mercado et al. 2016), bi-directional communication (Marathe et al. 2018; Schneider and Miller 2018), and interface design considerations (Calhoun et al. 2018; Endsley 2017).

Moreover, it can be argued that humans and synthetic agents must be ‘cognitively coupled’ for synthetic teammates to be able to account for the human partner’s agency in their decision-making process (Hollnagel, David, and Woods 1999), making fluent and effective HAT more challenging than merely achieving full autonomy (Goodrich and Schultz 2007). For instance, awareness of humans’ goals, plans, and activities is crucial for achieving natural interactions (Van-Horenbeke and Peer 2021). However, this poses a difficult problem for synthetic agents due to the immense variety of possibilities in unconstrained environments (Van-Horenbeke and Peer 2021). Further complicating matters, intent is context-dependent (Han and Pereira 2013), requiring synthetic agents to recognise human situation-adaptations as they respond to events in dynamic environments (Albrecht and Stone 2018) and when deviations occur from the expected script (Hiatt et al. 2017). Additionally, there is a need for approaches that can efficiently and effectively identify relevant factors affecting human decision-making (Albrecht and Stone 2018) and an argument for including human expertise to inform models usable by synthetic agents, necessitating a more holistic approach and closer collaboration between communities (Hiatt et al. 2017; Van-Horenbeke and Peer 2021). Thus, for a holistic and closer collaboration, human factors methods can provide a deeper understanding of human intent, aiming to support conceptual design in developing synthetic teammates capable of understanding and adapting to their human partner’s agency.

Towards an understanding of intent

Before determining methods to model intent, it is crucial to understand what needs to be modelled. In this, the distinction between ascribing and describing intent should be clarified (Hammarbäck et al. 2023). Ascribing intent refers to the process of becoming aware of intent using a model. In contrast, describing intent entails the process of analysing why intent is formed, what it represents, and how it is (to be) enacted. Considering these differences, it becomes apparent that modelling intent should be understood as an activity whereby human intent is described and analysed. Thus, to avoid relying on assumptions of human intent, describing intent can be seen as a prerequisite for models enabling synthetic partners to ascribe intent. Unfortunately, describing intent is faced with numerous challenges, particularly related to the conceptual ambiguity in the literature. For instance, intent can characterise both mental states and actions, as humans both ‘intend to do things’ and ‘do things intentionally’. Thus, intent seems to describe both future-directed mental states and present-directed actions, and although these share commonalities, the relationship between them remains unclear.

The nature and functional roles of intent

To tackle the ambiguity surrounding the intent concept, Bratman (1987) adopts a functionalist approach, proposing an influential theory of practical reasoning. According to this theory, intent encompasses deliberate choices characterised by commitment, serving the functional roles of coordinating choices and actions personally and socially. In the following, the nature and functional roles of intent are explored to better understand these aspects.

First, as the choices are characterised by commitment, these decisions tend to be persistent once settled upon (Bratman 1987). Consequently, excessive deliberation and in situ decisions can be constrained by distributing practical reasoning and action over time. Second, persistent decisions function as input for further means-end reasoning, ensuring internal consistency and coherence. For instance, intent should be consistent with beliefs and prior decisions as well as sufficiently complete to be enacted. To achieve this, humans must use practical reasoning to coordinate belief and intent content. Consequently, prior decisions may pose problems in practical reasoning, effectively constraining intent by filtering inconsistent or conflicting choices. Third, as humans are predisposed to be motivated and act according to prior commitments when the time comes, Bratman (1987) argues that intent serves as a conduct-controlling pro-attitude. Thus, intent differentiates itself from other pro-attitudes (e.g. desires, wishes) as it directly controls actions rather than indirectly influencing them. For instance, humans may have desires without deciding to act on them. In contrast, once intent has been formed, humans may recognise situations in which they believe intent can be rationally enacted. In these situations, prior decisions may guide their actions, with only minor adjustments being made to be actionable in a dynamic environment.

The facets and elements of intent

While intent as ‘choices characterised by commitment’ provides an initial understanding, it does not clarify what types of persistent decisions are made. For instance, intent has been described as a ‘driving motivation’ (Freedman and Zilberstein 2019), goals (Lyons et al. 2021), and ‘plans writ large’ (Bratman 1987). Others have argued for a broader sense of intent, such as intent comprising the purpose and aim with all the connotations required for coordination (Pigeau and McCann 2006). While these accounts are not explicit about what types of choices such connotations represent, insight can be gained from frameworks and models previously described in the literature. In the following, intent comprises several elements that can be fitted with intent content, represented in bold and italic formatting, respectively.

  1. Joint Control Framework (JCF). In the context of MUM-T (Hammarbäck et al. 2023), fighter pilot intent has been described in terms of six levels of cognitive control (Lundberg and Johansson 2021). These levels are Frames, containing frames that separate situations from their context; Effects, containing purposes and overarching goals; Values, containing ability and quality tensions with associated criteria, priorities, and trade-offs; Generic, containing generalised plans and specific procedures; Implementations, containing the control activities required to coordinate action-related capabilities and limitations; and Physical, containing physical objects and their attributes.

  2. Belief-Desire-Intention (BDI). In the context of practical reasoning, Bratman (1987) outlines five elements used in means-end reasoning: Beliefs, containing internal and external states; Purposes, containing the reasons for actions; Outcomes, containing the likely and desired states; Norms, containing conflict resolution strategies; and Hierarchical plans, containing a coordination of activities and actions personally and socially.

  3. Commander’s Intent (CI). In the context of teamwork, Klein (1994, 1999) suggested a script comprising seven elements with which intent statements should be filled and communicated to improve team performance. These elements include Purpose, containing higher-level goals; Objective and Anti-goals, containing an image of the desired and undesired outcomes, respectively; Plan sequence, containing an abstract plan describing how it is intended to unfold; Rationale for the plan, containing reasons considered when planning; Key decisions, containing contingency structures for handling unanticipated events; and Constraints and considerations, containing criteria for achieving the objective.

  4. Operationalized Intent (OI). In the context of HAT, Schneider and Miller (2018) operationalised intent to create a shared semantic space usable by humans and autonomous agents. Their operationalisation of intent comprises three elements: Goals consisting of a goal hierarchy described in terms of qualities and priorities; Task execution constraints consisting of a list of limits and effects; and Plans consisting of sequences of actions and timings.

  5. Plan-Goal Graph (PGG). For the purpose of intent interpretation and conflict resolution in macro-cognitive systems, Geddes (1994, 1997) suggested a model based on a hierarchical structure. The structure comprises four elements: Goals consisting of a goal hierarchy; Constraints described in terms of feasibility, norms, and resources; Side-effects consisting of potential conflicts requiring resolution; and Plans consisting of sub-goals and actions.

Although different terms are used, a tentative mapping reveals several commonalities when comparing these frameworks and models (see Figure 1). In particular, purpose and goals, constraints, plans, and actions are common in the frameworks and models, illustrating the need to account for several facets and elements to make sense of intent.

Figure 1. Comparison of model elements (in bold) and content (in italic).


A provisional model of human intent

Utilising the six levels of cognitive control as the cornerstone to integrate the compared frameworks and models, a provisional model of human intent can be created. The resulting model (see Figure 2) represents an intent space containing six intent facets comprising possible intent elements. Through a deliberative process, intent content can be fitted into these elements, whereby the connections between them can form an intent structure. The following subsections describe these facets, elements, and content in more detail.

Figure 2. A Provisional model of human intent.


Frames

Frames comprise explanatory structures associated with the context, allowing humans to ascribe meaning to situations (Lundberg 2015; Lundberg and Johansson 2021; Klein et al. 2007; Minsky 1975). Such frame structures contain elements that can be fitted with data or information pertaining to the situation, whereby the connections between these grant ‘explanatory power’. By adopting a frame, humans take a perspective on the situation, separating it from its context in which belief and intent content may become salient. Furthermore, frame elements may be sub-frames, implying situations-of-situations as humans adopt sub-frames to ascribe sub-situations, for instance, when a complex problem must be solved in parts. Although the compared models were not explicit about the context and situation separations, this level comprises Adopted frames containing the belief content in the ascribed situation. These intent elements and content help understand how humans frame situations where certain intent may crystallise at other levels.

Effects

The Effects level comprises purposes and overarching goals manifested in the ascribed situation (Lundberg and Johansson 2021). Additionally, recognising the notion of ‘situations-of-situations’, the distinction between core and instrumental goals is useful. Here, core goals (or values) refer to the most central states or outcomes to maintain or achieve, whereas instrumental goals (or values) refer to the states or outcomes necessary for maintaining or achieving the core goals (Lundberg and Johansson 2015, 2019). Given the prevalence of these elements in the compared models, this level comprises Core and instrumental goals containing purposes and objectives. Such intent elements and content are essential as they contribute to understanding the motivation behind human action and the states to be achieved or upheld in the ascribed situation.

Values

The Values level refers to considerations, such as the measures of abilities and qualities and their associated criteria, priorities, and trade-offs (Lundberg and Johansson 2021), for instance, if efficiency is prioritised over quality or norms are violated in the name of effectiveness. The compared models highlighted such considerations and constraints. This level comprises Performance values pertaining to measures of abilities and qualities and Constraint values containing the imposed causal and intentional constraints. These aspects help us understand the criteria, priorities, and trade-offs humans may consider in the ascribed situation.

Generic

The Generic level comprises generalised plans and specific procedures (Lundberg and Johansson 2021). The distinction between these is the degree of situation dependency, where plans are viable in similar situations, whereas procedures can be carried out in a range of situations. Whilst the distinction between plans and procedures is not made in the compared models, it is useful to distinguish between them as plans provide adaptability and procedures provide structure. Additionally, according to Klein’s (1994) model, contingency structures are crucial for managing unexpected situations. Consequently, this level comprises a set of courses of action (COA) per the distinction between Plans, Procedures, and Contingency structures. These intent elements and content help understand the COAs humans rely on in familiar and unfamiliar situations.

Implementations

The Implementations level refers to control activities in which adjustments of constraints are made to realise COAs (Lundberg and Johansson 2021). While the compared models did not detail this level, the coordination of actions is implied. This level comprises the Control activities involved in the coordination of action-related abilities and limitations. These intent elements and content help determine how actions can manifest deliberative processes.

Physical

The Physical level comprises objects and their properties, imposing constraints on the actions in the ascribed situation (Lundberg and Johansson 2021). Whilst physical objects are not explicitly described in the compared models, they may be implied from concepts such as resource constraints. This level comprises the System and Environment containing controllable and uncontrollable objects and attributes, respectively. These intent elements and content help determine how human actions can be enabled and constrained by the system and environment.
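To make the structure of the provisional model tangible, the following minimal sketch shows one way its facets, elements, and fitted content could be held as a simple data structure. It is our illustration only, not part of the published model; names such as Level, Scope, and IntentElement are assumptions introduced here.

```python
# A minimal sketch (an illustration, not the published model) of the
# provisional model: six levels of cognitive control, a context/situation
# scope, and intent elements that can be fitted with intent content.
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    FRAMES = "Frames"
    EFFECTS = "Effects"
    VALUES = "Values"
    GENERIC = "Generic"
    IMPLEMENTATIONS = "Implementations"
    PHYSICAL = "Physical"

class Scope(Enum):
    CONTEXT = "CT"    # contextual
    SITUATION = "TS"  # situational (target situation)

@dataclass
class IntentElement:
    level: Level               # one of the six levels of cognitive control
    scope: Scope               # the context-situation separation
    element: str               # e.g. "Core and instrumental goals"
    content: list[str] = field(default_factory=list)  # fitted intent content

# An intent space is then a collection of elements awaiting content.
intent_space = [
    IntentElement(Level.FRAMES, Scope.SITUATION, "Adopted frames",
                  ["Ascribed transfer of control situation"]),
    IntentElement(Level.EFFECTS, Scope.CONTEXT, "Core goals",
                  ["Achieve mission", "Uphold flight safety"]),
]
```

Connecting content across such elements would then yield the intent structures discussed in the following sections.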

Defining and operationalising intent

Drawing from Bratman’s (1987) influential theory of practical reasoning, the concept of intent is defined as ‘a structure comprised of choices characterised by commitment, with the functional roles of coordinating and guiding cognitive and physical activities personally and socially’. As such, intent has functional roles as an object in practical reasoning and as a subject in action. As an object, intent enables humans to coordinate and constrain choices to make intent consistent and actionable, and as a subject, intent is a ‘conduct-controlling pro-attitude’ that motivates and guides human actions.

Based on several theoretical frameworks and models, the concept of intent is operationalised as ‘deliberated and connotated choices connected by context-situation and means-end reasoning within an intent space with six levels of cognitive control’. Here, the intent space represents a constrained set of contextual and situational intent elements with fitted and connected intent content at various levels of cognitive control that humans can use in practical reasoning and action. Furthermore, it is emphasised that intent, and by extension action, is situated within a context and shaped by various constraints: for instance, available data or information to ascribe situations within the context; maintainable or achievable states or outcomes in terms of purposes and objectives; viable plans and procedures granted considerations related to criteria, priorities, and trade-offs; and performable actions granted timing and available resources.
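As a hedged sketch of one aspect of this operationalisation, the fragment below illustrates the filtering role of commitment described by Bratman (1987): options that conflict with prior committed choices are screened out during further means-end reasoning. The function and data names are our assumptions, not the authors' implementation.

```python
# Illustrative sketch: commitment distinguishes decisions from mere options,
# and prior commitments filter out conflicting choices in further reasoning.
from dataclasses import dataclass

@dataclass(frozen=True)
class Choice:
    level: str  # one of the six levels of cognitive control
    label: str  # e.g. "standardised transfer of control procedure"

def filter_options(options, commitments, conflicts):
    """Keep only options consistent with prior committed choices."""
    committed = {c.label for c in commitments}
    return [o for o in options
            if not any((o.label, c) in conflicts for c in committed)]

commitments = [Choice("Generic", "standardised transfer of control procedure")]
options = [Choice("Implementations", "radio communication"),
           Choice("Implementations", "ad hoc handover")]
conflicts = {("ad hoc handover", "standardised transfer of control procedure")}
print(filter_options(options, commitments, conflicts))
# -> only "radio communication" survives the consistency filter
```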

Modelling human intent

Within artificial intelligence communities, a common trio of models for understanding human behaviour consists of intent, plan, and activity recognition (Freedman and Zilberstein 2019; Sukthankar et al. 2014; Van-Horenbeke and Peer 2021). Through various techniques, these communities employ computer-based models to enable synthetic agents to reason about what humans are doing, why they are doing it, and how they will proceed. While these models are useful, they often lack interpretability and understandability, highlighting the need for human factors methods to support designers and system developers. Unfortunately, approaches to model intent from a human-centric perspective are scarce in the literature, making it difficult to understand how to describe and analyse why intent is formed, what it represents, and how it is (to be) enacted.

Towards a human-centric approach to model intent

Acknowledging the need to describe human intent, Cognitive Work Analysis (CWA) and Cognitive Task Analysis (CTA) have been put forward as families of suitable methods (Heinze 2004). The main distinction between these methods lies in the focus and resulting model characteristics (Vicente 1999). More specifically, by focusing on constraints, CWA methods provide formative models describing what humans can do, while by focusing on cognitive tasks, CTA methods provide descriptive (or normative) models describing what humans (should) do. By analogy, a formative model is a map representing all the paths humans can choose, whereas a descriptive (or normative) model is a single route marked on that map, representing the path humans (should) choose. From the perspective of modelling human intent, these model characteristics are complementary (Roth et al. 2019), and integrating them into a unified model is desirable as it can describe both possibilities and actualities. In particular, CWA methods are suggested as the basis for describing the choices that can be considered, whereas CTA methods are suggested to describe the commitments to these choices.

Within the families of CWA and CTA methods, Work Domain Analysis (WDA; Leveson 2000; Elliott et al. 2000) and a combination of Applied Cognitive Task Analysis (ACTA) and Critical Decision Method (CDM; Norling 2012) have been proposed as suitable methods. Generally, WDA focuses on the functional and environmental constraints that shape human decisions in their work (Naikar 2013), whereas ACTA and CDM focus on the reasons and decisions behind human behaviour (Norling 2012). Consequently, these methods align with the idea of combining complementary methods to design a model with both formative and descriptive characteristics. From an intentional point of view, this approach is attractive as it recognises that human intent is shaped by various constraints, leading to the manifestation of different strategies to cope with changing demands (Hassall and Sanderson 2014). The selection of WDA as the basis is further motivated by its alignment with the conceptualisation of intent put forward here. For instance, Bratman (1987) describes intent in terms of mental structures characterised by commitment with the functional role of controlling actions, whereas Vicente (1999) describes systems in terms of action-relevant functional structures that tend to be persistent over time. Thus, from the perspective of describing intentional systems, the conceptualisation of intent and the proposed method harmonise in their assumptions.

Designing, analysing, and evaluating models of situated intent

With its origin in studies of human reasoning, WDA has been a valuable method for designing, analysing, and evaluating behaviour in complex socio-technical systems (Naikar 2013). Such systems are often represented in two-dimensional abstraction-decomposition spaces, depicting the functional structure through whole-part and means-end relations. While the abstraction dimension is often represented in a hierarchy with five levels ranging from a system’s functional purpose to its physical form, the six levels of cognitive control described in the provisional model are suggested here instead. Regarding the decomposition dimension, work domain models often represent systems at various levels of granularity, from the whole system to sub-systems to components (Naikar 2013). Recognising the concept of situations-of-situations, such decomposition aids in treating the context and situation separately, providing a means to focus attention on salient contextual and situational elements and content of intent.

Taken together, such a two-dimensional abstraction-decomposition space can represent an intent space, describing the choices that can be considered and committed through context-situation and means-end reasoning. These modifications of typical work domain models are aligned with the operationalisation of intent. Figure 3 illustrates such an intent space, comprising choices (C), decisions (D), and two intent structures connecting the context, situation, and the six levels of cognitive control. The depicted intent structures are inconsistent and potentially conflicting, diverging at the generic level of cognitive control.

Figure 3. Illustration of intent structures connecting decisions within an intent space. The intent structures are partly inconsistent in the situation, indicated by the blue (solid) and red (dashed) lines.


From a practical perspective, designing models of situated intent aims to populate the model with choices within the intent space, representing the variety of intent elements and content that can be considered and committed. After considering and committing to choices, reasons and decisions are mapped, representing the intent elements and content formed through a deliberative process. Thus, both the formative and descriptive nature of intent can be displayed in the model. In Figure 3, two inconsistent intent structures are illustrated, diverging at the generic level of cognitive control. This represents that implementations of intent can be enacted in two different manners within the situation. When analysing the model of situated intent, the implications of the choices considered and committed are central. For instance, in Figure 3, an analysis may deduce that the two inconsistent intent structures are conflicting, necessitating conflict resolution. Finally, when evaluating models of situated intent, it is essential to validate the adequacy of the representations. For example, if the elements and content of the represented intent do not correspond with reality, it is necessary to revise the model. Here, it should be noted that the primary purpose of the suggested approach is to be practical and applicable when modelling intent to support HAT. Consequently, it is more critical for practitioners to understand the approach as a process in which insights are gained and design requirements identified than to aim for a perfect model.
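To make the analysis step concrete, the sketch below shows one way the divergence depicted in Figure 3 could be detected programmatically: two intent structures are compared level by level, and the first level at which they differ is flagged for potential conflict resolution. This is our illustration under assumed names, not part of the approach itself.

```python
# Illustrative sketch: an intent structure as one committed choice per level,
# and a check for where two structures diverge (cf. Figure 3).
LEVELS = ["Frames", "Effects", "Values", "Generic", "Implementations", "Physical"]

def divergence_level(structure_a, structure_b):
    """Return the first level (top-down) at which two structures differ."""
    for level in LEVELS:
        if structure_a.get(level) != structure_b.get(level):
            return level
    return None  # fully consistent structures

blue = {"Frames": "transfer of control", "Effects": "coordinate safe transfer",
        "Values": "efficiency", "Generic": "standardised procedure"}
red = {"Frames": "transfer of control", "Effects": "coordinate safe transfer",
       "Values": "efficiency", "Generic": "contingency plan"}

print(divergence_level(blue, red))  # -> "Generic", as in Figure 3
```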

Method

A methodological perspective on a case from previous work (Hammarbäck et al. 2023) is described to demonstrate the value of the conceptualisation of intent and the proposed approach. The example case concerns a transfer of control situation in the context of a future reconnaissance mission involving MUM-T. In the CTA tradition, modelling generally includes knowledge elicitation, analysis, and representation (Crandall, Klein, and Hoffman 2006). Each activity has its purposes and associated methods; a combination of these is often required to populate a model with knowledge describing aspects of the world.

Knowledge elicitation

Knowledge elicitation refers to the activity of acquiring information, often using methods such as interviews, observations, self-reports, or automatic collection (Crandall, Klein, and Hoffman 2006). A scenario-based approach was used to acquire information in the example case, using semi-structured interviews to elicit and capture knowledge in which intent was implicated.

Scenario

Scenarios refer to narratives of possible futures (Bishop, Hines, and Collins 2007) wherein a sequence of events can direct attention to causal processes and decision points, aiding preparation for possible futures (Kahn and Wiener 1967). In the example case, a scenario was iteratively designed in collaboration with domain experts (see also Hammarbäck et al. 2023). The scenario described a narrative centred on a future reconnaissance mission involving MUM-T and included a transfer of control event as the target situation of investigation. The mission context was described as occurring around 2040 in a state between peace and war. The mission purpose was described as intelligence gathering, with the objectives to locate, identify, and report potentially hostile threats in a designated area. Generalised flight plans for the manned and unmanned aircraft were outlined, including a rendezvous point where the fighter pilot would assume control of the unmanned aircraft from a ground control station operator. After completing the transfer of control process, this new team configuration would approach the designated area.

Interviews

Interviews have been advocated as a suitable method for eliciting knowledge in which intent is implicated, particularly by using interview probes from ACTA and CDM to elicit and capture abstract and detailed reasons and decisions (Norling 2012). In the example case, seven one-hour semi-structured interviews with subject matter experts were conducted to acquire knowledge in which intent was implicated.

Participants. Subject matter experts are valuable sources of information, especially when consulting various expertise, as they can provide insights from different perspectives (Naikar 2013). Seven Swedish subject matter experts participated in the example case, including four experienced fighter pilots, one ground control station operator, and two technical specialists.

Procedure. The interviews began by describing the purpose and objectives of the study, followed by presenting the context of the scenario. To elicit intent, the scenario story was enacted alongside interview probes. A low-fidelity simulation was used to play out the scenario by drawing and moving objects on paper sheets while utilising interview probes from ACTA (Militello and Hutton 1998) and CDM (O’Hare et al. 1998). During the interviews, data was documented by the interviewer through note-taking. After the interview, the drawings and notes were collected and utilised to generate exhaustive summaries.

Knowledge analysis

Knowledge analysis refers to the activity of exploring the data to uncover insights and organise them in a meaningful way. It is often associated with qualitative methods such as thematic and content analysis (Crandall, Klein, and Hoffman 2006). In the example case, a theory-driven thematic analysis (Braun and Clarke 2006) of the summaries was conducted. The analysis was guided by the provisional model of human intent (see Figure 2), enabling a tentative categorisation of intent by fitting intent content found in the summaries into intent elements. In addition to fitting intent content into intent elements, the reasons for decisions were noted.
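As an illustration of this categorisation step, the sketch below tags summary statements with (scope, level) codes from the provisional model using simple keyword rules. In the study the fitting was performed by a human analyst; the codebook shown here is a hypothetical stand-in.

```python
# Illustrative sketch of theory-driven coding: statements from interview
# summaries are tagged with (scope, level) per the provisional model.
# The keyword patterns are hypothetical; the actual fitting was manual.
import re

CODEBOOK = {
    ("TS", "Frames"): [r"transfer of control situation"],
    ("CT", "Effects"): [r"achieve mission", r"flight safety"],
    ("TS", "Implementations"): [r"radio communication"],
}

def code_statement(statement):
    """Return all (scope, level) tags whose patterns match the statement."""
    return [tag for tag, patterns in CODEBOOK.items()
            if any(re.search(p, statement, re.IGNORECASE) for p in patterns)]

print(code_statement("Uphold flight safety through radio communication"))
# -> [('CT', 'Effects'), ('TS', 'Implementations')]
```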

Knowledge representation

Knowledge representation refers to the activity of meaningfully communicating data and findings in an understandable form, exemplified by textual descriptions, graphs, and illustrations (Crandall, Klein, and Hoffman 2006). In the example case, a designed model was accompanied by text descriptions. More specifically, the conceptual model was used to describe the intent space, comprising a constrained set of contextual and situational choices considered at different levels of cognitive control, as well as the intent structures formed through context-situation and means-end reasoning. Textual descriptions were added to provide details to complement the conceptual model.

Results

The example case resulted in a designed model representing a fighter pilot’s intent in a transfer of control situation (see Figure 4). The model depicts contextual and situational intent elements and content at the six levels of cognitive control, wherein an intent structure is formed through context-situation and means-end reasoning.

Figure 4. Designed model of fighter pilot intent in a transfer of control situation.


The following exemplifies intent elements (in bold) and intent content (in italic). Furthermore, the mapping of these intent elements and intent content is indicated as contextual (Context, CT) or situational (Target Situation, TS) alongside the level of cognitive control (Frames, Effects, Values, Generic, Implementation, or Physical).

From the designed model, a single intent structure emerged and became salient in the intent space (see Figure 4), indicating an agreement among participants. For instance, participants typically characterised the (TS, Frames) Ascribed transfer of control situation as a highly coordinative and safety-critical event involving several agents. As such, (CT, Effects) Achieve mission and Uphold flight safety were identified as influential core goals for the (TS, Effects) instrumental transfer of control goals. To manage the (TS, Frames) Ascribed transfer of control situation, participants noted that they would likely (TS, Values) follow a (TS, Generic) standardised transfer of control procedure to (TS, Values) effectively and efficiently (TS, Effects) coordinate a safe transfer of control. The analysis also indicates that the (TS, Generic) transfer of control procedure spans three distinct phases–before, during, and after the (TS, Implementation) transfer of control process.

Before the transfer of control event, participants noted that they must uphold (CT, Values) situation awareness of the area near the expected (CT, Generic) rendezvous point and (TS, Effects) locate the unmanned aircraft. To this end, the participants expressed that they would use Plans (CT, Generic: flight plans) to Navigate (CT, Implementation: determine relative aircraft positions), and from the determined positions, Aviate (CT, Implementation: direct flight path) and Manage (CT, Implementation: direct sensors) by using Aircraft systems (CT, Physical: navigation systems, flight control systems, sensor systems). After locating an aircraft expected to be the synthetic wingman, participants emphasised that they must ensure that Team performance (TS, Values: shared awareness of situation and intent) and Mission performance (CT, Values: effectiveness, efficiency) are–and will likely be–within acceptable performance boundaries before initiating the (TS, Implementation) transfer of control process. To this end, participants expressed that the team members must Communicate (TS, Implementation: radio communication). For instance, participants noted that they would need to know the current and projected state of the (TS, Physical) unmanned aircraft (e.g. consumable and expendable resources, system failure) to ensure it can operate within imposed Intentional constraints (CT, Values: adhere to commander’s intent) and Causal constraints (CT, Values: distance, time). In cases in which the state of the (to be) synthetic wingman is outside expected performance boundaries, participants noted that they must adapt, for instance, by (CT, Effects) aborting the reconnaissance mission altogether and enacting a (CT, Generic) contingency plan or adjusting current plans (CT, Implementation: wait for replacement aircraft).

After confirming that the state of the (TS, Physical) unmanned aircraft is as expected, participants described that they would initiate the (TS, Implementation) transfer of control process using Aircraft systems (TS, Physical: control station). During this process, Team performance (TS, Values: shared awareness of situation and intent) was emphasised to (CT, Effects) Uphold flight safety. For instance, participants said they must maintain a safe separation between the two aircraft. To this end, participants noted that they must be aware of their respective (CT, Generic) roles regarding separation responsibilities and (TS, Physical) platform positions and altitudes to avoid potential collisions. Participants also noted that they would need to monitor (TS, Physical) Aircraft systems (control station: state of data link, transfer of control process status). During this process, participants also noted that they would continue to determine the current and projected state before accepting control, enabling them to notice states outside expected performance boundaries and to cancel the transfer of control process or abort the mission if necessary.

After accepting control, participants expressed that they would need to confirm the successful completion of the (TS, Implementation) transfer of control process and lead the synthetic wingman. This marks the end of the (TS, Frames) Ascribed transfer of control situation and the continuation of the reconnaissance mission as a newly formed team.
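To summarise how such an intent structure carries explanatory power, the sketch below encodes a few of the means-end links described above and answers a ‘why?’ query by walking upwards through them. The link set is abridged from the textual results, and the code is our illustration only.

```python
# Illustrative sketch: tracing means-end links in the designed model to
# answer 'why?' queries (links abridged from the textual results above).
MEANS_END = {  # means -> end
    "radio communication": "shared awareness of situation and intent",
    "shared awareness of situation and intent": "coordinate a safe transfer of control",
    "coordinate a safe transfer of control": "Uphold flight safety",
}

def why(item):
    """Trace the chain of ends that a given means ultimately serves."""
    chain = []
    while item in MEANS_END:
        item = MEANS_END[item]
        chain.append(item)
    return chain

print(why("radio communication"))
# -> ['shared awareness of situation and intent',
#     'coordinate a safe transfer of control', 'Uphold flight safety']
```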

Discussion

To further substantiate the conceptualisation of intent and the proposed approach, a methodological perspective on the definition and operationalisation is explored in light of the designed model. This is followed by a discussion of the proposed approach, its limitations, and future work.

The definition of intent

The concept of intent was defined as ‘a structure comprised of choices characterised by commitment, with the functional roles of coordinating and guiding cognitive and physical activities personally and socially’. To illustrate the usefulness of this definition, the nature and functional roles of intent in the context of the example case are discussed.

Firstly, the nature of intent helps identify decisions exhibiting a degree of persistence, thus helping distinguish intent from other mental states (e.g. desires). Indeed, as Bratman (1987) argues, humans may desire something without committing to act upon the desire. This distinction is useful when modelling intent as it emphasises commitment to a choice rather than presenting it as one among many alternatives.

Secondly, commitment to choices is argued to be used as input when coordinating further personal and social choices. Thus, treating intent as an object provides a means to understand how humans can make intent internally consistent and coherent by filtering conflicts during context-situation and means-end reasoning. For instance, the designed model indicates that the fighter pilot and ground control station operator Communicate to maintain common ground and common intent by coordinating belief content and intent content, thus providing a shared awareness of situation and intent.

Thirdly, as a subject, intent is argued to be a conduct-controlling pro-attitude that motivates and guides action, thus contributing to understanding how it can ‘drive’ behaviour. For instance, the analysis shows how generalised Plans (e.g. flight plans) can be used to guide Control activities (e.g. direct flight path and direct sensors) to achieve instrumental goals (e.g. locate the unmanned aircraft). Similarly, the designed model indicates that a standardised transfer of control procedure may coordinate actions at a team level throughout the Ascribed transfer of control situation. Furthermore, treating intent as a conduct-controlling pro-attitude enables understanding intent through physical objects and their affordances, as they provide indicative markers (or cues) of both situation and intent through their state (e.g. position and trajectory) and appearance (e.g. smoke).

The operationalisation of intent

From theoretical frameworks and models, intent was operationalised as ‘deliberated and connotated choices connected by context-situation and means-end reasoning within an intent space with six levels of cognitive control’. In this operationalisation, the intent space comprises contextual and situational intent elements in which intent content can be fitted and connected at different levels of cognitive control. As such, the model reflects that intent is situated within a context and shaped by various constraints. Consequently, intent can best be understood as a multi-faceted concept formed through context-situation and means-end reasoning. To further demonstrate the usefulness of this operationalisation, key aspects of the intent space are discussed and exemplified below.

The context-situation separation

The designed model suggests that different intent content becomes salient based on the adopted frames (i.e. Reconnaissance mission and Transfer of Control), allowing for the separation of the target situation from its context. Consequently, the target transfer of control situation is embedded within the reconnaissance mission situation, exemplifying the notion of situations-of-situations. From the designed model, this becomes noticeable as there are two concurrent processes: one related to the mission situation and another to the transfer of control situation.

Within these two ascribed situations, core goals in the mission context can be differentiated from instrumental goals in the transfer of control situation. Furthermore, generalised Plans and specific Procedures can be identified, enabling mission-dependent flight plans and mission-independent transfer of control procedures to be differentiated. Similarly, the context-situation separation helps identify and differentiate between central control activities and physical objects that seem closely associated with the Ascribed reconnaissance mission situation and the different phases of the Ascribed transfer of control situation. For instance, the designed model indicates an emphasis on contextual intent content (e.g. flight plans, direct flight path and sensors) before the Ascribed transfer of control situation. However, more situational control activities (communicate, transfer of control process) become salient as the situation unfolds. Thus, both contextual and situational control activities and physical objects are used to successfully coordinate a safe transfer of control.

The levels of cognitive control

As the designed model illustrates and previous examples indicate, all levels of cognitive control are used, allowing differentiation between various facets of intent to be identified. For instance, the frames level of cognitive control allows the identification and differentiation between ascribable situations within a context, making certain belief content within the frame salient. From a modelling perspective, this is useful as it focuses on what humans think is important in the situation rather than including all available data or information.

In terms of Core goals and Instrumental goals, both the Purposes and Objectives ought to be modelled. These intent elements provide different information, as the former describes why a particular state or outcome is desired while the latter describes what it looks like (Klein 1994). They are useful as one without the other may reduce the understanding of intent.

Using the distinction between Performance values and Constraint values, Mission performance (efficiency, effectiveness) and Team performance (shared awareness of situation and intent), as well as Intentional constraints (adhere to commander’s intent, follow procedure) and Causal constraints (distance, time), could be identified and differentiated. Such distinctions are useful as they point to different aspects considered. For instance, humans must also consider their task and team performance to efficiently and effectively Achieve mission. Likewise, it is useful to consider various types of constraints. In this case, normative and descriptive constraints are represented by ‘laws of men’ and ‘laws of nature’.

The generic level of cognitive control may hold intent content related to a set of Plans, Procedures, and Contingencies. Although these, in some sense, describe COAs, it is useful to differentiate among these intent elements. For instance, flight plans depend on the specific reconnaissance mission, whereas the transfer of control procedure likely does not exhibit such dependency. Thus, Procedures have a particular quality as they can be internalised to establish common intent (Pigeau and McCann 2006), exemplifying ‘strong intent’ (Geddes 1997) as specific steps are followed to coordinate activities at a team level–even without communication.

Aircraft systems and Environment were identified and differentiated at the physical level of cognitive control. This differentiation refers to whether the physical object is inside or outside the fighter pilot’s control in the ascribed situation. For instance, the designed model suggests that the unmanned aircraft is outside the fighter pilot’s aircraft systems until a complete transfer of control has occurred, at which point it becomes controllable and perceived as a synthetic wingman in a newly configured MUM-T system. From a modelling perspective, this distinction is useful as both the Aircraft system and the Environment can increase or decrease the degrees of freedom available for action.

Context-situation and means-end reasoning

Besides intent elements (with fitted intent content) within the intent space, the connections formed by context-situation and means-end reasoning are integral parts of intent models. Drawing from Klein et al. (2007), it can be argued that these connections have an explanatory power for making sense of intent. For instance, by following context-situation reasoning, the intent structure suggests an interdependence between contextual and situational intent. This can be seen as flight plans are used to direct flight path and direct sensors towards the expected rendezvous point to locate the unmanned aircraft before adopting a transfer of control frame. Consequently, contextual intent as input can provide readiness for anticipated situations. Likewise, the designed model suggests that cases in which the unmanned aircraft is (projected to be) outside performance boundaries could affect the reconnaissance mission. As such, situational intent content is used as input for a contextual change of intent.

The connections may also explain how ascribed situations can be managed by following the trajectories of means-end reasoning. For instance, the designed model helps explain how a fighter pilot can coordinate a safe transfer of control effectively and efficiently while maintaining shared awareness of situation and intent by following a standardised transfer of control procedure and radio communication.

The proposed approach

In modelling human intent, combining complementary CWA and CTA methods presents a promising approach, especially given the ability to integrate formative and descriptive (or normative) representations into a unified model. In the proposed approach, WDA was used as the foundation for describing the constraints that shape intent, whereas ACTA and CDM supported describing the intent formed through context-situation and means-end reasoning.

This approach offers the potential for yielding novel insights that are not easily obtained when using the methods individually. For example, by representing choices and decisions, modellers can glean insights about what humans can and (should) do in routine and non-routine work (Naikar 2013). Thus, whilst a synthetic teammate must support routine tasks in typical situations, it must do the same in atypical situations. Additionally, since reasons for decisions are represented by connections, they have an explanatory power that can provide a deeper understanding of human intent. In the context of a MUM-T mission, concrete examples of such insights can be found in transfer of control and link loss situations (Hammarbäck et al. 2023). The transfer of control case is assumed to be an anticipated situation wherein fighter pilots can rely on procedures during the handover process. In contrast, the link loss case is assumed to be an unanticipated situation whereby fighter pilots may diverge in their framing of the situation, resulting in inconsistent and conflicting intent. Such insights are crucial for resilient HAT, where synthetic teammates must effectively and fluently adapt to their human partner in expected and unexpected situations (Hiatt et al. 2017).

Whilst the approach can shed light on human intent, it is important to recognise that it provides insights and informs conceptual design and system development; it cannot perfectly represent intent. However, considering that it can depict routine and non-routine intent, it is possible to reduce the immense variety that is otherwise present (Van-Horenbeke and Peer 2021). Additionally, the model of situated intent suggests integrating techniques from artificial intelligence communities beyond the common intent, plan, and activity recognition trio for a more holistic perspective. For example, possible insights can be gained from the context and situation recognition as well as object and affordance recognition communities. In this sense, the provisional model of human intent can offer a preliminary map of relations between communities and the potential for more informed computer-based models utilising complementary techniques.

Taken together, the combination of complementary CWA and CTA methods has been presented, demonstrating the potential for being both practical and applicable for uniquely describing intent (Hammarbäck et al. 2023). Using the approach, designed and analysed models offer information for developing computer-based models and design requirements for fluent and effective interactions among human and synthetic teammates.

Methodological limitations and future work

Identified limitations are highlighted and discussed, pointing to practical implications and possible avenues of future work. In particular, the provisional model of human intent and the limitations concerning knowledge elicitation, analysis, and representation are discussed.

The provisional model of human intent

When comparing frameworks and models, several similarities were uncovered and used to integrate and synthesise elements and content of intent into a provisional model. However, it is essential to note that the mapping of elements to the levels of cognitive control is tentative, primarily due to the lack of detailed descriptions in the compared frameworks and models. Whilst the model has a provisional nature, it ought to be useful for researchers and practitioners to consider what aspects need to be described when modelling human intent. This is a crucial point as intent, as a concept, is often ambiguous, potentially leading to erroneous assumptions and diverging results.

Knowledge elicitation

While interviews offer benefits for eliciting intent (Norling 2012), it is important to recognise that most intent is implicit, as it is uncommunicated or uncommunicable (Pigeau and McCann 2006). In the example case, a combination of low-fidelity simulation and interview probes elicited intent, resulting in a limited collection of drawings and textual summaries. Another limitation relates to an absence of observations, as what participants say they intend to do does not always correspond with what they intentionally do (Argyris and Schön 1974). Future work should consider combining complementary knowledge elicitation methods, such as interviews, observations, self-reports, and automatic data collection, to address these limitations.

Knowledge analysis

Regarding the knowledge analysis activity, CTA methods have received little attention (Crandall, Klein, and Hoffman 2006). In the example case, a theory-driven thematic analysis offered structure and flexibility when analysing qualitative data (Braun and Clarke 2006). However, the interpretations may be influenced by the analyst’s lack of experience in the domain, the chosen approach, and the potential for subjectivity. To address these challenges and limitations, future work should consider methods that ensure interpretations align with expressed intent. For example, involving participants in the analysis activity can help ensure knowledge transfer throughout the modelling activities.

Knowledge representation

While intent has been suggested to be described in abstraction-decomposition spaces, three limitations are presented with this type of representation. Firstly, the model representing intent in the example case is coarse, possibly lacking the detail required for specific purposes. This limitation pertains to the notion of situations-of-situations and the selection of model granularity (Naikar 2013). For instance, the example case represented the target situation of a transfer of control in the context of a mission, which is a very coarse representation. In contrast, as was noted earlier, the transfer of control could be further decomposed, yielding increasingly detailed representations of intent. From a practical point of view, future work ought to select the target situation of investigation based on the purpose, with the possibility of re-utilising the model to design models at other levels of granularity (Hammarbäck et al. 2023).

Secondly, although the example case depicted a single intent structure indicating common intent among participants, it may not accurately represent the variations intra- and inter-personally. For example, whilst participants expressed intent in the example case, the frequency of decisions was neither part of the analysis nor representation activity. Similarly, while intent can be understood as persistent decisions, it is necessary to recognise that these decisions can also be revoked and revised for adaptability. However, such ‘strength of intent’ was not part of any modelling activity. Future work should consider these limitations, as such information can further inform conceptual design and system development.

Thirdly, as a cross-temporal concept, the duration of persistent decisions is not easily represented in a model of situated intent. Thus, a limitation of the methodology lies in the static impression of intent, missing out on descriptions of how intent is formed, transformed, and enacted to cope with events over different time horizons. For this reason, future work should consider complementary approaches, such as encoding decision trajectories within the model or using ‘scores’ as per the JCF (Lundberg and Johansson 2021), that enable the time aspect to be described. Such information can be beneficial in elucidating the timing of intentional processes.

Conclusions

Addressing the need to understand what and how to model intent from a human-centric perspective, this work sheds light on the concept of intent and proposes an approach with theoretical and methodological foundations.

Accompanied by a provisional model of human intent, the concept was defined and operationalised to facilitate a better understanding. Given this definition and operationalisation, it has been stressed that intent is situated within a context and shaped by constraints at different levels of cognitive control, thus reflecting a formative account of how intent can be manifested. Within these constraints, intent is formed by committing to choices through context-situation and means-end reasoning, thus reflecting a descriptive (or normative) understanding of how intent is (or should be) manifested. Together, these characteristics posit that intent should be understood as a multi-faceted concept, emphasising the need for researchers and practitioners to consider which aspects of intent are, or should be, modelled. In this regard, the provisional model of human intent provides an initial step.

Recognising the need for models with both normative and descriptive characteristics, a combination of CWA and CTA methods was proposed. Specifically, unifying WDA with ACTA and CDM was proposed as an approach to design, analyse, and evaluate models of situated intent from a human-centric perspective. The approach has been described, and an example case demonstrated its applicability and practicality in a HAT context, providing opportunities for novel insights that can inform conceptual design and system development.

Ultimately, this work contributes a theoretical and methodological foundation for modelling human intent by describing and exemplifying important aspects of intent and how these can be modelled from a human-centric perspective. This lies at the core of understanding human intent, particularly as a holistic approach is necessary to design and develop synthetic partners that work collaboratively and cooperatively in this emerging interaction paradigm.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Swedish Defence Material Administration and NFFP (National Aviation Research Programme), which is funded by VINNOVA (Swedish Governmental Agency for Innovation Systems, 2017-04884, 2023-01191), the Swedish Armed Forces, and the Swedish Defence Material Administration. Open access funding provided by Linköping University.

References

  • Albrecht, Stefano V., and Peter Stone. 2018. “Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems.” Artificial Intelligence 258: 66–95. doi:10.1016/j.artint.2018.01.002.
  • Argyris, Chris, and Donald A. Schön. 1974. Theory in Practice: Increasing Professional Effectiveness. San Francisco: Jossey-Bass.
  • Bishop, Peter, Andy Hines, and Terry Collins. 2007. “The Current State of Scenario Development: An Overview of Techniques.” Foresight 9 (1): 5–25. doi:10.1108/14636680710727516.
  • Bratman, Michael E. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
  • Braun, Virginia, and Victoria Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. doi:10.1191/1478088706qp063oa.
  • Calhoun, Gloria L., Heath A. Ruff, Kyle J. Behymer, and Elizabeth M. Frost. 2018. “Human-Autonomy Teaming Interface Design Considerations for Multi-Unmanned Vehicle Control.” Theoretical Issues in Ergonomics Science 19 (3): 321–352. doi:10.1080/1463922X.2017.1315751.
  • Chen, Jessie Y. C., Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. 2018. “Situation Awareness-Based Agent Transparency and Human-Autonomy Teaming Effectiveness.” Theoretical Issues in Ergonomics Science 19 (3): 259–282. doi:10.1080/1463922X.2017.1315750.
  • Crandall, Beth, Gary Klein, and Robert R. Hoffman. 2006. Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. Cambridge, MA: MIT Press.
  • Department of Defence. 2018. “Unmanned Systems Integrated Roadmap 2017–2042.” https://www.hsdl.org/?view&did=826737.
  • Elliott, Glenn, Jennifer Crawford, Marcus Watson, and Penelope Sanderson. 2000. “Knowledge Elicitation Techniques for Modelling Intentional Systems with Cognitive Work Analysis.” Paper presented at the Proceedings of the Fifth Australian Aviation Psychology Symposium.
  • Endsley, Mica R. 2017. “From Here to Autonomy.” Human Factors 59 (1): 5–27. doi:10.1177/0018720816681350.
  • Freedman, Richard G., and Shlomo Zilberstein. 2019. “A Unifying Perspective of Plan, Activity, and Intent Recognition.” Paper presented at the Proceedings of the AAAI Workshops: Plan, Activity, Intent Recognition, 1–8.
  • Geddes, Norman D. 1994. “A Model for Intent Interpretation for Multiple Agents with Conflicts.” Paper presented at the Proceedings of IEEE International Conference on Systems, Man and Cybernetics, 3:2080–2085. IEEE. doi:10.1109/ICSMC.1994.400170.
  • Geddes, Norman D. 1997. “Large Scale Models of Cooperative and Hostile Intentions.” Paper presented at the Proceedings International Conference and Workshop on Engineering of Computer-Based Systems, 142–147. IEEE Computer Society Press. doi:10.1109/ECBS.1997.581841.
  • Goodrich, Michael A., and Alan C. Schultz. 2007. “Human-Robot Interaction: A Survey.” Foundations and Trends® in Human-Computer Interaction 1 (3): 203–275. doi:10.1561/1100000005.
  • Gupta, Suraj G., Mangesh Ghonge, and Pradip M. Jawandhiya. 2013. “Review of Unmanned Aircraft System (UAS).” SSRN Electronic Journal 2 (4): 1646–1658. doi:10.2139/ssrn.3451039.
  • Hammarbäck, Jimmy, Jens Alfredson, Björn J. E. Johansson, and Jonas Lundberg. 2023. “My Synthetic Wingman Must Understand Me: Modelling Intent for Future Manned–Unmanned Teaming.” Cognition, Technology & Work 26: 107–126. doi:10.1007/s10111-023-00745-3.
  • Han, The Anh, and Luís Moniz Pereira. 2013. “State-of-the-Art of Intention Recognition and Its Use in Decision Making.” AI Communications 26 (2): 237–246. doi:10.3233/AIC-130559.
  • Hassall, Maureen E., and Penelope M. Sanderson. 2014. “A Formative Approach to the Strategies Analysis Phase of Cognitive Work Analysis.” Theoretical Issues in Ergonomics Science 15 (3): 215–261. doi:10.1080/1463922X.2012.725781.
  • Heinze, Clint. 2004. “Modelling Intention Recognition for Intelligent Agent Systems.” Doctoral thesis. University of Melbourne, Australia.
  • Hiatt, Laura M., Cody Narber, Esube Bekele, Sangeet S. Khemlani, and J. Gregory Trafton. 2017. “Human Modeling for Human–Robot Collaboration.” The International Journal of Robotics Research 36 (5-7): 580–596. doi:10.1177/0278364917690592.
  • Hollnagel, Erik, and David D. Woods. 1999. “Cognitive Systems Engineering: New Wine in New Bottles.” International Journal of Human-Computer Studies 51 (2): 339–356. doi:10.1006/ijhc.1982.0313.
  • Kaber, David B. 2018. “A Conceptual Framework of Autonomous and Automated Agents.” Theoretical Issues in Ergonomics Science 19 (4): 406–430. doi:10.1080/1463922X.2017.1363314.
  • Kahn, Herman, and Anthony J. Wiener. 1967. The Year 2000: A Framework for Speculation on the Next Thirty-Three Years. New York: Macmillan.
  • Klein, Gary. 1994. “A Script for the Commander’s Intent.” In Science of Command and Control: Part III: Coping with Change, edited by Alexander H. Levis and Ilze S. Levis, 75–85. Fairfax, VA: AFCEA International Press.
  • Klein, Gary. 1999. Sources of Power: How People Make Decisions. Cambridge, MA: The MIT Press.
  • Klein, Gary, Jennifer K. Phillips, Erica L. Rall, and Deborah A. Peluso. 2007. “A Data–Frame Theory of Sensemaking.” In Expertise out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, edited by Robert R. Hoffman, 113–155. New York, NY: Lawrence Erlbaum Associates.
  • Klein, Gary, David D. Woods, Jeffrey M. Bradshaw, Robert R. Hoffman, and Paul J. Feltovich. 2004. “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity.” IEEE Intelligent Systems 19 (06): 91–95. doi:10.1109/MIS.2004.74.
  • Leveson, Nancy G. 2000. “Intent Specifications: An Approach to Building Human-Centered Specifications.” IEEE Transactions on Software Engineering 26 (1): 15–35. doi:10.1109/32.825764.
  • Lundberg, Jonas. 2015. “Situation Awareness Systems, States and Processes: A Holistic Framework.” Theoretical Issues in Ergonomics Science 16 (5): 447–473. doi:10.1080/1463922X.2015.1008601.
  • Lundberg, Jonas, and Björn J. E. Johansson. 2015. “Systemic Resilience Model.” Reliability Engineering & System Safety 141: 22–32. doi:10.1016/j.ress.2015.03.013.
  • Lundberg, Jonas, and Björn J. E. Johansson. 2019. “Resilience is Not a Silver Bullet – Harnessing Resilience as Core Values and Resource Contexts in a Double Adaptive Process.” Reliability Engineering & System Safety 188: 110–117. doi:10.1016/j.ress.2019.03.003.
  • Lundberg, Jonas, and Björn J. E. Johansson. 2021. “A Framework for Describing Interaction between Human Operators and Autonomous, Automated, and Manual Control Systems.” Cognition, Technology & Work 23 (3): 381–401. doi:10.1007/s10111-020-00637-w.
  • Lyons, Joseph B., Katia Sycara, Michael Lewis, and August Capiola. 2021. “Human–Autonomy Teaming: Definitions, Debates, and Directions.” Frontiers in Psychology 12: 589585. doi:10.3389/fpsyg.2021.589585.
  • Marathe, Amar R., Kristin E. Schaefer, Arthur W. Evans, and Jason S. Metcalfe. 2018. “Bidirectional Communication for Effective Human-Agent Teaming.” In Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation. VAMR 2018. Lecture Notes in Computer Science, Vol. 10909, 338–350. Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-319-91581-4_25.
  • Mercado, Joseph E., Michael A. Rupp, Jessie Y. C. Chen, Michael J. Barnes, Daniel Barber, and Katelyn Procci. 2016. “Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management.” Human Factors 58 (3): 401–415. doi:10.1177/0018720815621206.
  • Militello, Laura G., and Robert J. B. Hutton. 1998. “Applied Cognitive Task Analysis (ACTA): A Practitioner’s Toolkit for Understanding Cognitive Task Demands.” Ergonomics 41 (11): 1618–1641. doi:10.1080/001401398186108.
  • Minsky, Marvin. 1975. “Minsky’s Frame System Theory.” In TINLAP ’75: Proceedings of the 1975 Workshop on Theoretical Issues in Natural Language Processing, 104–116. Stroudsburg, PA: Association for Computational Linguistics. doi:10.3115/980190.980222.
  • Naikar, Neelam. 2013. Work Domain Analysis: Concepts, Guidelines, and Cases. Boca Raton, FL: CRC Press.
  • Norling, Emma. 2008. “What Should the Agent Know? The Challenge of Capturing Human Knowledge.” Paper presented at the AAMAS ’08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, 1225–1228.
  • Norling, Emma. 2012. “Modelling Human Behaviour with BDI Agents.” Doctoral thesis. University of Melbourne, Australia.
  • O’Hare, D., M. Wiggins, A. Williams, and W. Wong. 1998. “Cognitive Task Analyses for Decision Centred Design and Training.” Ergonomics 41 (11): 1698–1718. doi:10.1080/001401398186144.
  • O’Neill, Thomas, Nathan McNeese, Amy Barron, and Beau Schelble. 2020. “Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature.” Human Factors 64 (5): 904–938. doi:10.1177/0018720820960865.
  • Pigeau, Ross, and Carol McCann. 2006. “Establishing Common Intent: The Key to Co-Ordinated Military Action.” In The Operational Art: Canadian Perspectives: Leadership and Command, edited by Allan English. Kingston, Ontario: Canadian Defence Academy Press.
  • Roth, Emilie M., Christen Sushereba, Laura G. Militello, Julie Diiulio, and Katie Ernst. 2019. “Function Allocation Considerations in the Era of Human Autonomy Teaming.” Journal of Cognitive Engineering and Decision Making 13 (4): 199–220. doi:10.1177/1555343419878038.
  • Salas, Eduardo, Nancy J. Cooke, and Michael A. Rosen. 2008. “On Teams, Teamwork, and Team Performance: Discoveries and Developments.” Human Factors 50 (3): 540–547. doi:10.1518/001872008X288457.
  • Salas, Eduardo, Dana E. Sims, and C. Shawn Burke. 2005. “Is There a ‘Big Five’ in Teamwork?” Small Group Research 36 (5): 555–599. doi:10.1177/1046496405277134.
  • Schneider, Michael F., and Michael E. Miller. 2018. “Operationalized Intent for Communication in Human-Agent Teams.” Paper presented at the 2018 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), 117–123. IEEE. doi:10.1109/COGSIMA.2018.8423992.
  • Sukthankar, Gita, Robert P. Goldman, Christopher Geib, David V. Pynadath, and Hung H. Bui. 2014. Plan, Activity, and Intent Recognition: Theory and Practice. Elsevier.
  • Tokadli, Guliz, and Michael C. Dorneich. 2019. “Interaction Paradigms: From Human-Human Teaming to Human-Autonomy Teaming.” In 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), 1–8. IEEE. doi:10.1109/DASC43569.2019.9081665.
  • Van-Horenbeke, Franz A., and Angelika Peer. 2021. “Activity, Plan, and Goal Recognition: A Review.” Frontiers in Robotics and AI 8: 643010. doi:10.3389/frobt.2021.643010.
  • Vicente, Kim J. 1999. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Boca Raton, FL: CRC Press.