Research Article

Misleading effect and spatial learning in head-mounted mixed reality-based navigation

Pages 408-422 | Received 24 Feb 2022, Accepted 12 Oct 2022, Published online: 15 Nov 2022

ABSTRACT

Mixed reality (MR) technology has been increasingly used for navigation. While most MR-based navigation systems currently run on hand-held devices such as smartphones, head-mounted MR devices are becoming increasingly popular for navigation. Much research has been conducted to investigate the navigation experience in MR. However, it is still unclear how ordinary users react to the first-person, field-of-view (FOV)-limited navigation experience, especially in terms of spatial learning. In this study, we investigate how visualization in MR navigation affects spatial learning. More specifically, we test two related hypotheses: that incorrect virtual information can lead users into incorrect spatial learning, and that the visualization style of direction indicators can influence users’ spatial learning and experience. We designed a user interface on Microsoft HoloLens 2 and conducted a user study with 40 participants. The study consists of a walking session in which users wear the HoloLens 2 to navigate to an unknown destination, pre- and post-walking questionnaires, sketch map drawing, and a semi-structured interview about the user interface design. The results provide preliminary confirmation that users’ spatial learning can be misled by incorrect information, even in a small study area, but that this misleading effect can be compensated for by careful visualization, for example, by including lines instead of using only arrows as direction indicators. Arrows with or without lines, as two visualization alternatives, also influenced users’ spatial learning and their evaluation of the designed elements. In addition, the study shows that users’ preferences for navigation interfaces are diverse, so an adaptable interface should be provided. The results contribute to the design of head-mounted MR-based navigation interfaces and to the application of MR in navigation in general.

1. Introduction

Mixed reality (MR) or augmented reality (AR) technology allows users to simultaneously perceive the real physical world and virtual digital holograms. MR experiences are pleasing to most users and thus highly valued in various fields, such as education, manufacturing, and gaming. Location-based services (LBS) and navigation also benefit substantially from MR. For example, the well-known location-based game Pokémon Go was downloaded over 10 million times on hand-held devices within a week of its release in 2016. The “Live View” AR walking directions launched by Google in 2019 are not only entertaining but also help users orient themselves in complex situations, for example, when leaving an unfamiliar subway station unsure of which way to go. More recently, emerging head-mounted MR (hm-MR) is attracting increasing attention in LBS and navigation, as it creates a highly immersive experience, frees up the hands, and allows users to multitask.

Theoretically, hm-MR mitigates some issues of current hand-held MR (e.g. Live View in Google Maps on smartphones). A hand-held MR device is inconvenient and may distract users; for safety reasons, Google Maps suggests that Live View users put the smartphone away once they have figured out the direction.Footnote1 This is less likely to be an issue for hm-MR, as it embeds the virtual objects in users’ normal field of view (FOV) and keeps them aware of the physical environment (Tran and Parker Citation2020). However, some user studies indicate that users may be overwhelmed by or absorbed in the new hm-MR experience and tend to ignore the physical world (Liu, Ding, and Meng Citation2021). Such inattentional blindness (see Inattentional blindness and its influence in mixed reality) can weaken spatial learning (Brügger, Richter, and Fabrikant Citation2019; Gramann, Hoepner, and Karrer-Gauss Citation2017; Ruginski et al. Citation2019). Besides, if the hm-MR fails to map the space (i.e. spatial mapping), users may be guided to wrong places. For example, current hm-MR has difficulty mapping transparent objects, such as glass, and might lead users to step into them. Ignoring the real physical world may even be fatally dangerous in some navigation situations. Even if the hm-MR corrects itself early enough, it is not clear whether such errors lead users to wrong spatial learning.

In this paper, we investigate how hm-MR-based navigation visualization influences users’ perception and spatial learning. Specifically, we test whether incorrect information misleads users and whether the visualization of the direction indicator affects users’ spatial learning.

1.1. Inattentional blindness and its influence in mixed reality

Inattentional blindness is common in daily life. It refers to situations in which we overlook objects in plain sight and fail to notice the existence of an unexpected item (Jensen et al. Citation2011). A slight distraction can cause inattentional blindness and hinder the main task. For example, phone usage while walking can lead to inattentional blindness even during tasks with low cognitive demands (Hyman et al. Citation2010). We ignore not only trivial objects but also safety-relevant visual stimuli (Murphy and Greene Citation2015).

Increased inattentional blindness has been found in MR. This is not surprising, since people are distracted by many objects and events, and the entertaining, novel, and sometimes interactive holograms in MR are certainly among them. Krupenia and Sanderson (Citation2006) found that participants performed worse at detecting unexpected events while wearing hm-MR. In a report on a user study attempting to exploit inattentional blindness for education, McNamara, JR. (accessed Citation2022) used hm-MR to keep users focused on the virtual content and less distracted by real-world events; however, he found no significant difference in task performance between users of hm-MR and those using a laptop. More recently, inattentional blindness has been confirmed in monitor-based AR (Dixon et al. Citation2014) and in augmented reality head-up displays (AR HUD, Wang et al. Citation2021). There is no clear conclusion on whether such inattentional blindness also exists in the current, more mature hm-MR. However, in the study of Liu, Ding, and Meng (Citation2021), participants using hm-MR tended to ignore unaugmented physical elements, similar to how participants behaved with an AR HUD as reported by Wang et al. (Citation2021).

In fact, despite the entertaining experience, hm-MR might reduce users’ spatial awareness. A common criticism of current hm-MR is the limited FOV. Usually, only the screen that displays holograms has a limited FOV, for example, 43°×29° for Microsoft HoloLens 2 (Heaney Citation2019), while the rest of the device consists of see-through lenses that let users see the real world (Figure 1). However, users’ attention tends to be attracted by the holograms and confined to the screen. In such situations, peripheral vision may be reduced. For normal-sighted people in the physical world, a negative influence on spatial learning occurs only with an extremely limited FOV (Barhorst-Cates, Rand, and Creem-Regehr Citation2016). In virtual reality (VR), a restricted FOV does not impede spatial learning either (Adhanom et al. Citation2021). However, these findings are based on pointing or object-placement tasks, so it is not clear whether spatial learning of objects to the side is affected.

Figure 1. Microsoft HoloLens 2, blue: screen for holograms; green: see-through lens. Photo by the authors.


Inattentional blindness can interfere with navigation and spatial learning and might impair users’ spatial ability in the long run. Users’ visual attention and spatial awareness are critical to navigation success and enhanced spatial learning (Kapaj, Lanini-Maggi, and Fabrikant Citation2021). With “traditional” navigation aids, such as being led by other people or using smartphones, users are less attentive to the route and usually do not learn the space well (Stites, Matzen, and Gastelum Citation2020). With hm-MR, users are attracted even more by the navigation aid, that is, the virtual visualization, which may lead to a decreased perception of the real world (McKendrick et al. Citation2016) and a loss of information essential for safe navigation. In the long run, users’ spatial ability may also suffer. Therefore, many researchers try to use as few and as simple holograms as possible in MR visualization to retain users’ spatial awareness (Adam, Burnett, and Large Citation2015; McKendrick et al. Citation2016; Rehman and Cao Citation2017).

Inattentional blindness raises another concern: whether users can perceive the real physical world correctly if the virtual world conflicts with it. Navigation aids need to keep users physically safe and must not mislead them (Fang, Li, and Shaw Citation2015). Current first-person-view navigation in hm-MR combines a limited FOV with an immersive experience. Whether the virtual objects or environments can override the perception of the real world and thus mislead users is not yet clear, for example, if the device fails at real-time spatial mapping and directs the user into an inaccessible zone. This matters because, if the virtual world overwrites the physical world, it may confuse users afterward and make it difficult for them to navigate on their own. An even worse case would be users facing danger due to malicious visualization. This is similar to the misleading effects of map scale on geometry and feature selection (Monmonier Citation2005).

1.2. Geovisualization and spatial learning

Geovisualization styles influence individuals’ behavior and spatial learning, as reported by Fuest et al. (Citation2021), and visual variables function differently in 2D, 3D, and immersive environments. User studies show that symbol types significantly influence tourists’ decisions on which place to visit when using a tourist map (Medynska-Gulij Citation2003), and that visualization styles influence map-assisted spatial learning of expert wayfinders in outdoor navigation (Kapaj, Lanini-Maggi, and Fabrikant Citation2021). This might be related to the allocation of visual attention. For example, mobile map users with realistic-looking 3D landmarks distribute their visual attention more equally across task-relevant information, while those with 2D landmarks switch their attention between the visualized landmarks and the mobile map when performing navigation tasks (Kapaj, Lanini-Maggi, and Fabrikant Citation2021). Understanding how visualization influences users’ behavior and spatial learning improves the user experience during navigation.

Spatial learning is important during navigation (Huang, Schmidt, and Gartner Citation2012; Ruginski et al. Citation2019). When users have access to navigation aids, they usually do not intentionally learn the traversed space, which degrades spatial learning and, to some extent, spatial ability. This has raised many concerns, for example, safety concerns, among researchers, police, and the public (McCullough and Collins Citation2019). The good news is that spatial learning also happens incidentally (Wenczel, Hepperle, and von Stülpnagel Citation2017). Many studies show that users can perform secondary tasks while walking (McKendrick et al. Citation2016) and that incidental spatial learning is possible (Wunderlich, Grieger, and Gramann Citation2022). However, if users concentrate too much on the main task, that is, the navigation, the aforementioned inattentional blindness may occur and incidental spatial learning is less likely to happen.

Many current findings on MR navigation come from studies using VR rather than MR. VR environments are sometimes used to overcome the FOV limitation of MR head-mounted devices (HMDs). For example, Tran and Parker (Citation2020) created a VR city and then added “virtual” elements to test the usability of up-front, on-street, and on-hand maps in hm-MR navigation. While this indeed provides a larger, human-like FOV, new concerns arise, such as motion sickness when navigating with joysticks and the need to set a slower walking pace. Besides, many visualization ideas in current MR interface design originate from desktop or HMD VR games. Such games mainly adopt or mimic a first-person view, and gamers need to remember the maps, which is important for spatial learning. However, in VR games, players do not have to switch their attention between real and virtual objects. For the design of MR, Grasset et al. (Citation2012) summarized the visualizations in MR and suggested that labels should not overlap with POIs or edges and that the contrast between video content and labels should be improved. However, those suggestions are designed neither specifically for hm-MR nor for navigation purposes. It remains to be explored whether the provided visualizations satisfy the needs of MR navigation users, support spatial learning, and create a pleasant navigation experience.

Direction indicators are an essential element of navigation. The orientation function of landmarks was previously underestimated, but is increasingly promoted by researchers and should be appropriately integrated into navigation aids (Fellner, Huang, and Gartner Citation2017; Lanini-Maggi, Ruginski, and Fabrikant Citation2021; Ohm, Ludwig, and Gerstmeier Citation2015). Currently, there are no clear guidelines on the visualization of directions and landmarks for MR-based navigation. Arrows or other separate holograms are commonly used as direction indicators in current MR-based navigation apps, such as MobiDev (Figure 2(a), MobiDev Citation2018), Dent Reality (Figure 2(b), Dent Reality Citation2019), Google Maps Live View (Figure 2(c), Google Maps Citation2020), and Phiar (Figure 2(d), Phiar Citation2022). Sometimes arrows are also combined with lines to highlight the direction or a turn (Figure 2(e), Phiar Citation2022). Another, perhaps more entertaining, visualization is an animated avatar (Figure 2(f), VIEWAR Augmented Reality Citation2020).

Figure 2. Typical visualization styles of direction. (a)-(d) separate arrows/dots, (e) separate arrows with consecutive lines, (f) animated avatar.


Using separate holograms as direction indicators is in line with the Gestalt principle of continuity, which states that elements arranged on a line or a curve are perceived as more related (UserTesting Citation2022). Users should thus be able to perceive the separate arrows/dots as a continuous path. In fact, such visualization works well for navigation, that is, for simply reaching the destination. Previous research has also confirmed that arrows are intuitive direction symbols (Liu, Ding, and Meng Citation2021). They require only a small part of the limited FOV and thus spare much space for other information. A study with projection-based MR found that on-the-road arrows draw users’ attention to the physical world (Knierim et al. Citation2018). However, it is not clear whether such visualization requires more mental effort than a consecutive line.

Landmarks are valued in navigation and spatial learning. As visually, semantically, or structurally salient objects (Raubal and Winter Citation2002; Sorrows and Hirtle Citation1999), they have proven useful and intuitive in navigation (Bauer et al. Citation2015; Adam, Burnett, and Large Citation2015; Dong et al. Citation2020; Wenczel, Hepperle, and von Stülpnagel Citation2017; Çöltekin et al. Citation2020). Li et al. (Citation2014) found that visualizing distant landmarks supports spatial orientation for users with a low sense of direction (SOD). Credé et al. (Citation2020) confirmed that globally visible landmarks improve the acquisition of survey knowledge. Landmark knowledge is acquired at the very beginning of spatial learning (Ishikawa and Montello Citation2006), is essential, and can be gained incidentally. Landmark learning is a common task for the evaluation of spatial learning (Hedge, Weaver, and Schnall Citation2017; van Wermeskerken et al. Citation2016). When one is overloaded by the main task, the visual field narrows (Kishishita et al. Citation2014) and incidental learning of landmarks might become more difficult. During navigation, landmark learning is a secondary task and is therefore also suitable for assessing the mental workload of the main task, that is, navigation (McKendrick et al. Citation2016).

1.3. This study

To address the aforementioned cognitive issues associated with the interface design of hm-MR-based navigation, this study investigates the influences of different visualization styles on users’ perception of the real world. We formulate two research hypotheses about hm-MR-based navigation:

Hypothesis 1:

Incorrect virtual information misleads users’ perception of the physical environment and leads to wrong spatial memory.

Hypothesis 2:

Aligned separate holograms used as direction indicators are perceived as a continuous path without additional mental effort and do not influence spatial learning.

To test these two hypotheses, we built an MR-based navigation interface and conducted a user study in which users performed tasks with the navigation aid. To test H1, we visualize an incorrect virtual path that conflicts with the physical environment. More specifically, we designed an artificial turn in a straight corridor. If users tend to remember the virtual path instead of the real environment, H1 is true. To test H2, two visualizations are adopted for the direction indicator, that is, arrows with lines and arrows without lines. We also visualize some of the semantic and structural landmarks of the study area. Mental effort is evaluated with standard questionnaires (see 2.5 Procedures) and spatial learning is evaluated through the learning of landmarks. Therefore, if the questionnaire results and landmark learning remain the same for both the with-line and without-line styles, H2 is true; otherwise, H2 is false. Furthermore, we collected user opinions about the interface design for future improvement.

2. User study

In this study, we design a navigation interface using Microsoft HoloLens 2 and conduct a user study. An artificial turn is introduced to test H1; two visualizations of direction indicators are used and selected landmarks are visualized to test H2. During the user study, the participants first complete a pre-walking questionnaire covering their knowledge background and SOD, and then use the navigation tool to reach the destination. A post-walking questionnaire is used to assess mental workload, and sketch maps are used to assess the spatial learning results. Finally, we interview the participants about their opinions on MR navigation and the interface design.

2.1. Navigation interface design

The designed interface mainly consists of arrows with/without lines to show the direction, and semantic and structural landmarks. The participants are then randomly divided into two groups, that is, the With Line (WL) group and the No Line (NL) group.

Three categories of landmarks are selected: elevator/lift, corridor/hall, and administration office. We use pictorial symbols to represent the landmarks, as pictorial symbols have been shown to be more effective than geometric symbols (Halik and Medyńska-Gulij Citation2017) and may thus be more effective in unintentional perception. The symbols are designed large enough to show their details. We use black-and-white symbols with thicker lines and straight shapes to make them easier to distinguish (Halik and Medyńska-Gulij Citation2017) and to avoid distracting participants from perceiving the physical world (Figure 3). We created these symbols from icons by vectorpocket, upklyak, and pch.vector on Freepik (www.freepik.com). The symbols are rotated around the vertical axis so that they always face the user.

Figure 3. Pictures used in the interface design.

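Keeping a flat symbol facing the user is a standard billboarding technique (in Unity this is typically done by rotating each symbol around its vertical axis toward the camera each frame). The sketch below is not the authors’ implementation; it merely illustrates the underlying yaw computation, assuming a y-up coordinate system with positions given as (x, y, z):

```python
import math

def billboard_yaw(symbol_pos, user_pos):
    """Yaw angle (radians) that rotates a symbol around the vertical
    (y) axis so its front faces the user. Height differences are
    ignored, so the symbol stays upright."""
    dx = user_pos[0] - symbol_pos[0]
    dz = user_pos[2] - symbol_pos[2]
    return math.atan2(dx, dz)  # 0 when the user is straight ahead (+z)
```

Applying this yaw every frame keeps the pictorial landmarks readable from any approach direction without tilting them.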

2.2. Participants

This user study recruited 40 volunteer participants through posters online and around the university campus. All participants are adults (22–49 years old, mean age = 29.4 years, SD = 6.4 years). Seventeen participants are female and 23 are male. According to the questionnaire in the user study, the participants have only limited experience with both AR and VR. None of the participants reported a visual impairment, nor was any observed.

2.3. Hardware and interface

We used a Microsoft HoloLens (2nd generation, https://www.microsoft.com/en-us/hololens/hardware) in the user study. The device features 2k 3:2 light engines, a holographic density of more than 2.5k radiants (light points per radian), and real-time eye tracking.

The interface is designed using Unity (https://unity.com/). For the WL and NL groups, the landmarks and arrows are identical; the only difference is that the lines are shown only to the WL group (Figure 4).

Figure 4. Participants’ view at the start point ((a) WL group; (b) NL group) and before the artificial turn ((c) WL group; (d) NL group).


We first used Azure Spatial Anchors to save each hologram (e.g. each arrow was an independent spatial anchor). However, in a pilot study we found that the loading of holograms was unreliable due to an unstable internet connection, and the participants were confused during navigation. Therefore, we saved the whole path as one single anchor to ensure that all holograms are rendered in time. The visibility range was set to 5 meters.
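The 5-meter visibility can be thought of as simple distance culling over the pre-anchored path: each frame, only the holograms within range of the user’s current position are shown. A minimal 2D sketch, illustrative only and not the Unity code used in the study:

```python
import math

VISIBILITY_RANGE_M = 5.0  # matches the 5 m visibility used in the study

def visible_holograms(user_pos, hologram_positions, max_dist=VISIBILITY_RANGE_M):
    """Return the indices of path holograms within max_dist of the user,
    so that only the nearby portion of the anchored path is rendered.
    Positions are 2D (x, z) ground-plane coordinates."""
    ux, uz = user_pos
    return [i for i, (hx, hz) in enumerate(hologram_positions)
            if math.hypot(hx - ux, hz - uz) <= max_dist]
```

Because the whole path hangs off one anchor, this check only toggles rendering; it does not require relocating anchors over the network.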

2.4. Study area

The user study is conducted in the main building on the city campus of the Technical University of Munich. The study area was chosen for three reasons: a) it has no windows, which keeps the lighting condition constant; b) there is more than one corridor along the path, allowing us to test participants’ perception of both visualized and non-visualized landmarks; and c) the walked path is wide enough to allow an artificial turn. The study area is shown in Figure 5.

Figure 5. Study area.


2.5. Procedures

The main procedure of the user study includes a pre-walking questionnaire, a walking session, a post-walking questionnaire, and a semi-structured interview. First, the participants were given the Informed Consent Form, which briefly introduces the study and informs them that they are free to quit at any time or withdraw their personal data. After signing the form, the participants complete a pre-walking questionnaire. It covers personal information, the SBSOD (Q1–Q15, Santa Barbara Sense of Direction Scale, which is widely used to assess SOD, Hegarty et al. Citation2002), whether they get lost more easily indoors than outdoors (Q16), familiarity with AR and VR (Q17–Q18), and the pre-state part of the SSSQ (Short Stress State Questionnaire, Helton and Näswall Citation2015). Table 1 lists questions Q16–Q18.

Table 1. Questions 16–18 in the Questionnaire.

The task description informed the participants that they would be asked questions about the path after the walk, but not which specific questions. During the walking session, the participants first needed to adjust the eye position using HoloLens 2, and were then shown the “legend” with examples of the elevator/lift, corridor/hall, and administration office landmarks. The participants could read the landmark legends without a time limit until they fully understood the meaning of the symbols. Afterwards, they started the navigation part. After reaching the destination, the participants walked back along the same corridor wearing the HoloLens 2 (without any instructions displayed), accompanied by the experiment designer. During the return trip, the experiment designer explained the following steps (e.g. the upcoming interview) to the participants, keeping them focused on the conversation and distracted from memorizing the environment.

For the post-walking questionnaires, the participants first need to draw a sketch map and answer related questions, then fill in the post-state SSSQ (Helton and Näswall Citation2015) and the NASA TLX questionnaire (NASA Task Load Index),Footnote2 answer questions about the interface design, and draw their own design of the interface. Finally, there was a semi-structured interview based on their answers, and the conversation was recorded.

The misleading effect of the artificial turn is evaluated by asking the participants to judge the structure of the study area (Figure 6). This question comes after the sketch map task to prevent the options from influencing the sketches. The question is: Which of the following pictures is most similar to your impression of the structure of the study area? We give six diagrams showing different structures: a straight line with no corridor (A), one corridor on the right side (B), and corridors on both sides (C); and a curved line with no corridor (D), one corridor on the right side (E), and corridors on both sides (F). The correct answer is C, that is, there are corridors on both sides near the start point. Each option represents a particular impression of the study area. For example, if a participant chose B, it means this participant was not misled by the artificial turn and remembered the physical path correctly, but remembered only the corridor labeled by the virtual landmark and overlooked the unlabeled one.

Figure 6. Six proposed options representing the study area structure.


3. Results

We analyzed the participants’ pre-walking questionnaires and sketch maps, and collected their opinions about the MR navigation interface design in the post-walking sessions. The post-state SSSQ and TLX questionnaires were filled in after the sketch mapping. The interviews revealed that the participants answered these two questionnaires mainly with the sketch map tasks in mind rather than the navigation experience: they made great efforts to recall the route and found the tasks mentally demanding. Thus, the differences between the pre- and post-state SSSQ and the results of the TLX questionnaire mainly reflect the post-walking tasks instead of the navigation experience, which is beyond the focus of this study. Therefore, those results are not analyzed or reported in this paper.

3.1. Data analysis

3.1.1. Pre-walking questionnaire

The pre-walking questionnaire reflects the participants’ sense of direction, whether they get lost more easily indoors than outdoors, and their experience with AR and VR. The results are shown in Table 2. The SBSOD score is 4.39 for the WL group and 4.52 for the NL group. The score for Q16 is 4.55 for the WL group and 4.26 for the NL group, which indicates that participants are more likely to get lost indoors. The scores for Q17 and Q18 in both the WL and NL groups are between 2 and 3, indicating that the participants have only limited experience with both AR and VR. A t-test of the differences between the WL and NL groups shows no significant differences in any of the four aspects (all p values above 0.05). This suggests that any differences found in the later data analysis are not due to differences between the participant groups but are caused by the visualizations.

Table 2. Pre-walking Questionnaire Results (values represent: mean ± standard deviation).
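The group comparison above is an independent-samples t-test on each questionnaire score. The paper does not state which variant was used; the sketch below implements Welch’s unequal-variance form from first principles (the function, not the study’s data, is shown here):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples (e.g. SBSOD scores of the WL group vs. the NL group)."""
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na   # squared standard error, group A
    se2_b = variance(sample_b) / nb   # squared standard error, group B
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df
```

For equal-sized samples with equal variances, the Welch degrees of freedom reduce to n_a + n_b − 2, matching the pooled “simple” t-test; the p value would then come from the t distribution (e.g. `scipy.stats.t.sf`).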

3.1.2. Sketch Map

Right after the walking session, every participant was asked to draw a sketch map reflecting their memory of the walked area. Table 3 shows the results of the sketch mapping. The general results of the WL group and the NL group are very similar. We counted how many participants remembered each of the elements. The largest differences between the WL and NL groups are in the counts for the corridor and the correct direction, both of which differ by 4. Only two participants in the WL group drew the unlabeled corridor, but participants from the WL group labeled more landmarks in most categories than those from the NL group. Regarding the purely real, physical objects (i.e. stairs, glass doors, and mailboxes), seven participants drew the short stairs near the start point, three from the WL group and four from the NL group. Twenty-seven participants drew at least one glass door, 14 from the WL group and 13 from the NL group. Among them, one participant in the WL group and two in the NL group drew two glass doors along the path, resulting in 15 glass doors in each group (see “Glass Door” in Table 3). One participant in the WL group remembered the mailbox near the turn. Concerning the objects labeled by virtual landmarks (i.e. elevators, administrative offices, corridors, and the artificial turn), 34 participants drew the elevator, 18 from the WL group and 16 from the NL group. The numbers for the admin office are the same as for the elevator. More participants from the WL group (17) than from the NL group (13) drew the corridor. Most participants (37) drew the artificial turn, 19 from the WL group and 18 from the NL group. Furthermore, among those who drew this artificial turn, 12 participants from the WL group and 16 from the NL group drew the turn direction correctly.

Table 3. The number of each type of object drawn on sketch maps.

Overall, more participants from the WL group drew the landmarks on their sketch maps, and more of them drew the correct numbers of landmarks. Among all participants, only one WL participant remembered the physical mailbox. However, more participants from the NL group remembered the direction correctly.

Directly after the sketch mapping, the participants rated their confidence in their sketch. They were further asked to explain which parts they were more confident about and why. For 25 participants, self-confidence was mostly related to the category of element; for example, they might be more confident about the direction and less confident about landmarks. Eleven of them also mentioned position-related confidence, for example, being less confident about the order of landmarks or about the last landmarks. For seven participants, confidence was related only to position: they were more confident about the things before the turn or the glass door and less confident about those after it.

The results indicate that the artificial turn misled the participants to some extent. The choices for each option on the structure of the study area are shown in Table 4. Most participants chose B, that is, a straight path with a corridor only on the right side. Seven participants chose no corridor (A) and seven others chose the correct option C. These participants all remembered the walked corridor as straight. Only two participants, both from the NL group, chose E, in which the walked corridor has a turn.

Table 4. Results of study area structure question.

All the participants from the WL group remembered the path correctly, while 18 participants from the NL group did so (A+B+C). Seventeen participants from the WL group and 16 from the NL group remembered the labeled corridor (B+C+E+F). Sixteen participants in each of the WL and NL groups overlooked the physical unlabeled corridor (A+B+D+E), and one further participant in the NL group was unsure about this corridor. In general, the memorized study area layouts of the participants from both groups are quite similar. Only without lines were two participants confused by the artificial turn (i.e. the conflicting virtual and physical information); they remembered the physical layout as having a turn.
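The composite categories used above (A+B+C, B+C+E+F, A+B+D+E) amount to tallying option choices over sets of options. A small sketch of that bookkeeping, using made-up choices rather than the study’s raw responses:

```python
from collections import Counter

# Composite categories over the structure-question options A-F,
# mirroring the groupings used in the analysis above.
COMPOSITES = {
    "straight_path": {"A", "B", "C"},        # not misled by the turn
    "labeled_corridor": {"B", "C", "E", "F"},
    "missed_unlabeled": {"A", "B", "D", "E"},
}

def composite_counts(choices):
    """Count how many participants fall into each composite category,
    given one option letter per participant."""
    tally = Counter(choices)
    return {name: sum(tally[option] for option in options)
            for name, options in COMPOSITES.items()}
```

Because the categories overlap, one choice can contribute to several composites, which is exactly how the per-group sums in the text are formed.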

3.1.3. Interface design ratings

In the questions about interface design, the participants were asked to evaluate the elements in the interface (arrows, lines, and landmarks) from two aspects: 1) satisfaction, i.e. how much participants liked an element, and 2) usability, i.e. how much participants thought it helped them remember the route. Each aspect was rated on a 7-point Likert scale (1 = not at all, 7 = very much). The results are shown in Figure 7 and Table 5. In general, participants liked the three elements we designed and regarded them as useful, as all the values are above 4. For landmarks, both satisfaction and usability tend to be higher in the WL group.

Figure 7. Rating results of the elements.

Table 5. Descriptive and ANOVA results for interface design ratings.

In the WL group, the line was liked most, followed by the arrow and the landmark. The usability of the line was also rated highest, and the arrow was rated least useful. In the NL group, both the satisfaction and the usability of the arrow were higher than those of the landmark. In both groups, satisfaction with the direction indicators was higher than satisfaction with the landmarks.

The satisfaction and usability of each element were compared between the WL and NL groups using a two-way ANOVA; the results are shown in Table 5. The main effect of element on satisfaction is significant (p = 0.001 < 0.01): participants' satisfaction with the arrow (5.80 ± 1.38) is higher than that with the landmark (4.55 ± 1.74).

The interaction effect between group and element influences the usability rating (p = 0.038 < 0.05). A simple effects test was conducted to determine which factor was effective at each level. The results (Table 6) show that element has a significant influence on the usability rating in the NL group: without lines, the arrow is regarded as significantly more useful than the landmark (df(arrow−landmark) = 1.50). With lines, this effect is not significant (p = 0.670 > 0.05). The visualization group has a significant influence on the usability rating of arrows: with lines, the arrow's usability is rated significantly lower (df(NL−WL) = 1.45).

Table 6. Simple effects test for usability (df: difference, SE: standard error).
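The group × element comparison above follows the standard balanced two-way ANOVA decomposition. The sketch below is illustrative only: it implements the textbook sums-of-squares computation on hypothetical rating arrays (the study's actual ratings and analysis software are not specified here), returning the F statistics for the two main effects and their interaction.

```python
import numpy as np

def two_way_anova(cells):
    """Balanced two-way ANOVA.

    cells[i][j] is a 1-D array of ratings for level i of factor A
    (e.g. group: WL/NL) and level j of factor B (e.g. element:
    arrow/landmark). All cells must have the same size n.
    """
    a, b = len(cells), len(cells[0])
    n = len(cells[0][0])
    means = np.array([[c.mean() for c in row] for row in cells])
    grand = means.mean()
    row_m, col_m = means.mean(axis=1), means.mean(axis=0)

    ss_a = n * b * ((row_m - grand) ** 2).sum()           # main effect of A
    ss_b = n * a * ((col_m - grand) ** 2).sum()           # main effect of B
    ss_ab = n * ((means - row_m[:, None]                  # interaction A x B
                  - col_m[None, :] + grand) ** 2).sum()
    ss_w = sum(((c - c.mean()) ** 2).sum()                # within-cell error
               for row in cells for c in row)

    df_a, df_b = a - 1, b - 1
    df_ab, df_w = df_a * df_b, a * b * (n - 1)
    ms_w = ss_w / df_w
    return {"F_A": (ss_a / df_a) / ms_w,
            "F_B": (ss_b / df_b) / ms_w,
            "F_AxB": (ss_ab / df_ab) / ms_w}

# Hypothetical 2 (group) x 2 (element) usability ratings, n = 2 per cell.
cells = [[np.array([1.0, 3.0]), np.array([2.0, 4.0])],
         [np.array([3.0, 5.0]), np.array([4.0, 6.0])]]
print(two_way_anova(cells))
```

When the interaction F is significant, as for the usability ratings here, a simple effects test then compares the element ratings within each group (and the group ratings within each element) separately.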

3.1.4. Post interview

Finally, we analyzed the answers to the open questions about interface design (Table 7). Among all the aspects, the map was the most frequently mentioned element: 13 participants (five from the WL group and eight from the NL group) said they would like an overview map in the corner, similar to those in games.

Table 7. Participants’ interface design suggestions in semi-structured interview.

Interactive menus were also mentioned quite often (by 11 participants); the participants would like the menu to be either independent/call-out or based on clickable landmarks. The remaining distance or time to the destination was mentioned by eight participants (five from the WL group and three from the NL group). Two participants were in favor of displaying the device status, as this would keep users informed about the current situation, such as whether the device is functioning normally and is reliable. Four participants would like avatars to lead the way, though some of them prefer a life-sized virtual person while others prefer cats or dogs. Two participants would like cardinal directions even in indoor navigation, as some buildings use names such as "North Gate". Two participants mentioned that they would learn better with audio assistance.

Despite the joy of using MR, 11 participants expressed concern about safety and one about workload. Some did not want any more information to be displayed, as the current settings are sufficient to guide the way without occluding the real world. Others expected the HoloLens to give danger warnings, for example of accidents, or just inflate floors. Seven participants would like more digitalized objects, but combined with the real world, e.g. doodles to represent or highlight the trees. Four participants said the landmarks should be closer to the real objects or linked with them by virtual lines. For the landmark design, the preferences are quite diverse, which indicates the necessity of providing different or personalized styles. Three participants reported that they were bothered by the height of the virtual objects and would like them to be lower, preferably on the ground.

4. Discussion

In this study, we proposed two hypotheses. H1: incorrect virtual information misleads users' perception of the physical environment and leads to wrong spatial memory. H2: aligned separate holograms as direction indicators are perceived as a continuous path without additional mental effort and do not influence spatial learning. We compared the participants' SBSOD scores, VR/MR experience, and whether they are more likely to get lost indoors than outdoors. No significant differences were found between the two user groups; therefore, the differences revealed in the tasks can be attributed to the visualization.

4.1. Discussion on hypothesis 1

To test H1, we designed an artificial turn in a straight corridor. Most participants (19 from the WL group and 18 from the NL group) included this turn in their sketch maps; 12 from the WL group and 16 from the NL group remembered the turn direction correctly. Despite this artificial turn in the visualized path, most participants correctly recalled the walked corridor as straight, but two participants from the NL group confused the visualized virtual path with the physical corridor. For most participants (38 out of 40, 95%), the physical information surpassed the virtual information: they were aware of the mismatch between the virtual and physical worlds and corrected the misinformation.

For the participants who remembered the walked corridor with a turn, possible explanations are that they had to pay much attention to the upcoming arrows, or that they were trying to interpret the virtual landmarks and ignored the physical world. This constant alertness to upcoming virtual objects may contribute to the inattentional blindness found in previous studies and cause participants to overlook physical objects. In our study, the incorrect visualization led participants to wrong spatial memory in only a few cases (2 out of 40, 5%). This finding reveals a possible misleading effect but is not strong support for H1. However, the current study area is restricted to a simple straight corridor, which is much more limited than a daily navigation area. We all experience that the longer we travel and the more complex the environment is, the more difficult it becomes to stay oriented; people's spatial memory also decreases as they navigate (Ekstrom and Isham 2017). Therefore, it is reasonable to assume that on longer paths or in more complex environments, the misleading effect would be stronger. User studies in more realistic daily navigation settings should be conducted to test H1. Besides, the results also suggest that this misleading effect is related to the visualization, as it appeared only in the NL group. The graphic design of the navigation interface therefore requires much attention.

4.2. Discussion on hypothesis 2

For H2, we found a similar trend in the memory of landmarks in the sketch maps of the two groups. We investigated how many participants remembered each landmark and found that the difference between the two groups is very small (the largest difference is 4). However, in general the participants from the WL group remembered more landmarks and information than those from the NL group. Differences in the subjective evaluation of the holograms were also found between the two groups: when lines are present, the arrows are seen as significantly less helpful. Since the WL and NL group participants show no differences in background (as shown in the pre-walking questionnaire), these differences are caused by the visualization. The arrows are significantly less useful in the WL group than in the NL group, and within the WL group the arrow was also rated lower than the line for both satisfaction and usability. The results indicate that the line is sufficient to show the direction in the WL group. Without lines, the arrow is significantly more helpful than the landmarks, while with lines, the landmarks are rated as useful as the arrow. When only arrows are presented, the participants may tend to search for the next arrow showing the direction and ignore the virtual landmarks and the physical world. With continuous lines, the participants were able to shift more attention from the direction indicators to virtual landmarks or physical objects. Therefore, they tended to rate the landmark the same as the direction indicator, and they added more landmarks to their sketch maps.

Our hypothesis H2 is thus rejected: with separate arrows, the participants' spatial learning tended to be worse, even within the simple study area. Participants' subjective feelings about the interface elements were also influenced.

Nonetheless, since the sketch mapping interferes with participants' SSSQ and TLX questionnaire responses, and the mental effort was not sufficiently analyzed, it is not clear whether perceiving the separate holograms as a continuous path requires more mental effort. Mental effort would be better evaluated using objective measurements, for example, eye movement data or EEG (electroencephalogram).

5. General discussion on interface design

In the analysis of H1, with only arrows, some participants were misled by the incorrect virtual information. In the analysis of H2, participants in the WL group found arrows much less helpful than those in the NL group did, and they also found arrows less helpful than the line as a direction indicator. Therefore, we recommend not using arrows alone as direction indicators, but including lines, or using only lines instead. In this way, users are less likely to be misled by potential mistakes and can focus more on the landmarks.

According to the semi-structured interview, participants have their own preferences for the display, and MR per se is not yet satisfying enough for daily use. Although most participants have only limited experience with MR, they have strong opinions on what the interface should look like. Therefore, we need to take users' personal opinions and preferences into consideration when designing MR applications. We found that some of the participants' suggestions can be combined with previous scientific findings and well integrated into the interface design. For example, participants in our study proposed using distance-dependent visualization; an ideal form of distance-dependent visualization is the combination of size and transparency in point symbols on mobile MR maps (Halik and Medyńska-Gulij 2017). The personalized preferences shown by our participants might explain the contradictory findings of former studies (Halik and Medyńska-Gulij 2017). With the new technology, users expect more individualized interfaces. Besides, many relevant contextual factors, such as environment, interest, and tasks, also influence users' behavior and should thus be adapted to (Bartling et al. 2022). Grasset et al. (2012) proposed letting the designer specify a high-level style. However, our study suggests that users should be provided with an adaptable interface, that is, be able to actively design their own style in MR applications.

Some of the participants' intuitive suggestions might contradict previous design guidelines. Many participants mentioned that overview maps should be displayed. However, overview maps might obstruct the FOV and thus need to be carefully designed (Tran and Parker 2020). We note that fewer participants from the WL group mentioned maps than from the NL group. The lines may create a sense of continuity and help the participants build some survey knowledge about the study area, thus relieving the need for a map. Again, we suggest including lines instead of only arrows as direction indicators to support users' incidental spatial learning.

While participants asked for more interactive functions, it remains a question whether these are really necessary and how the interactions should be designed. Bartling et al. (2021) found that under time pressure, participants performed worse on map-related tasks, especially highly interactive ones, and recommended that interactions be minimized under time pressure for users' benefit. A similar conclusion was drawn by Brunye et al. (2017): participants under time pressure rely more on egocentric information so as to avoid cognitive overload, so it is not ideal to introduce too much information. Such conclusions may not hold for other displays and may not be applicable to XR-based navigation; nevertheless, these factors should be considered in MR-based navigation interface design. Our study shows that users' preferences for the MR navigation interface are diverse, and some may contradict academic findings. Further research should be conducted to improve our understanding of the cognitive issues in MR-based navigation and to maximize the use of MR in spatial learning, for example, by analyzing how and to what extent the MR-based navigation interface should be adaptive/adaptable.

5.1. Limitations

The current work revealed the visualization's impact on spatial learning during MR-based navigation through descriptive results. Incorrect visualization can mislead users into wrong spatial memories, and separate holograms as direction indicators tend to be more mentally demanding and make spatial learning more difficult. Further user studies with larger study areas and more participants are needed to test to what extent these findings hold. Besides, objective measurements of mental workload, such as EEG and eye-tracking, should be employed to investigate in detail the impact of visualization and other factors (e.g. sense of direction) on spatial learning.

6. Conclusions and future work

In this study, we proposed two hypotheses concerning the effects of visualization on spatial learning in head-mounted MR-based navigation. Specifically, we tested whether incorrect virtual information can mislead users' perception of the physical world, and whether using separate holograms as direction indicators increases mental effort and influences spatial learning. We designed an indoor navigation interface that visualizes semantic and structural landmarks using pictorial icons and the direction using arrows with or without lines. Based on this prototypical interface, we conducted a user study using Microsoft HoloLens 2 and collected user feedback on the interface design.

We found preliminary confirmation of the first hypothesis, that is, incorrect visualization can mislead users and leave wrong spatial memories: two participants remembered the straight corridor as having a turn. Fortunately, this was not common, and most of the participants correctly remembered the walked area as a straight corridor. The second hypothesis, that separate holograms do not introduce more mental effort, is rejected by the user study results. The separate holograms seem to be more mentally demanding, as participants in the With Line group remembered more landmarks and more details, and they all remembered the physical corridor correctly. Still, more advanced methods of mental workload measurement (e.g. EEG and eye-tracking) should be employed to further investigate the impact on mental workload.

Our results show that the misleading effect can be mitigated by including lines rather than using only arrows as direction indicators, which allows users to attend more to the physical world. Therefore, unlike current navigation applications that use arrows, we recommend including lines as direction indicators for better incidental spatial learning. In addition, the user feedback in this study shows that participants have strong preferences for personalized visualization styles and interfaces; tools that customize navigation interfaces for different users will benefit users' spatial learning and the usability of head-mounted MR-based navigation. We also call for more in-depth investigation of head-mounted MR-based navigation interfaces in daily navigation situations and their impact on spatial learning, and for the development of adaptive and adaptable navigation.

Acknowledgments

The authors appreciate the efforts of the anonymous reviewers and the editor. We are grateful to all the participants in the experiment for their contributions to the user study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The datasets generated and analyzed during the current study are not publicly available as they contain information that could compromise privacy and consent of research participants, but they are available from the corresponding author on reasonable request.

Additional information

Funding

This work is supported by the China Scholarship Council [Grant No. 201806040219 and Grant No. 202006040025].

Notes on contributors

Bing Liu

Bing Liu is a PhD candidate in Chair of Cartography and Visual Analytics, Technical University of Munich, Germany. In her PhD study, Bing Liu focuses on spatial learning during mixed reality-based navigation. She is also experienced in using eye-tracking and fMRI in spatial ability and cognition research.

Linfang Ding

Linfang Ding is an associate professor in geomatics at the Department of Civil and Environmental Engineering, Norwegian University of Science and Technology, Norway. Her current research interests include geospatial knowledge graphs, geovisual analytics, mobility analysis, and 3D city modelling.

Shengkai Wang

Shengkai Wang is a PhD candidate in the Chair of Cartography and Visual Analytics, Technical University of Munich, Germany. His research interests include mixed reality-based visualization, spatial cognition, spatial navigation, and human-machine interfaces.

Liqiu Meng

Liqiu Meng is a professor of Cartography at the Technical University of Munich. She is serving as Vice President of the International Cartographic Association. Her research interests include geodata integration, mobile map services, HD mapping, and geovisual analytics.

Notes

References

  • Adam, B., G. Burnett, and D. R. Large. 2015. “An Investigation of Augmented Reality Presentations of Landmark-Based Navigation Using a Head-Up Display.” In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, edited by G. Burnett, New York, NY, United States: ACM, 56–63.
  • Adhanom, I. B., M. Al-Zayer, P. Macneilage, and E. Folmer. 2021. “Field-Of-View Restriction to Reduce VR Sickness Does Not Impede Spatial Learning in Women.” ACM Transactions on Applied Perception 18 (2): 2. doi:10.1145/3448304.
  • Barhorst-Cates, E. M., K. M. Rand, and S. H. Creem-Regehr. 2016. “The Effects of Restricted Peripheral Field-Of-View on Spatial Learning While Navigating.” PloS One 11 (10): e0163785. doi:10.1371/journal.pone.0163785.
  • Bartling, M., B. Resch, T. Reichenbacher, C. R. Havas, A. C. Robinson, S. I. Fabrikant, and T. Blaschke. 2022. “Adapting Mobile Map Application Designs to Map Use Context: A Review and Call for Action on Potential Future Research Themes.” Cartography and Geographic Information Science 49 (3): 1–15. doi:10.1080/15230406.2021.2015720.
  • Bartling, M., A. C. Robinson, B. Resch, A. Eitzinger, and K. Atzmanstorfer. 2021. “The Role of User Context in the Design of Mobile Map Applications.” Cartography and Geographic Information Science 48 (5): 432–448. doi:10.1080/15230406.2021.1933595.
  • Bauer, C., M. Ullmann, B. Ludwig, C. Stahl, B. Krieg-Brückner, W. Zagler, and B. Göttfried. 2015. “Displaying Landmarks and the User’s Surroundings in Indoor Pedestrian Navigation Systems.” Journal of Ambient Intelligence and Smart Environments 7 (5): 635–657. doi:10.3233/AIS-150335.
  • Brügger, A., K.-F. Richter, and S. I. Fabrikant. 2019. “How Does Navigation System Behavior Influence Human Behavior?” Cognitive Research: Principles and Implications 4 (1): 1–22. doi:10.1186/s41235-019-0156-5.
  • Brunye, T. T., M. D. Wood, L. A. Houck, and H. A. Taylor. 2017. “The Path More Travelled: Time Pressure Increases Reliance on Familiar Route-Based Strategies During Navigation.” The Quarterly Journal of Experimental Psychology 70 (8): 1439–1452. doi:10.1080/17470218.2016.1187637.
  • Çöltekin, A., A. L. Griffin, A. Slingsby, A. C. Robinson, S. Christophe, V. Rautenbach, M. Chen, C. Pettit, and A. Klippel. 2020. “Geospatial Information Visualization and Extended Reality Displays.” In Manual of Digital Earth, edited by H. Guo, M. F. Goodchild, and A. Annoni, 229–277. Singapore: Springer. doi:10.1007/978-981-32-9915-3_7.
  • Credé, S., T. Thrash, C. Hölscher, and S. I. Fabrikant. 2020. “The Advantage of Globally Visible Landmarks for Spatial Learning.” Journal of Environmental Psychology 67: 101369. doi:10.1016/j.jenvp.2019.101369.
  • Dent Reality. 2019. “Dent Reality - Indoor AR Navigation Speed Run.” Accessed 19 January 2022. https://www.youtube.com/watch?v=-a3bn7oPLZM.
  • Dixon, B. J., M. J. Daly, H. H. Chan, A. Vescan, I. J. Witterick, and J. C. Irish. 2014. “Inattentional Blindness Increased with Augmented Reality Surgical Navigation.” American Journal of Rhinology & Allergy 28 (5): 433–437. doi:10.2500/ajra.2014.28.4067.
  • Dong, W., T. Qin, H. Liao, Y. Liu, and J. Liu. 2020. “Comparing the Roles of Landmark Visual Salience and Semantic Salience in Visual Guidance During Indoor Wayfinding.” Cartography and Geographic Information Science 47 (3): 229–243. doi:10.1080/15230406.2019.1697965.
  • Ekstrom, A. D., and E. A. Isham. 2017. “Human Spatial Navigation: Representations Across Dimensions and Scales.” Current Opinion in Behavioral Sciences 17: 84–89. doi:10.1016/j.cobeha.2017.06.005.
  • Fang, Z., Q. Li, and S.-L. Shaw. 2015. “What About People in Pedestrian Navigation?” Geo-Spatial Information Science 18 (4): 135–150. doi:10.1080/10095020.2015.1126071.
  • Fellner, I., H. Huang, and G. Gartner. 2017. ““Turn Left After the WC, and Use the Lift to Go to the 2nd Floor”—generation of Landmark-Based Route Instructions for Indoor Navigation.” ISPRS International Journal of Geo-Information 6 (6): 183. doi:10.3390/ijgi6060183.
  • Fuest, S., S. Grüner, M. Vollrath, and M. Sester. 2021. “Evaluating the Effectiveness of Different Cartographic Design Variants for Influencing Route Choice.” Cartography and Geographic Information Science 48 (2): 169–185. doi:10.1080/15230406.2020.1855251.
  • Google Maps. 2020. “See the Way with Live View in Google Maps.” Accessed 19 January 2022. https://www.youtube.com/watch?v=ip_SV70WFO0.
  • Gramann, K., P. Hoepner, and K. Karrer-Gauss. 2017. “Modified Navigation Instructions for Spatial Navigation Assistance Systems Lead to Incidental Spatial Learning.” Frontiers in Psychology 8. doi:10.3389/fpsyg.2017.00193.
  • Grasset, R., T. Langlotz, D. Kalkofen, M. Tatzgern, and D. Schmalstieg. 2012. “Image-Driven View Management for Augmented Reality Browsers.” In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Piscataway, NJ: IEEE, 177–186.
  • Halik, Ł., and B. Medyńska-Gulij. 2017. “The Differentiation of Point Symbols Using Selected Visual Variables in the Mobile Augmented Reality System.” The Cartographic Journal 54 (2): 147–156. doi:10.1080/00087041.2016.1253144.
  • Heaney, D. 2019. “HoloLens 2‘s Real Field of View Revealed - UploadVr.” UploadVR, February 26. Accessed 23 January 2022. https://uploadvr.com/hololens-2-field-of-view/.
  • Hedge, C., R. Weaver, and S. Schnall. 2017. “Spatial Learning and Wayfinding in an Immersive Environment: The Digital Fulldome.” Cyberpsychology, Behavior and Social Networking 20 (5): 327–333. doi:10.1089/cyber.2016.0399.
  • Hegarty, M., A. E. Richardson, D. R. Montello, K. Lovelace, and I. Subbiah. 2002. “Development of a Self-Report Measure of Environmental Spatial Ability.” Intelligence 30 (5): 425–447. doi:10.1016/S0160-2896(02)00116-2.
  • Helton, W. S., and K. Näswall. 2015. “Short Stress State Questionnaire.” European Journal of Psychological Assessment 31 (1): 20–30. doi:10.1027/1015-5759/a000200.
  • Huang, H., M. Schmidt, and G. Gartner. 2012. “Spatial Knowledge Acquisition with Mobile Maps, Augmented Reality and Voice in the Context of GPS-Based Pedestrian Navigation: Results from a Field Test.” Cartography and Geographic Information Science 39 (2): 107–116. doi:10.1559/15230406392107.
  • Hyman, I. E., S. M. Boss, B. M. Wise, K. E. McKenzie, and J. M. Caggiano. 2010. “Did You See the Unicycling Clown? Inattentional Blindness While Walking and Talking on a Cell Phone.” Applied Cognitive Psychology 24 (5): 597–607. doi:10.1002/acp.1638.
  • Ishikawa, T., and D. R. Montello. 2006. “Spatial Knowledge Acquisition from Direct Experience in the Environment: Individual Differences in the Development of Metric Knowledge and the Integration of Separately Learned Places.” Cognitive Psychology 52 (2): 93–129. doi:10.1016/j.cogpsych.2005.08.003.
  • Jensen, M. S., R. Yao, W. N. Street, and D. J. Simons. 2011. “Change Blindness and Inattentional Blindness.” Wiley Interdisciplinary Reviews Cognitive Science 2 (5): 529–546. doi:10.1002/wcs.130.
  • Kapaj, A., S. Lanini-Maggi, and S. I. Fabrikant. 2021. “The Influence of Landmark Visualization Style on Expert Wayfinders’ Visual Attention During a Real-World Navigation Task.” https://escholarship.org/uc/item/7km7x3w1.
  • Kishishita, N., K. Kiyokawa, J. Orlosky, T. Mashita, H. Takemura, and E. Kruijff. 2014. “Analysing the Effects of a Wide Field of View Augmented Reality Display on Search Performance in Divided Attention Tasks.” In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, 177–186.
  • Knierim, P., S. Maurer, K. Wolf, and M. Funk. 2018. “Quadcopter-Projected in-Situ Navigation Cues for Improved Location Awareness.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM.
  • Krupenia, S., and P. M. Sanderson. 2006. “Does a Head-Mounted Display Worsen Inattentional Blindness?” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50 (16): 1638–1642. doi:10.1177/154193120605001626.
  • Lanini-Maggi, S., I. T. Ruginski, and S. Fabrikant. 2021. “Improving Pedestrians’ Spatial Learning During Landmark-Based Navigation with Auditory Emotional Cues and Narrative.” In GIScience 2021 Short Paper Proceedings.
  • Li, R., A. Korda, M. Radtke, and A. Schwering. 2014. “Visualising Distant Off-Screen Landmarks on Mobile Devices to Support Spatial Orientation.” Journal of Location Based Services 8 (3): 166–178. doi:10.1080/17489725.2014.978825.
  • Liu, B., L. Ding, and L. Meng. 2021. “Spatial Knowledge Acquisition with Virtual Semantic Landmarks in Mixed Reality-Based Indoor Navigation.” Cartography and Geographic Information Science 48 (4): 305–319. doi:10.1080/15230406.2021.1908171.
  • McCullough, D., and R. Collins. 2019. ““Are We Losing Our Way?” Navigational Aids, Socio-Sensory Way-Finding and the Spatial Awareness of Young Adults.” Area 51 (3): 479–488. doi:10.1111/area.12478.
  • McKendrick, R., R. Parasuraman, R. Murtza, A. Formwalt, W. Baccus, M. Paczynski, and H. Ayaz. 2016. “Into the Wild: Neuroergonomic Differentiation of Hand-Held and Augmented Reality Wearable Displays During Outdoor Navigation with Functional Near Infrared Spectroscopy.” Frontiers in Human Neuroscience 10: 216. doi:10.3389/fnhum.2016.00216.
  • McNamara, K., JR. “The Effects of See-Through Head-Mounted Displays on Learning and Attention Towards Real-World Events.” Accessed 20 February 2022. https://keithmc13.github.io/HMD_Distraction_paper.pdf.
  • Medynska-Gulij, B. 2003. “The Effect of Cartographic Content on Tourist Map Users.” Cartography 32 (2): 49–54. doi:10.1080/00690805.2003.9714252.
  • MobiDev. 2018. “ARKit Based AR Indoor Navigation in Corporate Campus.” Accessed 19 January 2022. https://www.youtube.com/watch?v=VmROm6nbElA.
  • Monmonier, M. 2005. “Lying with Maps.” Statistical Science 20 (3): 3. doi:10.1214/088342305000000241.
  • Murphy, G., and C. M. Greene. 2015. “High Perceptual Load Causes Inattentional Blindness and Deafness in Drivers.” Visual Cognition 23 (7): 810–814. doi:10.1080/13506285.2015.1093245.
  • Ohm, C., B. Ludwig, and S. Gerstmeier. 2015. “Photographs or Mobile Maps? Displaying Landmarks in Pedestrian Navigation Systems.” In Proceedings of the 14th International Symposium on Information Science. Vol. 66, edited by F. Pehar, C. Schlögl, and C. Wolff, Zadar, 302–312.
  • Phiar. 2022. “Phiar at CES 2022.” Accessed 19 January 2022. https://www.youtube.com/watch?v=vg5Nz-dVwcE.
  • Raubal, M., and S. Winter. 2002. “Enriching Wayfinding Instructions with Local Landmarks.” In Geographic Information Science: Second International Conference, GIScience 2002, Boulder, CO, USA, September 25-28, 2002. Proceedings. edited by, M. J. Egenhofer and D. M. Mark, 1st ed. Lecture Notes in Computer Science 2478. Berlin, Heidelberg: Springer, 243–259. Berlin Heidelberg; Imprint: Springer.
  • Rehman, U., and S. Cao. 2017. “Augmented-Reality-Based Indoor Navigation: A Comparative Analysis of Handheld Devices versus Google Glass.” IEEE Transactions on Human-Machine Systems 47 (1): 140–151. doi:10.1109/THMS.2016.2620106.
  • Ruginski, I. T., S. H. Creem-Regehr, J. K. Stefanucci, and E. Cashdan. 2019. “GPS Use Negatively Affects Environmental Learning Through Spatial Transformation Abilities.” Journal of Environmental Psychology 64: 12–20. doi:10.1016/j.jenvp.2019.05.001.
  • Sorrows, M. E., and S. Hirtle. 1999. “The Nature of Landmarks for Real and Electronic Spaces.” https://www.semanticscholar.org/paper/The-Nature-of-Landmarks-for-Real-and-Electronic-Sorrows-Hirtle/592973e8e1c69ccdc488070de503969ca23a68bf.
  • Stites, M. C., L. E. Matzen, and Z. N. Gastelum. 2020. “Where are We Going and Where Have We Been? Examining the Effects of Maps on Spatial Learning in an Indoor Guided Navigation Task.” Cognitive Research: Principles and Implications 5 (1): 1. doi:10.1186/s41235-020-00213-w.
  • Tran, T. T. M., and C. Parker. 2020. “Designing Exocentric Pedestrian Navigation for AR Head Mounted Displays.” In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, edited by R. Bernhaupt, F. Mueller, D. Verweij, J. Andres, J. McGrenere, A. Cockburn, I. Avellino, et al. New York, NY, USA: ACM, 1–8.
  • UserTesting. 2022. “7 Gestalt Principles of Visual Perception: Cognitive Psychology for UX | UserTesting Blog.” Accessed 17 February 2022. https://www.usertesting.com/blog/gestalt-principles#continuity.
  • van Wermeskerken, M., N. Fijan, C. Eielts, and W. T. J. L. Pouw. 2016. “Observation of Depictive versus Tracing Gestures Selectively Aids Verbal versus Visual-Spatial Learning in Primary School Children.” Applied Cognition Psychology 30 (5): 806–814. doi:10.1002/acp.3256.
  • VIEWAR Augmented Reality. 2020. “GuideBot - AR Indoor Navigation.” Accessed 19 January 2022. https://www.youtube.com/watch?v=Db_PUwXF0SA.
  • Wang, Y., Y. Wu, C. Chen, B. Wu, S. Ma, D. Wang, H. Li, and Z. Yang. 2021. “Inattentional Blindness in Augmented Reality Head-Up Display-Assisted Driving.” International Journal of Human–Computer Interaction 38 (9): 1–14. doi:10.1080/10447318.2021.1970434.
  • Wenczel, F., L. Hepperle, and R. von Stülpnagel. 2017. “Gaze Behavior During Incidental and Intentional Navigation in an Outdoor Environment.” Spatial Cognition & Computation 17 (1–2): 121–142. doi:10.1080/13875868.2016.1226838.
  • Wunderlich, A., S. Grieger, and K. Gramann. 2022. “Landmark Information Included in Turn-By-Turn Instructions Induce Incidental Acquisition of Lasting Route Knowledge.” Spatial Cognition & Computation 1–26. doi:10.1080/13875868.2021.2022681.