Research Article

Neuroadaptive LBS: towards human-, context-, and task-adaptive mobile geographic information displays to support spatial learning for pedestrian navigation

Pages 340-354 | Received 08 Feb 2023, Accepted 06 Sep 2023, Published online: 18 Sep 2023

ABSTRACT

Well-designed, neuroadaptive mobile geographic information displays (namGIDs) could improve the lives of millions of mobile citizens of the mostly urban information society who daily need to make time-critical and societally relevant decisions while navigating. What are the basic perceptual and neurocognitive processes with which individuals make movement decisions when guided by human- and context-adaptive namGIDs? How can we study this in an ecologically valid way, also outside of the highly controlled laboratory? We report first ideas and results from our unique neuroadaptive research agenda that bring us closer to answering this fundamental empirical question. We present our first implemented methodological solutions, novel ambulatory evaluation methods to study and improve location-based services (LBS) displays, by critically examining how perceptual, neurocognitive, psychophysiological, and display design factors might influence decision-making and spatial learning in pedestrian mobility across broad ranges of users and mobility contexts.

1. Background

Every day, millions of mobile citizens of the evolving digital information society make many spatio-temporal decisions indoors and outdoors, in familiar and unfamiliar environments, and especially while on the move.Footnote1 Many of our mobility decisions are made in-situ, in variable context- and time-dependent situations, and are typically influenced by smart mobile geographic information displays (mGIDs) (Brügger, Richter, and Fabrikant 2019; Ruginski et al. 2022). We define mGIDs here as any type of display that visualises geographic information, including, but not limited to, paper maps, digital mobile map interfaces on Global Positioning System (GPS)-enabled navigation systems, extended mobile reality displays (augmented reality (AR)/virtual reality (VR)), and the like. Increased reliance on assistive location-based mGIDs has already been shown to influence our daily space-time behaviour (Brügger, Richter, and Fabrikant 2019; Ruginski et al. 2022; Thrash et al. 2019) and to negatively impact our attentional and cognitive spatial abilities and resources (Aporta and Higgs 2005; Dahmani and Bohbot 2020; Sugimoto et al. 2022). Some even warn of a technological infantilizing of society because of over-dependence on personalised, user-assistive devices, including LBS and respective mGIDs (Thrash et al. 2019). Even though extremely popular and becoming ubiquitous, mGIDs can still be difficult to use successfully for many individuals, and users of mobile map displays may still have difficulty understanding the presented information (Ruginski et al. 2019). This is because current mGIDs are not yet adapted to our individual needs. They do not yet automatically consider our individually variable prior knowledge, competences, skills, training, etc.
They are also not yet adapted to our currently available or changing perceptual, cognitive, and emotional resources and capacities for the mobility tasks at hand (Spiers, Coutrot, and Hornberger 2023), and/or to rapidly changing use contexts (Coutrot et al. 2022; Thrash et al. 2019). Human- and context-adaptive mobile map displays (Reichenbacher 2001) should be cognitively supportive and perceptually salient (Brügger, Richter, and Fabrikant 2019), to guide mobile citizens safely to a desired destination, to support them in remembering the traversed environments should the device unexpectedly fail, and thus to generally improve mobile users’ well-being and safety during use (Bartling et al. 2022; Thrash et al. 2019). This is important because users of well-designed mGIDs will be more efficient in their decision-making. They will also need less time and fewer resources to solve the task at hand. Hence, they will likely also be more effective (i.e., accurate) and more satisfied with their decisions and the resulting mobility behaviour (Thrash et al. 2019). In doing so, mobile citizens will feel confident and in control of their decisions (Sugimoto et al. 2022).

2. Motivation and proposed solutions towards empirically validated, human- and context-adaptive mGIDs

Increasing empirical evidence suggests that map display use performance can be predicted by individual differences in spatial abilities (Ruginski et al. 2019), display users’ emotional states (Lanini-Maggi, Ruginski, and Fabrikant 2021), and even by personality traits such as anxiety (Thoresen et al. 2016). Various researchers have already provided empirical evidence that spatial knowledge and spatial learning deteriorate when people increasingly rely on non-user-centred, chiefly technology-driven, location-based navigation assistance (Ruginski et al. 2022), because current LBS and mGIDs are not yet user-centred enough (Bartling et al. 2022; Thrash et al. 2019). It is still unclear whether this occurs because of split attention and disengagement from the navigated environment (Gardony et al. 2013; Gardony, Brunyé, and Taylor 2015) or from the wayfinding decision-making process, and what role the design of the mGID plays in this process (Ruginski et al. 2022). Technically driven LBS developments are not yet informed enough by perceptual and cognitive theories, geographic information theory, and/or respective cartographic visualisation principles. Hence, they are mostly not based on solid empirical evidence derived from user studies (Montello, Fabrikant, and Davies 2018). We argue for adaptive, human-centred mGID research to further inform the still mostly technology-driven LBS community, particularly when considering the use of mGIDs for navigation and wayfinding. For this, we can leverage cognitive (neuro)science, which can bridge fundamental research in human/computer cognitive systems and thus the design and evaluation of visual information displays (Hegarty 2011). The goal of this proposed agenda-setting contribution is thus twofold:

  1. to present a cutting-edge research programme by authors and collaborators aimed at the design and development of neuroadaptive mGIDs (namGIDs), specifically used for pedestrian navigation, which is based on sound theoretical foundations and empirical evidence, and

  2. to outline ongoing novel methodological approaches for use-inspired, empirical research at the nexus of serving human- and context-adaptive geographic information for pedestrian mobility.

Our framework is especially targeted at the LBS community because it emphasises empirical studies with individual- and context-adaptive mGIDs in-situ, where navigation happens, which is still rarely considered to date (Ruginski et al. 2022). In doing so, we aim for the namGIDs of the future to guide navigators efficiently, effectively, and safely to their desired destinations. Effective guidance here means that users remain as independent as possible from the namGID. For this to happen, the namGID supports its users in learning as much as possible from the traversed environment. This, in turn, will eventually increase navigation efficiency, because the wayfinder will not be distracted by mobile map use in the long run, that is, will not need to consult the map as often as with today’s mobile maps. In other words, displays will guide us to continuously engage with the traversed environment to better support spatial learning in unfamiliar environments (Brügger, Richter, and Fabrikant 2019) and to maintain available spatial knowledge of familiar environments. This is important to avoid the earlier mentioned technological infantilizing of society due to over-reliance on LBS and assistive GeoIT (Ruginski et al. 2022; Thrash et al. 2019). Our empirical research programme is thus driven by the following fundamental research question:

How do we need to design human- and context-adaptive namGID displays that guide visual attention, mitigate cognitive load, and support spatial learning when wayfinders navigate in familiar and unfamiliar environments?

Before one can answer this complex question, one first needs fundamental insights into human decision-making and spatial behaviour with mGIDs (Ruginski et al. 2022). For our research path forward, we thus delineate a commonly structured, three-pronged empirical research approach (Figure 1). This approach is novel for LBS because it is supported by cognitive neuroscience to answer the earlier posed research question, which is squarely relevant to the LBS community. As seen in Figure 1, our approach includes three intertwined research foci and factors that we have already begun to study empirically: 1) namGID design, 2) the namGID users, and 3) their task- and context-dependent namGID use.

Figure 1. Proposed three-pronged empirical neuroadaptive mobile geographic information display (namGID) research framework considering human-, task-, and context-adaptive research dimensions (Fabrikant 2022).


We draw upon novel data-analytics-driven and human-sensing-based ambulatory assessment techniques for this approach. For this, we are equipped with empirically studied vision principles and supported by cognitive science theories and empirical evidence borrowed from psychology and cognitive neuroscience (Montello, Fabrikant, and Davies 2018; Ruginski et al. 2022; Thrash et al. 2019). These were already tested in first empirical studies with more than 50 participants in the lab, using VR (Figure 2; Cheng et al. 2022, 2023), including remote online settings (Figure 4; Lanini-Maggi, Ruginski, and Fabrikant 2021). More excitingly, the tested lab-based study approaches were also successfully transferred into the messy real-world outdoors (Figure 5; Kapaj et al. 2023). To gain a deeper understanding of how humans make mobility decisions with smart assistive navigation devices and how mGIDs affect individual and group mobility behaviour and spatial learning, we have started to deploy a novel mGID evaluation approach based on real-time, in-situ, ambulatory neurocognitive sensing and assessment of users. With this, we aim to scale up from today’s small-scale behavioural lab studies (i.e., in VR or indoors, etc.) with few participants to tomorrow’s large in-situ, crowdsourced data analytics in the messy real world (Coutrot et al. 2022; Spiers, Coutrot, and Hornberger 2023). This allows for empirically studying individuals’ space-time decision-making, spatial learning, and behaviour with human- and context-adaptive mGIDs for broad ranges of indoor and outdoor users, uses, and use contexts. Given the recent global pandemic and the respective difficulties of running studies with participants on-site in enclosed research laboratories, we have also started to apply remote, online user testing technologies, including remote video-based emotion sensing methods (Figure 4).

Figure 2. The three-sided CAVE set-up: a test participant is performing a navigation and wayfinding task in a virtual urban environment. Movement through the environment in VR is provided with a foot pedal, and other interaction is handled with a 3D pointing device. Cognitive load of a participant is measured in real-time with mEEG during the navigation experiment [image source: Alex Sofios].


With this empirical and fundamental long-term research programme on namGIDs, we aim to enrich ongoing LBS activities on the currently still weakly defined linkages between the corners in Figure 1, that is, human-adaptive (Figure 1: Axes 1–2), task-adaptive (Figure 1: Axes 2–3), and context-adaptive (Figure 1: Axes 1–3) namGID design for space-time decision-making and mobility behaviour support. Next, we highlight ongoing methodological advancements from our research lab to date and briefly review first empirically supported insights related to human- and context-adaptive mGIDs, with the aim of designing and implementing the namGIDs of the future.

3. Deploying ambulatory human-sensing and first empirical results

Empirical methods typically employed in highly controlled research laboratories have not at all, or only slowly, been adapted to today’s rapidly evolving mobile geographic information technology, which is increasingly used on the move or within globally crowdsourced paradigms (Coutrot et al. 2022; Spiers, Coutrot, and Hornberger 2023). Empirical methods thus should increasingly support individuals’ complex, real-time, and dynamic decision-making in the real world, in virtual worlds, and in digitally augmented reality. Next, we further detail our adopted empirical approach coupling controlled lab studies in VR with remote online web-based studies and studies carried out in-situ in the real world (Figure 1: Axes 1–3).

Lab-based navigation study set-up: In-house built VR and remote online web-based human-sensing settings to study pedestrian navigation.

To increase the ecological validity of navigation studies, we have built a room-sized cave automatic virtual environment (CAVEFootnote2) equipped with in-situ human-sensing technology to study human- and context-adaptive mGID use, as shown in Figure 2.

We leverage neurophysiological data collection methods coupled with respective data-driven analysis approaches for controlled laboratory studies in VR (Figure 2), remote online studies (Figure 4), and in-situ outdoor studies (Figure 5). This is to gain further insights along research axes 1–2 in Figure 1. The collected human sensor data also comprise psychophysiological data streams, including galvanic skin responses (GSRs; Lanini-Maggi et al. 2021) and online facial electromyography (EMG; Lanini-Maggi, Ruginski, and Fabrikant 2021), to measure users’ affect and emotion online (Figure 4) and in-situ (Figure 5). We also employ in-situ mobile eye-tracking (mET) (Figure 5) to study users’ visual attention with mGIDs (Kapaj et al. 2023). As a unique novel contribution to the geographic information science (GIScience) community, including cartography, LBS, and cognate fields, we present here for the first time mobile electroencephalography (mEEG) to study navigators’ spatial learning by means of cognitive load when using mGIDs in the lab (Cheng et al. 2022, 2023) and in-situ outdoors (Kapaj et al. 2023). A closer description of our developed lab set-up and deployed hardware and software, including respective technical information, is available online.Footnote3
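The article does not detail how a cognitive load index is derived from the mEEG signal. One commonly used proxy in the EEG literature is frontal-midline theta band power (4–8 Hz), which tends to increase with working-memory load. The following minimal NumPy sketch (our illustration, not the authors’ pipeline) estimates single-channel band power from the periodogram:

```python
import numpy as np

def band_power(samples, fs, band=(4.0, 8.0)):
    """Mean periodogram power of one EEG channel within a frequency band.

    The default band is theta (4-8 Hz); frontal-midline theta power is a
    common cognitive-load proxy. Illustrative sketch only -- not the
    processing pipeline used in the cited studies.
    """
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(samples)) ** 2 / len(samples)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Synthetic check: a 6 Hz oscillation plus noise carries more theta power
# than the noise alone.
fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(42)
noise = rng.normal(scale=0.5, size=t.size)
loaded = np.sin(2.0 * np.pi * 6.0 * t) + noise
print(band_power(loaded, fs) > band_power(noise, fs))  # True
```

In a real-time system such an index would be computed over short sliding windows and normalised per individual, e.g. against a resting baseline.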

Figure 3. Human- and context-dependent neuroadaptation: the density of landmarks shown on a namGID is adapted to individuals’ cognitive load during navigation to improve wayfinders’ spatial learning (based on Cheng 2019) [*map source: https://www.google.com/maps].


Figure 4. Assessing a navigator’s emotional states including their eye movements in a web-browser, in real time, during a wayfinding task in a gamified VR setting deployed remotely online [image source: Sara Lanini Maggi].


Figure 5. Real-time ambulatory assessment of a navigator’s visual attention (mET) and cognitive load (mEEG) during mGID-assisted wayfinding task outdoors [image source: Armand Kapaj].


Given the well-established importance of landmarks in visually based wayfinding (Yesiltepe, Conroy Dalton, and Ozbil Torun 2021), we have begun to study them specifically in the context of the proposed namGID framework. For example, in their VR lab-based studies (see Figure 2), Cheng et al. (2022, 2023) discovered that the number of landmarks shown on an mGID influences wayfinders’ spatial learning. The cognitive processing of the information shown on the mGID has an impact on wayfinders’ cognitive load during navigation, measured by mEEG. They found that participants acquire more spatial knowledge in the five- and seven-landmark mGID conditions than in the three-landmark condition. Five landmarks, compared to three or seven, improved spatial learning without overtaxing cognitive load during navigation in different urban environments. We thus contend that namGIDs should not only be designed to assist navigators in reaching a destination swiftly and safely, but should also support wayfinders’ spatial learning outcomes while considering cognitive load. This is particularly necessary should assistive navigation devices malfunction, fail altogether, or be unable to geolocate in real time during navigation. The ongoing user studies on how landmarks influence cognitive load during navigation directly inform the development of a future neuroadaptive navigation system that adapts to individuals’ cognitive load in real time during navigation and, in doing so, also supports pedestrians’ spatial learning of the traversed environment. Cheng and colleagues’ goal is to develop a neuroadaptive navigation system where relevant environmental features – for example, the number of 3D landmarks shown on 2D mGIDs – are adapted to individuals’ cognitive load in real time, as measured by mEEG during navigation (see Figures 2, 3, and 6).

As shown in Figure 3, once the cognitive load of an individual navigator has reached a given saturation point, the density of the landmarks on the display is reduced until more cognitive resources are available for the user to handle a greater number of landmarks shown on the namGID. Which threshold to use and how to adapt it to individuals are still open empirical research questions. The goal of neuroadaptive navigation assistance is to optimally support navigators’ spatial learning dependent on available cognitive and perceptual resources in real time, and to orient navigators’ attention back to the environment rather than solely to the assistive map display, as explained in the introduction. Next, we turn to how map users’ affective states can influence the processing of geographic information, and thus how human- and context-adaptive mGIDs for pedestrian mobility need to consider wayfinders’ emotion and affect during navigation.
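The adaptation step described above can be sketched as a simple control loop. All thresholds and the normalisation of the load signal are illustrative placeholders (the paper explicitly leaves the choice of threshold open); the landmark counts mirror the 3/5/7 conditions studied by Cheng et al., and two thresholds (hysteresis) keep the display from flickering between states:

```python
def adapt_landmark_count(load, current_count, high=0.75, low=0.45,
                         min_count=3, max_count=7, step=2):
    """One step of a (hypothetical) landmark-density adaptation loop.

    `load` is a normalised cognitive-load estimate in [0, 1], e.g. derived
    from mEEG. The thresholds `high`/`low` are placeholder values -- which
    threshold to use, and how to personalise it, is an open question.
    """
    if load >= high and current_count > min_count:
        return current_count - step   # load saturated: declutter the map
    if load <= low and current_count < max_count:
        return current_count + step   # spare capacity: show more landmarks
    return current_count              # within comfort band: no change

print(adapt_landmark_count(0.9, 7))   # 5 (overload: reduce density)
print(adapt_landmark_count(0.2, 3))   # 5 (resources free: increase density)
print(adapt_landmark_count(0.6, 5))   # 5 (no change)
```

In a deployed system this step would run on each windowed load estimate, with the resulting count driving the map renderer.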

We coupled EEG, in-situ eye-tracking, and self-reports to assess decision-making performance in an emotionally laden moving-object tracking task (Lanini-Maggi et al. 2021). By means of stationary gaze entropy, we were able to predict decision accuracy and completion time across task- and context-based expertise groups. We found that moving-object tracking performance increases with superior spatial ability and when users show positive affect (i.e., engagement), extracted from both neural measurements and self-reports. Task-domain experts are less influenced by display design choices than task novices. In essence, neural and behavioural data can beneficially complement the interpretation and understanding of collected eye-tracking data. Lanini-Maggi, Ruginski, and Fabrikant (2021) studied how navigational instructions in the form of emotional storytelling affect spatial memory and map use. For this, they invited expert pedestrian navigators, sampled from the Swiss Armed Forces, to an empirical study. Participants were first asked to watch a video of the first-person view that a pedestrian navigator sees while walking through an urban landscape. This video was dynamically synchronised with an adjacently shown (orthographic) mobile map. The expert navigators looked significantly more often at the first-person-view video during the spatial learning task than at the dynamically synchronised mobile map. This viewing behaviour is even stronger when navigation instructions are emotionally laden. While the above studies were executed in a lab environment, we now turn to outdoor navigation studies using human-sensing methodology in-situ.
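Stationary gaze entropy, used above to predict decision performance, is typically computed as the Shannon entropy of the distribution of fixations over areas of interest (AOIs): low entropy means gaze concentrated on few AOIs, high entropy means gaze dispersed across the display. A minimal sketch, with hypothetical AOI labels:

```python
from collections import Counter
from math import log2

def stationary_gaze_entropy(fixation_aois):
    """Shannon entropy (in bits) of the distribution of fixations over
    AOIs: 0 when gaze stays on a single AOI, log2(k) when it is spread
    uniformly over k AOIs. The AOI labels used below are hypothetical."""
    counts = Counter(fixation_aois)
    n = len(fixation_aois)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(stationary_gaze_entropy(["map"] * 10))                      # 0.0
print(stationary_gaze_entropy(["map", "video", "map", "video"]))  # 1.0
```

Comparing such entropy values across conditions or expertise groups is one way eye-tracking data can be summarised for the predictive analyses described above.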

4. Discussion and further developments

With the empirical research methodology laid out above, we can now track users’ cognitive load with mEEG in the uncontrolled outdoors as well (Kapaj et al. 2022), as shown in Figure 5. This includes real-time monitoring of navigators’ display interactions using display-touch analyses, coupled with mET to study navigators’ viewing behaviours in-situ. Of course, this methodology can also be deployed indoors, if so desired (e.g., see Figure 2).

Similarly, one can assess navigators’ emotion and affect together with their eye movements indoors and outdoors with facial video recordings, either in-situ or remotely (Lanini-Maggi, Ruginski, and Fabrikant 2021). This is done using vision-based classification of affective states in participants’ faces with online EMG (Figure 4), or arousal measurement with GSR (Lanini-Maggi et al. 2021), while users solve tasks in the lab and/or outdoors. Based on decision-making theories and design principles, areas of interest (AOIs) are placed at relevant locations on the mGIDs of the employed smart assistive device. The AOIs can be thematically relevant (high/low uncertainty, emotional triggers, etc.) and/or perceptually salient (cartographically enhanced) areas (Kapaj et al. 2022). Users’ eye movements and affective states are tracked with reference to these AOIs (Lanini-Maggi et al. 2021). We apply different cartographic design solutions based on empirically validated design theory and compare these against users’ task performance and affective states. Depending on users’ cognitive states (i.e., cognitive load recorded by mEEG) or affective states (i.e., measured with GSR or EMG), graphic display changes in the VR scene and/or cartographic design changes on the namGID can be triggered as real-time audiovisual feedback during decision-making in mobile situations (Figure 6). For example, Kapaj et al. (2023) show that changing the display style of task-relevant landmarks from abstract 2D footprints to highly realistic 3D symbols on the location-based mGID does not affect expert navigators’ cognitive load, but it does modify their viewing behaviour in ways detrimental to spatial learning (Ruginski et al. 2019), that is, away from the traversed environment and towards the assistive mGID (Gardony et al. 2013; Gardony, Brunyé, and Taylor 2015). Affect and emotion also play a role in pedestrian navigation. For example, Lanini-Maggi et al. (in revision) demonstrate that people watching an online 3D first-person-view video of a walk through a virtual urban park at night feel more relaxed after the walk when the park is lit with blue colour highlights compared to traditional white environmental lighting.
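Tracking eye movements “with reference to these AOIs” amounts to hit-testing each gaze sample against AOI geometries on the display. A minimal sketch with axis-aligned rectangles and hypothetical AOI names (a real mobile eye-tracking pipeline must additionally map gaze from the scene camera onto display coordinates and compensate for head movement):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AOI:
    """Axis-aligned area of interest on the mGID, in display pixels."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx, gy):
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def classify_gaze(samples, aois):
    """Label each (x, y) gaze sample with the first AOI containing it,
    or None if it falls outside all AOIs (e.g. gaze on the environment)."""
    return [next((a.name for a in aois if a.contains(x, y)), None)
            for x, y in samples]

# Hypothetical AOIs: the route line and a salient landmark symbol.
aois = [AOI("route", 100, 200, 300, 40), AOI("landmark", 420, 180, 60, 60)]
print(classify_gaze([(150, 210), (450, 200), (10, 10)], aois))
# ['route', 'landmark', None]
```

The resulting per-sample AOI labels are the input for dwell-time, transition, and gaze-entropy analyses of viewing behaviour.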

Figure 6. Neuroadaptive mGID in a gamified navigation setting: the virtual environment a test participant is experiencing wearing the HMD VR over an EEG cap is projected onto a large, screen-based CAVE VR systemFootnote5 [image source: Bingjie Cheng]. There are still some cognitive resources left (4) for the player.


Our first encouraging empirical results, collected in the lab and in the real world, suggest that human- and context-adaptive mGIDs, and especially neuroadaptation for LBS, have an exciting future. One can imagine various display adaptations based on cognitive load assessed in real time, for example, changes in display immersion (monoscopic vs. stereoscopic views), landmark or map abstraction levels (Kapaj et al. 2023), levels of system automation (Brügger, Richter, and Fabrikant 2019), or adaptations of the neuroadaptive mobile maps based on eye movements (Kapaj et al. 2023) or on a user’s state of affect and arousal (Lanini-Maggi, Ruginski, and Fabrikant 2021). For example, based on decision makers’ route choices and their measured affective state, the VR display can be made to blink to alert the user, or decision-irrelevant information can be made visually less salient (Fabrikant, Hespanha, and Hegarty 2010). We have already built a first neuroadaptive navigation game for head-mounted display (HMD) VR, tested with the public. It was showcased at a science fair at the University of Zurich in 2021 (Figure 6). In a Pokémon GO-inspired urban navigation scenario, pedestrian navigators need to collect stars (Figure 6.1) or other items, including lost keys, during a wayfinding task. Landmarks are visualised along the route (Figure 6.5), and symbols represent feature locations not only in the world (Figure 6.2) but also on the mGID of the navigation device (Figure 6.3). Navigators see their cognitive load visualised in the scene while they are playing the game. This is achieved with a dynamically changing fill level of an initially empty black brain outline symbol in the middle of their field of view (Figure 6.4), dependent on their current cognitive load (i.e., the magenta fill level of the brain symbol), measured in real time with mEEG.
We have also developed a version of the game in which a pumping heart symbol changes dynamically based on the navigator’s arousal state, captured with a smartwatch that records the navigator’s GSR in real time.Footnote4 The purity of the recorded mEEG signal (i.e., cognitive load accuracy) is still affected by interference from the infrared signal of the HMD head-tracking and its controller, used to manipulate movement and the map display (Figure 6.3); this should be systematically studied before more empirical research can be carried out.

5. Conclusions and outlook

We showcased our unique neuroadaptive research programme and presented first empirical results that bring us closer to answering the fundamental empirical question of which basic perceptual and neurocognitive processes influence movement decisions when guided by human- and context-adaptive namGIDs. We discussed how we can leverage state-of-the-art perceptual and neurocognitive sensor technology in the context of LBS in ecologically valid ways, inside and outside of the highly controlled laboratory. Taken together, our empirical agenda-setting research programme on neuroadaptive LBS, that is, human- and context-adaptive namGIDs for pedestrian navigation, is aimed at future location-based namGID developers, to provide them with empirically validated design guidelines. This is to assure that their namGID designs work as intended, and if they do, that we know why, how, when, for which kinds of users, and in which use contexts (Bartling et al. 2022; Griffin et al. 2017). In an age of increasing personalisation and customer-based segmentation, it is especially critical to be able to model and predict the success of namGID use at a fine granularity of uses and users. Navigators must be confident that their spatial learning and decision-making are not impacted by uncontrolled properties of the namGID or limited by their own background, training, competences, skills, and abilities, which might even hinder them from apprehending the desired information rapidly and from making well-informed, accurate, and timely decisions in dynamically evolving contexts. Future research could scale up the proposed empirical methods, initially developed under the controlled lab paradigm of behavioural science (e.g., using mET, mEEG, mEMG, mGSR, etc.), to a new mobile, crowdsourced human sensor science in the real world, capitalising on our own well-established geospatial visual analytics approaches, including emerging geospatial artificial intelligence methods, coupled with GIS. For this, we have started to collect users’ smartphone tapping behaviours coupled with GPS fixes, ambient light, and accelerometer data, fully remotely and in the wild, without any contact with the tracked users apart from their initial informed consent (Reichenbacher et al. 2022). In doing so, it is especially critical to consider not only ethical research methods and careful human-participant research reviews but also the privacy concerns of users and of their collected data, which have yet to be fully addressed by this research community.

Acknowledgments

The author acknowledges the fundamental contributions of the EU-funded ERC GeoViSense Team members to advance presented research (in alphabetical order of first names): Alex Sofios, Armand Kapaj, Bingjie Cheng, Ian Ruginski, Sara Lanini Maggi, and Tyler Thrash. We also gratefully acknowledge the ongoing support by current collaborators Anna Wunderlich, Arko Gosh, Chris Hilton, Kai-Florian Richter, Klaus Gramann, and Tumasch Reichenbacher.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

We are indebted to the generous funding by the European Research Council (ERC) Advanced Grant GeoViSense, No. 740426.

Notes

1. Current special issue editors and editors of the LBS2022 proceedings have invited the author of the following conference publication to submit a version of this paper for this special issue (Fabrikant 2022).

3. On the web at: https://www.geo.uzh.ch/en/units/giva/services.html (accessed January 2023).

4. See technology used on the web at: https://www.geo.uzh.ch/en/units/giva/services/mobile-EDA-facial-emotions.html (accessed September 2023).

5. See technology used on the web at: https://www.geo.uzh.ch/en/units/giva/services/virtual-reality-HMD.html (accessed September 2023).

References

  • Aporta, C., and E. Higgs. 2005. “Satellite Culture.” Current Anthropology 46 (5): 729–753. https://doi.org/10.1086/432651.
  • Bartling, M., B. Resch, T. Reichenbacher, C. R. Havas, A. C. Robinson, S. I. Fabrikant, and T. Blaschke. 2022. “Adapting Mobile Map Application Designs to Map Use Context: A Review and Call for Action on Potential Future Research Themes.” Cartography and Geographic Information Science 49 (3): 237–251. https://doi.org/10.1080/15230406.2021.2015720.
  • Brügger, A., K.-F. Richter, and S. I. Fabrikant. 2019. “How Does Navigation System Behavior Influence Human Behavior?” Cognitive Research: Principles and Implications 4 (1): 5. https://doi.org/10.1186/s41235-019-0156-5.
  • Cheng, B. 2019. “Enhancing Spatial Learning with an Adaptive Navigation System That Employs Neurofeedback.” In Doctoral Colloquium. 14th International Conference on Spatial Information Theory: COSIT 2019, Regensburg, Germany. September 9–13, 2019.
  • Cheng, B., E. Lin, A. Wunderlich, K. Gramann, and S. I. Fabrikant. 2023. “Using Eye Blink-Related Brain Activity to Investigate Cognitive Load During Assisted Navigation.” Frontiers in Neuroscience, Neural Technology. https://doi.org/10.3389/fnins.2023.1024583.
  • Cheng, B., A. Wunderlich, K. Gramann, E. Lin, and S. I. Fabrikant. 2022. “The Effect of Landmark Visualization in Mobile Maps on Brain Activity During Navigation: A Virtual Reality Study.” Frontiers in Virtual Reality 3:3. https://doi.org/10.3389/frvir.2022.981625.
  • Coutrot, A., E. Manley, S. Goodroe, C. Gahnstrom, G. Filomena, D. Yesiltepe, R. C. Dalton. 2022. “Entropy of City Street Networks Linked to Future Spatial Navigation Ability.” Nature 604 (7904): 104–110. https://doi.org/10.1038/s41586-022-04486-7.
  • Dahmani, L., and V. D. Bohbot. 2020. “Habitual Use of GPS Negatively Impacts Spatial Memory During Self-Guided Navigation.” Scientific Reports 10 (1): 6310. https://doi.org/10.1038/s41598-020-62877-0.
  • Fabrikant, S. I. 2022. “Neuro-Adaptive LBS: Towards Human- and Context-Adaptive Mobile Geographic Information Displays (mGIDs) to Support Spatial Learning for Pedestrian Navigation.” In 17th International Conference on Location Based Services (LBS 2022), edited by J. Krisp and L. Meng, 48–58. Augsburg University, Munich, Germany, September 12–14, 2022.
  • Fabrikant, S. I., S. R. Hespanha, and M. Hegarty. 2010. “Cognitively Inspired and Perceptually Salient Graphic Displays for Efficient Spatial Inference Making.” Annals of the Association of American Geographers 100 (1): 13–29. https://doi.org/10.1080/00045600903362378.
  • Gardony, A. L., T. T. Brunyé, C. R. Mahoney, and H. A. Taylor. 2013. “How Navigational Aids Impair Spatial Memory: Evidence for Divided Attention.” Spatial Cognition & Computation 13 (4): 319–350. https://doi.org/10.1080/13875868.2013.792821.
  • Gardony, A. L., T. T. Brunyé, and H. A. Taylor. 2015. “Navigational Aids and Spatial Memory Impairment: The Role of Divided Attention.” Spatial Cognition & Computation 15 (4): 246–284. https://doi.org/10.1080/13875868.2015.1059432.
  • Griffin, A. L., T. White, C. Fish, B. Tomio, H. Huang, C. R. Sluter, J. V. M. Bravo, et al. 2017. “Designing Across Map Use Contexts: A Research Agenda.” International Journal of Cartography 3 (sup1): 90–114. https://doi.org/10.1080/23729333.2017.1315988.
  • Hegarty, M. 2011. “The Cognitive Science of Visual-Spatial Displays: Implications for Design.” Topics in Cognitive Science 3 (3): 446–474. https://doi.org/10.1111/j.1756-8765.2011.01150.x.
  • Kapaj, A., S. Lanini-Maggi, C. Hilton, B. Cheng, and S. I. Fabrikant. 2023. “How Does the Design of Landmarks on a Mobile Map Influence Wayfinding experts’ Spatial Learning During a Real-World Navigation Task?” Cartography and Geographic Information Science 50 (2): 197–213. https://doi.org/10.1080/15230406.2023.2183525.
  • Lanini-Maggi, S., M. Lanz, C. Hilton, and S. I. Fabrikant. In revision. “The Positive Effect of Blue Lighting on Urban Park Visitors’ Affective States: A Virtual Reality Online Study Measuring Facial Expressions and Self-Reports.” Environment and Planning B: Urban Analytics and City Science.
  • Lanini-Maggi, S., I. T. Ruginski, and S. I. Fabrikant. 2021. “Improving Pedestrians’ Spatial Learning During Landmark-Based Navigation with Auditory Emotional Cues and Narrative.” In 11th International Conference on Geographic Information Science, edited by UC Santa Barbara: Center for Spatial Studies. September 27–30, 2020, Poznan, Poland. https://doi.org/10.25436/E2NP43.
  • Lanini-Maggi, S., I. T. Ruginski, T. F. Shipley, C. Hurter, A. T. Duchowski, B. B. Briesemeister, J. Lee, and S. I. Fabrikant. 2021. “Assessing How Visual Search Entropy and Engagement Predict Performance in a Multiple-Objects Tracking Air Traffic Control Task.” Computers in Human Behavior Reports 4:100127. https://doi.org/10.1016/j.chbr.2021.100127.
  • Montello, D., S. I. Fabrikant, and C. Davies. 2018. “Cognitive Perspectives on Cartography and Other Geographic Information Visualizations (Chapter 10).” In Handbook of Behavioral and Cognitive Geography, 177–196. Edward Elgar Publishing. https://doi.org/10.4337/9781784717544.00018.
  • Reichenbacher, T. 2001. “Adaptive Concepts for a Mobile Cartography.” Journal of Geographical Sciences 11 (S1): 43–53. https://doi.org/10.1007/BF02837443.
  • Reichenbacher, T., M. Aliakbarian, A. Ghosh, and S. I. Fabrikant. 2022. “Tappigraphy: Continuous Ambulatory Assessment and Analysis of in-situ Map App Use Behaviour.” Journal of Location Based Services 16 (3): 181–207. https://doi.org/10.1080/17489725.2022.2105410.
  • Ruginski, I., S. H. Creem-Regehr, J. K. Stefanucci, and E. Cashdan. 2019. “GPS Use Negatively Affects Environmental Learning Through Spatial Transformation Abilities.” Journal of Environmental Psychology 64:12–20. https://doi.org/10.1016/j.jenvp.2019.05.001.
  • Ruginski, I., N. Giudice, S. Creem-Regehr, and T. Ishikawa. 2022. “Designing Mobile Spatial Navigation Systems from the User’s Perspective: An Interdisciplinary Review.” Spatial Cognition & Computation 22 (1–2): 1–29. https://doi.org/10.1080/13875868.2022.2053382.
  • Spiers, H. J., A. Coutrot, and M. Hornberger. 2023. “Explaining World‐Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.” Topics in Cognitive Science 15 (1): 120–138. https://doi.org/10.1111/tops.12590.
  • Sugimoto, M., T. Kusumi, N. Nagata, and T. Ishikawa. 2022. “Online Mobile Map Effect: How Smartphone Map Use Impairs Spatial Memory.” Spatial Cognition & Computation 22 (1–2): 161–183. https://doi.org/10.1080/13875868.2021.1969401.
  • Thoresen, J. C., R. Francelet, A. Coltekin, K.-F. Richter, S. I. Fabrikant, and C. Sandi. 2016. “Not All Anxious Individuals Get Lost: Trait Anxiety and Mental Rotation Ability Interact to Explain Performance in Map-Based Route Learning in Men.” Neurobiology of Learning and Memory 132:1–8. https://doi.org/10.1016/j.nlm.2016.04.008.
  • Thrash, T., S. I. Fabrikant, A. Brügger, C. T. Do, H. Huang, K.-F. Richter, S. Lanini-Maggi. 2019. “The Future of Geographic Information Displays from GIScience, Cartographic, and Cognitive Science Perspectives.” In Leibniz International Proceedings in Informatics, LIPIcs, 142. https://doi.org/10.4230/LIPIcs.COSIT.2019.19.
  • Yesiltepe, D., R. Conroy Dalton, and A. Ozbil Torun. 2021. “Landmarks in Wayfinding: A Review of the Existing Literature.” Cognitive Processing 22 (3): 369–410. https://doi.org/10.1007/s10339-021-01012-x.