Research Article

Visual effects of a forward-curled 3D map of the Forbidden City with eye-tracking

Received 08 Oct 2023, Accepted 07 May 2024, Published online: 17 May 2024

ABSTRACT

In urban environment visualization, whether traditional two-dimensional (2D) or three-dimensional (3D), the height of ground objects causes visual occlusions in ordinary 3D maps, making spatial relationships difficult to display. We empirically studied the visual effects of a curled deformation method and assessed whether curled deformation visualization could help participants complete wayfinding tasks. The results revealed that a forward-curled map can include both ego-view and bird-view perspectives, ensure continuity from the ego-view to the bird-view perspective, and mitigate foreshortening effects. Remote, distant areas are pulled closer, enhancing the sense of space and allowing participants to better understand the overall situation. A forward-curled map yields a wider coverage of fixation points and a wider scope of visual search and can improve participants’ task completion efficiency. Moreover, this approach does not increase the cognitive burden.

1. Introduction

Human cognition is important and indispensable in the exploration of and interaction with urban spatial environments. People have built many 3D urban models over time, from physical models such as the 1933 “Plastico di Roma Imperiale” and the 1964 “New York Panorama” (Marshall Citation2018) to recent spatial visualization technology. Although the expression patterns have changed, new technology has enabled better presentation of, exploration of and interaction with urban spatial environments. Spatial visualization methods help users recognize spatial environments, and 3D visualizations naturally present complex information about 3D environments. 3D visualization methods combine data with 3D models of actual application scenarios, displaying the data more intuitively and vividly and improving the amount and accuracy of the information conveyed. 3D geographic environment models present geographical scenes realistically and assist users in environmental cognition (Dong et al. Citation2022), urban planning and management (Ning et al. Citation2020), environmental simulation (Tang et al. Citation2019), emergency response (Tang and Ren Citation2012), navigation (Dong et al. Citation2022; Iftikhar, Shah, and Luximon Citation2020), cultural heritage protection (Marques et al. Citation2017), etc. However, 3D geographic environment models contain many scene objects and complex geometries and textures. When a 3D model is mapped to a 2D canvas based on a standard perspective or orthogonal projection, problems such as visual interference from a figure obscuring the ground can occur (Zhang, Zhang, and Xu Citation2016); therefore, users cannot efficiently complete interactive tasks (Yasumoto et al. Citation2011).
The visualization of a geographical space can include invisible geographical phenomena (Midtbø and Harrie Citation2021), such as physical occlusions or invisible landscapes that have not yet occurred at present but that will occur in the future (Judge and Harrie Citation2020). Therefore, visualizing an urban geographic space so that users can quickly and effectively extract information about that space is a challenge for cartographers.

Deformation visualization technology (Jobson Citation2013; Pasewaldt, Trapp, and Döllner Citation2011; Sorene Citation2016) is a new urban 3D visualization approach. Deformation visualization technology was first used to display 2D flat maps, including subway route maps (Lloyd Citation2018) and road networks (Sielicka and Karsznia Citation2019). By keeping the original topological relationships unchanged, deformation technology abstractly expresses information by highlighting important spatial position and spatial relationship information to improve user understanding, reception and memory. In general, spatial deformation visualization technology displays feature distributions at different perspectives in the same plane through up-and-down curling, left-and-right folding, dome production and other operations, which helps to present 3D environments effectively and prevents information fragmentation when switching between perspectives. A viewer can identify their own direction, location and surrounding environmental characteristics, obtaining flexible and diverse spatial information (Guo et al. Citation2018).

In this study, we explored the visual effects of a forward-curled deformation method and whether a participant’s spatial perception of distant remote areas and their understanding of the overall spatial structure of the region can be enhanced by reducing the distance between the participant and these distant areas through curling. Finally, we analyzed the effect of the curled deformation approach in wayfinding tasks.

2. Related work

2.1. Deformation visualization technology in maps

Spatial information visualization technology provides people with a spatial cognition tool that can effectively improve their ability to understand geographic environments spatially. In deformation visualization technology, based on an original visualization at a single scale, multiple scales appear in the same map with different levels of detail. Nonstandard projection techniques are used to visualize deformations in maps, including spatial distortion and object deformation; these techniques include the perspective wall (Mackinlay, Robertson, and Card Citation1991), fisheye view (Furnas Citation1986; Tominski et al. Citation2006), and polyfocal display (Kadmon and Shlomi Citation1978) methods. Fisheye view technology can display the current focal center and edge area with different levels of detail (Tominski et al. Citation2006). When the visualization dimension is expanded to 3D space, the deformation visualization technology used to create the original 2D map is insufficient to visualize deformations of the geographic environment in 3D space. Barr (Citation1984) was the first to introduce deformations into 3D geometric modeling. He defined the deformation space using a Cartesian coordinate system and used basic vectors and mathematical operations to simulate simple deformations. Subsequently, a free-form deformation technique was proposed (Sederberg and Parry Citation1986), in which an original 3D model was embedded into a peripheral frame control grid, and the control grid was deformed to deform the original 3D model. Jenny and Jenny (Citation2011) developed Terrain Bender, a free and open-source software package that can be used to bend terrain models. In addition to the object deformation method in 3D space (Vallance and Calder Citation2001) and the urban 3D model deformation method (Möser et al. Citation2008), a sample maze model was developed that could be bent upward far from the fixation point, and according to the Hermite terrain deformation curve, three classes of nonlinear blended perspectives were proposed. Using multiperspective projections based on view-dependent global deformations, multiview 3D panoramas of 3D geovirtual environments were developed (Pasewaldt et al. Citation2014), enabling visualizations from multiple perspectives to be seamlessly combined into a single image and facilitating the presentation of information for virtual 3D city and landscape models. Moreover, the design decisions for 3D panoramic maps for 3D geovirtual environment exploration and navigation were discussed, and preliminary user studies were conducted to validate the use of 3D panoramic maps for navigation in 3D geovirtual environments.

2.2. Eye-tracking technology in wayfinding and map visualization exploration

Eye-tracking technology can enable the collection of eye movement information, high-precision fixation information and eye saccade information in real time (Hansen and Ji Citation2010; Lu et al. Citation2021). Compared with traditional behavioral measurement methods, eye-tracking methods have the advantages of greater temporal and spatial accuracy and can provide temporal and spatial information. In recent years, eye-tracking technology has become more precise and user friendly and has been extended to various fields and applications, such as facial recognition (Millen and Hancock Citation2019), social cognition (Richmond and Nelson Citation2009), spatial cognition (Ying et al. Citation2020), and planning and design practice (Lu et al. Citation2021).

Wayfinding is an effective method for becoming familiar with a new environment. In the process of wayfinding, humans can quickly obtain and understand environmental information, and wayfinding behavior has been explored by collecting behavioral data and testing hypotheses (Dong et al. Citation2022). Wayfinding behaviors include wayfinding efficiency (Parush and Berman Citation2004; Ying et al. Citation2020), wayfinding strategies (Chen, Chang, and Chang Citation2009; Lawton Citation1994), working memory (Ying et al. Citation2021a), visual attention (Ying et al. Citation2021b), and user experiences (Dalton Citation2001; Kuliga et al. Citation2015). Behavioral tests, such as questionnaires and interviews, are the most common methods for examining behavior (Golledge et al. Citation1985; Nenko, Koniukhov, and Petrova Citation2019). Eye-tracking technology is also used in route planning research (Fuhrmann, Komogortsev, and Tamir Citation2009) and in wayfinding and navigation research (Kiefer, Giannopoulos, and Raubal Citation2014), where the objective is to compare the visual attention characteristics associated with navigation in different settings, such as real-world and desktop environments (Huang et al. Citation2021) and virtual reality (VR) environments (Dong et al. Citation2020, Citation2022).

With the development of cartography technology and visualization methods, different cartography forms have become increasingly abundant, and information expression and interaction methods are improving. Since the 1970s, eye tracking has emerged as one of the most objective and valuable tools for examining mapping perception and cognitive processes (Shuang et al. Citation2016). For example, map-reading behaviors have been analyzed to improve map design and readability (Manson et al. Citation2012) and map interface friendliness (Popelka et al. Citation2019), with differences in user performance representing differences in spatial cognition ability (Ying et al. Citation2021b). Moreover, cognitive processes in wayfinding tasks have been explored (Ying et al. Citation2021a). With the development of mobile Internet and VR technology, map applications have shifted from two-dimensional screens to virtual three-dimensional environments, making VR a new medium for map visualizations. In VR environments, eye-tracking technology has been applied to collect and evaluate eye-tracking data based on indoor evacuation behavior (Ugwitz et al. Citation2022) and to quantify 3D scene map visual attention (Yang and Li Citation2021).

3. Materials and methods

3.1. Participants

We recruited 34 participants (aged 18 to 25 years; 16 men and 18 women), all of whom were university students who volunteered for our experiments. The participants had various academic backgrounds and had not previously been trained in professional spatial science competencies (cartography, geoinformatics, etc.). All participants had normal or corrected-to-normal vision. The participants could perform basic computer operations and were not familiar with the experimental stimulus material (the Forbidden City model, appropriately simplified and adjusted; Section 3.2) or the experimental procedure before the experiment. Participants were informed prior to the experiment, and consent was obtained. The experiment posed no risk to the participants’ safety, health or privacy.

3.2. Stimulus material

We selected the Forbidden City as the research area. The outer part of the Forbidden City is a rectangle that is long in the north‒south direction and short in the east‒west direction, which is suitable for curling deformation in the north‒south direction to highlight the overall characteristics of the region. Because the full panorama of the Forbidden City is complex, the model of the Forbidden City was appropriately simplified and adjusted. An ordinary 3D map of the Forbidden City (ordinary 3DMFC) was created using SketchUp; this map is not a static picture but a three-dimensional model. The experiment was carried out in SketchUp, and the participants could browse the model and complete the experiment from any angle and position. The ordinary 3DMFC was divided into four equal areas of interest (AOIs), as shown in Figure 1. The four divisions illustrate the spatial hierarchy and spatial sequence of the curved surface.

Figure 1. Ordinary 3D map of the Forbidden City and the divisions of areas of interest.


We used a visualization method for urban 3D curled deformation based on the SketchUp TrueBend plug-in and applied the curled deformation method to the map base surface and 3D urban model to generate the forward-curled 3D map of the Forbidden City (forward-curled 3DMFC). Because the forward-curled map is also a three-dimensional model in SketchUp, participants could browse and operate the 3D scene freely. The forward-curled 3DMFC was equally divided into four AOIs, as shown in Figure 2.

Figure 2. Forward-curled 3D map of the Forbidden City and the corresponding divisions of areas of interest.

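The geometric idea behind such a forward curl can be sketched as follows: the ground plane stays flat near the viewer and, beyond a chosen bend line, is rolled onto a vertical circular arc so that distant areas rise toward the eye while preserving distances along the surface. This is a minimal illustration only; the TrueBend plug-in's actual algorithm is not documented here, and the function and parameter names are our own.

```python
import math

def forward_curl(points, y0=50.0, radius=40.0):
    """Bend flat (x, y, z) points into a forward curl: the plane stays
    flat up to ground distance y0, then wraps onto a vertical circular
    arc of the given radius. Arc length along the bend equals the
    original ground distance, so the map surface is not stretched."""
    curled = []
    for x, y, z in points:
        if y <= y0:
            curled.append((x, y, z))            # near area: unchanged
        else:
            theta = (y - y0) / radius           # arc angle for this point
            cy = y0 + radius * math.sin(theta)  # horizontal position on the arc
            cz = z + radius * (1.0 - math.cos(theta))  # lifted toward the viewer
            curled.append((x, cy, cz))
    return curled
```

For example, a point a quarter-circle past the bend line ends up directly above the arc's far edge, raised by one radius, which is why distant buildings face the viewer almost top-down in the curled map.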

3.3. Apparatus

A Tobii Pro Glasses 2 eye tracker was used in this experiment to record eye movement data. The Tobii Pro Glasses 2 had a sampling rate of 50 Hz. The horizontal and vertical tracking ranges were 82° and 52°, respectively. The eye tracker was connected to a laptop computer that collected, recorded and stored eye-tracking data while presenting stimulus material.

3.4. Procedures

3.4.1. Research process

In this study, a forward-curled deformation 3D map visualization method is proposed. We used wayfinding tasks to explore the visual effect of forward-curled maps. Our research comprised three main steps. First, we conducted an eye-tracking experiment that included both free browsing and wayfinding tasks (Section 3.4.2). Next, we constructed a public cognition map based on the average mention frequency of each spatial element by participants and then conducted interviews (Section 3.4.3). Finally, we analyzed the fixation data, behavioral data, and public cognition maps obtained from the free browsing and wayfinding processes (Section 3.5). The analysis steps and contents are shown in Figure 3.

Figure 3. The analysis steps and contents of the visual effect analysis of the forward-curled 3D map of the Forbidden City with eye-tracking.


3.4.2. Eye-tracking experiment procedure

The eye-tracking experiments were divided into two stages.

Stage 1: Practice operation. Under guidance, each participant learned the necessary operations, such as scaling, panning and map rotation, and the whole process lasted 1 minute.

Stage 2: Formal experiment. Each participant wore a turned-on eye-tracking device. First, participants freely browsed (with no specific task) the forward-curled 3DMFC and the ordinary 3DMFC. In contrast to the 10-second duration per image used in a free browsing test of 2D image series (He et al. Citation2023), a free browsing time of 30 s was allocated for each 3D map in this study, aiming to strike a balance between allowing sufficient time for comprehensive observation of the entire three-dimensional scene and avoiding excessive periods that may lead to reduced engagement and drowsiness. The participants used a near-ground perspective and the same starting position and angle. Then, participants used the forward-curled 3DMFC and ordinary 3DMFC to complete wayfinding tasks. In setting the targets, each task target was not exactly the same, but we attempted to make the search difficulty as similar as possible across tasks. The starting point was the Meridian Gate (green triangle; the Meridian Gate is specified as the map origin), and the task targets were marked with red stars. The experiment employed a within-subject design, and the task order was random. At the beginning of each task, the task targets were given to the participants. For example, the instructions for Task 1 were as follows: “Start from the Meridian Gate and look for the Yanxi Palace. Please tell me when you are ready by saying start.” When the participant determined how to find the target, they were asked to say “I have found it.” The specific tasks are shown in Table 1.

Table 1. Tasks and corresponding instructions of ordinary 3DMFC and forward-curled 3DMFC.

3.4.3. Cognition maps and interviews

After wayfinding, the participants drew a cognition map of the ordinary 3DMFC or forward-curled 3DMFC.

Finally, we conducted interviews in which the participants were asked to recall and answer questions about their feelings regarding map reading and wayfinding. Related issues, such as whether buildings and roads were clear, whether these objects affected the judgment of heights and distances in wayfinding, and the advantages of the deformed maps, were included. The interviews were also used to assist in the data analysis of the number of fixations.

The interview data were integrated into the analysis of the eye-tracking results and were not analyzed separately.

3.5. Analysis framework

3.5.1. Extraction of eye-tracking parameters

In eye-tracking research, the indicators are divided into four categories: visual information processing metrics, visual information searching metrics, cognitive burden indicators and fixation heatmaps. We selected eight variables to assess visual attention (Dong et al. Citation2019, Citation2022; Herman, Popelka, and Hejlová Citation2017; Li and Chen Citation2012), including the number of fixations, total fixation duration, average fixation duration, time to first fixation, number of saccades, saccade frequency, pupil diameter and heatmaps. We used Tobii Pro Lab (Lab version 1.98, https://connect.tobii.com/s/?language=zh_CN) software to extract and analyze the eye movement data.

3.5.1.1. Information processing indices

Number of fixations: In the coding task (e.g. browsing the map), the interview responses were used to determine whether a large number of fixation points in a specific area arose because the participant was interested in that area or because the area contained complex information that was difficult to encode. In the search task, a greater number of fixations indicated that the participants were more uncertain about the target or needed to spend more time processing visual information.

Total fixation duration: In the search task, the longer the total duration of the fixations was, the more difficult it was for the participants to extract and process information.

Average fixation duration: The longer the average fixation time was, the more difficult it was for a participant to interpret information while performing the task.

Fixation frequency: The fixation frequency was defined as the number of fixation points per unit time. The higher the fixation frequency was, the faster the information processing rate of the participant was during the task.

Time to first fixation: The time to first fixation was defined as the time from when a participant began browsing the map to when the first fixation landed in each AOI. The shorter the time to first fixation in an AOI was, the faster the content in that area attracted the visual attention of the participants.

3.5.1.2. Visual information searching metrics

A saccade is a movement of the eye from one fixation point to another, which mainly changes the location of the eye’s fixation. The number of saccades and the saccade frequency reflect the amount and speed of visual search: the more saccades there are, the more visual searching occurs, and the higher the saccade frequency is, the higher the visual search efficiency is.
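The fixation and saccade indices above can be summarized from labeled gaze events in a few lines. The sketch below assumes a simple list of (kind, duration) tuples; this is an illustration of the definitions, not the Tobii Pro Lab export schema.

```python
def gaze_metrics(events, task_duration_s):
    """Summarize labeled gaze events into the indices defined above.
    `events` is a list of (kind, duration_ms) tuples, where kind is
    'fixation' or 'saccade'; field names are illustrative."""
    fix = [d for k, d in events if k == 'fixation']
    sac = [d for k, d in events if k == 'saccade']
    return {
        'fixation_count': len(fix),                    # number of fixations
        'total_fixation_ms': sum(fix),                 # total fixation duration
        'mean_fixation_ms': sum(fix) / len(fix) if fix else 0.0,
        'fixation_freq_hz': len(fix) / task_duration_s,  # fixations per second
        'saccade_count': len(sac),                     # number of saccades
        'saccade_freq_hz': len(sac) / task_duration_s,   # saccades per second
    }
```

For example, two fixations of 200 ms and 300 ms over a 10 s task give a fixation frequency of 0.2 Hz and an average fixation duration of 250 ms.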

3.5.1.3. Cognitive burden index

Pupil diameter: Changes in the pupil diameter can reflect changes in the brain’s cognitive burden; therefore, the pupil diameter is often used to characterize the cognitive burden in experimental tasks that require the brain.

3.5.1.4. Heatmap

A heatmap is a good reflection of the distribution of points of interest within the map content. By processing the experimental data collected by the eye tracker with Tobii Pro Lab software, a heatmap of the fixation points for each participant was generated.

3.5.2. Wayfinding behavior data

We used the following two metrics.

3.5.2.1. Task completion time

The task completion time was the total duration of the wayfinding process. This metric reflects the overall wayfinding completion efficiency.

3.5.2.2. The number of view switches

This metric was defined as the number of view switches caused by the actions performed (panning, rotating, zooming) by the participants during each task, reflecting the clarity of the field of view and the difficulty of the operation.

3.5.3. Public cognition map

The public cognition map is based on the superposition of the spatial elements in the participants’ cognition maps; the average mention rate of each spatial element was then calculated.

In this paper, space elements are divided into three categories: road sections, regions and buildings. The road sections are the connections between the two nearest intersections; the regions refer to the smallest areas surrounded by the road section; and the buildings mainly include the palace and the palace gate. The experimental scenes are divided into 31 road sections, 12 regions, and 104 buildings.

The public cognition maps were divided into two groups: the public cognition maps of the forward-curled 3DMFC and the ordinary 3DMFC. They were drawn according to the participants’ average mentions of each spatial element in the cognition map.
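The mention-rate computation itself reduces to the fraction of participants whose sketched cognition map contains each element. A minimal sketch (element names are hypothetical):

```python
def mention_rates(cognition_maps, all_elements):
    """Average mention rate per spatial element: the fraction of
    participants whose cognition map includes that element.
    `cognition_maps` is a list of per-participant element sets."""
    n = len(cognition_maps)
    return {e: sum(e in m for m in cognition_maps) / n for e in all_elements}
```

With 34 participants, an element drawn by 17 of them would thus have a mention rate of 0.5.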

4. Results

4.1. Visual effect analysis of forward-curled deformation maps

4.1.1. Analysis of the freely browsed map

During the experiment, the participants freely browsed the ordinary 3DMFC and the forward-curled 3DMFC for 30 s from a near-ground perspective; the same starting position and line-of-sight angle were used in both cases.

4.1.1.1. Heatmap analysis

When the participants browsed the ordinary 3DMFC, the eye fixations were concentrated near and symmetric around the central axis and fell mainly in the area closest to the participants. Figure 4 illustrates the heatmap of one participant’s eye fixations during free browsing.

Figure 4. Heatmaps of eye fixations during one participant’s free browsing.


When the participant browsed the forward-curled 3DMFC, the eye fixations were more extensively distributed throughout the region, especially in the central axis region, the remote region, and the bilateral regions, with varying degrees of enhancement (Figure 4). When the 3D map was curled forward and deformed, the distance to the remote area was reduced, and 3D objects were enlarged, which enabled a better display of the information of distant features, and the distant, remote areas of the map were quickly noticed by the participant. The coverage area of the eye fixations was greater in the forward-curled map than in the ordinary map, and the scope of the participant’s visual search was wider, which enhanced the information in the distant, remote areas and each participant’s understanding of the overall situation. Notably, in Figure 4, a floor plan of the Forbidden City is used to display the heatmaps of the fixations; the material used in the experiment is shown in Figures 1 and 2. The heatmap shows only one participant’s fixation information; the fixation heatmaps of the other participants were similar.

4.1.1.2. Number of fixations

During free browsing, the average number of eye fixations in the nearby areas (AOI-A and AOI-B) was greater than that in the remote, distant areas (AOI-C and AOI-D; the four AOIs shown in Figures 1 and 2) in the ordinary 3DMFC (Figure 5). Compared with the ordinary 3DMFC, the forward-curled 3DMFC yielded a greater number of eye fixations in the remote, distant areas (AOI-C and AOI-D), and the numbers of eye fixations in the four areas did not differ considerably. The forward-curled deformation map allowed participants to notice the spatial information in the map more effectively.

Figure 5. The average number of eye fixations during free browsing for the forward-curled and ordinary 3DMFC.


4.1.1.3. Time to first fixation

In the forward-curled 3DMFC, the average time to first fixation was approximately the same for the four AOIs; in particular, the times to first fixation for AOI-C and AOI-D were shorter than those in the ordinary 3DMFC (Figure 6). This was because the nearby areas in the ordinary 3D map occupied most of the participants’ field of view from the near-ground perspective. Therefore, when a participant browsed the map, they generally first observed the information in their field of view, adjusted the viewing angle with the rotate/zoom/pan tools, and then examined the information in the remote, distant area. In the forward-curled map, the distance between the participant and the remote, distant area was reduced, and all the content in the map was displayed within the participant’s field of vision and tended to be the same size. The participants needed only to rotate their eyes up and down to attend to the content throughout the entire map, which took less time.

Figure 6. Average time to first fixation during free browsing on the forward-curled and ordinary 3DMFC.


4.1.1.4. Number of view switches

The participants exhibited an average of 7.38 view switches while browsing the forward-curled 3DMFC, which was significantly lower than the average of 12.00 view switches observed during browsing of the ordinary 3DMFC. Moreover, participants were able to acquire sufficient information from the forward-curled 3DMFC with fewer operations. This is because the participants could browse the front view and the top view in the forward-curled 3DMFC and could observe and browse the map smoothly from the ego-view to the bird-view perspectives, enabling them to obtain the global information and information in remote, distant areas. The participants browsed the nearby area in the map from the ego-view perspective and could clearly observe the building distribution and road layouts at close range. The participants obtained information about the remote, distant areas in the map from a bird-view; the buildings in the remote, distant areas were pulled closer after the map was curled, which increased the observable content in the field of view and reduced the number of occlusions between the buildings and the participants and the obstruction of the road by the buildings. The building and road distributions could be clearly observed, and participants obtained more information from the initial viewing angle, reducing view switching. The forward-curled 3D map provided ego- and bird-view perspectives and ensured continuity between the different perspectives; moreover, information about the whole region could be obtained with fewer operations.

4.1.2. Analysis of wayfinding tasks

4.1.2.1. Completion efficiency

The average time spent on each task using the forward-curled 3DMFC was lower than that using the ordinary 3DMFC, and the number of view switches using the forward-curled 3DMFC was also lower than that using the ordinary 3DMFC (Table 2).

Table 2. Task completion time and the number of view switches for wayfinding tasks.

In the tasks using the ordinary 3DMFC (tasks 1, 2, and 3), participants could observe the map only from a head-up perspective from the initial view. Therefore, they needed more operations (panning, rotating, zooming) and view switches to obtain information about the remote, distant areas and complete the task; the task completion time was 22–34 s.

In the tasks using the forward-curled 3DMFC (tasks 4, 5, and 6), participants could observe the map from the initial head-up perspective (nearby area) and top-down perspective (remote, distant areas), which enabled the participants to better understand the global area information, thereby reducing the number of operations and task completion time.

4.1.2.2. Analysis of eye-tracking data

Four indicators (total fixation duration, number of fixations, average fixation duration, and fixation frequency) were used to measure participants’ visual information processing during the wayfinding task. The total fixation duration, number of fixations, fixation frequency, and average fixation duration with the forward-curled 3DMFC were lower than those with the ordinary 3DMFC (Table 3). This indicates that using curled deformation maps for wayfinding can reduce the difficulty of visual information processing and the amount of visible information participants need to process during wayfinding tasks. Moreover, it can reduce the time required to interpret, match, and memorize information. However, it is worth noting that the forward-curled 3DMFC exhibited slightly lower information processing efficiency than the ordinary 3DMFC.

Table 3. Average eye-tracking parameters for wayfinding tasks.

The saccade number and saccade frequency were employed as measures of visual search during the wayfinding task. The saccade number associated with the forward-curled 3DMFC was lower than that observed with the ordinary 3DMFC, whereas the saccade frequency of the forward-curled 3DMFC was higher than that of the ordinary 3DMFC (Table 3). These findings indicate that when participants used the forward-curled 3DMFC for wayfinding, they required less visual search while being more efficient. Furthermore, the average pupil diameter with the curled map was smaller than that with the ordinary map; thus, to some extent, the cognitive burden associated with using the forward-curled 3D map was lower.

The above analysis was based on average data and did not reflect the strength of individual differences between the groups using the curled deformation map and the ordinary map. Therefore, we used the Kolmogorov‒Smirnov (K-S) test to examine the normality of the data distributions (Table 4). Normally distributed data were analyzed using a t test to determine whether there was a significant difference between the two sets of data; here, “t” represents the t test statistic and “p” denotes two-tailed significance. If the data did not follow a normal distribution, a Kruskal‒Wallis (K-W) test was performed instead; here, “H” denotes the K-W test statistic and “p” represents asymptotic significance. The results of these tests are shown in Table 5.
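This normality-gated test selection can be expressed compactly with SciPy. The sketch below mirrors the procedure as described (K-S normality check per group, then t test or K-W test); the exact K-S parameterization used in the study is not stated, so a one-sample test against a standard normal after standardizing each sample is assumed.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """K-S normality check on each group, then an independent-samples
    t test if both pass, otherwise a Kruskal-Wallis test. Returns the
    test name, its statistic, and the p value."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)

    def is_normal(x):
        # Standardize, then test against a standard normal (an assumed
        # parameterization; the paper does not specify its own).
        z = (x - x.mean()) / x.std(ddof=1)
        return stats.kstest(z, 'norm').pvalue > alpha

    if is_normal(a) and is_normal(b):
        t, p = stats.ttest_ind(a, b)   # "t" statistic, two-tailed p
        return 't', t, p
    h, p = stats.kruskal(a, b)         # "H" statistic, asymptotic p
    return 'K-W', h, p
```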

Table 4. K-S test results of eye-tracking parameters.

Table 5. Parametric t tests and K-W test results of eye-tracking data.

In contrast to those for the ordinary 3DMFC, the total fixation duration (T = 4.073, p < 0.01), number of fixations (T = 4.115, p < 0.01) and saccade frequency (T = -3.523, p < 0.01) were significantly different for the forward-curled 3DMFC, but there were no significant differences in the fixation frequency (T = 1.130, p > 0.05), average pupil diameter (T = 1.354, p > 0.05), average fixation duration (H = 0.780, p > 0.05) or number of saccades (H = 0.285, p > 0.05) (Table 5). According to the average results of these indicators (Table 3), the wayfinding task involving the curled deformation map was completed significantly faster, the amount of information needing processing was lower, the visual search efficiency was greater, and the operation was simpler, with significant differences in these metrics. Although the cognitive burden was also reduced, this difference was not significant.

4.2. Public cognition map

According to the public cognition map of the ordinary 3DMFC (), the average mention rate of buildings is mainly distributed between 0.01 and 0.25, and the three buildings with mention rates above 0.5 are concentrated on the central axis of the Forbidden City. The average mention rate of road sections is greater for the central and front main roads and lower for the surrounding roads and the roads connecting the palace gates. Regions farther from the map origin have a lower mention rate (0–0.5), whereas regions closer to the map origin have a higher overall mention rate (0.5–1).

Figure 7. Public cognition map of mention rates about buildings, road sections and regions for ordinary and forward-curled 3DMFC. Notably, when the building mention rate is 0, the map becomes transparent and reveals the color of the corresponding plane map region.


In the public cognition map of the forward-curled 3DMFC (), the average mention rate of buildings is higher overall, and the proportion falling between 0.25 and 0.5 increased. The mention rates still reflect lower values for the surrounding roads and the roads connecting the palace gates and higher values for the main north-south roads. The mention rate of the central and front regions is higher, ranging from 0.5 to 1.

Compared with the public cognition map of the ordinary 3DMFC, in the public cognition map of the forward-curled 3DMFC (), the average mention rate of buildings is greater and more widely distributed, and the average mention rate of buildings that are far from the map origin and on both sides of the central axis is higher. The average mention rate still reflects the lower mention rate of the surrounding roads and the higher mention rate of the main roads. Moreover, the region mention rate is higher overall. The forward-curled deformation map can help individuals grasp the overall layout of the Forbidden City effectively.
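The mention rate underlying these maps is simply the fraction of participants who included a given element in their sketch map. A minimal sketch of this computation, with hypothetical element names and records rather than the study's actual data:

```python
# Hypothetical mention-rate computation: for each spatial element, count
# the participants whose sketch map includes it, divided by the total
# number of participants. Element names and records are illustrative only.
from collections import Counter

participant_sketches = [
    {"Meridian Gate", "Hall of Supreme Harmony", "central axis road"},
    {"Meridian Gate", "Hall of Supreme Harmony"},
    {"Meridian Gate", "side road"},
    {"Meridian Gate", "Hall of Supreme Harmony", "side road"},
]

counts = Counter()
for sketch in participant_sketches:
    counts.update(sketch)

n = len(participant_sketches)
mention_rate = {element: c / n for element, c in counts.items()}
print(mention_rate["Meridian Gate"])  # → 1.0 (mentioned by every participant)
```

Rendering these per-element rates as colors over buildings, road sections and regions yields public cognition maps like those in the figure.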

5. Discussion

5.1. Similarities and differences in fixation behavior between the two stimuli

Different fixation distributions emerged when participants freely browsed the two maps. The ordinary 3DMFC was mainly observed from an ego-view perspective, and participants who wanted more information had to switch views frequently. While browsing the forward-curled 3DMFC, participants could observe the map smoothly from both the ego-view and bird-view perspectives; accordingly, the fixation points covered a wider area, and the scope of the participant's visual search was broader. Moreover, the distance to the remote areas was reduced and the 3D objects there were enlarged, which enhanced observability and thereby the participants' sense of space and overall spatial awareness.

In the ordinary 3DMFC, the time to first fixation differed greatly between the nearby areas and the remote, distant areas, whereas in the forward-curled 3DMFC the two were similar. This is because when browsing an ordinary 3D map, the information within the field of view is usually observed first, with nearby areas occupying most of the participant's field of view; information in the distant areas can be observed only after adjusting the viewing angle through various operations, producing a difference in the time to first fixation. In the forward-curled map, the distance between the participant and the remote, distant areas is reduced, and all the map content is displayed within the participant's field of view. Thus, participants needed to move only their eyes to observe the entire map, and they first fixated on the four areas after approximately the same amount of time.
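The pulling-closer effect discussed here can be illustrated with a simple bend-style transform in the spirit of Barr's (1984) global deformations: beyond a bend line, map coordinates are wrapped onto a cylinder so that distant ground rises toward the viewer. This is a minimal sketch for ground-plane points, not the paper's exact formulation; the bend line `y0` and radius `r` are illustrative parameters.

```python
# Minimal forward-curl sketch: points beyond the bend line y0 are wrapped
# onto a cylinder of radius r, so remote areas are pulled closer in y and
# lifted in z toward the viewer. Assumes input points lie on the ground
# plane (z = 0); y0 and r are illustrative, not the study's values.
import numpy as np

def forward_curl(xyz, y0=100.0, r=80.0):
    x, y, z = xyz[:, 0], xyz[:, 1].copy(), xyz[:, 2].copy()
    far = y > y0
    theta = (y[far] - y0) / r                  # arc length -> bend angle
    y[far] = y0 + r * np.sin(theta)            # horizontal extent shrinks...
    z[far] = z[far] + r * (1 - np.cos(theta))  # ...while height increases
    return np.column_stack([x, y, z])

pts = np.array([[0.0, 50.0, 0.0],    # nearby point: unchanged
                [0.0, 150.0, 0.0],   # remote points: pulled closer, lifted
                [0.0, 200.0, 0.0]])
curled = forward_curl(pts)
```

Applying such a transform to the whole scene keeps the nearby ego-view region undistorted while bringing the curled far region into the same field of view, which is exactly why the time to first fixation equalizes across areas.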

5.2. Limitations

In this study, we evaluated the advantages of using a curled deformation map to address occlusion problems and to highlight spatial information in different regions. The visual differences between ordinary and deformed 3D maps were studied, but the experiments were limited to a desktop environment rather than immersive virtual environments, such as an HMD or CAVE, that can simulate real environments (Dong et al. Citation2020, Citation2022; Iftikhar, Shah, and Luximon Citation2020). In future work, immersive virtual reality (VR) and augmented reality (AR) environments should be considered: combining (deformed) 3D maps with VR and AR would allow 3D maps to be presented immersively and would enable exploration of how different experimental environments and deformation methods affect map visualization and cognition.

Another limitation of this study is that only forward-curled deformation, which emphasizes the front faces of ground objects, was studied; stimulus materials with other deformation modes applied to different areas were not tested. For example, forward-downward curling, left-right curling and doming can also emphasize different areas of a 3D map. Moreover, many other 3D visualization methods exist; for example, a distortion algorithm based on focus-and-context theory has been used to improve the visibility of interior units (Ying et al. Citation2019). Because no other visualization methods were compared between the experimental and control groups, the conclusions establish only the advantages of forward-curled deformation maps over ordinary 3D maps. In future work, we could explore the effects and cognitive characteristics of different deformation and visualization methods for 3D maps.

6. Conclusions

In this study, we used ordinary 3DMFCs and forward-curled 3DMFCs as experimental materials. We conducted experiments to determine whether forward curling deformation can improve the ability of participants to complete wayfinding tasks. The main research results of this paper are as follows:

  1. With a forward-curled map, participants can smoothly observe and browse the map from both the ego-view and bird-view perspectives. Curled deformation visualization techniques can reduce or eliminate spatial occlusion and enhance observability, which improves people’s sense of space, overall spatial awareness and overall understanding of the space.

  2. A curled deformation map can improve a participant’s task completion efficiency, reducing the amount of information needed for processing and the difficulty of various operations. In addition, information can be more easily interpreted, and the cognitive burden is not increased.

  3. The public cognition map with forward-curled 3DMFCs has a higher overall mention rate of spatial elements. The forward-curled deformation method can enhance a participant’s spatial perception of remote, distant areas and their understanding of the overall spatial structure of the region.

When combined with VR and AR equipment, the forward-curled deformation map can be applied in various scenarios, such as exploring unfamiliar environments, because its visualization characteristics allow nearby environmental information to be observed together with the details and distribution patterns of distant buildings. Consequently, it facilitates faster adaptation to new environments. Additionally, it can be used for wayfinding navigation by enabling path planning from the starting position and optimizing the route as detailed information is progressively acquired.

Acknowledgments

We are grateful to all the participants in the experiment for their contributions to the study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [Grant number 42071366].

Notes on contributors

Shen Ying

Shen Ying received a B.S. (1999) in Cartography from the Wuhan Technical University of Surveying and Mapping (WTUSM) and MSc and PhD degrees in Cartography and GIS from Wuhan University in 2002 and 2005, respectively. He has held lecturer and professor positions at Wuhan University, where he leads the Department of Cartography and GIS in the School of Resource and Environmental Sciences. His research interests include spatial visualization, 3D GIS, 3D cadastre and HD maps for autonomous driving.

Junru Su

Junru Su is a student pursuing her PhD degree in 3D spatial visualization and spatial cognition at the School of Resource and Environmental Sciences of Wuhan University.

Yuan Zhuang

Yuan Zhuang received her bachelor’s and MSc degrees from Wuhan University in 2020 and 2023, respectively. Her research interests include visual analysis and spatial cognition.

Lina Huang

Lina Huang is an associate professor at Wuhan University. She is engaged in cartography, map design, geospatial information visualization and visual analysis.

References

  • Barr, A. 1984. “Global and Local Deformations of Solid Primitives.” ACM SIGGRAPH Computer Graphics 18 (3): 21–30. https://doi.org/10.1145/964965.808573.
  • Chen, C., W. Chang, and W. Chang. 2009. “Gender Differences in Relation to Wayfinding Strategies, Navigational Support Design, and Wayfinding Task Difficulty.” Journal of Environmental Psychology 29:220–226. https://doi.org/10.1016/j.jenvp.2008.07.003.
  • Dalton, R. 2001. Spatial Navigation in Immersive Virtual Environments. PhD diss, University of London.
  • Dong, W., H. Liao, Z. Zhan, B. Liu, S. Wang, and T. Yang. 2019. “New Research Progress of Eye Tracking-Based Map Cognition in Cartography Since 2008.” Acta Geographica Sinica 74 (3): 599–614. https://doi.org/10.11821/dlxb201903015.
  • Dong, W., T. Qin, T. Yang, H. Liao, B. Liu, L. Meng, and Y. Liu. 2022. “Wayfinding Behavior and Spatial Knowledge Acquisition: Are They the Same in Virtual Reality and in Real-World Environments?” Annals of the American Association of Geographers 112 (1): 226–246. https://doi.org/10.1080/24694452.2021.1894088.
  • Dong, W., T. Yang, H. Liao, and L. Meng. 2020. “How Does Map Use Differ in Virtual Reality and Desktop-Based Environments?” International Journal of Digital Earth 13 (12): 1484–1503. https://doi.org/10.1080/17538947.2020.1731617.
  • Fuhrmann, S., O. Komogortsev, and D. Tamir. 2009. “Investigating Hologram-Based Route Planning.” Transactions in GIS 13 (s1): 177–196. https://doi.org/10.1111/j.1467-9671.2009.01158.x.
  • Furnas, G. 1986. “Generalized Fisheye Views.” Paper presented at Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 16–23. Boston, Massachusetts, USA, April 13–17.
  • Golledge, R. G., T. R. Smith, J. W. Pellegrino, S. Doherty, and S. P. Marshall. 1985. “A Conceptual Model and Empirical Analysis of children’s Acquisition of Spatial Knowledge.” Journal of Environmental Psychology 5 (2): 125–152. https://doi.org/10.1016/S0272-4944(85)80014-1.
  • Guo, R., Y. Chen, S. Ying, G. Lü, and Z. Li. 2018. “Geographic Visualization of Pan-Map with the Context of Ternary Spaces.” Geomatics and Information Science of Wuhan University 43 (11): 1603–1610. https://doi.org/10.13203/j.whugis20180373.
  • Hansen, D., and Q. Ji. 2010. “In the Eye of the Beholder: A Survey of Models for Eyes and Gaze.” IEEE Transactions on Pattern Analysis & Machine Intelligence 32:478–500. https://doi.org/10.1109/TPAMI.2009.30.
  • He, B., W. Dong, H. Liao, Q. Ying, B. Shi, J. Liu, and Y. Wang. 2023. “A Geospatial Image Based Eye Movement Dataset for Cartography and GIS.” Cartography and Geographic Information Science 50 (1): 96–111. https://doi.org/10.1080/15230406.2022.2153172.
  • Herman, L., S. Popelka, and V. Hejlová. 2017. “Eye-Tracking Analysis of Interactive 3D Geovisualization.” Journal of Eye Movement Research 10:3. https://doi.org/10.16910/jemr.10.3.2.
  • Huang, L., D. Zhang, S. Ying, and T. Ai. 2021. “Influence of Individual Characteristics on Spatial Cognitive Ability in Desktop Virtual Environment and Real Scene.” Acta Geodaetica et Cartographica Sinica 50 (4): 509–521. https://doi.org/10.11947/j.AGCS.2021.20200134.
  • Iftikhar, H., P. Shah, and Y. Luximon. 2020. “Human Wayfinding Behaviour and Metrics in Complex Environments: A Systematic Literature Review.” Architectural Science Review 64 (5): 452–463. https://doi.org/10.1080/00038628.2020.1777386.
  • Jenny, H., and B. Jenny. 2011. “Terrain Bender.” http://www.terraincartography.com/terrainbender/.
  • Jobson, C. 2013. “Here & There: Horizonless Projections of Manhattan.” https://www.thisiscolossal.com/2013/05/here-there-horizonless-projections-of-manhattan/.
  • Judge, S., and L. Harrie. 2020. “Visualizing a Possible Future: Map Guidelines for a 3D Detailed Development Plan.” Journal of Geovisualization and Spatial Analysis 4 (1): 7. https://doi.org/10.1007/s41651-020-00049-4.
  • Kadmon, N., and E. Shlomi. 1978. “A Polyfocal Projection for Statistical Surfaces.” The Cartographic Journal 15 (1): 36–41. https://doi.org/10.1179/caj.1978.15.1.36.
  • Kiefer, P., I. Giannopoulos, and M. Raubal. 2014. “Where Am I? Investigating Map Matching During Self-Localization with Mobile Eye Tracking in an Urban Environment.” Transactions in GIS 18 (5): 660–686. https://doi.org/10.1111/tgis.12067.
  • Kuliga, S. F., T. Thrash, R. Dalton, and C. Hölscher. 2015. “Virtual Reality As an Empirical Research Tool — Exploring User Experience in a Real Building and a Corresponding Virtual Model.” Computers, Environment and Urban Systems 54:363–375. https://doi.org/10.1016/j.compenvurbsys.2015.09.006.
  • Lawton, C. A. 1994. “Gender Differences in Way-Finding Strategies: Relationship to Spatial Ability and Spatial Anxiety.” Sex Roles 30 (11): 765–779. https://doi.org/10.1007/BF01544230.
  • Li, W., and Y. Chen. 2012. “Cartography Eye Movements Study and Experimental Parameter Analysis.” Bulletin of Surveying & Mapping 10:16–20. http://tb.chinasmp.com/CN/0494-0911/home.shtml.
  • Lloyd, P. B. 2018. “Diagrammatic Maps of the New York Subway: An Historical Perspective.” Paper presented at 10th International Conference on the Theory and Application of Diagrams (Diagrams), 219–227. Edinburgh, Scotland. Edinburgh Napier Univ, June 18–22.
  • Lu, X., A. Tomkins, S. Hehl-Lange, and E. Lange. 2021. “Finding the Difference: Measuring Spatial Perception of Planning Phases of High-Rise Urban Developments in Virtual Reality.” Computers, Environment and Urban Systems 90:101685. https://doi.org/10.1016/j.compenvurbsys.2021.101685.
  • Mackinlay, J., G. Robertson, and S. Card. 1991. “The Perspective Wall: Detail and Context Smoothly Integrated.” Paper presented at Conference on Human Factors in Computing Systems, 173–176. New Orleans, LA, USA, April 27–May 2.
  • Manson, S. M., L. Kne, K. R. Dyke, J. Shannon, and S. Eria. 2012. “Using Eye-Tracking and Mouse Metrics to Test Usability of Web Mapping Navigation.” Cartography and Geographic Information Science 39 (1): 48–60. https://doi.org/10.1559/1523040639148.
  • Marques, L. F., J. A. Tenedorio, M. Burns, T. Romão, F. Birra, J. Marques, and A. Pires. 2017. “Cultural Heritage 3D Modelling and Visualisation within an Augmented Reality Environment, Based on Geographic Information Technologies and Mobile Platforms.” ACE: Architecture, City and Environment 11 (33): 117–136. https://doi.org/10.5821/ace.11.33.4686.
  • Marshall, C. 2018. “A Huge Scale Model Showing Ancient Rome at its Architectural Peak (Built Between 1933 and 1937).” https://www.openculture.com/2018/03/behold-a-huge-scale-model-of-ancient-rome-at-its-architectural-peak.html.
  • Midtbø, T., and L. Harrie. 2021. “Visualization of the Invisible (Editorial).” Journal of Geovisualization and Spatial Analysis 5 (1): 13. https://doi.org/10.1007/s41651-021-00080-z.
  • Millen, A. E., and P. J. B. Hancock. 2019. “Eye See Through You! Eye Tracking Unmasks Concealed Face Recognition Despite Countermeasures.” Cognitive Research: Principles and Implications 4 (1): 23. https://doi.org/10.1186/s41235-019-0169-0.
  • Möser, S., P. Degener, R. Wahl, and R. Klein. 2008. “Context Aware Terrain Visualization for Wayfinding and Navigation.” Computer Graphics Forum 27 (7): 1853–1860. https://doi.org/10.1111/j.1467-8659.2008.01332.x.
  • Nenko, A., A. Koniukhov, and M. Petrova. 2019. “Areas of Habitation in the City: Improving Urban Management Based on Check-In Data and Mental Mapping.” Paper presented at International Conference on Electronic Governance and Open Society: Challenges in Eurasia, Cham, 235–248. St. Petersburg, Russia, November 13–14.
  • Ning, X., Q. Zhu, H. Zhang, C. Wang, Z. Han, J. Zhang, and W. Zhao. 2020. “Dynamic Simulation Method of High-Speed Railway Engineering Construction Processes Based on Virtual Geographic Environment.” ISPRS International Journal of Geo-Information 9 (5). https://doi.org/10.3390/ijgi9050292.
  • Parush, A., and D. Berman. 2004. “Navigation and Orientation in 3D User Interfaces: The Impact of Navigation Aids and Landmarks.” International Journal of Human-Computer Studies 61:375–395. https://doi.org/10.1016/j.ijhcs.2003.12.018.
  • Pasewaldt, S., A. Semmo, M. Trapp, and J. Döllner. 2014. “Multi-Perspective 3D Panoramas.” International Journal of Geographical Information Science. https://doi.org/10.1080/13658816.2014.922686.
  • Pasewaldt, S., M. Trapp, and J. Döllner. 2011. “Multiscale Visualization of 3D Geovirtual Environments Using View-Dependent Multi-Perspective Views.” Journal of WSCG 19:111–118. http://dblp.uni-trier.de/db/journals/jwscg/jwscg19.html#PasewaldtTD11.
  • Popelka, S., L. Herman, T. Řezník, M. Pařilová, K. Jedlička, J. Bouchal, M. Kepka, and K. Charvát. 2019. “User Evaluation of Map-Based Visual Analytic Tools.” ISPRS International Journal of Geo-Information 8 (8): 363. https://doi.org/10.3390/ijgi8080363.
  • Richmond, J., and C. Nelson. 2009. “Relational Memory During Infancy: Evidence from Eye Tracking.” Developmental Science 12:549–556. https://doi.org/10.1111/j.1467-7687.2009.00795.x.
  • Sederberg, T., and S. Parry. 1986. “Free-Form Deformation of Solid Geometric Models.” ACM SIGGRAPH Computer Graphics 20 (4): 151–160. https://doi.org/10.1145/15886.15903.
  • Shuang, W., Y. Chen, Y. Yuan, H. Ye, and S. Zheng. 2016. “Visualizing the Intellectual Structure of Eye Movement Research in Cartography.” ISPRS International Journal of Geo-Information 5:168. https://doi.org/10.3390/ijgi5100168.
  • Sielicka, K. M., and I. Karsznia. 2019. “Evaluating Map Specifications for Automated Generalization of Settlements and Road Networks in Small-Scale Maps.” Miscellanea Geographica 23 (4): 242–255. https://doi.org/10.2478/mgrsd-2019-0025.
  • Sorene, P. 2016. “Istanbul Infinity: Aydin Büyüktas Recreates Turkish Cityscapes in a New Dimension.” http://flashbak.com/istanbul-infinity-aydin-buyuktas-recreates-turkishcityscapes-in-a-new-dimension-52521/.
  • Tang, L., X. Peng, C. Chen, H. Huang, and D. Lin. 2019. “Three-Dimensional Forest Growth Simulation in Virtual Geographic Environments.” Earth Science Informatics 12 (1): 31–41. https://doi.org/10.1007/s12145-018-0356-4.
  • Tang, F., and A. Ren. 2012. “GIS-Based 3D Evacuation Simulation for Indoor Fire.” Building & Environment 49:193–202. https://doi.org/10.1016/j.buildenv.2011.09.021.
  • Tominski, C., J. Abello, F. Ham, and H. Schumann. 2006. “Fisheye Tree Views and Lenses for Graph Visualization.” Paper presented at Tenth International Conference on Information Visualisation (IV’06), 17–24. London, UK, July 5–7.
  • Ugwitz, P., O. Kvarda, Z. Juříková, C. Šašinka, and S. Tamm. 2022. “Eye-Tracking in Interactive Virtual Environments: Implementation and Evaluation.” Applied Sciences 12:3. https://doi.org/10.3390/app12031027.
  • Vallance, S., and P. Calder. 2001. “Context in 3D Planar Navigation.” Australian Computer Science Communications 23 (5): 93–99. https://doi.org/10.1109/AUIC.2001.90628.
  • Yang, B., and H. Li. 2021. “A Visual Attention Model Based on Eye Tracking in 3D Scene Maps.” ISPRS International Journal of Geo-Information 10. https://doi.org/10.3390/ijgi10100664.
  • Yasumoto, S., A. Jones, T. Nakaya, and K. Yano. 2011. “The Use of a Virtual City Model for Assessing Equity in Access to Views.” Computers, Environment and Urban Systems 35:464–473. https://doi.org/10.1016/j.compenvurbsys.2011.07.002.
  • Ying, S., N. Chen, W. Li, C. Li, and R. Guo. 2019. “Distortion Visualization Techniques for 3D Coherent Sets: A Case Study of 3D Building Property Units.” Computers, Environment and Urban Systems 78:101382. https://doi.org/10.1016/j.compenvurbsys.2019.101382.
  • Ying, S., W. Zhang, J. Su, and L. Huang. 2021a. “The Cognitive View of the Earth with the Cases of Path-Finding Based on Google Earth.” Acta Geodaetica et Cartographica Sinica 50 (6): 739–748. https://doi.org/10.11947/j.AGCS.2021.20210050.
  • Ying, S., Y. Zhuang, L. Huang, N. Chen, and W. Zhang. 2020. “Impact of Gender, Cognitive Differences in 3D Scenes on Wayfinding.” Geomatics and Information Science of Wuhan University 45 (3): 317–324. https://doi.org/10.13203/j.whugis20190184.
  • Ying, S., Y. Zhuang, L. Huang, H. Wang, and Z. Yin. 2021b. “Analysis of the Correlation Between Spatial Cognitive Abilities and Wayfinding Decisions in 3D Digital Environments.” Behaviour & Information Technology 40 (8): 809–820. https://doi.org/10.1080/0144929X.2020.1726468.
  • Zhang, L., L. Zhang, and X. Xu. 2016. “Occlusion-Free Visualization of Important Geographic Features in 3D Urban Environments.” ISPRS International Journal of Geo-Information 5 (8): 138. https://doi.org/10.3390/ijgi5080138.