The Design of Virtual Reality Tests for Spatial Navigation Ability
Chen Yan, Tian Xuetao, Luo Fang
(Department of Psychology, Beijing Normal University, Beijing 100875, China)
Abstract: Spatial navigation is a critical cognitive ability that ensures effective functioning in daily work and life. With the continuous advancement of Virtual Reality (VR) technology, its compatibility with spatial navigation research has become increasingly prominent. Under current technological conditions, notable differences remain between spatial perception and behavioral performance in VR versus the real world. Researchers should continuously verify and update their findings in this evolving technological context. When designing spatial navigation tests based on VR, it is essential to first comprehensively consider equipment fidelity, user familiarity, and user experience to select appropriate display devices and locomotion techniques. Second, one must thoroughly understand the design logic of spatial navigation paradigms, which typically include "learning" and "testing" phases, while ensuring scientific rigor in task design at each stage. Third, when designing scenarios and tasks, environmental factors affecting spatial navigation should be systematically manipulated to regulate test difficulty. Given the complexity of VR assessment tools, researchers must evaluate and control the quality of these instruments across multiple dimensions.
Keywords: spatial navigation ability, virtual reality, simulation-based assessment
1. Introduction
In recent years, psychological measurement has gradually shifted from laboratory paradigms assessing basic cognitive abilities to evaluating comprehensive skills required for individuals to perform specific activities in real-world contexts. Among spatial abilities, previous research primarily focused on small-scale spatial ability—the capacity to perceive, imagine, and mentally manipulate objects external to oneself, such as mental rotation of figures or imagining paper folding and unfolding. However, this "basic" construct is insufficient to fully describe the cognitive processes involved in real-world orientation, spatial layout learning and memory, and path planning, which fall within the domain of large-scale spatial ability, known as spatial navigation ability \cite{Wolbers2010}. This ability shows significant individual differences in the population \cite{Spiers2023, van_der_Ham2020, Zhang2023} and is susceptible to aging \cite{Lester2017}. Declining sense of direction and spatial memory represent key early cognitive markers for disorders such as Alzheimer's disease (AD) \cite{Coughlan2018}. Therefore, accurate quantification of individual differences in spatial navigation ability is a crucial prerequisite for both personnel selection in specific positions and early-stage monitoring and intervention for AD.
Early spatial navigation tests primarily consisted of self-report questionnaires (e.g., \cite{Hegarty2002, Lawton1994}), where participants self-assessed their spatial navigation experiences in daily life. While questionnaire methods are easy to administer and demonstrate acceptable reliability and validity—remaining a common auxiliary measurement tool—they inevitably lack objectivity. To more directly observe performance differences among individuals with varying abilities, researchers have conducted spatial navigation tests in real-world environments such as residential areas \cite{Ishikawa2006}, botanical gardens \cite{Muffato2016}, nuclear facilities \cite{Stites2020}, and campuses \cite{Pullano2024}. Such studies require large-scale spaces where participants can physically walk through scenes, ensuring high ecological validity. However, they also present limitations: they are time-consuming and labor-intensive, difficult to replicate, hard to control for factors like noise, weather, and traffic, and cannot be generalized to other scenarios \cite{van_der_Ham2015}.
To address these issues, an increasing number of researchers have adopted simulation-based assessment methods, using VR technology to construct two- or three-dimensional scenes that simulate specific real-world environments such as towns \cite{Weisberg2016}, museums \cite{Burles2020}, and wilderness settings \cite{Gagnon2018}, or task scenarios required by specific paradigms such as virtual water mazes \cite{Thornberry2021}, Y-mazes \cite{Rodgers2012}, dual-path paradigms \cite{Weisberg2014}, and triangle completion paradigms \cite{Adamo2012, Xie2017}. Compared to real-world testing, VR tests are not constrained by venue or time, offering greater efficiency, safety, and controllability. They enable environmental design and manipulation, provide natural interaction and immediate feedback, and ensure both standardization and ecological validity. Moreover, VR devices exhibit excellent compatibility with in vivo imaging technologies, allowing for the collection of multimodal data through non-invasive techniques such as EEG, eye-tracking, and fMRI \cite{Cogne2017, Migo2016}. In recent years, an increasing number of spatial navigation VR assessment tools have been widely applied in mild cognitive impairment (MCI) screening, neurorehabilitation training, military personnel selection and professional training, and architectural space safety optimization design \cite{Castegnaro2022, Coughlan2019, DaCosta2022, Diersch2019, Feng2022, Moussavi2022, Shi2021, Tarnanas2015, Wiener2020, Yang2024, Zen2013}.
From an assessment perspective, this study first clarifies the feasibility of using VR as a test medium by comparing spatial perception and behavioral differences between VR and the real world across multiple dimensions. Second, it addresses key considerations in test design, including: how to select appropriate display devices and locomotion techniques, how to design scientifically valid assessment paradigms, and which environmental factors affecting spatial navigation task difficulty should be manipulated. Finally, it discusses quality evaluation issues for VR assessment tools.
2. Differences in Spatial Perception and Behavior Between VR and the Real World
VR technology has developed rapidly over the past two decades, profoundly reshaping research methods for human behavior \cite{Brookes2020, Tarr2002}. This technology provides immersive experiences by presenting visual images and non-visual multi-channel stimuli in real-time interactive scenes. When designing tests using VR, researchers must first determine whether findings from VR-based studies can be generalized to the real world. Currently, this remains a complex and open question, particularly controversial in the spatial navigation domain. While some supportive evidence suggests that spatial navigation performance in VR is largely similar to that in the real world \cite{Coutrot2019, Cushman2008, Lloyd2009, Marin-Morales2019, Richardson1999}, other studies have revealed differences in perception and behavior in VR compared to reality, manifested in several key aspects.
At the perceptual level, differences exist in the sensory information available in VR versus real environments. In desktop VR, where participants remain physically stationary and rely solely on visual cues for spatial navigation, the lack of vestibular and kinesthetic information \cite{Ladouce2017, Park2018} can impair performance in spatial orientation and other aspects. Additionally, conflicts between visual and vestibular inputs can easily lead to motion sickness \cite{Diersch2019, Krohn2020}. Furthermore, individuals exhibit perceptual biases regarding distance, size, and speed in VR. For example, \cite{Bhagavathula2018} compared pedestrian street-crossing behavior in VR and real environments, finding similarities in crossing intentions and perceived traffic risks but significant differences in perceived vehicle speed. \cite{Kuliga2015} found that participants in virtual environments tended to overestimate corridor length and narrowness. Eye-tracking data analysis has also revealed differences in visual attention: individuals process visual information more efficiently in real environments, while visual information search is more efficient in virtual environments \cite{Dong2022}. \cite{Kimura2017} conducted spatial reorientation tests in both real and VR versions of rectangular rooms, finding that individuals utilized different types of visual cues in the two environments. Compared to the real world, people in VR relied more on discrete, salient cues and less on geometric features of spatial layouts.
Differences in perceptual cues and processing patterns affect spatial navigation behavior and task performance. \cite{van_der_Ham2015} found that participants demonstrated better spatial memory performance in real-world environments and experimentally demonstrated that poorer performance in VR was due to the lack of direct physical engagement. \cite{Savino2019} found that participants experienced greater task load and poorer performance in virtual environments. \cite{Kalantari2024} showed that, compared to real environments, participants in VR exhibited longer exploration times and distances, displaying more information-seeking behavior, yet poorer spatial navigation performance—manifested as more wrong turns at decision points and greater directional judgment errors. Researchers inferred from participants' self-reported metrics that the heightened uncertainty, cognitive load, and perceived task difficulty in VR triggered increased information-seeking behavior, broader exploration, and more time spent searching for signs and environmental cues, consequently impairing wayfinding performance and spatial memory.
These inconsistent results can be partly attributed to evolving VR technology. \cite{Sanchez-Vives2005} compared visual complexity between real and virtual worlds, demonstrating that even high-end computers cannot replicate the complexity of the physical world. However, this study was conducted nearly 20 years ago, when resolution and image processing capabilities were far inferior to today's standards. Moreover, VR realism depends not only on visual fidelity but also on factors such as body tracking quality, frame rate, latency, and content design \cite{Cummings2016}.
Currently, differences between VR and reality are difficult to avoid. When walking in the real world, individuals receive full-body kinesthetic feedback, which requires more physical exertion than movement in VR. VR also falls short in replicating inherent multisensory cues from the real world, such as smell, sound, and fine-grained tactile information. The absence or conflict of these cues in VR can interfere with spatial navigation activities. Additionally, perceptual differences in distance and speed, along with the lack of feedback from one's own actions, may lead individuals to behave differently in VR than in the real world, such as reduced head movement and "tunnel vision" \cite{Ewart2021}. Nevertheless, VR-related technology is developing rapidly, and people's familiarity with VR devices is gradually increasing. Problems identified in previous studies may well be mitigated in the new technological environment. As \cite{Ewart2021} noted, research often lags behind technological development, and past conclusions about differences between VR and the real world may no longer apply to current VR systems. Therefore, researchers should specify hardware and software versions in their papers and continuously monitor technological changes to verify and update their findings accordingly.
3. Design of Spatial Navigation VR Tests
Simulation-based assessment aims to provide more accurate and comprehensive evaluation results by simulating real-world situations or tasks that fully engage the target ability. For spatial navigation ability, the realism of assessment tools is reflected in visual presentation (determined by display devices) and interaction methods (how to simulate human movement in real space, primarily dependent on locomotion techniques). Test designers must understand how different display devices and locomotion techniques affect participant performance and whether these effects vary by task type.
VR technology has greatly expanded the range of assessment contexts and interaction types, giving test designers considerable freedom; at the same time, the scientific soundness of content, procedures, and formats must still be ensured. By reviewing the design logic of previous spatial navigation tasks, Section 3.2 summarizes and proposes a design framework for common assessment paradigms.
In scenario and task design, VR technology's advantage lies in the comprehensiveness and convenience of variable manipulation. Test designers can add or simplify environmental elements and task rules according to assessment needs, thereby flexibly controlling test difficulty. Section 3.3 systematically organizes environmental factors affecting spatial navigation performance.
3.1 Selection of Display Devices and Locomotion Techniques
In recent years, VR headsets have offered increasingly wider fields of view (FOV), and six degrees of freedom (6-DOF) tracking technology has become widely applied, greatly enhancing VR product realism \cite{Jensen2018, Penelope2022}. \cite{Ragan2015} proposed three dimensions for evaluating VR fidelity: display fidelity, interaction fidelity, and scenario fidelity. Previous spatial navigation studies using VR have shown considerable variation in device configurations. While higher fidelity generally enhances immersion, engagement, and performance, researchers must also consider participants' familiarity with and experience using the equipment.
For spatial navigation tests, researchers are most concerned with display fidelity and interaction fidelity, which depend primarily on the choice of display devices and locomotion techniques. The two most commonly used display devices are desktop VR and head-mounted displays (HMDs). Desktop VR, also known as non-immersive VR, is built on standard PC platforms and uses mid-to-low-end graphics workstations and stereoscopic displays to generate virtual scenes; users interact with the environment through hand-controlled input devices such as keyboards, mice, or joysticks. HMDs, by contrast, enclose the user's visual and auditory perception of the external world, creating immersion in the virtual environment. Their 3D visual principle is to present a slightly different image to each eye's screen, which the brain integrates to produce a stereoscopic effect. HMDs create stronger immersive experiences and support a richer variety of interaction devices.
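To make the stereo display principle concrete, the following minimal sketch (Python with numpy; the head pose, axis convention, and the 65 mm interpupillary distance are illustrative assumptions rather than parameters of any particular headset) computes separate left- and right-eye camera positions by offsetting the tracked head position along its lateral axis. An HMD runtime performs an equivalent per-eye transform every frame before rendering the two images that the brain fuses into a stereoscopic percept.

```python
import numpy as np

def per_eye_positions(head_pos, head_yaw_deg, ipd_m=0.065):
    """Offset the head position laterally by half the interpupillary
    distance (IPD) to obtain left- and right-eye camera positions.
    head_pos: (x, y, z) in meters; head_yaw_deg: heading around the
    vertical axis; ipd_m: assumed IPD (illustrative default)."""
    yaw = np.radians(head_yaw_deg)
    # Lateral (right-pointing) axis of the head in the horizontal plane,
    # using a forward = -z, right = +x convention.
    right_axis = np.array([np.cos(yaw), 0.0, -np.sin(yaw)])
    head_pos = np.asarray(head_pos, dtype=float)
    left_eye = head_pos - right_axis * ipd_m / 2
    right_eye = head_pos + right_axis * ipd_m / 2
    return left_eye, right_eye

# Each eye's camera renders the scene from its own position; the two
# slightly different images are fused by the brain into a 3D percept.
left, right = per_eye_positions(head_pos=(0.0, 1.7, 0.0), head_yaw_deg=30.0)
print(left, right)
```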
Many studies have compared the effects of these two display types on spatial navigation performance and user experience. Early research found that participants performed better in desktop VR, with video game experience only affecting performance in desktop VR \cite{Elmqvist2008, Sousa2009}. \cite{Sousa2009} conducted interviews on user experience, with most participants reporting dizziness when wearing VR headsets and noting that tangled cables interfered with their turning behavior. However, as display fidelity and portability have improved, an increasing number of recent studies indicate that HMDs provide stronger immersion \cite{Buttussi2018, Buttussi2023, Kim2014} and that participants using HMDs explore virtual environments more thoroughly \cite{Feng2022, Ruddle2014}.
Notably, the effectiveness of display devices is closely related to the locomotion techniques they employ. Virtual locomotion techniques enable virtual agents to move arbitrarily in VR environments while keeping participants physically stationary or moving only within a limited real-world space \cite{Templeman1999}, also known as in-place locomotion techniques \cite{Buttussi2023}. Common simple devices include keyboards, mice (or their combination), and game controllers, which allow users to control movement in the virtual environment while standing or sitting in place. Earlier studies mostly supported keyboard or mouse use \cite{Lapointe2011, Thrash2015}, likely because people were more familiar with these operations, eliminating the need to learn controller usage. As VR products have become more widespread and device interaction fidelity has improved, controllers have gradually become mainstream VR accessories, accompanied by new questions: which locomotion method—teleportation or continuous movement—works better? Teleportation involves users pointing a laser beam from the controller to a distant location and instantly moving there with a button click. Continuous movement involves holding a button on the controller to walk at a constant speed. Most studies have found that teleportation induces less motion sickness, provides stronger immersion, and yields better spatial navigation performance compared to continuous movement \cite{Buttussi2021, Buttussi2023, Langbehn2018}. However, some researchers argue that teleportation creates a discontinuous sense of space, while continuous movement is less likely to cause dizziness \cite{Feng2022}. \cite{Buttussi2023} found that the impact of these two locomotion techniques on spatial navigation performance depends on display device type (HMD vs. desktop VR), virtual environment type (indoor vs. outdoor), and the type of spatial knowledge being measured. Additionally, \cite{Cherep2022} found that adaptation to teleportation varies among individuals. Therefore, researchers should consider device configuration, scenario and task types, and target populations when choosing between these techniques.
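In implementation terms, the two controller-based schemes differ mainly in how the virtual viewpoint is updated. The sketch below (plain Python; the function names, walking speed, and teleport range are assumptions for illustration, not any VR toolkit's API) contrasts continuous movement, which advances the position by a small step every frame while a button is held, with teleportation, which jumps the position to a pointed-at target in a single update.

```python
import math

def continuous_step(pos, heading_deg, speed_mps, dt_s, button_held):
    """Continuous locomotion: while the button is held, advance the
    viewpoint by speed * dt along the current heading every frame."""
    if not button_held:
        return pos
    heading = math.radians(heading_deg)
    x, y = pos
    return (x + speed_mps * dt_s * math.sin(heading),
            y + speed_mps * dt_s * math.cos(heading))

def teleport(pos, target, max_range_m=10.0):
    """Teleportation: on a button click, jump instantly to the location
    the controller ray points at, if it lies within range."""
    dist = math.dist(pos, target)
    return target if dist <= max_range_m else pos

# Continuous movement over one second at 90 Hz vs. a single teleport.
pos = (0.0, 0.0)
for _ in range(90):
    pos = continuous_step(pos, heading_deg=0.0, speed_mps=1.4,
                          dt_s=1 / 90, button_held=True)
print(pos)                                 # ~1.4 m forward after one second
print(teleport((0.0, 0.0), (0.0, 8.0)))    # instantaneous jump to the target
```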
When humans walk in real space, internal feedback from body movement is crucial for spatial updating, conveying information about changes in direction (vestibular) and position (proprioception). Individuals integrate these different sources of information in the brain to form spatial representations \cite{Montello2005}. When sensory input from the real world conflicts with input from VR, participants may experience cybersickness—symptoms including nausea, headache, and eye fatigue \cite{Chang2020, Cheng2014}. Using in-place locomotion techniques, especially when the body remains stationary, participants do not receive body movement information while the scene in their field of view continuously shifts. This conflict between sensory inputs can easily cause cybersickness and impair spatial navigation performance \cite{He2019c, Ruddle2011}. \cite{Templeman1999} argued that ecological locomotion techniques should resemble real-world human movement mechanisms and characteristics, such as continuous body involvement in movement control with perceived physical exertion, visual flow updating speed consistent with walking speed, ability to free hands for other operations during walking, and reduced concern about falling.
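Spatial updating of this kind is often formalized as path integration (dead reckoning): successive turns and steps are accumulated into an estimate of one's position, from which a homing vector can be derived, as required in triangle-completion tasks. A simplified sketch follows (Python; the convention that positive turns are to the right and the example leg lengths are assumptions for illustration).

```python
import math

def path_integrate(movements):
    """Dead-reckoning sketch of path integration: accumulate turns
    (direction changes) and steps (distances) into a position estimate,
    then return the homing vector back to the start, as in a
    triangle-completion task. movements: list of (turn_deg, step_m)."""
    heading_deg, x, y = 0.0, 0.0, 0.0
    for turn_deg, step_m in movements:
        heading_deg += turn_deg                       # positive = turn right
        x += step_m * math.sin(math.radians(heading_deg))
        y += step_m * math.cos(math.radians(heading_deg))
    home_distance = math.hypot(x, y)
    # Turn (relative to current heading) needed to face the start point.
    home_bearing = (math.degrees(math.atan2(-x, -y)) - heading_deg) % 360
    return (x, y), home_distance, home_bearing

# Two legs of a triangle: walk 4 m, turn 90 deg right, walk 3 m;
# the correct homing response is a ~127 deg turn and a 5 m walk.
print(path_integrate([(0, 4.0), (90, 3.0)]))
```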
\cite{van_der_Ham2015} investigated the effects of body movement and directional information on spatial navigation performance. The experiment included four conditions: (1) desktop VR where participants remained seated and interacted with the virtual environment on a computer screen; (2) the same setup with additional directional information (a compass on screen); (3) walking in a real environment; and (4) a "hybrid" condition designed to test the effect of "body movement"—participants walked in an open space while holding a tablet showing their current position and road conditions in the virtual environment, with real-time position updating via GPS tracking. Results showed significantly better performance in the latter two conditions on pointing and map-drawing tasks, demonstrating that direct physical engagement is important for spatial memory performance, particularly for survey knowledge formation.
Although physically walking most closely matches actual human movement patterns, it imposes high requirements on equipment cost, venue size, tracking technology, and safety assurance. Therefore, \cite{Riecke2010} attempted to decompose control of virtual movement into two aspects—turning control and forward movement control—and investigated whether both required direct physical engagement in VR. They compared three locomotion technique schemes using HMDs: (1) using a controller for both turning and forward movement; (2) physically walking; and (3) using a controller for forward movement while physically rotating in place for turning. Results showed that performance in the third scheme was significantly better than in the first and nearly identical to physical walking. This finding is significant, demonstrating that allowing only in-place body rotation can significantly enhance immersion and reduce dizziness, achieving a balance between economic efficiency and ecological validity. Subsequent studies have supported this conclusion \cite{Cherep2020, Cherep2022, Kelly2020, Lim2020}. Today, an increasing number of locomotion techniques support participant movement within small spaces, such as linear treadmills, omnidirectional treadmills \cite{Ruddle2013}, or VR bicycles.
Beyond locomotion techniques, body ownership is a core element for enhancing VR immersion and presence. Through synchronized multisensory feedback (visual, auditory, tactile, proprioceptive), users perceive the virtual body as their own \cite{Kilteni2012, Spanlang2014}, exemplified by the "rubber hand illusion" \cite{Botvinick1998}. The key to such techniques lies in synchronizing information across different sensory channels. For instance, "visual-proprioceptive" synchronization requires aligning the virtual body with users' perception of their limb positions, presenting the virtual body and limbs where they should appear. "Visual-kinesthetic" synchronization requires the virtual body to move in the same way as the real body, with actions highly consistent with users' actual movements, achievable through motion capture or controller tracking. "Visual-tactile" synchronization involves providing tactile feedback, such as controller vibration when the virtual hand in view touches an object. Thus, even when participants cannot see their own bodies in the headset, they can still experience strong embodiment. Body ownership not only reduces dizziness but also helps improve spatial navigation performance in virtual environments \cite{Pan2018, Moon2022}.
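At the frame level, this synchronization logic can be summarized as: render the virtual hand at the tracked controller pose (visual-kinesthetic/proprioceptive sync) and trigger a haptic pulse when the virtual hand contacts an object (visual-tactile sync). The sketch below is a minimal illustration under those assumptions (Python; the coordinates and the 5 cm touch radius are invented, and no specific engine API is implied).

```python
import math

def update_embodiment(tracked_hand_pos, objects, touch_radius_m=0.05):
    """One frame of body-ownership synchronization:
    visual-kinesthetic sync: render the virtual hand at the tracked
    controller position; visual-tactile sync: request a haptic pulse
    whenever the virtual hand overlaps an object."""
    virtual_hand_pos = tracked_hand_pos      # render at the tracked pose
    haptic_pulse = any(math.dist(virtual_hand_pos, obj) < touch_radius_m
                       for obj in objects)
    return virtual_hand_pos, haptic_pulse

# Hypothetical frame: the tracked hand is ~2 cm from a virtual object,
# so the hand is drawn there and a vibration pulse is requested.
print(update_embodiment((0.30, 1.10, 0.40), objects=[(0.31, 1.10, 0.42)]))
```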
After selecting display devices and locomotion techniques, researchers should consider whether to include a practice session. \cite{Grubel2017} found that the sensorimotor process of interacting with VR equipment significantly affects individual task performance. Therefore, researchers recommend adding extra practice before testing to familiarize participants with device operation until they meet preset performance standards, preventing confounding with the target ability being measured. During practice, the virtual environment, interaction devices, and test tasks should be introduced gradually to avoid overwhelming participants with too much information simultaneously \cite{Diersch2019}. Additionally, a key advantage of VR is its greater inclusivity for participant populations, accommodating the needs of special groups \cite{Kalantari2020}. Particularly for assessing spatial navigation ability in older adults or patients with neurological disorders, ecological validity must be balanced with clinical feasibility, requiring appropriate optimization of interactive elements and testing protocols \cite{Colombo2024}. For example, when using treadmills, age-related declines in sensorimotor control must be considered. To avoid fall risks, older adults often prioritize sensorimotor processing, which occupies cognitive resources needed for task completion \cite{Schafer2006}. To address this, \cite{Lovden2005} demonstrated that providing walkers (i.e., adding handrails to treadmills) improved older adults' task performance. \cite{Schellenbach2010} showed that prior treadmill practice enhanced performance.
3.2 Design Framework for Common Assessment Paradigms
Spatial navigation is a complex cognitive ability encompassing perception, learning, representation, memory, planning, and orientation \cite{Wolbers2010}, making it difficult to comprehensively assess through a single task \cite{Thornberry2021}. Moreover, spatial navigation tests have long lacked a unified theoretical framework, leading to terminological confusion, unclear ability structures, inconsistent construct names for similar task types, and ambiguous relationships between different tests \cite{Sanchez-Escudero2024, Uttal2024}. Typically, researchers design specialized navigation tasks based on specific application needs (e.g., military personnel selection \cite{Hegarty2005}) or target specific ability facets, or adapt classic tests. Although researchers have designed numerous task types, most paradigms (except those measuring "perspective taking," which use special task forms \cite{Beatini2024, Brucato2023, Sophia2024}) can essentially be reduced to a two-stage "learning → testing" format (Figure 1 [FIGURE:1]).
In the learning stage, tasks can first be divided into two categories based on whether a map is provided: (1) providing an overall layout map before departure \cite{Morganti2013, Spiers2023}; or (2) never providing a map, requiring participants to learn the space through first-person movement. In the latter case, learning methods can be further divided into two types: (a) following a pre-designed route \cite{Bullens2010, Hegarty2023, Weisberg2014, Weisberg2016}, or (b) allowing free exploration in the scene \cite{Brunec2023, Gagnon2018, Thornberry2021}. The latter more closely resembles how people explore unfamiliar environments in daily life and helps reveal individual differences in spatial learning patterns, but it can introduce confounding variables in process control that affect data quality. Test designers should determine the learning stage format after comprehensively considering measurement purposes, variable control, and data analysis approaches.
The testing stage aims to evaluate the level of spatial knowledge formed after learning, or the form of spatial representation. The most common evaluation framework is Siegel and White's (1975) "Landmark-Route-Survey Model" \cite{Siegel1975}. They proposed that people form spatial knowledge representations in new environments through three stages: (1) landmark knowledge—memory of salient, identifiable objects or scene features; (2) route knowledge—composed of sequences of landmarks and associated decisions (e.g., "turn left at the gas station, go straight past two intersections to reach the post office"), lacking metric information about angles and distances; and (3) survey knowledge—a fine-grained, stable representation similar to a "map" that integrates spatial information using an allocentric reference system and does not change with the individual's position. Subsequent empirical research shows that these three types of spatial knowledge do not necessarily develop in strict sequence. For instance, studies have found that people can complete metric information-dependent tasks such as finding shortcuts or estimating directions/distances shortly after encountering a new environment \cite{Loomis1993}. Additionally, different representations can function separately or simultaneously \cite{Peer2021}, and the speed and sequence of forming various spatial representations show substantial individual differences \cite{Ishikawa2023, Maxim2023}.
Table 1 [TABLE:1] summarizes common assessment tasks for the three types of spatial knowledge. Survey knowledge can be assessed through two approaches: (1) direct measurement by requiring participants to draw or complete maps, or (2) indirect measurement based on specific tasks where researchers assume that participants can only complete these tasks after forming metric survey knowledge, such as pointing tasks, distance estimation tasks, and shortcut tasks \cite{Epstein2017, Newcombe2018}; a minimal scoring sketch for such tasks follows Table 1.
Table 1: Assessment Tasks Corresponding to Three Types of Spatial Knowledge
Landmark Knowledge Tasks
- Landmark Recognition Task: Present a series of photos sequentially, requiring participants to judge whether the buildings or objects in the photos appeared during the learning phase \cite{Stites2020, van_der_Ham2020}.
- Simultaneously present two photos—one of a building or location in the target area and another similar distractor—requiring participants to select which photo is from the target area \cite{Pullano2024}.
Route Knowledge Tasks
- Route Repetition Task: Retrace the route from the learning phase from a first-person perspective \cite{Muffato2016}.
- Route Knowledge Task: Present path segments from the start to the destination in photo form, requiring participants to sequence these paths in spatial order \cite{Pullano2024}.
- Path-Route Task: At a given location, require participants to choose from 2-3 directions which path leads to the target location \cite{van2020}.
Survey Knowledge Tasks
- Shortcut Task: Require participants to reach the target location using the shortest path, where the most direct route was not previously traveled \cite{He2023, Stites2020}.
- Pointing Task/Direction Estimation Task: Require participants to stand (or imagine standing) at a location and indicate the direction of a non-visible landmark relative to their current position \cite{He2023, Ishikawa2006, Muffato2016}.
- Distance Estimation Task: On a blank paper, use a standard line to represent a certain real-world distance (e.g., a 2cm line representing 0.6km), requiring participants to draw a line representing their estimated distance between two specified locations \cite{Ishikawa2006}. Alternatively, present three landmarks and require participants to imagine from a bird's-eye view and judge which two landmarks are closest \cite{van_der_Ham2020}.
- Map Drawing Task: Require participants to draw the overall layout of a previously learned space on blank paper, reconstructing the positions of buildings and roads \cite{Ishikawa2006, Muffato2016}.
- Map Completion Task: Provide a simplified map requiring participants to complete the start point, end point, and designated buildings or routes \cite{Stites2020}.
- Survey Knowledge Task: Label several areas on a simplified map and present photos below, requiring participants to match each photo's building location to the corresponding area on the map \cite{Pullano2024}.
- Model-Building Task: Present a blank plane and several building top-view images, requiring participants to drag each building onto the plane to reconstruct the spatial layout \cite{Pagkratidou2020}.
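For the survey-knowledge tasks above, performance is usually scored as the deviation between the participant's response and the true value. The following sketch (Python; the landmark coordinates and trial values are fabricated for illustration) computes the signed and absolute angular error for a pointing task and a relative error for a distance estimate, two dependent measures commonly reported in this literature.

```python
import math

def pointing_error(observer, target, pointed_bearing_deg):
    """Angular error of a pointing response: difference between the true
    bearing from observer to target and the bearing the participant
    indicated, wrapped to [-180, 180] degrees."""
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    true_bearing = math.degrees(math.atan2(dx, dy)) % 360   # 0 deg = "north"
    signed = (pointed_bearing_deg - true_bearing + 180) % 360 - 180
    return signed, abs(signed)

def distance_estimation_error(true_m, estimated_m):
    """Relative error of a distance estimate (0 = perfect)."""
    return abs(estimated_m - true_m) / true_m

# Hypothetical trial: the participant points 75 deg toward a landmark whose
# true bearing is ~56.3 deg, and estimates a 600 m distance as 450 m.
print(pointing_error(observer=(0, 0), target=(150, 100), pointed_bearing_deg=75))
print(distance_estimation_error(true_m=600, estimated_m=450))
```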
3.3 Scenario and Task Design: Manipulation of Environmental Factors
When designing test scenarios and tasks, environmental factors affecting participant performance must be comprehensively considered. These factors can be broadly categorized into three types: task-related factors, visual cue-related factors, and spatial layout complexity-related factors.
Task-related factors involve considerations in setting task rules and designing items (or levels), such as whether tasks are time-limited, the number of target locations in wayfinding tasks, the number of decision points along the shortest path, whether the shortest route involves extensive backtracking \cite{Coutrot2022, Yesiltepe2023}, and whether the target direction aligns with the reference frame direction in direction judgment tasks \cite{He2018}. Researchers can manipulate such variables to differentiate item difficulty and modify or delete items based on pilot test results.
Visual cue-related factors address everyday phenomena where individuals can walk from point A to B but cannot verbally describe the route when not in the scene. To explain this, \cite{Dalton2001} distinguished between "knowledge in the head" (memory information about the scene or decision-making experience) and "knowledge in the world" (visual cues in the scene such as salient landmarks, signs, and spatial contour features). When designing scenarios, researchers must carefully consider which visual cues facilitate performance and which increase task difficulty. For example, obstacles formed by buildings, high walls, and tree clusters reduce environmental visibility, causing spatial representations to be segmented into relatively independent "sub-maps," decreasing distance and direction estimation accuracy, making spatial updating and path integration more difficult, and reducing navigation efficiency \cite{He2019a, Horner2016, Maxim2023, Meilinger2016}. While manipulating "obstacles" is difficult in real spaces, VR can readily address this. For instance, researchers created a "see-through" condition where some buildings became transparent, finding that participants' wayfinding and direction judgment performance significantly improved \cite{He2019b, He2020}. Visibility can also be manipulated by changing weather conditions, such as creating rain, snow, or fog effects \cite{Coutrot2022, Yesiltepe2023}. Beyond obstacle manipulation, direct assistance cues can be added, such as road signs \cite{Farr2012, Johanson2023}, distal global cues \cite{Peer2021, Wolbers2010}, or maps. Map presentation format \cite{Stites2020} and partial occlusion \cite{Coutrot2022} also affect task difficulty. However, excessive visual cues may increase cognitive load and produce negative effects. For example, \cite{Nori2023} added moving crowds to the same virtual scene and found that participants' spatial anxiety levels increased significantly, and navigation performance deteriorated. In \cite{Farran2015}'s study, participants showed no significant performance differences between cue-rich and cue-sparse environments, suggesting that overly complex scenes are unnecessary and may introduce additional interference.
Spatial layout complexity-related factors include the number of intersections/decision points \cite{Richter2009}, number of dead ends \cite{Yesiltepe2023}, number of circular roads \cite{Marquardt2009, Natapov2020}, average number of options at decision points (interconnection density) \cite{Slone2016}, spatial geometric structural features \cite{Kimura2017, Wolbers2010}, regularity of building arrangement \cite{Barton2014}, and street network entropy (SNE) \cite{Batty2014, Boeing2019, Coutrot2022}. Notably, the impact of spatial layout complexity on navigation performance depends on task type \cite{He2019c}. For route knowledge, which essentially consists of sequences of "stimulus-response" associations, task difficulty is mainly affected by the number of decision points, number of landmarks, and landmark similarity. In contrast, survey knowledge formation relies on accurate spatial updating and path integration, primarily affected by visibility, variability in road orientation, and regularity of building arrangement. For example, grid-like spatial layouts facilitate faster survey knowledge formation \cite{Peer2021}.
These spatial layout factors can be quantified and manipulated using space syntax metrics \cite{He2019c, Li2016, Pagkratidou2020, Yesiltepe2023}. Space syntax is a set of mathematical modeling techniques applied to architectural and urban spatial analysis, now widely used in urban planning, human geography, psychology, tourism, and anthropology to understand relationships between spatial features and human behavior \cite{Bafna2003}. \cite{Pagkratidou2020} referenced axial analysis and visibility graph analysis techniques from space syntax, using several metrics to quantify geographic location characteristics of different areas and examine their relationship with spatial memory task performance. Direction judgment results showed that participants were more accurate when pointing to areas with higher integration, connectivity, and choice values—areas that also had higher visual integration and visual connectivity. This finding has implications for task design, as researchers can use space syntax to calculate characteristic parameters of different areas and adjust task difficulty accordingly. Other research combining free exploration process data with space syntax found that in unfamiliar environments, participants preferentially explore areas with higher integration, from which they can more effectively acquire spatial information and form survey representations \cite{Brunec2023, Emo2012}.
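One concrete example of such quantification is street network entropy: in the spirit of \cite{Boeing2019}, it can be computed as the Shannon entropy of the distribution of street-segment bearings, so that grid-like layouts (bearings concentrated in a few orientation bins) score low while irregular layouts score high. The sketch below illustrates the idea in simplified form (Python; the bearing lists and the 36-bin choice are assumptions for the example, not the exact published procedure).

```python
import math
from collections import Counter

def orientation_entropy(bearings_deg, n_bins=36):
    """Shannon entropy (in bits) of street-segment bearings binned into
    n_bins equal-width orientation bins over 0-360 degrees."""
    bin_width = 360 / n_bins
    counts = Counter(int(b % 360 // bin_width) for b in bearings_deg)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A grid-like network (bearings clustered at 0/90/180/270 degrees) yields
# lower entropy than an irregular network with scattered bearings.
grid_like = [0, 90, 180, 270] * 25
irregular = [(i * 37) % 360 for i in range(100)]
print(orientation_entropy(grid_like))   # low: 2 bits
print(orientation_entropy(irregular))   # high: close to log2(36) ~ 5.2 bits
```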
Beyond manipulating these three categories of environmental factors, researchers can use VR to create scenarios impossible in the real world, such as impossible loops \cite{Galbraith2009}, compressed buildings \cite{Chan2023}, or activities unachievable in reality, such as instantaneous teleportation between non-adjacent locations \cite{Muryy2018, Vass2016, Warren2017}. These special virtual environment designs are significant for understanding the cognitive mechanisms and individual differences in human spatial navigation.
4. Quality Evaluation of VR Assessment Tools
As the compatibility between VR technology and spatial navigation research becomes increasingly prominent, more researchers are using VR as a test medium to adapt classic assessment tasks \cite{Adamo2012, Commins2020, Rodgers2012, Thornberry2021, Weisberg2014, Xie2017} or develop new, realistic spatial navigation assessment scenarios \cite{Colombo2024, Feng2022, Ranjbar2017, Shi2021, Ventura2013}. Although this new assessment approach overcomes some limitations of traditional tests, it has also raised questions in the academic community about the psychometric quality of such tools, directly affecting the interpretation of corresponding research results. \cite{Sanchez-Escudero2024} systematically reviewed the quality of VR- or serious game-based spatial navigation assessment tools published between 2012-2023 according to COSMIN guidelines \cite{Mokkink2018}, finding that none of the 37 empirical studies reported complete psychometric quality test results. Only 2 studies (5.4%) reported structural validity, using exploratory factor analysis \cite{Bellassen2012} and confirmatory factor analysis \cite{Allison2019} to demonstrate the ability structure measured by the tools. Three studies (8.1%) reported internal consistency reliability: \cite{Caffo2018}'s Virtual Reorientation Test (VReoT) treated scores from five subtests as five items, calculating a Cronbach's α coefficient of 0.79; \cite{Allison2016, Allison2019} reported reliability for two versions of a cognitive mapping task, with the first version showing α=0.86, while the revised version used Cronbach's α for continuous scores (learning phase α=0.83, recall phase α=0.35) and Kuder-Richardson-20 for dichotomous items (landmark recognition KR-20=0.87, scene recognition KR-20=0.87, free recall KR-20=0.62). Four studies (10.8%) reported test-retest reliability for five assessment tools, all using Intraclass Correlation Coefficient (ICC) \cite{Tarnanas2015, Allison2019, Coughlan2020, Park2022}. Ten studies (27%) reported criterion-related validity and diagnostic accuracy for 11 assessment tools \cite{Allison2016, Allison2019, Bellassen2012, Caffo2012, Castegnaro2022, Coughlan2019, DaCosta2022, Laczo2022, Levine2020, Park2022}, with most reporting ROC AUC for continuous scores and seven studies evaluating sensitivity and specificity using Youden's index. Nine studies (24.3%) reported convergent validity—correlations between tool results and neuropsychological or neurological measures \cite{Castegnaro2022, DaCosta2022, Kunz2015, Laczo2022, Lee2014, Lesk2014, Morganti2013, Parizkova2018, Ritchie2018}. Thirty-two studies (86.5%) reported known-group validity for 29 assessment tools—testing whether tools can effectively distinguish groups known to differ in spatial navigation ability, such as young adults, older adults with early-stage AD symptoms, and late-stage AD patients \cite{Allison2016, Bayahya2021, Bellassen2012, Bierbrauer2020, Caffo2012, Caffo2018, Castegnaro2022, Colmant2023, Coughlan2019, DaCosta2022, Davis2020, Gellersen2021, Konishi2018, Kunz2015, Laczo2021, Laczo2022, Lesk2014, Migo2016, Mohammadi2018, Morganti2013, Moussavi2022, Park2022, Parizkova2018, Pink2023, Plaza2023, Puthusseryppady2022, Ritchie2018, Serino2015, Serino2018, Tarnanas2012, Tarnanas2015, Zen2013}.
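As a reminder of what these statistics summarize, the following sketch (Python with numpy; the score matrix and the sensitivity/specificity pair are fabricated toy numbers, not data from any study cited above) computes Cronbach's alpha from an item-score matrix and Youden's index from sensitivity and specificity.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_participants, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def youden_index(sensitivity, specificity):
    """Youden's J = sensitivity + specificity - 1 (1 = perfect, 0 = chance)."""
    return sensitivity + specificity - 1

# Toy data: five participants, four subtask scores treated as items.
toy_scores = [[3, 4, 3, 5],
              [2, 2, 3, 2],
              [4, 5, 4, 5],
              [1, 2, 2, 1],
              [3, 3, 4, 4]]
print(round(cronbach_alpha(toy_scores), 2))
print(youden_index(sensitivity=0.82, specificity=0.75))
```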
As a new test medium, VR differs substantially from traditional paper-and-pencil tests in interaction methods, data types, variable control, testing scale, user experience, and application scenarios. Given the complexity of VR assessment tools, developers and users should comprehensively evaluate and control tool quality across multiple dimensions. \cite{Krohn2020} proposed the VR-Check framework, providing systematic guidelines for optimizing paradigms in clinical neuropsychological assessment. The framework includes ten evaluation dimensions:
- Domain Specificity: The degree to which the VR paradigm actually measures the intended theoretical construct.
- Ecological Relevance: The extent to which the VR assessment task reflects real-world application of the target ability, judged from environmental, stimulus, and activity perspectives.
- Technical Feasibility: Whether the designed task can be reasonably implemented in VR, device compatibility, and venue limitations.
- User Feasibility: Whether the VR test's interaction methods, task difficulty, duration, attention demands, and dizziness levels are acceptable for the target population.
- User Motivation: Assessment based on factors that may promote user engagement, such as interest, reward mechanisms, and face validity.
- Task Adaptability: Ease of developing parallel test versions and adjusting task difficulty.
- Performance Quantification: VR can track and record user behavior with high temporal and spatial resolution, making automated performance measurement an important evaluation dimension (a minimal scoring sketch follows this list).
- Immersive Capacities: Assessment from two perspectives—the objective technical attributes of the VR system and users' subjective experience of "presence" in the virtual environment.
- Training Feasibility: Whether VR tests can be repeatedly applied as training tools beyond diagnostic functions.
- Predictable Pitfalls: Whether potential pitfalls in VR test design and development are anticipated and avoided, and whether resource consumption and potential knowledge gains are reasonably balanced.
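To illustrate the performance-quantification dimension, the following sketch (Python; the trajectory samples, sampling rate, and optimal path length are invented for the example) derives three common navigation metrics from a logged position trace: completion time, traveled path length, and path efficiency relative to the shortest possible route.

```python
import math

def navigation_metrics(trajectory, sample_hz, optimal_length_m):
    """Summarize a logged position trace.
    trajectory: list of (x, y) positions sampled at sample_hz;
    optimal_length_m: length of the shortest possible route.
    Returns completion time (s), traveled path length (m), and path
    efficiency (optimal length / traveled length; 1.0 = perfectly direct)."""
    time_s = (len(trajectory) - 1) / sample_hz
    length_m = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    efficiency = optimal_length_m / length_m if length_m > 0 else float("nan")
    return {"time_s": time_s, "path_length_m": length_m, "efficiency": efficiency}

# Hypothetical trial: a short detour on the way to a goal 10 m away.
trace = [(0, 0), (0, 3), (2, 5), (2, 8), (0, 10)]
print(navigation_metrics(trace, sample_hz=1, optimal_length_m=10.0))
```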
Compared to psychometric test quality standards, the first two dimensions of the VR-Check framework similarly emphasize the importance of construct and ecological validity but do not address reliability. In \cite{Sanchez-Escudero2024}'s review, only 8.1% of empirical studies reported reliability analyses for VR assessment tools, and these rarely achieved the reliability levels of paper-and-pencil tests. Spatial navigation VR tests involving scene interaction typically require learning time, making large-item testing difficult within limited administration time—likely a primary reason for insufficient reliability estimation precision in such tools.
The VR-Check framework's significant contribution lies in defining best-practice benchmarks for VR assessment tools across multiple dimensions. However, current and even future VR tests may struggle to simultaneously meet all evaluation criteria. In practice, VR-Check is better suited as a guiding framework for different stages of test development, with researchers checking each item during initial design to determine which are most important for current assessment goals and whether trade-offs between items are necessary. For instance, pursuing immersive, high-fidelity virtual environments can introduce variables unrelated to measurement goals, potentially affecting variable control and reliability/validity.
5. Summary and Outlook
Most existing spatial ability tests have been developed for specific applications such as personnel selection or disease screening, lacking adequate description of theoretical frameworks and ability definitions, complete psychometric quality testing \cite{Newcombe2023, Sanchez-Escudero2024}, specific scoring methods and norm explanations, or even public availability. These issues limit effective academic exchange and application. \cite{Uttal2024} noted that systematic organization and review of existing spatial thinking tests is currently necessary. This study focuses on large-scale cognitive ability within spatial thinking—spatial navigation ability—and discusses key issues in designing VR tests for this domain.
Using VR as a test medium represents an important trend in intelligent assessment. VR technology enables simulation of spatial navigation activities under arbitrary scenarios and environmental conditions within limited physical spaces, greatly enhancing scenario expandability while maintaining test control and standardization. Future research must address several challenges in this field. First are issues under current technological conditions: low frame rates and long interaction delays can cause cybersickness or motion sickness, limiting participants' ability performance, with individual differences in susceptibility \cite{Chang2020}. Individual differences in video game experience lead to varying familiarity with tasks and operations, interfering with test results \cite{Murias2016}. New interaction devices often require learning time and involve many uncertainties. Researchers should actively address these issues by providing adequate practice before formal testing, collecting relevant individual difference information for statistical control (e.g., motion sickness susceptibility), and simplifying task interaction rules to reduce additional cognitive load. Second, future VR tests may undergo technology-driven transformations, such as integrating eye-tracking, EEG, and biofeedback for multimodal assessment of spatial navigation ability with mutually corroborating indicators. Finally, researchers should prioritize tool and data openness, allowing assessment tools to be reused, adapted, and quality-reviewed by different researchers, while continuously monitoring technological developments to verify and update findings.
The interaction characteristics and mechanisms between humans and VR represent an interdisciplinary topic spanning psychology and artificial intelligence. Related technological devices are evolving at a rapid pace, and people's familiarity with VR is gradually increasing, presenting both opportunities and challenges. Future collaboration among researchers in relevant fields is needed to integrate continuously developing VR technology into spatial navigation ability test development, help reveal the cognitive mechanisms of spatial navigation, achieve more precise and efficient ability assessment, and continuously expand VR technology's application scenarios in psychological research.
References
Zhang, F., Chen, M., Pu, Y., et al. (2023). Multi-level mechanisms underlying individual differences in spatial navigation ability. Advances in Psychological Science, 31(9), 1642–1655.
Adamo, D. E., Briceño, E. M., Sindone, J. A., Alexander, N. B., & Moffat, S. D. (2012). Age differences in virtual environment and real world path integration. Frontiers in Aging Neuroscience, 4, 26.
Allison, S. L., Fagan, A. M., Morris, J. C., & Head, D. (2016). Spatial navigation in preclinical Alzheimer's disease. Journal of Alzheimer's Disease, 52(1), 77–90.
Allison, S. L., Rodebaugh, T. L., Johnston, C., Fagan, A. M., Morris, J. C., & Head, D. (2019). Developing a spatial navigation screening tool sensitive to the preclinical Alzheimer disease continuum. Archives of Clinical Neuropsychology, 34(7), 1138–1155. https://doi.org/10.1093/arclin/acz019
Bafna, S. (2003). Space Syntax: A brief introduction to its logic and analytical techniques. Environment and Behavior, 35(1), 17–29. https://doi.org/10.1177/0013916502238863
Barton, K. R., Valtchanov, D., & Ellard, C. (2014). Seeing beyond your visual field: The influence of spatial topology and visual field on navigation performance. Environment and Behavior, 46(4), 507–529. https://doi.org/10.1177/0013916512466094
Batty, M., Morphet, R., Masucci, P., & Stanilov, K. (2014). Entropy, complexity, and spatial information. Journal of Geographical Systems, 16(4), 363–385. https://doi.org/10.1007/s10109-014-0202-2
Bayahya, A. Y., Alhalabi, W., & Alamri, S. H. (2021). Smart health system to detect dementia disorders using virtual reality. Healthcare (Switzerland), 9(7), 810. https://doi.org/10.3390/healthcare9070810
Beatini, V., Cohen, D., Di Tore, S., Pellerin, H., Aiello, P., Sibilio, M., & Berthoz, A. (2024). Measuring perspective taking with the "virtual class" videogame: A child development study. Computers in Human Behavior, 151, 1–17.
Bellassen, V., Iglói, K., de Souza, L. C., Dubois, B., & Rondi-Reig, L. (2012). Temporal order memory assessed during spatiotemporal navigation as a behavioral cognitive marker for differential Alzheimer's disease diagnosis. Journal of Neuroscience, 32(6), 1942–1952.
Bhagavathula, R., Williams, B., Owens, J., & Gibbons, R. (2018). The reality of virtual reality: A comparison of pedestrian behavior in real and virtual environments. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 3, 2056–2060. https://doi.org/10.1177/1541931218621464
Bierbrauer, A., Kunz, L., Gomes, C. A., Luhmann, M., Deuker, L., Getzmann, S., Wascher, E., Gajewski, P. D., Hengstler, J. G., Fernandez-Alvarez, M., Atienza, M., Cammisuli, D. M., Bonatti, F., Pruneti, C., Percesepe, A., Bellaali, Y., Hanseeuw, B., Strange, B. A., Cantero, J. L., & Axmacher, N. (2020). Unmasking selective path integration deficits in Alzheimer's disease risk carriers. Science Advances, 6(35).
Boeing, G. (2019). Urban spatial order: Street network orientation, configuration, and entropy. Applied Network Science, 4(1), Article 67.
Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature, 391(6669), 756. https://doi.org/10.1038/35784
Brookes, J., Warburton, M., Alghadier, M., Mon-Williams, M., & Mushtaq, F. (2020). Studying human behavior with virtual reality: The Unity Experiment Framework. Behavior Research Methods, 52(2), 455–463. https://doi.org/10.3758/s13428-019-01242-0
Brucato, M., Frick, A., Pichelmann, S., Nazareth, A., & Newcombe, N. S. (2023). Measuring spatial perspective taking: Analysis of four measures using Item Response Theory. Topics in Cognitive Science, 15(1), 46–74. https://doi.org/10.1111/tops.12597
Brunec, I. K., Nantais, M. M., Sutton, J. E., Epstein, R. A., & Newcombe, N. S. (2023). Exploration patterns shape cognitive map learning. Cognition, 233, 105360.
Bullens, J., Iglói, K., Berthoz, A., Postma, A., & Rondi-Reig, L. (2010). Developmental time course of the acquisition of sequential egocentric and allocentric navigation strategies. Journal of Experimental Child Psychology, 107(3), 337–350.
Burles, F., Liu, I., Hart, C., Murias, K., Graham, S. A., & Iaria, G. (2020). The emergence of cognitive maps for spatial navigation in 7- to 10-year-old children. Child Development, 91(3), e733–e744.
Buttussi, F., & Chittaro, L. (2018). Effects of different types of virtual reality display on presence and learning in a safety training scenario. IEEE Transactions on Visualization and Computer Graphics, 24(2), 1063–1076. https://doi.org/10.1109/TVCG.2017.2653117
Buttussi, F., & Chittaro, L. (2021). Locomotion in place in virtual reality: A comparative evaluation of joystick, teleport, and leaning. IEEE Transactions on Visualization and Computer Graphics, 27(1), 125–136. https://doi.org/10.1109/TVCG.2019.2928304
Buttussi, F., & Chittaro, L. (2023). Acquisition and retention of spatial knowledge through virtual reality experiences: Effects of VR setup and locomotion technique. International Journal of Human-Computer Studies, 177, 103067. https://doi.org/10.1016/j.ijhcs.2023.103067
Caffò, A. O., De Caro, M. F., Picucci, L., Notarnicola, A., Settanni, A., Livrea, P., Lancioni, G. E., & Bosco, A. (2012). Reorientation deficits are associated with amnestic mild cognitive impairment. American Journal of Alzheimer's Disease and Other Dementias, 27(5), 321–330.
Caffò, A. O., Lopez, A., Spano, G., Serino, S., Cipresso, P., Stasolla, F., Savino, M., Lancioni, G. E., Riva, G., & Bosco, A. (2018). Spatial reorientation decline in aging: The combination of geometry and landmarks. Aging and Mental Health, 22(10), 1372–1383. https://doi.org/10.1080/13607863.2017.1354973
Castegnaro, A., Howett, D., Li, A., Harding, E., Chan, D., Burgess, N., & King, J. (2022). Assessing mild cognitive impairment using object-location memory in immersive virtual environments. Hippocampus, 32(9), 660–678. https://doi.org/10.1002/hipo.23458
Chan, H. M., Ding, J., & Saunders, J. A. (2023). Does viewing an environment without occluders improve spatial learning of a large-scale virtual environment? Journal of Environmental Psychology, 92, 102156. https://doi.org/10.1016/j.jenvp.2023.102156
Chang, E., Kim, H. T., & Yoo, B. (2020). Virtual reality sickness: A review of causes and measurements. International Journal of Human-Computer Interaction, 36(17), 1658–1682.
Liu, C.-L. (2014). A study of detecting and combating cybersickness with fuzzy control for the elderly within 3D virtual stores. International Journal of Human-Computer Studies.
Cherep, L. A., Lim, A. F., Kelly, J. W., Acharya, D., Velasco, A., Bustamante, E., Ostrander, A. G., & Gilbert, S. B. (2020). Spatial cognitive implications of teleporting through virtual environments. Journal of Experimental Psychology: Applied, 26(3), 480–492. https://doi.org/10.1037/xap0000263
Cherep, L. A., Kelly, J. W., Miller, A., Lim, A. F., & Gilbert, S. B. (2022). Individual differences in teleporting through virtual environments. Journal of Experimental Psychology: Applied, 29(1), 111–123.
Cogné, M., Taillade, M., N'Kaoua, B., Tarruella, A., Klinger, E., Larrue, F., … Sorita, E. (2017). The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review. Annals of Physical and Rehabilitation Medicine, 60(3), 164−176.
Colmant, L., Bierbrauer, A., Bellaali, Y., Kunz, L., Van Dongen, J., Sleegers, K., Axmacher, N., Lefèvre, P., & Hanseeuw, B. (2023). Dissociating effects of aging and genetic risk of sporadic Alzheimer's disease on path integration. Neurobiology of Aging, 131, 170–181.
Colombo, G., Minta, K., Grübel, J., Tai, W. L. E., Hölscher, C., & Schinazi, V. R. (2024). Detecting cognitive impairment through an age-friendly serious game: The development and usability of the Spatial Performance Assessment for Cognitive Evaluation (SPACE). Computers in Human Behavior, 160, 108349. https://doi.org/10.1016/j.chb.2024.108349
Commins, S., Duffin, J., Chaves, K., Leahy, D., Corcoran, K., Caffrey, M., Keenan, L., Finan, D., & Thornberry, C. (2020). NavWell: A simplified virtual-reality platform for spatial navigation and memory experiments. Behavior Research Methods, 52(3), 1189–1207. https://doi.org/10.3758/s13428-019-01261-9
Coughlan, G., Coutrot, A., Khondoker, M., Minihane, A. M., Spiers, H., & Hornberger, M. (2019). Toward personalized cognitive diagnostics of at-genetic-risk Alzheimer's disease. Proceedings of the National Academy of Sciences of the United States of America, 116(19), 9285–9292. https://doi.org/10.1073/pnas.1817107116
Coughlan, G., Laczó, J., Hort, J., Minihane, A. M., & Hornberger, M. (2018). Spatial navigation deficits — Overlooked cognitive marker for preclinical Alzheimer disease? Nature Reviews Neurology, 14(8), 496–506.
Coughlan, G., Puthusseryppady, V., Lowry, E., Gillings, R., Spiers, H., Minihane, A. M., & Hornberger, M. (2020). Test-retest reliability of spatial navigation in adults at-risk of Alzheimer's disease. PLoS ONE, 15(9), e0239077. https://doi.org/10.1371/journal.pone.0239077
Coutrot, A., Schmidt, S., Coutrot, L., Pittman, J., Hong, L., Wiener, J. M., … Spiers, H. J. (2019). Virtual navigation tested on a mobile app is predictive of real-world wayfinding navigation performance. PLoS ONE, 14(3), e0213272.
Coutrot, A., Manley, E., Goodroe, S., Gahnstrom, C., Filomena, G., Yesiltepe, D., Dalton, R. C., Wiener, J. M., Hölscher, C., Hornberger, M., & Spiers, H. J. (2022). Entropy of city street networks linked to future spatial navigation ability. Nature, 604(7904), 104–110.
Cummings, J. J., & Bailenson, J. N. (2016). How immersive is enough? A meta-analysis of the effect of immersive technology on user presence. Media Psychology, 19(2), 272–309.
Cushman, L. A., Stein, K., & Duffy, C. J. (2008). Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality. Neurology, 71(12), 888–895.
Da Costa, R. Q. M., Pompeu, J. E., Moretto, E., Silva, J. M., Dos Santos, M. D., Nitrini, R., & Brucki, S. M. D. (2022). Two immersive virtual reality tasks for the assessment of spatial orientation in older adults with and without cognitive impairment: Concurrent validity, group comparison, and accuracy results. Journal of the International Neuropsychological Society, 28(5), 460–472. https://doi.org/10.1017/S1355617721000655
Dalton, R. C. (2001). Spatial navigation in immersive virtual environments. Istanbul Technical University.
Davis, R., & Sikorskii, A. (2020). Eye tracking analysis of visual cues during wayfinding in early stage Alzheimer's disease. Dementia and Geriatric Cognitive Disorders, 49(1), 91–97.
Diersch, N., & Wolbers, T. (2019). The potential of virtual reality for spatial navigation research across the adult lifespan. Journal of Experimental Biology, 222(Suppl 1), jeb187252.
Dong, W., Qin, T., Yang, T., Liao, H., Liu, B., Meng, L., & Liu, Y. (2022). Wayfinding behavior and spatial knowledge acquisition: Are they the same in virtual reality and in real-world environments? Annals of the American Association of Geographers, 112(1), 226–246.
Elmqvist, N., Tudoreanu, M. E., & Tsigas, P. (2008). Evaluating motion constraints for 3D wayfinding in immersive and desktop virtual environments. In Proceedings of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems (CHI '08) (pp. 883–892). New York, NY: ACM Press.
Emo, B., Hoelscher, C., Wiener, J., & Dalton, R. (2012). Wayfinding and spatial configuration: Evidence from street corners. Proceedings of the 8th International Space Syntax Symposium (pp. 1–16). Santiago de Chile: PUC.
Epstein, R. A., Patai, E. Z., Julian, J. B., & Spiers, H. J. (2017). The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20, 1504–1513.
Ewart, I. J., & Johnson, H. (2021). Virtual reality as a tool to investigate and predict occupant behaviour in the real world: The example of wayfinding. ITcon, 26, 286–302.
Farr, A. C., Kleinschmidt, T., Yarlagadda, P., & Mengersen, K. (2012). Wayfinding: A simple concept, a complex process. Transport Reviews, 32(6), 715–743. https://doi.org/10.1080/01441647.2012.712555
Farran, E. K., Purser, H. R. M., Courbois, Y., Balle, M., Sockeel, P., Mellier, D., & Blades, M. (2015). Route knowledge and configural knowledge in typical and atypical development: A comparison of sparse and rich environments. Journal of Neurodevelopmental Disorders, 7.
Feng, Y., Duives, D. C., & Hoogendoorn, S. P. (2022). Wayfinding behaviour in a multi-level building: A comparative study of HMD VR and desktop VR. Advanced Engineering Informatics, 51, 101475.
Gagnon, K. T., Thomas, B. J., Munion, A., Creem-Regehr, S. H., Cashdan, E. A., & Stefanucci, J. K. (2018). Not all those who wander are lost: Spatial exploration patterns and their relationship to gender and spatial memory. Cognition, 180, 108–117.
Galbraith, C., Zetzsche, C., Schill, K., & Wolter, J. (2009). Representation of space: Image-like or sensorimotor? Spatial Vision, 22(5), 409–424. https://doi.org/10.1163/156856809789476074
Garcia-Betances, R. I., Arredondo Waldmeyer, M. T., Fico, G., & Cabrera-Umpierrez, M. F. (2015). A succinct overview of virtual reality technology use in Alzheimer's disease. Frontiers in Aging Neuroscience, 7, 80.
Gellersen, H. M., Coughlan, G., Hornberger, M., & Simons, J. S. (2021). Memory precision of object-location binding is unimpaired in APOE ε4-carriers with spatial navigation deficits. Brain Communications, 3(2). https://doi.org/10.1093/braincomms/fcab087
Gramann, K., Gwin, J. T., Ferris, D. P., Oie, K., Jung, T., Lin, C., Liao, L., & Makeig, S. (2011). Cognition in action: Imaging brain/body dynamics in mobile humans. Reviews in the Neurosciences, 22(6), 593–608. https://doi.org/10.1515/RNS.2011.047
Grübel, J., Thrash, T., Hölscher, C., & Schinazi, V. R. (2017). Evaluation of a conceptual framework for predicting navigation performance in virtual reality. PLoS ONE, 12(9).
He, Q., & McNamara, T. P. (2018). Spatial updating strategy affects the reference frame in path integration. Psychonomic Bulletin & Review, 25(3), 1073–1079. https://doi.org/10.3758/s13423-017-1307-7
He, Q., & Brown, T. I. (2019a). Environmental barriers disrupt grid-like representations in humans during navigation. Current Biology, 29(16), 2718–2722.e3. https://doi.org/10.1016/j.cub.2019.06.072
He, Q., McNamara, T. P., & Brown, T. I. (2019b). Manipulating the visibility of barriers to improve spatial navigation efficiency and cognitive mapping. Scientific Reports, 9(1). https://doi.org/10.1038/s41598-019-48098-0
He, Q., McNamara, T. P., Bodenheimer, B., & Klippel, A. (2019c). Acquisition and transfer of spatial knowledge during wayfinding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(8), 1364–1386. https://doi.org/10.1037/xlm0000654
He, Q., Han, A. T., Churaman, T. A., & Brown, T. I. (2020). The role of working memory capacity in spatial learning depends on spatial information integration difficulty in the environment. Journal of Experimental Psychology: General, 150(4), 666–685.
He, C., Boone, A. P., & Hegarty, M. (2023). Measuring configural spatial knowledge: Individual differences in correlations between pointing and shortcutting. Psychonomic Bulletin & Review, 30(5), 1767–1778.
Heft, H. (1996). The ecological approach to navigation: A Gibsonian perspective. In J. Portugali (Ed.), The Construction of Cognitive Maps. GeoJournal Library, vol 32. Springer, Dordrecht. https://doi.org/10.1007/978-0-585-33485-1_6
Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence, 30(5), 425–447.
Hegarty, M., & Waller, D. A. (2005). Individual differences in spatial abilities. In P. Shah & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking (pp. 121–169). New York: Cambridge University Press.
Hegarty, M., He, C., Boone, A. P., Yu, S., Jacobs, E. G., & Chrastil, E. R. (2023). Understanding differences in wayfinding strategies. Topics in Cognitive Science, 15(1), 102–119.
Horner, A. J., Bisby, J. A., Wang, A., Bogus, K., & Burgess, N. (2016). The role of spatial boundaries in shaping long-term event representations. Cognition. https://doi.org/10.1016/j.cognition.2016.05.013
Ishikawa, T., & Montello, D. R. (2006). Spatial knowledge acquisition from direct experience in the environment: Individual differences in the development of metric knowledge and the integration of separately learned places. Cognitive Psychology, 52(2), 93–129.
Ishikawa, T. (2023). Individual differences and skill training in cognitive mapping: How and why people differ. Topics in Cognitive Science, 15(1), 163–186.
Jensen, L., & Konradsen, F. (2018). A review of the use of virtual reality head-mounted displays in education and training. Education and Information Technologies, 23(4), 1515–1529.
Johanson, C., Gutwin, C., & Mandryk, R. (2023). Trails, rails, and over-reliance: How navigation assistance affects route-finding and spatial learning in virtual environments. International Journal of Human-Computer Studies, 178, 103097. https://doi.org/10.1016/j.ijhcs.2023.103097
Kalantari, S., & Neo, J. R. J. (2020). Virtual environments for design research: Lessons learned from use of fully immersive virtual reality in interior design research. Journal of Interior Design, 45(3), 27–42.
Kalantari, S., Mostafavi, A., Xu, T. B., Lee, A. S., & Yang, Q. (2024). Comparing spatial navigation in a virtual environment vs. an identical real environment across the adult lifespan. Computers in Human Behavior, 157, Article 108210. https://doi.org/10.1016/j.chb.2024.108210
Kelly, J. W., Ostrander, A. G., Lim, A. F., Cherep, L. A., & Gilbert, S. B. (2020). Teleporting through virtual environments: Effects of path scale and environment scale on spatial updating. IEEE Transactions on Visualization and Computer Graphics, 26(5), 1841–1850. https://doi.org/10.1109/TVCG.2020.2973051
Kilteni, K., Groten, R., & Slater, M. (2012). The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21(4), 373–387. https://doi.org/10.1162/PRES_a_00124
Kim, K., Rosenthal, M. Z., Zielinski, D. J., & Brady, R. (2014). Effects of virtual environment platforms on emotional responses. Computer Methods and Programs in Biomedicine, 113(3), 882–893.
Kimura, K., Reichert, J. F., Olson, A., Pouya, O. R., Wang, X., Moussavi, Z., & Kelly, D. M. (2017). Orientation in virtual reality does not fully measure up to the real-world. Scientific Reports, 7(1), 18109–18109. https://doi.org/10.1038/s41598-017-18289-8
Konishi, K., Joober, R., Poirier, J., MacDonald, K., Chakravarty, M., Patel, R., Breitner, J., & Bohbot, V. D. (2018). Healthy versus entorhinal cortical atrophy identification in asymptomatic APOE4 carriers at risk for Alzheimer's disease. Journal of Alzheimer's Disease, 61(4), 1493–1507. https://doi.org/10.3233/JAD-170540
Krohn, S., Tromp, J., Quinque, E. M., Belger, J., Klotzsche, F., Rekers, S., Chojecki, P., de Mooij, J., Akbal, M., McCall, C., Villringer, A., Gaebler, M., Finke, C., & Thöne-Otto, A. (2020). Multidimensional evaluation of virtual reality paradigms in clinical neuropsychology: Application of the VR-Check framework. Journal of Medical Internet Research, 22(4), e16724. https://doi.org/10.2196/16724
Kuliga, S. F., Thrash, T., Dalton, R. C., & Hölscher, C. (2015). Virtual reality as an empirical research tool — Exploring user experience in a real building and a corresponding virtual model. Computers, Environment and Urban Systems, 54, 363–375. https://doi.org/10.1016/j.compenvurbsys.2015.09.006
Kunz, L., Navarro Schröder, T., Lee, H., Montag, C., Lachmann, B., Sariyska, R., Reuter, M., Stirnberg, R., Stöcker, T., Messing-Floeter, P. C., Fell, J., Doeller, C. F., & Axmacher, N. (2015). Reduced grid-cell-like representations in adults at genetic risk for Alzheimer's disease. Science, 350(6259), 430–433. https://doi.org/10.1126/science.aad1171
Laczó, M., Martinkovic, L., Lerch, O., Wiener, J. M., Kalinova, J., Matuskova, V., Nedelska, Z., Vyhnalek, M., Hort, J., & Laczó, J. (2022). Different profiles of spatial navigation deficits in Alzheimer's disease biomarker-positive versus biomarker-negative older adults with amnestic mild cognitive impairment. Frontiers in Aging Neuroscience, 14.
Laczó, M., Wiener, J. M., Kalinova, J., Matuskova, V., Vyhnalek, M., Hort, J., & Laczó, J. (2021). Spatial navigation and visuospatial strategies in typical and atypical aging. Brain Sciences, 11.
Ladouce, S., Donaldson, D. I., Dudchenko, P. A., & Ietswaart, M. (2017). Understanding minds in real-world environments: Toward a mobile cognition approach. Frontiers in Human Neuroscience, 10, 694–694. https://doi.org/10.3389/fnhum.2016.00694
Langbehn, E., Lubos, P., Steinicke, F. (2018). Evaluation of locomotion techniques for room-scale VR: Joystick, teleportation, and redirected walking. In: Proceedings of ACM Virtual Reality International Conference (VRIC'18). ACM, New York, NY, USA, 4. https://doi.org/10.1145/3234253.3234291.
Lapointe, J.-F., Savard, P., & Vinson, N. G. (2011). A comparative study of four input devices for desktop virtual walkthroughs. Computers in Human Behavior, 27(6). https://doi.org/10.1016/j.chb.2011.06.014
Lawton, C. A. (1994). Gender differences in way-finding strategies: Relationship to spatial ability and spatial anxiety. Sex Roles, 30(11), 765–779.
Lee, J., Kho, S., Yoo, H. B., Park, S., & Choi, J. (2014). Spatial memory impairments in amnestic mild cognitive impairment in a virtual radial arm maze. Neuropsychiatric Disease and Treatment, 10, 653–660. https://doi.org/10.2147/NDT.S58185
Lesk, V. E., Wan Shamsuddin, S. N., Walters, E. R., & Ugail, H. (2014). Using a virtual environment to assess cognition in the elderly. Virtual Reality, 18(4), 271–279. https://doi.org/10.1007/s10055-014-0252-2
Lester, A. W., Moffat, S. D., Wiener, J. M., Barnes, C. A., & Wolbers, T. (2017). The aging navigational system. Neuron, 95, 1019–1035.
Levine, T. F., Allison, S. L., Stojanovic, M., Fagan, A. M., Morris, J. C., & Head, D. (2020). Spatial navigation ability predicts progression of dementia symptomatology. Alzheimer's and Dementia, 16(3), 491–500. https://doi.org/10.1002/alz.12031
Li, R., & Klippel, A. (2016). Wayfinding behaviors in complex buildings: The impact of environmental legibility and familiarity. Environment and Behavior, 48(3). https://doi.org/10.1177/0013916514550243
Lim, A. F., Kelly, J. W., Sepich, N. C., Cherep, L. A., Freed, G. C., & Gilbert, S. B. (2020). Rotational self-motion cues improve spatial learning when teleporting in virtual environments. In: Symposium on Spatial User Interaction, SUI '20. ACM, New York, NY, USA, https://doi.org/10.1145/3385959.3418443.
Lloyd, J., Persaud, N. V., & Powell, T. E. (2009). Equivalence of real-world and virtual-reality route learning: A pilot study. Journal of Cybertherapy and Rehabilitation, 12(4), 423–427.
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W., & Fry, P. A. (1993). Nonvisual navigation of blind and sighted: Assessment of path integration ability. Journal of Experimental Psychology: General, 122, 73–91.
Lövdén, M., Schellenbach, M., Grossman-Hutter, B., Krüger, A., & Lindenberger, U. (2005). Environmental topography and postural control demands shape aging-associated decrements in spatial navigation performance. Psychology and Aging, 20(4), 683–694.
Marín-Morales, J., Higuera-Trujillo, J. L., De-Juan-Ripoll, C., Llinares, C., Guixeres, J., Iñarra, S., & Alcañiz, M. (2019). Navigation comparison between a real and a virtual museum: Time-dependent differences using head mounted display. Interacting with Computers, 31(2). https://doi.org/10.1093/iwc/iwz018
Marquardt, G., & Schmieg, P. (2009). Dementia-friendly architecture: Environments that facilitate wayfinding in nursing homes. American Journal of Alzheimer's Disease & Other Dementias, 24(4), 333–340. https://doi.org/10.1177/1533317509334959
Maxim, P., & Brown, T. I. (2023). Toward an understanding of cognitive mapping ability through manipulations and measurement of schemas and stress. Topics in Cognitive Science, 15(1), 75–101.
Meilinger, T., Strickrodt, M., & Bülthoff, H. H. (2016). Qualitative differences in memory for vista and environmental spaces are caused by opaque borders, not movement or successive presentation. Cognition, 155, 77–95. https://doi.org/10.1016/j.cognition.2016.06.003
Migo, E. M., O'Daly, O., Mitterschiffthaler, M., Antonova, E., Dawson, G. R., Dourish, C. T., ... Morris, R. G. (2016). Investigating virtual reality navigation in amnestic mild cognitive impairment using fMRI. Neuropsychology, Development, and Cognition. Section B, Aging, Neuropsychology and Cognition, 23(2), 196–217. https://doi.org/10.1080/13825585.2015.1073218
Mislevy, R. J. (2013). Evidence-centered design for simulation-based assessment. Military Medicine, 178(10 Suppl), 107–114.
Mohammadi, A., Kargar, M., & Hesami, E. (2018). Using virtual reality to distinguish subjects with multiple- but not single-domain amnestic mild cognitive impairment from normal elderly subjects. Psychogeriatrics, 18(2), 132–142. https://doi.org/10.1111/psyg.12301
Mokkink, L. B., de Vet, H. C. W., Prinsen, C. A. C., Patrick, D. L., Alonso, J., Bouter, L. M., & Terwee, C. B. (2018). COSMIN risk of bias checklist for systematic reviews of patient-reported outcome measures. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation, 27(5), 1171–1179. https://doi.org/10.1007/s11136-017-1765-4
Montello, D. R. (2005). Navigation. In P. Shah & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking (pp. 257–294). New York: Cambridge University Press.
Moon, H.-J., Gauthier, B., Park, H.-D., Faivre, N., & Blanke, O. (2022). Sense of self impacts spatial navigation and hexadirectional coding in human entorhinal cortex. Communications Biology, 5(1), 406–406. https://doi.org/10.1038/s42003-022-03361-5
Morganti, F., Stefanini, S., & Riva, G. (2013). From allo- to egocentric spatial ability in early Alzheimer's disease: A study with virtual reality spatial tasks. Cognitive Neuroscience, 4(3-4), 171–180.
Moussavi, Z., Kimura, K., & Lithgow, B. (2022). Egocentric spatial orientation differences between Alzheimer's disease at early stages and mild cognitive impairment: A diagnostic aid. Medical and Biological Engineering and Computing, 60(2), 501–509. https://doi.org/10.1007/s11517-021-02478-9
Muffato, V., Meneghetti, C., & de Beni, R. (2016). Not all is lost in older adults' route learning? The role of visuo-spatial abilities and type of task. Journal of Environmental Psychology, 47, 230–241.
Murias, K., Kwok, K., Castillejo, A. G., Liu, I., & Iaria, G. (2016). The effects of video game use on performance in a virtual navigation task. Computers in Human Behavior, 58, 398–406.
Muryy, A., & Glennerster, A. (2018). Pointing errors in nonmetric virtual environments. In S. Creem-Regehr et al. (Eds.), Spatial Cognition XI (pp. 43–57). Springer.
Neisser, U. (1977). Cognition and reality. The American Journal of Psychology, 90(3), 541.
Newcombe, N. S. (2018). Individual variation in human navigation. Current Biology, 28, R1004–R1008.
Newcombe, N. S., Hegarty, M., & Uttal, D. (2023). Building a cognitive science of human variation: Individual differences in spatial navigation. Topics in Cognitive Science, 15(1), 6–14.
Nori, R., Zucchelli, M. M., Palmiero, M., & Piccardi, L. (2023). Environmental cognitive load and spatial anxiety: What matters in navigation? Journal of Environmental Psychology, 88, 102032.
Natapov, A., Kuliga, S., Dalton, R. C., & Holscher, C. (2020). Linking building-circulation typology and wayfinding: Design, spatial analysis, and anticipated wayfinding difficulty of circulation types. Architectural Science Review, 63(1), 34–46. https://doi.org/10.1080/00038628.2019.1675041
Pagkratidou, M., Galati, A., & Avraamides, M. (2020). Do environmental characteristics predict spatial memory about unfamiliar environments? Spatial Cognition and Computation, 20(1), 1–32.
Pan, X., & Hamilton, A. F. de C. (2018). Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology, 109(3), 395–417. https://doi.org/10.1111/bjop.12290
Parizkova, M., Lerch, O., Moffat, S. D., Andel, R., Mazancova, A. F., Nedelska, Z., Vyhnalek, M., Hort, J., & Laczó, J. (2018). The effect of Alzheimer's disease on spatial navigation strategies. Neurobiology of Aging, 64, 107–115. https://doi.org/10.1016/j.neurobiolaging.2017.12.019
Park, J. L., Dudchenko, P. A., & Donaldson, D. I. (2018). Navigation in real-world environments: New opportunities afforded by advances in mobile brain imaging. Frontiers in Human Neuroscience, 12, 361. https://doi.org/10.3389/fnhum.2018.00361
Park, J. H. (2022). Can the virtual reality-based spatial memory test better discriminate mild cognitive impairment than neuropsychological assessment? International Journal of Environmental Research and Public Health, 19(16). https://doi.org/10.3390/ijerph19169950
Peer, M., Brunec, I. K., Newcombe, N. S., & Epstein, R. A. (2021). Structuring knowledge with cognitive maps and cognitive graphs. Trends in Cognitive Sciences, 25(1), 37–54.
Atsikpasi, P., & Fokides, E. (2022). A scoping review of the educational uses of 6DoF HMDs. Virtual Reality, 26(1), 205–222. https://doi.org/10.1007/s10055-021-00556-9
Pink, D., Ilkel, E., Chandreswaran, V., Moser, D., Getzmann, S., Patrick, G., Axmacher, N., & Zhang, H. (2023). Modeling the impact of genotype, age, sex, and continuous navigation on path integration performance. bioRxiv. https://doi.org/10.1101/2023.09.11.556925
Plaza-Rosales, I., Brunetti, E., Montefusco-Siegmund, R., Madariaga, S., Hafelin, R., Ponce, D. P., Behrens, M. I., Maldonado, P. E., & Paula-Lima, A. (2023). Visual-spatial processing impairment in the occipital-frontal connectivity network at early stages of Alzheimer's disease. Frontiers in Aging Neuroscience, 15, 1097577. https://doi.org/10.3389/fnagi.2023.1097577
Pullano, L., Foti, F., Liuzza, M. T., & Palermo, L. (2024). The role of place attachment and spatial anxiety in environmental knowledge. Journal of Environmental Psychology, 94, 1–12.
Puthusseryppady, V., Morrissey, S., Spiers, H., Patel, M., & Hornberger, M. (2022). Predicting real world spatial disorientation in Alzheimer's disease patients using virtual reality navigation tests. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-17634-w
Ragan, E. D., Bowman, D. A., Kopper, R., Stinson, C., Scerbo, S., & McMahan, R. P. (2015). Effects of field of view and visual complexity on virtual reality training effectiveness for a visual scanning task. IEEE Transactions on Visualization and Computer Graphics, 21(7), 794–807.
Ranjbar Pouya, O., Byagowi, A., Kelly, D. M., & Moussavi, Z. (2017). Introducing a new age-and-cognition-sensitive measurement for assessing spatial orientation using a landmark-less virtual reality navigational task. Quarterly Journal of Experimental Psychology, 70(7), 1406–1419.
Richardson, A. E., Montello, D. R., & Hegarty, M. (1999). Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Memory & Cognition, 27(4), 741–750.
Richardson, A. E., Powers, M. E., & Bousquet, L. G. (2011). Video game experience predicts virtual, but not real navigation performance. Computers in Human Behavior, 27(1), 552–560.
Richter, K.-F. (2009). Adaptable path planning in regionalized environments. In K. S. Hornsby, C. Claramunt, M. Denis, & G. Ligozat (Eds.), Spatial Information Theory (pp. 453–470). Berlin, Heidelberg: Springer Berlin Heidelberg.
Riecke, B. E., Bodenheimer, B., McNamara, T. P., Williams, B., Peng, P., & Feuereissen, D. (2010). Do we need to walk for effective virtual reality navigation? Physical rotations alone may suffice. In Spatial Cognition VII (pp. 234–247). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-14749-4_21
Ritchie, K., Carrière, I., Howett, D., Su, L., Hornberger, M., O'Brien, J. T., Ritchie, C. W., & Chan, D. (2018). Allocentric and egocentric spatial processing in middle-aged adults at high risk of late-onset Alzheimer's disease: The PREVENT dementia study. Journal of Alzheimer's Disease, 65(3), 885–896. https://doi.org/10.3233/JAD-180432
Rodgers, M. K., Sindone, J. A., III, & Moffat, S. D. (2012). Effects of age on navigation strategy. Neurobiology of Aging, 33(1), 202.e15–202.e22.
Ruddle, R. A., Volkova, E., Mohler, B., & Bülthoff, H. H. (2011). The effect of landmark and body-based sensory information on route knowledge. Memory & Cognition, 39(4). https://doi.org/10.3758/s13421-010-0054-z
Ruddle, R. A., Volkova, E., & Bülthoff, H. H. (2013). Learning to walk in virtual reality. ACM Transactions on Applied Perception, 10(2), 1–17. https://doi.org/10.1145/2465780.2465785
Ruddle, R. A., Payne, S. J., & Jones, D. M. (2014). Navigating large-scale virtual environments: What differences occur between helmet-mounted and desk-top displays? Presence: Teleoperators and Virtual Environments, 8(2), 157–168.
Sanchez-Vives, M. V., & Slater, M. (2005). From presence to consciousness through virtual reality. Nature Reviews. Neuroscience, 6(4), 332–339. https://doi.org/10.1038/nrn1651
Sánchez-Escudero, J. P., Galvis-Herrera, A., Sánchez-Trujillo, D., Torres-López, L. C., Kennedy, C. J., Aguirre-Acevedo, D., Garcia-Barrera, M., & Trujillo, N. (2024). Virtual reality and serious videogame-based instruments for assessing spatial navigation in Alzheimer's disease: A systematic review of psychometric properties. Neuropsychology Review.
Savino, G.-L., Emanuel, N., Kowalzik, S., Kroll, F., Lange, M. C., Laudan, M., Leder, R., Liang, Z., Markhabayeva, D., & Schmeißer, M. (2019). Comparing pedestrian navigation methods in virtual reality and real life. 2019 International Conference on Multimodal Interaction, 16–25.
Schäfer, S., Huxhold, O., & Lindenberger, U. (2006). Healthy mind in healthy body? A review of sensorimotor-cognitive interdependencies in old age. European Review of Aging & Physical Activity, 3(2), 53–61.
Schellenbach, M., Lövdén, M., Verrel, J., Krüger, A., & Lindenberger, U. (2010). Adult age differences in familiarization to treadmill walking within virtual environments. Gait & Posture, 31(3), 372–377.
Serino, S., Morganti, F., Colombo, D., & Riva, G. (2018). The contribution of allocentric impairments to the cognitive decline in Alzheimer's disease. Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, 253, 84–91. https://doi.org/10.1007/978-3-030-01093-5_11
Serino, S., Morganti, F., Di Stefano, F., & Riva, G. (2015). Detecting early egocentric and allocentric impairments deficits in Alzheimer's disease: An experimental study with virtual reality. Frontiers in Aging Neuroscience, 7, 1–10. https://doi.org/10.3389/fnagi.2015.00088
Shi, Y., Kang, J., Xia, P., Tyagi, O., Mehta, R. K., & Du, J. (2021). Spatial knowledge and firefighters' wayfinding performance: A virtual reality search and rescue experiment. Safety Science, 139, 105231. https://doi.org/10.1016/j.ssci.2021.105231
Siegel, A. W., & White, S. H. (1975). The development of spatial representations of large-scale environments. Advances in Child Development and Behavior, 10, 9–55.
Slone, E., Burles, F., & Iaria, G. (2016). Environmental layout complexity affects neural activity during navigation in humans. European Journal of Neuroscience, 43(9). https://doi.org/10.1111/ejn.13218
Sousa Santos, B., Dias, P., Pimentel, A., Baggerman, J., Ferreira, C., Silva, S., & Madeira, J. (2009). Head-mounted display versus desktop for 3D navigation in virtual reality: A user study. Multimedia Tools and Applications, 41(1), 161–181. https://doi.org/10.1007/s11042-008-0223-2
Spanlang, B., Normand, J.-M., Borland, D., Kilteni, K., Giannopoulos, E., Pomés, A., González-Franco, M., Perez-Marcos, D., Arroyo-Palacios, J., Muncunill, X. N., & Slater, M. (2014). How to build an embodiment lab: Achieving body representation illusions in virtual reality. Frontiers in Robotics and AI, 1. https://doi.org/10.3389/frobt.2014.00009
Spiers, H. J., Coutrot, A., & Hornberger, M. (2023). Explaining world-wide variation in navigation ability from millions of people: Citizen science project Sea Hero Quest. Topics in Cognitive Science, 15(1), 120–138. https://doi.org/10.1111/tops.12590
Stites, M. C., Matzen, L. E., & Gastelum, Z. N. (2020). Where are we going and where have we been? Examining the effects of maps on spatial learning in an indoor guided navigation task. Cognitive Research: Principles and Implications, 5(1), 13.
Rekers, S., & Finke, C. (2024). Translating spatial navigation evaluation from experimental to clinical settings: The virtual environments navigation assessment (VIENNA). Behavior Research Methods, 56(3), 2033–2048.
Tarnanas, I., Laskaris, N., & Tsolaki, M. (2012). On the comparison of VR-responses, as performance measures in prospective memory, with auditory P300 responses in MCI detection. Studies in Health Technology and Informatics, 181, 156–161.
Tarnanas, I., Papagiannopoulos, S., Kazis, D., Wiederhold, M., Wiederhold, B., Vuillermot, S., & Tsolaki, M. (2015). Reliability of a novel serious game using dual-task gait profiles to early characterize aMCI. Frontiers in Aging Neuroscience. https://doi.org/10.3389/fnagi.2015.00050
Tarr, M. J., & Warren, W. H. (2002). Virtual reality in behavioral neuroscience and beyond. Nature Neuroscience, 5(Suppl 11), 1089–1092.
Templeman, J. N., Denbrook, P. S., & Sibert, L. E. (1999). Virtual locomotion: Walking in place through virtual environments. Presence: Teleoperators and Virtual Environments, 8(6), 598–617.
Thornberry, C., Cimadevilla, J. M., & Commins, S. (2021). Virtual Morris water maze: Opportunities and challenges. Reviews in the Neurosciences, 32(8), 887–903.
Thrash, T., Kapadia, M., Moussaid, M., Wilhelm, C., Helbing, D., Sumner, R. W., et al. (2015). Evaluation of control interfaces for desktop virtual environments. Presence: Teleoperators and Virtual Environments, 24(4), 322–334.
Uttal, D. H., McKee, K., Simms, N., Hegarty, M., & Newcombe, N. S. (2024). How can we best assess spatial skills? Practical and conceptual challenges. Journal of Intelligence, 12(1), 8.
van der Ham, I. J. M., Faber, A. M. E., Venselaar, M., van Kreveld, M. J., & Loffler, M. (2015). Ecological validity of virtual environments to assess human navigation ability. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00637
van der Ham, I. J. M., Claessen, M. H. G., Evers, A. W. M., & van der Kuil, M. N. A. (2020). Large-scale assessment of human navigation ability across the lifespan. Scientific Reports, 10(1).
Vass, L. K., Copara, M. S., Seyal, M., Shahlaie, K., Farias, S. T., Shen, P. Y., & Ekstrom, A. D. (2016). Oscillations go the distance: Low-frequency human hippocampal oscillations code spatial distance in the absence of sensory cues during teleportation. Neuron, 89, 1180–1186.
Ventura, M., Shute, V., Wright, T., & Zhao, W. (2013). An investigation of the validity of the virtual spatial navigation assessment. Frontiers in Psychology, 4, 852–852. https://doi.org/10.3389/fpsyg.2013.00852
Warren, W. H., Rothman, D. B., Schnapp, B. H., & Ericson, J. D. (2017). Wormholes in virtual space: From cognitive maps to cognitive graphs. Cognition. https://doi.org/10.1016/j.cognition.2017.05.020
Weisberg, S. M., Schinazi, V. R., Newcombe, N. S., Shipley, T. F., & Epstein, R. A. (2014). Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(3), 669–682.
Weisberg, S. M., & Newcombe, N. S. (2016). How do (some) people make a cognitive map? Routes, places, and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(5), 768–785.
Wiener, J. M., Carroll, D., Moeller, S., Bibi, I., Ivanova, D., Allen, P., & Wolbers, T. (2020). A novel virtual-reality-based route-learning test suite: Assessing the effects of cognitive aging on navigation. Behavior Research Methods, 52(2), 630–640. https://doi.org/10.3758/s13428-019-01264-8
Wolbers, T., & Hegarty, M. (2010). What determines our navigational abilities? Trends in Cognitive Sciences, 14(3), 138–146.
Xie, Y., Bigelow, R. T., Frankenthaler, S. F., Studenski, S. A., Moffat, S. D., & Agrawal, Y. (2017). Vestibular loss in older adults is associated with impaired spatial navigation: Data from the triangle completion task. Frontiers in Neurology, 8, 173.
Yang, Q., & Kalantari, S. (2024). Real-time continuous perceived uncertainty annotation for spatial navigation studies in buildings. Journal of Building Engineering, 82, 108250.
Yesiltepe, D., Fernández Velasco, P., Coutrot, A., Ozbil Torun, A., Wiener, J. M., Holscher, C., Hornberger, M., Conroy Dalton, R., & Spiers, H. J. (2023). Entropy and a sub-group of geometric measures of paths predict navigability in the environment. Cognition. https://doi.org/10.1016/j.cognition.2023.105443
Zen, D., Byagowi, A., Garcia, M., Kelly, D., Lithgow, B., & Moussavi, Z. (2013, November). The perceived orientation in people with and without Alzheimer's. In 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER) (pp. 460–463). IEEE. https://doi.org/10.1109/ner.2013.6695971