Research on Generating an Indoor Landmark Salience Model for Self-Location and Spatial Orientation from Eye-Tracking Data
Figure 1. AOI divisions for semantic importance.
Figure 2. Eye-tracking experiments. (a) Experimental environment; (b) Number of experimental stimuli in the three tasks; (c) Visual area of the experimental stimuli in task 2; (d) Visual area of the experimental stimuli in task 3.
Figure 3. Eight images of the experimental stimuli in task 1 (a–h).
Figure 4. Experimental stimuli in task 3. (a1,b1,c1,d1) Indoor scene image; (a2,b2,c2,d2) Indoor map for self-location; (a3,b3,c3,d3) Indoor map for orientation.
Figure 5. Accuracy of weighting algorithms.
Figure 6. Comparison of mean difference values among eye-tracking data.
Figure 7. Comparison of the differences in visual salience among stores, elevators, signs and benches between tasks 2 and 3. * means p < 0.05. (a) Gaze duration differences between tasks 2 and 3; (b) Saccade duration differences between tasks 2 and 3.
Figure 8. Comparison of four landmark salience calculation methods between tasks 2 and 3.
Figure 9. Comparison of the indoor map between tasks 2 and 3. (a) Indoor map based on task 2; (b) Indoor map based on task 3.
Abstract
1. Introduction
- Can eye-tracking data be used to construct an indoor landmark salience model? If so, how can the accuracy of the salience results be ensured?
- Are there any differences in landmark salience between self-location and orientation in indoor wayfinding? If differences occur, how can an indoor landmark salience model be built for self-location and orientation?
2. Background and Related Work
2.1. Indoor Landmark Salience Models
2.2. Differences in Landmark Salience during Wayfinding
2.3. Eye-Tracking for Task Differences in Landmark Salience
3. Indoor Landmark Salience Model
3.1. Landmark Salience Based on Eye-Tracking Data
3.1.1. Stimulated Landmark Salience
3.1.2. Eye-Tracking Data Selection
3.1.3. Weighting Algorithms
3.1.4. Calculating Process
3.2. Indoor Landmark Salience Model
3.2.1. Visual Attractiveness
3.2.2. Semantic Attractiveness
3.2.3. Structure Attractiveness
3.2.4. Modelling Process
4. Case Study
4.1. Experimental Design
4.1.1. Apparatus
4.1.2. Procedure
- Task #1 (landmark selection): Assume that you are shopping in a mall. When the experiment begins, you will view eight indoor scene images one at a time. Select the most attractive landmark (store, bench, elevator or sign) in each image and click on it to proceed. There is no time limit for finding the landmark.
- Task #2 (self-localization): You will find your location on a map. First, observe an indoor scene image carefully and memorize as much of the relevant landmark information as possible; you will not be allowed to look at the image again. Then find your location on the indoor map and click on it. Two locations must be found in this phase.
- Task #3 (orientation): You will determine your orientation in the indoor scene image with the assistance of landmarks. Memorize the landmark information related to the route from A to B on the indoor map. Then point out the correct orientation towards B by clicking on the image. Two orientations must be identified in this task. After that, the experiment ends.
4.1.3. Stimuli
4.1.4. Participants
4.2. Results
4.2.1. Landmark Salience Based on Eye-Tracking Data
4.2.2. Differences in Landmark Salience between Self-Location and Orientation
4.2.3. Landmark Salience Model for Self-Location and Orientation
5. Discussion
5.1. Important Factors for the Landmark Salience Model
5.2. Differences in Landmark Salience between Self-Location and Orientation
5.2.1. Differences in the Participants’ Visual Behaviours
5.2.2. Differences in Indoor Environments
5.3. Compared with Previous Research
5.4. Improvements for Current Studies
6. Conclusions and Future Research
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
| Type | Features | Unit/Statistic | Variable Definition |
|---|---|---|---|
| Fixation (total) | Total fixation duration | Second (s) | The total duration of fixations |
| Fixation (total) | Total fixation counts | Count | The total number of fixations |
| Fixation (total) | Total fixation dispersion | Pixel (px) | The total dispersion of fixations |
| Fixation (AOI) | Time to first fixation | Second (s) | The time before the first fixation on an AOI |
| Fixation (AOI) | First fixation duration | Second (s) | The duration of the first fixation on an AOI |
| Fixation (AOI) | Gaze duration | Second (s) | The duration of fixations on AOIs |
| Fixation (AOI) | Fixation dispersion | Pixel (px) | The dispersion of fixations on AOIs |
| Fixation (AOI) | Fixation counts | Count | The number of fixations on AOIs |
| Saccade (total) | Total saccade counts | Count | The total number of saccades |
| Saccade (total) | Total saccade duration | Second (s) | The total duration of saccades |
| Saccade (total) | Saccade amplitude | Degree (°) | The total amplitude of saccades |
| Saccade (AOI) | Saccade counts | Count | The number of saccades on AOIs |
| Saccade (AOI) | Saccade duration | Second (s) | The duration of saccades on AOIs |
| Saccade (AOI) | Saccade amplitude | Degree (°) | The amplitude of saccades on AOIs |
| Pupil (total) | Pupil diameter | Millimeter (mm) | The average left and right pupil diameter |
| Pupil (AOI) | AOI pupil diameter | Millimeter (mm) | The average left and right pupil diameter on AOIs |
| Pupil (AOI) | Pupil difference | Millimeter (mm) | The difference between the overall and AOI pupil diameters |
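For concreteness, the following minimal Python sketch shows how a few of the features above (total fixation duration, gaze duration and pupil difference) could be computed from per-fixation records; the `Fixation` record type and its field names are hypothetical illustrations, not the data format used in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    duration_s: float       # fixation duration in seconds
    pupil_mm: float         # mean of left and right pupil diameter (mm)
    aoi: Optional[str]      # label of the AOI the fixation falls in, or None

def total_fixation_duration(fixations: List[Fixation]) -> float:
    """Total fixation duration (s) over the whole stimulus."""
    return sum(f.duration_s for f in fixations)

def gaze_duration(fixations: List[Fixation], aoi: str) -> float:
    """Gaze duration (s): summed durations of the fixations inside one AOI."""
    return sum(f.duration_s for f in fixations if f.aoi == aoi)

def pupil_difference(fixations: List[Fixation], aoi: str) -> float:
    """Difference between the mean pupil diameter on an AOI and overall (mm)."""
    overall = [f.pupil_mm for f in fixations]
    on_aoi = [f.pupil_mm for f in fixations if f.aoi == aoi]
    if not overall or not on_aoi:
        return 0.0
    return sum(on_aoi) / len(on_aoi) - sum(overall) / len(overall)
```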
Input: eye-tracking data (features f1, …, fn) and the stimulated landmark salience
Output: landmark salience based on eye-tracking data

for each feature fi, with significance pi calculated by one-way ANOVA do
    if pi < 0.05 then
        retain fi
    end
    if pi ≥ 0.05 then
        delete fi
    end
end
/* feature selection */
for the weights calculated by the five weighting algorithms (PLSR, AHP, EWM, SDM and CRITIC) do
    compare the accuracy of each algorithm against the stimulated landmark salience and select the most accurate one
end
/* weighting algorithm comparison */
for the selected weighting algorithm do
    calculate the coefficient of each retained eye-tracking feature, and establish the landmark salience based on eye-tracking data as the weighted combination of those features
end
/* landmark salience based on eye-tracking data */
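As a rough illustration of this calculating process, the sketch below implements the two computational steps in Python: one-way ANOVA feature selection (via scipy) and one of the five weighting schemes compared later, the entropy weight method (EWM). The data layout, the placement of the 0.05 threshold and the min-max normalization are assumptions made for illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import f_oneway

def select_features(feature_groups, alpha=0.05):
    """Keep features whose one-way ANOVA across landmark groups is significant.

    feature_groups: dict mapping a feature name to a list of per-group value
    arrays, e.g. {"gaze_duration": [store_values, sign_values, ...], ...}.
    """
    selected = []
    for name, groups in feature_groups.items():
        _, p = f_oneway(*groups)
        if p < alpha:
            selected.append(name)
    return selected

def entropy_weights(X):
    """Entropy weight method (EWM) over a (landmarks x features) matrix X >= 0."""
    P = X / X.sum(axis=0, keepdims=True)            # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * np.sum(P * np.log(P + 1e-12), axis=0)
    diversification = 1.0 - entropy                 # information carried per feature
    return diversification / diversification.sum()

def eye_tracking_salience(X, weights):
    """Landmark salience as a weighted sum of min-max-normalized features."""
    X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    return X_norm @ weights
```

In the paper, the weights produced by PLSR, AHP, EWM, SDM and CRITIC are compared against the stimulated landmark salience before one scheme is selected; the EWM function here simply stands in for whichever algorithm performs best.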
| Indicator | Property | Unit/Statistic | Measurement |
|---|---|---|---|
| Shape | Shape factor * | Ratio | Shape factor = height / width |
| Shape | Deviation * | Ratio | Deviation = Area_MBR / Façade area |
| Color | Hue error ** | Decimal | |
| Color | Lightness ** | Boolean value | If light, c = 1; else c = 0 |
| Façade area | Façade area * | Square meter (m²) | Façade area = height × width |
| Visibility | Visual distance ** | Meter (m) | Visual distance = min{Perceivable distances} |
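A small sketch of how the visual indicators above could be derived from a landmark's geometry is given below; the field names, and the reading of "Area_MBR" as the area of the minimum bounding rectangle, are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LandmarkGeometry:
    height_m: float                       # facade height (m)
    width_m: float                        # facade width (m)
    mbr_area_m2: float                    # assumed: area of the minimum bounding rectangle (m^2)
    perceivable_distances_m: List[float]  # distances from which the landmark can be perceived

def visual_indicators(g: LandmarkGeometry) -> dict:
    facade_area = g.height_m * g.width_m                # Facade area = height x width
    return {
        "shape_factor": g.height_m / g.width_m,         # Shape factor = height / width
        "deviation": g.mbr_area_m2 / facade_area,       # Deviation = Area_MBR / Facade area
        "facade_area": facade_area,
        "visual_distance": min(g.perceivable_distances_m),
    }
```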
| Indicator | Unit/Statistic | Measurement |
|---|---|---|
| Semantic importance | Ratio | AOI fixation duration / Total fixation duration |
| Explicit marks * | Boolean value | If explicit, 1; else 0 |
| Degree of familiarity | Ratio | Participants familiar with the mark / Total participants |
| Indicator | Unit/Statistic | Measurement |
|---|---|---|
| Number of adjacent routes * | Constant | The number of routes |
| Number of adjacent objects * | Constant | The number of objects |
| Location importance ** | Ratio | 1/… |
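The semantic and structural indicators follow the same pattern; a minimal sketch is shown below (the location-importance formula is not reproduced in the source, so it is taken as a given value here).

```python
def semantic_indicators(aoi_fixation_s: float, total_fixation_s: float,
                        has_explicit_mark: bool,
                        n_familiar: int, n_participants: int) -> dict:
    """Semantic attractiveness indicators as defined in the table above."""
    return {
        "semantic_importance": aoi_fixation_s / total_fixation_s,
        "explicit_marks": 1 if has_explicit_mark else 0,
        "familiarity": n_familiar / n_participants,
    }

def structural_indicators(n_adjacent_routes: int, n_adjacent_objects: int,
                          location_importance: float) -> dict:
    """Structural attractiveness indicators; location importance is passed in
    because its exact '1/...' ratio is not recoverable from the source."""
    return {
        "adjacent_routes": n_adjacent_routes,
        "adjacent_objects": n_adjacent_objects,
        "location_importance": location_importance,
    }
```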
| Type | Features | Mean | SD | F | p |
|---|---|---|---|---|---|
| Fixation (total) | Total fixation duration (s) | 4.861 | 1.010 | 5.135 | 0.032 * |
| Fixation (total) | Total fixation counts | 15.332 | 2.548 | 9.002 | 0.015 * |
| Fixation (total) | Total fixation dispersion | 2105.042 | 716.124 | 2.411 | 0.123 |
| Fixation (AOI) | Time to first fixation (s) | 0.612 | 0.353 | 1.277 | 0.277 |
| Fixation (AOI) | First fixation duration (s) | 0.440 | 0.124 | 0.156 | 0.635 |
| Fixation (AOI) | Gaze duration (s) | 2.637 | 0.723 | 3.713 | 0.041 * |
| Fixation (AOI) | Fixation dispersion | 320.402 | 71.188 | 0.263 | 0.419 |
| Fixation (AOI) | Fixation counts | 5.945 | 1.432 | 6.578 | 0.014 * |
| Saccade (total) | Total saccade duration (s) | 0.561 | 0.093 | 18.351 | 0.001 * |
| Saccade (total) | Total saccade counts | 13.134 | 3.086 | 1.770 | 0.205 |
| Saccade (total) | Saccade amplitude | 81.199 | 19.145 | 1.754 | 0.265 |
| Saccade (AOI) | Saccade duration (s) | 0.292 | 0.063 | 6.613 | 0.004 * |
| Saccade (AOI) | Saccade counts | 7.059 | 3.067 | 0.971 | 0.252 |
| Saccade (AOI) | Saccade amplitude | 49.419 | 16.540 | 1.910 | 0.244 |
| Saccade (AOI) | Regression | 1.480 | 0.565 | 1.138 | 0.094 |
| Pupil (total) | Pupil size | 3.771 | 0.040 | 3.597 | 0.071 |
| Pupil (AOI) | AOI pupil size | 3.875 | 0.041 | 3.783 | 0.068 |
| Pupil (AOI) | Pupil difference | 0.157 | 0.040 | 8.310 | 0.001 * |
| Eye-Tracking Data | PLSR | AHP | EWM | SDM | CRITIC |
|---|---|---|---|---|---|
| Fixation: total fixation duration | 0.005 | 0.172 | 0.108 | 0.187 | 0.142 |
| Fixation: total fixation counts | 0.007 | 0.180 | 0.112 | 0.171 | 0.154 |
| Fixation: gaze duration (s) | −0.034 | 0.088 | 0.206 | 0.107 | 0.135 |
| Fixation: fixation counts | 0.003 | 0.116 | 0.139 | 0.128 | 0.122 |
| Saccade: total saccade duration | −0.212 | 0.137 | 0.166 | 0.143 | 0.150 |
| Saccade: saccade duration | 1.631 | 0.174 | 0.116 | 0.145 | 0.147 |
| Pupil: pupil difference | 1.348 | 0.132 | 0.152 | 0.117 | 0.180 |
| Intercept (C) | 0.128 | --- | --- | --- | --- |
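For reference, one of the five weighting schemes compared above, CRITIC, can be computed from a landmarks × features matrix roughly as follows; the min-max normalization is an assumption, and the study's exact preprocessing is not reproduced here.

```python
import numpy as np

def critic_weights(X: np.ndarray) -> np.ndarray:
    """CRITIC weights for a (landmarks x features) matrix of feature values."""
    # Min-max normalize each feature column to [0, 1].
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    contrast = Xn.std(axis=0, ddof=1)             # contrast intensity per feature
    corr = np.corrcoef(Xn, rowvar=False)          # feature-feature correlations
    conflict = (1.0 - corr).sum(axis=0)           # conflict with the other features
    info = contrast * conflict                    # information carried by each feature
    return info / info.sum()
```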
| AOI | Type | Task 2 | Task 3 | F | p |
|---|---|---|---|---|---|
| AOI1 | Store (cinema) * | 0.851 | 1.105 | 32.152 | 0.001 |
| AOI2 | Escalator * | 0.386 | 0.905 | 9.401 | 0.004 |
| AOI3 | Sign * | 0.513 | 0.423 | 24.152 | 0.001 |
| AOI4 | Bench * | 0.346 | 0.481 | 78.921 | 0.001 |
| AOI5 | Store (drinks) * | 0.413 | 0.789 | 42.115 | 0.001 |
| AOI6 | Store (eyeglass) * | 0.847 | 1.287 | 15.083 | 0.001 |
| AOI7 | Escalator * | 0.502 | 0.727 | 12.502 | 0.001 |
| AOI8 | Sign * | 0.361 | 0.284 | 8.928 | 0.006 |
| AOI9 | Bench * | 0.727 | 0.468 | 16.024 | 0.001 |
| AOI10 | Store (drinks) | 0.405 | 0.469 | 3.152 | 0.032 |
| AOI11 | Store (Herborist) * | 0.665 | 1.074 | 109.293 | 0.001 |
| AOI12 | Store (cosmetic) | 0.693 | 0.618 | 1.526 | 0.423 |
| AOI13 | Elevator | 0.501 | 0.618 | 8.700 | 0.003 |
| AOI14 | Store (jewelry) * | 0.835 | 1.148 | 66.192 | 0.001 |
| AOI15 | Escalator * | 0.212 | 0.355 | 3.124 | 0.041 |
| AOI16 | Store (Dior) * | 0.817 | 1.085 | 28.941 | 0.001 |
| AOI17 | Escalator | 0.299 | 0.332 | 0.982 | 0.614 |
| AOI18 | Store (D&G) | 0.639 | 0.735 | 4.1865 | 0.052 |
| AOI19 | Store (coach) * | 0.339 | 0.723 | 34.512 | 0.001 |

Overall ANOVA between tasks 2 and 3: dF = (1, 36), F = 4.156, p = 0.048.
| Measure | Property | Task 2 coefficient | F | p | Task 3 coefficient | F | p |
|---|---|---|---|---|---|---|---|
| Visual | Shape factor | 0.018 | 10.852 | 0.002 * | 0.005 | 8.977 | 0.004 * |
| Visual | Deviation | 0.107 | 14.465 | 0.000 * | 0.218 | 24.945 | 0.001 * |
| Visual | Hue | 0.060 | 12.388 | 0.001 * | 0.159 | 23.355 | 0.001 * |
| Visual | Brightness | --- | 0.081 | 0.778 | --- | 1.146 | 0.291 |
| Visual | Façade area | 0.0003 | 9.293 | 0.004 * | 0.001 | 8.368 | 0.006 * |
| Visual | Visual distance | −0.011 | 27.756 | 0.000 * | −0.014 | 26.769 | 0.000 * |
| Visual | Intercept | 0.559 | --- | --- | 0.706 | --- | --- |
| Semantic | Semantic importance | 1.259 | 70.211 | 0.000 * | 3.362 | 65.393 | 0.000 * |
| Semantic | Explicit marks | --- | 0.021 | 0.884 | --- | 1.966 | 0.169 |
| Semantic | Degree of familiarity | 0.054 | 3.792 | 0.047 * | −0.098 | 10.829 | 0.002 * |
| Semantic | Intercept | 0.368 | --- | --- | 0.332 | --- | --- |
| Structural | Adjacent routes | 0.048 | 40.4 | 0.000 * | 0.088 | 32.449 | 0.000 * |
| Structural | Adjacent objects | --- | 3.4 | 0.073 | --- | 1.244 | 0.272 |
| Structural | Location | 0.228 | 8.124 | 0.007 * | 0.339 | 5.298 | 0.021 * |
| Structural | Intercept | 0.283 | --- | --- | 0.288 | --- | --- |
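To make the use of these coefficients concrete, here is a small worked sketch applying the Task 2 visual coefficients from the table to an invented landmark; the indicator values and their scaling are hypothetical, and only the visual component is shown (the semantic and structural components would be combined analogously).

```python
# Task 2 visual-attractiveness coefficients taken from the PLSR table above.
TASK2_VISUAL_COEFS = {
    "shape_factor": 0.018,
    "deviation": 0.107,
    "hue": 0.060,
    "facade_area": 0.0003,
    "visual_distance": -0.011,
}
TASK2_VISUAL_INTERCEPT = 0.559

def visual_salience(indicators: dict,
                    coefs: dict = TASK2_VISUAL_COEFS,
                    intercept: float = TASK2_VISUAL_INTERCEPT) -> float:
    """Linear (PLSR-style) combination: intercept + sum(coefficient * indicator)."""
    return intercept + sum(coefs[name] * value for name, value in indicators.items())

# Hypothetical storefront: 4 m high, 6 m wide, closest perceivable distance 25 m.
example = {
    "shape_factor": 4 / 6,
    "deviation": 1.2,
    "hue": 0.3,
    "facade_area": 24.0,
    "visual_distance": 25.0,
}
print(round(visual_salience(example), 3))
```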
| Eye Movement Feature | Task 2 M | Task 2 SD | Task 3 M | Task 3 SD | t | p |
|---|---|---|---|---|---|---|
| Fixation: total fixation duration (s) | 6.636 | 0.955 | 10.694 | 1.335 | −16.089 | 0.001 * |
| Fixation: first fixation duration (s) | 0.335 | 0.239 | 0.339 | 0.230 | −0.105 | 0.918 |
| Fixation: gaze duration (s) | 1.124 | 0.535 | 1.818 | 0.789 | −4.083 | 0.001 * |
| Saccade: total saccade duration (s) | 1.059 | 0.176 | 1.597 | 0.126 | −20.207 | 0.001 * |
| Saccade: saccade duration (s) | 0.255 | 0.104 | 0.314 | 0.153 | −3.734 | 0.001 * |
| Pupil: pupil size | 3.771 | 0.023 | 3.767 | 0.018 | −0.743 | 0.984 |
| Pupil: AOI pupil size | 3.895 | 0.048 | 3.898 | 0.043 | −0.685 | 0.456 |
| Pupil: pupil difference | 0.124 | 0.045 | 0.131 | 0.061 | −0.423 | 0.338 |
Visual attractiveness weights:

| Task | Method | Shape | Deviation | Hue | Brightness | Façade area | Distance | Prominence | Unique label |
|---|---|---|---|---|---|---|---|---|---|
| Task 2 | PLSR | 0.018 | 0.107 | 0.060 | --- | 0.0003 | −0.011 | --- | --- |
| Task 2 | Equal | 0.166 | 0.166 | 0.166 | 0.166 | 0.166 | 0.166 | --- | --- |
| Task 2 | Expert | 0.370 | 0.150 | 0.340 | 0.120 | 0.100 | −0.080 | --- | --- |
| Task 2 | Instance | --- | --- | --- | --- | 0.0565 | --- | 0.3437 | 0.1394 |
| Task 3 | PLSR | 0.005 | 0.218 | 0.159 | --- | 0.001 | −0.014 | --- | --- |
| Task 3 | Equal | 0.166 | 0.166 | 0.166 | 0.166 | 0.166 | 0.166 | --- | --- |
| Task 3 | Expert | 0.350 | 0.210 | 0.280 | 0.230 | 0.090 | −0.160 | --- | --- |
| Task 3 | Instance | --- | --- | --- | --- | 0.0565 | --- | 0.3437 | 0.1394 |

Semantic attractiveness weights:

| Task | Method | Importance | Marks | Familiarity | Uniqueness |
|---|---|---|---|---|---|
| Task 2 | PLSR | 1.259 | --- | 0.054 | --- |
| Task 2 | Equal | 0.334 | 0.333 | 0.333 | --- |
| Task 2 | Expert | 0.290 | 0.380 | 0.330 | --- |
| Task 2 | Instance | --- | 0.0156 | 0.1070 | 0.0408 |
| Task 3 | PLSR | 3.362 | --- | −0.098 | --- |
| Task 3 | Equal | 0.334 | 0.333 | 0.333 | --- |
| Task 3 | Expert | 0.230 | 0.350 | 0.420 | --- |
| Task 3 | Instance | --- | 0.0156 | 0.1070 | 0.0408 |

Structural attractiveness weights:

| Task | Method | Route | Object | Location | Spatial extent | Permanence |
|---|---|---|---|---|---|---|
| Task 2 | PLSR | 0.048 | --- | 0.228 | --- | --- |
| Task 2 | Equal | 0.334 | 0.333 | 0.333 | --- | --- |
| Task 2 | Expert | 0.280 | 0.300 | 0.420 | --- | --- |
| Task 2 | Instance | --- | --- | 0.1988 | 0.0261 | 0.0721 |
| Task 3 | PLSR | 0.088 | --- | 0.339 | --- | --- |
| Task 3 | Equal | 0.334 | 0.333 | 0.333 | --- | --- |
| Task 3 | Expert | 0.250 | 0.280 | 0.470 | --- | --- |
| Task 3 | Instance | --- | --- | 0.1988 | 0.0261 | 0.0721 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).