Abstract
Providing personal, location-dependent services is a promising application in public spaces such as shopping malls. So far, sensors in the environment have reliably detected the current positions of humans, but it is difficult to identify people using these sensors alone. On the other hand, wearable devices can send personal identity information, but precise position estimation remains problematic. In this paper, we propose a method of associating wearable accelerometers with foot tracking results obtained from laser range finders in the environment. We first propose an association method based on the signal correlation between the motion of both feet and the accelerometer. However, in crowded situations, sometimes only one foot of a pedestrian is observed because of occlusion. To cope with this problem, we propose a new evaluation function that focuses on the phase-dependent correlation of cyclic walking behavior. Example results of tracking individuals in the environment confirm the effectiveness of this method.
1 Introduction
Information infrastructure that provides personal and location-dependent services in public spaces like a shopping mall permits a wide variety of applications. For example, such a system could provide the positions of friends who are currently shopping in the mall, and users carrying many bags could call a porter robot, which reaches them by using the location system. To enable location-dependent and personal services, we propose a system that locates and identifies a pedestrian carrying a mobile information terminal anywhere in a crowded environment.
Many kinds of location systems have been studied that provide the positions of pedestrians by using sensors installed in the environment. For example, location systems using cameras and laser range finders (LRFs) can track people in the environment very precisely. However, it is difficult to identify each pedestrian or a person carrying a specific wearable device by using only sensors in the environment.
On the other hand, in ubiquitous computing, many kinds of wearable devices have been used to locate people. Since a location system using ID tags requires the installation of many reader devices in the environment for precise localization, it is not a realistic solution in large public spaces. Wearable inertial sensors are also used to locate people, but the cumulative estimation error is often problematic. For a precise location system, it is important to integrate other sources of information.
In order to locate a pedestrian carrying a specific mobile device anywhere in an environment, a promising approach is to integrate environmental sensors that observe people and wearable sensors that locate the person carrying them. In this paper, we propose a novel method integrating LRFs in the environment and wearable accelerometers to locate people precisely and continuously. Since location systems using LRFs have been successfully applied for tracking people in large public spaces like train stations and the sizes of LRF units are becoming smaller, LRFs are highly suitable for installation in public spaces. Since many cellular phones are equipped with an accelerometer for a variety of applications, users who have a cellular phone do not have to carry any additional device.
The rest of this paper is organized as follows. First, we review previous studies. Then, we discuss a method of integrating LRFs and accelerometers and how it can provide reliable estimation. Finally, we discuss the application of our method to a practical system and present the results of an experimental evaluation.
2 Related works
2.1 Locating pedestrians using environmental sensors
Locating pedestrians has been an important and frequently studied issue in computer vision (Hu et al. 2004). One advantage of using cameras is that they provide rich information, including colors and motion gestures. A problem with cameras is that they suffer from changes in the lighting conditions of the environment. Also, using cameras in public spaces for identification purposes sometimes raises privacy issues.
Laser range finders (LRFs) have recently attracted increasing attention for locating people in public places. As they have become smaller, it has become easier to install them in environments. Since LRFs observe only the positions of people, their installation does not raise privacy issues. Cui et al. (2007) succeeded in tracking a large number of people by observing the feet of pedestrians. Zhao and Shibasaki (2005) also tracked people by using a simple walking model of pedestrians. Glas et al. (2009) placed LRFs in a shopping mall to predict the trajectories of people by observing customers at waist height.
In general, sensors placed in the environment are good at locating people precisely. However, it is difficult to use them to identify pedestrians when they are walking in a crowded environment.
2.2 Locating people by using wearable sensors
In ubiquitous computing, wearable devices have been used to locate people (Hightower and Borriello 2001). Devices that have been studied include IR tags (Want et al. 1992), ultrasonic wave tags (Harter et al. 1999), RFID tags (Amemiya et al. 2004; Ni et al. 2003), Wi-Fi (Bahl and Padmanabhan 2000), and UWB (Mizugaki et al. 2007). If the device ID is registered with the system, the person carrying that specific device can be located and identified. However, tag-based methods require the placement of many reader devices in order to locate people accurately, so the cost of installing reader devices is problematic in large public places. Wi-Fi- and UWB-based methods do not provide enough resolution to distinguish one person in a crowd. Furthermore, if users of the system have to carry additional devices just to use the location service, the cost and inconvenience should also be considered.
Wearable inertial sensors have also been used to locate a person by integrating observations (Bao and Intille 2004; Foxlin 2005; Hightower and Borriello 2001). Since integral drift is problematic, it is important to combine these observations with those of other sensors. Recently, many types of cellular phones have started to incorporate accelerometers, and many people already carry them in their daily lives. Therefore, approaches that use acceleration sensors for locating people can effectively exploit this existing infrastructure.
2.3 Locating people by using a combination of sensors
To locate and identify people in the environment, methods that integrate both environmental sensors and wearable devices have been studied.
Kourogi et al. (2006) integrated wearable inertial sensors, a GPS function, and an RFID tag system. Woodman and Harle (2008) also integrated wearable inertial sensors and map information. Schulz et al. (2003) used LRFs and ID tags to locate people in a laboratory, proposing a method that integrates positions detected using LRFs and identifies people by using sparse ID-tag readers in the environment. Mori et al. (2004) used floor sensors and ID tags to identify people carrying ID tags. These methods focused on gradually identifying people after initially locating their positions roughly using ID tags as they approach reader devices. However, since these methods integrate environmental sensors and ID tags on the basis of their positions, it is difficult to distinguish individuals in a crowded environment when the spatial resolution of the ID tags is insufficient.
In contrast, we integrate LRFs in the environment and wearable sensors on the basis of the motion of people. Since our method is based on the synchronization of motion and does not incorporate the computation of precise positions in the integration process, it does not suffer from the drift problem of inertial sensors.
In previous work (Ikeda et al. 2010), LRFs and wearable gyroscopes were integrated on the basis of the body rotation around the vertical axis observed by both types of sensors. However, it was difficult to distinguish pedestrians moving in a line when their trajectories are similar. Another problem is that method's use of gyroscopes, since cellular phones equipped with gyroscopes are not yet common.
In this paper, to cope with these problems, we propose a new method that extracts features from the bipedal walking pattern. LRFs observe pedestrians at the height of the feet and estimate both the positions of people and their walking rhythms. The wearable accelerometer also observes the walking rhythm. Since walking rhythms differ from person to person, the proposed method can distinguish pedestrians walking in a line, and it requires only an accelerometer in the wearable device.
3 People tracking and identification using LRFs and wearable accelerometers
3.1 Associating signals from environmental and wearable sensors
To locate each person carrying a wearable sensor, we focus on the correlation of signals observed by environmental and wearable sensors. After features of the motion are observed by the two types of sensors, the signals are compared to determine whether they come from the same person.
In this framework, the problem of locating the person with a wearable sensor is reduced to comparing the signal from the wearable sensor with all signals from the people detected by the environmental sensors and then selecting the person with the most similar signal (Fig. 1).
Suppose the feet of pedestrians are tracked by using LRFs in the environment and the motions of both feet are estimated (Fig. 2). Simultaneously, the timing of footsteps is observed by using wearable accelerometers. If the signals from the two kinds of sensors come from the same pedestrian, we can assume that they are highly correlated, since they were originally generated from a common walking rhythm. Indeed, we found that the acceleration signal from the wearable sensor and the accelerations of both feet estimated from the tracking results are highly correlated. In this paper, we focus on walking behavior and propose an association method for signals from the two kinds of sensors based on signal correlation.
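As a concrete illustration of this framework, the following minimal sketch in Python compares a wearable-accelerometer signal against the foot-acceleration signals of all tracked pedestrians and selects the best match. It assumes all signals are already resampled onto a common time base; the function and variable names are illustrative, not taken from the original system.

```python
import numpy as np

def normalize(signal):
    """Zero-mean, unit-variance normalization applied before correlation."""
    s = np.asarray(signal, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def associate(wearable_accel, candidate_foot_accels):
    """Compare the wearable signal with every tracked candidate and return
    the index of the most correlated candidate together with all scores."""
    a = normalize(wearable_accel)
    scores = [float(np.mean(a * normalize(c))) for c in candidate_foot_accels]
    return int(np.argmax(scores)), scores
```

The same argmax selection is used throughout the paper; only the correlation measure changes (Eq. 4 in Sect. 3.4, the phase-weighted Eq. 9 in Sect. 4).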
3.2 Tracking pedestrians' feet by using LRFs
Zhao and Shibasaki (2005) proposed a pedestrian tracking method using LRFs mounted at the height of the feet. By observing the feet of pedestrians, not only the positions of the pedestrians but also the timing of their footsteps can be observed.
Our method expands upon the system described in (Glas et al. 2009) and uses a particle-filter-based algorithm to track feet in the environment (Fig. 3). In our tracking algorithm, a background model is first computed for each sensor by analyzing hundreds of scan frames to filter out noise and moving objects. Points detected in front of this background scan are grouped into segments within a certain size range, and those that persist over several scans are registered as foot detections. Each foot is then tracked by the particle filter using a simple linear motion model.
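The following is a minimal sketch of the kind of particle filter described above: a constant-velocity (linear) motion model with Gaussian process and observation noise, tracking one foot from 2D detections. All parameter values and names are illustrative assumptions, not those of the deployed system.

```python
import numpy as np

rng = np.random.default_rng(0)

class FootTracker:
    """Particle filter with a simple linear motion model (a sketch).
    State per particle: (x, y, vx, vy); observations are 2D foot positions."""

    def __init__(self, init_pos, n=500, pos_noise=0.02, vel_noise=0.1, obs_sigma=0.05):
        self.p = np.zeros((n, 4))
        self.p[:, :2] = init_pos + rng.normal(0, pos_noise, (n, 2))
        self.pos_noise, self.vel_noise, self.obs_sigma = pos_noise, vel_noise, obs_sigma

    def step(self, obs, dt=0.04):  # 25 Hz update rate, as in the experiments
        n = len(self.p)
        # Predict: linear motion plus process noise.
        self.p[:, :2] += self.p[:, 2:] * dt + rng.normal(0, self.pos_noise, (n, 2))
        self.p[:, 2:] += rng.normal(0, self.vel_noise, (n, 2))
        # Weight by Gaussian likelihood of the observed foot position.
        d2 = ((self.p[:, :2] - obs) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.obs_sigma ** 2)) + 1e-12
        w /= w.sum()
        # Resample, then return the expectation over particles
        # (the position/velocity estimate used in the paper).
        self.p = self.p[rng.choice(n, n, p=w)]
        return self.p.mean(axis=0)
```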
Then we compute the velocity and acceleration of each foot from the tracked positions:
$$ {\mathbf{v}}(t) = \frac{{\tilde{\mathbf{x}}}(t) - {\tilde{\mathbf{x}}}(t - \Delta )}{\Delta },\qquad a_{L} (t) = \frac{\left| {\mathbf{v}}(t) \right| - \left| {\mathbf{v}}(t - \Delta ) \right|}{\Delta } $$(1)
where \( {\tilde{\mathbf{x}}}(t) \) is the smoothed position vector, \( {\mathbf{v}}(t) \) is the velocity vector, and \( a_{L} (t) \) is the acceleration of one foot. The suffix L indicates that \( a_{L} (t) \) is an estimate from the LRFs, and \( \Delta \) is the sampling period.
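Under the finite-difference reading of Eq. (1) reconstructed above, the computation is a few lines of numpy. The smoothing of the raw tracked positions (e.g., a moving average) is left outside this sketch and is an assumption.

```python
import numpy as np

def foot_kinematics(smoothed_positions, dt):
    """Estimate foot speed |v(t)| and the scalar acceleration a_L(t) of one
    foot from smoothed tracked positions by finite differences (Eq. 1).
    `smoothed_positions` is an (N, 2) array sampled every `dt` seconds."""
    x = np.asarray(smoothed_positions, dtype=float)
    v = np.diff(x, axis=0) / dt          # velocity vectors v(t)
    speed = np.linalg.norm(v, axis=1)    # |v(t)|
    a_L = np.diff(speed) / dt            # acceleration of the foot
    return speed, a_L
```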
3.3 Observation of acceleration from wearable sensors
To extract the walking rhythm from the wearable accelerometer, we focus on the vertical component of the observed acceleration. The vertical acceleration \( a_{\text{A}} (t) \) is estimated from the three-dimensional acceleration vector \( {\mathbf{a}}(t) \) and the unit vector of the vertical direction \( {\mathbf{e}}_{z} (t) \) as
$$ a_{\text{A}} (t) = {\mathbf{a}}(t) \cdot {\mathbf{e}}_{z} (t) $$(2)
where the suffix A indicates that \( a_{\text{A}} (t) \) is an observation from the accelerometer.
We approximate the vertical direction \( {\mathbf{e}}_{z} (t) \) by averaging \( {\mathbf{a}}(t) \) over a few seconds:
$$ {\hat{\mathbf{e}}}_{z} (t) = \frac{\sum\nolimits_{i = 0}^{L - 1} {{\mathbf{a}}(t - i\Delta )} }{\left\| \sum\nolimits_{i = 0}^{L - 1} {{\mathbf{a}}(t - i\Delta )} \right\|} $$(3)
where L is the number of frames used to compute the average. Since \( {\mathbf{a}}(t) \) includes both the gravity component and the acceleration due to walking, we set the averaging window to several times the walking period to filter out the latter component in each footstep. In the experiments, we set the window to 8 s, and \( {\hat{\mathbf{e}}}_{z} (t) \) was a good approximation for extracting the motion pattern from the acceleration vector sequences.
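A sketch of Eqs. (2)-(3) in numpy: the gravity direction is approximated by a moving average of the raw acceleration over a window several walking periods long (8 s in the experiments), normalized to a unit vector, and the vertical component is the per-sample dot product. The sampling rate and the edge handling of the moving average are assumptions.

```python
import numpy as np

def vertical_acceleration(acc, fs, window_s=8.0):
    """Estimate a_A(t) = a(t) . e_z(t) following Eqs. (2)-(3).
    `acc` is an (N, 3) array of raw accelerations sampled at `fs` Hz."""
    a = np.asarray(acc, dtype=float)
    L = max(1, int(window_s * fs))
    kernel = np.ones(L) / L
    # Moving average of each axis approximates the gravity direction e_z(t).
    g = np.column_stack([np.convolve(a[:, i], kernel, mode='same') for i in range(3)])
    e_z = g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-9)
    return np.einsum('ij,ij->i', a, e_z)  # per-sample dot product a(t) . e_z(t)
```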
Original and smoothed vertical acceleration signals are shown in Fig. 4a. The accelerometer is attached to the left side of the waist. One footstep of the walk takes about 500 ms in the graph, and the timing of the footsteps of both feet is clearly observed. Note that because the accelerometer is attached on the left side, the impact of a left-foot footstep is clearer.
3.4 Associating the motion of both feet with body acceleration
Figure 4b shows the smoothed velocity and acceleration of each foot estimated by our tracking method. When the speed of a pedestrian's idling (swinging) foot decreases and the foot finally lands on the ground, a large vertical acceleration is observed. Therefore, we can expect the impact of landing to be observed while the acceleration of the idling foot is negative. Note that since the LRFs observe at the height of the leg, the velocity does not become zero when the foot lands.
Figure 4c shows the pointwise minimum of the acceleration signals of the two feet. This minimum of acceleration (Fig. 4c) and the vertical acceleration signal (Fig. 4a) are highly correlated (Fig. 4d).
To evaluate the correlation between the two signals, we compute Pearson's correlation between the minimum foot acceleration from the LRFs and the acceleration from the accelerometer:
$$ C = \frac{1}{T}\sum\limits_{t = 1}^{T} {\hat{a}_{\text{A}} (t)\,\hat{a}_{\text{biped}} (t)} $$(4)
where \( \hat{a}_{\text{A}} (t) \) is the normalized acceleration from the wearable accelerometer and \( \hat{a}_{\text{biped}} (t) \) is the normalized signal of \( a_{\text{biped}} (t) \), the minimum of the accelerations of the right and left feet \( a_{\text{left}} (t),\,a_{\text{right}} (t) \) computed from Eq. (1):
$$ a_{\text{biped}} (t) = \min \left( {a_{\text{left}} (t),\,a_{\text{right}} (t)} \right). $$(5)
For each wearable accelerometer, the trajectory of the person who is carrying the sensor is estimated by selecting the trajectory that maximizes Eq. (4).
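Putting Eqs. (4)-(5) together, a sketch of the per-candidate score: take the pointwise minimum of the two foot-acceleration signals, normalize both it and the wearable signal, and average their product (Pearson correlation of normalized signals). This assumes both feet of the candidate are visible; Sect. 4 handles the single-foot case.

```python
import numpy as np

def normalize(s):
    """Zero-mean, unit-variance normalization."""
    s = np.asarray(s, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def biped_correlation(a_wearable, a_left, a_right):
    """Eq. (4): correlation between the wearable vertical acceleration and
    a_biped(t) = min(a_left(t), a_right(t)) from Eq. (5)."""
    a_biped = np.minimum(a_left, a_right)
    return float(np.mean(normalize(a_wearable) * normalize(a_biped)))
```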
4 Evaluating signal correlation depending on the phase of walking
In a crowded scene, sometimes only one foot of a pedestrian is observed because of occlusion. However, computing Eq. (4) with the acceleration of a single foot results in low correlation. This is because the acceleration from the wearable device records the landings of both feet, whereas the trajectory records the motion of only one foot. Figure 5 shows the relationship between the acceleration signals from one foot and from a wearable accelerometer. Within one cycle of the acceleration of a single foot, the signals show both positive and negative correlation depending on the phase of walking. To cope with this problem, we propose an evaluation method that focuses on the phase of the cyclic walking behavior.
4.1 Relationship between acceleration of a foot and body
In order to associate cyclic signals that include both positively and negatively correlated parts depending on the phase, we propose learning weight coefficients that model the signal correlation in each phase. We divide one cycle, defined by the peaks of the foot velocity (see the right-foot velocity in Fig. 5), into 16 phase periods. Figure 6 shows the computed correlation between the observed acceleration signals in each of the 16 phase periods for three subjects. The horizontal axis in Fig. 6 represents the phase period, and the vertical axis is the average of \( a_{\text{A}} (t)a_{\text{L}} (t) \) in each phase period. There is clear positive correlation in the earlier phase periods and negative correlation in the later ones, and the variations among individuals are not significant.
4.2 Associating acceleration from a foot and the body based on phase-dependent weights
4.2.1 Algorithm
1. Smooth the observed velocity of a foot and extract the local maxima and minima. Define one cycle as the period between successive local maxima. Within each cycle, define the phase \( \varphi (t) \) to be zero at a local maximum, \( \pi \) at the local minimum, and linearly interpolated at other times:
$$\varphi (t) = \left\{ \begin{array}{ll} 0 & {\text{velocity}}\;{\text{is}}\;{\text{a}}\;{\text{local}}\;{\text{maximum}}\;{\text{at}}\;t\,( = t_{1} ) \\ \pi & {\text{velocity}}\;{\text{is}}\;{\text{a}}\;{\text{local}}\;{\text{minimum}}\;{\text{at}}\;t\,( = t_{2} ) \\ \frac{t - t_{1} }{t_{2} - t_{1} }\pi & t_{1} \le t < t_{2} \\ \left( 1 + \frac{t - t_{2} }{t^{\prime}_{1} - t_{2} } \right)\pi & t_{2} \le t < t^{\prime}_{1} ,\quad t^{\prime}_{1}\;{\text{is}}\;{\text{the}}\;{\text{next}}\;{\text{local}}\;{\text{maximum}} \end{array} \right.$$(6)
2. Divide one cycle into M phase periods \( \varphi_{k} \,(k = 1 \ldots M) \); we use M = 16 in the experiments. In each period k, compute the average of the product of the normalized signals:
$$ avg_{k} = {\text{average}}\;{\text{of}}\;\hat{a}_{\text{A}} (t)\,\hat{a}_{\text{L}} (t)\;{\text{over}}\;t \in \varphi_{k} $$(7)
where \( \hat{a}_{\text{A}} (t) \) and \( \hat{a}_{\text{L}} (t) \) are the normalized accelerations from the wearable accelerometer and from the LRF tracking result, respectively.
3. Define the weight coefficients w according to these averages; we use \( \theta = 0.25 \) in the experiments:
$$ w(\varphi ) = \left\{ {\begin{array}{ll} { + 1} & {\varphi\;{\text{is}}\;{\text{in}}\;\varphi_{k} \;{\text{and}}\;avg_{k} > \theta } \\ { - 1} & {\varphi\;{\text{is}}\;{\text{in}}\;\varphi_{k} \;{\text{and}}\;avg_{k} < - \theta } \\ 0 & {\text{otherwise}} \\ \end{array} } \right. $$(8)
By using the weight function defined in Eq. (8), we evaluate the correlation between the sensors:
$$ C = \frac{1}{T}\sum\limits_{t = 1}^{T} {w(\varphi (t))\,\hat{a}_{\text{A}} (t)\,\hat{a}_{\text{L}} (t)} . $$(9)
Based on Eq. (9), the trajectory of the user is estimated by selecting the trajectory that maximizes it. Figure 7 shows the computed weight function in each phase period.
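The procedure of Sect. 4.2.1 can be condensed into a short sketch: compute the phase from the extrema of the foot speed (Eq. 6), learn the per-bin weights (Eqs. 7-8, with the sign-symmetric threshold ±θ used in the reconstruction above), and evaluate the weighted correlation (Eq. 9). It assumes the speed signal is already smoothed and that maxima and minima alternate; bin boundaries and edge handling are simplifications.

```python
import numpy as np

def normalize(s):
    s = np.asarray(s, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def phase_from_speed(speed):
    """Step 1 / Eq. (6): phase is 0 at local maxima of the foot speed, pi at
    local minima, linearly interpolated in between, then wrapped to [0, 2*pi)."""
    s = np.asarray(speed, dtype=float)
    d = np.diff(s)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    anchors = np.sort(np.concatenate([maxima, minima]))
    # With alternating extrema, the unwrapped phase grows by pi per extremum;
    # shift so that the first maximum sits at phase 0.
    start = np.searchsorted(anchors, maxima[0])
    unwrapped = np.interp(np.arange(len(s)), anchors,
                          (np.arange(len(anchors)) - start) * np.pi)
    return np.mod(unwrapped, 2 * np.pi)

def learn_weights(a_A, a_L, phi, M=16, theta=0.25):
    """Steps 2-3 / Eqs. (7)-(8): per-bin averages thresholded to {+1, -1, 0}."""
    prod = normalize(a_A) * normalize(a_L)
    bins = np.minimum((phi * M / (2 * np.pi)).astype(int), M - 1)
    avg = np.array([prod[bins == k].mean() if np.any(bins == k) else 0.0
                    for k in range(M)])
    return np.where(avg > theta, 1, np.where(avg < -theta, -1, 0))

def weighted_correlation(a_A, a_L, phi, w, M=16):
    """Eq. (9): phase-weighted correlation, usable when only one foot is tracked."""
    bins = np.minimum((phi * M / (2 * np.pi)).astype(int), M - 1)
    return float(np.mean(w[bins] * normalize(a_A) * normalize(a_L)))
```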
5 Experiments
5.1 Experimental setup
We conducted experiments at a shopping mall in the Asia and Pacific Trade Center in Osaka, Japan (Fig. 8). We located people in a 20-m-radius area of the arcade, which contains many restaurants and shops selling clothing and accessories. People in this area were monitored via a sensor network consisting of six LRFs installed at a height of 20 cm (Fig. 9). We modified a previous system (Kanda et al. 2008) designed for tracking feet and expanded it to incorporate wearable sensors to locate and identify people.
Each foot of a pedestrian in the environment was detected and tracked with a particle filter. By computing the expectation of the particles, we estimated the position and velocity 25 times per second. This tracking algorithm ran very stably and reliably.
Three people in the environment each carried a wearable sensor with a three-axis accelerometer (Fig. 10). In the experiments, the observed acceleration signals were time-stamped and sent to a host PC via Bluetooth. Figure 11 shows the estimated trajectories of feet over 4 s. Sometimes only one foot of a pedestrian is observed.
5.2 Accuracy of identifying pedestrians
We tested with three subjects and four trials. In the experiments, the three subjects each carried a wearable accelerometer and walked among other pedestrians in the shopping mall; there were about ten pedestrians in the environment shown in Fig. 8. Figure 12 shows the first 20 s of the computed correlation between the wearable sensor on a subject and all tracked feet while the subject was walking. Figure 12a–c show typical results for subjects 1, 2, and 3. The colored lines show the correlation for the subject; when the colored line is the highest among all trajectories at a given time, the subject is correctly identified.
In 15 experiments, the pedestrians carrying the sensors were identified in the sequences. Table 1 shows the accuracy of identification at 4, 6, 8, and 10 s after the subject appeared. As more time elapses after the subject appears, the correlation between the signals becomes more reliable and the identification accuracy improves.
6 Discussion
6.1 Time synchronization
Since our method locates people by comparing time sequences, it is important to adjust the clocks of the LRFs and wearable sensors. In the following experiments, the wearable sensor clocks were synchronized with the host PC when they initially established a Bluetooth connection.
Another problem is the delay in the transmission from the wearable sensors to the host PC. In the experiments, signals were sent with timestamps added by the wearable sensors. If the timestamp were set after the signals had been sent (e.g., by the host PC), the results would be affected by sudden transmission delays.
6.2 Privacy issues
When cameras are installed in public spaces, the problem of invasion of privacy is inevitably raised. Since LRFs do not observe faces or any other information that identifies pedestrians, this issue does not arise with our method.
6.3 Integrating wearable gyroscopes
In this paper, we focused on association using wearable accelerometers and LRFs. By using wearable devices equipped with both accelerometers and gyroscopes, we can roughly estimate the locations of people. Though the estimated positions suffer from cumulative error, this location information provides another cue for accurate association when two people walk with similar rhythms. We have already proposed an association method that uses angular velocities (Ikeda et al. 2010). As a next step, we would like to integrate both walking rhythms and location information.
6.4 Scalability
In the experiments, we identified three pedestrians while about ten people were walking in a shopping mall, confirming the feasibility of the proposed method. When two people walk with the same period and phase, the observation from one accelerometer is highly correlated with both trajectories, and ambiguity remains in the association. In principle, assuming that two people never behave in perfect synchronization, all people can be identified after a sufficiently long observation (a larger parameter T in Eq. 9).
However, as the number of people grows, the system needs a longer T and the decision is delayed. To cope with this problem, it is promising to integrate gyroscopes, as discussed in the previous paragraph, to resolve the ambiguity. Another approach is to use a flexible length of T for the association. Using a fixed, larger T results in longer association delays; however, when there are not many people and the computed evaluation function exceeds a fixed threshold for only one pedestrian, we can decide early. The effective T then becomes smaller by associating in this more flexible manner.
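The flexible-length decision described above can be sketched as follows: watch the running evaluation of Eq. (9) for each candidate and commit as soon as exactly one exceeds a fixed threshold, falling back to the full window T otherwise. The threshold value and the interface are assumptions for illustration.

```python
def associate_flexible(running_scores, threshold=0.5, t_max=250):
    """`running_scores[i][t]` is candidate i's Eq. (9) value after t frames.
    Returns (candidate index, frames used); decides early when unambiguous."""
    for t in range(t_max):
        above = [i for i, s in enumerate(running_scores) if s[t] > threshold]
        if len(above) == 1:  # exactly one plausible candidate: decide now
            return above[0], t + 1
    # Ambiguous throughout: use the full window and take the best final score.
    best = max(range(len(running_scores)), key=lambda i: running_scores[i][-1])
    return best, t_max
```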
6.5 The effect of the pose of accelerometer
In the experiments, we attached the wearable acceleration sensors to the waists of the pedestrians. Because we compute the vertical component of the acceleration, the pose of the sensor does not affect our method. However, the acceleration signals differ depending on where the sensor is attached.
We confirmed the differences that arise when sensors are carried in different ways: in a pocket, in a hand, and in a bag (Fig. 13). The shapes of the observed acceleration signals are not exactly the same, but the detected acceleration peaks are still clear, and there is no significant difference in the correlation computation.
7 Conclusion
In this paper, to estimate both the positions and IDs of pedestrians, we proposed a method that associates precise position information from sensors in the environment with reliable ID information from wearable sensors. Since the tracking results of a pedestrian's feet and the body oscillation of the same pedestrian are correlated, we associate the pair of signals that maximizes the correlation between them.
Experimental results for locating people in a shopping mall show the precision of our method. Since LRFs are becoming common and people carry cellular phones that contain accelerometers, we believe that our method is realistic and can provide a fundamental means of location services in public places. In the future, we would like to investigate our method when pedestrians carry cellular phones in different ways. Since wearable devices provide much information other than walking rhythm, we would also like to expand our method to associate observations of various kinds of behaviors.
References
Amemiya T, Yamashita J, Hirota K, Hirose M (2004) Virtual leading blocks for the deaf-blind: A real-time way-finder by verbal-nonverbal hybrid interface and high-density RFID tag space. In: Proceedings of the IEEE virtual reality conference, pp 165–172
Bahl P, Padmanabhan VN (2000) RADAR: an in-building RF-based user location and tracking system. In: Proceedings of the IEEE INFOCOM 2000, vol 2, pp 775–784
Bao L, Intille SS (2004) Activity recognition from user-annotated acceleration data. In: Ferscha A, Mattern F (eds) Proceedings of the pervasive 2004, vol LNCS 3001. Springer, Berlin, pp 1–17
Cui J, Zha H, Zhao H, Shibasaki R (2007) Laser-based detection and tracking of multiple people in crowds. Comput Vis Image Underst 106(2–3):300–312
Foxlin E (2005) Pedestrian tracking with shoe-mounted inertial sensors. IEEE Comput Graph Appl 25(6):38–46
Glas DF, Miyashita T, Ishiguro H, Hagita N (2009) Laser-based tracking of human position and orientation using parametric shape modeling. Adv Robotics 23(4):405–428
Harter A, Hopper A, Steggles P, Ward A, Webster P (1999) The anatomy of a context-aware application. In: Proceedings of the 5th annual ACM/IEEE international conference on mobile computing and networking (Mobicom ‘99), pp 59–68
Hightower J, Borriello G (2001) Location systems for ubiquitous computing. Computer 34(8):57–66
Hu W, Tan T, Wang L, Maybank S (2004) A survey on visual surveillance of object motion and behaviors. IEEE Trans Syst Man Cybern Part C 34(3):334–352
Ikeda T, Ishiguro H, Glas DF, Shiomi M, Miyashita T, Hagita N (2010) Person identification by integrating wearable sensors and tracking results from environmental sensors. In: Proceedings of the IEEE international conference on robotics and automation (ICRA '10), pp 2637–2642
Kanda T, Glas DF, Shiomi M, Ishiguro H, Hagita N (2008) Who will be the customer? A social robot that anticipates people’s behavior from their trajectories. In: Proceedings of the 10th international conference on ubiquitous computing (UbiComp ‘08), pp 380–389
Kourogi M, Sakata N, Okuma T, Kurata T (2006) Indoor/outdoor pedestrian navigation with an embedded GPS/RFID/self-contained sensor system. In: Proceedings of the 16th international conference on artificial reality and telexistence (ICAT '06), pp 1310–1321
Mizugaki K et al (2007) Accurate wireless location/communication system with 22-cm error using UWB-IR. In: IEEE radio and wireless symposium, pp 455–458
Mori T, Suemasu Y, Noguchi H, Sato T (2004) Multiple people tracking by integrating distributed floor pressure sensors and RFID system. In: Proceedings of the of IEEE international conference on systems, man and cybernetics, vol 6, pp 5271–5278
Ni LM, Liu Y, Lau YC, Patil AP (2003) LANDMARC: indoor location sensing using active RFID. In: Proceedings of the first IEEE international conference on pervasive computing and communications (PerCom ‘03), pp 407–415
Schulz D, Fox D, Hightower J (2003) People tracking with anonymous and id-sensors using rao-blackwellised particle filters. In: Proceedings of the 18th international joint conference on artificial intelligence (IJCAI ‘03), pp 921–928
Want R, Hopper A, Falcao V, Gibbons J (1992) The active badge location system. ACM Trans Info Syst 10(1):91–102
Woodman OJ, Harle R (2008) Pedestrian localization for indoor environments. In: Proceedings of the 10th international conference on ubiquitous computing (UbiComp ‘08), pp 114–123
Zhao H, Shibasaki R (2005) A novel system for tracking pedestrians using multiple single-row laser range scanners. IEEE Trans Syst Man Cybern Part A Syst Hum 35(2):283–291
Acknowledgments
This work was supported by KAKENHI (25330314).