
DOI: 10.5555/2936924.2937077

Unsupervised Learning of Qualitative Motion Behaviours by a Mobile Robot

Published: 09 May 2016

Abstract

The success of mobile robots in daily living environments depends on their ability to understand human movements and interact in a safe manner. This paper presents a novel unsupervised qualitative-relational framework for learning human motion patterns using a single mobile robot platform. It is capable of learning human motion patterns in real-world environments in order to predict future behaviours.
This previously untackled task is challenging because of the limited field of view of a single mobile robot, which can observe only one location at any time, resulting in partial and incomplete human detections and trajectories. Central to the success of the presented framework is mapping the detections into an abstract qualitative space and then characterising motion invariant to exact metric position.
The framework was used by a physical robot autonomously patrolling a company's office during a six-week deployment. Experimental results from this deployment are discussed and demonstrate the effectiveness and applicability of the system.
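
To make the idea of "mapping detections into an abstract qualitative space" concrete, the short Python sketch below shows one hypothetical way such an abstraction could look. It is not the authors' implementation: the Detection type, the region map, the heading discretisation and all thresholds are invented for illustration; the paper's actual framework builds its qualitative representation from real human detections gathered by the patrolling robot.

# Minimal illustrative sketch (not the authors' implementation): map metric human
# detections onto an abstract qualitative space so that motion is characterised
# independently of exact metric position. The Detection type, region map,
# thresholds and labels below are hypothetical placeholders.

from dataclasses import dataclass
import math


@dataclass
class Detection:
    t: float   # timestamp in seconds
    x: float   # metric x position in metres
    y: float   # metric y position in metres


# Hypothetical map of named regions as axis-aligned boxes (xmin, ymin, xmax, ymax).
REGIONS = {
    "kitchen":  (0.0, 0.0, 4.0, 3.0),
    "corridor": (4.0, 0.0, 12.0, 2.0),
    "office":   (4.0, 2.0, 12.0, 8.0),
}


def region_of(d):
    """Abstract away exact position: return the symbolic region a detection falls in."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= d.x <= x1 and y0 <= d.y <= y1:
            return name
    return "unknown"


def qualitative_heading(prev, curr, eps=0.05):
    """Discretise the displacement between consecutive detections into a coarse direction."""
    dx, dy = curr.x - prev.x, curr.y - prev.y
    if math.hypot(dx, dy) < eps:
        return "stationary"
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return ("east", "north", "west", "south")[int(((angle + 45.0) % 360.0) // 90.0)]


def to_qualitative_sequence(track):
    """Turn a (possibly partial) metric trajectory into a sequence of (region, heading) symbols."""
    return [(region_of(curr), qualitative_heading(prev, curr))
            for prev, curr in zip(track, track[1:])]


# Two metrically different walks along the corridor produce similar symbol sequences,
# which is what makes comparison of partial, incomplete observations feasible.
track = [Detection(0.0, 4.5, 1.0), Detection(0.5, 5.2, 1.1), Detection(1.0, 6.0, 1.0)]
print(to_qualitative_sequence(track))   # [('corridor', 'east'), ('corridor', 'east')]

Discretising both the region and the coarse direction of motion is one simple way to obtain a representation that is invariant to exact metric position, which is the property the abstract identifies as central to learning from partial observations.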

Cited By

  • (2016) Unsupervised activity recognition using latent semantic analysis on a mobile robot. Proceedings of the Twenty-second European Conference on Artificial Intelligence, pp. 1062-1070. DOI: 10.3233/978-1-61499-672-9-1062. Online publication date: 29-Aug-2016.

Published In

AAMAS '16: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems
May 2016
1580 pages
ISBN:9781450342391

Sponsors

  • IFAAMAS

Publisher

International Foundation for Autonomous Agents and Multiagent Systems

Richland, SC

Author Tags

  1. machine learning for robotics
  2. mobile robotics
  3. qualitative spatial representations
  4. unsupervised learning

Qualifiers

  • Research-article

Funding Sources

  • EU FP7 project 600623 (STRANDS)

Conference

AAMAS '16

Acceptance Rates

AAMAS '16 paper acceptance rate: 137 of 550 submissions (25%)
Overall acceptance rate: 1,155 of 5,036 submissions (23%)
