
Research Article | Open Access

ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification

Published: 18 December 2020

Abstract

This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated by human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots with surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows that uses Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5-96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications for this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
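The touch-position tracking idea described above (dark shadow regions in the under-skin camera image indicate where a hand touches or hovers) can be illustrated with a minimal sketch. The threshold value and the simple centroid heuristic below are illustrative assumptions, not the authors' implementation:

```python
def track_touch_position(frame, threshold=60):
    """Estimate the (row, col) centre of the shadow cast by a touch.

    frame: 2-D list of grayscale pixel values (0 = dark, 255 = bright)
    as seen by a camera beneath the translucent skin. Pixels darker
    than `threshold` are treated as shadow, and the centroid of the
    shadow pixels approximates the touch position. Returns None when
    no shadow pixel is found (no touch or hover).
    """
    shadow = [(r, c)
              for r, row in enumerate(frame)
              for c, value in enumerate(row)
              if value < threshold]
    if not shadow:
        return None
    n = len(shadow)
    return (sum(r for r, _ in shadow) / n, sum(c for _, c in shadow) / n)

# Synthetic frame: uniformly bright except a 10x10 dark "touch" blob.
frame = [[200] * 100 for _ in range(100)]
for r in range(40, 50):
    for c in range(60, 70):
        frame[r][c] = 10
print(track_touch_position(frame))  # (44.5, 64.5)
```

A real system would additionally have to separate touch shadows from hovering shadows (e.g. by intensity) and track multiple blobs over time, as the paper's tracking algorithm does.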

Supplementary Material

hu (hu.zip)
Supplemental movie, appendix, image, and software files for ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification
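The gesture-recognition step in the abstract maps a shadow image to one of six touch gestures; the paper uses a DenseNet classifier for this, but the pipeline shape (image in, gesture label out) can be sketched with a dependency-free nearest-centroid stand-in. The gesture names below are hypothetical placeholders, not the paper's label set:

```python
def fit_centroids(images, labels):
    """Average the flattened training images of each gesture class
    (a toy stand-in for training a CNN classifier)."""
    sums, counts = {}, {}
    for image, label in zip(images, labels):
        flat = [p for row in image for p in row]
        if label not in sums:
            sums[label] = [0.0] * len(flat)
            counts[label] = 0
        sums[label] = [s + p for s, p in zip(sums[label], flat)]
        counts[label] += 1
    return {g: [s / counts[g] for s in sums[g]] for g in sums}

def classify(image, centroids):
    """Return the gesture whose class centroid is closest in pixel space."""
    flat = [p for row in image for p in row]
    def dist(g):
        return sum((p - q) ** 2 for p, q in zip(flat, centroids[g]))
    return min(centroids, key=dist)

# Toy training data: a "poke" casts a small central shadow, while a
# "slap" darkens most of the frame (both labels are hypothetical).
dark, bright = 10, 200
poke = [[dark if 3 <= r <= 4 and 3 <= c <= 4 else bright
         for c in range(8)] for r in range(8)]
slap = [[dark] * 8 for _ in range(8)]
centroids = fit_centroids([poke, slap], ["poke", "slap"])
print(classify(poke, centroids))  # "poke"
```

Unlike this sketch, a learned classifier such as the paper's DenseNet can generalize across hand shapes, touch locations, and the three lighting conditions evaluated in the experiments.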





Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 4
December 2020
1356 pages
EISSN: 2474-9567
DOI: 10.1145/3444864
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Human-robot interaction
  2. Image classification
  3. Shadows
  4. Tactile interaction

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • NSF National Robotics Initiative (NRI) Award


Article Metrics

  • Downloads (Last 12 months)307
  • Downloads (Last 6 weeks)35
Reflects downloads up to 21 Nov 2024

Cited By
  • (2024) Portable Head-Mounted System for Mobile Forearm Tracking. Sensors 24(7), 2227. DOI: 10.3390/s24072227. Online publication date: 30-Mar-2024.
  • (2024) Ring-a-Pose: A Ring for Continuous Hand Pose Tracking. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(4), 1-30. DOI: 10.1145/3699741. Online publication date: 21-Nov-2024.
  • (2024) airTac: A Contactless Digital Tactile Receptor for Detecting Material and Roughness via Terahertz Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1-37. DOI: 10.1145/3678586. Online publication date: 9-Sep-2024.
  • (2024) Towards Smartphone-based 3D Hand Pose Reconstruction Using Acoustic Signals. ACM Transactions on Sensor Networks 20(5), 1-32. DOI: 10.1145/3677122. Online publication date: 16-Jul-2024.
  • (2024) WheelPose: Data Synthesis Techniques to Improve Pose Estimation Performance on Wheelchair Users. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-25. DOI: 10.1145/3613904.3642555. Online publication date: 11-May-2024.
  • (2024) Touch Interaction Classification Using Upper-Body Sensor Suit. IEEE Sensors Journal 24(14), 22720-22732. DOI: 10.1109/JSEN.2024.3407106. Online publication date: 15-Jul-2024.
  • (2023) UnifiedSense: Enabling Without-Device Gesture Interactions Using Over-the-shoulder Training Between Redundant Wearable Sensors. Proceedings of the ACM on Human-Computer Interaction 7(MHCI), 1-25. DOI: 10.1145/3604277. Online publication date: 13-Sep-2023.
  • (2023) Touch-and-Heal. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(2), 1-33. DOI: 10.1145/3596258. Online publication date: 12-Jun-2023.
  • (2023) ShadowTouch: Enabling Free-Form Touch-Based Hand-to-Surface Interaction with Wrist-Mounted Illuminant by Shadow Projection. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-14. DOI: 10.1145/3586183.3606785. Online publication date: 29-Oct-2023.
  • (2023) CAFI-AR. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6(4), 1-23. DOI: 10.1145/3569499. Online publication date: 11-Jan-2023.
