Abstract
Service robots are increasingly present in our daily lives. Their development draws on multiple fields of research, from object perception to object manipulation, and the state of the art continues to advance toward a proper coupling of the two. This coupling is necessary for service robots not only to perform various tasks in a reasonable amount of time but also to continually adapt to new environments and safely interact with non-expert human users. Today's robots can recognize various objects and quickly plan a collision-free trajectory to grasp a target object in predefined settings. In most cases, however, these capabilities rely on large amounts of training data: the robot's knowledge is fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive re-programming by human experts. Such approaches therefore remain too rigid for real-life applications in unstructured environments, where a significant portion of the environment is unknown and cannot be directly sensed or controlled. In such environments, no matter how extensive the training data used for batch learning, a robot will inevitably encounter new objects. Beyond batch learning, then, the robot should be able to continually learn new object categories and grasp affordances from very few training examples on-site. Moreover, beyond robot self-learning, non-expert users should be able to interactively guide the process of experience acquisition by teaching new concepts or by correcting insufficient or erroneous ones. In this way, the robot constantly learns how to help humans in everyday tasks by accumulating experience, without the need for re-programming. In this paper, we review a set of previously published works, discuss advances in service robots from object perception to complex object manipulation, and shed light on current challenges and bottlenecks.
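To make the open-ended learning loop described above concrete, the following is a minimal, illustrative Python sketch of an instance-based learner that acquires new object categories from a few examples and improves from user corrections, with no offline re-training phase. It is not the system reviewed in this paper; the names (`OpenEndedLearner`, `teach`, `classify`, `correct`) and the nearest-instance classifier are hypothetical simplifications chosen only to illustrate the idea.

```python
# A minimal, hypothetical sketch of open-ended category learning (not the
# authors' system): an instance-based learner whose knowledge grows one
# user interaction at a time, with no offline re-training phase.
from collections import defaultdict
import numpy as np


class OpenEndedLearner:
    def __init__(self):
        # label -> list of feature vectors of the object views taught so far
        self.instances = defaultdict(list)

    def teach(self, label, features):
        """A user introduces a new category, or a new example of a known one."""
        self.instances[label].append(np.asarray(features, dtype=float))

    def classify(self, features):
        """Nearest-instance classification over all categories known so far."""
        if not self.instances:
            return None  # the robot has not been taught anything yet
        x = np.asarray(features, dtype=float)
        return min(
            self.instances,
            key=lambda lbl: min(np.linalg.norm(x - v) for v in self.instances[lbl]),
        )

    def correct(self, features, true_label):
        """A user corrects a wrong prediction; storing the labeled example
        makes the same mistake less likely next time."""
        self.teach(true_label, features)


learner = OpenEndedLearner()
learner.teach("mug", [0.9, 0.1, 0.3])        # a few examples suffice to start
print(learner.classify([0.8, 0.2, 0.25]))    # -> "mug"
learner.correct([0.1, 0.8, 0.4], "bowl")     # feedback introduces a new category
```

In this simplified view, teaching and correcting reduce to the same operation of storing a labeled example, which is what allows knowledge to grow incrementally on-site rather than being fixed after a training phase.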