Review
COBOT Applications—Recent Advances and Challenges
Claudio Taesi , Francesco Aggogeri and Nicola Pellegrini *

Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy;
claudio.taesi@unibs.it (C.T.); francesco.aggogeri@unibs.it (F.A.)
* Correspondence: nicola.pellegrini@unibs.it; Tel.: +39-030-3715580

Abstract: This study provides a structured literature review of the recent COllaborative roBOT
(COBOT) applications in industrial and service contexts. Several papers and research studies were
selected and analyzed, observing the collaborative robot interactions, the control technologies and the
market impact. This review focuses on stationary COBOTs that may guarantee flexible applications,
resource efficiency, and worker safety from a fixed location. COBOTs offer new opportunities to
develop and integrate control techniques, environmental recognition of time-variant object location,
and user-friendly programming to interact safely with humans. Artificial Intelligence (AI) and
machine learning systems enable and boost the COBOT’s ability to perceive its surroundings. A deep
analysis of different applications of COBOTs and their properties, from industrial assembly, material
handling, service personal assistance, security and inspection, Medicare, and supernumerary tasks,
was carried out. Among the observations, the analysis outlined the importance and the dependencies
of the control interfaces, the intention recognition, the programming techniques, and virtual reality
solutions. A market analysis of 195 models was developed, focusing on the physical characteristics
and key features to demonstrate the relevance and growing interest in this field, highlighting the
potential of COBOT adoption based on (i) degrees of freedom, (ii) reach and payload, (iii) accuracy,
and (iv) energy consumption vs. tool center point velocity. Finally, a discussion on the advantages
and limits is summarized, considering anthropomorphic robot applications for further investigations.

Keywords: COBOT; control; perception; applications; collaborative robotics


1. Introduction

Robot installations are expected to grow over the next five years, based on a significant dominant sales trend (an annual mean increase of 12% over 2017–2022), despite the global pandemic event and the contractions of some specific markets. COllaborative roBOTs (COBOTs) are classified based on reference frame location as fixed, mobile, and hybrid solutions [1–3]. The first class considers the robot placement in a time-invariant position, while the mobile configuration allows the robot motion. The hybrid architecture is composed of both aforementioned elements; it can move between different tasks and work areas, enabling material transportation (kits, tools, light parts, sub-assemblies). In addition, robots are available with sensors as well as a user interface that recognizes and reacts to an unstructured environment. In this context, automation and AI are impacting workers and job profiles where repetitive or dangerous tasks are prevalent. COBOTs can be programmed without involving experts or highly skilled resources. SMEs (small and mid-size enterprises) are the pivotal players due to the investment leverage that is not widely affordable for pioneering technologies [4]. The obtained flexibility permits SMEs to accomplish productivity enhancements without compromising low-volume production [5], reacting to customer demand variability. On the other hand, large multi-modal factories can rapidly switch across a range of different applications: from oil and gas to aerospace, building, and automotive products [6]. These companies manage the ability to operate with several product lines, employing teams with various skills who are able to reconfigure the layout to respond to a dynamic order. Moreover, considering
the transformation of digital factory models and Industry 4.0 enabling technologies, the
data are gathered at each phase of production from machines/equipment according to ISO
10218 safe interaction in a collaborative workspace [4,7,8]. Data are then aggregated and
processed to optimize the entire production process [9]. For instance, the gripping force or
the trajectory of a robot arm can be updated if the digital twin estimates an enhancement in
production performance in terms of safety, quality or production indicators [10].
The most involved industrial sectors were the automotive and electronic fields, which
accounted for over 60% of global new installations. Considering the 2022 survey of the
International Federation of Robotics (IFR), the results show that the China region is the
main COBOT producer, with 52 models representing 26.4% of the global assessment. The
second country is South Korea, with 14.7%, followed by Japan, with 11.2% of the total.
The mentioned countries represent 52.3% of the global COBOT models. The remaining
percentage is divided into two groups: (i) the United States of America, Germany, Switzer-
land, and Denmark (29.5% of the total) and (ii) Italy, United Kingdom, France, Canada, and
India (17.5% of the total).
Although the literature presents various COBOT applications in an industrial context,
further studies are required to investigate the recent advances. In particular, the increase in
COBOT abilities shows the need for a set of guidelines to permit a valuable comparison. In
this work, the authors aim to present a review of the recent state-of-the-art innovations,
classifying the type of applications and the interaction with human beings, highlighting
the practical implications. Finally, we present a market classification of COBOT features,
including the degrees of freedom (DoF), reach volume, payload, position accuracy, and
intention recognition.
The paper is organized as follows: Section 2 describes the methodology employed
in conducting the literature review, including the search strategy, selection criteria and
screening process. It provides a detailed account of the steps taken to ensure the compre-
hensive and systematic identification of relevant articles. Section 3 reports the various
applications of COBOTs in the manufacturing and service industries. It examines how
COBOTs are employed in different contexts, such as manufacturing processes, material
handling, personal assistance, security and inspections, and medical applications. Section 4
presents the advancements and developments in the field of human–robot interaction
(HRI) within the context of COBOTs. It explores different aspects of HRI, including control
interfaces, programming, learning techniques, virtual reality systems, and intention recog-
nition. The review highlights the latest research and innovations that aim to improve the
collaboration, communication, and mutual understanding between humans and COBOTs.
Section 5 presents the COBOT market analyses examining the different types of COBOTs
offered by manufacturers and analyzing their features, capabilities, and trends. The review
provides insights into the current state of the COBOT market, key players, market share,
and emerging trends in terms of technological advancements and adoption rates. Section 6,
Discussion, and Section 7, Conclusion, summarize the review work and future challenges.

2. Collaborative Robot Architecture Frame


In the last decades, collaborative robots have attracted wide interest from academic
researchers to industrial and service operators [11,12]. The definitions of collaborative
robot were given in the 1990s. The initial concept was a passive mechanism supervised by
a human operator [13,14].
The literature provides works that study three-dimensional workspace sharing, collaborative and cooperative tasks, programming, and interaction. Additional factors regarding the work area layout that are important to consider when reviewing COBOT applications are:
• Coexistence: the work areas need to be defined without overlapping zones. The human operator and the robot can perform the activities separately.
• Synchronization: the human and the robot share the work environment with independent tasks.
• Cooperation: the human and the robot share the work environment and the task execution is in a step-by-step procedure.
• Collaboration: the human and robot share the work area and the task concurrently.
Referring to the collaboration modes identified by the robot safety standard ISO 10218, Figure 1 shows the four scenarios: (i) Safety rated monitoring stop, (ii) Hand guiding, (iii) Speed and separation monitoring, and (iv) Force and torque limitation.

Figure 1. COBOT collaboration with operator: Safety rated monitoring stop (a); Hand guiding (b); Speed and separation monitoring (c); Force and torque limitation (d).

To evaluate the practical applications, this review focuses on three macro-elements:
• Safety: COBOTs are designed to work safely in the same workspace occupied by an operator, detecting and reacting to the risk of accidents or injuries.
• Flexibility: COBOTs can be reconfigured to execute a set of unknown tasks.
• User-friendliness: COBOTs are equipped with intuitive interfaces to program and operate them without requiring extensive technical knowledge.
The purpose of this work is to conduct a literature review on the topic of collaborative
robotics in manufacturing and service applications from the period 1996 to 2023. The review
focuses on identifying relevant publications through an extensive search using specific
keywords, such as “COBOT”, “collaborative robotics”, and terms related to manufacturing,
assembly, assistance, and medicine. The research involved multiple databases, including
Web of Science, Scopus, and PubMed, to ensure a comprehensive coverage of the relevant
literature. The search strategy involved querying the selected databases using the identified
keywords and applying relevant filters, such as publication period and the subject area
of engineering. The intention was to retrieve publications that specifically addressed
collaborative robotics in manufacturing and service applications. The search resulted in
423 initial results, which were then subjected to further screening. To refine the list of
relevant publications, a two-stage screening process was employed. In the first stage,
titles and abstracts were reviewed to eliminate duplicate articles of the same research.
In the second stage, the selection criteria emphasized articles that presented case studies rather than
simulations. As a result, 98 articles were identified for full-text review. The reviewed
articles embraced a range of topics related to collaborative robotics in both manufacturing
and service applications. The manufacturing applications primarily focused on:
• Manufacturing processes: articles that focus on the use of collaborative robots in
various manufacturing processes, such as assembly lines and welding.
• Material handling: articles that specifically address the application of collaborative
robots in material handling tasks, including picking, sorting, and transporting objects.
In the service applications domain, the articles discussed:
• Personal assistance: articles that explore the use of collaborative robots in providing
assistance to individuals in tasks such as household chores or caregiving.
• Security and inspections: articles that examine the application of collaborative robots
in security-related tasks, such as surveillance, monitoring, and inspections in various
settings.
• Medicare: articles that discuss the utilization of collaborative robots in healthcare and
medical environments, including patient care, surgical assistance, and rehabilitation.
Furthermore, the article selection highlighted the importance of the interaction be-
tween humans and collaborative robots, emphasizing its significance within collaborative
robotics. Specific focus was given to articles that discussed human interactions with
collaborative robots. This included four key areas:
• Control interface: articles that investigate different interfaces and control mechanisms
for humans to interact and communicate with collaborative robots effectively.
• Intention recognition: articles that study how to define techniques, algorithms, and
sensor systems used to enable robots to recognize and understand human intentions.
• Programming and learning: articles that explore methods and techniques for program-
ming and teaching collaborative robots, including machine learning, programming
languages, and algorithms.
• Virtual reality perspectives: articles that discuss the potential of virtual reality systems
in enhancing human–robot interactions and collaboration, such as immersive training
environments and augmented reality interfaces.
Finally, to examine applications and human interactions, the review also prioritized the
analysis of core technologies that support and enable collaborative robotics. The selected
technologies include bioelectric interfaces and force, impedance, and visual sensors.

3. Classification of COBOT Applications


An initial classification of COBOT applications is established on the device usage
context: industrial (assembly and handling tasks) or service (personal assistance, security,
and Medicare).

3.1. Industrial Application of Collaborative Robots: Assembly


In manufacturing and assembling processes, production depends on the availability
of tools, human labor, and machinery. The efficiency determines the lead time and the
product quality. During manufacturing processes, various repetitive activities that cause
fatigue in humans are often involved. Therefore, to eliminate employee risks and fatigue, it
is necessary to develop robots that would complement human labor in heavy or repetitive
work. Levratti, A. et al. introduce a modern tire workshop assistant robot which can bear
heavy wheels and transfer them to any spot in the workshop, and can be interacted with
either via gestures or tele-operatively through a haptic interface [15]. Further, Peternel,
L. et al. propose a method to enable robots to adapt their behavior to human fatigue in
human–robot co-manipulation tasks. The online model is used to estimate human motor
fatigue, and when a specific level is discerned, the robot applies the acquired ability to
accomplish the challenging phase of the task. The efficacy of the proposed approach is
evidenced by trials on a real-world co-manipulation task [16].
In the assembly domain, COBOTs are employed to support the assembly of complex
products. Cherubini, A. et al. present a collaborative human–robot manufacturing cell
for homokinetic joint assembly, in which the COBOT switches between active and passive
behaviors to lighten the burden on the operator and to comply with his/her needs. The
approach is validated in a series of assembly experiments, and it is fully compatible with
safety standards [17]. Many papers discuss how humans and robots can work simultane-
ously to improve the efficiency and complexity of assembly processes. The work of Tan, J.
T. C. et al. studies the design of COBOTs in cellular manufacturing. Task modeling, safety
development, mental workload, and man-machine interface are all studied to optimize the
system design and performance [18]. Krueger, J. et al. also look at logistic and financial
aspects of cooperative assembly, such as efficient component supply [19]. The study of
Erden, M. S. et al. presents an end-point impedance measurement of the human hand
while performing welding interactively with the KUKA robot [20]. A paper discusses
human–robot cooperation in precise positioning of a flat object on a target. Algorithms
were developed to represent the cooperation schemes, and these were evaluated using a
robot prototype and experiments with humans. Furthermore, the main challenge of Wojtara,
T. et al. is in regulating the robot-human interaction, as the robot interprets signals from the
human in order to understand their intention [21]. Morel, G et al. define a control algorithm
combining visual servo control and force feedback within the impedance control approach
to perform peg-in-hole insertion experiments with a seven-axis robot manipulator [22].
Magrini, E. et al. present a framework for guaranteeing human safety in robotic cells that
enable harmonious coexistence and dependable interplay between humans and robots.
Certified laser scanners are also employed to observe human–robot proximity in the cell,
while safe communication protocols and logical units are utilized for secure low-level robot
control. Furthermore, a smart human-machine interface is included to facilitate in-process
collaborative activities, as well as gesture recognition of operator instructions. The frame-
work has been tested in an industrial cell, with a robot and an operator closely examining a
workpiece [23]. Another critical application of collaborative robots in manufacturing is the
elimination of redundancy in operations. For most manufacturing activities, the repetitive
processes often come towards the end of the production activities. During these activities,
a series of other repetitive actions are performed. To ensure higher quality and uniformity,
polishing and the lifting of assembly parts can be assigned to collaborative robots [24].
Machine learning combined with collaborative robots ensures consistency in quality and cycle time when accomplishing industrial tasks. G. Michalos et al. highlight how
learning control techniques are essential in human–robot collaboration for better handling
of materials. They implement control techniques through collaborative robots that are
human-centered with neural networks, fuzzy logic control, and adaptive control forms
as the basis for ensuring collaborative robots’ dependable material-handling ability. Like
humans, collective human-centered robots need a logical interpretation of situations as they present themselves in order to correctly handle related risk issues [25]. A robot should take the
initiative during joint human–robot task execution. Three initiative conditions are tested in
a user study: human-initiated, reactive, and proactive. Results show significant advances
in proactive conditions [26].

3.2. Industrial Application of Collaborative Robots: Material Handling


The application of collaborative robots in material handling provides significant
benefits. Material handling processes can be complex, involving multiple stages and
various types of equipment. Coordinating these processes and ensuring that they are
executed correctly can be challenging. Donner, P. and Buss, M. present a controller that
can actively dampen undesired oscillations, while allowing desired oscillations to reach
a desired energy level. In the paper, real-world experiments show the positive results in
interaction with an operator [27]. Dimeas, F. et al. work on a method to detect and stabilize
unstable behavior in physical human–robot interactions using an admittance controller
with online adaptation of the admittance control gains [28]. Deformable materials are challenging to handle. Kruse, D. et al. discuss a novel approach to robotic manipulation of
highly deformable materials, using sensor feedback and vision to dictate robot motion. The
robot is capable of contact sensing to maintain tension and equipped with a head-mounted
RGBD sensor to detect folds. The combination of force and vision controllers allows the
robot to follow human motion without excessive crimps in the sheet [29].
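
To make the admittance idea above concrete, the following minimal one-degree-of-freedom sketch couples an admittance model with a heuristic online damping adaptation in the spirit of Dimeas et al. [28]; the adaptation rule and all numerical constants are illustrative assumptions, not the published controller.

```python
import numpy as np

def admittance_step(f_ext, v, m, d, dt):
    """One Euler step of the 1-DoF admittance model m*dv/dt + d*v = f_ext."""
    a = (f_ext - d * v) / m
    return v + a * dt

def adapt_damping(d, f_window, d_min=5.0, d_max=80.0, gain=2.0):
    """Heuristic adaptation (assumed): the standard deviation of a short
    force window is used as an oscillation proxy; stronger oscillation
    pushes the damping upward to restore stable physical interaction."""
    target = d_min + gain * float(np.std(f_window))
    return float(np.clip(0.9 * d + 0.1 * target, d_min, d_max))

# Illustrative run: the robot velocity follows a mock human force signal.
dt, m, d, v = 0.002, 3.0, 10.0, 0.0
forces = 20.0 * np.sin(2 * np.pi * np.arange(0.0, 2.0, dt))
for i, f in enumerate(forces):
    v = admittance_step(f, v, m, d, dt)
    if i >= 25:
        d = adapt_damping(d, forces[i - 25:i])
```

In such a scheme, a virtual mass-damper mediates between the measured human force and the commanded robot velocity, so stability can be tuned online without re-planning the task.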
Gams et al. have extended the dynamic movement primitives (DMPs) framework
in order to enable dynamic behavior execution and cooperative tasks that are bimanual
and tightly coupled. To achieve this, they proposed a modulation approach and evaluated
it for the purpose of interacting with objects and the environment. This permits the
combination of independent robotic trajectories, thereby allowing implementation of an
iterative learning control algorithm to execute bimanual and tightly coupled cooperative
tasks. The algorithm is used to learn a coupling term, which is then applied to the original
trajectory in a feed-forward manner, thereby adjusting the trajectory to the desired positions
or external forces [30].
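
A compact sketch of this coupling-term idea follows: a one-dimensional dynamic movement primitive whose additive coupling term c(t) is refined trial by trial with an iterative-learning-style update. The forcing term is omitted for brevity and the learning gain is an assumption; this is not the exact formulation of [30].

```python
import numpy as np

class CoupledDMP:
    """1-D dynamic movement primitive with an additive coupling term c(t)
    learned iteratively from the previous trial's error (illustrative)."""

    def __init__(self, y0, goal, n=1000, tau=1.0, az=25.0):
        self.y0, self.g, self.n, self.tau, self.az = y0, goal, n, tau, az
        self.bz = az / 4.0            # critically damped spring-damper
        self.c = np.zeros(n)          # coupling term, initially zero

    def rollout(self, dt=0.001):
        y, z, traj = self.y0, 0.0, np.empty(self.n)
        for i in range(self.n):
            dz = self.az * (self.bz * (self.g - y) - z) + self.c[i]
            z += dz * dt / self.tau
            y += z * dt / self.tau
            traj[i] = y
        return traj

    def ilc_update(self, error, k_c=40.0):
        """Feed-forward ILC update: add a scaled copy of the trial error."""
        self.c += k_c * error

# Illustrative use: shape the trajectory toward a reference over trials.
dmp = CoupledDMP(0.0, 1.0)
reference = np.linspace(0.0, 1.0, dmp.n) ** 0.5
for trial in range(10):
    dmp.ilc_update(reference - dmp.rollout())
```

The learned coupling term is applied in a feed-forward manner on the next trial, which mirrors the adjustment of the trajectory to desired positions or external forces described above.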

3.3. Service Application of Collaborative Robots: Personal Assistance


The application of collaborative robots in personal assistance has advanced over the
years because of increased artificial intelligence technology that allows robots to take
over some activities that humans previously concentrated on. Because of the ability of
collaborative robots to operate in a logical and sequential manner, they have, in many ways,
become personal assistants to human beings in handling various issues. For this scope,
Bestick, A. et al. estimate personalized human kinematic models from motion capture
data, which can be utilized to refine a variety of human–robot collaborative scenarios that
prioritize the comfort and ergonomics of a single human collaborator. An experiment
involving human–robot collaborative manipulation is conducted to evaluate the approach,
and results demonstrate that when the robot plans with a personalized kinematic model,
human subjects rotate their torsos significantly less during bimanual object handoffs [31].
In healthcare, collaborative robots can assist healthcare professionals in various tasks,
such as patient monitoring, medication management, and rehabilitation exercises. They
can also help patients with limited mobility to perform daily activities, such as dressing,
bathing, and grooming. Collaborative robots can provide assistance to elderly people living
independently or in care homes.
Moreover, for persons whose movement is restricted because of health complica-
tions, facilities have developed robots that help such individuals in their movement. The
pilot study of Kidal et al. investigates human factors associated with assembly cells
for workers with cognitive disabilities. Preliminary findings indicate that personalized
human-automation load-balancing strategies and collaborative robots have the potential to
empower workers to complete complex assembly tasks. Design requirements for assembly
cells are contrasted with those for the regular workforce to ensure that they are optimized
for the needs of workers with cognitive disabilities [32]. As personal assistants to older
people, collaborative robots help individuals with day-to-day tasks. The paper edited by
Bohme, H. J. et al. presents a scheme for human–robot interaction that can be used in
unstructured, crowded, and cluttered environments, such as a mobile information kiosk
in a home store. The methods used include vision-based interaction, sound analysis, and
speech output, and they are integrated into a prototypical interaction cycle. Experimental
results show the key features of the subsystems, which can be applied to a variety of service
robots, and future research will focus on improving the tracking system, which is currently
limited to people facing the robot [33]. Besides the home-based robotics assistance activities,
personal assistance robots have also been applied in the telecommunication and construction industries. In telecommunications, collaborative robots have been essential
in assisting the subscribers of a particular telecommunication authority. As a personal
assistant to the subscriber, the collaborative robot forwards and responds to the calls when-
ever the subscriber is offline or on another call. Through relaying relevant information
such as voice notes, personal assistance robots enable individuals to receive information
about all the calls they missed while offline. Finally, robots are essential for human labor as
personal assistants in construction. The robots help engineers lift material, create a safer
work environment, enhance the quality of outcomes, and make the whole process more
cost-effective [34].

3.4. Service Application of Collaborative Robots: Security and Inspection


The security context shows technological advancements in accordance with the ap-
plication of collaborative robots for persons to be effectively protected against any form
of attack. Inspection robots have been developed to help in detecting illegal materials
before they are smuggled into public or private places. In most protected sensitive areas
such as international airports, collaborative inspection robots are an essential layer of
security measures. The robots used in these areas of security use X-rays to scan passengers’
luggage to detect any illegal objects, and raise alarms [35]. The inspection activities of
collaborative robots have enhanced the ability of military personnel to detect and neutralize terrorist threats, for instance when weapons of mass destruction are detected by the robots during border inspections.
To further complement security inspection, some security inspection robots have been
developed and programmed to aid in defusing detected threats, such as bombs, that might
be too risky to be handled by human operators. Murphy, R. R. offers an instructional guide
on the utilization of robots in urban search and rescue missions, as well as an examination
of the challenges in combining humans and robots. Their paper further presents a domain
theory on searches, which is composed of a workflow model and an information flow
model [36].
Besides security inspection, robots are also crucial as human co-workers for prod-
uct and process inspection. During manufacturing and assembly processes, inspection
robots are used to visually inspect flaws in every stage of production. Most industrial
inspection collaborative robots are often designed with either 2-D or 3-D vision sensors [37].
The installation of 2-D and 3-D sensors enables collaborative robots to conduct efficient
accuracy-based inspections that ensure all requirements for each production stage are met [38]. Because of the increased ability of COBOTs to evaluate various aspects during
the inspection, they have increasingly been adopted in the practical transport system to
assess the safety of using a particular means of transport. For this purpose, Tsagarakis
et al. present a humanoid robot platform that has been exploited to work in representative
unstructured environments [39].

3.5. Service Application of Collaborative Robots: Medicare


In Medicare, the collaborative treatment process between human Medicare profes-
sionals and robots has become popular. Patient handling has been one of the demanding
responsibilities, causing musculoskeletal issues among Medicare professionals who rely
on their physical strength to discharge their duties [40]. Notably, applying collaborative
robots has been essential in addressing such challenges. In most modern facilities, nurses
have been trained to collaborate with robots in providing services such as muscle massage
and fixing of broken bones. The application of medical COBOTS in fixing broken limbs has
ensured greater accuracy in restoring the mobility of individuals after sustaining multiple
fractures of the limbs. Therefore, collaborative robots are a significant breakthrough in orthopedic medical facilities.
Moreover, collaborative robots are extensively used in surgical operations. For most
surgical doctors, working collaboratively with robots during operations ensures a higher
level of operation precision, flexibility, and control [41]. Furthermore, adopting COBOTS
in surgical processes facilitates the provision of 3-D vision via the robot vision, thus
allowing doctors to see the operation site better and reducing error that is caused by lack of
proper visibility during the operation. Therefore, through collaborative robots, surgeons
can perform delicate and complex procedures such as organ implantation that may be
difficult or impossible if done only through collaboration with other human surgeons. The
relationship between force and motion is a critical factor in conveying intended movement
direction. Mojtahedi, K. aims to understand how humans interact physically to perform
motor tasks such as moving a tool [42].
COBOT prosthetic applications are becoming increasingly popular in the field of
prosthetics. The technology is used to create custom-fit robotic prosthetic arms and hands,
allowing users with amputations or other physical impairments the ability to interact
with the environment in an innovative way. Vogel, J. presents a robotic arm/hand system
that is controlled in 6-D Cartesian space through measured human muscular activity.
Numerical validation and live evaluations demonstrate the validity of the system and its
potential applications [43]. An incremental learning method is used to control a robotic
hand prosthetic using myoelectric signals. The approach of Gijsberts, A. is effective and
applicable to this problem, by analyzing its performance while predicting single-finger
forces. They tested this method on a robotic arm and the subject could reliably grasp,
carry and release everyday objects, regardless of the signal changes [44]. Electrical signals
from the muscles of the operator can be employed as the main means of information
transportation. The work of Fleischer, C. and Hommel, G. presents a human-machine
interface to control exoskeletons. A biomechanical model and calibration algorithm are
presented, and an exoskeleton for a knee joint support is designed and constructed to
verify the model and investigate the interaction between operator and machine [45]. De
Vlugt, E. describes the design and application of a haptic device to study the mechanical
properties of the human arm during interaction with compliant environments [46]. With the
same aim, Burdet, E. found that humans learn to manipulate objects and tools in physical
environments by compensating for any forces arising from the interaction. This is achieved
by learning an internal model of the dynamics and by controlling the impedance [47].
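
The myoelectric pipeline recurring in these prosthetic works can be summarized by the classic envelope-plus-proportional-mapping chain sketched below; the cutoff frequencies and calibration values are generic assumptions, not the specific methods of [43–45].

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs=1000.0, band=(20.0, 450.0), lp=4.0):
    """Classic myoelectric envelope: band-pass, rectify, low-pass."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    rect = np.abs(filtfilt(b, a, emg))
    b, a = butter(2, lp / (fs / 2), "low")
    return filtfilt(b, a, rect)

def grip_command(envelope, rest, mvc):
    """Proportional control: map the envelope to a 0..1 closure level,
    calibrated between rest activity and maximum voluntary contraction."""
    return np.clip((envelope - rest) / (mvc - rest), 0.0, 1.0)

# Illustrative use with a mock recording at 1 kHz.
emg = np.random.randn(5000)
env = emg_envelope(emg)
cmd = grip_command(env, rest=np.percentile(env, 10), mvc=env.max())
```

Proportional mappings of this kind are what allow users to modulate grasp force continuously rather than triggering discrete open/close states.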

3.6. Supernumerary Robotics


Supernumerary robotic limbs (SRLs) have become increasingly popular tools for augmenting the
manipulation and locomotion capabilities of humans [48]. They are designed to provide
additional degrees of freedom that need to be controlled independently or simultaneously
with respect to biological limbs. A bilateral interface between the robot and the operator
is necessary for proper functioning, wherein control signals are acquired from the human
without interference with the biological limbs, and feedback is provided from the robot
to the human. SRLs have been developed for various purposes, for instance, legs, arms,
hands, and fingers. In the work published by Luo, J. et al., the authors face the challenge
of providing a solution that allows an individual operator to accomplish overhead tasks
with the assistance of a robotic limb. To address this challenge, the authors propose a
balance controller for the SuperLimb wearable robotic solution, utilizing a decomposition
methodology to decouple joint torques of the SuperLimb and the interaction forces. Addi-
tionally, a force plate is used to measure the center of pressure position as an evaluation
method of the standing balance [49]. In 2012, Baldin L. et al. presented a novel approach
to using a compliant robot to reduce the load on a human while performing physical
activities. The robot is attached to the subject’s waist and supports their body in fatiguing
postures, allowing them to sustain those postures with less effort. The team conducted a
mathematical analysis to optimize the robot’s posture and joint torques, thereby decreasing
the load on the individual. Results from numerical simulations and experiments showed
that the proposed method was successful in reducing the workload of the subject [50].
The work of Parietti, F. et al. presents a new approach to physically assisting a human
with a wearable robot. Supernumerary robotic limbs (SRLs) are attached to the waist of
the human to support their body in fatiguing postures, such as hunching over, squatting,
or reaching the ceiling. The SRL is able to take an arbitrary posture to maximize load
bearing efficiency, rather than constrained movements that leg exoskeletons require. A
methodology for supporting the human body is described and a mathematical analysis of
load bearing efficiency is conducted. Optimal SRL posture and joint torques are obtained
to minimize the human load. Numerical and experimental results of a prototype SRL
demonstrate the effectiveness of this method [51].
Recent advancements in robotic technology have proposed SRLs as a potential solution
to reduce the risk of work-related musculoskeletal disorders (WMSD). SRLs can be worn by
the worker and augment their natural ability, thus providing a new generation of personal
protective equipment. For instance, a supernumerary robotic upper limb allows for indirect
interaction with hazardous objects, such as chemical products or vibrating tools, thus
reducing the risks of injury associated with joint overloading, bad postures, and vibrations.
Within this perspective, Ciullo et al. present a supernumerary robotic limb system to reduce
vibration transmitted along the arms and minimize load on the upper limb joints. An
off-the-shelf wearable gravity compensation system is integrated with a soft robotic hand
and a custom damping wrist, designed based on a mass-spring-damper model. The efficacy
of the system is experimentally tested in a simulated industrial work environment, where
subjects perform a drilling task on two materials. Analysis of the results according to ISO
5349 show a reduction of 40–60% in vibration transmission with the presented SRL system,
without compromising time performance [52].
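
A back-of-the-envelope version of the mass-spring-damper reasoning behind such a damping wrist is sketched below: the transmissibility of a base-excited single-degree-of-freedom system estimates how much drill vibration reaches the arm. All parameter values are illustrative assumptions, not those of [52].

```python
import numpy as np

def transmissibility(f_exc_hz, m, k, c):
    """|X/Y| of a base-excited mass-spring-damper at excitation f_exc_hz."""
    wn = np.sqrt(k / m)                    # natural frequency [rad/s]
    zeta = c / (2.0 * np.sqrt(k * m))      # damping ratio
    r = 2.0 * np.pi * f_exc_hz / wn        # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return float(np.sqrt(num / den))

# Isolation requires r > sqrt(2): a soft wrist whose natural frequency
# (~10 Hz here) sits well below a ~100 Hz drilling vibration attenuates it.
print(transmissibility(100.0, m=0.5, k=2.0e3, c=15.0))  # ~0.05
```

This is the same design logic reported above: tuning stiffness and damping so that the excitation frequency falls in the isolation region of the transmissibility curve.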
Studies conducted by Khazoom, C. et al. demonstrate the potential of a supernumerary
leg powered by delocalized magnetorheological clutches (MR) to assist walking with three
different gaits. Simulations show that the MR leg’s low actuation inertia reduces the impact
impulse by a factor of 4 compared to geared motors, and that delocalizing the clutches
reduces by half the inertial forces transmitted to the user during swing.
Other studies focus on hand applications. Surgeons may be able to use a third hand
under their direct control to perform certain surgical interventions without the need for a
human assistant, thus reducing coordination difficulties. To assess this possibility, Abdi
E. et al. present a study with naive adults using three virtual hands controlled by their
two hands and right foot. The results of this study show that participants were able to
successfully control virtual hands after a few trials. Further, the workspace of the hands was
found to be inversely correlated with the task velocity. There was no significant difference
between the three- and two-hand controls in terms of success in catching falling objects and
average effort during the tasks. Participants reported that they preferred the three-hand
control strategy, found it easier, and experienced less physical and mental burden [53].
Meraz, N.S. et al. present a sixth finger system as an extension of the human body and
investigate how an extra robotic thumb affects the body schema and self-perception. The
sixth finger is controlled with the thumb of the opposite hand and contact information
is conveyed via electrostimulation. Reaching task experiments are conducted with and
without visual information to evaluate the level of embodiment of the sixth robotic finger
and the modification of the self-perception of the controlling finger. Results indicate that
the sixth finger is incorporated into the body schema of the user and the body schema of
the controlling finger is modified, implying the brain’s ability to adapt to different scenarios
and body geometries [54].

4. Interactions with Human Beings: Practical Implications


As COBOTs become more common in various industries for several applications, there
is increasing research activity on the technologies that enable them to work safely and
efficiently alongside humans. COBOTS are equipped with a range of technologies, includ-
ing control systems, intent recognition, programming, and learning systems. The dynamics of these signals are influenced by individual and social aspects, including personality traits.
These technologies allow COBOTs to adapt to changing conditions in real time, learn
from their experiences, and interact with humans in a way that is safe and efficient. This
section provides a detailed analysis of each technology’s research activity.

4.1. Control Interface


The control system is the component of COBOTS responsible for ensuring that the
machine operates safely and efficiently in a shared workspace with humans. COBOT control
systems are designed to drive and monitor the robot’s movements and ensure that it does
not collide with humans or other objects in the environment. They also enable the robot to
adapt to changing conditions, such as changes in lighting or the presence of new obstacles.
COBOT control systems typically include sensors, software, and other technologies that
allow the robot to detect and respond to changes in the environment in real-time. Observing
a human action, as opposed to a robot action, leads to interference with executed actions. Various aspects of human movement have been instrumental in triggering this interference effect. Observing movement has measurable consequences for peripheral motor
systems [55]. In action observation, there exists a significant increase in a motor-evoked
potential originating from hand muscles that are utilized while making such movements.
For instance, P. Maurice et al. worked on a method for performing ergonomic assessments
of collaborative robotic activities and applying an evolutionary algorithm to optimize the
robot’s design for improved ergonomic performance [56]. Current investigations focused
on whether the interference effect linking observed human action to executed action contains specific information about the biological motion trajectory. The research carried
out by J. Rosen et al. studied the integration of a human arm with a powered exoskeleton
and its experimental implementation in an elbow joint, using the neuromuscular signal
(EMG) as the primary command signal. Four indices of performance were used to assess
the system and results indicated the feasibility of an EMG-based powered exoskeleton
system as an integrated human-machine system [57]. Human movements are likely to cause
interference with incongruently executed arm movements only under biological trajectories.
Additionally, the observed non-biological incongruent human movement lacks the interfer-
ence effect associated with executed movements. In contrast, an observed ball movement
causes interference on an incongruently executed arm motion whether it is biological or
non-biological. The method described by K. A. Farry et al. focuses on commanding two
grasping (key and chuck) options and three thumb motions (abduction, extension, and
flexion). Outcomes include a 90% correct grasp selection rate and an 87% correct thumb
motion selection, both using the myoelectric spectrum [58]. Such effects are outcomes
from the quantity of information distinguished by the brain based on distinct kinds of
motion stimuli [59]. Alternatively, the impact resulting from prior experience with diverse
kinds of forms as well as motion needs to be taken into consideration. Extensive research
is necessary to assist in discriminating amid the existing possibilities [60]. Data-driven
interaction involves using data to optimize interactions between collaborative robots and
human workers, mainly in manufacturing and industrial environments [61]. According
to Magrini et al., the collaboration between humans and robots depends on a suitable
exchange of contact forces that are likely to take place at various points along an existing
robotic structure. The researchers concentrated on the physical collaboration elements
whereby humans determine the nature of contact with robots, as the robot reacts as a
function of the altered forces [62]. The implication is that safe coexistence has been made
possible and ensured. O. Khatib et al. work on physical collaboration, where robots have to
ensure that they accomplish various kinds of subtasks [63]. The first task entails detecting contact with a human and distinguishing between intentional contact and undesired collision. The second task is the identification of points on the robot’s surface where contact has taken
place. The third task involves estimating the alteration of Cartesian forces. The fourth task
involves controlling the robot’s reactions based on the desired behavior. Force and pres-
sure represent significant considerations affiliated with the design and implementation of
collaborative robot interactions. According to Tsumugiwa et al., human–robot cooperative work has two main areas: the carrying task and the positioning task. In the carrying task, the robot adjusts its behavior depending on the stiffness estimated from the human arm [64]. Virtual stiffness is maintained depending on human characteristics, whereby the stiffness of the human operator’s arm, or the force applied to the robot, is part of the cooperative task [65]. One of the major assumptions is that human operators often stiffen their arms during the positioning task [66]. Morel et al. proposed a novel variable impedance control comprising virtual stiffness. The virtual forces produced by the proposed controller made a cooperative positioning task easy to achieve with precise outcomes [23]. To confirm the usefulness of the proposed control, a cooperative peg-in-hole task was executed by a robot [67]. Experimental outcomes illustrate how the proposed control is effective for cooperative carrying as well as positioning tasks [68].
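
The variable-impedance intuition in this passage can be sketched in a few lines: damping rises at low hand velocity to favor precise positioning, and a virtual spring assists convergence to an estimated target. The blending rule and every gain below are illustrative assumptions, not the controllers of [23,64–68].

```python
import numpy as np

def variable_damping(v, d_low=5.0, d_high=40.0, v0=0.1):
    """High damping near zero velocity (positioning), low while carrying."""
    return d_low + (d_high - d_low) * np.exp(-abs(v) / v0)

def virtual_stiffness_force(x, x_target, k_v=150.0):
    """Virtual spring assisting the positioning phase (k_v is assumed)."""
    return k_v * (x_target - x)

def cooperative_step(f_human, x, v, x_target, m=2.0, dt=0.002):
    """1-DoF admittance update mixing human force with virtual forces."""
    f_total = f_human + virtual_stiffness_force(x, x_target)
    a = (f_total - variable_damping(v) * v) / m
    v = v + a * dt
    return x + v * dt, v
```

The qualitative behavior matches the experiments reported above: the operator drags the tool quickly with little resistance, then feels the system stiffen as the motion slows near the insertion point.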
Vision is a significant element in enabling robots to effectively perceive and comprehend their surroundings and to interact with humans in a safe and effective manner. Using vision in COBOT interaction is effective
in object recognition as well as tracking, whereby vision sensors including cameras are used
for tracking objects in the surroundings of a robot. Human–computer interfaces are ideal for facilitating communication, assisting in the exchange of information, procedural commands, and controls. Within this domain, C. Plagemann et al. present a novel
interest point detector for mesh and range data that is particularly well suited for analyzing
the human shape. The experiments carried out show that their interest points are significantly better at detecting body parts in depth images than state-of-the-art sliding-window
based detectors [69]. Working in the robotics sector, professionals often concentrate on
the integration of spoken natural language along with natural gestures associated with
commanding and controlling semi-autonomous mobile robots. Both spoken natural lan-
guage along with natural gesture have become user-friendly platforms of interaction with
mobile robots. From the human perspective, interaction has become easier, since the human does not need to learn additional interaction modes, relying instead on natural channels of communication. According to Perzanowski et al., the objective
of developing a natural language or gesture interface in a semi-autonomous robot was
successful. Using natural language or gestures within the interface relies on two distinct as-
sumptions [70]. The first assumption suggests that as natural language remains ambiguous,
gestures disambiguate various kinds of information in the speech. The second assumption
is that humans utilize natural gestures in an easier manner when issuing directives and
locomotive commands in mobile robots. Association of vision and force/pressure sensing
provides several positive outcomes for COBOTs, enabling them to safely interact with
humans and carry out a range of tasks. Zanchettin, A.M. and Rocco, P. combine these two
elements in a constraint-based algorithm for combined trajectory generation and kinematic
control for robotic manipulators. The algorithm shifts from an imperative programming
paradigm to a declarative motion programming approach [71]. Furthermore, L. Peter-
nel et al. propose an exoskeleton control method for adaptive learning of assistive joint
torque profiles in periodic tasks. Within this research, human muscle activity is utilized
as feedback to modify the assistive joint torque behavior in a way that reduces the muscle
activity [72]. Force and pressure measurements tend to be critical components playing
central roles in making certain there is safe and effective collaboration between humans
and robots [73]. According to Lippiello et al., it is important to consider the interaction
control between a robot manipulator and a partially known environment. Autonomy in
a robotic system has a strict connection to the availability of sensing information about the external surroundings. Among the different sensing capabilities, vision and force have critical roles. This is confirmed in a work proposed by A. Cherubini et al., where a
multimodal sensor-based control framework for intuitive human–robot collaboration has
been developed. The approach is markerless, utilizes a Kinect and an on-board camera, and
is based on a unified task formalism. The framework has been validated in an industrial
mock-up scenario of humans and robots collaborating to insert screws [74]. Other research
by Lippiello et al., confirmed by a simulation case study, proposes an algorithm for online
estimation of the pose of an unknown and possibly time-varying rigid object based on
visual data from a camera. Force and joint position measurements are also used to improve
estimation accuracy [75].

4.2. Intention Recognition


Intention recognition is a key technology for COBOT applications. A COBOT intention
recognition system typically relies on sensors and software that allow the robot to detect
and interpret human movements and gestures. By understanding the intentions of humans,
COBOTs can adapt their actions to avoid collisions or other safety hazards. They can
also provide more effective assistance to human workers by anticipating their needs and
responding in real-time. Intention recognition is an essential technology for COBOT
evolution, making it an important area of research.
An approach to learning discrete robot motions from different sets of demonstrations is particularly relevant to intention recognition. In a study by Mohammad and Billard, motion is modeled in the form of a non-linear
autonomous dynamical system (DS) as the researchers concentrate on the definition of
sufficient conditions to facilitate global asymptotic stability at the existing targets [76]. The
study proposes a learning approach known as a Stable Estimator of Dynamical Systems
(SEDS), which is ideal for learning the different parameters under dynamical systems to
ascertain all motions, following demonstrations as they reach and stop at the target [77].
From the study, it is logical to state that DS provides a significant framework ideal for
allowing the fast learning of robot motions through small sets of demonstrations [30].
Image-based collision detection is currently being studied in industrial-robot environ-
ments. The study published by F. Stulp et al. investigates the legibility of robot behavior
as a property that emerges from requirements for the efficiency and robustness of joint
human–robot task completion. Two experiments involving human subjects demonstrate
that robots are able to adjust their behavior to increase their ability to predict the robot’s
intentions, resulting in faster and more reliable task completion [78]. An ideal approach
associated with conducting collision tests depending on images retrieved from numer-
ous stationary cameras in a work cell has been also presented in the study conducted by
Ebert and Henrich [79]. The work of V. Magnanimo et al. proposes a Dynamic Bayesian
Network for recognizing tasks which consist of sequences of manipulated objects and
performed actions. The DBN takes RGBD raw data as input and classifies manipulated
objects and performed actions. To evaluate the effectiveness of the proposed approach, a
case study of three typical kitchen tasks is conducted [80]. Sensor-controlled transfer motion from the current configuration is a necessary basic skill that allows robots to operate safely with humans in the same workspace. This has been studied in a paper by L. Bascetta et al., which
presents advanced algorithms for cognitive vision. Using a dynamic model of human
walking, these algorithms are applied to detecting and tracking humans and estimating
their intentions [81]. D. J. Agravante et al. propose a framework that combines vision and haptic information for human–robot joint actions, an ideal angle from which to understand the connection between vision and force/pressure in the intention recognition of COBOT interaction. The framework consists of a hybrid controller that utilizes visual servoing in addition to impedance controllers. The presence of humanoid robots
has contributed to various advantages as they work alongside humans with the aim of
performing different kinds of tasks [82]. Furthermore, humanoids can maintain interaction
with human-like ranges of motion and sensing capabilities. The proposed general framework for human–robot joint collaborative tasks proves to be effective.
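
A toy version of such intention recognition is goal inference: a posterior over candidate reach targets is updated from how well the observed hand velocity aligns with the direction to each target. The exponential-alignment likelihood and the rationality parameter beta are common modeling assumptions and do not reproduce any specific method cited above.

```python
import numpy as np

def update_goal_belief(belief, hand_pos, hand_vel, goals, beta=5.0):
    """Bayes update: P(goal | motion) is proportional to
    P(motion | goal) * P(goal), with the likelihood scored by the cosine
    between the velocity direction and the hand-to-goal direction."""
    belief = np.asarray(belief, dtype=float).copy()
    speed = np.linalg.norm(hand_vel)
    if speed > 1e-6:
        heading = hand_vel / speed
        for i, g in enumerate(goals):
            to_goal = np.asarray(g, dtype=float) - hand_pos
            dist = np.linalg.norm(to_goal)
            cos = heading @ (to_goal / dist) if dist > 1e-6 else 1.0
            belief[i] *= np.exp(beta * cos)
    return belief / belief.sum()

# Illustrative use: the hand moves toward the first of two targets.
belief = update_goal_belief(
    [0.5, 0.5], np.zeros(2), np.array([0.1, 0.0]),
    [np.array([0.4, 0.0]), np.array([0.0, 0.4])])
```

Running such an update at sensor rate lets a COBOT commit early to the most probable human goal while remaining able to revise its prediction as the motion unfolds.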

4.3. Programming and Learning


Programming and learning are two critical technologies that enable COBOTs to adapt
to changing conditions and perform tasks safely and efficiently. COBOT programming
typically involves creating a set of instructions or commands that the robot will follow to
complete a specific task. Programming can be done manually by a human operator through
a programming visual interface. COBOTs can also learn from humans by observing their
movements and actions and adapting their behavior accordingly.
Paradigms affiliated with simultaneous and proportional control from hand prostheses
continue to gain momentum within the robotics rehabilitation community, which demon-
strates the value of bioelectricity in programming. Simultaneous and proportional control
is designed to facilitate control of desired forces or torques from each DoF of the hand or
wrist using real-time predictions. The restoration of motor function for an upper limb following an amputation presents a significant task for the rehabilitation engineering sector.
The study conducted by I. Strazzulla et al. applies a simultaneous and proportional control
approach to two robotic hands [83]. In an investigation conducted by Calinon et al., robot programming by demonstration (PbD) is ideal, since it addresses methods through which robots develop new skills via the observation of humans. The methodology proposes
a probabilistic approach combining hidden Markov models (HMM) and Gaussian mixture
regression (GMR) for learning and reproducing human motions. This approach is tested
on simulated and real robots, demonstrating their ability to handle cyclic and crossing
movements as well as multiple constraints at once [84]. The connection between robots and
human-like activities enables the machines to interact with people in natural and harmless
ways. New and complete strategies have been detected, estimated, and implemented to
handle dynamic force interaction taking place at various points in the robot’s structure. For
instance, L. Rozo et al. present a robot motion adaptation method for collaborative tasks
that combines extraction of the desired robot behavior, a task-parametrized Gaussian mix-
ture model, and variable impedance control for human-safe interaction. This approach is
tested in a scenario where a 7-DoF back-drivable manipulator learns to cooperate with a hu-
man to transport an object, and the results show that the proposed method is effective [85].
Human–robot interaction (HRI) is an indication that robots can establish communication
with a person based on needs and behave in a manageable manner [86]. Furthermore, the
authors present a framework that allows a user to teach a robot collaborative skills from
demonstrations, which can be applied to tasks involving physical contact with the user.
This method enables the robot to learn trajectory-following skills as well as impedance
behaviors [87]. The process of determining the levels of engagement in human–robot inter-
action is crucial. Engagement measures depend on the dynamics linked with social signals
traded through the partners, precisely speech, and gaze. This has been studied by S. Ivaldi
et al., who assessed the influence of extroversion and negative attitude towards robots on
speech and gaze during a cooperative task [88]. In the model presented by A. Colome et al.,
dynamic movement primitives (DMP) and visual/force feedback are utilized within the
reinforcement learning (RL) algorithm to enable the robot to learn safety-critical tasks such
as wrapping a scarf around the neck. Experimental results demonstrate that the robot is
consistently capable of learning tasks that could not be learned otherwise, thus improving
its capability with this approach [89]. Furthermore, according to the research presented
by S. Lallee et al., a cooperative human–robot interaction system has been developed to
recognize objects, recognize actions as sequences of perceptual primitives, and transfer this
learning between different robotic platforms. This system also provides the capability to
link actions into shared plans, forming the basis of human–robot cooperation [90]. Thus, in
the future, the sharing of spaces between humans and collaborative robots will become
more common. As a result of a process of integrating ever more advanced technologies,
people and COBOTs will be able to collaborate more effectively and securely [91] in the
same working environment. As confirmed by M. Lawitzky et al., combining planning and
learning algorithms can lead to superior results in goal-directed physical robotic assistance
tasks [92]. The capacity to cooperate and to establish and use shared action plans is a
distinctive cognitive ability that separates humans from non-human primates. Language itself
is an inherently cooperative activity, whereby listener and speaker cooperate to reach a
shared communicative objective [93]. Current investigations fall within the greater context
of cognitive-developmental robotics, in which physical embodiment plays a central role in
structuring the representations of a system. The robotic system can acquire global information
about the surrounding environment, which is used for task planning and obstacle avoidance.
The complementary nature of vision and force has motivated their integrated and synergic
exploitation when designing planning and control strategies for robotic systems.
L. Peternel et al. demonstrate that robots can be taught dynamic manipulation tasks in
cooperation with a human partner using a multi-modal interface. They employ locally
weighted regression for trajectory generalization and adaptive oscillators for adaptation of
the robot to the partner’s motion. The authors conduct an experiment teaching a robot how
to use a two-person crosscut saw, demonstrating this approach [94].
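The frequency adaptation used in such periodic cooperative tasks can be illustrated with a basic adaptive frequency (phase) oscillator. The sketch below is illustrative only (the single-oscillator form, the gains, and the explicit Euler integration are assumptions, not the authors' code from [94]); it locks both the phase and the frequency of an internal oscillation onto a periodic partner signal, which is the property exploited to synchronize the robot with the co-worker's sawing motion:

# Adaptive frequency oscillator sketch (illustrative): the internal frequency
# estimate converges towards the frequency of the measured partner signal.
import numpy as np

dt, K = 1e-3, 10.0             # integration step [s] and coupling gain (assumed)
omega, phi = 4.0, 0.0          # initial frequency guess [rad/s] and phase
target = 2.0 * np.pi           # partner moves periodically at 1 Hz

for i in range(int(60.0 / dt)):        # 60 s of adaptation
    y = np.cos(target * i * dt)        # measured periodic partner signal
    e = y - np.cos(phi)                # prediction error drives the adaptation
    phi += dt * (omega - K * e * np.sin(phi))
    omega += dt * (-K * e * np.sin(phi))

print(f"adapted frequency: {omega:.2f} rad/s (target {target:.2f} rad/s)")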

4.4. Virtual Reality (VR)-Based COBOT
The combination of virtual reality (VR), digital twins, and virtual commissioning of
robotics and COBOTs is emerging as a promising solution for automation. This solution
allows for the real-time simulation of robotic systems in a virtual environment and enables
engineers and designers to monitor and optimize performance in a cost-effective and safe
manner. In addition, by using VR, digital twins, and virtual commissioning, users can gain
a better understanding of the robotic system, its components, and its environment. For
instance, the work of Oyekan, J.O. et al. presents the use of a virtual reality digital twin of a
physical layout as a mechanism to understand human reactions to both predictable and
unpredictable robot motions. A set of established metrics as well as a newly developed
kinetic energy ratio metric is used to analyse human reactions and validate the effectiveness
of the virtual reality environment [95]. Duguleana, M. et al. present an analysis of virtual
reality (VR) as an alternative to real-world applications for testing material-handling
scenarios that involve collaboration between robots and humans. They measure variables
such as the percentage of tasks completed successfully, the average time to complete
tasks, the relative distance and motion estimate, and presence and contact errors, and
compare the results between different manipulation scenarios [96]. People with two-arm
disabilities face difficulties in completing tasks that require them to grasp multiple objects
that are closely spaced. Current arm-free human–robot interfaces (HRIs) such as language-
based and gaze-based HRIs are not effective in controlling robotic arms to complete such
tasks. Zhang, C. et al. propose a novel human–robot interface (HRI) system that leverages
mixed reality (MR) feedback and head control for arm-free operation. The proposed HRI
system is designed to enable users with disabilities to control a robotic gripper with high
accuracy and flexibility. Experiments conducted on objects of various sizes and shapes
demonstrate its capability to complete tasks with high adaptability and point cloud error
tolerance [97]. With the advancement of artificial intelligence technology in making smart
devices, understanding how humans develop trust in virtual agents is emerging as a critical
research field. In order to deal with this issue, Gupta et al. present a novel methodology
to investigate user trust in auditory assistance in a virtual reality (VR)-based search task.
The study collected physiological sensor data such as EEG, GSR, HRV, and subjective data
through questionnaires such as STS, SMEQ, and NASA-TLX, and a behavioral measure of
trust in response to valid/invalid verbal advice from the agent. Results show that cognitive
load and agent accuracy play an important role in trust formation in the customized VR
environment [98].

5. COBOT Market Analyses: Potentialities and Limits
In this section, 195 COBOTs available on the current market have been investigated;
they are listed in Appendix A. The classification is based on (i) the degrees of freedom;
(ii) the robot typology, as anthropomorphic, Cartesian, SCARA, and Torso; (iii) the payload;
(iv) the reach volume; and (v) the accuracy. The aim of this assessment is to provide a
synthetic overview of the features and performance of COBOTs available on the market.

5.1. COBOT Assessment: Degrees of Freedom
Robotic arms are characterized by a number of DoF ranging from one to fourteen. A higher
number of DoF implies that the robot has more pose options. COBOTs can be classified
into four categories: Anthropomorphic, Cartesian, SCARA and Torso, as in Table 1.

Table 1. COBOT models by mechanism class.

Class No.
Anthropomorphic 176
Cartesian 1
SCARA 14
Torso 4

Anthropomorphic COBOTs consist of a mechanical serial structure composed of rigid
arms linked with at least four joints that allow their movement. The joints can be cylindrical
or prismatic. Cartesian COBOTs consist of a motion-based arm on an orthogonal Cartesian
ternary system. To move in space, they use orthogonal sliding joints through metal arms.
Since the movement works on linear axes, the movements are easily programmable at the
cost of less flexibility. Selective compliance assembly robot arm (SCARA) COBOTs are
defined with two arms that can move in the horizontal plane, with, at the end of them a
prismatic coupling that allows vertical movement. Torso COBOTs have a human-like aspect
and behavior capable of twisting, bending, and rotating in multiple directions, giving them
a high degree of freedom. The structure can be based on serial, parallel or differential
kinematics, each with pros and cons. Serial torso COBOTs are commonly easier to control.
In contrast, parallel and differential kinematics offer a greater number of DoF, driven by
a higher number of smaller actuators; however, the kinematics are more complex to control
and design.
The most popular class of COBOT is Anthropomorphic, followed by SCARA, Torso,
and Cartesian. The anthropomorphic class represents 90% of the total COBOTs offered by
the market. Regarding Torso COBOTs (2%), despite being designed with multiple degrees
of freedom to provide greater flexibility and adaptability in a wide range of applications,
their complexity and large footprints limit their acceptance.
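The quoted shares follow directly from Table 1 and can be recomputed with a few lines (Python, illustrative):

# Market shares per mechanism class, from the Table 1 counts.
counts = {"Anthropomorphic": 176, "Cartesian": 1, "SCARA": 14, "Torso": 4}
total = sum(counts.values())  # 195 models
for cls, n in counts.items():
    print(f"{cls:16s} {n:4d}  {100.0 * n / total:4.1f}%")
# Anthropomorphic 90.3%, SCARA 7.2%, Torso 2.1%, Cartesian 0.5%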

5.2. COBOT Assessment: Reach and Payload
The payload capacity refers to the mass and inertia that the robot’s wrist can manage.
The robotic arm’s reach is a measurement of the distance that the mechanism can execute
tasks, defining the tridimensional workspace. The COBOTs studied in this review are
grouped into five categories, as shown in Table 2.

Table 2. COBOT clusters based on payload and reach features.

COBOT Cluster Payload (kg) Reach (mm)
Group 1 P ≤ 5.0 R < 500
Group 2 5.0 < P ≤ 10.0 500 < R ≤ 1000
Group 3 10.0 < P ≤ 15.0 1000 < R ≤ 1500
Group 4 15.0 < P ≤ 20.0 1500 < R ≤ 2000
Group 5 P > 20.0 R > 2000

Group 1 includes small-sized COBOTs, with a payload lower than 5 kg and a limited
reach of 500 mm. Medium-size COBOTs have payloads between 5 and 20 kg. Large-size
COBOTs include devices with the highest payload, greater than 20 kg, and the highest
reach (Group 5). The Anthropomorphic class represents more than 90.0% of the total; the
most popular sizes are represented by Group 2, Group 3, and Group 4, which include
88.6% of the total Anthropomorphic models, as illustrated in Table 3. Small and large
Anthropomorphic models of Group 1 and Group 5 number 16 and 4, respectively. The
technology of COBOTs derives from traditional robotics equipment; thus it is possible
to find Anthropomorphic COBOTs with a long-distance reach and great payload, up to
170 kg. In this case, the producer equipped the traditional equipment with a tactile skin
and proximity sensors that allow it to avoid collisions and retract, depending on the contact
force. The mentioned COBOT model is exceptional in size, features, and application; thus
this model has been excluded from the graphs and statistics. The Cartesian COBOT accounts
for one model; cartesian robots consist of a motion-based arm on an orthogonal Cartesian
ternary system. These machines are widely installed in production lines, typically with the
aim of performing activities such as feeding pallet or chain conveyors. SCARA accounts
for 14 models on the market, representing 7.2% of the total; its typical application is
pick-and-place with high speed and high accuracy, comparable to, and even higher than,
the anthropomorphic class. Despite their number of degrees of freedom, the complexity and
large footprints of Torso COBOTs limit their diffusion and development. ABB, Rethink Robotics,
and Siasun are the key producers.

Table 3. Number of available COBOTs grouped by mechanism class and payload-reach clusters.

Class              Group 1  Group 2  Group 3  Group 4  Group 5  Total
Anthropomorphic       16       91       50       15        4     176
Cartesian              -        -        1        -        -       1
SCARA                  5        6        1        1        1      14
Torso                  -        2        2        -        -       4
Total                 21       99       54       16        5     195

Figure 2a,b shows the payload and the reach relation as a proportional trend for
Anthropomorphic and SCARA classes. The correlation coefficient is in the 29.4–38.5%
range for Anthropomorphic and SCARA typology, respectively.
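As a sketch of the correlation computation behind Figure 2 (illustrative only: the ten payload-reach pairs below are a small excerpt of Appendix A, so the resulting coefficient differs from the full-sample 29.4-38.5% values):

# Pearson correlation between payload and reach on an Appendix A excerpt.
import numpy as np

payload = np.array([4.0, 5.0, 10.0, 3.0, 5.0, 7.0, 1.3, 5.0, 8.0, 13.0])  # kg
reach = np.array([580, 950, 1350, 625, 924, 1150, 600, 900, 1000, 1300])  # mm

r = np.corrcoef(payload, reach)[0, 1]
print(f"Pearson correlation on the excerpt: r = {r:.2f}")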

5.3. COBOT Assessment: Accuracy
Accuracy is an indicator that represents the deviation between the planned and the
observed pose. COBOT accuracy is expressed in comparison with payload capacity in
Figure 3a,b. In the current market, more than 90% of the anthropomorphic COBOTs show
performance in the 0.01 mm and 0.20 mm range with no interrelated impact on robot
payload ability from 0.3 kg to 20.0 kg, Figure 3a. Moreover, there is no significant trend
between the maximum payload and the deterioration of accuracy. Figure 3b shows that
the payload ranges of Cartesian, Torso, and SCARA COBOTs concentrate in the range
0.5–5.0 kg, and the level of accuracy is lower than 0.10 mm.
Figure 4 shows a percentile representation of accuracy for the two main classes:
Anthropomorphic accuracy is described by the Q1—25th percentile as 0.03 mm, Q3—75th
percentile as 0.10 mm, and median as 0.05 mm. The SCARA COBOT level of accuracy is
described by the Q1—25th percentile as 0.02 mm, Q3—75th percentile as 0.06, and median
as 0.04 mm.
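These quartiles are ordinary percentile statistics over the accuracy column of Appendix A; a minimal sketch follows (the vector below is a short excerpt, not the full 176-model sample):

# Quartiles of the accuracy distribution (Appendix A excerpt, in mm).
import numpy as np

accuracy = np.array([0.01, 0.05, 0.02, 0.02, 0.10, 0.03, 0.05, 0.05, 0.50, 0.03])
q1, median, q3 = np.percentile(accuracy, [25, 50, 75])
print(f"Q1 = {q1:.2f} mm, median = {median:.2f} mm, Q3 = {q3:.2f} mm")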
Figure 2. COBOT scatter plot of payload and reach: Anthropomorphic (a); Cartesian, SCARA and Torso (b).

Figure 3. COBOT scatter plot of accuracy and payload: Anthropomorphic (a); Cartesian, SCARA and Torso (b).

Figure 4. COBOT box plot of accuracy for Anthropomorphic and SCARA (minimum, Q1, median, Q3, maximum and outlier—cycle).

Figure 5a,b confirms the correlation analysis showing that accuracy is not affected by
the COBOT reach, for the Anthropomorphic configuration in Figure 5a and for the Cartesian,
SCARA and Torso architectures in Figure 5b. The level of accuracy is lower than 0.25 mm
and it depends on the model or provider.

Figure 5. COBOT scatter plot of accuracy and reach: Anthropomorphic (a); Cartesian, SCARA and Torso (b).

5.4. COBOT Assessment: Energy Consumption vs. Tool Center Point (TCP) Velocity

The TCP velocity is a valuable characteristic of the COBOT and refers to the end-effector
motion performance during its operations. The TCP velocity has a direct impact on the cycle
time of the workstation and on operator safety. Power consumption is an index that is central
in the equipment installation and device daily supervision. The energy consumption increases
consistently with the payload. Figure 6 shows the correlation between energy consumption [kW]
and the maximum TCP velocity [m/s], listed in Appendix B. The investigated COBOT payload
range is within 0.5 kg–20.0 kg, with a TCP velocity from 0.3 m/s–6.0 m/s. The expected power
consumption exceeds 0.50 kW for COBOTs that provide a payload greater than 10 kg. There is
significant evidence that the TCP velocity increment from 1.0 m/s to 3.0 m/s does not
statistically influence energy consumption. The main driver for power use is the payload
offered by the anthropomorphic COBOT, in particular for payloads from 1.0 kg to 6.0 kg,
considering the total gripper combined with the manipulated workpiece mass and inertia.
Figure 6. COBOT scatter plot of power consumption vs. tool center point velocity of Anthropomorphic architecture.
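The aggregation behind Figure 6 can be sketched as follows (illustrative only; the rows are a small excerpt of Appendix B given as (payload [kg], TCP velocity [m/s], power [kW]) triples):

# Mean power draw by payload band on an Appendix B excerpt.
import numpy as np

rows = np.array([
    (12.0, 2.0, 0.90), (8.0, 2.0, 0.90), (10.0, 1.0, 0.70), (5.0, 1.0, 0.60),
    (20.0, 1.0, 0.60), (5.0, 6.0, 0.50), (12.0, 3.0, 0.50), (3.0, 1.0, 0.35),
    (3.0, 2.0, 0.15), (3.0, 1.0, 0.10),
])
heavy = rows[rows[:, 0] > 10.0]   # payload above 10 kg
light = rows[rows[:, 0] <= 10.0]
print(f"mean power, payload > 10 kg:  {heavy[:, 2].mean():.2f} kW")
print(f"mean power, payload <= 10 kg: {light[:, 2].mean():.2f} kW")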
6. Discussion

Developing a COBOT selection procedure is a challenging task that covers a broad range
of domains. In particular, the application dictates the device concept and design. Furthermore,
COBOT providers may offer ad hoc solutions that do not provide optimal performance. This paper
provides an overview of the current state of the art of COBOT applications and learning
abilities, and of the existing equipment on the market. The applications of COBOTs are growing
in terms of installations, remarkably in the SME context. In the manufacturing domain, COBOTs
are employed to assist with repetitive work, reducing the risks and fatigue associated with
heavy tasks and making the work environment safer for employees. COBOTs are also used to
support the assembly of products. The review highlights that a number of researchers are
focusing their efforts on the development of methods for reducing workload and optimizing
productivity. These methods are mainly aimed at complex components. The research in material
handling is mainly on the challenging task of handling unstable materials. In this field, the
developed methods for adjusting and compensating trajectories in real time are very promising.
COBOT employment in security and inspection supporting activities is growing. COBOTs offer new
opportunities in a variety of contexts in healthcare, improving accuracy and precision in
tasks such as surgery or rehabilitation. By combining AI and machine learning algorithms
with robotics, COBOTs can be trained to assist physical therapists in providing superior and
faster restoration. COBOT-assisted physical therapy could also provide personalized and
dynamic treatment, allowing for more effective rehabilitation. The use of AI and machine
learning systems is rapidly accelerating the ability of COBOTs to interact with humans
and reduce the training times. Various research has been carried out to control COBOTs
using natural language and gestures. Efforts are currently focusing on training COBOTs
to understand and respond to natural language, interpret human gestures, and visualize
objects to achieve more accurate task completion. Additionally, AI-enabled systems are
being developed to allow COBOTs to constantly update their knowledge and refine their
decision-making. Force, pressure, and vision sensors are critical components in enabling
human–robot interaction.

This article reviewed a number of COBOTs. The literature shows a gap in human–robot
interaction; nevertheless, there are still issues and constraints to improve the collaboration
abilities. The market analyses results show that the most promising typology for COBOT
applications is the Anthropomorphic one, which can provide greater flexibility and adaptability
than traditional robotics. Anthropomorphic COBOTs show improved adaptability to time-variant
conditions and unstructured environments. Future developments should consider the usability
conditions to increase the compliant applications. The proposed classification and comparison
underline how SMEs and researchers are moving toward innovative solutions.

7. Conclusions
In this paper, a systematic review was performed, which led to the selection of
98 papers to find current trends in COllaborative roBOT applications, with a specific focus
on the use of stationary (fixed) systems. The results were evaluated, and screening criteria
were applied in industrial and service applications from 1996 to 2023, filtering 423 papers
with a two-stage process. A classification of different collaborative contexts was proposed,
mainly composed of industrial assembly, material handling, service personal assistance,
security and inspection, Medicare and supernumerary classifications. Collaborative robot
technology offers an innovative and modular solution to enhance safe interaction with
human beings. The article focused on the robot architecture, AI, and machine learning com-
bination paradigm since these factors are interconnected for an effective implementation.
Furthermore, the progress in force control, vision processing, and pressure regulation are
enabling human–robot collaboration. The studied potential barriers and challenges that
require research effort and business approval are mainly the following: Machine learning
implementation is not predominant in the literature and it may classified in supervised,
unsupervised and reinforcement techniques. Regarding perception and sensing, vision
is the most used mode in the selected set of papers, followed by accelerometer input and
muscular signal input. Sensor fusion is not extensively used in human–robot collaboration.
The market analysis covers 195 models, focusing on the key features: (i) degrees of freedom,
(ii) reach and payload, (iii) accuracy, and (iv) energy consumption vs. tool center point
velocity, to further demonstrate the relevance and growing interest from researchers and
companies. In particular, medium-size anthropomorphic COBOTs are the suitable configu-
ration for most applications, providing greater flexibility and adaptability than the SCARA
or Torso configuration. It is noted that COBOTs with payloads of 5.0 kg–20.0 kg and reach
of 500 mm–2000 mm show an invariant accuracy lower than 0.20 mm, representing 88.6%
of the analyzed samples. The potential barriers or challenges in guaranteeing the 0.20 mm
accuracy depend on their design and how they are programmed, the reduced TCP velocity,
the higher pose stabilization time, and the limited payload. The investigated COBOT
payload range is within 0.5 kg–20.0 kg, with a TCP velocity from 0.3 m/s–6.0 m/s. The
expected power consumption exceeds 0.50 kW for COBOTs that provide a payload greater
than 10 kg. There is significant evidence that the TCP increment from 1.0 m/s to 3.0 m/s
does not statistically influence energy consumption. The main driver for power use is the
payload offered by the Anthropomorphic COBOT, in particular for payloads from 1.0 kg to
6.0 kg, considering the total gripper combined with the manipulated workpiece mass and
inertia.
The comparative analysis between COBOTs and traditional robots highlights that robots
are designed to complete repetitive tasks with high accuracy and precision (0.03 mm),
making them ideal for scenarios that need consistent performance, such as throughput and
cycle time. In the same setting, industrial robots provide higher technical specifications than
collaborative robots. They are faster, more accurate, and have a higher reach volume. The
best use is in high-volume processes with low variations. They are not easy to reprogram
and redeploy on new cell settings and part configurations. Nowadays, collaborative robot
models are providing increasing performance and quality, and they can flexibly adapt
to part variations, improving the worker experience. User-friendliness is the main factor
for SME accessibility over traditional robots that require additional safety equipment.
The safety elements include light barriers, scanners, fencing, and dual-emergency stops.
Additionally, the industrial robot is more difficult to integrate. This is mainly due to the
more complex programming environment. These robots require experienced staff to install
and set up the layout. Finally, maintenance costs are higher for industrial robots as well.
Industrial robots are often more expensive after integration compared with collaborative
robots due to the high-usage daily duty cycle. Nevertheless, human interaction and learning
technologies would have to apply research from multidisciplinary fields such as psychology
and behavioral sciences in order to be ready for deployment in real-world applications that
offer new opportunities for COBOTs in the future.

Author Contributions: Conceptualization, C.T., F.A. and N.P.; methodology, C.T., F.A. and N.P.;
formal analysis, C.T., F.A. and N.P.; investigation, C.T., F.A. and N.P.; resources, C.T., F.A. and N.P.;
data curation, C.T., F.A. and N.P.; writing—original draft preparation, C.T., F.A. and N.P. All authors
have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. COBOT vs. Payload, Reach and Accuracy.

Producer Model Class DoF Payload [kg] Reach [mm] Accuracy [mm]
CRB 11000 SWIFTI Anthropomorphic 6 4.0 580 0.01
CRB 15000 GoFa Anthropomorphic 6 5.0 950 0.05
ABB
IRB 1400 Yumi Torso 14 0.5 1200 0.02
IRB 14050 Yumi Anthropomorphic 7 0.5 559 0.02
Acutronics MARA Anthropomorphic 6 3.0 656 0.10
AIRSKIN Kuka Agilus Fenceless Anthropomorphic 6 10.0 1100 0.02
AIRSKIN Kuka Cybertech Fenceless Anthropomorphic 6 24.0 2020 0.04
I10 Anthropomorphic 6 10.0 1350 0.10
I3 Anthropomorphic 6 3.0 625 0.03
AUBO Robotics
I5 Anthropomorphic 6 5.0 924 0.05
I7 Anthropomorphic 6 7.0 1150 0.05
Automata EVA Anthropomorphic 6 1.3 600 0.50
AW-Tube 5 Anthropomorphic 6 5.0 900 0.03
AW-Tube 8 Anthropomorphic 6 8.0 1000 0.04
AW-Tube 12 Anthropomorphic 6 13.0 1300 0.05
Automationware
AW-Tube 15 Anthropomorphic 6 15.0 1000 0.05
AW-Tube 18 Anthropomorphic 6 18.0 1700 0.06
AW-Tube 20 Anthropomorphic 6 20.0 1500 0.07
Bosch APAS Anthropomorphic 6 4.0 911 0.03
Aura Anthropomorphic 6 170.0 2790 0.10
Comau e.Do Anthropomorphic 6 1.0 478 0.01
Racer 5 0.80 Cobot Anthropomorphic 6 5.0 809 0.03
Denso Cobotta Anthropomorphic 6 0.5 342 0.05
CR10 Anthropomorphic 6 10.0 1525 0.03
CR16 Anthropomorphic 6 16.0 1223 0.03
CR3 Anthropomorphic 6 3.0 795 0.02
CR5 Anthropomorphic 6 5.0 1096 0.03
Dobot
M1 SCARA 4 1.5 400 0.02
Magician Anthropomorphic 4 0.3 320 0.20
Magician Anthropomorphic 4 0.5 340 0.20
MG 400 Anthropomorphic 4 0.8 440 0.05
A0509 Anthropomorphic 6 5.0 900 0.03
A0912 Anthropomorphic 6 9.0 1200 0.05
H2017 Anthropomorphic 6 20.0 1700 0.10
Doosan H2515 Anthropomorphic 6 25.0 1500 0.10
Robotics M0609 Anthropomorphic 6 6.0 900 0.10
M0617 Anthropomorphic 6 6.0 1700 0.10
M1013 Anthropomorphic 6 10.0 1300 0.10
M1509 Anthropomorphic 6 15.0 900 0.10
Efort ECR5 Anthropomorphic 6 5.0 928 0.03
C3 Anthropomorphic 6 3.0 500 0.50
E5 Anthropomorphic 6 5.0 810 0.50
Elephant
myCobot Anthropomorphic 6 0.3 280 0.20
Robotics
Panda 3 Anthropomorphic 6 3.0 550 0.50
Panda 5 Anthropomorphic 6 5.0 850 0.50
CS612 Anthropomorphic 6 12.0 1304 0.05
CS63 Anthropomorphic 6 3.0 624 0.02
CS66 Anthropomorphic 6 6.0 914 0.03
Elite Robot
EC612 Anthropomorphic 6 12.0 1304 0.03
EC63 Anthropomorphic 6 3.0 624 0.02
EC66 Anthropomorphic 6 6.0 914 0.03
C-15 Anthropomorphic 6 15.0 1323 0.05
ESI
C-7 Anthropomorphic 6 7.0 900 0.05
F&P Personal 2R 24V Anthropomorphic 6 3.0 775 0.10
Robotics 2R 48V Anthropomorphic 6 5.0 775 0.10
1CR4iAL Anthropomorphic 6 14.0 911 0.03
CR15iA Anthropomorphic 6 15.0 1411 0.02
CR35iA Anthropomorphic 6 35.0 1813 0.08
CR4iA Anthropomorphic 6 4.0 550 0.02
Fanuc
CR7iA Anthropomorphic 6 7.0 717 0.02
CR7iAL Anthropomorphic 6 7.0 911 0.02
CRX10iA Anthropomorphic 6 10.0 1249 0.05
CR10XiAL Anthropomorphic 6 10.0 1418 0.05
Flexiv Rizon 4 Anthropomorphic 7 4.0 780 0.01
Franka Emika Robot Anthropomorphic 7 3.0 855 0.10
E10 Anthropomorphic 6 10.0 1000 0.05
E15 Anthropomorphic 6 15.0 700 0.05
Hans Robot E3 Anthropomorphic 6 3.0 590 0.05
E5 Anthropomorphic 6 5.0 800 0.05
E5-L Anthropomorphic 6 3.5 950 0.05
HCR-12 Anthropomorphic 6 12.0 1300 0.10
HCR-12A Anthropomorphic 6 12.0 1300 0.05
HCR-3 Anthropomorphic 6 3.0 630 0.10
Hanwha
HCR-3A Anthropomorphic 6 3.0 630 0.05
HCR5 Anthropomorphic 6 5.0 915 0.10
HCR-5A Anthropomorphic 6 5.0 915 0.05
HIT Robot Group T5 Anthropomorphic 6 5.0 850 0.10
Z-Arm 1632 SCARA 4 1.0 452 0.02
Z-Arm 1832 SCARA 4 3.0 455 0.02
Z-Arm 2140 SCARA 4 3.0 532 0.03
HITBOT
Z-Arm 2442 SCARA 4 1.0 617 0.03
Z-Arm 6140 SCARA 4 1.0 532 0.02
Z-Arm mini SCARA 4 1.0 320 0.10
YL005 Anthropomorphic 6 5.0 916 0.10
Hyundai YL012 Anthropomorphic 6 12.0 1350 0.10
YL015 Anthropomorphic 6 15.0 963 0.10
Robotic Arm 1300 Anthropomorphic 6 3.0 1340 0.25
Inovo Robotics Robotics Arm 650 Anthropomorphic 6 10.0 690 0.25
Robotics Arm 850 Anthropomorphic 6 6.0 990 0.25
Isybot SYB3 Anthropomorphic 4 10.0 1600 0.20
Zu 12 Anthropomorphic 6 12.0 1300 0.03
Zu 18 Anthropomorphic 6 18.0 1073 0.03
JAKA
Zu 3 Anthropomorphic 6 3.0 498 0.03
Zu 7 Anthropomorphic 6 7.0 796 0.03
KR1018 Anthropomorphic 6 10.0 1000 0.10
KR1205 Anthropomorphic 7 5.0 1200 0.10
Kassow Robots KR1410 Anthropomorphic 7 10.0 1400 0.10
KR1805 Anthropomorphic 7 5.0 1800 0.10
KR810 Anthropomorphic 7 10.0 850 0.10
Kawasaki Duaro SCARA 8 4.0 760 0.05
Robotics Duaro 2 SCARA 8 6.0 760 0.05
6 Axes Robot Anthropomorphic 6 16.0 1900 0.05
Kinetic Systems
SCARA Robot SCARA 4 5.0 1200 0.05
Gen2 Anthropomorphic 7 2.4 985 0.15
Kinova Gen3 Anthropomorphic 7 4.0 902 0.15
Gen3 Lite Anthropomorphic 6 0.5 760 0.15
LBR iisy 3 R760 Anthropomorphic 6 3.0 760 0.01
LBR iisy 11 R1300 Anthropomorphic 6 11.0 1300 0.15
LBR iisy 15 R930 Anthropomorphic 6 15.0 930 0.15
KUKA
LBR iiwa 14 R820 Anthropomorphic 7 14.0 820 0.15
LBR iiwa 7 R800 Anthropomorphic 7 7.0 800 0.10
LWR Anthropomorphic 7 7.0 790 0.05
Life Robotics CORO Anthropomorphic 6 2.0 800 1.00
Speedy 12 Anthropomorphic 6 12.0 1250 0.10
Mabi
Speedy 6 Anthropomorphic 6 6.0 800 0.10
Megarobo MRX-T4 Anthropomorphic 4 3.0 505 0.05
Junior 200 SCARA 4 3.0 400 0.50
MIP Robotics
Junior 300 SCARA 4 5.0 600 0.40
Mitsubishi Electric RV-5AS-D MELFA ASSISTA Anthropomorphic 6 5.0 910 0.03
MRK Systeme KR 5 SI Anthropomorphic 6 5.0 1432 0.04
Nachi CZ 10 Anthropomorphic 6 10.0 1300 0.10
LARA 10 Anthropomorphic 6 10.0 1000 0.02
Neura Robotics
LARA 5 Anthropomorphic 6 5.0 800 0.02
Indy 10 Anthropomorphic 6 10.0 1000 0.10
Indy 12 Anthropomorphic 6 12.0 1200 0.50
Indy 3 Anthropomorphic 6 3.0 590 0.10
Indy 5 Anthropomorphic 6 3.0 800 0.10
Neuromeka Indy 7 Anthropomorphic 6 7.0 800 0.05
Indy RP Anthropomorphic 6 5.0 950 0.05
Indy RP 2 Anthropomorphic 7 5.0 800 0.05
Opti 10 Anthropomorphic 6 10.0 1216 0.10
Opti 5 Anthropomorphic 6 5.0 880 0.10
Niryo One Anthropomorphic 6 0.3 440 0.10
Pilz PRBT Anthropomorphic 6 6.0 741 0.20
Direct Drive 6 Axes SCARA 6 6.0 1793 0.02
PAVP6 Anthropomorphic 6 2.5 432 0.02
Precise
PAVS6 Anthropomorphic 6 37.0 770 0.03
Automation
PF3400 SCARA 4 23.0 588 0.05
PP100 Cartesian 4 2.0 1270 0.10
OB7 Anthropomorphic 7 5.0 1000 0.10
Productive OB7 Max 12 Anthropomorphic 7 12.0 1300 0.10
Robotics OB7 Max 8 Anthropomorphic 7 8.0 1700 0.10
OB7 Stretch Anthropomorphic 7 4.0 1250 0.10
RB10 1200 Anthropomorphic 6 10.0 1200 0.10
Rainbow
RB3 1300 Anthropomorphic 6 3.0 1300 0.10
Robotics
RB5 850 Anthropomorphic 6 5.0 850 0.10
Baxter Torso 14 2.2 1210 3.00
Rethink
Sawyer Anthropomorphic 7 4.0 1260 0.10
Robotics
Sawyer Black Edition Anthropomorphic 7 4.0 1260 0.10
Robut
Armobot Anthropomorphic 6 3.0 1500 0.10
Tecnology
X Mate 3 Anthropomorphic 7 3.0 760 0.03
Rokae
X Mate 7 Anthropomorphic 7 7.0 850 0.03
Rozum Pulse 75 Anthropomorphic 6 6.0 750 0.10
Robotics Pulse 90 Anthropomorphic 6 4.0 900 0.10
DSCR3 Duco Torso 7 3.0 800 0.02
DSCR5 Torso 7 5.0 800 0.02
GCR14 1400 Anthropomorphic 6 14.0 1400 0.05
GCR20 1100 Anthropomorphic 6 20.0 1100 0.05
Siasun GCR5 910 Anthropomorphic 6 5.0 910 0.05
SCR3 Anthropomorphic 7 3.0 600 0.02
SCR5 Anthropomorphic 7 5.0 800 0.02
TCR 0.5 Anthropomorphic 6 0.5 300 0.05
TCR 1 Anthropomorphic 6 1.0 500 0.05
R12 Anthropomorphic 6 1.0 500 0.10
ST Robotics
R17 Anthropomorphic 6 3.0 750 0.20
TX2 Touch 60 Anthropomorphic 6 4.5 670 0.02
TX2 Touch 60L Anthropomorphic 6 3.7 920 0.03
Staubli TX2 Touch 90 Anthropomorphic 6 14.0 1000 0.03
TX2 Touch 90L Anthropomorphic 6 12.0 1200 0.04
TX2 Touch 90XL Anthropomorphic 6 7.0 1450 0.04
YA-U5F Anthropomorphic 7 5.0 559 0.06
Yamaha YA-U10F Anthropomorphic 7 10.0 720 0.10
YA-U20F Anthropomorphic 7 20.0 910 0.10
Techman TM12 Anthropomorphic 6 12.0 1300 0.10
Techman TM14 Anthropomorphic 6 14.0 1100 0.10
Techman
Techman TM5 700 Anthropomorphic 6 6.0 700 0.05
Techman TM5 900 Anthropomorphic 6 4.0 900 0.05
Torobo Arm Anthropomorphic 7 6.0 600 0.05
Tokyo Robotics
Torobo Arm Mini Anthropomorphic 7 3.0 600 0.05
uArm Swift Pro Anthropomorphic 4 0.5 320 0.20
xArm 5 Lite Anthropomorphic 5 3.0 700 0.10
UFACTORY
xArm 6 Anthropomorphic 6 3.0 700 0.10
xArm 7 Anthropomorphic 7 3.5 700 0.10
UR10 CB3 Anthropomorphic 6 10.0 1300 0.10
UR10e Anthropomorphic 6 10.0 1300 0.03
UR16e Anthropomorphic 6 16.0 900 0.05
Universal
UR3 CB3 Anthropomorphic 6 3.0 500 0.10
Robots
UR3e Anthropomorphic 6 3.0 500 0.03
UR5 CB3 Anthropomorphic 6 5.0 850 0.10
UR5e Anthropomorphic 6 5.0 850 0.03
Motoman HC10 Anthropomorphic 6 10.0 1200 0.10
Yaskawa Motoman HC10 DT Anthropomorphic 6 10.0 1200 0.10
Motoman HC20 Anthropomorphic 6 20.0 1700 0.05
Yuanda Robotics Arm Anthropomorphic 6 7.0 1000 0.10
SR-L3 Anthropomorphic 6 3.0 600 0.03
SR-L6 Anthropomorphic 6 6.0 850 0.03
Svaya Robotics SR-L10 Anthropomorphic 6 10.0 1300 0.05
SR-L12 Anthropomorphic 6 12.0 1100 0.05
SR-L16 Anthropomorphic 6 16.0 900 0.05

Appendix B

Table A2. COBOT vs. TCP velocity and Power consumption.

Model Class Payload [kg] TCP Velocity [m/s] Power Consumption [kW]
OB7 Max 12 Anthropomorphic 12.0 2.0 0.90
OB7 Max 8 Anthropomorphic 8.0 2.0 0.90
AW-Tube 5 Anthropomorphic 5.0 0.75
AW-Tube 8 Anthropomorphic 8.0 0.75
AW-Tube 12 Anthropomorphic 13.0 0.75
AW-Tube 15 Anthropomorphic 15.0 0.75
AW-Tube 18 Anthropomorphic 18.0 0.75
AW-Tube 20 Anthropomorphic 20.0 0.75
SYB3 Anthropomorphic 10.0 1.0 0.70
OB7 Stretch Anthropomorphic 4.0 2.0 0.65
Zu 18 Anthropomorphic 18.0 3.5 0.60
RV-5AS-D MELFA ASSISTA Anthropomorphic 5.0 1.0 0.60
GCR20 1100 Anthropomorphic 20.0 1.0 0.60
I10 Anthropomorphic 10.0 4.0 0.50
Racer 5 0.80 Cobot Anthropomorphic 5.0 6.0 0.50
CS612 Anthropomorphic 12.0 3.0 0.50
EC612 Anthropomorphic 12.0 3.2 0.50
Zu 12 Anthropomorphic 12.0 3.0 0.50
I7 Anthropomorphic 7.0 0.40
SCR5 Anthropomorphic 5.0 1.0 0.40
Gen3 Anthropomorphic 4.0 0.5 0.36
E10 Anthropomorphic 10.0 1.0 0.35
E15 Anthropomorphic 15.0 1.0 0.35
Zu 7 Anthropomorphic 7.0 2.5 0.35
Indy 10 Anthropomorphic 10.0 1.0 0.35
Indy 12 Anthropomorphic 12.0 1.0 0.35
Indy 3 Anthropomorphic 3.0 1.0 0.35
Indy 5 Anthropomorphic 3.0 1.0 0.35
Indy 7 Anthropomorphic 7.0 1.0 0.35
Indy RP 2 Anthropomorphic 5.0 1.0 0.35
UR10e Anthropomorphic 10.0 2.0 0.35
UR16e Anthropomorphic 16.0 1.0 0.35
X Mate 3 Anthropomorphic 3.0 0.30
Techman TM12 Anthropomorphic 12.0 1.3 0.30
Techman TM14 Anthropomorphic 14.0 1.1 0.30
EVA Anthropomorphic 1.3 0.8 0.28
E5 Anthropomorphic 5.0 1.0 0.26
Panda 5 Anthropomorphic 5.0 1.0 0.26
CS66 Anthropomorphic 6.0 2.6 0.25
EC66 Anthropomorphic 6.0 2.8 0.25
Gen2 Anthropomorphic 2.4 0.2 0.25
KR 5 SI Anthropomorphic 5.0 0.25
Pulse 90 Anthropomorphic 4.0 2.0 0.25
SCR3 Anthropomorphic 3.0 0.8 0.25
UR10 CB3 Anthropomorphic 10.0 1.0 0.25
MRX-T4 Anthropomorphic 3.0 0.24
Techman TM5 700 Anthropomorphic 6.0 1.1 0.22
Techman TM5 900 Anthropomorphic 4.0 1.4 0.22
I5 Anthropomorphic 5.0 2.8 0.20
CR10 Anthropomorphic 10.0 3.0 0.20
CR16 Anthropomorphic 16.0 3.0 0.20
CR3 Anthropomorphic 3.0 3.0 0.20
CR5 Anthropomorphic 5.0 3.0 0.20
ECR5 Anthropomorphic 5.0 2.8 0.20
E3 Anthropomorphic 3.0 1.0 0.20
Gen3 Lite Anthropomorphic 0.5 0.3 0.20
PAVP6 Anthropomorphic 2.5 0.20
GCR5 910 Anthropomorphic 5.0 0.20
UR5e Anthropomorphic 5.0 1.0 0.20
C3 Anthropomorphic 3.0 1.0 0.18
E5 Anthropomorphic 5.0 1.0 0.18
E5-L Anthropomorphic 3.5 1.0 0.18
IRB 14050 Yumi Anthropomorphic 0.5 1.5 0.17
Panda 3 Anthropomorphic 3.0 1.0 0.16
I3 Anthropomorphic 3.0 1.9 0.15
CS63 Anthropomorphic 3.0 2.0 0.15
EC63 Anthropomorphic 3.0 2.0 0.15
Zu 3 Anthropomorphic 3.0 1.5 0.15
Pulse 75 Anthropomorphic 6.0 2.0 0.15
UR5 CB3 Anthropomorphic 5.0 1.0 0.15
xArm 5 Lite Anthropomorphic 3.0 0.3 0.12
UR3 CB3 Anthropomorphic 3.0 1.0 0.12
2R 48V Anthropomorphic 5.0 0.10
T5 Anthropomorphic 5.0 0.10
UR3e Anthropomorphic 3.0 1.0 0.10
OB7 Anthropomorphic 5.0 2.0 0.09
2R 24V Anthropomorphic 3.0 0.08
CORO Anthropomorphic 2.0 0.08
Robot Anthropomorphic 3.0 2.0 0.06
One Anthropomorphic 0.3 0.06

References
1. Moulières-Seban, T.; Salotti, J.M.; Claverie, B.; Bitonneau, D. Classification of Cobotic Systems for Industrial Applications. In
Proceedings of the 6th Workshop towards a Framework for Joint Action, Paris, France, 26 October 2015.
2. Di Marino, C.; Rega, A.; Vitolo, F.; Patalano, S.; Lanzotti, A. A new approach to the anthropocentric design of human–robot
collaborative environments. Acta IMEKO 2020, 9, 80–87. [CrossRef]
3. Vitolo, F.; Rega, A.; Di Marino, C.; Pasquariello, A.; Zanella, A.; Patalano, S. Mobile Robots and Cobots Integration: A Preliminary
Design of a Mechatronic Interface by Using MBSE Approach. Appl. Sci. 2022, 12, 419. [CrossRef]
4. Harold, L.S.; Michael, Z.; Ryan, R.J. The Robotics Revolution. Electron. Power 1985, 31, 598. [CrossRef]
5. Rigby, M. Future-proofing UK manufacturing Current investment trends and future opportunities in robotic automation.
Barclays/Dev. Econ. 2015, 1, 1–10.
6. Russmann, M.; Lorenz, M.; Gerbert, P.; Waldner, M.; Justus, J.; Engel, P.; Harnisch, M. Industry 4.0: The Future of Productivity
and Growth in Manufacturing Industries. Bost. Consult. Gr. 2015, 9, 54–89.
7. Di Marino, C.; Rega, A.; Vitolo, F.; Patalano, S.; Lanzotti, A. The anthropometric basis for the designing of collaborative
workplaces. In Proceedings of the II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0 IoT), Naples, Italy, 4–6 June
2019; pp. 98–102.
8. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and
applications. Mechatronics 2018, 55, 248–266. [CrossRef]
9. Bi, Z.; Luo, C.; Miao, Z.; Zhang, B.; Zhang, W.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in
manufacturing. Robot. Comput. Manuf. 2021, 67, 102022. [CrossRef]
10. OECD. The Future of Productivity; OECD: Paris, France, 2015; Volume 1.
11. Schmidtler, J.; Knott, V.; Hölzel, C.; Bengler, K. Human Centered Assistance Applications for the working environment of the
future. Occup. Ergon. 2015, 12, 83–95. [CrossRef]
12. Wang, X.V.; Seira, A.; Wang, L. Classification, personalised safety framework and strategy for human-robot collaboration. In
Proceedings of the CIE 48, International Conference on Computers & Industrial Engineering, Auckland, New Zealand, 2–5
December 2018.
13. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly.
CIRP Ann. 2019, 68, 701–726. [CrossRef]
14. Antonelli, D.; Astanin, S. Qualification of a Collaborative Human-robot Welding Cell. Procedia CIRP 2016, 41, 352–357. [CrossRef]
15. Levratti, A.; De Vuono, A.; Fantuzzi, C.; Secchi, C. TIREBOT: A novel tire workshop assistant robot. In Proceedings of the
IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Zurich, Switzerland, 4–7 September 2016; AIM:
Cranberry Township, PA, USA, 2016; pp. 733–738.
16. Peternel, L.; Tsagarakis, N.; Caldwell, D.; Ajoudani, A. Adaptation of robot physical behaviour to human fatigue in human-robot
co-manipulation. In Proceedings of the IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun,
Mexico, 15–17 November 2016; pp. 489–494.
17. Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical human–robot
interaction. Robot. Comput. Integr. Manuf. 2016, 40, 1–13. [CrossRef]
18. Tan, J.T.C.; Duan, F.; Zhang, Y.; Watanabe, K.; Kato, R.; Arai, T. Human-robot collaboration in cellular manufacturing: Design and
development. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, St Louis,
MI, USA, 11–15 October 2009; pp. 29–34.
19. Krüger, J.; Lien, T.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646. [CrossRef]
20. Erden, M.S.; Billard, A. End-point impedance measurements at human hand during interactive manual welding with robot.
In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014;
pp. 126–133.
21. Wojtara, T.; Uchihara, M.; Murayama, H.; Shimoda, S.; Sakai, S.; Fujimoto, H.; Kimura, H. Human–robot collaboration in precise
positioning of a three-dimensional object. Automatica 2009, 45, 333–342. [CrossRef]
22. Morel, G.; Malis, E.; Boudet, S. Impedance based combination of visual and force control. In Proceedings of the IEEE International
Conference on Robotics and Automation, Leuven, Belgium, 20–20 May 1998; Volume 2, pp. 1743–1748.
23. Magrini, E.; Ferraguti, F.; Ronga, A.J.; Pini, F.; De Luca, A.; Leali, F. Human-robot coexistence and interaction in open industrial
cells. Robot. Comput. Integr. Manuf. 2020, 61, 101846. [CrossRef]
24. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot
collaboration. Auton. Robots 2018, 42, 957–975. [CrossRef]
25. Michalos, G.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Dimoulas, K.; Koukas, S.; Mparis, K.; Papavasileiou, A.; Makris, S.
Seamless human robot collaborative assembly—An automotive case study. Mechatronics 2018, 55, 194–211. [CrossRef]
26. Baraglia, J.; Cakmak, M.; Nagai, Y.; Rao, R.P.; Asada, M. Efficient human-robot collaboration: When should a robot take initiative?
Int. J. Robot. Res. 2017, 36, 563–579. [CrossRef]
27. Donner, P.; Buss, M. Cooperative Swinging of Complex Pendulum-Like Objects: Experimental Evaluation. IEEE Trans. Robot.
2016, 32, 744–753. [CrossRef]
28. Dimeas, F.; Aspragathos, N. Online Stability in Human-Robot Cooperation with Admittance Control. IEEE Trans. Haptics 2016, 9,
267–278. [CrossRef]
29. Kruse, D.; Radke, R.J.; Wen, J.T. Collaborative human-robot manipulation of highly deformable materials. In Proceedings of the
IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 3782–3787.
30. Gams, A.; Nemec, B.; Ijspeert, A.J.; Ude, A. Coupling Movement Primitives: Interaction with the Environment and Bimanual
Tasks. IEEE Trans. Robot. 2014, 30, 816–830. [CrossRef]
31. Bestick, A.M.; Burden, S.A.; Willits, G.; Naikal, N.; Sastry, S.S.; Bajcsy, R. Personalized kinematics for human-robot collaborative
manipulation. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg,
Germany, 28 September–2 October 2015.
32. Kildal, J.; Martín, M.; Ipiña, I.; Maurtua, I. Empowering assembly workers with cognitive disabilities by working with collaborative
robots: A study to capture design requirements. Procedia CIRP 2019, 81, 797–802. [CrossRef]
33. Böhme, H.-J.; Wilhelm, T.; Key, J.; Schauer, C.; Schröter, C.; Groß, H.-M.; Hempel, T. An approach to multi-modal human–machine
interaction for intelligent service robots. Robot. Auton. Syst. 2003, 44, 83–96. [CrossRef]
34. Vasconez, J.P.; Kantor, G.A.; Cheein, F.A.A. Human–robot interaction in agriculture: A survey and current challenges. Biosyst.
Eng. 2019, 179, 35–48. [CrossRef]
35. Hjorth, S.; Chrysostomou, D. Human–robot collaboration in industrial environments: A literature review on non-destructive
disassembly. Robot. Comput. Manuf. 2021, 73, 102208. [CrossRef]
36. Murphy, R.R. Human—Robot Interaction in Rescue Robotics. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2004, 34, 138–153.
[CrossRef]
37. Magalhaes, P.; Ferreira, N. Inspection Application in an Industrial Environment with Collaborative Robots. Automation 2022, 3,
13. [CrossRef]
38. Weiss, A.; Wortmeier, A.-K.; Kubicek, B. Cobots in Industry 4.0: A Roadmap for Future Practice Studies on Human–Robot
Collaboration. IEEE Trans. Hum. Mach. Syst. 2021, 51, 335–345. [CrossRef]
39. Tsagarakis, N.G.; Caldwell, D.G.; Negrello, F.; Choi, W.; Baccelliere, L.; Loc, V.G.; Noorden, J.; Muratore, L.; Margan, A.; Cardellino,
A.; et al. WALK-MAN: A High-Performance Humanoid Platform for Realistic Environments. J. Field Robot. 2017, 34, 1225–1259.
[CrossRef]
40. Masaracchio, M.; Kirker, K. Resistance Training in Individuals with Hip and Knee Osteoarthritis: A Clinical Commentary with
Practical Applications. Strength Cond. J. 2022, 44, 36–46. [CrossRef]
41. Gravel, D.P.; Newman, W.S. Flexible Robotic Assembly Efforts at Ford Motor Company. Proceeding of the 2001 IEEE International
Symposium on Intelligent Control (ISIC’ 01) (Cat. No.01CH37206), Mexico City, Mexico, 5–7 September 2001. Available online:
https://ieeexplore.ieee.org/abstract/document/971504/ (accessed on 17 May 2022).
42. Mojtahedi, K.; Whitsell, B.; Artemiadis, P.; Santello, M. Communication and Inference of Intended Movement Direction during
Human–Human Physical Interaction. Front. Neurorobot. 2017, 11, 21. [CrossRef]
43. Vogel, J.; Castellini, C.; Van Der Smagt, P. EMG-Based Teleoperation and Manipulation with the DLR LWR-III. In Proceedings of
the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany,
23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6434–6437.
44. Gijsberts, A.; Bohra, R.; González, D.S.; Werner, A.; Nowak, M.; Caputo, B.; Roa, M.A.; Castellini, C. Stable myoelectric control of
a hand prosthesis using non-linear incremental learning. Front. Neurorobot. 2014, 8, 8. [CrossRef] [PubMed]
45. Fleischer, C.; Hommel, G. A Human-Exoskeleton Interface Utilizing Electromyography. IEEE Trans. Robot. 2008, 24, 872–882.
[CrossRef]
46. de Vlugt, E.; Schouten, A.; van der Helm, F.C.; Teerhuis, P.C.; Brouwn, G.G. A force-controlled planar haptic device for movement
control analysis of the human arm. J. Neurosci. Methods 2003, 129, 151–168. [CrossRef] [PubMed]
47. Burdet, E.; Osu, R.; Franklin, D.W. The central nervous system stabilizes unstable dynamics by learning optimal impedance.
Nature 2001, 414, 446–449. [CrossRef] [PubMed]
48. Hao, M.; Zhang, J.; Chen, K.; Asada, H.H.; Fu, C. Supernumerary Robotic Limbs to Assist Human Walking with Load Carriage. J.
Mech. Robot. 2020, 12, 061014. [CrossRef]
49. Luo, J.; Gong, Z.; Su, Y.; Ruan, L.; Zhao, Y.; Asada, H.H.; Fu, C. Modeling and Balance Control of Supernumerary Robotic Limb
for Overhead Tasks. IEEE Robot. Autom. Lett. 2021, 6, 4125–4132. [CrossRef]
50. Bonilla, B.L.; Parietti, F.; Asada, H.H. Demonstration-based control of supernumerary robotic limbs. In Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp.
3936–3942.
51. Parietti, F.; Chan, K.; Asada, H.H. Bracing the human body with supernumerary Robotic Limbs for physical assistance and load
reduction. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31
May–7 June 2014; pp. 141–148.
52. Ciullo, A.S.; Catalano, M.G.; Bicchi, A.; Ajoudani, A. A Supernumerary Soft Robotic Limb for Reducing Hand-Arm Vibration
Syndromes Risks. Front. Robot. AI 2021, 8, 650613. [CrossRef]
53. Abdi, E.; Burdet, E.; Bouri, M.; Himidan, S.; Bleuler, H. In a demanding task, three-handed manipulation is preferred to
two-handed manipulation. Sci. Rep. 2016, 6, 21758. [CrossRef]
54. Meraz, N.S.; Sobajima, M.; Aoyama, T.; Hasegawa, Y. Modification of body schema by use of extra robotic thumb. Robomech J.
2018, 5, 3. [CrossRef]
55. Kilner, J.; Hamilton, A.F.d.C.; Blakemore, S.-J. Interference effect of observed human movement on action is due to velocity profile
of biological motion. Soc. Neurosci. 2007, 2, 158–166. [CrossRef]
56. Maurice, P.; Padois, V.; Measson, Y.; Bidaud, P. Human-oriented design of collaborative robots. Int. J. Ind. Ergon. 2017, 57, 88–102.
[CrossRef]
57. Rosen, J.; Brand, M.; Fuchs, M.; Arcan, M. A myosignal-based powered exoskeleton system. IEEE Trans. Syst. Man Cybern. Part A
Syst. Humans 2001, 31, 210–222. [CrossRef]
58. Farry, K.; Walker, I.; Baraniuk, R. Myoelectric teleoperation of a complex robotic hand. IEEE Trans. Robot. Autom. 1996, 12, 775–788.
[CrossRef]
59. Castellini, C.; Artemiadis, P.; Wininger, M.; Ajoudani, A.; Alimusaj, M.; Bicchi, A.; Caputo, B.; Craelius, W.; Dosen, S.; Englehart,
K.; et al. Proceedings of the first workshop on Peripheral Machine Interfaces: Going beyond traditional surface electromyography.
Front. Neurorobot. 2014, 8, 22. [CrossRef]
60. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The Extraction of Neural Information
from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges. IEEE Trans. Neural Syst.
Rehabilitat. Eng. 2014, 22, 797–809. [CrossRef] [PubMed]
61. Kim, S.; Kim, C.; Park, J.H. Human-like Arm Motion Generation for Humanoid Robots Using Motion Capture Database.
In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2006;
pp. 3486–3491. [CrossRef]
62. Magrini, E.; Flacco, F.; De Luca, A. Estimation of contact forces using a virtual force sensor. In Proceedings of the IEEE International
Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2126–2133. [CrossRef]
63. Khatib, O.; Demircan, E.; De Sapio, V.; Sentis, L.; Besier, T.; Delp, S. Robotics-based synthesis of human motion. J. Physiol. 2009,
103, 211–219. [CrossRef] [PubMed]
64. Tsumugiwa, T.; Yokogawa, R.; Hara, K. Variable impedance control with virtual stiffness for human-robot cooperative pegin-hole
task. In Proceedings of the Intelligent Robots and Systems, Osaka, Japan, 5–7 August 2003; Volume 2, pp. 1075–1081. [CrossRef]
65. Ficuciello, F.; Romano, A.; Villani, L.; Siciliano, B. Cartesian impedance control of redundant manipulators for human-robot
co-manipulation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA,
14–18 September 2014; pp. 2120–2125. [CrossRef]
66. Kosuge, K.; Hashimoto, S.; Yoshida, H. Human-robots collaboration system for flexible object handling. In Proceedings of the
1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), Leuven, Belgium, 20–20 May 2002;
Volume 2, pp. 1841–1846. [CrossRef]
67. Ajoudani, A.; Godfrey, S.B.; Bianchi, M.; Catalano, M.G.; Grioli, G.; Tsagarakis, N.; Bicchi, A. Exploring Teleimpedance and Tactile
Feedback for Intuitive Control of the Pisa/IIT SoftHand. IEEE Trans. Haptics 2014, 7, 203–215. [CrossRef] [PubMed]
68. Yang, C.; Ganesh, G.; Haddadin, S.; Parusel, S.; Albu-Schaeffer, A.; Burdet, E. Human-Like Adaptation of Force and Impedance in
Stable and Unstable Interactions. IEEE Trans. Robot. 2011, 27, 918–930. [CrossRef]
69. Plagemann, C.; Ganapathi, V.; Koller, D.; Thrun, S. Real-time identification and localization of body parts from depth images.
In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010;
pp. 3108–3113. [CrossRef]
70. Perzanowski, D.; Schultz, A.; Adams, W. Integrating natural language and gesture in a robotics domain. In Proceedings of the 1998
IEEE International Symposium on Intelligent Control (ISIC) held jointly with IEEE International Symposium on Computational
Intelligence in Robotics and Automation (CIRA) Intell, Gaithersburg, MD, USA, 17 September 2002. [CrossRef]
71. Zanchettin, A.M.; Rocco, P. Reactive motion planning and control for compliant and constraint-based task execution. In Proceed-
ings of the International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2748–2753.
[CrossRef]
72. Peternel, L.; Noda, T.; Petrič, T.; Ude, A.; Morimoto, J.; Babič, J. Adaptive Control of Exoskeleton Robots for Periodic Assistive
Behaviours Based on EMG Feedback Minimisation. PLoS ONE 2016, 11, e0148942. [CrossRef]
73. Maeda, Y.; Takahashi, A.; Hara, T.; Arai, T. Human-robot cooperation with mechanical interaction based on rhythm entrainment-
realization of cooperative rope turning. In Proceedings of the 2001 ICRA. IEEE International Conference on Robotics and
Automation (Cat. No.01CH37164), Seoul, Republic of Korea, 21–26 May 2002; Volume 4, pp. 3477–3482.
74. Cherubini, A.; Passama, R.; Meline, A.; Crosnier, A.; Fraisse, P. Multimodal control for human-robot cooperation. In Proceedings
of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2202–2207.
[CrossRef]
75. Lippiello, V.; Siciliano, B.; Villani, L. Robot Interaction Control Using Force and Vision. In Proceedings of the IEEE/RSJ
International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1470–1475. [CrossRef]
76. Khansari-Zadeh, S.M.; Billard, A. Learning Stable Nonlinear Dynamical Systems with Gaussian Mixture Models. IEEE Trans.
Robot. 2011, 27, 943–957. [CrossRef]
77. Fernandez, V.; Balaguer, C.; Blanco, D.; Salichs, M. Active human-mobile manipulator cooperation through intention recognition.
In Proceedings of the 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), Seoul,
Republic of Korea, 21–26 May 2002. [CrossRef]
78. Stulp, F.; Grizou, J.; Busch, B.; Lopes, M. Facilitating intention prediction for humans by optimizing robot motions. In Proceedings
of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October
2015; pp. 1249–1255. [CrossRef]
79. Ebert, D.; Henrich, D. Safe human-robot-cooperation: Image-based collision detection for industrial robots. In Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; Volume 2, pp. 1826–1831. [CrossRef]
80. Magnanimo, V.; Saveriano, M.; Rossi, S.; Lee, D. A Bayesian approach for task recognition and future human activity prediction.
In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK,
25–29 August 2014; pp. 726–731. [CrossRef]
81. Bascetta, L.; Ferretti, G.; Rocco, P.; Ardö, H.; Bruyninckx, H.; Demeester, E.; Di Lello, E. Towards safe human-robot interaction
in robotic cells: An approach based on visual tracking and intention estimation. In Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2971–2978. [CrossRef]
82. Agravante, D.J.; Cherubini, A.; Bussy, A.; Gergondet, P.; Kheddar, A. Collaborative human-humanoid carrying using vision and
haptic sensing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31
May–7 June 2014; pp. 607–612. [CrossRef]
83. Strazzulla, I.; Nowak, M.; Controzzi, M.; Cipriani, C.; Castellini, C. Online Bimanual Manipulation Using Surface Electromyogra-
phy and Incremental Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 227–234. [CrossRef] [PubMed]
84. Calinon, S.; Sauser, E.L.; Caldwell, D.G.; Billard, A.G. Learning and reproduction of gestures by imitation an approach based on
Hidden Markov Model and Gaussian Mixture Regression. IEEE Robot. Autom. Mag. 2010, 17, 44–54. [CrossRef]
85. Rozo, L.; Bruno, D.; Calinon, S.; Caldwell, D.G. Learning optimal controllers in human-robot cooperative transportation tasks
with position and force constraints. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1024–1030. [CrossRef]
86. Rozo, L.; Calinon, S.; Caldwell, D.G. Learning force and position constraints in human-robot cooperative transportation. In
Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot
Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions, Edinburgh,
UK, 25–29 August 2014; pp. 619–624. [CrossRef]
87. Rozo, L.; Calinon, S.; Caldwell, D.G.; Jimenez, P.; Torras, C. Learning Physical Collaborative Robot Behaviors from Human
Demonstrations. IEEE Trans. Robot. 2016, 32, 513–527. [CrossRef]
88. Ivaldi, S.; Lefort, S.; Peters, J.; Chetouani, M.; Provasi, J.; Zibetti, E. Towards Engagement Models that Consider Individual Factors
in HRI: On the Relation of Extroversion and Negative Attitude Towards Robots to Gaze and Speech During a Human–Robot
Assembly Task: Experiments with the iCub humanoid. Int. J. Soc. Robot. 2016, 9, 63–86. [CrossRef]
89. Colome, A.; Planells, A.; Torras, C. A friction-model-based framework for Reinforcement Learning of robotic tasks in non-rigid
environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30
May 2015; pp. 5649–5654. [CrossRef]
90. Lallee, S.; Pattacini, U.; Lemaignan, S.; Lenz, A.; Melhuish, C.; Natale, L.; Skachek, S.; Hamann, K.; Steinwender, J.; Sisbot, E.A.;
et al. Towards a Platform-Independent Cooperative Human Robot Interaction System: III An Architecture for Learning and
Executing Actions and Shared Plans. IEEE Trans. Auton. Ment. Dev. 2012, 4, 239–253. [CrossRef]
91. Lee, D.; Ott, C. Incremental kinesthetic teaching of motion primitives using the motion refinement tube. Auton. Robot. 2011, 31,
115–131. [CrossRef]
92. Lawitzky, M.; Medina, J.R.; Lee, D.; Hirche, S. Feedback motion planning and learning from demonstration in physical robotic
assistance: Differences and synergies. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems,
Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3646–3652. [CrossRef]
93. Petit, M.; Lallee, S.; Boucher, J.-D.; Pointeau, G.; Cheminade, P.; Ognibene, D.; Chinellato, E.; Pattacini, U.; Gori, I.; Martinez-
Hernandez, U.; et al. The Coordinating Role of Language in Real-Time Multimodal Learning of Cooperative Tasks. IEEE Trans.
Auton. Ment. Dev. 2013, 5, 3–17. [CrossRef]
94. Peternel, L.; Petrič, T.; Oztop, E.; Babič, J. Teaching robots to cooperate with humans in dynamic manipulation tasks based on
multi-modal human-in-the-loop approach. Auton. Robot. 2014, 36, 123–136. [CrossRef]
95. Zhang, C.; Lin, C.; Leng, Y.; Fu, Z.; Cheng, Y.; Fu, C. An Effective Head-Based HRI for 6D Robotic Grasping Using Mixed Reality.
IEEE Robot. Autom. Lett. 2023, 8, 2796–2803. [CrossRef]
96. Duguleana, M.; Barbuceanu, F.G.; Mogan, G. Evaluating Human-Robot Interaction during a Manipulation Experiment Conducted
in Immersive Virtual Reality. In Proceedings of the International Conference on Virtual and Mixed Reality, Orlando, FL, USA,
9–14 July 2011; pp. 164–173. [CrossRef]
97. Zhang, Z. Building Symmetrical Reality Systems for Cooperative Manipulation. In Proceedings of the IEEE Conference on Virtual
Reality and 3D User Interfaces Abstracts and Workshops, Shanghai, China, 25–29 March 2023; pp. 751–752. [CrossRef]
98. Gupta, K.; Hajika, R.; Pai, Y.S.; Duenser, A.; Lochner, M.; Billinghurst, M. Measuring Human Trust in a Virtual Assistant using
Physiological Sensing in Virtual Reality. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR),
Atlanta, GA, USA, 22–26 March 2020; pp. 756–765. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
