Robotics, Volume 10, Issue 1 (March 2021) – 51 articles

Cover Story: Endoscopic endonasal surgery is a common procedure for treating pituitary lesions. However, the reduced workspace and lack of tool dexterity hinder the execution of complex surgical tasks such as suturing. In this paper, we propose a robot-assisted stitching method based on online optimization-based trajectory generation for curved needle stitching and a constrained motion planning framework that ensures safe surgical instrument motion. Experimental evaluations were conducted to compare the proposed method with the use of conventional instruments. Our results demonstrate a noticeable improvement in the stitching success ratio and a substantial reduction of the interaction forces with the phantom tissue.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 1183 KiB  
Article
Explanations from a Robotic Partner Build Trust on the Robot’s Decisions for Collaborative Human-Humanoid Interaction
by Misbah Javaid and Vladimir Estivill-Castro
Robotics 2021, 10(1), 51; https://doi.org/10.3390/robotics10010051 - 23 Mar 2021
Cited by 9 | Viewed by 5397
Abstract
Typically, humans interact with a humanoid robot with apprehension. This lack of trust can seriously affect the effectiveness of a team of robots and humans. We can create effective interactions that generate trust by augmenting robots with an explanation capability. The explanations provide justification and transparency to the robot’s decisions. To demonstrate such effective interaction, we tested this with an interactive, game-playing environment with partial information that requires team collaboration, using a game called Spanish Domino. We partner a robot with a human to form a pair, and this team opposes a team of two humans. We performed a user study with sixty-three human participants in different settings, investigating the effect of the robot’s explanations on the humans’ trust and perception of the robot’s behaviour. Our explanation-generation mechanism produces natural-language sentences that translate the decision taken by the robot into human-understandable terms. We video-recorded all interactions to analyse factors such as the participants’ relational behaviours with the robot, and we also used questionnaires to measure the participants’ explicit trust in the robot. Overall, our main results demonstrate that explanations enhanced the participants’ understanding of the robot’s decisions, as we observed a significant increase in the participants’ level of trust in their robotic partner. These results suggest that explanations, stating the reason(s) for a decision, combined with the transparency of the decision-making process, facilitate collaborative human–humanoid interactions.
(This article belongs to the Special Issue Human–Robot Collaboration)
Figure 1. Two types of team during the activities. Heterogeneous teams are composed of a robot and a human; homogeneous teams have two humans.
Figure 2. Representation of the 28 tiles of Domino.
Figure 3. Complete architectural overview of our human–robot interaction scenario using components.
Figure 4. A summary of the quantitative data analysis results for trust.
20 pages, 2474 KiB  
Article
Industrial Robot Trajectory Tracking Control Using Multi-Layer Neural Networks Trained by Iterative Learning Control
by Shuyang Chen and John T. Wen
Robotics 2021, 10(1), 50; https://doi.org/10.3390/robotics10010050 - 21 Mar 2021
Cited by 26 | Viewed by 7742
Abstract
Fast and precise robot motion is needed in many industrial applications. Most industrial robot motion controllers allow externally commanded motion profiles, but the trajectory tracking performance is affected by the robot dynamics and joint servo controllers, to which users have no direct access and about which they have little information. The performance is further compromised by time delays in transmitting the external command as a setpoint to the inner control loop. This paper presents an approach for combining neural networks and iterative learning control to improve the trajectory tracking performance of a multi-axis articulated industrial robot. For a given desired trajectory, the external command is iteratively refined using a high-fidelity dynamical simulator to compensate for the robot inner-loop dynamics. These desired trajectories and the corresponding refined input trajectories are then used to train multi-layer neural networks to emulate the dynamical inverse of the nonlinear inner-loop dynamics. We show that with a sufficiently rich training set, the trained neural networks generalize well to trajectories beyond the training set, as tested in the simulator. In applying the trained neural networks to a physical robot, the tracking performance still improves, but not as much as in the simulator. We show that transfer learning effectively bridges the gap between simulation and the physical robot. Finally, we test the trained neural networks on other robot models in simulation and demonstrate the possibility of a general-purpose network. Development and evaluation of this methodology are based on the ABB IRB6640-180 industrial robot and ABB RobotStudio software packages.
(This article belongs to the Special Issue Robotics and AI)
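The abstract describes a two-stage loop: the commanded trajectory is iteratively refined against a high-fidelity simulator, and the resulting pairs of desired and refined trajectories become training data for the inverse network. The refinement step can be sketched as a simple P-type ILC update. This is a minimal illustration, assuming a generic `simulate` plant model and a scalar gain `alpha`; it is not the authors' implementation.

```python
import numpy as np

def iterative_refinement(y_des, simulate, alpha=0.5, iters=8):
    """Refine the commanded trajectory until the simulated output tracks
    y_des. `simulate` maps a command (T x n_joints array) to the resulting
    output; `alpha` is an illustrative learning gain."""
    y_des = np.asarray(y_des, dtype=float)
    u = y_des.copy()                 # start from the desired trajectory
    for _ in range(iters):
        y = simulate(u)              # high-fidelity simulator rollout
        u = u + alpha * (y_des - y)  # P-type ILC correction of the command
    return u

# Pairs (y_des, u) collected over many trajectories would then serve as
# input/target data for training the inverse-dynamics neural network.
```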
Figure 1. Overview of the neural-learning trajectory tracking control approach.
Figure 2. Robot dynamics with closed-loop joint servo control. The goal of the neural-learning control approach is to find a possibly non-causal feedforward compensator G† using MNNs.
Figure 3. Trajectory tracking improvement with iterative refinement after 8 iterations. The desired trajectory is a sinusoid with ω = 3 rad/s. (a) Desired output (also the input to externally guided motion (EGM)), RobotStudio output, and linear model output. (b) Desired output, RobotStudio output, and modified input based on iterative refinement.
Figure 4. The architecture of the employed neural network, which has two fully connected hidden layers with 100 nodes each. The input and output layers contain 50 and 25 nodes, respectively.
Figure 5. Comparison of responses between RobotStudio (red curve) and the physical robot (blue curve). Desired inputs are sinusoids with an amplitude of 2° and angular frequency ω of 2 rad/s (a) and 10 rad/s (b). (a) RobotStudio captures the dynamics of the physical robot well when commanded a slow trajectory. (b) Discrepancy exists between RobotStudio dynamics and the physical robot when commanded a fast trajectory.
Figure 6. Comparison of tracking performance without and with the NN compensation for a chirp signal in RobotStudio. The NN feedforward controller addresses the lag and amplitude discrepancies induced by the robot inner-loop dynamics. (a) Uncompensated case: desired output (also the input into EGM), RobotStudio output, and linear model output. (b) Compensation with NN: desired output, RobotStudio output, and the input generated by the NN.
Figure 7. Comparison of tracking performance with and without the NN compensation of a random joint trajectory for joint 1 in RobotStudio. The generalization capability of the NN feedforward controller is demonstrated through the improved tracking accuracy.
Figure 8. Comparison of tracking performance with and without the NN compensation of a multi-joint sinusoidal trajectory in RobotStudio. The NN feedforward controller improves the trajectory tracking accuracy for all 6 joints.
Figure 9. Comparison of tracking performance with and without the NN compensation of the MoveIt! planned joint trajectories in RobotStudio. The inset figures clearly demonstrate the improvement of tracking.
Figure 10. Comparison of tracking performance without and with the NN compensation of a Cartesian square trajectory in the x-y plane with z constant in RobotStudio. The improved tracking performance of each Cartesian axis is reported in Table 5.
Figure 11. Improved tracking performance of Figure 10 in each individual Cartesian axis. With the NN compensation, the tracking errors are significantly reduced for all axes, as reported in Table 5. (a) Tracking performance in x. (b) Tracking performance in y. (c) Tracking performance in z.
Figure 12. Comparison of the tracking performance of the same chirp signal as in the simulation (Figure 6) without and with the NN compensation for joint 1 of the physical robot. (a) Uncompensated case: desired output (also the input into EGM) and the physical robot output. (b) Compensation with NN: desired output, physical robot output, and the input generated by the NN.
Figure 13. Comparison of tracking performance of a sinusoidal trajectory without and with transfer learning for joint 1 of the physical robot. The ℓ2 norm of the tracking error for each case is shown in Table 7. Transfer learning plays a key role in tracking a fast trajectory. (a) No NN compensation. (b) Compensation using the NN trained on the simulation data. (c) Compensation using the NN tuned by transfer learning.
Figure 14. The NNs trained using the IRB6640-180 also improve the tracking accuracy of chirp and sinusoidal joint trajectories for the IRB120 and IRB6640-130 robots in RobotStudio, possibly because these robots have similar inner-loop dynamics.
17 pages, 5080 KiB  
Article
Dynamic and Friction Parameters of an Industrial Robot: Identification, Comparison and Repetitiveness Analysis
by Lei Hao, Roberto Pagani, Manuel Beschi and Giovanni Legnani
Robotics 2021, 10(1), 49; https://doi.org/10.3390/robotics10010049 - 19 Mar 2021
Cited by 28 | Viewed by 6290
Abstract
This paper describes the results of dynamic tests performed to study the robustness of a dynamics model of an industrial manipulator. The tests show that the joint friction changes during robot operation. The variation follows a double exponential law and can thus be predicted. The variation is due to the heat generated by the friction, and a model is used to estimate the temperature and the related friction variation. Experimental data collected on two EFORT ER3A-C60 robots are presented and discussed. Repetitive tests performed on different days showed that the inertial and friction parameters can be robustly estimated and that the measured joint friction can be used to detect unexpected conditions of the joints. Future applications may include sensorless identification of collisions, predictive maintenance programs, or human–robot interaction.
(This article belongs to the Section Industrial Robots and Automation)
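As a worked illustration of the double exponential law mentioned in the abstract, the warm-up drift of the friction torque can be written as a function of operating time. The symbol names below (hot steady-state torque, two amplitudes, and two thermal time constants) are illustrative, not necessarily the paper's notation.

```latex
% Friction torque decays from its cold value toward the hot steady state
% \tau_{\infty} with a fast (T_1) and a slow (T_2) thermal time constant:
\tau_f(t) = \tau_{\infty} + A_1\,e^{-t/T_1} + A_2\,e^{-t/T_2}
```

Fitting the five parameters to measured friction-versus-time data then makes the warm-up behaviour predictable.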
Figure 1. Friction versus speed changes as the robot warms up, for each joint.
Figure 2. One cycle of the test trajectory on all joints. The first section is reserved for the estimation of the inertial parameters. The second is specially designed for friction estimation; the joints are moved individually, one after the other, to highlight the contribution of the friction component in the motor output. The last section heats the joints by performing a high-speed point-to-point movement for a few minutes.
Figure 3. Position performed by each joint during the execution of the final excitation trajectory.
Figure 4. Principle used for the simplified model of (11) when just one joint is moved at a time and the others assume a predefined value (example of Joint 2). The robot in the figure is the 3D model of the manipulator used in this work.
Figure 5. Experimental data. (a) The motor torque output of Joints 2 and 3 during the friction measurement cycle; data collected from one test on “Robot 1” (60% of the velocity). (b) The same data without the gravity effect.
Figure 6. (a) Mean value of friction torque versus time for each test on “Robot 1”; experimental data, Joint 3, velocity at 60%. (b) Fitting of the data using (16).
Figure 7. Friction torque versus time for all joints in Tests 1–4 performed on “Robot 1”, with the velocity at 60%. The Mix curve is the result of fitting the merged data of all tests. Joint 6 had a variation on Day 4, probably due to a measurement error; the reason is under analysis.
Figure 8. Scheme of the interconnections between the dynamic, friction, and thermal models, and their possible use in advanced applications (predictive maintenance, virtual force sensor [46], and human–robot interaction). Symbols with a “hat” denote estimated values; symbols without are real values.
Figure 9. Experimental data. (a) Evolution of the identified values P_y of Joint 3 for all the tests on “Robot 2”. (b) The evolution of the same parameter on a slightly wider scale; the difference between the initial and final estimates is less than 5%.
Figure 10. Velocity and torque versus time during a working cycle in cold and hot conditions: experimental data for all 6 joints.
Figure 11. Example of measured and predicted torque (dynamics plus friction) in cold and hot conditions on the same trajectory with variable velocity (Joint 5, Test 1).
Figure 12. Evolution of the identified values Ixx, Ixy, Ixz, Iyz, and Izz of Joint 2 for all the tests on “Robot 1” and “Robot 2”. The parameter values are repeatable and quite similar between the two robots.
Figure 13. (Top) Friction torque versus time for “Robot 1” and “Robot 2”. During one of the tests on “Robot 2”, a mechanical problem occurred; the unexpected increase in torque is visible in the left-side graph. The right-side graph shows the ordinary behavior of “Robot 1” performing the same tests. (Bottom) The evolution of the friction parameters (Equation (4)) during the tests performed on “Robot 2”. The mechanical problem clearly results in a change in the model values.
27 pages, 7718 KiB  
Article
Motion Planning and Control of an Omnidirectional Mobile Robot in Dynamic Environments
by Mahmood Reza Azizi, Alireza Rastegarpanah and Rustam Stolkin
Robotics 2021, 10(1), 48; https://doi.org/10.3390/robotics10010048 - 17 Mar 2021
Cited by 38 | Viewed by 9729
Abstract
Motion control in dynamic environments is one of the most important problems in using mobile robots in collaboration with humans and other robots. In this paper, the motion control of a four-Mecanum-wheeled omnidirectional mobile robot (OMR) in dynamic environments is studied. The robot’s differential equations of motion are derived using Kane’s method and converted to discrete state-space form. A nonlinear model predictive control (NMPC) strategy is designed based on the derived mathematical model to stabilize the robot at desired positions and orientations. As a main contribution of this work, the velocity obstacles (VO) approach is reformulated for introduction into the NMPC system, so that the robot avoids collision with moving and fixed obstacles online. Considering the robot’s physical restrictions, the parameters and functions used in the designed control system and collision avoidance strategy are determined through stability and performance analysis, and criteria are established for calculating the best values of these parameters. The effectiveness of the proposed controller and collision avoidance strategy is evaluated through a series of computer simulations. The simulation results show that the proposed strategy is efficient in stabilizing the robot at the desired configuration and in avoiding collision with obstacles, even in narrow spaces and with complicated arrangements of obstacles.
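To make the velocity obstacles (VO) reformulation concrete: a candidate robot velocity is on a collision course when the velocity relative to an obstacle points into the cone of directions that lead to contact. Below is a minimal 2D check under a circular-obstacle assumption; the names and geometry are illustrative, not the authors' formulation inside the NMPC.

```python
import numpy as np

def in_velocity_obstacle(p_rob, v_rob, p_obs, v_obs, r_sum):
    """True if the robot's velocity relative to a circular obstacle
    (combined radius r_sum) points into the collision cone."""
    rel_p = p_obs - p_rob            # robot-to-obstacle vector
    rel_v = v_rob - v_obs            # velocity of robot relative to obstacle
    dist = np.linalg.norm(rel_p)
    if dist <= r_sum:                # already overlapping
        return True
    half_angle = np.arcsin(r_sum / dist)   # half-angle of the collision cone
    speed = np.linalg.norm(rel_v)
    if speed < 1e-9:                 # no relative motion, no collision course
        return False
    cos_a = np.clip(rel_p @ rel_v / (dist * speed), -1.0, 1.0)
    return np.arccos(cos_a) < half_angle   # heading inside the cone
```

Inside a predictive controller, candidate inputs whose predicted velocities fall in any obstacle's cone would be penalized or excluded from the feasible set.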
Figure 1. The schematic model of the omnidirectional mobile robot (OMR) and the attached frames.
Figure 2. The obstacle cone and relative velocity of the robot with respect to the obstacle.
Figure 3. The structure of the designed control system.
Figure 4. Geometrical interpretation of the weight variations in the vicinity of the target position.
Figure 5. The maximum achievable velocity of the robot.
Figure 6. Position and velocity of the robot in decelerating motion.
Figure 7. Position and orientation error of the robot in Example 1.
Figure 8. The robot’s distance from the obstacles during its motion in Example 1.
Figure 9. Snapshot of the robot trajectory among the obstacles in Example 1.
Figure 10. Position and orientation error of the robot in Example 2.
Figure 11. The robot’s distance from the obstacles during its motion in Example 2.
Figure 12. Snapshot of the robot trajectory among the obstacles in Example 2.
Figure 13. Optimal cost function value during the robot’s motion in Example 1.
Figure 14. Optimal cost function value during the robot’s motion in Example 2.
Figure 15. Snapshot of the robot’s motion in Example 1 for Δ = 0.1 s and δ = 2 m.
Figure 16. Snapshot of the robot’s motion in Example 1 for Δ = 0.05 s and δ = 0.6 m.
Figure 17. Snapshot of the robot’s motion in Example 1 for Δ = 0.03 s and δ = 0.4 m.
Figure 18. Snapshot of the robot trajectory in Example 3, given in [26].
47 pages, 19996 KiB  
Review
Service Robots in the Healthcare Sector
by Jane Holland, Liz Kingston, Conor McCarthy, Eddie Armstrong, Peter O’Dwyer, Fionn Merz and Mark McConnell
Robotics 2021, 10(1), 47; https://doi.org/10.3390/robotics10010047 - 11 Mar 2021
Cited by 138 | Viewed by 43552
Abstract
Traditionally, advances in robotic technology have been made in the manufacturing industry due to the need for collaborative robots. This has not been the case in the service sectors, especially healthcare. The lack of emphasis on the healthcare sector has led to new opportunities for developing service robots that aid patients with illnesses, cognitive challenges and disabilities. Furthermore, the COVID-19 pandemic has acted as a catalyst for the development of service robots in the healthcare sector in an attempt to overcome the difficulties and hardships caused by this virus. The use of service robots is advantageous: they not only prevent the spread of infection and reduce human error, but also allow front-line staff to reduce direct contact, focusing their attention on higher-priority tasks and creating separation from direct exposure to infection. This paper presents a review of various types of robotic technologies and their uses in the healthcare sector. The reviewed technologies are a collaboration between academia and the healthcare industry, demonstrating the research and testing needed in the creation of service robots before they can be deployed in real-world applications and use cases. We focus on how robots can benefit patients, healthcare workers, customers, and organisations during the COVID-19 pandemic. Furthermore, we investigate the emerging focal issues of effective cleaning, logistics of patients and supplies, reduction of human error, and remote monitoring of patients to increase system capacity, efficiency, and resource equality in hospitals and related healthcare environments.
(This article belongs to the Section Medical Robotics and Service Robotics)
Figure 1. UV sterilisation robot (reproduced with permission from Chanprakon et al. [13], © 2019 IEEE).
Figure 2. (a) UV disinfection robot (reproduced with permission from UVD Robots [14]) and (b) XENEX pulsed xenon ultraviolet device (reproduced with permission from Disinfection Services [89], Cambridge University Press).
Figure 3. Decontamination of rooms and equipment with a hydrogen peroxide aerosol, using a pre-programmed robot (reproduced with permission from Andersen et al. [18], Journal of Hospital Infection).
Figure 4. (a) Human Support Robot platform with cleaning module (reproduced with permission from Ramalingam et al. [20]) and (b) experimental testbed for the table-cleaning human support robot (reproduced with permission from Yin et al. [21]).
Figure 5. (a) GeckoH13 under the cover and bottom view (reproduced with permission from Cepolina et al. [22], VDE VERLAG), (b) the XDBOT being controlled wirelessly and semi-autonomously (source: The Engineer [24]; reproduced with permission from I-Ming Chen, Fellow of the Academy of Engineering, Nanyang Technological University, Singapore) and (c) Taski Intellibot Swingobot 2000 (reproduced with permission from Taski [103]).
Figure 6. (a) SHUYU robot used in Yantai ETDA (reproduced with permission from Gong et al. [67], SpringerOpen), (b) SHUYUmini robot used in the First Affiliated Hospital of Tsinghua University and (c) Misty II, a temperature-screening assistant robot (reproduced with permission from Misty Robotics [68]).
Figure 7. (a) Experimental setups for nasopharyngeal swabbing (reproduced with permission from Wang et al. [27], © 2020 IEEE), (b) the world’s first commercial throat-swabbing robot (reproduced with permission from LifeLine Robotics [28]), (c) oropharyngeal swab robot and its sampling (reproduced with permission from the © ESR 2020 [29]) and (d) application of the developed prototype of the slave robot to the patient (reproduced with permission from Seo et al. [30]).
Figure 8. (a) Pathfinder during initial testing in Kosice-Šaca Hospital (reproduced with permission from Bačík et al. [32], © 2017 IEEE), (b) typical medication-delivery TUG (reproduced with permission from Aethon [33]) and (c) virtual prototype of the i-MERC (reproduced with permission from Carreira et al. [35], © 2006 IEEE).
Figure 9. (a) MiR100 mobile robot (reproduced with permission from Mobile Industrial Robotics [38]) and (b) Moxi fetching items from central supply (reproduced with permission from Diligent Robotics [37]).
Figure 10. (a) Implementation of automated blood testing and analysis using the venipuncture robot (reproduced with permission from Balter et al. [41], World Scientific) and (b) hand-held venipuncture device (reproduced with permission from Leipheimer et al. [42], World Scientific).
Figure 11. (a) The KUKA robot of the KR AGILUS series (reproduced with permission from KUKA Robotics [43]), (b) UR5 collaborative robot (reproduced with permission from Universal Robotics [44]) and (c) ABB’s high-precision robot (reproduced with permission from ABB Robotics [45]).
Figure 12. (a) One-on-one interaction between user and robot coach (reproduced with permission from Fasola and Matarić [50], Journal of Human-Robot Interaction Steering Committee) and (b) TIAGo robot (reproduced with permission from PAL Robotics [51]).
Figure 13. (a) iPal robot: assisted therapy for autism (reproduced with permission from AvatarMind [55]) and (b) QTrobot: therapist’s little helper and kids’ great friend (reproduced with permission from LuxAI [54]).
Figure 14. (a) Telemedical assistant (reproduced with permission from Danish Arif [59]), (b) Temi robot (reproduced with permission from Temi [60]), (c) ARI, the humanoid robot: a mix of service robotics and artificial intelligence in one platform (reproduced with permission from [61]) and (d) remote daily consultation of physical and mental conditions using the telepresence system (reproduced with permission from Yang et al. [62], SpringerOpen).
Figure 15. (Top) Positive and (bottom) negative image samples for artificial neural network training (reproduced with permission from Mamun et al. [56], JSW).
Figure 16. (a) Da Vinci robotic surgical system (reproduced with permission from Intuitive Surgical) and (b) the ZEUS subsystems (reproduced with permission from Marescaux et al. [70], Surgical Clinics).
Figure 17. Telerobotic ultrasound system used during COVID-19: (a) a sonographer uses an ultrasound probe to control movements of the scanning ultrasound probe and (b) an assistant moves the frame of the MELODY system over the patient’s uterus (reproduced with permission from Adams et al. [71], SAGE Open).
Figure 18. Robot manipulation and grasping: the cloth placemat task (reproduced with permission from McConachie [146], SAGE).
Figure 19. Modelling social interaction: an example of the haggling sequence. (a) An example scene showing two sellers and one buyer and (b) reconstructed 3D social signals (reproduced with permission from Joo [149]).
17 pages, 2041 KiB  
Article
On the Impact of Gravity Compensation on Reinforcement Learning in Goal-Reaching Tasks for Robotic Manipulators
by Jonathan Fugal, Jihye Bae and Hasan A. Poonawala
Robotics 2021, 10(1), 46; https://doi.org/10.3390/robotics10010046 - 9 Mar 2021
Cited by 4 | Viewed by 5129
Abstract
Advances in machine learning technologies in recent years have facilitated developments in autonomous robotic systems. Designing these autonomous systems typically requires manually specified models of the robotic system and world when using classical control-based strategies, or time-consuming and computationally expensive data-driven training when using learning-based strategies. Combining classical control and learning-based strategies may mitigate both requirements. However, the performance of the combined control system is not obvious, given that there are two separate controllers. This paper focuses on one such combination, which uses gravity compensation together with reinforcement learning (RL). We present a study of the effects of gravity compensation on the performance of two reinforcement learning algorithms when solving reaching tasks using a simulated seven-degree-of-freedom robotic arm. The results of our study demonstrate that gravity compensation coupled with RL can reduce the training required in reaching tasks involving elevated target locations, but not all target locations.
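The combination under study follows the residual pattern shown in the paper's Figure 1: the commanded torque is the sum of a model-based gravity-compensation term and the RL policy's action, so the agent only has to learn the residual behaviour. A minimal sketch, assuming a generic `gravity_fn` model term and policy interface (not the paper's code):

```python
import numpy as np

def combined_torque(policy, gravity_fn, q, obs):
    """Combine a learned action with model-based gravity compensation.

    policy:     RL policy (e.g., ACKTR or PPO2) mapping observation to torques
    gravity_fn: model-based gravity term g(q) of the manipulator dynamics
    q:          current joint positions; obs: observation fed to the agent
    """
    tau_rl = np.asarray(policy(obs))    # learned part of the command
    tau_g = np.asarray(gravity_fn(q))   # cancels the static gravity load
    return tau_rl + tau_g               # torque sent to the joints
```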
Figure 1. Framework for residual reinforcement learning. A controller’s output is combined with a policy learned by a reinforcement learning agent to control a robot. The environment for the RL agent is the closed-loop robot control system.
Figure 2. Visualization of initial robot poses and goal locations for Tasks 1–3. In these images, the goal location is indicated by the red dot.
Figure 3. Grid-search results for the ACKTR and PPO2 reinforcement learning agents. The mean and standard deviation of the cumulative regret over the 10 training sessions for each hyperparameter permutation are shown: the center point of each bar is the mean cumulative regret, and the total length of the bar indicates the standard deviation. The system with gravity compensation is slightly shifted to the right to enhance visibility. The results for each system are displayed in ascending order of mean cumulative regret. (a) Gravity compensation leads to lower regret for all hyperparameter permutations when training with the ACKTR algorithm. (b) Gravity compensation leads to lower regret for all hyperparameter permutations when training with the PPO2 algorithm.
Figure 4. Training-session learning curves of the ACKTR and PPO2 RL agents on Tasks 1–3. The solid curve is the mean episode reward during training, and the shaded region indicates the standard deviation. (a) In Task 1, ACKTR with gravity compensation achieves higher rewards than ACKTR without it, and learns faster. (b) In Task 1, PPO2 with gravity compensation achieves similar rewards to PPO2 without it; gravity compensation appears to enable faster learning. (c) In Task 2, ACKTR with gravity compensation achieves higher rewards, although the rewards are lower in the initial period of training. (d) In Task 2, PPO2 with gravity compensation achieves lower rewards and learns more slowly. (e) In Task 3, ACKTR with gravity compensation achieves significantly higher rewards and learns faster. (f) In Task 3, PPO2 with gravity compensation achieves higher rewards and learns faster.
18 pages, 2917 KiB  
Article
Cobot User Frame Calibration: Evaluation and Comparison between Positioning Repeatability Performances Achieved by Traditional and Vision-Based Methods
by Roberto Pagani, Cristina Nuzzi, Marco Ghidelli, Alberto Borboni, Matteo Lancini and Giovanni Legnani
Robotics 2021, 10(1), 45; https://doi.org/10.3390/robotics10010045 - 8 Mar 2021
Cited by 14 | Viewed by 5955
Abstract
Since cobots are designed to be flexible, they are frequently repositioned to change the production line according to need; hence, their working area (user frame) must often be calibrated. It is therefore important to adopt a fast and intuitive user frame calibration method that allows even non-expert users to perform the procedure effectively, reducing the mistakes that may arise in such contexts. The aim of this work was to quantitatively assess the performance of different user frame calibration procedures in terms of accuracy, complexity, and calibration time, to allow a reliable choice of which calibration method to adopt and how many calibration points to use, given the requirements of the specific application. We first analyzed the performance of the built-in user frame calibration method of a Rethink Robotics Sawyer robot (Robot Positioning System, RPS), which is based on the analysis of the distortion of a fiducial marker in the image acquired by the wrist camera. This quantitative analysis highlighted the limitations of the RPS approach, which only computes local calibration planes, and the resulting reduction in performance. The analysis then focused on the comparison of two traditional calibration methods involving rigid markers, to determine the best number of calibration points for good repeatability. The analysis shows that, among the three methods, RPS yields very poor repeatability (1.42 mm), while the three- and five-point calibration methods achieve lower values (0.33 mm and 0.12 mm, respectively), closer to the reference repeatability (0.08 mm). Moreover, comparing the overall calibration times of the three methods shows that increasing the number of calibration points beyond five is not recommended, since performance plateaus while the overall calibration time increases.
(This article belongs to the Special Issue Human–Robot Collaboration)
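For reference, the three-point calibration compared above can be reproduced from three taught points: the user frame origin, a point along the desired x-axis, and a point in the x-y plane. The sketch below constructs the resulting frame as a homogeneous transform; the function and variable names are illustrative, not Sawyer's API.

```python
import numpy as np

def user_frame_from_3_points(p_origin, p_x, p_xy):
    """Build a user frame from three taught points (3-vectors) and return
    the 4x4 homogeneous transform from robot base to user frame."""
    x = p_x - p_origin
    x /= np.linalg.norm(x)           # unit x-axis through the second point
    z = np.cross(x, p_xy - p_origin)
    z /= np.linalg.norm(z)           # unit z-axis, normal to the taught plane
    y = np.cross(z, x)               # y-axis completes the right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p_origin
    return T
```

A five-point variant would fit the same frame to redundant measurements (e.g., in a least-squares sense), which is consistent with the better repeatability reported above.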
Figure 1. Example of the set-up of the Robot Positioning System (RPS) landmark-based calibration adopted to test the repeatability along the z-axis. The proximity sensor is mounted on the end-effector and the landmark is placed between points P3 and P4.
Figure 2. Distances obtained from the proximity sensor for each position Pi along the x-axis, compared between the reference test and the RPS test. For each boxplot, n = 30 samples have been considered. The solid line represents the median, while the dashed line represents the mean value of each boxplot.
Figure 3. Distances obtained from the proximity sensor for each position Pi along the y-axis, compared between the reference test and the RPS test. For each boxplot, n = 30 samples have been considered. The solid line represents the median, while the dashed line represents the mean value of each boxplot.
Figure 4. Distances obtained from the proximity sensor for each position Pi along the z-axis, compared between the reference test and the RPS test. For each boxplot, n = 30 samples have been considered. The solid line represents the median, while the dashed line represents the mean value of each boxplot.
Figure 5. Image showing the “close” set-up. The aluminum markers (AMs) have been glued on the table at fixed positions, with the corresponding robot coordinates written in yellow.
Figure 6. Image showing the “wide” set-up. The AMs have been glued on the table at fixed positions, with the corresponding robot coordinates written in yellow.
Figure 7. Average calibration times achieved for each calibration method. The averages have been obtained as the mean value of five tests performed by different users.
10 pages, 13790 KiB  
Communication
Determining Robotic Assistance for Inclusive Workplaces for People with Disabilities
by Elodie Hüsing, Carlo Weidemann, Michael Lorenz, Burkhard Corves and Mathias Hüsing
Robotics 2021, 10(1), 44; https://doi.org/10.3390/robotics10010044 - 5 Mar 2021
Cited by 10 | Viewed by 4899
Abstract
Human–robot collaboration (HRC) provides the opportunity to enhance the physical abilities of severely and multiply disabled people, thus allowing them to work in industrial workplaces on the primary labour market. In order to assist this target group optimally, the collaborative robot has to support them based on their individual capabilities. Knowledge of the amount of required assistance is therefore a central aspect of the design and programming of HRC workplaces. The paper introduces a new method that bases the task allocation on the individual capabilities of a person. The method obtains the human’s capabilities on the one hand and the process requirements on the other. These two profiles are then compared and the workload of the human is determined. This, in turn, determines the amount of support or assistance that should be provided by a robot capable of HRC. Finally, the profile comparison for an anonymized participant and the concept of the human–robot workplace are presented.
(This article belongs to the Special Issue Human–Robot Collaboration)
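The comparison step described in the abstract can be pictured as an element-wise match of two rating profiles on a shared scale: wherever the process requirement exceeds the person's capability, the robot should assist. A minimal sketch under that assumption (the 0–5 scale and capability names are illustrative, not the paper's instrument):

```python
def allocate_assistance(capabilities, requirements):
    """Compare a capability profile against a capability-based requirement
    profile (same keys, same rating scale) and decide, per capability,
    whether the cobot should assist."""
    allocation = {}
    for name, required in requirements.items():
        available = capabilities[name]
        if required > available:
            allocation[name] = "robot assists"   # excessive demand
        else:
            allocation[name] = "human performs"  # adequate or under-challenged
    return allocation

# Illustrative 0-5 ratings:
person = {"grip force": 2, "precision": 4}
process = {"grip force": 4, "precision": 3}
print(allocate_assistance(person, process))
# -> {'grip force': 'robot assists', 'precision': 'human performs'}
```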
Figure 1. Overall concept of the individual capability-based task allocation.
Figure 2. Concept of the breakdown of a process.
Figure 3. Basic elements of standard processes.
Figure 4. Breakdown of the standard process “form-fit positioning” into its basic elements.
Figure 5. Rating scale for each capability and, respectively, capability-based requirement.
Figure 6. Evaluation: level of insufficient challenge resulting from the comparison of capability and capability-based requirement.
Figure 7. Evaluation: level of excessive demand resulting from the comparison of capability and capability-based requirement.
Figure 8. Summary of the comparison, subdivided into overloaded (red), average workload (black) and under-challenged (green).
Figure 9. Concept of a human–robot workplace for people with disabilities.
20 pages, 9405 KiB  
Article
Mechatronic Re-Design of a Manual Assembly Workstation into a Collaborative One for Wire Harness Assemblies
by Ilaria Palomba, Luca Gualtieri, Rafael Rojas, Erwin Rauch, Renato Vidoni and Andrea Ghedin
Robotics 2021, 10(1), 43; https://doi.org/10.3390/robotics10010043 - 5 Mar 2021
Cited by 17 | Viewed by 6687
Abstract
Nowadays, the wire harness assembly process is still performed manually due to the process complexity and product variability (e.g., wires of different kinds, sizes and lengths). The Wire cobots project, in which this work was conceived, aims at improving the current state-of-the-art assembly process by introducing collaborative robotics into it. A shared workstation exploiting human abilities and machine strengths was developed to assemble automotive wire harnesses by means of insulating tape for a real industrial case. In the new workstation, the human deals with the complex task of wire handling, while the robot performs the repetitive and strenuous taping operations. This task allocation, together with the workstation redesign, improves the operator’s well-being in terms of postural conditions and increases production efficiency. In this paper, the mechanical and mechatronic design, as well as the realization and validation, of this new collaborative workstation are presented and discussed.
(This article belongs to the Special Issue Advances in Italian Robotics II)
Figure 1. Manual assembly workstation at the ELVEZ d.o.o. company (left); wire harness (right).
Figure 2. Key design drivers to fulfill the challenge requirements.
Figure 3. Prototype.
Figure 4. Bench design.
Figure 5. Assembly panel design.
Figure 6. Market research for collaborative robots.
Figure 7. Collaborative robot UR10.
Figure 8. End effector: Kabtec KTH Spot 9.
Figure 9. Approaching direction of the taping pistol.
Figure 10. Locking system.
Figure 11. Functional scheme.
Figure 12. Workstation simulation in RoboDK.
Figure 13. Experiments with the new collaborative workstation.
23 pages, 2058 KiB  
Article
Globally Optimal Redundancy Resolution with Dynamic Programming for Robot Planning: A ROS Implementation
by Enrico Ferrentino, Federico Salvioli and Pasquale Chiacchio
Robotics 2021, 10(1), 42; https://doi.org/10.3390/robotics10010042 - 4 Mar 2021
Cited by 8 | Viewed by 6734
Abstract
Dynamic programming techniques have proven much more flexible than calculus of variations and other techniques in performing redundancy resolution through global optimization of performance indices. When the state and input spaces are discrete and the time horizon is finite, they can easily accommodate generic constraints and objective functions and find Pareto-optimal sets. Several implementations have been proposed in previous works, but either they do not ensure the achievement of the globally optimal solution, or they have not been demonstrated on robots of practical relevance. In this communication, recent advances in dynamic programming redundancy resolution, so far only demonstrated on simple planar robots, are extended to generic kinematic structures. This is done by expanding the Robot Operating System (ROS) and proposing a novel architecture that meets the requirements of maintainability, re-usability, modularity and flexibility usually demanded of robotic software libraries. The proposed ROS extension integrates seamlessly with the other software components of the ROS ecosystem, so as to encourage the reuse of the available visualization and analysis tools. The new architecture is demonstrated on a 7-DOF robot with a six-dimensional task, and topological analyses are carried out on both its state space and the resulting joint-space solution.
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
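The grid-based dynamic programming idea (a discretized state space along the assigned workspace path, optimized backward over a finite horizon) can be condensed into a generic backward-recursion sketch. The `grids` and `step_cost` interfaces below are placeholders, not the paper's ROS architecture.

```python
import numpy as np

def dp_redundancy_resolution(grids, step_cost):
    """Backward dynamic programming over a discretized redundancy grid.

    grids:     list over waypoints; grids[k] holds the feasible joint
               configurations (self-motion samples) at waypoint k
    step_cost: cost of moving between configurations at adjacent waypoints
               (np.inf if the transition is infeasible)
    Returns the globally optimal sequence of grid indices.
    """
    N = len(grids)
    value = [np.zeros(len(g)) for g in grids]          # cost-to-go per node
    best_next = [np.zeros(len(g), dtype=int) for g in grids]
    for k in range(N - 2, -1, -1):                     # backward sweep
        for i, q in enumerate(grids[k]):
            costs = [step_cost(q, qn) + value[k + 1][j]
                     for j, qn in enumerate(grids[k + 1])]
            j_star = int(np.argmin(costs))
            value[k][i] = costs[j_star]
            best_next[k][i] = j_star
    path = [int(np.argmin(value[0]))]                  # best starting node
    for k in range(N - 1):                             # forward reconstruction
        path.append(int(best_next[k][path[-1]]))
    return path
```

Because every grid point at every waypoint is visited exactly once, the recursion returns the global optimum over the chosen discretization, which is the property the paper emphasizes.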
Figure 1. Mapping of the workspace path in the joint space, yielding the state-space grids.
Figure 2. Pictorial representation of the local optimization problem.
Figure 3. Context diagram.
Figure 4. Hybrid decomposition/class diagram.
Figure 5. Sequence diagram representing dynamic programming redundancy resolution.
Figure 6. Workspace path assigned to the Panda arm, together with the base reference frame and obstacle.
Figure 7. Panda grids (each corresponding to a different extended aspect) representing q1 for the trajectory described in Section 4.1, considering joint limits.
Figure 8. Panda grids (each corresponding to a different extended aspect) representing q1 for the trajectory described in Section 4.1, neglecting joint limits.
Figure 9. Discrete globally optimal (left) and Pareto-optimal (right) solutions for the Panda example.
21 pages, 4259 KiB  
Article
Development of a High-Speed, Low-Latency Telemanipulated Robot Hand System
by Yuji Yamakawa, Yugo Katsuki, Yoshihiro Watanabe and Masatoshi Ishikawa
Robotics 2021, 10(1), 41; https://doi.org/10.3390/robotics10010041 - 3 Mar 2021
Cited by 6 | Viewed by 5332
Abstract
This paper focuses on the development, evaluation, and demonstration of a high-speed, low-latency telemanipulated robot hand system. The characteristics of the developed system are the following: non-contact, high-speed 3D visual sensing of the human hand; intuitive motion mapping between human and robot hands; and low-latency, fast responsiveness to human hand motion. Such a high-speed, low-latency telemanipulated robot hand system can be considered more effective from the viewpoint of usability. The developed system consists of a high-speed vision system, a high-speed robot hand, and a real-time controller. For the developed system, we propose new methods of 3D sensing, mapping between the human hand and the robot hand, and robot hand control. We evaluated the performance (latency and responsiveness) of the developed system. The latency of the developed system is so small that humans cannot perceive it. In addition, we conducted demonstration experiments of opening/closing motion, object grasping, and moving-object grasping. Finally, we confirmed the validity and effectiveness of the developed system and proposed methods.
(This article belongs to the Section Intelligent Robots and Mechatronics)
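One simple way to picture the hand-to-hand motion mapping is to reduce each measured finger to a flexion ratio and scale it onto the robot hand's joint range. The sketch below is a simplification under that assumption; it is not the authors' mapping, which works from high-speed 3D visual measurements of the hand.

```python
import numpy as np

def map_hand_to_robot(fingertip_dists, open_dists, closed_dists,
                      joint_min, joint_max):
    """Map fingertip-to-palm distances to robot hand joint references.

    fingertip_dists: current distance of each fingertip from the palm
    open_dists, closed_dists: the same distances in fully open/closed poses
    joint_min, joint_max: robot hand joint limits (rad), as arrays
    """
    # Flexion ratio in [0, 1]: 0 = fully open hand, 1 = fully closed.
    ratio = (open_dists - fingertip_dists) / (open_dists - closed_dists)
    ratio = np.clip(ratio, 0.0, 1.0)
    return joint_min + ratio * (joint_max - joint_min)  # linear scaling
```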
Figure 1. Concept of this research.
Figure 2. Positioning of our work [18] (© 2015 IEEE).
Figure 3. Conceptual illustration of the high-speed, low-latency telemanipulated robot hand system [18] (© 2015 IEEE).
Figure 4. High-speed vision system [18] (© 2015 IEEE).
Figure 5. Setup of the sensing system [18] (© 2015 IEEE).
Figure 6. Mechanism of the high-speed robot hand [19] (© 2004 IEEE).
Figure 7. Flow of the proposed method [18]; motion sensing, motion mapping, and control of the robot hand are explained in Section 4.1, Section 4.2 and Section 4.3, respectively (© 2015 IEEE).
Figure 8. High-speed non-contact sensing [18] (© 2015 IEEE).
Figure 9. Finding corresponding points.
Figure 10. Initial posture and correspondence relationship of each finger [18] (© 2015 IEEE).
Figure 11. Initial state of the simulation.
Figure 12. Mapping result.
Figure 13. Reproduction of high-speed motion of a human finger [18] (© 2015 IEEE).
Figure 14. Latency for the actual joint angle to reach the reference joint angle [18] (© 2015 IEEE).
Figure 15. Experimental result of hand opening and closing motion.
Figure 16. Experimental result of hand opening and closing motion (enlarged) [18] (© 2015 IEEE).
Figure 17. Experimental result of ball catching.
Figure 18. Experimental results of falling-stick catching.
Figure 19. Success rate of falling-stick catching.
Figure 20. Detailed analysis of the success rate of falling-stick catching.
42 pages, 4180 KiB  
Review
A Review of Active Hand Exoskeletons for Rehabilitation and Assistance
by Tiaan du Plessis, Karim Djouani and Christiaan Oosthuizen
Robotics 2021, 10(1), 40; https://doi.org/10.3390/robotics10010040 - 3 Mar 2021
Cited by 103 | Viewed by 22439
Abstract
Disabilities are a global issue due to the decrease in life quality and mobility of patients, especially people suffering from hand disabilities. This paper presents a review of active hand exoskeleton technologies, over the past decade, for rehabilitation, assistance, augmentation, and haptic devices. [...] Read more.
Disabilities are a global issue due to the decrease in life quality and mobility of patients, especially people suffering from hand disabilities. This paper presents a review of active hand exoskeleton technologies, over the past decade, for rehabilitation, assistance, augmentation, and haptic devices. Hand exoskeletons are still an active research field due to the challenges that engineers face and are trying to solve. Each hand exoskeleton has certain requirements to fulfil to achieve its aims. These requirements have been extracted and categorized into two sections, general and specific, to give a common platform for developing future devices. Since this is still a developing area, the requirements are also shaped by advances in the field. Technical challenges, such as size requirements, weight, ergonomics, rehabilitation, actuators, and sensors, all stem from the complex anatomy and biomechanics of the hand. The hand is one of the most complex structures in the human body; therefore, to understand certain design approaches, the anatomy and biomechanics of the hand are addressed in this paper. The control of these devices is also an emerging challenge due to the implementation of intelligent systems and new rehabilitation techniques. This includes intention-detection techniques (electroencephalography (EEG), electromyography (EMG), admittance) and the estimation of the applied assistance. Therefore, this paper summarizes the technology in a systematic approach and reviews the state of the art of active hand exoskeletons with a focus on rehabilitation and assistive devices. Full article
(This article belongs to the Special Issue Medical and Rehabilitation Robots)
Show Figures

Figure 1: The basic outline of the contents covered in this paper.
Figure 2: The internal bone structure of the hand and the wrist movements: (a) the skeletal structure of the hand, indicating the various bones and joints (taken from the open-access article [26]); (b) the flexion and extension motion of the wrist with its maximum range of motion (ROM); (c) the radial and ulnar deviation of the wrist with its maximum ROM, for the right hand.
Figure 3: The flexion, extension, abduction, and adduction of the 2nd to 5th digits of the hand (diagram taken from the open-access article [27]).
Figure 4: Common types of hand grips used every day: (a) the power grip with the thumb in-plane with the palm (illustration from [30]); (b) another type of power grip, the cylindrical grip (also from [30]); (c) a precision grip, more specifically a key grip [30]; (d) a hook grip, in which the thumb is not involved (illustration from the open-access paper [24]).
Figure 5: Outline of the hierarchy identified for the various types of hand exoskeletons in different fields, as well as the types of existing hand exoskeleton technologies.
Figure 6: Different existing hand exoskeleton designs; various designs exist in each category, but the fundamental idea of each is illustrated: (a) matched-axis design; (b) remote center of rotation (RCR) design; (c) redundant mechanism (four-bar link mechanisms); (d) base-to-distal design; (e) tendon-driven glove (compliant), with the tendons/cables in orange and the glove or flexible base structure in blue; (f) jointless structure, actuated by compressed air (pneumatics) or hydraulic fluids.
Figure 7: Average weight of each exoskeleton design type, considering only the weight of the exoskeleton on the hand.
Figure 8: Distribution of the exoskeleton design types found in the literature.
Figure 9: The various elements of hand exoskeleton technologies and their basic building blocks, identified through previous designs; these elements are still being researched and expanded for different applications and designs.
Figure 10: The various actuation methods explored in both active and passive systems.
Figure 11: Distribution of the active actuation methods used in the literature.
Figure 12: The power transmission methods explored in different designs.
Figure 13: Distribution of the various power transmission methods found in the literature.
Figure 14: The various sensing methods explored throughout hand exoskeletons.
Figure 15: The various control signals, types of control, and control schemes explored in various design attempts.
Figure 16: Distribution of the control methods used in various designs.
17 pages, 2612 KiB  
Article
Model-Based Flow Rate Control with Online Model Parameters Identification in Automatic Pouring Machine
by Nobutoshi Kabasawa and Yoshiyuki Noda
Robotics 2021, 10(1), 39; https://doi.org/10.3390/robotics10010039 - 2 Mar 2021
Cited by 4 | Viewed by 4371
Abstract
In this study, we proposed an advanced control system for tilting-ladle-type automatic pouring machines in the casting industry. Automatic pouring machines have been introduced recently to improve the working environment of the pouring process. Conventional studies on pouring control have [...] Read more.
In this study, we proposed an advanced control system for tilting-ladle-type automatic pouring machines in the casting industry. Automatic pouring machines have been introduced recently to improve the working environment of the pouring process. Conventional studies on pouring control have confirmed that pouring flow rate control contributes to improving the accuracy of the entire automatic pouring machine, in terms of the falling position of the liquid flowing out of the ladle, the weight of liquid filled into the mold, and the liquid level in the sprue cup. However, the conventional control system has problems: it is not easy to pour the liquid in the ladle precisely at a large tilting angle, and it takes time to adjust the control parameters. Therefore, we proposed a feedforward pouring flow rate control system constructed from the inverse model of the pouring process with online model parameter identification. In this approach, we derived a mathematical model of the pouring process that precisely represents pouring at large ladle tilting angles. The model parameters in the inverse model within the controller are updated online via the model parameter identification. To verify the efficacy of the proposed pouring control system, we experimented using the tilting-ladle-type automatic pouring machine. In the experimental results, the mean absolute error between the outflow liquid's weight and the reference weight was improved from 0.1346 at the first pouring to 0.0498 at the fifth pouring. Moreover, the model parameters were identified within 4 s; the proposed approach therefore enables the controller's parameters to be updated within each pouring motion interval. Full article
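A rough sketch of the online identification step follows, using a recursive least-squares (RLS) update on a toy two-parameter outflow model. The regressor, the forgetting factor, and the synthetic data are assumptions for demonstration, not the derived pouring-process model.

    import numpy as np

    theta = np.zeros(2)        # model parameters to identify (toy layout)
    P = np.eye(2) * 1e3        # parameter covariance
    lam = 0.99                 # forgetting factor

    def rls_update(theta, P, phi, y):
        """One RLS step: phi is the regressor vector, y the measured flow."""
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
        return theta, P

    # Synthetic data from a "true" model y = 0.5*u + 0.2*u**2:
    rng = np.random.default_rng(0)
    for _ in range(200):
        u = rng.uniform(0.0, 1.0)
        phi = np.array([u, u**2])
        y = 0.5 * u + 0.2 * u**2 + rng.normal(0.0, 1e-3)
        theta, P = rls_update(theta, P, phi, y)
    print(np.round(theta, 3))   # approaches [0.5, 0.2]

Updating the parameters recursively, rather than re-fitting from scratch, is what makes it plausible to refresh the controller between consecutive pouring motions.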
(This article belongs to the Section Industrial Robots and Automation)
Show Figures

Figure 1: Tilting-ladle-type automatic pouring machine.
Figure 2: Tilting-ladle-type automatic pouring machine in the laboratory.
Figure 3: Ladle geometry.
Figure 4: Block diagram of the automatic pouring process.
Figure 5: Cross section of the pouring process.
Figure 6: Parameters at the pouring mouth.
Figure 7: Internal liquid shape in the ladle at a large tilting angle.
Figure 8: Block diagram of the proposed pouring flow rate control system.
Figure 9: Block diagram of the feedforward control with the inverse model.
Figure 10: Desired pouring patterns.
Figure 11: Derivation system of the model parameters from the ladle shape.
Figure 12: Model parameters obtained from the ladle shape shown in Figure 3.
Figure 13: Flowchart of updating the controller's parameters.
Figure 14: Pouring patterns used as reference inputs in the experiments.
Figure 15: Experimental results of the first pouring in the first trial experiments.
Figure 16: Experimental results of the second pouring in the first trial experiments.
Figure 17: Experimental results of the third pouring in the first trial experiments.
Figure 18: Experimental results of the fourth pouring in the second trial experiments.
Figure 19: Experimental results of the fifth pouring in the second trial experiments.
21 pages, 1381 KiB  
Article
Accessible Educational Resources for Teaching and Learning Robotics
by Maria Pozzi, Domenico Prattichizzo and Monica Malvezzi
Robotics 2021, 10(1), 38; https://doi.org/10.3390/robotics10010038 - 23 Feb 2021
Cited by 20 | Viewed by 7972
Abstract
Robotics is now facing the challenge of deploying newly developed devices into human environments, and for this process to be successful, societal acceptance and uptake of robots are crucial. Education is already playing a key role in raising awareness and spreading knowledge about [...] Read more.
Robotics is now facing the challenge of deploying newly developed devices into human environments, and for this process to be successful, societal acceptance and uptake of robots are crucial. Education is already playing a key role in raising awareness and spreading knowledge about robotic systems, and there is a growing need to create highly accessible resources to teach and learn robotics. In this paper, we review the educational material available online, including videos, podcasts, and coding tools, aimed at facilitating the learning of robotics-related topics at different levels. The offer of such resources was recently boosted by the higher demand for distance-learning tools due to the COVID-19 pandemic. The potential of e-learning for robotics is still under-exploited, and here we provide an updated list of resources that could help instructors and students to better navigate the large amount of information available online. Full article
(This article belongs to the Special Issue Advances and Challenges in Educational Robotics)
Show Figures

Figure 1: Three screenshots from "The Art of Grasping and Manipulation in Robotics" by Prattichizzo et al.: (a) basic definitions, (b) robotic grasp modeling, and (c) simulating robotic grasps with the Syngrasp toolbox [26].
Figure 2: Topics treated in MOOCs and lecture series on robotics; each MOOC included in a specialization is counted as a separate item.
Figure 3: Number of subscribers of the main YouTube channels on robotics.
Figure 4: Year of release and status (active/not active) of the main podcasts on robotics.
Figure 5: A screenshot from the talk given by one of the authors at TEDx Roma in 2014, entitled "Wearable technology for the sense of touch".
Figure 6: Target participants and participation modes of the main robotics challenges.
14 pages, 1432 KiB  
Article
Visual Intelligence: Prediction of Unintentional Surgical-Tool-Induced Bleeding during Robotic and Laparoscopic Surgery
by Mostafa Daneshgar Rahbar, Hao Ying and Abhilash Pandya
Robotics 2021, 10(1), 37; https://doi.org/10.3390/robotics10010037 - 21 Feb 2021
Cited by 5 | Viewed by 4356
Abstract
Unintentional vascular damage can result from a surgical instrument’s abrupt movements during minimally invasive surgery (laparoscopic or robotic). A novel real-time image processing algorithm based on local entropy is proposed that can detect abrupt movements of surgical instruments and predict bleeding occurrence. The [...] Read more.
Unintentional vascular damage can result from a surgical instrument’s abrupt movements during minimally invasive surgery (laparoscopic or robotic). A novel real-time image processing algorithm based on local entropy is proposed that can detect abrupt movements of surgical instruments and predict bleeding occurrence. The uniform nature of the texture of surgical tools is utilized to segment the tools from the background. By comparing changes in entropy over time, the algorithm determines when the surgical instruments are moved abruptly. We tested the algorithm using 17 videos of minimally invasive surgery, 11 of which had tool-induced bleeding. Our preliminary testing shows that the algorithm is 88% accurate and 90% precise in predicting bleeding. The average advance warning time for the 11 videos is 0.662 s, with the standard deviation being 0.427 s. The proposed approach has the potential to eventually lead to a surgical early warning system or even proactively attenuate tool movement (for robotic surgery) to avoid dangerous surgical outcomes. Full article
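A rough sketch of the local-entropy computation at the core of the method follows, in Python. The patch size, the histogram binning, and the alarm threshold are illustrative assumptions; the point is that the uniform texture of the instruments makes their patches score low, and a large frame-to-frame change in the entropy map flags abrupt tool motion.

    import numpy as np

    def local_entropy(gray, patch=16):
        """Shannon entropy of non-overlapping patches of a grayscale image."""
        h, w = gray.shape
        ent = np.zeros((h // patch, w // patch))
        for i in range(ent.shape[0]):
            for j in range(ent.shape[1]):
                block = gray[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                hist, _ = np.histogram(block, bins=32, range=(0, 255))
                p = hist / hist.sum()
                p = p[p > 0]
                ent[i, j] = -np.sum(p * np.log2(p))
        return ent

    # Two synthetic frames stand in for consecutive endoscopic images.
    rng = np.random.default_rng(0)
    prev = local_entropy(rng.integers(0, 255, (256, 256), dtype=np.uint8))
    curr = local_entropy(rng.integers(0, 255, (256, 256), dtype=np.uint8))
    abrupt_motion = np.abs(curr - prev).mean() > 0.5   # assumed alarm threshold
    print(abrupt_motion)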
(This article belongs to the Section Medical Robotics and Service Robotics)
Show Figures

Figure 1: Change in the binarized entropy map due to the movement of the surgical instrument. The surgical instruments include the surgical robotic arm at the right-hand side of the scene; the black arm is the suctioning tool.
Figure 2: Temporal entropy within a pre-recorded video with arterial bleeding.
18 pages, 2375 KiB  
Article
Dynamic Parameter Identification of a Pointing Mechanism Considering the Joint Clearance
by Jing Sun, Xueyan Han, Tong Li and Shihua Li
Robotics 2021, 10(1), 36; https://doi.org/10.3390/robotics10010036 - 20 Feb 2021
Cited by 2 | Viewed by 3679
Abstract
The clearance of the revolute joint influences the accuracy of dynamic parameter identification. In order to address this problem, a method for dynamic parameter identification of an X–Y pointing mechanism while considering the clearance of the revolute joint is proposed in this paper. [...] Read more.
The clearance of the revolute joint influences the accuracy of dynamic parameter identification. In order to address this problem, a method for dynamic parameter identification of an X–Y pointing mechanism while considering the clearance of the revolute joint is proposed in this paper. Firstly, the nonlinear dynamic model of the pointing mechanism was established based on a modified contact model, which took into consideration the effect of the asperity of the contact surface on the joint clearance. Secondly, with the aim of achieving an anti-interference excitation trajectory, the trajectory was optimized according to the condition number of the observation matrix, and the driving functions of the active joints were obtained. Thirdly, a dynamic simulation was conducted with the Adams software, with the clearance included in the simulation model. Finally, the dynamic parameter identification of the pointing mechanism was conducted based on an artificial bee colony (ABC) algorithm. The identification results obtained with and without considering the joint clearance were compared. The results showed that the accuracy of the dynamic parameter identification was improved when the clearance was taken into consideration. This study provides a theoretical basis for improving the accuracy of dynamic parameter identification. Full article
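The trajectory-optimization criterion can be illustrated in a few lines: the observation (regressor) matrix is stacked over the trajectory samples and its condition number is the cost to minimize, since a well-conditioned matrix makes the identification robust to noise. The two-parameter toy regressor below is an assumption for demonstration, not the pointing mechanism's actual dynamic model.

    import numpy as np

    def observation_matrix(q, qdd):
        """One regressor row per sample; the columns multiply the unknown
        parameters of a toy model tau = p1*qdd + p2*sin(q)."""
        return np.column_stack([qdd, np.sin(q)])

    t = np.linspace(0.0, 2.0 * np.pi, 100)
    for amp in (0.1, 1.0):                 # two candidate excitation amplitudes
        q = amp * np.sin(t)
        qdd = -amp * np.sin(t)
        W = observation_matrix(q, qdd)
        # lower condition number means better-conditioned identification
        print(amp, np.linalg.cond(W))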
(This article belongs to the Section Industrial Robots and Automation)
Show Figures

Figure 1: Approximate contact model considering joint clearance.
Figure 2: Schematic diagram of the gap.
Figure 3: The X–Y pointing mechanism with clearance.
Figure 4: Coordinate system of the X–Y pointing mechanism with clearance.
Figure 5: Rotational angle curves: (a) shafting 1 and (b) shafting 2.
Figure 6: Angular velocity curves: (a) shafting 1 and (b) shafting 2.
Figure 7: Angular acceleration curves: (a) shafting 1 and (b) shafting 2.
Figure 8: Convergence of the ABC algorithm for dynamic parameter identification.
18 pages, 3041 KiB  
Article
Experimental Investigation of a Cable Robot Recovery Strategy
by Giovanni Boschetti, Riccardo Minto and Alberto Trevisani
Robotics 2021, 10(1), 35; https://doi.org/10.3390/robotics10010035 - 16 Feb 2021
Cited by 8 | Viewed by 5130
Abstract
Developing an emergency procedure for cable-driven parallel robots is not a trivial process, since it is not possible to halt the end-effector by quickly braking the actuators as in rigid-link manipulators. For this reason, the cable robot recovery strategy is an important topic [...] Read more.
Developing an emergency procedure for cable-driven parallel robots is not a trivial process, since it is not possible to halt the end-effector by quickly braking the actuators as in rigid-link manipulators. For this reason, the cable robot recovery strategy is an important topic of research, and the literature provides several approaches. However, the computational efficiency of the recovery algorithm is fundamental for real-time applications. Thus, this paper presents a recovery strategy adopted in an experimental setup consisting of a three degrees-of-freedom (3-DOF) suspended cable robot controlled by an industrial PC. The description of the control system lists the industrial-grade components installed, further highlighting the industrial relevance of the work. Lastly, the experimental validation of the recovery strategy demonstrates its effectiveness. Full article
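Figure 2 (below) shows the PID and feed-forward control scheme of the robot. As a rough single-axis sketch of that structure (the gains, sampling time, and scalar form are assumptions, not the paper's tuned controller):

    class PIDFF:
        """Scalar PID controller with an additive feed-forward term."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, ref, meas, ff):
            """ff is the feed-forward command (e.g., from the dynamic model);
            the PID part corrects the residual tracking error."""
            err = ref - meas
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return ff + self.kp * err + self.ki * self.integral + self.kd * deriv

    ctrl = PIDFF(kp=50.0, ki=5.0, kd=1.0, dt=0.001)
    print(ctrl.step(ref=1.0, meas=0.95, ff=0.2))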
(This article belongs to the Special Issue Feature Papers 2020)
Show Figures

Figure 1: Example of a generic suspended cable-driven parallel robot.
Figure 2: PID and feed-forward control scheme of the robot.
Figure 3: Three-dimensional view of the velocity components in the mobile reference system.
Figure 4: Servo system adopted in the presented work.
Figure 5: Prototype cable-suspended parallel robot (CSPR) used during the tests.
Figure 6: Vicon Tracker display showing the positions of the four cameras and of the markers with respect to the robot workcell (FoV: field of view).
Figure 7: Spatial trajectory of the end-effector: comparison between the reference motion (actuator position) and the measurements from the motion tracking system.
Figure 8: Motor torques during the planned trajectory.
Figure 9: Comparison between the reference motion (actuator position) and the measurements from the motion tracking system for a more complex trajectory.
Figure 10: Spatial trajectory of the end-effector during the recovery strategy: the black line is the trajectory before cable failure, while the red and green lines represent the two phases of the recovery strategy.
Figure 11: Motor torques during the recovery strategy for motor 1 (blue), motor 2 (red), motor 3 (green), and motor 4 (violet).
23 pages, 3503 KiB  
Article
Time Coordination and Collision Avoidance Using Leader-Follower Strategies in Multi-Vehicle Missions
by Camilla Tabasso, Venanzio Cichella, Syed Bilal Mehdi, Thiago Marinho and Naira Hovakimyan
Robotics 2021, 10(1), 34; https://doi.org/10.3390/robotics10010034 - 13 Feb 2021
Cited by 15 | Viewed by 5437
Abstract
In recent years, the increasing popularity of multi-vehicle missions has been accompanied by a growing interest in the development of control strategies to ensure safety in these scenarios. In this work, we propose a control framework for coordination and collision avoidance in cooperative [...] Read more.
In recent years, the increasing popularity of multi-vehicle missions has been accompanied by a growing interest in the development of control strategies to ensure safety in these scenarios. In this work, we propose a control framework for coordination and collision avoidance in cooperative multi-vehicle missions based on a speed adjustment approach. The overall problem is decoupled into a coordination problem, which ensures coordination and inter-vehicle safety among the agents, and a collision-avoidance problem, which guarantees the avoidance of non-cooperative moving obstacles. We model the network over which the cooperative vehicles communicate using tools from graph theory, and take communication losses and time delays into account. Finally, through a rigorous Lyapunov analysis, we provide performance bounds and demonstrate the efficacy of the algorithms with numerical and experimental results. Full article
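The speed-adjustment idea admits a compact sketch: each follower advances a virtual time whose rate is corrected in proportion to its coordination error with the leader, so the error decays toward zero. The scalar form, gain, and step size below are illustrative assumptions.

    def advance_virtual_time(gamma_i, gamma_leader, nominal_rate=1.0, k=0.5, dt=0.01):
        """Advance the follower's virtual time; the rate speeds up or slows
        down in proportion to the coordination error with the leader."""
        rate = nominal_rate + k * (gamma_leader - gamma_i)
        return gamma_i + rate * dt

    g_leader, g_follower = 0.0, 0.3     # the follower starts 0.3 s ahead
    for _ in range(1000):
        g_leader += 1.0 * 0.01          # the leader runs at the nominal pace
        g_follower = advance_virtual_time(g_follower, g_leader)
    print(round(g_leader - g_follower, 4))   # the coordination error has decayed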
(This article belongs to the Special Issue Women in Robotics)
Show Figures

Figure 1: Cooperative control framework.
Figure 2: Evolution of the pace of the mission (left) and coordination error between leader and follower (right).
Figure 3: Evolution of the virtual time (left) and coordination error between leader and follower (right).
Figure 4: Evolution of the mission at three different time instants; the trajectory of the leader is shown in blue and the trajectory of the follower in red.
Figure 5: Experimental setup for the collision-avoidance experiment.
Figure 6: Evolution of the mission at three different time instants; the trajectory of the leader is shown in cyan, the trajectories of the three robots in blue, green, and yellow, and the trajectory of the obstacle in red. The nominal positions of the vehicles are shown in a lighter shade of each color.
Figure 7: Left: distance between each ground robot and the obstacle (solid blue, yellow, and green lines) together with the minimum required safety distance (red dashed line); the light blue line shows the distance for robot 1 if the collision-avoidance algorithm were not used. The distance between the leader and the obstacle is omitted, since a collision between the two is impossible. Right: inter-vehicle distances between the followers, with the minimum safe distance shown by the dashed red line.
Figure 8: Evolution of γ_δ(t).
Figure 9: Evolution of the pace of the mission (top row) and of the virtual time (bottom row) for case 2.
18 pages, 1319 KiB  
Article
Impact of Cycle Time and Payload of an Industrial Robot on Resource Efficiency
by Florian Stuhlenmiller, Steffi Weyand, Jens Jungblut, Liselotte Schebek, Debora Clever and Stephan Rinderknecht
Robotics 2021, 10(1), 33; https://doi.org/10.3390/robotics10010033 - 12 Feb 2021
Cited by 11 | Viewed by 5815
Abstract
Modern industry benefits from the automation capabilities and flexibility of robots. Consequently, the performance depends on the individual task, robot and trajectory, while application periods of several years lead to a significant impact of the use phase on the resource efficiency. In this [...] Read more.
Modern industry benefits from the automation capabilities and flexibility of robots. Consequently, the performance depends on the individual task, robot and trajectory, while application periods of several years lead to a significant impact of the use phase on the resource efficiency. In this work, simulation models predicting a robot's energy consumption are extended by an estimation of the reliability, enabling the consideration of maintenance to enhance the assessment of the application's life cycle costs. Furthermore, a life cycle assessment yields the greenhouse gas emissions for the individual application. Potential benefits of the combination of motion simulation and cost analysis are highlighted by the application to an exemplary system. For the selected application, the consumed energy has a distinct impact on greenhouse gas emissions, while acquisition costs govern life cycle costs. Low cycle times result in reduced costs per workpiece; however, for short cycle times and higher payloads, the probability that spare parts will be required increases distinctly for two critical robotic joints. Hence, the analysis of energy consumption and reliability, in combination with maintenance, life cycle costing and life cycle assessment, can provide additional information to improve the resource efficiency. Full article
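The way energy and reliability combine into a cost per workpiece can be shown with a toy calculation in Python. Every number below (Weibull reliability parameters, prices, cycle time, energy per cycle) is an assumed placeholder, not a figure from the paper.

    import math

    def weibull_reliability(hours, scale=40000.0, shape=1.5):
        """Probability that the component survives the given operating hours."""
        return math.exp(-((hours / scale) ** shape))

    HOURS = 36000.0                  # operating period considered
    cycle_time_s = 4.0               # assumed cycle duration
    energy_per_cycle_J = 500.0       # assumed energy per pick-and-place cycle
    price_per_kWh = 0.30             # assumed electricity price
    acquisition = 25000.0            # assumed robot acquisition cost
    repair = 2000.0                  # assumed cost of one repair

    cycles = HOURS * 3600.0 / cycle_time_s
    energy_cost = cycles * energy_per_cycle_J / 3.6e6 * price_per_kWh
    expected_repairs = 1.0 - weibull_reliability(HOURS)   # crude one-failure proxy
    lcc = acquisition + energy_cost + expected_repairs * repair
    print(round(lcc / cycles, 6))    # life cycle cost per workpiece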
(This article belongs to the Special Issue Industrial Robotics in Industry 4.0)
Show Figures

Figure 1: Exemplary task analyzed in this study: the configurations for picking up (left) and placing (right) a sphere as the workpiece; the trace is depicted in yellow.
Figure 2: Curve fit of the failure probability for the strain wave gear, based on the life adjustment factors for reliability from ISO 281.
Figure 3: Segments and their estimated mass for the UR5.
Figure 4: Overview of the energy consumption and reliability for each joint after 36,000 h of operation.
Figure 5: Composition of the LCC and GHG emissions after 36,000 h of continuous operation.
Figure 6: Consumed energy and reliability after 36,000 h of continuous operation, depending on the weight of the workpiece and the duration of one cycle.
Figure 7: Probability of the number of repairs, depending on the robot component, during 36,000 h of continuous operation.
Figure 8: LCC and GHG emissions after 36,000 h of continuous operation, considering possible repairs, depending on the load of the workpiece and the duration of the cycle.
Figure 9: LCC and GHG emissions per workpiece (pW) after 36,000 h of continuous operation, considering possible repairs, depending on the load of the workpiece and the duration of the cycle.
29 pages, 4147 KiB  
Article
Adaptive Position/Force Control of a Robotic Manipulator in Contact with a Flexible and Uncertain Environment
by Piotr Gierlak
Robotics 2021, 10(1), 32; https://doi.org/10.3390/robotics10010032 - 12 Feb 2021
Cited by 8 | Viewed by 6181
Abstract
The present paper concerns the synthesis of robot movement control systems in the cases of disturbances of natural position constraints, which are the result of surface susceptibility and inaccuracies in its description. The study contains the synthesis of control laws, in which the [...] Read more.
The present paper concerns the synthesis of robot movement control systems in the cases of disturbances of natural position constraints, which are the result of surface susceptibility and inaccuracies in its description. The study contains the synthesis of control laws, in which the knowledge of the parameters of the susceptible environment is not required, and which guarantee stability of the system in the case of an inaccurately described contact surface. The novelty of the presented solution is based on introducing an additional module into the control law in the directions normal to the interaction surface, which allows for a fluent change of control strategy in the case of occurrence of distortions in the surface. The additional module in the control law is interpreted as a virtual viscous resistance force and a resilient environment acting upon the robot. This interpretation facilitates an intuitive selection of the gains and allows the behavior of the system to be foreseen when disturbances occur. Introducing the reactions of virtual constraints provides automatic adjustment of the robot's interaction force with the susceptible environment, minimizing the impact of the geometric inaccuracy of the environment. Full article
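The extra module acting as a virtual viscoelastic reaction can be sketched as one additional term in the normal-direction control signal; the gains and the scalar form below are illustrative assumptions, not the synthesized control law of the paper.

    def normal_control(u_pd, u_comp, h, h_dot, k_virt=200.0, b_virt=10.0):
        """h is the deviation from the nominal constraint surface and h_dot its
        rate; the virtual spring-damper term takes over smoothly when a
        surface defect makes the real surface depart from the described one."""
        u_virtual = -k_virt * h - b_virt * h_dot
        return u_pd + u_comp + u_virtual

    # A 2 mm surface defect encountered at some instant of the motion:
    print(normal_control(u_pd=1.0, u_comp=0.5, h=0.002, h_dot=0.01))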
(This article belongs to the Section Intelligent Robots and Mechatronics)
Show Figures

Figure 1: Main strategies for force control applied in industrial robotics: (a) maintaining a given pressure force; (b) adjusting the feed speed to the motion resistance.
Figure 2: Behaviour of the control strategy in the case of surface distortions: (a) movement on the surface without distortion; (b) movement on the distorted surface.
Figure 3: Cooperative strategy for force control.
Figure 4: Operation of the cooperative force control strategy in the case of surface inaccuracy (h is a variable characterizing the inaccuracy of the real contact surface).
Figure 5: Model of a robotic manipulator in contact with a flexible environment: O_0E = d_1, EA = l_1, AB = l_2, BC = l_3, CD = d_5 are geometrical parameters characterizing the robot arm; q_1, q_2, q_3 are the angles of link rotation assumed as generalized coordinates; u_1, u_2, u_3 are the input moments; K_e is the stiffness coefficient and μ the coefficient of dry friction.
Figure 6: The desired motion: (a) motion path of point D in the xy plane; (b) motion path of point D in the xz plane; (c) desired velocity of motion of point D.
Figure 7: The desired trajectory: (a) coordinates of point D; (b) pressure force; (c) nominal coordinate of point D in the tangential direction; (d) deformation of the surface under the influence of the pressure force.
Figure 8: Disruption of the surface of constraints: (a) defect in the surface; (b) change of the surface of constraints over time along the desired motion path.
Figure 9: Overall control signals: (a) in the tangential directions; (b) in the normal direction.
Figure 10: Control signals: (a) PD control in the tangential directions, where u_PD1 = K_Dτ1 s_τ1 and u_PD2 = K_Dτ2 s_τ2; (b) PD control in the normal direction, where u_PD3 = K_Dn s_n; (c) compensatory control in the tangential directions, where u_komp1 = f̂_τ1 and u_komp2 = f̂_τ2; (d) compensatory control in the normal direction, where u_komp3 = f̂_n; (e) control compensating for the influence of friction forces; (f) control compensating for the normal force; (g) robust control in the tangential directions; (h) robust control in the normal direction.
Figure 11: Realized trajectory: (a) coordinates of point D in the tangential directions; (b) coordinate of point D in the normal direction, related to the surface deformation; (c) pressure force; (d) deviation of the robot's end-effector from the assumed constraints in the normal direction.
Figure 12: Tracking errors: (a) motion errors in the tangential directions; (b) normal force error; (c) filtered motion errors in the tangential directions; (d) filtered force error in the normal direction.
Figure 13: Estimates of the system parameters: (a) p̂_τ1, p̂_τ3, p̂_τ5, p̂_τ7, p̂_τ9, p̂_τ10; (b) p̂_n1, p̂_n2, p̂_n8, p̂_n12, p̂_n13; (c) p̂_τ2, p̂_τ4, p̂_τ6, p̂_τ8, p̂_τ11, p̂_τ13; (d) p̂_n3, p̂_n5, p̂_n6; (e) p̂_τ12, p̂_τ17, p̂_τ18, p̂_τ19; (f) p̂_n4, p̂_n7, p̂_n9; (g) p̂_τ14, p̂_τ15, p̂_τ16; (h) p̂_n10, p̂_n11.
Figure 14: Influence of the cooperation gain coefficient w_δ on the task of the robotic manipulator: (a) coordinates of point D in the normal direction; (b) pressure force.
13 pages, 20917 KiB  
Article
Inverse and Forward Kinematic Analysis of a 6-DOF Parallel Manipulator Utilizing a Circular Guide
by Alexey Fomin, Anton Antonov, Victor Glazunov and Yuri Rodionov
Robotics 2021, 10(1), 31; https://doi.org/10.3390/robotics10010031 - 7 Feb 2021
Cited by 17 | Viewed by 6216
Abstract
The proposed study focuses on the inverse and forward kinematic analysis of a novel 6-DOF parallel manipulator with a circular guide. In comparison with the known schemes of such manipulators, the structure of the proposed one excludes the collision of carriages when they [...] Read more.
The proposed study focuses on the inverse and forward kinematic analysis of a novel 6-DOF parallel manipulator with a circular guide. In comparison with the known schemes of such manipulators, the structure of the proposed one excludes the collision of carriages when they move along the circular guide. This is achieved by using cranks (links that provide an unlimited rotational angle) in the manipulator kinematic chains. In this case, all drives stay fixed on the base. The kinematic analysis provides analytical relationships between the end-effector coordinates and the six controlled movements in the drives (driven coordinates). Examples demonstrate the implementation of the suggested algorithms. For the inverse kinematics, the solution is found given the position and orientation of the end-effector. For the forward kinematics, various assembly modes of the manipulator are obtained for the same given values of the driven coordinates. The study also discusses how to choose the link lengths to maximize the rotational capabilities of the end-effector and provides a calculation of such capabilities for the chosen manipulator design. Full article
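The multiplicity of assembly modes in the forward position problem can be reproduced numerically: solving the loop-closure equations from different initial guesses yields distinct, equally valid configurations. The planar two-link chain below is a toy stand-in for the actual manipulator, used only to show this multi-solution behaviour.

    import numpy as np
    from scipy.optimize import fsolve

    L1, L2 = 1.0, 0.8                     # assumed link lengths
    target = np.array([1.2, 0.5])         # assumed end-point position

    def closure(q):
        """Loop-closure residuals of a planar 2R chain reaching the target."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]) - target[0]
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]) - target[1]
        return [x, y]

    for guess in ([0.1, 0.5], [0.8, -0.5]):         # elbow-up / elbow-down seeds
        print(np.round(fsolve(closure, guess), 4))   # two distinct assembly modes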
Show Figures

Figure 1: CAD model (virtual prototype) of the 6-DOF parallel manipulator with a circular guide.
Figure 2: Toward the kinematic analysis of the 6-DOF parallel manipulator with a circular guide: (a) 3D model; (b) fragment of the circular guide.
Figure 3: Eight different assembly modes of the manipulator, obtained in solving the forward position problem for the values of the driven coordinates indicated in Equation (15); the dashed line indicates the circular guide and the yellow triangle the end-effector; the red, green, and blue lines correspond to the X_P, Y_P, and Z_P axes of system PX_PY_PZ_P, respectively; the red dot represents point P.
Figure 4: Position of the horizontal kinematic chains when the carriages are in the extreme positions (the angle between the crank and the swinging arm is 90°).
Figure 5: Maximum and minimum values of the rotational angle φ around the vertical axis, depending on the platform height z; the red line indicates φ_max and the blue line φ_min.
Figure 6: Variation of the manipulator design with two circular guides that permits higher values of the rotational angle φ.
13 pages, 2293 KiB  
Article
Balancing of the Orthoglide Taking into Account Its Varying Payload
by Jing Geng, Vigen Arakelian, Damien Chablat and Philippe Lemoine
Robotics 2021, 10(1), 30; https://doi.org/10.3390/robotics10010030 - 6 Feb 2021
Cited by 2 | Viewed by 4270
Abstract
For fast-moving robot systems, the fluctuating dynamic loads transmitted to the supporting frame can excite the base and cause noise, wear, and fatigue of mechanical components. By reducing the shaking force completely, the dynamic characteristics of the robot system can be improved. However, [...] Read more.
For fast-moving robot systems, the fluctuating dynamic loads transmitted to the supporting frame can excite the base and cause noise, wear, and fatigue of mechanical components. By reducing the shaking force completely, the dynamic characteristics of the robot system can be improved. However, the complete inertial force and inertial moment balancing can only be achieved by adding extra counterweight and counter-rotation systems, which largely increase the total mass, overall size, and complexity of robots. In order to avoid these inconveniences, an approach based on the optimal motion control of the center of mass is applied for the shaking force balancing of the robot Orthoglide. The application of the “bang–bang” motion profile on the common center of mass allows a considerable reduction of the acceleration of the total mass center, which results in the reduction of the shaking force. With the proposed method, the shaking force balancing of the Orthoglide is carried out, taking into account the varying payload. Note that such a solution by purely mechanical methods is complex and practically inapplicable for industrial robots. The simulations in ADAMS software validate the efficiency of the suggested approach. Full article
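The "bang-bang" profile itself is simple to write down: for a rest-to-rest displacement s_total completed in time T, a constant acceleration over the first half followed by a constant deceleration over the second half gives a peak acceleration of 4*s_total/T^2, the smallest possible peak for that displacement and duration. The sketch below generates the profile; the numbers are illustrative.

    import numpy as np

    def bang_bang(s_total, T, n=101):
        """Time, acceleration, and position arrays for a rest-to-rest move."""
        a_max = 4.0 * s_total / T**2
        t = np.linspace(0.0, T, n)
        a = np.where(t < T / 2, a_max, -a_max)
        s = np.where(t < T / 2,
                     0.5 * a_max * t**2,
                     s_total - 0.5 * a_max * (T - t)**2)
        return t, a, s

    t, a, s = bang_bang(s_total=0.2, T=0.5)
    print(a[0], round(s[-1], 3))    # peak acceleration 3.2 m/s^2; ends at 0.2 m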
(This article belongs to the Special Issue Advances in European Robotics)
Show Figures

Figure 1: (a) Motors used as counterweights [2]; (b) parallel spatial manipulator balanced by adding counterweights [3]; (c) shaking force balancing by adding a pantograph in order to keep the center of mass (CoM) stationary [7]; (d) a combination of a proper distribution of link masses and two springs [9]; (e) two-step kinematic parameter adjustment in the adjusting kinematic parameters method [10]; (f) optimal acceleration control of the substituted center of mass S* of a 5R parallel manipulator [17].
Figure 2: The prototype of the Orthoglide (LS2N).
Figure 3: (a) The structure of the Orthoglide; (b) the geometrical model of the Orthoglide.
Figure 4: Variations of the shaking forces for the three studied cases.
Figure 5: Variations of the shaking moments for the three studied cases.
Figure 6: The center of mass of the parallelogram.
18 pages, 9207 KiB  
Article
An Application-Based Review of Haptics Technology
by Gowri Shankar Giri, Yaser Maddahi and Kourosh Zareinia
Robotics 2021, 10(1), 29; https://doi.org/10.3390/robotics10010029 - 5 Feb 2021
Cited by 49 | Viewed by 20171
Abstract
Recent technological development has led to the invention of different designs of haptic devices, electromechanical devices that mediate communication between the user and the computer and allow users to manipulate objects in a virtual environment while receiving tactile feedback. The main criteria behind [...] Read more.
Recent technological development has led to the invention of different designs of haptic devices, electromechanical devices that mediate communication between the user and the computer and allow users to manipulate objects in a virtual environment while receiving tactile feedback. The main criteria behind providing an interactive interface are to generate kinesthetic feedback and to relay information actively from the haptic device. Sensors and feedback control apparatus are of paramount importance in designing and manufacturing a haptic device. In general, haptic technology can be implemented in different applications such as gaming, teleoperation, medical surgeries, augmented reality (AR), and virtual reality (VR) devices. This paper classifies the applications of haptic devices based on their construction and functionality in various fields, then addresses the major limitations of haptics technology and discusses its prospects. Full article
Show Figures

Figure 1: (A) Phantom Desktop (TouchX); (B) Phantom Omni (Touch); (C) modified Phantom Premium for neuroArm; (D) Omega 3.
Figure 2: Commercial haptic devices on the market.
Figure 3: Medical simulators and the haptic devices they use; the blue hexagons show the type of device, the handle type, and the number of DoFs (positional/rotational and force/torque).
Figure 4: Workstation of the neuroArm II prototype used in telerobotic microsurgery.
21 pages, 5759 KiB  
Article
Autonomous Elbow Controller for Differential Drive In-Pipe Robots
by Liam Brown, Joaquin Carrasco and Simon Watson
Robotics 2021, 10(1), 28; https://doi.org/10.3390/robotics10010028 - 2 Feb 2021
Cited by 10 | Viewed by 4550
Abstract
The inspection of legacy nuclear facilities to aid in decommissioning is a worldwide issue. One of the challenges is the characterisation of the pipe networks within them. This paper presents an autonomous control system for the navigation of these unknown pipe networks, specifically [...] Read more.
The inspection of legacy nuclear facilities to aid in decommissioning is a worldwide issue. One of the challenges is the characterisation of the pipe networks within them. This paper presents an autonomous control system for the navigation of these unknown pipe networks, specifically focusing on elbows. The controller utilises three low-cost feeler sensors to navigate the FURO II robot around 150 mm short elbows. The controller allowed the robot to safely navigate around the elbow on all 39 attempts, whereas the brute force method completed only five of nine attempts and damaged the robot, demonstrating the advantages of the proposed controller. A new metric (Impulse) is also proposed to quantify the extra force applied to the robot over the time it is slipping in the elbow due to errors in the drive unit speeds. Using this metric, the controller is shown to decrease the Impulse applied to the robot by 213.97 Ns compared to the brute force method. Full article
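The proposed Impulse metric has a compact definition: the extra force applied to the robot is integrated over the interval in which the drive units are slipping. The sketch below evaluates it on synthetic force samples and a synthetic slip window, purely for illustration.

    import numpy as np

    def impulse(t, force_extra, slipping):
        """Integrate the extra force only while the slipping flag is true."""
        f = np.where(slipping, force_extra, 0.0)
        return np.trapz(f, t)

    t = np.linspace(0.0, 10.0, 1001)
    force_extra = 5.0 * np.ones_like(t)              # N, synthetic samples
    slipping = (t > 2.0) & (t < 6.0)                 # synthetic slip window
    print(impulse(t, force_extra, slipping), "Ns")   # ~20 Ns for this window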
(This article belongs to the Special Issue Advances in Robots for Hazardous Environments in the UK)
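The Impulse metric named in the abstract is, in essence, the time integral of the extra force applied to the robot while it slips in the elbow. Below is a minimal sketch of how such a metric could be computed from sampled force data; the trapezoidal integration and the sample values are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def impulse(force, t):
    """Time integral of sampled force (N) over timestamps t (s),
    approximated with the trapezoidal rule; result is in N*s."""
    force, t = np.asarray(force, float), np.asarray(t, float)
    return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(t)))

# e.g., a constant 2 N of extra force sustained for 5 s gives 10 N*s
print(impulse([2.0, 2.0], [0.0, 5.0]))
```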
Figures:
Figure 1. Photo of pipework from an exemplar facility.
Figure 2. Short and long elbows.
Figure 3. Pipe vehicle categories, based on [6].
Figure 4. Elbow drive unit paths.
Figure 5. FURO II prototype.
Figure 6. FURO II control architecture.
Figure 7. Simplified diagram of a feeler, modified from [3].
Figure 8. Autonomous Elbow Controller overview.
Figure 9. Simplified diagram of Autonomous Elbow Control stages.
Figure 10. Combining feeler end positions (in corner entrance).
Figure 11. 2D simplified diagram of exit conditions.
Figure 12. Change in $d_{\bar{y}\bar{z}}$ as the robot exits the corner.
Figure 13. Test rig for experiments.
Figure 14. FURO II prototype navigating the corner at entrance angle $\theta = 0°$.
Figure 15. Estimation of direction during Autonomous Elbow Controller testing.
Figure 16. Time taken to turn using the Autonomous Elbow Controller.
Figure 17. Time taken to turn using brute force.
Figure 18. Impulse applied to the robot while turning through an elbow.
Figure 19. Real change in $d_{yz}$ when navigating the elbow.
Figure A1. Model of the robot exiting the elbow.
Figure A2. Feeler angles of exit conditions.
24 pages, 27317 KiB  
Article
Optimization-Based Constrained Trajectory Generation for Robot-Assisted Stitching in Endonasal Surgery
by Jacinto Colan, Jun Nakanishi, Tadayoshi Aoyama and Yasuhisa Hasegawa
Robotics 2021, 10(1), 27; https://doi.org/10.3390/robotics10010027 - 1 Feb 2021
Cited by 22 | Viewed by 5525
Abstract
The reduced workspace in endonasal endoscopic surgery (EES) hinders the execution of complex surgical tasks such as suturing. Typically, surgeons must manipulate non-dexterous long surgical instruments with an endoscopic view that makes it difficult to estimate the distances and angles required for precise suturing motion. Recently, robot-assisted surgical systems have been used in laparoscopic surgery with promising results. Although robotic systems can provide enhanced dexterity, robot-assisted suturing is still highly challenging. In this paper, we propose a robot-assisted stitching method based on an online optimization-based trajectory generation for curved needle stitching and a constrained motion planning framework to ensure safe surgical instrument motion. The needle trajectory is generated online by a sequential convex optimization algorithm subject to stitching kinematic constraints. The constrained motion planner is designed to reduce damage to the surrounding nasal cavity by setting a remote center of motion over the nostril. A dual concurrent inverse kinematics (IK) solver is proposed to achieve convergence of the solution and optimal execution time, in which two constrained IK methods are performed simultaneously: a task-priority-based IK and a nonlinear optimization-based IK. We evaluate the performance of the proposed method in a stitching experiment with our surgical robotic system in a robot-assisted mode and an autonomous mode, in comparison with a conventional surgical tool. Our results demonstrate a noticeable improvement in the stitching success ratio in the robot-assisted mode and the shortest completion time for the autonomous mode. In addition, the force interaction with the tissue was greatly reduced when using the robotic system. Full article
(This article belongs to the Section Medical Robotics and Service Robotics)
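As an aside on the remote center of motion (RCM) constraint mentioned in the abstract: enforcing it amounts to keeping the instrument shaft axis passing through a fixed point over the nostril. The sketch below computes the point-to-line distance that such a planner would drive to zero; it is a simplified illustration under our own assumptions, not the authors' constrained motion planner.

```python
import numpy as np

def rcm_error(p_shaft, d_shaft, p_rcm):
    """Shortest distance from the RCM point to the instrument shaft axis.
    p_shaft: any point on the shaft; d_shaft: direction of the shaft;
    p_rcm: the fixed remote-center-of-motion point (all 3-vectors)."""
    d = np.asarray(d_shaft, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(p_rcm, float) - np.asarray(p_shaft, float)
    # remove the component of v along the shaft; what remains is the offset
    return float(np.linalg.norm(v - (v @ d) * d))

# shaft through the origin along z, RCM at (0.003, 0, 0.05): 3 mm RCM error
print(rcm_error([0, 0, 0], [0, 0, 1], [0.003, 0.0, 0.05]))
```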
Figures:
Figure 1. (a) Endoscopic endonasal surgery. (b) Dura suturing in endonasal endoscopic surgery (EES) performed in a head model phantom. (c) An example of an endoscopic view of the suturing task in the head model.
Figure 2. (a) SmartArm robot for endoscopic endonasal surgery (EES) [4]. (b) 4-DOF articulated forceps. (c) Human–robot user interface. (d) Push-buttons used to alternate between robot-assisted modes.
Figure 3. Definition of the coordinate frames used to represent the stitching task.
Figure 4. High-level schematic overview of the proposed robot-assisted stitching.
Figure 5. Block diagram of the proposed constrained motion planning.
Figure 6. Guidance virtual fixture.
Figure 7. Remote center of motion characterization.
Figure 8. (a) CoppeliaSim simulation environment. (b) Trajectory generated in simulation.
Figure 9. (a) Experiment setup. (b) Force sensor placed behind the stitching testbed. (c) 3D-printed nose model. (d) Stitching testbed.
Figure 10. Experimental results of the stitching task. (a) Task completion time of successful trials. (b) Success ratio. (c,d) Distance between the desired entry/exit point and the actual entry/exit point.
Figure 11. Comparison of the constrained motion planning performance. (a) Maximum RCM error for manual and robotic operation (including robot-assisted and autonomous modes). (b) An example of the trajectories followed by the forceps shaft intersection with the nostril plane during the task execution by one participant.
Figure 12. Distribution of RSS force samples. In robot operation (robot-assisted and autonomous modes), approximately 90% of the RSS force samples are within the range [0, 0.05] N. For manual operation, 90% of the samples are within the range [0, 0.15] N.
Figure 13. (a) Needle grasped by the 4-DOF forceps before starting the stitching task. (b–g) Robot-assisted stitching task sequence. (b) Initial needle positioning about 2 cm over the tissue. (c) Needle reorientation. (d) Needle approach to the tissue. (e) Right tissue penetration. (f) Needle insertion. (g) Left tissue penetration from underneath.
27 pages, 4712 KiB  
Article
Unmanned Aerial Drones for Inspection of Offshore Wind Turbines: A Mission-Critical Failure Analysis
by Mahmood Shafiee, Zeyu Zhou, Luyao Mei, Fateme Dinmohammadi, Jackson Karama and David Flynn
Robotics 2021, 10(1), 26; https://doi.org/10.3390/robotics10010026 - 1 Feb 2021
Cited by 72 | Viewed by 18780
Abstract
With increasing global investment in offshore wind energy and rapid deployment of wind power technologies in deep-water hazardous environments, the in-service inspection of wind turbines and their related infrastructure plays an important role in the safe and efficient operation of wind farm fleets. The use of unmanned aerial vehicles (UAVs) and remotely piloted aircraft (RPA)—commonly known as “drones”—for remote inspection of wind energy infrastructure has received a great deal of attention in recent years. Drones have significant potential to reduce not only the number of times that personnel need to travel to and climb up wind turbines, but also the amount of heavy lifting equipment required to carry out dangerous inspection work. Drones can also shorten the downtime needed to detect defects and collect diagnostic information from the entire wind farm. Despite all these potential benefits, drone-based inspection technology in the offshore wind industry is still at an early stage of development and its reliability has yet to be proven. Any unforeseen failure of the drone system during its mission may interrupt inspection operations and thereby significantly reduce the electricity generated by wind turbines. In this paper, we propose a semiquantitative reliability analysis framework to identify and evaluate the criticality of mission failures—at both system and component levels—in inspection drones, with the goal of lowering operation and maintenance (O&M) costs as well as improving personnel safety in offshore wind farms. Our framework builds upon two well-established failure analysis methodologies, namely fault tree analysis (FTA) and failure mode and effects analysis (FMEA). It is then tested and verified on a drone prototype developed in the laboratory for taking aerial photographs and video of both onshore and offshore wind turbines. The most significant failure modes and underlying root causes within the drone system are identified, and the effects of the failures on the system’s operation are analysed. Finally, some innovative solutions are proposed for minimizing the risks associated with mission failures in inspection drones. Full article
(This article belongs to the Special Issue Advances in Robots for Hazardous Environments in the UK)
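For readers unfamiliar with FMEA, the risk priority number behind an analysis such as Figure 10 is conventionally the product RPN = severity × occurrence × detection, each scored on a 1–10 scale. The snippet below illustrates the bookkeeping; the failure modes and scores are invented for illustration and are not the paper's data.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10: impact of the failure on the mission
    occurrence: int  # 1-10: likelihood of the failure arising
    detection: int   # 1-10: difficulty of detecting it before it strikes

    @property
    def rpn(self) -> int:
        """Standard FMEA risk priority number: RPN = S x O x D."""
        return self.severity * self.occurrence * self.detection

# Illustrative entries only; these scores are not taken from the paper.
modes = [FailureMode("GPS signal loss", 7, 4, 5),
         FailureMode("Motor controller burnout", 9, 3, 4)]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Ranking failure modes by RPN in this way is what lets an analysis single out the components whose failures most threaten the inspection mission.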
Figures:
Figure 1. Robotic platforms for the inspection of offshore wind farms.
Figure 2. Multirotor (a), fixed-wing (b), and single-rotor (c) drones.
Figure 3. The schematic working principle of unmanned aerial drones.
Figure 4. Subsystems and components of our drone prototype for inspecting wind turbines.
Figure 5. A fault-tree diagram of the drone prototype system.
Figure 6. A fault-tree diagram of the drone’s communication subsystem.
Figure 7. A fault-tree diagram of the drone’s propulsion subsystem.
Figure 8. Fault-tree diagram of the drone’s sensor subsystem.
Figure 9. Fault-tree diagram of the drone’s frame subsystem.
Figure 10. Risk-priority-number (RPN) values for the drone system components.
26 pages, 5763 KiB  
Article
On Fast Jerk–, Acceleration– and Velocity–Restricted Motion Functions for Online Trajectory Generation
by Burkhard Alpers
Robotics 2021, 10(1), 25; https://doi.org/10.3390/robotics10010025 - 1 Feb 2021
Cited by 7 | Viewed by 6338
Abstract
Finding fast motion functions to get from an initial state (distance, velocity, acceleration) to a final one has been of interest for decades. For a solution to be practically relevant, restrictions on jerk, acceleration and velocity have to be taken into account. Such solutions either use optimization algorithms or directly construct a motion function that allows online trajectory generation. In this contribution, we follow the latter strategy and present an approach which first deals with the situation where the initial and final accelerations are 0, and then relates the general case as closely as possible to this situation. This leads to a classification with just four major cases. A continuity argument guarantees full coverage of all situations, a property that is absent or unclear in other available algorithms. We present several examples that show the variety of different situations and, thus, the complexity of the task. We also describe an implementation in MATLAB® and results from a large number of test runs regarding accuracy and efficiency, demonstrating that the algorithm is suitable for online trajectory generation. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
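For orientation, the seven-segment (S-curve) profile underlying this classification can be sketched for the simplest rest-to-rest case in which both limits are actually reached; the degenerate cases, which are the real subject of the paper, are deliberately not handled in this illustration.

```python
def scurve_durations(distance, v_max, a_max, j_max):
    """Durations [t1..t7] of a jerk-limited rest-to-rest profile:
    jerk+, hold a_max, jerk-, cruise at v_max, jerk-, hold -a_max, jerk+.
    Assumes v_max and a_max are both reached (no degenerate cases)."""
    tj = a_max / j_max                  # time to ramp acceleration 0 -> a_max
    ta = v_max / a_max - tj             # constant-acceleration time
    if ta < 0:
        raise ValueError("a_max never reached: degenerate case not handled")
    d_acc = 0.5 * v_max * (2 * tj + ta)    # distance covered while speeding up
    tv = (distance - 2 * d_acc) / v_max    # cruise time at v_max
    if tv < 0:
        raise ValueError("v_max never reached: degenerate case not handled")
    return [tj, ta, tj, tv, tj, ta, tj]

# 1 m move with v_max = 0.5 m/s, a_max = 2 m/s^2, j_max = 20 m/s^3
print(scurve_durations(1.0, 0.5, 2.0, 20.0))
```

When the move is too short for one of the limits to be reached, some segments shrink to zero length (as in Figure 1b), which is exactly where the case analysis of the paper begins.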
Figures:
Figure 1. Seven-segment profile (a) for RR; (b) for BB—two segments with length 0.
Figure 2. Velocity–acceleration curve for the motion function (a) in Figure 1a; (b) in Figure 1b.
Figure 3. Switching between velocities: (a) positive acceleration; (b) negative acceleration.
Figure 4. Examples for getting from $v_A$ to $v_E$: (a) velocity–acceleration plane; (b) velocity over time.
Figure 5. For $v_A > 0$ and $v_E > 0$: (a) $S$ over $v_m$; (b) $T$ over $S$.
Figure 6. Illustration of auxiliary functions.
Figure 7. Case A. (a) Part I; (b) Part III; (c) Part IIa; (d) Part IIb.
Figure 8. (a) No gap between Parts I and III, but Parts IIa or IIb are better; (b) zoom into area.
Figure 9. (a) No gap between Parts I and III, Parts IIa or IIb are not needed; (b) zoom into area.
Figure 10. (a) Gap between Parts I and III, Parts IIa or IIb are needed; (b) zoom into area.
Figure 11. (a) Best solution is in Part IIa where the curve is “bulging”; (b) zoom into area.
Figure 12. Case B. (a) Part I; (b) Part III; (c) Part IIa; (d) Part IIb.
Figure 13. Positioning of A and E in cases C and D.
Figure 14. (a) Sub-case Ca; (b) sub-case Cb.
Figure 15. Case Ca. (a) Part I; (b) Part III; (c) Part IIa; (d) Part IIb.
Figure 16. (a) Example 1; (b) example 2; (c) example 3; (d) example 3 with zoom.
Figure 17. Case Cb. (a) Part I; (b) Part III; (c) Part IIa; (d) Part IIb.
Figure 18. (a) Example 1; (b) example 2; (c) example 3; (d) example 3 with zoom.
Figure 19. Case D.
21 pages, 1150 KiB  
Article
Attitudes towards Social Robots in Education: Enthusiast, Practical, Troubled, Sceptic, and Mindfully Positive
by Matthijs H. J. Smakman, Elly A. Konijn, Paul Vogt and Paulina Pankowska
Robotics 2021, 10(1), 24; https://doi.org/10.3390/robotics10010024 - 26 Jan 2021
Cited by 19 | Viewed by 7974
Abstract
While social robots bring new opportunities for education, they also come with moral challenges. Therefore, there is a need for moral guidelines for the responsible implementation of these robots. When developing such guidelines, it is important to include different stakeholder perspectives. Existing (qualitative) studies of these perspectives, however, mainly focus on single stakeholders. In this exploratory study, we examine and compare the attitudes of multiple stakeholders towards the use of social robots in primary education, using a novel questionnaire that covers various aspects of the moral issues mentioned in earlier studies. Furthermore, we group the stakeholders based on similarities in attitudes and examine which socio-demographic characteristics influence these attitude types. Based on the results, we identify five distinct attitude profiles and show that the probability of belonging to a specific profile is affected by characteristics such as stakeholder type, age, education and income. Our results also indicate that social robots have the potential to be implemented in education in a morally responsible way that takes into account the attitudes of various stakeholders, although multiple moral issues need to be addressed first. Finally, we present seven (practical) implications for a responsible application of social robots in education following from our results. These implications provide valuable insights into how social robots should be implemented. Full article
(This article belongs to the Special Issue Advances and Challenges in Educational Robotics)
Figures:
Figure 1. Study methodology per research question (RQ).
Figure 2. Stakeholder means for all six scales, based on 1–6 point scales (ranging from 1 = totally not agree to 6 = totally agree).
20 pages, 6384 KiB  
Article
Monocular Visual Inertial Direct SLAM with Robust Scale Estimation for Ground Robots/Vehicles
by Bismaya Sahoo, Mohammad Biglarbegian and William Melek
Robotics 2021, 10(1), 23; https://doi.org/10.3390/robotics10010023 - 26 Jan 2021
Cited by 6 | Viewed by 5282
Abstract
In this paper, we present a novel method for visual-inertial odometry for land vehicles. Our technique is robust to the unintended but unavoidable bumps encountered when an off-road land vehicle traverses potholes, speed bumps or general changes in terrain. In contrast to tightly-coupled methods for visual-inertial odometry, we split the joint visual and inertial residuals into two separate steps and perform the inertial optimization after the direct visual alignment step. We utilize all visual and geometric information encoded in a keyframe by including the inverse-depth variances in our optimization objective, making our method a direct approach. The primary contribution of our work is the use of epipolar constraints, computed from a direct image alignment, to correct the pose prediction obtained by integrating IMU measurements, while simultaneously building a semi-dense map of the environment in real time. Through indoor and outdoor experiments, we show that our method is robust to sudden spikes in inertial measurements while achieving better accuracy than the state-of-the-art direct, tightly-coupled visual-inertial fusion method. Full article
(This article belongs to the Section Agricultural and Field Robotics)
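The epipolar correction at the heart of the method can be pictured with the standard essential-matrix residual $\mathbf{x}_2^\top E \mathbf{x}_1$, where $E = [\mathbf{t}]_\times \mathbf{R}$: a pose prior integrated from the IMU that disagrees with the image correspondences yields nonzero residuals. The sketch below is textbook epipolar geometry under our own conventions, not the authors' code.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(R, t, pts1, pts2):
    """Residuals x2^T E x1 for matched, normalized homogeneous image
    points (N x 3 arrays). R, t map frame-1 coordinates into frame 2,
    e.g., a pose prior obtained by integrating IMU measurements; a
    correct pose drives all residuals to zero."""
    E = skew(np.asarray(t, float)) @ np.asarray(R, float)
    return np.einsum('ni,ij,nj->n', pts2, E, pts1)
```

Minimizing such residuals over the pose is what pulls the noisy IMU prediction back onto the trajectory implied by the images, as the epipole plots in Figure 2 illustrate.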
Figures:
Figure 1. Schematic for Visual Inertial Epipolar Constrained Odometry. Two threads run in parallel: the tracking thread encodes the epipolar optimization, and the mapping thread uses the optimized pose ($\mathbf{R}^*$, $\mathbf{t}^*$) to update the map.
Figure 2. Epipole positions plotted on the keyframe image during an optimization process for straight-line motion. Red shows the epipole position due to the noisy prior from integrating IMU measurements at the start of the optimization; green shows intermediate epipole positions during the optimization; blue is the final epipole position. Since the trajectory is straight, the epipole's image on the keyframe should lie at the centre of the image, which is where the optimization drives the initial noisy pose prior.
Figure 3. Epipolar stereo matching on keyframe and reference images. On the left are five equidistant points on the keyframe image; on the right, the same five points are searched along the epipolar line (shown in red). The best-match point $\mathbf{x}_{bm}$ is shown as a box.
Figure 4. SSD error as five equidistant points are checked along the epipolar line (see Figure 3) in the reference image. The minimum is the point of best match.
Figure 5. Camera–Inertial Measurement Unit (IMU) setup close-up view. Axis conventions shown for clarity.
Figure 6. Indoor experiment setup. The monocular camera and IMU are fixed rigidly and mounted on a trolley cart with one misaligned wheel.
Figure 7. Outdoor experiment setup. The monocular camera and IMU are fixed rigidly and mounted on an off-road vehicle. The axis conventions are shown in Figure 5.
Figure 8. Temporal offset between camera and IMU sampling.
Figure 9. (a) Raw accelerometer reading versus IMU frame number. (b) Raw gyroscope reading versus IMU frame number. Although the IMU sampling rate (100 Hz) is twice that of the camera (50 Hz), the readings are plotted against image frame number for easy comparison. The coordinate system is IMU-centric: X-forward, Y-right, Z-down.
Figure 10. Trajectory #1: (a) translation errors (m) versus image frame number; (b) angular errors (rad) versus image frame number. The coordinate frame here is camera-centric: Z-forward, Y-down, X-right.
Figure 11. A semi-dense map built of an indoor corridor.
Figure 12. Outdoor experiment. A portion of the 3D structure is highlighted in red in all three figures for comparison. (a) The sample RGB image seen by the camera. (b) Reconstruction quality for the tightly-coupled system. (c) Reconstruction quality for our method. Note the improvement in map quality due to the increased accuracy of pose estimation. In (a), radial distortion has not been removed, to represent the actual wide-FOV sensor data received by the camera.
Figure A1. Trajectory #2: translation errors (m) and angular errors (rad) versus image frame number (camera-centric frame: Z-forward, Y-down, X-right).
Figure A2. Trajectory #2: raw accelerometer and gyroscope readings versus IMU frame number (IMU-centric frame: X-forward, Y-right, Z-down).
Figure A3. Trajectory #3: translation errors (m) and angular errors (rad) versus image frame number (camera-centric frame: Z-forward, Y-down, X-right).
Figure A4. Trajectory #3: raw accelerometer and gyroscope readings versus IMU frame number (IMU-centric frame: X-forward, Y-right, Z-down).
13 pages, 1513 KiB  
Article
Deep Reinforcement Learning for the Control of Robotic Manipulation: A Focussed Mini-Review
by Rongrong Liu, Florent Nageotte, Philippe Zanne, Michel de Mathelin and Birgitta Dresp-Langley
Robotics 2021, 10(1), 22; https://doi.org/10.3390/robotics10010022 - 24 Jan 2021
Cited by 105 | Viewed by 16586
Abstract
Deep learning has provided new ways of manipulating, processing and analyzing data. It can sometimes achieve results comparable to, or surpassing, human expert performance, and has become a source of inspiration in the era of artificial intelligence. Another subfield of machine learning, reinforcement learning, tries to find an optimal behavior strategy through interactions with the environment. Combining deep learning and reinforcement learning permits resolving critical issues relative to the dimensionality and scalability of data in tasks with sparse reward signals, such as robotic manipulation and control tasks, that neither method can resolve when applied on its own. In this paper, we present recent significant progress in deep reinforcement learning algorithms that tackle the problems of applying this approach to robotic manipulation control, such as sample efficiency and generalization. Despite these continuous improvements, the challenge of learning robust and versatile manipulation skills for robots with deep reinforcement learning is still far from being resolved for real-world applications. Full article
(This article belongs to the Special Issue Robotics: Intelligent Control Theory)
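The "universal model of reinforcement learning" (Figure 3) reduces to the agent–environment interaction loop sketched below. The env/policy interface is a hypothetical Gym-style one chosen for illustration; a deep RL method would replace the policy with a neural network trained on the collected transitions.

```python
def run_episode(env, policy, gamma=0.99):
    """One pass of the agent-environment loop: observe, act, receive a
    reward, repeat; returns the discounted return of the episode.
    `env` and `policy` are hypothetical Gym-style objects."""
    state, done = env.reset(), False
    ret, discount = 0.0, 1.0
    while not done:
        action = policy(state)                  # agent picks an action
        state, reward, done = env.step(action)  # environment responds
        ret += discount * reward                # accumulate discounted reward
        discount *= gamma
    return ret
```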
Figures:
Figure 1. Simplified schematic diagram of the mechanical components of a two-joint robotic arm.
Figure 2. A simple deep learning architecture.
Figure 3. Universal model of reinforcement learning.
Figure 4. Deep reinforcement learning.
Figure 5. A schematic diagram of robotic manipulation control using DRL.